
Commit caa7421

Merge pull request #1 from codefuse-ai/pr_base_docs
Pr base docs
2 parents 7f3c87b + d88d95f commit caa7421

396 files changed: +100440 additions, −898 deletions


.gitignore

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-public
+public

content/en/contribution/contribute/d1.pr.md

Lines changed: 2 additions & 1 deletion
@@ -81,7 +81,8 @@ Try to use options that are already listed. If you need to add new ones, please
 ### Subject Content
 The title should clearly indicate the main content of the current submission.
 
-
+For example:
+`[feature](coagent)<add antflow compatibility and add coagent demo>`
 ## Example
 coming soon
 

content/en/docs/chatbot/c1.quickstart.md

Lines changed: 2 additions & 2 deletions
@@ -2,9 +2,9 @@
 title: QuickStart
 slug: QuickStart
 description: Introduces the main features
-url: "docs/quickstart"
+url: "docs/codefuse-chatbot-quickstart"
 aliases:
-- "/docs/quickstart"
+- "/docs/codefuse-chatbot-quickstart"
 ---
 
 <p align="left">
Lines changed: 258 additions & 0 deletions
@@ -0,0 +1,258 @@
---
title: QuickStart
description: Introduces the main features
url: docs/codefuse-evalution-quickstart
aliases:
- "/docs/codefuse-evalution-quickstart"
---

## Generation Environment
CodeFuse-13B: Python 3.8 or above, PyTorch 1.12 or above (2.0 or above recommended), Transformers 4.24.0 or above, and CUDA 11.4 or above (relevant for GPU and flash-attention users).

CodeFuse-CodeLlama-34B: python >= 3.8, pytorch >= 2.0.0, transformers == 4.32.0, Sentencepiece, CUDA 11.

## Evaluation Environment
Evaluating the generated code involves compiling and running it in multiple programming languages. The versions of the language environments and packages we use are as follows:

| Dependency | Version  |
| ---------- | -------- |
| Python     | 3.10.9   |
| JDK        | 18.0.2.1 |
| Node.js    | 16.14.0  |
| js-md5     | 0.7.3    |
| C++        | 11       |
| g++        | 7.5.0    |
| Boost      | 1.75.0   |
| OpenSSL    | 3.0.0    |
| go         | 1.18.4   |
| cargo      | 1.71.1   |

To save you the trouble of setting up all these language environments, we provide a Docker image with the required environments and codefuseEval preinstalled:

```bash
docker pull registry.cn-hangzhou.aliyuncs.com/codefuse/codefuseeval:latest
```

If you are familiar with Docker, you can instead build the image from `codefuseEval/docker/Dockerfile`, adjusting the Dockerfile as you like:

```bash
cd codefuseEval/docker
docker build [OPTIONS] .
```

After obtaining the image, you can start a container with the following command:

```bash
docker run -it --gpus all --mount type=bind,source=<LOCAL PATH>,target=<PATH IN CONTAINER> [OPTIONS] <IMAGE NAME:TAG>
```

## Check Result Command
We provide a script to check the results for the provided code LLMs. Use the following commands to check the corresponding results and the environment:

```bash
bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-CodeLlama-34B/humaneval_result_python.jsonl humaneval_python
bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-13B/humaneval_result_python.jsonl humaneval_python
```

## How to Use CodeFuseEval
1. Download the model and update its information in ckpt_config.json, mainly the "path" parameter of the corresponding model and version.
2. Run the following generation command to generate results:
```
bash codefuseEval/script/generation.sh MODELNAME MODELVERSION EVALDATASET OUTFILE

e.g.:
bash codefuseEval/script/generation.sh CodeFuse-13B v1 humaneval_python result/test.jsonl
```
3. Run the following evaluation command to evaluate the generated results for the corresponding model and version:
```
bash codefuseEval/script/evaluation.sh <RESULT_FILE> <METRIC> <PROBLEM_FILE>
e.g.:
bash codefuseEval/script/evaluation.sh codefuseEval/result/test.jsonl pass@k humaneval_python
```

## Evaluation

We recommend evaluating in [the provided image](#evaluation-environment). To evaluate the generated samples, save the generated code in the following JSON Lines format:

```
{"task_id": "../..", "generation": "..."}
{"task_id": "../..", "generation": "..."}
...
```

and evaluate it using the following script under the root directory of the repository (<font color='red'>please execute with caution: the generated code might have unexpected behaviour, though with very low probability. See the warnings in [execution.py](execution.py) and uncomment the execution lines at your own risk</font>):
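
For concreteness, here is a minimal Python sketch of writing a result file in this format. The `generate_one` helper is a hypothetical stand-in for your model call, and `problems.jsonl` stands for whichever evaluation dataset file you are using:

```python
import json

def generate_one(prompt: str) -> str:
    # Hypothetical stand-in: call your model here and return generated code.
    return "    return sorted(numbers)\n"

# Each dataset line carries at least task_id and prompt.
with open("problems.jsonl") as fin, open("result/test.jsonl", "w") as fout:
    for line in fin:
        problem = json.loads(line)
        record = {
            "task_id": problem["task_id"],
            "generation": generate_one(problem["prompt"]),
        }
        fout.write(json.dumps(record) + "\n")
```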

### Evaluation Data
Data are stored in ``codefuseEval/data`` in JSON Lines format. We first integrated the HumanEval-X dataset. Each problem provides the following fields (a minimal loader sketch follows the list):

* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (shown in the prompt), used for evaluation.
* ``prompt_text``: the prompt text.
* ``prompt_explain``: the prompt explanation.
* ``func_title``: the code function title.
* ``prompt_text_chinese``: the Chinese prompt.
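
As a quick illustration of these fields, a loader might look like this (the file name `humaneval_python.jsonl` is an assumption; use the actual file under ``codefuseEval/data``):

```python
import json

# Assumed dataset path; adjust to the actual file under codefuseEval/data.
with open("codefuseEval/data/humaneval_python.jsonl") as f:
    for line in f:
        problem = json.loads(line)
        # task_id encodes the language and problem ID, e.g. "Python/0".
        lang, _, idx = problem["task_id"].partition("/")
        print(lang, idx, len(problem["prompt"]), len(problem["test"]))
```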

### Evaluation Metrics
In addition to the unbiased pass@k metric introduced in [Codex](https://arxiv.org/abs/2107.03374), we also integrate the related metrics open-sourced by Hugging Face together with [CodeBLEU](https://arxiv.org/abs/2009.10297).
The metrics we currently recommend are:
* ``codebleu``
* ``pass@k``
* ``bleu``
* ``bleurt``

For other related metrics, you can check the metric's code or the evaluation code to meet your requirements.
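
To make the pass@k definition concrete: with n samples generated per problem, of which c pass the tests, the unbiased estimator from the Codex paper is 1 − C(n−c, k)/C(n, k), averaged over problems. A small sketch of that formula (not this framework's own implementation):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed stably as a running product.
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 10 samples per problem, 3 correct -> pass@1 estimate of 0.3.
print(pass_at_k(n=10, c=3, k=1))
```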

We also report the model's total and average generation time over the dataset (`total_time_cost` and `Average time cost`). These are printed automatically on every generation run, making it convenient to compare the generation performance of models in the same environment.

### Evaluation Command
```
bash codefuseEval/script/evaluation.sh <RESULT_FILE> <METRIC> <PROBLEM_FILE> <TEST_GROUDTRUTH>
e.g.:
bash codefuseEval/script/evaluation.sh codefuseEval/result/test.jsonl pass@k humaneval_python
```

We also provide the following flag, which substitutes the sample answers from the test dataset as the generated answers:

* ``TEST_GROUDTRUTH``: defaults to False

When TEST_GROUDTRUTH is True, self-test mode is enabled: PROBLEM_FILE is read and its sample answers are substituted as the generated answers for testing.

When TEST_GROUDTRUTH is False, evaluation mode is enabled: RESULT_FILE and PROBLEM_FILE are read, and the generated answers are evaluated.
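
Conceptually, self-test mode amounts to the following sketch (not the script's actual implementation): copy each problem's canonical solution into the generation field, then score that file like any model output.

```python
import json

# Sketch of self-test mode: treat each problem's canonical solution as if
# a model had generated it, then evaluate the resulting file as usual.
with open("codefuseEval/data/humaneval_python.jsonl") as fin, \
        open("result/selftest.jsonl", "w") as fout:
    for line in fin:
        problem = json.loads(line)
        record = {
            "task_id": problem["task_id"],
            "generation": problem["canonical_solution"],
        }
        fout.write(json.dumps(record) + "\n")
```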

## More Information

### Evaluating Your Own Model and Dataset

1. Register your evaluation dataset.
* Download the evaluation dataset and store it in `codefuseEval/data` or another directory. The dataset must be in JSON Lines format.
* Set up the dataset information `EVAL_DATASET`, `DATASET_SUPPORT` and `DATASET_LANGUAGE` in `codefuseEval/util.py` for the dataset path, dataset task_mode, and generated code language.
2. Register your evaluation model.
* Download the evaluation model and store it in `codefuseEval/model` or another directory.
* Write the processor code for your model in the `codefuseEval/processor` package.

We designed an infrastructure called Processor whose main purpose is to handle the differences between models. It requires three abstract functions to be implemented:
* ``load_model_tokenizer``: models differ in their loading parameters and tokenizer terminators, so they need different parameters for adaptation and loading; this function helps users load and adapt different models.
* ``process_before``: prompts must be adapted to different prompt styles depending on the evaluation task and the model the user selects; this function is extracted mainly to help users preprocess prompts.
* ``process_after``: model outputs vary widely, so to fit the evaluation framework the generated results must be spliced into suitable test cases for automated execution; this function post-processes the generated results to match the evaluation dataset, based on the task type and dataset conditions.

You can extend the `BaseProcessor` in `codefuseEval/processor/base.py` and implement the functions above.
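
As a rough sketch of what such a processor might look like (the exact `BaseProcessor` signatures are not shown here, so the method arguments below are assumptions):

```python
from codefuseEval.processor.base import BaseProcessor  # assumed import path
from transformers import AutoModelForCausalLM, AutoTokenizer

class MyModelProcessor(BaseProcessor):
    """Sketch of a custom processor; adjust signatures to BaseProcessor."""

    def load_model_tokenizer(self, path):
        # Load the model and tokenizer with model-specific options.
        tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True)
        return model, tokenizer

    def process_before(self, prompt, task_mode=None):
        # Rewrite the raw prompt into the style this model expects.
        return f"Complete the following function:\n{prompt}"

    def process_after(self, generation, task_mode=None):
        # Trim the raw output so it splices cleanly into the test harness,
        # e.g. keep only the first complete code block.
        return generation.split("\n\n\n")[0]
```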

* Set up the model information in `ckpt_config.json`. For example:
```
{
  "CodeFuse-13B": {  // model name
    "v1": {  // model version
      "path": "/mnt/model/CodeFuse13B-evol-instruction-4K/",  // model path
      "processor_class": "codefuseEval.process.codefuse13b.Codefuse13BProcessor",  // model processor
      "tokenizer": {  // tokenizer params used to tokenize the input string
        "truncation": true,
        "padding": true,
        "max_length": 600
      },
      "generation_config": {  // generation config params
        "greedy": {  // a JSON object defines a decode mode; set the "decode_mode" param to load the params defined in that mode
          "do_sample": false,
          "num_beams": 1,
          "max_new_tokens": 512
        },
        "beams": {
          "do_sample": false,
          "num_beams": 5,
          "max_new_tokens": 600,
          "num_return_sequences": 1
        },
        "dosample": {
          "do_sample": true
        },
        "temperature": 0.2,  // a non-object value is a default param set directly in generation_config; a same-named param inside a decode mode overrides it
        "max_new_tokens": 600,
        "num_return_sequences": 1,
        "top_p": 0.9,
        "num_beams": 1,
        "do_sample": true
      },
      "batch_size": 1,  // batch size for generation
      "sample_num": 1,  // number of samples generated per data item
      "decode_mode": "beams"  // decode mode chosen from those defined in generation_config
    }
  }
}
```
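
The decode-mode semantics can be made concrete with a small sketch (the `//` comments above are annotations, not valid JSON, so they are assumed absent from the real file; the framework's own loading code may differ): defaults in `generation_config` apply first, then the chosen decode mode's values override them.

```python
import json

with open("ckpt_config.json") as f:
    cfg = json.load(f)["CodeFuse-13B"]["v1"]

gen_cfg = cfg["generation_config"]
# Non-object values are defaults; the chosen decode mode overrides them.
params = {k: v for k, v in gen_cfg.items() if not isinstance(v, dict)}
params.update(gen_cfg[cfg["decode_mode"]])
print(params)  # e.g. num_beams=5, max_new_tokens=600 from the "beams" mode
```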

### Check Dataset Command
To check whether the reference values provided by the evaluation datasets are correct, we provide the following commands.

CodeCompletion
```bash
bash codefuseEval/script/check_dataset.sh humaneval_python
bash codefuseEval/script/check_dataset.sh humaneval_java
bash codefuseEval/script/check_dataset.sh humaneval_js
bash codefuseEval/script/check_dataset.sh humaneval_rust
bash codefuseEval/script/check_dataset.sh humaneval_go
bash codefuseEval/script/check_dataset.sh humaneval_cpp
```

NL2Code
```bash
bash codefuseEval/script/check_dataset.sh mbpp
```

CodeTrans
```bash
bash codefuseEval/script/check_dataset.sh codeTrans_python_to_java
bash codefuseEval/script/check_dataset.sh codeTrans_python_to_cpp
bash codefuseEval/script/check_dataset.sh codeTrans_cpp_to_java
bash codefuseEval/script/check_dataset.sh codeTrans_cpp_to_python
bash codefuseEval/script/check_dataset.sh codeTrans_java_to_python
bash codefuseEval/script/check_dataset.sh codeTrans_java_to_cpp
```

CodeScience
```bash
bash codefuseEval/script/check_dataset.sh codeCompletion_matplotlib
bash codefuseEval/script/check_dataset.sh codeCompletion_numpy
bash codefuseEval/script/check_dataset.sh codeCompletion_pandas
bash codefuseEval/script/check_dataset.sh codeCompletion_pytorch
bash codefuseEval/script/check_dataset.sh codeCompletion_scipy
bash codefuseEval/script/check_dataset.sh codeCompletion_sklearn
bash codefuseEval/script/check_dataset.sh codeCompletion_tensorflow
bash codefuseEval/script/check_dataset.sh codeInsertion_matplotlib
bash codefuseEval/script/check_dataset.sh codeInsertion_numpy
bash codefuseEval/script/check_dataset.sh codeInsertion_pandas
bash codefuseEval/script/check_dataset.sh codeInsertion_pytorch
bash codefuseEval/script/check_dataset.sh codeInsertion_scipy
bash codefuseEval/script/check_dataset.sh codeInsertion_sklearn
bash codefuseEval/script/check_dataset.sh codeInsertion_tensorflow
```
Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@
---
title: QuickStart
slug: QuickStart
description: QuickStart Document
aliases:
- "/docs/codefuse-mft-vlm-quickstart"
---

## Contents
- [Install](#install)
- [Datasets](#datasets)
- [Multimodal Alignment](#multimodal-alignment)
- [Visual Instruction Tuning](#visual-instruction-tuning)
- [Evaluation](#evaluation)

## Install
Please run `sh init_env.sh`.

## Datasets
Here is the table of datasets we used to train CodeFuse-VLM-14B:

| Dataset | Task Type | Number of Samples |
| ------------- | ------------- | ------------- |
| synthdog-en | OCR | 800,000 |
| synthdog-zh | OCR | 800,000 |
| cc3m (downsampled) | Image Caption | 600,000 |
| cc3m (downsampled) | Image Caption | 600,000 |
| SBU | Image Caption | 850,000 |
| Visual Genome VQA (downsampled) | Visual Question Answering (VQA) | 500,000 |
| Visual Genome Region descriptions (downsampled) | Reference Grounding | 500,000 |
| Visual Genome objects (downsampled) | Grounded Caption | 500,000 |
| OCR VQA (downsampled) | OCR and VQA | 500,000 |

Please download these datasets from their official websites.

## Multimodal Alignment
Please run `sh scripts/pretrain.sh` or `sh scripts/pretrain_multinode.sh`.

## Visual Instruction Tuning
Please run `sh scripts/finetune.sh` or `sh scripts/finetune_multinode.sh`.

## Evaluation
Please run the Python scripts in the llava/eval/ directory. Our pre-trained CodeFuse-VLM-14B can be loaded with the following code:

```
import os
from llava.model.builder import load_mixed_pretrained_model

model_path = '/pretrained/model/path'
tokenizer, model, image_processor, context_len = load_mixed_pretrained_model(model_path, None, 'qwen-vl-14b', os.path.join(model_path, 'Qwen-VL-visual'), 'cross_attn', os.path.join(model_path, 'mm_projector/mm_projector.bin'))
```

You can also run `scripts/merge_qwen_vl_weights.sh` first and load the merged model with the following code:

```
from llava.model import LlavaQWenForCausalLM

model = LlavaQWenForCausalLM.from_pretrained('/path/to/our/pretrained/model')
```
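
For a rough idea of how the merged model might then be used for text-only inference (a sketch assuming the standard Hugging Face `generate` API and an `AutoTokenizer`-loadable tokenizer; the scripts in llava/eval/ handle image inputs and conversation templates):

```python
import torch
from llava.model import LlavaQWenForCausalLM
from transformers import AutoTokenizer  # assumed way to load the tokenizer

model_path = '/path/to/our/pretrained/model'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = LlavaQWenForCausalLM.from_pretrained(model_path).eval()

prompt = "Describe the layout of this web page."
inputs = tokenizer(prompt, return_tensors='pt')
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```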

## CodeFuse-VLM Product Video
Here is a demo video of the front-end code copilot backed by our VLM model:

https://private-user-images.githubusercontent.com/22836551/300398424-201f667d-6b6b-4548-b3e6-724afc4b3071.mp4?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MDY1MjE5MTIsIm5iZiI6MTcwNjUyMTYxMiwicGF0aCI6Ii8yMjgzNjU1MS8zMDAzOTg0MjQtMjAxZjY2N2QtNmI2Yi00NTQ4LWIzZTYtNzI0YWZjNGIzMDcxLm1wND9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAxMjklMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMTI5VDA5NDY1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWI0ZmJmZWNlNDZmNWM3NzA0OThlMmY1ODY4MDkxNWY5ZWNiNzRiYjJkYmE4NjEzM2EwYWRiNWY2ODc3N2ViYjEmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.BIvWGNx0XV7RoauxB0c2noEdbfZfu8-16LPHtCaCJ9k
