🌐 <a href="README_CN.md" target="_blank">中文</a>
</div>
CodeFuseEval is a Code Generation benchmark that combines the multi-tasking scenarios of the CodeFuse Model with the benchmarks of HumanEval-X and MBPP. It is designed to evaluate model performance on a variety of tasks, including code completion, code generation from natural language, test-case generation, cross-language code translation, and code generation from Chinese commands, among others. More tasks are being opened up continuously; stay tuned!

## Generation Environment
CodeFuse-13B: Python 3.8 or above, PyTorch 1.12 or above (2.0 or above recommended), Transformers 4.24.0 or above, and CUDA 11.4 or above (relevant for GPU users and flash-attention users).
We designed an infrastructure called Processor to absorb the differences between models. Each Processor needs to implement three abstract functions:
* ``load_model_tokenizer``: Models differ in their loading parameters and tokenizer terminators, so each needs its own parameters for adaptation and loading. This function helps users load and adapt different models.
* ``process_before``: The prompt style varies with the evaluation task type and the model the user selects, so the ``process_before`` function is extracted to help users preprocess prompts.
* ``process_after``: Generation results come in many forms, so to fit the evaluation framework they must be spliced into suitable test cases for automated execution. This function post-processes the generated results to match the evaluation dataset, based on the task type and dataset conditions. A sketch of a full Processor follows this list.
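
To make the contract concrete, here is a minimal sketch of a custom Processor. The class name, simplified signatures, and helper logic are illustrative assumptions, not the framework's actual API; in the real framework these methods would override the corresponding abstract functions.

```python
# Illustrative Processor sketch; names and signatures are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer


class MyModelProcessor:
    """Adapts one model to the evaluation framework via three hooks."""

    def load_model_tokenizer(self, path):
        # Load the model and tokenizer with model-specific parameters,
        # e.g. a custom pad token or trust_remote_code.
        tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True)
        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
        return model, tokenizer

    def process_before(self, prompt, task_mode):
        # Rewrite the raw dataset prompt into the style this model expects
        # for the given task type.
        if task_mode == "code_trans":
            return f"Translate the following code:\n{prompt}"
        return prompt

    def process_after(self, generation, task_mode):
        # Trim the raw generation so it can be spliced into a runnable
        # test case, e.g. cut everything after a stop marker (assumed here).
        return generation.split("</s>")[0]
```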
We also extended the relevant configuration in ``ckpt_config`` to store evaluation settings. For example:

```json
{
  "CodeFuse-13B": {
    "path": "/mnt/user/294761/bigcode/CodeFuse13B-evol-instruction-4K/", // model path
    "processor_class": "codefuseEval.process.codefuse13b.Codefuse13BProcessor", // processor path (create the file under "codefuseEval.process")
    "tokenizer": {
      "truncation": true,
      "padding": true,
      "max_length": 600
    }, // params for the tokenizer to encode input prompts
    "generation_config": { // combine with the "decode_mode" param to define your own decoding: each JSON object here names a decode mode, while non-object entries are read directly into the generation config
      "greedy": {
        "do_sample": false,
        "num_beams": 1,
        "max_new_tokens": 512
      },
      "beams": {
        "do_sample": false,
        "num_beams": 5,
        "max_new_tokens": 600,
        "num_return_sequences": 1
      },
      "dosample": {
        "do_sample": true
      },
      "temperature": 0.2,
      "max_new_tokens": 600,
      "num_return_sequences": 1,
      "top_p": 0.9,
      "num_beams": 1,
      "do_sample": true
    },
    "task_mode": "code_completion", // currently supports the four kinds [code_completion, nl2code, code_trans, codescience]; if your eval dataset covers several tasks, set the task mode to get suitable processing
    "batch_size": 1,
    "sample_num": 1,
    "decode_mode": "beams" // the settings of the chosen decode mode are applied to the generation config
  }
}
```
Data are stored in ``codefuseEval/data`` in JSON list format. We first integrated the HumanEval-X dataset.
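
As a quick sketch of consuming the data, the snippet below reads one evaluation file, assuming a HumanEval-X-style layout of one JSON object per line with fields such as ``task_id`` and ``prompt``; the file name is illustrative.

```python
# Minimal sketch of reading a JSON-list evaluation file; the path and field
# names are assumptions based on the HumanEval-X convention.
import json

samples = []
with open("codefuseEval/data/humaneval_python.jsonl", encoding="utf-8") as f:
    for line in f:
        samples.append(json.loads(line))

print(samples[0]["task_id"], samples[0]["prompt"][:60])
```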
Evaluating the generated code involves compiling and running it in multiple programming languages. The versions of the programming language environments and packages we use are as follows:
| Dependency | Version  |
|------------|----------|
| Python     | 3.10.9   |
| JDK        | 18.0.2.1 |
| Node.js    | 16.14.0  |
| js-md5     | 0.7.3    |
| C++        | 11       |
| g++        | 7.5.0    |
| Boost      | 1.75.0   |
| OpenSSL    | 3.0.0    |
| go         | 1.18.4   |
| cargo      | 1.71.1   |
To save you the trouble of setting up environments for all these languages, we provide a Docker image with the required environments and codefuseEval preinstalled.