WizardLM committed on
Commit 650609c
1 Parent(s): 019ee51

Update README.md

Files changed (1)
  1. README.md +217 -2
README.md CHANGED
@@ -4,6 +4,221 @@ license: bigcode-openrail-m

  This is the Full-Weight of WizardCoder.

- Repository: https://github.com/nlpxucan/WizardLM
- Paper:
+ Repository: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
+
+ Paper: Coming
+
+ # WizardCoder: Empowering Code Large Language Models with Evol-Instruct
+
+ [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
+ [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
+ [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/)
+
+ To develop our WizardCoder model, we begin by adapting the Evol-Instruct method to coding tasks, tailoring its prompts to the domain of code-related instructions. We then fine-tune the Code LLM StarCoder on the newly created instruction-following training set.
+
+ ## News
+
+ - 🔥 Our **WizardCoder-15B-v1.0** model achieves **57.3 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs.
+ - 🔥 We released **WizardCoder-15B-v1.0**, trained with **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), [Demo](https://1c48cbf5c83110ed.gradio.app/), and [Paper]().
+ - 📣 Please follow our Twitter account https://twitter.com/WizardLM_AI and HuggingFace repo https://huggingface.co/WizardLM . New releases are always announced there first.
+
+ ## Comparing WizardCoder with the Closed-Source Models
+
+ The SOTA LLMs for code generation, such as GPT-4, Claude, and Bard, are predominantly closed-source, and acquiring access to their APIs is challenging. In this study, we therefore take an alternative approach and retrieve the HumanEval and HumanEval+ scores from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). All of the listed models generate one code solution per problem (a single attempt), and the resulting pass rate percentage is reported. Our **WizardCoder** generates its answers with greedy decoding.
+
+ 🔥 The following figure shows that our **WizardCoder attains the third position on this benchmark**, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model is substantially smaller than these models.
+
+ <p align="center" width="100%">
+ <a><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/pass1.png" alt="WizardCoder" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
+ </p>
+
+ ## Comparing WizardCoder with the Open-Source Models
+
+ The following table provides a comprehensive comparison of our **WizardCoder** with other models on the HumanEval and MBPP benchmarks. Following the approach outlined in previous studies, we generate n samples for each problem and use them to estimate the pass@1 score (see the estimator below the table). The findings clearly demonstrate that our **WizardCoder** holds a substantial performance advantage over all the open-source models.
+
+ | Model               | HumanEval Pass@1 | MBPP Pass@1 |
+ |---------------------|------------------|-------------|
+ | CodeGen-16B-Multi   | 18.3             | 20.9        |
+ | CodeGeeX            | 22.9             | 24.4        |
+ | LLaMA-33B           | 21.7             | 30.2        |
+ | LLaMA-65B           | 23.7             | 37.7        |
+ | PaLM-540B           | 26.2             | 36.8        |
+ | PaLM-Coder-540B     | 36.0             | 47.0        |
+ | PaLM 2-S            | 37.6             | 50.0        |
+ | CodeGen-16B-Mono    | 29.3             | 35.3        |
+ | Code-Cushman-001    | 33.5             | 45.9        |
+ | StarCoder-15B       | 33.6             | 43.6*       |
+ | InstructCodeT5+     | 35.0             | --          |
+ | WizardLM-30B 1.0    | 37.8             | --          |
+ | WizardCoder-15B 1.0 | **57.3**         | **51.8**    |
+
+ *: The reproduced result of StarCoder on MBPP.
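+
+ For reference, the pass@1 above follows the unbiased estimator introduced with HumanEval (Chen et al., 2021), which we take to be the approach those previous studies outline: if n samples are generated per problem and c of them pass the unit tests, then
+
+ $$\text{pass@}k = \mathbb{E}_{\text{problems}}\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right],$$
+
+ which for k = 1 reduces to the average fraction of passing samples per problem, since $1 - \binom{n-c}{1}/\binom{n}{1} = c/n$.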
+
+ ## Call for Feedback
+
+ We welcome everyone to evaluate WizardCoder with your professional and difficult instructions, and to show us examples of poor performance, along with your suggestions, in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are currently focused on improving Evol-Instruct and hope to resolve the existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work together with you to improve it.
+
+ ## Contents
+
+ 1. [Online Demo](#online-demo)
+
+ 2. [Fine-tuning](#fine-tuning)
+
+ 3. [Inference](#inference)
+
+ 4. [Evaluation](#evaluation)
+
+ 5. [Citation](#citation)
+
+ 6. [Disclaimer](#disclaimer)
+
+ ## Online Demo
+
+ We will keep our latest models available to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many of the **real-world** and **challenging** code-related problems you encounter in your work and life as possible. We will continue to evolve our models with your feedback.
+
+ [Demo Link](https://1c48cbf5c83110ed.gradio.app/) (We currently use greedy decoding.)
+
+ ## Fine-tuning
+
+ We fine-tune WizardCoder using a modified `train.py` from [Llama-X](https://github.com/AetherCortex/Llama-X).
+ We fine-tune StarCoder-15B with the following hyperparameters:
+
+ | Hyperparameter | StarCoder-15B |
+ |----------------|---------------|
+ | Batch size     | 512           |
+ | Learning rate  | 2e-5          |
+ | Epochs         | 3             |
+ | Max length     | 2048          |
+ | Warmup steps   | 30            |
+ | LR scheduler   | cosine        |
+
+ To reproduce our fine-tuning of WizardCoder, please follow these steps:
+ 1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy. (Note: `deepspeed==0.9.2` and `transformers==4.29.2`.)
+ 2. Replace Llama-X's `train.py` with the `train_wizardcoder.py` from our repo (`src/train_wizardcoder.py`).
+ 3. Log in to Hugging Face:
+ ```bash
+ huggingface-cli login
+ ```
+ 4. Execute the following training command:
+ ```bash
+ deepspeed train_wizardcoder.py \
+     --model_name_or_path "bigcode/starcoder" \
+     --data_path "/your/path/to/code_instruction_data.json" \
+     --output_dir "/your/path/to/ckpt" \
+     --num_train_epochs 3 \
+     --model_max_length 2048 \
+     --per_device_train_batch_size 16 \
+     --per_device_eval_batch_size 1 \
+     --gradient_accumulation_steps 4 \
+     --evaluation_strategy "no" \
+     --save_strategy "steps" \
+     --save_steps 50 \
+     --save_total_limit 2 \
+     --learning_rate 2e-5 \
+     --warmup_steps 30 \
+     --logging_steps 2 \
+     --lr_scheduler_type "cosine" \
+     --report_to "tensorboard" \
+     --gradient_checkpointing True \
+     --deepspeed configs/deepspeed_config.json \
+     --fp16 True
+ ```
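+
+ With the flags above, the effective global batch size is per_device_train_batch_size × gradient_accumulation_steps × number of GPUs, i.e. 16 × 4 × 8 = 512 on an 8-GPU node, which matches the batch size in the hyperparameter table. The command also points at `configs/deepspeed_config.json`; if you need to recreate that file, a minimal ZeRO-2 sketch that works with the `transformers` Trainer integration (our assumption, not necessarily the repo's exact config) is:
+ ```bash
+ # Hypothetical stand-in for configs/deepspeed_config.json; the "auto" values
+ # are filled in by the transformers Trainer from the command-line flags above.
+ mkdir -p configs
+ cat > configs/deepspeed_config.json <<'EOF'
+ {
+   "zero_optimization": {
+     "stage": 2,
+     "overlap_comm": true,
+     "contiguous_gradients": true
+   },
+   "fp16": { "enabled": "auto" },
+   "gradient_accumulation_steps": "auto",
+   "train_micro_batch_size_per_gpu": "auto",
+   "train_batch_size": "auto",
+   "gradient_clipping": "auto"
+ }
+ EOF
+ ```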
+
+ ## Inference
+
+ We provide a decoding script for WizardCoder, which reads an input file, generates a response for each sample, and consolidates the responses into an output file.
+
+ You can specify `base_model`, `input_data_path`, and `output_data_path` in `src/inference_wizardcoder.py` to set the decoding model, the path of the input file, and the path of the output file.
+
+ ```bash
+ pip install jsonlines
+ ```
+
+ The decoding command is:
+ ```bash
+ python src/inference_wizardcoder.py \
+     --base_model "/your/path/to/ckpt" \
+     --input_data_path "/your/path/to/input/data.jsonl" \
+     --output_data_path "/your/path/to/output/result.jsonl"
+ ```
+
+ The format of `data.jsonl` should be:
+ ```json
+ {"idx": 11, "Instruction": "Write a Python code to count 1 to 10."}
+ {"idx": 12, "Instruction": "Write a Java code to sum 1 to 10."}
+ ```
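+
+ As a minimal end-to-end illustration (the instructions and `/tmp` paths here are hypothetical examples, not shipped with the repo), you could write a two-line input file in the format above and decode it:
+ ```bash
+ # Create a small input file in the expected {"idx": ..., "Instruction": ...} format.
+ printf '%s\n' \
+     '{"idx": 1, "Instruction": "Write a Python function that reverses a string."}' \
+     '{"idx": 2, "Instruction": "Write a Python function that checks whether a number is prime."}' \
+     > /tmp/wizardcoder_input.jsonl
+
+ # Generate a response for each line; results land in the output .jsonl file.
+ python src/inference_wizardcoder.py \
+     --base_model "/your/path/to/ckpt" \
+     --input_data_path "/tmp/wizardcoder_input.jsonl" \
+     --output_data_path "/tmp/wizardcoder_output.jsonl"
+ ```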
+
+ The prompt for our WizardCoder in `src/inference_wizardcoder.py` is:
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {instruction}
+
+ ### Response:
+ ```
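+
+ To make the template concrete, here is how it expands for one of the sample instructions above, sketched with plain shell substitution (we assume the script performs the equivalent replacement internally):
+ ```bash
+ # Fill the {instruction} slot of the prompt template.
+ instruction="Write a Python code to count 1 to 10."
+ prompt="Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ ${instruction}
+
+ ### Response:"
+ echo "${prompt}"
+ ```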
+
+ ## Evaluation
+
+ We provide an evaluation script for WizardCoder on HumanEval.
+
+ 1. According to the instructions of [HumanEval](https://github.com/openai/human-eval), install the environment.
+ 2. Run the following script to generate the answers.
+ ```bash
+ model="/path/to/your/model"
+ temp=0.2
+ max_len=2048
+ pred_num=200
+ num_seqs_per_iter=2
+
+ output_path=preds/T${temp}_N${pred_num}
+
+ mkdir -p ${output_path}
+ echo 'Output path: '$output_path
+ echo 'Model to eval: '$model
+
+ # 164 problems; with 8 GPUs, each shard covers 21 of them
+ # (the final shard ends past index 164 and simply has fewer problems).
+ index=0
+ gpu_num=8
+ for ((i = 0; i < $gpu_num; i++)); do
+   start_index=$((i * 21))
+   end_index=$(((i + 1) * 21))
+
+   gpu=$((i))
+   echo 'Running process #'${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
+   ((index++))
+   (
+     CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
+       --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
+       --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
+   ) &
+   if (($index % $gpu_num == 0)); then wait; fi
+ done
+ ```
+ 3. Run the post-processing code `src/process_humaneval.py` to collect the code completions from all answer files.
+ ```bash
+ output_path=preds/T${temp}_N${pred_num}
+
+ echo 'Output path: '$output_path
+ python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt
+
+ evaluate_functional_correctness ${output_path}.jsonl
+ ```
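+
+ The final command, `evaluate_functional_correctness`, is installed by the `human-eval` package from step 1; it executes the collected completions against the HumanEval unit tests and prints a dictionary of pass@k scores. For example (illustrative output; 0.573 is our reported pass@1, and the exact numbers depend on your run):
+ ```bash
+ evaluate_functional_correctness preds/T0.2_N200.jsonl
+ # Example output:
+ # {'pass@1': 0.573}
+ ```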
+
+ ## Citation
+
+ Please cite the repo if you use its data or code.
+
+ ```bibtex
+ @misc{luo2023wizardcoder,
+   title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
+   author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
+   year={2023},
+ }
+ ```
+
+ ## Disclaimer
+
+ The resources associated with this project, including the code, data, and model weights, are restricted to academic research purposes only and may not be used commercially. The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, so the accuracy of its output cannot be guaranteed. This project accepts no legal liability for the content of the model's output, nor does it assume responsibility for any losses incurred through the use of the associated resources and output results.