---
base_model: WizardLM/WizardCoder-33B-V1.1
inference: false
library_name: transformers
metrics:
- code_eval
model-index:
- name: WizardCoder
  results:
  - dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.799
      verified: false
    task:
      type: text-generation
model_creator: WizardLM
model_name: Wizardcoder 33B V1.1
model_type: deepseek
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
tags:
- code
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# WizardCoder 33B V1.1 - AWQ
- Model creator: [WizardLM](https://huggingface.co/WizardLM)
- Original model: [WizardCoder 33B V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1)

<!-- description start -->
## Description

This repo contains AWQ model files for [WizardLM's WizardCoder 33B V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, for support of all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
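
For background, AWQ files like those in this repo are typically produced with AutoAWQ. The sketch below is illustrative only: it uses AutoAWQ 0.1.x's API with this repo's published parameters (4-bit, group size 128, GEMM kernel), but the calibration setup and local paths are assumptions, not the exact script used here.

```python
# Illustrative AWQ quantisation sketch (AutoAWQ 0.1.x API; not the exact
# script used to produce this repo's files).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "WizardLM/WizardCoder-33B-V1.1"
quant_path = "WizardCoder-33B-V1.1-AWQ"

# 4-bit weights, group size 128, GEMM kernel - matching the parameters
# listed in the "Provided files" table below.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantise. This repo's files were calibrated on Evol Instruct Code (see the
# table below); AutoAWQ's default calibration dataset is used here for brevity.
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantised model and tokenizer.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
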
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF)
* [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->
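
For clarity, here is a minimal Python sketch (an editorial addition, not part of the original README) showing how the Alpaca template above is filled in before being sent to the model; the example instruction is arbitrary:

```python
# Illustrative only: fill the Alpaca prompt template with a user prompt.
prompt_template = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

formatted = prompt_template.format(prompt="Write a Python function that reverses a string.")
print(formatted)
```
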


<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-AWQ/tree/main) | 4 | 128 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 8192 | 18.01 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to do a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardCoder-33B-V1.1-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `WizardCoder-33B-V1.1-AWQ`.
7. Select **Loader: AutoAWQ**.
8. Click **Load**; the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! A command-line download alternative is sketched below.
<!-- README_AWQ.md-text-generation-webui end -->
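
Alternatively, the same files can be fetched from the command line with `huggingface-cli` (part of huggingface_hub 0.17.0 and later); the target directory below is just an example:

```shell
pip3 install huggingface-hub
huggingface-cli download TheBloke/WizardCoder-33B-V1.1-AWQ --local-dir WizardCoder-33B-V1.1-AWQ --local-dir-use-symlinks False
```
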

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/WizardCoder-33B-V1.1-AWQ --quantization awq --dtype auto
```
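
Once the server is running you can sanity-check it over HTTP. This is a minimal sketch against the demo api_server's `/generate` endpoint on its default port 8000 (adjust if you pass `--port`); the JSON fields are standard vLLM sampling parameters:

```shell
curl http://localhost:8000/generate \
    -d '{
        "prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTell me about AI\n\n### Response:\n",
        "max_tokens": 128,
        "temperature": 0.8
    }'
```
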

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Note: a plain string, not an f-string, as {prompt} is filled in via .format() below
prompt_template = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/WizardCoder-33B-V1.1-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/WizardCoder-33B-V1.1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
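
For reference, a full `docker run` invocation wrapping those parameters might look like the following sketch; the `--gpus`/`--shm-size` flags follow standard TGI Docker usage and the volume path is an assumed local path, not a repo-specific value:

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v /path/to/data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/WizardCoder-33B-V1.1-AWQ --port 3000 --quantize awq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
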

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

client = InferenceClient(endpoint_url)
# Send the full formatted template, not the bare prompt
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/WizardCoder-33B-V1.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: WizardLM's WizardCoder 33B V1.1


## WizardCoder: Empowering Code Large Language Models with Evol-Instruct

<p style="font-size:28px;" align="center">
🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">GitHub Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p>
<p align="center">
📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>

## News

[2024/01/04] 🔥 We released **WizardCoder-33B-V1.1**, trained from deepseek-coder-33b-base, the **SOTA OSS Code LLM** on the [EvalPlus Leaderboard](https://evalplus.github.io/leaderboard.html). It achieves **79.9 pass@1** on HumanEval, **73.2 pass@1** on HumanEval-Plus, **78.9 pass@1** on MBPP, and **66.9 pass@1** on MBPP-Plus.

[2024/01/04] 🔥 **WizardCoder-33B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, and **DeepSeek-Coder-33B-instruct** on HumanEval and HumanEval-Plus pass@1.

[2024/01/04] 🔥 **WizardCoder-33B-V1.1** is comparable with **ChatGPT 3.5**, and surpasses **Gemini Pro**, on MBPP and MBPP-Plus pass@1.

| Model | Checkpoint | Paper | HumanEval | HumanEval+ | MBPP | MBPP+ | License |
| ----- | ---------- | ----- | --------- | ---------- | ---- | ----- | ------- |
| GPT-4-Turbo (Nov 2023) | - | - | 85.4 | 81.7 | 83.0 | 70.7 | - |
| GPT-4 (May 2023) | - | - | 88.4 | 76.8 | - | - | - |
| GPT-3.5-Turbo (Nov 2023) | - | - | 72.6 | 65.9 | 81.7 | 69.4 | - |
| Gemini Pro | - | - | 63.4 | 55.5 | 72.9 | 57.9 | - |
| DeepSeek-Coder-33B-instruct | - | - | 78.7 | 72.6 | 78.7 | 66.7 | - |
| **WizardCoder-33B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-33B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 79.9 | 73.2 | 78.9 | 66.9 | <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1/resolve/main/LICENSE" target="_blank">MSFTResearch</a> |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 64.6 | 73.2 | 59.9 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 52.4 | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | -- | -- | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | -- | -- | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | -- | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | -- | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |


## ❗ Data Contamination Check

Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on the HumanEval and MBPP test sets.

🔥
❗<b>Note on model system prompt usage:</b>

Please strictly use **the same system prompt** as ours, and note that we do not guarantee the accuracy of **quantised versions**.

**Default version:**

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
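
For illustration, here is a short Python sketch (an editorial addition, not from the original card) showing the default system prompt filled with a user instruction; the example instruction is arbitrary:

```python
# Illustrative only: apply the default WizardCoder system prompt.
DEFAULT_PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
    "\n\n### Instruction:\n{instruction}\n\n### Response:"
)

print(DEFAULT_PROMPT.format(instruction="Write a function that checks if a number is prime."))
```
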


## How to Reproduce the Performance of WizardCoder-33B-V1.1

We provide all the code [here](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder/src).

We also provide all the generated [results](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/data/humaneval_mbpp_wizardcoder33b_v1.1_results.zip).

```
transformers==4.36.2
vllm==0.2.5
```

(1) HumanEval and HumanEval-Plus

- Step 1

Code Generation (w/o accelerate)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

# 164 problems, 21 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
  start_index=$((i * 21))
  end_index=$(((i + 1) * 21))

  gpu=$((i))
  echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
  ((index++))
  (
    CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
      --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --greedy_decode
  ) &
  if (($index % $gpu_num == 0)); then wait; fi
done
```

Code Generation (w/ vLLM acceleration)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode_vllm

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

CUDA_VISIBLE_DEVICES=0,1,2,3 python humaneval_gen_vllm.py --model ${model} \
  --start_index 0 --end_index 164 --temperature ${temp} \
  --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --num_gpus 4 --overwrite
```

- Step 2: Get the score

Install the [EvalPlus](https://github.com/evalplus/evalplus) benchmark.
```bash
git clone https://github.com/evalplus/evalplus.git
cd evalplus
export PYTHONPATH=$PYTHONPATH:$(pwd)
pip install -r requirements.txt
```
Get the HumanEval and HumanEval-Plus scores.
```bash
output_path=preds/T0.0_N1_WizardCoder-33B-V1.1_Greedy_Decode

echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt

evalplus.evaluate --dataset humaneval --samples ${output_path}.jsonl
```

(2) MBPP and MBPP-Plus

The preprocessed questions are provided in [mbppplus.json](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/data/mbppplus.json).

- Step 1

Code Generation (w/o accelerate)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/MBPP_T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

# 399 problems, 50 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
  start_index=$((i * 50))
  end_index=$(((i + 1) * 50))

  gpu=$((i))
  echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
  ((index++))
  (
    CUDA_VISIBLE_DEVICES=$gpu python mbppplus_gen.py --model ${model} \
      --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --mbpp_path "mbppplus.json" --greedy_decode
  ) &
  if (($index % $gpu_num == 0)); then wait; fi
done
```

Code Generation (w/ vLLM acceleration)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/MBPP_T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode_vllm

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

CUDA_VISIBLE_DEVICES=0,1,2,3 python mbppplus_gen_vllm.py --model ${model} \
  --start_index 0 --end_index 399 --temperature ${temp} \
  --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --mbpp_path "mbppplus.json" --num_gpus 4
```

- Step 2: Get the score

Install the [EvalPlus](https://github.com/evalplus/evalplus) benchmark (as above).
```bash
git clone https://github.com/evalplus/evalplus.git
cd evalplus
export PYTHONPATH=$PYTHONPATH:$(pwd)
pip install -r requirements.txt
```
Get the MBPP and MBPP-Plus scores.
```bash
output_path=preds/MBPP_T0.0_N1_WizardCoder-33B-V1.1_Greedy_Decode

echo 'Output path: '$output_path
python mbppplus_process_preds.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt

evalplus.evaluate --dataset mbpp --samples ${output_path}.jsonl
```


## Citation

Please cite this repository if you use its data, methods, or code.

```
@article{luo2023wizardcoder,
  title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
  author={Luo, Ziyang and Xu, Can and Zhao, Pu and Sun, Qingfeng and Geng, Xiubo and Hu, Wenxiang and Tao, Chongyang and Ma, Jing and Lin, Qingwei and Jiang, Daxin},
  journal={arXiv preprint arXiv:2306.08568},
  year={2023}
}
```