patrickbdevaney committed on
Commit
f6608b7
1 Parent(s): cab7560

tokenizer, vocab, config


Files required to take the pytorch_model.bin and convert it to GGUF (tokenizer, vocab, config).

.gitattributes CHANGED
@@ -33,4 +33,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-WizardCoder-1B-V1.0-ggml-f16.gguf filter=lfs diff=lfs merge=lfs -text

README.md ADDED
@@ -0,0 +1,303 @@
---
license: bigscience-openrail-m
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: WizardCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.573
      verified: false
---

<h1 style="margin:20px;" align="center">This is a GGUF Version of WizardCoder-1B-V1.0</h1>
<h2 style="margin:20px;" align="center">Quantization done by Prashant Vasudevan <a href="https://github.com/vprashrex">Github@vprashrex</a></h2>
<h2 style="margin:20px;" align="center">Quantization type: Q4_K</h2>

<p style="font-size:28px" align="center">
🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a>
</p>

<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
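
Since this repo ships a Q4_K-quantized GGUF, one quick way to try it is llama-cpp-python. Below is a minimal sketch, not an official recipe; the `.gguf` filename is an assumption, so substitute whatever file this repo actually contains:

```python
# Minimal sketch: run the Q4_K GGUF locally with llama-cpp-python
# (pip install llama-cpp-python). The model filename below is an
# assumption; point it at the .gguf file downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="WizardCoder-1B-V1.0-Q4_K.gguf", n_ctx=2048)

# WizardCoder uses an Alpaca-style instruction prompt (see "Inference" below).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:"
)
out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```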

## News

- 🔥🔥🔥[2023/08/26] We released **WizardCoder-Python-34B-V1.0**, which achieves **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
- [2023/06/16] We released **WizardCoder-15B-V1.0**, which surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | 37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | 28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |

- Comparing WizardCoder-Python-34B-V1.0 with Other LLMs.

🔥 The following figure shows that our **WizardCoder-Python-34B-V1.0 attains the second position in this benchmark**, surpassing GPT4 (2023/03/15, 73.2 vs. 67.0), ChatGPT-3.5 (73.2 vs. 72.5) and Claude2 (73.2 vs. 71.2).

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/compare_sota.png" alt="WizardCoder" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

- 🔥 [08/11/2023] We release **WizardMath** Models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.

| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **81.6** | **22.7** | [Demo](http://47.103.63.15:50083/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **63.9** | **14.0** | [Demo](http://47.103.63.15:50082/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |

<font size=4>

| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> | <sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup> |
| ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a></sup> | | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4%</sup> | <sup>36.6 pass@1</sup> | <sup><a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
| <sup>WizardLM-13B-V1.1</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a></sup> | | <sup>6.76</sup> | <sup>86.32%</sup> | <sup>99.3%</sup> | <sup>25.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8%</sup> | <sup>37.8 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a></sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1%</sup> | <sup>24.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-7B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a></sup> | | | <sup>78.0%</sup> | <sup>19.1 pass@1</sup> | <sup>Non-commercial</sup> |
</font>

# WizardCoder: Empowering Code Large Language Models with Evol-Instruct

To develop our WizardCoder model, we begin by adapting the Evol-Instruct method specifically for coding tasks. This involves tailoring the prompt to the domain of code-related instructions. Subsequently, we fine-tune the Code LLM, StarCoder, utilizing the newly created instruction-following training set.

## News

- 🔥 Our **WizardCoder-15B-v1.0** model achieves **57.3 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs.
- 🔥 We released **WizardCoder-15B-v1.0** trained with **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) and the [Paper]().
- &#x1F4E3; Please refer to our Twitter account https://twitter.com/WizardLM_AI and HuggingFace Repo https://huggingface.co/WizardLM. We will use them to announce new releases first.

## Comparing WizardCoder with the Closed-Source Models

🔥 The following figure shows that our **WizardCoder attains the third position in this benchmark**, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model is substantially smaller than these models.

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/pass1.png" alt="WizardCoder" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
</p>

❗**Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem with a single attempt, and the resulting pass rate percentage is reported. Our WizardCoder generates answers using greedy decoding and is tested with the same [code](https://github.com/evalplus/evalplus).**

## Comparing WizardCoder with the Open-Source Models

The following table clearly demonstrates that our **WizardCoder** exhibits a substantial performance advantage over all the open-source models. ❗**If you are confused by the different scores of our model (57.3 and 59.8), please check the Notes.**

| Model | HumanEval Pass@1 | MBPP Pass@1 |
|------------------|------------------|-------------|
| CodeGen-16B-Multi | 18.3 | 20.9 |
| CodeGeeX | 22.9 | 24.4 |
| LLaMA-33B | 21.7 | 30.2 |
| LLaMA-65B | 23.7 | 37.7 |
| PaLM-540B | 26.2 | 36.8 |
| PaLM-Coder-540B | 36.0 | 47.0 |
| PaLM 2-S | 37.6 | 50.0 |
| CodeGen-16B-Mono | 29.3 | 35.3 |
| Code-Cushman-001 | 33.5 | 45.9 |
| StarCoder-15B | 33.6 | 43.6* |
| InstructCodeT5+ | 35.0 | -- |
| WizardLM-30B 1.0 | 37.8 | -- |
| WizardCoder-15B 1.0 | **57.3** | **51.8** |

❗**Note: The asterisk (*) marks our reproduced result of StarCoder on MBPP.**

❗**Note: The above table conducts a comprehensive comparison of our WizardCoder with other models on the HumanEval and MBPP benchmarks. We adhere to the approach outlined in previous studies by generating 20 samples for each problem to estimate the pass@1 score and evaluate with the same [code](https://github.com/openai/human-eval/tree/master). The scores of GPT4 and GPT3.5 reported by [OpenAI](https://openai.com/research/gpt-4) are 67.0 and 48.1 (these may be from early versions of GPT-4 and GPT-3.5).**
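
For reference, the pass@k numbers above come from the unbiased estimator used by the openai/human-eval code: given n generated samples per problem of which c pass the unit tests, pass@k = 1 - C(n-c, k) / C(n, k). A short sketch of that estimator:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), computed as a stable product."""
    if n - c < k:
        return 1.0  # fewer than k failures, so any k draws include a pass
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# e.g. 20 samples per problem (as above), 12 of which pass the unit tests:
print(pass_at_k(n=20, c=12, k=1))  # pass@1 reduces to c/n = 0.6
```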

## Call for Feedback

We welcome everyone to use professional and difficult instructions to evaluate WizardCoder, and to show us examples of poor performance and your suggestions in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are currently focusing on improving Evol-Instruct and hope to relieve existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work with you to improve it.

## Contents

1. [Online Demo](#online-demo)
2. [Fine-tuning](#fine-tuning)
3. [Inference](#inference)
4. [Evaluation](#evaluation)
5. [Citation](#citation)
6. [Disclaimer](#disclaimer)

## Online Demo

We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many **real-world** and **challenging** code-related problems from your work and life as possible. We will continue to evolve our models with your feedback.


## Fine-tuning

We fine-tune WizardCoder using a modified `train.py` from [Llama-X](https://github.com/AetherCortex/Llama-X).
We fine-tune StarCoder-15B with the following hyperparameters:

| Hyperparameter | StarCoder-15B |
|----------------|---------------|
| Batch size | 512 |
| Learning rate | 2e-5 |
| Epochs | 3 |
| Max length | 2048 |
| Warmup steps | 30 |
| LR scheduler | cosine |

To reproduce our fine-tuning of WizardCoder, please follow these steps:
1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy. (Note: `deepspeed==0.9.2` and `transformers==4.29.2`)
2. Replace `train.py` with the `train_wizardcoder.py` from our repo (`src/train_wizardcoder.py`).
3. Log in to Hugging Face:
```bash
huggingface-cli login
```
4. Execute the following training command:
```bash
deepspeed train_wizardcoder.py \
    --model_name_or_path "bigcode/starcoder" \
    --data_path "/your/path/to/code_instruction_data.json" \
    --output_dir "/your/path/to/ckpt" \
    --num_train_epochs 3 \
    --model_max_length 2048 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50 \
    --save_total_limit 2 \
    --learning_rate 2e-5 \
    --warmup_steps 30 \
    --logging_steps 2 \
    --lr_scheduler_type "cosine" \
    --report_to "tensorboard" \
    --gradient_checkpointing True \
    --deepspeed configs/deepspeed_config.json \
    --fp16 True
```
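
Note: the effective global batch size here is per_device_train_batch_size × gradient_accumulation_steps × number of GPUs, i.e. 16 × 4 × 8 = 512 on an (assumed) 8-GPU node, which matches the batch size of 512 in the hyperparameter table above.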

## Inference

We provide a decoding script for WizardCoder, which reads an input file, generates a response for each sample, and consolidates the results into an output file.

You can specify `base_model`, `input_data_path` and `output_data_path` in `src/inference_wizardcoder.py` to set the decoding model, the path of the input file, and the path of the output file.

```bash
pip install jsonlines
```

The decoding command is:
```bash
python src/inference_wizardcoder.py \
    --base_model "/your/path/to/ckpt" \
    --input_data_path "/your/path/to/input/data.jsonl" \
    --output_data_path "/your/path/to/output/result.jsonl"
```

The format of `data.jsonl` should be:
```json
{"idx": 11, "Instruction": "Write a Python code to count 1 to 10."}
{"idx": 12, "Instruction": "Write a Java code to sum 1 to 10."}
```

The prompt for our WizardCoder in `src/inference_wizardcoder.py` is:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
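
If you prefer to call the model directly instead of going through `src/inference_wizardcoder.py`, the same prompt template can be used with plain `transformers`. A minimal sketch (greedy decoding, matching the HumanEval setup described above; substitute a local checkpoint path for the model id if needed):

```python
# Minimal sketch: apply the WizardCoder prompt template with plain transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardCoder-1B-V1.0"  # or a local checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

instruction = "Write a Python code to count 1 to 10."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding, as used for the HumanEval numbers reported above.
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```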

## Evaluation

We provide the evaluation script on HumanEval for WizardCoder.

1. According to the instructions of [HumanEval](https://github.com/openai/human-eval), install the environment.
2. Run the following script to generate the answers.
```bash
model="/path/to/your/model"
temp=0.2
max_len=2048
pred_num=200
num_seqs_per_iter=2

output_path=preds/T${temp}_N${pred_num}

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

# 164 problems, 21 per GPU if gpu_num=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
  start_index=$((i * 21))
  end_index=$(((i + 1) * 21))

  gpu=$((i))
  echo 'Running process #'${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
  ((index++))
  (
    CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
      --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
  ) &
  if (($index % $gpu_num == 0)); then wait; fi
done
```
3. Run the post-processing script `src/process_humaneval.py` to collect the code completions from all answer files.
```bash
output_path=preds/T${temp}_N${pred_num}

echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt

evaluate_functional_correctness ${output_path}.jsonl
```

## Citation

Please cite the repo if you use the data, method, or code from this repo.

```
@article{luo2023wizardcoder,
  title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
  author={Luo, Ziyang and Xu, Can and Zhao, Pu and Sun, Qingfeng and Geng, Xiubo and Hu, Wenxiang and Tao, Chongyang and Ma, Jing and Lin, Qingwei and Jiang, Daxin},
  journal={arXiv preprint arXiv:2306.08568},
  year={2023}
}
```

## Disclaimer

The WizardCoder model follows the same license as StarCoder. The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, and therefore the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
added_tokens.json ADDED
@@ -0,0 +1,3 @@
{
  "[PAD]": 49152
}
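
(Note: this file adds a single `[PAD]` token at id 49152, one past the base StarCoder vocabulary of 49152 tokens; that is why `config.json` below reports `"vocab_size": 49153` while `tokenizer_config.json` still lists the base `"vocab_size": 49152`.)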
config.json ADDED
@@ -0,0 +1,39 @@
{
  "_name_or_path": "bigcode/starcoderbase-1b",
  "activation_function": "gelu_pytorch_tanh",
  "architectures": [
    "GPTBigCodeForCausalLM"
  ],
  "attention_softmax_in_fp32": true,
  "attn_pdrop": 0.1,
  "bos_token_id": 0,
  "embd_pdrop": 0.1,
  "eos_token_id": 0,
  "inference_runner": 0,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "max_batch_size": null,
  "max_sequence_length": null,
  "model_type": "gpt_bigcode",
  "multi_query": true,
  "n_embd": 2048,
  "n_head": 16,
  "n_inner": 8192,
  "n_layer": 24,
  "n_positions": 8192,
  "pad_key_length": true,
  "pre_allocate_kv_cache": false,
  "resid_pdrop": 0.1,
  "scale_attention_softmax_in_fp32": true,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torch_dtype": "float16",
  "transformers_version": "4.29.2",
  "use_cache": false,
  "validate_runner_input": true,
  "vocab_size": 49153
}
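
As a sanity check, this config can be instantiated without downloading any weights. A minimal sketch, assuming this repo's files sit in the current directory:

```python
# Minimal sketch: instantiate the architecture described by config.json
# without loading weights. Assumes this repo's files are in the current dir.
from transformers import AutoConfig, AutoModelForCausalLM

cfg = AutoConfig.from_pretrained(".")  # reads ./config.json
assert cfg.model_type == "gpt_bigcode" and cfg.multi_query
print(cfg.n_layer, cfg.n_embd, cfg.vocab_size)  # 24 2048 49153

model = AutoModelForCausalLM.from_config(cfg)  # randomly initialized skeleton
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
```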
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "transformers_version": "4.29.2"
}
special_tokens_map.json ADDED
@@ -0,0 +1,27 @@
{
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "pad_token": "[PAD]",
  "unk_token": "<|endoftext|>"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,32 @@
{
  "add_prefix_space": false,
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 8192,
  "padding_side": "right",
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>",
  "vocab_size": 49152
}
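
Together with `tokenizer.json` and `vocab.json` below, these files fully define the tokenizer, which is what a GGUF converter needs alongside `pytorch_model.bin`. A minimal sketch, again assuming the repo files are in the current directory:

```python
# Minimal sketch: load the tokenizer defined by tokenizer.json, vocab.json,
# tokenizer_config.json and friends. Assumes the files are in the current dir.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(".")
print(tok.eos_token, tok.pad_token)  # <|endoftext|> [PAD]

# The StarCoder fill-in-the-middle (FIM) markers listed above are single
# special tokens, not plain text:
fim = "<fim_prefix>def add(a, b):\n    <fim_suffix>\n<fim_middle>"
print(tok.convert_ids_to_tokens(tok(fim)["input_ids"])[:3])
```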
vocab.json ADDED
The diff for this file is too large to render. See raw diff