TheBloke committed on
Commit ebb136d
1 Parent(s): 4f91bad

Upload new k-quant GGML quantised models.

Files changed (1)
  1. README.md +64 -217
README.md CHANGED
@@ -17,9 +17,9 @@ license: other
  </div>
  <!-- header end -->

- # WizardLM 13B 1.0 GGML

- These files are GGML format model files for [WizardLM 13B 1.0](https://huggingface.co/TheBloke/wizardLM-13B-1.0-HF).

  GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
@@ -28,43 +28,71 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
  * [ctransformers](https://github.com/marella/ctransformers)

- ## Other repositories available

- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GPTQ)
- * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGML)
- * [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16)

- ## Prompt Template

- ```
- A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
- USER: prompt goes here
- ASSISTANT:
- ```

- ## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

- llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

- I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

  ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | WizardLM-13B-1.0.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | 4-bit. |
- | WizardLM-13B-1.0.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
- | WizardLM-13B-1.0.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | WizardLM-13B-1.0.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
- | WizardLM-13B-1.0.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
-

  ## How to run in `llama.cpp`

  I use the following command line; adjust for your tastes and needs:

  ```
- ./main -t 12 -m WizardLM-13B-1.0.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: write a story about llamas ASSISTANT:"
  ```
- Change `-t 12` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

@@ -72,8 +100,6 @@ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argumen

  Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

- Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
-
  <!-- footer start -->
  ## Discord

@@ -94,210 +120,31 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
-
- Thank you to all my generous patrons and donaters!
- <!-- footer end -->
-
- # Original model card: WizardLM 13B 1.0
-
- ## WizardLM: An Instruction-following LLM Using Evol-Instruct
- Empowering Large Pre-Trained Language Models to Follow Complex Instructions
-
- <p align="center" width="100%">
- <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/imgs/WizardLM.png" alt="WizardLM" style="width: 20%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
- [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
- [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/)
-
- ## News
-
- At present, our core contributors are preparing the **33B** version and we expect to empower WizardLM with the ability to perform instruction evolution itself, aiming to evolve your specific data at a low cost.
-
- 🔥 We released the **13B** version of **WizardLM**, trained with **250k** evolved instructions (from ShareGPT). Check out the [Demo_13B](https://a6d4f31b5a1ee33f.gradio.app/), [Demo_13B_bak](https://e79c80d2c2379e77.gradio.app) and the GPT-4 evaluation. Please download our delta model at the following [link](https://huggingface.co/victor123/WizardLM-13B-1.0).
- 🔥 We released the **7B** version of **WizardLM**, trained with **70k** evolved instructions (from Alpaca data). Check out the [paper](https://arxiv.org/abs/2304.12244), [Demo_7B](https://f195ccdce69a86d5.gradio.app) and [Demo_7B_bak](https://ce25bd0feced0f77.gradio.app).
- 📣 We are looking for highly motivated students to join us as interns to create more intelligent AI together. Please contact caxu@microsoft.com
-
- <!-- Although on our **complexity-balanced test set**, **WizardLM-7B has more cases that are preferred by human labelers than ChatGPT** in the high-complexity instructions (difficulty level >= 8), it still lags behind ChatGPT on the entire test set, and we also consider WizardLM to still be in a **baby state**. This repository will **continue to improve WizardLM**, train on larger scales, add more training data, and innovate more advanced large-model training methods. -->
-
- <b>Note for 13B model usage:</b> To obtain results **identical to our demo**, please strictly follow the prompts and invocation methods provided in **"src/infer_wizardlm13b.py"** to use our 13B model for inference. Unlike the 7B model, the 13B model adopts the prompt format from Vicuna and supports **multi-turn** conversation.
-
- <b>Note for demo usage:</b> We only recommend using **English** to experience our model. Support for other languages will be introduced in the future. The demo currently only supports **single-turn** conversation.
-
- ### GPT-4 automatic evaluation
-
- We adopt the automatic evaluation framework based on GPT-4, proposed by FastChat, to assess the performance of chatbot models. As shown in the following figure, WizardLM-13B achieved better results than Vicuna-13b.
- <p align="center" width="100%">
- <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/imgs/WizarLM13b-GPT4.png" alt="WizardLM" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- ### WizardLM-13B performance on different skills
-
- The following figure compares the skills of WizardLM-13B and ChatGPT on the Evol-Instruct test set. The result indicates that WizardLM-13B achieves 89.1% of ChatGPT's performance on average, reaching (or exceeding) 100% of ChatGPT's capacity on 10 skills and more than 90% on 22 skills.
-
- <p align="center" width="100%">
- <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/imgs/evol-testset_skills-13b.png" alt="WizardLM" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- ## Call for Feedback
- We welcome everyone to use your professional and difficult instructions to evaluate WizardLM, and to show us examples of poor performance and your suggestions in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are currently focusing on improving Evol-Instruct and hope to relieve existing weaknesses and issues in the next version of WizardLM. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work together with you to improve it.
-
- ## Unofficial Video Introductions
- Thanks to these enthusiastic friends for their lively and interesting video introductions:
- 1. [GET WizardLM NOW! 7B LLM KING That Can Beat ChatGPT! I'm IMPRESSED!](https://www.youtube.com/watch?v=SaJ8wyKMBds)
- 2. [WizardLM: Enhancing Large Language Models to Follow Complex Instructions](https://www.youtube.com/watch?v=I6sER-qivYk)
-
- ## Case Show
- We sample some cases to demonstrate the performance of WizardLM and ChatGPT on data of varying difficulty; for details, please refer to [Case Show](https://github.com/nlpxucan/WizardLM/blob/main/src/case_show.md).
-
- ## Overview of Evol-Instruct

- [Evol-Instruct](https://github.com/nlpxucan/evol-instruct) is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, in order to improve the performance of LLMs.

- <p align="center" width="100%">
- <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/imgs/git_overall.png" alt="WizardLM" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- <p align="center" width="100%">
- <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/imgs/git_running.png" alt="WizardLM" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- ## Contents
-
- 1. [Online Demo](#online-demo)
-
- 2. [Training Data](#training-data)
-
- 3. [WizardLM Weights](#wizardlm-weights)
-
- 4. [Fine-tuning](#fine-tuning)
-
- 5. [Distributed Fine-tuning](#distributed-Fine-tuning)
-
- 6. [Inference](#inference)
-
- 7. [Evaluation](#evaluation)
-
- 8. [Citation](#citation)
-
- 9. [Disclaimer](#disclaimer)
-
- ## Online Demo
-
- We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many of the **real-world** and **challenging** problems that you encounter in your work and life as possible. We will continue to evolve our models with your feedback.
-
- [Demo Link](https://011fc8477ad734d7.gradio.app)
-
- [Demo Backup 1](https://1825e531c43a23c7.gradio.app)
-
- ## Training Data
-
- [`alpaca_evol_instruct_70k.json`](https://huggingface.co/datasets/victor123/evol_instruct_70k) contains 70K instruction-following examples generated from Evol-Instruct. We used it for fine-tuning the WizardLM model.
- This JSON file is a list of dictionaries, each of which contains the following fields:
-
- - `instruction`: `str`, describes the task the model should perform. Each of the 70K instructions is unique.
- - `output`: `str`, the answer to the instruction as generated by `gpt-3.5-turbo`.
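As a rough illustration of that shape, one entry would look something like the following sketch; the strings here are invented placeholders, not rows from the real dataset:

```python
import json

# Illustrative entry for alpaca_evol_instruct_70k.json; the values are
# invented placeholders, not taken from the actual dataset.
entry = {
    "instruction": "Explain the difference between a list and a tuple in Python.",
    "output": "A list is mutable, so its elements can be changed in place; "
              "a tuple is immutable once created...",
}
print(json.dumps([entry], indent=2))  # the file is a JSON list of such dicts
```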
- ## WizardLM Weights
- We release the WizardLM weights as delta weights to comply with the LLaMA model license.
- You can add our delta to the original LLaMA weights to obtain the WizardLM weights. Instructions:
- 1. Get the original LLaMA weights in the Hugging Face format by following the instructions [here](https://huggingface.co/docs/transformers/main/model_doc/llama).
- 2. Download our delta model at the following [link](https://huggingface.co/victor123/WizardLM).
- 3. Use the following script to get the WizardLM weights by applying our delta:
- ```
- python src/weight_diff_wizard.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
- ```

- ## Fine-tuning
-
- We fine-tune WizardLM using code from [Llama-X](https://github.com/AetherCortex/Llama-X).
- We fine-tune LLaMA-7B and LLaMA-13B with the following hyperparameters:
-
- | Hyperparameter | LLaMA-7B | LLaMA-13B |
- |----------------|----------|-----------|
- | Batch size | 64 | 384 |
- | Learning rate | 2e-5 | 2e-5 |
- | Epochs | 3 | 3 |
- | Max length | 2048 | 2048 |
- | Warmup steps | 2 | 50 |
- | LR scheduler | cosine | cosine |
-
- To reproduce our fine-tuning of WizardLM, please follow these steps:
- 1. Following the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy.
- 2. Replace train.py with the train_freeform.py from our repo (src/train_freeform.py).
- 3. Execute the following training command:
- ```bash
- deepspeed train_freeform.py \
-     --model_name_or_path /path/to/llama-7B/hf \
-     --data_path /path/to/alpaca_evol_instruct_70k.json \
-     --output_dir /path/to/wizardlm-7B/hf/ft \
-     --num_train_epochs 3 \
-     --model_max_length 2048 \
-     --per_device_train_batch_size 8 \
-     --per_device_eval_batch_size 1 \
-     --gradient_accumulation_steps 1 \
-     --evaluation_strategy "no" \
-     --save_strategy "steps" \
-     --save_steps 800 \
-     --save_total_limit 3 \
-     --learning_rate 2e-5 \
-     --warmup_steps 2 \
-     --logging_steps 2 \
-     --lr_scheduler_type "cosine" \
-     --report_to "tensorboard" \
-     --gradient_checkpointing True \
-     --deepspeed configs/deepspeed_config.json \
-     --fp16 True
- ```

- ## Distributed Fine-tuning
- See [Distributed Fine-tuning](./doc/distributed_finetune.md)

- ## Inference

- We provide the decoding script for WizardLM, which reads an input file, generates corresponding responses for each sample, and consolidates them into an output file.

- You can specify `base_model`, `input_data_path` and `output_data_path` in src/inference_wizardlm.py to set the decoding model, the path of the input file and the path of the output file.
- The decoding command:
  ```
- python src/inference_wizardlm.py
  ```

- ### Evaluation
-
- To evaluate WizardLM, we conduct human evaluation on the inputs from our human instruction evaluation set, [`WizardLM_testset.jsonl`](./data/WizardLM_testset.jsonl). This evaluation set was collected by the authors and covers a diverse list of user-oriented instructions, including difficult Coding Generation & Debugging, Math, Reasoning, Complex Formats, Academic Writing, Extensive Disciplines, and so on. We performed a blind pairwise comparison between WizardLM and the baselines. Specifically, we recruited 10 well-educated annotators to rank the models from 1 to 5 on relevance, knowledgeability, reasoning, calculation and accuracy.
-
- WizardLM achieved significantly better results than Alpaca and Vicuna-7b.
- <p align="center" width="60%">
- <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/imgs/win.png" alt="WizardLM" style="width: 60%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- In the high-difficulty section of our test set (difficulty level >= 8), WizardLM even outperforms ChatGPT, with a win rate 7.9% higher than ChatGPT's (42.9% vs. 35.0%). This indicates that our method can significantly improve the ability of large language models to handle complex instructions.
- <p align="center" width="60%">
- <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/imgs/windiff.png" alt="WizardLM" style="width: 60%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- ### Citation
-
- Please cite the repo if you use the data or code from this repo.

  ```
- @misc{xu2023wizardlm,
-       title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
-       author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
-       year={2023},
-       eprint={2304.12244},
-       archivePrefix={arXiv},
-       primaryClass={cs.CL}
- }
  ```
- ## Disclaimer
-
- The resources, including code, data, and model weights, associated with this project are restricted to academic research purposes only and cannot be used for commercial purposes. The content produced by any version of WizardLM is influenced by uncontrollable variables such as randomness, and therefore the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
 
  </div>
  <!-- header end -->

+ # WizardLM's WizardLM 13B 1.0 GGML

+ These files are GGML format model files for [WizardLM's WizardLM 13B 1.0](https://huggingface.co/WizardLM/WizardLM-13B-V1.0).

  GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
 
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
  * [ctransformers](https://github.com/marella/ctransformers)

+ ## Repositories available

+ * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizardLM-13B-1.0-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/wizardLM-13B-1.0-GGML)
+ * [Unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16)

+ <!-- compatibility_ggml start -->
+ ## Compatibility

+ ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+ I have quantised the files using these 'original' methods with an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
+
+ They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
+
+ ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
+
+ These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
+
+ They will NOT yet be compatible with koboldcpp, text-generation-webui, and other UIs and libraries. Support is expected to arrive over the next few days.
+
+ ## Explanation of the new k-quant methods
+
+ The new methods available are:
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
+ * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
+
+ Refer to the Provided Files table below to see what files use which methods, and how; a small worked check of the bpw figures above follows.
+ <!-- compatibility_ggml end -->
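As that check, the per-weight costs can be reproduced from the block structures stated above. The one ingredient not stated there, assumed here, is a 16-bit floating-point scale per super-block (plus a 16-bit min for the "type-1" variants); q2_K packs its block scales and mins differently, so it is left out of this sketch:

```python
# Reproduces the bpw figures quoted above from the stated block layouts.
# Assumption (not stated above): each super-block also stores an fp16
# scale, and the "type-1" variants an fp16 min as well.

def bpw(n_blocks, block_size, weight_bits, scale_bits, type1):
    n_weights = n_blocks * block_size
    bits = n_weights * weight_bits        # the quantized weights themselves
    bits += n_blocks * scale_bits         # per-block scales
    if type1:
        bits += n_blocks * scale_bits     # per-block mins ("type-1" only)
        bits += 2 * 16                    # fp16 super-block scale + min (assumed)
    else:
        bits += 16                        # fp16 super-block scale (assumed)
    return bits / n_weights

print(bpw(16, 16, 3, 6, type1=False))  # q3_K -> 3.4375
print(bpw(8, 32, 4, 6, type1=True))    # q4_K -> 4.5
print(bpw(8, 32, 5, 6, type1=True))    # q5_K -> 5.5
print(bpw(16, 16, 6, 8, type1=False))  # q6_K -> 6.5625
```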

  ## Provided files
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
+ | WizardLM-13B-1.0.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | WizardLM-13B-1.0.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | WizardLM-13B-1.0.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | WizardLM-13B-1.0.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | WizardLM-13B-1.0.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
+ | WizardLM-13B-1.0.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+ | WizardLM-13B-1.0.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | WizardLM-13B-1.0.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | WizardLM-13B-1.0.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | WizardLM-13B-1.0.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+ | WizardLM-13B-1.0.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+ | WizardLM-13B-1.0.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | WizardLM-13B-1.0.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K (6-bit quantization) for all tensors |
+ | WizardLM-13B-1.0.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
+
+ **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
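For a quick estimate on your own system: every "Max RAM required" figure in this table is the file size plus roughly 2.5 GB of working memory. That overhead is an observation from these figures rather than a llama.cpp guarantee, but it makes a handy rule of thumb:

```python
# Rule of thumb inferred from the table above: max RAM ≈ file size + ~2.5 GB.
# This is an observation from these figures, not a llama.cpp guarantee, and
# GPU offloading reduces the RAM side in favour of VRAM.

def est_max_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    return file_size_gb + overhead_gb

print(est_max_ram_gb(5.43))   # q2_K -> 7.93, matching the table
print(est_max_ram_gb(13.83))  # q8_0 -> 16.33, matching the table
```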

  ## How to run in `llama.cpp`

  I use the following command line; adjust for your tastes and needs:

  ```
+ ./main -t 10 -ngl 32 -m WizardLM-13B-1.0.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
  ```
+ Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
+
+ Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
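If you'd rather drive the model from Python, something along these lines should be roughly equivalent using llama-cpp-python (listed at the top of this README). This is a sketch, assuming a llama-cpp-python build recent enough for this GGML version:

```python
from llama_cpp import Llama

# Rough Python equivalent of the ./main invocation above.
# Parameter names mirror the CLI flags used in that command.
llm = Llama(
    model_path="WizardLM-13B-1.0.ggmlv3.q5_0.bin",
    n_ctx=2048,       # -c 2048
    n_threads=10,     # -t 10: set to your physical core count
    n_gpu_layers=32,  # -ngl 32: set to 0 if you have no GPU acceleration
)

output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```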

  Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

  <!-- footer start -->
  ## Discord

 
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

+ **Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.

+ Thank you to all my generous patrons and donaters!

+ <!-- footer end -->

+ # Original model card: WizardLM's WizardLM 13B 1.0

+ This is the WizardLM-13B V1.0 diff weight.

+ Project Repo: https://github.com/nlpxucan/WizardLM

+ NOTE: **WizardLM-13B-1.0** and **WizardLM-7B** use different prompts at the beginning of the conversation.

+ For **WizardLM-13B-1.0**, the prompt should be as follows:

  ```
+ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT:
  ```
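Since the 13B model follows this Vicuna-style format and, per the project's notes, supports multi-turn conversation, a follow-up turn can be assembled along these lines. This is an illustrative sketch based only on the template above; for results identical to the demo, follow src/infer_wizardlm13b.py in the project repo, whose exact turn separators may differ:

```python
# Hypothetical helper extending the single-turn 13B template above to
# multi-turn use. The exact separators used by the official scripts may
# differ; see src/infer_wizardlm13b.py in the project repo.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_13b_prompt(history, next_user_msg):
    """history: list of (user_msg, assistant_msg) pairs from earlier turns."""
    parts = [SYSTEM]
    for user_msg, assistant_msg in history:
        parts.append(f"USER: {user_msg} ASSISTANT: {assistant_msg}")
    parts.append(f"USER: {next_user_msg} ASSISTANT:")
    return " ".join(parts)

print(build_13b_prompt([("hello, who are you?", "I am WizardLM, an AI assistant.")],
                       "What can you help me with?"))
```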

+ For **WizardLM-7B**, the prompt should be as follows:

  ```
+ {instruction}\n\n### Response:
  ```