TheBloke committed f07b77a (1 parent: 5a58ba3)

Upload README.md

Files changed (1): README.md (+102, -63)

README.md CHANGED

---
inference: false
license: llama2
model_creator: WizardLM
model_link: https://huggingface.co/WizardLM/WizardMath-70B-V1.0
model_name: WizardMath 70B V1.0
model_type: llama
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# WizardMath 70B V1.0 - GGML

This repo contains GGML format model files for [WizardLM's WizardMath 70B V1.0](https://huggingface.co/WizardLM/WizardMath-70B-V1.0).

### Important note regarding GGML files

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

### About GGML

GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
* [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML)
* [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardMath-70B-V1.0)

## Prompt template: Alpaca-CoT

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.


### Instruction:
{prompt}


### Response: Let's think step by step.

```
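
If you are building prompts programmatically, the helper below is a minimal Python sketch (`make_prompt` is illustrative, not part of this repo) that fills the `{prompt}` placeholder with the same spacing used by the llama.cpp command further down:

```python
# Illustrative helper (not part of this repo): fill the Alpaca-CoT template.
def make_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n\n"
        f"### Instruction:\n{instruction}\n\n\n"
        "### Response: Let's think step by step."
    )

print(make_prompt("Solve for x: 2x + 3 = 11"))
```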

<!-- compatibility_ggml start -->
## Compatibility

### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023

Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).

For compatibility with the latest llama.cpp, please use GGUF files instead.

Or one of the other tools and libraries listed above.

## Provided files

Refer to the Provided Files table below to see what files use which methods, and how.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [wizardmath-70b-v1.0.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q2_K.bin) | q2_K | 2 | 28.96 GB | 31.46 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [wizardmath-70b-v1.0.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 30.09 GB | 32.59 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [wizardmath-70b-v1.0.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.39 GB | 35.89 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [wizardmath-70b-v1.0.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.49 GB | 38.99 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [wizardmath-70b-v1.0.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.80 GB | 41.30 GB | Original quant method, 4-bit. |
| [wizardmath-70b-v1.0.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 39.18 GB | 41.68 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [wizardmath-70b-v1.0.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.69 GB | 44.19 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [wizardmath-70b-v1.0.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.12 GB | 45.62 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [wizardmath-70b-v1.0.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.43 GB | 49.93 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [wizardmath-70b-v1.0.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.74 GB | 50.24 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [wizardmath-70b-v1.0.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/WizardMath-70B-V1.0-GGML/blob/main/wizardmath-70b-v1.0.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 49.03 GB | 51.53 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
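
As a rough illustration of how to read the table, this sketch (a hypothetical helper, not part of this repo; assumes `psutil` is installed, with GB figures taken from the "Max RAM required" column above) picks the largest quant that fits in currently available memory:

```python
# Hypothetical helper: choose the biggest listed quant that fits in free RAM.
# RAM figures (GB) are the "Max RAM required" column above; assumes psutil.
import psutil

PROVIDED_FILES = [  # ordered smallest to largest
    ("wizardmath-70b-v1.0.ggmlv3.q2_K.bin", 31.46),
    ("wizardmath-70b-v1.0.ggmlv3.q3_K_M.bin", 35.89),
    ("wizardmath-70b-v1.0.ggmlv3.q4_K_M.bin", 44.19),
    ("wizardmath-70b-v1.0.ggmlv3.q5_K_M.bin", 51.53),
]

def best_fit(headroom_gb: float = 2.0):
    """Return the largest file whose max-RAM figure fits in available memory."""
    available_gb = psutil.virtual_memory().available / 1e9
    fitting = [name for name, ram in PROVIDED_FILES if ram + headroom_gb <= available_gb]
    return fitting[-1] if fitting else None

print(best_fit() or "No listed quant fits entirely in RAM; offload layers with -ngl.")
```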

## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.

For compatibility with the latest llama.cpp, please use GGUF files instead.

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 40 -gqa 8 -m wizardmath-70b-v1.0.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n### Instruction:\n{prompt}\n\n\n### Response: Let's think step by step."
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If you are fully offloading the model to GPU, use `-t 1`.
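
The same settings carry over to [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch, assuming one of that library's older GGML-era releases (current releases load GGUF only); the file name and keyword arguments mirror the command above:

```python
# Minimal sketch: requires a GGML-era llama-cpp-python release (recent
# versions load GGUF only). Keyword arguments mirror the ./main flags above.
from llama_cpp import Llama

llm = Llama(
    model_path="wizardmath-70b-v1.0.ggmlv3.q4_K_M.bin",
    n_ctx=4096,       # -c 4096
    n_gpu_layers=40,  # -ngl 40
    n_gqa=8,          # -gqa 8: grouped-query attention, required for Llama 2 70B
    n_threads=8,      # -t: physical core count; use 1 if fully offloaded to GPU
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n\n"
    "### Instruction:\nSolve for x: 2x + 3 = 11\n\n\n"
    "### Response: Let's think step by step."
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```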

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper WikieΕ‚, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik BjΓ€reholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: WizardLM's WizardMath 70B V1.0


## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)

<p align="center">
πŸ€— <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> β€’ 🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> β€’ 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> β€’ πŸ“ƒ <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> β€’ πŸ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> β€’ πŸ“ƒ <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
πŸ‘‹ Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- | ------ | ---- | ------ | ------- | ----- | ----- |
| WizardCoder-Python-34B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | 37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | 28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |


| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| ----- | ------ | ---- | ------ | ------- | ----- | ----- |
| WizardMath-70B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **81.6** | **22.7** | [Demo](http://47.103.63.15:50083/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-13B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **63.9** | **14.0** | [Demo](http://47.103.63.15:50082/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-7B-V1.0 | πŸ€— <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | πŸ“ƒ <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |


<font size=4>

| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> | <sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup> |
| ----- | ------ | ---- | ------ | ------- | ----- | ----- | ----- |
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>πŸ€— <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a></sup> | <sup>πŸ“ƒ**Coming Soon**</sup> | <sup>**7.78**</sup> | <sup>**92.91%**</sup> | <sup>**77.6%**</sup> | <sup>**50.6 pass@1**</sup> | <sup><a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
| <sup>WizardLM-13B-V1.2</sup> | <sup>πŸ€— <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a></sup> | | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>55.3%</sup> | <sup>36.6 pass@1</sup> | <sup><a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
| <sup>WizardLM-13B-V1.1</sup> | <sup>πŸ€— <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a></sup> | | <sup>6.76</sup> | <sup>86.32%</sup> | | <sup>25.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-30B-V1.0</sup> | <sup>πŸ€— <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>πŸ€— <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a></sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup>24.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-7B-V1.0</sup> | <sup>πŸ€— <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a></sup> | <sup>πŸ“ƒ <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a></sup> | | | | <sup>19.1 pass@1</sup> | <sup>Non-commercial</sup> |
</font>

**Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath

**Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128

**Discord**: https://discord.gg/VZjjHtWrKs

## Comparing WizardMath-V1.0 with Other LLMs

πŸ”₯ The following figure shows that our **WizardMath-70B-V1.0 attains the fifth position on the GSM8k benchmark**, surpassing ChatGPT (81.6 vs. 80.8), Claude Instant (81.6 vs. 80.9) and PaLM 2 540B (81.6 vs. 80.7).

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardMath/images/wizardmath_gsm8k.png" alt="WizardMath" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

❗<b>Note for model system prompt usage:</b>

Please use **exactly the same system prompts** as we do, and note that we do not guarantee the accuracy of the **quantised versions**.

**Default version:**

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```

**CoT version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```

## Inference WizardMath Demo Script

We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
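
The linked demo is the canonical reference. As a minimal sketch of an alternative (not the authors' script), the unquantised model can also be run with Hugging Face `transformers` using the default prompt format above; note that a 70B fp16 model needs multiple high-memory GPUs or heavy offloading:

```python
# Minimal sketch (not the authors' demo script): run the unquantised model
# with Hugging Face transformers using the default prompt format above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardMath-70B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # spread across GPUs
)

instruction = "What is 15% of 240?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```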

❗<b>To the common concern about the dataset:</b>

Recently, there have been clear changes in the open-source policy and regulations of our overall organization's code, data, and models.
Despite this, we have still worked hard to obtain permission to release the model weights first, but the data involves stricter auditing and is under review with our legal team.
Our researchers have no authority to publicly release them without authorization.
Thank you for your understanding.

## Citation

Please cite the repo if you use the data, method or code in this repo.

```
@article{luo2023wizardmath,
  title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
  author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
  journal={arXiv preprint arXiv:2308.09583},
  year={2023}
}
```