TheBloke committed
Commit 064db81
1 Parent(s): d08f06d

Upload new k-quant GGML quantised models.

Files changed (1):
  1. README.md +88 -20
README.md CHANGED
@@ -31,30 +31,54 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
  ## Repositories available
 
  * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GPTQ)
- * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FPHam/Karen_theEditor_13b_HF)
 
- ## Prompt template
-
- ```
- USER: Edit the following for spelling and grammar mistakes:
- ASSISTANT:
- ```
-
- ## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
-
- llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
-
- I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
 
  ## Provided files
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | Karen-The-Editor.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | 4-bit. |
- | Karen-The-Editor.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | Karen-The-Editor.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | Karen-The-Editor.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | 5-bit. Even higher accuracy, resource usage and slower inference. |
- | Karen-The-Editor.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
 
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
@@ -64,7 +88,7 @@ I have quantised the GGML files in this repo with the latest version. Therefore
  I use the following command line; adjust for your tastes and needs:
 
  ```
- ./main -t 10 -ngl 32 -m Karen-The-Editor.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Edit the following for spelling and grammar mistakes: Hello whats' you're name their frend?\nASSISTANT:"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 
@@ -96,23 +120,31 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
- **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
 
  Thank you to all my generous patrons and donaters!
 
  <!-- footer end -->
 
  # Original model card: FPHam's Karen The Editor 13B
 
- ## Karen is an editor for your fiction.
 
  She fixes grammar and wording issues, but doesn't necessarily start rewording everything like ChatGPT into corporate talk. So it should keep the style intact.
 
- Based on LLAMA 13b and Wizard-Vucna-uncensored finetune, then finetuned with about 20k grammar examples (bad grammar/good grammar).
 
  Karen gets triggered by this prompt (pun intended):
 
  ```
- USER: Edit the following for spelling and grammar mistakes:
  ASSISTANT:
  ```
 
@@ -129,4 +161,40 @@ context: "Assistant is a meticulous senior editor with a specialization in editi
 
  I am not sure to what extent the context part has any effect. Doesn't hurt though.
 
- This is a version 0.2 and finetuned just so it retains the underlaying Vicuna qualities.
 
  ## Repositories available
 
  * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FPHam/Karen_theEditor_13b_HF)
 
+ <!-- compatibility_ggml start -->
+ ## Compatibility
+
+ ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+ I have quantised the files for these 'original' methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
+
+ They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
+
+ ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
+
+ These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
+
+ They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.
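+
+ If your llama.cpp build predates that commit, a minimal sketch of updating it (standard llama.cpp build steps; the commit hash is the one named above):
+
+ ```bash
+ # fetch llama.cpp and build it at, or after, the commit that added k-quant support
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ git checkout 2d43387   # or skip this and stay on master for any later commit
+ make                   # optionally: make LLAMA_CUBLAS=1 for CUDA GPU offloading
+ ```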
 
+ ## Explanation of the new k-quant methods
+
+ The new methods available are:
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
+ * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
+
+ Refer to the Provided Files table below to see what files use which methods, and how.
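+
+ As a sanity check on the bpw figures, here is the arithmetic for GGML_TYPE_Q4_K; it assumes, beyond what is stated above, that each super-block also carries one fp16 scale and one fp16 min (2 × 16 bits):
+
+ $$\frac{256 \times 4 + 8 \times (6 + 6) + 2 \times 16}{256} = \frac{1024 + 96 + 32}{256} = 4.5 \text{ bpw}$$
+
+ That is, 8 blocks × 32 weights = 256 weights at 4 bits each, plus a 6-bit scale and a 6-bit min for each of the 8 blocks, plus the two fp16 super-block values, spread over 256 weights.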
+ <!-- compatibility_ggml end -->
 
  ## Provided files
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
+ | Karen-The-Editor.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | Karen-The-Editor.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | Karen-The-Editor.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | Karen-The-Editor.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+ | Karen-The-Editor.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
+ | Karen-The-Editor.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
+ | Karen-The-Editor.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+ | Karen-The-Editor.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+ | Karen-The-Editor.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | Karen-The-Editor.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
+ | Karen-The-Editor.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+ | Karen-The-Editor.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
+ | Karen-The-Editor.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors. |
+ | Karen-The-Editor.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
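+ In the file names above, the `_S`, `_M` and `_L` suffixes are in effect small/medium/large mixes of the same base k-quant: `_S` files use the base GGML type for all tensors, while `_M` and `_L` upgrade selected attention and feed_forward tensors to a higher-bit type, as the Use case column describes.
+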
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
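+
+ As a worked example of the table's arithmetic: the q5_0 figure is its 8.95 GB file size plus a flat 2.5 GB allowance, 8.95 + 2.50 = 11.45 GB, and the same +2.5 GB pattern holds for every row above.
+
+ To fetch a single quantisation rather than cloning the whole repo, you can download one file from this repo's `main` branch using the standard Hugging Face `resolve` URL pattern; the q4_K_M file below is just an example pick from the table:
+
+ ```bash
+ # download a single GGML file (about 7.8 GB for q4_K_M)
+ wget https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML/resolve/main/Karen-The-Editor.ggmlv3.q4_K_M.bin
+ ```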
 
  I use the following command line; adjust for your tastes and needs:
 
  ```
+ ./main -t 10 -ngl 32 -m Karen-The-Editor.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Edit the following for spelling and grammar mistakes: Hello whats' you're name their frend?\nASSISTANT:"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
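+ Change `-ngl 32` to the number of layers you want to offload to the GPU; reduce it if you run short of VRAM, or remove the flag entirely for CPU-only inference.
+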
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+
+ **Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
 
  Thank you to all my generous patrons and donaters!
+
  <!-- footer end -->
 
  # Original model card: FPHam's Karen The Editor 13B
 
+ ## Karen is an editor for your fiction. (v.0.2)
 
  She fixes grammar and wording issues, but doesn't necessarily start rewording everything like ChatGPT into corporate talk. So it should keep the style intact.
 
+ Based on LLaMA 13B and a Wizard-Vicuna-uncensored finetune, then finetuned with about 20k grammar examples (bad grammar/good grammar).
+
+ ## Quantized version (Quantized by TheBloke)
+
+ * [4-bit GPTQ models for GPU inference](https://huggingface.co/FPHam/Karen_theEditor-13B-4bit-128g-GPTQ)
+ * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
 
  Karen gets triggered by this prompt (pun intended):
 
  ```
+ USER: Edit the following for spelling and grammar mistakes:
  ASSISTANT:
  ```
 
  I am not sure to what extent the context part has any effect. Doesn't hurt though.
 
+ This is version 0.2, finetuned just so it retains the underlying Vicuna qualities.
+
+ ## Example:
+
+ Input
+ ```
+ USER: Edit the following for spelling and grammar mistakes:
+ unce upon time lived name John who have pet monkey Jackie.
+ one day while walkin down the street he saw his friend Tom coming to him.
+ "Hey Tom" said John "hows it going?"
+
+ Tom replyied "Not too bad how 'bout yourself?". as they was talk Jackie sudden jump onto Tom shoulder and started play with his hair.
+ Tom look at John and ask "Is that yours?"
+
+ John reply "Yeah, this crazy lil guy follow me everywhere" just then Jackie grab hold of Tom glasses and try to take thmm off.
+ Tom struggle to keep his balance as he laugh hystericaly.
+ ```
+ Output
+ ```
+ ASSISTANT: Once upon a time there lived a man named John who had a pet monkey called Jackie.
+ One day while walking down the street he saw his friend Tom approaching him.
+ "Hey Tom," said John. "How's it going?"
+
+ Tom replied, "Not too bad, how about yourself?" As they were talking, Jackie suddenly jumped onto Tom's shoulder and began playing with his hair.
+ Tom looked at John and asked, "Is that yours?"
+
+ John replied, "Yeah, this crazy little guy follows me everywhere." Just then Jackie grabbed hold of Tom's glasses and tried to take them off.
+ Tom struggled to keep his balance as he laughed hysterically.
+ ```
+
+ ## Goal: to create the best grammar checker you have ever seen
+
+ ## To do:
+ - train on a larger dataset: big, enormous, gargantuan
+ - see if finetuning on just plain LLaMA without Vicuna would work better or worse (the theory is that it will be very focused on editing and nothing else)
+ - explore what different settings (temperature, top_p, top_k) do for this type of finetune
+ - create Rachel, the paraphrasing editor