TheBloke committed on
Commit
613e8ca
1 Parent(s): a11fa55

Initial GGML model commit

Files changed (1):
  1. README.md +247 -0
README.md ADDED
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# OpenAccess AI Collective's Manticore 13B Chat GGML

These files are GGML format model files for [OpenAccess AI Collective's Manticore 13B Chat](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg).

These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).

To use the increased context length, you can currently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.

Support is also expected to come to llama.cpp; however, it is still being worked on and there is currently no ETA for that.

To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg)

<!-- compatibility_ggml start -->
## Compatibility

These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.

However, the increased context length won't work without specific support. See the note in the introduction for details on using increased context.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
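
As a quick sanity check on those figures, the effective bits per weight can be recomputed from the super-block layouts described above. The sketch below is a back-of-the-envelope calculation, not llama.cpp's actual data structures; in particular, treating the per-super-block scale (and min, for the "type-1" variants) as fp16 is an assumption, but it reproduces the 3.4375, 4.5, 5.5 and 6.5625 bpw figures quoted above.

```
def bits_per_weight(bits, blocks, block_size, scale_bits, has_min):
    # One super-block = `blocks` blocks of `block_size` weights each (256 total).
    weights = blocks * block_size
    total = weights * bits                                  # quantized weights
    total += blocks * scale_bits * (2 if has_min else 1)    # per-block scales (+ mins)
    total += 16 * (2 if has_min else 1)                     # fp16 super-block scale (+ min)
    return total / weights

print(bits_per_weight(3, 16, 16, 6, has_min=False))  # Q3_K -> 3.4375
print(bits_per_weight(4, 8, 32, 6, has_min=True))    # Q4_K -> 4.5
print(bits_per_weight(5, 8, 32, 6, has_min=True))    # Q5_K -> 5.5
print(bits_per_weight(6, 16, 16, 8, has_min=False))  # Q6_K -> 6.5625
```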

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| manticore-13b-chat-pyg-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
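
The Max RAM figures track the file sizes closely: in every row, the maximum RAM is the file size plus roughly 2.5 GB of overhead. As a rough rule of thumb inferred from this table (not a measured guarantee), you could estimate requirements like this:

```
# Rough peak-RAM estimate for CPU-only inference, derived from the table above:
# each row's "Max RAM required" equals its file size plus ~2.5 GB of overhead.
# Offloading layers to the GPU moves part of this into VRAM instead.
def estimate_max_ram_gb(file_size_gb, overhead_gb=2.5):
    return file_size_gb + overhead_gb

print(estimate_max_ram_gb(7.87))  # q4_K_M -> 10.37 GB, matching the table
```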

## How to run in `koboldcpp`

On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:

```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 --contextsize 4096 manticore-13b-chat-pyg-superhot-8k.ggmlv3.q4_K_M.bin
```

Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.

For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
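
For reference, a full OpenCL launch with those substitutions might look like the following (same model file and settings as above; adjust the platform/device indices for your system):

```
python ./koboldcpp.py --stream --unbantokens --threads 8 --useclblast 0 0 --gpulayers 100 --contextsize 4096 manticore-13b-chat-pyg-superhot-8k.ggmlv3.q4_K_M.bin
```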

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
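
For readers wondering what the scaling factor actually does: SuperHOT's extension works by linear position interpolation, i.e. the RoPE position indices are multiplied by the scaling factor (0.25 here), so that 8192 positions are squeezed into the 2048-position range the base model saw during pretraining. The following is a conceptual sketch of that idea only, not kaiokendev's monkeypatch itself; the function names and the interleaved rotation layout are illustrative assumptions.

```
import numpy as np

# Illustration of RoPE with linear position interpolation (the "scaling factor").
# With scale=0.25, position 8191 maps to 2047.75, i.e. 8k positions are compressed
# into the 0..2048 range the base model was pretrained on.

def rope_angles(positions, head_dim, base=10000.0, scale=1.0):
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    scaled_positions = np.asarray(positions, dtype=np.float64) * scale
    return np.outer(scaled_positions, inv_freq)  # (seq_len, head_dim // 2)

def apply_rope(x, scale=1.0):
    # x: (seq_len, head_dim) query or key slice for one attention head
    seq_len, head_dim = x.shape
    angles = rope_angles(np.arange(seq_len), head_dim, scale=scale)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    rotated = np.empty_like(x)
    rotated[:, 0::2] = x1 * cos - x2 * sin
    rotated[:, 1::2] = x1 * sin + x2 * cos
    return rotated

# 8192-token sequence with the SuperHOT factor of 0.25:
q = np.random.randn(8192, 128)
q_rot = apply_rope(q, scale=0.25)
```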

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration (a code sketch of these settings follows the list):
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
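
For orientation, those hyperparameters map fairly directly onto the Hugging Face `peft` and `transformers` APIs. The sketch below is a reconstruction from the list above, not the author's actual training script; the output directory is a placeholder and the model/data handling is omitted.

```
from peft import LoraConfig
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above using standard
# peft / transformers argument names. Not the original training code.

lora_config = LoraConfig(
    r=4,                      # Rank = 4
    lora_alpha=8,             # Alpha = 8
    lora_dropout=0.0,         # no dropout
    bias="none",              # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="superhot-8k-lora",  # placeholder path
    num_train_epochs=3,             # 3 epochs
    learning_rate=3e-4,             # learning rate of 3e-4
    weight_decay=0.1,               # weight decay of 0.1
    adam_beta1=0.9,                 # AdamW beta1 of 0.9
    adam_beta2=0.99,                # AdamW beta2 of 0.99
    adam_epsilon=1e-5,              # epsilon of 1e-5
)
```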

# Original model card: OpenAccess AI Collective's Manticore 13B Chat

# Manticore 13B Chat

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca-style prompts using `###` in favor of chat-only style prompts using `USER:`, `ASSISTANT:`, as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>`, `<|user|>` and `<|model|>` tokens.
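
For illustration, a minimal chat-style prompt might look like this (the exact whitespace is an assumption based on the description above, not an official template):

```
USER: Write a haiku about alpacas.
ASSISTANT:
```

The metharme-style equivalent would use the special tokens instead, e.g. `<|system|>You are a helpful assistant.<|user|>Write a haiku about alpacas.<|model|>`.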

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

# Training Datasets

Manticore 13B Chat is a Llama 13B model fine-tuned on the following datasets along with the datasets from the original Manticore 13B.

**Manticore 13B Chat was trained on 25% of the datasets below. The datasets were merged, shuffled, and then sharded into 4 parts.**

- de-duped pygmalion dataset, filtered down to RP data
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- hellaswag, updated for detailed explanations with 30K+ rows
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered)

Manticore 13B
- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

Not added from Manticore 13B:
- mmlu - mmlu datasets were not added to this model as the `test` split is used for benchmarks

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, ehartford for cleaned datasets, and 0x000011b for the RP dataset.

# Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
- https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b-v2/runs/hxr3aiiw

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
- 3 epochs taking approximately 8 hours. No further epochs will be released.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information.

## Examples

```
USER: make up a sonnet in the style of shakespeare about a skier skiing in the glades on a bluebird day.
ASSISTANT: A skier doth glide 'neath sun’s glow, His edges sharp to cut the snow, Bluebird sky above, While he doth revel on the slopes below, The cold air biting his nose, But not his spirits, For he is filled with a sense of joy, As he carves through the powdery drifts. His body moves in graceful rhythm, And each turn he makes with ease, He feels like he could ski forever, With no care for the world outside these glades.
```

```
USER: tell me a joke about alpacas
ASSISTANT: Did you hear about the famous alpaca ballet dancer? She was a prima llama.
USER: okay, that is actually pretty good
ASSISTANT: What do you call an alpaca in a bowtie? A llama-cholla!
USER: why is that funny?
ASSISTANT: Because it’s a play on words. “Llama-cholla” sounds like “lame chola,” which is Spanish slang for someone who acts cool or pretentious. So, the joke is implying that alpacas are vain and act cool with their bowties.
```

```
USER: What is 44 + 77?
ASSISTANT: 121
USER: What is the sum of 33 and 77?
ASSISTANT: 110
USER: tell me 5 words that start with “ex” and the sum of 33 and 77.
ASSISTANT: exotic, expressive, explicit, experimental, and extravagant. The sum of 33 and 77 is 110.
USER: tell me 3 words that start with “egr” and end in “s”
ASSISTANT: egret, eggplant, and egrets.
```
- oof

```
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
ASSISTANT: Sally has a total of 6 siblings, so she has a total of 6 - 2 = 4 sisters.
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Use multi-step reasoning to explain your answer.
ASSISTANT: Sally has a total of three brothers, so she has a total of three * 2 = six siblings. Her family consists of three boys and six girls in total. This means that Sally has three sisters since there are six female relatives and only three male ones.
USER: But when brothers have sisters, aren’t they shared or the same?
ASSISTANT: No, not every sibling shares their mother. For example, I have two brothers who share one mother but my third brother has a different mother from both of them.
USER: Let’s assume they all share the same mother and father. Now explain with multi-step reasoning.
ASSISTANT: If Sally has three brothers then that means she has six siblings altogether. Each brother has two sisters, which means that there are twelve female relatives in her family - one for each of her brothers and herself. Since there are only six females in the entire family, that means that Sally’s mother has no other children than her own three sons - so she can’t be the mother of any of the sisters. So all those sisters have to belong to one of the brothers, and since they are shared by two brothers, then each brother has a single sister. Therefore, there is only one sister for Sally, making the final answer 1.
```
- pretty sure that reasoning isn't correct, and the final answer of having 1 sister is a lucky guess