---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# HuggingFaceH4's StarChat Beta GGML

These files are GGML format model files for [HuggingFaceH4's StarChat Beta](https://huggingface.co/HuggingFaceH4/starchat-beta).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as the following (a short Python loading sketch follows the list):
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)

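
For example, here is a minimal, untested sketch of loading one of these GGML files with the `ctransformers` library listed above. The file name chosen and the `model_type` value are assumptions (StarChat Beta is a StarCoder-family model), not instructions from the original README:

```python
# Minimal sketch (an assumption, not from the original README): run a GGML file with ctransformers.
# Requires: pip install ctransformers
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/starchat-beta-GGML",               # this repo
    model_file="starchat-beta.ggmlv3.q4_1.bin",  # any file from the Provided files table below
    model_type="starcoder",                      # assumed: StarChat Beta uses the StarCoder architecture
)

prompt = "<|system|>\n<|end|>\n<|user|>\nHow do I sort a list in Python?<|end|>\n<|assistant|>"
print(llm(prompt, max_new_tokens=256, temperature=0.2))
```
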
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starchat-beta-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starchat-beta-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/starchat-beta)

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

I have quantised these 'original' quant method files using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.

They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| starchat-beta.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| starchat-beta.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| starchat-beta.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| starchat-beta.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| starchat-beta.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

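
If you only want a single file from this repo rather than cloning everything, here is a minimal sketch assuming the `huggingface_hub` Python package is installed (this tooling choice is my assumption, not part of the original instructions):

```python
# Minimal sketch (an assumption, not from the original README): fetch one GGML file from this repo.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/starchat-beta-GGML",
    filename="starchat-beta.ggmlv3.q5_0.bin",  # pick any file from the table above
)
print(model_path)  # local path inside the Hugging Face cache
```
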
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m starchat-beta.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

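
Not sure how many physical cores you have? A minimal sketch, assuming the third-party `psutil` package is available:

```python
# Minimal sketch (an assumption, not from the original README): find a sensible value for -t.
# Requires: pip install psutil  (os.cpu_count() alone counts logical threads, not physical cores)
import psutil

physical_cores = psutil.cpu_count(logical=False)
print(f"Suggested flag: -t {physical_cores}")
```
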
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: HuggingFaceH4's StarChat Beta

<img src="https://huggingface.co/HuggingFaceH4/starchat-beta/resolve/main/model_logo.png" alt="StarChat Beta Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>

# Model Card for StarChat Beta

StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat Beta is the second model in the series, and is a fine-tuned version of [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus) that was trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and made the model more helpful at coding tasks. However, this means that the model is likely to generate problematic text when prompted to do so, and it should only be used for educational and research purposes.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Model type:** A 16B parameter GPT-like model fine-tuned on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
- **Language(s) (NLP):** Primarily English and 80+ programming languages.
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground

## Intended uses & limitations

The model was fine-tuned on a variant of the [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset, which contains a diverse range of dialogues in over 35 languages. As a result, the model can be used for chat, and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) to test its coding capabilities.

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16, device_map="auto")

prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote the end of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
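# The text-generation pipeline returns a list of dicts; this print is an added illustration
# (not part of the original card) and shows the prompt followed by the completion:
print(outputs[0]["generated_text"])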
```

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

StarChat Beta has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata), which is derived from The Stack.

Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have also observed that the model has a tendency to produce false URLs, which should be carefully inspected before clicking.

StarChat Beta was fine-tuned from the base model [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus); please refer to the StarCoder base model card's [Limitations Section](https://huggingface.co/bigcode/starcoderbase#limitations) for relevant information.
In particular, the base model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view).

## Training and evaluation data

StarChat Beta is trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We applied the same [recipe](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py) used to filter the ShareGPT datasets behind [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto 🤗 `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6

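
For illustration only (this is not the authors' actual training code), here is a sketch of how the values above map onto 🤗 `transformers.TrainingArguments`; anything not in the list, such as `output_dir`, is an assumption:

```python
# Sketch only: the hyperparameters listed above expressed as TrainingArguments.
# Values come from the list; output_dir and any omitted settings are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="starchat-beta",      # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # 4 per device x 8 devices x 8 accumulation steps = 256 total
    per_device_eval_batch_size=4,    # 4 per device x 8 devices = 32 total
    gradient_accumulation_steps=8,
    num_train_epochs=6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```
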
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5321        | 0.98  | 15   | 1.2856          |
| 1.2071        | 1.97  | 30   | 1.2620          |
| 1.0162        | 2.95  | 45   | 1.2853          |
| 0.8484        | 4.0   | 61   | 1.3274          |
| 0.6981        | 4.98  | 76   | 1.3994          |
| 0.5668        | 5.9   | 90   | 1.4720          |

### Framework versions

- Transformers 4.28.1
- PyTorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@article{Tunstall2023starchat-alpha,
  author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
  title = {Creating a Coding Assistant with StarCoder},
  journal = {Hugging Face Blog},
  year = {2023},
  note = {https://huggingface.co/blog/starchat},
}
```