---
inference: false
language:
- de
- en
license: other
model_type: llama
pipeline_tag: text-generation
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Jan Philipp Harries' Vicuna 13B v1.3 German GGML

These files are GGML format model files for [Jan Philipp Harries' Vicuna 13B v1.3 German](https://huggingface.co/jphme/vicuna-13b-v1.3-ger).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU acceleration on macOS. Also supports Windows, without GPU acceleration.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU acceleration via the llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and an OpenAI-compatible AI server (see the loading sketch just after this list).
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with an OpenAI-compatible API server.
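
From Python, loading one of these files can look roughly like the following. This is a minimal sketch using `ctransformers`, and it assumes a GGML-era release of the library (later versions expect GGUF files); the filename is taken from the Provided files table below.

```python
# Minimal sketch: load a GGML file from this repo with ctransformers.
# Assumes a GGML-era ctransformers release (later versions expect GGUF).
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Vicuna-13B-v1.3-German-GGML",
    model_file="vicuna-13b-v1.3-german.ggmlv3.q4_K_M.bin",
    model_type="llama",
)
print(llm("USER: Was ist die Hauptstadt von Deutschland?\nASSISTANT:"))
```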

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vicuna-13B-v1.3-German-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vicuna-13B-v1.3-German-GGML)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jphme/vicuna-13b-v1.3-ger)

## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: {prompt}
ASSISTANT:
```
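
When scripting against the model, the template above can be filled in with a small helper like this (a sketch; `format_prompt` is a hypothetical name, not part of any library):

```python
# Sketch: fill in the Vicuna prompt template shown above.
# format_prompt is a hypothetical helper, not part of any library.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def format_prompt(prompt: str) -> str:
    return f"{SYSTEM}\n\nUSER: {prompt}\nASSISTANT:"

print(format_prompt("Wie heißt die Hauptstadt von Bayern?"))
```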

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.

## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
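
As a rough sanity check on the bpw figures above, here is a back-of-the-envelope calculation for two of the formats. It assumes one fp16 super-block scale per super-block (plus one fp16 super-block min for the "type-1" formats); see the ggml source for the authoritative layouts.

```python
# Back-of-the-envelope bpw check for two k-quant formats described above.
# Assumes one fp16 super-block scale (and, for "type-1", one fp16 min);
# an illustration, not the authoritative ggml memory layout.

def bpw(weights_per_block, blocks, weight_bits, scale_bits, min_bits, fp16_fields):
    total_weights = weights_per_block * blocks
    bits = (total_weights * weight_bits           # quantized weights
            + blocks * (scale_bits + min_bits)    # per-block scales/mins
            + fp16_fields * 16)                   # super-block fp16 fields
    return bits / total_weights

# GGML_TYPE_Q4_K: 8 blocks x 32 weights, 4-bit weights, 6-bit scales and mins
print(bpw(32, 8, 4, 6, 6, fp16_fields=2))   # 4.5
# GGML_TYPE_Q6_K: 16 blocks x 16 weights, 6-bit weights, 8-bit scales, no mins
print(bpw(16, 16, 6, 8, 0, fp16_fields=1))  # 6.5625
```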
</details>
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-13b-v1.3-german.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-13b-v1.3-german.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-13b-v1.3-german.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-13b-v1.3-german.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| vicuna-13b-v1.3-german.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
| vicuna-13b-v1.3-german.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| vicuna-13b-v1.3-german.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| vicuna-13b-v1.3-german.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| vicuna-13b-v1.3-german.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| vicuna-13b-v1.3-german.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| vicuna-13b-v1.3-german.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| vicuna-13b-v1.3-german.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| vicuna-13b-v1.3-german.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors (6-bit quantization). |
| vicuna-13b-v1.3-german.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
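
To fetch a single file from this repo programmatically, something along these lines should work (a sketch using the `huggingface_hub` library; substitute any filename from the table above):

```python
# Sketch: download one quant file from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/Vicuna-13B-v1.3-German-GGML",
    filename="vicuna-13b-v1.3-german.ggmlv3.q4_K_M.bin",
)
print(local_path)  # path to the cached .bin file
```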

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m vicuna-13b-v1.3-german.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: Write a story about llamas\nASSISTANT:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
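
The equivalent call from Python via `llama-cpp-python` might look like this (a sketch; assumes a GGML-era release of the library, from before its switch to GGUF; the parameters mirror the CLI flags above):

```python
# Sketch: the llama.cpp invocation above, via llama-cpp-python.
# Assumes a GGML-era llama-cpp-python release (pre-GGUF).
from llama_cpp import Llama

llm = Llama(
    model_path="vicuna-13b-v1.3-german.ggmlv3.q4_0.bin",
    n_ctx=2048,       # -c 2048
    n_threads=10,     # -t 10
    n_gpu_layers=32,  # -ngl 32; set to 0 without GPU acceleration
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions.\n\nUSER: Write a story about llamas\nASSISTANT:"
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1,
             stop=["USER:"])
print(output["choices"][0]["text"])
```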

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse

Thank you to all my generous patrons and donators!

<!-- footer end -->

# Original model card: Jan Philipp Harries' Vicuna 13B v1.3 German

# Vicuna 13b v1.3 German

vicuna-13b-v1.3-ger is a variant of [LMSYS](https://huggingface.co/lmsys)'s [Vicuna 13b v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) model, finetuned on an additional German-language dataset. The original model was trained on explain-tuned datasets, created using instructions and input from the WizardLM, Alpaca and Dolly-V2 datasets, applying the dataset construction approaches of the Orca Research Paper.

This model is optimized for German text, providing proficiency in understanding, generating, and interacting with German-language content. However, the model is not yet fully optimized for German, as it has been trained on a small, experimental dataset and has limited capabilities due to the small parameter count.
Some of the finetuning data is also targeted towards factual retrieval (only answering questions from information in the context and refusing to hallucinate), and the model should perform better than the original Vicuna on these tasks.

I am working on improving the model's capabilities and will update the model if there is sufficient interest.

A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/vicuna-13b-v1.3-ger-GGML).

## Prompt Template

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: Hello!
ASSISTANT: Hello!</s>
USER: How are you?
ASSISTANT: I am good.</s>
```
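
Each completed assistant turn is terminated with `</s>` (the EOS token). A small helper to assemble such a conversation string might look like this (a sketch; `build_prompt` is a hypothetical name, not part of any library):

```python
# Sketch: assemble a multi-turn Vicuna prompt, appending </s> after each
# completed assistant turn as in the template above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns, next_user_message):
    parts = [SYSTEM, ""]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        parts.append(f"ASSISTANT: {assistant_msg}</s>")
    parts.append(f"USER: {next_user_message}")
    parts.append("ASSISTANT:")
    return "\n".join(parts)

print(build_prompt([("Hello!", "Hello!")], "How are you?"))
```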

## Results

I have only evaluated the output on a small, handcrafted sample of German test prompts, confirming that the model's ability to understand and generate German text is above the base model's in many situations.

## Problems

There might be inconsistencies in multi-turn chat applications, as there was a small problem with the `<eos>` tokens during preparation of the finetuning dataset.
Please report any problems so I can fix them for the next version.

---------------------------
# Original Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and the [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)