---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# WizardLM's WizardLM 13B V1.1 GGML

These files are GGML format model files for [WizardLM's WizardLM 13B V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
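
For example, here is a minimal llama-cpp-python sketch for loading and prompting one of these files (an illustrative assumption, not an official example from this repo: the package is installed and the q4_0 file has been downloaded locally):

```python
from llama_cpp import Llama

# Load the GGML file; n_gpu_layers offloads layers to the GPU (0 = CPU only).
llm = Llama(
    model_path="wizardlm-13b-v1.1.ggmlv3.q4_0.bin",
    n_ctx=2048,        # matches -c 2048 in the llama.cpp example further down
    n_gpu_layers=32,   # adjust to your VRAM; set to 0 without GPU acceleration
)

output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```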

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGML)
* [Unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

I made these 'original' quantisation method files using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

These are guaranteed to be compatible with any UIs, tools and libraries released since late May.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible; check their documentation if in doubt.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw); a worked example of this arithmetic follows the list.
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

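To see where a figure like 2.5625 bpw comes from, here is the Q2_K arithmetic as a Python sketch. It is inferred from the description above; the exact super-block layout in llama.cpp may include additional fields, so treat the single fp16 super-block scale as an assumption:

```python
# Bits-per-weight arithmetic for GGML_TYPE_Q2_K, per the description above.
# A super-block holds 16 blocks x 16 weights = 256 weights.
weight_bits = 256 * 2            # 2-bit quantized weights
scale_min_bits = 16 * (4 + 4)    # per-block scale and min, 4 bits each
super_scale_bits = 16            # one fp16 super-block scale (assumption)

bpw = (weight_bits + scale_min_bits + super_scale_bits) / 256
print(bpw)  # 2.5625, matching the figure quoted above
```
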
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizardlm-13b-v1.1.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| wizardlm-13b-v1.1.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| wizardlm-13b-v1.1.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| wizardlm-13b-v1.1.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| wizardlm-13b-v1.1.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
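
Incidentally, every "Max RAM required" figure above is the file size plus roughly 2.5 GB of inference overhead (presumably KV cache and working buffers). A quick sanity check of that pattern; the 2.5 GB is read off the table itself, not an official formula:

```python
# "Max RAM required" above = file size + ~2.5 GB overhead (pattern from the table).
sizes_gb = {"q4_0": 7.32, "q4_1": 8.14, "q5_0": 8.95, "q5_1": 9.76, "q8_0": 13.83}
for quant, size in sizes_gb.items():
    print(quant, round(size + 2.5, 2))  # 9.82, 10.64, 11.45, 12.26, 16.33
```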

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m wizardlm-13b-v1.1.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get the best performance.

If you are not able to fully offload to the GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives the best performance; a quick way to check the core count follows.
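
If you're not sure how many physical cores you have, you can check from Python (an assumption for convenience only: the third-party psutil package is installed; this is not part of llama.cpp):

```python
import psutil

# Physical cores only (logical=False excludes hyper-threads); a sensible value for -t.
print(psutil.cpu_count(logical=False))
```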

Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius, Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer, Pieter, zynix, Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex, SuperWojo, Ghost, Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.

Thank you to all my generous patrons and donors!

<!-- footer end -->

# Original model card: WizardLM's WizardLM 13B V1.1

This is the **Full-Weight** version of the WizardLM-13B V1.1 model.

**Repository**: https://github.com/nlpxucan/WizardLM

**Twitter**: https://twitter.com/WizardLM_AI/status/1677282955490918401

- 🔥🔥🔥 [7/7/2023] We released the **WizardLM V1.1** models. The **WizardLM-13B-V1.1** is here ([Demo_13B-V1.1](https://e8a06366ccd1c4d1.gradio.app), [Demo_13B-V1.1_bak-1](https://59da107262a25764.gradio.app), [Demo_13B-V1.1_bak-2](https://dfc5113f66739c80.gradio.app), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)). **WizardLM-7B-V1.1**, **WizardLM-30B-V1.1**, and **WizardLM-65B-V1.1** are coming soon. Please check out the [Full Model Weights](https://huggingface.co/WizardLM) and [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on the [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: the MT-Bench and AlpacaEval scores are self-tested; we will push an update and request review. All tests are completed under their official settings.)