TheBloke committed
Commit 460b955
1 Parent(s): 1eba28a

Upload README.md

Files changed (1): README.md +69 -41

README.md CHANGED
@@ -2,66 +2,84 @@
  inference: false
  language:
  - en
- license: other
  model_type: llama
  pipeline_tag: text-classification
  tags:
  - llama-2
  ---

  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
  <!-- header end -->

- # Mikael110's Llama2 13B Guanaco QLoRA GGML

- These files are GGML format model files for [Mikael110's Llama2 13B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16).

  GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
- * [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU acceleration on macOS. Also supports Windows, without GPU acceleration.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU acceleration via the llama.cpp backend.
- * [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and an OpenAI-compatible AI server.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with an OpenAI-compatible API server.

  Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!

  ## Repositories available

  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML)
- * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16)

  ## Prompt template: Guanaco

  ```
  ### Human: {prompt}
  ### Assistant:
  ```

  <!-- compatibility_ggml start -->
  ## Compatibility

- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
-
- These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.

- ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

- These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

- They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.

  ## Explanation of the new k-quant methods
  <details>
@@ -80,43 +98,51 @@ Refer to the Provided Files table below to see what files use which methods, and
  <!-- compatibility_ggml end -->

  ## Provided files

  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | llama-2-13b-guanaco-qlora.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | llama-2-13b-guanaco-qlora.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | llama-2-13b-guanaco-qlora.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | llama-2-13b-guanaco-qlora.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
- | llama-2-13b-guanaco-qlora.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
- | llama-2-13b-guanaco-qlora.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
- | llama-2-13b-guanaco-qlora.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
- | llama-2-13b-guanaco-qlora.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
- | llama-2-13b-guanaco-qlora.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | llama-2-13b-guanaco-qlora.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
- | llama-2-13b-guanaco-qlora.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
- | llama-2-13b-guanaco-qlora.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
- | llama-2-13b-guanaco-qlora.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
- | llama-2-13b-guanaco-qlora.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

  ## How to run in `llama.cpp`

- I use the following command line; adjust for your tastes and needs:

  ```
- ./main -t 10 -ngl 32 -m llama-2-13b-guanaco-qlora.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

  Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

  ## How to run in `text-generation-webui`

- Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:
@@ -136,16 +162,18 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

- **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse

  Thank you to all my generous patrons and donaters!

  <!-- footer end -->

- # Original model card: Mikael110's Llama2 13B Guanaco QLoRA

  This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-13b). It was finetuned from the base [Llama-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model.

  inference: false
  language:
  - en
+ license: llama2
+ model_creator: Mikael
+ model_link: https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16
+ model_name: Llama2 13B Guanaco QLoRA
  model_type: llama
  pipeline_tag: text-classification
+ quantized_by: TheBloke
  tags:
  - llama-2
  ---

  <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->

+ # Llama2 13B Guanaco QLoRA - GGML
+ - Model creator: [Mikael](https://huggingface.co/Mikael110)
+ - Original model: [Llama2 13B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16)

+ ## Description
+
+ This repo contains GGML format model files for [Mikael110's Llama2 13B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16).
+
+ ### Important note regarding GGML files
+
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.
+
+ Please use the GGUF models instead.
+
+ ### About GGML

  GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
+ * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server (see the server sketch below).
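
  For illustration (an editorial sketch, not part of the original card), llama-cpp-python's OpenAI-compatible server can be pointed at one of the GGML files from this repo; the filename is one of the provided files listed further down:

  ```
  # Minimal sketch, assuming a GGML-era release of llama-cpp-python
  pip install "llama-cpp-python[server]"
  # Serve the model on an OpenAI-compatible API endpoint
  python3 -m llama_cpp.server --model llama-2-13b-guanaco-qlora.ggmlv3.q4_K_M.bin
  ```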

  Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!

  ## Repositories available

  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML)
+ * [Mikael's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16)

  ## Prompt template: Guanaco

  ```
  ### Human: {prompt}
  ### Assistant:
+
  ```
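
  For illustration (not part of the original card), a prompt built from this template with the card's own example instruction looks like:

  ```
  ### Human: Write a story about llamas
  ### Assistant:
  ```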

  <!-- compatibility_ggml start -->
  ## Compatibility

+ These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.

+ For support with the latest llama.cpp, please use GGUF files instead.

+ The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

+ As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
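
  To keep running these GGML files with llama.cpp itself, one option is a local build pinned to that final commit. A minimal sketch, assuming the standard `make` build of that era:

  ```
  # Illustrative only: build llama.cpp at the last GGML-compatible commit
  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp
  git checkout dadbed99e65252d79f81101a392d0d6497b86caa
  make
  ```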

  ## Explanation of the new k-quant methods
  <details>

  <!-- compatibility_ggml end -->

  ## Provided files
+
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q2_K.bin) | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q6_K.bin) | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
+ | [llama-2-13b-guanaco-qlora.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGML/blob/main/llama-2-13b-guanaco-qlora.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

  ## How to run in `llama.cpp`

+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
+
+ For compatibility with the latest llama.cpp, please use GGUF files instead.

  ```
+ ./main -t 10 -ngl 32 -m llama-2-13b-guanaco-qlora.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: Write a story about llamas\n### Assistant:"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

  Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

+ Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
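
  A hedged sketch of what such an extended-context run might look like (illustrative flags only; `--rope-freq-scale 0.5` stretches the native 4096-token Llama 2 context to roughly double):

  ```
  # Illustrative only: extended context via RoPE scaling
  ./main -t 10 -ngl 32 -m llama-2-13b-guanaco-qlora.ggmlv3.q4_K_M.bin --color -c 8192 --rope-freq-base 10000 --rope-freq-scale 0.5 -n -1 -p "### Human: Write a story about llamas\n### Assistant:"
  ```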
+
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
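
  For instance (an illustrative variant of the command above):

  ```
  # Interactive, instruction-following chat session instead of a one-shot prompt
  ./main -t 10 -ngl 32 -m llama-2-13b-guanaco-qlora.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
  ```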

+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

  ## How to run in `text-generation-webui`

+ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Aemon Algiz.

+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

  Thank you to all my generous patrons and donaters!

+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->

+ # Original model card: Mikael110's Llama2 13B Guanaco QLoRA

  This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-13b). It was finetuned from the base [Llama-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model.