TheBloke committed on
Commit 11156f8
Parent: d4e8fc0

Upload README.md

Files changed (1)
  1. README.md +102 -25
README.md CHANGED
@@ -1,14 +1,28 @@
  ---
  arxiv: 2307.09288
  inference: false
  language:
  - en
  license: other
  model_creator: Meta Llama 2
- model_link: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
  model_name: Llama 2 7B Chat
  model_type: llama
  pipeline_tag: text-generation
  quantized_by: TheBloke
  tags:
  - facebook
@@ -39,34 +53,36 @@ tags:
  - Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
  - Original model: [Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)

  ## Description

  This repo contains GGUF format model files for [Meta Llama 2's Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).

  <!-- README_GGUF.md-about-gguf start -->
  ### About GGUF

- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

- The key benefit of GGUF is that it is a extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

- Here are a list of clients and libraries that are known to support GGUF:
- * [llama.cpp](https://github.com/ggerganov/llama.cpp).
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for story telling.
- * [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
- * [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
- * [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.

  <!-- README_GGUF.md-about-gguf end -->
  <!-- repositories-available start -->
  ## Repositories available

  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGML)
  * [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
  <!-- repositories-available end -->

@@ -82,12 +98,14 @@ You are a helpful, respectful and honest assistant. Always answer as helpfully a
  ```

  <!-- prompt-template end -->
  <!-- compatibility_gguf start -->
  ## Compatibility

- These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)

- They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.

  ## Explanation of quantisation methods
  <details>
@@ -123,23 +141,80 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [llama-2-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

  <!-- README_GGUF.md-provided-files end -->

- <!-- README_GGUF.md-how-to-run start -->
- ## Example `llama.cpp` command

- Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.

- For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.

  ```
- ./main -t 10 -ngl 32 -m llama-2-7b-chat.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\nWrite a story about llamas[/INST]"
  ```
- Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

- Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

@@ -174,7 +249,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
  from ctransformers import AutoModelForCausalLM

  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GGML", model_file="llama-2-7b-chat.q4_K_M.gguf", model_type="llama", gpu_layers=50)

  print(llm("AI is going to"))
  ```
@@ -196,10 +271,12 @@ For further support, and discussions on these models and AI in general, join us

  [TheBloke AI's Discord server](https://discord.gg/theblokeai)

- ## Thanks, and how to contribute.

  Thanks to the [chirper.ai](https://chirper.ai) team!

  I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

  If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
@@ -211,7 +288,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

  **Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

  Thank you to all my generous patrons and donaters!
 
  ---
  arxiv: 2307.09288
+ base_model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
  inference: false
  language:
  - en
  license: other
  model_creator: Meta Llama 2
  model_name: Llama 2 7B Chat
  model_type: llama
  pipeline_tag: text-generation
+ prompt_template: '[INST] <<SYS>>
+
+ You are a helpful, respectful and honest assistant. Always answer as helpfully as
+ possible, while being safe. Your answers should not include any harmful, unethical,
+ racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
+ are socially unbiased and positive in nature. If a question does not make any sense,
+ or is not factually coherent, explain why instead of answering something not correct.
+ If you don''t know the answer to a question, please don''t share false information.
+
+ <</SYS>>
+
+ {prompt}[/INST]
+
+ '
  quantized_by: TheBloke
  tags:
  - facebook
 
  - Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
  - Original model: [Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)

+ <!-- description start -->
  ## Description

  This repo contains GGUF format model files for [Meta Llama 2's Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).

+ <!-- description end -->
  <!-- README_GGUF.md-about-gguf start -->
  ### About GGUF

+ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

+ Here is an incomplete list of clients and libraries that are known to support GGUF:

+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

  <!-- README_GGUF.md-about-gguf end -->
  <!-- repositories-available start -->
  ## Repositories available

+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-AWQ)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF)
  * [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
  <!-- repositories-available end -->

  ```

  <!-- prompt-template end -->
+
+
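For reference, here is a minimal Python sketch of how the template above can be filled in before a prompt is sent to any of the clients listed in this README. The helper name is illustrative, and the system message is shortened for readability; the full system message is the one shown in the template:

```python
def build_llama2_chat_prompt(prompt: str, system_message: str) -> str:
    # Matches the Llama-2-Chat template shown above:
    # [INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt}[/INST]
    return f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt}[/INST]"

# Example usage, with a shortened system message:
full_prompt = build_llama2_chat_prompt(
    prompt="Write a story about llamas",
    system_message="You are a helpful, respectful and honest assistant.",
)
print(full_prompt)
```
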
  <!-- compatibility_gguf start -->
  ## Compatibility

+ These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

+ They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

  ## Explanation of quantisation methods
  <details>
 
  | [llama-2-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
+
+
+
  <!-- README_GGUF.md-provided-files end -->

+ <!-- README_GGUF.md-how-to-download start -->
+ ## How to download GGUF files
+
+ **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+ - LM Studio
+ - LoLLMS Web UI
+ - Faraday.dev
+
+ ### In `text-generation-webui`
+
+ Under Download Model, you can enter the model repo: TheBloke/Llama-2-7b-Chat-GGUF and below it, a specific filename to download, such as: llama-2-7b-chat.Q4_K_M.gguf.

+ Then click Download.

+ ### On the command line, including multiple files at once

+ I recommend using the `huggingface-hub` Python library:
+
+ ```shell
+ pip3 install 'huggingface-hub>=0.17.1'
  ```
+
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
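If you prefer to download from Python rather than the shell, here is a minimal sketch using the same `huggingface-hub` library installed above; the arguments mirror the CLI flags:

```python
# Download a single GGUF file from the repo with huggingface_hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7b-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
    local_dir=".",                 # equivalent of --local-dir .
    local_dir_use_symlinks=False,  # equivalent of --local-dir-use-symlinks False
)
print(model_path)  # path to the downloaded .gguf file
```
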
+ <details>
+ <summary>More advanced huggingface-cli download usage</summary>
+
+ You can also download multiple files at once with a pattern:
+
+ ```shell
+ huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
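The `--include` pattern above also has a Python equivalent; a short sketch using `snapshot_download` from the same `huggingface-hub` library, with `allow_patterns` standing in for `--include`:

```python
# Download every file matching a pattern (e.g. all Q4_K variants).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Llama-2-7b-Chat-GGUF",
    allow_patterns=["*Q4_K*gguf"],  # same pattern as --include above
    local_dir=".",
    local_dir_use_symlinks=False,
)
```
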
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+ ```shell
+ pip3 install hf_transfer
+ ```
+
+ And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+ ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+ </details>
+ <!-- README_GGUF.md-how-to-download end -->
+
206
+ <!-- README_GGUF.md-how-to-run start -->
207
+ ## Example `llama.cpp` command
208
+
209
+ Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
210
+
211
+ ```shell
212
+ ./main -ngl 32 -m llama-2-7b-chat.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n{prompt}[/INST]"
213
  ```
 
214
 
215
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
216
 
217
+ Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
218
 
219
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
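
The same generation can also be run from Python with `llama-cpp-python`, one of the GGUF clients listed above. This is a minimal sketch, not taken from this README; parameter names follow that library, and the system message is shortened for readability:

```python
# Rough llama-cpp-python equivalent of the ./main command above.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",
    n_ctx=4096,       # like -c 4096
    n_gpu_layers=32,  # like -ngl 32; set to 0 without GPU acceleration
)

prompt = (
    "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n"
    "<</SYS>>\nWrite a story about llamas[/INST]"
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```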
 
 
  from ctransformers import AutoModelForCausalLM

  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GGUF", model_file="llama-2-7b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

  print(llm("AI is going to"))
  ```
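
ctransformers can also stream tokens as they are generated, which is often preferable for long replies. A short sketch of the same call in streaming mode, based on the library's documented `stream=True` option:

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7b-Chat-GGUF",
    model_file="llama-2-7b-chat.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

# Print tokens as they arrive instead of waiting for the full string.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
```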
 

  [TheBloke AI's Discord server](https://discord.gg/theblokeai)

+ ## Thanks, and how to contribute

  Thanks to the [chirper.ai](https://chirper.ai) team!

+ Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
+
  I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

  If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

  **Special thanks to**: Aemon Algiz.

+ **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

  Thank you to all my generous patrons and donaters!