TheBloke committed on
Commit
0a12001
1 Parent(s): 51be264

Upload README.md

Files changed (1)
  1. README.md +19 -17
README.md CHANGED
@@ -41,7 +41,7 @@ This repo contains GGUF format model files for [Meta's LLaMA 7b](https://ai.meta
  <!-- README_GGUF.md-about-gguf start -->
  ### About GGUF
 
- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
 
  Here is an incomplete list of clients and libraries that are known to support GGUF:
 
@@ -79,7 +79,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
  <!-- compatibility_gguf start -->
  ## Compatibility
 
- These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
 
  They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
 
@@ -172,25 +172,25 @@ pip3 install hf_transfer
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
  ```shell
- HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-7b-GGUF llama-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
- Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
  </details>
  <!-- README_GGUF.md-how-to-download end -->
 
  <!-- README_GGUF.md-how-to-run start -->
  ## Example `llama.cpp` command
 
- Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
  ```shell
- ./main -ngl 32 -m llama-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
  ```
 
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
- Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
 
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
 
@@ -204,22 +204,24 @@ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://git
 
  You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
 
- ### How to load this model from Python using ctransformers
 
  #### First install the package
 
- ```bash
  # Base ctransformers with no GPU acceleration
- pip install ctransformers>=0.2.24
  # Or with CUDA GPU acceleration
- pip install ctransformers[cuda]>=0.2.24
- # Or with ROCm GPU acceleration
- CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
- # Or with Metal GPU acceleration for macOS systems
- CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
  ```
 
- #### Simple example code to load one of these GGUF models
 
  ```python
  from ctransformers import AutoModelForCausalLM
@@ -232,7 +234,7 @@ print(llm("AI is going to"))
 
  ## How to use with LangChain
 
- Here's guides on using llama-cpp-python or ctransformers with LangChain:
 
  * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
  * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
 
  <!-- README_GGUF.md-about-gguf start -->
  ### About GGUF
 
+ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
  Here is an incomplete list of clients and libraries that are known to support GGUF:
 
 
  <!-- compatibility_gguf start -->
  ## Compatibility
 
+ These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
 
  They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
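
If you build `llama.cpp` yourself and want to confirm your checkout already contains that commit, one way to check (a quick sketch, run from inside your llama.cpp source directory) is:

```shell
# Succeeds (and prints "compatible") if d0cee0d is an ancestor of your current HEAD
git merge-base --is-ancestor d0cee0d36d5be95a0d9088b674dbb27354107221 HEAD && echo compatible
```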
 
 
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
  ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-7b-GGUF llama-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
+ Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
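
If you prefer to script the download from Python instead of the CLI, the same file can be fetched with the `huggingface_hub` library. A minimal sketch (the repo name and filename are taken from the command above; `hf_transfer` is optional):

```python
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # optional; requires the hf_transfer package

from huggingface_hub import hf_hub_download

# Downloads llama-7b.Q4_K_M.gguf into the current directory and returns its local path
path = hf_hub_download(
    repo_id="TheBloke/LLaMA-7b-GGUF",
    filename="llama-7b.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```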
  </details>
  <!-- README_GGUF.md-how-to-download end -->
 
  <!-- README_GGUF.md-how-to-run start -->
  ## Example `llama.cpp` command
 
+ Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
  ```shell
+ ./main -ngl 32 -m llama-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
  ```
 
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
+ Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
 
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
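
For example, an interactive chat-style invocation would keep the same flags but swap in `-i -ins` (a sketch; adjust the flags for your setup):

```shell
./main -ngl 32 -m llama-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```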
 
 
 
  You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
 
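For the llama-cpp-python route, a minimal sketch is below (the parameter values mirror the `llama.cpp` command above and are illustrative, not prescriptive):

```python
from llama_cpp import Llama

# Load the local GGUF file; set n_gpu_layers=0 if you have no GPU acceleration
llm = Llama(model_path="llama-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

output = llm("AI is going to", max_tokens=128)
print(output["choices"][0]["text"])
```
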
+ ### How to load this model in Python code, using ctransformers
 
  #### First install the package
 
+ Run one of the following commands, according to your system:
+
+ ```shell
  # Base ctransformers with no GPU acceleration
+ pip install ctransformers
  # Or with CUDA GPU acceleration
+ pip install ctransformers[cuda]
+ # Or with AMD ROCm GPU acceleration (Linux only)
+ CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
+ # Or with Metal GPU acceleration for macOS systems only
+ CT_METAL=1 pip install ctransformers --no-binary ctransformers
  ```
 
+ #### Simple ctransformers example code
 
  ```python
  from ctransformers import AutoModelForCausalLM
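# The example continues past this excerpt; a hedged sketch of the rest of the call
# (model_file and gpu_layers here are illustrative assumptions, not taken from this excerpt):
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/LLaMA-7b-GGUF",
    model_file="llama-7b.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 if you have no GPU acceleration
)
print(llm("AI is going to"))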
 
 
  ## How to use with LangChain
 
+ Here are guides on using llama-cpp-python and ctransformers with LangChain:
 
  * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
  * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
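
For orientation, a minimal LangChain + llama-cpp-python sketch (this assumes the `langchain-community` package and the local GGUF file downloaded above; see the linked guides for current usage):

```python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF model as a LangChain LLM
llm = LlamaCpp(
    model_path="llama-7b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,   # remove or set to 0 without GPU acceleration
    temperature=0.7,
)

print(llm.invoke("AI is going to"))
```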