Transformers
GGUF
llama
text-generation-inference
TheBloke committed on
Commit 15fe43a
1 Parent(s): 5c909fb

Upload README.md

Files changed (1)
  1. README.md +20 -18
README.md CHANGED
@@ -45,7 +45,7 @@ This repo contains GGUF format model files for [Eric Hartford's WizardLM-7B-V1.0
 <!-- README_GGUF.md-about-gguf start -->
 ### About GGUF
 
- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
+ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
 Here is an incomplete list of clients and libraries that are known to support GGUF:
 
@@ -83,7 +83,7 @@ A chat between a curious user and an artificial intelligence assistant. The assi
 <!-- compatibility_gguf start -->
 ## Compatibility
 
- These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
+ These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
 
 They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
 
@@ -147,7 +147,7 @@ Then click Download.
 I recommend using the `huggingface-hub` Python library:
 
 ```shell
- pip3 install huggingface-hub>=0.17.1
+ pip3 install huggingface-hub
 ```
 
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
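(The CLI invocation itself appears further down, in the `hf_transfer` example.) If you prefer to script the download, the `huggingface-hub` library installed above can fetch the same file from Python. A minimal sketch, with the repo and file names taken from that example:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the repo into the current directory
hf_hub_download(
    repo_id="TheBloke/WizardLM-7B-V1.0-Uncensored-GGUF",
    filename="wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf",
    local_dir=".",
)
```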
@@ -176,25 +176,25 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
- HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-7B-V1.0-Uncensored-GGUF wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-7B-V1.0-Uncensored-GGUF wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
- Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+ Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
 </details>
 <!-- README_GGUF.md-how-to-download end -->
 
 <!-- README_GGUF.md-how-to-run start -->
 ## Example `llama.cpp` command
 
- Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
+ Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
- ./main -ngl 32 -m wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
+ ./main -ngl 32 -m wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
- Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
+ Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
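If you would rather drive the model from Python than from the command line, the same settings map onto the llama-cpp-python package covered below. A rough sketch, assuming the GGUF file has already been downloaded, with values mirroring the example command above:

```python
from llama_cpp import Llama

# Mirrors: ./main -ngl 32 -m wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf -c 2048 --temp 0.7 --repeat_penalty 1.1
llm = Llama(
    model_path="wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf",
    n_gpu_layers=32,  # like -ngl 32; use 0 if you have no GPU acceleration
    n_ctx=2048,       # like -c 2048
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a story about llamas ASSISTANT:"
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```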
@@ -208,22 +208,24 @@ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://git
 
 You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
 
- ### How to load this model from Python using ctransformers
+ ### How to load this model in Python code, using ctransformers
 
 #### First install the package
 
+ Run one of the following commands, according to your system:
+
- ```bash
+ ```shell
 # Base ctransformers with no GPU acceleration
- pip install ctransformers>=0.2.24
+ pip install ctransformers
 # Or with CUDA GPU acceleration
- pip install ctransformers[cuda]>=0.2.24
- # Or with ROCm GPU acceleration
- CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
- # Or with Metal GPU acceleration for macOS systems
- CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
+ pip install ctransformers[cuda]
+ # Or with AMD ROCm GPU acceleration (Linux only)
+ CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
+ # Or with Metal GPU acceleration for macOS systems only
+ CT_METAL=1 pip install ctransformers --no-binary ctransformers
 ```
 
- #### Simple example code to load one of these GGUF models
+ #### Simple ctransformers example code
 
 ```python
 from ctransformers import AutoModelForCausalLM
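# The example above is cut off by the diff context; a sketch of how it
# typically continues (the gpu_layers value is an assumption, not part of the diff).
# Set gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/WizardLM-7B-V1.0-Uncensored-GGUF",
    model_file="wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

print(llm("AI is going to"))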
@@ -236,7 +238,7 @@ print(llm("AI is going to"))
 
 ## How to use with LangChain
 
- Here's guides on using llama-cpp-python or ctransformers with LangChain:
+ Here are guides on using llama-cpp-python and ctransformers with LangChain:
 
 * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
 * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
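As a rough idea of what those guides cover, here is a minimal sketch using LangChain's CTransformers integration, assuming a recent `langchain-community` release and the model file used throughout this README:

```python
from langchain_community.llms import CTransformers

# Wrap the GGUF model in LangChain's CTransformers integration
llm = CTransformers(
    model="TheBloke/WizardLM-7B-V1.0-Uncensored-GGUF",
    model_file="wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf",
    model_type="llama",
)

print(llm.invoke("AI is going to"))
```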
 