Text Generation
Transformers
GGUF
English
llama
text-generation-inference
TheBloke committed
Commit 681ba20
1 Parent(s): f6794e4

Upload README.md

Files changed (1)
  1. README.md +23 -21
README.md CHANGED
@@ -51,7 +51,7 @@ This repo contains GGUF format model files for [Open Access AI Collective's Wiza
 <!-- README_GGUF.md-about-gguf start -->
 ### About GGUF
 
-GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
+GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
 Here is an incomplete list of clients and libraries that are known to support GGUF:
 
@@ -89,7 +89,7 @@ A chat between a curious user and an artificial intelligence assistant. The assi
 <!-- compatibility_gguf start -->
 ## Compatibility
 
-These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
+These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
 
 They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
 
@@ -144,7 +144,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/wizard-mega-13B-GGUF and below it, a specific filename to download, such as: wizard-mega-13B.q4_K_M.gguf.
+Under Download Model, you can enter the model repo: TheBloke/wizard-mega-13B-GGUF and below it, a specific filename to download, such as: wizard-mega-13B.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -153,13 +153,13 @@ Then click Download.
 I recommend using the `huggingface-hub` Python library:
 
 ```shell
-pip3 install huggingface-hub>=0.17.1
+pip3 install huggingface-hub
 ```
 
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/wizard-mega-13B-GGUF wizard-mega-13B.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download TheBloke/wizard-mega-13B-GGUF wizard-mega-13B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
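The same download can also be scripted from Python with the `huggingface_hub` library that powers the CLI. A minimal sketch, using the repo and filename from the command above:

```python
# Minimal sketch: fetch one GGUF file with the huggingface_hub Python API,
# mirroring the huggingface-cli command above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/wizard-mega-13B-GGUF",
    filename="wizard-mega-13B.Q4_K_M.gguf",
    local_dir=".",  # download into the current directory
)
print(model_path)  # path to the downloaded .gguf file
```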
 
 <details>
@@ -182,25 +182,25 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/wizard-mega-13B-GGUF wizard-mega-13B.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/wizard-mega-13B-GGUF wizard-mega-13B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
-Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
 </details>
 <!-- README_GGUF.md-how-to-download end -->
 
 <!-- README_GGUF.md-how-to-run start -->
 ## Example `llama.cpp` command
 
-Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
+Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m wizard-mega-13B.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
+./main -ngl 32 -m wizard-mega-13B.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
-Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
+Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
 
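These flags map directly onto the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) library covered in the next section. A minimal sketch, assuming the Q4_K_M file is in the current directory; the prompt text filled in for `{prompt}` is illustrative:

```python
# Minimal llama-cpp-python sketch mirroring the ./main flags above:
# n_gpu_layers corresponds to -ngl, n_ctx to -c.
from llama_cpp import Llama

llm = Llama(
    model_path="wizard-mega-13B.Q4_K_M.gguf",
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=2048,       # desired sequence length
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Tell me about AI. ASSISTANT:"
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```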
@@ -214,35 +214,37 @@ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://git
 
 You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
 
-### How to load this model from Python using ctransformers
+### How to load this model in Python code, using ctransformers
 
 #### First install the package
 
-```bash
+Run one of the following commands, according to your system:
+
+```shell
 # Base ctransformers with no GPU acceleration
-pip install ctransformers>=0.2.24
+pip install ctransformers
 # Or with CUDA GPU acceleration
-pip install ctransformers[cuda]>=0.2.24
-# Or with ROCm GPU acceleration
-CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
-# Or with Metal GPU acceleration for macOS systems
-CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
+pip install ctransformers[cuda]
+# Or with AMD ROCm GPU acceleration (Linux only)
+CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
+# Or with Metal GPU acceleration for macOS systems only
+CT_METAL=1 pip install ctransformers --no-binary ctransformers
 ```
 
-#### Simple example code to load one of these GGUF models
+#### Simple ctransformers example code
 
 ```python
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/wizard-mega-13B-GGUF", model_file="wizard-mega-13B.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/wizard-mega-13B-GGUF", model_file="wizard-mega-13B.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
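The call above uses ctransformers' default generation settings; common sampling parameters can also be passed on the call itself. A short sketch on the same `llm` object, with illustrative values:

```python
# Illustrative sampling parameters; these are standard ctransformers
# generation keyword arguments.
print(llm(
    "AI is going to",
    max_new_tokens=256,      # cap on the number of generated tokens
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1,
    stop=["USER:"],          # stop if the model starts a new turn
))
```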
 
 ## How to use with LangChain
 
-Here's guides on using llama-cpp-python or ctransformers with LangChain:
+Here are guides on using llama-cpp-python and ctransformers with LangChain:
 
 * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
 * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
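As a quick orientation for the second guide, LangChain's `CTransformers` wrapper takes the same repo, file and model type as the ctransformers example above. A minimal sketch; the import path and config keys follow the LangChain documentation of this period:

```python
# Minimal sketch of LangChain's ctransformers wrapper; generation settings
# go in the config dict. See the guide linked above for details.
from langchain.llms import CTransformers

llm = CTransformers(
    model="TheBloke/wizard-mega-13B-GGUF",
    model_file="wizard-mega-13B.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)

print(llm("AI is going to"))
```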
 