TheBloke committed
Commit 28dd720
1 Parent(s): 2f53e43

Upload README.md

Files changed (1):
  1. README.md +7 -7

README.md CHANGED
@@ -153,7 +153,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/manticore-13b-chat-pyg-GGUF and below it, a specific filename to download, such as: manticore-13b-chat-pyg.q4_K_M.gguf.
+Under Download Model, you can enter the model repo: TheBloke/manticore-13b-chat-pyg-GGUF and below it, a specific filename to download, such as: manticore-13b-chat-pyg.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -162,13 +162,13 @@ Then click Download.
 I recommend using the `huggingface-hub` Python library:
 
 ```shell
-pip3 install huggingface-hub>=0.17.1
+pip3 install huggingface-hub
 ```
 
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/manticore-13b-chat-pyg-GGUF manticore-13b-chat-pyg.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download TheBloke/manticore-13b-chat-pyg-GGUF manticore-13b-chat-pyg.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
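
For anyone who prefers to script this step, the same fetch can be done with the `huggingface_hub` Python API instead of the CLI; a minimal sketch, assuming the Q4_K_M filename shown in the diff:

```python
# Sketch: download one GGUF file via the huggingface_hub Python API,
# mirroring the huggingface-cli command above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/manticore-13b-chat-pyg-GGUF",
    filename="manticore-13b-chat-pyg.Q4_K_M.gguf",
    local_dir=".",                 # save into the current directory
    local_dir_use_symlinks=False,  # store the real file, not a cache symlink
)
print(model_path)  # path to the downloaded file
```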
@@ -191,10 +191,10 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/manticore-13b-chat-pyg-GGUF manticore-13b-chat-pyg.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/manticore-13b-chat-pyg-GGUF manticore-13b-chat-pyg.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
-Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
 </details>
 <!-- README_GGUF.md-how-to-download end -->
 
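
The same switch can be flipped from Python rather than the shell; a minimal sketch (the variable has to be set before `huggingface_hub` is imported, since the flag is read at import time):

```python
# Sketch: enable the hf_transfer accelerated download backend from Python.
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # must precede the import below

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/manticore-13b-chat-pyg-GGUF",
    filename="manticore-13b-chat-pyg.Q4_K_M.gguf",
    local_dir=".",
)
```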
 
@@ -204,7 +204,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
 Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m manticore-13b-chat-pyg.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
+./main -ngl 32 -m manticore-13b-chat-pyg.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
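
When `./main` is driven from a script rather than an interactive shell, filling the `{prompt}` placeholder is plain string substitution; a minimal sketch in Python, assuming the binary and the model file sit in the working directory:

```python
# Sketch: fill the USER/ASSISTANT template and invoke llama.cpp's ./main.
import subprocess

TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: {prompt} ASSISTANT:"
)

subprocess.run(
    [
        "./main",
        "-ngl", "32",  # layers offloaded to GPU; drop if CPU-only
        "-m", "manticore-13b-chat-pyg.Q4_K_M.gguf",
        "-c", "4096",  # context length
        "--temp", "0.7",
        "--repeat_penalty", "1.1",
        "-n", "-1",    # generate until end-of-sequence
        "-p", TEMPLATE.format(prompt="Why is the sky blue?"),
    ],
    check=True,
)
```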
@@ -244,7 +244,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/manticore-13b-chat-pyg-GGUF", model_file="manticore-13b-chat-pyg.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/manticore-13b-chat-pyg-GGUF", model_file="manticore-13b-chat-pyg.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
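
For chat-style output from `ctransformers`, wrap the input in the same USER/ASSISTANT template used in the llama.cpp example; a minimal sketch that also streams tokens as they are generated:

```python
# Sketch: chat-formatted, streamed generation with ctransformers.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/manticore-13b-chat-pyg-GGUF",
    model_file="manticore-13b-chat-pyg.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: What is the GGUF format? ASSISTANT:"
)

# stream=True yields tokens one at a time instead of one final string.
for token in llm(prompt, stream=True):
    print(token, end="", flush=True)
```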
 