TheBloke committed on
Commit 88a4e64
1 Parent(s): e1d1135

Upload README.md

Files changed (1)
  1. README.md +5 -12
README.md CHANGED
@@ -82,15 +82,8 @@ A chat between a curious user and an artificial intelligence assistant. The assi
 ```
 
 <!-- prompt-template end -->
-<!-- licensing start -->
-## Licensing
 
-The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
 
-As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
-In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Eric Hartford's Wizard-Vicuna-30B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored).
-<!-- licensing end -->
 <!-- compatibility_gguf start -->
 ## Compatibility
 
@@ -149,7 +142,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF and below it, a specific filename to download, such as: Wizard-Vicuna-30B-Uncensored.q4_K_M.gguf.
+Under Download Model, you can enter the model repo: TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF and below it, a specific filename to download, such as: Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -164,7 +157,7 @@ pip3 install huggingface-hub>=0.17.1
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF Wizard-Vicuna-30B-Uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
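For reference, the same download can be driven from Python with the `huggingface_hub` API; a minimal sketch, assuming `huggingface-hub>=0.17.1` is installed as in the step above and using the renamed `Q4_K_M` filename:

```python
# Minimal sketch: fetch one GGUF file from the repo with the huggingface_hub Python API.
# Assumes huggingface-hub>=0.17.1 is installed, as in the pip step above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF",
    filename="Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf",
    local_dir=".",                 # download into the current directory
    local_dir_use_symlinks=False,  # store the real file rather than a cache symlink
)
print(model_path)  # path to the downloaded .gguf file
```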
@@ -187,7 +180,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF Wizard-Vicuna-30B-Uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
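The same switch can also be flipped from Python before downloading; a small sketch, assuming `hf_transfer` and `huggingface-hub` are installed as above:

```python
# Sketch: enable the hf_transfer backend programmatically, then download as before.
# The variable must be set before huggingface_hub reads its settings, so set it
# before the import.
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF",
    filename="Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
```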
@@ -200,7 +193,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
 Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m Wizard-Vicuna-30B-Uncensored.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
+./main -ngl 32 -m Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
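The `./main` invocation above can also be approximated from Python through the `llama-cpp-python` bindings; a hedged sketch, assuming that package is installed with GPU support (it is not shown in this README excerpt), with `-ngl` mapped to `n_gpu_layers` and `-c` to `n_ctx`:

```python
# Illustrative sketch using llama-cpp-python (an assumption, not part of this diff);
# mirrors the ./main flags: -ngl 32 -> n_gpu_layers, -c 4096 -> n_ctx, --temp, --repeat_penalty.
from llama_cpp import Llama

llm = Llama(
    model_path="Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf",
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,
)

# Example prompt substituted into the template's {prompt} slot for illustration.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a story about llamas ASSISTANT:"
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```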
@@ -240,7 +233,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF", model_file="Wizard-Vicuna-30B-Uncensored.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/Wizard-Vicuna-30B-Uncensored-GGUF", model_file="Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
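The ctransformers snippet above can also stream tokens as they are generated; a small sketch, assuming `llm` was loaded as shown and that the installed ctransformers version supports the `stream` flag:

```python
# Sketch: stream tokens from the ctransformers model loaded above instead of
# waiting for the full completion. Assumes `llm` was created as in the snippet above.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
print()
```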
 