Upload README.md

README.md CHANGED
@@ -1,12 +1,20 @@
 ---
+base_model: https://huggingface.co/georgesung/llama2_7b_chat_uncensored
 datasets:
 - ehartford/wizard_vicuna_70k_unfiltered
 inference: false
 license: other
 model_creator: George Sung
-model_link: https://huggingface.co/georgesung/llama2_7b_chat_uncensored
 model_name: Llama2 7B Chat Uncensored
 model_type: llama
+prompt_template: '### HUMAN:
+
+  {prompt}
+
+
+  ### RESPONSE:
+
+  '
 quantized_by: TheBloke
 ---
 
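The `prompt_template` added above is a YAML single-quoted scalar, so its line breaks fold: it parses to `### HUMAN:\n{prompt}\n\n### RESPONSE:\n`. A minimal sketch of filling it in Python (the `build_prompt` helper is illustrative, not part of the README):

```python
# Sketch only: the folded form of the prompt_template metadata added in
# the hunk above. YAML folds the scalar's blank lines into the newlines
# shown here.
PROMPT_TEMPLATE = "### HUMAN:\n{prompt}\n\n### RESPONSE:\n"

def build_prompt(user_message: str) -> str:
    # Substitute the user's message for the {prompt} placeholder.
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(build_prompt("Tell me about llamas"))
```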
@@ -58,9 +66,9 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF)
-* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML)
 * [George Sung's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/georgesung/llama2_7b_chat_uncensored)
 <!-- repositories-available end -->
 
@@ -131,6 +139,63 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 <!-- README_GGUF.md-provided-files end -->
 
+<!-- README_GGUF.md-how-to-download start -->
+## How to download GGUF files
+
+**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+- LM Studio
+- LoLLMS Web UI
+- Faraday.dev
+
+### In `text-generation-webui`
+
+Under Download Model, you can enter the model repo: TheBloke/llama2_7b_chat_uncensored-GGUF and below it, a specific filename to download, such as: llama2_7b_chat_uncensored.q4_K_M.gguf.
+
+Then click Download.
+
+### On the command line, including multiple files at once
+
+I recommend using the `huggingface-hub` Python library:
+
+```shell
+pip3 install 'huggingface-hub>=0.17.1'
+```
+
+Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+```shell
+huggingface-cli download TheBloke/llama2_7b_chat_uncensored-GGUF llama2_7b_chat_uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+```
+
+<details>
+<summary>More advanced huggingface-cli download usage</summary>
+
+You can also download multiple files at once with a pattern:
+
+```shell
+huggingface-cli download TheBloke/llama2_7b_chat_uncensored-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+```
+
+For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+```shell
+pip3 install hf_transfer
+```
+
+And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+```shell
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama2_7b_chat_uncensored-GGUF llama2_7b_chat_uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+```
+
+Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+</details>
+<!-- README_GGUF.md-how-to-download end -->
+
 <!-- README_GGUF.md-how-to-run start -->
 ## Example `llama.cpp` command
 
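The shell commands added in the hunk above have direct Python equivalents in the same `huggingface-hub` library; a hedged sketch using its standard `hf_hub_download` and `snapshot_download` calls (the script itself is not part of the README):

```python
import os

# Opt in to hf_transfer acceleration, mirroring the shell example above.
# Requires `pip3 install hf_transfer`, and must be set before importing
# huggingface_hub.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download, snapshot_download

# Single file, as in the first huggingface-cli example:
path = hf_hub_download(
    repo_id="TheBloke/llama2_7b_chat_uncensored-GGUF",
    filename="llama2_7b_chat_uncensored.q4_K_M.gguf",
    local_dir=".",
)
print(path)

# Several files matching a pattern, as in the --include example:
snapshot_download(
    repo_id="TheBloke/llama2_7b_chat_uncensored-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)
```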
@@ -216,7 +281,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
 **Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**:
+**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
 
 
 Thank you to all my generous patrons and donaters!