---
base_model: https://huggingface.co/jondurbin/spicyboros-7b-2.2
datasets:
- jondurbin/airoboros-2.2
inference: false
license: llama2
model_creator: Jon Durbin
model_name: Spicyboros 7B 2.2
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
tags:
- not-for-all-audiences
---

<!-- header start -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/spicyboros-7b-2.2)

<!-- prompt-template end -->
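The template declared in the front matter (`"A chat.\nUSER: {prompt}\nASSISTANT: \n"`) can be applied from Python as follows; a minimal sketch, with an illustrative helper name:

```python
# Minimal sketch: fill the chat template from the YAML front matter.
# The constant and helper names are illustrative, not part of this repo.
PROMPT_TEMPLATE = "A chat.\nUSER: {prompt}\nASSISTANT: \n"

def format_prompt(user_message: str) -> str:
    """Substitute the user's message into the Spicyboros chat template."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(format_prompt("Tell me about AI"))
```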
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |

<!-- README_GPTQ.md-provided-files end -->
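To make the table columns concrete, here is a sketch of how they map onto quantisation settings in `transformers`' `GPTQConfig`, mirroring the `gptq-4bit-32g-actorder_True` row. This is an illustration of what the columns mean, not the exact recipe used to produce these files (which were made with AutoGPTQ):

```python
# Illustration only: map the table columns onto a GPTQ quantisation config.
from transformers import GPTQConfig

gptq_config = GPTQConfig(
    bits=4,               # "Bits" column
    group_size=32,        # "GS" (group size) column
    desc_act=True,        # "Act Order" column
    damp_percent=0.1,     # "Damp %" column
    dataset="wikitext2",  # "GPTQ Dataset" column
)
```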

<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Spicyboros-7B-2.2-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Spicyboros-7B-2.2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
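Branches can also be fetched from Python with the `huggingface_hub` library; a minimal sketch, assuming `huggingface_hub` is installed (the branch name and local directory are just examples):

```python
# Minimal sketch: download one quantisation branch with huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Spicyboros-7B-2.2-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # any branch from the table above
    local_dir="Spicyboros-7B-2.2-GPTQ",      # example destination directory
)
```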

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Spicyboros-7B-2.2-GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/Spicyboros-7B-2.2-GPTQ:main`
  - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Spicyboros-7B-2.2-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")
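From there, generation can proceed along these lines; a hedged sketch using the prompt template from the front matter, with illustrative sampling values rather than recommendations:

```python
# Illustrative continuation: tokenize a templated prompt and sample a reply.
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template = f"A chat.\nUSER: {prompt}\nASSISTANT: \n"

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```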

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

### Overview

__Usage restriction: To use this model, you must agree to the following:__

- Some of the content that can be produced is "toxic"/"harmful", and contains profanity and other types of sensitive content.
- None of the content or views contained in the dataset or generated outputs necessarily align with my personal beliefs or opinions; they are simply text generated by LLMs and/or scraped from the web.
- Use with extreme caution, particularly in locations with less-than-free speech laws.
- You, and you alone, are responsible for having downloaded the model and generated outputs with it, and I am completely indemnified from any and all liabilities.

__Ok, now that the warning is out of the way...__

Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).

Highlights: