Update for Transformers GPTQ support

Files changed:
- README.md +53 -51
- config.json +35 -24
- gptq_model-4bit-64g.safetensors → model.safetensors +2 -2
- quantize_config.json +1 -1
README.md
CHANGED
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# StableBeluga 2 - GPTQ

- Model creator: [Stability AI](https://huggingface.co/stabilityai)
- Original model: [StableBeluga 2](https://huggingface.co/stabilityai/StableBeluga2)

## Description

This repo contains GPTQ model files for [Stability AI's StableBeluga 2](https://huggingface.co/stabilityai/StableBeluga2).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StableBeluga2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML)
* [Stability AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/StableBeluga2)

## Prompt template: Orca-Hashes
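
The template body is unchanged by this commit and so is elided from the diff. Based on the prompt format shown in the original model card below, it has this shape (the placeholders are illustrative):

```
### System:
{system_message}

### User:
{prompt}

### Assistant:
```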

Each separate quant is in a different branch. See below for instructions on fetching from branches.

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | None | True | 35.33 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 40.66 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-3bit--1g-actorder_True | 3 | None | True | 26.78 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| gptq-3bit-128g-actorder_False | 3 | 128 | False | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
| gptq-3bit-128g-actorder_True | 3 | 128 | True | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
| gptq-3bit-64g-actorder_True | 3 | 64 | True | 29.30 GB | False | AutoGPTQ | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_False | 4 | 128 | False | 36.65 GB | True | AutoGPTQ | 4-bit, without Act Order and group size 128g. |

## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/StableBeluga2-70B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True --single-branch https://huggingface.co/TheBloke/StableBeluga2-70B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
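
As an alternative to cloning, a branch can also be fetched from Python (a sketch, not from the original card; assumes the `huggingface_hub` package is installed):

```python
# Download a single quant branch; branch names come from the table above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/StableBeluga2-70B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print(local_dir)
```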

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/StableBeluga2-70B-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/StableBeluga2-70B-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `StableBeluga2-70B-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) 0.3.2 or later installed:

`GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"
model_basename = "model"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        inject_fused_attention=False,  # Required for Llama 2 70B models at this time.
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,   # closing arguments assumed; elided in this diff
        quantize_config=None)
```
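
The rest of the example is elided from this diff. A minimal continuation consistent with the Orca-Hashes prompt format might look like this (prompt wording and generation settings are illustrative, not from the card):

```python
# Build an Orca-Hashes style prompt and generate from the quantised model.
prompt = "Tell me about AI"
prompt_template = f"### System:\nYou are a helpful assistant.\n\n### User:\n{prompt}\n\n### Assistant:\n"

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```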

The files provided will work with AutoGPTQ (CUDA and Triton modes) and GPTQ-for-LLaMa.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

Donaters will get priority support on any and all AI/LLM/model questions and requests.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Stability AI's StableBeluga 2

# Stable Beluga 2

## Model Description

`Stable Beluga 2` is a Llama2 70B model finetuned on an Orca style Dataset.

## Usage

Start chatting with `Stable Beluga 2` using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga2", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "### System:\nYou are Stable Beluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"

message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # elided in the diff; standard tokenization step
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)  # max_new_tokens value assumed; truncated in the diff
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Stable Beluga 2 should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of Stable Beluga 2
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 2 is an auto-regressive language model fine-tuned on Llama2 70B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: The fine-tuned checkpoints (`Stable Beluga 2`) are licensed under the [STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT](https://huggingface.co/stabilityai/StableBeluga2/blob/main/LICENSE.txt)
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`

### Training Dataset

`Stable Beluga 2` is trained on our internal Orca-style dataset.

### Training Procedure

Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (BF16), and optimized with AdamW. We outline the following hyperparameters:

| Dataset | Batch Size | Learning Rate | Learning Rate Decay | Warm-up | Weight Decay | Betas |
| ------- | ---------- | ------------- | ------------------- | ------- | ------------ | ----- |
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |

## Ethical Considerations and Limitations

Beluga is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.

## Citations

```bibtex
@misc{touvron2023llama,
      title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
      author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
      year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```bibtex
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
config.json
CHANGED
```json
{
  "_name_or_path": "/fsx/dakota/orca/tmp_orca",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 28672,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.32.0.dev0",
  "use_cache": true,
  "vocab_size": 32000,
  "quantization_config": {
    "bits": 4,
    "group_size": 64,
    "damp_percent": 0.01,
    "desc_act": true,
    "sym": true,
    "true_sequential": true,
    "model_name_or_path": null,
    "model_file_base_name": "model",
    "quant_method": "gptq"
  }
}
```
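
The embedded `quantization_config` block is the point of this commit: with it present, recent Transformers releases (roughly v4.32 onwards, matching the `transformers_version` above) can load the GPTQ weights directly. A minimal sketch, not part of the commit itself; it assumes the `optimum` and `auto-gptq` packages are installed:

```python
# Transformers reads quantization_config from config.json and loads the
# GPTQ-quantised weights without AutoGPTQ-specific loading code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/StableBeluga2-70B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("### User:\nHello\n\n### Assistant:\n", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```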
gptq_model-4bit-64g.safetensors → model.safetensors
RENAMED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a8eb138779cf9d387bba9e6f88a73832f32a9b1b4c05d6a2d2d0482bc91662a3
+size 37989309776
```
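
The LFS pointer records the renamed file's SHA-256 and size (about 38 GB), which can be used to verify a completed download. A quick sketch, assuming the file sits in the current directory:

```python
# Verify the downloaded weights against the LFS oid recorded above.
import hashlib

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == "a8eb138779cf9d387bba9e6f88a73832f32a9b1b4c05d6a2d2d0482bc91662a3"
```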
quantize_config.json
CHANGED
```diff
@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name":
+  "model_file_base_name": "model"
 }
```
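
`model_file_base_name` has to match the weights filename, which is why it changes together with the rename above. A quick consistency check (a sketch; assumes the file layout shown in this commit):

```python
# Confirm the quantize config's basename points at the renamed weights file.
import json
import os

with open("quantize_config.json") as f:
    cfg = json.load(f)

assert os.path.exists(cfg["model_file_base_name"] + ".safetensors")  # model.safetensors
```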