---
base_model: https://huggingface.co/jondurbin/airoboros-l2-70b-2.2
datasets:
- jondurbin/airoboros-2.2
inference: false
license: llama2
model_creator: Jon Durbin
model_name: Airoboros L2 70B 2.2
model_type: llama
quantized_by: TheBloke
---
Refer to the Provided Files table below to see what files use which methods, and how.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-l2-70b-2.2.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q2_K.gguf) | Q2_K | 2 | 29.28 GB | 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-l2-70b-2.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB | 32.42 GB | very small, high quality loss |
| [airoboros-l2-70b-2.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB | 35.69 GB | very small, high quality loss |
| [airoboros-l2-70b-2.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB | 38.65 GB | small, substantial quality loss |
| [airoboros-l2-70b-2.2.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB | 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-l2-70b-2.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB | 41.57 GB | small, greater quality loss |
| [airoboros-l2-70b-2.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| [airoboros-l2-70b-2.2.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB | 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-l2-70b-2.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
| [airoboros-l2-70b-2.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF/blob/main/airoboros-l2-70b-2.2.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB | 51.25 GB | large, very low quality loss - recommended |
| airoboros-l2-70b-2.2.Q6_K.gguf | Q6_K | 6 | 56.59 GB | 59.09 GB | very large, extremely low quality loss |
| airoboros-l2-70b-2.2.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
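If you are unsure which file will fit on your machine, one quick sanity check (an illustrative sketch, not part of the original card; the third-party `psutil` package is an assumed dependency) is to compare the "Max RAM required" column against your currently available memory:

```python
import psutil  # third-party: pip install psutil

# "Max RAM required" figures from the table above, in GB (no GPU offload).
MAX_RAM_GB = {
    "Q2_K": 31.78, "Q3_K_S": 32.42, "Q3_K_M": 35.69, "Q3_K_L": 38.65,
    "Q4_0": 41.37, "Q4_K_S": 41.57, "Q4_K_M": 43.92,
    "Q5_0": 49.96, "Q5_K_S": 49.96, "Q5_K_M": 51.25,
    "Q6_K": 59.09, "Q8_0": 75.79,
}

available_gb = psutil.virtual_memory().available / 1024**3
for quant, needed_gb in MAX_RAM_GB.items():
    verdict = "fits" if needed_gb <= available_gb else "does NOT fit"
    print(f"{quant}: needs ~{needed_gb} GB, {verdict} ({available_gb:.1f} GB available)")
```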
<details>
  <summary>Click for instructions regarding Q6_K and Q8_0 files</summary>

### q6_K
Please download:
* `airoboros-l2-70b-2.2.Q6_K.gguf-split-a`
* `airoboros-l2-70b-2.2.Q6_K.gguf-split-b`

### q8_0
Please download:
* `airoboros-l2-70b-2.2.Q8_0.gguf-split-a`
* `airoboros-l2-70b-2.2.Q8_0.gguf-split-b`

To join the files, do the following:

Linux and macOS:
```
cat airoboros-l2-70b-2.2.Q6_K.gguf-split-* > airoboros-l2-70b-2.2.Q6_K.gguf && rm airoboros-l2-70b-2.2.Q6_K.gguf-split-*
cat airoboros-l2-70b-2.2.Q8_0.gguf-split-* > airoboros-l2-70b-2.2.Q8_0.gguf && rm airoboros-l2-70b-2.2.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B airoboros-l2-70b-2.2.Q6_K.gguf-split-a + airoboros-l2-70b-2.2.Q6_K.gguf-split-b airoboros-l2-70b-2.2.Q6_K.gguf
del airoboros-l2-70b-2.2.Q6_K.gguf-split-a airoboros-l2-70b-2.2.Q6_K.gguf-split-b

COPY /B airoboros-l2-70b-2.2.Q8_0.gguf-split-a + airoboros-l2-70b-2.2.Q8_0.gguf-split-b airoboros-l2-70b-2.2.Q8_0.gguf
del airoboros-l2-70b-2.2.Q8_0.gguf-split-a airoboros-l2-70b-2.2.Q8_0.gguf-split-b
```
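If `cat` or `COPY` are awkward in your environment, the same join can be done with a short cross-platform Python sketch (not from the original card; filenames assumed exactly as listed above):

```python
import shutil
from pathlib import Path

def join_split_gguf(stem: str) -> None:
    """Concatenate <stem>-split-a, <stem>-split-b, ... into <stem>."""
    parts = sorted(Path(".").glob(f"{stem}-split-*"))
    if not parts:
        raise FileNotFoundError(f"no split parts found for {stem}")
    with open(stem, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)  # streamed copy, low memory use
    # Mirroring the rm/del step is optional:
    # for part in parts: part.unlink()

join_split_gguf("airoboros-l2-70b-2.2.Q6_K.gguf")
```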

</details>
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m airoboros-l2-70b-2.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: {prompt}\nASSISTANT:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
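To script the same invocation, here is a minimal sketch (an illustration, not part of the original card) that fills the model's prompt template and shells out to `llama.cpp` via `subprocess`; the binary path, model path, and example prompt are assumptions:

```python
import subprocess

# The model's prompt template; "{prompt}" is replaced with the user message.
template = "A chat.\nUSER: {prompt}\nASSISTANT:"
full_prompt = template.format(prompt="Explain GGUF quantization in one paragraph.")

# Mirrors the command above; adjust -ngl for your GPU, or drop it for CPU-only.
subprocess.run([
    "./main",
    "-ngl", "32",                              # layers to offload to GPU
    "-m", "airoboros-l2-70b-2.2.Q4_K_M.gguf",  # model file from the table above
    "--color",
    "-c", "4096",                              # context length
    "--temp", "0.7",
    "--repeat_penalty", "1.1",
    "-n", "-1",                                # generate until end of sequence
    "-p", full_prompt,
], check=True)
```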
Simple example code to load one of these GGUF models with [ctransformers](https://github.com/marella/ctransformers) (installed with, for example, `CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers` for Metal acceleration on macOS):

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-L2-70b-2.2-GGUF", model_file="airoboros-l2-70b-2.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```
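`ctransformers` can also stream tokens as they are generated rather than returning one final string; a brief usage sketch with the same assumed file and settings:

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Airoboros-L2-70b-2.2-GGUF",
    model_file="airoboros-l2-70b-2.2.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 for CPU-only
)

# stream=True yields text fragments as they are generated.
for text in llm("AI is going to", stream=True):
    print(text, end="", flush=True)
```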