Commit d966725
Parent(s): 7e96183

Upload finetuned Llama-2-7b models (#1)

- Upload finetuned Llama-2-7b models (7b3757455b7fc5b3609078aafee903218f4dd52a)

Co-authored-by: Denis Kuznedelev <SpiridonSunRotator@users.noreply.huggingface.co>
README.md
CHANGED
@@ -6,12 +6,15 @@ Selected evaluation results for this and other models:
| Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
|------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
-| Llama-2-7b | 1x16 |
-| Llama-2-7b (THIS) | 2x8 |
+| Llama-2-7b | 1x16 | 5.92 | 2.4 | [Link](https://huggingface.co/BlackSamorez/Llama-2-7b-AQLM-2Bit-1x16-hf) |
+| Llama-2-7b (THIS) | 2x8 | 6.69 | 2.2 | [Link](https://huggingface.co/BlackSamorez/Llama-2-7b-AQLM-2Bit-2x8-hf) |
| Llama-2-7b | 8x8 | 7.83 | 2.2 | [Link](https://huggingface.co/BlackSamorez/Llama-2-7b-AQLM-2Bit-8x8-hf) |
| Llama-2-13b| 1x16 | 5.41 | 4.1 | [Link](https://huggingface.co/BlackSamorez/Llama-2-13b-AQLM-2Bit-1x16-hf)|
| Llama-2-70b| 1x16 | 3.96 | 18.8 | [Link](https://huggingface.co/BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf)|
| Llama-2-70b| 2x8 | 4.83 | 18.2 | [Link](https://huggingface.co/BlackSamorez/Llama-2-70b-AQLM-2Bit-2x8-hf) |
| Mixtral-8x7b| 1x16 | 4.37 | 12.6 | [Link](https://huggingface.co/BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf)|

+**UPD** (20.02.2024).
+We applied global finetuning on top of the quantized model and improved results compared to the first revision.
+
To learn more about inference, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
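For readers who want to try the updated checkpoint directly, here is a minimal inference sketch using `transformers`. It is an illustration under assumptions, not the repo's official instructions: it presumes the `aqlm` inference kernels are installed (`pip install aqlm[gpu]`), that `accelerate` is available for `device_map="auto"`, and that a CUDA GPU is present; the prompt and generation settings are arbitrary.

```python
# Minimal sketch, assuming `pip install aqlm[gpu] accelerate` and a
# transformers version that can load AQLM models (trust_remote_code is
# set for early AQLM revisions that ship custom modeling code).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "BlackSamorez/Llama-2-7b-AQLM-2Bit-2x8-hf"  # the model from this card

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype="auto",      # keep the stored dtypes of the quantized weights
    device_map="auto",       # place layers on the available GPU(s)
    trust_remote_code=True,  # needed by early AQLM model revisions
)

prompt = "The largest city in Europe is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping `repo` for any of the Hub links in the table above should work the same way, since those repositories ship AQLM-quantized weights in the same format.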