---
language:
- en
library_name: transformers
tags:
- auto-gptq
- AutoRound
license: apache-2.0
---


## Model Details

This is [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) quantized to 4-bit with [AutoRound](https://github.com/intel/auto-round/tree/main) (asymmetric quantization) and serialized in the GPTQ format. The model was created, tested, and evaluated by The Kaitchup.

Details on the quantization process and how to use the model are available here:
[The Best Quantization Methods to Run Llama 3.1 on Your GPU](https://newsletter.kaitchup.com/p/the-best-quantization-methods-to)
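
Because the checkpoint is serialized in the GPTQ format, it can typically be loaded directly with `transformers` once a GPTQ backend (e.g., `optimum` with `auto-gptq`, or GPTQModel) is installed. A minimal loading sketch follows; the repository ID is a hypothetical placeholder, not necessarily this model's actual Hub ID:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository ID for illustration; replace with this model's actual Hub ID.
model_id = "kaitchup/Qwen2.5-1.5B-AutoRound-GPTQ-4bit"

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```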

It is possible to fine-tune an adapter on top of it following the QLoRA methodology. More about this here:
[QLoRA with AutoRound: Cheaper and Better LLM Fine-tuning on Your GPU](https://newsletter.kaitchup.com/p/qlora-with-autoround-cheaper-and)
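
A minimal QLoRA-style setup with `peft` might look like the sketch below. The repository ID and LoRA hyperparameters are illustrative assumptions, not the exact configuration used in the article:

```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical repository ID for illustration; replace with this model's actual Hub ID.
model_id = "kaitchup/Qwen2.5-1.5B-AutoRound-GPTQ-4bit"

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The quantized GPTQ weights stay frozen; only the LoRA adapter parameters are trained.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The resulting adapter can then be trained with any standard causal-LM training loop or trainer.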

I used these hyperparameters for quantization:

```
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Load the base model to quantize
model_name = "Qwen/Qwen2.5-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

bits, group_size = 4, 128
autoround = AutoRound(model, tokenizer, nsamples=512, iters=1000, low_gpu_mem_usage=False, bits=bits, group_size=group_size)

autoround.quantize()
output_dir = "./tmp_autoround"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```

Evaluation results:

![arc_challenge, musr, gpqa, mmlu_pro, mmlu….png](https://cdn-uploads.huggingface.co/production/uploads/64b93e6bd6c468ac7536607e/ExiQHtJf981JcUsHcbZW9.png)


- **Developed by:** [The Kaitchup](https://newsletter.kaitchup.com/)
- **Language(s) (NLP):** English
- **License:** Apache 2.0