Update README.md
README.md CHANGED
@@ -5,7 +5,7 @@ inference: false
 pipeline_tag: text-generation
 ---
 
-This is an experimental <a href="https://github.com/mobiusml/hqq/">HQQ</a> 2-bit quantized <a href="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf">Llama2-7B-chat model</a> using a low-rank adapter to improve the performance (referred to as HQQ
+This is an experimental <a href="https://github.com/mobiusml/hqq/">HQQ</a> 2-bit quantized <a href="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf">Llama2-7B-chat model</a> using a low-rank adapter to improve the performance (referred to as <a href="https://mobiusml.github.io/1bit_blog/">HQQ+</a>).
 
 Quantizing small models at extreme low-bits is a challenging task. The purpose of this model is to show the community what to expect when fine-tuning such models.
 We notice that, when given more specialized data, the low-bit model can even outperform the full-precision model at some tasks.
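
A minimal usage sketch, assuming the hqq library's Hugging Face engine (`hqq.engine.hf`); the repo id is a placeholder for this model card's repo, and the exact loading API may vary across hqq versions:

```python
# Minimal sketch, assuming hqq's Hugging Face engine. The repo id is a
# placeholder: substitute this model card's actual repo id.
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder, not a real repo id

# from_quantized restores the 2-bit HQQ weights published in the repo.
model = HQQModelForCausalLM.from_quantized(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Standard Llama-2 chat prompt format; adjust device placement as needed.
prompt = "[INST] What is quantization? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```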