Update README.md
---
datasets:
- tctsung/chat_restaurant_recommendation
pipeline_tag: text-generation
widget:
- text: "Accuracy degradation after AWQ quantization"
  output:
    url: "https://github.com/tctsung/LLM_quantize/blob/main/evaluation/Accuracy_degradation.png"
---

This model was quantized with the AutoAWQ package, using `tctsung/chat_restaurant_recommendation` as the calibration dataset.

For more details, see the GitHub repo [tctsung/LLM_quantize](https://github.com/tctsung/LLM_quantize.git).
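
The quantization script itself lives in that repo; the sketch below only illustrates what an AutoAWQ run with this calibration dataset can look like. The base model id, the `quant_config` values, and the output path are assumptions for illustration, not taken from this card.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Assumed base model; this card only names the quantized artifact.
base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
quant_path = "TinyLlama-1.1B-chat-v1.0-awq"

# Typical 4-bit AWQ settings; the exact values used for this model are in the repo.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Calibrate on the chat dataset (AutoAWQ reads the dataset's "text" column by default).
model.quantize(
    tokenizer,
    quant_config=quant_config,
    calib_data="tctsung/chat_restaurant_recommendation",
)

# Save the quantized weights and the tokenizer side by side.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```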

## Key results

1. AWQ quantization delivers a **1.62x improvement** in inference speed, generating **140.47 new tokens per second** (see the measurement sketch after this list).
2. The model size shrinks from 4.4 GB to 0.78 GB, a memory footprint of only **17.57%** of the original model.
3. Across 6 different LLM evaluation tasks, the quantized model maintains comparable accuracy, with a maximum degradation of only ~1%.
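
The benchmark code behind the throughput number is in the repo as well; the snippet below is only a rough sketch of how new-tokens-per-second can be measured with vLLM. The prompts, batch size, and `max_tokens` budget are placeholders, not the actual benchmark setup.

```python
import time
from vllm import LLM, SamplingParams

model = LLM(model="tctsung/TinyLlama-1.1B-chat-v1.0-awq", dtype='half',
            quantization='awq', gpu_memory_utilization=0.9)
sampling_params = SamplingParams(temperature=1.0, max_tokens=512)

# Placeholder prompts; the real benchmark prompts live in the repo.
prompts = ["Recommend a restaurant for a quiet dinner for two."] * 8

start = time.perf_counter()
outputs = model.generate(prompts, sampling_params)
elapsed = time.perf_counter() - start

# Count only newly generated tokens, then report throughput.
new_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{new_tokens / elapsed:.2f} new tokens/sec")
```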

## Inference tutorial

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# Load the quantized model & tokenizer:
model_id = "tctsung/TinyLlama-1.1B-chat-v1.0-awq"
model = LLM(model=model_id, dtype='half',
            quantization='awq', gpu_memory_utilization=0.9)
sampling_params = SamplingParams(temperature=1.0,
                                 max_tokens=1024,
                                 min_p=0.5,
                                 top_p=0.85)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Define your own system & user messages:
sys_msg = "..."
user_msg = "..."
chat_msg = [
    {"role": "system", "content": sys_msg},
    {"role": "user", "content": user_msg}
]

# Turn the chat into a prompt string; add_generation_prompt=True appends the
# assistant turn so the model starts its reply:
input_text = tokenizer.apply_chat_template(chat_msg, tokenize=False, add_generation_prompt=True)
output = model.generate(input_text, sampling_params)
output_text = output[0].outputs[0].text
print(output_text)  # show the model output
```
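
`model.generate` also accepts a list of prompt strings, so several chat prompts can be batched into a single call and vLLM will schedule them together.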