---
license: llama3.1
datasets:
- NeelNanda/pile-10k
base_model:
- meta-llama/Llama-3.1-70B-Instruct
---
## Model Card Details

This model is an int4 model with group_size 128 and symmetric quantization of [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct), generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `90c15db` to use the AutoGPTQ format.
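
To use the AutoGPTQ-format weights directly, a minimal sketch of pinning that revision (assuming the `90c15db` revision carries the GPTQ-format checkpoint as noted above, and that a GPTQ backend such as `optimum`/`auto-gptq` is installed) could look like this:

```python
# Minimal sketch: pin the revision to load the AutoGPTQ-format weights.
# The revision hash comes from the note above; the GPTQ backend requirement is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_dir = "OPEA/Meta-Llama-3.1-70B-Instruct-int4-sym-inc"
model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    revision="90c15db",  # AutoGPTQ-format revision
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, revision="90c15db")
```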

## Inference on CPU/HPU/CUDA

HPU: a Docker image with the Gaudi Software Stack is recommended. More details on environment setup can be found in the [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built).

```python
from auto_round import AutoHfQuantizer  ## must import for the auto-round format
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
quantized_model_dir = "OPEA/Meta-Llama-3.1-70B-Instruct-int4-sym-inc"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype='auto',
    device_map="auto",
    ##revision="90c15db", ##AutoGPTQ format
)

##import habana_frameworks.torch.core as htcore ## uncomment for HPU
##import habana_frameworks.torch.hpu as hthpu ## uncomment for HPU
##model = model.to(torch.bfloat16).to("hpu") ## uncomment for HPU

prompt = "There is a girl who likes adventure,"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=200,  ## adjust to align with the official usage
    do_sample=False  ## adjust to align with the official usage
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

##prompt = "There is a girl who likes adventure,"
##BF16 
"""That sounds exciting. What kind of adventures is she interested in? Is she more into outdoor activities like hiking, rock climbing, or exploring new places, or does she enjoy indoor adventures like solving puzzles, playing escape rooms, or reading fantasy novels?
"""
##INT4
"""That sounds exciting. What kind of adventures is she interested in? Is she more into outdoor activities like hiking, camping, or exploring new places, or is she drawn to thrilling experiences like skydiving, bungee jumping, or trying new extreme sports?
"""

##prompt = "Which one is larger, 9.11 or 9.8"
## INT4
"""9.11 is larger than 9.8."""

## BF16
"""9.11 is larger than 9.8."""

prompt = "How many r in strawberry."
## INT4
"""There are 2 R's in the word "strawberry""
## BF16
"""There are 2 R's in the word "strawberry"."""

##prompt = "Once upon a time,"
## INT4
"""It sounds like you're starting a story. Would you like me to continue it, or would you like to tell me the rest of the story yourself?
"""
## BF16 
"""it seems like we're about to start a classic fairy tale. Would you like to continue the story, or would you like me to take over and spin a yarn for you?
"""

```
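
The greedy settings above (`do_sample=False`) keep the BF16/INT4 comparison deterministic. To align with typical chat usage instead, sampling can be enabled; the values below assume the sampling defaults shipped in the upstream Llama 3.1 `generation_config.json`, so verify them locally:

```python
# Sketch: sampling-based generation, continuing from the script above.
# temperature/top_p are assumed from the upstream generation config; verify before relying on them.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
```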

### Evaluate the model

```bash
pip3 install lm-eval==0.4.5
```

```bash
auto-round --eval --model "OPEA/Meta-Llama-3.1-70B-Instruct-int4-sym-inc" --eval_bs 16 --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k
```
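
Alternatively, the harness can be driven from Python; this is a minimal sketch using lm-eval's `simple_evaluate` API on a task subset, importing `auto_round` first so `transformers` can load the quantized checkpoint:

```python
# Sketch: programmatic evaluation with lm-eval (task subset for a quick sanity check).
from auto_round import AutoHfQuantizer  # registers the auto-round format with transformers
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OPEA/Meta-Llama-3.1-70B-Instruct-int4-sym-inc,dtype=auto",
    tasks=["lambada_openai", "piqa", "winogrande"],
    batch_size=16,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```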

| Metric                      | BF16                     | INT4                      |
| --------------------------- | ------------------------ | ------------------------- |
| avg                         | 0.69565                  | 0.6945                    |
| leaderboard_mmlu_pro (5-shot) | 0.5309                 | 0.5226                    |
| leaderboard_ifeval          | 0.7582=(0.8010+0.7153)/2 | 0.75725=(0.8010+0.7135)/2 |
| lambada_openai              | 0.7557                   | 0.7572                    |
| hellaswag                   | 0.6516                   | 0.6467                    |
| winogrande                  | 0.7861                   | 0.8098                    |
| piqa                        | 0.8313                   | 0.8243                    |
| truthfulqa_mc1              | 0.4064                   | 0.4027                    |
| openbookqa                  | 0.3700                   | 0.3620                    |
| boolq                       | 0.8783                   | 0.8761                    |
| arc_easy                    | 0.8670                   | 0.8590                    |
| arc_challenge               | 0.6237                   | 0.6101                    |
| gsm8k (5-shot) strict match | 0.8886                   | 0.9067                    |

## Reproduce the model

Here is a sample command to reproduce the model. We found auto-round to be unstable for this model; avoid combining `--model_dtype "fp16"` with symmetric quantization.

```bash
auto-round  \
--model  meta-llama/Meta-Llama-3.1-70B-Instruct \
--device 0 \
--group_size 128 \
--nsamples 512 \
--bits 4 \
--iter 1000 \
--disable_eval \
--low_gpu_mem_usage \
--format 'auto_round' \
--output_dir "./tmp_autoround" 
```
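
The same run can be expressed through auto-round's Python API. The sketch below mirrors the CLI flags above; parameter names follow the auto-round README, so verify them against your installed version:

```python
# Sketch: reproduce the quantization via the AutoRound Python API (mirrors the CLI flags above).
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "meta-llama/Meta-Llama-3.1-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    nsamples=512,
    iters=1000,
    low_gpu_mem_usage=True,
)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_round")
```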


## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

- [Intel Neural Compressor](https://github.com/intel/neural-compressor)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Cite

```
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arXiv](https://arxiv.org/abs/2309.05516) · [GitHub](https://github.com/intel/auto-round)