---
inference: false
---
## Vicuna 33B v1.3 (4-bit 128g AWQ Quantized)
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
This model is a 4-bit, group size 128 AWQ-quantized version of Vicuna 33B v1.3. For more information about AWQ quantization, see the [llm-awq repository](https://github.com/mit-han-lab/llm-awq).
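To make the "4-bit, group size 128" setting concrete, the sketch below shows plain zero-point group-wise quantization: each group of 128 weights shares one scale and one zero point. It is illustrative only; the actual AWQ method additionally rescales salient weight channels based on activation statistics before this step, and the function name here is hypothetical.
```python
import torch

def groupwise_quantize(w: torch.Tensor, w_bit: int = 4, group_size: int = 128):
    """Illustrative zero-point group-wise quantization (not the full AWQ
    algorithm, which first applies activation-aware channel scaling)."""
    shape = w.shape
    w = w.reshape(-1, group_size)                    # one row per group of 128 weights
    qmax = 2 ** w_bit - 1                            # 15 integer levels for 4-bit
    w_min = w.amin(dim=1, keepdim=True)
    w_max = w.amax(dim=1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-5) / qmax   # one scale per group
    zero = (-w_min / scale).round()                  # one zero point per group
    q = (w / scale + zero).round().clamp(0, qmax)    # 4-bit integer codes
    w_dq = (q - zero) * scale                        # dequantized approximation
    return q.reshape(shape), w_dq.reshape(shape)

w = torch.randn(4096, 4096)
q, w_dq = groupwise_quantize(w)
print((w - w_dq).abs().mean())                       # small reconstruction error
```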
## Model Date
July 14, 2023
## Model License
Please refer to the original Vicuna model license ([link](https://huggingface.co/lmsys/vicuna-33b-v1.3)).
Please refer to the AWQ license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).
## CUDA Version
This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of `8.0` or higher.
For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image ships CUDA runtime v12.1 but otherwise matches the configuration above, and has also been verified to work.
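Before installing the kernels, it may help to confirm that your GPU meets this requirement; the check below uses only standard PyTorch calls:
```python
import torch

# AWQ's 4-bit kernels require compute capability >= 8.0 (Ampere or newer).
major, minor = torch.cuda.get_device_capability()
assert (major, minor) >= (8, 0), f"Got compute capability {major}.{minor}, need >= 8.0"
```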
## How to Use
```bash
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout ce4a6bb1c238c014a06672cb74f6865573494d66 \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```
```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download

model_name = "abhinavkulkarni/lmsys-vicuna-33b-v1.3-w4-g128-awq"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
try:
    tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

# Model: build an empty (meta-device) model, replace its linear layers with
# 4-bit quantized placeholders, then load the quantized checkpoint onto GPUs.
w_bit = 4
q_config = {
    "zero_point": True,   # asymmetric quantization with per-group zero points
    "q_group_size": 128,  # one scale/zero-point pair per 128 weights
}

load_quant = snapshot_download(model_name)

with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config,
                                             torch_dtype=torch.float16, trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()

model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")

# Inference
prompt = '''What is the difference between nuclear fusion and fission?
###Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer)
```
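The `TextStreamer` prints tokens as they are generated. To capture the completion as a string instead, standard `transformers` usage is to strip the prompt tokens and decode the rest:
```python
# `output` contains the prompt tokens followed by the completion.
completion = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(completion)
```
Note that Vicuna v1.3 was trained on conversations in a `USER: ... ASSISTANT:` template, so wrapping the question in that format may yield better responses than the bare `###Response:` prompt above.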
## Evaluation
This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness).
[vicuna-33b-v1.3](https://huggingface.co/lmsys/vicuna-33b-v1.3)
|  Task  |Version|     Metric    | Value|
|--------|------:|---------------|-----:|
|wikitext|      1|word_perplexity|9.8210|
|        |       |byte_perplexity|1.5330|
|        |       |bits_per_byte  |0.6163|
[vicuna-33b-v1.3 (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/lmsys-vicuna-33b-v1.3-w4-g128-awq)
|  Task  |Version|     Metric    | Value|
|--------|------:|---------------|-----:|
|wikitext|      1|word_perplexity|9.9924|
|        |       |byte_perplexity|1.5380|
|        |       |bits_per_byte  |0.6210|
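As a sanity check, the wikitext metrics are internally consistent: `bits_per_byte` is the base-2 logarithm of `byte_perplexity`. A quick verification using the table values above:
```python
import math

# bits_per_byte = log2(byte_perplexity) for both the fp16 and AWQ results
for name, byte_ppl, bpb in [("fp16", 1.5330, 0.6163), ("awq w4-g128", 1.5380, 0.6210)]:
    assert abs(math.log2(byte_ppl) - bpb) < 1e-3
    print(f"{name}: log2({byte_ppl}) = {math.log2(byte_ppl):.4f}")
```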
## Acknowledgements
The model was quantized using the AWQ technique. If you find AWQ useful or relevant to your research, please cite the paper:
```
@article{lin2023awq,
title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
journal={arXiv},
year={2023}
}
```