# QServe benchmarks

This Hugging Face repository contains the configurations and tokenizer files for all models benchmarked in our [QServe](https://github.com/mit-han-lab/qserve) project:

- Llama-3-8B
- Llama-2-7B
- Llama-2-13B
- Llama-2-70B
- Llama-30B
- Mistral-7B
- Yi-34B
- Qwen1.5-72B

Please clone this repository if you wish to run our QServe benchmark code without downloading the full model weights.
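
For example, here is a minimal sketch of fetching these files with the `huggingface_hub` Python client instead of a full git clone; the `repo_id` below is a placeholder and should be replaced with this repository's actual identifier on the Hub.

```python
# Minimal sketch: download only the config/tokenizer files in this repo,
# avoiding the full model weights.
# NOTE: "mit-han-lab/qserve-benchmarks" is a placeholder repo_id --
# substitute this repository's actual identifier.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="mit-han-lab/qserve-benchmarks",
    local_dir="qserve-benchmarks",
)
print(f"Benchmark configs and tokenizers downloaded to {local_path}")
```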

Please consider citing our paper if you find it helpful:

```
@article{lin2024qserve,
  title={QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving},
  author={Lin*, Yujun and Tang*, Haotian and Yang*, Shang and Zhang, Zhekai and Xiao, Guangxuan and Gan, Chuang and Han, Song},
  year={2024}
}
```