# QServe benchmarks
This Hugging Face repository contains the configurations and tokenizer files for all models benchmarked in our QServe project:
- Llama-3-8B
- Llama-2-7B
- Llama-2-13B
- Llama-2-70B
- Llama-30B
- Mistral-7B
- Yi-34B
- Qwen1.5-72B
Please clone this repository if you wish to run our QServe benchmark code without downloading the full model weights.

Please consider citing our paper if you find it helpful:
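
As a minimal sketch of that workflow (the repository URL is not stated here, so `<org>/<repo>` below is a placeholder to be replaced with this repo's actual Hugging Face path):

```shell
# Placeholder URL: substitute this repository's actual Hugging Face path for <org>/<repo>.
git clone https://huggingface.co/<org>/<repo>

# The clone carries only per-model configuration and tokenizer files,
# so it stays small compared to pulling full model weights.
```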
```bibtex
@article{lin2024qserve,
  title={QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving},
  author={Lin*, Yujun and Tang*, Haotian and Yang*, Shang and Zhang, Zhekai and Xiao, Guangxuan and Gan, Chuang and Han, Song},
  year={2024}
}
```