
This is a 2-bit quantization of Qwen/Qwen-72B-Chat using QuIP#.

Random samples from C4 are used as calibration data.

For Chinese-related tasks, please use the zh branch instead, which uses bilingual text from C4 and SkyPile as calibration data.

## Model loading

Please follow the instructions in QuIP-for-all for usage.

As an alternative, you can use the vLLM branch for faster inference. QuIP# has to launch roughly five kernels for each linear layer, so letting vLLM use CUDA graphs to reduce the kernel-launch overhead is very helpful. If you have trouble installing fast-hadamard-transform from pip, you can also install it from source.
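As a rough illustration, loading and generating with vLLM might look like the sketch below. This is only an assumption-laden example: it presumes the QuIP-enabled vLLM branch linked above is installed (stock vLLM does not support QuIP#), and the exact arguments may differ in that branch.

```python
from vllm import LLM, SamplingParams

# Hedged sketch: assumes the QuIP-enabled vLLM branch is installed.
# CUDA graphs are enabled by default (enforce_eager=False), which matters here
# because QuIP# launches several kernels per linear layer.
llm = LLM(
    model="keyfan/Qwen-72B-Chat-2bit",
    trust_remote_code=True,  # Qwen repos ship custom modeling/tokenizer code
)

params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
outputs = llm.generate(
    ["Give me a short introduction to large language models."], params
)
print(outputs[0].outputs[0].text)
```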

## Perplexity

Measured on WikiText with a context length of 4096.

| fp16   | 2-bit  |
|--------|--------|
| 5.8438 | 6.9492 |
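For reference, perplexity is commonly measured by concatenating the test set, splitting it into non-overlapping 4096-token chunks, and averaging the per-token negative log-likelihood. The sketch below only illustrates that procedure; it assumes the checkpoint can be loaded through `transformers` (per the QuIP-for-all instructions) and that WikiText-2 was the dataset, and is not necessarily the exact script used for the numbers above.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "keyfan/Qwen-72B-Chat-2bit"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

# Concatenate the test split into one long string (assumption: WikiText-2).
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids

ctx = 4096
nlls = []
for start in range(0, input_ids.size(1) - ctx, ctx):
    chunk = input_ids[:, start : start + ctx].to(model.device)
    with torch.no_grad():
        # labels=chunk yields the mean next-token negative log-likelihood
        loss = model(chunk, labels=chunk).loss
    nlls.append(loss.float())

ppl = torch.exp(torch.stack(nlls).mean())
print(f"perplexity: {ppl.item():.4f}")
```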

## Speed

Latency and throughput are measured with vLLM (`examples/benchmark_latency.py` and `examples/benchmark_throughput.py`, respectively) on a single A100-80G.

Latency at batch size 1: 13.5 tokens/s.

Throughput: 0.77 requests/s.
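If you want a quick sanity check without the benchmark scripts, a rough batch-size-1 timing loop like the hypothetical sketch below can approximate the tokens/s figure. It again assumes the QuIP-enabled vLLM branch and a single A100-80G.

```python
import time
from vllm import LLM, SamplingParams

# Rough batch-size-1 decode-speed check; a hedged stand-in for the
# benchmark_latency.py script, not the script itself.
llm = LLM(model="keyfan/Qwen-72B-Chat-2bit", trust_remote_code=True)
params = SamplingParams(temperature=0.0, max_tokens=256, ignore_eos=True)

start = time.perf_counter()
out = llm.generate(["Benchmark prompt"], params)[0]
elapsed = time.perf_counter() - start

gen_tokens = len(out.outputs[0].token_ids)
print(f"{gen_tokens / elapsed:.1f} tokens/s")
```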
