
Minami-su/Yi_34B_Chat_2bit can run on a GPU with 11 GB of memory. It is quantized with QuIP#, a weights-only quantization method that achieves near-fp16 performance using only 2 bits per weight.

QuIP# repository: https://github.com/Cornell-RelaxML/quip-sharp/tree/release20231203
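Below is a minimal sketch of loading the model and chatting with it through Hugging Face transformers. It assumes the repository ships the custom QuIP# dequantization code (hence `trust_remote_code=True`) and that the tokenizer registers Yi's chat template; the quip-sharp repository linked above may instead provide its own loading scripts.

```python
# Hypothetical loading sketch; the exact entry point depends on how the
# QuIP# weights are packaged in this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Minami-su/Yi_34B_Chat_2bit"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # non-quantized tensors are stored in fp16
    device_map="auto",          # should fit on a single ~11 GB GPU per the card
    trust_remote_code=True,
)

# Build a chat prompt with the tokenizer's registered chat template.
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```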

Model size: 5.11B params (Safetensors)
Tensor types: FP16 · I16