Quantization?

#1 opened by YaTharThShaRma999

Is it possible to quantize this model to gptq or gguf? Also how much context can this model handle?

Thanks for releasing such a model(:

> Is it possible to quantize this model to gptq or gguf? Also how much context can this model handle?
>
> Thanks for releasing such a model(:

@TheBloke added it to the queue. It should be done soon, assuming no errors come up.

Could someone maybe quantize this model? I really want to try it out. Thanks

(Edit: now merged.) You can use this pull request if you want to quantize to GGUF: https://github.com/ggerganov/llama.cpp/pull/3943
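
For anyone who wants to do the conversion themselves, here is a minimal sketch of the flow, assuming a llama.cpp checkout that includes that PR. The script and binary names (`convert.py`, `quantize`) match llama.cpp around the time of this discussion and may differ in newer checkouts; the paths are placeholders:

```python
import subprocess

# Convert the Hugging Face checkpoint to an fp16 GGUF file.
# Assumes the weights were downloaded to ./Yi-34B and this runs
# from the root of a llama.cpp checkout.
subprocess.run(
    ["python", "convert.py", "./Yi-34B",
     "--outtype", "f16", "--outfile", "yi-34b-f16.gguf"],
    check=True,
)

# Quantize the fp16 file down to 4-bit; q4_K_M is a common
# size/quality trade-off.
subprocess.run(
    ["./quantize", "yi-34b-f16.gguf", "yi-34b-q4_K_M.gguf", "q4_K_M"],
    check=True,
)
```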

TheBloke is now uploading the quants.

GPTQ quants can be found here: https://huggingface.co/TheBloke/Yi-34B-GPTQ
GGUF quants can be found here: https://huggingface.co/TheBloke/Yi-34B-GGUF
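
If you want a quick way to try them, something like this should work. It's just a sketch: it assumes `transformers` with `auto-gptq` installed for the GPTQ repo, and `llama-cpp-python` for the GGUF files, and the GGUF file name below is a placeholder for whichever quant you download:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPTQ: transformers reads the quantization config from the repo and
# dispatches to auto-gptq. trust_remote_code may or may not be needed,
# depending on whether the revision still ships Yi's custom modeling code.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Yi-34B-GPTQ", device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "TheBloke/Yi-34B-GPTQ", trust_remote_code=True
)

inputs = tokenizer("The Yi series of models are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```

```python
from llama_cpp import Llama

# GGUF: load one of the downloaded quant files with llama-cpp-python.
llm = Llama(model_path="yi-34b.Q4_K_M.gguf", n_ctx=4096)
print(llm("The Yi series of models are", max_tokens=32)["choices"][0]["text"])
```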

Maybe the creators of this model can link these in the readme, as this would make the model much more accessible for us.

Thank you, @CyberTimon!

> Maybe the creators of this model can link these in the readme, as this would make the model much more accessible for us.

Could you create a PR at https://github.com/01-ai/Yi ?


> Also how much context can this model handle?

The relevant information has already been added to the README.
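
For reference, you can also read the advertised context window straight off the model config. A quick sketch, assuming the repo id is `01-ai/Yi-34B` (not stated explicitly in this thread):

```python
from transformers import AutoConfig

# max_position_embeddings is the context window the model is configured for.
# trust_remote_code may be needed if the repo ships custom modeling code.
config = AutoConfig.from_pretrained("01-ai/Yi-34B", trust_remote_code=True)
print(config.max_position_embeddings)
```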


I'm closing this for now. For any further questions or feature requests, feel free to open a new issue here.

FancyZhao changed discussion status to closed
