My alternate quantizations.

#16 opened by ZeroWw

You can find them here: https://huggingface.co/ZeroWw/aya-23-8B-GGUF

These are not the usual quants: the output and embedding tensors are kept at f16, while the remaining tensors are quantized at q5, q6, or q8.

The result is a smaller file with almost no quality degradation, even at q5_k.
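For anyone wanting to reproduce this kind of mixed-precision quant, a minimal sketch using llama.cpp's `llama-quantize` tool follows. It assumes you already have an f16 GGUF of the model (the file names here are placeholders, not the actual files in the repo), and uses the `--output-tensor-type` and `--token-embedding-type` flags to pin those tensors at f16 while the rest are quantized:

```shell
# Sketch only: keep output and token-embedding tensors at f16,
# quantize everything else to Q5_K. Paths/names are hypothetical.
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  aya-23-8B-f16.gguf \
  aya-23-8B-q5_k.gguf \
  Q5_K_M
```

Repeating the last step with `Q6_K` or `Q8_0` would give the q6 and q8 variants described above.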

Cohere For AI org

That's great, thanks for sharing @ZeroWw! We are going to launch an initiative this week, called Expedition Aya, to build projects using Aya models. You should consider joining if possible! :)
