Any chance of a q4_2 or q4_0 ggml quantization?

#4 · opened by spanielrassler

I'm running on an M2 Pro, and 30B with q4_1 is just out of my reach in terms of speed and memory, but those other quant formats work great.

Thanks for all the work you're doing for this community BTW!!

q4_0 and q5_0, yeah. Both are more efficient than q4_1 and fit into bloated (cough cough, Windows) 32 GB RAM systems more easily.
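For anyone wondering where the size difference comes from: the legacy ggml formats quantize in 32-weight blocks, with one fp16 scale per block for q4_0/q5_0 and an extra fp16 offset for q4_1/q5_1. Here's a rough back-of-the-envelope sketch of file sizes for a ~30B-parameter model (the parameter count is an assumption, and real files carry some extra metadata):

```python
# Rough size estimate for legacy ggml quant formats.
# Block layout (32 weights per block):
#   q4_0: 4-bit weights + fp16 scale             -> (32*4 + 16) / 32 = 4.5 bpw
#   q4_1: 4-bit weights + fp16 scale + fp16 min  -> (32*4 + 32) / 32 = 5.0 bpw
#   q5_0: 5-bit weights + fp16 scale             -> (32*5 + 16) / 32 = 5.5 bpw
#   q5_1: 5-bit weights + fp16 scale + fp16 min  -> (32*5 + 32) / 32 = 6.0 bpw

PARAMS = 30e9  # assumed parameter count for a "30B" model (approximate)

BITS_PER_WEIGHT = {"q4_0": 4.5, "q4_1": 5.0, "q5_0": 5.5, "q5_1": 6.0}

for name, bpw in BITS_PER_WEIGHT.items():
    gib = PARAMS * bpw / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{name}: ~{gib:.1f} GiB")
```

That works out to roughly 15.7 GiB for q4_0 vs 17.5 GiB for q4_1, before KV cache and OS overhead, which is exactly the margin that decides whether a 30B model is usable on a RAM-constrained box.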

I'm updating the model with the latest version of Open Assistant. I'll refresh the ggml quants and add q5_0.

Any chance for a q5_1 version?

Thanks for the new quants!!
