Add GGUF model file for llama.cpp (f16)

#3
No description provided.
mixedbread ai org
edited Mar 9

Thank you for the PR! Would be lovely if you could move the gguf file to gguf/mxbai-embed-large-v1-f16.gguf :)

Aamir.
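
(For reference, a hedged sketch of how such a move could be scripted with the huggingface_hub Python client; the repo id, the original file location, and the presence of a local copy of the file are assumptions here, and plain git with `git mv` works just as well.)

```python
from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete

# Assumed repo id and paths -- adjust to the actual repository layout.
repo_id = "mixedbread-ai/mxbai-embed-large-v1"
old_path = "mxbai-embed-large-v1-f16.gguf"        # assumed current location in the repo
new_path = "gguf/mxbai-embed-large-v1-f16.gguf"   # requested location
local_copy = "mxbai-embed-large-v1-f16.gguf"      # assumed local copy of the file

# huggingface_hub has no direct rename operation, so a "move" here is an add
# under the new path plus a delete of the old path, done in a single commit.
HfApi().create_commit(
    repo_id=repo_id,
    operations=[
        CommitOperationAdd(path_in_repo=new_path, path_or_fileobj=local_copy),
        CommitOperationDelete(path_in_repo=old_path),
    ],
    commit_message="Move GGUF file to gguf/ subfolder",
)
```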

Sorry, missed this earlier! Just pushed an update.

mixedbread ai org

Great! Thanks @iamlemec :)

juliuslipp changed pull request status to merged

Why quantize to fp16? Being fp16 will not improve speed or memory usage in any way; besides, the original fp16 model can also be used on the CPU, while the quantization penalty degrades quality, even if only a little, so why take the hit without any benefit? Why not quantize to int5 or int2 instead (Cohere has probably succeeded at 1-bit quantization, or maybe they retrained the model with 1-bit precision, I don't know)?

mixedbread ai org

?

Well, fp16 will definitely decrease memory utilization relative to fp32, and at least on GPU will yield speed increases. The quality loss is indeed minimal. Since these models are pretty small, I haven't seen the need to go to lower quantization levels, but perhaps there are applications on small devices where this would be desirable.
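
For a rough sense of the memory side, here is a back-of-the-envelope sketch; the ~335M parameter count for mxbai-embed-large-v1 and the bytes-per-weight figures for the llama.cpp quant types are assumptions/approximations, not numbers from this thread:

```python
# Back-of-the-envelope weight storage for different GGUF precisions.
# PARAMS is an assumed parameter count (~335M, BERT-large-class model);
# bytes-per-weight values are approximate averages for each format.
PARAMS = 335_000_000

bytes_per_weight = {
    "f32":  4.0,
    "f16":  2.0,
    "q8_0": 34 / 32,   # 32 weights per block: 32 bytes of quants + 2-byte scale
    "q4_0": 18 / 32,   # 32 weights per block: 16 bytes of quants + 2-byte scale
}

for name, bpw in bytes_per_weight.items():
    print(f"{name:>5}: ~{PARAMS * bpw / 1e9:.2f} GB")
```

Even f16 roughly halves the footprint relative to f32, and the lower-bit formats shrink it further at the cost of some quality.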

With regard to the Cohere 1-bit quantization, keep in mind that they are talking about quantizing the output vectors, while here we are talking about quantizing the model weights. Regardless of how the model is quantized, we'll still get fp32 embedding vectors out, and it can definitely be useful to quantize those to reduce storage needs. I believe the new Cohere models are specially trained to be able to yield good quality even with 1-bit quantization of the embeddings. Going down to 1 bit on other models will usually result in pretty large quality losses.
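
To make that distinction concrete, here is a toy numpy sketch of 1-bit quantization applied to the output vectors (keep the sign of each dimension and pack the bits); the dimensionality and data are made up and not specific to this model:

```python
import numpy as np

# Toy illustration: binarize fp32 *output* embeddings to 1 bit per dimension.
# The 1024-dim vectors below are random stand-ins for real embeddings.
rng = np.random.default_rng(0)
emb = rng.standard_normal((2, 1024)).astype(np.float32)

bits = emb > 0                        # keep only the sign of each dimension
packed = np.packbits(bits, axis=-1)   # 1024 dims -> 128 bytes (vs 4096 for fp32)

# Binary codes are typically compared with Hamming distance.
hamming = np.unpackbits(packed[0] ^ packed[1]).sum()
print(f"fp32: {emb[0].nbytes} bytes, binary: {packed[0].nbytes} bytes, "
      f"hamming(a, b) = {hamming}")
```

The model weights can be stored however you like (f32, f16, or a lower-bit GGUF quant); the binarization above only affects how the resulting embeddings are stored and compared.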
