AQLM quantization version please

by AiModelsMarket

Hello, could you add an AQLM (https://arxiv.org/abs/2401.06118) quantization version of this model, please? And, if possible, a Jupyter notebook to fine-tune it faster? Thank you! You are the best!
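For context, AQLM checkpoints can already be loaded through the standard transformers integration once the `aqlm` package is installed. A minimal sketch, assuming a published AQLM repo (the repo id below is just an illustrative existing AQLM checkpoint, not this model):

```python
# Minimal sketch: loading an AQLM-quantized model via transformers.
# Assumes: pip install aqlm[gpu] and a recent transformers version.
# The repo id is an illustrative published AQLM checkpoint, not this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # AQLM quantization config is read from the checkpoint
    device_map="auto",
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```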

Unsloth AI org

Thank you, we'll take a look! Regarding notebooks, have you tried using our Mistral 7B v2 notebook? Here: https://twitter.com/danielhanchen/status/1771737648266178801

And thanks for the kind words! ❤️

Thanks for your answer! The speed of progress is really incredible. I only found out a few days ago about AQLM low-bit quantization, and now an even lower, sub-1-bit compression method has appeared: QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models (https://huggingface.co/papers/2310.16795). Don't get me wrong, you are already doing wonderful things. I just post what I find here so that maybe we advance faster and get very powerful models at very low computation cost, affordable for anyone. If you get a chance, look at these new quantization methods and try to adapt them to your already great work. Thank you again!

Unsloth AI org

Yes, thank you for your suggestions, we'll definitely be taking a look at doing more optimizations soon! If you have any more advice/suggestions, please send them our way! ^^

LOL. I can't even find the time to read about all the revolutionary progress being made in this wonderful field of AI :P. Here is another interesting direction for optimization: "Quantized Embeddings are here! Unlike model quantization, embedding quantization is a post-processing step for embeddings that converts e.g. float32 embeddings to binary or int8 embeddings. This saves 32x or 4x memory & disk space, and these embeddings are much easier to compare!

Results show 25-45x speedups in retrieval compared to full-size embeddings, while keeping 96% of the performance!" https://huggingface.co/blog/embedding-quantization
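For anyone curious, this is purely a post-processing step on the embedding vectors. A minimal sketch with sentence-transformers (assuming a version >= 2.6, which ships `quantize_embeddings`; the model name is just an illustrative choice):

```python
# Minimal sketch of embedding quantization with sentence-transformers.
# Assumes sentence-transformers >= 2.6; the model name is illustrative.
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

model = SentenceTransformer("all-MiniLM-L6-v2")

# Regular float32 embeddings: 4 bytes per dimension.
embeddings = model.encode(["The weather is lovely today.", "It's raining again."])

# Post-process to binary (1 bit per dimension -> ~32x smaller, packed 8 dims
# per int8 byte) or int8 (1 byte per dimension -> ~4x smaller; ranges are
# calibrated from the embeddings themselves if none are supplied).
binary_embeddings = quantize_embeddings(embeddings, precision="binary")
int8_embeddings = quantize_embeddings(embeddings, precision="int8")

print(embeddings.shape, binary_embeddings.shape, int8_embeddings.shape)
```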
