New exciting quant method

#1 opened by Yhyu13

@TheBloke @LoneStriker

Hi, check out this quant method, which has the best performance-to-model-size ratio, even compared to GPTQ: https://github.com/GreenBitAI/low_bit_llama

Thank you for the pointer to this new quant method. I had not heard of it before. Have you compared the perplexity to llama.cpp's GGUF and Exllamav2's exl2 models at the same model sizes? It would help adoption of any new quantization method if there were equivalent measurements comparing GreenBitAI's quantizations against llama.cpp's and Exllamav2's.
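For a like-for-like comparison, the usual approach is sliding-window perplexity on WikiText-2. Below is a minimal sketch of that procedure using transformers; the model id, context length, and stride are placeholders I've picked for illustration, not anything from GreenBitAI's repo, and you'd point `model_id` at whichever quantized checkpoint is under test:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B"  # placeholder: swap in the quantized checkpoint under test

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
model.eval()

# WikiText-2 test split, concatenated into one long token stream
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_length = 2048  # keep the evaluation context identical across methods
stride = 512
seq_len = encodings.input_ids.size(1)

nlls, prev_end = [], 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # number of new tokens scored in this window
    input_ids = encodings.input_ids[:, begin:end].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # mask the overlapping prefix out of the loss

    with torch.no_grad():
        loss = model(input_ids, labels=target_ids).loss
    nlls.append(loss * trg_len)

    prev_end = end
    if end == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).sum() / prev_end)
print(f"perplexity: {ppl.item():.2f}")
```

As long as the dataset, context length, and stride are held fixed, the resulting numbers are comparable across quantization methods.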

@TheBloke creates llama.cpp (GGUF) models at sizes ranging from Q2 to Q8; his Yi models are here (a minimal loading sketch follows the link):
https://huggingface.co/TheBloke/Yi-34B-GGUF/tree/main
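For side-by-side testing, one of those GGUF quants can be loaded with llama-cpp-python. A minimal sketch, where the file name and settings are assumptions rather than anything from that repo's docs:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="yi-34b.Q4_K_M.gguf",  # assumed file name; use whichever quant size you downloaded
    n_ctx=4096,                       # context window to allocate
    n_gpu_layers=-1,                  # offload all layers to the GPU if they fit
)

out = llm("The Yi-34B model is", max_tokens=64)
print(out["choices"][0]["text"])
```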

I've generated exl2 quants from 3.0 bpw to 8.0 bpw here:
https://huggingface.co/models?p=2&sort=created&search=lonestriker%2Fyi-34b

exllamav2 is also capable of fractional bit-width quantization, because the stated bitrate is an average over the layers. A popular low-bit quant size for Exllamav2 is 2.4 bpw (especially for 70B models). At 2.4 bpw, a 70B model fits in a single 3090 or 4090 with full context. The model size is only about 20 GB at 2.4 bpw and 22 GB at 2.65 bpw (a rough size check is sketched after the link below). Example models here:
https://huggingface.co/models?sort=modified&search=lonestriker+2.4bpw+70b
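Those sizes fall out of simple arithmetic: the weights take roughly params × bpw / 8 bytes. A back-of-the-envelope sketch (ignoring the KV cache and other runtime overhead, so treat it as a lower bound on VRAM):

```python
# Rough exl2 weight-size estimate: the quoted bpw is an average over layers,
# so the weights occupy roughly params * bpw / 8 bytes.
def exl2_weight_gib(params_billion: float, bpw: float) -> float:
    bytes_total = params_billion * 1e9 * bpw / 8
    return bytes_total / 2**30

for bpw in (2.4, 2.65):
    print(f"70B @ {bpw} bpw ≈ {exl2_weight_gib(70, bpw):.1f} GiB")
# ≈ 19.6 GiB at 2.4 bpw and ≈ 21.6 GiB at 2.65 bpw, matching the ~20 GB / ~22 GB above
```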

And for Mixtral, at 3.0 bpw, the model will run entirely in a single 3090 or 4090 at full 32K context:
https://huggingface.co/models?sort=modified&search=lonestriker+3.0bpw+8x7
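The same arithmetic explains why the 3.0 bpw Mixtral quant fits: all experts stay resident in VRAM, but the weights plus a full 32K fp16 KV cache still come in under 24 GB. A hedged sketch, where the parameter count and attention shape are my assumptions based on the Mixtral-8x7B config:

```python
GIB = 2**30

params = 46.7e9  # assumed total parameter count (all experts live in VRAM)
bpw = 3.0
weights_gib = params * bpw / 8 / GIB

# assumed Mixtral-8x7B attention shape: 32 layers, 8 KV heads, head dim 128
n_layers, n_kv_heads, head_dim = 32, 8, 128
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2  # K+V in fp16
kv_gib = kv_bytes_per_token * 32768 / GIB

print(f"weights ≈ {weights_gib:.1f} GiB, 32K KV cache ≈ {kv_gib:.1f} GiB, "
      f"total ≈ {weights_gib + kv_gib:.1f} GiB")  # ≈ 16 + 4 ≈ 20 GiB, inside a 24 GB card
```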
