
Official AQLM quantization of google/gemma-2b.

For this quantization, we used 2 codebooks of 8 bits each (the 2x8 scheme reported in the results table).
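As a toy illustration of the additive codebook idea behind AQLM (not the actual implementation, which learns the codebooks and codes jointly), the sketch below encodes a group of 8 weights as one index into each of 2 random codebooks, so the group is stored as 2 bytes of codes instead of 32 bytes of fp32. The greedy encoder and the random codebooks are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

group_size = 8      # weights are quantized in groups
num_codebooks = 2   # "2x8" scheme: 2 codebooks ...
codebook_bits = 8   # ... of 8 bits (256 entries) each

# Random codebooks standing in for the learned ones.
codebooks = rng.normal(size=(num_codebooks, 2**codebook_bits, group_size))

def encode(w):
    """Greedily pick one code per codebook so the codeword sum approximates w."""
    residual = w.copy()
    codes = []
    for cb in codebooks:
        idx = int(np.argmin(((residual - cb) ** 2).sum(axis=1)))
        codes.append(idx)
        residual = residual - cb[idx]
    return codes

def decode(codes):
    """Reconstruct the weight group as the sum of the selected codewords."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

w = rng.normal(size=group_size)
codes = encode(w)
approx = decode(codes)
```

In the real scheme the codebooks are trained to minimize reconstruction error on the model's weights, which is what makes 2-bit-per-weight storage viable.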

Results:

| Model | AQLM scheme | WinoGrande | PiQA | HellaSwag | ArcE | ArcC | Model size, Gb |
|---|---|---|---|---|---|---|---|
| gemma-2b | 2x8 | 0.6275 | 0.7318 | 0.4582 | 0.6923 | 0.3259 | 1.7 |

To learn more about running inference with AQLM models, as well as how to quantize models yourself, please refer to the official GitHub repo.