---
library_name: transformers
tags:
  - llama
  - facebook
  - meta
  - llama-2
  - conversational
  - text-generation-inference
---

A quantization of meta-llama/Llama-2-7b obtained by applying PV-Tuning on top of AQLM.

For this quantization, we used one 16-bit codebook for groups of 8 weights (the 1x16g8 scheme in the table below), i.e. roughly 2 bits per weight.
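
As a sanity check, here is a back-of-the-envelope size estimate implied by this scheme; a minimal sketch, assuming roughly 6.5B weights in the quantized linear layers (the embeddings, norms, and the codebooks themselves account for most of the remainder of the 2.4 GB reported below):

```python
# 1x16g8: one 16-bit code per group of 8 weights -> 2 bits per weight.
bits_per_code = 16
group_size = 8
bits_per_weight = bits_per_code / group_size  # 2.0

quantized_weights = 6.5e9  # assumption: ~6.5B weights in quantized linear layers
size_gb = quantized_weights * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.2f} GB before codebooks and unquantized layers")  # ~1.62 GB
```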

| Model              | AQLM scheme | WikiText-2 PPL | Model size, GB | Hub link |
|--------------------|-------------|----------------|----------------|----------|
| Llama-2-7b (this)  | 1x16g8      | 5.68           | 2.4            | Link     |
| Llama-2-7b         | 2x8g8       | 5.90           | 2.2            | Link     |
| Llama-2-13b        | 1x16g8      | 5.05           | 4.1            | Link     |
| Llama-2-70b        | 1x16g8      | 3.78           | 18.8           | Link     |

The 1x16g16 (1-bit) models are on the way and will be released as soon as we update the inference library with their respective kernels.

To learn more about inference, and for instructions on how to quantize models yourself, please refer to the official GitHub repo. The original code for PV-Tuning can be found in the AQLM@pv-tuning branch.
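
For a quick start, here is a minimal inference sketch, assuming `transformers` >= 4.38 (which supports AQLM checkpoints natively) and the `aqlm` inference library installed via `pip install aqlm[gpu]`; the hub path below is a placeholder for this repository's id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-2-7b-AQLM-PV-1x16g8"  # placeholder: use this repo's hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick up the dtype stored in the checkpoint config
    device_map="auto",    # place the ~2.4 GB model on the available GPU
)

prompt = "The largest city in Europe is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```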