
This is my first quantization. It uses the INT8 quantization method provided by vLLM:

https://docs.vllm.ai/en/latest/quantization/int8.html

- `NUM_CALIBRATION_SAMPLES = 2048`
- `MAX_SEQUENCE_LENGTH = 8192`
- `smoothing_strength = 0.8`
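
For reference, a minimal sketch of this recipe using llm-compressor, the library behind the linked guide. This is not the exact script: the base model ID, the calibration dataset, and the save directory are assumptions, only the three parameters above come from this card, and the `oneshot` import path varies across llm-compressor versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot  # newer versions: from llmcompressor import oneshot

# Assumption: the base model; this card does not state it explicitly.
MODEL_ID = "mistralai/Mistral-Nemo-Instruct-2407"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Parameters taken from this card.
NUM_CALIBRATION_SAMPLES = 2048
MAX_SEQUENCE_LENGTH = 8192

# Assumption: the calibration set used in the linked docs' example.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

def preprocess(example):
    # Render each chat sample with the model's chat template.
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

ds = ds.map(preprocess)

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )

ds = ds.map(tokenize, remove_columns=ds.column_names)

# SmoothQuant shifts activation outliers into the weights, then GPTQ
# quantizes weights and activations to INT8 (W8A8).
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

# Hypothetical output directory.
SAVE_DIR = "Mistral-Nemo-Instruct-2407-W8A8"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```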

I will verify the model's validity and update the README as soon as possible.
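
In the meantime, loading the compressed checkpoint in vLLM should look roughly like this; the local path below is hypothetical.

```python
from vllm import LLM, SamplingParams

# Hypothetical local path (or Hub repo id) of this quantized checkpoint.
llm = LLM(model="./Mistral-Nemo-Instruct-2407-W8A8")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Write a short greeting."], params)
print(outputs[0].outputs[0].text)
```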

Edit: In my ERP test, the model's performance was comparable to Mistral-Nemo-Instruct-2407-GPTQ-INT8, so I consider the quantization successful.