This is the GPTQ 4-bit quantization of this model: https://huggingface.co/openaccess-ai-collective/manticore-13b

The quantization was made using this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton

I used the triton branch with all available GPTQ options enabled (true_sequential + act_order + groupsize 128):

```
CUDA_VISIBLE_DEVICES=0 python llama.py ./Manticore-13b-GPTQ-Triton c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors Manticore-13b-GPTQ-Triton.safetensors
```
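
As a quick sanity check of the resulting checkpoint, the packed tensors can be listed with the `safetensors` library. This is a minimal sketch, assuming the output file name `Manticore-13b-GPTQ-Triton.safetensors` from the command above; it only inspects the file and does not load or run the model.

```python
# Minimal sketch: inspect the GPTQ safetensors checkpoint produced above.
# Assumes the output file name from the quantization command; adjust the path as needed.
from safetensors import safe_open

path = "Manticore-13b-GPTQ-Triton.safetensors"

with safe_open(path, framework="pt", device="cpu") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        # GPTQ-quantized layers typically store packed qweight/qzeros plus scales and g_idx.
        print(name, tuple(tensor.shape), tensor.dtype)
```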