---
library_name: transformers
pipeline_tag: text-generation
---
4-bit GPTQ quant of https://huggingface.co/junelee/wizard-vicuna-13b, tested working with Occam's KoboldAI/GPTQ fork.

Someone has already made a Triton quant at https://huggingface.co/fbjr/wizard-vicuna-13b-4bit-128g, but it will not work with Occam's KoboldAI/GPTQ fork.

Note that this model is fairly heavily censored (in my opinion) and delivers AI-moralizing responses to prompts that Vicuna 1.1 does not complain about.

Quantized with the following command:

```sh
python llama.py ./wizard-vicuna-13b c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors
```
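The `--wbits 4 --groupsize 128` flags mean each group of 128 weights shares one scale factor and each weight is stored as a 4-bit integer. As a rough illustration of that idea only (plain round-to-nearest, not the actual GPTQ algorithm, which picks roundings to minimize layer output error; function names here are hypothetical):

```python
# Sketch of group-wise 4-bit quantization, the scheme selected by
# --wbits 4 --groupsize 128. NOT the real GPTQ algorithm: GPTQ uses
# second-order information to choose roundings, while this is simple
# round-to-nearest, shown only to explain what the flags control.

def quantize_groupwise(weights, wbits=4, groupsize=128):
    qmax = 2 ** wbits - 1  # 15 levels for 4-bit
    quantized, scales = [], []
    for start in range(0, len(weights), groupsize):
        group = weights[start:start + groupsize]
        # One scale per group of `groupsize` weights.
        scale = max(abs(w) for w in group) / (qmax / 2) or 1.0
        scales.append(scale)
        # Store each weight as a signed integer in [-8, 7].
        quantized.append([max(-8, min(7, round(w / scale))) for w in group])
    return quantized, scales

def dequantize_groupwise(quantized, scales):
    # Reconstruct approximate float weights from ints and per-group scales.
    return [q * s for group, s in zip(quantized, scales) for q in group]
```

A smaller group size gives each scale fewer weights to cover, so reconstruction error drops at the cost of storing more scales.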