Quantization approach
Hey @Sreenington, how did you get the AWQ version of the Phi-3 model? Can you share your approach? I am trying to run AWQ on Phi-3 with the approach mentioned here, and I get an error saying Phi-3 is not supported for AWQ. However, the vLLM docs list Phi-3 as supported. That's sort of contradictory. Can you help?
Hey man, are you sure? AutoAWQ supports Phi-3; try following the instructions in their example script on GitHub (https://github.com/casper-hansen/AutoAWQ/blob/main/examples/quantize.py).
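For reference, here's a minimal sketch along the lines of that example — the model ID, output folder, and quant settings are assumptions on my part (Phi-3 mini plus the defaults from the example), not your exact setup:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Assumed paths -- swap in whatever you're actually quantizing
model_path = "microsoft/Phi-3-mini-4k-instruct"
quant_path = "phi-3-mini-4k-instruct-awq"

# 4-bit GEMM config, same defaults as AutoAWQ's quantize.py example
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model and tokenizer
model = AutoAWQForCausalLM.from_pretrained(
    model_path, low_cpu_mem_usage=True, use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize (uses AutoAWQ's default calibration data under the hood)
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized weights and tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```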
Also, are you sure you aren't facing an obscure CUDA compatibility error? Make sure you have a GPU with 24 GB of VRAM and CUDA Compute Capability 7.5+ (I used an A10G).
The latest release (0.2.5) doesn't have a kernel for Phi-3 yet; the main branch does support it, though. You should download the repo and access the classes directly (i.e., install from source).
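If it helps, installing from the main branch is just `pip install git+https://github.com/casper-hansen/AutoAWQ.git`. After that, a quick sanity check that the quantized output loads — `quant_path` here is just the assumed folder from the save step above:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "phi-3-mini-4k-instruct-awq"  # folder written by save_quantized

# Load the AWQ model back for inference; fuse_layers=False avoids
# fused-kernel paths that may not cover every architecture yet
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=False)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
```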
@Sreenington I am running the quantization on a 24 GB GPU, and it's not a CUDA compatibility issue.
Why does it specify a Mistral model in the config.json?