Wizard-Vicuna-Uncensored 30B merged with the SuperHOT (unfinished) 30B LoRA and quantized to 4-bit using GPTQ-for-Llama.
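
For reference, a minimal sketch of how such a merge might be reproduced with the `peft` library; the model paths are placeholders, and the subsequent 4-bit GPTQ-for-Llama quantization step is not shown:

```python
# Hypothetical reproduction sketch (not the exact commands used for this model):
# fold a LoRA adapter into the base model with peft, then quantize separately.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "path/to/Wizard-Vicuna-30B-Uncensored"  # placeholder base checkpoint
LORA = "path/to/superhot-30b-lora"             # placeholder SuperHOT LoRA adapter
OUT = "./wizard-vicuna-30b-superhot-merged"

# Load the full-precision base model in fp16.
base = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)

# Apply the LoRA adapter and merge its weights back into the base model.
merged = PeftModel.from_pretrained(base, LORA).merge_and_unload()

# Save the merged fp16 checkpoint; 4-bit GPTQ quantization would then be run
# on this directory with GPTQ-for-Llama (not shown here).
merged.save_pretrained(OUT)
AutoTokenizer.from_pretrained(BASE).save_pretrained(OUT)
```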
