Made by merging the following LoRA into the LLaMA 30B base model: https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA (see the merge sketch below).
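The card does not say which tooling performed the merge. The following is a minimal sketch of how a LoRA merge like this is commonly done with PEFT; the base checkpoint name (`huggyllama/llama-30b`) and output directory are assumptions, not part of the original card.

```python
# Hedged sketch only: the exact merge procedure is not documented in this card.
# Base checkpoint name and output directory are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-30b", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "Neko-Institute-of-Science/VicUnLocked-30b-LoRA")
model = model.merge_and_unload()            # fold the LoRA deltas into the base weights
model.save_pretrained("vicunlocked-30b")    # directory later passed to llama.py below
AutoTokenizer.from_pretrained("huggyllama/llama-30b").save_pretrained("vicunlocked-30b")
```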
The merged model was then quantized to 4-bit with ooba's old CUDA branch of GPTQ-for-LLaMa:
python llama.py vicunlocked-30b c4 --wbits 4 --true-sequential --act-order --save_safetensors vicunlocked-30b-4bit.safetensors
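To run the resulting 4-bit safetensors, something along these lines should work with AutoGPTQ; this is not part of the original card, and the local directory and `model_basename` are assumptions that you should adjust to wherever the files were downloaded.

```python
# Hedged sketch: loading the GPTQ 4-bit safetensors produced above with AutoGPTQ.
# The folder name is an assumption; point it at the downloaded files.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_dir = "vicunlocked-30b-4bit"  # folder containing the .safetensors and config
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="vicunlocked-30b-4bit",  # matches the --save_safetensors name
    use_safetensors=True,
    device="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```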