[Airoboros 33b GPT4 1.2](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantised using GPTQ-for-LLaMa.
To use this model easily, load it in oobabooga's [Text Generation WebUI](https://github.com/oobabooga/text-generation-webui) and run it with the `--monkeypatch` flag. For best speeds, use the ExLlama loader; note that ExLlama must be installed manually unless you use the one-click installer.
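As a rough sketch, a launch command might look like the following. Only the `--monkeypatch` flag comes from the note above; the model folder name and the `--model`/`--loader` flags are assumptions, so adjust them to match your setup and WebUI version.

```
# Sketch of a launch command, assuming the quantised model has been downloaded
# into text-generation-webui's models/ directory under a hypothetical folder
# name; rename to match your local download.
python server.py --model airoboros-33b-gpt4-1.2-superhot-8k-GPTQ --loader exllama --monkeypatch
```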