---
license: llama2
language:
- en
---
# Aurora-Nights-70B-v1.0 IQ2-GGUF
## Description
IQ2-GGUF quants of [sophosympatheia/Aurora-Nights-70B-v1.0](https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0)
Unlike regular GGUF quants, these use an importance matrix (similar to QuIP#) to keep the quantization from degrading too much even at ~2 bpw, allowing you to run larger models on less powerful machines.
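For reference, importance-matrix quants of this kind can be produced with llama.cpp's `imatrix` and `quantize` tools. This is a rough sketch, not the exact commands used for these files; the file names and the calibration text are placeholders:

```shell
# 1. Compute an importance matrix from a calibration text file
#    (calibration.txt is a placeholder for your own sample data)
./imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize to IQ2_XS using that importance matrix
./quantize --imatrix imatrix.dat model-f16.gguf model-IQ2_XS.gguf IQ2_XS
```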
***NOTE:*** Currently you will need an experimental branch of Koboldcpp or Ooba for this to work.
- Nexesenex has compiled Windows binaries [HERE](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.55.1_b1842)
- The [llamacpp_0.2.29 branch](https://github.com/oobabooga/text-generation-webui/tree/llamacpp_0.2.29) of Ooba also works
[More info about IQ2](https://github.com/ggerganov/llama.cpp/pull/4897)
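Once you have a build with IQ2 support, loading the quant works like any other GGUF file. A minimal sketch with koboldcpp (the model file name and layer count are placeholders; adjust `--gpulayers` to fit your VRAM):

```shell
# Load the IQ2 quant with koboldcpp, offloading some layers to GPU
python koboldcpp.py --model model-IQ2_XS.gguf --contextsize 4096 --gpulayers 40
```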
# Models
Models: [IQ2-XS](), [IQ2-XXS]()
Regular GGUF Quants: [Here](https://huggingface.co/TheBloke/Aurora-Nights-70B-v1.0-GGUF)
## Prompt Format
Unclear
## Contact
Kooten on discord