---
language:
- en
---

This is a merge of the models/LoRAs below, combined at a 1:1 ratio:

- [LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k)
- [airoboros-l2-gpt-1.4.1-13b-PEFT](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1-peft)

A GPTQ quantization is available in a [separate repo](https://huggingface.co/kingbri/airo-llongma-2-13b-16k-GPTQ).
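
For reference, a 1:1 LoRA merge like this one can be reproduced with `transformers` and `peft` by loading the base model, applying the PEFT adapter, and folding the adapter weights into the base. The sketch below is a minimal illustration under that assumption; the exact tooling used for this merge is not specified here, and the output directory name is just a placeholder.

```python
# Minimal sketch of a 1:1 LoRA merge (assumed workflow, not the exact script used).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "conceptofmind/LLongMA-2-13b-16k"
adapter_id = "jondurbin/airoboros-l2-13b-gpt4-1.4.1-peft"

# Load the base model and attach the PEFT adapter.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)

# merge_and_unload folds the LoRA weights into the base model at full (1:1) weight.
merged = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(base_id)
merged.save_pretrained("airo-llongma-2-13b-16k")
tokenizer.save_pretrained("airo-llongma-2-13b-16k")
```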