
Airoboros c34 2.2.1 Mistral GGUF

CodeLlama 34B, fine-tuned on the Airoboros 2.2.1 dataset by Jon Durbin,

Then

merged with the Mistral AI 7B 0.1 delta weights relative to Llama 2 (extracted by Undi95); the merge was done by myself.


Base model (CodeLlama) training context: 16k (max context up to ~96k with the base RoPE)

Mistral injection training context: 8k (Sliding Window Attention is likely inoperative on such a merge/injection)
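
As a minimal sketch of how the context figures above translate into inference settings, the snippet below loads one of the GGUF quantizations with llama-cpp-python. The file name, the 16k context value, and the RoPE base are assumptions taken from the notes above, not a tested configuration; most GGUF files already carry the RoPE base in their metadata.

```python
# Minimal sketch, assuming llama-cpp-python is installed.
# The GGUF file name is hypothetical -- use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-c34-2.2.1-mistral.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=16384,               # matches the 16k CodeLlama training context above
    rope_freq_base=1_000_000,  # CodeLlama's long-context RoPE base (assumption)
)
```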


For testing and amusement only.

Prompt format: Airoboros
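
For reference, a sketch of an Airoboros-style prompt, assuming the short single-turn template commonly used with Airoboros 2.x; check the dataset card if in doubt.

```python
# Hedged example of the Airoboros 2.x style prompt format (assumption:
# the short "A chat." system line; adjust if the dataset card differs).
system = "A chat."
user = "Write a Python function that reverses a string."
prompt = f"{system}\nUSER: {user}\nASSISTANT: "

# With the `llm` object from the loading sketch above:
# out = llm(prompt, max_tokens=256, stop=["USER:"])
# print(out["choices"][0]["text"])
```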

Format: GGUF
Model size: 33.7B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit
