Mistral 7B v0.2 iMat GGUF

Not to be confused with Mistral 7B Instruct v0.2: this is the base model, the latest release from 3/23.

Mistral 7B v0.2 iMat GGUF quantized from fp16 with love.

  • Importance matrix (iMat) data file created using groups_merged.txt
  • Not sure what to expect from this base model on its own, but uploading it to this repo in case anyone else is curious

Legacy quants (e.g. Q8, Q5_K_M) in this repo have all been enhanced with importance matrix calculation. These quants show improved KL-divergence over their static counterparts.
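KL-divergence here measures how far a quant's next-token distribution drifts from the fp16 reference over the same prompts; lower is better. A minimal sketch of the metric itself, using toy distributions rather than real model outputs:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for two discrete distributions over the same vocab."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: fp16 "reference" vs. a quantized model.
p_fp16 = [0.7, 0.2, 0.1]
p_quant = [0.6, 0.25, 0.15]
print(kl_divergence(p_fp16, p_quant))
```

In practice llama.cpp's perplexity tooling computes this across a whole evaluation corpus; the identical-distribution case gives exactly 0.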

All files have been tested for your safety and convenience. There is no need to clone the entire repo; just pick the quant that's right for you.
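A single quant can be fetched via the Hub's `resolve` endpoint (or with `huggingface-cli download`) instead of cloning everything. The .gguf filename below is a hypothetical example; check the repo's file list for the exact names:

```python
# Build a direct-download URL for one quant file (no full clone needed).
repo_id = "InferenceIllusionist/Mistral-7B-v0.2-iMat-GGUF"
quant_file = "Mistral-7B-v0.2-iMat-Q4_K_M.gguf"  # hypothetical filename; see the repo's file list
url = f"https://huggingface.co/{repo_id}/resolve/main/{quant_file}"
print(url)
```

The resulting URL can be passed straight to `wget` or `curl -L`.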

For more information on the latest iMatrix quants, see this PR: https://github.com/ggerganov/llama.cpp/pull/5747

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Quants available: 4-bit, 5-bit, 6-bit, 8-bit