
Wizard-LM-2-8x22-iMat-GGUF

Quantized from fp32 with love. If you're using the latest version of llama.cpp, you should no longer need to combine split files before loading.
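
As a minimal loading sketch in Python, assuming the llama-cpp-python bindings are installed; the filename below is a placeholder, and for a split download you would point at the first shard (recent llama.cpp picks up the remaining shards automatically):

```python
# Minimal loading sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Wizard-LM-2-8x22-iMat-Q4_K_M-00001-of-00002.gguf",  # placeholder name
    n_ctx=4096,        # context window to reserve
    n_gpu_layers=-1,   # offload all layers to GPU if they fit; lower this otherwise
)

out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```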

  • Weighted quantizations created using imatrix file provided by jukofyork
  • Calculated in 105 chunks with n_ctx=512 using groups_merged.txt (a reproduction sketch follows below)
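
For reference, an importance matrix like this is produced with llama.cpp's imatrix tool. The sketch below shows an invocation consistent with the settings above; the binary name, flags, and file paths are assumptions based on current llama.cpp builds, not the uploader's exact command:

```python
# Sketch of reproducing an importance matrix with llama.cpp's imatrix tool.
import subprocess

subprocess.run(
    [
        "./llama-imatrix",
        "-m", "wizardlm-2-8x22b-f16.gguf",  # placeholder: unquantized source model
        "-f", "groups_merged.txt",          # calibration text named above
        "-c", "512",                        # n_ctx per chunk
        "--chunks", "105",                  # number of chunks processed
        "-o", "imatrix.dat",                # output imatrix file
    ],
    check=True,
)
```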

For a brief rundown of iMatrix quant performance, please see this PR

All quants are verified working prior to uploading to the repo for your safety and convenience.

Tip: For best speed, pick a size that fits in your GPU while still leaving some room for context. You may need to pad this further depending on whether you are also running image generation or TTS.
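
As a back-of-envelope sketch of that budgeting, the Python below estimates how much context fits after the weights are loaded. All numbers are illustrative assumptions (card size, overhead, and the architecture values), not measurements of this model:

```python
# Rough VRAM budgeting sketch: weights + overhead, then spend the rest on KV cache.
GiB = 1024**3

vram_total = 24 * GiB    # e.g. a 24 GB card
quant_file = 20 * GiB    # size of the chosen .gguf on disk (~resident weights)
overhead   = 1.5 * GiB   # CUDA context, scratch buffers, etc. (rough guess)

# Crude f16 KV-cache estimate: 2 (K and V) * layers * kv_heads * head_dim * 2 bytes.
n_layers, n_kv_heads, head_dim = 56, 8, 128   # placeholder architecture values
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2

budget = vram_total - quant_file - overhead
max_ctx = int(budget // kv_bytes_per_token)
print(f"Roughly {max_ctx} tokens of context fit in the remaining {budget / GiB:.1f} GiB")
```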

The BFloat16 model card can be found here
