---
base_model: migtissera/Tess-2.0-Mixtral-8x22B
license: apache-2.0
---

iMatrix GGUF quants of a newer finetune of Mixtral-8x22B.

EdgeQuants are still underway; the IQ4XS version is recommended. Make sure to combine/merge the parts back together before using:


```
cat tessIQ4XS.gguf.part* > tessIQ4XS.gguf
```

Then use it with a llama.cpp build from April 12 or earlier. The April 13 release made large changes that broke inference for MoE models.
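
As a rough sketch of pinning to an older build (this assumes the dates refer to April 2024 and that you build the classic `main` example; the commit selection, model path, prompt, and flags below are illustrative, adjust them to your setup):

```
# clone llama.cpp and check out the last commit on master before April 13, 2024 (illustrative)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout $(git rev-list -n 1 --before="2024-04-13" master)

# build and run the merged quant; model path, prompt, and token count are examples
make
./main -m ../tessIQ4XS.gguf -p "Write a haiku about quantization." -n 128
```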