---
license: cc-by-nc-2.0
---
GGUF importance matrix (imatrix) quants for https://huggingface.co/wolfram/miquliz-120b-v2.0  
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
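For reference, an importance matrix like this can be computed with llama.cpp's `imatrix` tool. A sketch of the invocation (the source `.gguf` filename is an assumption; 200 chunks of 512 tokens gives roughly 100K tokens):

```shell
# Sketch: build an imatrix over ~100K tokens of wiki.train.raw
# (200 chunks x 512 tokens). The FP16 source filename is an assumption.
./imatrix -m miquliz-120b-v2.0-f16.gguf \
  -f wiki.train.raw \
  -c 512 --chunks 200 \
  -o miquliz-120b-v2.0.imatrix
```

The resulting `.imatrix` file is then passed to `quantize` when producing the low-bit quants.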

With the IQ2_XXS quant, roughly 100 of the 141 layers appear to fit on a 24GB card at 2K context.
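Partial offload could look like the following llama.cpp invocation, a sketch only (the `.gguf` filename is an assumption; tune `-ngl` to whatever your card actually fits):

```shell
# Hypothetical run: offload 100 layers to the GPU with a 2048-token
# context, using the [INST] prompt template from the table below.
./main -m miquliz-120b-v2.0.IQ2_XXS.gguf \
  -ngl 100 -c 2048 \
  -p "[INST] {prompt} [/INST]"
```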

| Layers | Context | Template |
| --- | --- | --- |
| <pre>140</pre> | <pre>32768</pre> | <pre>[INST] {prompt} [/INST]<br>{response}</pre> |