Custom GGUF Quants with iMatrix for:
https://huggingface.co/MarsupialAI/LaDameBlanche-v2-95b

- Q8_0 used as quant base : https://huggingface.co/mradermacher/LaDameBlanche-v2-95b-GGUF
- iMatrix here : https://huggingface.co/mradermacher/LaDameBlanche-v2-95b-i1-GGUF

(Yes, I'm being lazy by requantizing from the Q8_0 instead of the original FP16 weights, but I can live with a ~0.01 ppl bump ^^)
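
If you want to reproduce this kind of requant yourself, here is a minimal sketch using llama.cpp's quantize tool driven from Python. File names are placeholders, and since IQ2_LR is not a mainline llama.cpp quant type, the mainline IQ2_M is used as a stand-in at a similar bit budget:

```python
import subprocess

# Minimal sketch: requantize a Q8_0 GGUF down to a 2-bit i-quant using a
# precomputed importance matrix. All file names here are placeholders.
# --allow-requantize is needed because the base is already quantized (Q8_0).
subprocess.run(
    [
        "./llama-quantize",
        "--allow-requantize",
        "--imatrix", "imatrix.dat",                   # iMatrix from the i1 repo
        "LaDameBlanche-v2-95b.Q8_0.gguf",             # Q8_0 quant base
        "LaDameBlanche-v2-95b.IQ2_M.gguf",            # output file
        "IQ2_M",                                      # target quant type
    ],
    check=True,
)
```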

The model is a great merge, coherent and creative; imho it works better on modest hardware than the 100B+ Miqu merges, which are only worthwhile for those with 48GB of VRAM or more.

In IQ2_LR (2.7 BPW, good for 8k context with 36GB of VRAM and an IGP handling the OS display), it scores 57 on ARC Challenge and 77 on ARC Easy, with a perplexity of 4.5860 at 512 context.
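
For a rough sense of where the 36GB figure comes from, here is a back-of-the-envelope sketch; the KV-cache and buffer figures are assumptions, since the exact cost depends on the architecture and cache settings:

```python
# Rough VRAM budget for a 95B-parameter model at 2.7 bits per weight.
params = 95e9
bpw = 2.7

weights_gib = params * bpw / 8 / 1024**3
print(f"weights: {weights_gib:.1f} GiB")  # ~29.9 GiB of quantized weights

# Out of a 36 GiB budget, that leaves roughly 6 GiB for the 8k-token
# KV cache and compute buffers, hence offloading the OS display to an IGP.
```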

Ladies and gentlemen, you are served!