
Custom GGUF Quants with iMatrix for: https://huggingface.co/MarsupialAI/LaDameBlanche-v2-95b
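
For reference, a minimal sketch of the usual llama.cpp iMatrix quantization flow (binary names vary by llama.cpp version; the calibration file and model filenames below are assumptions, and the standard IQ2_XS type stands in for the custom IQ2_LR used here):

```python
import subprocess

# Step 1: compute an importance matrix from calibration text (hypothetical filenames).
subprocess.run([
    "./llama-imatrix",
    "-m", "LaDameBlanche-v2-95b-f16.gguf",  # assumed full-precision source model
    "-f", "calibration.txt",                # assumed calibration corpus
    "-o", "imatrix.dat",
], check=True)

# Step 2: quantize, guided by the importance matrix.
# IQ2_XS is a standard low-bit llama.cpp type; IQ2_LR in this repo is a custom variant.
subprocess.run([
    "./llama-quantize",
    "--imatrix", "imatrix.dat",
    "LaDameBlanche-v2-95b-f16.gguf",
    "LaDameBlanche-v2-95b-IQ2_XS.gguf",
    "IQ2_XS",
], check=True)
```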

(Yes, I'm lazy, but I can live with a 0.01 PPL bump ^^)

The model is a great merge, coherent and creative; imho it works better under modest hardware requirements than the 100B+ Miqu merges, which are worthwhile only for those with 48GB of VRAM or more.

In IQ2_LR (2.7 BPW, sized for 8k context with 36GB of VRAM and an IGP handling the OS display), the model scores 57 on ARC Challenge and 77 on ARC Easy, with a perplexity of 4.5860 at 512 context.
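
As a usage sketch, here is how such a quant could be loaded at 8k context with full GPU offload via llama-cpp-python (the filename is hypothetical; substitute the actual quant file from this repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="LaDameBlanche-v2-95b-IQ2_LR.gguf",  # hypothetical filename
    n_ctx=8192,       # the 8k context assumed by the figures above
    n_gpu_layers=-1,  # offload every layer; at ~2.7 BPW the 95B weights take roughly 32GB
)

print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```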

Ladies and gentlemen, you are served!