This is a very low-loss quantization (it should retain >99% of the original model's quality as measured by perplexity) of what was, for a while, the #1 model on the Open LLM Leaderboard.

https://huggingface.co/dnhkng/RYS-XLarge
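For context, a quantization quality claim like the one above is typically expressed as a perplexity ratio between the full-precision and quantized models. A minimal sketch of that arithmetic, using made-up NLL numbers purely for illustration (these are not measurements of this model):

```python
import math

def perplexity(avg_nll: float) -> float:
    # Perplexity is exp of the mean negative log-likelihood per token.
    return math.exp(avg_nll)

# Hypothetical per-token NLL values -- NOT actual measurements of this model.
ppl_fp16 = perplexity(1.900)   # full-precision baseline
ppl_quant = perplexity(1.905)  # quantized model, slightly worse

# Quality retention: baseline perplexity over quantized perplexity.
# A value above 0.99 corresponds to the ">99%" kind of claim.
retention = ppl_fp16 / ppl_quant
print(f"retention: {retention:.4%}")
```

With these illustrative numbers the ratio works out to exp(-0.005), i.e. about 99.5% retention.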

This model was a frankenmerge of Qwen2 72B, and is very likely to be surpassed by Qwen2.5 finetunes, so I'm posting it mainly for reference, but it's a great model nonetheless.

Format: GGUF
Model size: 78B params
Architecture: qwen2

Model tree for nisten/rys-78b-55GB-gguf

Base model: dnhkng/RYS-XLarge (this model is one of its quantizations)