https://www.kaggle.com/code/reginliu/qwen2-5-gguf-imatrix
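The Kaggle notebook linked above builds an importance matrix (imatrix) from a calibration text and uses it when quantizing the fp16 GGUF down to IQ4_XS. A minimal sketch of that workflow, assuming a recent llama.cpp build whose tools are named `llama-imatrix` and `llama-quantize` and a hypothetical `calibration.txt` file; the tool names, flags, and calibration data are assumptions, not taken from this card:

```python
# Sketch of an imatrix-guided quantization pass using llama.cpp's CLI tools.
# Tool names (llama-imatrix, llama-quantize) and calibration.txt are
# assumptions; adjust them to match your llama.cpp build and data.
import subprocess

FP16 = "qwen2.5-14b-fp16.gguf"
IMATRIX = "imatrix.dat"

# 1. Collect activation-importance statistics over the calibration text.
subprocess.run(
    ["llama-imatrix", "-m", FP16, "-f", "calibration.txt", "-o", IMATRIX],
    check=True,
)

# 2. Quantize to IQ4_XS, weighting the quantization by the importance matrix.
subprocess.run(
    ["llama-quantize", "--imatrix", IMATRIX, FP16,
     "qwen2.5-14b-IQ4_XS.gguf", "IQ4_XS"],
    check=True,
)
```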

| Model | Size (GiB) | PPL | n_vocab | PPL_adjust |
|---|---|---|---|---|
| qwen2.5-14b-fp16.gguf | 27.51 | 9.5316 +/- 0.08886 | 152064 | 9.5316 |
| qwen2.5-14b-IQ4_XS.gguf | 7.56 | 9.6508 +/- 0.09039 | 152064 | 9.6508 |
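At roughly 3.6x smaller, the IQ4_XS quant costs about 1.3% in perplexity relative to the fp16 baseline (9.6508 vs. 9.5316); both files share the same 152064-token vocabulary, and the PPL_adjust column here matches the raw PPL. Below is a hedged sketch of how such numbers are typically produced with llama.cpp's perplexity tool; the binary name `llama-perplexity` and the `wiki.test.raw` evaluation text are assumptions, as this card does not state which evaluation set was used.

```python
# Sketch: reproducing the PPL column with llama.cpp's perplexity tool.
# The binary name (llama-perplexity) and wiki.test.raw are assumptions;
# the card does not state which evaluation text was used.
import subprocess

for model in ("qwen2.5-14b-fp16.gguf", "qwen2.5-14b-IQ4_XS.gguf"):
    # Prints running estimates and a final "PPL = <value> +/- <error>" line.
    subprocess.run(
        ["llama-perplexity", "-m", model, "-f", "wiki.test.raw"],
        check=True,
    )
```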
Format: GGUF · Architecture: qwen2 · Parameters: 14.8B · Quantization: 4-bit (IQ4_XS)