I think the Q8_0 is corrupted.

#1 opened by remghoost

It's smaller than the Q5_K_M (which was the first tip-off), and it fails to load in both llama.cpp and koboldcpp.
gguf-parser fails to load the model as well.
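For anyone else wanting to sanity-check their download, here's a minimal sketch (not the actual gguf-parser code, and the path is just a placeholder) that only reads the GGUF header per the v2/v3 spec; a truncated or corrupted file usually fails right at this step:

```python
# Quick GGUF header sanity check (assumes GGUF v2/v3 layout, little-endian):
#   4-byte magic "GGUF", uint32 version, uint64 tensor count, uint64 metadata KV count.
import struct
import sys

def check_gguf_header(path: str) -> None:
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            print(f"{path}: bad magic {magic!r} -- not a valid GGUF file")
            return
        version, = struct.unpack("<I", f.read(4))      # format version
        n_tensors, = struct.unpack("<Q", f.read(8))    # tensor count
        n_kv, = struct.unpack("<Q", f.read(8))         # metadata key/value count
        print(f"{path}: GGUF v{version}, {n_tensors} tensors, {n_kv} metadata keys")

if __name__ == "__main__":
    check_gguf_header(sys.argv[1])
```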

If you get a chance, could you reupload a fixed version...?
Thank you! <3

I used the GGUF-my-repo space to quant it down to Q8_0.
File size looks to be about 15GB (which tracks).
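Rough math backs that up, assuming llama.cpp's Q8_0 layout (8-bit weights plus one fp16 scale per 32-weight block, so ~8.5 bits/weight) and roughly 14.8B parameters for Qwen2.5-14B:

```python
# Back-of-the-envelope Q8_0 size estimate (approximate parameter count assumed).
params = 14.8e9          # ~14.8B parameters for Qwen2.5-14B
bits_per_weight = 8.5    # Q8_0: 8 bits per weight + 16-bit scale per 32-weight block
size_gb = params * bits_per_weight / 8 / 1e9
print(f"expected Q8_0 size ~ {size_gb:.1f} GB")  # ~15.7 GB, consistent with ~15GB
```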

Downloading it now to try it, but I'm guessing it'll be fine.

Here's the repo if you want to download/reupload it to your repo for better visibility.
https://huggingface.co/remghoost/Qwen2.5-Gutenberg-Doppel-14B-Q8_0-GGUF/tree/main

DevQuasar org

Thanks, I've found that too. The corrupt Q8 has been removed and the fixed version is uploading now. Thanks for the notification!
