Error quantizing: b'/bin/sh: 1: ./llama.cpp/quantize: not found\n'

#87
by NikolayKozloff - opened

I get this error on every model.

I'm hitting the same problem.

Same here

Still getting this error on this model: BirdL/DeepSeek-Coder-V2-Lite-Instruct-FlashAttnPatch
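The error at the top of the thread means the conversion service is invoking a `quantize` binary at a path where none exists. A likely cause (an assumption, not confirmed in this thread): newer llama.cpp builds renamed the tool from `quantize` to `llama-quantize`, so a script hard-coded to the old path breaks. A minimal sketch of a fallback check, assuming a local `./llama.cpp` checkout:

```shell
#!/bin/sh
# Hypothetical sketch: prefer the renamed llama-quantize binary if the
# local llama.cpp build provides it, otherwise fall back to the legacy
# quantize name used by older builds.
if [ -x ./llama.cpp/llama-quantize ]; then
    QUANTIZE=./llama.cpp/llama-quantize   # newer llama.cpp builds
else
    QUANTIZE=./llama.cpp/quantize         # older llama.cpp builds
fi
echo "$QUANTIZE"
```

The chosen path would then be used for the actual quantization call, e.g. `"$QUANTIZE" model-f16.gguf model-Q8_0.gguf Q8_0`.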

ggml.ai org

Sorry for the delay on this! - looking into this right now.

ggml.ai org

Patched it, it should be fixed now! 🤗

Could some of you try it on different model checkpoints and ping me if it works for your use cases as well?

(I'll close the issue once some of you confirm)


Just created this GGUF: https://huggingface.co/NikolayKozloff/Llama-3-8B-Swedish-Norwegian-Danish-chekpoint-18833-1-epoch-15_6_2024-Q8_0-GGUF

Thank you very much.

Seems to work again, thanks!

ggml.ai org

Closing this as fixed! 🤗

reach-vb changed discussion status to closed
