Corrupted?
llama.cpp: loading model from models/mpt-7b-instruct.ggmlv2.q5_0.bin
libc++abi: terminating due to uncaught exception of type std::runtime_error: unexpectedly reached end of file
Also, THANK YOU for all the quantization jobs you're doing!!!
These files are not compatible with llama.cpp.
Currently they can be used with:
- The example mpt binary provided with ggml
- rustformers' llm
You're welcome.
But I'm afraid these models can't be loaded in llama.cpp. Please see the README - I've added a section indicating where they can be loaded, which right now is just the basic mpt example CLI tool that comes with the ggml repo, and the rustformers llm tool. I am sure this list will expand soon!
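To confirm the file isn't actually corrupted (as opposed to merely being in a format llama.cpp rejects), you can inspect the four magic bytes at the start of the file. A minimal sketch, assuming the GGML container magics used in ggml/llama.cpp around this era (`ggml`, `ggmf`, `ggjt`); the exact set of magics is an assumption, not something stated in this thread:

```python
import struct

# Known GGML container magic values, read as a little-endian uint32
# from the first 4 bytes of the file. These values are assumptions
# based on the ggml/llama.cpp sources of this era.
MAGICS = {
    0x67676D6C: "ggml (unversioned; used by the ggml example binaries, e.g. mpt)",
    0x67676D66: "ggmf (versioned GGML)",
    0x67676A74: "ggjt (llama.cpp mmap-able format)",
}

def identify_magic(header: bytes) -> str:
    """Describe the GGML container magic found in the file header."""
    if len(header) < 4:
        return "file too short - likely truncated or corrupted"
    (magic,) = struct.unpack("<I", header[:4])
    return MAGICS.get(magic, f"unknown magic 0x{magic:08x}")

if __name__ == "__main__":
    # Hypothetical usage against the model file from the error above.
    with open("models/mpt-7b-instruct.ggmlv2.q5_0.bin", "rb") as f:
        print(identify_magic(f.read(4)))
```

If this reports a recognized magic but llama.cpp still aborts, the file is most likely fine and simply uses an architecture/format llama.cpp doesn't support, which matches the explanation above.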
Ok, I think I understand - this model is built on a different file format.
All this is moving fast and I'm still learning.
Thank you guys for the inputs, I'm gonna educate myself on those points.
I totally mixed up LLaMA with ggml!!!