error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?

#4
by YassineLajmi - opened

Hello,

With Llama.cpp I can run alpaca-lora-65B-GGML/alpaca-lora-65B.ggml.q5_1.bin
but when I try gpt4-alpaca-lora_mlp-65B-GGML/gpt4-alpaca-lora_mlp-65B.ggml.q4_0.bin I get this error:

(python39) [root@vmxccai2 llama.cpp]# ./main -m ./models/65B/gpt4-alpaca-lora_mlp-65B.ggml.q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
main: build = 526 (e6a46b0)
main: seed = 1684220404
llama.cpp: loading model from ./models/65B/gpt4-alpaca-lora_mlp-65B-GGML/gpt4-alpaca-lora_mlp-65B.ggml.q4_0.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/65B/gpt4-alpaca-lora_mlp-65B-GGML/gpt4-alpaca-lora_mlp-65B.ggml.q4_0.bin'
main: error: unable to load model
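The (magic, version) pair in the error is just the first 8 bytes of the file. A minimal sketch of how to read it (the path and the table of known magics are illustrative, based on llama.cpp's format history, not an authoritative list):

```python
import struct

# Known GGML container magics from llama.cpp's history (illustrative):
#   0x67676d6c  "ggml" - original unversioned format (no version field)
#   0x67676d66  "ggmf" - versioned format
#   0x67676a74  "ggjt" - mmap-able format; v2 introduced new quantization
#                        schemes around 12 May 2023
def read_ggml_header(path):
    """Return the (magic, version) pair from the first 8 bytes of a file.

    Note: for the original unversioned "ggml" magic the second field is
    not actually a version, so only trust it for "ggmf"/"ggjt" files.
    """
    with open(path, "rb") as f:
        magic, version = struct.unpack("<II", f.read(8))
    return magic, version

# Example (path is illustrative):
# magic, version = read_ggml_header("models/65B/model.ggml.q4_0.bin")
# print(f"{magic:08x}, {version:08d}")
```

A magic of 67676a74 with version 00000002 means the file is in the newer "ggjt v2" layout, which a llama.cpp built before that format landed cannot parse.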

First thing to do is to check the SHA256SUM and confirm the model definitely downloaded correctly.
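On Linux that's just (path is illustrative; compare the output against the hash published on the model page):

```shell
# Compute the checksum of the downloaded file (path is illustrative)
sha256sum ./models/65B/gpt4-alpaca-lora_mlp-65B.ggml.q4_0.bin
# A mismatch with the published hash means the download is corrupt
# or incomplete and the file should be re-downloaded.
```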

Thank you. I think my llama.cpp is old (from before 12 May); I will try checking out the newest version.

OK, if you haven't got the latest llama.cpp, you need the files from the previous_llama branch. Or just update llama.cpp, or whatever UI/code you're using!
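Updating an existing llama.cpp checkout is typically just (a sketch; exact build flags depend on your setup):

```shell
cd llama.cpp
git pull                # fetch the latest commits
make clean && make      # rebuild ./main with support for the newer format
```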
