Error in PrivateGPT

#4
by zWarhammer - opened

I'm trying to load this model in PrivateGPT, using the LlamaCpp mode. I get an error that says:
llama.cpp: loading model from models/llama-2-13b-chat.ggmlv3.q4_1.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000003; is this really a GGML file?
llama_init_from_file: failed to load model
I'm using llama-cpp-python 0.1.50.
Any help would be appreciated.

llama-cpp-python 0.1.50 is ancient! The magic/version pair in the error (67676a74 is the "ggjt" magic, and version 3 means GGMLv3) identifies a GGMLv3 file, which that old release can't read. Update llama-cpp-python to the latest version, or at least a much more recent one, and the model should load.
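A quick way to sanity-check this before loading the model is to compare the installed version against a minimum. This is only a sketch: the 0.1.55 threshold below is an assumption (check the llama-cpp-python release notes for the exact version that added GGMLv3 support).

```python
# Sketch: decide whether an installed llama-cpp-python is too old for GGMLv3.
# The 0.1.55 minimum is an assumed threshold, not a confirmed one.
def needs_upgrade(installed: str, minimum: str = "0.1.55") -> bool:
    """Return True if `installed` is older than `minimum` (dotted version strings)."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(minimum)

# The version from the error report above:
print(needs_upgrade("0.1.50"))
```

If it reports True, upgrade with `pip install --upgrade llama-cpp-python` and try loading the model again.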