Running locally: Cannot load model "llama-2-7b-chat.Q2_K.gguf"

#10
by Learner
from llama_cpp import Llama

# Load the quantized chat model with a 512-token context window
llm = Llama(model_path="llama-2-7b-chat.Q2_K.gguf", n_ctx=512, n_batch=126)

This gives the error:

gguf_init_from_file: invalid magic number 4f44213c
error loading model: llama_model_loader: failed to load model from llama-2-7b-chat.Q2_K.gguf
llama_load_model_from_file: failed to load model
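
If I decode that magic number into bytes (as far as I can tell, llama.cpp reads the first four bytes of the file as a little-endian integer), I get:

import struct

# Decode the magic value from the error above; a GGUF file should start
# with the ASCII magic "GGUF" (0x46554747 when read little-endian).
print(struct.pack("<I", 0x4F44213C))  # -> b'<!DO'

b'<!DO' looks like the start of an HTML document (<!DOCTYPE ...), which would suggest the download saved a web page rather than the model itself.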

I changed the format from GGML to GGUF and thought that would resolve the error, but it did not.

I have the llama-2-7b-chat.Q2_K.gguf file fully downloaded and can access it (the path is correct).
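
To double-check, a valid GGUF file should begin with the four ASCII bytes "GGUF"; reading the header directly (a minimal sketch, using the same filename as above):

with open("llama-2-7b-chat.Q2_K.gguf", "rb") as f:
    print(f.read(4))  # expected: b'GGUF'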

Any idea?

Could not load Llama model from path: E:/upenn/wrds/llama-2-7b-chat.Q3_K_M.gguf. Received error (type=value_error)

Could the file somehow be corrupted?
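
One way to rule that out is to compare the file's SHA-256 against the checksum shown on its download page (Hugging Face lists one per file). A minimal sketch, using the path from the error above:

import hashlib

h = hashlib.sha256()
with open("E:/upenn/wrds/llama-2-7b-chat.Q3_K_M.gguf", "rb") as f:
    # hash in 1 MiB chunks so the multi-GB file is never loaded at once
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest())  # compare against the published checksum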
