Not working with llama.cpp

#5
by djtech - opened

I tried using this model with llama.cpp, but this is the result:

```
./main -m ./models/ggml-alpaca-7b-q4.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
main: build = 708 (8596af4)
main: seed = 1687096224
llama.cpp: loading model from ./models/ggml-alpaca-7b-q4.bin
error loading model: unexpectedly reached end of file
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/ggml-alpaca-7b-q4.bin'
main: error: unable to load model
```
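In case it helps triage: "unexpectedly reached end of file" from llama.cpp usually means either the download was truncated or the file uses a model format older than what that llama.cpp build expects. A quick first check is to compare the on-disk size against the size shown on the model page and to look at the leading magic bytes of the header. A minimal sketch (the path is hypothetical, and the helper name is my own):

```python
import os

def inspect_model_file(path):
    """Report the size and leading magic bytes of a model file.

    A size smaller than the published file size suggests a truncated
    download; unexpected magic bytes suggest a format mismatch with
    the llama.cpp build being used.
    """
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        magic = f.read(4)  # first four bytes identify the file format
    return size, magic

# Hypothetical usage:
# size, magic = inspect_model_file("./models/ggml-alpaca-7b-q4.bin")
# print(f"size: {size} bytes, magic: {magic!r} ({magic.hex()})")
```

If the size matches the published file but the build still refuses to load it, re-converting or re-quantizing the model with the same llama.cpp version you run `./main` from is the usual fix.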

I am getting the same issue.
