error loading model
#1 by vikasrij - opened
Following the instructions on the model card for building llama.cpp and running the model yields:

llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 578, got 470

I'm just getting: llama_init_from_gpt_params: error: failed to load model 'granite-8b-code-instruct.Q8_0.gguf'
The Granite models are currently unsupported in llama.cpp, so the loader rejects the file when the tensor layout doesn't match any architecture it knows. There is an open feature request to add support: https://github.com/ggerganov/llama.cpp/issues/7116
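For anyone debugging a "wrong number of tensors" error, note that a GGUF file declares its own tensor count up front in the header (4-byte magic "GGUF", then a little-endian uint32 version, uint64 tensor count, and uint64 metadata-KV count, per the GGUF specification). A minimal sketch of reading that count, demonstrated here on a synthetic header rather than a real model file:

```python
import struct

def gguf_tensor_count(data: bytes) -> int:
    """Read the declared tensor count from the start of a GGUF file.

    GGUF header layout (per the GGUF spec): 4-byte magic b"GGUF",
    uint32 version, uint64 tensor_count, uint64 metadata_kv_count,
    all little-endian.
    """
    magic, version, tensor_count = struct.unpack_from("<4sIQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return tensor_count

# Synthetic header for illustration only: version 3, 470 tensors,
# 0 metadata entries -- mirroring the count reported in the error.
header = struct.pack("<4sIQQ", b"GGUF", 3, 470, 0)
print(gguf_tensor_count(header))  # 470
```

To check a real download, pass the first 24 bytes of the .gguf file instead of the synthetic header. If the declared count matches what llama.cpp reports it "got", the file is intact and the mismatch is on llama.cpp's side (its expected tensor list for the architecture), which is consistent with the missing Granite support above.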
YorkieOH10 changed discussion status to closed