Fails on 104b-iq2_xxs.gguf with llama.cpp

#12
by telehan - opened

main: build = 2632 (b73e564b)
main: built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.4.0
main: seed = 1712819802
...
llm_load_print_meta: model ftype = IQ2_XXS - 2.0625 bpw
llm_load_print_meta: model params = 103.81 B
llm_load_print_meta: model size = 26.64 GiB (2.20 BPW)
llm_load_print_meta: general.name = 313aab747f8c3aefdd411b1f6a5a555dd421d9e8
llm_load_print_meta: BOS token = 5 ''
llm_load_print_meta: EOS token = 255001 '<|END_OF_TURN_TOKEN|>'
llm_load_print_meta: PAD token = 0 ''
llm_load_print_meta: LF token = 136 'Ä'
llm_load_tensors: ggml ctx size = 0.49 MiB
llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 642, got 514
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '~/c4ai-command-r-plus-iMat.GGUF/ggml-c4ai-command-r-plus-104b-iq2_xxs.gguf'
main: error: unable to load model

@telehan That commit won't work; you need at least commit 5dc9dd71.
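If you build llama.cpp from source, here is a minimal sketch of how to verify that a checkout already contains that commit and, if not, update and rebuild. The checkout path and the plain `make` build are assumptions about a default source setup:

```sh
# Inside your llama.cpp checkout (path is an assumption)
cd ~/llama.cpp
git fetch origin

# Exits 0 if commit 5dc9dd71 is an ancestor of your current HEAD
git merge-base --is-ancestor 5dc9dd71 HEAD \
  && echo "fix present in this build" \
  || echo "fix missing - update and rebuild"

# Update to the latest master and rebuild (default Makefile build)
git checkout master && git pull origin master
make clean && make
```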

I tested on the latest master as of yesterday; it seems the fix hasn't been merged yet?
https://github.com/ggerganov/llama.cpp/pull/6491
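One way to check from a clone whether that PR has landed on master is to search the commit subjects for the PR number; this assumes the usual GitHub squash-merge convention of appending "(#6491)" to the subject line:

```sh
# From a llama.cpp checkout: look for PR 6491 in master's history
# (assumes the squash-merge subject contains "(#6491)")
git fetch origin
git log --oneline --grep='#6491' origin/master
```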

I tested with the latest pre-release of Ollama and got the same error. I also tested the weights quantized without an importance matrix, and those work.

@telehan Take a look at this post -> https://www.reddit.com/r/LocalLLaMA/comments/1bymeyw/command_r_plus_104b_working_with_ollama_using/
This has nothing to do with whether the weights were quantized using an importance matrix. It has to do with Ollama using llama.cpp as a backend: you can be on the latest Ollama commit, but that doesn't mean it bundles the latest llama.cpp commit. Hopefully this helps.
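If you want to see which llama.cpp revision a particular Ollama build bundles, one way is to inspect the pinned submodule in the Ollama source tree. This is a sketch under assumptions: the submodule path llm/llama.cpp and the example tag are guesses and may differ between releases:

```sh
# Clone the Ollama source and check out the release you are running
# (the tag below is just an example)
git clone https://github.com/ollama/ollama && cd ollama
git checkout v0.1.31

# Print the llama.cpp commit the release pins
# (llm/llama.cpp is an assumption about the repo layout)
git submodule status llm/llama.cpp
```

If the pinned commit predates 5dc9dd71, the bundled backend won't be able to load this model no matter how recent the Ollama release itself is.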
