tensor 'token_embd.weight' has wrong shape

#1 by Esj-DL - opened

Hi, I'm getting an error when I load the model below.
Model: mixtral_spanish_ft.Q5_K_M.gguf
# Import path depends on the installed llama-index version; older releases expose it as shown below.
from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    model_path=pathMixtral,
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=False,
)

""""""""""""""""""
.....
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.38 MiB
error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected 4096, 32002, got 4096, 32000, 1, 1
llama_load_model_from_file: failed to load model
""""""""""""""""""

I found the same case reported here:
https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGUF/discussions/1

Is this problem due to the quantization, or how can it be solved? Thanks in advance.
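
For context, the message says the GGUF metadata declares a vocabulary of 32002 tokens while the stored token_embd.weight tensor only covers 32000, which typically points to a conversion mismatch (an extended tokenizer paired with an embedding matrix that was never resized) rather than to the quantization level itself. A rough way to confirm the mismatch locally, as a sketch assuming the gguf Python package (pip install gguf) and using the local file name as a placeholder path:

# Compare the declared vocabulary size with the embedding tensor shape stored in the GGUF file.
from gguf import GGUFReader

reader = GGUFReader("mixtral_spanish_ft.Q5_K_M.gguf")

# For array-typed metadata fields, `data` holds one index per element,
# so its length is the number of tokens the tokenizer declares.
tokens_field = reader.fields.get("tokenizer.ggml.tokens")
if tokens_field is not None:
    print("declared vocab size:", len(tokens_field.data))

# Shape of the embedding tensor as actually stored in the file.
for tensor in reader.tensors:
    if tensor.name == "token_embd.weight":
        print("token_embd.weight shape:", list(tensor.shape))

If the two numbers disagree (32002 vs. 32000 here), the usual fix is to re-convert the GGUF from a checkpoint whose embedding matrix matches its tokenizer.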

Hi @TheBloke, any clues on how to solve this? Thanks for everything!

Same here.

error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected  4096, 32002, got  4096, 32000,     1,     1

Same here :(

llama_model_load: error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected 4096, 32002, got 4096, 32000, 1, 1

Hi @TheBloke I have the same problem. Thank you in advance.

I'm having the same issue:

error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected 4096, 32002, got 4096, 32000, 1, 1
llama_load_model_from_file: failed to load model

Hi!! I'm having the same issue too:

error loading model: create_tensor: tensor 'token_embd.weight' has wrong shape; expected 4096, 32002, got 4096, 32000, 1, 1
llama_load_model_from_file: failed to load model

Same error here, any clues on how to solve this?
