[bug] llama.cpp error: 'check_tensor_dims' after trying to run GGUF in LM Studio

#1
by Milor123 - opened

Hi bro, I am trying to configure your model for use with LM Studio,

using this command:

python convert.py clonadosmios/WS_med_QA_Dolphin --outfile DolphinBIO-QA-q8_0.gguf --outtype q8_0 --vocab-type bpe

But when I try to run this in LM Studio I get this problem:
"llama.cpp error: 'check_tensor_dims: tensor 'token_embd.weight' has wrong shape; expected 4096, 128258, got 4096, 128256, 1, 1'"

The same thing occurs with your other dolphin model, WS_med_QA_DolphinBioLLM.

What should I do? Please help me.
Thank you very much
