llama_model_load: error loading model: error loading model vocabulary: wstring_convert::from_bytes

#3 · opened by jiyintor

I am using llama.cpp b3026, and I encountered the following issue:

E:/WorkingArea/llama_cpp/llama.cpp $ main --override-kv deepseek2.attention.q_lora_rank=int:1536 --override-kv deepseek2.attention.kv_lora_rank=int:512 --override-kv deepseek2.expert_shared_count=int:2 --override-kv deepseek2.expert_weights_scale=float:16 --override-kv deepseek2.expert_feed_forward_length=int:1536 --override-kv deepseek2.leading_dense_block_count=int:1 --override-kv deepseek2.rope.scaling.yarn_log_multiplier=float:0.0707 -m E:/model_tmps/DeepSeek-V2-Chat.q4_k_m.split-00001-of-00008.gguf  -c 128 --color -i
Log start
main: build = 3026 (5442939f)
main: built with cc (GCC) 14.1.0 for x86_64-w64-mingw32
main: seed  = 1717598755
llama_model_loader: additional 7 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 35 key-value pairs and 959 tensors from E:/model_tmps/DeepSeek-V2-Chat.q4_k_m.split-00001-of-00008.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.name str              = Deepseek-V2-Chat
llama_model_loader: - kv   2:                      deepseek2.block_count u32              = 60
llama_model_loader: - kv   3:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   4:                 deepseek2.embedding_length u32              = 5120
llama_model_loader: - kv   5:              deepseek2.feed_forward_length u32              = 12288
llama_model_loader: - kv   6:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   7:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   8:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv   9: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                deepseek2.expert_used_count u32              = 6
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:                       deepseek2.vocab_size u32              = 102400
llama_model_loader: - kv  13:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  14:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  15:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  16: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  17:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  18:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 160
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = deepseek-llm
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,102400]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,102400]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,99757]   = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 100000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 100001
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 100001
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  31:               general.quantization_version u32              = 2
llama_model_loader: - kv  32:                                   split.no u16              = 0
llama_model_loader: - kv  33:                                split.count u16              = 8
llama_model_loader: - kv  34:                        split.tensors.count i32              = 959
llama_model_loader: - type  f32:  300 tensors
llama_model_loader: - type q4_K:  599 tensors
llama_model_loader: - type q6_K:   60 tensors
validate_override: Using metadata override (  int) 'deepseek2.leading_dense_block_count' = 1
validate_override: Using metadata override (  int) 'deepseek2.attention.q_lora_rank' = 1536
validate_override: Using metadata override (  int) 'deepseek2.attention.kv_lora_rank' = 512
validate_override: Using metadata override (  int) 'deepseek2.expert_feed_forward_length' = 1536
validate_override: Using metadata override (  int) 'deepseek2.expert_shared_count' = 2
validate_override: Using metadata override (float) 'deepseek2.expert_weights_scale' = 16.000000
validate_override: Using metadata override (float) 'deepseek2.rope.scaling.yarn_log_multiplier' = 0.070700
llama_model_load: error loading model: error loading model vocabulary: wstring_convert::from_bytes
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'E:/model_tmps/DeepSeek-V2-Chat.q4_k_m.split-00001-of-00008.gguf'
main: error: unable to load model

Supplement: this occurred after downloading the Q4_K_M model.

jiyintor changed discussion title from key deepseek2.leading_dense_block_count has wrong type i32 but expected type u32 to llama_model_load: error loading model: error loading model vocabulary: wstring_convert::from_bytes

Try the Q3_K_M; I generated the Q4 a while ago, so it might be outdated.

It does not work:

(base) jiyin@jiyin:/media/jiyin/ResearchSpace1/llama.cpp$ ./main -m ../Deepseek-v2-chat-236B/DeepSeek-V2-Chat.Q3_K_M-00001-of-00006.gguf -p "hello"
Log start
main: build = 3026 (5442939f)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1717798418
llama_model_loader: additional 5 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 46 key-value pairs and 959 tensors from ../Deepseek-v2-chat-236B/DeepSeek-V2-Chat.Q3_K_M-00001-of-00006.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.name str              = Deepseek-V2-Chat
llama_model_loader: - kv   2:                      deepseek2.block_count u32              = 60
llama_model_loader: - kv   3:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   4:                 deepseek2.embedding_length u32              = 5120
llama_model_loader: - kv   5:              deepseek2.feed_forward_length u32              = 12288
llama_model_loader: - kv   6:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   7:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   8:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv   9: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                deepseek2.expert_used_count u32              = 6
llama_model_loader: - kv  11:                          general.file_type u32              = 12
llama_model_loader: - kv  12:                       deepseek2.vocab_size u32              = 102400
llama_model_loader: - kv  13:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  14:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  15:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  16: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  17:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  18:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 160
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = deepseek-llm
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,102400]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,102400]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,99757]   = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 100000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 100001
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 100001
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  31:               general.quantization_version u32              = 2
llama_model_loader: - kv  32:            deepseek2.attention.q_lora_rank i32              = 1536
llama_model_loader: - kv  33:           deepseek2.attention.kv_lora_rank i32              = 512
llama_model_loader: - kv  34:              deepseek2.expert_shared_count i32              = 2
llama_model_loader: - kv  35:             deepseek2.expert_weights_scale f32              = 16.000000
llama_model_loader: - kv  36:       deepseek2.expert_feed_forward_length i32              = 1536
llama_model_loader: - kv  37:        deepseek2.leading_dense_block_count i32              = 1
llama_model_loader: - kv  38: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.070700
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = imatrix.dat
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = groups_merged.txt
llama_model_loader: - kv  41:             quantize.imatrix.entries_count i32              = 716
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count i32              = 62
llama_model_loader: - kv  43:                                   split.no u16              = 0
llama_model_loader: - kv  44:                                split.count u16              = 6
llama_model_loader: - kv  45:                        split.tensors.count i32              = 959
llama_model_loader: - type  f32:  300 tensors
llama_model_loader: - type q3_K:  479 tensors
llama_model_loader: - type q4_K:  174 tensors
llama_model_loader: - type q5_K:    5 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_load: error loading model: error loading model hyperparameters: key deepseek2.leading_dense_block_count has wrong type i32 but expected type u32
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '../Deepseek-v2-chat-236B/DeepSeek-V2-Chat.Q3_K_M-00001-of-00006.gguf'
main: error: unable to load model

I know --override-kv should be added, but I want to load this model in Ollama, so that is not an option for me.

After multiple attempts, I found that regardless of the model (whether labeled as updated or not), running llama.cpp b3026 or later requires passing --override-kv for these parameters, because of type mismatches such as u32 vs. i32.

So, my question is: where do these parameter type differences come from? Do they arise during the conversion process or during the quantization process? If I start from your bf16 version, can I convert i32 to u32 when creating new model files?
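For reference, the mistyped keys can be listed with the gguf-py package that ships with llama.cpp. This is a minimal sketch, assuming gguf-py's `GGUFReader.get_field` / `field.types` interface as of the b3026 era; the key list matches the overrides used earlier in this discussion:

```python
from gguf import GGUFReader, GGUFValueType

SUSPECT_KEYS = [
    "deepseek2.attention.q_lora_rank",
    "deepseek2.attention.kv_lora_rank",
    "deepseek2.expert_shared_count",
    "deepseek2.expert_feed_forward_length",
    "deepseek2.leading_dense_block_count",
]

reader = GGUFReader("DeepSeek-V2-Chat.Q3_K_M-00001-of-00006.gguf")
for key in SUSPECT_KEYS:
    field = reader.get_field(key)
    if field is None:
        print(f"{key}: not present")
        continue
    # field.types holds the GGUFValueType(s) of the stored value
    vtype = field.types[0]
    flag = "" if vtype == GGUFValueType.UINT32 else "  <-- expected UINT32"
    print(f"{key}: {vtype.name}{flag}")
```

Against the Q3_K_M files above, this should report INT32 for all five keys, matching the loader error.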

Apologies for the late response; I was on vacation. Regarding the i32/u32 conversion error: I tried adding --override-kv flags to the quantize step, but for some reason it wrote those values as i32 when they were supposed to be u32 (the write itself doesn't fail, so I didn't notice). I'm not sure why it does this; it is probably a bug in llama.cpp, so it's best to open a bug report there.
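As a possible workaround until that is fixed: since INT32 and UINT32 are both 4-byte GGUF value types, the type tag can in principle be flipped in place without rewriting the file. Below is a hedged sketch modeled on llama.cpp's own gguf_set_metadata.py, which patches values through GGUFReader's writable memmap. It assumes that for a scalar KV the parts are laid out as [key_len, key_bytes, value_type, value], so the type word sits just before the value part indexed by field.data[0]; verify this against your gguf-py version, back up the file first, and run it against the split that actually stores the KV metadata (the first split in the logs above):

```python
from gguf import GGUFReader, GGUFValueType

KEYS_TO_FIX = [
    "deepseek2.attention.q_lora_rank",
    "deepseek2.attention.kv_lora_rank",
    "deepseek2.expert_shared_count",
    "deepseek2.expert_feed_forward_length",
    "deepseek2.leading_dense_block_count",
]

# Open with mode 'r+' so writes to the memmapped parts go back to the file.
reader = GGUFReader("DeepSeek-V2-Chat.Q3_K_M-00001-of-00006.gguf", "r+")
for key in KEYS_TO_FIX:
    field = reader.get_field(key)
    if field is None or field.types[0] != GGUFValueType.INT32:
        continue
    # Assumed layout: the uint32 type tag is the part just before the value.
    type_part = field.parts[field.data[0] - 1]
    type_part[0] = GGUFValueType.UINT32  # UINT32 = 4; INT32 = 5
    print(f"patched {key}: INT32 -> UINT32")
```

Because both types are the same width, no offsets shift and the rest of the file stays untouched, which is what makes the in-place edit viable at all.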
