Help please: should deepseek2.expert_weights_scale=int:16 be deepseek2.expert_weights_scale=float:16?

#1
by wukongai - opened

Thanks for your work!

My log:

cllama --color -c 128000 --temp 0.7 --repeat_penalty 1.1 -n -1 -ins -t 64 --log-append --override-kv deepseek2.attention.q_lora_rank=int:1536 --override-kv deepseek2.attention.kv_lora_rank=int:512 --override-kv deepseek2.expert_shared_count=int:2 --override-kv deepseek2.expert_weights_scale=int:16 --override-kv deepseek2.expert_feed_forward_length=int:1536 --override-kv deepseek2.leading_dense_block_count=int:1 --override-kv deepseek2.rope.scaling.yarn_log_multiplier=float:0.0707 -m /raid5/models/DeepSeek-V2-Chat.bf16.gguf
Log start
main: build = 0 (unknown)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1716990908
llama_model_loader: loaded meta data with 32 key-value pairs and 959 tensors from /raid5/models/DeepSeek-V2-Chat.bf16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.name str = Deepseek-V2-Chat
llama_model_loader: - kv 2: deepseek2.block_count u32 = 60
llama_model_loader: - kv 3: deepseek2.context_length u32 = 163840
llama_model_loader: - kv 4: deepseek2.embedding_length u32 = 5120
llama_model_loader: - kv 5: deepseek2.feed_forward_length u32 = 12288
llama_model_loader: - kv 6: deepseek2.attention.head_count u32 = 128
llama_model_loader: - kv 7: deepseek2.attention.head_count_kv u32 = 128
llama_model_loader: - kv 8: deepseek2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: deepseek2.expert_used_count u32 = 6
llama_model_loader: - kv 11: general.file_type u32 = 32
llama_model_loader: - kv 12: deepseek2.vocab_size u32 = 102400
llama_model_loader: - kv 13: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 14: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 15: deepseek2.rope.scaling.factor f32 = 40.000000
llama_model_loader: - kv 16: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 17: deepseek2.attention.key_length u32 = 192
llama_model_loader: - kv 18: deepseek2.attention.value_length u32 = 128
llama_model_loader: - kv 19: deepseek2.expert_count u32 = 160
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = deepseek-llm
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,102400] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 100000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 100001
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 100001
llama_model_loader: - kv 28: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 29: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 30: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 31: general.quantization_version u32 = 2
llama_model_loader: - type f32: 300 tensors
llama_model_loader: - type bf16: 659 tensors
validate_override: Using metadata override ( int) 'deepseek2.leading_dense_block_count' = 1
validate_override: Using metadata override ( int) 'deepseek2.attention.q_lora_rank' = 1536
validate_override: Using metadata override ( int) 'deepseek2.attention.kv_lora_rank' = 512
validate_override: Using metadata override ( int) 'deepseek2.expert_feed_forward_length' = 1536
validate_override: Using metadata override ( int) 'deepseek2.expert_shared_count' = 2
validate_override: Warning: Bad metadata override type for key 'deepseek2.expert_weights_scale', expected float but got int
llama_model_load: error loading model: error loading model hyperparameters: key not found in model: deepseek2.expert_weights_scale
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '/raid5/models/DeepSeek-V2-Chat.bf16.gguf'
main: error: unable to load model

Sorry; that was a typo. deepseek2.expert_weights_scale is stored as a float, so llama.cpp rejects the int:16 override (that's the validate_override warning in your log) and then fails to load because the key is missing from the model's metadata. Just replace it with --override-kv deepseek2.expert_weights_scale=float:16 and it'll work.
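
For reference, here is your invocation with only that one flag corrected (everything else exactly as in your log):

cllama --color -c 128000 --temp 0.7 --repeat_penalty 1.1 -n -1 -ins -t 64 --log-append --override-kv deepseek2.attention.q_lora_rank=int:1536 --override-kv deepseek2.attention.kv_lora_rank=int:512 --override-kv deepseek2.expert_shared_count=int:2 --override-kv deepseek2.expert_weights_scale=float:16 --override-kv deepseek2.expert_feed_forward_length=int:1536 --override-kv deepseek2.leading_dense_block_count=int:1 --override-kv deepseek2.rope.scaling.yarn_log_multiplier=float:0.0707 -m /raid5/models/DeepSeek-V2-Chat.bf16.gguf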

I've also added these keys into the Q8_0 quant I'm currently uploading, so --override-kv is no longer required there.
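
If you want to verify that a given GGUF already carries these keys before dropping the overrides, here's a minimal sketch using the gguf Python package that ships with llama.cpp (pip install gguf). The model path is just the one from the log above; substitute your own file:

# Minimal sketch: list the deepseek2.* metadata keys (and their value
# types) stored in a GGUF file, using the gguf package from llama.cpp.
from gguf import GGUFReader

reader = GGUFReader("/raid5/models/DeepSeek-V2-Chat.bf16.gguf")

# reader.fields maps key names to ReaderField objects; here we only
# inspect the names and value types, not the values themselves.
for name, field in reader.fields.items():
    if name.startswith("deepseek2."):
        print(name, [t.name for t in field.types])

# If deepseek2.expert_weights_scale is listed with type FLOAT32,
# the --override-kv flag for it is no longer needed.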

Good work, thanks!!
