Cannot load

#21
by Iommed - opened

File "C:\text-generation-webui-main\text-generation-webui-main\modules\ui_model_menu.py", line 245, in load_model_wrapper

shared.model, shared.tokenizer = load_model(selected_model, loader)

                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\text-generation-webui-main\text-generation-webui-main\modules\models.py", line 87, in load_model

output = load_func_maploader

     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\text-generation-webui-main\text-generation-webui-main\modules\models.py", line 261, in llamacpp_loader

model, tokenizer = LlamaCppModel.from_pretrained(model_file)

               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\text-generation-webui-main\text-generation-webui-main\modules\llamacpp_model.py", line 102, in from_pretrained

result.model = Llama(**params)

           ^^^^^^^^^^^^^^^

File "C:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama.py", line 311, in init

self._model = _LlamaModel(

          ^^^^^^^^^^^^

File "C:\text-generation-webui-main\text-generation-webui-main\installer_files\env\Lib\site-packages\llama_cpp_cuda_internals.py", line 55, in init

raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: models\Mixtral-8x22B-v0.1.Q4_K_M.gguf

Hi,
Do you have the latest llama.cpp? Also, I see you are trying to load "Mixtral-8x22B-v0.1.Q4_K_M.gguf"; did you merge the splits back into a single file yourself?
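
If you did combine the splits into a single file yourself, note that plain concatenation (cat on Linux, copy /b on Windows) will not produce a valid GGUF from these split files. As far as I know, the supported way to merge them is the gguf-split tool that ships with recent llama.cpp, roughly like this (the output name here is just an example):

./gguf-split --merge Mixtral-8x22B-v0.1.Q4_K_M-00001-of-00005.gguf Mixtral-8x22B-v0.1.Q4_K_M.gguf

Alternatively, keep the five splits together in the models folder and point the loader at the first one; that should work provided the llama.cpp / llama-cpp-python build is new enough to understand split GGUFs.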

Hi!

I cannot load it here either; I did a git pull and a fresh build of llama.cpp:

./main --color -m Mixtral-8x22B-v0.1.Q4_K_M-00001-of-00005.gguf --file prompt.txt
Log start
main: build = 2585 (f87f7b89)
main: built with cc (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0 for x86_64-linux-gnu
main: seed = 1712844801
llama_model_loader: additional 4 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 28 key-value pairs and 563 tensors from Mixtral-8x22B-v0.1.Q4_K_M-00001-of-00005.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = models--v2ray--Mixtral-8x22B-v0.1
llama_model_loader: - kv 2: llama.block_count u32 = 56
llama_model_loader: - kv 3: llama.context_length u32 = 65536
llama_model_loader: - kv 4: llama.embedding_length u32 = 6144
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: llama.attention.head_count u32 = 48
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 1000000,000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0,000010
llama_model_loader: - kv 10: llama.expert_count u32 = 8
llama_model_loader: - kv 11: llama.expert_used_count u32 = 2
llama_model_loader: - kv 12: general.file_type u32 = 15
llama_model_loader: - kv 13: llama.vocab_size u32 = 32000
llama_model_loader: - kv 14: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32000] = [0,000000, 0,000000, 0,000000, 0,0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - kv 25: split.no u16 = 0
llama_model_loader: - kv 26: split.count u16 = 5
llama_model_loader: - kv 27: split.tensors.count i32 = 563
llama_model_loader: - type f32: 113 tensors
llama_model_loader: - type f16: 56 tensors
llama_model_loader: - type q8_0: 112 tensors
llama_model_loader: - type q4_K: 197 tensors
llama_model_loader: - type q5_K: 56 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 65536
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_head = 48
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 56
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0,0e+00
llm_load_print_meta: f_norm_rms_eps = 1,0e-05
llm_load_print_meta: f_clamp_kqv = 0,0e+00
llm_load_print_meta: f_max_alibi_bias = 0,0e+00
llm_load_print_meta: f_logit_scale = 0,0e+00
llm_load_print_meta: n_ff = 16384
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000,0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 65536
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 140,62 B
llm_load_print_meta: model size = 79,71 GiB (4,87 BPW)
llm_load_print_meta: general.name = models--v2ray--Mixtral-8x22B-v0.1
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 4 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 3: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0,22 MiB
llama_model_load: error loading model: create_tensor: tensor 'blk.0.ffn_gate.0.weight' not found
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'Mixtral-8x22B-v0.1.Q4_K_M-00001-of-00005.gguf'
main: error: unable to load model

I googled the error and it seems similar to the error llama.cpp gave before it supported the initial Mixtral 8x7B. Maybe I am missing some flag to make it read the file as a Mixtral architecture?

Hi,

If you have a recent llama.cpp build (mine is 2644 (65c64dc3); yours is older), all the splits are in the same directory and were downloaded without corruption, and you have enough memory (lots of it!), llama.cpp will load all the splits automatically when you point it at the first one, exactly the way you are already invoking it. I did the same for the example in the README.
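
Since your build (2585) is older than mine, a clean pull and rebuild is worth trying first. A rough sketch (the CUDA make flag is an assumption about your setup; older trees used LLAMA_CUBLAS=1 instead):

cd llama.cpp
git pull
make clean
LLAMA_CUDA=1 make -j
# the build number is printed as "main: build = ..." when ./main starts, so you can confirm the refresh took effect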

I'll share my logs in case it helps:

./llama.cpp/main --color -m ./Mixtral-8x22B-v0.1/Mixtral-8x22B-v0.1.Q4_K_M-00001-of-00005.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e
Log start
main: build = 2644 (65c64dc3)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1712850215
llama_model_loader: additional 4 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 28 key-value pairs and 563 tensors from quantized/v2ray/Mixtral-8x22B-v0.1/Mixtral-8x22B-v0.1.Q4_K_M-00001-of-00005.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models--v2ray--Mixtral-8x22B-v0.1
llama_model_loader: - kv   2:                          llama.block_count u32              = 56
llama_model_loader: - kv   3:                       llama.context_length u32              = 65536
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 6144
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 16384
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 48
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                         llama.expert_count u32              = 8
llama_model_loader: - kv  11:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                           llama.vocab_size u32              = 32000
llama_model_loader: - kv  14:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - kv  25:                                   split.no u16              = 0
llama_model_loader: - kv  26:                                split.count u16              = 5
llama_model_loader: - kv  27:                        split.tensors.count i32              = 563
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   56 tensors
llama_model_loader: - type q8_0:  112 tensors
llama_model_loader: - type q4_K:  197 tensors
llama_model_loader: - type q5_K:   56 tensors
llama_model_loader: - type q6_K:   29 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 65536
llm_load_print_meta: n_embd           = 6144
llm_load_print_meta: n_head           = 48
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 56
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 6
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 16384
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 65536
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 140.62 B
llm_load_print_meta: model size       = 79.71 GiB (4.87 BPW)
llm_load_print_meta: general.name     = models--v2ray--Mixtral-8x22B-v0.1
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.39 MiB
llm_load_tensors:        CPU buffer size = 19056.00 MiB
llm_load_tensors:        CPU buffer size = 18787.08 MiB
llm_load_tensors:        CPU buffer size = 17107.45 MiB
llm_load_tensors:        CPU buffer size = 18763.73 MiB
llm_load_tensors:        CPU buffer size =  7906.91 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   112.00 MiB
llama_new_context_with_model: KV self size  =  112.00 MiB, K (f16):   56.00 MiB, V (f16):   56.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.12 MiB
llama_new_context_with_model:        CPU compute buffer size =   131.51 MiB
llama_new_context_with_model: graph nodes  = 2862
llama_new_context_with_model: graph splits = 1

system_info: n_threads = 64 / 128 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = 1024, n_keep = 1


 Building a website can be done in 10 simple steps:
Step 1: Choose the right website platform
Step 2: Select a hosting provider
Step 3: Pick a domain name
Step 4: Design your website
Step 5: Add content to your site
Step 6: Install extensions and apps
Step 7: Optimize your website
Step 8: Publish your website
Step 9: Analyze and improve your website
Step 10: Market your website

1. Choose the right website platform

The first step to building a website is choosing the right platform. There are many different platforms available, so it’s important to select one that meets your specific needs.

Some of the most popular website platforms include WordPress, Squarespace, Wix, and Weebly. Each platform has


llama_print_timings:        load time =   31652.21 ms
llama_print_timings:      sample time =       6.20 ms /   161 runs   (    0.04 ms per token, 25980.31 tokens per second)
llama_print_timings: prompt eval time =    5165.41 ms /    19 tokens (  271.86 ms per token,     3.68 tokens per second)
llama_print_timings:        eval time =  127630.57 ms /   160 runs   (  797.69 ms per token,     1.25 tokens per second)
llama_print_timings:       total time =  133279.50 ms /   179 tokens

Got it: something was wrong with my llama.cpp build. I did a clean pull and rebuild and it worked. Thanks!

Awesome! Thanks for confirming that it works; I was about to check the hashes to be sure I didn't screw up uploading them! :p
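
For anyone else who wants to rule out a corrupted download, a quick check along these lines should do it (assuming you compare against the SHA256 values shown on each split's file page on the Hub):

sha256sum Mixtral-8x22B-v0.1.Q4_K_M-0000?-of-00005.gguf
# each digest should match the checksum listed for the corresponding split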

Thanks to you for sharing the model!

These are my timings for Q4_K_M using 4x RTX 3090 at 8K context (one internal GPU is on a PCIe 3.0 x16 slot, the second internal on a PCIe 3.0 x4 slot, and the other two are connected via TB3 Razer Core X enclosures).

llama_print_timings: load time = 35240,34 ms
llama_print_timings: sample time = 39,88 ms / 1608 runs ( 0,02 ms per token, 40317,93 tokens per second)
llama_print_timings: prompt eval time = 24641,15 ms / 3500 tokens ( 7,04 ms per token, 142,04 tokens per second)
llama_print_timings: eval time = 75247,48 ms / 1607 runs ( 46,82 ms per token, 21,36 tokens per second)
llama_print_timings: total time = 100313,51 ms / 5107 tokens
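
In case it helps anyone reproduce these numbers, the invocation was roughly along the following lines (from memory, so treat the exact flags and paths as approximations rather than my literal command):

./main -m Mixtral-8x22B-v0.1.Q4_K_M-00001-of-00005.gguf -c 8192 -ngl 99 -ts 1,1,1,1 -f prompt.txt
# -c 8192 sets the 8K context, -ngl 99 offloads all layers to the GPUs, -ts 1,1,1,1 splits the tensors evenly across the four cards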

Regards!
