/home/rickzhou/miniconda3/envs/mlc/bin/python -m mlc_llm gen_config /ssd1/rickzhou/models/Phi-3-mini-4k-instruct --quantization q0f32 --conv-template phi-3 --output /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC
[2024-05-27 02:07:22] INFO auto_config.py:115: Found model configuration: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/config.json
[2024-05-27 02:07:22] INFO auto_config.py:153: Found model type: phi3. Use `--model-type` to override.
[2024-05-27 02:07:22] INFO phi3_model.py:53: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-05-27 02:07:22] INFO phi3_model.py:68: prefill_chunk_size defaults to 2048
[2024-05-27 02:07:22] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-05-27 02:07:22] INFO gen_config.py:142: [generation_config.json] Setting bos_token_id: 1
[2024-05-27 02:07:22] INFO gen_config.py:142: [generation_config.json] Setting eos_token_id: [32000, 32001, 32007]
[2024-05-27 02:07:22] INFO gen_config.py:142: [generation_config.json] Setting pad_token_id: 32000
[2024-05-27 02:07:22] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/tokenizer.model. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/tokenizer.model
[2024-05-27 02:07:22] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/tokenizer.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/tokenizer.json
[2024-05-27 02:07:22] INFO gen_config.py:156: Not found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/vocab.json
[2024-05-27 02:07:22] INFO gen_config.py:156: Not found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/merges.txt
[2024-05-27 02:07:22] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/added_tokens.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/added_tokens.json
[2024-05-27 02:07:22] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/tokenizer_config.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/tokenizer_config.json
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/ssd1/rickzhou/mlc-llm/python/mlc_llm/__main__.py", line 52, in <module>
    main()
  File "/ssd1/rickzhou/mlc-llm/python/mlc_llm/__main__.py", line 33, in main
    cli.main(sys.argv[2:])
  File "/ssd1/rickzhou/mlc-llm/python/mlc_llm/cli/gen_config.py", line 95, in main
    gen_config(
  File "/ssd1/rickzhou/mlc-llm/python/mlc_llm/interface/gen_config.py", line 214, in gen_config
    mlc_chat_config.tokenizer_info = asdict(Tokenizer.detect_tokenizer_info(str(output)))
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ssd1/rickzhou/mlc-llm/python/mlc_llm/tokenizer.py", line 114, in detect_tokenizer_info
    return TokenizerInfo.from_json(_ffi_api.DetectTokenizerInfo(tokenizer_path))  # type: ignore # pylint: disable=no-member
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'mlc_llm._ffi_api' has no attribute 'DetectTokenizerInfo'
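The AttributeError above is the compiled MLC runtime lagging behind the Python sources: mlc_llm._ffi_api binds its attributes to global functions that the C++ shared library registers at import time, so a build that predates DetectTokenizerInfo never exposes it. Rebuilding or reinstalling the package resolves it, as the clean re-run below shows. A quick probe for the missing binding (a sketch only; the "mlc." FFI prefix is an assumption about the namespace mlc_llm/_ffi_api.py registers under):

    # probe_ffi.py -- hedged sketch: check whether the compiled runtime
    # registered the global function that mlc_llm._ffi_api expects.
    import tvm
    import mlc_llm  # loads the shared library that registers the global functions

    func = tvm.get_global_func("mlc.DetectTokenizerInfo", allow_missing=True)
    print("registered" if func is not None else "missing: rebuild/reinstall mlc_llm")
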
/home/rickzhou/miniconda3/envs/mlc/bin/python -m mlc_llm gen_config /ssd1/rickzhou/models/Phi-3-mini-4k-instruct --quantization q0f32 --conv-template phi-3 --output /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC
[2024-05-27 03:04:28] INFO auto_config.py:115: Found model configuration: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/config.json
[2024-05-27 03:04:28] INFO auto_config.py:153: Found model type: phi3. Use `--model-type` to override.
[2024-05-27 03:04:28] INFO phi3_model.py:53: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-05-27 03:04:28] INFO phi3_model.py:68: prefill_chunk_size defaults to 2048
[2024-05-27 03:04:28] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-05-27 03:04:28] INFO gen_config.py:142: [generation_config.json] Setting bos_token_id: 1
[2024-05-27 03:04:28] INFO gen_config.py:142: [generation_config.json] Setting eos_token_id: [32000, 32001, 32007]
[2024-05-27 03:04:28] INFO gen_config.py:142: [generation_config.json] Setting pad_token_id: 32000
[2024-05-27 03:04:28] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/tokenizer.model. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/tokenizer.model
[2024-05-27 03:04:28] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/tokenizer.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/tokenizer.json
[2024-05-27 03:04:28] INFO gen_config.py:156: Not found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/vocab.json
[2024-05-27 03:04:28] INFO gen_config.py:156: Not found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/merges.txt
[2024-05-27 03:04:28] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/added_tokens.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/added_tokens.json
[2024-05-27 03:04:28] INFO gen_config.py:154: Found tokenizer config: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/tokenizer_config.json. Copying to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/tokenizer_config.json
[2024-05-27 03:04:28] INFO gen_config.py:215: Detected tokenizer info: {'token_postproc_method': 'byte_fallback', 'prepend_space_in_encode': True, 'strip_space_in_decode': True}
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting temperature: 1.0
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting presence_penalty: 0.0
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting frequency_penalty: 0.0
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting repetition_penalty: 1.0
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting top_p: 1.0
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting mean_gen_len: 128
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting max_gen_len: 512
[2024-05-27 03:04:28] INFO gen_config.py:31: [System default] Setting shift_fill_factor: 0.3
/home/rickzhou/miniconda3/envs/mlc/lib/python3.11/site-packages/pydantic/main.py:347: UserWarning: Pydantic serializer warnings:
  Expected `int` but got `list` - serialized value may not be as expected
  return self.__pydantic_serializer__.to_python(
[2024-05-27 03:04:28] INFO gen_config.py:222: Dumping configuration file to: /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/mlc-chat-config.json
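The Pydantic warning emitted just before the dump is benign: Phi-3's generation_config.json supplies eos_token_id as a list ([32000, 32001, 32007]) while the serialized config field is typed as a single int, so Pydantic notes the mismatch but still writes the value. A minimal sketch of that mechanism (the field name and schema here are illustrative, not MLC's actual config model):

    # pydantic_warning.py -- hedged sketch reproducing the serializer warning.
    from pydantic import BaseModel

    class ChatConfig(BaseModel):
        eos_token_id: int = 2  # declared as one id (illustrative schema)

    # model_construct() bypasses validation, so a list survives into the
    # int-typed field; model_dump() then emits the UserWarning seen above.
    cfg = ChatConfig.model_construct(eos_token_id=[32000, 32001, 32007])
    print(cfg.model_dump())
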
/home/rickzhou/miniconda3/envs/mlc/bin/python -m mlc_llm convert_weight /ssd1/rickzhou/models/Phi-3-mini-4k-instruct --quantization q0f32 --source-format auto --output /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC
[2024-05-27 03:04:29] INFO auto_config.py:115: Found model configuration: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/config.json
[2024-05-27 03:04:30] INFO auto_device.py:79: Found device: cuda:0
[2024-05-27 03:04:30] INFO auto_device.py:79: Found device: cuda:1
[2024-05-27 03:04:31] INFO auto_device.py:88: Not found device: rocm:0
[2024-05-27 03:04:32] INFO auto_device.py:88: Not found device: metal:0
[2024-05-27 03:04:34] INFO auto_device.py:79: Found device: vulkan:0
[2024-05-27 03:04:34] INFO auto_device.py:79: Found device: vulkan:1
[2024-05-27 03:04:34] INFO auto_device.py:79: Found device: vulkan:2
[2024-05-27 03:04:35] INFO auto_device.py:88: Not found device: opencl:0
[2024-05-27 03:04:35] INFO auto_device.py:35: Using device: cuda:0
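convert_weight probes each GPU backend in turn and settles on the first one found, which is what the Found/Not found lines above trace. A rough equivalent using TVM's public device API (a sketch, not MLC's actual auto_device code):

    # probe_devices.py -- hedged sketch of backend probing via TVM.
    import tvm

    for backend in ("cuda", "rocm", "metal", "vulkan", "opencl"):
        dev = tvm.device(backend, 0)
        # dev.exist is True only if TVM was built with the backend and a
        # physical device with that index is present.
        print(f"{backend}:0", "found" if dev.exist else "not found")
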
[Not quantized] Parameter: "transformer.h.23.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 6%|▌ | 12/195 [00:10<01:05, 2.79it/s] 7%|▋ | 14/195 [00:10<00:52, 3.46it/s] [2024-05-27 03:04:47] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.23.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 7%|▋ | 14/195 [00:10<00:52, 3.46it/s] 8%|▊ | 15/195 [00:11<01:00, 2.98it/s] [2024-05-27 03:04:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.23.post_attention_layernorm.weight", shape: (3072,), dtype: float32 8%|▊ | 15/195 [00:11<01:00, 2.98it/s] [2024-05-27 03:04:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.23.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 8%|▊ | 15/195 [00:11<01:00, 2.98it/s] 9%|▊ | 17/195 [00:11<00:42, 4.19it/s] [2024-05-27 03:04:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.23.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 9%|▊ | 17/195 [00:11<00:42, 4.19it/s] 9%|▉ | 18/195 [00:11<00:45, 3.93it/s] [2024-05-27 03:04:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.ln.weight", shape: (3072,), dtype: float32 9%|▉ | 18/195 [00:11<00:45, 3.93it/s] [2024-05-27 03:04:48] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 9%|▉ | 18/195 [00:11<00:45, 3.93it/s] 10%|█ | 20/195 [00:12<00:37, 4.70it/s] [2024-05-27 03:04:49] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 10%|█ | 20/195 [00:12<00:37, 4.70it/s] 11%|█ | 21/195 [00:12<00:48, 3.56it/s] [2024-05-27 03:04:49] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.post_attention_layernorm.weight", shape: (3072,), dtype: float32 11%|█ | 21/195 [00:12<00:48, 3.56it/s] [2024-05-27 03:04:49] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 11%|█ | 21/195 [00:12<00:48, 3.56it/s] 12%|█▏ | 23/195 [00:12<00:33, 5.06it/s] [2024-05-27 03:04:49] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.24.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 12%|█▏ | 23/195 [00:12<00:33, 5.06it/s] 12%|█▏ | 24/195 [00:13<00:37, 4.54it/s] [2024-05-27 03:04:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.ln.weight", shape: (3072,), dtype: float32 12%|█▏ | 24/195 [00:13<00:37, 4.54it/s] [2024-05-27 03:04:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 12%|█▏ | 24/195 [00:13<00:37, 4.54it/s] 13%|█▎ | 26/195 [00:13<00:32, 5.27it/s] [2024-05-27 03:04:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 13%|█▎ | 26/195 [00:13<00:32, 5.27it/s] 14%|█▍ | 27/195 [00:13<00:44, 3.78it/s] [2024-05-27 03:04:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.post_attention_layernorm.weight", shape: (3072,), dtype: float32 14%|█▍ | 27/195 [00:13<00:44, 3.78it/s] [2024-05-27 03:04:50] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 14%|█▍ | 27/195 [00:13<00:44, 3.78it/s] 15%|█▍ | 29/195 [00:13<00:30, 5.37it/s] [2024-05-27 03:04:51] 
INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.25.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 15%|█▍ | 29/195 [00:13<00:30, 5.37it/s] 15%|█▌ | 30/195 [00:14<00:35, 4.71it/s] [2024-05-27 03:04:51] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.ln.weight", shape: (3072,), dtype: float32 15%|█▌ | 30/195 [00:14<00:35, 4.71it/s] [2024-05-27 03:04:51] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 15%|█▌ | 30/195 [00:14<00:35, 4.71it/s] 16%|█▋ | 32/195 [00:14<00:30, 5.39it/s] [2024-05-27 03:04:51] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 16%|█▋ | 32/195 [00:14<00:30, 5.39it/s] 17%|█▋ | 33/195 [00:15<00:42, 3.82it/s] [2024-05-27 03:04:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.post_attention_layernorm.weight", shape: (3072,), dtype: float32 17%|█▋ | 33/195 [00:15<00:42, 3.82it/s] [2024-05-27 03:04:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 17%|█▋ | 33/195 [00:15<00:42, 3.82it/s] 18%|█▊ | 35/195 [00:15<00:29, 5.44it/s] [2024-05-27 03:04:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.26.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 18%|█▊ | 35/195 [00:15<00:29, 5.44it/s] 18%|█▊ | 36/195 [00:15<00:33, 4.76it/s] [2024-05-27 03:04:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.ln.weight", shape: (3072,), dtype: float32 18%|█▊ | 36/195 [00:15<00:33, 4.76it/s] [2024-05-27 03:04:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 18%|█▊ | 36/195 [00:15<00:33, 4.76it/s] 19%|█▉ | 38/195 [00:15<00:28, 5.45it/s] [2024-05-27 03:04:52] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 19%|█▉ | 38/195 [00:15<00:28, 5.45it/s] 20%|██ | 39/195 [00:16<00:40, 3.85it/s] [2024-05-27 03:04:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.post_attention_layernorm.weight", shape: (3072,), dtype: float32 20%|██ | 39/195 [00:16<00:40, 3.85it/s] [2024-05-27 03:04:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 20%|██ | 39/195 [00:16<00:40, 3.85it/s] 21%|██ | 41/195 [00:16<00:28, 5.48it/s] [2024-05-27 03:04:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.27.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 21%|██ | 41/195 [00:16<00:28, 5.48it/s] 22%|██▏ | 42/195 [00:16<00:32, 4.75it/s] [2024-05-27 03:04:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.ln.weight", shape: (3072,), dtype: float32 22%|██▏ | 42/195 [00:16<00:32, 4.75it/s] [2024-05-27 03:04:53] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 22%|██▏ | 42/195 [00:16<00:32, 4.75it/s] 23%|██▎ | 44/195 [00:17<00:27, 5.43it/s] [2024-05-27 03:04:54] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 23%|██▎ | 44/195 [00:17<00:27, 5.43it/s] 23%|██▎ | 45/195 
[00:17<00:39, 3.83it/s] [2024-05-27 03:04:54] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.post_attention_layernorm.weight", shape: (3072,), dtype: float32 23%|██▎ | 45/195 [00:17<00:39, 3.83it/s] [2024-05-27 03:04:54] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 23%|██▎ | 45/195 [00:17<00:39, 3.83it/s] 24%|██▍ | 47/195 [00:17<00:27, 5.44it/s] [2024-05-27 03:04:54] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.28.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 24%|██▍ | 47/195 [00:17<00:27, 5.44it/s] 25%|██▍ | 48/195 [00:18<00:30, 4.75it/s] [2024-05-27 03:04:55] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.ln.weight", shape: (3072,), dtype: float32 25%|██▍ | 48/195 [00:18<00:30, 4.75it/s] [2024-05-27 03:04:55] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 25%|██▍ | 48/195 [00:18<00:30, 4.75it/s] 26%|██▌ | 50/195 [00:18<00:26, 5.43it/s] [2024-05-27 03:04:55] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 26%|██▌ | 50/195 [00:18<00:26, 5.43it/s] 26%|██▌ | 51/195 [00:18<00:37, 3.82it/s] [2024-05-27 03:04:55] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.post_attention_layernorm.weight", shape: (3072,), dtype: float32 26%|██▌ | 51/195 [00:18<00:37, 3.82it/s] [2024-05-27 03:04:55] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 26%|██▌ | 51/195 [00:18<00:37, 3.82it/s] 27%|██▋ | 53/195 [00:18<00:26, 5.44it/s] [2024-05-27 03:04:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.29.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 27%|██▋ | 53/195 [00:18<00:26, 5.44it/s] 28%|██▊ | 54/195 [00:19<00:29, 4.74it/s] [2024-05-27 03:04:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.ln.weight", shape: (3072,), dtype: float32 28%|██▊ | 54/195 [00:19<00:29, 4.74it/s] [2024-05-27 03:04:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 28%|██▊ | 54/195 [00:19<00:29, 4.74it/s] 29%|██▊ | 56/195 [00:19<00:25, 5.38it/s] [2024-05-27 03:04:56] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 29%|██▊ | 56/195 [00:19<00:25, 5.38it/s] 29%|██▉ | 57/195 [00:20<00:36, 3.80it/s] [2024-05-27 03:04:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.post_attention_layernorm.weight", shape: (3072,), dtype: float32 29%|██▉ | 57/195 [00:20<00:36, 3.80it/s] [2024-05-27 03:04:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 29%|██▉ | 57/195 [00:20<00:36, 3.80it/s] 30%|███ | 59/195 [00:20<00:25, 5.41it/s] [2024-05-27 03:04:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.30.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 30%|███ | 59/195 [00:20<00:25, 5.41it/s] 31%|███ | 60/195 [00:20<00:28, 4.73it/s] [2024-05-27 03:04:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.ln.weight", shape: (3072,), dtype: 
float32 31%|███ | 60/195 [00:20<00:28, 4.73it/s] [2024-05-27 03:04:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 31%|███ | 60/195 [00:20<00:28, 4.73it/s] 32%|███▏ | 62/195 [00:20<00:24, 5.41it/s] [2024-05-27 03:04:57] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 32%|███▏ | 62/195 [00:20<00:24, 5.41it/s] 32%|███▏ | 63/195 [00:21<00:34, 3.82it/s] [2024-05-27 03:04:58] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.post_attention_layernorm.weight", shape: (3072,), dtype: float32 32%|███▏ | 63/195 [00:21<00:34, 3.82it/s] [2024-05-27 03:04:58] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 32%|███▏ | 63/195 [00:21<00:34, 3.82it/s] 33%|███▎ | 65/195 [00:21<00:23, 5.44it/s] [2024-05-27 03:04:58] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.31.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 33%|███▎ | 65/195 [00:21<00:23, 5.44it/s] 34%|███▍ | 66/195 [00:21<00:27, 4.74it/s] [2024-05-27 03:04:58] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.norm.weight", shape: (3072,), dtype: float32 34%|███▍ | 66/195 [00:21<00:27, 4.74it/s] [2024-05-27 03:04:58] INFO huggingface_loader.py:196: Unloading HF weight file: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/model-00002-of-00002.safetensors 34%|███▍ | 66/195 [00:21<00:27, 4.74it/s] [2024-05-27 03:04:59] INFO huggingface_loader.py:184: Loading HF parameters from: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/model-00001-of-00002.safetensors 34%|███▍ | 66/195 [00:22<00:27, 4.74it/s] [2024-05-27 03:05:09] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.embd.weight", shape: (32064, 3072), dtype: float32 34%|███▍ | 66/195 [00:32<00:27, 4.74it/s] 35%|███▍ | 68/195 [00:33<04:46, 2.25s/it] [2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.ln.weight", shape: (3072,), dtype: float32 35%|███▍ | 68/195 [00:33<04:46, 2.25s/it] [2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32 35%|███▍ | 68/195 [00:33<04:46, 2.25s/it] 36%|███▌ | 70/195 [00:33<03:09, 1.52s/it] [2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32 36%|███▌ | 70/195 [00:33<03:09, 1.52s/it] 36%|███▋ | 71/195 [00:33<02:43, 1.32s/it] [2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.post_attention_layernorm.weight", shape: (3072,), dtype: float32 36%|███▋ | 71/195 [00:33<02:43, 1.32s/it] [2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32 36%|███▋ | 71/195 [00:33<02:43, 1.32s/it] 37%|███▋ | 73/195 [00:33<01:44, 1.16it/s] [2024-05-27 03:05:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32 37%|███▋ | 73/195 [00:33<01:44, 1.16it/s] 38%|███▊ | 74/195 [00:34<01:30, 1.34it/s] [2024-05-27 03:05:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.ln.weight", shape: (3072,), dtype: float32 38%|███▊ | 
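The Unloading/Loading pair above shows the streaming strategy: the converter holds one .safetensors shard in memory at a time, converts the parameters mapped to it, then releases it before opening the next, which is why peak RAM (9.262 GB, per the stats at the end) stays well under the 14.235 GB total. A hedged sketch of the same access pattern with the safetensors library (convert() is a hypothetical stand-in for MLC's per-parameter work, not its actual loader):

    # stream_shards.py -- hedged sketch of shard-by-shard weight streaming.
    import json
    from safetensors.numpy import load_file

    MODEL = "/ssd1/rickzhou/models/Phi-3-mini-4k-instruct"

    def convert(name, tensor):
        # stand-in for the per-parameter work; q0f32 keeps float32 as-is
        print(name, tensor.shape, tensor.dtype)

    with open(f"{MODEL}/model.safetensors.index.json") as f:
        weight_map = json.load(f)["weight_map"]  # parameter name -> shard file

    # Group parameters by shard so each file is opened exactly once.
    shards = {}
    for name, shard in weight_map.items():
        shards.setdefault(shard, []).append(name)

    for shard, names in shards.items():
        tensors = load_file(f"{MODEL}/{shard}")  # one shard in RAM at a time
        for name in names:
            convert(name, tensors[name])
        del tensors  # release the shard before opening the next one
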
[2024-05-27 03:05:09] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.embd.weight", shape: (32064, 3072), dtype: float32
[2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:10] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.0.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:11] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.1.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.10.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.11.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.12.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.13.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.14.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.15.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.16.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.17.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.18.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.19.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.2.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.20.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.21.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.3.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.4.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.5.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.6.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.7.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.8.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
[2024-05-27 03:05:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.ln.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.mlp.down_proj.weight", shape: (3072, 8192), dtype: float32
[2024-05-27 03:05:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.mlp.gate_up_proj.weight", shape: (16384, 3072), dtype: float32
[2024-05-27 03:05:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.post_attention_layernorm.weight", shape: (3072,), dtype: float32
[2024-05-27 03:05:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.mixer.out_proj.weight", shape: (3072, 3072), dtype: float32
[2024-05-27 03:05:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "transformer.h.9.mixer.qkv_proj.weight", shape: (9216, 3072), dtype: float32
100%|██████████| 195/195 [00:58<00:00, 3.31it/s]
[2024-05-27 03:05:36] INFO huggingface_loader.py:196: Unloading HF weight file: /ssd1/rickzhou/models/Phi-3-mini-4k-instruct/model-00001-of-00002.safetensors
[2024-05-27 03:05:36] INFO stats.py:76: Time usage: HF loading: 16.976 sec; Pre-quantization mapping: 1.975 sec; Quantization: 0.000 sec
[2024-05-27 03:05:36] INFO stats.py:90: RAM usage: Peak RAM: 9.262 GB. Total bytes loaded from disk: 14.235 GB
[2024-05-27 03:05:36] INFO convert_weight.py:155: Parameter size after quantization: 14.235 GB
[2024-05-27 03:05:36] INFO convert_weight.py:160: Total parameters: 3,821,079,552
[2024-05-27 03:05:36] INFO convert_weight.py:161: Bits per parameter: 32.000
[2024-05-27 03:05:36] INFO convert_weight.py:166: Saved to directory: /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC
All finished, 130 total shards committed, record saved to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/ndarray-cache.json
Also saved a bf16 record to /ssd2/models/mlc-delivery/hf/mlc-ai/Phi-3-mini-4k-instruct-q0f32-MLC/ndarray-cache-b16.json
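A quick sanity check of the reported numbers: q0f32 applies no quantization and stores every weight as float32, so 3,821,079,552 parameters x 4 bytes = 15,284,318,208 bytes, which is about 14.235 GiB, matching both the bytes-loaded-from-disk figure and the post-quantization size, and giving exactly 32.000 bits per parameter:

    # size_check.py -- verify the convert_weight stats above.
    params = 3_821_079_552
    total_bytes = params * 4               # float32 = 4 bytes per weight
    print(total_bytes)                     # 15284318208
    print(round(total_bytes / 2**30, 3))   # 14.235 ("GB" in the log is GiB)
    print(8 * total_bytes / params)        # 32.0 bits per parameter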