/home/cfruan/.conda/envs/mlc-source-311/bin/python -m mlc_chat gen_config /ssd1/cfruan/models/stablelm-zephyr-3b --quantization q0f32 --conv-template stablelm-3b --output /tmp/tmpywap_r1m --context-window-size 4096
[2024-02-02 20:00:54] INFO auto_config.py:115: Found model configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/config.json
[2024-02-02 20:00:54] INFO auto_config.py:153: Found model type: stablelm_epoch. Use `--model-type` to override.
[2024-02-02 20:00:54] INFO stablelm_model.py:45: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-02-02 20:00:54] INFO stablelm_model.py:59: prefill_chunk_size defaults to context_window_size (4096)
[2024-02-02 20:00:54] INFO config.py:106: Overriding context_window_size from 4096 to 4096
[2024-02-02 20:00:54] WARNING config.py:99: Warning: Cannot override max_batch_size, because StableLMEpochConfig does not have this field
[2024-02-02 20:00:54] INFO gen_config.py:116: [generation_config.json] Setting bos_token_id: 0
[2024-02-02 20:00:54] INFO gen_config.py:116: [generation_config.json] Setting eos_token_id: 0
[2024-02-02 20:00:54] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/tokenizer.model
[2024-02-02 20:00:54] INFO gen_config.py:128: Found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/tokenizer.json. Copying to /tmp/tmpywap_r1m/tokenizer.json
[2024-02-02 20:00:54] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/vocab.json
[2024-02-02 20:00:54] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/merges.txt
[2024-02-02 20:00:54] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/added_tokens.json
[2024-02-02 20:00:54] INFO gen_config.py:128: Found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/tokenizer_config.json. Copying to /tmp/tmpywap_r1m/tokenizer_config.json
[2024-02-02 20:00:54] INFO gen_config.py:69: [System default] Setting pad_token_id: 0
[2024-02-02 20:00:54] INFO gen_config.py:69: [System default] Setting temperature: 0.7
[2024-02-02 20:00:54] INFO gen_config.py:69: [System default] Setting repetition_penalty: 1.0
[2024-02-02 20:00:54] INFO gen_config.py:69: [System default] Setting top_p: 0.95
[2024-02-02 20:00:54] INFO gen_config.py:69: [System default] Setting mean_gen_len: 128
[2024-02-02 20:00:54] INFO gen_config.py:69: [System default] Setting max_gen_len: 512
[2024-02-02 20:00:54] INFO gen_config.py:69: [System default] Setting shift_fill_factor: 0.3
[2024-02-02 20:00:54] INFO gen_config.py:158: Dumping configuration file to: /tmp/tmpywap_r1m/mlc-chat-config.json
/home/cfruan/.conda/envs/mlc-source-311/bin/python -m mlc_chat convert_weight /ssd1/cfruan/models/stablelm-zephyr-3b --quantization q0f32 --source-format auto --output /tmp/tmpywap_r1m
[2024-02-02 20:00:55] INFO auto_config.py:115: Found model configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/config.json
[2024-02-02 20:00:55] INFO auto_device.py:76: Found device: cuda:0
[2024-02-02 20:00:55] INFO auto_device.py:76: Found device: cuda:1
[2024-02-02 20:00:55] INFO auto_device.py:85: Not found device: rocm:0
[2024-02-02 20:00:56] INFO auto_device.py:85: Not found device: metal:0
[2024-02-02 20:00:56] INFO auto_device.py:76: Found device: vulkan:0
[2024-02-02 20:00:56] INFO auto_device.py:76: Found device: vulkan:1
[2024-02-02 20:00:56] INFO auto_device.py:76: Found device: vulkan:2
[2024-02-02 20:00:57] INFO auto_device.py:85: Not found device: opencl:0
[2024-02-02 20:00:57] INFO auto_device.py:33: Using device: cuda:0
[2024-02-02 20:00:57] INFO auto_weight.py:70: Finding weights in: /ssd1/cfruan/models/stablelm-zephyr-3b
[2024-02-02 20:00:57] INFO auto_weight.py:136: Not found Huggingface PyTorch
[2024-02-02 20:00:57] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/model.safetensors.index.json
[2024-02-02 20:00:57] INFO auto_weight.py:106: Using source weight configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/model.safetensors.index.json. Use `--source` to override.
[2024-02-02 20:00:57] INFO auto_weight.py:110: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-02-02 20:00:57] INFO auto_config.py:153: Found model type: stablelm_epoch. Use `--model-type` to override.
[2024-02-02 20:00:57] INFO stablelm_model.py:45: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-02-02 20:00:57] INFO stablelm_model.py:59: prefill_chunk_size defaults to context_window_size (4096)
Weight conversion with arguments:
  --config /ssd1/cfruan/models/stablelm-zephyr-3b/config.json
  --quantization NoQuantize(name='q0f32', kind='no-quant', model_dtype='float32')
  --model-type stablelm_epoch
  --device cuda:0
  --source /ssd1/cfruan/models/stablelm-zephyr-3b/model.safetensors.index.json
  --source-format huggingface-safetensor
  --output /tmp/tmpywap_r1m
  0%| | 0/260 [00:00<?, ?it/s] UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
return self._float_to_str(self.smallest_subnormal) Start storing to cache /tmp/tmpywap_r1m [0001/0260] saving lm_head.weight [0002/0260] saving model.embed_tokens.weight [0003/0260] saving model.layers.0.input_layernorm.bias [0004/0260] saving model.layers.0.input_layernorm.weight [0005/0260] saving model.layers.0.mlp.down_proj.weight [0006/0260] saving model.layers.0.mlp.gate_up_proj.weight [0007/0260] saving model.layers.0.post_attention_layernorm.bias [0008/0260] saving model.layers.0.post_attention_layernorm.weight [0009/0260] saving model.layers.0.self_attn.qkv_proj.weight [0010/0260] saving model.layers.0.self_attn.o_proj.weight [0011/0260] saving model.layers.1.input_layernorm.bias [0012/0260] saving model.layers.1.input_layernorm.weight [0013/0260] saving model.layers.1.mlp.down_proj.weight [0014/0260] saving model.layers.1.mlp.gate_up_proj.weight [0015/0260] saving model.layers.1.post_attention_layernorm.bias [0016/0260] saving model.layers.1.post_attention_layernorm.weight [0017/0260] saving model.layers.1.self_attn.qkv_proj.weight [0018/0260] saving model.layers.1.self_attn.o_proj.weight [0019/0260] saving model.layers.10.input_layernorm.bias [0020/0260] saving model.layers.10.input_layernorm.weight [0021/0260] saving model.layers.10.mlp.down_proj.weight [0022/0260] saving model.layers.10.mlp.gate_up_proj.weight [0023/0260] saving model.layers.10.post_attention_layernorm.bias [0024/0260] saving model.layers.10.post_attention_layernorm.weight [0025/0260] saving model.layers.10.self_attn.qkv_proj.weight [0026/0260] saving model.layers.10.self_attn.o_proj.weight [0027/0260] saving model.layers.11.input_layernorm.bias [0028/0260] saving model.layers.11.input_layernorm.weight [0029/0260] saving model.layers.11.mlp.down_proj.weight [0030/0260] saving model.layers.11.mlp.gate_up_proj.weight [0031/0260] saving model.layers.11.post_attention_layernorm.bias [0032/0260] saving model.layers.11.post_attention_layernorm.weight [0033/0260] saving model.layers.11.self_attn.qkv_proj.weight [0034/0260] saving model.layers.11.self_attn.o_proj.weight [0035/0260] saving model.layers.12.input_layernorm.bias [0036/0260] saving model.layers.12.input_layernorm.weight [0037/0260] saving model.layers.12.mlp.down_proj.weight [0038/0260] saving model.layers.12.mlp.gate_up_proj.weight [0039/0260] saving model.layers.12.post_attention_layernorm.bias [0040/0260] saving model.layers.12.post_attention_layernorm.weight [0041/0260] saving model.layers.12.self_attn.qkv_proj.weight [0042/0260] saving model.layers.12.self_attn.o_proj.weight [0043/0260] saving model.layers.13.input_layernorm.bias [0044/0260] saving model.layers.13.input_layernorm.weight [0045/0260] saving model.layers.13.mlp.down_proj.weight [0046/0260] saving model.layers.13.mlp.gate_up_proj.weight [0047/0260] saving model.layers.13.post_attention_layernorm.bias [0048/0260] saving model.layers.13.post_attention_layernorm.weight [0049/0260] saving model.layers.13.self_attn.qkv_proj.weight [0050/0260] saving model.layers.13.self_attn.o_proj.weight [0051/0260] saving model.layers.14.input_layernorm.bias [0052/0260] saving model.layers.14.input_layernorm.weight [0053/0260] saving model.layers.14.mlp.down_proj.weight [0054/0260] saving model.layers.14.mlp.gate_up_proj.weight [0055/0260] saving model.layers.14.post_attention_layernorm.bias [0056/0260] saving model.layers.14.post_attention_layernorm.weight [0057/0260] saving model.layers.14.self_attn.qkv_proj.weight [0058/0260] saving model.layers.14.self_attn.o_proj.weight [0059/0260] saving 
model.layers.15.input_layernorm.bias [0060/0260] saving model.layers.15.input_layernorm.weight [0061/0260] saving model.layers.15.mlp.down_proj.weight [0062/0260] saving model.layers.15.mlp.gate_up_proj.weight [0063/0260] saving model.layers.15.post_attention_layernorm.bias [0064/0260] saving model.layers.15.post_attention_layernorm.weight [0065/0260] saving model.layers.15.self_attn.qkv_proj.weight [0066/0260] saving model.layers.15.self_attn.o_proj.weight [0067/0260] saving model.layers.16.input_layernorm.bias [0068/0260] saving model.layers.16.input_layernorm.weight [0069/0260] saving model.layers.16.mlp.down_proj.weight [0070/0260] saving model.layers.16.mlp.gate_up_proj.weight [0071/0260] saving model.layers.16.post_attention_layernorm.bias [0072/0260] saving model.layers.16.post_attention_layernorm.weight [0073/0260] saving model.layers.16.self_attn.qkv_proj.weight [0074/0260] saving model.layers.16.self_attn.o_proj.weight [0075/0260] saving model.layers.17.input_layernorm.bias [0076/0260] saving model.layers.17.input_layernorm.weight [0077/0260] saving model.layers.17.mlp.down_proj.weight [0078/0260] saving model.layers.17.mlp.gate_up_proj.weight [0079/0260] saving model.layers.17.post_attention_layernorm.bias [0080/0260] saving model.layers.17.post_attention_layernorm.weight [0081/0260] saving model.layers.17.self_attn.qkv_proj.weight [0082/0260] saving model.layers.17.self_attn.o_proj.weight [0083/0260] saving model.layers.18.input_layernorm.bias [0084/0260] saving model.layers.18.input_layernorm.weight [0085/0260] saving model.layers.18.mlp.down_proj.weight [0086/0260] saving model.layers.18.mlp.gate_up_proj.weight [0087/0260] saving model.layers.18.post_attention_layernorm.bias [0088/0260] saving model.layers.18.post_attention_layernorm.weight [0089/0260] saving model.layers.18.self_attn.qkv_proj.weight [0090/0260] saving model.layers.18.self_attn.o_proj.weight [0091/0260] saving model.layers.19.input_layernorm.bias [0092/0260] saving model.layers.19.input_layernorm.weight [0093/0260] saving model.layers.19.mlp.down_proj.weight [0094/0260] saving model.layers.19.mlp.gate_up_proj.weight [0095/0260] saving model.layers.19.post_attention_layernorm.bias [0096/0260] saving model.layers.19.post_attention_layernorm.weight [0097/0260] saving model.layers.19.self_attn.qkv_proj.weight [0098/0260] saving model.layers.19.self_attn.o_proj.weight [0099/0260] saving model.layers.2.input_layernorm.bias [0100/0260] saving model.layers.2.input_layernorm.weight [0101/0260] saving model.layers.2.mlp.down_proj.weight [0102/0260] saving model.layers.2.mlp.gate_up_proj.weight [0103/0260] saving model.layers.2.post_attention_layernorm.bias [0104/0260] saving model.layers.2.post_attention_layernorm.weight [0105/0260] saving model.layers.2.self_attn.qkv_proj.weight [0106/0260] saving model.layers.2.self_attn.o_proj.weight [0107/0260] saving model.layers.20.input_layernorm.bias [0108/0260] saving model.layers.20.input_layernorm.weight [0109/0260] saving model.layers.20.mlp.down_proj.weight [0110/0260] saving model.layers.20.mlp.gate_up_proj.weight [0111/0260] saving model.layers.20.post_attention_layernorm.bias [0112/0260] saving model.layers.20.post_attention_layernorm.weight [0113/0260] saving model.layers.20.self_attn.qkv_proj.weight [0114/0260] saving model.layers.20.self_attn.o_proj.weight [0115/0260] saving model.layers.21.input_layernorm.bias [0116/0260] saving model.layers.21.input_layernorm.weight [0117/0260] saving model.layers.21.mlp.down_proj.weight [0118/0260] saving 
model.layers.21.mlp.gate_up_proj.weight [0119/0260] saving model.layers.21.post_attention_layernorm.bias [0120/0260] saving model.layers.21.post_attention_layernorm.weight [0121/0260] saving model.layers.21.self_attn.qkv_proj.weight [0122/0260] saving model.layers.21.self_attn.o_proj.weight [0123/0260] saving model.layers.22.input_layernorm.bias [0124/0260] saving model.layers.22.input_layernorm.weight [0125/0260] saving model.layers.22.mlp.down_proj.weight [0126/0260] saving model.layers.22.mlp.gate_up_proj.weight [0127/0260] saving model.layers.22.post_attention_layernorm.bias [0128/0260] saving model.layers.22.post_attention_layernorm.weight [0129/0260] saving model.layers.22.self_attn.qkv_proj.weight [0130/0260] saving model.layers.22.self_attn.o_proj.weight [0131/0260] saving model.layers.23.input_layernorm.bias [0132/0260] saving model.layers.23.input_layernorm.weight [0133/0260] saving model.layers.23.mlp.down_proj.weight [0134/0260] saving model.layers.23.mlp.gate_up_proj.weight [0135/0260] saving model.layers.23.post_attention_layernorm.bias [0136/0260] saving model.layers.23.post_attention_layernorm.weight [0137/0260] saving model.layers.23.self_attn.qkv_proj.weight [0138/0260] saving model.layers.23.self_attn.o_proj.weight [0139/0260] saving model.layers.24.input_layernorm.bias [0140/0260] saving model.layers.24.input_layernorm.weight [0141/0260] saving model.layers.24.mlp.down_proj.weight [0142/0260] saving model.layers.24.mlp.gate_up_proj.weight [0143/0260] saving model.layers.24.post_attention_layernorm.bias [0144/0260] saving model.layers.24.post_attention_layernorm.weight [0145/0260] saving model.layers.24.self_attn.qkv_proj.weight [0146/0260] saving model.layers.24.self_attn.o_proj.weight [0147/0260] saving model.layers.25.input_layernorm.bias [0148/0260] saving model.layers.25.input_layernorm.weight [0149/0260] saving model.layers.25.mlp.down_proj.weight [0150/0260] saving model.layers.25.mlp.gate_up_proj.weight [0151/0260] saving model.layers.25.post_attention_layernorm.bias [0152/0260] saving model.layers.25.post_attention_layernorm.weight [0153/0260] saving model.layers.25.self_attn.qkv_proj.weight [0154/0260] saving model.layers.25.self_attn.o_proj.weight [0155/0260] saving model.layers.26.input_layernorm.bias [0156/0260] saving model.layers.26.input_layernorm.weight [0157/0260] saving model.layers.26.mlp.down_proj.weight [0158/0260] saving model.layers.26.mlp.gate_up_proj.weight [0159/0260] saving model.layers.26.post_attention_layernorm.bias [0160/0260] saving model.layers.26.post_attention_layernorm.weight [0161/0260] saving model.layers.26.self_attn.qkv_proj.weight [0162/0260] saving model.layers.26.self_attn.o_proj.weight [0163/0260] saving model.layers.27.input_layernorm.bias [0164/0260] saving model.layers.27.input_layernorm.weight [0165/0260] saving model.layers.27.mlp.down_proj.weight [0166/0260] saving model.layers.27.mlp.gate_up_proj.weight [0167/0260] saving model.layers.27.post_attention_layernorm.bias [0168/0260] saving model.layers.27.post_attention_layernorm.weight [0169/0260] saving model.layers.27.self_attn.qkv_proj.weight [0170/0260] saving model.layers.27.self_attn.o_proj.weight [0171/0260] saving model.layers.28.input_layernorm.bias [0172/0260] saving model.layers.28.input_layernorm.weight [0173/0260] saving model.layers.28.mlp.down_proj.weight [0174/0260] saving model.layers.28.mlp.gate_up_proj.weight [0175/0260] saving model.layers.28.post_attention_layernorm.bias [0176/0260] saving model.layers.28.post_attention_layernorm.weight [0177/0260] 
saving model.layers.28.self_attn.qkv_proj.weight [0178/0260] saving model.layers.28.self_attn.o_proj.weight [0179/0260] saving model.layers.29.input_layernorm.bias [0180/0260] saving model.layers.29.input_layernorm.weight [0181/0260] saving model.layers.29.mlp.down_proj.weight [0182/0260] saving model.layers.29.mlp.gate_up_proj.weight [0183/0260] saving model.layers.29.post_attention_layernorm.bias [0184/0260] saving model.layers.29.post_attention_layernorm.weight [0185/0260] saving model.layers.29.self_attn.qkv_proj.weight [0186/0260] saving model.layers.29.self_attn.o_proj.weight [0187/0260] saving model.layers.3.input_layernorm.bias [0188/0260] saving model.layers.3.input_layernorm.weight [0189/0260] saving model.layers.3.mlp.down_proj.weight [0190/0260] saving model.layers.3.mlp.gate_up_proj.weight [0191/0260] saving model.layers.3.post_attention_layernorm.bias [0192/0260] saving model.layers.3.post_attention_layernorm.weight [0193/0260] saving model.layers.3.self_attn.qkv_proj.weight [0194/0260] saving model.layers.3.self_attn.o_proj.weight [0195/0260] saving model.layers.30.input_layernorm.bias [0196/0260] saving model.layers.30.input_layernorm.weight [0197/0260] saving model.layers.30.mlp.down_proj.weight [0198/0260] saving model.layers.30.mlp.gate_up_proj.weight [0199/0260] saving model.layers.30.post_attention_layernorm.bias [0200/0260] saving model.layers.30.post_attention_layernorm.weight [0201/0260] saving model.layers.30.self_attn.qkv_proj.weight [0202/0260] saving model.layers.30.self_attn.o_proj.weight [0203/0260] saving model.layers.31.input_layernorm.bias [0204/0260] saving model.layers.31.input_layernorm.weight [0205/0260] saving model.layers.31.mlp.down_proj.weight [0206/0260] saving model.layers.31.mlp.gate_up_proj.weight [0207/0260] saving model.layers.31.post_attention_layernorm.bias [0208/0260] saving model.layers.31.post_attention_layernorm.weight [0209/0260] saving model.layers.31.self_attn.qkv_proj.weight [0210/0260] saving model.layers.31.self_attn.o_proj.weight [0211/0260] saving model.layers.4.input_layernorm.bias [0212/0260] saving model.layers.4.input_layernorm.weight [0213/0260] saving model.layers.4.mlp.down_proj.weight [0214/0260] saving model.layers.4.mlp.gate_up_proj.weight [0215/0260] saving model.layers.4.post_attention_layernorm.bias [0216/0260] saving model.layers.4.post_attention_layernorm.weight [0217/0260] saving model.layers.4.self_attn.qkv_proj.weight [0218/0260] saving model.layers.4.self_attn.o_proj.weight [0219/0260] saving model.layers.5.input_layernorm.bias [0220/0260] saving model.layers.5.input_layernorm.weight [0221/0260] saving model.layers.5.mlp.down_proj.weight [0222/0260] saving model.layers.5.mlp.gate_up_proj.weight [0223/0260] saving model.layers.5.post_attention_layernorm.bias [0224/0260] saving model.layers.5.post_attention_layernorm.weight [0225/0260] saving model.layers.5.self_attn.qkv_proj.weight [0226/0260] saving model.layers.5.self_attn.o_proj.weight [0227/0260] saving model.layers.6.input_layernorm.bias [0228/0260] saving model.layers.6.input_layernorm.weight [0229/0260] saving model.layers.6.mlp.down_proj.weight [0230/0260] saving model.layers.6.mlp.gate_up_proj.weight [0231/0260] saving model.layers.6.post_attention_layernorm.bias [0232/0260] saving model.layers.6.post_attention_layernorm.weight [0233/0260] saving model.layers.6.self_attn.qkv_proj.weight [0234/0260] saving model.layers.6.self_attn.o_proj.weight [0235/0260] saving model.layers.7.input_layernorm.bias [0236/0260] saving model.layers.7.input_layernorm.weight 
[0237/0260] saving model.layers.7.mlp.down_proj.weight [0238/0260] saving model.layers.7.mlp.gate_up_proj.weight [0239/0260] saving model.layers.7.post_attention_layernorm.bias [0240/0260] saving model.layers.7.post_attention_layernorm.weight [0241/0260] saving model.layers.7.self_attn.qkv_proj.weight [0242/0260] saving model.layers.7.self_attn.o_proj.weight [0243/0260] saving model.layers.8.input_layernorm.bias [0244/0260] saving model.layers.8.input_layernorm.weight [0245/0260] saving model.layers.8.mlp.down_proj.weight [0246/0260] saving model.layers.8.mlp.gate_up_proj.weight [0247/0260] saving model.layers.8.post_attention_layernorm.bias [0248/0260] saving model.layers.8.post_attention_layernorm.weight [0249/0260] saving model.layers.8.self_attn.qkv_proj.weight [0250/0260] saving model.layers.8.self_attn.o_proj.weight [0251/0260] saving model.layers.9.input_layernorm.bias [0252/0260] saving model.layers.9.input_layernorm.weight [0253/0260] saving model.layers.9.mlp.down_proj.weight [0254/0260] saving model.layers.9.mlp.gate_up_proj.weight [0255/0260] saving model.layers.9.post_attention_layernorm.bias [0256/0260] saving model.layers.9.post_attention_layernorm.weight [0257/0260] saving model.layers.9.self_attn.qkv_proj.weight [0258/0260] saving model.layers.9.self_attn.o_proj.weight [0259/0260] saving model.norm.bias
[2024-02-02 20:01:41] INFO convert_weight.py:143: Saved to directory: /tmp/tmpywap_r1m
[0260/0260] saving model.norm.weight
All finished, 114 total shards committed, record saved to /tmp/tmpywap_r1m/ndarray-cache.json
Also saved a bf16 record to /tmp/tmpywap_r1m/ndarray-cache-b16.json
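The two commands above leave /tmp/tmpywap_r1m containing mlc-chat-config.json plus the converted weight shards indexed by ndarray-cache.json. Below is a minimal sketch of how one might sanity-check that output with only the standard library. It assumes the temporary --output directory from this run still exists (substitute your own path), and the JSON field names it reads ("conv_template", "temperature", a top-level "records" list, and so on) are assumptions inferred from the values logged above rather than a documented schema.

import json
from pathlib import Path

# Output directory from the run above; it is a temporary path specific to
# that invocation, so substitute the --output directory you actually used.
out_dir = Path("/tmp/tmpywap_r1m")

# gen_config dumped mlc-chat-config.json here; print a few fields whose
# values appear in the log (field names assumed to match the log labels).
chat_config = json.loads((out_dir / "mlc-chat-config.json").read_text())
for key in ("model_type", "quantization", "conv_template",
            "context_window_size", "temperature", "top_p"):
    print(f"{key}: {chat_config.get(key)}")

# convert_weight reported "114 total shards committed" and wrote its record
# to ndarray-cache.json; a top-level "records" list is assumed here.
cache = json.loads((out_dir / "ndarray-cache.json").read_text())
print("shards recorded:", len(cache.get("records", [])))

If the printed values line up with the log (context_window_size 4096, temperature 0.7, top_p 0.95, 114 shards), the directory holds a complete conversion output and can be used as the model directory for the next step.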