/home/cfruan/.conda/envs/mlc-source-311/bin/python -m mlc_chat gen_config /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4 --quantization q4f32_1 --conv-template chatml --output /tmp/tmp1cqy774_
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for type is zero.
  return self._float_to_str(self.smallest_subnormal)
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for type is zero.
  return self._float_to_str(self.smallest_subnormal)
[2024-01-16 09:48:04] INFO auto_config.py:115: Found model configuration: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/config.json
[2024-01-16 09:48:04] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-01-16 09:48:04] INFO llama_model.py:51: context_window_size not found in config.json. Falling back to max_position_embeddings (2048)
[2024-01-16 09:48:04] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (2048)
[2024-01-16 09:48:04] INFO gen_config.py:117: [config.json] Setting bos_token_id: 1
[2024-01-16 09:48:04] INFO gen_config.py:117: [config.json] Setting eos_token_id: 2
[2024-01-16 09:48:04] INFO gen_config.py:129: Found tokenizer config: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/tokenizer.model. Copying to /tmp/tmp1cqy774_/tokenizer.model
[2024-01-16 09:48:04] INFO gen_config.py:129: Found tokenizer config: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/tokenizer.json. Copying to /tmp/tmp1cqy774_/tokenizer.json
[2024-01-16 09:48:04] INFO gen_config.py:131: Not found tokenizer config: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/vocab.json
[2024-01-16 09:48:04] INFO gen_config.py:131: Not found tokenizer config: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/merges.txt
[2024-01-16 09:48:04] INFO gen_config.py:129: Found tokenizer config: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/added_tokens.json. Copying to /tmp/tmp1cqy774_/added_tokens.json
[2024-01-16 09:48:04] INFO gen_config.py:129: Found tokenizer config: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/tokenizer_config.json. Copying to /tmp/tmp1cqy774_/tokenizer_config.json
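The gen_config invocation at the top of this log can also be driven from Python. A minimal sketch, reusing the model and output paths taken from the log above (substitute your own paths and interpreter):

import subprocess

# Sketch of the `mlc_chat gen_config` step shown in the log above.
# The directories below come from the log; replace them with your own.
model_dir = "/ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4"
out_dir = "/tmp/tmp1cqy774_"

subprocess.run(
    [
        "python", "-m", "mlc_chat", "gen_config", model_dir,
        "--quantization", "q4f32_1",
        "--conv-template", "chatml",
        "--output", out_dir,
    ],
    check=True,
)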
[2024-01-16 09:48:04] INFO gen_config.py:70: [System default] Setting pad_token_id: 0
[2024-01-16 09:48:04] INFO gen_config.py:70: [System default] Setting temperature: 0.7
[2024-01-16 09:48:04] INFO gen_config.py:70: [System default] Setting repetition_penalty: 1.0
[2024-01-16 09:48:04] INFO gen_config.py:70: [System default] Setting top_p: 0.95
[2024-01-16 09:48:04] INFO gen_config.py:70: [System default] Setting mean_gen_len: 128
[2024-01-16 09:48:04] INFO gen_config.py:70: [System default] Setting max_gen_len: 512
[2024-01-16 09:48:04] INFO gen_config.py:70: [System default] Setting shift_fill_factor: 0.3
[2024-01-16 09:48:04] INFO gen_config.py:159: Dumping configuration file to: /tmp/tmp1cqy774_/mlc-chat-config.json
/home/cfruan/.conda/envs/mlc-source-311/bin/python -m mlc_chat convert_weight /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4 --quantization q4f32_1 --source-format auto --output /tmp/tmp1cqy774_
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for type is zero.
  return self._float_to_str(self.smallest_subnormal)
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for type is zero.
  return self._float_to_str(self.smallest_subnormal)
[2024-01-16 09:48:05] INFO auto_config.py:115: Found model configuration: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/config.json
[2024-01-16 09:48:06] INFO auto_device.py:76: Found device: cuda:0
[2024-01-16 09:48:06] INFO auto_device.py:76: Found device: cuda:1
[2024-01-16 09:48:06] INFO auto_device.py:85: Not found device: rocm:0
[2024-01-16 09:48:06] INFO auto_device.py:85: Not found device: metal:0
[2024-01-16 09:48:07] INFO auto_device.py:76: Found device: vulkan:0
[2024-01-16 09:48:07] INFO auto_device.py:76: Found device: vulkan:1
[2024-01-16 09:48:07] INFO auto_device.py:76: Found device: vulkan:2
[2024-01-16 09:48:07] INFO auto_device.py:85: Not found device: opencl:0
[2024-01-16 09:48:07] INFO auto_device.py:33: Using device: cuda:0
[2024-01-16 09:48:07] INFO auto_weight.py:70: Finding weights in: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4
[2024-01-16 09:48:07] INFO auto_weight.py:129: Found source weight format: huggingface-torch. Source configuration: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/pytorch_model.bin
[2024-01-16 09:48:07] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/model.safetensors.index.json
[2024-01-16 09:48:07] INFO auto_weight.py:106: Using source weight configuration: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/pytorch_model.bin. Use `--source` to override.
[2024-01-16 09:48:07] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.
[2024-01-16 09:48:07] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
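For reference, the mlc-chat-config.json dumped above collects the values gen_config reported earlier in this log. A rough reconstruction of its contents (values come from the log; field names and layout are assumptions, not a verbatim copy of the generated file):

import json

# Values taken from the gen_config log above; the exact field names and
# structure of the real mlc-chat-config.json may differ.
mlc_chat_config = {
    "model_type": "llama",
    "quantization": "q4f32_1",
    "conv_template": "chatml",
    "context_window_size": 2048,
    "prefill_chunk_size": 2048,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "pad_token_id": 0,
    "temperature": 0.7,
    "repetition_penalty": 1.0,
    "top_p": 0.95,
    "mean_gen_len": 128,
    "max_gen_len": 512,
    "shift_fill_factor": 0.3,
    "tokenizer_files": [
        "tokenizer.model",
        "tokenizer.json",
        "added_tokens.json",
        "tokenizer_config.json",
    ],
}
print(json.dumps(mlc_chat_config, indent=2))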
[2024-01-16 09:48:07] INFO llama_model.py:51: context_window_size not found in config.json. Falling back to max_position_embeddings (2048)
[2024-01-16 09:48:07] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (2048)
[2024-01-16 09:48:11] INFO huggingface_loader.py:169: Loading HF parameters from: /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/pytorch_model.bin
Weight conversion with arguments:
  --config          /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      llama
  --device          cuda:0
  --source          /ssd1/cfruan/models/TinyLlama-1.1B-Chat-v0.4/pytorch_model.bin
  --source-format   huggingface-torch
  --output          /tmp/tmp1cqy774_
  0%| | 0/135 [00:00
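The GroupQuantize arguments above describe the q4f32_1 scheme: float32 weights are quantized in groups of 32 to signed 4-bit values (max_int_value=7), packed 8 values per uint32 word (4 words per group), with one float32 scale per group. A rough numpy sketch of that scheme, purely illustrative and not MLC's actual implementation:

import numpy as np

def quantize_q4f32_1(w, group_size=32, max_int=7):
    # Group quantization with the parameters reported in the log: groups of 32
    # float32 weights, signed 4-bit values, 8 values packed per uint32
    # (4 uint32 words per group), one float32 scale per group.
    groups = w.reshape(-1, group_size).astype(np.float32)
    scale = np.abs(groups).max(axis=1, keepdims=True) / max_int
    scale = np.where(scale == 0, np.float32(1.0), scale).astype(np.float32)
    q = np.clip(np.round(groups / scale), -max_int, max_int)
    q = (q + max_int).astype(np.uint32)          # shift into [0, 2 * max_int]
    packed = np.zeros((groups.shape[0], group_size // 8), dtype=np.uint32)
    for i in range(group_size):
        packed[:, i // 8] |= q[:, i] << np.uint32(4 * (i % 8))
    return packed, scale

def dequantize_q4f32_1(packed, scale, group_size=32, max_int=7):
    # Unpack the 4-bit values and undo the shift and scaling.
    num_groups = packed.shape[0]
    q = np.zeros((num_groups, group_size), dtype=np.float32)
    for i in range(group_size):
        q[:, i] = (packed[:, i // 8] >> np.uint32(4 * (i % 8))) & np.uint32(0xF)
    return (q - max_int) * scale

w = np.random.randn(4, 128).astype(np.float32)   # toy weight matrix
packed, scale = quantize_q4f32_1(w.reshape(-1))
w_hat = dequantize_q4f32_1(packed, scale).reshape(w.shape)
print("max abs reconstruction error:", np.abs(w - w_hat).max())

Running this on a random matrix shows a small per-weight reconstruction error bounded by half a quantization step, which is the trade-off the per-group float32 scale is there to control.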