python -m mlc_chat gen_config /models/WizardMath-7B-V1.0 --quantization q4f16_1 --conv-template wizard_coder_or_math --output /tmp/tmp3rxs2lx1
[2023-12-17 15:53:19] INFO auto_config.py:116: Found model configuration: /models/WizardMath-7B-V1.0/config.json
[2023-12-17 15:53:19] INFO auto_config.py:155: Found model type: llama. Use `--model-type` to override.
[2023-12-17 15:53:19] INFO llama_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (2048)
[2023-12-17 15:53:19] INFO gen_config.py:114: [generation_config.json] Setting bos_token_id: 1
[2023-12-17 15:53:19] INFO gen_config.py:114: [generation_config.json] Setting eos_token_id: 2
[2023-12-17 15:53:19] INFO gen_config.py:114: [generation_config.json] Setting pad_token_id: 0
[2023-12-17 15:53:19] INFO gen_config.py:128: Not found tokenizer config: /models/WizardMath-7B-V1.0/tokenizer.model
[2023-12-17 15:53:19] INFO gen_config.py:126: Found tokenizer config: /models/WizardMath-7B-V1.0/tokenizer.json. Copying to /tmp/tmp3rxs2lx1/tokenizer.json
[2023-12-17 15:53:19] INFO gen_config.py:128: Not found tokenizer config: /models/WizardMath-7B-V1.0/vocab.json
[2023-12-17 15:53:19] INFO gen_config.py:128: Not found tokenizer config: /models/WizardMath-7B-V1.0/merges.txt
[2023-12-17 15:53:19] INFO gen_config.py:126: Found tokenizer config: /models/WizardMath-7B-V1.0/added_tokens.json. Copying to /tmp/tmp3rxs2lx1/added_tokens.json
[2023-12-17 15:53:19] INFO gen_config.py:126: Found tokenizer config: /models/WizardMath-7B-V1.0/tokenizer_config.json. Copying to /tmp/tmp3rxs2lx1/tokenizer_config.json
[2023-12-17 15:53:19] INFO gen_config.py:68: [System default] Setting temperature: 0.7
[2023-12-17 15:53:19] INFO gen_config.py:68: [System default] Setting repetition_penalty: 1.0
[2023-12-17 15:53:19] INFO gen_config.py:68: [System default] Setting top_p: 0.95
[2023-12-17 15:53:19] INFO gen_config.py:68: [System default] Setting mean_gen_len: 128
[2023-12-17 15:53:19] INFO gen_config.py:68: [System default] Setting max_gen_len: 512
[2023-12-17 15:53:19] INFO gen_config.py:68: [System default] Setting shift_fill_factor: 0.3
[2023-12-17 15:53:19] INFO gen_config.py:156: Dumping configuration file to: /tmp/tmp3rxs2lx1/mlc-chat-config.json

python -m mlc_chat convert_weight /models/WizardMath-7B-V1.0 --quantization q4f16_1 --source-format auto --output /tmp/tmp3rxs2lx1
[2023-12-17 15:53:19] INFO auto_config.py:116: Found model configuration: /models/WizardMath-7B-V1.0/config.json
[2023-12-17 15:53:20] INFO auto_device.py:75: Found device: cuda:0
[2023-12-17 15:53:20] INFO auto_device.py:75: Found device: cuda:1
[2023-12-17 15:53:20] INFO auto_device.py:84: Not found device: rocm:0
[2023-12-17 15:53:20] INFO auto_device.py:84: Not found device: metal:0
[2023-12-17 15:53:20] INFO auto_device.py:75: Found device: vulkan:0
[2023-12-17 15:53:20] INFO auto_device.py:75: Found device: vulkan:1
[2023-12-17 15:53:20] INFO auto_device.py:75: Found device: vulkan:2
[2023-12-17 15:53:21] INFO auto_device.py:84: Not found device: opencl:0
[2023-12-17 15:53:21] INFO auto_device.py:33: Using device: cuda:0
[2023-12-17 15:53:21] INFO auto_weight.py:70: Finding weights in: /models/WizardMath-7B-V1.0
[2023-12-17 15:53:21] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: /models/WizardMath-7B-V1.0/pytorch_model.bin.index.json
[2023-12-17 15:53:21] INFO auto_weight.py:149: Not found Huggingface Safetensor
[2023-12-17 15:53:21] INFO auto_weight.py:106: Using source weight configuration: /models/WizardMath-7B-V1.0/pytorch_model.bin.index.json. Use `--source` to override.
[2023-12-17 15:53:21] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.
[2023-12-17 15:53:21] INFO auto_config.py:155: Found model type: llama. Use `--model-type` to override.
[2023-12-17 15:53:21] INFO llama_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (2048)
Weight conversion with arguments:
  --config          /models/WizardMath-7B-V1.0/config.json
  --quantization    GroupQuantize(name='q4f16_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float16', num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      llama
  --device          cuda:0
  --source          /models/WizardMath-7B-V1.0/pytorch_model.bin.index.json
  --source-format   huggingface-torch
  --output          /tmp/tmp3rxs2lx1
  0%|          | 0/195 [00:00
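
Once `gen_config` and `convert_weight` have produced the chat configuration and the q4f16_1 weight shards (and a model library has been compiled separately for the target device), the artifacts can be loaded through the `mlc_chat` Python API. The snippet below is a minimal sketch, not taken from the logs above: it assumes the outputs were copied from the temporary `/tmp/tmp3rxs2lx1` directory to a persistent path such as `dist/WizardMath-7B-V1.0-q4f16_1-MLC`, and that a matching CUDA model library exists at the illustrative path shown.

```python
from mlc_chat import ChatModule
from mlc_chat.callback import StreamToStdout

# Both paths are illustrative assumptions: the directory holding the
# mlc-chat-config.json and quantized weights produced above, and a model
# library compiled separately for the target GPU.
cm = ChatModule(
    model="dist/WizardMath-7B-V1.0-q4f16_1-MLC",
    model_lib_path="dist/libs/WizardMath-7B-V1.0-q4f16_1-cuda.so",
)

# Stream one completion to stdout, using the wizard_coder_or_math chat
# template and the generation defaults recorded in mlc-chat-config.json.
cm.generate(
    prompt="What is the derivative of x^2?",
    progress_callback=StreamToStdout(callback_interval=2),
)
```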