/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_chat gen_config /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo --quantization q4f16_1 --conv-template codellama_instruct --output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6 --context-window-size 16384
[2024-01-29 23:53:17] INFO auto_config.py:115: Found model configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/config.json
[2024-01-29 23:53:17] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-01-29 23:53:17] INFO llama_model.py:51: context_window_size not found in config.json. Falling back to max_position_embeddings (16384)
[2024-01-29 23:53:17] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (16384)
[2024-01-29 23:53:17] INFO config.py:106: Overriding context_window_size from 16384 to 16384
[2024-01-29 23:53:17] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-01-29 23:53:17] INFO gen_config.py:116: [generation_config.json] Setting bos_token_id: 1
[2024-01-29 23:53:17] INFO gen_config.py:116: [generation_config.json] Setting eos_token_id: 2
[2024-01-29 23:53:17] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/tokenizer.model. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6/tokenizer.model
[2024-01-29 23:53:17] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/tokenizer.json. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6/tokenizer.json
[2024-01-29 23:53:17] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/vocab.json
[2024-01-29 23:53:17] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/merges.txt
[2024-01-29 23:53:17] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/added_tokens.json
[2024-01-29 23:53:17] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/tokenizer_config.json. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6/tokenizer_config.json
[2024-01-29 23:53:17] INFO gen_config.py:69: [System default] Setting pad_token_id: 0
[2024-01-29 23:53:17] INFO gen_config.py:69: [System default] Setting temperature: 0.7
[2024-01-29 23:53:17] INFO gen_config.py:69: [System default] Setting repetition_penalty: 1.0
[2024-01-29 23:53:17] INFO gen_config.py:69: [System default] Setting top_p: 0.95
[2024-01-29 23:53:17] INFO gen_config.py:69: [System default] Setting mean_gen_len: 128
[2024-01-29 23:53:17] INFO gen_config.py:69: [System default] Setting max_gen_len: 512
[2024-01-29 23:53:17] INFO gen_config.py:69: [System default] Setting shift_fill_factor: 0.3
[2024-01-29 23:53:17] INFO gen_config.py:158: Dumping configuration file to: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6/mlc-chat-config.json
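gen_config writes no weights; it copies the tokenizer assets and records the chat/runtime defaults chosen above into mlc-chat-config.json. A minimal sketch for inspecting the result, assuming the key names match the log messages above (the exact schema can differ between mlc_chat versions):

    import json

    # Path comes from the --output argument of the gen_config run above.
    with open("/var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6/mlc-chat-config.json") as f:
        cfg = json.load(f)

    # Key names mirror the "Setting ..." log lines; .get() tolerates schema drift.
    for key in ("conv_template", "context_window_size", "temperature",
                "repetition_penalty", "top_p", "mean_gen_len", "max_gen_len"):
        print(key, "=", cfg.get(key))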
/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_chat convert_weight /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo --quantization q4f16_1 --source-format auto --output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6
[2024-01-29 23:53:18] INFO auto_config.py:115: Found model configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/config.json
[2024-01-29 23:53:18] INFO auto_device.py:85: Not found device: cuda:0
[2024-01-29 23:53:19] INFO auto_device.py:85: Not found device: rocm:0
[2024-01-29 23:53:19] INFO auto_device.py:76: Found device: metal:0
[2024-01-29 23:53:19] INFO auto_device.py:85: Not found device: vulkan:0
[2024-01-29 23:53:20] INFO auto_device.py:85: Not found device: opencl:0
[2024-01-29 23:53:20] INFO auto_device.py:33: Using device: metal:0
[2024-01-29 23:53:20] INFO auto_weight.py:70: Finding weights in: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo
[2024-01-29 23:53:20] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/pytorch_model.bin.index.json
[2024-01-29 23:53:20] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/model.safetensors.index.json
[2024-01-29 23:53:20] INFO auto_weight.py:106: Using source weight configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/pytorch_model.bin.index.json. Use `--source` to override.
[2024-01-29 23:53:20] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.
[2024-01-29 23:53:20] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-01-29 23:53:20] INFO llama_model.py:51: context_window_size not found in config.json. Falling back to max_position_embeddings (16384)
[2024-01-29 23:53:20] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (16384)
Weight conversion with arguments:
  --config          /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/config.json
  --quantization    GroupQuantize(name='q4f16_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float16', linear_weight_layout='NK', num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      llama
  --device          metal:0
  --source          /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp9sl8bm9n/repo/pytorch_model.bin.index.json
  --source-format   huggingface-torch
  --output          /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp6v51mnz6
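The GroupQuantize arguments spell out the q4f16_1 layout: every 32 float16 weights (group_size) share one float16 scale, each weight becomes a 4-bit code, 8 codes are packed into one uint32 word (num_elem_per_storage), and each group therefore occupies 4 uint32 words (num_storage_per_group). Below is a minimal numpy sketch of the arithmetic these parameters imply; it is not MLC's actual kernel, and the exact rounding/clipping MLC applies may differ:

    import numpy as np

    GROUP_SIZE = 32   # weights sharing one float16 scale
    PER_WORD = 8      # 4-bit codes packed per uint32 word
    MAX_INT = 7       # max_int_value: codes are offset by this to stay unsigned

    def quantize(w):
        w = w.astype(np.float32).reshape(-1, GROUP_SIZE)
        # One scale per group, chosen so the largest magnitude maps to +/-7.
        scale = np.maximum(np.abs(w).max(axis=1, keepdims=True) / MAX_INT, 1e-6)
        q = np.clip(np.round(w / scale) + MAX_INT, 0, 2 * MAX_INT)  # codes 0..14 fit in 4 bits
        q = q.astype(np.uint32).reshape(-1, PER_WORD)
        shifts = np.arange(PER_WORD, dtype=np.uint32) * 4
        packed = (q << shifts).sum(axis=1, dtype=np.uint32)   # -> the q_weight tensor
        return packed, scale.astype(np.float16)               # -> the q_scale tensor

    def dequantize(packed, scale):
        shifts = np.arange(PER_WORD, dtype=np.uint32) * 4
        q = (packed[:, None] >> shifts) & 0xF                 # unpack 8 codes per word
        w = (q.reshape(-1, GROUP_SIZE).astype(np.float32) - MAX_INT) * scale.astype(np.float32)
        return w.reshape(-1).astype(np.float16)

    w = np.random.randn(64).astype(np.float16)
    packed, scale = quantize(w)
    # Round-trip error is bounded by half a quantization step per group.
    assert np.abs(dequantize(packed, scale).astype(np.float32) - w.astype(np.float32)).max() <= scale.max()

This q_weight/q_scale pairing is exactly what the per-tensor "saving ..." lines later in the log show.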
  0%|          | 0/291 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/cfruan/Documents/mlc-llm-repos/mlc-llm-head/python/mlc_chat/__main__.py", line 47, in <module>
    main()
  File "/Users/cfruan/Documents/mlc-llm-repos/mlc-llm-head/python/mlc_chat/__main__.py", line 28, in main
    cli.main(sys.argv[2:])
  File "/Users/cfruan/Documents/mlc-llm-repos/mlc-llm-head/python/mlc_chat/cli/convert_weight.py", line 87, in main
    convert_weight(
  File "/Users/cfruan/Documents/mlc-llm-repos/mlc-llm-head/python/mlc_chat/interface/convert_weight.py", line 158, in convert_weight
    _convert_args(args)
  File "/Users/cfruan/Documents/mlc-llm-repos/mlc-llm-head/python/mlc_chat/interface/convert_weight.py", line 133, in _convert_args
    tvmjs.dump_ndarray_cache(
  File "/Users/cfruan/Documents/tvm-unity/python/tvm/contrib/tvmjs.py", line 212, in dump_ndarray_cache
    shard_manager.append(data, name=k, shape=shape, dtype=dtype, encode_format=encode_format)
  File "/Users/cfruan/Documents/tvm-unity/python/tvm/contrib/tvmjs.py", line 110, in append
    self._commit_internal(data, [rec])
  File "/Users/cfruan/Documents/tvm-unity/python/tvm/contrib/tvmjs.py", line 134, in _commit_internal
    outfile.write(data)
OSError: [Errno 28] No space left on device
[0387/0485] saving model.layers.31.self_attn.qkv_proj.q_scale
[0388/0485] saving model.layers.31.self_attn.o_proj.q_weight
[0389/0485] saving model.layers.31.self_attn.o_proj.q_scale
[0390/0485] saving model.layers.32.input_layernorm.weight
[0391/0485] saving model.layers.32.mlp.down_proj.q_weight
[0392/0485] saving model.layers.32.mlp.down_proj.q_scale
[0393/0485] saving model.layers.32.mlp.gate_up_proj.q_weight
[0394/0485] saving model.layers.32.mlp.gate_up_proj.q_scale
[0395/0485] saving model.layers.32.post_attention_layernorm.weight
[0396/0485] saving model.layers.32.self_attn.qkv_proj.q_weight
[0397/0485] saving model.layers.32.self_attn.qkv_proj.q_scale
[0398/0485] saving model.layers.32.self_attn.o_proj.q_weight
[0399/0485] saving model.layers.32.self_attn.o_proj.q_scale
[0400/0485] saving model.layers.33.input_layernorm.weight
[0401/0485] saving model.layers.33.mlp.down_proj.q_weight
[0402/0485] saving model.layers.33.mlp.down_proj.q_scale
[0403/0485] saving model.layers.33.mlp.gate_up_proj.q_weight
[0404/0485] saving model.layers.33.mlp.gate_up_proj.q_scale
[0405/0485] saving model.layers.33.post_attention_layernorm.weight
[0406/0485] saving model.layers.33.self_attn.qkv_proj.q_weight
[0407/0485] saving model.layers.33.self_attn.qkv_proj.q_scale
[0408/0485] saving model.layers.33.self_attn.o_proj.q_weight
[0409/0485] saving model.layers.33.self_attn.o_proj.q_scale
[0410/0485] saving model.layers.34.mlp.gate_up_proj.q_weight
[0411/0485] saving model.layers.34.mlp.gate_up_proj.q_scale
[0412/0485] saving model.layers.34.self_attn.qkv_proj.q_weight
[0413/0485] saving model.layers.34.self_attn.qkv_proj.q_scale
[0414/0485] saving model.layers.34.self_attn.o_proj.q_weight
[0415/0485] saving model.layers.34.self_attn.o_proj.q_scale
[0416/0485] saving model.layers.34.input_layernorm.weight
[0417/0485] saving model.layers.34.mlp.down_proj.q_weight
[0418/0485] saving model.layers.34.mlp.down_proj.q_scale
[0419/0485] saving model.layers.34.post_attention_layernorm.weight
[0420/0485] saving model.layers.35.input_layernorm.weight
[0421/0485] saving model.layers.35.mlp.down_proj.q_weight
[0422/0485] saving model.layers.35.mlp.down_proj.q_scale
[0423/0485] saving model.layers.35.mlp.gate_up_proj.q_weight
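The failure above is an ordinary [Errno 28]: the macOS per-user temp directory under /var/folders filled up mid-write, and the buffered "saving ..." progress lines were apparently flushed after the traceback. At roughly 4.5 bits per weight (a 4-bit code plus one float16 scale shared by 32 weights), q4f16_1 output is large enough that checking free space before converting into a temp directory is worthwhile. A hypothetical pre-flight check; the helper name and the 34e9 parameter count are placeholders, not part of mlc_chat:

    import shutil

    BITS_PER_PARAM = 4 + 16 / 32  # 4-bit code + fp16 scale per 32 weights = 4.5 bits

    def check_space(out_dir, n_params):
        # Rough lower bound: ignores unquantized norm/embedding tensors.
        need = n_params * BITS_PER_PARAM / 8  # bytes
        free = shutil.disk_usage(out_dir).free
        if free < need:
            raise OSError(f"need ~{need/2**30:.1f} GiB at {out_dir}, "
                          f"only {free/2**30:.1f} GiB free")

    check_space("/var/folders", 34e9)  # a 34e9-param model needs ~17.8 GiB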