diff --git "a/logs.txt" "b/logs.txt" new file mode 100644--- /dev/null +++ "b/logs.txt" @@ -0,0 +1,393 @@ +/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_chat gen_config /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo --quantization q4f16_1 --conv-template codellama_instruct --output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpw9em1i59 --context-window-size 16384 +[2024-01-30 00:31:10] INFO auto_config.py:115: Found model configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/config.json +[2024-01-30 00:31:10] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override. +[2024-01-30 00:31:10] INFO llama_model.py:51: context_window_size not found in config.json. Falling back to max_position_embeddings (16384) +[2024-01-30 00:31:10] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (16384) +[2024-01-30 00:31:10] INFO config.py:106: Overriding context_window_size from 16384 to 16384 +[2024-01-30 00:31:10] INFO config.py:106: Overriding max_batch_size from 1 to 80 +[2024-01-30 00:31:10] INFO gen_config.py:116: [generation_config.json] Setting bos_token_id: 1 +[2024-01-30 00:31:10] INFO gen_config.py:116: [generation_config.json] Setting eos_token_id: 2 +[2024-01-30 00:31:10] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/tokenizer.model. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpw9em1i59/tokenizer.model +[2024-01-30 00:31:10] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/tokenizer.json. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpw9em1i59/tokenizer.json +[2024-01-30 00:31:10] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/vocab.json +[2024-01-30 00:31:10] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/merges.txt +[2024-01-30 00:31:10] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/added_tokens.json +[2024-01-30 00:31:10] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/tokenizer_config.json. 
Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpw9em1i59/tokenizer_config.json +[2024-01-30 00:31:10] INFO gen_config.py:69: [System default] Setting pad_token_id: 0 +[2024-01-30 00:31:10] INFO gen_config.py:69: [System default] Setting temperature: 0.7 +[2024-01-30 00:31:10] INFO gen_config.py:69: [System default] Setting repetition_penalty: 1.0 +[2024-01-30 00:31:10] INFO gen_config.py:69: [System default] Setting top_p: 0.95 +[2024-01-30 00:31:10] INFO gen_config.py:69: [System default] Setting mean_gen_len: 128 +[2024-01-30 00:31:10] INFO gen_config.py:69: [System default] Setting max_gen_len: 512 +[2024-01-30 00:31:10] INFO gen_config.py:69: [System default] Setting shift_fill_factor: 0.3 +[2024-01-30 00:31:10] INFO gen_config.py:158: Dumping configuration file to: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpw9em1i59/mlc-chat-config.json +/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_chat convert_weight /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo --quantization q4f16_1 --source-format auto --output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpw9em1i59 +[2024-01-30 00:31:11] INFO auto_config.py:115: Found model configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/config.json +[2024-01-30 00:31:11] INFO auto_device.py:85: Not found device: cuda:0 +[2024-01-30 00:31:11] INFO auto_device.py:85: Not found device: rocm:0 +[2024-01-30 00:31:12] INFO auto_device.py:76: Found device: metal:0 +[2024-01-30 00:31:12] INFO auto_device.py:85: Not found device: vulkan:0 +[2024-01-30 00:31:12] INFO auto_device.py:85: Not found device: opencl:0 +[2024-01-30 00:31:12] INFO auto_device.py:33: Using device: metal:0 +[2024-01-30 00:31:12] INFO auto_weight.py:70: Finding weights in: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo +[2024-01-30 00:31:12] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/pytorch_model.bin.index.json +[2024-01-30 00:31:12] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/model.safetensors.index.json +[2024-01-30 00:31:12] INFO auto_weight.py:106: Using source weight configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/pytorch_model.bin.index.json. Use `--source` to override. +[2024-01-30 00:31:12] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override. +[2024-01-30 00:31:12] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override. +[2024-01-30 00:31:12] INFO llama_model.py:51: context_window_size not found in config.json. 
Falling back to max_position_embeddings (16384) +[2024-01-30 00:31:12] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (16384) +Weight conversion with arguments: + --config /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/config.json + --quantization GroupQuantize(name='q4f16_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float16', linear_weight_layout='NK', num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7) + --model-type llama + --device metal:0 + --source /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmp3q4o77js/repo/pytorch_model.bin.index.json + --source-format huggingface-torch + --output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpw9em1i59 + 0%| | 0/195 [00:00