/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm gen_config ../dist/models/ToolLLaMA-2-7b-v2 --quantization q4f32_1 --conv-template llama-2 --output /tmp/tmpxjsa38do --tensor-parallel-shards 2
[2024-03-18 21:03:53] INFO auto_config.py:115: Found model configuration: ../dist/models/ToolLLaMA-2-7b-v2/config.json
[2024-03-18 21:03:53] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-18 21:03:53] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-18 21:03:53] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
[2024-03-18 21:03:53] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-03-18 21:03:53] INFO config.py:106: Overriding tensor_parallel_shards from 1 to 2
[2024-03-18 21:03:53] INFO gen_config.py:133: [generation_config.json] Setting bos_token_id: 1
[2024-03-18 21:03:53] INFO gen_config.py:133: [generation_config.json] Setting eos_token_id: 2
[2024-03-18 21:03:53] INFO gen_config.py:145: Found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/tokenizer.model. Copying to /tmp/tmpxjsa38do/tokenizer.model
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/tokenizer.json
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/vocab.json
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/merges.txt
[2024-03-18 21:03:53] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/added_tokens.json
[2024-03-18 21:03:53] INFO gen_config.py:145: Found tokenizer config: ../dist/models/ToolLLaMA-2-7b-v2/tokenizer_config.json. Copying to /tmp/tmpxjsa38do/tokenizer_config.json
[2024-03-18 21:03:53] INFO gen_config.py:153: The model has `tokenizer.model` but not `tokenizer.json`. It is always recommended to prefer JSON instead. Attempting to convert using HuggingFace transformers library
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
[2024-03-18 21:03:54] INFO gen_config.py:167: Successfully converted `tokenizer.model` to: /tmp/tmpxjsa38do/tokenizer.json
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting pad_token_id: 0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting temperature: 0.7
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting presence_penalty: 0.0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting frequency_penalty: 0.0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting repetition_penalty: 1.0
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting top_p: 0.95
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting mean_gen_len: 128
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting max_gen_len: 512
[2024-03-18 21:03:54] INFO gen_config.py:75: [System default] Setting shift_fill_factor: 0.3
[2024-03-18 21:03:54] INFO gen_config.py:198: Dumping configuration file to: /tmp/tmpxjsa38do/mlc-chat-config.json
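Note (not part of the original log): the mlc-chat-config.json dumped above can be sanity-checked against the values reported during gen_config. A minimal Python sketch, assuming the temp output directory from this run; the key names are inferred from the flags and log messages, not verified against this mlc_llm version:

import json

with open("/tmp/tmpxjsa38do/mlc-chat-config.json") as f:
    cfg = json.load(f)

# Key names are assumptions based on the log above; .get() avoids a KeyError if a name differs.
for key in ("model_type", "quantization", "context_window_size", "tensor_parallel_shards"):
    print(key, "=", cfg.get(key))
# Expected from this run: llama, q4f32_1, 4096, 2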
/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm convert_weight ../dist/models/ToolLLaMA-2-7b-v2 --quantization q4f32_1 --source-format auto --output /tmp/tmpxjsa38do
[2024-03-18 21:03:55] INFO auto_config.py:115: Found model configuration: ../dist/models/ToolLLaMA-2-7b-v2/config.json
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:0
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:1
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:2
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:3
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:4
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:5
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:6
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:7
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:8
[2024-03-18 21:03:56] INFO auto_device.py:76: Found device: cuda:9
[2024-03-18 21:03:57] INFO auto_device.py:85: Not found device: rocm:0
[2024-03-18 21:03:58] INFO auto_device.py:85: Not found device: metal:0
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:0
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:1
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:2
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:3
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:4
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:5
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:6
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:7
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:8
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:9
[2024-03-18 21:04:02] INFO auto_device.py:76: Found device: vulkan:10
[2024-03-18 21:04:03] INFO auto_device.py:85: Not found device: opencl:0
[2024-03-18 21:04:03] INFO auto_device.py:33: Using device: cuda:0
[2024-03-18 21:04:03] INFO auto_weight.py:70: Finding weights in: ../dist/models/ToolLLaMA-2-7b-v2
[2024-03-18 21:04:03] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model.bin.index.json
[2024-03-18 21:04:03] INFO auto_weight.py:167: Not found Huggingface Safetensor
[2024-03-18 21:04:03] INFO auto_weight.py:106: Using source weight configuration: ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model.bin.index.json. Use `--source` to override.
[2024-03-18 21:04:03] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.
[2024-03-18 21:04:03] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-18 21:04:03] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-18 21:04:03] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
Weight conversion with arguments:
--config ../dist/models/ToolLLaMA-2-7b-v2/config.json
--quantization GroupQuantize(name='q4f32_1', kind='group-quant', group_size=40, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=5, max_int_value=7)
--model-type llama
--device cuda:0
--source ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model.bin.index.json
--source-format huggingface-torch
--output /tmp/tmpxjsa38do
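Note (not part of the original log): the derived GroupQuantize fields printed above follow directly from the declared dtypes and group size. A short arithmetic sketch in Python:

storage_bits = 32                                            # storage_dtype='uint32'
quant_bits = 4                                               # quantize_dtype='int4'
group_size = 40                                              # group_size=40 as printed above

num_elem_per_storage = storage_bits // quant_bits            # 32 / 4 = 8
num_storage_per_group = group_size // num_elem_per_storage   # 40 / 8 = 5
max_int_value = (1 << (quant_bits - 1)) - 1                  # 2**3 - 1 = 7 (signed 4-bit range)

assert (num_elem_per_storage, num_storage_per_group, max_int_value) == (8, 5, 7)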
Start storing to cache /tmp/tmpxjsa38do
[2024-03-18 21:04:05] INFO huggingface_loader.py:182: Loading HF parameters from: ../dist/models/ToolLLaMA-2-7b-v2/pytorch_model-00003-of-00003.bin
  0%|          | 0/195 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/floriadmin/mlc-llm/python/mlc_llm/__main__.py", line 47, in <module>
main()
File "/home/floriadmin/mlc-llm/python/mlc_llm/__main__.py", line 28, in main
cli.main(sys.argv[2:])
File "/home/floriadmin/mlc-llm/python/mlc_llm/cli/convert_weight.py", line 87, in main
convert_weight(
File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 182, in convert_weight
_convert_args(args)
File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 146, in _convert_args
tvmjs.dump_ndarray_cache(
File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/tvm/contrib/tvmjs.py", line 210, in dump_ndarray_cache
for k, origin_v in param_generator:
File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 130, in _param_generator
for name, param in loader.load(device=args.device, preshard_funcs=preshard_funcs):
File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 117, in load
param = self._load_mlc_param(mlc_name, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 147, in _load_mlc_param
self._load_file(path)
File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 186, in _load_file
for name, param in load_func(path):
File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/utils.py", line 42, in load_torch_shard
for name, param in torch.load(path, map_location=torch.device("cpu")).items():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 998, in load
with _open_file_like(f, 'rb') as opened_file:
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 445, in _open_file_like
return _open_file(name_or_buffer, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 426, in __init__
super().__init__(open(name, mode))
^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '../dist/models/ToolLLaMA-2-7b-v2/pytorch_model-00003-of-00003.bin'
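Note (not part of the original log): the conversion failed because the checkpoint index references pytorch_model-00003-of-00003.bin, which is missing from the model directory. A minimal sketch (illustrative helper, not mlc_llm code) for listing any shards referenced by the index that are absent on disk before re-running convert_weight:

import json, os

model_dir = "../dist/models/ToolLLaMA-2-7b-v2"
with open(os.path.join(model_dir, "pytorch_model.bin.index.json")) as f:
    index = json.load(f)

shards = sorted(set(index["weight_map"].values()))   # every shard file the index expects
missing = [s for s in shards if not os.path.isfile(os.path.join(model_dir, s))]
print("expected shards:", shards)
print("missing shards:", missing)   # this run would report pytorch_model-00003-of-00003.bin

Re-downloading the missing shard(s) into ../dist/models/ToolLLaMA-2-7b-v2 and re-running the convert_weight command above should resolve the FileNotFoundError.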