KeyError: "filename 'storages' not found"

#17
by jiajia100 - opened

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    model_path, use_fast=self.use_fast_tokenizer, revision=revision
)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    low_cpu_mem_usage=True,
    **from_pretrained_kwargs,
)

log:

Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
Loading checkpoint shards: 14%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 1/7 [00:09<00:56, 9.38s/it]
Loading checkpoint shards: 29%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2/7 [00:18<00:45, 9.16s/it]
Loading checkpoint shards: 29%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2/7 [00:18<00:45, 9.19s/it]
2023-10-06 17:09:12,662 | ERROR | stderr |
2023-10-06 17:09:12,662 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 17:09:12,662 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 484, in load_state_dict
2023-10-06 17:09:12,663 | ERROR | stderr | return torch.load(checkpoint_file, map_location=map_location)
2023-10-06 17:09:12,663 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/torch/serialization.py", line 815, in load
2023-10-06 17:09:12,663 | ERROR | stderr | return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
2023-10-06 17:09:12,664 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/torch/serialization.py", line 1018, in _legacy_load
2023-10-06 17:09:12,664 | ERROR | stderr | return legacy_load(f)
2023-10-06 17:09:12,664 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/torch/serialization.py", line 904, in legacy_load
2023-10-06 17:09:12,665 | ERROR | stderr | tar.extract('storages', path=tmpdir)
2023-10-06 17:09:12,665 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/tarfile.py", line 2091, in extract
2023-10-06 17:09:12,665 | ERROR | stderr | tarinfo = self.getmember(member)
2023-10-06 17:09:12,665 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/tarfile.py", line 1813, in getmember
2023-10-06 17:09:12,666 | ERROR | stderr | raise KeyError("filename %r not found" % name)
2023-10-06 17:09:12,666 | ERROR | stderr | KeyError: "filename 'storages' not found"
2023-10-06 17:09:12,666 | ERROR | stderr |
2023-10-06 17:09:12,666 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-10-06 17:09:12,666 | ERROR | stderr |
2023-10-06 17:09:12,666 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 17:09:12,667 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 495, in load_state_dict
2023-10-06 17:09:12,667 | ERROR | stderr | raise ValueError(
2023-10-06 17:09:12,667 | ERROR | stderr | ValueError: Unable to locate the file models/CodeLlama-34b-Instruct-hf/pytorch_model-00003-of-00007.bin which is necessary to load this pretrained model. Make sure you have saved the model properly.
2023-10-06 17:09:12,667 | ERROR | stderr |
2023-10-06 17:09:12,667 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-10-06 17:09:12,667 | ERROR | stderr |
2023-10-06 17:09:12,668 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 17:09:12,668 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2023-10-06 17:09:12,668 | ERROR | stderr | return _run_code(code, main_globals, None,
2023-10-06 17:09:12,668 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/runpy.py", line 86, in _run_code
2023-10-06 17:09:12,668 | ERROR | stderr | exec(code, run_globals)
2023-10-06 17:09:12,668 | ERROR | stderr | File "/home/fastchat/serve/model_worker.py", line 467, in
2023-10-06 17:09:12,669 | ERROR | stderr | worker = ModelWorker(
2023-10-06 17:09:12,669 | ERROR | stderr | File "/home/fastchat/serve/model_worker.py", line 210, in init
2023-10-06 17:09:12,669 | ERROR | stderr | self.model, self.tokenizer = load_model(
2023-10-06 17:09:12,669 | ERROR | stderr | File "/home/fastchat/model/model_adapter.py", line 264, in load_model
2023-10-06 17:09:12,669 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2023-10-06 17:09:12,669 | ERROR | stderr | File "/home/fastchat/model/model_adapter.py", line 1280, in load_model
2023-10-06 17:09:12,670 | ERROR | stderr | model = AutoModelForCausalLM.from_pretrained(
2023-10-06 17:09:12,670 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
2023-10-06 17:09:12,670 | ERROR | stderr | return model_class.from_pretrained(
2023-10-06 17:09:12,670 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3307, in from_pretrained
2023-10-06 17:09:12,671 | ERROR | stderr | ) = cls._load_pretrained_model(
2023-10-06 17:09:12,671 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3681, in _load_pretrained_model
2023-10-06 17:09:12,672 | ERROR | stderr | state_dict = load_state_dict(shard_file)
2023-10-06 17:09:12,672 | ERROR | stderr | File "/opt/miniconda3/envs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 500, in load_state_dict
2023-10-06 17:09:12,673 | ERROR | stderr | raise OSError(
2023-10-06 17:09:12,673 | ERROR | stderr | OSError: Unable to load weights from pytorch checkpoint file for 'models/CodeLlama-34b-Instruct-hf/pytorch_model-00003-of-00007.bin' at 'models/CodeLlama-34b-Instruct-hf/pytorch_model-00003-of-00007.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
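For context on the failure chain in this traceback: torch.load first tries the modern zip-based checkpoint format and only falls back to _legacy_load when the file is not a valid zip archive, so the KeyError on 'storages' almost always means the shard on disk is corrupted — typically a truncated download or a Git LFS pointer file that was never actually fetched. A quick sanity check (a sketch; the shard path is taken from the traceback above):

import os
import zipfile

# Shard named in the traceback.
shard = "models/CodeLlama-34b-Instruct-hf/pytorch_model-00003-of-00007.bin"

# A healthy shard of this model is several GB; a Git LFS pointer is ~130 bytes.
print("size (bytes):", os.path.getsize(shard))

# Modern PyTorch checkpoints are zip archives; torch.load only enters the
# legacy tar path (where the 'storages' KeyError is raised) when this is False.
print("valid zip archive:", zipfile.is_zipfile(shard))

# Git LFS pointer files are plain text starting with a version header.
with open(shard, "rb") as f:
    print("LFS pointer:", f.read(64).startswith(b"version https://git-lfs"))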

Environment:
transformers 4.34
accelerate 0.23.0
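If the check above shows a truncated file or an LFS pointer, re-downloading just that shard should fix the load. A minimal sketch using huggingface_hub (it assumes the upstream repo id codellama/CodeLlama-34b-Instruct-hf and the local directory from the traceback; adjust both to your setup):

from huggingface_hub import snapshot_download

# Re-fetch only the damaged shard; force_download bypasses any cached copy.
snapshot_download(
    repo_id="codellama/CodeLlama-34b-Instruct-hf",
    local_dir="models/CodeLlama-34b-Instruct-hf",
    allow_patterns=["pytorch_model-00003-of-00007.bin"],
    force_download=True,
)

As a follow-up check, the re-downloaded file can be compared against the SHA256 listed on the file's page on the Hub.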
