Can't use the model, it keeps telling me "The checkpoint you are trying to load has model type `vstream`"

#2 opened by roxqtang

When I run:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("IVGSZ/Flash-VStream-7b")
It returns an error. I have tried many ways to deploy the model locally and still get the same error. What might be wrong?

The detailed error is as follows:

/media/tang/Windows-Storage/HW/CV/Cv_Project/Flash-VStream/get_model.py
The token has not been saved to the git credentials helper. Pass add_to_git_credential=True in this function directly or --add-to-git-credential if using via huggingface-cli if you want to set the git credential as well.
Token is valid (permission: write).
Your token has been saved to /home/tang/.cache/huggingface/token
Login successful
Traceback (most recent call last):
File "/home/tang/.conda/envs/NYU-CV/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1038, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tang/.conda/envs/NYU-CV/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 740, in getitem
raise KeyError(key)
KeyError: 'vstream'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/media/tang/Windows-Storage/HW/CV/Cv_Project/Flash-VStream/get_model.py", line 7, in
model = AutoModelForCausalLM.from_pretrained("IVGSZ/Flash-VStream-7b")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tang/.conda/envs/NYU-CV/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 526, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tang/.conda/envs/NYU-CV/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1040, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type vstream but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
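From the message, my understanding is that `vstream` is a custom architecture that stock Transformers does not register, so `AutoConfig`/`AutoModelForCausalLM` cannot resolve it on their own. Here is a minimal sketch of the two workarounds I am trying, as alternatives, pick one (assumptions: for the first, that the Hub repo ships custom modeling code; for the second, that the Flash-VStream GitHub repo is installed locally, e.g. pip install -e . from a clone; the loader call mirrors the traceback further down this thread):

from transformers import AutoModelForCausalLM

# Option 1 (assumption: the Hub repo includes custom config/model code):
# let Transformers download and run the repo's own classes.
model = AutoModelForCausalLM.from_pretrained(
    "IVGSZ/Flash-VStream-7b",
    trust_remote_code=True,
)

# Option 2 (assumption: the Flash-VStream repo is installed locally):
# use the project's own loader, which registers the `vstream`
# architecture before loading the checkpoint. The positional call
# below is copied from the traceback in the next comment.
from flash_vstream.model.builder import load_pretrained_model

tokenizer, model, image_processor, context_len = load_pretrained_model(
    "IVGSZ/Flash-VStream-7b",  # model_path
    None,                      # model_base
    "Flash-VStream-7b",        # model_name
    False,                     # load_8bit
    False,                     # load_4bit
    device="cuda",
)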

I could not run bash scripts/realtime_cli.sh because of the following error:

(vstream) XXX@AMD:~/Flash-VStream$ bash scripts/realtime_cli.sh
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/site-packages/transformers/modeling_utils.py", line 535, in load_state_dict
return torch.load(
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/site-packages/torch/serialization.py", line 1383, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
Please file an issue with the following so that we can make weights_only=True compatible with your use case: WeightsUnpickler error: Unsupported operand 118

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/ahmed/Flash-VStream/flash_vstream/serve/cli_video_stream.py", line 351, in
main(args)
File "/home/ahmed/Flash-VStream/flash_vstream/serve/cli_video_stream.py", line 226, in main
tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, args.load_8bit, args.load_4bit, device=args.device)
File "/home/ahmed/Flash-VStream/flash_vstream/model/builder.py", line 98, in load_pretrained_model
model = VStreamLlamaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4224, in from_pretrained
) = cls._load_pretrained_model(
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4770, in _load_pretrained_model
state_dict = load_state_dict(
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/site-packages/transformers/modeling_utils.py", line 545, in load_state_dict
raise OSError(
OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run git lfs install followed by git lfs pull in the folder you cloned.
Traceback (most recent call last):
File "", line 1, in
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "/home/ahmed/anaconda3/envs/vstream/lib/python3.10/multiprocessing/synchronize.py", line 110, in setstate
self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory
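The final OSError looks like the real cause: cloning the checkpoint repo without git-lfs leaves small text pointer files in place of the multi-gigabyte weight shards, which is also why torch.load's weights-only unpickler fails first. Before re-running, a quick sanity check (a sketch; the checkpoint directory is a placeholder for wherever you cloned the weights) is to look for the Git LFS pointer header at the start of each .bin shard:

from pathlib import Path

# Placeholder: point this at your local clone of the checkpoint repo.
ckpt_dir = Path("Flash-VStream-7b")

# Git LFS pointer files are tiny text files that begin with this header;
# real weight shards are large binary files.
LFS_HEADER = b"version https://git-lfs.github.com/spec/v1"

for shard in sorted(ckpt_dir.glob("*.bin")):
    with open(shard, "rb") as f:
        head = f.read(len(LFS_HEADER))
    if head == LFS_HEADER:
        print(f"{shard.name}: LFS pointer only -> run `git lfs install` then `git lfs pull`")
    else:
        print(f"{shard.name}: looks like real weights ({shard.stat().st_size} bytes)")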
