runtime error

β–ˆβ–ˆ| 413M/413M [00:05<00:00, 76.7MB/s] Loading Q-Former Done Loading LLAMA /usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2023: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead. warnings.warn( Traceback (most recent call last): File "/home/user/app/app.py", line 64, in <module> model = model_cls.from_config(model_config).to('cuda:0') File "/home/user/app/minigpt4/models/mini_gpt4.py", line 239, in from_config model = cls( File "/home/user/app/minigpt4/models/mini_gpt4.py", line 90, in __init__ self.llama_tokenizer = LlamaTokenizer.from_pretrained('Vision-CAIR/vicuna-7b', use_fast=False, use_auth_token=True) File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2078, in from_pretrained resolved_config_file = cached_file( File "/usr/local/lib/python3.10/site-packages/transformers/utils/hub.py", line 402, in cached_file resolved_file = hf_hub_download( File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1184, in hf_hub_download headers = build_hf_headers( File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py", line 124, in build_hf_headers token_to_send = get_token_to_send(token) File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py", line 158, in get_token_to_send raise LocalTokenNotFoundError( huggingface_hub.errors.LocalTokenNotFoundError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.
