runtime error

e 1608, in get_hf_file_metadata
    hf_raise_for_status(r)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/huggingface_hub/utils/", line 293, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-66663c9d-7a1a0cff4c1e9ed57a9b8878;c1900dd7-4585-4d86-a89f-1b738e4d959f)
Repository Not Found for url: 
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
User Access Token "Llama2 ilumio" is expired

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/app/", line 44, in <module>
    llm = LlmAgent(model="TheBloke/Llama-2-7B-chat-GPTQ",token=os.environ["TOKEN_HF"])
  File "/home/user/app/src/tools/", line 10, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False,token=token,legacy=False)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/auto/", line 701, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/auto/", line 534, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/utils/", line 450, in cached_file
    raise EnvironmentError(
OSError: TheBloke/Llama-2-7B-chat-GPTQ is not a local folder and is not a valid model identifier listed on ''
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
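
The root cause is stated mid-traceback: the user access token "Llama2 ilumio" behind TOKEN_HF has expired, so the Hub answers 401 and transformers surfaces it as "Repository Not Found". Below is a minimal sketch of the same tokenizer load, assuming a freshly issued token is stored in the TOKEN_HF secret; the whoami() check is only an illustrative way to validate the token up front and is not part of the original app code:

    import os

    from huggingface_hub import whoami
    from transformers import AutoTokenizer

    token = os.environ["TOKEN_HF"]  # same secret as in the traceback; must hold a non-expired token
    whoami(token=token)             # raises an HTTP error if the token is invalid or expired

    # Same call that fails at line 10 of the LlmAgent __init__ in the traceback
    tokenizer = AutoTokenizer.from_pretrained(
        "TheBloke/Llama-2-7B-chat-GPTQ",
        use_fast=False,
        token=token,
        legacy=False,
    )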
