runtime error

/home/user/app/app.py:3: GradioDeprecationWarning: gr.Interface.load() will be deprecated. Use gr.load() instead.
  gr.Interface.load("models/meta-llama/Llama-2-7b-chat-hf", load_in_8bit=True, use_auth_token="hf_**********************************").launch()
Fetching model from: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    gr.Interface.load("models/meta-llama/Llama-2-7b-chat-hf", load_in_8bit=True, use_auth_token="hf_**********************************").launch()
  File "/home/user/.local/lib/python3.10/site-packages/gradio/interface.py", line 98, in load
    return external.load(
  File "/home/user/.local/lib/python3.10/site-packages/gradio/external.py", line 70, in load
    return load_blocks_from_repo(
  File "/home/user/.local/lib/python3.10/site-packages/gradio/external.py", line 109, in load_blocks_from_repo
    blocks: gradio.Blocks = factory_methods[src](name, hf_*****, alias, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/gradio/external.py", line 149, in from_model
    response.status_code == 200
AssertionError: Could not find model: meta-llama/Llama-2-7b-chat-hf. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the `api_key` parameter.
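The assertion fires because the token is not reaching the request that fetches the gated model. A minimal sketch of a possible fix, assuming a Gradio version whose `gr.load()` accepts an `hf_token` parameter (older releases named it `api_key`, as the error message suggests). The token value here is a hypothetical placeholder, and `load_in_8bit` is dropped on the assumption that `gr.load()` proxies the hosted Inference API rather than loading weights locally, so quantization kwargs are not applicable:

```python
import gradio as gr

# Hypothetical placeholder: substitute an access token from
# https://huggingface.co/settings/tokens whose account has been granted
# access to the gated meta-llama/Llama-2-7b-chat-hf repository.
HF_TOKEN = "hf_..."

# gr.load() replaces the deprecated gr.Interface.load(); hf_token passes
# the credential through to the model-fetch request.
demo = gr.load("models/meta-llama/Llama-2-7b-chat-hf", hf_token=HF_TOKEN)
demo.launch()
```

On Spaces, the token is better stored as a repository secret (e.g. read via `os.environ`) than hard-coded in `app.py`.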
