runtime error

/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Downloading all files...
Files downloaded!
gguf_init_from_file: invalid magic characters '<!do'
llama_model_load: error loading model: llama_model_loader: failed to load model from ./ggml-model-Q4_K_M.gguf
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 71, in <module>
    MODEL = load_model()
  File "/home/user/app/app.py", line 60, in load_model
    model = Llama(
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 349, in __init__
    self._model = _LlamaModel(
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/_internals.py", line 57, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: ./ggml-model-Q4_K_M.gguf
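The "invalid magic characters '<!do'" line points at the likely cause: a real GGUF file starts with the 4-byte magic GGUF, while '<!do' is the start of '<!doctype html>', so what was saved as ./ggml-model-Q4_K_M.gguf appears to be an HTML page rather than the model weights. A minimal sketch of one way to check the file and re-download it through hf_hub_download (the repo id below is a placeholder, not taken from the logs; only the filename comes from the error above):

# Sketch: verify the downloaded file, then re-fetch it from the Hub.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

GGUF_FILE = "ggml-model-Q4_K_M.gguf"  # filename taken from the error above

# A genuine GGUF file begins with the magic bytes b"GGUF";
# b"<!do" means an HTML page was saved in its place.
with open(f"./{GGUF_FILE}", "rb") as f:
    print(f.read(4))  # expect b"GGUF"

# Re-fetch the actual model file; hf_hub_download resolves the LFS object
# instead of saving a web page.
model_path = hf_hub_download(
    repo_id="your-username/your-model-repo",  # placeholder repo id
    filename=GGUF_FILE,
)
model = Llama(model_path=model_path, n_ctx=2048)

If the download is done some other way (e.g. a raw URL or git clone without git-lfs), the same check on the first four bytes tells you whether the real weights arrived.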
