Runtime error

100%|██████████| 4.47G/4.47G [00:16<00:00, 292MB/s]
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from /home/user/.cache/huggingface/hub/models--ningshanwutuobang--ggml-pandagpt-vicuna-merge/snapshots/85ab300c48d8acdaf91a978429960d5130969de0/ggml-pandagpt-vicuna-q4_1.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    a = PandaGPT((vicuna_path,))
  File "/home/user/app/panda_gpt.py", line 27, in __init__
    self.model = llama_cpp.Llama(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 340, in __init__
    assert self.model is not None
AssertionError
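For context, the magic number 67676a74 reported above is the legacy GGML "ggjt" container magic, while gguf_init_from_file in current llama.cpp / llama-cpp-python builds only accepts GGUF files, so the downloaded ggml-pandagpt-vicuna-q4_1.bin cannot be opened by this version of the library. Below is a minimal diagnostic sketch (Python; the model path is copied from the traceback above, everything else is illustrative) that inspects the file's first four bytes before attempting a load:

    from pathlib import Path

    # Path taken verbatim from the error log above.
    MODEL_PATH = Path(
        "/home/user/.cache/huggingface/hub/"
        "models--ningshanwutuobang--ggml-pandagpt-vicuna-merge/snapshots/"
        "85ab300c48d8acdaf91a978429960d5130969de0/ggml-pandagpt-vicuna-q4_1.bin"
    )

    # Read the file's magic bytes; GGUF files begin with the ASCII bytes "GGUF".
    with MODEL_PATH.open("rb") as f:
        magic = f.read(4)

    if magic == b"GGUF":
        print("GGUF container: loadable by current llama.cpp builds")
    else:
        # Legacy GGML/GGJT files carry a different magic (the log reports
        # 0x67676a74, i.e. "ggjt"); they need to be converted to GGUF, or an
        # older llama-cpp-python release that still reads GGJT must be used.
        print(f"Not a GGUF file (magic {magic!r}); convert the model to GGUF first")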
