runtime error

mpt-30b-chat.ggmlv0.q4_0.bin: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16.9G/16.9G [13:44<00:00, 20.4MB/s]
Fetching 1 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [13:45<00:00, 825.33s/it]
gguf_init_from_file: invalid magic number 67676d6c
error loading model: llama_model_loader: failed to load model from mpt-30b-chat.ggmlv0.q4_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 50, in <module>
    model = Llama(
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 365, in __init__
    assert self.model is not None
AssertionError
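The rejected magic number in the log, 0x67676d6c, is itself the diagnosis: read as ASCII it spells "ggml", so the downloaded file is in the legacy GGML container, while this build of llama.cpp (via llama-cpp-python) only accepts the newer GGUF format. A minimal sketch decoding the value from the log, assuming only the hex string printed above:

```python
# Decode the "invalid magic number" reported by gguf_init_from_file.
# 0x67676d6c is the value copied from the log above.
magic = 0x67676D6C
ascii_magic = magic.to_bytes(4, "big").decode("ascii")
print(ascii_magic)  # -> "ggml": a legacy GGML file, not a GGUF file
```

Likely fixes, both hedged on what the model repo actually provides: download a GGUF conversion of the model instead of the `.ggmlv0` file, or pin an older llama-cpp-python release that still loaded GGML files (GGML support was dropped in later versions; check the project's changelog for the exact cutoff).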
