runtime error

wizard-vicuna-13B.ggmlv3.q4_1.bin: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 8.14G/8.14G [00:52<00:00, 156MB/s]
gguf_init_from_file: invalid magic characters 'tjgg'
error loading model: llama_model_loader: failed to load model from /home/user/.cache/huggingface/hub/models--TheBloke--wizard-vicuna-13B-GGML/snapshots/18c48a2979551dbc957dc95638384db5f9f63400/wizard-vicuna-13B.ggmlv3.q4_1.bin
llama_load_model_from_file: failed to load model
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Traceback (most recent call last):
  File "/home/user/app/tabbed.py", line 25, in <module>
    llm = Llama(model_path=fp, **config["llama_cpp"])
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 957, in __init__
    self._n_vocab = self.n_vocab()
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 2264, in n_vocab
    return self._model.n_vocab()
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 252, in n_vocab
    assert self.model is not None
AssertionError
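The "invalid magic characters 'tjgg'" line is the key clue: `gguf_init_from_file` read the first four bytes of the file and found `tjgg` (the little-endian encoding of the old `ggjt` magic used by ggmlv3 files) instead of the `GGUF` magic that recent llama.cpp expects, so the model object was never created and the later `assert self.model is not None` fired. A minimal sketch of checking a file's magic bytes before handing it to `Llama` (the function name and return strings here are hypothetical, not part of llama-cpp-python):

```python
def detect_llama_format(path):
    """Peek at a model file's 4-byte magic to tell GGUF from legacy ggmlv3.

    GGUF files begin with the ASCII bytes b'GGUF'; old ggjt/ggmlv3 files
    begin with b'tjgg' (little-endian 'ggjt'), which is exactly what the
    "invalid magic characters 'tjgg'" error in the log is reporting.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":
        return "gguf"
    if magic == b"tjgg":
        return "ggmlv3 (ggjt)"
    return "unknown"
```

Under that assumption, the fix is to download a `.gguf` build of the model (or convert the `.bin` with llama.cpp's conversion script) rather than pointing `model_path` at the ggmlv3 file.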
