Runtime error

model params = 1.59 B
llm_load_print_meta: model size = 1.21 GiB (6.56 BPW)
llm_load_print_meta: general.name = Refact
llm_load_print_meta: BOS token = 0 '<|endoftext|>'
llm_load_print_meta: EOS token = 0 '<|endoftext|>'
llm_load_print_meta: UNK token = 0 '<|endoftext|>'
llm_load_print_meta: LF token = 145 'Ä'
llm_load_tensors: ggml ctx size = 0.11 MiB
error loading model: map::at
llama_load_model_from_file: failed to load model
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Traceback (most recent call last):
  File "//main.py", line 5, in <module>
    app = create_app(
          ^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/server/app.py", line 133, in create_app
    set_llama_proxy(model_settings=model_settings)
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/server/app.py", line 70, in set_llama_proxy
    _llama_proxy = LlamaProxy(models=model_settings)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/server/model.py", line 27, in __init__
    self._current_model = self.load_llama_from_model_settings(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/server/model.py", line 75, in load_llama_from_model_settings
    _model = llama_cpp.Llama(
             ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 962, in __init__
    self._n_vocab = self.n_vocab()
                    ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 2274, in n_vocab
    return self._model.n_vocab()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/llama_cpp/llama.py", line 251, in n_vocab
    assert self.model is not None
           ^^^^^^^^^^^^^^^^^^^^^^
AssertionError
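For context: "error loading model: map::at" is the message of a C++ std::out_of_range thrown when a lookup into one of llama.cpp's internal maps misses a key, which typically means the installed llama.cpp build does not recognize something in this GGUF file (here general.name = Refact, so likely the Refact architecture or one of its tokenizer entries). The Python AssertionError at the bottom is only a consequence: llama_load_model_from_file returned NULL, so self.model is None by the time n_vocab() is called. A minimal sketch, assuming a local copy of the same GGUF file (the path below is hypothetical), that reproduces the load directly and isolates it from the server startup code:

    # Minimal reproduction sketch; the model path is an assumption,
    # substitute the GGUF file the server was configured with.
    from llama_cpp import Llama

    # Loading the model directly surfaces the llama.cpp loader error
    # without the server wrapper; if this raises, the server cannot
    # start either, and upgrading llama-cpp-python to a build that
    # supports the Refact architecture is the usual next step.
    llm = Llama(model_path="/models/refact-1_6b.gguf")  # hypothetical path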
