Runtime error

Exit code: 1. Reason:
llama.rope.dimension_count u32 = 64
llama_model_loader: - kv  13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv  14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv  15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv  18: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv  19: tokenizer.ggml.padding_token_id u32 = 128004
llama_model_loader: - kv  20: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  21: general.quantization_version u32 = 2
llama_model_loader: - type  f32:   33 tensors
llama_model_loader: - type q4_K:   96 tensors
llama_model_loader: - type q6_K:   17 tensors
llama_model_load: error loading model: error loading model vocabulary: cannot find tokenizer merges in model file
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 4, in <module>
    llm = Llama.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 2353, in from_pretrained
    return cls(
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 369, in __init__
    internals.LlamaModel(
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/_internals.py", line 56, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: /home/user/.cache/huggingface/hub/models--ericbanzuzi--baseline_llm/snapshots/4e3fe1c11a4ddb83aa45534d9965cb64fabb77db/./unsloth.Q4_K_M.gguf
