Runtime error

(…)ternlm-internlm-chat-7b-v1_1.Q4_K_M.gguf: 100%|██████████| 4.49G/4.49G [02:03<00:00, 36.2MB/s]
Fetching 1 files: 100%|██████████| 1/1 [02:04<00:00, 124.40s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 14, in <module>
    model = AutoModelForCausalLM.from_pretrained("s3nh/internlm-internlm-chat-7b-v1_1-GGUF", model_file="internlm-internlm-chat-7b-v1_1.Q4_K_M.gguf", gpu_layers=0)
  File "/home/user/.local/lib/python3.10/site-packages/ctransformers/hub.py", line 175, in from_pretrained
    llm = LLM(
  File "/home/user/.local/lib/python3.10/site-packages/ctransformers/llm.py", line 253, in __init__
    raise RuntimeError(
RuntimeError: Failed to create LLM 'gguf' from '/home/user/.cache/huggingface/hub/models--s3nh--internlm-internlm-chat-7b-v1_1-GGUF/blobs/cdb98f3c517ec787208a67ec7acb092fb2632085cc8f0c5011e748c96b9eb60a'.
