runtime error

Space failed to start. Exit code: 1. Reason:

Downloading (…)l-00041-of-00041.bin: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 983M/983M [00:14<00:00, 66.2MB/s]
Downloading shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41/41 [10:21<00:00, 15.15s/it]

Traceback (most recent call last):
  File "app.py", line 13, in <module>
    model, tokenizer = load_model(
  File "/home/user/app/model.py", line 12, in load_model
    model = LlamaForCausalLM.from_pretrained(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2588, in from_pretrained
    raise ValueError(
ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
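
Below is a minimal sketch of the workaround the ValueError points to, following the quantization docs it links: load the model in 8-bit with fp32 CPU offload enabled and a custom `device_map`. The checkpoint id (`huggyllama/llama-7b`) and `device_map="auto"` are illustrative assumptions, not this Space's actual configuration, and in recent transformers releases the offload flag mentioned in the message is spelled `llm_int8_enable_fp32_cpu_offload` on `BitsAndBytesConfig`.

```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig, LlamaForCausalLM

# Placeholder checkpoint; substitute the model this Space actually loads.
checkpoint = "huggyllama/llama-7b"

# Let modules that do not fit in GPU RAM stay on the CPU in fp32
# while the rest of the model is loaded in 8-bit on the GPU.
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# "auto" lets accelerate spill overflow layers to CPU/disk; an explicit
# device_map dict can pin specific modules instead.
model = LlamaForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",
    quantization_config=quantization_config,
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

Note that offloaded modules run in fp32 on the CPU, so loading succeeds but inference will be slow; the more direct fix is hardware with enough GPU RAM for the quantized model, or a smaller checkpoint.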
