Runtime error

model.safetensors: 100%|█████████▉| 39.8G/39.8G [00:22<00:00, 1.74GB/s]
INFO - The layer lm_head is not quantized.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 21, in <module>
    model = AutoGPTQForCausalLM.from_quantized(
  File "/usr/local/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 135, in from_quantized
    return quant_func(
  File "/usr/local/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 1339, in from_quantized
    model = autogptq_post_init(model, use_act_order=quantize_config.desc_act)
  File "/usr/local/lib/python3.10/site-packages/auto_gptq/modeling/_utils.py", line 439, in autogptq_post_init
    prepare_buffers(device, buffers["temp_state"], buffers["temp_dq"])
RuntimeError: no device index
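The "no device index" error typically means the model was loaded onto a bare "cuda" device: `prepare_buffers` needs a concrete GPU ordinal to allocate its temporary buffers on, and `torch.device("cuda")` has `index=None`. A common fix is to pass an explicit index such as `"cuda:0"` when calling `from_quantized`. Below is a minimal sketch of that idea; the `with_device_index` helper and the commented call are illustrative assumptions, not the contents of the actual app.py.

```python
# Hypothetical helper: AutoGPTQ's post-init buffers must live on a specific
# GPU, so a bare "cuda" device string (index=None) raises "no device index".
# Normalizing to an explicit index before loading the model avoids this.
def with_device_index(device: str, default_index: int = 0) -> str:
    """Append an explicit ordinal to a bare 'cuda' device string."""
    if device == "cuda":
        return f"cuda:{default_index}"
    return device

# In app.py the load call would then look roughly like (sketch, untested):
# model = AutoGPTQForCausalLM.from_quantized(
#     model_name_or_path,
#     device=with_device_index("cuda"),  # "cuda:0" instead of bare "cuda"
#     use_safetensors=True,
# )
```

Device strings that already carry an index (e.g. "cuda:1") or name another backend (e.g. "cpu") pass through unchanged.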
