Runtime error

config.json: 100%|██████████| 566/566 [00:00<00:00, 3.85MB/s]
quantize_config.json: 100%|██████████| 210/210 [00:00<00:00, 1.68MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 8, in <module>
    model = AutoGPTQForCausalLM.from_quantized(
  File "/home/user/.local/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 129, in from_quantized
    return quant_func(
  File "/home/user/.local/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 844, in from_quantized
    raise FileNotFoundError(f"Could not find a model in {model_name_or_path} with a name in {', '.join(searched_files)}. Please specify the argument model_basename to use a custom file name.")
FileNotFoundError: Could not find a model in vita-group/vicuna-7b-v1.3_gptq with a name in gptq_model-2bit-128g.safetensors, model.safetensors. Please specify the argument model_basename to use a custom file name.
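The traceback says AutoGPTQ looked for `gptq_model-2bit-128g.safetensors` or `model.safetensors` in the `vita-group/vicuna-7b-v1.3_gptq` repo and found neither, and it suggests passing `model_basename`. A minimal sketch of that fix is below; the basename value is illustrative (it reuses the name from the error message), so check the repo's file listing on the Hugging Face Hub for the actual `*.safetensors` filename and pass that name without the extension:

```python
# Sketch of the fix the error message suggests: pass model_basename so
# AutoGPTQ loads the repo's actual weight file instead of its defaults.
# NOTE: "gptq_model-2bit-128g" is an assumed basename taken from the error
# text; replace it with the real filename (minus ".safetensors") listed
# in the vita-group/vicuna-7b-v1.3_gptq repo.
load_kwargs = dict(
    model_basename="gptq_model-2bit-128g",  # assumed; verify against the repo
    use_safetensors=True,
    device="cuda:0",
)

try:
    from auto_gptq import AutoGPTQForCausalLM

    model = AutoGPTQForCausalLM.from_quantized(
        "vita-group/vicuna-7b-v1.3_gptq", **load_kwargs
    )
except ImportError:
    # auto-gptq is not installed here; load_kwargs above shows the change.
    model = None
```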
