runtime error

tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
2023-07-29 20:55:59 WARNING:The safetensors archive passed at models/mayaeary_pygmalion-6b_dev-4bit-128g/pygmalion-6b_dev-4bit-128g.safetensors does not contain metadata. Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata.
Traceback (most recent call last):
  File "/home/user/app/server.py", line 1155, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "/home/user/app/modules/models.py", line 78, in load_model
    output = load_func_map[loader](model_name)
  File "/home/user/app/modules/models.py", line 292, in AutoGPTQ_loader
    return modules.AutoGPTQ_loader.load_quantized(model_name)
  File "/home/user/app/modules/AutoGPTQ_loader.py", line 56, in load_quantized
    model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params)
  File "/home/user/.local/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 82, in from_quantized
    return quant_func(
  File "/home/user/.local/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 773, in from_quantized
    accelerate.utils.modeling.load_checkpoint_in_model(
  File "/home/user/.local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1094, in load_checkpoint_in_model
    checkpoint = load_state_dict(checkpoint_file, device_map=device_map)
  File "/home/user/.local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 946, in load_state_dict
    return safe_load_file(checkpoint_file, device=list(device_map.values())[0])
  File "/home/user/.local/lib/python3.10/site-packages/safetensors/torch.py", line 261, in load_file
    result[k] = f.get_tensor(k)
  File "/home/user/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
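The final `RuntimeError` is the actual cause: the device map passed down to `safe_load_file` points at a CUDA device, so safetensors triggers `torch._C._cuda_init()` on a machine with no NVIDIA driver (e.g. a CPU-only Space). One way to avoid crashing at load time is to probe for a usable CUDA device first and fall back to CPU. This is a minimal sketch, not part of the original code; `cuda_is_usable` is a hypothetical helper name:

```python
import importlib.util

def cuda_is_usable() -> bool:
    """Return True only if torch is importable and reports a
    working NVIDIA driver / visible CUDA device."""
    # If torch is not installed at all, there is certainly no CUDA.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    # is_available() returns False (rather than raising) when no
    # driver is present, unlike forcing tensors onto "cuda:0".
    return torch.cuda.is_available()

# Hypothetical usage: choose the target device before building the
# device map handed to from_quantized / load_checkpoint_in_model,
# instead of letting safetensors call torch._C._cuda_init() and fail.
device = "cuda:0" if cuda_is_usable() else "cpu"
```

Note that Pygmalion-6B quantized with GPTQ is intended for GPU inference, so on a CPU-only host the better fix is usually to assign GPU hardware to the Space rather than to load the model on CPU.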
