runtime error

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
0it [00:00, ?it/s]
/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Loading pipeline components...:  14%|█▍        | 1/7 [00:02<00:17,  2.90s/it]
Loading pipeline components...:  29%|██▊       | 2/7 [00:11<00:30,  6.17s/it]
Loading pipeline components...: 100%|██████████| 7/7 [00:14<00:00,  1.61s/it]
Loading pipeline components...: 100%|██████████| 7/7 [00:14<00:00,  2.00s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    sd_pipeline = sd_pipeline.to("cuda")
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 418, in to
    module.to(device, dtype)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1173, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 779, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 804, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1159, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
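
The traceback shows app.py line 9 hard-coding `.to("cuda")` while the container has no NVIDIA driver, i.e. it is running on CPU-only hardware. Below is a minimal sketch of a device-fallback guard; it assumes the app builds a diffusers StableDiffusionPipeline, and the pipeline class and model id are placeholders, not taken from the log:

# Sketch only: pick the device at startup instead of hard-coding "cuda".
# StableDiffusionPipeline and the model id below are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

sd_pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
sd_pipeline = sd_pipeline.to(device)   # falls back to CPU when no GPU is present

With a guard like this the app still starts (slowly) on CPU hardware and picks up the GPU automatically once the Space is assigned one.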
