Runtime error: Found no NVIDIA driver on your system

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
0it [00:00, ?it/s]
Loading diffusion model ...
/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Loading pipeline components...:   0%|          | 0/8 [00:00<?, ?it/s]
Loading pipeline components...:  62%|██████▎   | 5/8 [00:03<00:01, 1.51it/s]
Loading pipeline components...: 100%|██████████| 8/8 [00:03<00:00, 2.11it/s]
-------------------------------------------------------------------------------
app.py 175 <module>
    pipeline = pipeline.to(device)
pipeline_utils.py 418 to
    module.to(device, dtype)
modeling_utils.py 2692 to
    return super().to(*args, **kwargs)
module.py 1173 to
    return self._apply(convert)
module.py 779 _apply
    module._apply(fn)
module.py 779 _apply
    module._apply(fn)
module.py 779 _apply
    module._apply(fn)
module.py 804 _apply
    param_applied = fn(param)
module.py 1159 convert
    return t.to(
__init__.py 293 _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
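
The traceback shows the failure happening at `pipeline = pipeline.to(device)` in app.py, where `device` requests CUDA on hardware that has no NVIDIA driver (e.g. a CPU-only Space). Below is a minimal sketch of a CPU fallback under that assumption; only `pipeline`, `device`, and the `.to()` call come from the log above, while the `DiffusionPipeline` class and the model id are placeholders, not taken from this app.

```python
# Minimal sketch of a CPU fallback, assuming the app builds a diffusers pipeline
# near app.py line 175. The model id below is a placeholder, not from the log.
import torch
from diffusers import DiffusionPipeline

# Request CUDA only when a driver is actually visible to torch; on CPU-only
# hardware this avoids the torch._C._cuda_init() call that raised the error.
device = "cuda" if torch.cuda.is_available() else "cpu"

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # placeholder model id
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipeline = pipeline.to(device)  # the call that failed in the traceback above
```

If the Space is meant to run on a GPU, the alternative is to switch the Space hardware to a GPU tier rather than changing the code.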
