runtime error

Exit code: 1. Reason:

pytorch_model.bin: 100%|██████████| 651M/651M [00:02<00:00, 322MB/s]
tokenizer.json: 100%|██████████| 269k/269k [00:00<00:00, 59.8MB/s]
s1v3.ckpt: 100%|██████████| 155M/155M [00:00<00:00, 256MB/s]
pretrained_eres2netv2w24s4ep4.ckpt: 100%|██████████| 108M/108M [00:00<00:00, 223MB/s]
s2Gv2ProPlus.pth: 100%|██████████| 200M/200M [00:00<00:00, 354MB/s]
[nltk_data] Downloading package averaged_perceptron_tagger_eng to
[nltk_data]     /home/user/nltk_data...
[nltk_data]   Unzipping taggers/averaged_perceptron_tagger_eng.zip.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
[INFO] CUDA is not available, skipping nvrtc setup.
<All keys matched successfully>
Loading Text2Semantic Weights from pretrained_models/s1v3.ckpt with Flash Attn Implement
Traceback (most recent call last):
  File "/home/user/app/inference_webui.py", line 261, in <module>
    change_gpt_weights("pretrained_models/s1v3.ckpt")
  File "/home/user/app/inference_webui.py", line 254, in change_gpt_weights
    t2s_model = CUDAGraphRunner(
  File "/home/user/app/AR/models/t2s_model_flash_attn.py", line 227, in __init__
    self.input_pos = torch.tensor([10]).int().cuda()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
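The crash comes from the unconditional `.cuda()` call at t2s_model_flash_attn.py line 227, which initializes CUDA on a machine with no NVIDIA driver (the log even warns "CUDA is not available" earlier). A minimal sketch of a device-aware fallback, assuming the tensor only needs to live on whatever device the model uses (the variable names below are illustrative, not the app's actual code):

```python
import torch

# Pick a device up front instead of calling .cuda() unconditionally,
# so the same line also works on CPU-only hosts.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for the failing line:
#   self.input_pos = torch.tensor([10]).int().cuda()
input_pos = torch.tensor([10], dtype=torch.int32, device=device)

print(input_pos.device.type)  # "cuda" on a GPU box, "cpu" otherwise
```

Note that the class is named CUDAGraphRunner, so the code path may genuinely require a GPU; in that case the fix is to run the Space on GPU hardware rather than to patch the tensor placement.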
