Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubuntu/GT_VTR3_1/run/gradio_ootd.py", line 1, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Traceback (most recent call last):
  File "/home/ubuntu/GT_VTR3_1/run/gradio_ootd.py", line 1, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Traceback (most recent call last):
  File "/home/ubuntu/GT_VTR3_1/run/gradio_ootd.py", line 1, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Traceback (most recent call last):
  File "/home/ubuntu/GT_VTR3_1/run/gradio_ootd.py", line 1, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubuntu/GT_VTR3_1/run/gradio_ootd.py", line 46, in <module>
    ootd_model_dc.text_encoder.to('cuda')
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2460, in to
    return super().to(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 146.00 MiB (GPU 0; 14.58 GiB total capacity; 4.96 GiB already allocated; 105.62 MiB free; 5.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
  File "/home/ubuntu/GT_VTR3_1/run/gradio_ootd.py", line 46, in <module>
    ootd_model_dc.text_encoder.to('cuda')
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2460, in to
    return super().to(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/ubuntu/miniconda3/envs/vtr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.58 GiB total capacity; 5.21 GiB already allocated; 5.62 MiB free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
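
The repeated ModuleNotFoundError at the top simply means gradio is not installed in the vtr conda environment the script runs under (the interpreter paths in the later tracebacks point at /home/ubuntu/miniconda3/envs/vtr). A minimal fix, assuming conda is initialized in the shell and that environment's pip is the one being used:

    # Install gradio into the same "vtr" env that runs gradio_ootd.py
    conda activate vtr
    pip install gradio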
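
The CUDA out-of-memory failures at gradio_ootd.py line 46 are a separate issue. The numbers in the message (14.58 GiB total capacity, only about 5 GiB reserved by PyTorch, yet barely 105 MiB and later 5 MiB free) suggest other processes, possibly earlier runs of this same script, are still holding most of the GPU; nvidia-smi will show what is occupying it. The message itself points at PYTORCH_CUDA_ALLOC_CONF for the fragmentation case. A minimal sketch, assuming the script is relaunched from a shell after the card has been freed (128 MiB is an example split size, not a tuned value):

    # See which processes are holding GPU memory before relaunching
    nvidia-smi
    # Cap the allocator's split size to reduce fragmentation, as the error
    # message suggests, then rerun the script
    export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
    python /home/ubuntu/GT_VTR3_1/run/gradio_ootd.py

Note that max_split_size_mb only helps the "reserved >> allocated" fragmentation case the message describes; if the card is genuinely occupied by other processes, freeing it (or, as a workaround, loading the text encoder in half precision before calling .to('cuda')) is what actually matters.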