Runtime error

```
move the `torch_dtype=torch.float16` argument, or use another device for inference.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 33, in <module>
    pipe.enable_xformers_memory_efficient_attention()
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1752, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1778, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1768, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 251, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 244, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/home/user/.local/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 203, in set_use_memory_efficient_attention_xformers
    raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU
```
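
The failure has two parts: the Space is running on CPU-only hardware, so `torch.cuda.is_available()` returns False and the call to `pipe.enable_xformers_memory_efficient_attention()` at app.py line 33 raises, and the warning fragment above the traceback indicates the pipeline was loaded with `torch_dtype=torch.float16`, which diffusers advises against on CPU. A minimal sketch of a guarded setup follows; the model id and pipeline class are illustrative assumptions, since the actual ones used in app.py are not shown in the log:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical model id for illustration; substitute whatever app.py actually loads.
MODEL_ID = "runwayml/stable-diffusion-v1-5"

if torch.cuda.is_available():
    # float16 weights and xformers attention are only supported on GPU.
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    try:
        pipe.enable_xformers_memory_efficient_attention()
    except (ImportError, ValueError):
        # xformers is not installed or unusable; fall back to default attention.
        pass
else:
    # CPU-only hardware: load in float32 and skip xformers entirely.
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID)
    pipe = pipe.to("cpu")
```

On CPU-basic Space hardware there is no CUDA device, so either guard the calls as above or switch the Space to GPU hardware; upgrading the hardware alone also resolves both messages.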
