Where do I add a command-line arg to stop PyTorch from assuming an NVidia GPU?

#245
by hd-scania

I'm on an AMD R4750G APU instead of the NVidia GPU that Stable Diffusion's PyTorch launcher assumes.
Where do I add --skip-torch-cuda-test so that it becomes an entry in COMMANDLINE_ARGS?

% bash webui.sh
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on hd_scania user
################################################################

################################################################
Experimental support for Renoir: make sure to have at least 4GB of VRAM and 10GB of RAM or enable cpu mode: --use-cpu all --no-half
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.9 (main, Dec 25 2022, 21:29:15) [GCC 12.2.0]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Traceback (most recent call last):
  File "/home/hd_scania/stable-diffusion-webui/launch.py", line 355, in <module>
    prepare_environment()
  File "/home/hd_scania/stable-diffusion-webui/launch.py", line 260, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "/home/hd_scania/stable-diffusion-webui/launch.py", line 121, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "/home/hd_scania/stable-diffusion-webui/launch.py", line 97, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/home/hd_scania/stable-diffusion-webui/venv/bin/python3" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
%
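
My best guess, which I have not confirmed, is that webui.sh sources webui-user.sh for user settings, so the flag would go there, roughly like this:

# webui-user.sh (sketch only; assumes this is where webui.sh picks up COMMANDLINE_ARGS)
export COMMANDLINE_ARGS="--skip-torch-cuda-test"
# The Renoir notice above suggests CPU mode may also be needed:
# export COMMANDLINE_ARGS="--skip-torch-cuda-test --use-cpu all --no-half"

If that is the wrong place, exporting the variable in the shell before running the script should have the same effect:

export COMMANDLINE_ARGS="--skip-torch-cuda-test"
bash webui.sh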
