Can this be run with an AMD GPU?

by Vehrn - opened

I see CUDA is the default in the setup scripts. When running the setup on my AMD 6800 GPU I get the following error: `AssertionError: Torch not compiled with CUDA enabled`

Thanks in advance.

How to run CompVis/stable-diffusion on AMD Linux

# AMD driver installation:
# after installing amdgpu-install, the command would be something like:
# sudo amdgpu-install --rocmrelease=5.2.3 --usecase=dkms,graphics,rocm,lrt,hip,hiplibsdk
# if it is already installed, run rocm-smi; it will list the available GPUs

cd stable-diffusion/
conda env create -f environment.yaml
conda activate ldm
conda remove cudatoolkit -y
pip3 uninstall torch torchvision -y
# Install the PyTorch ROCm build (pick the wheel index matching your ROCm version)
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2
pip3 install transformers==4.19.2 scann kornia==0.6.4 torchmetrics==0.6.0
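
After the reinstall, it is worth verifying that the ROCm build of PyTorch is actually the one in the environment. This is a small sketch of my own (not from the original post): ROCm wheels set `torch.version.hip`, and `torch.cuda.is_available()` reports the AMD GPU through the HIP compatibility layer.

```python
# Sanity-check which PyTorch backend is installed (run inside the `ldm` env).
import importlib.util

def torch_backend_summary():
    """Return backend info for the installed torch, or None if torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return {
        "hip": getattr(torch.version, "hip", None),  # set on ROCm builds, None on CUDA/CPU builds
        "gpu_available": torch.cuda.is_available(),  # True once ROCm sees the card
    }

print(torch_backend_summary())
```

If `hip` is `None` here, the CUDA/CPU wheel is still installed and the uninstall/reinstall step above did not take effect.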

# Place the model as model.ckpt in the models/ldm/stable-diffusion-v1/ folder
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms

How to run huggingface/diffusers on AMD Linux

git clone https://github.com/huggingface/diffusers.git
cd diffusers/
pip3 install -e .
pip3 uninstall torch torchvision -y
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2

Run the code without autocast:

# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=True)
pipe = pipe.to("cuda")  # "cuda" also targets the ROCm device on a ROCm build

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt)["sample"][0]
image.save("astronaut_rides_horse.png")

If you are on Windows, try the MLIR/IREE approach.
There is also an ONNX DirectML approach on Windows (slower).

May I ask how much VRAM your device has? I am running into memory trouble on my 3070 with 8 GB of VRAM.
