
pipe.enable_attention_slicing()

#11 by Teetle44

As mentioned on the model card page: "If you have low GPU RAM available, make sure to add a pipe.enable_attention_slicing() after sending it to cuda for less VRAM usage (to the cost of speed)"

Which file do I add the pipe.enable_attention_slicing() call to?

pipe = pipe.to("cuda")
pipe.enable_attention_slicing()
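The call doesn't go into any file inside the model repository; it goes into your own inference script, on the pipeline object, right after you move it to the GPU. A minimal sketch, assuming the diffusers and torch packages are installed and a CUDA GPU is available (the model id and prompt here are placeholders, not from this discussion):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: substitute the model id of the checkpoint you are using.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Add this line immediately after sending the pipeline to CUDA.
# It trades some speed for lower VRAM usage by computing attention
# in slices instead of all at once.
pipe.enable_attention_slicing()

image = pipe("an astronaut riding a horse").images[0]
image.save("output.png")
```

If you later have enough VRAM, you can undo it with `pipe.disable_attention_slicing()`.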
