# Pipeline callbacks
The denoising loop of a pipeline can be modified with custom defined functions using the `callback_on_step_end` parameter. This can be really useful for dynamically adjusting certain pipeline attributes or modifying tensor variables. The flexibility of callbacks opens up some interesting use cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale.

This guide will show you how to use the `callback_on_step_end` parameter to disable classifier-free guidance (CFG) after 40% of the inference steps to save compute with minimal cost to performance.
The callback function should have the following arguments:

- `pipe` (or the pipeline instance) provides access to useful properties such as `num_timesteps` and `guidance_scale`. You can modify these properties by updating the underlying attributes. For this example, you'll disable CFG by setting `pipe._guidance_scale=0.0`.
- `step_index` and `timestep` tell you where you are in the denoising loop. Use `step_index` to turn off CFG after reaching 40% of `num_timesteps`.
- `callback_kwargs` is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the `callback_on_step_end_tensor_inputs` argument, which is passed to the pipeline's `__call__` method. Different pipelines may use different sets of variables, so please check a pipeline's `_callback_tensor_inputs` attribute for the list of variables you can modify. Some common variables include `latents` and `prompt_embeds`. For this function, change the batch size of `prompt_embeds` after setting `guidance_scale=0.0` in order for it to work properly.
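To see why the batch size change is needed: with CFG enabled, the pipeline concatenates the unconditional and conditional embeddings along the batch dimension, so `prompt_embeds` has twice the batch size, and `chunk(2)[-1]` keeps only the conditional half. A minimal sketch (the shape below assumes Stable Diffusion's 77-token, 768-dim text embeddings for a single prompt; it is illustrative, not pulled from a real pipeline run):

```py
import torch

# With CFG on, the unconditional and conditional embeddings are stacked
# along dim 0, doubling the batch size.
prompt_embeds = torch.randn(2, 77, 768)  # [uncond, cond]

# After disabling CFG the pipeline expects only the conditional half,
# so keep the last chunk along dim 0.
cond_only = prompt_embeds.chunk(2)[-1]
print(cond_only.shape)  # torch.Size([1, 77, 768])
```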
Your callback function should look something like this:
```py
def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs):
    # adjust the batch_size of prompt_embeds according to guidance_scale
    if step_index == int(pipe.num_timesteps * 0.4):
        prompt_embeds = callback_kwargs["prompt_embeds"]
        prompt_embeds = prompt_embeds.chunk(2)[-1]

        # update guidance_scale and prompt_embeds
        pipe._guidance_scale = 0.0
        callback_kwargs["prompt_embeds"] = prompt_embeds
    return callback_kwargs
```
Now, you can pass the callback function to the `callback_on_step_end` parameter and the `prompt_embeds` to `callback_on_step_end_tensor_inputs`.
```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

generator = torch.Generator(device="cuda").manual_seed(1)
out = pipe(prompt, generator=generator, callback_on_step_end=callback_dynamic_cfg, callback_on_step_end_tensor_inputs=["prompt_embeds"])

out.images[0].save("out_custom_cfg.png")
```
The callback function is executed at the end of each denoising step, and modifies the pipeline attributes and tensor variables for the next denoising step.
With callbacks, you can implement features such as dynamic CFG without having to modify the underlying code at all!
🤗 Diffusers currently only supports `callback_on_step_end`, but feel free to open a feature request if you have a cool use case and require a callback function with a different execution point!
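The introduction also mentions editing the guidance scale at each timestep as another use case. As a hedged sketch (the linear schedule and its 7.5 → 1.0 start/end values are illustrative choices, not part of the Diffusers API), such a callback could look like:

```py
def guidance_schedule_callback(pipe, step_index, timestep, callback_kwargs):
    # Linearly decay the guidance scale from 7.5 down to 1.0 over the run.
    # Keeping it above 1.0 leaves CFG active, so prompt_embeds needs no change.
    total = pipe.num_timesteps
    frac = step_index / max(total - 1, 1)
    pipe._guidance_scale = 7.5 + (1.0 - 7.5) * frac
    return callback_kwargs
```

Pass it to `callback_on_step_end` the same way as the dynamic CFG example above.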
## Interrupt the diffusion process
Interrupting the diffusion process is particularly useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback.
The interruption callback is supported for text-to-image, image-to-image, and inpainting for the `StableDiffusionPipeline` and `StableDiffusionXLPipeline`.
This callback function should take the following arguments: `pipe`, `i`, `t`, and `callback_kwargs` (this must be returned). Set the pipeline's `_interrupt` attribute to `True` to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback.
In this example, the diffusion process is stopped after 10 steps even though `num_inference_steps` is set to 50.
```py
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.enable_model_cpu_offload()
num_inference_steps = 50

def interrupt_callback(pipe, i, t, callback_kwargs):
    stop_idx = 10
    if i == stop_idx:
        pipe._interrupt = True
    return callback_kwargs

pipe(
    "A photo of a cat",
    num_inference_steps=num_inference_steps,
    callback_on_step_end=interrupt_callback,
)
```
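Because the stopping logic is arbitrary, the step-count check above could be swapped for something like a wall-clock budget. A minimal sketch, where `max_seconds` and the callback name are illustrative and not part of the Diffusers API:

```py
import time

start_time = time.monotonic()
max_seconds = 30.0  # illustrative time budget for the whole generation

def timeout_callback(pipe, i, t, callback_kwargs):
    # Stop denoising once the wall-clock budget is exhausted.
    if time.monotonic() - start_time > max_seconds:
        pipe._interrupt = True
    return callback_kwargs
```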