# Loading and Adding Custom Pipelines
Diffusers allows you to conveniently load any custom pipeline from the Hugging Face Hub as well as any official community pipeline via the `DiffusionPipeline` class.
## Loading custom pipelines from the Hub
Custom pipelines can be easily loaded from any model repository on the Hub that defines a diffusion pipeline in a `pipeline.py` file.

Let's load a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline. All you need to do is pass the custom pipeline repo id with the `custom_pipeline` argument alongside the repo from which you wish to load the pipeline modules.
```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline"
)
```
This will load the custom pipeline as defined in the model repository.
By loading a custom pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe 🔒. Make sure to check out the code online before loading & running it automatically.
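If you want to review the code first, you can download just the `pipeline.py` file and inspect it before letting `DiffusionPipeline` execute it. A minimal sketch using `huggingface_hub`, with the repo id taken from the example above:

```python
from huggingface_hub import hf_hub_download

# Download only the custom pipeline file so it can be reviewed before execution
pipeline_file = hf_hub_download(
    repo_id="hf-internal-testing/diffusers-dummy-pipeline",
    filename="pipeline.py",
)

with open(pipeline_file) as f:
    print(f.read())
```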
## Loading official community pipelines
Community pipelines are summarized in the community examples folder of the Diffusers GitHub repository.
Similarly, you need to pass both the repo id from which you wish to load the weights as well as the `custom_pipeline` argument. Here the `custom_pipeline` argument should simply be the filename of the community pipeline excluding the `.py` suffix, e.g. `clip_guided_stable_diffusion`.
Since community pipelines are often more complex, you can mix loading weights from an official repo id with passing pipeline modules directly.
```python
from diffusers import DiffusionPipeline
from transformers import CLIPFeatureExtractor, CLIPModel

clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"

feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id)
clip_model = CLIPModel.from_pretrained(clip_model_id)

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
)
```
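A hedged usage sketch follows. The call signature is defined by the community pipeline file itself, so argument names such as `clip_guidance_scale` below are assumptions based on the CLIP-guided example and may differ in the version you load:

```python
# Assumed usage of the CLIP-guided pipeline; argument names come from the
# community example and may differ between versions of the pipeline file.
pipeline = pipeline.to("cuda")  # requires a CUDA-capable GPU

image = pipeline(
    "a painting of a fox in the style of Starry Night",
    num_inference_steps=50,
    clip_guidance_scale=100,  # assumed parameter name
).images[0]
image.save("clip_guided_fox.png")
```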
## Adding custom pipelines to the Hub
To add a custom pipeline to the Hub, all you need to do is define a pipeline class that inherits from `DiffusionPipeline` in a `pipeline.py` file. Make sure that the whole pipeline is encapsulated within a single class and that the `pipeline.py` file has only one such class.

Let's quickly define an example pipeline.
```python
import torch
from diffusers import DiffusionPipeline


class MyPipeline(DiffusionPipeline):
    def __init__(self, unet, scheduler):
        super().__init__()

        self.register_modules(unet=unet, scheduler=scheduler)

    @torch.no_grad()
    def __call__(self, batch_size: int = 1, num_inference_steps: int = 50):
        # Sample gaussian noise to begin loop
        image = torch.randn(
            (batch_size, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size)
        )
        image = image.to(self.device)

        # set step values
        self.scheduler.set_timesteps(num_inference_steps)

        for t in self.progress_bar(self.scheduler.timesteps):
            # 1. predict noise model_output
            model_output = self.unet(image, t).sample

            # 2. compute the previous noisy sample x_t -> x_t-1
            image = self.scheduler.step(model_output, t, image).prev_sample

        image = (image / 2 + 0.5).clamp(0, 1)
        image = image.cpu().permute(0, 2, 3, 1).numpy()

        return image
```
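Before uploading, it can help to smoke-test the class locally with small, randomly initialized components. This is only an illustrative sketch; the `UNet2DModel` configuration below is arbitrary and not tied to any pretrained checkpoint:

```python
from diffusers import DDPMScheduler, UNet2DModel

# Tiny, randomly initialized components purely for testing the pipeline logic
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
scheduler = DDPMScheduler()

pipe = MyPipeline(unet=unet, scheduler=scheduler)
images = pipe(batch_size=1, num_inference_steps=2)
print(images.shape)  # (1, 32, 32, 3)
```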
Now you can upload this short file under the name `pipeline.py` in your preferred model repository. For Stable Diffusion pipelines, you may also join the community organisation for shared pipelines to upload yours.
Finally, we can load the custom pipeline by passing the model repository name, e.g. `sd-diffusers-pipelines-library/my_custom_pipeline`, alongside the model repository from which we want to load the `unet` and `scheduler` components.
```python
my_pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="patrickvonplaten/my_custom_pipeline"
)
```
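Based on the `__call__` method defined above, the pipeline returns its images as a NumPy array with values in `[0, 1]` and shape `(batch, height, width, channels)`, so you can convert them to PIL images yourself, for example:

```python
import numpy as np
from PIL import Image

images = my_pipeline(batch_size=2, num_inference_steps=50)

for i, image in enumerate(images):
    Image.fromarray((image * 255).round().astype(np.uint8)).save(f"sample_{i}.png")
```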