Elucidating the Design Space of Diffusion-Based Generative Models is by Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. This pipeline implements the stochastic sampler tailored to variance exploding (VE) models.
The abstract from the paper:
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
class diffusers.KarrasVePipeline
( unet: UNet2DModel scheduler: KarrasVeScheduler )

- unet (UNet2DModel) — A UNet2DModel to denoise the encoded image.
- scheduler (KarrasVeScheduler) — A KarrasVeScheduler to be used in combination with unet to denoise the encoded image.

Pipeline for unconditional image generation.
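Below is a minimal sketch of instantiating the pipeline from freshly initialized components; the UNet2DModel configuration shown is illustrative, not a published checkpoint.

```python
from diffusers import KarrasVePipeline, KarrasVeScheduler, UNet2DModel

# Small unconditional UNet; this configuration is a hypothetical example.
unet = UNet2DModel(
    sample_size=32,   # resolution of the generated images
    in_channels=3,    # RGB input
    out_channels=3,   # RGB output
)
scheduler = KarrasVeScheduler()

pipe = KarrasVePipeline(unet=unet, scheduler=scheduler)
```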
__call__

( batch_size: int = 1 num_inference_steps: int = 50 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple

- batch_size (int, optional, defaults to 1) — The number of images to generate.
- generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns: ImagePipelineOutput or tuple — If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.
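A minimal usage sketch, assuming pipe is the KarrasVePipeline constructed above (the seed and filename are arbitrary):

```python
import torch

# Fixed seed so that generation is deterministic.
generator = torch.Generator(device="cpu").manual_seed(0)

output = pipe(
    batch_size=1,
    num_inference_steps=50,
    generator=generator,
    output_type="pil",
)
image = output.images[0]  # a PIL.Image.Image
image.save("karras_ve_sample.png")
```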
class diffusers.ImagePipelineOutput
( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] )

- images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).

Output class for image pipelines.
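With the default return_dict=True the pipeline returns an ImagePipelineOutput; with return_dict=False it returns a plain tuple. A short sketch of both access patterns, assuming pipe from the example above:

```python
# Dataclass-style access (return_dict=True, the default).
output = pipe(batch_size=2)
images = output.images  # list of PIL.Image.Image

# Plain tuple: the first element is the list of generated images.
(images,) = pipe(batch_size=2, return_dict=False)
```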