Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain, and Pieter Abbeel proposes a diffusion-based model of the same name. In the context of the 🤗 Diffusers library, however, DDPM refers to both the discrete denoising scheduler from the paper and the pipeline.
The abstract of the paper is the following:
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.
The original codebase of this paper can be found here.
| Pipeline | Tasks | Colab |
|---|---|---|
| pipeline_ddpm.py | Unconditional Image Generation | - |
class diffusers.DDPMPipeline < source >
( unet, scheduler )
- unet (UNet2DModel) — U-Net architecture to denoise the encoded image.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image. Can be one of DDPMScheduler or DDIMScheduler.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
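To make the scheduler's role concrete, the following is a minimal sketch, in plain PyTorch, of the discrete DDPM reverse (ancestral sampling) step that the scheduler performs at every denoising iteration. This is an illustration of the math from the paper, not the library's implementation; the zero-noise stand-in for the U-Net is purely for demonstration.

```python
import torch

# Linear beta schedule from the DDPM paper; alpha_t = 1 - beta_t and
# alpha_bar_t is the cumulative product of alphas up to step t.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def ddpm_reverse_step(x_t, eps_pred, t, generator=None):
    """One ancestral sampling step x_t -> x_{t-1}, given the model's
    noise prediction eps_pred (the quantity the U-Net estimates)."""
    beta_t, alpha_t, alpha_bar_t = betas[t], alphas[t], alpha_bars[t]
    # Posterior mean (Eq. 11 in the paper), using the epsilon parameterization.
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_t)
    if t > 0:
        # Add fresh Gaussian noise with sigma_t^2 = beta_t (one variant from the paper).
        noise = torch.randn(x_t.shape, generator=generator)
        return mean + torch.sqrt(beta_t) * noise
    return mean  # no noise is added at the final step

# Stand-in for the U-Net's prediction: zero noise (illustration only).
x = torch.randn(1, 3, 32, 32)
for t in reversed(range(T)):
    x = ddpm_reverse_step(x, torch.zeros_like(x), t)
```

In the pipeline, `DDPMScheduler.step()` encapsulates this update, which is why the U-Net and scheduler can be mixed and matched (for example, swapping in DDIMScheduler for faster deterministic sampling).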
__call__ < source >
( batch_size: int = 1, generator: typing.Optional[torch._C.Generator] = None, num_inference_steps: int = 1000, output_type: typing.Optional[str] = 'pil', return_dict: bool = True )
- batch_size (int, optional, defaults to 1) — The number of images to generate.
- generator (torch.Generator, optional) — A torch generator to make generation deterministic.
- num_inference_steps (int, optional, defaults to 1000) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ImagePipelineOutput instead of a plain tuple.

Returns ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.