Stochastic Karras VE
Overview
Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine.
The abstract of the paper is the following:
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.
This pipeline implements stochastic sampling tailored to Variance-Expanding (VE) models.
Available Pipelines:
| Pipeline | Tasks | Colab |
| --- | --- | --- |
| pipeline_stochastic_karras_ve.py | Unconditional Image Generation | - |
KarrasVePipeline
class diffusers.KarrasVePipeline
< source >( unet: UNet2DModel scheduler: KarrasVeScheduler )
Parameters
- unet (UNet2DModel) — UNet architecture to denoise the encoded image.
- scheduler (KarrasVeScheduler) — Scheduler to be used in combination with unet to denoise the encoded image.
Stochastic sampling from Karras et al. [1] tailored to Variance-Expanding (VE) models [2]. Use Algorithm 2 and the VE column of Table 1 from [1] for reference.

[1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models." https://arxiv.org/abs/2206.00364
[2] Song, Yang, et al. "Score-Based Generative Modeling through Stochastic Differential Equations." https://arxiv.org/abs/2011.13456
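To make the referenced algorithm concrete, the following is a minimal NumPy sketch of Algorithm 2 (the stochastic "churn" sampler) combined with the geometric sigma schedule from the VE column of Table 1. It is not the pipeline's implementation: instead of a trained score network, it uses the analytic optimal denoiser for 1-D Gaussian toy data N(MU, S²), so the whole loop is runnable and checkable on its own. All names (`denoise`, `karras_ve_sample`, the toy constants) are illustrative choices, not part of the diffusers API.

```python
import numpy as np

MU, S = 2.0, 0.5  # toy data distribution N(MU, S^2)

def denoise(x, sigma):
    """Analytic optimal denoiser E[x0 | x] for x = x0 + sigma * noise, x0 ~ N(MU, S^2).
    A real pipeline would call a trained score network here instead."""
    return (S**2 * x + sigma**2 * MU) / (S**2 + sigma**2)

def karras_ve_sample(batch_size=1000, num_steps=50, sigma_min=0.02, sigma_max=10.0,
                     s_churn=20.0, s_tmin=0.05, s_tmax=5.0, s_noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Geometric sigma schedule (VE column of Table 1), ending at sigma = 0.
    ramp = np.linspace(0.0, 1.0, num_steps)
    sigmas = np.append(sigma_max * (sigma_min / sigma_max) ** ramp, 0.0)
    # VE prior: pure noise with std sigma_max.
    x = rng.standard_normal(batch_size) * sigma_max
    for i in range(num_steps):
        sigma, sigma_next = sigmas[i], sigmas[i + 1]
        # "Churn": temporarily raise the noise level by a factor (1 + gamma).
        gamma = min(s_churn / num_steps, np.sqrt(2.0) - 1.0) if s_tmin <= sigma <= s_tmax else 0.0
        sigma_hat = sigma * (1.0 + gamma)
        x_hat = x + np.sqrt(sigma_hat**2 - sigma**2) * s_noise * rng.standard_normal(batch_size)
        # Euler step from sigma_hat down to sigma_next.
        d = (x_hat - denoise(x_hat, sigma_hat)) / sigma_hat
        x = x_hat + (sigma_next - sigma_hat) * d
        # Heun (2nd-order) correction, skipped on the final step to sigma = 0.
        if sigma_next > 0:
            d_next = (x - denoise(x, sigma_next)) / sigma_next
            x = x_hat + (sigma_next - sigma_hat) * 0.5 * (d + d_next)
    return x

samples = karras_ve_sample()
print(samples.mean(), samples.std())  # should approach MU = 2.0 and S = 0.5
```

Because the denoiser is exact for the toy distribution, the sampled mean and standard deviation converge to those of the data distribution, which is a quick way to sanity-check the churn and Heun steps.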
__call__
< source >( batch_size: int = 1, num_inference_steps: int = 50, generator: typing.Optional[torch.Generator] = None, output_type: typing.Optional[str] = 'pil', return_dict: bool = True, **kwargs ) → ImagePipelineOutput or tuple
Parameters

- batch_size (int, optional, defaults to 1) — The number of images to generate.
- generator (torch.Generator, optional) — A torch generator to make generation deterministic.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.ndarray.
- return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.
Returns
ImagePipelineOutput or tuple

ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.