Stochastic Karras VE

Overview
Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine.
The abstract of the paper is the following:
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.
This pipeline implements the stochastic sampling procedure from the paper, tailored to Variance Exploding (VE) models.

Available Pipelines:

| Pipeline | Tasks | Colab |
|---|---|---|
| pipeline_stochastic_karras_ve.py | Unconditional Image Generation | - |

KarrasVePipeline

class diffusers.KarrasVePipeline( unet: UNet2DModel, scheduler: KarrasVeScheduler )


Parameters

unet (UNet2DModel) — U-Net architecture to denoise the encoded image.
scheduler (KarrasVeScheduler) — Scheduler to be used in combination with unet to denoise the encoded image.


Stochastic sampling from Karras et al. [1] tailored to Variance Exploding (VE) models [2]. Use Algorithm 2 and
the VE column of Table 1 from [1] for reference.
[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.”
https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. “Score-based generative modeling through stochastic
differential equations.” https://arxiv.org/abs/2011.13456

__call__

( batch_size: int = 1, num_inference_steps: int = 50, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, output_type: Optional[str] = 'pil', return_dict: bool = True, **kwargs ) → ImagePipelineOutput or tuple

Parameters

batch_size (int, optional, defaults to 1) — The number of images to generate.
generator (torch.Generator, optional) — One or a list of torch generator(s) to make generation deterministic.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) and np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns

ImagePipelineOutput or tuple

ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.