# Taming Diffusion Times In Score-Based Generative Models: Trade-Offs And Solutions

Anonymous authors Paper under double-blind review

## Abstract

Score-based diffusion models are a class of generative models whose dynamics is described by stochastic differential equations that map noise into data. While recent works have started to lay down a theoretical foundation for these models, an analytical understanding of the role of the diffusion time T is still lacking. Current best practice advocates for a large T
to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution; however, a smaller value of T should be preferred for a better approximation of the score-matching objective and higher computational efficiency. Starting from a variational interpretation of diffusion models, in this work we quantify this trade-off, and suggest a new method to improve quality and efficiency of both training and sampling, by adopting smaller diffusion times. Indeed, we show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process. Empirical results support our analysis; for image data, our method is competitive w.r.t. the state-of-the-art, according to standard sample quality metrics and log-likelihood.

## 1 Introduction

Diffusion-based generative models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2021c; Vahdat et al., 2021; Kingma et al., 2021; Ho et al., 2020; Song et al., 2021a) have recently gained popularity due to their ability to synthesize high-quality audio (Kong et al., 2021; Lee et al., 2022b), images (Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021) and other data modalities (Tashiro et al., 2021), outperforming known methods based on Generative Adversarial Networks (gans) (Goodfellow et al., 2014), normalizing flows (nfs) (Kingma et al., 2016), and Variational Autoencoders (vaes) and Bayesian autoencoders (baes) (Kingma & Welling, 2014; Tran et al., 2021).

Diffusion models learn to generate samples from an unknown density p*data* by reversing a *diffusion process* which transforms the distribution of interest into noise. The forward dynamics injects noise into the data following a diffusion process that can be described by a Stochastic Differential Equation (sde) of the form

$$\mathrm{d}\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}_{t}\quad\text{with}\quad\mathbf{x}_{0}\sim p_{\mathsf{data}},\tag{1}$$
where xt is a random variable at time t, f(·, t) is the *drift term*, g(·) is the *diffusion term* and wt is a Wiener process (or Brownian motion). We will also consider a special class of linear sdes, for which the drift term is decomposed as f(xt, t) = α(t)xt and the diffusion term is independent of xt. This class of parameterizations of sdes is known as *affine* and it admits analytic solutions. We denote the time-varying probability density by p(x, t), where by definition p(x, 0) = p*data* (x), and the conditional on the initial condition x0 by p(x, t| x0).
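The affine structure can be illustrated numerically: for the Variance Preserving parameterization (α(t) = −β(t)/2, g(t) = √β(t)), the conditional p(x_t, t | x_0) is Gaussian with closed-form mean and variance, which a direct Euler-Maruyama simulation of Eq. (1) should reproduce. The sketch below is a minimal 1D check under an illustrative linear schedule β(t) = β₀ + (β₁ − β₀)t; it is not part of the paper's experiments.

```python
import math
import random

random.seed(0)

B0, B1 = 0.1, 20.0  # illustrative linear schedule beta(t) = B0 + (B1 - B0) t

def beta(t):
    return B0 + (B1 - B0) * t

def vp_kernel(x0, t):
    # Closed-form p(x_t, t | x_0) for the VP SDE: Gaussian with
    # mean exp(-0.5 * int beta) * x0 and variance 1 - exp(-int beta).
    B = B0 * t + 0.5 * (B1 - B0) * t * t  # int_0^t beta(tau) dtau
    return math.exp(-0.5 * B) * x0, 1.0 - math.exp(-B)

def euler_maruyama(x0, t_end, n_steps=600):
    # Simulate dx = -0.5 beta(t) x dt + sqrt(beta(t)) dW (Eq. (1), VP case).
    dt = t_end / n_steps
    x = x0
    for i in range(n_steps):
        t = i * dt
        x += -0.5 * beta(t) * x * dt + math.sqrt(beta(t) * dt) * random.gauss(0, 1)
    return x

x0, T = 2.0, 1.0
samples = [euler_maruyama(x0, T) for _ in range(1500)]
emp_mean = sum(samples) / len(samples)
emp_var = sum((v - emp_mean) ** 2 for v in samples) / len(samples)
m, s = vp_kernel(x0, T)
print(m, s, emp_mean, emp_var)  # simulated moments match the analytic kernel
```

The analytic kernel is what makes training tractable: no simulation of Eq. (1) is needed to draw x_t given x_0.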

The forward sde is usually considered for a sufficiently long *diffusion time* T, leading to the density p(x, T).

In principle, when T → ∞, p(x, T) converges to Gaussian noise, regardless of initial conditions.

For generative modeling purposes, we are interested in the inverse dynamics of such a process, i.e., transforming samples of the noisy distribution p(x, T) into p*data* (x). Formally, such dynamics can be obtained by considering the solutions of the inverse diffusion process (Anderson, 1982),

$$\mathrm{d}\mathbf{x}_{t}=\left[-\mathbf{f}(\mathbf{x}_{t},t')+g^{2}(t')\mathbf{\nabla}\log p(\mathbf{x}_{t},t')\right]\mathrm{d}t+g(t')\mathrm{d}\bar{\mathbf{w}}_{t},\tag{2}$$

where t' def= T − t, and where the inverse dynamics involves a new Wiener process $\bar{\mathbf{w}}_{t}$, associated with the reverse-time evolution. Given p(x, T) as the initial condition, the solution of Eq. (2) after a *reverse diffusion time* T will be distributed as p*data* (x). We refer to the density associated to the backward process as q(x, t). The simulation of the backward process is referred to as *sampling* and, differently from the forward process, this process is not *affine* and a closed-form solution is out of reach.
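Eq. (2) can be made concrete in a setting where the true score is available in closed form: for Gaussian data under the Variance Preserving sde, p(x, t) stays Gaussian at all times, so the reverse sde can be integrated with Euler-Maruyama and checked against the data distribution. The data parameters and linear schedule below are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(1)

B0, B1 = 0.1, 20.0       # illustrative linear schedule beta(t) = B0 + (B1 - B0) t
MU0, VAR0 = 1.5, 0.25    # Gaussian "data" distribution (assumption)

def beta(t):
    return B0 + (B1 - B0) * t

def Bint(t):
    # int_0^t beta(tau) dtau for the linear schedule.
    return B0 * t + 0.5 * (B1 - B0) * t * t

def true_score(x, t):
    # For Gaussian data under the VP SDE, p(x, t) stays Gaussian, so the score
    # grad_x log p(x, t) is available in closed form (no neural network needed).
    m = math.exp(-0.5 * Bint(t))
    var = m * m * VAR0 + (1.0 - m * m)
    return -(x - m * MU0) / var

def reverse_sample(T=1.0, n_steps=800):
    # Euler-Maruyama integration of Eq. (2), started from p_noise = N(0, 1).
    dt = T / n_steps
    x = random.gauss(0, 1)
    for i in range(n_steps):
        tp = T - i * dt                  # t' = T - t
        drift = 0.5 * beta(tp) * x + beta(tp) * true_score(x, tp)
        x += drift * dt + math.sqrt(beta(tp) * dt) * random.gauss(0, 1)
    return x

xs = [reverse_sample() for _ in range(1000)]
gen_mean = sum(xs) / len(xs)
gen_var = sum((v - gen_mean) ** 2 for v in xs) / len(xs)
print(gen_mean, gen_var)  # close to MU0 and VAR0
```

With T = 1 under this schedule, p(x, T) is already very close to N(0, 1), so starting the reverse process from p_noise introduces little error; the paper studies what happens when this is not the case.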

Practical considerations on diffusion times. In practice, diffusion models are challenging to work with
(Song et al., 2021c). Indeed, a direct access to the true *score* function ∇ log p(xt, t) required in the dynamics of the reverse diffusion is unavailable. This can be solved by approximating it with a parametric function sθ(xt, t), e.g., a neural network, which is trained using the following loss function,

$$\mathcal{L}(\mathbf{\theta})=\frac{1}{2}\int_{0}^{T}\lambda(t)\,\mathbb{E}_{\sim(1)}\left\|\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)-\mathbf{\nabla}\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\mathrm{d}t,\tag{3}$$

where λ(t) is a positive weighting factor and the notation E∼(1) means that the expectation is taken with respect to the random process xt in Eq. (1): for a generic function h, E∼(1)[h(xt, x0, t)] = ∫ h(x, z, t)p(x, t| z)p*data* (z)dxdz. In practice, the time integral is estimated by sampling t ∼ p(t) = U(0, T). This loss, usually referred to as the *score matching loss*, is the cost function considered in Song et al. (2021b, Eq. (4)). The condition λ(t) = g(t)², adopted in this work, is referred to as *likelihood reweighting* (Song et al., 2021b). Due to the affine property of the drift, the term p(xt, t| x0) is analytically known and normally distributed for all t (expression available in Table 1, and in Särkkä & Solin (2019)). Intuitively, the estimation of the *score* is akin to a denoising objective, which operates in a challenging regime. Later we will quantify precisely the difficulty of learning the *score* as a function of increasing diffusion times.
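A Monte Carlo estimate of Eq. (3) follows the recipe above: draw t ∼ U(0, T), diffuse data samples with the closed-form Variance Preserving kernel, and compare the model score to the conditional score, weighted by λ(t) = g²(t) = β(t). In the sketch below, the 1D Gaussian mixture, the linear schedule, and the deliberately crude stand-in score model (the score of a standard Gaussian) are all illustrative assumptions.

```python
import math
import random

random.seed(2)

T = 1.0
beta = lambda t: 0.1 + 19.9 * t                # illustrative linear schedule
Bint = lambda t: 0.1 * t + 0.5 * 19.9 * t * t  # int_0^t beta(tau) dtau

def sample_data():
    # Illustrative 1D Gaussian mixture playing the role of p_data.
    if random.random() < 0.3:
        return random.gauss(1.0, 0.1)
    return random.gauss(3.0, 0.5)

def score_model(x, t):
    # Deliberately crude stand-in for s_theta: the score of N(0, 1).
    return -x

def score_matching_loss(n=20000):
    # Monte Carlo estimate of Eq. (3) with likelihood reweighting
    # lambda(t) = g^2(t) = beta(t). t is clipped away from 0, where the
    # conditional score blows up and the estimator variance explodes.
    total = 0.0
    for _ in range(n):
        t = random.uniform(1e-3, T)
        x0 = sample_data()
        m = math.exp(-0.5 * Bint(t)) * x0      # mean of p(x_t, t | x_0)
        s = 1.0 - math.exp(-Bint(t))           # variance of p(x_t, t | x_0)
        xt = m + math.sqrt(s) * random.gauss(0, 1)
        cond_score = -(xt - m) / s             # grad_x log p(x_t, t | x_0)
        total += beta(t) * (score_model(xt, t) - cond_score) ** 2
    return 0.5 * T * total / n                 # 1/2 * interval length * mean

loss = score_matching_loss()
print(loss)  # strictly positive: the crude model leaves a large score gap
```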

While the forward and reverse diffusion processes are valid for all T, the noise distribution p(x, T) is analytically known only in the limit T → ∞. To overcome this problem, the common solution is to replace p(x, T) with a simple distribution p*noise* (x) which, for the classes of sdes we consider in this work, is a Gaussian distribution.

In the literature, the discrepancy between p(x, T) and p*noise* (x) has been neglected, under the informal assumption of a sufficiently large diffusion time. Unfortunately, while this approximation seems a valid approach to simulate and generate samples, the reverse diffusion process starts from a different initial condition q(x, 0) and, as a consequence, it will converge to a solution q(x, T) that is different from the true p*data* (x). Later, we will expand on the error introduced by this approximation, but for illustration purposes Fig. 1 shows this behavior quantitatively for a simple 1D toy example p*data* (x) = πN(1, 0.1²) + (1 − π)N(3, 0.5²), with π = 0.3: when T is small, the distribution p*noise* (x) is very different from p(x, T), and samples from q(x, T) exhibit very low likelihood under p*data* (x).


![1_image_0.png](1_image_0.png)

Figure 1: Effect of T on a toy model: low diffusion times are detrimental to sample quality (likelihood of 1024 samples, reported as median and 95th quantile over 8 random seeds).

Crucially, Fig. 1 (zoomed region) illustrates an unknown behavior of diffusion models, which we unveil in our analysis. The right balance between efficient *score* estimation and sampling quality can be achieved by diffusion times that are smaller than common best practices. This is a key observation we explore in our work.

Contributions. An appropriate choice of the diffusion time T is a key factor that impacts training convergence, sampling time and quality. On the one hand, the approximation error introduced by considering initial conditions for the reverse diffusion process drawn from a simple distribution pnoise (x) ̸= p(x, T) increases when T is small. This is why the current best practice is to choose a sufficiently long diffusion time. On the other hand, training convergence of the *score* model sθ(xt, t) becomes more challenging to achieve with a large T, which also imposes extremely high computational costs **both** for training and for sampling. This suggests choosing a smaller diffusion time. Given the importance of this problem, in this work we set off to study, for the first time, the existence of suitable operating regimes that strike the right balance between computational efficiency and model quality. The main contributions of this work are the following.

Contribution 1: In § 2 we provide a new characterization of score-based diffusion models, which allows us to obtain a formal understanding of the impact of the diffusion time T. We do so through an elbo decomposition which emphasizes the roles of (i) the discrepancy between the "ending" distribution of the forward diffusion process and the "starting" distribution of the reverse diffusion process, and (ii) the *score* matching objective. Crucially, our analysis does not rely on assumptions on the quality of the score models. We explicitly study the existence of a trade-off and explore experimentally, for the first time, current approaches for selecting the diffusion time T.

Contribution 2: In § 3 we propose a novel method to improve *both* training and sampling efficiency of diffusion-based models, while maintaining high sample quality. Our method introduces an auxiliary distribution, allowing us to transform the simple "starting" distribution of the reverse process used in the literature so as to minimize the discrepancy to the "ending" distribution of the forward process. Then, a standard reverse diffusion can be used to closely match the data distribution. Intuitively, our method allows to build "bridges" across multiple distributions, and to set T toward the advantageous regime of small diffusion times. In addition to our methodological contributions, in § 4, we provide experimental evidence of the benefits of our method, in terms of sample quality and log likelihood. Finally, we conclude in § 5.

Related Work. A concurrent work by Zheng et al. (2022) presents an empirical study of a truncated diffusion process, but lacks a rigorous analysis, and a clear justification for the proposed approach. Recent attempts by Lee et al. (2022b) to optimize p*noise* , or the proposal to do so (Austin et al., 2021) have been studied in different contexts. Related work focus primarily on improving sampling efficiency, using a wide array of techniques. Sample generation times can be drastically reduced considering adaptive step-size integrators (Jolicoeur-Martineau et al., 2021). Other popular choices are based on merging multiple steps of a pretrained model through distillation techniques (Salimans & Ho, 2022) or by taking larger sampling steps with GANs (Xiao et al., 2022). Approaches closer to ours *modify* the sde, or the discrete time processes, to obtain inference efficiency gains. In particular, Song et al. (2021a) considers implicit non-Markovian diffusion processes, while Watson et al. (2021) changes the diffusion processes by optimal scheduling selection and Dockhorn et al. (2022) considers overdamped sdes. Finally, hybrid techniques combining VAEs and diffusion models (Vahdat et al., 2021) or simple auto encoders and diffusion models (Rombach et al., 2022), have positive effects on training and sampling times.

## 2 Exploring A Tradeoff On Diffusion Time

The dynamics of a diffusion model can be studied through the lens of variational inference, which allows us to bound the (log-)likelihood using an evidence lower bound (elbo) (Huang et al., 2021).

The interpretation we consider in this work (see also Song et al. (2021b), Thm. 1) emphasizes the two main factors affecting the quality of sample generation: an imperfect *score*, and a mismatch, measured in terms of the Kullback-Leibler (kl) divergence, between the noise distribution p(x, T) of the forward process and the distribution p*noise* used to initialize the backward process.

## 2.1 Preliminaries: The Elbo Decomposition

Our goal is to study the quality of the generated data distribution as a function of the diffusion time T. Then, instead of focusing on the log-likelihood bounds for single datapoints log q(x, T), we consider the average over the data distribution, i.e. the cross-entropy Ep*data* (x)log q(x, T). By rewriting the Lelbo derived in Huang et al. (2021, Eq. (25)) (details of the steps in the Appendix), we have that

$$\mathbb{E}_{p_{\mathsf{data}}(\mathbf{x})}\log q(\mathbf{x},T)\geq\mathcal{L}_{\textsc{ELBO}}(\mathbf{s_{\theta}},T)=\mathbb{E}_{\sim(1)}\log p_{\mathsf{noise}}\left(\mathbf{x}_{T}\right)-I(\mathbf{s_{\theta}},T)+R(T),\tag{4}$$

where

$$R(T)=\frac{1}{2}\int_{t=0}^{T}\mathbb{E}_{\sim(1)}\left[g^{2}(t)\left\|\mathbf{\nabla}\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}-2\,\mathbf{f}^{\top}(\mathbf{x}_{t},t)\,\mathbf{\nabla}\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right]\mathrm{d}t,$$

and

$$I(\mathbf{s_{\theta}},T)=\frac{1}{2}\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\left\|\mathbf{s_{\theta}}(\mathbf{x}_{t},t)-\mathbf{\nabla}\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t$$

is equal to the loss term in Eq. (3) when λ(t) = g²(t).
Note that R(T) depends neither on sθ nor on p*noise* , while I(sθ, T), or an equivalent reparameterization
(Huang et al., 2021; Song et al., 2021b, Eq. (1)), is used to learn the approximated *score*, by optimization of the parameters θ. It is then possible to show that

$$I(\mathbf{s_{\theta}},T)\geq\underbrace{I(\mathbf{\nabla}\log p,T)}_{\stackrel{\mathrm{def}}{=}K(T)}=\frac{1}{2}\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\left\|\mathbf{\nabla}\log p(\mathbf{x}_{t},t)-\mathbf{\nabla}\log p(\mathbf{x}_{t},t\mid\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t.\tag{5}$$

Note that the term K(T) = I(∇ log *p, T*) does not depend on θ. Consequently, we can define G(sθ, T) =
I(sθ, T) − K(T) (see Appendix for details), where G(sθ, T) is a positive term that we call the gap term, accounting for the practical case of an imperfect *score*, i.e. sθ(xt, t) ̸= ∇ log p(xt, t). It also holds that

$$\mathbb{E}_{\sim(1)}\log p_{\mathsf{noise}}(\mathbf{x}_{T})=\int\left[\log p_{\mathsf{noise}}(\mathbf{x})-\log p(\mathbf{x},T)+\log p(\mathbf{x},T)\right]p(\mathbf{x},T)\,\mathrm{d}\mathbf{x}$$
$$=\mathbb{E}_{\sim(1)}\log p(\mathbf{x}_{T},T)-\mathrm{KL}\left[p(\mathbf{x},T)\parallel p_{\mathsf{noise}}(\mathbf{x})\right].\tag{6}$$

Therefore, we can substitute the cross-entropy term E∼(1)log p*noise* (xT ) in Eq. (4) to obtain

$$\mathbb{E}_{p_{\mathsf{data}}(\mathbf{x})}\log q(\mathbf{x},T)\geq-\mathrm{KL}\left[p(\mathbf{x},T)\parallel p_{\mathsf{noise}}(\mathbf{x})\right]+\mathbb{E}_{\sim(1)}\log p(\mathbf{x}_{T},T)-K(T)+R(T)-\mathcal{G}(\mathbf{s_{\theta}},T).\tag{7}$$

Before concluding our derivation, it is necessary to introduce an important proposition (formal proof in Appendix), where we show how to combine different terms of Eq. (7) into the negative entropy term Ep*data* (x)log p*data* (x).

Proposition 1. Given the stochastic dynamics defined in Eq. (1)*, it holds that*

$$\mathbb{E}_{\sim(1)}\log p(\mathbf{x}_{T},T)-K(T)+R(T)=\mathbb{E}_{p_{\mathsf{data}}(\mathbf{x})}\log p_{\mathsf{data}}(\mathbf{x}).\tag{8}$$
Finally, we can now bound the value of Ep*data* (x)log q(x, T) as
$$\mathbb{E}_{p_{\mathsf{data}}(\mathbf{x})}\log q(\mathbf{x},T)\geq\underbrace{\mathbb{E}_{p_{\mathsf{data}}(\mathbf{x})}\log p_{\mathsf{data}}(\mathbf{x})-\mathcal{G}(\mathbf{s_{\theta}},T)-\mathrm{KL}\left[p(\mathbf{x},T)\parallel p_{\mathsf{noise}}(\mathbf{x})\right]}_{\mathcal{L}_{\textsc{ELBO}}(\mathbf{s_{\theta}},T)}.\tag{9}$$

Eq. (9) clearly emphasizes the roles of an approximate score function, through the gap term G(·), and the discrepancy between the noise distribution of the forward process and the initial distribution of the reverse process, through the kl term. The (negative) entropy term Ep*data* (x)log p*data* (x), which is constant w.r.t. T and θ, is the best value achievable by the elbo. Indeed, by rearranging Eq. (9), kl [q(x, T) ∥ p*data* (x)] ≤ G(sθ, T) + kl [p(x, T) ∥ p*noise* (x)]. In the ideal case of perfect *score* matching, the elbo in Eq. (9) is attained with equality. If, in addition, the initial conditions for the reverse process are ideal, i.e. q(x, 0) = p(x, T), then the results in Anderson (1982) allow us to claim that q(x, T) = p*data* (x).

| Diffusion process   | Drift and diffusion terms       | p(xt, t ∣ x0) = N(m, sI)                          | pnoise (x)             |
|---------------------|---------------------------------|---------------------------------------------------|------------------------|
| Variance Exploding  | α(t) = 0, g(t) = √(dσ²(t)/dt)   | m = x0, s = σ²(t) − σ²(0)                         | N(0, (σ²(T) − σ²(0))I) |
| Variance Preserving | α(t) = −½β(t), g(t) = √β(t)     | m = e^(−½∫₀ᵗ β(τ)dτ) x0, s = 1 − e^(−∫₀ᵗ β(τ)dτ)  | N(0, I)                |

Table 1: Two main families of diffusion processes, where σ²(t) = σ²min (σ²max / σ²min)ᵗ and β(t) = β0 + (β1 − β0)t.

Next, we show the existence of a tradeoff: the kl decreases with T, while the gap increases with T.

## 2.2 The Tradeoff On Diffusion Time

We begin by showing that the kl term in Eq. (9) decreases with the diffusion time T, which suggests selecting a large T to maximize the elbo. We consider the two main classes of sdes for the forward diffusion process defined in Eq. (1): sdes whose steady-state distribution is the standard multivariate Gaussian, referred to as *Variance Preserving* (VP), and sdes without a stationary distribution, referred to as *Variance Exploding* (VE), which we summarize in Table 1. The standard approach to generate new samples relies on the backward process defined in Eq. (2), and consists in setting p*noise* in agreement with the form of the forward process sde. The following result bounds the discrepancy between the noise distribution p(x, T) and p*noise* .

Lemma 1. *For the classes of* sdes *considered (Table 1), the discrepancy between* p(x, T) *and* p*noise*(x) *can be bounded as follows.*

For Variance Preserving sdes, it holds that:

$$\mathrm{KL}\left[p(\mathbf{x},T)\parallel p_{\mathsf{noise}}(\mathbf{x})\right]\leq C_{1}\exp\left(-\int_{0}^{T}\beta(t)\mathrm{d}t\right).$$

For Variance Exploding sdes, it holds that:

$$\mathrm{KL}\left[p(\mathbf{x},T)\parallel p_{\mathsf{noise}}(\mathbf{x})\right]\leq C_{2}\,\frac{1}{\sigma^{2}(T)-\sigma^{2}(0)}.$$

Our proof uses results from Villani (2009), the logarithmic Sobolev inequality, and Grönwall's inequality (see Appendix for details). The consequence of Lemma 1 is that, to maximize the elbo, the diffusion time T should be as large as possible (ideally, T → ∞), such that the kl term vanishes. This result is in line with current practices for training score-based diffusion processes, which argue for sufficiently long diffusion times (De Bortoli et al., 2021). Our analysis, on the other hand, highlights how this term is only one of the two contributions to the elbo.
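The decay stated in Lemma 1 for the VP case can be checked numerically in a setting where the kl is available in closed form: for Gaussian data, p(x, T) stays Gaussian under the VP sde, and the kl to p*noise* = N(0, I) is the standard Gaussian-to-Gaussian expression. The data parameters and the linear schedule below are illustrative assumptions.

```python
import math

def kl_gauss(m1, v1, m2, v2):
    # KL( N(m1, v1) || N(m2, v2) ) for 1D Gaussians.
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def kl_vp(T, mu0=1.5, var0=0.25, b0=0.1, b1=20.0):
    # KL[p(x, T) || p_noise] for Gaussian data N(mu0, var0) under the VP SDE
    # with linear schedule beta(t) = b0 + (b1 - b0) t; both densities are
    # Gaussian, so the KL is available in closed form.
    B = b0 * T + 0.5 * (b1 - b0) * T * T       # int_0^T beta(t) dt
    m = math.exp(-0.5 * B)
    return kl_gauss(m * mu0, m * m * var0 + 1.0 - m * m, 0.0, 1.0)

kls = [kl_vp(T) for T in (0.1, 0.25, 0.5, 1.0)]
print(kls)  # monotonically decreasing in T
```

Consistent with the lemma, the kl shrinks rapidly as ∫β grows, which is exactly the mechanism that rewards large diffusion times in Eq. (9).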

Now, we focus our attention on studying the behavior of the second component, G(·). Before that, we define a few quantities that allow us to write the next important result.

Definition 1. *We define the optimal score* ŝθ *for any diffusion time* T, *as the score obtained using parameters that minimize* I(sθ, T). *Similarly, we define the optimal score gap* G(ŝθ, T) *for any diffusion time* T, *as the gap attained when using the optimal score.*

Lemma 2. *The optimal score gap term* G(ŝθ, T) *is a non-decreasing function in* T. *That is, given* T2 > T1, θ1 = arg minθ I(sθ, T1) *and* θ2 = arg minθ I(sθ, T2), *then* G(sθ2, T2) ≥ G(sθ1, T1).

The proof (see Appendix) is a direct consequence of the definition of G and the optimality of the score. Note that Lemma 2 does not imply that G(sθa, T2) ≥ G(sθb, T1) holds for generic parameters θa, θb.

## 2.3 Is There An Optimal Diffusion Time?

While diffusion processes are generally studied for T → ∞, for practical reasons, diffusion times in score-based models have been arbitrarily set to be "sufficiently large" in the literature. Here we formally argue, for the first time, about the existence of an optimal diffusion time, which strikes the right balance between the gap G(·) and the kl terms of the elbo in Eq. (9).

Before proceeding any further, we clarify that our final objective is not to find and use an optimal diffusion time. Instead, our result on the existence of optimal diffusion times (which can be smaller than the ones set by popular heuristics) serves the purpose of motivating the choice of small diffusion times, which however calls for a method to overcome approximation errors.

Proposition 2. *Consider the* elbo *decomposition in Eq. (9), studied as a function of the diffusion time* T. *There exists at least one optimal diffusion time* T⋆ *in the interval* [0, ∞] *which maximizes the* elbo, *that is,* T⋆ = arg maxT Lelbo(ŝθ, T). *Additional assumptions on the gap term* G(·) *can be used to guarantee strict finiteness of* T⋆.

It is trivial to verify that, since the optimal gap term G(ŝθ, T) is a non-decreasing function in T (Lemma 2), we have ∂G/∂T ≥ 0. Then, we study the sign of the derivative of the kl term, which is always negative as shown in the Appendix. Moreover, we know that lim T→∞ ∂kl/∂T = 0. Consequently, the function ∂Lelbo/∂T = −∂G/∂T − ∂kl/∂T has at least one zero in its domain R+. To guarantee a stricter bounding of T⋆, we could study asymptotically the growth rates of the G and kl terms for large T. The investigation is technically involved and outside the scope of this paper. Nevertheless, as discussed hereafter, the numerical investigation carried out in this work suggests finiteness of T⋆.

While the proof for the general case is available in the Appendix, the analytic solution for the optimal diffusion time is elusive, as a full characterization of the gap term is particularly challenging. Additional assumptions would guarantee boundedness of T⋆.

![5_image_0.png](5_image_0.png)

Figure 2: elbo decomposition, elbo and likelihood for a 1D toy model, as a function of the diffusion time T. The numerical results confirm the tradeoff and optimality predicted by our theory.
Empirically, we use Fig. 2 to illustrate the tradeoff and the optimality arguments through the lens of the same toy example we use in § 1. In the first and third columns, we show the elbo decomposition. We can verify that G(sθ, T) is an increasing function of T, whereas the kl term is a decreasing function of T. Even in the simple case of a toy example, the tension between small and large values of T is clear. In the second and fourth columns, we show the values of the elbo and of the likelihood as a function of T. We then verify the validity of our claims: the elbo is neither maximized by an infinite diffusion time, nor by a "sufficiently large" value. Instead, there exists an optimal diffusion time which, for this example, is smaller than what is typically used in practical implementations, i.e. T = 1.0. In the Appendix, we show that optimizing the elbo to obtain an optimal diffusion time T⋆ is technically feasible, without resorting to exhaustive grid search. In § 3, we present a new method that admits much smaller diffusion times, and show that the elbo of our approach is at least as good as that of a standard diffusion model configured to use its optimal diffusion time T⋆.

## 2.4 Relation With Diffusion Process Noise Schedule

We remark that a simple modification of the noise schedule to steer the the diffusion process toward a small diffusion time (Kingma et al., 2021; Bao et al., 2022) is not a viable solution. In the Appendix, we discuss how the optimal value of the elbo, in the case of affine sdes, is *invariant* to the choice of the noise schedule.

Indeed, its value depends uniquely on the relative level of corruption of the initial data at the considered final diffusion time T, that is, the *Signal-to-Noise Ratio*. Naively, we could think that by selecting a twice-as-fast noise schedule, we would obtain the same elbo as the original schedule by diffusing for only half the time. While true, this does not provide any practical benefit in terms of computational complexity. If the noise schedule is faster, the drift terms involved in the reverse process change more rapidly. Consequently, to *simulate* the reverse sde with a numerical integration scheme, smaller step sizes are required to maintain the accuracy of the original noise schedule simulation. The net effect is that while the diffusion time for the continuous-time dynamics is smaller, the number of integration steps is larger, resulting in zero net gain. The optimization of the noise schedule can however have important practical effects on the stability of training and the variance of the estimates, which we do not tackle in this work (Kingma et al., 2021).
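The invariance can be illustrated with a minimal numerical check, under our own simplifying assumption of a VP sde with constant β (the paper's argument covers general affine sdes):

```python
import numpy as np

# Our toy check: for a VP SDE with constant beta, alpha(T) = exp(-0.5*beta*T),
# so the signal-to-noise ratio SNR(T) = alpha^2(T) / (1 - alpha^2(T)) depends
# only on the integral of the schedule over [0, T]. Doubling beta and halving
# T leaves the SNR, and hence the ELBO-relevant corruption level, unchanged.
def snr(beta, T):
    a2 = np.exp(-beta * T)            # alpha^2(T)
    return a2 / (1 - a2)

assert np.isclose(snr(beta=1.0, T=1.0), snr(beta=2.0, T=0.5))
```

The faster schedule reaches the same SNR in half the time, but its reverse drift also varies twice as fast, so a numerical integrator needs proportionally smaller steps, which is the zero-net-gain argument above.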

## 2.5 Relation With Literature On Bounds And Goodness Of Score Assumptions

A few other works in the literature study the convergence properties of diffusion models. In the work of De Bortoli et al. (2021) (Thm. 1), a total variation (TV) bound between the generated and data distributions is obtained in the form $C_1 \exp(a_1 T) + C_2 \exp(-a_2 T)$, where the constant $C_1$ depends on the maximum error over $[0, T]$ between the true and approximated score, i.e. $\max_{t\in[0,T]} \|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}, t) - \nabla \log p(\mathbf{x}, t)\|$. In the work of De Bortoli (2022) (Thm. 1), the requirement is relaxed to a bound on $\max_{t\in[0,T]} \frac{\sigma^2(t)}{1+\|\mathbf{x}\|}\,\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}, t) - \nabla \log p(\mathbf{x}, t)\|$, under which the 1-Wasserstein distance between generated and true data is bounded as $C_1 + C_2 \exp(-a_2 T) + C_3$.

Other works consider the more realistic average square norm instead of the infinity norm, which is consistent with the standard training of diffusion models. For instance, Lee et al. (2022a) show how the TV bound can be expressed as a function of $\max_{t\in[0,T]} \mathbb{E}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_t, t) - \nabla \log p(\mathbf{x}_t, t)\|^2\right]$ (Thms. 2.2, 3.1, 3.2). Related to our work, Lee et al. (2022a) find that the TV bound is optimized for a diffusion time that depends, among other factors, on the maximum score error. Finally, the work by Chen et al. (2022) (Thm. 2), which is concurrent to ours, shows that if $\max_{t\in[0,T]} \mathbb{E}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_t, t) - \nabla \log p(\mathbf{x}_t, t)\|^2\right]$ is bounded, then the TV distance between true and generated data can be bounded as $C_1 \exp(-a_1 T) + \sqrt{\epsilon T}$, plus a discretization error.

All prior approaches require assumptions on the maximum score error, which *implicitly* depends on: i) the maximum diffusion time T and ii) the class of parametric score networks considered. Hence, such methods allow for the study of convergence properties, but with the following limitations. It is not clear how the score error behaves as the fitting domain [0, T] is enlarged, for a generic class of parametric functions and a generic p*data* . Moreover, it is difficult to link the error assumptions with the actual training loss of diffusion models.

In this work, instead, we follow a more agnostic path, as we make no assumptions about the error behavior.

We notice that the optimal gap term is **always** a non-decreasing function of T. First, we question whether the current best practice for setting diffusion times is adequate: we find that in realistic implementations, diffusion times are larger than necessary. Second, we introduce a new approach with provably the same performance as standard diffusion models but lower computational complexity, as highlighted in § 3.

## 3 A New, Practical Method For Decreasing Diffusion Times

The elbo decomposition in Eq. (9) and the bounds in Lemma 1 and Lemma 2 highlight a dilemma. We thus propose a simple method that allows us to achieve **both** a small gap G(sθ, T), and a small discrepancy kl [p(x, T) ∥ p*noise* (x)]. Before that, let us use Fig. 3 to summarize all densities involved and the effects of the various approximations, which will be useful to visualize our proposal.

The data distribution p*data* (x) is transformed into the noise distribution p(x, T) through the forward diffusion process. Ideally, starting from p(x, T) we can recover the data distribution by simulating the backward process with the exact score ∇ log p. Using the approximated score sθ and the same initial conditions, the backward process ends up in q^(1)(x, T), whose discrepancy from p*data* (x), marked 1 in Fig. 3, is G(sθ, T). However, the distribution p(x, T) is unknown and is replaced with an easy distribution p*noise* (x), accounting for an error, marked a, measured as kl [p(x, T) ∥ p*noise* (x)]. With both the score and the initial distribution approximated, the backward process ends up in q^(3)(x, T), where the discrepancy from p*data* , marked 3, is the sum G(sθ, T) + kl [p(x, T) ∥ p*noise* ].

![7_image_0.png](7_image_0.png)

Figure 3: Intuitive illustration of the forward and backward diffusion processes. Discrepancies between distributions are illustrated as distances. Color coding discussed in the text.
Multiple bridges across densities. In a nutshell, our method reduces the gap term by selecting smaller diffusion times and by using a learned auxiliary model to transform the initial density p*noise* (x) into a density νϕ(x) which is as close as possible to p(x, T), thus avoiding the penalty of a large kl term. To implement this, we first *transform* the simple distribution p*noise* into the distribution νϕ(x), whose discrepancy kl [p(x, T) ∥ νϕ(x)], marked b, is smaller than a. Then, starting from the auxiliary model νϕ(x), we use the approximate score sθ to simulate the backward process, reaching q^(2)(x, T). This solution has a discrepancy from the data distribution, marked 2, of G(sθ, T) + kl [p(x, T) ∥ νϕ(x)], which we will quantify later in the section. Intuitively, we introduce two bridges. The first bridge connects the noise distribution p*noise* to an auxiliary distribution νϕ(x) that is as close as possible to the one obtained by the forward diffusion process.

The second bridge—a standard reverse diffusion process—connects the smooth distribution νϕ(x) to the data distribution. Notably, our approach has important guarantees, which we discuss next.
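The two-bridge sampler can be sketched in a few lines. The interfaces below are hypothetical stand-ins: `aux_sample` plays the role of the learned νϕ and `score` that of the trained sθ, both instantiated here with their exact closed forms for a 1D VP sde with Gaussian data, so that the reverse simulation provably recovers p*data*:

```python
import numpy as np

# Sketch of the two-bridge sampler (our toy instantiation): first bridge
# samples nu_phi ~ p(x, tau); second bridge runs Euler-Maruyama on the
# reverse SDE from t = tau down to 0 using the (here exact) score.
rng = np.random.default_rng(0)
beta, mu, s0, tau, n_steps = 1.0, 2.0, 0.5, 0.4, 400

def marginal(t):                       # mean/variance of p(x,t) for N(mu,s0^2) data
    a = np.exp(-0.5 * beta * t)
    return mu * a, s0**2 * a**2 + (1 - a**2)

def score(x, t):                       # exact score of the Gaussian marginal
    m, v = marginal(t)
    return -(x - m) / v

def aux_sample(n):                     # first bridge: nu_phi matching p(x, tau)
    m, v = marginal(tau)
    return m + np.sqrt(v) * rng.standard_normal(n)

x = aux_sample(10_000)
dt = tau / n_steps
for k in range(n_steps):
    t = tau - k * dt
    drift = -0.5 * beta * x - beta * score(x, t)   # f - g^2 * score (reverse drift)
    x = x - drift * dt + np.sqrt(beta * dt) * rng.standard_normal(x.size)

# samples in x now approximate p_data = N(mu, s0^2)
```

With a learned νϕ and sθ in place of the exact quantities, the same loop is the generative procedure of our method, with a number of steps proportional to τ rather than to T⋆.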

## 3.1 Auxiliary Model Fitting And Guarantees

We begin by stating the requirements we consider for the density νϕ(x). First, as is the case for p*noise* , it should be easy to generate samples from νϕ(x) in order to initialize the reverse diffusion process. Second, the auxiliary model should allow us to compute the likelihood of the samples generated through the overall generative process, which begins in p*noise* , passes through νϕ(x), and arrives in q(x, T).

The fitting procedure of the auxiliary model is straightforward. First, we recognize that minimizing kl [p(x, T) ∥ νϕ(x)] w.r.t. ϕ is equivalent to maximizing Ep(x,T)[log νϕ(x)], whose negation we can use as a loss function. To obtain the set of optimal parameters ϕ⋆, we require samples from p(x, T), which can be easily obtained even if the density p(x, T) is not available in closed form. Indeed, by sampling from p*data* and then from p(x, T | x0), we obtain an unbiased Monte Carlo estimate of Ep(x,T)[log νϕ(x)], and optimization of the loss can be performed. Note that, due to the affine nature of the drift, the conditional distribution p(x, T | x0) is easy to sample from, as shown in Table 1. From a practical point of view, it is important to notice that the fitting of νϕ is independent of the training of the score-matching objective, i.e. the result of I(sθ) does not depend on the shape of the auxiliary distribution νϕ. This observation indicates that the two training procedures can be run concurrently, thus enabling considerable time savings.

Next, we show that the first bridge in our model reduces the kl term, even for small diffusion times.

Proposition 3. *Let us assume that* p*noise* (x) *is in the family spanned by* νϕ, *i.e. there exists* ϕ̃ *such that* νϕ̃ = p*noise* . *Then we have that*
$$KL\left[p(\mathbf{x},T)\parallel\nu_{\boldsymbol{\phi}^{\star}}(\mathbf{x})\right]\leq KL\left[p(\mathbf{x},T)\parallel\nu_{\tilde{\boldsymbol{\phi}}}(\mathbf{x})\right]=KL\left[p(\mathbf{x},T)\parallel p_{\textit{noise}}(\mathbf{x})\right].\tag{10}$$
Since we introduce the auxiliary distribution νϕ, we shall define a new elbo for our method:

$$\mathcal{L}^{\boldsymbol{\phi}}_{\textsc{elbo}}(\mathbf{s}_{\boldsymbol{\theta}},T)=\mathbb{E}_{p_{\textit{data}}(\mathbf{x})}\log p_{\textit{data}}(\mathbf{x})-G(\mathbf{s}_{\boldsymbol{\theta}},T)-KL\left[p(\mathbf{x},T)\parallel\nu_{\boldsymbol{\phi}}(\mathbf{x})\right]\tag{11}$$

Recalling that ŝθ is the optimal score for a generic time T, Proposition 3 allows us to claim that $\mathcal{L}^{\boldsymbol{\phi}^{\star}}_{\textsc{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T)\geq\mathcal{L}_{\textsc{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T)$. Then, we can state the following important result:

Proposition 4. *Given the existence of* T⋆, *defined as the diffusion time such that the* elbo *is maximized (Proposition 2), there exists at least one diffusion time* τ ≤ T⋆ *such that* $\mathcal{L}^{\boldsymbol{\phi}^{\star}}_{\textsc{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},\tau)\geq\mathcal{L}_{\textsc{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T^{\star})$.

Proposition 4 has two interpretations. On the one hand, given two score models optimally trained for their respective diffusion times, our approach guarantees an elbo that is at least as good as that of a standard diffusion model configured with its optimal time T⋆. Our method achieves this with a smaller diffusion time τ, which improves sampling efficiency without sacrificing generation quality. On the other hand, if we settle for an equivalent elbo between the standard diffusion model and our approach, our method can afford a sub-optimal score model, which requires a smaller computational budget to be trained, while guaranteeing shorter sampling times. We elaborate on this interpretation in § 4, where our approach obtains substantial savings in terms of training iterations.

A final note is in order. The choice of the auxiliary model depends on the selected diffusion time. The larger T is, the "simpler" the auxiliary model can be: the distribution p(x, T) approaches p*noise* , so a simple auxiliary model is sufficient to transform p*noise* into a suitable νϕ. Conversely, for a small T, the distribution p(x, T) is closer to the data distribution, and the auxiliary model requires higher flexibility and capacity. In § 4, we substantiate this discussion with numerical examples and experiments on real data.

## 3.2 Comparison With Schrödinger Bridges

In this Section, we briefly compare our method with the Schrödinger bridges approach (Chen et al., 2021b;a; De Bortoli et al., 2021), which allows one to move from an arbitrary pnoise to p*data* in any finite amount of time T. This is achieved by simulating the sde

$$\mathrm{d}\mathbf{x}_{t}=\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\mathbf{\nabla}\log\hat{\psi}(\mathbf{x}_{t},t^{\prime})\right]\mathrm{d}t+g(t^{\prime})\mathrm{d}\mathbf{w}_{t},\quad\mathbf{x}_{0}\sim p_{\textit{noise}}\,,\tag{12}$$

where ψ, ψ̂ solve the Partial Differential Equation (pde) system

$$\begin{cases}\frac{\partial\psi(\mathbf{x},t)}{\partial t}=-\mathbf{\nabla}^{\top}\!\psi(\mathbf{x},t)\,\mathbf{f}(\mathbf{x},t)-\frac{g^{2}(t)}{2}\Delta\psi(\mathbf{x},t),\\[4pt] \frac{\partial\hat{\psi}(\mathbf{x},t)}{\partial t}=-\mathbf{\nabla}^{\top}\!\left(\hat{\psi}(\mathbf{x},t)\mathbf{f}(\mathbf{x},t)\right)+\frac{g^{2}(t)}{2}\Delta\hat{\psi}(\mathbf{x},t),\end{cases}\qquad\begin{aligned}&\psi(\mathbf{x},0)\hat{\psi}(\mathbf{x},0)=p_{\textit{data}}(\mathbf{x}),\\&\psi(\mathbf{x},T)\hat{\psi}(\mathbf{x},T)=p_{\textit{noise}}(\mathbf{x}).\end{aligned}\tag{13}$$
This approach presents drawbacks compared to classical diffusion models. First, the functions ψ, ψ̂ are not known, and their parametric approximation is costly and complex. Second, it is much harder to obtain quantitative bounds between true and generated data as a function of the quality of such approximations.

The ψ, ψ̂ estimation procedure simplifies considerably in the particular case where p*noise* (x) = p(x, T), for arbitrary T. The solution of Eq. (13) is indeed ψ(x, t) = 1, ψ̂(x, t) = p(x, t). The first pde of the system is satisfied since ψ is a constant. The second pde is the Fokker–Planck equation, which is satisfied by ψ̂(x, t) = p(x, t).

Boundary conditions are also satisfied. In this scenario, a sensible objective is score matching, as making ∇ log ψ̂ equal to the true score ∇ log p allows perfect generation.

Unfortunately, it is difficult to generate samples from p(x, T), the initial condition of Eq. (12). A trivial solution is to select T → ∞ so that p*noise* is the simple and analytically known steady-state distribution of Eq. (1). This corresponds to the classical diffusion model approach, which we discussed in the previous sections. An alternative solution is to keep T finite and *cover* the first part of the bridge, from p*noise* to p(x, T), with an auxiliary model. This provides a different interpretation of our method, which allows for smaller diffusion times while maintaining good generative quality.

## 3.3 An Extension For Density Estimation

Diffusion models can also be used for density estimation by transforming the diffusion sde into an equivalent Ordinary Differential Equation (ode) whose marginal distribution p(x, t) at each time instant coincides with that of the corresponding sde (Song et al., 2021c). The exact equivalent ode requires the score ∇ log p(xt, t), which in practice is replaced by the score model sθ, leading to the following ode

$${\rm d}\mathbf{x}_{t}=\left(\mathbf{f}(\mathbf{x}_{t},t)-\frac{1}{2}g(t)^{2}\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\right){\rm d}t\quad\mbox{with}\quad\mathbf{x}_{0}\sim p_{\rm data}\,\tag{14}$$

whose time-varying probability density is indicated with p̃(x, t). Note that the density p̃(x, t) is in general not equal to the density p(x, t) associated with Eq. (1), with the exception of perfect score matching (Song et al., 2021b). The reverse-time process is modeled as a Continuous Normalizing Flow (cnf) (Chen et al., 2018; Grathwohl et al., 2019) initialized with distribution p*noise* (x); then, the likelihood of a given value x0 is

$$\log\widetilde{p}(\mathbf{x}_{0})=\log p_{\textit{noise}}(\mathbf{x}_{T})+\int\limits_{t=0}^{T}\mathbf{\nabla}\mathbf{\cdot}\left(\mathbf{f}(\mathbf{x}_{t},t)-\frac{1}{2}g(t)^{2}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\right)\mathrm{d}t.\tag{15}$$

To use our proposed model for density estimation, we also need to take into account the ode dynamics. We focus again on the term log p*noise* (xT ) to improve the expected log-likelihood. For consistency, our auxiliary density νϕ should now maximize the expectation of log νϕ(xT ) under the dynamics of Eq. (14) instead of those of Eq. (1). However, the simulation of Eq. (14) requires access to sθ which, for density estimation, is available only once the score model has been trained. Consequently, optimization w.r.t. ϕ can only be performed sequentially, whereas for generative purposes it could be done concurrently. While the sequential version is expected to perform better, experimental evidence indicates that the improvements are marginal, justifying the adoption of the more efficient concurrent version.
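The likelihood computation in Eq. (15) can be exercised end-to-end on a 1D toy. This is our own instantiation, with two stated assumptions: the exact score stands in for sθ, and T is taken large enough that p(x, T) ≈ p*noise*, so the computed log-density should match the true one:

```python
import numpy as np

# Sketch of Eq. (15): integrate the probability-flow ODE forward from x0
# while accumulating the divergence of its drift, then add log p_noise(x_T).
# Toy setup: VP SDE, Gaussian p_data = N(mu, s0^2), exact score.
beta, mu, s0, T, n_steps = 1.0, 2.0, 0.5, 10.0, 10_000

def marginal(t):                                  # mean/variance of p(x,t)
    a = np.exp(-0.5 * beta * t)
    return mu * a, s0**2 * a**2 + (1 - a**2)

def log_ptilde(x0):
    x, div_int, dt = x0, 0.0, T / n_steps
    for k in range(n_steps):
        t = k * dt
        m, v = marginal(t)
        score = -(x - m) / v                      # exact score, in place of s_theta
        drift = -0.5 * beta * x - 0.5 * beta * score   # f - (1/2) g^2 s
        div_int += (-0.5 * beta + 0.5 * beta / v) * dt  # d(drift)/dx accumulated
        x = x + drift * dt                        # Euler step of the ODE
    log_pnoise = -0.5 * (x**2 + np.log(2 * np.pi))     # p_noise = N(0, 1)
    return log_pnoise + div_int

x0_test = 2.3
exact = -0.5 * ((x0_test - mu)**2 / s0**2 + np.log(2 * np.pi * s0**2))
# log_ptilde(x0_test) should be close to `exact` (exact score, large T)
```

With a learned sθ the same loop yields p̃(x0), and replacing log p*noise*(xT ) with log νϕ(xT ) gives the density-estimation variant of our method.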

## 4 Experiments

We now present numerical results on the mnist and cifar10 datasets, to support our claims in §§ 2 and 3.

We follow a standard experimental setup (Song et al., 2021a;b; Huang et al., 2021; Kingma et al., 2021): we use a standard U-Net architecture with time embeddings (Ho et al., 2020) and we report the log-likelihood in terms of bits per dimension (bpd) and, for cifar10 only, the Fréchet Inception Distance (fid) score.

Although the fid score is a standard metric for ranking generative models, caution should be used against over-interpreting fid improvements (Kynkäänniemi et al., 2022). Similarly, while the theoretical properties of the models we consider are obtained through the lens of elbo maximization, the log-likelihood measured in terms of bpd should be interpreted with care (Theis et al., 2016). Finally, we also report the number of neural function evaluations (nfe) required to compute the relevant metrics. We compare our method to the standard score-based model (Song et al., 2021c). The full description of the experimental setup is presented in the Appendix.

On the existence of T⋆. We look for further empirical evidence of the existence of a T⋆ < ∞, as stated in Proposition 2. For the moment, we focus on the baseline model (Song et al., 2021c), where no auxiliary models are introduced. Results are reported in Table 2. For mnist, we observe that times T = 0.6 and T = 1.0 have comparable performance in terms of bpd, implying that any T ≥ 1.0 is at best unnecessary and generally detrimental.

Similarly, for cifar10, we note that the best value of bpd is achieved for T = 0.6, outperforming all other values.

Table 2: Optimal T in (Song et al., 2021c)

| Dataset | Time T | bpd (↓) |
|---------|--------|---------|
| mnist   | 1.0    | 1.16    |
|         | 0.6    | 1.16    |
|         | 0.4    | 1.25    |
|         | 0.2    | 1.75    |
| cifar10 | 1.0    | 3.09    |
|         | 0.6    | 3.07    |
|         | 0.4    | 3.09    |
|         | 0.2    | 3.38    |

Our auxiliary models. In § 3 we introduced an auxiliary model to minimize the mismatch between initial distributions of the backward process. We now specify the family of parametric distributions we have considered. Clearly, the choice of an auxiliary model also depends on the data distribution, in addition to the choice of diffusion time T.

For our experiments, we consider two auxiliary models: (i) a Dirichlet process Gaussian mixture model (dpgmm) (Rasmussen, 1999; Görür & Edward Rasmussen, 2010) for mnist and (ii) Glow (Kingma & Dhariwal, 2018), a flexible normalizing flow, for cifar10. Both satisfy our requirements: they allow exact likelihood computation and they are equipped with a simple sampling procedure. As discussed in § 3, the auxiliary model complexity should be adjusted as a function of T. This is confirmed experimentally in Fig. 4, where we use the number of mixture components of the dpgmm as a proxy for the complexity of the auxiliary model.

![10_image_0.png](10_image_0.png)

Figure 4: Complexity of the auxiliary model as a function of diffusion time (median and 95% quantiles over 4 random seeds).
Reducing T **with auxiliary models.** We now show how it is possible to obtain comparable (or better) performance than the baseline model for a wide range of diffusion times T. For mnist, setting τ = 0.4 produces good performance both in terms of bpd (Table 3) and visual sample quality (Fig. 5). We also consider the sequential extension (S) to compute the likelihood, but observe only marginal improvements compared to the concurrent implementation. Similarly, for the cifar10 dataset, in Table 4 we observe how our method achieves better bpd than the baseline diffusion for T = 1. Moreover, our approach outperforms the baselines at the corresponding diffusion times in terms of fid score (additional non-curated samples in the Appendix).

In Figure 10 we provide a non-curated subset of qualitative results, showing that our method still produces appealing images for a diffusion time of 0.4, while the vanilla approach fails. We finally note that the proposed method has comparable performance w.r.t. several other competitors, while stressing that many solutions orthogonal to ours (such as diffusion in latent space (Vahdat et al., 2021) or higher-order integration schemes (Jolicoeur-Martineau et al., 2021)) can be combined with our methodology.

Training and sampling efficiency. In Fig. 7, the horizontal line corresponds to the best performance of a fully trained baseline model for T = 1.0 (Song et al., 2021c). To achieve the same performance as the baseline, variants of our method require fewer iterations, which translates into training efficiency. For the sake of fairness, the total training cost of our method should account for the auxiliary model training, which however can be done concurrently with the training of the diffusion model. As an illustration for cifar10, using four GPUs, the baseline model requires ∼ 6.4 days of training. With our method we trained the auxiliary and diffusion models for ∼ 2.3 and 2 days respectively, leading to a total training time of max{2.3, 2} = 2.3 days, since the two run in parallel. Similar training curves can be obtained for the mnist dataset, where the training time for dpgmms is negligible.

Figure 7: Training curves (bpd vs. iterations, in thousands) of score models on cifar10 for different diffusion times T, recorded over the span of 1.3 million iterations; curves shown for ScoreSDE and for our method with T ∈ {0.2, 0.4, 0.6}.

Table 3: Experiment results on mnist. For our method, (S) is for the extension in § 3.3

| Model              | nfe (↓) | bpd (↓) (ode)   |
|--------------------|---------|-----------------|
| ScoreSDE           | 300     | 1.16            |
| ScoreSDE (T = 0.6) | 258     | 1.16            |
| Our (T = 0.6)      | 258     | 1.16 / 1.14 (S) |
| ScoreSDE (T = 0.4) | 235     | 1.25            |
| Our (T = 0.4)      | 235     | 1.17 / 1.16 (S) |
| ScoreSDE (T = 0.2) | 191     | 1.75            |
| Our (T = 0.2)      | 191     | 1.33 / 1.31 (S) |

Figure 5: Visualization of some samples

![10_image_1.png](10_image_1.png)

| Model                                            | fid (↓) (sde) | bpd (↓) (ode) | nfe (↓) (sde) | nfe (↓) (ode) |
|--------------------------------------------------|---------------|---------------|---------------|---------------|
| ScoreSDE (Song et al., 2021c)                    | 3.64          | 3.09          | 1000          | 221           |
| ScoreSDE (T = 0.6)                               | 5.74          | 3.07          | 600           | 200           |
| ScoreSDE (T = 0.4)                               | 24.91         | 3.09          | 400           | 187           |
| ScoreSDE (T = 0.2)                               | 339.72        | 3.38          | 200           | 176           |
| Our (T = 0.6)                                    | 3.72          | 3.07          | 600           | 200           |
| Our (T = 0.4)                                    | 5.44          | 3.06          | 400           | 187           |
| Our (T = 0.2)                                    | 14.38         | 3.06          | 200           | 176           |
| ARDM (Hoogeboom et al., 2022)                    | −             | 2.69          | 3072          | −             |
| VDM (Kingma et al., 2021)                        | 4.0           | 2.49          | 1000          | −             |
| D3PMs (Austin et al., 2021)                      | 7.34          | 3.43          | 1000          | −             |
| DDPM (Ho et al., 2020)                           | 3.21          | 3.75          | 1000          | −             |
| Gotta Go Fast (Jolicoeur-Martineau et al., 2021) | 2.44          | −             | 180           | −             |
| LSGM (Vahdat et al., 2021)                       | 2.10          | 2.87          | 120/138       | −             |
| ARDM-P (Hoogeboom et al., 2022)                  | −             | 2.68/2.74     | 200/50        | −             |

Panels of the sample-comparison figure below, left to right: Real data, ScoreSDE (T = 0.4), Our (T = 0.4).

Table 4: Experimental results on cifar10, including other relevant baselines and sampling efficiency enhancements from the literature.

![11_image_0.png](11_image_0.png)

Sampling speed benefits are evident from Tables 3 and 4. When considering the sde version of the methods, the number of sampling steps can decrease linearly with T, in accordance with theory (Kloeden & Platen, 1995), while retaining good bpd and fid scores. Similarly, although not in a linear fashion, the number of steps of the ode samplers can be reduced by using a smaller diffusion time T. Finally, we test the proposed methodology on the more challenging celeba 64×64 dataset. In this case, we use a variance exploding diffusion and we again consider Glow as the auxiliary model. The results, presented in Table 5, report the log-likelihood performance of different methods (qualitative results are reported in the Appendix). At the two extremes of complexity we have the original diffusion (VE, T = 1.0), with the best bpd and the highest cost, and Glow, which provides a much simpler scheme with worse performance. In the table we report the bpd and nfe metrics for smaller diffusion times, in three different configurations: naively neglecting the mismatch (ScoreSDE), using the auxiliary model (Our), and using the auxiliary model together with a diffusion model pretrained for T = 1.0 (Our with pretrain diffusion). Interestingly, we found that this last configuration obtains the best results. In summary: by accepting a small degradation in terms of bpd, we can reduce the computational cost by almost one order of magnitude. We believe it would be interesting to study more powerful auxiliary models to further improve the performance of our method on challenging datasets.

## 5 Conclusion

Diffusion-based generative models emerged as an extremely competitive approach for a wide range of application domains. In practice, however, these models are resource hungry, both for their training and for sampling new data points. In this work, we have introduced the key idea of considering diffusion times T
as a free variable which should be chosen appropriately. We have shown that the choice of T introduces a trade-off, for which an optimal "sweet spot" exists. In standard diffusion-based models, smaller values of T

| Model                                 | bpd (↓) | nfe (↓) (ode) |
|---------------------------------------|---------|---------------|
| ScoreSDE (Song et al., 2021c)         | 2.13    | 68            |
| ScoreSDE (T = 0.5)                    | 8.06    | 15            |
| ScoreSDE (T = 0.2)                    | 12.1    | 9             |
| Our (T = 0.5)                         | 2.48    | 16            |
| Our (T = 0.2)                         | 2.58    | 9             |
| Our with pretrain diffusion (T = 0.5) | 2.36    | 16            |
| Our with pretrain diffusion (T = 0.2) | 2.32    | 9             |
| Glow (Kingma & Dhariwal, 2018)        | 3.74    | 1             |

Table 5: Experimental results on celeba 64×64.
are preferable for efficiency reasons, but sufficiently large T is required to reduce approximation errors of the forward dynamics. Thus, we devised a novel method that allows for an arbitrary selection of diffusion times, including small values. Our method closes the gap between practical and ideal diffusion dynamics using an auxiliary model. Our empirical validation indicated that the performance of our approach is comparable and often superior to that of standard diffusion models, while being efficient both in training and in sampling.

Limitations. In this work, the experimental protocol has been defined to corroborate our methodological contribution, not to achieve state-of-the-art performance. A more extensive empirical evaluation of model architectures, sampling methods, and additional datasets could help practitioners select an appropriate configuration of our method. An additional limitation is the descriptive, rather than prescriptive, nature of Proposition 2: we know that T⋆ exists, but an explicit expression to identify the optimal diffusion time is out of reach.

## Broader Impact Statement

We inherit the same ethical concerns of all generative models, as they could be used to produce fake or misleading information to the public.

## References

Brian DO Anderson. Reverse-Time Diffusion Equation Models. *Stochastic Processes and their Applications*, 12(3):
313–326, 1982.

Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. In *Advances in Neural Information Processing Systems*, volume 34, pp.

17981–17993. Curran Associates, Inc., 2021.

Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. *arXiv preprint arXiv:2201.06503*, 2022.

Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural Ordinary Differential Equations.

In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018.

Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru R Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. *arXiv preprint arXiv:2209.11215*, 2022.

Tianrong Chen, Guan-Horng Liu, and Evangelos A Theodorou. Likelihood training of Schrödinger bridge using forward-backward SDEs theory. 2021a.

Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard Sinkhorn meets Gaspard Monge on a Schrödinger bridge. *SIAM Review*, 63(2):249–313, 2021b.

Valentin De Bortoli. Convergence of denoising diffusion models under the manifold hypothesis. *arXiv preprint* arXiv:2208.05314, 2022.

Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling. In *Advances in Neural Information Processing Systems*, volume 34, pp. 17695–17709. Curran Associates, Inc., 2021.

Prafulla Dhariwal and Alexander Nichol. Diffusion Models Beat GANs on Image Synthesis. In *Advances in Neural* Information Processing Systems, volume 34, pp. 8780–8794. Curran Associates, Inc., 2021.

Tim Dockhorn, Arash Vahdat, and Karsten Kreis. Score-Based Generative Modeling with Critically-Damped Langevin Diffusion. In *International Conference on Learning Representations*, 2022.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In *Advances in Neural Information Processing Systems*, volume 27.

Curran Associates, Inc., 2014.

Dilan Görür and Carl Edward Rasmussen. Dirichlet Process Gaussian Mixture Models: Choice of the Base Distribution.

Journal of Computer Science and Technology, 25(4):653–664, 2010.

Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, and David Duvenaud. Scalable Reversible Generative Models with Free-form Continuous Dynamics. In *International Conference on Learning Representations*, 2019.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In *Advances in Neural* Information Processing Systems, volume 33, pp. 6840–6851. Curran Associates, Inc., 2020.

Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans.

Autoregressive Diffusion Models. In *International Conference on Learning Representations*, 2022.

Chin-Wei Huang, Jae Hyun Lim, and Aaron C Courville. A Variational Perspective on Diffusion-Based Generative Models and Score Matching. In *Advances in Neural Information Processing Systems*, volume 34, pp. 22863–22876.

Curran Associates, Inc., 2021.

Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, and Ioannis Mitliagkas. Gotta Go Fast When Generating Data with Score-Based Models. *CoRR*, abs/2105.14080, 2021.

Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. *arXiv preprint arXiv:2206.00364*, 2022.

Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational Diffusion Models. In *Advances in Neural* Information Processing Systems, volume 34, pp. 21696–21707. Curran Associates, Inc., 2021.

Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In International Conference on Learning Representations, 2014.

Durk P Kingma and Prafulla Dhariwal. Glow: Generative Flow with Invertible 1x1 Convolutions. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.

Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved Variational Inference with Inverse Autoregressive Flow. In *Advances in Neural Information Processing Systems 29*, pp. 4743–4751. Curran Associates, Inc., 2016.

Peter E Kloeden and Eckhard Platen. *Numerical Solution of Stochastic Differential Equations*. Springer, 1995.

Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In *International Conference on Learning Representations*, 2021.

Tuomas Kynkäänniemi, Tero Karras, Miika Aittala, Timo Aila, and Jaakko Lehtinen. The Role of ImageNet Classes in Fréchet Inception Distance. *CoRR*, abs/2203.06026, 2022.

Holden Lee, Jianfeng Lu, and Yixin Tan. Convergence for score-based generative modeling with polynomial complexity. *arXiv preprint arXiv:2206.06227*, 2022a.

Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, and Tie-Yan Liu. PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior. In *International Conference on Learning Representations*, 2022b.

Alexander Quinn Nichol and Prafulla Dhariwal. Improved Denoising Diffusion Probabilistic Models. In *International Conference on Machine Learning*, volume 139, pp. 8162–8171. PMLR, 2021.

Carl Rasmussen. The Infinite Gaussian Mixture Model. In S. Solla, T. Leen, and K. Müller (eds.), *Advances in Neural Information Processing Systems*, volume 12. MIT Press, 1999.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2022.

Tim Salimans and Jonathan Ho. Progressive Distillation for Fast Sampling of Diffusion Models. In *International Conference on Learning Representations*, 2022.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In *International Conference on Learning Representations*, 2021a.

Yang Song and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019.

Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum Likelihood Training of Score-Based Diffusion Models. In *Advances in Neural Information Processing Systems*, volume 34, pp. 1415–1428. Curran Associates, Inc., 2021b.

Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling through Stochastic Differential Equations. In *International Conference on Learning Representations*, 2021c.

Simo Särkkä and Arno Solin. *Applied Stochastic Differential Equations*. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2019.

Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. In *Advances in Neural Information Processing Systems*, volume 34, pp. 24804–24816. Curran Associates, Inc., 2021.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A Note on the Evaluation of Generative Models. In Yoshua Bengio and Yann LeCun (eds.), *International Conference on Learning Representations*, 2016.

Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V Bonilla, and Maurizio Filippone. Model selection for bayesian autoencoders. In *Advances in Neural Information Processing Systems*, volume 34, pp. 19730–19742. Curran Associates, Inc., 2021.

Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based Generative Modeling in Latent Space. In *Advances in Neural Information Processing Systems*, volume 34, pp. 11287–11302. Curran Associates, Inc., 2021.

Cédric Villani. *Optimal transport: old and new*, volume 338. Springer, 2009.

Daniel Watson, Jonathan Ho, Mohammad Norouzi, and William Chan. Learning to Efficiently Sample from Diffusion Probabilistic Models. *CoRR*, abs/2106.03802, 2021.

Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. Tackling the Generative Learning Trilemma with Denoising Diffusion GANs. In *International Conference on Learning Representations*, 2022.

Huangjie Zheng, Pengcheng He, Weizhu Chen, and Mingyuan Zhou. Truncated diffusion probabilistic models. *CoRR*, abs/2202.09671, 2022.

## A Generic Definitions And Assumptions

Our work builds upon the work in Song et al. (2021b), which should be considered as a basis for the
developments hereafter. In this supplementary material we use the following shortened notation for a generic
ω > 0:
$$\mathcal{N}_{\omega}(\mathbf{x})\ \stackrel{\mathrm{def}}{=}\ \mathcal{N}(\mathbf{x};\mathbf{0},\omega\boldsymbol{I}).\tag{16}$$

It is useful to notice that ∇ log(Nω(x)) = −(1/ω)x.

For an arbitrary probability density p(x), we define the convolution (∗ operator) with Nω using the notation

$$p_{\omega}(\mathbf{x})=p(\mathbf{x})*\mathcal{N}_{\omega}(\mathbf{x}).\tag{17}$$

Equivalently, $p_\omega(\mathbf{x}) = \exp\left(\frac{\omega}{2}\Delta\right)p(\mathbf{x})$, and consequently $\frac{\mathrm{d}p_\omega(\mathbf{x})}{\mathrm{d}\omega} = \frac{1}{2}\Delta p_\omega(\mathbf{x})$, where ∆ = ∇⊤∇. Notice that by considering the Dirac delta function δ(x), we have the equality δω(x) = Nω(x).
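As a numerical sanity check (not part of the derivation), the heat-equation property of the convolved density can be verified for a hypothetical one-dimensional Gaussian mixture, for which pω is available in closed form (all mixture parameters below are arbitrary choices for illustration):

```python
import numpy as np

def norm_pdf(x, mu, var):
    # density of N(mu, var) evaluated at x
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def p_omega(x, omega):
    # convolving a Gaussian mixture with N(0, omega) adds omega to each component variance
    return 0.3 * norm_pdf(x, 1.0, 0.1 ** 2 + omega) + 0.7 * norm_pdf(x, 3.0, 0.5 ** 2 + omega)

x0, omega = 2.2, 0.8
h_w, h_x = 1e-5, 1e-3
# d p_omega / d omega, by a central finite difference in omega
lhs = (p_omega(x0, omega + h_w) - p_omega(x0, omega - h_w)) / (2 * h_w)
# (1/2) * Laplacian of p_omega, by a central finite difference in x
rhs = 0.5 * (p_omega(x0 + h_x, omega) - 2 * p_omega(x0, omega) + p_omega(x0 - h_x, omega)) / h_x ** 2
print(lhs, rhs)  # the two sides agree up to finite-difference error
```

The agreement of the two finite differences reflects the identity dpω/dω = (1/2)∆pω.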

In the following derivations, we make use of the Stam–Gross logarithmic Sobolev inequality (Villani, 2009, p. 562, Example 21.3):

$$\operatorname{KL}\left[p(\mathbf{x})\parallel\mathcal{N}_{\omega}(\mathbf{x})\right]=\int p(\mathbf{x})\log\left(\frac{p(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\mathrm{d}\mathbf{x}\leq\frac{\omega}{2}\int\left\|\nabla\left(\log\frac{p(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\right\|^{2}p(\mathbf{x})\mathrm{d}\mathbf{x}.\tag{18}$$
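When p is itself Gaussian, both sides of this inequality are available in closed form, which gives a simple numerical check (a sketch with arbitrary parameters, not tied to anything in the paper):

```python
import numpy as np

# Check KL[p || N_omega] <= (omega/2) * E_p ||grad log(p/N_omega)||^2
# for 1-D p = N(m, s2) and N_omega = N(0, omega), both sides in closed form.
m, s2, omega = 1.5, 0.4, 2.0
kl = 0.5 * (s2 / omega + m ** 2 / omega - 1.0 - np.log(s2 / omega))
# grad log p - grad log N_omega = a*x + b with a = 1/omega - 1/s2, b = m/s2;
# for x ~ N(m, s2), E[(a*x + b)^2] = a^2 * s2 + (a*m + b)^2
a, b = 1.0 / omega - 1.0 / s2, m / s2
rhs = 0.5 * omega * (a ** 2 * s2 + (a * m + b) ** 2)
print(kl, rhs)  # kl is below the log-Sobolev bound rhs
```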

## B Deriving Equation (4) From Huang Et Al. **(2021)**

We start with Eq. (25) of Huang et al. (2021) which, in our notation, reads

$$\log q(\mathbf{x},T)\geq\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\mid\mathbf{x}_{0}=\mathbf{x}\right]-\int_{0}^{T}\mathbb{E}\left[\frac{1}{2}g^{2}(t)\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})\|^{2}+\nabla^{\top}\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\,\Big|\,\mathbf{x}_{0}=\mathbf{x}\right]\mathrm{d}t.$$

The first step is to take the expected value w.r.t. x0 ∼ p*data* on both sides of the above inequality:

$$\mathbb{E}_{p_{\text{data}}}\left[\log q(\mathbf{x},T)\right]\geq\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\right]-\int_{0}^{T}\mathbb{E}\left[\frac{1}{2}g^{2}(t)\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})\|^{2}+\nabla^{\top}\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\right]\mathrm{d}t.\tag{19}$$

We focus on rewriting the term

$$\begin{aligned}
\int_{0}^{T}\mathbb{E}\left[\nabla^{\top}\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\right]\mathrm{d}t
&=\int_{0}^{T}\!\!\int\!\!\int p(\mathbf{x},t\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\nabla^{\top}\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\!\!\int\!\!\int \nabla^{\top}\left(p(\mathbf{x},t\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\!\!\int\!\!\int \nabla^{\top}\left(\log p(\mathbf{x},t\,|\,\mathbf{x}_{0})+\log p_{\text{data}}(\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)p(\mathbf{x},t\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\!\!\int\!\!\int \nabla^{\top}\left(\log p(\mathbf{x},t\,|\,\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)p(\mathbf{x},t\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\mathbb{E}\left[\nabla^{\top}\left(\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\right]\mathrm{d}t,
\end{aligned}$$

where the second equality follows from integration by parts, and the fourth from the fact that the gradient is taken w.r.t. x, so that ∇ log p*data*(x0) = 0.
Consequently, we can rewrite the r.h.s. of Equation (19) as

$$\begin{aligned}
&\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\right]-\int_{0}^{T}\mathbb{E}\left[\frac{1}{2}g^{2}(t)\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})\|^{2}-g^{2}(t)\nabla^{\top}\left(\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})+\nabla^{\top}\left(\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right)\mathbf{f}(\mathbf{x}_{t},t)\right]\mathrm{d}t\\
&\quad=\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\right]-\frac{1}{2}\int_{0}^{T}\mathbb{E}\left[g^{2}(t)\left\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t\\
&\qquad-\frac{1}{2}\int_{0}^{T}\mathbb{E}\left[-g^{2}(t)\left\|\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}+2\nabla^{\top}\left(\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right)\mathbf{f}(\mathbf{x}_{t},t)\right]\mathrm{d}t,
\end{aligned}$$

which is exactly Equation (4).
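The key step above is the integration-by-parts identity E_p[∇⊤v(x)] = −E_p[(∇ log p(x))⊤v(x)], valid for densities vanishing at infinity. A one-dimensional numerical check, with an arbitrary test function of our choosing:

```python
import numpy as np

# Check E_p[v'(x)] = -E_p[(log p)'(x) v(x)] for p = N(0,1) and v(x) = sin(x).
# Both sides are computed by quadrature on a wide grid; for X ~ N(0,1),
# E[cos X] = E[X sin X] = exp(-1/2).
x = np.linspace(-12.0, 12.0, 48001)
dx = x[1] - x[0]
p = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
lhs = np.sum(np.cos(x) * p) * dx          # E[v'(x)]
rhs = -np.sum((-x) * np.sin(x) * p) * dx  # -E[(log p)'(x) v(x)], with (log p)'(x) = -x
print(lhs, rhs)  # both ≈ exp(-0.5) ≈ 0.6065
```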

## C Proof Of **Eq. (5)**

We prove the following result:

$$I(\mathbf{s}_{\boldsymbol{\theta}},T)\geq\underbrace{I(\nabla\log p,T)}_{\triangleq K(T)}=\frac{1}{2}\int\limits_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\left\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t.$$

Proof. We prove that for generic positive λ(·), and T2 > T1 the following holds:

$$\int\limits_{t=T_{1}}^{T_{2}}\lambda(t)\mathbb{E}_{\sim(\cdot)}\left[||\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})||^{2}\right]\mathrm{d}t\geq\int\limits_{t=T_{1}}^{T_{2}}\lambda(t)\mathbb{E}_{\sim(\cdot)}\left[||\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})||^{2}\right]\mathrm{d}t.\tag{20}$$

First, we compute the functional derivative (w.r.t. s)

$$\frac{\delta}{\delta\mathbf{s}}\int\limits_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t=2\int\limits_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right]\mathrm{d}t=2\int\limits_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t)\right]\mathrm{d}t,$$

where we used

$$\begin{array}{l}{{\operatorname{E}_{\sim(1)}\left[\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right]=\int\nabla\log p(\mathbf{x},t|\mathbf{x}_{0})p(\mathbf{x},t|\mathbf{x}_{0})p_{\mathrm{data}}\left(\mathbf{x}_{0}\right)\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{x}_{0}=}}\\ {{\quad\quad\int\nabla p(\mathbf{x},t|\mathbf{x}_{0})p_{\mathrm{data}}\left(\mathbf{x}_{0}\right)\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{x}_{0}=\int\nabla p(\mathbf{x},t)\mathrm{d}\mathbf{x}=\operatorname{E}_{\sim(1)}\left[\nabla\log p(\mathbf{x}_{t},t)\right].}}\end{array}$$

Consequently, we can obtain the optimal s through

$$\frac{\delta}{\delta\mathbf{s}}\int\limits_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t=0\;\rightarrow\;\mathbf{s}(\mathbf{x},t)=\nabla\log p(\mathbf{x},t).\tag{21}$$

Substitution of this result into Eq. (20) directly proves the desired inequality.

As a byproduct, we prove the correctness of Eq. (5), since it is a particular case of Eq. (20) with λ = g², T₁ = 0, T₂ = T. Since K(T) is a minimum, the decomposition I(sθ, T) = K(T) + G(sθ, T) implies K(T) + G(sθ, T) ≥ K(T) → G(sθ, T) ≥ 0.
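The inequality in Eq. (20) can also be probed by Monte Carlo: at any fixed t, the denoising score-matching loss of an arbitrary (wrong) score function is lower-bounded by that of the true marginal score. A sketch for a hypothetical two-point data distribution (all parameters are illustrative choices):

```python
import numpy as np

# Monte-Carlo check of the inequality in Eq. (20) at a fixed time t:
# the denoising loss of any score model is at least that of the true score.
# Hypothetical data: p_data = 0.5*delta(-1) + 0.5*delta(+1); x_t = x0 + sqrt(t)*eps.
rng = np.random.default_rng(0)
t, n = 0.4, 200000
x0 = rng.choice([-1.0, 1.0], size=n)
xt = x0 + np.sqrt(t) * rng.standard_normal(n)
cond_score = -(xt - x0) / t                 # grad log p(x_t, t | x_0)

def true_score(x):
    # grad log p(x, t) for the two-point mixture, via the posterior mean of x0
    wa = np.exp(-(x + 1.0) ** 2 / (2.0 * t))
    wb = np.exp(-(x - 1.0) ** 2 / (2.0 * t))
    return ((wb - wa) / (wa + wb) - x) / t

loss_true = np.mean((true_score(xt) - cond_score) ** 2)
loss_wrong = np.mean((-xt / t - cond_score) ** 2)   # a deliberately wrong score model
print(loss_true, loss_wrong)  # loss_wrong is strictly larger
```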

## D Proof Of **Proposition 1**

Proposition 1. *Given the stochastic dynamics defined in Eq. (1), it holds that*

$$\mathbb{E}_{\sim(1)}\log p(\mathbf{x}_{T},T)-K(T)+R(T)=\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\log p_{\text{data}}(\mathbf{x}).\tag{8}$$
Proof. We consider the pair of equations

$$\begin{array}{l}{\rm d}\mathbf{x}_{t}=\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\mathbf{\nabla}\log q(\mathbf{x}_{t},t)\right]{\rm d}t+g(t^{\prime}){\rm d}\mathbf{w}(t),\\ {\rm d}\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t){\rm d}t+g(t){\rm d}\mathbf{w}(t),\end{array}\tag{22}$$

where t′ = T − t, q is the density of the backward process and p is the density of the forward process. These equations can be interpreted as a particular case of the following pair of sdes (corresponding to Eqs. (4) and (17) of Huang et al. (2021)):

$$\begin{aligned}
\mathrm{d}\mathbf{x}_{t}&=\underbrace{\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\nabla\log q(\mathbf{x}_{t},t)\right]}_{\boldsymbol{\mu}(\mathbf{x}_{t},t)}\mathrm{d}t+\underbrace{g(t^{\prime})}_{\sigma(t)}\mathrm{d}\mathbf{w}(t),\\
\mathrm{d}\mathbf{x}_{t}&=\Big[\underbrace{\mathbf{f}(\mathbf{x}_{t},t)-g^{2}(t)\nabla\log q(\mathbf{x}_{t},t^{\prime})}_{-\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})}+\underbrace{g(t)}_{\sigma(t^{\prime})}\mathbf{a}(\mathbf{x}_{t},t)\Big]\mathrm{d}t+g(t)\,\mathrm{d}\mathbf{w}(t),
\end{aligned}\tag{23}$$

where Eq. (22) is recovered by considering a(x, t) = σ(t′)∇ log q(x, t′) = g(t)∇ log q(x, t′). Eq. (23) is associated to an elbo (Huang et al. (2021), Thm. 3) that is attained with equality if and only if a(x, t) = σ(t′)∇ log q(x, t′). Consequently, we can write the following equality associated to the backward process of Eq. (22):

$$\log q(\mathbf{x},T)=\mathbb{E}\left[-\frac{1}{2}\int\limits_{0}^{T}\|\mathbf{a}(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})\,\mathrm{d}t+\log q(\mathbf{x}_{T},0)\;\Big|\;\mathbf{x}_{0}=\mathbf{x}\right],\tag{24}$$

where the expected value is taken w.r.t. the dynamics of the associated forward process.

By careful inspection of this pair of equations we notice that in the process xt the drift includes the ∇ log q(xt, t) term, while in our main Eq. (1) we have ∇ log p(xt, t′). In general the two vector fields do not agree. However, if we select as starting distribution of the generating process p(x, T), i.e. q(x, 0) = p(x, T), then ∀t, q(x, t) = p(x, t′).

Given initial conditions, the time evolution of the density p is fully described by the Fokker-Planck equation

$$\frac{\mathrm{d}}{\mathrm{d}t}p(\mathbf{x},t)=-\nabla^{\top}\left(\mathbf{f}(\mathbf{x},t)p(\mathbf{x},t)\right)+\frac{g^{2}(t)}{2}\Delta(p(\mathbf{x},t)),\quad p(\mathbf{x},0)=p_{\text{data}}\left(\mathbf{x}\right).\tag{25}$$
Similarly, for the density q,
$$\frac{d}{dt}q(\mathbf{x},t)=-\mathbf{\nabla}^{\top}\left(-\mathbf{f}(\mathbf{x},t^{\prime})q(\mathbf{x},t)+g^{2}(t^{\prime})\mathbf{\nabla}\log q(\mathbf{x},t)q(\mathbf{x},t)\right)+\frac{g^{2}(t^{\prime})}{2}\Delta(q(\mathbf{x},t)),\quad q(\mathbf{x},0)=p(\mathbf{x},T).\tag{26}$$

By Taylor expansion we have

$$q(\mathbf{x},\delta t)=q(\mathbf{x},0)+\delta t\left(\frac{d}{\mathrm{d}t}q(\mathbf{x},t)\right)_{t=0}+\mathcal{O}(\delta t^{2})=$$ $$q(\mathbf{x},0)+\delta t\left(-\nabla^{\top}\left(-\mathbf{f}(\mathbf{x},T)q(\mathbf{x},0)+g^{2}(T)\nabla\log q(\mathbf{x},0)q(\mathbf{x},0)\right)+\frac{g^{2}(T)}{2}\Delta(q(\mathbf{x},0))\right)+\mathcal{O}(\delta t^{2})=$$ $$q(\mathbf{x},0)+\delta t\left(\nabla^{\top}\left(\mathbf{f}(\mathbf{x},T)q(\mathbf{x},0)\right)-\frac{g^{2}(T)}{2}\Delta(q(\mathbf{x},0))\right)+\mathcal{O}(\delta t^{2}),$$

and

$$\begin{aligned}
p(\mathbf{x},T-\delta t)&=p(\mathbf{x},T)-\delta t\left(\frac{\mathrm{d}}{\mathrm{d}t}p(\mathbf{x},t)\right)_{t=T}+\mathcal{O}(\delta t^{2})\\
&=p(\mathbf{x},T)-\delta t\left(-\nabla^{\top}\left(\mathbf{f}(\mathbf{x},T)p(\mathbf{x},T)\right)+\frac{g^{2}(T)}{2}\Delta(p(\mathbf{x},T))\right)+\mathcal{O}(\delta t^{2})\\
&=p(\mathbf{x},T)+\delta t\left(\nabla^{\top}\left(\mathbf{f}(\mathbf{x},T)p(\mathbf{x},T)\right)-\frac{g^{2}(T)}{2}\Delta(p(\mathbf{x},T))\right)+\mathcal{O}(\delta t^{2}).
\end{aligned}$$

Since q(x, 0) = p(x, T), we finally have q(x, δt) − p(x, T − δt) = O(δt²). This holds for arbitrarily small δt. By induction, with similar reasoning, we claim that q(x, t) = p(x, t′).

This last result allows us to rewrite Eq. (22) as the pair of sdes

$$\begin{array}{l}{\rm d}\mathbf{x}_{t}=\left[\,-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\mathbf{\nabla}\log p(\mathbf{x}_{t},t^{\prime})\right]{\rm d}t+g(t^{\prime}){\rm d}\mathbf{w}(t),\\ {\rm d}\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t){\rm d}t+g(t){\rm d}\mathbf{w}(t).\end{array}\tag{27}$$

Moreover, since q(x, T) = p(x, 0) = p*data*(x), together with the result Eq. (24), we have the following equality:

$$\log p_{\text{data}}(\mathbf{x})=\mathbb{E}\left[-\frac{1}{2}\int\limits_{0}^{T}\|\mathbf{a}(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})\,\mathrm{d}t+\log p(\mathbf{x}_{T},T)\;\Big|\;\mathbf{x}_{0}=\mathbf{x}\right].\tag{28}$$

Consequently

$$\begin{aligned}
\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\left[\log p_{\text{data}}(\mathbf{x})\right]&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}\|\mathbf{a}(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})\,\mathrm{d}t\right]\\
&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\left(-\mathbf{f}(\mathbf{x}_{t},t)+g^{2}(t)\nabla\log p(\mathbf{x}_{t},t)\right)\mathrm{d}t\right]\\
&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t)\|^{2}-2g^{2}(t)\nabla^{\top}\log p(\mathbf{x}_{t},t)\,\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\,\mathrm{d}t\right]\\
&\qquad+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}2\mathbf{f}^{\top}(\mathbf{x}_{t},t)\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\,\mathrm{d}t\right]\\
&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\,\mathrm{d}t\right]\\
&\qquad+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}-g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}+2\mathbf{f}^{\top}(\mathbf{x}_{t},t)\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\,\mathrm{d}t\right],
\end{aligned}$$

where the third equality uses the same integration-by-parts argument as in Appendix B.

Remembering the definitions

$$\begin{aligned}
K(T)&=\frac{1}{2}\int\limits_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t,\\
R(T)&=\frac{1}{2}\int\limits_{t=0}^{T}\mathbb{E}_{\sim(1)}\left[g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}-2\mathbf{f}^{\top}(\mathbf{x}_{t},t)\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right]\mathrm{d}t,
\end{aligned}$$

we finally conclude the proof:

$$\mathbb{E}_{\sim(1)}[\log p(\mathbf{x}_{T},T)]-K(T)+R(T)=\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\left[\log p_{\text{data}}(\mathbf{x})\right].\tag{29}$$

## E Proof Of **Lemma 1**

In this section we prove the validity of Lemma 1 for the case of Variance Preserving (VP) and Variance Exploding (VE) sdes. Remember, as reported also in main Table 1, that the above mentioned classes correspond to α(t) = −½β(t), g(t) = √β(t), with β(t) = β₀ + (β₁ − β₀)t, and to α(t) = 0, g(t) = √(dσ²(t)/dt), with σ(t) = σ_min(σ_max/σ_min)ᵗ, respectively.

Lemma 1. *For the classes of sdes considered (Table 1), the discrepancy between p(x, T) and p*noise*(x) can be bounded as follows.*

*For Variance Preserving sdes, it holds that:* kl [p(x, T) ∥ p*noise*(x)] ≤ C₁ exp(−∫₀ᵀ β(t)dt).

*For Variance Exploding sdes, it holds that:* kl [p(x, T) ∥ p*noise*(x)] ≤ C₂ (σ²(T) − σ²(0))⁻¹.

## E.1 The Variance Preserving (Vp) Convergence

We associate this class of sdes to the Fokker Planck operator

$$\mathcal{L}^{\dagger}(t)=\frac{1}{2}\beta(t)\nabla^{\top}\left(\mathbf{x}\,\cdot\,+\nabla(\cdot)\right),\tag{30}$$

and consequently dp(x, t)/dt = L†(t)p(x, t). Simple calculations show that lim_{T→∞} p(x, T) = N₁(x).

We bound the time derivative of the kl term as

$$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{KL}\left[p(\mathbf{x},t)\parallel\mathcal{N}_{1}(\mathbf{x})\right]&=\int\frac{\mathrm{d}p(\mathbf{x},t)}{\mathrm{d}t}\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\,\mathrm{d}\mathbf{x}+\int\frac{p(\mathbf{x},t)}{p(\mathbf{x},t)}\frac{\mathrm{d}p(\mathbf{x},t)}{\mathrm{d}t}\,\mathrm{d}\mathbf{x}\\
&=\frac{1}{2}\beta(t)\int\nabla^{\top}\left(-\nabla\log(\mathcal{N}_{1}(\mathbf{x}))\,p(\mathbf{x},t)+\nabla p(\mathbf{x},t)\right)\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\,\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\beta(t)\int p(\mathbf{x},t)\left(-\nabla\log(\mathcal{N}_{1}(\mathbf{x}))+\nabla\log p(\mathbf{x},t)\right)^{\top}\nabla\left(\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\right)\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\beta(t)\int p(\mathbf{x},t)\left\|\nabla\left(\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\right)\right\|^{2}\mathrm{d}\mathbf{x}\\
&\leq-\beta(t)\,\mathrm{KL}\left[p(\mathbf{x},t)\parallel\mathcal{N}_{1}(\mathbf{x})\right],\end{aligned}\tag{31}$$

where the second term in the first line vanishes (the total mass is constant in time), and the last step is the logarithmic Sobolev inequality of Appendix A with ω = 1.
We then apply Grönwall's inequality (Villani, 2009) to d/dt kl [p(x, t) ∥ N₁(x)] ≤ −β(t) kl [p(x, t) ∥ N₁(x)] to claim

$$\operatorname{KL}\left[p(\mathbf{x},T)\parallel\mathcal{N}_{1}(\mathbf{x})\right]\leq\operatorname{KL}\left[p(\mathbf{x},0)\parallel\mathcal{N}_{1}(\mathbf{x})\right]\exp\left(-\int_{0}^{T}\beta(s)\,\mathrm{d}s\right).\tag{32}$$

To claim validity of the result, we need to assume that p(x, t) has finite first and second order derivatives, and that kl [p(x, 0) ∥ N₁(x)] < ∞.
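For Gaussian data, the VP marginals and both sides of the bound are available in closed form, which allows a direct check; the sketch below assumes a linear schedule β(t) = β₀ + (β₁ − β₀)t with common default values, and the data parameters are arbitrary:

```python
import numpy as np

# For p_data = N(m0, s0^2), the VP marginal at time t is
# N(m0*exp(-B(t)/2), 1 + (s0^2 - 1)*exp(-B(t))), with B(t) = int_0^t beta(s) ds.
# Check KL[p(x,T) || N(0,1)] <= KL[p(x,0) || N(0,1)] * exp(-B(T)).
beta0, beta1 = 0.1, 20.0
m0, s0sq = 2.0, 0.25

def kl_to_std(m, s2):
    # KL[N(m, s2) || N(0, 1)] in closed form
    return 0.5 * (s2 + m ** 2 - 1.0 - np.log(s2))

def B(t):
    # integral of the linear schedule beta(s) = beta0 + (beta1 - beta0)*s
    return beta0 * t + 0.5 * (beta1 - beta0) * t ** 2

for T in [0.25, 0.5, 1.0]:
    m, s2 = m0 * np.exp(-B(T) / 2), 1.0 + (s0sq - 1.0) * np.exp(-B(T))
    assert kl_to_std(m, s2) <= kl_to_std(m0, s0sq) * np.exp(-B(T)) + 1e-12
print("VP bound holds for all tested T")
```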

## E.2 The Variance Exploding (Ve) Convergence

The first step is to bound the derivative w.r.t. ω of the divergence kl [pω(x) ∥ Nω(x)], i.e.

$$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}\omega}\,\mathrm{KL}\left[p_{\omega}(\mathbf{x})\parallel\mathcal{N}_{\omega}(\mathbf{x})\right]&=\int\frac{\mathrm{d}p_{\omega}(\mathbf{x})}{\mathrm{d}\omega}\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\,\mathrm{d}\mathbf{x}+\int\frac{p_{\omega}(\mathbf{x})}{p_{\omega}(\mathbf{x})}\frac{\mathrm{d}p_{\omega}(\mathbf{x})}{\mathrm{d}\omega}\,\mathrm{d}\mathbf{x}-\int\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\frac{\mathrm{d}\mathcal{N}_{\omega}(\mathbf{x})}{\mathrm{d}\omega}\,\mathrm{d}\mathbf{x}\\
&=\frac{1}{2}\int\left(\Delta p_{\omega}(\mathbf{x})\right)\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}-\left(\Delta\mathcal{N}_{\omega}(\mathbf{x})\right)\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\,\mathrm{d}\mathbf{x}\\
&=\frac{1}{2}\int\nabla^{\top}\left(p_{\omega}(\mathbf{x})\nabla\log p_{\omega}(\mathbf{x})\right)\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}-\nabla^{\top}\left(\mathcal{N}_{\omega}(\mathbf{x})\nabla\log\mathcal{N}_{\omega}(\mathbf{x})\right)\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\,\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\int\left(p_{\omega}(\mathbf{x})\nabla\log p_{\omega}(\mathbf{x})\right)^{\top}\nabla\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)-\left(\mathcal{N}_{\omega}(\mathbf{x})\nabla\log\mathcal{N}_{\omega}(\mathbf{x})\right)^{\top}\nabla\left(\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\int\left(p_{\omega}(\mathbf{x})\nabla\log p_{\omega}(\mathbf{x})\right)^{\top}\nabla\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)-\left(p_{\omega}(\mathbf{x})\nabla\log\mathcal{N}_{\omega}(\mathbf{x})\right)^{\top}\nabla\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\int p_{\omega}(\mathbf{x})\left\|\nabla\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\right\|^{2}\mathrm{d}\mathbf{x}\;\leq\;-\frac{1}{\omega}\,\mathrm{KL}\left[p_{\omega}(\mathbf{x})\parallel\mathcal{N}_{\omega}(\mathbf{x})\right],\end{aligned}\tag{33}$$

where the last step is again the logarithmic Sobolev inequality.
Consequently, using again Grönwall's inequality, for all ω₁ > ω₀ > 0 we have

$$\mathrm{KL}\left[p_{\omega_{1}}(\mathbf{x})\parallel\mathcal{N}_{\omega_{1}}(\mathbf{x})\right]\leq\mathrm{KL}\left[p_{\omega_{0}}(\mathbf{x})\parallel\mathcal{N}_{\omega_{0}}(\mathbf{x})\right]\exp\left(-(\log\omega_{1}-\log\omega_{0})\right)=\mathrm{KL}\left[p_{\omega_{0}}(\mathbf{x})\parallel\mathcal{N}_{\omega_{0}}(\mathbf{x})\right]\frac{\omega_{0}}{\omega_{1}}.$$
This can be directly applied to obtain the bound for the VE sde. Consider ω₁ = σ²(T) − σ²(0) and ω₀ = σ²(τ) − σ²(0) for an arbitrarily small τ < T. Then, since for the considered class of variance exploding sdes we have p(x, T) = p_{σ²(T)−σ²(0)}(x),
$${\rm KL}\left[p(\mathbf{x},T)\parallel{\cal N}_{\sigma^{2}(T)-\sigma^{2}(0)}(\mathbf{x})\right]\leq C\frac{1}{\sigma^{2}(T)-\sigma^{2}(0)}\tag{34}$$

where C = kl [p(x, τ) ∥ N_{σ²(τ)−σ²(0)}(x)] (σ²(τ) − σ²(0)).

Similarly to the previous case, we assume that p(x, t) has finite first and second order derivatives, and that C < ∞.
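Again for Gaussian data, the VE bound KL[p_{ω₁} ∥ N_{ω₁}] ≤ (ω₀/ω₁) KL[p_{ω₀} ∥ N_{ω₀}] can be checked entirely in closed form (a sketch with arbitrary parameters):

```python
import numpy as np

# For p_data = N(m0, s0^2), the VE marginal with accumulated variance w is
# p_w = N(m0, s0^2 + w), and the reference is N_w = N(0, w).
# Check KL[p_w1 || N_w1] <= (w0/w1) * KL[p_w0 || N_w0] for w1 > w0 > 0.
m0, s0sq = 1.0, 0.5

def kl(w):
    # KL[N(m0, s0^2 + w) || N(0, w)] in closed form
    ratio = (s0sq + w) / w
    return 0.5 * (ratio + m0 ** 2 / w - 1.0 - np.log(ratio))

w0 = 0.1
for w1 in [0.5, 2.0, 50.0]:
    assert kl(w1) <= (w0 / w1) * kl(w0) + 1e-12
print("VE bound holds for all tested (w0, w1)")
```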

## F Proof Of **Lemma 2**

Lemma 2. *The optimal score gap term* G(ŝθ, T) *is a non-decreasing function in* T. *That is, given* T₂ > T₁, *and* θ₁ = arg minθ I(sθ, T₁), θ₂ = arg minθ I(sθ, T₂), *then* G(s_{θ₂}, T₂) ≥ G(s_{θ₁}, T₁).

Proof. For θ₁ defined as in the lemma, I(s_{θ₁}, T₁) = K(T₁) + G(s_{θ₁}, T₁). Next, select T₂ > T₁. Then, for a generic θ, including θ₂,

$$I(\mathbf{s}_{\boldsymbol{\theta}},T_{2})=\underbrace{\frac{1}{2}\int\limits_{t=0}^{T_{1}}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t}_{=I(\mathbf{s}_{\boldsymbol{\theta}},T_{1})\,\geq\,K(T_{1})+\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_{1}},T_{1})\,=\,I(\mathbf{s}_{\boldsymbol{\theta}_{1}},T_{1})}+\underbrace{\frac{1}{2}\int\limits_{t=T_{1}}^{T_{2}}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t}_{\geq\,\frac{1}{2}\int_{t=T_{1}}^{T_{2}}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t\,=\,K(T_{2})-K(T_{1})}\;\geq\;\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_{1}},T_{1})+K(T_{2}),$$

from which $\mathcal{G}(\mathbf{s_{\theta}},T_{2})=I(\mathbf{s_{\theta}},T_{2})-K(T_{2})\geq\mathcal{G}(\mathbf{s_{\theta_{1}}},T_{1})$.

## G Proof Of **Proposition 2**

Proposition 2. *Consider the* elbo *decomposition in Eq. (9), studied as a function of the diffusion time* T. *There exists at least one optimal diffusion time* T⋆ *in the interval* [0, ∞] *that maximizes the* elbo, *that is* T⋆ = arg maxT Lelbo(ŝθ, T). *In particular,* T⋆ ∈ ℝ⁺, *and thus not necessarily* T⋆ = ∞. *Additional assumptions on the gap term* G(·) *can be used to guarantee strict finiteness of* T⋆.

Proof. It is trivial to verify that, since the optimal gap term G(ŝθ, T) is a non-decreasing function in T (Lemma 2), we have ∂G/∂T ≥ 0. Then, we study the sign of the kl derivative, which is always negative, as shown by Eq. (31) and Eq. (33) (where we also notice that d/dt = (dω/dt)(d/dω) keeps the sign). Moreover, we know that lim_{T→∞} ∂kl/∂T = 0. Then, the function ∂Lelbo/∂T = ∂G/∂T + ∂kl/∂T has at least one zero in [0, ∞].
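The mechanism behind Proposition 2 can be illustrated with purely hypothetical curves (not fitted to any model): a non-decreasing gap G(T) and an exponentially decaying kl term produce an interior maximiser of their negated sum.

```python
import numpy as np

# Hypothetical (illustrative only) gap and KL curves: G non-decreasing,
# KL decaying to zero, so -G(T) - KL(T) peaks at a finite T*.
T = np.linspace(1e-3, 10.0, 100001)
G = 0.05 * T                  # assumed linear growth of the score gap
KL = np.exp(-2.0 * T)         # assumed VP-like exponential KL decay
T_star = T[np.argmax(-G - KL)]
print(T_star)  # ≈ log(40)/2 ≈ 1.84, an interior maximiser
```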

## H Optimization Of T ⋆

It is possible to treat the diffusion time T as a hyper-parameter and perform gradient-based optimization jointly with the score model parameters θ. Indeed, simple calculations show that

$$\begin{aligned}
\frac{\partial\mathcal{L}_{\text{elbo}}(\mathbf{s}_{\boldsymbol{\theta}},T)}{\partial T}&=\mathbb{E}\left[\left(\mathbf{f}^{\top}(\mathbf{x}_{T},T)\nabla+\frac{g^{2}(T)}{2}\Delta\right)\log p_{\text{noise}}(\mathbf{x}_{T})\right]\\
&\quad-\frac{1}{2}\mathbb{E}\left[g^{2}(T)\left\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{T},T)-\nabla\log p(\mathbf{x}_{T},T\,|\,\mathbf{x}_{0})\right\|^{2}\right]\\
&\quad+\frac{1}{2}\mathbb{E}\left[g^{2}(T)\|\nabla\log p(\mathbf{x}_{T},T\,|\,\mathbf{x}_{0})\|^{2}-2\mathbf{f}^{\top}(\mathbf{x}_{T},T)\nabla\log p(\mathbf{x}_{T},T\,|\,\mathbf{x}_{0})\right].
\end{aligned}\tag{35}$$

## I Proof Of **Proposition 4**

Proposition 4. *Given the existence of* T⋆, *defined as the diffusion time such that the* elbo *is maximized (Proposition 2), there exists at least one diffusion time* τ ≤ T⋆ *such that* L^ϕ_elbo(ŝθ, τ) ≥ Lelbo(ŝθ, T⋆).

Proof. Since ∀T we have L^ϕ_elbo(sθ, T) ≥ Lelbo(sθ, T), there exists a countable set of intervals I contained in [0, T⋆], of variable supports, where L^ϕ_elbo is greater than Lelbo(sθ, T). Assuming continuity of L^ϕ_elbo, in these intervals it is possible to find at least one τ ≤ T⋆ where L^ϕ_elbo(ŝθ, τ) ≥ Lelbo(ŝθ, T⋆).

We notice that the degenerate case I = {T⋆} is obtained only when ∀T ≤ T⋆, kl [p(x, T) ∥ ν_{ϕ⋆}(x)] = kl [p(x, T) ∥ p*noise*(x)]. We expect this condition to never occur in practice.

## J Invariance To Noise Schedule

We here discuss the claims made in § 2.4 about the invariance of the elbo to the particular choice of noise schedule. First, in Appendix J.1, we explain how different sdes corresponding to different noise schedules can be translated one into the other. We introduce the concept of signal-to-noise ratio (snr), and clarify the unified score parametrization used in practice in the literature (Karras et al., 2022; Kingma et al., 2021). Then, in Appendix J.2, we prove that the single elements of the elbo depend only on the value of the snr at the final diffusion time T, as claimed in the main paper.

## J.1 Preliminaries

We consider as reference sde a pure Wiener process diffusion,

$$\mathrm{d}\mathbf{x}_{t}=\mathrm{d}\mathbf{w}_{t}\quad\mathrm{with}\quad\mathbf{x}_{0}\sim p_{\text{data}}.\tag{38}$$
It is easily seen that the solution of the random process admits the representation

$$\mathbf{x}_{t}=\mathbf{x}_{0}+\sqrt{t}\,\boldsymbol{\epsilon},\quad\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\boldsymbol{I}).\tag{39}$$
In this case the time-varying probability density, which we indicate with ψ, satisfies

$$\psi(\mathbf{x},t)=\exp\left(\frac{t}{2}\Delta\right)p_{\text{data}}(\mathbf{x}),\quad\psi(\mathbf{x},t\,|\,\mathbf{x}_{0})=\exp\left(\frac{t}{2}\Delta\right)\delta(\mathbf{x}-\mathbf{x}_{0}).\tag{40}$$

Simple calculations show that

$$\nabla\log\psi(\mathbf{x},\sigma^{2})=\frac{\mathbb{E}[\mathbf{x}_{0}\,|\,\mathbf{x}_{0}+\sigma\boldsymbol{\epsilon}=\mathbf{x}]-\mathbf{x}}{\sigma^{2}}\doteq\frac{\boldsymbol{d}(\mathbf{x};\sigma^{2})-\mathbf{x}}{\sigma^{2}},\tag{41}$$

where again x0 ∼ p*data* and the function d can be interpreted as a *denoiser*.
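This Tweedie-type denoiser identity can be checked numerically for a hypothetical two-point data distribution, where the posterior mean E[x₀ | x₀ + σε = x] is available in closed form:

```python
import numpy as np

# Check grad log psi(x, s2) = (d(x; s2) - x) / s2 for the hypothetical prior
# p_data = 0.5*delta(a) + 0.5*delta(b).
a, b, s2, x = -1.0, 2.0, 0.3, 0.4

def norm_pdf(y, mu, var):
    return np.exp(-(y - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def psi(y):
    # marginal density of x0 + sqrt(s2)*eps
    return 0.5 * norm_pdf(y, a, s2) + 0.5 * norm_pdf(y, b, s2)

wa, wb = norm_pdf(x, a, s2), norm_pdf(x, b, s2)
denoiser = (a * wa + b * wb) / (wa + wb)        # posterior mean E[x0 | x]
score_fd = (np.log(psi(x + 1e-6)) - np.log(psi(x - 1e-6))) / 2e-6
print(score_fd, (denoiser - x) / s2)  # the two values agree
```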

Our goal is to show the relationship between equations like Equation (1) and Equation (38). In particular, we focus on *affine* sdes, as classically done with diffusion models. The class of considered affine sdes is the following:

$$\mathrm{d}\mathbf{x}_{t}=\alpha(t)\mathbf{x}_{t}\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}_{t}\quad\mathrm{with}\quad\mathbf{x}_{0}\sim p_{\text{data}}.\tag{42}$$

In this simple linear case the process admits the representation

$$\mathbf{x}_{t}=k(t)\mathbf{x}_{0}+\sigma(t)\boldsymbol{\epsilon},\quad\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\boldsymbol{I}),\tag{43}$$
where $k(t)=\exp\left(\int_0^t \alpha(s)\,\mathrm{d}s\right)$ and $\sigma^2(t)=k^2(t)\int_0^t \frac{g^2(s)}{k^2(s)}\,\mathrm{d}s$. We can rewrite Equation (43) as $\mathbf{x}_t = k(t)\left(\mathbf{x}_0 + \tilde{\sigma}(t)\boldsymbol{\epsilon}\right)$, and define the snr as $\tilde{\sigma}(t) = \frac{\sigma(t)}{k(t)}$. The density associated to Equation (42) can be expressed as a function of ψ as follows:

$$p(\mathbf{x},t)=k(t)^{-D}\left[\exp\left(\frac{\tilde{\sigma}^{2}(t)}{2}\Delta\right)p_{\text{data}}(\mathbf{x})\right]_{\frac{\mathbf{x}}{k(t)}}=k(t)^{-D}\,\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\right).\tag{44}$$

The score function associated to Equation (43) consequently has the expression

$$\nabla_{\mathbf{x}}\log p(\mathbf{x},t)=\nabla_{\mathbf{x}}\log\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\right)=\frac{1}{k(t)}\nabla_{\frac{\mathbf{x}}{k(t)}}\log\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\right)=\frac{k(t)\,\boldsymbol{d}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)-\mathbf{x}}{\sigma^{2}(t)}.\tag{45}$$
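For the VP sde (α = −β/2, g² = β), the relations k(t) = exp(∫α) and σ²(t) = k²(t)∫g²/k² reduce to the familiar closed forms k(t) = exp(−B(t)/2) and σ²(t) = 1 − exp(−B(t)), with B(t) = ∫₀ᵗ β(s)ds. A small quadrature sketch (linear schedule, common default values) confirming σ² = k²σ̃²:

```python
import numpy as np

# Verify sigma^2(t) = k^2(t) * sigma_tilde^2(t) against the VP closed form
# 1 - exp(-B(t)), using trapezoidal quadrature for all integrals.
beta0, beta1 = 0.1, 20.0
ts = np.linspace(0.0, 1.0, 200001)
dt = ts[1] - ts[0]
beta = beta0 + (beta1 - beta0) * ts

def cumtrapz(f):
    # cumulative trapezoidal integral of f over the grid ts
    return np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * dt)])

B = cumtrapz(beta)                        # B(t) = int_0^t beta(s) ds
k = np.exp(-B / 2)                        # k(t) = exp(int alpha), alpha = -beta/2
sig_tilde_sq = cumtrapz(beta / k ** 2)    # int_0^t g^2(s)/k^2(s) ds
sigma_sq = k ** 2 * sig_tilde_sq          # sigma^2(t) = k^2(t) * sigma_tilde^2(t)
print(sigma_sq[-1], 1.0 - np.exp(-B[-1]))  # numerically equal
```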

## J.2 Different Noise Schedules

Consider a diffusion of the form Equation (38) and a score network s̄θ that approximates the true score. Inspecting Equation (45), we parametrize the score network associated to a generic diffusion Equation (42) as a function of the score of the reference diffusion. The score parametrization considered in Kingma et al. (2021) can be generalized to arbitrary sdes (Karras et al., 2022). In particular, as suggested by Equation (41), we select

$$\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x},t)=\frac{k(t)\,\boldsymbol{d}_{\boldsymbol{\theta}}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)-\mathbf{x}}{\sigma^{2}(t)}.\tag{46}$$

We proceed by showing that the different components of the elbo depend on the diffusion time T only through σ̃(T), but not on k(t), σ(t) singularly for any time t < T.

Theorem 1. *Consider a generic diffusion Equation (42) and parametrize the score network as* s̄θ(x/k(t), σ̃(t)). *Then, the gap term* G(s̄θ, T) *associated to Equation (42) for a diffusion time* T *depends only on* σ̃(T), *but not on* k(t), σ(t) *singularly for any time* t < T.

Proof. We first rearrange the gap term

$$\begin{aligned}
2\mathcal{G}(\bar{\mathbf{s}}_{\boldsymbol{\theta}},T)&=\int\limits_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(42)}\left[\left\|\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t-\int\limits_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(42)}\left[\left\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t\\
&=\int\limits_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(42)}\left[\left\|\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t)\right\|^{2}\right]\mathrm{d}t,
\end{aligned}$$

as shown in [ ]². Then

$$\begin{aligned}
&\int\limits_{t=0}^{T}g^{2}(t)\int\!\!\int\|\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x},t)-\nabla\log p(\mathbf{x},t)\|^{2}\,p(\mathbf{x},t\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=\int\limits_{t=0}^{T}g^{2}(t)\int\!\!\int\left\|\frac{k(t)\boldsymbol{d}_{\boldsymbol{\theta}}(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t))-\mathbf{x}}{\sigma^{2}(t)}-\frac{k(t)\boldsymbol{d}(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t))-\mathbf{x}}{\sigma^{2}(t)}\right\|^{2}p(\mathbf{x},t\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=\int\limits_{t=0}^{T}\frac{g^{2}(t)}{k^{2}(t)}\int\!\!\int\left\|\frac{\boldsymbol{d}_{\boldsymbol{\theta}}(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t))-\boldsymbol{d}(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t))}{\tilde{\sigma}^{2}(t)}\right\|^{2}p(\mathbf{x},t\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=\int\limits_{t=0}^{T}\frac{g^{2}(t)}{k^{2}(t)}\int\!\!\int\left\|\frac{\boldsymbol{d}_{\boldsymbol{\theta}}(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t))-\boldsymbol{d}(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t))}{\tilde{\sigma}^{2}(t)}\right\|^{2}\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\,\Big|\,\mathbf{x}_{0}\right)p_{\text{data}}(\mathbf{x}_{0})\,k(t)^{-D}\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&\stackrel{\tilde{\mathbf{x}}=\mathbf{x}/k(t)}{=}\int\limits_{t=0}^{T}\frac{g^{2}(t)}{k^{2}(t)}\int\!\!\int\left\|\frac{\boldsymbol{d}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}};\tilde{\sigma}^{2}(t))-\boldsymbol{d}(\tilde{\mathbf{x}};\tilde{\sigma}^{2}(t))}{\tilde{\sigma}^{2}(t)}\right\|^{2}\psi(\tilde{\mathbf{x}},\tilde{\sigma}^{2}(t)\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\tilde{\mathbf{x}}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&\stackrel{r=\tilde{\sigma}^{2}(t)}{=}\int\limits_{r=0}^{\tilde{\sigma}^{2}(T)}\int\left\|\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}},r)-\nabla\log\psi(\tilde{\mathbf{x}},r)\right\|^{2}\psi(\tilde{\mathbf{x}},r)\,\mathrm{d}\tilde{\mathbf{x}}\,\mathrm{d}r,
\end{aligned}$$

where the substitutions use dx̃ = k(t)⁻ᴰdx and dr = (g²(t)/k²(t))dt, and the last line marginalizes x₀. For any k(t), σ(t) such that σ̃(T) is the same, the score matching loss is therefore the same, which proves the claim.

Theorem 2. *Suppose that for any* ϕ *of the auxiliary model* νϕ(x) *there exists one* ϕ′ *such that* ν_{ϕ′}(x) = k⁻ᴰνϕ(x/k), *for any* k > 0. *Notice that this condition is trivially satisfied if the considered parametric model has the expressiveness to multiply its output by the scalar* k. *Then the minimum of the Kullback–Leibler divergence between* p(x, T) *associated to a generic diffusion Equation (42) and the density of an auxiliary model* νϕ(x) *depends only on* σ̃(T), *and not on* σ(T) *alone.*

2Citation not included to avoid breaking anonymity

![24_image_0.png](24_image_0.png)

Figure 8: Visualization of few samples at different diffusion times T.
Proof. We start with the equality

$$\begin{aligned}
\mathrm{KL}\left[p(\mathbf{x},T)\parallel\nu_{\boldsymbol{\phi}}(\mathbf{x})\right]&=\mathrm{KL}\left[k(T)^{-D}\psi\!\left(\frac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T)\right)\,\Big\|\,\nu_{\boldsymbol{\phi}}(\mathbf{x})\right]\\
&=\mathrm{KL}\left[k(T)^{-D}\psi\!\left(\frac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T)\right)\,\Big\|\,k(T)^{-D}\nu_{\boldsymbol{\phi}^{\prime}}\!\left(\frac{\mathbf{x}}{k(T)}\right)\right]\\
&=\int k(T)^{-D}\psi\!\left(\frac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T)\right)\log\left(\frac{\psi(\frac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T))}{\nu_{\boldsymbol{\phi}^{\prime}}(\frac{\mathbf{x}}{k(T)})}\right)\mathrm{d}\mathbf{x}\\
&=\int\psi(\tilde{\mathbf{x}},\tilde{\sigma}^{2}(T))\log\left(\frac{\psi(\tilde{\mathbf{x}},\tilde{\sigma}^{2}(T))}{\nu_{\boldsymbol{\phi}^{\prime}}(\tilde{\mathbf{x}})}\right)\mathrm{d}\tilde{\mathbf{x}}=\mathrm{KL}\left[\psi(\mathbf{x},\tilde{\sigma}^{2}(T))\parallel\nu_{\boldsymbol{\phi}^{\prime}}(\mathbf{x})\right].
\end{aligned}$$

Then the minimum only depends on σ̃(T), as it is always possible to achieve the same value, independently of the sde, by rescaling the auxiliary model output.
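The rescaling step in the proof, that KL is invariant under the simultaneous change of variables x → kx in both densities, can be checked in closed form for one-dimensional Gaussians (arbitrary parameters, a sketch only):

```python
import numpy as np

# KL is invariant when both densities are rescaled by the same factor k:
# checked in closed form for 1-D Gaussians, where x -> k*x maps
# N(m, v) to N(k*m, k^2*v).
def kl_gauss(m1, v1, m2, v2):
    # KL[N(m1, v1) || N(m2, v2)]
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

m1, v1, m2, v2, k = 0.5, 1.2, 0.0, 2.0, 3.0
base = kl_gauss(m1, v1, m2, v2)
scaled = kl_gauss(k * m1, k ** 2 * v1, k * m2, k ** 2 * v2)
print(base, scaled)  # identical up to floating-point error
```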

## K Experimental Details

We here give some additional details concerning the experimental (§ 4) settings.

## K.1 Toy Example Details

In the toy example, we use 8192 samples from a simple Gaussian mixture with two components as the target p*data*(x). In detail, we have p*data*(x) = πN(1, 0.1²) + (1 − π)N(3, 0.5²), with π = 0.3. The choice of a Gaussian mixture allows us to write down explicitly the time-varying density

$$p(\mathbf{x}_{t},t)=\pi\mathcal{N}(1,s^{2}(t)+0.1^{2})+(1-\pi)\mathcal{N}(3,s^{2}(t)+0.5^{2}),\tag{47}$$

where $s^2(t)$ is the marginal variance of the process at time $t$. We consider a variance exploding sde of the type $\mathrm{d}\mathbf{x}_t = \sigma^t \mathrm{d}\mathbf{w}_t$, which corresponds to $s^2(t) = \frac{\sigma^{2t}-1}{2\log\sigma}$.
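The closed form for s²(t) follows from s²(t) = ∫₀ᵗ σ²ˢ ds; a quick quadrature check (the values of σ and t below are arbitrary):

```python
import numpy as np

# Verify s^2(t) = (sigma^(2t) - 1) / (2 log sigma) for dx_t = sigma^t dw_t
# by trapezoidal quadrature of the accumulated diffusion coefficient.
sigma, t = 25.0, 0.7
s = np.linspace(0.0, t, 400001)
f = sigma ** (2 * s)
quad = np.sum((f[1:] + f[:-1]) / 2) * (s[1] - s[0])
closed = (sigma ** (2 * t) - 1.0) / (2.0 * np.log(sigma))
print(quad, closed)  # numerically equal
```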

## K.2 Details Of § 4

We considered the Variance Preserving sde with default β₀, β₁ parameter settings. When experimenting on cifar10 we considered the NCSN++ architecture as implemented in Song et al. (2021c). Training of the score matching network has been carried out with the default set of optimizers and schedulers of Song et al. (2021c), independently of the selected T.

For the mnist dataset we reduced the architecture by considering 64 features, ch_mult = (1, 2) and attention resolutions equal to 8. The optimizer is the same as in the cifar10 experiment, but the warmup has been reduced to 1000 iterations and the total number of iterations to 65000.
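For reference, the reduced mnist setup described above can be summarized as a plain dictionary; the key names below are ours and only loosely follow the Song et al. (2021c) config conventions:

```python
# illustrative summary of the reduced mnist architecture and training
# schedule; key names are ours, not the exact Song et al. (2021c) fields
mnist_config = {
    "nf": 64,                  # base number of features
    "ch_mult": (1, 2),         # channel multipliers per resolution level
    "attn_resolutions": (8,),  # attention applied at resolution 8
    "warmup": 1000,            # reduced warmup iterations
    "n_iters": 65000,          # total training iterations
}
```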

## K.3 Varying T

We clarify the T truncation procedure during both training and testing. The sde parameters are kept unchanged irrespective of T. During training, as evident from Eq. (3), it is sufficient to sample the diffusion time randomly from the distribution U(0, T), where T can take any positive value. For testing (sampling), we simply modified the algorithmic routines to begin the reverse diffusion process from a generic T instead of the default 1.0.
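As an illustration, a minimal sketch of both modifications, using a hypothetical `score_fn` and a simple Euler-Maruyama discretization of the reverse VP sde (not the actual samplers of Song et al. (2021c)):

```python
import numpy as np

BETA0, BETA1 = 0.1, 20.0  # default VP-sde beta parameters

def beta(t):
    # linear noise schedule of the Variance Preserving sde
    return BETA0 + t * (BETA1 - BETA0)

def sample_train_times(batch_size, T, rng):
    # training: diffusion times drawn from U(0, T); the sde itself is
    # unchanged, only the sampling range of t is truncated
    return rng.uniform(0.0, T, size=batch_size)

def reverse_sample(score_fn, x_T, T, n_steps, rng):
    # testing: Euler-Maruyama integration of the reverse sde, started
    # from a generic T instead of the default 1.0
    dt = T / n_steps
    x = x_T
    for i in range(n_steps):
        t = T - i * dt
        drift = -0.5 * beta(t) * x - beta(t) * score_fn(x, t)
        x = x - drift * dt + np.sqrt(beta(t) * dt) * rng.standard_normal(x.shape)
    return x

# toy usage: with score_fn the exact score of N(0, 1), the standard
# normal is preserved by the reverse dynamics
rng = np.random.default_rng(0)
x0 = reverse_sample(lambda x, t: -x, rng.standard_normal(5000), 0.5, 500, rng)
print(x0.std())  # close to 1
```

The only places where T enters are the upper bound of the time distribution at training and the starting point of the reverse integration at sampling, matching the description above.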

## L Non Curated Samples

For completeness, we provide collections of non-curated samples for the cifar10 dataset (Figs. 9 to 12), the mnist dataset (Figs. 13 to 16) and the celeba dataset (Fig. 17 and Table 6).

Table 6: celeba: fid scores for our method and the baseline (T = 1.0).

| Model                                    | fid (↓) | nfe (↓) |
|------------------------------------------|---------|---------|
| ScoreSDE (sde) Song et al. (2021c)       | 3.90    | 1000    |
| Our (T = 0.5)                            | 8.06    | 500     |
| Our (T = 0.2)                            | 86.9    | 200     |
| Our with pretrained diffusion (T = 0.5)  | 8.58    | 500     |
| Our with pretrained diffusion (T = 0.2)  | 86.7    | 200     |

![26_image_0.png](26_image_0.png)

Figure 9: cifar10: Our (left) and Vanilla (right) methods at T = 0.2

![26_image_1.png](26_image_1.png)

Figure 10: cifar10: Our (left) and Vanilla (right) methods at T = 0.4

![27_image_0.png](27_image_0.png)

Figure 11: cifar10: Our (left) and Vanilla (right) methods at T = 0.6

![27_image_1.png](27_image_1.png)

Figure 12: Vanilla method at T = 1.0

![28_image_0.png](28_image_0.png)

Figure 13: MNIST: Our (left) and Vanilla (right) methods at T = 0.2

![28_image_1.png](28_image_1.png)

Figure 14: MNIST: Our (left) and Vanilla (right) methods at T = 0.4

![29_image_0.png](29_image_0.png)

Figure 15: MNIST: Our (left) and Vanilla (right) methods at T = 0.6

![29_image_1.png](29_image_1.png)

Figure 16: MNIST: Vanilla method at T = 1.0

![30_image_0.png](30_image_0.png)

Figure 17: celeba: Top: our method with pretrained score model and Glow (T = 0.5) and Bottom: baseline diffusion (T = 1.0)