# A Simple Convergence Proof Of Adam And Adagrad

Alexandre Défossez defossez@meta.com Meta AI

Léon Bottou Meta AI

Francis Bach INRIA / PSL

Nicolas Usunier Meta AI

Reviewed on OpenReview: *https://openreview.net/forum?id=ZPQhzTSWA7*
## Abstract

We provide a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms when applied to smooth (possibly non-convex) objective functions with bounded gradients. We show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper-bound which is explicit in the constants of the problem, parameters of the optimizer, the dimension d, and the total number of iterations N. This bound can be made arbitrarily small, and with the right hyper-parameters, Adam can be shown to converge with the same rate of convergence O(d ln(N)/√N). When used with the default parameters, Adam doesn't converge, however, and just like constant stepsize SGD, it moves away from the initialization point faster than Adagrad, which might explain its practical success. Finally, we obtain the tightest dependency on the heavy-ball momentum decay rate β1 among all previous convergence bounds for non-convex Adam and Adagrad, improving from O((1 − β1)⁻³) to O((1 − β1)⁻¹).
## 1 Introduction

First-order methods with adaptive step sizes have proved useful in many fields of machine learning, be it for sparse optimization (Duchi et al., 2013), tensor factorization (Lacroix et al., 2018) or deep learning (Goodfellow et al., 2016). Duchi et al. (2011) introduced Adagrad, which rescales each coordinate by a sum of squared past gradient values. While Adagrad proved effective for sparse optimization (Duchi et al., 2013), experiments showed that it under-performed when applied to deep learning (Wilson et al., 2017). RMSProp (Tieleman & Hinton, 2012) proposed an exponential moving average instead of a cumulative sum to solve this. Kingma & Ba (2015) developed Adam, one of the most popular adaptive methods in deep learning, built upon RMSProp, and added corrective terms at the beginning of training, together with heavy-ball style momentum. In the online convex optimization setting, Duchi et al. (2011) showed that Adagrad achieves optimal regret. Kingma & Ba (2015) provided a similar proof for Adam when using a decreasing overall step size, although this proof was later shown to be incorrect by Reddi et al. (2018), who introduced AMSGrad as a convergent alternative. Ward et al. (2019) proved that Adagrad also converges to a critical point for non-convex objectives with a rate O(ln(N)/√N) when using a scalar adaptive step size, instead of a diagonal one. Zou et al. (2019b) extended this proof to the vector case, while Zou et al. (2019a) displayed a bound for Adam, showing convergence when the decay of the exponential moving average scales as 1 − 1/N and the learning rate as 1/√N.

In this paper, we present a simplified and unified proof of convergence to a critical point for Adagrad and Adam for stochastic non-convex smooth optimization. We assume that the objective function is lower bounded, smooth, and that the stochastic gradients are almost surely bounded. We recover the standard O(ln(N)/√N) convergence rate for Adagrad for all step sizes, and the same rate for Adam with an appropriate choice of the step sizes and decay parameters; in particular, Adam can converge without using the AMSGrad variant. Compared to previous work, our bound significantly improves the dependency on the momentum parameter β1. The best known bounds for Adagrad and Adam are respectively in O((1 − β1)⁻³) and O((1 − β1)⁻⁵) (see Section 3), while our result is in O((1 − β1)⁻¹) for both algorithms. This improvement is a step toward understanding the practical efficiency of heavy-ball momentum.

Outline. The precise setting and assumptions are stated in the next section, and previous work is then described in Section 3. The main theorems are presented in Section 4, followed by a full proof for the case without momentum in Section 5. The proof of the convergence with momentum is deferred to the supplementary material, Section A. Finally, we compare our bounds with experimental results, both on toy and real-life problems, in Section 6.
## 2 Setup

## 2.1 Notation

Let d ∈ N be the dimension of the problem (i.e. the number of parameters of the function to optimize) and take [d] = {1, 2, . . . , d}. Given a function h : R^d → R, we denote by ∇h its gradient and by ∇ih the i-th component of the gradient. We use a small constant ε > 0, e.g. 10⁻⁸, for numerical stability. Given a sequence (un)n∈N with un ∈ R^d for all n ∈ N, we denote by un,i, for n ∈ N and i ∈ [d], the i-th component of the n-th element of the sequence.

We want to optimize a function F : R^d → R. We assume there exists a random function f : R^d → R such that E[∇f(x)] = ∇F(x) for all x ∈ R^d, and that we have access to an oracle providing i.i.d. samples (fn)n∈N∗. We note En−1[·] the conditional expectation knowing f1, . . . , fn−1. In machine learning, x typically represents the weights of a linear or deep model, f represents the loss from individual training examples or minibatches, and F is the full training objective function. The goal is to find a critical point of F.
## 2.2 Adaptive Methods

We study both Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2015) using a unified formulation.

We assume we have 0 < β2 ≤ 1, 0 ≤ β1 < β2, and a non-negative sequence (αn)n∈N∗. We define three vectors mn, vn, xn ∈ R^d iteratively. Given x0 ∈ R^d our starting point, m0 = 0, and v0 = 0, we define for all iterations n ∈ N∗,

$$m_{n,i}=\beta_{1}m_{n-1,i}+\nabla_{i}f_{n}(x_{n-1}),\tag{1}$$
$$v_{n,i}=\beta_{2}v_{n-1,i}+\left(\nabla_{i}f_{n}(x_{n-1})\right)^{2},\tag{2}$$
$$x_{n,i}=x_{n-1,i}-\alpha_{n}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}.\tag{3}$$

The parameter β1 is a heavy-ball style momentum parameter (Polyak, 1964), while β2 controls the decay rate of the per-coordinate exponential moving average of the squared gradients. Taking β1 = 0, β2 = 1 and αn = α gives Adagrad. While the original Adagrad algorithm did not include a heavy-ball-like momentum, our analysis also applies to the case β1 > 0.

Adam and its corrective terms. The original Adam algorithm (Kingma & Ba, 2015) uses a weighted average, rather than a weighted sum, for (1) and (2), i.e. it uses

$$\tilde{m}_{n,i}=(1-\beta_{1})\sum_{k=1}^{n}\beta_{1}^{n-k}\nabla_{i}f_{k}(x_{k-1})=(1-\beta_{1})m_{n,i}.$$

We can achieve the same definition by taking αadam = α · (1 − β1)/√(1 − β2). The original Adam algorithm further includes two corrective terms to account for the fact that mn and vn are biased towards 0 for the first few iterations. Those corrective terms are equivalent to taking a step size αn of the form

$$\alpha_{n,\mathrm{adam}}=\alpha\cdot\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\cdot\underbrace{\frac{1}{1-\beta_{1}^{n}}}_{\text{corrective term for }m_{n}}\cdot\underbrace{\sqrt{1-\beta_{2}^{n}}}_{\text{corrective term for }v_{n}}.\tag{4}$$

Those corrective terms can be seen as the normalization factors for the weighted sums given by (1) and (2), which explains the (1 − β1) term in (4). Note that each term goes to its limit value within a few times 1/(1 − β) updates (with β ∈ {β1, β2}). In the present work, we propose to drop the corrective term for mn, and to keep only the one for vn, thus using the alternative step size

$$\alpha_{n}=\alpha(1-\beta_{1})\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}.\tag{5}$$

This simplification is motivated by several observations:

- By dropping either corrective term, αn becomes monotonic, which simplifies the proof.

- For typical values of β1 and β2 (e.g. 0.9 and 0.999), the corrective term for mn converges to its limit value much faster than the one for vn.

- Removing the corrective term for mn is equivalent to a learning-rate warmup, which is popular in deep learning, while removing the one for vn would lead to an increased step size during early training. For values of β2 close to 1, this can lead to divergence in practice.

We experimentally verify in Section 6.3 that dropping the corrective term for mn has no observable effect on the training process, while dropping the corrective term for vn leads to observable perturbations. In the following, we thus consider the variation of Adam obtained by taking αn provided by (5).
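To make the recursion concrete, the following minimal NumPy sketch implements one step of (1)–(3) with the step size (5). It is an illustration of the iteration analyzed in this paper rather than a reference implementation of either optimizer, and the stochastic gradient passed as `grad` (as well as the quadratic used in the usage example) is an assumption of the example.

```python
import numpy as np

def adaptive_step(x, m, v, grad, n, alpha, beta1=0.0, beta2=1.0, eps=1e-8):
    """One iteration of the unified recursion (1)-(3) at step n >= 1.

    beta1 = 0, beta2 = 1 gives Adagrad (then alpha_n = alpha);
    0 < beta2 < 1 gives the Adam variant with the step size (5),
    i.e. only the corrective term for v_n is kept.
    """
    m = beta1 * m + grad                      # (1): heavy-ball style weighted sum
    v = beta2 * v + grad ** 2                 # (2): per-coordinate squared gradients
    if beta2 < 1.0:
        alpha_n = alpha * (1 - beta1) * np.sqrt((1 - beta2 ** n) / (1 - beta2))  # (5)
    else:
        alpha_n = alpha * (1 - beta1)         # reduces to alpha_n = alpha when beta1 = 0
    x = x - alpha_n * m / np.sqrt(eps + v)    # (3)
    return x, m, v

# Illustrative usage on a noisy quadratic (the gradient oracle here is made up).
rng = np.random.default_rng(0)
x, m, v = np.ones(4), np.zeros(4), np.zeros(4)
for n in range(1, 1001):
    grad = x + 0.1 * rng.standard_normal(4)
    x, m, v = adaptive_step(x, m, v, grad, n, alpha=1e-2, beta1=0.9, beta2=0.999)
```

With β2 = 1 and β1 > 0, the same function matches the Adagrad-with-momentum step size (A.3) used in the appendix; with β1 = 0 it reduces to αn = α as stated above.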
## 2.3 Assumptions

We make three assumptions. We first assume F is bounded below by F∗, that is,

$$\forall x\in\mathbb{R}^{d},\ F(x)\geq F_{*}.\tag{6}$$

We then assume the ℓ∞ *norm of the stochastic gradients is uniformly almost surely bounded*, i.e. there is R ≥ √ε (the √ε is used here to simplify the final bounds) so that

$$\forall x\in\mathbb{R}^{d},\quad\|\nabla f(x)\|_{\infty}\leq R-\sqrt{\epsilon}\quad{\mathrm{a.s.}},\tag{7}$$

and finally, the *smoothness of the objective function*, i.e., its gradient is L-Lipschitz-continuous with respect to the ℓ2-norm:

$$\forall x,y\in\mathbb{R}^{d},\quad\|\nabla F(x)-\nabla F(y)\|_{2}\leq L\,\|x-y\|_{2}.\tag{8}$$

We discuss the use of assumption (7) in Section 4.2.
## 3 Related Work

Early work on adaptive methods (McMahan & Streeter, 2010; Duchi et al., 2011) showed that Adagrad achieves an optimal rate of convergence of O(1/√N) for convex optimization (Agarwal et al., 2009). Later, RMSProp (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015) were developed for training deep neural networks, using an exponential moving average of the past squared gradients.

Kingma & Ba (2015) offered a proof that Adam with a decreasing step size converges for convex objectives. However, the proof contained a mistake spotted by Reddi et al. (2018), who also gave examples of convex problems where Adam does not converge to an optimal solution. They proposed AMSGrad as a convergent variant, which consisted in retaining the maximum value of the exponential moving average. When α goes to zero, AMSGrad is shown to converge in the convex and non-convex setting (Fang & Klabjan, 2019; Zhou et al., 2018). Despite this apparent flaw in the Adam algorithm, it remains a widely popular optimizer, raising the question as to whether it converges. When β2 goes to 1 and α to 0, our results and previous work (Zou et al., 2019a) show that Adam does converge with the same rate as Adagrad. This is coherent with the counter-examples of Reddi et al. (2018), because they use a small exponential decay parameter β2 < 1/5.

The convergence of Adagrad for non-convex objectives was first tackled by Li & Orabona (2019), who proved its convergence, but under restrictive conditions (e.g., α ≤ √ε/L). The proof technique was improved by Ward et al. (2019), who showed the convergence of "scalar" Adagrad, i.e., with a single learning rate, for any value of α with a rate of O(ln(N)/√N). Our approach builds on this work but we extend it to both Adagrad and Adam, in their coordinate-wise version, as used in practice, while also supporting heavy-ball momentum.

The coordinate-wise version of Adagrad was also tackled by Zou et al. (2019b), offering a convergence result for Adagrad with either heavy-ball or Nesterov style momentum. We obtain the same rate for heavy-ball momentum with respect to N (i.e., O(ln(N)/√N)), but we improve the dependence on the momentum parameter β1 from O((1 − β1)⁻³) to O((1 − β1)⁻¹). Chen et al. (2019) also provided a bound for Adagrad and Adam, but without convergence guarantees for Adam for any hyper-parameter choice, and with a worse dependency on β1. Zhou et al. (2018) also cover Adagrad in the stochastic setting; however, their proof technique leads to a √(1/ε) term in their bound, typically with ε = 10⁻⁸. Finally, a convergence bound for Adam was introduced by Zou et al. (2019a). We recover the same scaling of the bound with respect to α and β2. However, their bound has a dependency of O((1 − β1)⁻⁵) with respect to β1, while we get O((1 − β1)⁻¹), a significant improvement. Shi et al. (2020) obtain similar convergence results for RMSProp and Adam when considering the random shuffling setup. They use an affine growth condition (i.e. the norm of the stochastic gradient is bounded by an affine function of the norm of the deterministic gradient) instead of the boundedness of the gradient, but their bound decays with the number of total epochs, not stochastic updates, leading to an overall extra √s factor with s the size of the dataset. Finally, Faw et al. (2022) use the same affine growth assumption to derive high probability bounds for scalar Adagrad.

Non-adaptive methods like SGD are also well studied in the non-convex setting (Ghadimi & Lan, 2013), with a convergence rate of O(1/√N) for a smooth objective with bounded variance of the gradients. Unlike adaptive methods, SGD requires knowing the smoothness constant. When adding heavy-ball momentum, Yang et al. (2016) showed that the convergence bound degrades as O((1 − β1)⁻²), assuming that the gradients are bounded. We apply our proof technique for momentum to SGD in the Appendix, Section B, and improve this dependency to O((1 − β1)⁻¹). Recent work by Liu et al. (2020) achieves the same dependency with weaker assumptions. Defazio (2020) provided an in-depth analysis of SGD-M with a tight Lyapunov analysis.
## 4 Main Results

For a number of iterations N ∈ N∗, we note τN a random index with values in {0, . . . , N − 1}, such that

$$\forall j\in\mathbb{N},\ j<N,\quad\mathbb{P}\left[\tau=j\right]\propto1-\beta_{1}^{N-j}.\tag{9}$$

If β1 = 0, this is equivalent to sampling τ uniformly in {0, . . . , N − 1}. If β1 > 0, the last few 1/(1 − β1) iterations are sampled rarely, and iterations older than a few times that number are sampled almost uniformly. Our results bound the expected squared norm of the gradient at iteration τ, which is standard for non-convex stochastic optimization (Ghadimi & Lan, 2013).
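For illustration, the index τ of (9) can be drawn as in the short sketch below (the function name is ours, not part of the paper).

```python
import numpy as np

def sample_tau(N, beta1, rng=None):
    """Draw tau in {0, ..., N-1} with P[tau = j] proportional to 1 - beta1**(N - j),
    as in (9); for beta1 = 0 this reduces to the uniform distribution."""
    rng = np.random.default_rng() if rng is None else rng
    j = np.arange(N)
    weights = 1.0 - beta1 ** (N - j)   # the last ~1/(1 - beta1) iterations get small weight
    return int(rng.choice(N, p=weights / weights.sum()))
```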
## 4.1 Convergence Bounds

For simplicity, we first give convergence results for β1 = 0, along with a complete proof in Section 5. We then provide the results with momentum, with their proofs in the Appendix, Section A.6. We also provide a bound on the convergence of SGD with a O(1/(1 − β1)) dependency in the Appendix, Section B.2, along with its proof in Section B.4.

## No Heavy-Ball Momentum

Theorem 1 (Convergence of Adagrad without momentum). Given the assumptions from Section 2.3, the iterates xn defined in Section 2.2 with hyper-parameters verifying β2 = 1, αn = α with α > 0 and β1 = 0, and τ defined by (9), we have for any N ∈ N∗,

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha\sqrt{N}}+\frac{1}{\sqrt{N}}\left(4dR^{2}+\alpha dRL\right)\ln\left(1+\frac{NR^{2}}{\epsilon}\right).\tag{10}$$

Theorem 2 (Convergence of Adam without momentum). Given the assumptions from Section 2.3, the iterates xn defined in Section 2.2 with hyper-parameters verifying 0 < β2 < 1, αn = α√((1 − β2^n)/(1 − β2)) with α > 0 and β1 = 0, and τ defined by (9), we have for any N ∈ N∗,

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R{\frac{F(x_{0})-F_{*}}{\alpha N}}+E\left({\frac{1}{N}}\ln\left(1+{\frac{R^{2}}{(1-\beta_{2})\epsilon}}\right)-\ln(\beta_{2})\right),\tag{11}$$

with

$$E={\frac{4d R^{2}}{\sqrt{1-\beta_{2}}}}+{\frac{\alpha d R L}{1-\beta_{2}}}.\tag{12}$$
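As a quick numerical illustration, the right-hand sides of (10) and (11)–(12) can be evaluated directly; the constants below (R, L, d and the initial gap) are placeholder values chosen for the example, not quantities taken from the paper's experiments.

```python
import numpy as np

def adagrad_bound(N, d, R, L, alpha, gap, eps=1e-8):
    """Right-hand side of (10), the Adagrad bound with beta1 = 0."""
    return (2 * R * gap / (alpha * np.sqrt(N))
            + (4 * d * R**2 + alpha * d * R * L) / np.sqrt(N) * np.log(1 + N * R**2 / eps))

def adam_bound(N, d, R, L, alpha, beta2, gap, eps=1e-8):
    """Right-hand side of (11), with E given by (12), for Adam with beta1 = 0."""
    E = 4 * d * R**2 / np.sqrt(1 - beta2) + alpha * d * R * L / (1 - beta2)
    return (2 * R * gap / (alpha * N)
            + E * (np.log(1 + R**2 / ((1 - beta2) * eps)) / N - np.log(beta2)))

# With fixed alpha and beta2, the -ln(beta2) term keeps the Adam bound from
# vanishing as N grows, unlike the Adagrad bound.
for N in (10**3, 10**5, 10**7):
    print(N,
          adagrad_bound(N, d=10, R=1.0, L=1.0, alpha=0.1, gap=1.0),
          adam_bound(N, d=10, R=1.0, L=1.0, alpha=0.1, beta2=0.999, gap=1.0))
```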
## With Heavy-Ball Momentum

Theorem 3 (Convergence of Adagrad with momentum). Given the assumptions from Section 2.3, the iterates xn defined in Section 2.2 with hyper-parameters verifying β2 = 1, αn = α with α > 0 and 0 ≤ β1 < 1, and τ defined by (9), we have for any N ∈ N∗ such that N > β1/(1 − β1),

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R\sqrt{N}\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+\frac{\sqrt{N}}{\tilde{N}}E\ln\left(1+\frac{N R^{2}}{\epsilon}\right),$$

with Ñ = N − β1/(1 − β1), and

$$E=\alpha d R L+\frac{12d R^{2}}{1-\beta_{1}}+\frac{2\alpha^{2}d L^{2}\beta_{1}}{1-\beta_{1}}.$$

Theorem 4 (Convergence of Adam with momentum). Given the assumptions from Section 2.3, the iterates xn defined in Section 2.2 with hyper-parameters verifying 0 < β2 < 1, 0 ≤ β1 < β2, and αn = α(1 − β1)√((1 − β2^n)/(1 − β2)) with α > 0, and τ defined by (9), we have for any N ∈ N∗ such that N > β1/(1 − β1),

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+E\left(\frac{1}{\tilde{N}}\ln\left(1+\frac{R^{2}}{(1-\beta_{2})\epsilon}\right)-\frac{N}{\tilde{N}}\ln(\beta_{2})\right),\tag{13}$$

with Ñ = N − β1/(1 − β1), and

$$E=\frac{\alpha d R L(1-\beta_{1})}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})}+\frac{12d R^{2}\sqrt{1-\beta_{1}}}{(1-\beta_{1}/\beta_{2})^{3/2}\sqrt{1-\beta_{2}}}+\frac{2\alpha^{2}d L^{2}\beta_{1}}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})^{3/2}}.$$
## 4.2 Analysis Of The Bounds

Dependency on d. The dependency in d is present in previous works on coordinate-wise adaptive methods (Zou et al., 2019a;b). Note however that R is defined as the ℓ∞ bound on the stochastic gradient, so that in the case where the gradient has a similar scale along all dimensions, dR² would be a reasonable bound for ‖∇f(x)‖²₂. However, if many dimensions contribute little to the norm of the gradient, this would still lead to a worse dependency in d than e.g. scalar Adagrad (Ward et al., 2019) or SGD. Diving into the technicalities of the proof to come, we will see in Section 5 that we apply Lemma 5.2 once per dimension. The contribution from each coordinate is mostly independent of the actual scale of its gradients (as it only appears in the log), so that the right-hand side of the convergence bound will grow as d. In contrast, the scalar version of Adagrad (Ward et al., 2019) has a single learning rate, so that Lemma 5.2 is only applied once, removing the dependency on d. However, this variant is rarely used in practice.

Almost sure bound on the gradient. We chose to assume the existence of an almost sure uniform ℓ∞-bound on the gradients given by (7). This is a strong assumption, although it is weaker than the one used by Duchi et al. (2011) for Adagrad in the convex case, where the iterates were assumed to be almost surely bounded. There exist a few real-life problems that verify this assumption, for instance logistic regression without weight penalty, and with bounded inputs. It is possible instead to assume only a uniform bound on the expected gradient ∇F(x), as done by Ward et al. (2019) and Zou et al. (2019b). This however leads to a bound on E[‖∇F(xτ)‖₂^{4/3}]^{2/3} instead of a bound on E[‖∇F(xτ)‖²₂], all the other terms staying the same.

We provide the sketch of the proof using the Hölder inequality in the Appendix, Section A.7. It is also possible to replace the bound on the gradient with an affine growth condition, i.e. the norm of the stochastic gradient is bounded by an affine function of the norm of the expected gradient. A proof for scalar Adagrad is provided by Faw et al. (2022). Shi et al. (2020) do the same for RMSProp; however, their convergence bound decays as O(log(T)/√T) with T the number of epochs, not the number of updates, leading to a significantly less tight bound for large datasets.

Impact of heavy-ball momentum. Looking at Theorems 3 and 4, we see that increasing β1 always deteriorates the bounds. Taking β1 = 0 in those theorems gives us almost exactly the bound without heavy-ball momentum from Theorems 1 and 2, up to a factor 3 in the terms of the form dR². As discussed in Section 3, previous bounds for Adagrad in the non-convex setting deteriorate as O((1 − β1)⁻³) (Zou et al., 2019b), while bounds for Adam deteriorate as O((1 − β1)⁻⁵) (Zou et al., 2019a). Our unified proof for Adam and Adagrad achieves a dependency of O((1 − β1)⁻¹), a significant improvement. We refer the reader to the Appendix, Section A.3, for a detailed analysis. While our dependency still contradicts the benefits of using momentum observed in practice, see Section 6, our tighter analysis is a step in the right direction.

On sampling of τ. Note that in (9), we sample the latest iterations with a lower probability. This can be explained by the fact that the proof technique for stochastic optimization in the non-convex case is based on the idea that for every iteration n, either ∇F(xn) is small, or F(xn+1) will decrease by some amount. However, when introducing momentum, and especially when taking the limit β1 → 1, the latest gradient ∇F(xn) has almost no influence over xn+1, as the momentum term updates slowly. Momentum *spreads* the influence of the gradients over time, and thus it will take a few updates for a gradient to have fully influenced the iterate xn and thus the value of the function F(xn). From a formal point of view, the sampling weights given by (9) naturally appear as part of the proof, which is presented in Section A.6.
## 4.3 Optimal Finite Horizon Adam Is Adagrad

Let us take a closer look at the result from Theorem 2. It could seem like some quantities can explode, but actually not for any reasonable values of α, β2 and N. Let us try to find the best possible rate of convergence for Adam for a finite horizon N, i.e. q ∈ R+ such that E[‖∇F(xτ)‖²] = O(ln(N)N^−q) for some choice of the hyper-parameters α(N) and β2(N). Given that the upper bound in (11) is a sum of non-negative terms, we need each term to be of the order of ln(N)N^−q or negligible. Let us assume that this rate is achieved for α(N) and β2(N). The bound tells us that convergence can only be achieved if lim α(N) = 0 and lim β2(N) = 1, with the limits taken for N → ∞. This motivates us to assume that there exists an asymptotic development of α(N) ∝ N^−a + o(N^−a), and of 1 − β2(N) ∝ N^−b + o(N^−b) for a and b positive.

Thus, let us consider only the leading term in those developments, ignoring the leading constant (which is assumed to be non-zero). Let us further assume that ε ≪ R². We have

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R{\frac{F(x_{0})-F_{*}}{N^{1-a}}}+E\left({\frac{1}{N}}\ln\left({\frac{R^{2}N^{b}}{\epsilon}}\right)+{\frac{N^{-b}}{1-N^{-b}}}\right),\tag{14}$$

with E = 4dR²N^{b/2} + dRLN^{b−a}. Let us ignore the log terms for now, and use N^−b/(1 − N^−b) ∼ N^−b for N → ∞, to get

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leqslant2R\frac{F(x_{0})-F_{*}}{N^{1-a}}+4dR^{2}N^{b/2-1}+4dR^{2}N^{-b/2}+dRLN^{b-a-1}+dRLN^{-a}.$$

Adding back the logarithmic term, the best rate we can obtain is O(ln(N)/√N), and it is only achieved for a = 1/2 and b = 1, i.e., α = α1/√N and β2 = 1 − 1/N. We can see the resemblance between Adagrad on one side and Adam with a finite horizon and such parameters on the other. Indeed, an exponential moving average with a parameter β2 = 1 − 1/N has a typical averaging window length of size N, while Adagrad would be an exact average of the past N terms. In particular, the bound for Adam now becomes

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha_{1}\sqrt{N}}+\frac{1}{\sqrt{N}}\left(4dR^{2}+\alpha_{1}dRL\right)\left(\ln\left(1+\frac{NR^{2}}{\epsilon}\right)+\frac{N}{N-1}\right),\tag{15}$$

which differs from (10) only by a +N/(N − 1) term next to the log term.

Adam and Adagrad are twins. Our analysis highlights an important fact: *Adam is to Adagrad like constant step size SGD is to decaying step size SGD*. While Adagrad is asymptotically optimal, it also leads to a slower decrease of the term proportional to F(x0) − F∗, as 1/√N instead of 1/N for Adam. During the initial phase of training, it is likely that this term dominates the loss, which could explain the popularity of Adam for training deep neural networks rather than Adagrad. With its default parameters, Adam will not converge. It is however possible to choose α and β2 to achieve an ε-critical point for ε arbitrarily small and, for a known time horizon, they can be chosen to obtain the exact same bound as Adagrad.
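In code, the finite-horizon choice discussed in this section amounts to the small helper below; α1 is the base step size of the analysis and the function name is ours.

```python
def finite_horizon_adam_params(N, alpha1):
    """Hyper-parameters for which the finite-horizon Adam bound matches the
    Adagrad rate O(ln(N)/sqrt(N)) over N iterations (a = 1/2, b = 1)."""
    alpha = alpha1 / N ** 0.5   # alpha = alpha_1 / sqrt(N)
    beta2 = 1.0 - 1.0 / N       # exponential moving average with window ~ N
    return alpha, beta2
```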
## 5 Proofs For β1 = 0 **(No Momentum)**

We assume here for simplicity that β1 = 0, i.e., there is no heavy-ball style momentum. Taking n ∈ N∗, the recursions introduced in Section 2.2 can be simplified into

$$\begin{cases}v_{n,i}&=\beta_{2}v_{n-1,i}+\left(\nabla_{i}f_{n}(x_{n-1})\right)^{2},\\ x_{n,i}&=x_{n-1,i}-\alpha_{n}{\frac{\nabla_{i}f_{n}(x_{n-1})}{\sqrt{\epsilon+v_{n,i}}}}.\end{cases}\tag{16}$$

Remember that we recover Adagrad when αn = α for α > 0 and β2 = 1, while Adam is obtained by taking 0 < β2 < 1, α > 0, and

$$\alpha_{n}=\alpha\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}.$$

Throughout the proof we denote by En−1[·] the conditional expectation with respect to f1, . . . , fn−1. In particular, xn−1 and vn−1 are deterministic knowing f1, . . . , fn−1. For all n ∈ N∗, we also define ṽn ∈ R^d such that for all i ∈ [d],

$$\tilde{v}_{n,i}=\beta_{2}v_{n-1,i}+\mathbb{E}_{n-1}\left[(\nabla_{i}f_{n}(x_{n-1}))^{2}\right],$$

i.e., we replace the last gradient contribution by its expected value conditioned on f1, . . . , fn−1.
## 5.1 Technical Lemmas

A problem posed by the update (16) is the correlation between the numerator and denominator. This prevents us from easily computing the conditional expectation and, as noted by Reddi et al. (2018), the expected direction of update can have a positive dot product with the objective gradient. It is however possible to control the deviation from the descent direction, following Ward et al. (2019), with this first lemma.

Lemma 5.1 (adaptive update approximately follows a descent direction). For all n ∈ N∗ and i ∈ [d], we have:

$$\mathbb{E}_{n-1}\left[\nabla_{i}F(x_{n-1}){\frac{\nabla_{i}f_{n}(x_{n-1})}{\sqrt{\epsilon+v_{n,i}}}}\right]\geq{\frac{(\nabla_{i}F(x_{n-1}))^{2}}{2\sqrt{\epsilon+{\tilde{v}}_{n,i}}}}-2R\,\mathbb{E}_{n-1}\left[{\frac{(\nabla_{i}f_{n}(x_{n-1}))^{2}}{\epsilon+v_{n,i}}}\right].$$
Proof. We take i ∈ [d] and write G = ∇iF(xn−1), g = ∇ifn(xn−1), v = vn,i and ṽ = ṽn,i. We have

$$\mathbb{E}_{n-1}\left[{\frac{G g}{\sqrt{\epsilon+v}}}\right]=\mathbb{E}_{n-1}\left[{\frac{G g}{\sqrt{\epsilon+\tilde{v}}}}\right]+\mathbb{E}_{n-1}\left[{\underbrace{G g\left({\frac{1}{\sqrt{\epsilon+v}}}-{\frac{1}{\sqrt{\epsilon+\tilde{v}}}}\right)}_{A}}\right].\tag{20}$$

Given that g and ṽ are independent knowing f1, . . . , fn−1, we immediately have

$$\mathbb{E}_{n-1}\left[\frac{Gg}{\sqrt{\epsilon+\tilde{v}}}\right]=\frac{G^{2}}{\sqrt{\epsilon+\tilde{v}}}.\tag{21}$$

Now we need to control the size of the second term A,

$$A=Gg\,\frac{\tilde{v}-v}{\sqrt{\epsilon+v}\,\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)}=Gg\,\frac{\mathbb{E}_{n-1}\left[g^{2}\right]-g^{2}}{\sqrt{\epsilon+v}\,\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)},$$

$$|A|\leq\underbrace{\frac{|Gg|\,\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+v}\,(\epsilon+\tilde{v})}}_{\kappa}+\underbrace{\frac{|Gg|\,g^{2}}{(\epsilon+v)\sqrt{\epsilon+\tilde{v}}}}_{\rho}.$$

The last inequality comes from the fact that √(ε + v) + √(ε + ṽ) ≥ max(√(ε + v), √(ε + ṽ)) and |En−1[g²] − g²| ≤ En−1[g²] + g². Following Ward et al. (2019), we can use the following inequality to bound κ and ρ,

$$\forall\lambda>0,\,x,y\in\mathbb{R},\quad xy\leq{\frac{\lambda}{2}}x^{2}+{\frac{y^{2}}{2\lambda}}.\tag{22}$$

First applying (22) to κ with

$$\lambda={\frac{\sqrt{\epsilon+\tilde{v}}}{2}},\;x={\frac{|G|}{\sqrt{\epsilon+\tilde{v}}}},\;y={\frac{|g|\,\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}\sqrt{\epsilon+v}}},$$

we obtain

$$\kappa\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{g^{2}\mathbb{E}_{n-1}\left[g^{2}\right]^{2}}{(\epsilon+\tilde{v})^{3/2}(\epsilon+v)}.$$

Given that ε + ṽ ≥ En−1[g²] and taking the conditional expectation, we can simplify as

$$\mathbb{E}_{n-1}\left[\kappa\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}}\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{23}$$

Given that √(En−1[g²]) ≤ √(ε + ṽ) and √(En−1[g²]) ≤ R, we can simplify (23) as

$$\mathbb{E}_{n-1}\left[\kappa\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+R\,\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{24}$$

Now turning to ρ, we use (22) with

$$\lambda=\frac{\sqrt{\epsilon+\tilde{v}}}{2\mathbb{E}_{n-1}\left[g^{2}\right]},\;x=\frac{|Gg|}{\sqrt{\epsilon+\tilde{v}}},\;y=\frac{g^{2}}{\epsilon+v},$$

we obtain

$$\rho\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}\frac{g^{2}}{\mathbb{E}_{n-1}\left[g^{2}\right]}+\frac{\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}}\frac{g^{4}}{(\epsilon+v)^{2}}.\tag{25}$$

Given that ε + v ≥ g² and taking the conditional expectation we obtain

$$\mathbb{E}_{n-1}\left[\rho\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}}\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right],\tag{26}$$

which we simplify using the same argument as for (24) into

$$\mathbb{E}_{n-1}\left[\rho\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+R\,\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{27}$$

Notice that in (25), we possibly divide by zero. It suffices to notice that if En−1[g²] = 0 then g² = 0 a.s., so that ρ = 0 and (27) is still verified. Summing (24) and (27) we can bound

$$\mathbb{E}_{n-1}\left[|A|\right]\leq\frac{G^{2}}{2\sqrt{\epsilon+\tilde{v}}}+2R\,\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{28}$$
Injecting (28) and (21) into (20) finishes the proof.

Anticipating on Section 5.2, the previous lemma gives us a bound on the deviation from a descent direction. While for a specific iteration, this deviation can take us away from a descent direction, the next lemma tells us that the sum of those deviations cannot grow larger than a logarithmic term. This key insight, introduced in Ward et al. (2019), is what makes the proof work.

Lemma 5.2 (sum of ratios with the denominator being the sum of past numerators). We assume we have 0 < β2 ≤ 1 and a non-negative sequence (an)n∈N∗. We define for all n ∈ N∗, bn = Σ_{j=1}^{n} β2^{n−j} aj. We have

$$\sum_{j=1}^{N}\frac{a_{j}}{\epsilon+b_{j}}\leq\ln\left(1+\frac{b_{N}}{\epsilon}\right)-N\ln(\beta_{2}).\tag{29}$$

Proof. Using the concavity of ln, and the fact that bj ≥ aj ≥ 0 (with the convention b0 = 0), we have for all j ∈ N∗,

$$\begin{aligned}\frac{a_{j}}{\epsilon+b_{j}}&\leq\ln(\epsilon+b_{j})-\ln(\epsilon+b_{j}-a_{j})\\ &=\ln(\epsilon+b_{j})-\ln(\epsilon+\beta_{2}b_{j-1})\\ &=\ln\left(\frac{\epsilon+b_{j}}{\epsilon+b_{j-1}}\right)+\ln\left(\frac{\epsilon+b_{j-1}}{\epsilon+\beta_{2}b_{j-1}}\right).\end{aligned}$$

The first term forms a telescoping series, while the second one is bounded by −ln(β2). Summing over all j ∈ [N] gives the desired result.
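Lemma 5.2 is also easy to check numerically; the following sketch draws an arbitrary non-negative sequence and compares both sides of (29) (an illustration of the statement, not part of the proof).

```python
import numpy as np

rng = np.random.default_rng(0)
eps, beta2, N = 1e-8, 0.99, 1000
a = rng.random(N) ** 2                 # non-negative sequence a_1, ..., a_N
b = np.zeros(N)
for j in range(N):                     # b_j = sum_{k <= j} beta2^(j-k) a_k
    b[j] = beta2 * (b[j - 1] if j > 0 else 0.0) + a[j]
lhs = float(np.sum(a / (eps + b)))
rhs = float(np.log(1 + b[-1] / eps) - N * np.log(beta2))
assert lhs <= rhs                      # inequality (29)
print(lhs, rhs)
```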
## 5.2 Proof Of Adam And Adagrad Without Momentum

Let us take an iteration n ∈ N∗ and define the update un ∈ R^d:

$$\forall i\in[d],\quad u_{n,i}=\frac{\nabla_{i}f_{n}(x_{n-1})}{\sqrt{\epsilon+v_{n,i}}}.\tag{30}$$

Adagrad. As explained in Section 2.2, we have αn = α for α > 0. Using the smoothness of F (8), we have

$$F(x_{n})\leq F(x_{n-1})-\alpha\nabla F(x_{n-1})^{T}u_{n}+\frac{\alpha^{2}L}{2}\left\|u_{n}\right\|_{2}^{2}.$$

Taking the conditional expectation with respect to f1, . . . , fn−1, we can apply the descent Lemma 5.1. Notice that due to the a.s. ℓ∞ bound on the gradients (7), we have for any i ∈ [d], √(ε + ṽn,i) ≤ R√n, so that,

$${\frac{\alpha\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2{\sqrt{\epsilon+{\tilde{v}}_{n,i}}}}}\geq{\frac{\alpha\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2R{\sqrt{n}}}}.$$

This gives us

$$\mathbb{E}_{n-1}\left[F(x_{n})\right]\leq F(x_{n-1})-\frac{\alpha}{2R\sqrt{n}}\left\|\nabla F(x_{n-1})\right\|_{2}^{2}+\left(2\alpha R+\frac{\alpha^{2}L}{2}\right)\mathbb{E}_{n-1}\left[\left\|u_{n}\right\|_{2}^{2}\right].$$

Summing the previous inequality for all n ∈ [N], taking the complete expectation, and using that √n ≤ √N, gives us

$$\mathbb{E}\left[F(x_{N})\right]\leq F(x_{0})-\frac{\alpha}{2R\sqrt{N}}\sum_{n=0}^{N-1}\mathbb{E}\left[\|\nabla F(x_{n})\|_{2}^{2}\right]+\left(2\alpha R+\frac{\alpha^{2}L}{2}\right)\sum_{n=1}^{N}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right].$$

From there, we can bound the last sum on the right-hand side using Lemma 5.2 once for each dimension. Rearranging the terms, we obtain the result of Theorem 1.

Adam. As given by (5) in Section 2.2, we have αn = α√((1 − β2^n)/(1 − β2)) for α > 0. Using the smoothness of F defined in (8), we have

$$F(x_{n})\leq F(x_{n-1})-\alpha_{n}\nabla F(x_{n-1})^{T}u_{n}+\frac{\alpha_{n}^{2}L}{2}\left\|u_{n}\right\|_{2}^{2}.\tag{33}$$

We have for any i ∈ [d], √(ε + ṽn,i) ≤ R√(Σ_{j=0}^{n−1} β2^j) = R√((1 − β2^n)/(1 − β2)), thanks to the a.s. ℓ∞ bound on the gradients (7), so that,

$$\alpha_{n}\frac{\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2\sqrt{\epsilon+\tilde{v}_{n,i}}}\geq\frac{\alpha\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2R}.\tag{34}$$

Taking the conditional expectation with respect to f1, . . . , fn−1, we can apply the descent Lemma 5.1 and use (34) to obtain from (33),

$$\mathbb{E}_{n-1}\left[F(x_{n})\right]\leq F(x_{n-1})-\frac{\alpha}{2R}\left\|\nabla F(x_{n-1})\right\|_{2}^{2}+\left(2\alpha_{n}R+\frac{\alpha_{n}^{2}L}{2}\right)\mathbb{E}_{n-1}\left[\left\|u_{n}\right\|_{2}^{2}\right].$$

Given that β2 < 1, we have αn ≤ α/√(1 − β2). Summing the previous inequality for all n ∈ [N] and taking the complete expectation yields

$$\mathbb{E}\left[F(x_{N})\right]\leq F(x_{0})-\frac{\alpha}{2R}\sum_{n=0}^{N-1}\mathbb{E}\left[\|\nabla F(x_{n})\|_{2}^{2}\right]+\left(\frac{2\alpha R}{\sqrt{1-\beta_{2}}}+\frac{\alpha^{2}L}{2(1-\beta_{2})}\right)\sum_{n=1}^{N}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right].$$

Applying Lemma 5.2 for each dimension and rearranging the terms finishes the proof of Theorem 2.
413 |
+
## 6 Experiments
|
414 |
+
|
415 |
+
On Figure 1, we compare the effective dependency of the average squared norm of the gradient in the parameters α, β1 and β2 for Adam, when used on a toy task and CIFAR-10.
|
416 |
+
|
417 |
+
## 6.1 Setup
|
418 |
+
|
419 |
+
Toy problem. In order to support the bounds presented in Section 4, in particular the dependency in β2, we test Adam on a specifically crafted toy problem. We take x ∈ R
|
420 |
+
6 and define for all i ∈ [6], pi = 10−i. We take (Qi)i∈[6], Bernoulli variables with P [Qi = 1] = pi. We then define f for all x ∈ R
|
421 |
+
d as
|
422 |
+
|
423 |
+
$$f(x)=\sum_{i\in[6]}(1-Q_{i})\,{\rm Huber}(x_{i}-1)+\frac{Q_{i}}{\sqrt{p_{i}}}\,{\rm Huber}(x_{i}+1),\tag{35}$$
|
424 |
+
|
425 |
+
with for all y ∈ R,
|
426 |
+
|
427 |
+
$$\mathrm{Huber}(y)={\left\{\begin{array}{l l}{\quad{\frac{y^{2}}{2}}}&{{\mathrm{when~}}|y|\leq1}\\ {\quad|y|-{\frac{1}{2}}}&{{\mathrm{otherwise.}}}\end{array}\right.}$$
|
428 |
+
|
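A minimal sketch of the stochastic objective (35) is given below; the Bernoulli variables Qi are re-drawn at every call, mimicking the oracle, and the function names are ours rather than the exact experimental code.

```python
import numpy as np

def huber(y):
    """Huber function used in (35)."""
    return np.where(np.abs(y) <= 1, 0.5 * y ** 2, np.abs(y) - 0.5)

def toy_loss(x, rng):
    """One stochastic sample f(x) of the objective (35), with p_i = 10^(-i), i = 1..6."""
    p = 10.0 ** -np.arange(1, 7)
    q = rng.random(6) < p              # Bernoulli draws Q_i with P[Q_i = 1] = p_i
    return float(np.sum((1 - q) * huber(x - 1) + q / np.sqrt(p) * huber(x + 1)))

# Example call with x = 0 (loss usually dominated by the Huber(x_i - 1) terms).
print(toy_loss(np.zeros(6), np.random.default_rng(0)))
```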
![10_image_0.png](10_image_0.png)

Figure 1: Observed average squared norm of the objective gradients after a fixed number of iterations when varying a single parameter out of α, 1 − β1 and 1 − β2, on a toy task (left, 10⁶ iterations) and on CIFAR-10 (right, 600 epochs with a batch size of 128). (a) Average squared norm of the gradient on the toy task, see Section 6 for more details; for the α and 1 − β2 curves, we initialize close to the optimum to make the F0 − F∗ term negligible. (b) Average squared norm of the gradient of a small convolutional model (Gitman & Ginsburg, 2017) trained on CIFAR-10, with a random initialization; the full gradient is evaluated every epoch. All curves are averaged over 3 runs; error bars are negligible except for small values of α on CIFAR-10. See Section 6 for details.

![10_image_1.png](10_image_1.png)

Figure 2: Training trajectories for varying values of α ∈ {10⁻⁴, 10⁻³}, β1 ∈ {0, 0.5, 0.8, 0.9, 0.99} and β2 ∈ {0.9, 0.99, 0.999, 0.9999}. The top row (resp. bottom) gives the training loss (resp. squared norm of the expected gradient). The left column uses all corrective terms in the original Adam algorithm, the middle column drops the corrective term on mn (equivalent to our proof setup), and the right column drops the corrective term on vn. We notice a limited impact when dropping the corrective term on mn, but dropping the corrective term on vn has a much stronger impact.
Intuitively, each coordinate is pointing most of the time towards 1, but exceptionally towards −1 with a weight of 1/√pi. Those rare events happen less and less often as i increases, but with an increasing weight. Those weights are chosen so that all the coordinates of the gradient have the same variance¹. It is necessary to take different probabilities for each coordinate: if we use the same p for all, we observe a phase transition when 1 − β2 ≈ p, but not the continuous improvement we obtain on Figure 1a.

We plot the variation of E[‖∇F(xτ)‖²₂] after 10⁶ iterations with batch size 1 when varying either α, 1 − β1 or 1 − β2 through a range of 13 values uniformly spaced in log-scale between 10⁻⁶ and 1. When varying α, we take β1 = 0 and β2 = 1 − 10⁻⁶. When varying β1, we take α = 10⁻⁵ and β2 = 1 − 10⁻⁶ (i.e. β2 is such that we are in the Adagrad-like regime). Finally, when varying β2, we take β1 = 0 and α = 10⁻⁶. We start from x0 close to the optimum by running first 10⁶ iterations with α = 10⁻⁴, then 10⁶ iterations with α = 10⁻⁵, always with β2 = 1 − 10⁻⁶. This allows us to have F(x0) − F∗ ≈ 0 in (11) and (13) and to focus on the second part of both bounds. All curves are averaged over three runs. Error bars are plotted but not visible in log-log scale.

CIFAR-10. We train a simple convolutional network (Gitman & Ginsburg, 2017) on the CIFAR-10² image classification dataset. Starting from a random initialization, we train the model on a single V100 for 600 epochs with a batch size of 128, evaluating the full training gradient after each epoch. This is a proxy for E[‖∇F(xτ)‖²₂], which would be too costly to evaluate exactly. All runs use the default configuration α = 10⁻³, β2 = 0.999 and β1 = 0.9, and we then change one parameter at a time.

We take α from a uniform range in log-space between 10⁻⁶ and 10⁻² with 9 values; for 1 − β1 the range is from 10⁻⁵ to 0.3 with 9 values, and for 1 − β2, from 10⁻⁶ to 10⁻¹ with 11 values. Unlike for the toy problem, we do not initialize close to the optimum, as even after 600 epochs, the norm of the gradients indicates that we are not at a critical point. All curves are averaged over three runs. Error bars are plotted but not visible in log-log scale, except for large values of α.

¹We deviate from the a.s. bounded gradient assumption for this experiment; see Section 4.2 for a discussion of a.s. bounds vs. bounds in expectation.
²https://www.cs.toronto.edu/~kriz/cifar.html
## 6.2 Analysis

Toy problem. Looking at Figure 1a, we observe a continual improvement as β2 increases. Fitting a linear regression in log-log scale of E[‖∇F(xτ)‖²₂] with respect to 1 − β2 gives a slope of 0.56, which is compatible with our bound (11), in particular the dependency in O(1/√(1 − β2)). As we initialize close to the optimum, a small step size α yields as expected the best performance. Doing the same regression in log-log scale, we find a slope of 0.87, which is again compatible with the O(α) dependency of the second term in (11). Finally, we observe a limited impact of β1, except when 1 − β1 is small. The regression in log-log scale gives a slope of −0.16, while our bound predicts a slope of −1.

CIFAR-10. Let us now turn to Figure 1b. As we start from random weights for this problem, we observe that a large step size gives the best performance, although we observe a high variance for the largest α. This indicates that training becomes unstable for large α, which is not predicted by the theory. This is likely a consequence of the bounded gradient assumption (7) not being verified for deep neural networks. We observe a small improvement as 1 − β2 decreases, although nowhere near what we observed on our toy problem. Finally, we observe a sweet spot for the momentum β1, not predicted by our theory. We conjecture that this is due to the variance reduction effect of momentum (averaging of the gradients over multiple mini-batches, while the weights have not moved so much as to invalidate past information).
## 6.3 Impact Of The Adam Corrective Terms

Using the same experimental setup on CIFAR-10, we compare the impact of removing either of the corrective terms of the original Adam algorithm (Kingma & Ba, 2015), as discussed in Section 2.2. We ran a cartesian product of trainings for 100 epochs, with β1 ∈ {0, 0.5, 0.8, 0.9, 0.99}, β2 ∈ {0.9, 0.99, 0.999, 0.9999}, and α ∈ {10⁻⁴, 10⁻³}. We report both the training loss and the norm of the expected gradient on Figure 2. We notice a limited difference when dropping the corrective term on mn, but dropping the term on vn has an important impact on the training trajectories. This confirms our motivation for simplifying the proof by removing the corrective term on the momentum.
## 7 Conclusion

We provide a simple proof of the convergence of Adam and Adagrad without heavy-ball style momentum. Our analysis highlights a link between the two algorithms: with the right hyper-parameters, Adam converges like Adagrad. The extension to heavy-ball momentum is more complex, but we significantly improve the dependence on the momentum parameter for Adam, Adagrad, as well as SGD. We exhibit a toy problem where the dependency on α and β2 experimentally matches our prediction. However, we do not predict the practical interest of momentum, so that improvements to the proof are needed for future work.

## Broader Impact Statement

The present theoretical results on the optimization of non-convex losses in a stochastic setting impact our understanding of the training of deep neural networks. They might allow a deeper understanding of neural network training dynamics and thus reinforce existing deep learning applications. We foresee no direct negative impact to society.
## References

Alekh Agarwal, Martin J Wainwright, Peter L Bartlett, and Pradeep K Ravikumar. Information-theoretic lower bounds on the oracle complexity of convex optimization. In *Advances in Neural Information Processing Systems*, 2009.

Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the convergence of a class of Adam-type algorithms for non-convex optimization. In *International Conference on Learning Representations*, 2019.

Aaron Defazio. Momentum via primal averaging: Theoretical insights and learning rate schedules for non-convex optimization. *arXiv preprint arXiv:2010.00406*, 2020.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12(Jul), 2011.

John Duchi, Michael I Jordan, and Brendan McMahan. Estimation, optimization, and parallelism when data is sparse. In *Advances in Neural Information Processing Systems 26*, 2013.

Biyi Fang and Diego Klabjan. Convergence analyses of online Adam algorithm in convex setting and two-layer ReLU neural network. *arXiv preprint arXiv:1905.09356*, 2019.

Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, and Rachel Ward. The power of adaptivity in SGD: Self-tuning step sizes with unbounded gradients and affine variance. In Po-Ling Loh and Maxim Raginsky (eds.), *Proceedings of Thirty Fifth Conference on Learning Theory*, Proceedings of Machine Learning Research. PMLR, 2022.

Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. *SIAM Journal on Optimization*, 23(4), 2013.

Igor Gitman and Boris Ginsburg. Comparison of batch normalization and weight normalization algorithms for the large-scale image classification. *arXiv preprint arXiv:1709.08145*, 2017.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT Press, 2016.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *Proc. of the International Conference on Learning Representations (ICLR)*, 2015.

Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. Canonical tensor decomposition for knowledge base completion. *arXiv preprint arXiv:1806.07297*, 2018.

Xiaoyu Li and Francesco Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes. In *AISTATS*, 2019.

Yanli Liu, Yuan Gao, and Wotao Yin. An improved analysis of stochastic gradient descent with momentum. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 18261–18271. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/d3f5d4de09ea19461dab00590df91e4f-Paper.pdf.

H Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In *COLT*, 2010.

Boris T Polyak. Some methods of speeding up the convergence of iteration methods. *USSR Computational Mathematics and Mathematical Physics*, 4(5), 1964.

Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In *Proc. of the International Conference on Learning Representations (ICLR)*, 2018.

Naichen Shi, Dawei Li, Mingyi Hong, and Ruoyu Sun. RMSProp converges with proper hyper-parameter. In *International Conference on Learning Representations*, 2020.

T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp. COURSERA: Neural Networks for Machine Learning, 2012.

Rachel Ward, Xiaoxia Wu, and Leon Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. In *International Conference on Machine Learning*, 2019.

Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In *Advances in Neural Information Processing Systems*, 2017.

Tianbao Yang, Qihang Lin, and Zhe Li. Unified convergence analysis of stochastic momentum methods for convex and non-convex optimization. *arXiv preprint arXiv:1604.03257*, 2016.

Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization. *arXiv preprint arXiv:1808.05671*, 2018.

Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. A sufficient condition for convergences of Adam and RMSProp. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2019a.

Fangyu Zou, Li Shen, Zequn Jie, Ju Sun, and Wei Liu. Weighted Adagrad with unified momentum. *arXiv preprint arXiv:1808.03408*, 2019b.
539 |
+
## Supplementary Material For A Simple Convergence Proof Of Adam And Adagrad

## Overview

In Section A, we detail the results for the convergence of Adam and Adagrad with heavy-ball momentum. For an overview of the contributions of our proof technique, see Section A.4.

Then in Section B, we show how our technique also applies to SGD and improves its dependency in β1 compared with previous work by Yang et al. (2016), from O((1 − β1)^(−2)) to O((1 − β1)^(−1)). The proof is simpler than for Adam/Adagrad, and shows the generality of our technique.
## A Convergence Of Adaptive Methods With Heavy-Ball Momentum

## A.1 Setup And Notations

We recall the dynamic system introduced in Section 2.3. In the rest of this section, we take an iteration n ∈ N∗, and when needed, i ∈ [d] refers to a specific coordinate. Given x0 ∈ R^d our starting point, m0 = 0, and v0 = 0, we define

$$\begin{cases}m_{n,i}&=\beta_{1}m_{n-1,i}+\nabla_{i}f_{n}(x_{n-1}),\\ v_{n,i}&=\beta_{2}v_{n-1,i}+\left(\nabla_{i}f_{n}(x_{n-1})\right)^{2},\\ x_{n,i}&=x_{n-1,i}-\alpha_{n}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}.\end{cases}\tag{A.1}$$
For Adam, the step size is given by

$$\alpha_{n}=\alpha(1-\beta_{1}){\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}}.\tag{A.2}$$

For Adagrad (potentially extended with heavy-ball momentum), we have β2 = 1 and

$$\alpha_{n}=\alpha(1-\beta_{1}).\tag{A.3}$$
Notice we include the factor 1 − β1 in the step size rather than in (A.1), as this allows for a more elegant proof. The original Adam algorithm included compensation factors for both β1 and β2 (Kingma & Ba, 2015) to correct the initial scale of m and v, which are initialized at 0. Adam would be exactly recovered by replacing (A.2) with

$$\alpha_{n}=\alpha\frac{1-\beta_{1}}{1-\beta_{1}^{n}}\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}.\tag{A.4}$$

However, the denominator 1 − β1^n potentially makes (αn)n∈N∗ non monotonic, which complicates the proof. Thus, we instead replace the denominator by its limit value for n → ∞. This has little practical impact as (i) early iterates are noisy because v is averaged over a small number of gradients, so making smaller steps can be more stable, and (ii) for β1 = 0.9 (Kingma & Ba, 2015), (A.2) differs from (A.4) only for the first 50 iterations.
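For readers who prefer code to recursions, here is a minimal NumPy sketch of one step of the variant analyzed in this appendix, i.e. the system (A.1) with the step size (A.2), or (A.3) when β2 = 1. This is an illustrative sketch only; the function name, signature and default hyper-parameter values are our own choices, not something specified in the paper.

```python
import numpy as np

def adam_step_analyzed(x, m, v, grad, n, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of the Adam variant analyzed in Appendix A.

    Implements the recursion (A.1) with the step size (A.2): the (1 - beta1)
    factor is folded into alpha_n, and the 1 - beta1**n bias-correction
    denominator of the original Adam is replaced by its limit value 1.
    Passing beta2=1.0 uses (A.3) instead, i.e. Adagrad with heavy-ball momentum.
    """
    m = beta1 * m + grad                # heavy-ball momentum, no (1 - beta1) factor here
    v = beta2 * v + grad ** 2           # decayed sum of squared gradients
    if beta2 < 1.0:
        alpha_n = alpha * (1 - beta1) * np.sqrt((1 - beta2 ** n) / (1 - beta2))  # (A.2)
    else:
        alpha_n = alpha * (1 - beta1)                                            # (A.3)
    x = x - alpha_n * m / np.sqrt(eps + v)
    return x, m, v
```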
Throughout the proof we note E_{n−1}[·] the conditional expectation with respect to f1, . . . , f_{n−1}. In particular, x_{n−1} and v_{n−1} are deterministic knowing f1, . . . , f_{n−1}. We introduce

$$G_{n}=\nabla F(x_{n-1})\quad{\mathrm{~and~}}\quad g_{n}=\nabla f_{n}(x_{n-1}).\tag{A.5}$$
Like in Section 5.2, we introduce the update u_n ∈ R^d, as well as the update without heavy-ball momentum U_n ∈ R^d:

$$u_{n,i}=\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}\quad\text{and}\quad U_{n,i}=\frac{g_{n,i}}{\sqrt{\epsilon+v_{n,i}}}.\tag{A.6}$$
For any k ∈ N with k < n, we define ṽ_{n,k} ∈ R^d by

$$\tilde{v}_{n,k,i}=\beta_{2}^{k}v_{n-k,i}+\mathbb{E}_{n-k-1}\left[\sum_{j=n-k+1}^{n}\beta_{2}^{n-j}g_{j,i}^{2}\right],\tag{A.7}$$

i.e. the contribution from the k last gradients is replaced by its expected value for known values of f1, . . . , f_{n−k−1}. For k = 1, we recover the same definition as in (18).
## A.2 Results

For any total number of iterations N ∈ N∗, we define τ_N a random index with value in {0, . . . , N − 1}, verifying

$$\forall j\in\mathbb{N},j<N,\quad\mathbb{P}\left[\tau=j\right]\propto1-\beta_{1}^{N-j}.\tag{A.8}$$

If β1 = 0, this is equivalent to sampling τ uniformly in {0, . . . , N − 1}. If β1 > 0, the last few 1/(1 − β1) iterations are sampled rarely, and all iterations older than a few times that number are sampled almost uniformly.
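As a small illustration of this sampling scheme, the distribution (A.8) can be sampled as in the sketch below; the function name and interface are our own choices, not code from the paper.

```python
import numpy as np

def sample_tau(N, beta1, rng=np.random.default_rng()):
    """Sample tau in {0, ..., N-1} with P[tau = j] proportional to 1 - beta1 ** (N - j), cf. (A.8)."""
    j = np.arange(N)
    weights = 1.0 - beta1 ** (N - j)     # down-weights the last ~1/(1 - beta1) iterations
    return rng.choice(N, p=weights / weights.sum())
```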
We bound the expected squared norm of the total gradient at iteration τ, which is standard for non convex stochastic optimization (Ghadimi & Lan, 2013).

Note that like in previous works, the bound worsens as β1 increases, with a dependency of the form O((1 − β1)^(−1)). This is a significant improvement over the existing bound for Adagrad with heavy-ball momentum, which scales as (1 − β1)^(−3) (Zou et al., 2019b), or the best known bound for Adam, which scales as (1 − β1)^(−5) (Zou et al., 2019a).

Technical lemmas to prove the following theorems are introduced in Section A.5, while the proofs of Theorems 3 and 4 are provided in Section A.6.
Theorem 3 (Convergence of Adagrad with momentum). *Given the assumptions from Section 2.3, the iterates x_n defined in Section 2.2 with hyper-parameters verifying β2 = 1, αn = α with α > 0 and 0 ≤ β1 < 1, and τ defined by (9), we have for any N ∈ N∗ such that N > β1/(1 − β1),*

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R\sqrt{N}\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+\frac{\sqrt{N}}{\tilde{N}}E\ln\left(1+\frac{NR^{2}}{\epsilon}\right),\tag{12}$$

*with* $\tilde{N}=N-\frac{\beta_1}{1-\beta_1}$*, and,*

$$E=\alpha d R L+\frac{12d R^{2}}{1-\beta_{1}}+\frac{2\alpha^{2}d L^{2}\beta_{1}}{1-\beta_{1}}.$$
Theorem 4 (Convergence of Adam with momentum). *Given the assumptions from Section 2.3, the iterates x_n defined in Section 2.2 with hyper-parameters verifying 0 < β2 < 1, 0 ≤ β1 < β2, and αn = α(1 − β1)√((1 − β2^n)/(1 − β2)) with α > 0, and τ defined by (9), we have for any N ∈ N∗ such that N > β1/(1 − β1),*

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+E\left(\frac{1}{\tilde{N}}\ln\left(1+\frac{R^{2}}{(1-\beta_{2})\epsilon}\right)-\frac{N}{\tilde{N}}\ln(\beta_{2})\right),\tag{13}$$

*with* $\tilde{N}=N-\frac{\beta_1}{1-\beta_1}$*, and*

$$E=\frac{\alpha d R L(1-\beta_{1})}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})}+\frac{12d R^{2}\sqrt{1-\beta_{1}}}{(1-\beta_{1}/\beta_{2})^{3/2}\sqrt{1-\beta_{2}}}+\frac{2\alpha^{2}d L^{2}\beta_{1}}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})^{3/2}}.$$
## A.3 Analysis Of The Results With Momentum

First notice that taking β1 → 0 in Theorems 3 and 4, we almost recover the same result as stated in Theorems 2 and 1, only losing on the term 4dR², which becomes 12dR².
Simplified expressions with momentum Assuming N ≫ β1/(1 − β1) and β1/β2 ≈ β1, which is verified for typical values of β1 and β2 (Kingma & Ba, 2015), it is possible to simplify the bound for Adam (13) as

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\lessapprox2R\frac{F(x_{0})-F_{*}}{\alpha N}+\left(\frac{\alpha dRL}{1-\beta_{2}}+\frac{12dR^{2}}{(1-\beta_{1})\sqrt{1-\beta_{2}}}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{(1-\beta_{1})(1-\beta_{2})^{3/2}}\right)\left(\frac{1}{N}\ln\left(1+\frac{R^{2}}{\epsilon(1-\beta_{2})}\right)-\ln(\beta_{2})\right).\tag{A.9}$$
Similarly, if we assume N ≫ β1/(1 − β1), we can simplify the bound for Adagrad (12) as

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\lessapprox2R{\frac{F(x_{0})-F_{*}}{\alpha\sqrt{N}}}+{\frac{1}{\sqrt{N}}}\left(\alpha d R L+{\frac{12d R^{2}}{1-\beta_{1}}}+{\frac{2\alpha^{2}d L^{2}\beta_{1}}{1-\beta_{1}}}\right)\ln\left(1+{\frac{N R^{2}}{\epsilon}}\right).\tag{A.10}$$
Optimal finite horizon Adam is still Adagrad We can perform the same finite horizon analysis as in Section 4.3. If we take α = α̃/√N and β2 = 1 − 1/N, then (A.9) simplifies to

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\lessapprox2R\frac{F(x_{0})-F_{*}}{\tilde{\alpha}\sqrt{N}}+\frac{1}{\sqrt{N}}\left(\tilde{\alpha}dRL+\frac{12dR^{2}}{1-\beta_{1}}+\frac{2\tilde{\alpha}^{2}dL^{2}\beta_{1}}{1-\beta_{1}}\right)\left(\ln\left(1+\frac{NR^{2}}{\epsilon}\right)+1\right).\tag{A.11}$$

The term (1 − β2)^(3/2) in the denominator in (A.9) is indeed compensated by the α² in the numerator, and we again recover the proper ln(N)/√N convergence rate, which matches (A.10) up to a +1 term next to the log.
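As a quick numerical illustration of this correspondence (not from the paper; the constants below are arbitrary placeholders), one can evaluate the simplified bound (A.9) with α = α̃/√N and β2 = 1 − 1/N, and the Adagrad bound (A.10) with the same α̃, and check that they are of the same order, the difference coming from the +1 term next to the log:

```python
import math

# Arbitrary placeholder constants, only to illustrate the scaling.
d, R, L, eps = 10, 1.0, 1.0, 1e-8
f0_minus_fstar = 1.0            # stands for F(x_0) - F_*
alpha_tilde, beta1, N = 0.1, 0.9, 10_000

def adam_bound(alpha, beta2):
    """Right-hand side of the simplified Adam bound (A.9)."""
    E = (alpha * d * R * L / (1 - beta2)
         + 12 * d * R**2 / ((1 - beta1) * math.sqrt(1 - beta2))
         + 2 * alpha**2 * d * L**2 * beta1 / ((1 - beta1) * (1 - beta2)**1.5))
    return (2 * R * f0_minus_fstar / (alpha * N)
            + E * (math.log(1 + R**2 / (eps * (1 - beta2))) / N - math.log(beta2)))

def adagrad_bound(alpha):
    """Right-hand side of the simplified Adagrad bound (A.10)."""
    E = alpha * d * R * L + 12 * d * R**2 / (1 - beta1) + 2 * alpha**2 * d * L**2 * beta1 / (1 - beta1)
    return 2 * R * f0_minus_fstar / (alpha * math.sqrt(N)) + E * math.log(1 + N * R**2 / eps) / math.sqrt(N)

print(adam_bound(alpha_tilde / math.sqrt(N), 1 - 1 / N))   # finite-horizon Adam, (A.11)
print(adagrad_bound(alpha_tilde))                          # Adagrad with the same alpha_tilde
```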
## A.4 Overview Of The Proof, Contributions And Limitations

There are a number of steps to the proof. First we derive a lemma similar in spirit to the descent Lemma 5.1. There are two differences: first, when computing the dot product between the current expected gradient and each past gradient contained in the momentum, we have to re-center the expected gradient to its value in the past, using the smoothness assumption. Besides, we now have to decorrelate more terms between the numerator and denominator, as the numerator contains not only the latest gradient but a decaying sum of the past ones. We similarly extend Lemma 5.2 to support momentum specific terms. The rest of the proof follows mostly as in Section 5, except with a few more manipulations to regroup the gradient terms coming from different iterations.

Compared with previous work (Zou et al., 2019b;a), the re-centering of past gradients in (A.14) is a key aspect to improve the dependency in β1, with a small price to pay using the smoothness of F, which is compensated by the introduction of the extra G²_{n−k,i} terms in Lemma A.1. Then, a tight handling of the different summations, as well as the introduction of a non uniform sampling of the iterates (A.8), which naturally arises when grouping the different terms in (A.49), allows us to obtain the overall improved dependency in O((1 − β1)^(−1)).

The same technique can be applied to SGD, the proof becoming simpler as there is no correlation between the step size and the gradient estimate, see Section B. If you want to better understand the handling of momentum without the added complexity of adaptive methods, we recommend starting with this proof.

A limitation of the proof technique is that we do not show that heavy-ball momentum can lead to a variance reduction of the update. Either more powerful probabilistic results, or extra regularity assumptions, could allow us to further improve our worst case bounds on the variance of the update, which in turn might lead to a bound with an improvement when using heavy-ball momentum.
## A.5 Technical Lemmas

We first need an updated version of Lemma 5.1 that includes momentum.

Lemma A.1 (Adaptive update with momentum approximately follows a descent direction). *Given x0 ∈ R^d, the iterates defined by the system (A.1) for (αj)j∈N∗ that is non-decreasing, and under the conditions (6), (7), and (8), as well as 0 ≤ β1 < β2 ≤ 1, we have for all iterations n ∈ N∗,*

$$\mathbb{E}\left[\sum_{i\in[d]}G_{n,i}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}\right]\geq\frac{1}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]-\frac{\alpha_{n}^{2}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\mathbb{E}\left[\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}\right]-\frac{3R}{\sqrt{1-\beta_{1}}}\,\mathbb{E}\left[\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right].\tag{A.12}$$
Proof. We use (22) multiple times in this proof, which we repeat here for convenience,

$$\forall\lambda>0,\,x,y\in\mathbb{R},\quad xy\leq{\frac{\lambda}{2}}x^{2}+{\frac{y^{2}}{2\lambda}}.\tag{A.13}$$
Let us take an iteration n ∈ N∗ for the duration of the proof. We have

$$\sum_{i\in[d]}G_{n,i}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}=\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n,i}\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}=\underbrace{\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n-k,i}\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}}_{A}+\underbrace{\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\left(G_{n,i}-G_{n-k,i}\right)\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}}_{B}.\tag{A.14}$$
Let us now take an index 0 ≤ k ≤ n − 1. We show that the contribution of past gradients G_{n−k} and g_{n−k} due to the heavy-ball momentum can be controlled thanks to the decay term β1^k. Let us first have a look at B. Using (A.13) with

$$\lambda=\frac{\sqrt{1-\beta_{1}}}{2R\sqrt{k+1}},\ x=|G_{n,i}-G_{n-k,i}|\,,\ y=\frac{|g_{n-k,i}|}{\sqrt{\epsilon+v_{n,i}}},$$

we have

$$|B|\leq\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\left({\frac{\sqrt{1-\beta_{1}}}{4R\sqrt{k+1}}}\left(G_{n,i}-G_{n-k,i}\right)^{2}+{\frac{R\sqrt{k+1}}{\sqrt{1-\beta_{1}}}}{\frac{g_{n-k,i}^{2}}{\epsilon+v_{n,i}}}\right).\tag{A.15}$$
Notice first that for any dimension i ∈ [d], ε + v_{n,i} ≥ ε + β2^k v_{n−k,i} ≥ β2^k (ε + v_{n−k,i}), so that

$$\frac{g_{n-k,i}^{2}}{\epsilon+v_{n,i}}\leq\frac{1}{\beta_{2}^{k}}U_{n-k,i}^{2}.\tag{A.16}$$
Besides, using the L-smoothness of F given by (8), we have

$$\|G_{n}-G_{n-k}\|_{2}^{2}\leq L^{2}\left\|x_{n-1}-x_{n-k-1}\right\|_{2}^{2}=L^{2}\left\|\sum_{l=1}^{k}\alpha_{n-l}u_{n-l}\right\|_{2}^{2}\leq\alpha_{n}^{2}L^{2}k\sum_{l=1}^{k}\left\|u_{n-l}\right\|_{2}^{2},\tag{A.17}$$

using Jensen inequality and the fact that αn is non-decreasing. Injecting (A.16) and (A.17) into (A.15), we obtain
$$\begin{aligned}|B|&\leq\sum_{k=0}^{n-1}\left(\frac{\alpha_{n}^{2}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\beta_{1}^{k}\sqrt{k}\sum_{l=1}^{k}\|u_{n-l}\|_{2}^{2}\right)+\sum_{k=0}^{n-1}\left(\frac{R}{\sqrt{1-\beta_{1}}}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right)\\ &=\sqrt{1-\beta_{1}}\,\frac{\alpha_{n}^{2}L^{2}}{4R}\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}+\frac{R}{\sqrt{1-\beta_{1}}}\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}.\end{aligned}\tag{A.18}$$
Now going back to the A term in (A.14), we will study the main term of the summation, i.e. for i ∈ [d] and k < n,

$$\mathbb{E}\left[G_{n-k,i}{\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}}\right]=\mathbb{E}\left[\nabla_{i}F(x_{n-k-1}){\frac{\nabla_{i}f_{n-k}(x_{n-k-1})}{\sqrt{\epsilon+v_{n,i}}}}\right].\tag{A.19}$$
Notice that we could almost apply Lemma 5.1 to it, except that we have v_{n,i} in the denominator instead of v_{n−k,i}. Thus we will need to extend the proof to decorrelate more terms. We will further drop indices in the rest of the proof, noting G = G_{n−k,i}, g = g_{n−k,i}, ṽ = ṽ_{n,k+1,i} and v = v_{n,i}. Finally, let us note

$$\delta^{2}=\sum_{j=n-k}^{n}\beta_{2}^{n-j}g_{j,i}^{2}\qquad\text{and}\qquad r^{2}=\mathbb{E}_{n-k-1}\left[\delta^{2}\right].\tag{A.20}$$
In particular we have ṽ − v = r² − δ². With our new notations, we can rewrite (A.19) as

$$\begin{aligned}\mathbb{E}\left[G\frac{g}{\sqrt{\epsilon+v}}\right]&=\mathbb{E}\left[G\frac{g}{\sqrt{\epsilon+\tilde{v}}}+Gg\left(\frac{1}{\sqrt{\epsilon+v}}-\frac{1}{\sqrt{\epsilon+\tilde{v}}}\right)\right]\\ &=\mathbb{E}\left[\mathbb{E}_{n-k-1}\left[G\frac{g}{\sqrt{\epsilon+\tilde{v}}}\right]+Gg\frac{r^{2}-\delta^{2}}{\sqrt{\epsilon+v}\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)}\right]\\ &=\mathbb{E}\left[\frac{G^{2}}{\sqrt{\epsilon+\tilde{v}}}\right]+\mathbb{E}\bigg[\underbrace{Gg\frac{r^{2}-\delta^{2}}{\sqrt{\epsilon+v}\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)}}_{C}\bigg].\end{aligned}\tag{A.21}$$
We first focus on C:

$$|C|\leq\underbrace{|Gg|\;\frac{r^{2}}{\sqrt{\epsilon+v}\,(\epsilon+\tilde{v})}}_{\kappa}+\underbrace{|Gg|\;\frac{\delta^{2}}{(\epsilon+v)\sqrt{\epsilon+\tilde{v}}}}_{\rho},$$

due to the fact that √(ε + v) + √(ε + ṽ) ≥ max(√(ε + v), √(ε + ṽ)) and |r² − δ²| ≤ r² + δ².
Applying (A.13) to κ with

$$\lambda={\frac{\sqrt{1-\beta_{1}}\sqrt{\epsilon+\tilde{v}}}{2}},\;x={\frac{|G|}{\sqrt{\epsilon+\tilde{v}}}},\;y={\frac{|g|\,r^{2}}{\sqrt{\epsilon+\tilde{v}}\sqrt{\epsilon+v}}},$$

we obtain

$$\kappa\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{1}{\sqrt{1-\beta_{1}}}\frac{g^{2}r^{4}}{(\epsilon+\tilde{v})^{3/2}(\epsilon+v)}.$$

Given that ε + ṽ ≥ r², and taking the conditional expectation, we can simplify as

$$\mathbb{E}_{n-k-1}\left[\kappa\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{1}{\sqrt{1-\beta_{1}}}\frac{r^{2}}{\sqrt{\epsilon+\tilde{v}}}\mathbb{E}_{n-k-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{A.22}$$
Now turning to ρ, we use (A.13) with

$$\lambda=\frac{\sqrt{1-\beta_{1}}\sqrt{\epsilon+\tilde{v}}}{2r^{2}},\;x=\frac{|G\delta|}{\sqrt{\epsilon+\tilde{v}}},\;y=\frac{|\delta g|}{\epsilon+v},$$

we obtain

$$\rho\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}\frac{\delta^{2}}{r^{2}}+\frac{1}{\sqrt{1-\beta_{1}}}\frac{r^{2}}{\sqrt{\epsilon+\tilde{v}}}\frac{g^{2}\delta^{2}}{(\epsilon+v)^{2}}.\tag{A.23}$$

Given that ε + v ≥ δ², and E_{n−k−1}[δ²/r²] = 1, we obtain after taking the conditional expectation,

$$\mathbb{E}_{n-k-1}\left[\rho\right]\leq{\frac{G^{2}}{4{\sqrt{\epsilon+\tilde{v}}}}}+{\frac{1}{{\sqrt{1-\beta_{1}}}}}{\frac{r^{2}}{{\sqrt{\epsilon+\tilde{v}}}}}\mathbb{E}_{n-k-1}\left[{\frac{g^{2}}{\epsilon+v}}\right].\tag{A.24}$$

Notice that in (A.23), we possibly divide by zero. It suffices to notice that if r² = 0 then δ² = 0 a.s., so that ρ = 0 and (A.24) is still verified. Summing (A.22) and (A.24), we get

$$\mathbb{E}_{n-k-1}\left[|C|\right]\leq{\frac{G^{2}}{2{\sqrt{\epsilon+\tilde{v}}}}}+{\frac{2}{{\sqrt{1-\beta_{1}}}}}{\frac{r^{2}}{{\sqrt{\epsilon+\tilde{v}}}}}\mathbb{E}_{n-k-1}\left[{\frac{g^{2}}{\epsilon+v}}\right].\tag{A.25}$$
Given that r ≤ √(ε + ṽ) by definition of ṽ, and that using (7), r ≤ √(k + 1) R, we have³, reintroducing the indices we had dropped,

$$\mathbb{E}_{n-k-1}\left[|C|\right]\leq{\frac{G_{n-k,i}^{2}}{2\sqrt{\epsilon+{\tilde{v}}_{n,k+1,i}}}}+{\frac{2R}{\sqrt{1-\beta_{1}}}}{\sqrt{k+1}}\,\mathbb{E}_{n-k-1}\left[{\frac{g_{n-k,i}^{2}}{\epsilon+v_{n,i}}}\right].\tag{A.26}$$

Taking the complete expectation and using that by definition ε + v_{n,i} ≥ ε + β2^k v_{n−k,i} ≥ β2^k (ε + v_{n−k,i}), we get

$$\mathbb{E}\left[|C|\right]\leq{\frac{1}{2}}\mathbb{E}\left[{\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+{\tilde{v}}_{n,k+1,i}}}}\right]+{\frac{2R}{\sqrt{1-\beta_{1}}\,\beta_{2}^{k}}}{\sqrt{k+1}}\,\mathbb{E}\left[{\frac{g_{n-k,i}^{2}}{\epsilon+v_{n-k,i}}}\right].\tag{A.27}$$
Injecting (A.27) into (A.21) gives us

$$\begin{aligned}\mathbb{E}\left[A\right]&\geq\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\left(\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]-\frac{1}{2}\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]-\frac{2R}{\sqrt{1-\beta_{1}}\,\beta_{2}^{k}}\sqrt{k+1}\,\mathbb{E}\left[\frac{g_{n-k,i}^{2}}{\epsilon+v_{n-k,i}}\right]\right)\\ &=\frac{1}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]-\frac{2R}{\sqrt{1-\beta_{1}}}\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\mathbb{E}\left[\|U_{n-k}\|_{2}^{2}\right].\end{aligned}\tag{A.28}$$

Injecting (A.28) and (A.18) into (A.14) finishes the proof.

Similarly, we will need an updated version of Lemma 5.2.
Lemma A.2 (sum of ratios of the square of a decayed sum and a decayed sum of squares). *We assume we have 0 < β2 ≤ 1 and 0 < β1 < β2, and a sequence of real numbers (an)n∈N∗. We define* $b_{n}=\sum_{j=1}^{n}\beta_{2}^{n-j}a_{j}^{2}$ *and* $c_{n}=\sum_{j=1}^{n}\beta_{1}^{n-j}a_{j}$*. Then we have*

$$\sum_{j=1}^{n}{\frac{c_{j}^{2}}{\epsilon+b_{j}}}\leq{\frac{1}{(1-\beta_{1})(1-\beta_{1}/\beta_{2})}}\left(\ln\left(1+{\frac{b_{n}}{\epsilon}}\right)-n\ln(\beta_{2})\right).\tag{A.29}$$
Proof. Let us take j ∈ N∗, j ≤ n; using Jensen inequality we have

$$c_{j}^{2}\leq\frac{1}{1-\beta_{1}}\sum_{l=1}^{j}\beta_{1}^{j-l}a_{l}^{2},$$

so that

$$\frac{c_{j}^{2}}{\epsilon+b_{j}}\leq\frac{1}{1-\beta_{1}}\sum_{l=1}^{j}\beta_{1}^{j-l}\frac{a_{l}^{2}}{\epsilon+b_{j}}.$$

Given that for l ∈ [j], we have by definition ε + b_j ≥ ε + β2^(j−l) b_l ≥ β2^(j−l) (ε + b_l), we get

$$\frac{c_{j}^{2}}{\epsilon+b_{j}}\leq\frac{1}{1-\beta_{1}}\sum_{l=1}^{j}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{j-l}\frac{a_{l}^{2}}{\epsilon+b_{l}}.\tag{A.30}$$

Thus, when summing over all j ∈ [n], we get

$$\begin{aligned}\sum_{j=1}^{n}\frac{c_{j}^{2}}{\epsilon+b_{j}}&\leq\frac{1}{1-\beta_{1}}\sum_{j=1}^{n}\sum_{l=1}^{j}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{j-l}\frac{a_{l}^{2}}{\epsilon+b_{l}}\\ &=\frac{1}{1-\beta_{1}}\sum_{l=1}^{n}\frac{a_{l}^{2}}{\epsilon+b_{l}}\sum_{j=l}^{n}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{j-l}\\ &\leq\frac{1}{(1-\beta_{1})(1-\beta_{1}/\beta_{2})}\sum_{l=1}^{n}\frac{a_{l}^{2}}{\epsilon+b_{l}}.\end{aligned}\tag{A.31}$$

Applying Lemma 5.2, we obtain (A.29).

³ Note that we do not need the almost sure bound on the gradient; a bound on E[‖∇f(x)‖²_∞] would be sufficient.
We also need two technical lemmas on the sum of series.

Lemma A.3 (sum of a geometric term times a square root). *Given 0 < a < 1 and Q ∈ N, we have,*

$$\sum_{q=0}^{Q-1}a^{q}\sqrt{q+1}\leq\frac{1}{1-a}\left(1+\frac{\sqrt{\pi}}{2\sqrt{-\ln(a)}}\right)\leq\frac{2}{(1-a)^{3/2}}.\tag{A.32}$$
Proof. We first need to study the following integral:

$$\begin{aligned}\int_{0}^{\infty}\frac{a^{x}}{2\sqrt{x}}\,\mathrm{d}x&=\int_{0}^{\infty}\frac{\mathrm{e}^{\ln(a)x}}{2\sqrt{x}}\,\mathrm{d}x\quad\text{, then introducing }y=\sqrt{x},\\ &=\int_{0}^{\infty}\mathrm{e}^{\ln(a)y^{2}}\,\mathrm{d}y\quad\text{, then introducing }u=\sqrt{-2\ln(a)}\,y,\\ &=\frac{1}{\sqrt{-2\ln(a)}}\int_{0}^{\infty}\mathrm{e}^{-u^{2}/2}\,\mathrm{d}u=\frac{\sqrt{\pi}}{2\sqrt{-\ln(a)}},\end{aligned}\tag{A.33}$$

where we used the classical integral of the standard Gaussian density function.

Let us now introduce A_Q:

$$A_{Q}=\sum_{q=0}^{Q-1}a^{q}\sqrt{q+1},$$

then we have

$$\begin{aligned}A_{Q}-aA_{Q}&=\sum_{q=0}^{Q-1}a^{q}\sqrt{q+1}-\sum_{q=1}^{Q}a^{q}\sqrt{q}\quad\text{, then using the concavity of }\sqrt{\cdot},\\ &\leq1-a^{Q}\sqrt{Q}+\sum_{q=1}^{Q-1}\frac{a^{q}}{2\sqrt{q}}\\ &\leq1+\int_{0}^{\infty}\frac{a^{x}}{2\sqrt{x}}\,\mathrm{d}x\\ (1-a)A_{Q}&\leq1+\frac{\sqrt{\pi}}{2\sqrt{-\ln(a)}},\end{aligned}$$

where we used (A.33). Given that √(−ln(a)) ≥ √(1 − a), we obtain (A.32).
√1 − a we obtain (A.32).
|
839 |
+
|
840 |
+
Lemma A.4 (sum of a geometric term times roughly a power 3/2). Given 0 < a < 1 and Q ∈ N*, we have,*
|
841 |
+
|
842 |
+
$$\sum_{q=0}^{Q-1}a^{q}\sqrt{q}(q+1)\leq\frac{4a}{(1-a)^{5/2}}.$$ (A.34)
|
843 |
+
Proof. Let us introduce AQ:
|
844 |
+
|
845 |
+
$$A_{Q}=\sum_{q=0}^{Q-1}a^{q}\sqrt{q}(q+1),$$
|
846 |
+
|
847 |
+
then we have
|
848 |
+
|
849 |
+
$$\begin{split}A_{Q}-a A_{Q}&=\sum_{q=0}^{Q-1}a^{q}\sqrt{q}(q+1)-\sum_{q=1}^{Q}a^{q}\sqrt{q-1}q\\ &\leq\sum_{q=1}^{Q-1}a^{q}\sqrt{q}\left((q+1)-\sqrt{q}\sqrt{q-1}\right)\\ &\leq\sum_{q=1}^{Q-1}a^{q}\sqrt{q}\left((q+1)-(q-1)\right)\\ &\leq2\sum_{q=1}^{Q-1}a^{q}\sqrt{q}\\ &=2a\sum_{q=0}^{Q-2}a^{q}\sqrt{q+1}\quad,\text{then using Lemma A.3,}\\ (1-a)A_{Q}&\leq\frac{4a}{(1-a)^{3/2}}.\end{split}$$
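Since Lemmas A.3 and A.4 are simple bounds on scalar series, they are easy to sanity check numerically; the following snippet (ours, not part of the paper) verifies (A.32) and (A.34) on a few values of a:

```python
import math

def lhs_a3(a, Q):
    # left-hand side of (A.32)
    return sum(a**q * math.sqrt(q + 1) for q in range(Q))

def lhs_a4(a, Q):
    # left-hand side of (A.34)
    return sum(a**q * math.sqrt(q) * (q + 1) for q in range(Q))

for a in (0.5, 0.9, 0.99):
    Q = 10_000
    assert lhs_a3(a, Q) <= 2 / (1 - a) ** 1.5        # Lemma A.3
    assert lhs_a4(a, Q) <= 4 * a / (1 - a) ** 2.5    # Lemma A.4
print("bounds (A.32) and (A.34) hold on these examples")
```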
## A.6 Proof Of Adam And Adagrad With Momentum

Common part of the proof Let us take an iteration n ∈ N∗. Using the smoothness of F defined in (8), we have

$$F(x_{n})\leq F(x_{n-1})-\alpha_{n}G_{n}^{T}u_{n}+\frac{\alpha_{n}^{2}L}{2}\left\|u_{n}\right\|_{2}^{2}.$$
Taking the full expectation and using Lemma A.1,

$$\begin{aligned}\mathbb{E}\left[F(x_{n})\right]\leq\;&\mathbb{E}\left[F(x_{n-1})\right]-\frac{\alpha_{n}}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]+\frac{\alpha_{n}^{2}L}{2}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right]\\ &+\frac{\alpha_{n}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\mathbb{E}\left[\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}\right]+\frac{3\alpha_{n}R}{\sqrt{1-\beta_{1}}}\,\mathbb{E}\left[\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right].\end{aligned}\tag{A.35}$$
Notice that because of the bound on the ℓ∞ norm of the stochastic gradients at the iterates (7), we have for any k ∈ N, k < n, and any coordinate i ∈ [d], $\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}\leq R\sqrt{\sum_{j=0}^{n-1}\beta_{2}^{j}}$. Introducing $\Omega_{n}=\sqrt{\sum_{j=0}^{n-1}\beta_{2}^{j}}$, we have

$$\begin{aligned}\mathbb{E}\left[F(x_{n})\right]\leq\;&\mathbb{E}\left[F(x_{n-1})\right]-\frac{\alpha_{n}}{2R\Omega_{n}}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]+\frac{\alpha_{n}^{2}L}{2}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right]\\ &+\frac{\alpha_{n}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\mathbb{E}\left[\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}\right]+\frac{3\alpha_{n}R}{\sqrt{1-\beta_{1}}}\,\mathbb{E}\left[\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right].\end{aligned}\tag{A.36}$$
Now summing over all iterations n ∈ [N] for N ∈ N∗, and using that for both Adam (A.2) and Adagrad (A.3), αn is non-decreasing, as well as the fact that F is bounded below by F∗ from (6), we get

$$\begin{aligned}\underbrace{\frac{1}{2R}\sum_{n=1}^{N}\frac{\alpha_{n}}{\Omega_{n}}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]}_{A}\leq\;&F(x_{0})-F_{*}+\underbrace{\frac{\alpha_{N}^{2}L}{2}\sum_{n=1}^{N}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right]}_{B}\\ &+\underbrace{\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{n=1}^{N}\sum_{l=1}^{n-1}\mathbb{E}\left[\|u_{n-l}\|_{2}^{2}\right]\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}}_{C}+\underbrace{\frac{3\alpha_{N}R}{\sqrt{1-\beta_{1}}}\sum_{n=1}^{N}\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\mathbb{E}\left[\|U_{n-k}\|_{2}^{2}\right]}_{D}.\end{aligned}\tag{A.37}$$
First looking at B, we have using Lemma A.2,

$$B\leq\frac{\alpha_{N}^{2}L}{2(1-\beta_{1})(1-\beta_{1}/\beta_{2})}\sum_{i\in[d]}\left(\ln\left(1+\frac{v_{N,i}}{\epsilon}\right)-N\ln(\beta_{2})\right).\tag{A.38}$$
Then looking at C and introducing the change of index j = n − l,

$$\begin{aligned}C&=\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{n=1}^{N}\sum_{j=1}^{n}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{k=n-j}^{n-1}\beta_{1}^{k}\sqrt{k}\\ &=\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\sum_{k=n-j}^{n-1}\beta_{1}^{k}\sqrt{k}\\ &=\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{k=0}^{N-1}\beta_{1}^{k}\sqrt{k}\sum_{n=j}^{j+k}1\\ &=\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{k=0}^{N-1}\beta_{1}^{k}\sqrt{k}(k+1)\\ &\leq\frac{\alpha_{N}^{3}L^{2}}{R}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\frac{\beta_{1}}{(1-\beta_{1})^{2}},\end{aligned}\tag{A.39}$$

using Lemma A.4. Finally, using Lemma A.2, we get

$$C\leq\frac{\alpha_{N}^{3}L^{2}\beta_{1}}{R(1-\beta_{1})^{3}(1-\beta_{1}/\beta_{2})}\sum_{i\in[d]}\left(\ln\left(1+\frac{v_{N,i}}{\epsilon}\right)-N\ln(\beta_{2})\right).\tag{A.40}$$
Finally, introducing the same change of index j = n − k for D, we get

$$\begin{aligned}D&=\frac{3\alpha_{N}R}{\sqrt{1-\beta_{1}}}\sum_{n=1}^{N}\sum_{j=1}^{n}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{n-j}\sqrt{1+n-j}\,\mathbb{E}\left[\|U_{j}\|_{2}^{2}\right]\\ &=\frac{3\alpha_{N}R}{\sqrt{1-\beta_{1}}}\sum_{j=1}^{N}\mathbb{E}\left[\|U_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{n-j}\sqrt{1+n-j}\\ &\leq\frac{6\alpha_{N}R}{\sqrt{1-\beta_{1}}}\sum_{j=1}^{N}\mathbb{E}\left[\|U_{j}\|_{2}^{2}\right]\frac{1}{(1-\beta_{1}/\beta_{2})^{3/2}},\end{aligned}\tag{A.41}$$

using Lemma A.3. Finally, using Lemma 5.2, or equivalently Lemma A.2 with β1 = 0, we get

$$D\leq\frac{6\alpha_{N}R}{\sqrt{1-\beta_{1}}\,(1-\beta_{1}/\beta_{2})^{3/2}}\sum_{i\in[d]}\left(\ln\left(1+\frac{v_{N,i}}{\epsilon}\right)-N\ln(\beta_{2})\right).\tag{A.42}$$
This is as far as we can get without having to use the specific form of αN given by either (A.2) for Adam or (A.3) for Adagrad. We will now split the proof for either algorithm.

Adam For Adam, using (A.2), we have αn = (1 − β1)Ωnα. Thus, we can simplify the A term from (A.37), also using the usual change of index j = n − k, to get

$$\begin{aligned}A&=\frac{1}{2R}\sum_{n=1}^{N}\frac{\alpha_{n}}{\Omega_{n}}\sum_{j=1}^{n}\beta_{1}^{n-j}\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\ &=\frac{\alpha(1-\beta_{1})}{2R}\sum_{j=1}^{N}\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\beta_{1}^{n-j}\\ &=\frac{\alpha}{2R}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\ &=\frac{\alpha}{2R}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|\nabla F(x_{j-1})\|_{2}^{2}\right]\\ &=\frac{\alpha}{2R}\sum_{j=0}^{N-1}(1-\beta_{1}^{N-j})\,\mathbb{E}\left[\|\nabla F(x_{j})\|_{2}^{2}\right].\end{aligned}\tag{A.43}$$
If we now introduce τ as in (A.8), we can first notice that

$$\sum_{j=0}^{N-1}(1-\beta_{1}^{N-j})=N-\beta_{1}\frac{1-\beta_{1}^{N}}{1-\beta_{1}}\geq N-\frac{\beta_{1}}{1-\beta_{1}}.\tag{A.44}$$

Introducing

$$\tilde{N}=N-\frac{\beta_{1}}{1-\beta_{1}},\tag{A.45}$$

we then have

$$A\geq\frac{\alpha\tilde{N}}{2R}\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right].\tag{A.46}$$
Further notice that for any coordinate i ∈ [d], we have $v_{N,i}\leq\frac{R^{2}}{1-\beta_{2}}$, besides $\alpha_{N}\leq\frac{\alpha(1-\beta_{1})}{\sqrt{1-\beta_{2}}}$, so that putting together (A.37), (A.46), (A.38), (A.40) and (A.42), we get

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq2R\frac{F_{0}-F_{*}}{\alpha\tilde{N}}+\frac{E}{\tilde{N}}\left(\ln\left(1+\frac{R^{2}}{\epsilon(1-\beta_{2})}\right)-N\ln(\beta_{2})\right),\tag{A.47}$$
with

$$E=\frac{\alpha dRL(1-\beta_{1})}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})^{3/2}}+\frac{12dR^{2}\sqrt{1-\beta_{1}}}{(1-\beta_{1}/\beta_{2})^{3/2}\sqrt{1-\beta_{2}}}.\tag{A.48}$$

This concludes the proof of Theorem 4.
Adagrad For Adagrad, we have αn = (1 − β1)α, β2 = 1 and Ωn ≤ √N, so that

$$\begin{aligned}A&=\frac{1}{2R}\sum_{n=1}^{N}\frac{\alpha_{n}}{\Omega_{n}}\sum_{j=1}^{n}\beta_{1}^{n-j}\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\ &\geq\frac{\alpha(1-\beta_{1})}{2R\sqrt{N}}\sum_{j=1}^{N}\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\beta_{1}^{n-j}\\ &=\frac{\alpha}{2R\sqrt{N}}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\ &=\frac{\alpha}{2R\sqrt{N}}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|\nabla F(x_{j-1})\|_{2}^{2}\right]\\ &=\frac{\alpha}{2R\sqrt{N}}\sum_{j=0}^{N-1}(1-\beta_{1}^{N-j})\,\mathbb{E}\left[\|\nabla F(x_{j})\|_{2}^{2}\right].\end{aligned}\tag{A.49}$$
Reusing (A.44) and (A.45) from the Adam proof, and introducing τ as in (9), we immediately have

$$A\geq{\frac{\alpha\tilde{N}}{2R\sqrt{N}}}\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|_{2}^{2}\right].\tag{A.50}$$

Further notice that for any coordinate i ∈ [d], we have v_{N,i} ≤ NR², besides αN = (1 − β1)α, so that putting together (A.37), (A.50), (A.38), (A.40) and (A.42) with β2 = 1, we get

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq2R\sqrt{N}\frac{F_{0}-F_{*}}{\alpha\tilde{N}}+\frac{\sqrt{N}}{\tilde{N}}E\ln\left(1+\frac{NR^{2}}{\epsilon}\right),\tag{A.51}$$
with

$$E=\alpha d R L+\frac{2\alpha^{2}d L^{2}\beta_{1}}{1-\beta_{1}}+\frac{12d R^{2}}{1-\beta_{1}}.\tag{A.52}$$

This concludes the proof of Theorem 3.
## A.7 Proof Variant Using Hölder Inequality

Following (Ward et al., 2019; Zou et al., 2019b), it is possible to get rid of the almost sure bound on the gradient given by (7), and replace it with a bound in expectation, i.e.

$$\forall x\in\mathbb{R}^{d},\quad\mathbb{E}\left[\|\nabla f(x)\|_{2}^{2}\right]\leq\tilde{R}-\sqrt{\epsilon}.\tag{A.53}$$

Note that we now need an ℓ2 bound in order to properly apply the Hölder inequality hereafter.
We do not provide the full proof for the result, but point the reader to the few places where we have used (7). We first use it in Lemma A.1. We inject R into (A.15), which we can just replace with R̃. Then we use (7) to bound r and derive (A.26). Remember that r is defined in (A.20), and is actually a weighted sum of the squared gradients in expectation. Thus, a bound in expectation is acceptable, and Lemma A.1 is valid replacing the assumption (7) with (A.53).

Looking at the actual proof, we use (7) in a single place: just after (A.35), in order to derive an upper bound for the denominator in the following term:

$$M=\frac{\alpha_{n}}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right].\tag{A.54}$$
Let us introduce $\tilde{V}_{n,k+1}=\sum_{i\in[d]}\tilde{v}_{n,k+1,i}$. We immediately have that

$$M\geq\frac{\alpha_{n}}{2}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\frac{\|G_{n-k}\|_{2}^{2}}{\sqrt{\epsilon+\tilde{V}_{n,k+1}}}\right].\tag{A.55}$$

Taking $X=\left(\frac{\|G_{n-k}\|_{2}^{2}}{\sqrt{\epsilon+\tilde{V}_{n,k+1}}}\right)^{2/3}$ and $Y=\left(\sqrt{\epsilon+\tilde{V}_{n,k+1}}\right)^{2/3}$, we can apply the Hölder inequality $\mathbb{E}\left[|XY|\right]\leq\mathbb{E}\left[|X|^{3/2}\right]^{2/3}\mathbb{E}\left[|Y|^{3}\right]^{1/3}$, i.e.

$$\mathbb{E}\left[|X|^{3/2}\right]\geq\left(\frac{\mathbb{E}\left[|XY|\right]}{\mathbb{E}\left[|Y|^{3}\right]^{1/3}}\right)^{3/2},$$

which gives us

$$\mathbb{E}\left[\frac{\|G_{n-k}\|_{2}^{2}}{\sqrt{\epsilon+\tilde{V}_{n,k+1}}}\right]\geq\frac{\mathbb{E}\left[\|G_{n-k}\|_{2}^{4/3}\right]^{3/2}}{\sqrt{\mathbb{E}\left[\epsilon+\tilde{V}_{n,k+1}\right]}}\geq\frac{\mathbb{E}\left[\|G_{n-k}\|_{2}^{4/3}\right]^{3/2}}{\Omega_{n}R},\tag{A.56}$$

with $\Omega_{n}=\sqrt{\sum_{j=0}^{n-1}\beta_{2}^{j}}$, and using the fact that $\mathbb{E}\left[\epsilon+\sum_{i\in[d]}\tilde{v}_{n,k+1,i}\right]\leq R^{2}\Omega_{n}^{2}$.

Thus we can recover almost exactly (A.36), except we have to replace all terms of the form $\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]$ with $\mathbb{E}\left[\|G_{n-k}\|_{2}^{4/3}\right]^{3/2}$. The rest of the proof follows as before, with all the dependencies in α, β1, β2 remaining the same.
## B Non Convex SGD With Heavy-Ball Momentum

We extend the existing proof of convergence for SGD in the non convex setting (Ghadimi & Lan, 2013) to use heavy-ball momentum. Compared with previous work on momentum for non convex SGD by Yang et al. (2016), we improve the dependency in β1 from O((1 − β1)^(−2)) to O((1 − β1)^(−1)). A recent work by Liu et al. (2020) achieves a similar dependency of O(1/(1 − β1)), with weaker assumptions (without the bounded gradient assumption).
## B.1 Assumptions

We reuse the notations from Section 2.1. Note however that we use here different assumptions than in Section 2.3. We first assume F is bounded below by F∗, that is,

$$\forall x\in\mathbb{R}^{d},\quad F(x)\geq F_{*}.\tag{B.1}$$

We then assume that the stochastic gradients have bounded variance, and that the gradients of F are uniformly bounded, i.e. there exist R and σ so that

$$\forall x\in\mathbb{R}^{d},\quad\|\nabla F(x)\|_{2}^{2}\leq R^{2}\quad{\mathrm{and}}\quad\mathbb{E}\left[\|\nabla f(x)\|_{2}^{2}\right]-\|\nabla F(x)\|_{2}^{2}\leq\sigma^{2},\tag{B.2}$$
and finally, the *smoothness of the objective function*, i.e., its gradient is L-Lipschitz-continuous with respect to the ℓ2-norm:

$$\forall x,y\in\mathbb{R}^{d},\quad\|\nabla F(x)-\nabla F(y)\|_{2}\leq L\left\|x-y\right\|_{2}.\tag{B.3}$$
## B.2 Result

Let us take a step size α > 0 and a heavy-ball parameter 1 > β1 ≥ 0. Given x0 ∈ R^d, taking m0 = 0, we define for any iteration n ∈ N∗ the iterates of SGD with momentum as

$$\begin{cases}m_{n}&=\beta_{1}m_{n-1}+\nabla f_{n}(x_{n-1})\\ x_{n}&=x_{n-1}-\alpha m_{n}.\end{cases}\tag{B.4}$$

Note that in (B.4), the typical size of m_n will increase with β1. For any total number of iterations N ∈ N∗, we define τ_N a random index with value in {0, . . . , N − 1}, verifying

$$\forall j\in\mathbb{N},j<N,\quad\mathbb{P}\left[\tau=j\right]\propto1-\beta_{1}^{N-j}.\tag{B.5}$$
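For concreteness, here is a short NumPy sketch of the recursion (B.4) together with the sampling of the random iterate τ from (B.5); `grad_fn` is a placeholder name for the stochastic gradient oracle, and the interface is our own illustrative choice rather than anything prescribed by the paper.

```python
import numpy as np

def sgd_heavy_ball(grad_fn, x0, alpha, beta1, N, rng=np.random.default_rng()):
    """SGD with heavy-ball momentum as in (B.4), returning the random iterate x_tau.

    `grad_fn(x, rng)` plays the role of the stochastic gradient grad f_n(x_{n-1});
    tau is drawn over {0, ..., N-1} according to (B.5).
    """
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    iterates = [x.copy()]                  # candidates x_0, ..., x_{N-1} for x_tau
    for n in range(1, N + 1):
        m = beta1 * m + grad_fn(x, rng)    # momentum accumulates raw gradients
        x = x - alpha * m
        if n < N:
            iterates.append(x.copy())
    weights = 1.0 - beta1 ** (N - np.arange(N))
    tau = rng.choice(N, p=weights / weights.sum())
    return iterates[tau]
```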
Theorem B.1 (Convergence of SGD with momentum). *Given the assumptions from Section B.1, given τ as defined in (B.5) for a total number of iterations N > 1/(1 − β1), x0 ∈ R^d, α > 0, 1 > β1 ≥ 0, and (xn)n∈N∗ given by (B.4),*

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq\frac{1-\beta_{1}}{\alpha\tilde{N}}(F(x_{0})-F_{*})+\frac{N}{\tilde{N}}\frac{\alpha L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}},\tag{B.6}$$

*with* $\tilde{N}=N-\frac{\beta_1}{1-\beta_1}$.
## B.3 Analysis

We can first simplify (B.6) if we assume N ≫ 1/(1 − β1), which is always the case for practical values of N and β1, so that Ñ ≈ N, and

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq{\frac{1-\beta_{1}}{\alpha N}}(F(x_{0})-F_{*})+{\frac{\alpha L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}}}.\tag{B.7}$$
It is possible to achieve a rate of convergence of the form O(1/√N), by taking for any C > 0,

$$\alpha=(1-\beta_{1}){\frac{C}{\sqrt{N}}},\tag{B.8}$$

which gives us

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq{\frac{1}{C{\sqrt{N}}}}(F(x_{0})-F_{*})+{\frac{C}{\sqrt{N}}}{\frac{L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})}}.\tag{B.9}$$
In comparison, Theorem 3 by Yang et al. (2016) would give us, assuming now that α = (1 − β1) min{1/L, C/√N},

$$\begin{aligned}\min_{k\in\{0,\ldots,N-1\}}\mathbb{E}\left[\|\nabla F(x_{k})\|_{2}^{2}\right]\leq\;&\frac{2}{N}(F(x_{0})-F_{*})\max\left\{2L,\frac{\sqrt{N}}{C}\right\}\\ &+\frac{C}{\sqrt{N}}\frac{L}{(1-\beta_{1})^{2}}\left(\beta_{1}^{2}(R^{2}+\sigma^{2})+(1-\beta_{1})^{2}\sigma^{2}\right).\end{aligned}\tag{B.10}$$

We observe an overall dependency in β1 of the form O((1 − β1)^(−2)) for Theorem 3 by Yang et al. (2016), which we improve to O((1 − β1)^(−1)) with our proof.
Liu et al. (2020) achieves a similar dependency in (1 − β1) as here, but with weaker assumptions. Indeed, in their Theorem 1, their result contains a term in O(1/α) with α ≤ (1 − β1)M for some problem dependent constant M that does not depend on β1.

Notice that as the typical size of the update m_n will increase with β1, by a factor 1/(1 − β1), it is convenient to scale down α by the same factor, as we did with (B.8) (without loss of generality, as C can take any value). Taking α of this form has the advantage of keeping the first term on the right hand side in (B.6) independent of β1, allowing us to focus only on the second term.
## B.4 Proof

For all n ∈ N∗, we note Gn = ∇F(x_{n−1}) and gn = ∇f_n(x_{n−1}). E_{n−1}[·] is the conditional expectation with respect to f1, . . . , f_{n−1}. In particular, x_{n−1} and m_{n−1} are deterministic knowing f1, . . . , f_{n−1}.

Lemma B.1 (Bound on mn). *Given α > 0, 1 > β1 ≥ 0, and (xn) and (mn) defined by (B.4), under the assumptions from Section B.1, we have for all n ∈ N∗,*

$$\mathbb{E}\left[\|m_{n}\|_{2}^{2}\right]\leq\frac{R^{2}+\sigma^{2}}{(1-\beta_{1})^{2}}.\tag{B.11}$$
Proof. Let us take an iteration n ∈ N∗,

$$\begin{aligned}\mathbb{E}\left[\|m_{n}\|_{2}^{2}\right]&=\mathbb{E}\left[\left\|\sum_{k=0}^{n-1}\beta_{1}^{k}g_{n-k}\right\|_{2}^{2}\right]\quad\text{using Jensen we get,}\\ &\leq\left(\sum_{k=0}^{n-1}\beta_{1}^{k}\right)\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|g_{n-k}\|_{2}^{2}\right]\\ &\leq\frac{1}{1-\beta_{1}}\sum_{k=0}^{n-1}\beta_{1}^{k}(R^{2}+\sigma^{2})\\ &\leq\frac{R^{2}+\sigma^{2}}{(1-\beta_{1})^{2}}.\end{aligned}$$
Lemma B.2 (sum of a geometric term times index). *Given 0 < a < 1, i ∈ N and Q ∈ N with Q ≥ i,*

$$\sum_{q=i}^{Q}a^{q}q=\frac{a^{i}}{1-a}\left(i-a^{Q-i+1}Q+\frac{a-a^{Q+1-i}}{1-a}\right)\leq\frac{a}{(1-a)^{2}}.\tag{B.12}$$

Proof. Let $A_{i}=\sum_{q=i}^{Q}a^{q}q$, we have

$$\begin{aligned}A_{i}-aA_{i}&=a^{i}i-a^{Q+1}Q+\sum_{q=i+1}^{Q}a^{q}\left(q-(q-1)\right)\\ (1-a)A_{i}&=a^{i}i-a^{Q+1}Q+\frac{a^{i+1}-a^{Q+1}}{1-a}.\end{aligned}$$

Finally, taking i = 0 and Q → ∞ gives us the upper bound.
Lemma B.3 (Descent lemma). *Given α > 0, 1 > β1 ≥ 0, and (xn) and (mn) defined by (B.4), under the assumptions from Section B.1, we have for all n ∈ N∗,*

$$\mathbb{E}\left[\nabla F(x_{n-1})^{T}m_{n}\right]\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|\nabla F(x_{n-k-1})\|_{2}^{2}\right]-\frac{\alpha L\beta_{1}(R^{2}+\sigma^{2})}{(1-\beta_{1})^{3}}.\tag{B.13}$$
Proof. For simplicity, we note Gn = ∇F(x_{n−1}) the expected gradient and gn = ∇f_n(x_{n−1}) the stochastic gradient at iteration n.

$$\begin{aligned}G_{n}^{T}m_{n}&=\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n}^{T}g_{n-k}\\ &=\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n-k}^{T}g_{n-k}+\sum_{k=1}^{n-1}\beta_{1}^{k}(G_{n}-G_{n-k})^{T}g_{n-k}.\end{aligned}\tag{B.14}$$

This last step is the main difference with previous proofs with momentum (Yang et al., 2016): we replace the current gradient with an old gradient in order to obtain extra terms of the form ‖G_{n−k}‖₂². The price to pay is the second term on the right hand side, but we will see that it is still beneficial to perform this step.
Notice that, as F is L-smooth, we have for all k ∈ N∗,

$$\|G_{n}-G_{n-k}\|_{2}^{2}\leq L^{2}\left\|\sum_{l=1}^{k}\alpha m_{n-l}\right\|_{2}^{2}\leq\alpha^{2}L^{2}k\sum_{l=1}^{k}\|m_{n-l}\|_{2}^{2}\,,\tag{B.15}$$

using Jensen inequality. We apply

$$\forall\lambda>0,\,x,y\in\mathbb{R}^{d},\quad x^{T}y\leq{\frac{\lambda}{2}}\,\|x\|_{2}^{2}+{\frac{\|y\|_{2}^{2}}{2\lambda}},$$
with x = Gn − G_{n−k}, y = g_{n−k} and λ = (1 − β1)/(kαL), to the second term in (B.14), and use (B.15) to get

$$G_{n}^{T}m_{n}\geq\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n-k}^{T}g_{n-k}-\sum_{k=1}^{n-1}\frac{\beta_{1}^{k}}{2}\left(\left((1-\beta_{1})\alpha L\sum_{l=1}^{k}\|m_{n-l}\|_{2}^{2}\right)+\frac{\alpha L k}{1-\beta_{1}}\left\|g_{n-k}\right\|_{2}^{2}\right).\tag{B.16}$$
Taking the full expectation, we have

$$\mathbb{E}\left[G_{n}^{T}m_{n}\right]\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[G_{n-k}^{T}g_{n-k}\right]-\alpha L\sum_{k=1}^{n-1}\frac{\beta_{1}^{k}}{2}\left(\left((1-\beta_{1})\sum_{l=1}^{k}\mathbb{E}\left[\|m_{n-l}\|_{2}^{2}\right]\right)+\frac{k}{1-\beta_{1}}\mathbb{E}\left[\|g_{n-k}\|_{2}^{2}\right]\right).\tag{B.17}$$
Now let us take k ∈ {0, . . . , n − 1}; first notice that

$$\mathbb{E}\left[G_{n-k}^{T}g_{n-k}\right]=\mathbb{E}\left[\mathbb{E}_{n-k-1}\left[\nabla F(x_{n-k-1})^{T}\nabla f_{n-k}(x_{n-k-1})\right]\right]=\mathbb{E}\left[\nabla F(x_{n-k-1})^{T}\nabla F(x_{n-k-1})\right]=\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right].$$

Furthermore, we have E[‖g_{n−k}‖₂²] ≤ R² + σ² from (B.2), while E[‖m_{n−k}‖₂²] ≤ (R² + σ²)/(1 − β1)² using (B.11) from Lemma B.1. Injecting those three results in (B.17), we have
$$\mathbb{E}\left[G_{n}^{T}m_{n}\right]\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]-\alpha L(R^{2}+\sigma^{2})\sum_{k=1}^{n-1}\frac{\beta_{1}^{k}}{2}\left(\left(\frac{1}{1-\beta_{1}}\sum_{l=1}^{k}1\right)+\frac{k}{1-\beta_{1}}\right)\tag{B.18}$$
$$=\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]-\frac{\alpha L}{1-\beta_{1}}(R^{2}+\sigma^{2})\sum_{k=1}^{n-1}\beta_{1}^{k}k.\tag{B.19}$$

Now, using (B.12) from Lemma B.2, we obtain

$$\mathbb{E}\left[G_{n}^{T}m_{n}\right]\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]-\frac{\alpha L\beta_{1}(R^{2}+\sigma^{2})}{(1-\beta_{1})^{3}},\tag{B.20}$$

which concludes the proof.
## Proof Of Theorem B.1

Proof. Let us take a specific iteration n ∈ N∗. Using the smoothness of F given by (B.3), we have

$$F(x_{n})\leq F(x_{n-1})-\alpha G_{n}^{T}m_{n}+\frac{\alpha^{2}L}{2}\left\|m_{n}\right\|_{2}^{2}.\tag{B.21}$$
Taking the expectation, and using Lemma B.3 and Lemma B.1, we get

$$\begin{aligned}\mathbb{E}\left[F(x_{n})\right]&\leq\mathbb{E}\left[F(x_{n-1})\right]-\alpha\left(\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]\right)+\frac{\alpha^{2}L\beta_{1}(R^{2}+\sigma^{2})}{(1-\beta_{1})^{3}}+\frac{\alpha^{2}L(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}}\\ &\leq\mathbb{E}\left[F(x_{n-1})\right]-\alpha\left(\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]\right)+\frac{\alpha^{2}L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{3}}.\end{aligned}\tag{B.22}$$

Rearranging, and summing over n ∈ {1, . . . , N}, we get

$$\underbrace{\alpha\sum_{n=1}^{N}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]}_{A}\leq F(x_{0})-\mathbb{E}\left[F(x_{N})\right]+N\frac{\alpha^{2}L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{3}}.\tag{B.23}$$
Let us focus on the A term on the left-hand side first. Introducing the change of index i = n − k, we get

$$\begin{aligned}A&=\alpha\sum_{n=1}^{N}\sum_{i=1}^{n}\beta_{1}^{n-i}\mathbb{E}\left[\|G_{i}\|_{2}^{2}\right]\\ &=\alpha\sum_{i=1}^{N}\mathbb{E}\left[\|G_{i}\|_{2}^{2}\right]\sum_{n=i}^{N}\beta_{1}^{n-i}\\ &=\frac{\alpha}{1-\beta_{1}}\sum_{i=1}^{N}\mathbb{E}\left[\|\nabla F(x_{i-1})\|_{2}^{2}\right](1-\beta_{1}^{N-i+1})\\ &=\frac{\alpha}{1-\beta_{1}}\sum_{i=0}^{N-1}\mathbb{E}\left[\|\nabla F(x_{i})\|_{2}^{2}\right](1-\beta_{1}^{N-i}).\end{aligned}\tag{B.24}$$

We recognize the unnormalized probability given by the random iterate τ as defined by (B.5). The normalization constant is

$$\sum_{i=0}^{N-1}\left(1-\beta_{1}^{N-i}\right)=N-\beta_{1}{\frac{1-\beta_{1}^{N}}{1-\beta_{1}}}\geq N-{\frac{\beta_{1}}{1-\beta_{1}}}=\tilde{N},$$
which we can inject into (B.24) to obtain

$$A\geq\frac{\alpha\tilde{N}}{1-\beta_{1}}\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right].\tag{B.25}$$

Injecting (B.25) into (B.23), and using the fact that F is bounded below by F∗ (B.1), we have

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|_{2}^{2}\right]\leq\frac{1-\beta_{1}}{\alpha\tilde{N}}(F(x_{0})-F_{*})+\frac{N}{\tilde{N}}\frac{\alpha L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}},\tag{B.26}$$

which concludes the proof of Theorem B.1.
ZPQhzTSWA7/ZPQhzTSWA7_meta.json
ADDED

{
  "languages": null,
  "filetype": "pdf",
  "toc": [],
  "pages": 30,
  "ocr_stats": {
    "ocr_pages": 0,
    "ocr_failed": 0,
    "ocr_success": 0,
    "ocr_engine": "none"
  },
  "block_stats": {
    "header_footer": 30,
    "code": 0,
    "table": 0,
    "equations": {
      "successful_ocr": 234,
      "unsuccessful_ocr": 28,
      "equations": 262
    }
  },
  "postprocess_stats": {
    "edit": {}
  }
}