# A Simple Convergence Proof Of Adam And Adagrad
Alexandre Défossez defossez@meta.com Meta AI
Léon Bottou Meta AI
Francis Bach INRIA / PSL
Nicolas Usunier Meta AI
Reviewed on OpenReview: *https://openreview.net/forum?id=ZPQhzTSWA7*
## Abstract
We provide a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms when applied to smooth (possibly non-convex) objective functions with bounded gradients. We show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper-bound which is explicit in the constants of the problem, parameters of the optimizer, the dimension d, and the total number of iterations N. This bound can be made arbitrarily small, and with the right hyper-parameters, Adam can be shown to converge with the same rate of convergence O(d ln(N)/√N). When used with the default parameters, Adam doesn't converge, however, and just like constant stepsize SGD, it moves away from the initialization point faster than Adagrad, which might explain its practical success. Finally, we obtain the tightest dependency on the heavy-ball momentum decay rate β1 among all previous convergence bounds for non-convex Adam and Adagrad, improving from O((1 − β1)^{-3}) to O((1 − β1)^{-1}).
## 1 Introduction
First-order methods with adaptive step sizes have proved useful in many fields of machine learning, be it for sparse optimization (Duchi et al., 2013), tensor factorization (Lacroix et al., 2018) or deep learning (Goodfellow et al., 2016). Duchi et al. (2011) introduced Adagrad, which rescales each coordinate by a sum of squared past gradient values. While Adagrad proved effective for sparse optimization (Duchi et al., 2013), experiments showed that it under-performed when applied to deep learning (Wilson et al., 2017). RMSProp (Tieleman & Hinton, 2012) proposed an exponential moving average instead of a cumulative sum to solve this. Kingma & Ba (2015) developed Adam, one of the most popular adaptive methods in deep learning, built upon RMSProp, adding corrective terms at the beginning of training together with heavy-ball style momentum. In the online convex optimization setting, Duchi et al. (2011) showed that Adagrad achieves an optimal regret. Kingma & Ba (2015) provided a similar proof for Adam when using a decreasing overall step size, although this proof was later shown to be incorrect by Reddi et al. (2018), who introduced AMSGrad as a convergent alternative. Ward et al. (2019) proved that Adagrad also converges to a critical point for non-convex objectives with a rate O(ln(N)/√N) when using a scalar adaptive step size instead of a diagonal one. Zou et al. (2019b) extended this proof to the vector case, while Zou et al. (2019a) displayed a bound for Adam, showing convergence when the decay of the exponential moving average scales as 1 − 1/N and the learning rate as 1/√N.
In this paper, we present a simplified and unified proof of convergence to a critical point for Adagrad and Adam for stochastic non-convex smooth optimization. We assume that the objective function is lower bounded and smooth, and that the stochastic gradients are almost surely bounded. We recover the standard O(ln(N)/√N) convergence rate for Adagrad for all step sizes, and the same rate for Adam with an appropriate choice of the step sizes and decay parameters; in particular, Adam can converge without using the AMSGrad variant. Compared to previous work, our bound significantly improves the dependency on the momentum parameter β1. The best known bounds for Adagrad and Adam are respectively in O((1 − β1)^{-3}) and O((1 − β1)^{-5}) (see Section 3), while our result is in O((1 − β1)^{-1}) for both algorithms. This improvement is a step toward understanding the practical efficiency of heavy-ball momentum.

Outline. The precise setting and assumptions are stated in the next section, and previous work is then described in Section 3. The main theorems are presented in Section 4, followed by a full proof for the case without momentum in Section 5. The proof of the convergence with momentum is deferred to the supplementary material, Section A. Finally we compare our bounds with experimental results, both on toy and real-life problems, in Section 6.
## 2 Setup

## 2.1 Notation
Let d ∈ N be the dimension of the problem (i.e. the number of parameters of the function to optimize) and take [d] = {1, 2, . . . , d}. Given a function h : R^d → R, we denote by ∇h its gradient and by ∇ih the i-th component of the gradient. We use a small constant ε > 0, e.g. 10^−8, for numerical stability. Given a sequence (un)n∈N with un ∈ R^d for all n ∈ N, we denote by un,i, for n ∈ N and i ∈ [d], the i-th component of the n-th element of the sequence.
We want to optimize a function F : R^d → R. We assume there exists a random function f : R^d → R such that E [∇f(x)] = ∇F(x) for all x ∈ R^d, and that we have access to an oracle providing i.i.d. samples (fn)n∈N∗. We denote by En−1 [·] the conditional expectation knowing f1, . . . , fn−1. In machine learning, x typically represents the weights of a linear or deep model, f represents the loss from individual training examples or minibatches, and F is the full training objective function. The goal is to find a critical point of F.
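For concreteness, the following Python sketch illustrates this setting on a least-squares problem, with f_n the loss on a random minibatch and F the full training objective; the dataset, model and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                     # toy dataset (illustrative)
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=1000)

def grad_F(x):
    """Gradient of the full least-squares objective F."""
    return X.T @ (X @ x - y) / len(y)

def grad_f(x, batch=32):
    """Stochastic oracle: gradient of f_n, the loss on a random minibatch.
    Its expectation over the minibatch equals grad_F(x)."""
    idx = rng.integers(0, len(y), size=batch)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ x - yb) / batch
```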
## 2.2 Adaptive Methods
We study both Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2015) using a unified formulation.
We assume we have 0 < β2 ≤ 1, 0 ≤ β1 < β2, and a non-negative sequence (αn)n∈N∗. We define three vectors mn, vn, xn ∈ R^d iteratively. Given x0 ∈ R^d our starting point, m0 = 0, and v0 = 0, we define for all iterations n ∈ N∗,
$$\begin{aligned}
m_{n,i} &= \beta_{1} m_{n-1,i}+\nabla_{i}f_{n}(x_{n-1}), && (1)\\
v_{n,i} &= \beta_{2} v_{n-1,i}+\left(\nabla_{i}f_{n}(x_{n-1})\right)^{2}, && (2)\\
x_{n,i} &= x_{n-1,i}-\alpha_{n}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}. && (3)
\end{aligned}$$
The parameter β1 is a heavy-ball style momentum parameter (Polyak, 1964), while β2 controls the decay rate of the per-coordinate exponential moving average of the squared gradients. Taking β1 = 0, β2 = 1 and αn = α gives Adagrad. While the original Adagrad algorithm did not include a heavy-ball-like momentum, our analysis also applies to the case β1 > 0.
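As a reading aid, here is a minimal Python sketch of the recursion (1)–(3) as written above, with weighted sums rather than averages; the function and variable names are ours. Taking beta1=0 and beta2=1 recovers Adagrad.

```python
import numpy as np

def adaptive_step(x, m, v, grad, alpha_n, beta1, beta2, eps=1e-8):
    """One iteration of the unified update (1)-(3).

    m and v are weighted *sums* of past gradients and squared gradients;
    beta1=0, beta2=1 gives Adagrad, while 0 < beta2 < 1 gives Adam (up to the
    step-size convention discussed in the next paragraphs)."""
    m = beta1 * m + grad                      # (1) heavy-ball momentum
    v = beta2 * v + grad ** 2                 # (2) per-coordinate second-moment sum
    x = x - alpha_n * m / np.sqrt(eps + v)    # (3) adaptive step
    return x, m, v
```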
Adam and its corrective terms. The original Adam algorithm (Kingma & Ba, 2015) uses a weighted average, rather than a weighted sum, for (1) and (2), i.e. it uses

$$\tilde{m}_{n,i}=(1-\beta_{1})\sum_{k=1}^{n}\beta_{1}^{n-k}\nabla_{i}f_{k}(x_{k-1})=(1-\beta_{1})m_{n,i},$$

with a similar definition for the average of the squared gradients, using β2. We can achieve the same update with the formulation (1)–(3) by taking a step size α · (1 − β1)/√(1 − β2). The original Adam algorithm further includes two corrective terms to account for the fact that mn and vn are biased towards 0 for the first few iterations. Those corrective terms are equivalent to taking a step-size αn of the form

$$\alpha_{n,\mathrm{adam}}=\alpha\cdot\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\cdot\underbrace{\frac{1}{1-\beta_{1}^{n}}}_{\substack{\text{corrective}\\ \text{term for }m_{n}}}\cdot\underbrace{\sqrt{1-\beta_{2}^{n}}}_{\substack{\text{corrective}\\ \text{term for }v_{n}}}.\tag{4}$$

Those corrective terms can be seen as the normalization factors for the weighted sums given by (1) and (2), which explains the (1 − β1) term in (4). Note that each corrective term goes to its limit value within a few times 1/(1 − β) updates (with β ∈ {β1, β2}). In the present work, we propose to drop the corrective term for mn, and to keep only the one for vn, thus using the alternative step size

$$\alpha_{n}=\alpha(1-\beta_{1})\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}.\tag{5}$$
This simplification is motivated by several observations:
- By dropping either corrective term, αn becomes monotonic, which simplifies the proof.
- For typical values of β1 and β2 (e.g. 0.9 and 0.999), the corrective term for mn converges to its limit value much faster than the one for vn.
- Removing the corrective term for mn is equivalent to a learning-rate warmup, which is popular in deep learning, while removing the one for vn would lead to an increased step size during early training. For values of β2 close to 1, this can lead to divergence in practice.
We experimentally verify in Section 6.3 that dropping the corrective term for mn has no observable effect on the training process, while dropping the corrective term for vn leads to observable perturbations. In the following, we thus consider the variation of Adam obtained by taking αn provided by (5).
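To make the two schedules concrete, here is a small sketch, in our notation, of the per-iteration step size: the original Adam schedule (4) with both corrective terms, and the variant (5) that keeps only the correction for vn. The helper names are ours.

```python
import math

def alpha_n_original_adam(alpha, beta1, beta2, n):
    """Step size (4): both corrective terms of the original Adam algorithm."""
    return alpha * (1 - beta1) / math.sqrt(1 - beta2) \
        * (1 / (1 - beta1 ** n)) * math.sqrt(1 - beta2 ** n)

def alpha_n_variant(alpha, beta1, beta2, n):
    """Step size (5): corrective term for m_n dropped, the one for v_n kept."""
    return alpha * (1 - beta1) * math.sqrt((1 - beta2 ** n) / (1 - beta2))
```

For the typical values β1 = 0.9 and β2 = 0.999, the two schedules only differ noticeably during the first few tens of iterations, in line with the observations above.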
## 2.3 Assumptions
We make three assumptions. We first assume F is bounded below by F∗, that is,

$$\forall x\in\mathbb{R}^{d},\ F(x)\geq F_{*}.\tag{6}$$

We then assume the ℓ∞ *norm of the stochastic gradients is uniformly almost surely bounded*, i.e. there is R ≥ √ε (the √ε is used here to simplify the final bounds) so that

$$\forall x\in\mathbb{R}^{d},\quad\|\nabla f(x)\|_{\infty}\leq R-\sqrt{\epsilon}\quad\text{a.s.},\tag{7}$$

and finally, the *smoothness of the objective function*, i.e., its gradient is L-Lipschitz-continuous with respect to the ℓ2-norm:

$$\forall x,y\in\mathbb{R}^{d},\quad\|\nabla F(x)-\nabla F(y)\|_{2}\leq L\,\|x-y\|_{2}.\tag{8}$$
We discuss the use of assumption (7) in Section 4.2.
## 3 Related Work
Early work on adaptive methods (McMahan & Streeter, 2010; Duchi et al., 2011) showed that Adagrad achieves an optimal rate of convergence of O(1/√N) for convex optimization (Agarwal et al., 2009). Later, RMSProp (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015) were developed for training deep neural networks, using an exponential moving average of the past squared gradients.
Kingma & Ba (2015) offered a proof that Adam with a decreasing step size converges for convex objectives. However, the proof contained a mistake spotted by Reddi et al. (2018), who also gave examples of convex problems where Adam does not converge to an optimal solution. They proposed AMSGrad as a convergent variant, which consists in retaining the maximum value of the exponential moving average. When α goes to zero, AMSGrad is shown to converge in the convex and non-convex settings (Fang & Klabjan, 2019; Zhou et al., 2018). Despite this apparent flaw in the Adam algorithm, it remains a widely popular optimizer, raising the question as to whether it converges. When β2 goes to 1 and α to 0, our results and previous work (Zou et al., 2019a) show that Adam does converge with the same rate as Adagrad. This is coherent with the counter-examples of Reddi et al. (2018), because they use a small exponential decay parameter β2 < 1/5.
The convergence of Adagrad for non-convex objectives was first tackled by Li & Orabona (2019), who proved its convergence, but under restrictive conditions (e.g., α ≤ √ε/L). The proof technique was improved by Ward et al. (2019), who showed the convergence of "scalar" Adagrad, i.e., with a single learning rate, for any value of α with a rate of O(ln(N)/√N). Our approach builds on this work but we extend it to both Adagrad and Adam, in their coordinate-wise version, as used in practice, while also supporting heavy-ball momentum.
The coordinate-wise version of Adagrad was also tackled by Zou et al. (2019b), offering a convergence result for Adagrad with either heavy-ball or Nesterov style momentum. We obtain the same rate for heavy-ball momentum with respect to N (i.e., O(ln(N)/√N)), but we improve the dependence on the momentum parameter β1 from O((1 − β1)^{-3}) to O((1 − β1)^{-1}). Chen et al. (2019) also provided a bound for Adagrad and Adam, but without convergence guarantees for Adam for any hyper-parameter choice, and with a worse dependency on β1. Zhou et al. (2018) also cover Adagrad in the stochastic setting, however their proof technique leads to a √(1/ε) term in their bound, typically with ε = 10^−8. Finally, a convergence bound for Adam was introduced by Zou et al. (2019a). We recover the same scaling of the bound with respect to α and β2. However their bound has a dependency of O((1 − β1)^{-5}) with respect to β1, while we get O((1 − β1)^{-1}), a significant improvement. Shi et al. (2020) obtain similar convergence results for RMSProp and Adam when considering the random shuffling setup. They use an affine growth condition (i.e. the norm of the stochastic gradient is bounded by an affine function of the norm of the deterministic gradient) instead of the boundedness of the gradient, but their bound decays with the number of epochs, not stochastic updates, leading to an extra √s factor, with s the size of the dataset. Finally, Faw et al. (2022) use the same affine growth assumption to derive high probability bounds for scalar Adagrad.
Non-adaptive methods like SGD are also well studied in the non-convex setting (Ghadimi & Lan, 2013), with a convergence rate of O(1/√N) for a smooth objective with bounded variance of the gradients. Unlike adaptive methods, SGD requires knowing the smoothness constant. When adding heavy-ball momentum, Yang et al. (2016) showed that the convergence bound degrades as O((1 − β1)^{-2}), assuming that the gradients are bounded. We apply our proof technique for momentum to SGD in the Appendix, Section B, and improve this dependency to O((1 − β1)^{-1}). Recent work by Liu et al. (2020) achieves the same dependency with weaker assumptions. Defazio (2020) provided an in-depth analysis of SGD-M with a tight Lyapunov analysis.
## 4 Main Results
For a number of iterations N ∈ N∗, we define a random index τ with values in {0, . . . , N − 1}, so that

$$\forall j\in\mathbb{N},\ j<N,\quad\mathbb{P}\left[\tau=j\right]\propto1-\beta_{1}^{N-j}.\tag{9}$$

If β1 = 0, this is equivalent to sampling τ uniformly in {0, . . . , N − 1}. If β1 > 0, the last few 1/(1 − β1) iterations are sampled rarely, and iterations older than a few times that number are sampled almost uniformly. Our results bound the expected squared norm of the gradient at iteration τ, which is standard for non-convex stochastic optimization (Ghadimi & Lan, 2013).
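A possible way to draw the random index τ of (9) is the following short sketch (the function name is ours):

```python
import numpy as np

def sample_tau(N, beta1, rng=np.random.default_rng()):
    """Sample tau in {0, ..., N-1} with P[tau = j] proportional to
    1 - beta1**(N - j), as in (9); beta1 = 0 recovers the uniform distribution."""
    j = np.arange(N)
    weights = 1.0 - beta1 ** (N - j)
    return rng.choice(N, p=weights / weights.sum())
```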
## 4.1 Convergence Bounds
For simplicity, we first give convergence results for β1 = 0, along with a complete proof in Section 5. We then provide the results with momentum, with their proofs in the Appendix, Section A.6. We also provide a bound on the convergence of SGD with a O(1/(1 − β1)) dependency in the Appendix, Section B.2, along with its proof in Section B.4.
## No Heavy-Ball Momentum
Theorem 1 (Convergence of Adagrad without momentum). *Given the assumptions from Section 2.3, the iterates* xn *defined in Section 2.2 with hyper-parameters verifying* β2 = 1, αn = α *with* α > 0 *and* β1 = 0, *and* τ *defined by* (9), *we have for any* N ∈ N∗,
$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|^{2}\right]\leq2R\frac{F(x_{0})-F_{\star}}{\alpha\sqrt{N}}+\frac{1}{\sqrt{N}}\left(4dR^{2}+\alpha dRL\right)\ln\left(1+\frac{NR^{2}}{\epsilon}\right).\tag{10}$$
Theorem 2 (Convergence of Adam without momentum). *Given the assumptions from Section 2.3, the iterates* xn *defined in Section 2.2 with hyper-parameters verifying* 0 < β2 < 1, αn = α√((1 − β2^n)/(1 − β2)) *with* α > 0 *and* β1 = 0, *and* τ *defined by* (9), *we have for any* N ∈ N∗,

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R{\frac{F(x_{0})-F_{*}}{\alpha N}}+E\left({\frac{1}{N}}\ln\left(1+{\frac{R^{2}}{(1-\beta_{2})\epsilon}}\right)-\ln(\beta_{2})\right),\tag{11}$$

with

$$E={\frac{4dR^{2}}{\sqrt{1-\beta_{2}}}}+{\frac{\alpha dRL}{1-\beta_{2}}}.\tag{12}$$
## With Heavy-Ball Momentum
Theorem 3 (Convergence of Adagrad with momentum). *Given the assumptions from Section 2.3, the iterates* xn *defined in Section 2.2 with hyper-parameters verifying* β2 = 1, αn = α *with* α > 0 *and* 0 ≤ β1 < 1, *and* τ *defined by* (9), *we have for any* N ∈ N∗ *such that* N > β1/(1 − β1),

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R\sqrt{N}\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+\frac{\sqrt{N}}{\tilde{N}}E\ln\left(1+\frac{NR^{2}}{\epsilon}\right),$$

*with* Ñ = N − β1/(1 − β1), *and*

$$E=\alpha dRL+\frac{12dR^{2}}{1-\beta_{1}}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{1-\beta_{1}}.$$
Theorem 4 (Convergence of Adam with momentum). *Given the assumptions from Section 2.3, the iterates* xn *defined in Section 2.2 with hyper-parameters verifying* 0 < β2 < 1, 0 ≤ β1 < β2, *and* αn = α(1 − β1)√((1 − β2^n)/(1 − β2)) *with* α > 0, *and* τ *defined by* (9), *we have for any* N ∈ N∗ *such that* N > β1/(1 − β1),

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+E\left(\frac{1}{\tilde{N}}\ln\left(1+\frac{R^{2}}{(1-\beta_{2})\epsilon}\right)-\frac{N}{\tilde{N}}\ln(\beta_{2})\right),\tag{13}$$

*with* Ñ = N − β1/(1 − β1), *and*

$$E=\frac{\alpha dRL(1-\beta_{1})}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})}+\frac{12dR^{2}\sqrt{1-\beta_{1}}}{(1-\beta_{1}/\beta_{2})^{3/2}\sqrt{1-\beta_{2}}}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})^{3/2}}.$$
## 4.2 Analysis Of The Bounds
Dependency on d. The dependency on d is present in previous works on coordinate-wise adaptive methods (Zou et al., 2019a;b). Note however that R is defined as the ℓ∞ bound on the stochastic gradient, so that in the case where the gradient has a similar scale along all dimensions, dR² would be a reasonable bound for ‖∇f(x)‖₂². However, if many dimensions contribute little to the norm of the gradient, this would still lead to a worse dependency on d than e.g. scalar Adagrad (Ward et al., 2019) or SGD. Diving into the technicalities of the proof to come, we will see in Section 5 that we apply Lemma 5.2 once per dimension. The contribution from each coordinate is mostly independent of the actual scale of its gradients (as it only appears in the log), so that the right-hand side of the convergence bound will grow as d. In contrast, the scalar version of Adagrad (Ward et al., 2019) has a single learning rate, so that Lemma 5.2 is only applied once, removing the dependency on d. However, this variant is rarely used in practice.
Almost sure bound on the gradient. We chose to assume the existence of an almost sure uniform ℓ∞ bound on the gradients, given by (7). This is a strong assumption, although it is weaker than the one used by Duchi et al. (2011) for Adagrad in the convex case, where the iterates were assumed to be almost surely bounded. There exist a few real-life problems that verify this assumption, for instance logistic regression without weight penalty, and with bounded inputs. It is possible instead to assume only a uniform bound on the expected gradient ∇F(x), as done by Ward et al. (2019) and Zou et al. (2019b). This however leads to a bound on E[‖∇F(xτ)‖₂^{4/3}]^{3/2} instead of a bound on E[‖∇F(xτ)‖₂²], all the other terms staying the same. We provide the sketch of the proof using Hölder's inequality in the Appendix, Section A.7. It is also possible to replace the bound on the gradient with an affine growth condition, i.e. the norm of the stochastic gradient is bounded by an affine function of the norm of the expected gradient. A proof for scalar Adagrad is provided by Faw et al. (2022). Shi et al. (2020) do the same for RMSProp, however their convergence bound decays as O(log(T)/√T) with T the number of epochs, not the number of updates, leading to a significantly less tight bound for large datasets.
Impact of heavy-ball momentum. Looking at Theorems 3 and 4, we see that increasing β1 always deteriorates the bounds. Taking β1 = 0 in those theorems gives us almost exactly the bound without heavy-ball momentum from Theorems 1 and 2, up to a factor 3 in the terms of the form dR². As discussed in Section 3, previous bounds for Adagrad in the non-convex setting deteriorate as O((1 − β1)^{-3}) (Zou et al., 2019b), while bounds for Adam deteriorate as O((1 − β1)^{-5}) (Zou et al., 2019a). Our unified proof for Adam and Adagrad achieves a dependency of O((1 − β1)^{-1}), a significant improvement. We refer the reader to the Appendix, Section A.3, for a detailed analysis. While our dependency still contradicts the benefits of using momentum observed in practice, see Section 6, our tighter analysis is a step in the right direction.

On the sampling of τ. Note that in (9), we sample the latest iterations with a lower probability. This can be explained by the fact that the proof technique for stochastic optimization in the non-convex case is based on the idea that for every iteration n, either ∇F(xn) is small, or F(xn+1) will decrease by some amount.
However, when introducing momentum, and especially when taking the limit β1 → 1, the latest gradient ∇F(xn) has almost no influence over xn+1, as the momentum term updates slowly. Momentum *spreads* the influence of the gradients over time, and thus, it will take a few updates for a gradient to have fully influenced the iterate xn and thus the value of the function F(xn). From a formal point of view, the sampling weights given by (9) naturally appear as part of the proof which is presented in Section A.6.
## 4.3 Optimal Finite Horizon Adam Is Adagrad
Let us take a closer look at the result from Theorem 2. It could seem that some quantities in the bound can explode, but this is not the case for any reasonable values of α, β2 and N. Let us try to find the best possible rate of convergence for Adam for a finite horizon N, i.e. q ∈ R+ such that E[‖∇F(xτ)‖²] = O(ln(N)N^{-q}) for some choice of the hyper-parameters α(N) and β2(N). Given that the upper bound in (11) is a sum of non-negative terms, we need each term to be of the order of ln(N)N^{-q} or negligible. Let us assume that this rate is achieved for α(N) and β2(N). The bound tells us that convergence can only be achieved if lim α(N) = 0 and lim β2(N) = 1, with the limits taken for N → ∞. This motivates us to assume that there exists an asymptotic development of α(N) ∝ N^{-a} + o(N^{-a}), and of 1 − β2(N) ∝ N^{-b} + o(N^{-b}), for a and b positive. Thus, let us consider only the leading term in those developments, ignoring the leading constant (which is assumed to be non-zero). Let us further assume that ε ≤ R²; we then have
$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{N^{1-a}}+E\left(\frac{1}{N}\ln\left(\frac{R^{2}N^{b}}{\epsilon}\right)+\frac{N^{-b}}{1-N^{-b}}\right),\tag{14}$$
with E = 4dR²N^{b/2} + dRL N^{b−a}. Let us ignore the log terms for now, and use N^{−b}/(1 − N^{−b}) ∼ N^{−b} for N → ∞, to get

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\lessapprox2R\frac{F(x_{0})-F_{*}}{N^{1-a}}+4dR^{2}N^{b/2-1}+4dR^{2}N^{-b/2}+dRLN^{b-a-1}+dRLN^{-a}.$$
Adding back the logarithmic term, the best rate we can obtain is O(ln(N)/√N), and it is only achieved for a = 1/2 and b = 1, i.e., α = α1/√N and β2 = 1 − 1/N. We can see the resemblance between Adagrad on one side and Adam with a finite horizon and such parameters on the other. Indeed, an exponential moving average with a parameter β2 = 1 − 1/N has a typical averaging window length of size N, while Adagrad would be an exact average of the past N terms. In particular, the bound for Adam now becomes

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha_{1}\sqrt{N}}+\frac{1}{\sqrt{N}}\left(4dR^{2}+\alpha_{1}dRL\right)\left(\ln\left(1+\frac{R^{2}N}{\epsilon}\right)+\frac{N}{N-1}\right),\tag{15}$$

which differs from (10) only by a +N/(N − 1) term next to the log.
Adam and Adagrad are twins. Our analysis highlights an important fact: *Adam is to Adagrad like constant step size SGD is to decaying step size SGD*. While Adagrad is asymptotically optimal, it also leads to a slower decrease of the term proportional to F(x0) − F∗, as 1/√N instead of 1/N for Adam. During the initial phase of training, it is likely that this term dominates the loss, which could explain the popularity of Adam for training deep neural networks rather than Adagrad. With its default parameters, Adam will not converge. It is however possible to choose α and β2 to achieve an ε-critical point for ε arbitrarily small, and, for a known time horizon, they can be chosen to obtain the exact same bound as Adagrad.
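The claim that β2 = 1 − 1/N gives an effective averaging window of about N steps can be checked with a quick numerical illustration (ours, not part of the paper's argument): the sketch below compares the total weight that vN puts on past squared gradients under Adam and under Adagrad.

```python
import numpy as np

N = 1000
beta2 = 1 - 1 / N
k = np.arange(N)                 # age of each past squared gradient
adam_weights = beta2 ** k        # weights inside v_N for Adam (exponential decay)
adagrad_weights = np.ones(N)     # weights inside v_N for Adagrad (plain sum)

# Adam keeps about (1 - 1/e) of the mass Adagrad keeps, so for gradients of a
# similar scale both accumulators are of the same order.
print(adam_weights.sum() / adagrad_weights.sum())   # ~0.632
```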
## 5 Proofs For β1 = 0 (No Momentum)
We assume here for simplicity that β1 = 0, i.e., there is no heavy-ball style momentum. Taking n ∈ N∗, the recursions introduced in Section 2.2 can be simplified into

$$\begin{cases}v_{n,i}&=\beta_{2}v_{n-1,i}+\left(\nabla_{i}f_{n}(x_{n-1})\right)^{2},\\ x_{n,i}&=x_{n-1,i}-\alpha_{n}{\frac{\nabla_{i}f_{n}(x_{n-1})}{\sqrt{\epsilon+v_{n,i}}}}.\end{cases}\tag{16}$$
Remember that we recover Adagrad when αn = α for α > 0 and β2 = 1, while Adam is obtained by taking 0 < β2 < 1, α > 0, and

$$\alpha_{n}=\alpha\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}.\tag{17}$$
Throughout the proof we denote by En−1 [·] the conditional expectation with respect to f1, . . . , fn−1. In particular, xn−1 and vn−1 are deterministic knowing f1, . . . , fn−1. For all n ∈ N∗, we also define ṽn ∈ R^d so that for all i ∈ [d],

$$\tilde{v}_{n,i}=\beta_{2}v_{n-1,i}+\mathbb{E}_{n-1}\left[(\nabla_{i}f_{n}(x_{n-1}))^{2}\right],\tag{18}$$

i.e., we replace the last gradient contribution by its expected value conditioned on f1, . . . , fn−1.
## 5.1 Technical Lemmas
A problem posed by the update (16) is the correlation between the numerator and denominator. This prevents us from easily computing the conditional expectation and as noted by Reddi et al. (2018), the expected direction of update can have a positive dot product with the objective gradient. It is however possible to control the deviation from the descent direction, following Ward et al. (2019) with this first lemma.
Lemma 5.1 (adaptive updates approximately follow a descent direction). *For all* n ∈ N∗ *and* i ∈ [d], *we have:*

$$\mathbb{E}_{n-1}\left[\nabla_{i}F(x_{n-1}){\frac{\nabla_{i}f_{n}(x_{n-1})}{\sqrt{\epsilon+v_{n,i}}}}\right]\geq{\frac{(\nabla_{i}F(x_{n-1}))^{2}}{2\sqrt{\epsilon+{\tilde{v}}_{n,i}}}}-2R\,\mathbb{E}_{n-1}\left[{\frac{(\nabla_{i}f_{n}(x_{n-1}))^{2}}{\epsilon+v_{n,i}}}\right].\tag{19}$$
Proof. We take i ∈ [d] and denote G = ∇iF(xn−1), g = ∇ifn(xn−1), v = vn,i and ṽ = ṽn,i.
$$\mathbb{E}_{n-1}\left[{\frac{Gg}{\sqrt{\epsilon+v}}}\right]=\mathbb{E}_{n-1}\left[{\frac{Gg}{\sqrt{\epsilon+\tilde{v}}}}\right]+\mathbb{E}_{n-1}\left[{\underbrace{Gg\left({\frac{1}{\sqrt{\epsilon+v}}}-{\frac{1}{\sqrt{\epsilon+\tilde{v}}}}\right)}_{A}}\right].\tag{20}$$
Given that g and ṽ are independent knowing f1, . . . , fn−1, we immediately have

$$\mathbb{E}_{n-1}\left[\frac{Gg}{\sqrt{\epsilon+\tilde{v}}}\right]=\frac{G^{2}}{\sqrt{\epsilon+\tilde{v}}}.\tag{21}$$
Now we need to control the size of the second term A,
$$\begin{aligned}
A &= Gg\,\frac{\tilde{v}-v}{\sqrt{\epsilon+v}\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)}\\
&= Gg\,\frac{\mathbb{E}_{n-1}\left[g^{2}\right]-g^{2}}{\sqrt{\epsilon+v}\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)}\\
|A| &\leq \underbrace{\frac{|Gg|\,\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+v}\,(\epsilon+\tilde{v})}}_{\kappa}+\underbrace{\frac{|Gg|\,g^{2}}{(\epsilon+v)\sqrt{\epsilon+\tilde{v}}}}_{\rho}.
\end{aligned}$$

The last inequality comes from the fact that √(ε + v) + √(ε + ṽ) ≥ max(√(ε + v), √(ε + ṽ)) and |En−1[g²] − g²| ≤ En−1[g²] + g². Following Ward et al. (2019), we can use the following inequality to bound κ and ρ,

$$\forall\lambda>0,\,x,y\in\mathbb{R},\quad xy\leq{\frac{\lambda}{2}}x^{2}+{\frac{y^{2}}{2\lambda}}.\tag{22}$$
First applying (22) to κ with
$$\lambda={\frac{\sqrt{\epsilon+\tilde{v}}}{2}},\;x={\frac{|G|}{\sqrt{\epsilon+\tilde{v}}}},\;y={\frac{|g|\,\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}\sqrt{\epsilon+v}}},$$
we obtain
$$\kappa\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{g^{2}\mathbb{E}_{n-1}\left[g^{2}\right]^{2}}{(\epsilon+\tilde{v})^{3/2}(\epsilon+v)}.$$
Given that ε + ṽ ≥ En−1[g²] and taking the conditional expectation, we can simplify as
$$\mathbb{E}_{n-1}\left[\kappa\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}}\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{23}$$
Given that √(En−1[g²]) ≤ √(ε + ṽ) and √(En−1[g²]) ≤ R, we can simplify (23) as

$$\mathbb{E}_{n-1}\left[\kappa\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+R\,\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{24}$$
Now turning to ρ, we use (22) with
$$\lambda=\frac{\sqrt{\epsilon+\tilde{v}}}{2\mathbb{E}_{n-1}\left[g^{2}\right]},\;x=\frac{|Gg|}{\sqrt{\epsilon+\tilde{v}}},\;y=\frac{g^{2}}{\epsilon+v},\tag{25}$$
we obtain
$$\rho\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}\frac{g^{2}}{\mathbb{E}_{n-1}\left[g^{2}\right]}+\frac{\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}}\frac{g^{4}}{(\epsilon+v)^{2}}.$$
Given that ε + v ≥ g² and taking the conditional expectation we obtain

$$\mathbb{E}_{n-1}\left[\rho\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{\mathbb{E}_{n-1}\left[g^{2}\right]}{\sqrt{\epsilon+\tilde{v}}}\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right],\tag{26}$$
which we simplify using the same argument as for (24) into

$$\mathbb{E}_{n-1}\left[\rho\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+R\,\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{27}$$
Notice that in (25), we possibly divide by zero. It suffices to notice that if En−1[g²] = 0, then g² = 0 a.s., so that ρ = 0 and (27) is still verified. Summing (24) and (27), we can bound

$$\mathbb{E}_{n-1}\left[|A|\right]\leq\frac{G^{2}}{2\sqrt{\epsilon+\tilde{v}}}+2R\,\mathbb{E}_{n-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{28}$$
Injecting (28) and (21) into (20) finishes the proof.

Anticipating Section 5.2, the previous lemma gives us a bound on the deviation from a descent direction. While for a specific iteration, this deviation can take us away from a descent direction, the next lemma tells us that the sum of those deviations cannot grow larger than a logarithmic term. This key insight, introduced in Ward et al. (2019), is what makes the proof work.
Lemma 5.2 (sum of ratios with the denominator being the sum of past numerators). *We assume we have* 0 < β2 ≤ 1 *and a non-negative sequence* (an)n∈N∗. *We define for all* n ∈ N∗, bn = Σ_{j=1}^{n} β2^{n−j} aj. *We have*

$$\sum_{j=1}^{N}\frac{a_{j}}{\epsilon+b_{j}}\leq\ln\left(1+\frac{b_{N}}{\epsilon}\right)-N\ln(\beta_{2}).\tag{29}$$
Proof. Using the concavity of ln, which gives a/x ≤ ln(x) − ln(x − a) for 0 ≤ a < x, and the fact that bj ≥ aj ≥ 0, we have for all j ∈ N∗,
$$\begin{array}{l}{{\frac{a_{j}}{\epsilon+b_{j}}\leq\ln(\epsilon+b_{j})-\ln(\epsilon+b_{j}-a_{j})}}\\ {{\qquad=\ln(\epsilon+b_{j})-\ln(\epsilon+\beta_{2}b_{j-1})}}\\ {{\qquad=\ln\left(\frac{\epsilon+b_{j}}{\epsilon+b_{j-1}}\right)+\ln\left(\frac{\epsilon+b_{j-1}}{\epsilon+\beta_{2}b_{j-1}}\right).}}\end{array}$$
The first term forms a telescoping series, while the second one is bounded by − ln(β2). Summing over all j ∈ [N] gives the desired result.
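As a sanity check (ours, not part of the proof), the bound (29) can be verified numerically on an arbitrary non-negative sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
eps, beta2, N = 1e-8, 0.99, 500
a = rng.random(N) ** 2              # arbitrary non-negative sequence (a_n)

b, lhs = 0.0, 0.0
for a_j in a:
    b = beta2 * b + a_j             # b_j = sum_{k <= j} beta2**(j - k) * a_k
    lhs += a_j / (eps + b)          # left-hand side of (29)

rhs = np.log(1 + b / eps) - N * np.log(beta2)
assert lhs <= rhs
print(lhs, rhs)
```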
## 5.2 Proof Of Adam And Adagrad Without Momentum
Let us take an iteration n ∈ N∗; we define the update un ∈ R^d as

$$\forall i\in[d],\quad u_{n,i}=\frac{\nabla_{i}f_{n}(x_{n-1})}{\sqrt{\epsilon+v_{n,i}}}.\tag{30}$$
Adagrad. As explained in Section 2.2, we have αn = α for α > 0. Using the smoothness of F (8), we have

$$F(x_{n})\leq F(x_{n-1})-\alpha\nabla F(x_{n-1})^{T}u_{n}+\frac{\alpha^{2}L}{2}\left\|u_{n}\right\|_{2}^{2}.\tag{31}$$

Taking the conditional expectation with respect to f1, . . . , fn−1, we can apply the descent Lemma 5.1. Notice that due to the a.s. ℓ∞ bound on the gradients (7), we have for any i ∈ [d], √(ε + ṽn,i) ≤ R√n, so that

$${\frac{\alpha\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2{\sqrt{\epsilon+{\tilde{v}}_{n,i}}}}}\geq{\frac{\alpha\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2R{\sqrt{n}}}}.\tag{32}$$
This gives us
$$\mathbb{E}_{n-1}\left[F(x_{n})\right]\leq F(x_{n-1})-\frac{\alpha}{2R\sqrt{n}}\left\|\nabla F(x_{n-1})\right\|_{2}^{2}+\left(2\alpha R+\frac{\alpha^{2}L}{2}\right)\mathbb{E}_{n-1}\left[\left\|u_{n}\right\|_{2}^{2}\right].$$
Summing the previous inequality for all n ∈ [N], taking the complete expectation, and using that √n ≤ √N, gives us
$$\mathbb{E}\left[F(x_{N})\right]\leq F(x_{0})-\frac{\alpha}{2R\sqrt{N}}\sum_{n=0}^{N-1}\mathbb{E}\left[\|\nabla F(x_{n})\|_{2}^{2}\right]+\left(2\alpha R+\frac{\alpha^{2}L}{2}\right)\sum_{n=0}^{N-1}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right].$$
From there, we can bound the last sum on the right hand side using Lemma 5.2 once for each dimension.
Rearranging the terms, we obtain the result of Theorem 1.
Adam. As given by (5) in Section 2.2, we have αn = α√((1 − β2^n)/(1 − β2)) for α > 0. Using the smoothness of F defined in (8), we have

$$F(x_{n})\leq F(x_{n-1})-\alpha_{n}\nabla F(x_{n-1})^{T}u_{n}+\frac{\alpha_{n}^{2}L}{2}\left\|u_{n}\right\|_{2}^{2}.\tag{33}$$

Thanks to the a.s. ℓ∞ bound on the gradients (7), we have for any i ∈ [d], √(ε + ṽn,i) ≤ R√(Σ_{j=0}^{n−1} β2^j) = R√((1 − β2^n)/(1 − β2)), so that

$$\alpha_{n}\frac{\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2\sqrt{\epsilon+\tilde{v}_{n,i}}}\geq\frac{\alpha\left(\nabla_{i}F(x_{n-1})\right)^{2}}{2R}.\tag{34}$$

Taking the conditional expectation with respect to f1, . . . , fn−1, we can apply the descent Lemma 5.1 and use (34) to obtain from (33),
$$\mathbb{E}_{n-1}\left[F(x_{n})\right]\leq F(x_{n-1})-\frac{\alpha}{2R}\left\|\nabla F(x_{n-1})\right\|_{2}^{2}+\left(2\alpha_{n}R+\frac{\alpha_{n}^{2}L}{2}\right)\mathbb{E}_{n-1}\left[\left\|u_{n}\right\|_{2}^{2}\right].$$
Given that β2 < 1, we have αn ≤ α/√(1 − β2). Summing the previous inequality for all n ∈ [N] and taking the complete expectation yields
$$\mathbb{E}\left[F(x_{N})\right]\leq F(x_{0})-\frac{\alpha}{2R}\sum_{n=0}^{N-1}\mathbb{E}\left[\|\nabla F(x_{n})\|_{2}^{2}\right]+\left(\frac{2\alpha R}{\sqrt{1-\beta_{2}}}+\frac{\alpha^{2}L}{2(1-\beta_{2})}\right)\sum_{n=0}^{N-1}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right].$$
Applying Lemma 5.2 for each dimension and rearranging the terms finishes the proof of Theorem 2.
## 6 Experiments
In Figure 1, we compare the effective dependency of the average squared norm of the gradient on the parameters α, β1 and β2 for Adam, when used on a toy task and on CIFAR-10.
## 6.1 Setup
Toy problem. In order to support the bounds presented in Section 4, in particular the dependency in β2, we test Adam on a specifically crafted toy problem. We take x ∈ R^6 and define for all i ∈ [6], pi = 10^−i. We take (Qi)i∈[6] Bernoulli variables with P [Qi = 1] = pi. We then define f for all x ∈ R^6 as
$$f(x)=\sum_{i\in[6]}(1-Q_{i})\,{\rm Huber}(x_{i}-1)+\frac{Q_{i}}{\sqrt{p_{i}}}\,{\rm Huber}(x_{i}+1),\tag{35}$$
with for all y ∈ R,
$$\mathrm{Huber}(y)={\left\{\begin{array}{l l}{\quad{\frac{y^{2}}{2}}}&{{\mathrm{when~}}|y|\leq1}\\ {\quad|y|-{\frac{1}{2}}}&{{\mathrm{otherwise.}}}\end{array}\right.}$$
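For reference, here is a minimal NumPy sketch of one stochastic realization of the toy objective (35); the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10.0 ** -np.arange(1, 7)        # p_i = 10^{-i} for i in [6]

def huber(y):
    return np.where(np.abs(y) <= 1, 0.5 * y ** 2, np.abs(y) - 0.5)

def sample_f(x):
    """One stochastic realization of the toy objective (35), for x in R^6."""
    q = rng.random(6) < p           # Bernoulli variables Q_i with P[Q_i = 1] = p_i
    return np.sum((1 - q) * huber(x - 1) + (q / np.sqrt(p)) * huber(x + 1))
```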
![10_image_0.png](10_image_0.png)
(a) Average squared norm of the gradient on a toy task, see Section 6 for more details. For the α and 1 − β2 curves, we initialize close to the optimum to make the F(x0) − F∗ term negligible.
(b) Average squared norm of the gradient of a small convolutional model (Gitman & Ginsburg, 2017) trained on CIFAR-10, with a random initialization.
The full gradient is evaluated every epoch.
Figure 1: Observed average squared norm of the objective gradients after a fixed number of iterations when varying a single parameter out of α, 1 − β1 and 1 − β2, on a toy task (left, 10^6 iterations) and on CIFAR-10 (right, 600 epochs with a batch size of 128). All curves are averaged over 3 runs; error bars are negligible except for small values of α on CIFAR-10. See Section 6 for details.
![10_image_1.png](10_image_1.png)
Figure 2: Training trajectories for varying values of α ∈ {10−4, 10−3}, β1 ∈ {0., 0.5, 0.8, 0.9, 0.99} and β2 ∈ {0.9, 0.99, 0.999, 0.9999}. The top row (resp. bottom) gives the training loss (resp. squared norm of the expected gradient). The left column uses all corrective terms in the original Adam algorithm, the middle column drops the corrective term on mn (equivalent to our proof setup), and the right column drops the corrective term on vn. We notice a limited impact when dropping the corrective term on mn, but dropping the corrective term on vn has a much stronger impact.
Intuitively, each coordinate is pointing most of the time towards 1, but exceptionally towards −1 with a weight of 1/√pi. Those rare events happen less and less often as i increases, but with an increasing weight. The weights are chosen so that all the coordinates of the gradient have the same variance1. It is necessary to take different probabilities for each coordinate: if we use the same p for all, we observe a phase transition when 1 − β2 ≈ p, but not the continuous improvement we obtain on Figure 1a.
We plot the variation of E[‖∇F(xτ)‖₂²] after 10^6 iterations with batch size 1, when varying either α, 1 − β1 or 1 − β2 through a range of 13 values uniformly spaced in log-scale between 10^−6 and 1. When varying α, we take β1 = 0 and β2 = 1 − 10^−6. When varying β1, we take α = 10^−5 and β2 = 1 − 10^−6 (i.e. β2 is such that we are in the Adagrad-like regime). Finally, when varying β2, we take β1 = 0 and α = 10^−6. We start from x0 close to the optimum by running first 10^6 iterations with α = 10^−4, then 10^6 iterations with α = 10^−5, always with β2 = 1 − 10^−6. This allows us to have F(x0) − F∗ ≈ 0 in (11) and (13) and to focus on the second part of both bounds. All curves are averaged over three runs. Error bars are plotted but not visible in log-log scale.
CIFAR-10. We train a simple convolutional network (Gitman & Ginsburg, 2017) on the CIFAR-10² image classification dataset. Starting from a random initialization, we train the model on a single V100 for 600 epochs with a batch size of 128, evaluating the full training gradient after each epoch. This is a proxy for E[‖∇F(xτ)‖₂²], which would be too costly to evaluate exactly. All runs use the default config α = 10^−3, β2 = 0.999 and β1 = 0.9, and we then change one of the parameters. We take α from a uniform range in log-space between 10^−6 and 10^−2 with 9 values; for 1 − β1 the range is from 10^−5 to 0.3 with 9 values, and for 1 − β2, from 10^−6 to 10^−1 with 11 values. Unlike for the toy problem, we do not initialize close to the optimum, as even after 600 epochs, the norm of the gradients indicates that we are not at a critical point. All curves are averaged over three runs. Error bars are plotted but not visible in log-log scale, except for large values of α.
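For reproducibility, the hyper-parameter grids described above can be generated as follows; this is a sketch under the assumption that all sweeps are log-uniform, as stated explicitly for α.

```python
import numpy as np

# Toy problem: 13 log-spaced values between 1e-6 and 1 for alpha, 1 - beta1, 1 - beta2.
toy_grid = np.logspace(-6, 0, 13)

# CIFAR-10 sweeps around the default config alpha=1e-3, beta1=0.9, beta2=0.999.
alpha_grid = np.logspace(-6, -2, 9)
one_minus_beta1_grid = np.logspace(-5, np.log10(0.3), 9)
one_minus_beta2_grid = np.logspace(-6, -1, 11)
```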
## 6.2 Analysis
Toy problem. Looking at Figure 1a, we observe a continual improvement as β2 increases. Fitting a linear regression in log-log scale of E[‖∇F(xτ)‖₂²] with respect to 1 − β2 gives a slope of 0.56, which is compatible with our bound (11), in particular the dependency in O(1/√(1 − β2)). As we initialize close to the optimum, a small step size α yields, as expected, the best performance. Doing the same regression in log-log scale, we find a slope of 0.87, which is again compatible with the O(α) dependency of the second term in (11). Finally, we observe a limited impact of β1, except when 1 − β1 is small. The regression in log-log scale gives a slope of −0.16, while our bound predicts a slope of −1.

CIFAR-10. Let us now turn to Figure 1b. As we start from random weights for this problem, we observe that a large step size gives the best performance, although we observe a high variance for the largest α. This indicates that training becomes unstable for large α, which is not predicted by the theory. This is likely a consequence of the bounded gradient assumption (7) not being verified for deep neural networks.
We observe a small improvement as 1 − β2 decreases, although nowhere near what we observed on our toy problem. Finally, we observe a sweet spot for the momentum β1, not predicted by our theory. We conjecture that this is due to the variance reduction effect of momentum (averaging of the gradients over multiple mini-batches, while the weights have not moved so much as to invalidate past information).
## 6.3 Impact Of The Adam Corrective Terms
Using the same experimental setup on CIFAR-10, we compare the impact of removing either of the corrective terms of the original Adam algorithm (Kingma & Ba, 2015), as discussed in Section 2.2. We ran a Cartesian product of training runs for 100 epochs, with β1 ∈ {0, 0.5, 0.8, 0.9, 0.99}, β2 ∈ {0.9, 0.99, 0.999, 0.9999}, and α ∈ {10^−4, 10^−3}. We report both the training loss and the norm of the expected gradient on Figure 2. We notice a limited difference when dropping the corrective term on mn, but dropping the term on vn has an important impact on the training trajectories. This confirms our motivation for simplifying the proof by removing the corrective term on the momentum.

1We deviate from the a.s. bounded gradient assumption for this experiment, see Section 4.2 for a discussion on a.s. bound vs bound in expectation.
2https://www.cs.toronto.edu/~kriz/cifar.html
## 7 Conclusion
We provide a simple proof of the convergence of Adam and Adagrad without heavy-ball style momentum. Our analysis highlights a link between the two algorithms: with the right hyper-parameters, Adam converges like Adagrad. The extension to heavy-ball momentum is more complex, but we significantly improve the dependence on the momentum parameter for Adam, Adagrad, as well as SGD. We exhibit a toy problem where the dependency on α and β2 experimentally matches our prediction. However, we do not predict the practical interest of momentum, so that improvements to the proof are left for future work.
## Broader Impact Statement
The present theoretical results on the optimization of non-convex losses in a stochastic setting impact our understanding of the training of deep neural networks. They might allow a deeper understanding of neural network training dynamics and thus reinforce any existing deep learning applications. We however foresee no direct negative impact on society.
## References
Alekh Agarwal, Martin J Wainwright, Peter L Bartlett, and Pradeep K Ravikumar. Information-theoretic lower bounds on the oracle complexity of convex optimization. In *Advances in Neural Information Processing Systems*, 2009.
Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the convergence of a class of Adam-type algorithms for non-convex optimization. In *International Conference on Learning Representations*, 2019.
Aaron Defazio. Momentum via primal averaging: Theoretical insights and learning rate schedules for nonconvex optimization. *arXiv preprint arXiv:2010.00406*, 2020.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12(Jul), 2011.
John Duchi, Michael I Jordan, and Brendan McMahan. Estimation, optimization, and parallelism when data is sparse. In *Advances in Neural Information Processing Systems 26*, 2013.
Biyi Fang and Diego Klabjan. Convergence analyses of online adam algorithm in convex setting and two-layer relu neural network. *arXiv preprint arXiv:1905.09356*, 2019.
Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, and Rachel Ward. The power of adaptivity in sgd: Self-tuning step sizes with unbounded gradients and affine variance.
In Po-Ling Loh and Maxim Raginsky (eds.), *Proceedings of Thirty Fifth Conference on Learning Theory*, Proceedings of Machine Learning Research. PMLR, 2022.
Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. *SIAM Journal on Optimization*, 23(4), 2013.
Igor Gitman and Boris Ginsburg. Comparison of batch normalization and weight normalization algorithms for the large-scale image classification. *arXiv preprint arXiv:1709.08145*, 2017.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *Proc. of the International Conference on Learning Representations (ICLR)*, 2015.
Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. Canonical tensor decomposition for knowledge base completion. *arXiv preprint arXiv:1806.07297*, 2018.
Xiaoyu Li and Francesco Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes.
In *AI Stats*, 2019.
Yanli Liu, Yuan Gao, and Wotao Yin. An improved analysis of stochastic gradient descent with momentum. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural* Information Processing Systems, volume 33, pp. 18261–18271. Curran Associates, Inc., 2020. URL https: //proceedings.neurips.cc/paper/2020/file/d3f5d4de09ea19461dab00590df91e4f-Paper.pdf.
H Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization.
In *COLT*, 2010.
Boris T Polyak. Some methods of speeding up the convergence of iteration methods. *USSR Computational* Mathematics and Mathematical Physics, 4(5), 1964.
Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In Proc. of the International Conference on Learning Representations (ICLR), 2018.
Naichen Shi, Dawei Li, Mingyi Hong, and Ruoyu Sun. Rmsprop converges with proper hyper-parameter. In International Conference on Learning Representations, 2020.
T. Tieleman and G. Hinton. Lecture 6.5 - rmsprop. COURSERA: Neural Networks for Machine Learning, 2012.
Rachel Ward, Xiaoxia Wu, and Leon Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. In *International Conference on Machine Learning*, 2019.
Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In *Advances in Neural Information Processing Systems*, 2017.
Tianbao Yang, Qihang Lin, and Zhe Li. Unified convergence analysis of stochastic momentum methods for convex and non-convex optimization. *arXiv preprint arXiv:1604.03257*, 2016.
Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization. *arXiv preprint arXiv:1808.05671*, 2018.
Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. A sufficient condition for convergences of Adam and RMSprop. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*,
2019a.
Fangyu Zou, Li Shen, Zequn Jie, Ju Sun, and Wei Liu. Weighted Adagrad with unified momentum. *arXiv preprint arXiv:1808.03408*, 2019b.
## Supplementary Material For A Simple Convergence Proof Of Adam And Adagrad

## Overview
In Section A, we detail the results for the convergence of Adam and Adagrad with heavy-ball momentum. For an overview of the contributions of our proof technique, see Section A.4.
Then in Section B, we show how our technique also applies to SGD and improves its dependency on β1 compared with previous work by Yang et al. (2016), from O((1 − β1)^{-2}) to O((1 − β1)^{-1}). The proof is simpler than for Adam/Adagrad, and shows the generality of our technique.
## A Convergence Of Adaptive Methods With Heavy-Ball Momentum

## A.1 Setup And Notations
We recall the dynamical system introduced in Section 2.2. In the rest of this section, we take an iteration n ∈ N∗, and when needed, i ∈ [d] refers to a specific coordinate. Given x0 ∈ R^d our starting point, m0 = 0, and v0 = 0, we define

$$\begin{cases}m_{n,i}&=\beta_{1}m_{n-1,i}+\nabla_{i}f_{n}(x_{n-1}),\\ v_{n,i}&=\beta_{2}v_{n-1,i}+\left(\nabla_{i}f_{n}(x_{n-1})\right)^{2},\\ x_{n,i}&=x_{n-1,i}-\alpha_{n}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}.\end{cases}\tag{A.1}$$
For Adam, the step size is given by

$$\alpha_{n}=\alpha(1-\beta_{1}){\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}}.\tag{A.2}$$

For Adagrad (potentially extended with heavy-ball momentum), we have β2 = 1 and

$$\alpha_{n}=\alpha(1-\beta_{1}).\tag{A.3}$$
Notice that we include the factor 1 − β1 in the step size rather than in (A.1), as this allows for a more elegant proof. The original Adam algorithm included compensation factors for both β1 and β2 (Kingma & Ba, 2015) to correct the initial scale of m and v, which are initialized at 0. Adam would be exactly recovered by replacing (A.2) with

$$\alpha_{n}=\alpha\frac{1-\beta_{1}}{1-\beta_{1}^{n}}\sqrt{\frac{1-\beta_{2}^{n}}{1-\beta_{2}}}.\tag{A.4}$$

However, the denominator 1 − β1^n potentially makes (αn)n∈N∗ non monotonic, which complicates the proof. Thus, we instead replace the denominator by its limit value for n → ∞. This has little practical impact as (i) early iterates are noisy because v is averaged over a small number of gradients, so making smaller steps can be more stable, and (ii) for β1 = 0.9 (Kingma & Ba, 2015), (A.2) differs from (A.4) only for the first 50 iterations.
Throughout the proof we denote by En−1 [·] the conditional expectation with respect to f1, . . . , fn−1. In particular, xn−1 and vn−1 are deterministic knowing f1, . . . , fn−1. We introduce

$$G_{n}=\nabla F(x_{n-1})\quad{\mathrm{~and~}}\quad g_{n}=\nabla f_{n}(x_{n-1}).\tag{A.5}$$
Like in Section 5.2, we introduce the update un ∈ R^d, as well as the update without heavy-ball momentum Un ∈ R^d:

$$u_{n,i}=\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}\quad\text{and}\quad U_{n,i}=\frac{g_{n,i}}{\sqrt{\epsilon+v_{n,i}}}.\tag{A.6}$$
For any k ∈ N with k < n, we define ṽn,k ∈ R^d by

$$\tilde{v}_{n,k,i}=\beta_{2}^{k}v_{n-k,i}+\mathbb{E}_{n-k-1}\left[\sum_{j=n-k+1}^{n}\beta_{2}^{n-j}g_{j,i}^{2}\right],\tag{A.7}$$

i.e. the contributions from the k last gradients are replaced by their expected values knowing f1, . . . , fn−k−1. For k = 1, we recover the same definition as in (18).
## A.2 Results
For any total number of iterations N ∈ N∗, we define a random index τ with values in {0, . . . , N − 1}, verifying

$$\forall j\in\mathbb{N},\ j<N,\quad\mathbb{P}\left[\tau=j\right]\propto1-\beta_{1}^{N-j}.\tag{A.8}$$

If β1 = 0, this is equivalent to sampling τ uniformly in {0, . . . , N − 1}. If β1 > 0, the last few 1/(1 − β1) iterations are sampled rarely, and all iterations older than a few times that number are sampled almost uniformly. We bound the expected squared norm of the total gradient at iteration τ, which is standard for non-convex stochastic optimization (Ghadimi & Lan, 2013).
Note that like in previous works, the bound worsens as β1 increases, with a dependency of the form O((1 − β1)^{-1}). This is a significant improvement over the existing bound for Adagrad with heavy-ball momentum, which scales as (1 − β1)^{-3} (Zou et al., 2019b), or the best known bound for Adam, which scales as (1 − β1)^{-5} (Zou et al., 2019a).
Technical lemmas used to prove the following theorems are introduced in Section A.5, while the proofs of Theorems 3 and 4 are provided in Section A.6.
Theorem 3 (Convergence of Adagrad with momentum). *Given the assumptions from Section 2.3, the iterates* xn *defined in Section 2.2 with hyper-parameters verifying* β2 = 1, αn = α *with* α > 0 *and* 0 ≤ β1 < 1, *and* τ *defined by* (9), *we have for any* N ∈ N∗ *such that* N > β1/(1 − β1),

$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\leq2R\sqrt{N}\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+\frac{\sqrt{N}}{\tilde{N}}E\ln\left(1+\frac{NR^{2}}{\epsilon}\right),$$

*with* Ñ = N − β1/(1 − β1), *and*

$$E=\alpha dRL+\frac{12dR^{2}}{1-\beta_{1}}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{1-\beta_{1}}.$$
Theorem 4 (Convergence of Adam with momentum). *Given the assumptions from Section 2.3, the iterates* xn *defined in Section 2.2 with hyper-parameters verifying* 0 < β2 < 1, 0 ≤ β1 < β2, *and* αn = α(1 − β1)√((1 − β2^n)/(1 − β2)) *with* α > 0, *and* τ *defined by* (9), *we have for any* N ∈ N∗ *such that* N > β1/(1 − β1),

$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+E\left(\frac{1}{\tilde{N}}\ln\left(1+\frac{R^{2}}{(1-\beta_{2})\epsilon}\right)-\frac{N}{\tilde{N}}\ln(\beta_{2})\right),\tag{13}$$

*with* Ñ = N − β1/(1 − β1), *and*

$$E=\frac{\alpha dRL(1-\beta_{1})}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})}+\frac{12dR^{2}\sqrt{1-\beta_{1}}}{(1-\beta_{1}/\beta_{2})^{3/2}\sqrt{1-\beta_{2}}}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})^{3/2}}.$$
## A.3 Analysis Of The Results With Momentum
First notice that taking $\beta_1 \to 0$ in Theorems 3 and 4, we almost recover the results stated in Theorems 2 and 1, only losing on the term $4dR^2$, which becomes $12dR^2$.
Simplified expressions with momentum Assuming $N \gg \frac{\beta_1}{1-\beta_1}$ and $\beta_1/\beta_2 \approx \beta_1$, which is verified for typical values of $\beta_1$ and $\beta_2$ (Kingma & Ba, 2015), it is possible to simplify the bound for Adam (13) as
$$\begin{aligned}\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\lessapprox\ &2R\frac{F(x_{0})-F_{*}}{\alpha N}\\&+\left(\frac{\alpha dRL}{1-\beta_{2}}+\frac{12dR^{2}}{(1-\beta_{1})\sqrt{1-\beta_{2}}}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{(1-\beta_{1})(1-\beta_{2})^{3/2}}\right)\left(\frac{1}{N}\ln\left(1+\frac{R^{2}}{\epsilon(1-\beta_{2})}\right)-\ln(\beta_{2})\right).\end{aligned}\tag{A.9}$$
Similarly, if we assume $N \gg \frac{\beta_1}{1-\beta_1}$, we can simplify the bound for Adagrad (12) as
$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\lessapprox2R{\frac{F(x_{0})-F_{*}}{\alpha\sqrt{N}}}+{\frac{1}{\sqrt{N}}}\left(\alpha d R L+{\frac{12d R^{2}}{1-\beta_{1}}}+{\frac{2\alpha^{2}d L^{2}\beta_{1}}{1-\beta_{1}}}\right)\ln\left(1+{\frac{N R^{2}}{\epsilon}}\right).\tag{A.10}$$
Optimal finite horizon Adam is still Adagrad We can perform the same finite horizon analysis as in Section 4.3. If we take $\alpha = \frac{\tilde{\alpha}}{\sqrt{N}}$ and $\beta_2 = 1 - 1/N$, then (A.9) simplifies to
$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|^{2}\right]\lessapprox2R\frac{F(x_{0})-F_{*}}{\tilde{\alpha}\sqrt{N}}+\frac{1}{\sqrt{N}}\left(\tilde{\alpha}dRL+\frac{12dR^{2}}{1-\beta_{1}}+\frac{2\tilde{\alpha}^{2}dL^{2}\beta_{1}}{1-\beta_{1}}\right)\left(\ln\left(1+\frac{NR^{2}}{\epsilon}\right)+1\right).\tag{A.11}$$
The term $(1-\beta_2)^{3/2}$ in the denominator in (A.9) is indeed compensated by the $\alpha^2$ in the numerator, and we again recover the proper $\ln(N)/\sqrt{N}$ convergence rate, which matches (A.10) up to a $+1$ term next to the logarithm.
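As a purely illustrative sanity check, the sketch below evaluates the right-hand sides of (A.10) and (A.11) for arbitrary placeholder constants (the values of $d$, $R$, $L$, $\epsilon$, $\tilde{\alpha}$ and $F(x_0)-F_*$ are made up), showing that the two bounds coincide up to the extra $+1$ next to the logarithm.

```python
import math

# Placeholder constants, chosen only to make the two expressions comparable.
d, R, L, eps, F0_minus_Fstar = 10, 1.0, 1.0, 1e-8, 1.0
alpha_t, beta1, N = 0.1, 0.9, 10_000

E = alpha_t * d * R * L + 12 * d * R**2 / (1 - beta1) + 2 * alpha_t**2 * d * L**2 * beta1 / (1 - beta1)
first = 2 * R * F0_minus_Fstar / (alpha_t * math.sqrt(N))
log_term = math.log(1 + N * R**2 / eps)

adagrad_bound = first + E * log_term / math.sqrt(N)        # right-hand side of (A.10)
adam_bound = first + E * (log_term + 1) / math.sqrt(N)     # right-hand side of (A.11)
print(adagrad_bound, adam_bound)  # the two differ exactly by E / sqrt(N)
```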
## A.4 Overview Of The Proof, Contributions And Limitations
There are a number of steps to the proof. First, we derive a Lemma similar in spirit to the descent Lemma 5.1. There are two differences: first, when computing the dot product between the current expected gradient and each past gradient contained in the momentum, we have to re-center the expected gradient to its values in the past, using the smoothness assumption. Besides, we now have to decorrelate more terms between the numerator and denominator, as the numerator contains not only the latest gradient but a decaying sum of the past ones. We similarly extend Lemma 5.2 to support momentum-specific terms. The rest of the proof follows mostly as in Section 5, except with a few more manipulations to regroup the gradient terms coming from different iterations.
Compared with previous work (Zou et al., 2019b;a), the re-centering of past gradients in (A.14) is a key aspect in improving the dependency on $\beta_1$, at a small price paid using the smoothness of $F$, which is compensated by the introduction of the extra $G_{n-k,i}^2$ terms in (A.12). Then, a tight handling of the different summations, as well as the introduction of a non-uniform sampling of the iterates (A.8), which naturally arises when grouping the different terms in (A.49), allows us to obtain the overall improved dependency in $O((1-\beta_1)^{-1})$.
The same technique can be applied to SGD; the proof becomes simpler as there is no correlation between the step size and the gradient estimate, see Section B. To better understand the handling of momentum without the added complexity of adaptive methods, we recommend starting with that proof.
A limitation of the proof technique is that we do not show that heavy-ball momentum can lead to a variance reduction of the update. Either more powerful probabilistic results or extra regularity assumptions could allow us to further improve our worst-case bounds on the variance of the update, which in turn might lead to a bound that improves when using heavy-ball momentum.
## A.5 Technical Lemmas
We first need an updated version of Lemma 5.1 that includes momentum.
Lemma A.1 (Adaptive update with momentum approximately follows a descent direction). *Given $x_0 \in \mathbb{R}^d$, the iterates defined by the system (A.1) for $(\alpha_j)_{j\in\mathbb{N}^*}$ that is non-decreasing, and under the conditions (6), (7), and (8), as well as $0 \leq \beta_1 < \beta_2 \leq 1$, we have for all iterations $n \in \mathbb{N}^*$,*
$$\begin{aligned}\mathbb{E}\left[\sum_{i\in[d]}G_{n,i}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}\right]\geq\ &\frac{1}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]\\&-\frac{\alpha_{n}^{2}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\mathbb{E}\left[\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}\right]-\frac{3R}{\sqrt{1-\beta_{1}}}\,\mathbb{E}\left[\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right].\end{aligned}\tag{A.12}$$
Proof. We use (22) multiple times in this proof; we repeat it here for convenience:
$$\forall\lambda>0,\,x,y\in\mathbb{R},\quad xy\leq{\frac{\lambda}{2}}x^{2}+{\frac{y^{2}}{2\lambda}}.\tag{A.13}$$
Let us take an iteration $n \in \mathbb{N}^*$ for the duration of the proof. We have
$$\sum_{i\in[d]}G_{n,i}\frac{m_{n,i}}{\sqrt{\epsilon+v_{n,i}}}=\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n,i}\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}=\underbrace{\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n-k,i}\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}}_{A}+\underbrace{\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\left(G_{n,i}-G_{n-k,i}\right)\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}}_{B}.\tag{A.14}$$
Let us now take an index $0 \leq k \leq n-1$. We show that the contribution of past gradients $G_{n-k}$ and $g_{n-k}$ due to the heavy-ball momentum can be controlled thanks to the decay term $\beta_1^k$. Let us first have a look at $B$. Using (A.13) with
$$\lambda=\frac{\sqrt{1-\beta_{1}}}{2R\sqrt{k+1}},\ x=|G_{n,i}-G_{n-k,i}|\,,\ y=\frac{|g_{n-k,i}|}{\sqrt{\epsilon+v_{n,i}}},$$
we have
$$|B|\leq\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\left({\frac{\sqrt{1-\beta_{1}}}{4R\sqrt{k+1}}}\left(G_{n,i}-G_{n-k,i}\right)^{2}+{\frac{R\sqrt{k+1}}{\sqrt{1-\beta_{1}}}}{\frac{g_{n-k,i}^{2}}{\epsilon+v_{n,i}}}\right).\tag{A.15}$$
Notice first that for any dimension $i \in [d]$, $\epsilon + v_{n,i} \geq \epsilon + \beta_2^k v_{n-k,i} \geq \beta_2^k(\epsilon + v_{n-k,i})$, so that
$$\frac{g_{n-k,i}^{2}}{\epsilon+v_{n,i}}\leq\frac{1}{\beta_{2}^{k}}U_{n-k,i}^{2}.\tag{A.16}$$
Besides, using the $L$-smoothness of $F$ given by (8), we have
$$\|G_{n}-G_{n-k}\|_{2}^{2}\leq L^{2}\left\|x_{n-1}-x_{n-k-1}\right\|_{2}^{2}=L^{2}\left\|\sum_{l=1}^{k}\alpha_{n-l}u_{n-l}\right\|_{2}^{2}\leq\alpha_{n}^{2}L^{2}k\sum_{l=1}^{k}\left\|u_{n-l}\right\|_{2}^{2},\tag{A.17}$$
using Jensen's inequality and the fact that $\alpha_n$ is non-decreasing. Injecting (A.16) and (A.17) into (A.15), we obtain
$$\begin{aligned}|B|&\leq\sum_{k=0}^{n-1}\left(\frac{\alpha_{n}^{2}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\beta_{1}^{k}\sqrt{k}\sum_{l=1}^{k}\|u_{n-l}\|_{2}^{2}\right)+\sum_{k=0}^{n-1}\left(\frac{R}{\sqrt{1-\beta_{1}}}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right)\\&=\sqrt{1-\beta_{1}}\,\frac{\alpha_{n}^{2}L^{2}}{4R}\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\left(\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}\right)+\frac{R}{\sqrt{1-\beta_{1}}}\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}.\end{aligned}\tag{A.18}$$
Now going back to the A term in (A.14), we will study the main term of the summation, i.e. for i ∈ [d] and k < n
$$\mathbb{E}\left[G_{n-k,i}{\frac{g_{n-k,i}}{\sqrt{\epsilon+v_{n,i}}}}\right]=\mathbb{E}\left[\nabla_{i}F(x_{n-k-1}){\frac{\nabla_{i}f_{n-k}(x_{n-k-1})}{\sqrt{\epsilon+v_{n,i}}}}\right].\tag{A.19}$$
Notice that we could almost apply Lemma 5.1 to it, except that we have $v_{n,i}$ in the denominator instead of $v_{n-k,i}$. Thus we will need to extend the proof to decorrelate more terms. We will further drop indices in the rest of the proof, noting $G = G_{n-k,i}$, $g = g_{n-k,i}$, $\tilde{v} = \tilde{v}_{n,k+1,i}$ and $v = v_{n,i}$. Finally, let us note
$$\delta^{2}=\sum_{j=n-k}^{n}\beta_{2}^{n-j}g_{j,i}^{2}\qquad\text{and}\qquad r^{2}=\mathbb{E}_{n-k-1}\left[\delta^{2}\right].\tag{A.20}$$
In particular we have $\tilde{v} - v = r^2 - \delta^2$. With our new notations, we can rewrite (A.19) as
$$\begin{aligned}\mathbb{E}\left[G\frac{g}{\sqrt{\epsilon+v}}\right]&=\mathbb{E}\left[G\frac{g}{\sqrt{\epsilon+\tilde{v}}}+Gg\left(\frac{1}{\sqrt{\epsilon+v}}-\frac{1}{\sqrt{\epsilon+\tilde{v}}}\right)\right]\\&=\mathbb{E}\left[\mathbb{E}_{n-k-1}\left[G\frac{g}{\sqrt{\epsilon+\tilde{v}}}\right]+Gg\frac{r^{2}-\delta^{2}}{\sqrt{\epsilon+v}\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)}\right]\\&=\mathbb{E}\left[\frac{G^{2}}{\sqrt{\epsilon+\tilde{v}}}\right]+\mathbb{E}\bigg[\underbrace{Gg\frac{r^{2}-\delta^{2}}{\sqrt{\epsilon+v}\sqrt{\epsilon+\tilde{v}}\left(\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\right)}}_{C}\bigg].\end{aligned}\tag{A.21}$$
We first focus on C:
$$|C|\leq\underbrace{|G g|\;\frac{r^{2}}{\sqrt{\epsilon+v}\,(\epsilon+\tilde{v})}}_{\kappa}+\underbrace{|G g|\;\frac{\delta^{2}}{(\epsilon+v)\sqrt{\epsilon+\tilde{v}}}}_{\rho},$$
due to the fact that $\sqrt{\epsilon+v}+\sqrt{\epsilon+\tilde{v}}\geq\max\left(\sqrt{\epsilon+v},\sqrt{\epsilon+\tilde{v}}\right)$ and $\left|r^{2}-\delta^{2}\right|\leq r^{2}+\delta^{2}$.
Applying (A.13) to κ with
$$\lambda={\frac{\sqrt{1-\beta_{1}}\sqrt{\epsilon+\tilde{v}}}{2}},\;x={\frac{|G|}{\sqrt{\epsilon+\tilde{v}}}},\;y={\frac{|g|\,r^{2}}{\sqrt{\epsilon+\tilde{v}}\sqrt{\epsilon+v}}},$$
we obtain
$$\kappa\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{1}{\sqrt{1-\beta_{1}}}\frac{g^{2}r^{4}}{(\epsilon+\tilde{v})^{3/2}(\epsilon+v)}.$$
Given that $\epsilon + \tilde{v} \geq r^2$ and taking the conditional expectation, we can simplify as
$$\mathbb{E}_{n-k-1}\left[\kappa\right]\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}+\frac{1}{\sqrt{1-\beta_{1}}}\frac{r^{2}}{\sqrt{\epsilon+\tilde{v}}}\mathbb{E}_{n-k-1}\left[\frac{g^{2}}{\epsilon+v}\right].\tag{A.22}$$
Now turning to ρ, we use (A.13) with
$$\lambda=\frac{\sqrt{1-\beta_{1}}\sqrt{\epsilon+\tilde{v}}}{2r^{2}},\;x=\frac{|G\delta|}{\sqrt{\epsilon+\tilde{v}}},\;y=\frac{|\delta g|}{\epsilon+v},$$
we obtain
$$\rho\leq\frac{G^{2}}{4\sqrt{\epsilon+\tilde{v}}}\frac{\delta^{2}}{r^{2}}+\frac{1}{\sqrt{1-\beta_{1}}}\frac{r^{2}}{\sqrt{\epsilon+\tilde{v}}}\frac{g^{2}\delta^{2}}{(\epsilon+v)^{2}}.\tag{A.23}$$
Given that $\epsilon + v \geq \delta^2$, and $\mathbb{E}_{n-k-1}\left[\frac{\delta^{2}}{r^{2}}\right]=1$, we obtain after taking the conditional expectation,
$$\mathbb{E}_{n-k-1}\left[\rho\right]\leq{\frac{G^{2}}{4{\sqrt{\epsilon+\tilde{v}}}}}+{\frac{1}{{\sqrt{1-\beta_{1}}}}}{\frac{r^{2}}{{\sqrt{\epsilon+\tilde{v}}}}}\mathbb{E}_{n-k-1}\left[{\frac{g^{2}}{\epsilon+v}}\right].\tag{A.24}$$
Notice that in (A.23), we possibly divide by zero. It suffices to notice that if $r^2 = 0$ then $\delta^2 = 0$ a.s., so that $\rho = 0$ and (A.24) is still verified. Summing (A.22) and (A.24), we get
$$\mathbb{E}_{n-k-1}\left[|C|\right]\leq{\frac{G^{2}}{2{\sqrt{\epsilon+\tilde{v}}}}}+{\frac{2}{{\sqrt{1-\beta_{1}}}}}{\frac{r^{2}}{{\sqrt{\epsilon+\tilde{v}}}}}\mathbb{E}_{n-k-1}\left[{\frac{g^{2}}{\epsilon+v}}\right].\tag{A.25}$$
Given that $r \leq \sqrt{\epsilon+\tilde{v}}$ by definition of $\tilde{v}$, and that using (7), $r \leq \sqrt{k+1}\,R$, we have³, reintroducing the indices we had dropped,
$$\mathbb{E}_{n-k-1}\left[|C|\right]\leq{\frac{G_{n-k,i}^{2}}{2\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}}+{\frac{2R}{\sqrt{1-\beta_{1}}}}{\sqrt{k+1}}\,\mathbb{E}_{n-k-1}\left[{\frac{g_{n-k,i}^{2}}{\epsilon+v_{n,i}}}\right].\tag{A.26}$$
Taking the complete expectation and using that, by definition, $\epsilon + v_{n,i} \geq \epsilon + \beta_2^k v_{n-k,i} \geq \beta_2^k(\epsilon + v_{n-k,i})$, we get
$$\mathbb{E}\left[|C|\right]\leq{\frac{1}{2}}\mathbb{E}\left[{\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}}\right]+{\frac{2R}{\sqrt{1-\beta_{1}}\,\beta_{2}^{k}}}{\sqrt{k+1}}\,\mathbb{E}\left[{\frac{g_{n-k,i}^{2}}{\epsilon+v_{n-k,i}}}\right].\tag{A.27}$$
Injecting (A.27) into (A.21) gives us
$$\begin{aligned}\mathbb{E}\left[A\right]&\geq\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\left(\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]-\frac{1}{2}\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]-\frac{2R\sqrt{k+1}}{\sqrt{1-\beta_{1}}\,\beta_{2}^{k}}\mathbb{E}\left[\frac{g_{n-k,i}^{2}}{\epsilon+v_{n-k,i}}\right]\right)\\&=\frac{1}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]-\frac{2R}{\sqrt{1-\beta_{1}}}\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\mathbb{E}\left[\|U_{n-k}\|_{2}^{2}\right].\end{aligned}\tag{A.28}$$
Injecting (A.28) and (A.18) into (A.14) finishes the proof.

Similarly, we will need an updated version of Lemma 5.2.
Lemma A.2 (sum of ratios of the square of a decayed sum and a decayed sum of squares). *We assume we have $0 < \beta_2 \leq 1$ and $0 < \beta_1 < \beta_2$, and a sequence of real numbers $(a_n)_{n\in\mathbb{N}^*}$. We define $b_n = \sum_{j=1}^{n}\beta_2^{n-j}a_j^2$ and $c_n = \sum_{j=1}^{n}\beta_1^{n-j}a_j$. Then we have*
$$\sum_{j=1}^{n}{\frac{c_{j}^{2}}{\epsilon+b_{j}}}\leq{\frac{1}{(1-\beta_{1})(1-\beta_{1}/\beta_{2})}}\left(\ln\left(1+{\frac{b_{n}}{\epsilon}}\right)-n\ln(\beta_{2})\right).\tag{A.29}$$
Proof. Let us take $j \in \mathbb{N}^*$, $j \leq n$; using Jensen's inequality, we have
$$c_{j}^{2}\leq\frac{1}{1-\beta_{1}}\sum_{l=1}^{j}\beta_{1}^{j-l}a_{l}^{2},$$
so that
$$\frac{c_{j}^{2}}{\epsilon+b_{j}}\leq\frac{1}{1-\beta_{1}}\sum_{l=1}^{j}\beta_{1}^{j-l}\frac{a_{l}^{2}}{\epsilon+b_{j}}.$$
³Note that we do not need the almost sure bound on the gradient, and a bound on $\mathbb{E}\left[\|\nabla f(x)\|_{\infty}^{2}\right]$ would be sufficient.
Given that for $l \in [j]$, we have by definition $\epsilon + b_j \geq \epsilon + \beta_2^{j-l} b_l \geq \beta_2^{j-l}(\epsilon + b_l)$, we get
$$\frac{c_{j}^{2}}{\epsilon+b_{j}}\leq\frac{1}{1-\beta_{1}}\sum_{l=1}^{j}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{j-l}\frac{a_{l}^{2}}{\epsilon+b_{l}}.$$
Thus, when summing over all j ∈ [n], we get
$$\begin{aligned} \sum_{j=1}^n \frac{c_j^2}{\epsilon + b_j} &\leq \frac{1}{1-\beta_1}\sum_{j=1}^n\sum_{l=1}^j\left(\frac{\beta_1}{\beta_2}\right)^{j-l}\frac{a_l^2}{\epsilon + b_l}\\ &= \frac{1}{1-\beta_1}\sum_{l=1}^n\frac{a_l^2}{\epsilon + b_l}\sum_{j=l}^n\left(\frac{\beta_1}{\beta_2}\right)^{j-l}\\ &\leq \frac{1}{(1-\beta_1)(1-\beta_1/\beta_2)}\sum_{l=1}^n\frac{a_l^2}{\epsilon + b_l}. \end{aligned}$$
Applying Lemma 5.2, we obtain (A.29).
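The inequality (A.29) can be checked numerically on a random sequence $(a_n)$, as in the following sketch (not part of the proof; all constants are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, eps, n = 0.9, 0.999, 1e-8, 500
a = rng.normal(size=n)

b = c = 0.0
lhs = 0.0
for j in range(n):
    b = beta2 * b + a[j] ** 2   # b_j = sum_l beta2**(j-l) * a_l**2
    c = beta1 * c + a[j]        # c_j = sum_l beta1**(j-l) * a_l
    lhs += c ** 2 / (eps + b)

rhs = (np.log(1 + b / eps) - n * np.log(beta2)) / ((1 - beta1) * (1 - beta1 / beta2))
assert lhs <= rhs
print(f"lhs = {lhs:.2f} <= rhs = {rhs:.2f}")
```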
We also need two technical lemmas on the sum of series.
Lemma A.3 (sum of a geometric term times a square root). Given 0 < a < 1 and Q ∈ N*, we have,*
$$\sum_{q=0}^{Q-1}a^{q}\sqrt{q+1}\leq\frac{1}{1-a}\left(1+\frac{\sqrt{\pi}}{2\sqrt{-\ln(a)}}\right)\leq\frac{2}{(1-a)^{3/2}}.$$ (A.32)
Proof. We first need to study the following integral:
$$\begin{aligned}\int_{0}^{\infty}\frac{a^{x}}{2\sqrt{x}}\,\mathrm{d}x&=\int_{0}^{\infty}\frac{\mathrm{e}^{\ln(a)x}}{2\sqrt{x}}\,\mathrm{d}x\quad,\text{ then introducing }y=\sqrt{x},\\&=\int_{0}^{\infty}\mathrm{e}^{\ln(a)y^{2}}\,\mathrm{d}y\quad,\text{ then introducing }u=\sqrt{-2\ln(a)}\,y,\\&=\frac{1}{\sqrt{-2\ln(a)}}\int_{0}^{\infty}\mathrm{e}^{-u^{2}/2}\,\mathrm{d}u\\&=\frac{\sqrt{\pi}}{2\sqrt{-\ln(a)}},\end{aligned}\tag{A.33}$$
where we used the classical integral of the standard Gaussian density function.
Let us now introduce AQ:
$$A_{Q}=\sum_{q=0}^{Q-1}a^{q}\sqrt{q+1},$$
then we have
$$A_{Q}-aA_{Q}=\sum_{q=0}^{Q-1}a^{q}\sqrt{q+1}-\sum_{q=1}^{Q}a^{q}\sqrt{q}\quad,\text{then using the concavity of}\sqrt{\cdot},$$ $$\leq1-a^{Q}\sqrt{Q}+\sum_{q=1}^{Q-1}\frac{a^{q}}{2\sqrt{q}}$$ $$\leq1+\int_{0}^{\infty}\frac{a^{x}}{2\sqrt{x}}\,\mathrm{d}x$$ $$(1-a)A_{Q}\leq1+\frac{\sqrt{\pi}}{2\sqrt{-\ln(a)}},$$
where we used (A.33). Given that $\sqrt{-\ln(a)}\geq\sqrt{1-a}$, we obtain (A.32).
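The bound (A.32) is easy to verify numerically, as in the following sketch (not part of the proof).

```python
import math

# Compare the sum, the intermediate bound and the final bound of (A.32).
for a in [0.5, 0.9, 0.99]:
    Q = 10_000
    s = sum(a ** q * math.sqrt(q + 1) for q in range(Q))
    intermediate = (1 + math.sqrt(math.pi) / (2 * math.sqrt(-math.log(a)))) / (1 - a)
    bound = 2 / (1 - a) ** 1.5
    assert s <= intermediate <= bound
    print(f"a={a}: sum={s:.3f}, intermediate={intermediate:.3f}, bound={bound:.3f}")
```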
Lemma A.4 (sum of a geometric term times roughly a power 3/2). Given 0 < a < 1 and Q ∈ N*, we have,*
$$\sum_{q=0}^{Q-1}a^{q}\sqrt{q}(q+1)\leq\frac{4a}{(1-a)^{5/2}}.$$ (A.34)
Proof. Let us introduce AQ:
$$A_{Q}=\sum_{q=0}^{Q-1}a^{q}\sqrt{q}(q+1),$$
then we have
$$\begin{split}A_{Q}-a A_{Q}&=\sum_{q=0}^{Q-1}a^{q}\sqrt{q}(q+1)-\sum_{q=1}^{Q}a^{q}\sqrt{q-1}q\\ &\leq\sum_{q=1}^{Q-1}a^{q}\sqrt{q}\left((q+1)-\sqrt{q}\sqrt{q-1}\right)\\ &\leq\sum_{q=1}^{Q-1}a^{q}\sqrt{q}\left((q+1)-(q-1)\right)\\ &\leq2\sum_{q=1}^{Q-1}a^{q}\sqrt{q}\\ &=2a\sum_{q=0}^{Q-2}a^{q}\sqrt{q+1}\quad,\text{then using Lemma A.3,}\\ (1-a)A_{Q}&\leq\frac{4a}{(1-a)^{3/2}}.\end{split}$$
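Similarly, (A.34) can be checked numerically with a short sketch (not part of the proof).

```python
import math

for a in [0.5, 0.9, 0.99]:
    Q = 20_000
    s = sum(a ** q * math.sqrt(q) * (q + 1) for q in range(Q))
    bound = 4 * a / (1 - a) ** 2.5
    assert s <= bound
    print(f"a={a}: sum={s:.2f} <= bound={bound:.2f}")
```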
## A.6 Proof Of Adam And Adagrad With Momentum
Common part of the proof Let us take an iteration $n \in \mathbb{N}^*$. Using the smoothness of $F$ defined in (8), we have
$$F(x_{n})\leq F(x_{n-1})-\alpha_{n}G_{n}^{T}u_{n}+\frac{\alpha_{n}^{2}L}{2}\left\|u_{n}\right\|_{2}^{2}.$$
Taking the full expectation and using Lemma A.1,
$$\begin{aligned}\mathbb{E}\left[F(x_{n})\right]\leq\ &\mathbb{E}\left[F(x_{n-1})\right]-\frac{\alpha_{n}}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right]+\frac{\alpha_{n}^{2}L}{2}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right]\\&+\frac{\alpha_{n}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\mathbb{E}\left[\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}\right]+\frac{3\alpha_{n}R}{\sqrt{1-\beta_{1}}}\,\mathbb{E}\left[\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right].\end{aligned}\tag{A.35}$$
Notice that, because of the bound (7) on the $\ell_\infty$ norm of the stochastic gradients at the iterates, we have for any $k \in \mathbb{N}$, $k < n$, and any coordinate $i \in [d]$, $\sqrt{\epsilon+\tilde{v}_{n,k+1,i}} \leq R\sqrt{\sum_{j=0}^{n-1}\beta_{2}^{j}}$. Introducing $\Omega_{n}=\sqrt{\sum_{j=0}^{n-1}\beta_{2}^{j}}$, we have
$$\begin{aligned}\mathbb{E}\left[F(x_{n})\right]\leq\ &\mathbb{E}\left[F(x_{n-1})\right]-\frac{\alpha_{n}}{2R\Omega_{n}}\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]+\frac{\alpha_{n}^{2}L}{2}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right]\\&+\frac{\alpha_{n}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\,\mathbb{E}\left[\sum_{l=1}^{n-1}\|u_{n-l}\|_{2}^{2}\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}\right]+\frac{3\alpha_{n}R}{\sqrt{1-\beta_{1}}}\,\mathbb{E}\left[\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\|U_{n-k}\|_{2}^{2}\right].\end{aligned}\tag{A.36}$$
Now summing over all iterations $n \in [N]$ for $N \in \mathbb{N}^*$, using that for both Adam (A.2) and Adagrad (A.3) $\alpha_n$ is non-decreasing, as well as the fact that $F$ is bounded below by $F_*$ from (6), we get
$$\begin{aligned}\underbrace{\frac{1}{2R}\sum_{n=1}^{N}\frac{\alpha_{n}}{\Omega_{n}}\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]}_{A}\leq\ &F(x_{0})-F_{*}+\underbrace{\frac{\alpha_{N}^{2}L}{2}\sum_{n=1}^{N}\mathbb{E}\left[\|u_{n}\|_{2}^{2}\right]}_{B}\\&+\underbrace{\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{n=1}^{N}\sum_{l=1}^{n-1}\mathbb{E}\left[\|u_{n-l}\|_{2}^{2}\right]\sum_{k=l}^{n-1}\beta_{1}^{k}\sqrt{k}}_{C}+\underbrace{\frac{3\alpha_{N}R}{\sqrt{1-\beta_{1}}}\sum_{n=1}^{N}\sum_{k=0}^{n-1}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{k}\sqrt{k+1}\,\mathbb{E}\left[\|U_{n-k}\|_{2}^{2}\right]}_{D}.\end{aligned}\tag{A.37}$$
First looking at B, we have using Lemma A.2,
$$B\leq\frac{\alpha_{N}^{2}L}{2(1-\beta_{1})(1-\beta_{1}/\beta_{2})}\sum_{i\in[d]}\left(\ln\left(1+\frac{v_{N,i}}{\epsilon}\right)-N\log(\beta_{2})\right).$$ (A.38)
Then looking at C and introducing the change of index j = n − l,
$$\begin{aligned}C&=\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{n=1}^{N}\sum_{j=1}^{n}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{k=n-j}^{n-1}\beta_{1}^{k}\sqrt{k}\\&=\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\sum_{k=n-j}^{n-1}\beta_{1}^{k}\sqrt{k}\\&\leq\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{k=0}^{N-1}\beta_{1}^{k}\sqrt{k}\sum_{n=j}^{j+k}1\\&=\frac{\alpha_{N}^{3}L^{2}}{4R}\sqrt{1-\beta_{1}}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right]\sum_{k=0}^{N-1}\beta_{1}^{k}\sqrt{k}(k+1)\\&\leq\frac{\alpha_{N}^{3}L^{2}\beta_{1}}{R(1-\beta_{1})^{2}}\sum_{j=1}^{N}\mathbb{E}\left[\|u_{j}\|_{2}^{2}\right],\end{aligned}\tag{A.39}$$
using Lemma A.4. Finally, using Lemma A.2, we get
$$C\leq\frac{\alpha_{N}^{3}L^{2}\beta_{1}}{R(1-\beta_{1})^{3}(1-\beta_{1}/\beta_{2})}\sum_{i\in[d]}\left(\ln\left(1+\frac{v_{N,i}}{\epsilon}\right)-N\ln(\beta_{2})\right).\tag{A.40}$$
Finally, introducing the same change of index j = n − k for D, we get
$$\begin{aligned}D&=\frac{3\alpha_{N}R}{\sqrt{1-\beta_{1}}}\sum_{n=1}^{N}\sum_{j=1}^{n}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{n-j}\sqrt{1+n-j}\,\mathbb{E}\left[\|U_{j}\|_{2}^{2}\right]\\&=\frac{3\alpha_{N}R}{\sqrt{1-\beta_{1}}}\sum_{j=1}^{N}\mathbb{E}\left[\|U_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\left(\frac{\beta_{1}}{\beta_{2}}\right)^{n-j}\sqrt{1+n-j}\\&\leq\frac{6\alpha_{N}R}{\sqrt{1-\beta_{1}}\,(1-\beta_{1}/\beta_{2})^{3/2}}\sum_{j=1}^{N}\mathbb{E}\left[\|U_{j}\|_{2}^{2}\right],\end{aligned}\tag{A.41}$$
using Lemma A.3. Finally, using Lemma 5.2, or equivalently Lemma A.2 with $\beta_1 = 0$, we get
$$D\leq\frac{6\alpha_{N}R}{\sqrt{1-\beta_{1}}\,(1-\beta_{1}/\beta_{2})^{3/2}}\sum_{i\in[d]}\left(\ln\left(1+\frac{v_{N,i}}{\epsilon}\right)-N\ln(\beta_{2})\right).\tag{A.42}$$
This is as far as we can get without having to use the specific form of αN given by either (A.2) for Adam or
(A.3) for Adagrad. We will now split the proof for either algorithm.
Adam For Adam, using (A.2), we have αn = (1 − β1)Ωnα. Thus, we can simplify the A term from (A.37),
also using the usual change of index j = n − k, to get
$$\begin{aligned}A&=\frac{1}{2R}\sum_{n=1}^{N}\frac{\alpha_{n}}{\Omega_{n}}\sum_{j=1}^{n}\beta_{1}^{n-j}\,\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\&=\frac{\alpha(1-\beta_{1})}{2R}\sum_{j=1}^{N}\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\beta_{1}^{n-j}\\&=\frac{\alpha}{2R}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\&=\frac{\alpha}{2R}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|\nabla F(x_{j-1})\|_{2}^{2}\right]\\&=\frac{\alpha}{2R}\sum_{j=0}^{N-1}(1-\beta_{1}^{N-j})\,\mathbb{E}\left[\|\nabla F(x_{j})\|_{2}^{2}\right].\end{aligned}\tag{A.43}$$
If we now introduce τ as in (A.8), we can first notice that
$$\sum_{j=0}^{N-1}(1-\beta_{1}^{N-j})=N-\beta_{1}\frac{1-\beta_{1}^{N}}{1-\beta_{1}}\geq N-\frac{\beta_{1}}{1-\beta_{1}}.\tag{A.44}$$
Introducing
$$\tilde{N}=N-\frac{\beta_1}{1-\beta_1},\tag{A.45}$$
we then have
$$A\geq\frac{\alpha\tilde{N}}{2R}\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right].\tag{A.46}$$
Further notice that for any coordinate $i \in [d]$, we have $v_{N,i} \leq \frac{R^2}{1-\beta_2}$; besides, $\alpha_N \leq \frac{\alpha(1-\beta_1)}{\sqrt{1-\beta_2}}$, so that putting together (A.37), (A.46), (A.38), (A.40) and (A.42), we get
$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq2R\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+\frac{E}{\tilde{N}}\left(\ln\left(1+\frac{R^{2}}{\epsilon(1-\beta_{2})}\right)-N\ln(\beta_{2})\right),\tag{A.47}$$
with
$$E=\frac{\alpha dRL(1-\beta_{1})}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})}+\frac{2\alpha^{2}dL^{2}\beta_{1}}{(1-\beta_{1}/\beta_{2})(1-\beta_{2})^{3/2}}+\frac{12dR^{2}\sqrt{1-\beta_{1}}}{(1-\beta_{1}/\beta_{2})^{3/2}\sqrt{1-\beta_{2}}}.\tag{A.48}$$
This concludes the proof of Theorem 4.

Adagrad For Adagrad, we have $\alpha_n = (1-\beta_1)\alpha$, $\beta_2 = 1$ and $\Omega_n \leq \sqrt{N}$, so that
$$\begin{aligned}A&=\frac{1}{2R}\sum_{n=1}^{N}\frac{\alpha_{n}}{\Omega_{n}}\sum_{j=1}^{n}\beta_{1}^{n-j}\,\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\&\geq\frac{\alpha(1-\beta_{1})}{2R\sqrt{N}}\sum_{j=1}^{N}\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\sum_{n=j}^{N}\beta_{1}^{n-j}\\&=\frac{\alpha}{2R\sqrt{N}}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|G_{j}\|_{2}^{2}\right]\\&=\frac{\alpha}{2R\sqrt{N}}\sum_{j=1}^{N}(1-\beta_{1}^{N-j+1})\,\mathbb{E}\left[\|\nabla F(x_{j-1})\|_{2}^{2}\right]\\&=\frac{\alpha}{2R\sqrt{N}}\sum_{j=0}^{N-1}(1-\beta_{1}^{N-j})\,\mathbb{E}\left[\|\nabla F(x_{j})\|_{2}^{2}\right].\end{aligned}\tag{A.49}$$
Reusing (A.44) and (A.45) from the Adam proof, and introducing τ as in (9), we immediately have
$$A\geq{\frac{\alpha\tilde{N}}{2R\sqrt{N}}}\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|_{2}^{2}\right].\tag{A.50}$$
Further notice that for any coordinate $i \in [d]$, we have $v_{N,i} \leq NR^2$; besides, $\alpha_N = (1-\beta_1)\alpha$, so that putting together (A.37), (A.50), (A.38), (A.40) and (A.42) with $\beta_2 = 1$, we get
$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq2R\sqrt{N}\frac{F(x_{0})-F_{*}}{\alpha\tilde{N}}+\frac{\sqrt{N}}{\tilde{N}}E\ln\left(1+\frac{N R^{2}}{\epsilon}\right),\tag{A.51}$$
with
$$E=\alpha d R L+\frac{2\alpha^{2}d L^{2}\beta_{1}}{1-\beta_{1}}+\frac{12d R^{2}}{1-\beta_{1}}.\tag{A.52}$$
This concludes the proof of Theorem 3.
## A.7 Proof Variant Using Hölder Inequality
Following (Ward et al., 2019; Zou et al., 2019b), it is possible to get rid of the almost sure bound on the gradient given by (7), and replace it with a bound in expectation, i.e.
$$\forall x\in\mathbb{R}^{d},\quad\mathbb{E}\left[\|\nabla f(x)\|_{2}^{2}\right]\leq\tilde{R}-\sqrt{\epsilon}.\tag{A.53}$$
Note that we now need an $\ell_2$ bound in order to properly apply Hölder's inequality hereafter.
We do not provide the full proof of the result, but point the reader to the few places where we have used (7). We first use it in Lemma A.1: we inject $R$ into (A.15), which we can simply replace with $\tilde{R}$. Then we use (7) to bound $r$ and derive (A.26). Remember that $r$ is defined in (A.20), and is actually a weighted sum of the squared gradients in expectation. Thus, a bound in expectation is acceptable, and Lemma A.1 remains valid when replacing the assumption (7) with (A.53).
Looking at the actual proof, we use (7) in a single place: just after (A.35), in order to derive an upper bound for the denominator in the following term:
$$M=\frac{\alpha_{n}}{2}\sum_{i\in[d]}\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\frac{G_{n-k,i}^{2}}{\sqrt{\epsilon+\tilde{v}_{n,k+1,i}}}\right].\tag{A.54}$$
Let us introduce $\tilde{V}_{n,k+1}=\sum_{i\in[d]}\tilde{v}_{n,k+1,i}$. We immediately have that
$$M\geq\frac{\alpha_{n}}{2}\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\frac{\|G_{n-k}\|_{2}^{2}}{\sqrt{\epsilon+\tilde{V}_{n,k+1}}}\right].\tag{A.55}$$
Taking $X=\left(\frac{\|G_{n-k}\|_{2}^{2}}{\sqrt{\epsilon+\tilde{V}_{n,k+1}}}\right)^{\frac{2}{3}}$ and $Y=\left(\sqrt{\epsilon+\tilde{V}_{n,k+1}}\right)^{\frac{2}{3}}$, we can apply Hölder's inequality as
$$\mathbb{E}\left[|X|^{\frac{3}{2}}\right]\geq\left(\frac{\mathbb{E}\left[|XY|\right]}{\mathbb{E}\left[|Y|^{3}\right]^{\frac{1}{3}}}\right)^{\frac{3}{2}},\tag{A.56}$$
which gives us
$$\mathbb{E}\left[\frac{\|G_{n-k}\|_{2}^{2}}{\sqrt{\epsilon+\tilde{V}_{n,k+1}}}\right]\geq\frac{\mathbb{E}\left[\|G_{n-k}\|_{2}^{\frac{4}{3}}\right]^{\frac{3}{2}}}{\sqrt{\mathbb{E}\left[\epsilon+\tilde{V}_{n,k+1}\right]}}\geq\frac{\mathbb{E}\left[\|G_{n-k}\|_{2}^{\frac{4}{3}}\right]^{\frac{3}{2}}}{\Omega_{n}\tilde{R}},\tag{A.57}$$
with $\Omega_{n}=\sqrt{\sum_{j=0}^{n-1}\beta_{2}^{j}}$, and using the fact that $\mathbb{E}\left[\epsilon+\sum_{i\in[d]}\tilde{v}_{n,k+1,i}\right]\leq\tilde{R}^{2}\Omega_{n}^{2}$.
Thus we can recover almost exactly (A.36), except that we have to replace all terms of the form $\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]$ with $\mathbb{E}\left[\|G_{n-k}\|_{2}^{4/3}\right]^{3/2}$. The rest of the proof follows as before, with all the dependencies on $\alpha$, $\beta_1$ and $\beta_2$ remaining the same.
## B Non Convex Sgd With Heavy-Ball Momentum
We extend the existing proof of convergence for SGD in the non-convex setting (Ghadimi & Lan, 2013) to use heavy-ball momentum. Compared with previous work on momentum for non-convex SGD by Yang et al. (2016), we improve the dependency on $\beta_1$ from $O((1-\beta_1)^{-2})$ to $O((1-\beta_1)^{-1})$. A recent work by Liu et al. (2020) achieves a similar dependency of $O(1/(1-\beta_1))$, with weaker assumptions (without the bounded gradients assumption).
## B.1 Assumptions
We reuse the notations from Section 2.1. Note however that we use here different assumptions than in Section 2.3. We first assume F is bounded below by F∗, that is,
$\forall x\in\mathbb{R}^d,\ F(x)\geq F_*$.
d, F(x) ≥ F∗. (B.1)
We then assume that the stochastic gradients have bounded variance, and that the gradients of F are uniformly bounded, i.e. there exist R and σ so that
$$\forall x\in\mathbb{R}^{d},\quad\|\nabla F(x)\|_{2}^{2}\leq R^{2}\quad{\mathrm{and}}\quad\mathbb{E}\left[\|\nabla f(x)\|_{2}^{2}\right]-\|\nabla F(x)\|_{2}^{2}\leq\sigma^{2},\tag{B.2}$$
and finally, the *smoothness of the objective function*, i.e., its gradient is $L$-Lipschitz-continuous with respect to the $\ell_2$-norm:
$$\forall x,y\in\mathbb{R}^{d},\quad\|\nabla F(x)-\nabla F(y)\|_{2}\leq L\left\|x-y\right\|_{2}.\tag{B.3}$$
## B.2 Result
Let us take a step size $\alpha > 0$ and a heavy-ball parameter $1 > \beta_1 \geq 0$. Given $x_0 \in \mathbb{R}^d$, taking $m_0 = 0$, we define for any iteration $n \in \mathbb{N}^*$ the iterates of SGD with momentum as
$$\begin{cases}m_{n}&=\beta_{1}m_{n-1}+\nabla f_{n}(x_{n-1})\\ x_{n}&=x_{n-1}-\alpha m_{n}.\end{cases}\tag{B.4}$$
Note that in (B.4), the typical size of $m_n$ increases with $\beta_1$. For any total number of iterations $N \in \mathbb{N}^*$, we define $\tau_N$ a random index with values in $\{0, \ldots, N-1\}$, verifying
$$\forall j\in\mathbb{N},\ j<N,\quad\mathbb{P}\left[\tau=j\right]\propto1-\beta_{1}^{N-j}.\tag{B.5}$$
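For concreteness, here is a minimal sketch of the recursion (B.4) together with the sampling rule (B.5), run on a toy quadratic objective with Gaussian gradient noise; the objective, the noise model and all constants are illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, alpha, beta1, sigma = 5, 1000, 0.01, 0.9, 0.1

x = rng.normal(size=d)   # x_0
m = np.zeros(d)          # m_0 = 0
iterates = [x.copy()]
for n in range(1, N + 1):
    g = x + sigma * rng.normal(size=d)   # grad f_n(x_{n-1}) for F(x) = ||x||^2 / 2
    m = beta1 * m + g                    # m_n = beta1 * m_{n-1} + grad f_n(x_{n-1})
    x = x - alpha * m                    # x_n = x_{n-1} - alpha * m_n
    iterates.append(x.copy())

# Sample tau with P[tau = j] proportional to 1 - beta1**(N - j).
weights = 1.0 - beta1 ** (N - np.arange(N))
tau = rng.choice(N, p=weights / weights.sum())
print("||grad F(x_tau)||^2 =", np.sum(iterates[tau] ** 2))
```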
Theorem B.1 (Convergence of SGD with momentum). *Given the assumptions from Section B.1, given τ as defined in (B.5) for a total number of iterations $N > \frac{1}{1-\beta_1}$, $x_0 \in \mathbb{R}^d$, $\alpha > 0$, $1 > \beta_1 \geq 0$, and $(x_n)_{n\in\mathbb{N}^*}$ given by (B.4),*
$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq\frac{1-\beta_{1}}{\alpha\tilde{N}}(F(x_{0})-F_{*})+\frac{N}{\tilde{N}}\,\frac{\alpha L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}},\tag{B.6}$$
*with $\tilde{N}=N-\frac{\beta_1}{1-\beta_1}$.*
## B.3 Analysis
We can first simplify (B.6) if we assume $N \gg \frac{1}{1-\beta_1}$, which is always the case for practical values of $N$ and $\beta_1$, so that $\tilde{N} \approx N$, and
$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq{\frac{1-\beta_{1}}{\alpha N}}(F(x_{0})-F_{*})+{\frac{\alpha L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}}}.\tag{B.7}$$
It is possible to achieve a rate of convergence of the form $O(1/\sqrt{N})$ by taking, for any $C > 0$,
$$\alpha=(1-\beta_{1}){\frac{C}{\sqrt{N}}},\tag{B.8}$$
which gives us
$$\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right]\leq{\frac{1}{C{\sqrt{N}}}}(F(x_{0})-F_{*})+{\frac{C}{\sqrt{N}}}{\frac{L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})}}.\tag{B.9}$$
In comparison, Theorem 3 by Yang et al. (2016) would give us, assuming now that $\alpha = (1-\beta_1)\min\left\{\frac{1}{L}, \frac{C}{\sqrt{N}}\right\}$,
$$\min_{k\in\{0,\ldots,N-1\}}\mathbb{E}\left[\|\nabla F(x_{k})\|_{2}^{2}\right]\leq\frac{2}{N}(F(x_{0})-F_{*})\max\left\{2L,\frac{\sqrt{N}}{C}\right\}$$ $$+\frac{C}{\sqrt{N}}\frac{L}{(1-\beta_{1})^{2}}\left(\beta_{1}^{2}(R^{2}+\sigma^{2})+(1-\beta_{1})^{2}\sigma^{2}\right).$$ (B.10)
We observe an overall dependency on $\beta_1$ of the form $O((1-\beta_1)^{-2})$ for Theorem 3 by Yang et al. (2016), which we improve to $O((1-\beta_1)^{-1})$ with our proof.
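For illustration, the following sketch compares the $\beta_1$ dependency of the second terms of (B.9) and (B.10), with all multiplicative constants dropped.

```python
# (B.9) scales as (1 + beta1) / (1 - beta1); the beta1-dependent part of (B.10)
# scales as 1 / (1 - beta1)**2 (constants and sigma terms dropped).
for beta1 in [0.5, 0.9, 0.99]:
    ours = (1 + beta1) / (1 - beta1)
    yang = 1 / (1 - beta1) ** 2
    print(f"beta1={beta1}: (1+beta1)/(1-beta1)={ours:.1f}  vs  1/(1-beta1)^2={yang:.1f}")
```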
Liu et al. (2020) achieves a similar dependency in $(1-\beta_1)$ as here, but with weaker assumptions. Indeed, in their Theorem 1, their result contains a term in $O(1/\alpha)$ with $\alpha \leq (1-\beta_1)M$ for some problem-dependent constant $M$ that does not depend on $\beta_1$.
Notice that, as the typical size of the update $m_n$ increases with $\beta_1$ by a factor $1/(1-\beta_1)$, it is convenient to scale down $\alpha$ by the same factor, as we did in (B.8) (without loss of generality, as $C$ can take any value). Taking $\alpha$ of this form has the advantage of keeping the first term on the right-hand side of (B.6) independent of $\beta_1$, allowing us to focus only on the second term.
## B.4 Proof
For all $n \in \mathbb{N}^*$, we note $G_n = \nabla F(x_{n-1})$ and $g_n = \nabla f_n(x_{n-1})$. $\mathbb{E}_{n-1}\left[\cdot\right]$ is the conditional expectation with respect to $f_1, \ldots, f_{n-1}$. In particular, $x_{n-1}$ and $m_{n-1}$ are deterministic knowing $f_1, \ldots, f_{n-1}$.
Lemma B.1 (Bound on $m_n$). *Given $\alpha > 0$, $1 > \beta_1 \geq 0$, and $(x_n)$ and $(m_n)$ defined as in (B.4), under the assumptions from Section B.1, we have for all $n \in \mathbb{N}^*$,*
$$\mathbb{E}\left[\|m_{n}\|_{2}^{2}\right]\leq\frac{R^{2}+\sigma^{2}}{(1-\beta_{1})^{2}}.\tag{B.11}$$
Proof. Let us take an iteration $n \in \mathbb{N}^*$,
$$\mathbb{E}\left[\|m_{n}\|_{2}^{2}\right]=\mathbb{E}\left[\left\|\sum_{k=0}^{n-1}\beta_{1}^{k}g_{n-k}\right\|_{2}^{2}\right]\quad\text{using Jensen we get,}$$ $$\leq\left(\sum_{k=0}^{n-1}\beta_{1}^{k}\right)\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|g_{n-k}\|_{2}^{2}\right]$$ $$\leq\frac{1}{1-\beta_{1}}\sum_{k=0}^{n-1}\beta_{1}^{k}(R^{2}+\sigma^{2})$$ $$=\frac{R^{2}+\sigma^{2}}{(1-\beta_{1})^{2}}.$$
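As a quick sanity check of (B.11) (not part of the proof), consider the extreme case where every stochastic gradient equals a fixed vector $g$ with $\|g\|_2^2 = R^2 + \sigma^2$: the squared norm of $m_n$ then approaches the bound from below.

```python
import numpy as np

R2_plus_sigma2, beta1, n = 4.0, 0.9, 1000
g = np.array([np.sqrt(R2_plus_sigma2), 0.0])   # worst case: a constant gradient

m = np.zeros(2)
for _ in range(n):
    m = beta1 * m + g                          # m_n = beta1 * m_{n-1} + g_n
bound = R2_plus_sigma2 / (1 - beta1) ** 2      # right-hand side of (B.11)
print(np.sum(m ** 2), "<=", bound)             # squared norm approaches 400.0 from below
```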
Lemma B.2 (sum of a geometric term times index). Given 0 < a < 1, i ∈ N and Q ∈ N *with* Q ≥ i,
$$\sum_{q=i}^{Q}a^{q}q=\frac{a^{i}}{1-a}\left(i-a^{Q-i+1}Q+\frac{a-a^{Q+1-i}}{1-a}\right)\leq\frac{a}{(1-a)^{2}}.$$ (B.12)
Proof. Let $A_i = \sum_{q=i}^{Q} a^q q$; we have
$$\begin{array}{c}{{A_{i}-a A_{i}=a^{i}i-a^{Q+1}Q+\sum_{q=i+1}^{Q}a^{q}\left(q-(q-1)\right)}}\\ {{(1-a)A_{i}=a^{i}i-a^{Q+1}Q+\frac{a^{i+1}-a^{Q+1}}{1-a}.}}\end{array}$$
Finally, taking i = 0 and Q → ∞ gives us the upper bound.
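Both the closed form and the upper bound in (B.12) can be checked numerically, as in the following sketch (not part of the proof).

```python
a, Q = 0.9, 200
for i in range(0, Q + 1, 50):
    partial = sum(a ** q * q for q in range(i, Q + 1))
    closed = a ** i / (1 - a) * (i - a ** (Q - i + 1) * Q + (a - a ** (Q + 1 - i)) / (1 - a))
    bound = a / (1 - a) ** 2
    assert abs(partial - closed) < 1e-9 and partial <= bound + 1e-12
    print(f"i={i}: sum={partial:.4f}, closed form={closed:.4f}, bound={bound:.1f}")
```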
Lemma B.3 (Descent lemma). *Given $\alpha > 0$, $1 > \beta_1 \geq 0$, and $(x_n)$ and $(m_n)$ defined as in (B.4), under the assumptions from Section B.1, we have for all $n \in \mathbb{N}^*$,*
$$\mathbb{E}\left[\nabla F(x_{n-1})^{T}m_{n}\right]\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|\nabla F(x_{n-k-1})\|_{2}^{2}\right]-\frac{\alpha L\beta_{1}(R^{2}+\sigma^{2})}{(1-\beta_{1})^{3}}$$ (B.13)
Proof. For simplicity, we note $G_n = \nabla F(x_{n-1})$ the expected gradient and $g_n = \nabla f_n(x_{n-1})$ the stochastic gradient at iteration $n$.
$$\begin{aligned}G_{n}^{T}m_{n}&=\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n}^{T}g_{n-k}\\&=\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n-k}^{T}g_{n-k}+\sum_{k=1}^{n-1}\beta_{1}^{k}(G_{n}-G_{n-k})^{T}g_{n-k}.\end{aligned}\tag{B.14}$$
This last step is the main difference with previous proofs with momentum (Yang et al., 2016): we replace the current gradient with an old gradient in order to obtain extra terms of the form $\|G_{n-k}\|_{2}^{2}$. The price to pay is the second term on the right-hand side, but we will see that it is still beneficial to perform this step.
Notice that, as $F$ is $L$-smooth, we have for all $k \in \mathbb{N}^*$,
$$\|G_{n}-G_{n-k}\|_{2}^{2}\leq L^{2}\left\|\sum_{l=1}^{k}\alpha m_{n-l}\right\|^{2}$$ $$\leq\alpha^{2}L^{2}k\sum_{l=1}^{k}\|m_{n-l}\|_{2}^{2}\,,$$ (B.15)
using Jensen's inequality. We apply
$$\forall\lambda>0,\ x,y\in\mathbb{R}^{d},\quad x^{T}y\leq{\frac{\lambda}{2}}\,\|x\|_{2}^{2}+{\frac{\|y\|_{2}^{2}}{2\lambda}},$$
with $x = G_n - G_{n-k}$, $y = g_{n-k}$ and $\lambda = \frac{1-\beta_1}{k\alpha L}$ to the second term in (B.14), and use (B.15) to get
$$G_{n}^{T}m_{n}\geq\sum_{k=0}^{n-1}\beta_{1}^{k}G_{n-k}^{T}g_{n-k}-\sum_{k=1}^{n-1}\frac{\beta_{1}^{k}}{2}\left(\left((1-\beta_{1})\alpha L\sum_{l=1}^{k}\|m_{n-l}\|_{2}^{2}\right)+\frac{\alpha L k}{1-\beta_{1}}\left\|g_{n-k}\right\|_{2}^{2}\right).\tag{B.16}$$
Taking the full expectation we have
$$\mathbb{E}\left[G_{n}^{T}m_{n}\right]\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[G_{n-k}^{T}g_{n-k}\right]-\alpha L\sum_{k=1}^{n-1}\frac{\beta_{1}^{k}}{2}\left(\left(1-\beta_{1}\right)\sum_{l=1}^{k}\mathbb{E}\left[\|m_{n-l}\|_{2}^{2}\right]+\frac{k}{1-\beta_{1}}\,\mathbb{E}\left[\|g_{n-k}\|_{2}^{2}\right]\right).\tag{B.17}$$
Now let us take $k \in \{0, \ldots, n-1\}$; first notice that
$$\mathbb{E}\left[G_{n-k}^{T}g_{n-k}\right]=\mathbb{E}\left[\mathbb{E}_{n-k-1}\left[\nabla F(x_{n-k-1})^{T}\nabla f_{n-k}(x_{n-k-1})\right]\right]$$ $$=\mathbb{E}\left[\nabla F(x_{n-k-1})^{T}\nabla F(x_{n-k-1})\right]$$ $$=\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right].$$
Furthermore, we have $\mathbb{E}\left[\|g_{n-k}\|_{2}^{2}\right] \leq R^2 + \sigma^2$ from (B.2), while $\mathbb{E}\left[\|m_{n-k}\|_{2}^{2}\right] \leq \frac{R^{2}+\sigma^{2}}{(1-\beta_{1})^{2}}$ using (B.11) from Lemma B.1. Injecting those three results into (B.17), we have
$$\begin{aligned}\mathbb{E}\left[G_{n}^{T}m_{n}\right]&\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]-\alpha L(R^{2}+\sigma^{2})\sum_{k=1}^{n-1}\frac{\beta_{1}^{k}}{2}\left(\frac{1}{1-\beta_{1}}\sum_{l=1}^{k}1+\frac{k}{1-\beta_{1}}\right)&\text{(B.18)}\\&=\sum_{k=0}^{n-1}\beta_{1}^{k}\,\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]-\frac{\alpha L}{1-\beta_{1}}(R^{2}+\sigma^{2})\sum_{k=1}^{n-1}\beta_{1}^{k}k.&\text{(B.19)}\end{aligned}$$
Now, using (B.12) from Lemma B.2, we obtain
$$\mathbb{E}\left[G_{n}^{T}m_{n}\right]\geq\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]-\frac{\alpha L\beta_{1}(R^{2}+\sigma^{2})}{(1-\beta_{1})^{3}},\tag{B.20}$$
which concludes the proof.
## Proof Of Theorem B.1
Proof. Let us take a specific iteration $n \in \mathbb{N}^*$. Using the smoothness of $F$ given by (B.3), we have
$$F(x_{n})\leq F(x_{n-1})-\alpha G_{n}^{T}m_{n}+\frac{\alpha^{2}L}{2}\left\|m_{n}\right\|_{2}^{2}.\tag{B.21}$$
Taking the expectation, and using Lemma B.3 and Lemma B.1, we get
$$\begin{aligned}\mathbb{E}\left[F(x_{n})\right]&\leq\mathbb{E}\left[F(x_{n-1})\right]-\alpha\left(\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]\right)+\frac{\alpha^{2}L\beta_{1}(R^{2}+\sigma^{2})}{(1-\beta_{1})^{3}}+\frac{\alpha^{2}L(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}}\\&\leq\mathbb{E}\left[F(x_{n-1})\right]-\alpha\left(\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\|G_{n-k}\|_{2}^{2}\right]\right)+\frac{\alpha^{2}L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{3}}.\end{aligned}\tag{B.22}$$
Rearranging and summing over $n \in \{1, \ldots, N\}$, we get
$$\underbrace{\alpha\sum_{n=1}^{N}\sum_{k=0}^{n-1}\beta_{1}^{k}\mathbb{E}\left[\left\|G_{n-k}\right\|_{2}^{2}\right]}_{A}\leq F(x_{0})-\mathbb{E}\left[F(x_{N})\right]+N\frac{\alpha^{2}L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{3}}.\tag{B.23}$$
Let us focus on the A term on the left-hand side first. Introducing the change of index i = n − k, we get
$$\begin{aligned}A&=\alpha\sum_{n=1}^{N}\sum_{i=1}^{n}\beta_{1}^{n-i}\mathbb{E}\left[\|G_{i}\|_{2}^{2}\right]\\&=\alpha\sum_{i=1}^{N}\mathbb{E}\left[\|G_{i}\|_{2}^{2}\right]\sum_{n=i}^{N}\beta_{1}^{n-i}\\&=\frac{\alpha}{1-\beta_{1}}\sum_{i=1}^{N}\mathbb{E}\left[\|\nabla F(x_{i-1})\|_{2}^{2}\right](1-\beta_{1}^{N-i+1})\\&=\frac{\alpha}{1-\beta_{1}}\sum_{i=0}^{N-1}\mathbb{E}\left[\|\nabla F(x_{i})\|_{2}^{2}\right](1-\beta_{1}^{N-i}).\end{aligned}\tag{B.24}$$
We recognize the unnormalized probability of the random iterate $\tau$ as defined by (B.5). The normalization constant is
$$\sum_{i=0}^{N-1}\left(1-\beta_{1}^{N-i}\right)=N-\beta_{1}{\frac{1-\beta_{1}^{N}}{1-\beta_{1}}}\geq N-{\frac{\beta_{1}}{1-\beta_{1}}}=\tilde{N},$$
which we can inject into (B.24) to obtain
$$A\geq\frac{\alpha\tilde{N}}{1-\beta_{1}}\mathbb{E}\left[\|\nabla F(x_{\tau})\|_{2}^{2}\right].\tag{B.25}$$
Injecting (B.25) into (B.23), and using the fact that F is bounded below by F∗ (B.1), we have
$$\mathbb{E}\left[\left\|\nabla F(x_{\tau})\right\|_{2}^{2}\right]\leq\frac{1-\beta_{1}}{\alpha\tilde{N}}(F(x_{0})-F_{*})+\frac{N}{\tilde{N}}\,\frac{\alpha L(1+\beta_{1})(R^{2}+\sigma^{2})}{2(1-\beta_{1})^{2}},\tag{B.26}$$
which concludes the proof of Theorem B.1. |