RedTachyon committed on
Commit
21989ec
1 Parent(s): 865c5ac

Upload folder using huggingface_hub

Browse files
ZLVbQEu4Ab/12_image_0.png ADDED

Git LFS Details

  • SHA256: c9d9fcf457150468cac3d87fde2a8504d17c6fd802376dc38c958d83b927b0a8
  • Pointer size: 130 Bytes
  • Size of remote file: 41.3 kB
ZLVbQEu4Ab/2_image_0.png ADDED

Git LFS Details

  • SHA256: 494eda3130c5a649ef078efd0a1cb005ce43e4e419f51dbf78b9f0331cd84a96
  • Pointer size: 130 Bytes
  • Size of remote file: 37.6 kB
ZLVbQEu4Ab/6_image_0.png ADDED

Git LFS Details

  • SHA256: 6985c0f8a8c9312e9de5c1a94f6969b6396c62311046cf18302e3fe13cd97a85
  • Pointer size: 130 Bytes
  • Size of remote file: 16.4 kB
ZLVbQEu4Ab/ZLVbQEu4Ab.md ADDED
@@ -0,0 +1,1151 @@
+ # When Is Momentum Extragradient Optimal? A Polynomial-Based Analysis
2
+
3
Junhyung Lyle Kim⋆ (jlylekim@rice.edu), Rice University, Department of Computer Science
Gauthier Gidel (gauthier.gidel@umontreal.ca), Université de Montréal, Department of Computer Science and Operations Research; Mila - Quebec AI Institute; Canada CIFAR AI Chair
Anastasios Kyrillidis (anastasios@rice.edu), Rice University, Department of Computer Science
Fabian Pedregosa (pedregosa@google.com), Google DeepMind

Reviewed on OpenReview: *https://openreview.net/forum?id=ZLVbQEu4Ab*
4
+
5
+ ## Abstract
6
+
7
+ The extragradient method has gained popularity due to its robust convergence properties for differentiable games. Unlike single-objective optimization, game dynamics involve complex interactions reflected by the eigenvalues of the game vector field's Jacobian scattered across the complex plane. This complexity can cause the simple gradient method to diverge, even for bilinear games, while the extragradient method achieves convergence. Building on the recently proven accelerated convergence of the momentum extragradient method for bilinear games (Azizian et al., 2020b), we use a polynomial-based analysis to identify three distinct scenarios where this method exhibits further accelerated convergence. These scenarios encompass situations where the eigenvalues reside on the (positive) real line, lie on the real line alongside complex conjugates, or exist solely as complex conjugates. Furthermore, we derive the hyperparameters for each scenario that achieve the fastest convergence rate.
8
+
9
+ ## 1 Introduction
10
+
11
+ While most machine learning problems are formulated as minimization problems, a growing number of works rely instead on game formulations that involve multiple players and objectives. Examples of such problems include generative adversarial networks (GANs) (Goodfellow et al., 2014), actor-critic algorithms
12
+ (Pfau & Vinyals, 2016), sharpness aware minimization (Foret et al., 2021), and fine-tuning language models from human feedback (Munos et al., 2023). This increasing interest in game formulations motivates further theoretical exploration of differentiable games.
13
+
14
+ Optimizing differentiable games presents challenges absent in minimization problems due to the interplay of multiple players and objectives. Notably, the game Jacobian's eigenvalues are distributed on the complex plane, exhibiting richer dynamics compared to single-objective minimization, where the Hessian eigenvalues are restricted to the real line. Consequently, even for simple bilinear games, standard algorithms like the gradient method fail to converge (Mescheder et al., 2018; Balduzzi et al., 2018; Gidel et al., 2019).
15
+
16
Fortunately, the extragradient method (EG), originally introduced by Korpelevich (1976), offers a solution. Unlike the gradient method, EG demonstrably converges for bilinear games (Tseng, 1995). This has sparked extensive research analyzing EG from different perspectives, including variational inequality (Gidel et al., 2018; Gorbunov et al., 2022), stochastic (Li et al., 2021), and distributed (Liu et al., 2020; Beznosikov et al., 2021) settings.

⋆Authors after JLK are listed in alphabetical order.

This paper extends Kim et al. (2022), presented at the NeurIPS 2022 Optimization for Machine Learning Workshop.
23
+
24
+ Most existing works, including those mentioned earlier, analyze EG and relevant algorithms by assuming some structure on the objectives, such as (strong) monotonicity or Lipschitzness (Solodov & Svaiter, 1999; Tseng, 1995; Daskalakis & Panageas, 2018; Ryu et al., 2019; Azizian et al., 2020a). Such assumptions, in the context of differentiable games, confine the distribution of the eigenvalues of the game Jacobian; for instance, strong monotonicity implies a lower bound on the real part of the eigenvalues, and the Lipschitz assumption implies an upper bound on the magnitude of the eigenvalues of the Jacobian.
25
+
26
+ Building upon the limitations of prior assumptions, Azizian et al. (2020b) showed that the key factor for effectively analyzing game dynamics lies in the spectrum of the Jacobian on the complex plane. Through a polynomial-based analysis, they demonstrated that first-order methods can sometimes achieve faster rates using momentum. This is achieved by replacing the smoothness and monotonicity assumptions with more precise assumptions on the distribution of the Jacobian eigenvalues, represented by simple shapes like ellipses or line segments. Notably, Azizian et al. (2020b) proved that for bilinear games, the extragradient method with momentum achieves an accelerated convergence rate.
27
+
28
+ In this work, we take a different approach by asking the *reverse question*: for what shapes of the Jacobian spectrum does the momentum extragradient (MEG) method achieve optimal performance? This reverse analysis allows us to study the behavior of MEG in specific settings depending on the hyperparameter setup, encompassing:
29
+ - *Minimization*, where all Jacobian eigenvalues lie on the positive real line.
30
+
31
+ - *Regularized bilinear games*, where all eigenvalues are complex conjugates.
32
+
33
+ - *Intermediate case*, where eigenvalues are both on the real line and as complex conjugates (illustrated in Figure 1).
34
+
35
Our contributions can be summarized as follows:

- **Characterizing MEG convergence modes**: We derive the residual polynomials of MEG for affine game vector fields and identify three distinct convergence modes based on hyperparameter settings. This analysis can then be applied to different eigenvalue structures of the Jacobian (see Theorem 3).
36
+
37
+ - **Optimal hyperparameters and convergence rates**: For each eigenvalue structure, we derive the optimal hyperparameters of MEG and its (asymptotic) convergence rates. For minimization, MEG exhibits "super-acceleration," where a constant improvement upon classical lower bound rate is attained,1 similarly to the gradient method with momentum (GDM) with cyclical step sizes (Goujaud et al., 2022).
38
+
39
+ For the other two cases involving imaginary eigenvalues, MEG exhibits accelerated convergence rates with the derived optimal hyperparameters.
40
+
41
+ - **Comparison with other methods**. We compare MEG's convergence rates with gradient (GD), GDM,
42
+ and extragradient (EG) methods. For the considered game classes, none of these methods achieve (asymptotically) accelerated rates (Corollaries 1 and 2), unlike MEG. In Section 7, we validate our findings through numerical experiments, including scenarios with slight deviations from our initial assumptions.
43
+
44
+ ## 2 Problem Setup And Related Work
45
+
46
Following Letcher et al. (2019); Balduzzi et al. (2018), we define the n-player differentiable game as a family of twice continuously differentiable losses $\ell_i: \mathbb{R}^d \to \mathbb{R}$, for $i = 1, \dots, n$. Player $i$ controls the parameter $w^{(i)} \in \mathbb{R}^{d_i}$. We denote the concatenated parameters by $w = [w^{(1)}, \dots, w^{(n)}] \in \mathbb{R}^d$, where $d = \sum_{i=1}^n d_i$.
52
+
53
+ 1Note that achieving this improvement is possible by having additional information beyond just the largest (smoothness)
54
+ and smallest (strong convexity) eigenvalues of the Jacobian.
55
+
56
+ ![2_image_0.png](2_image_0.png)
57
+
58
+ Figure 1: *Convergence rates of MEG in terms of the game Jacobian eigenvalues.* The step sizes for MEG, h and γ, and the momentum parameter m are set up according to each case of Theorem 3, illustrating three distinct convergence modes of MEG. For each case, the red line indicates the robust region (c.f., Definition 1)
59
+ where MEG achieves the optimal convergence rate.
60
For this problem, a Nash equilibrium satisfies:

$$w^{(i)}_\star \in \arg\min_{w^{(i)} \in \mathbb{R}^{d_i}} \ell_i\big(w^{(i)}, w^{(\neg i)}_\star\big) \quad \forall i \in \{1, \dots, n\},$$

where the notation $\cdot^{(\neg i)}$ denotes all indices except for $i$. We also define the vector field $v$ of the game as the concatenation of the individual gradients: $v(w) = [\nabla_{w^{(1)}} \ell_1(w) \cdots \nabla_{w^{(n)}} \ell_n(w)]^\top$, and denote its associated Jacobian by $\nabla v$.
67
+
68
Unfortunately, finding Nash equilibria for general games remains an *intractable problem* (Shoham & Leyton-Brown, 2008; Letcher et al., 2019).2 Therefore, instead of directly searching for Nash equilibria, we focus on finding *stationary points* of the game's vector field v. This approach is motivated by the fact that any Nash equilibrium necessarily corresponds to a stationary point of the gradient dynamics. In other words, we aim to solve the following problem:

$$\mathrm{Find}\quad w^{\star}\in\mathbb{R}^{d}\quad\text{such that}\quad v(w^{\star})=0.\tag{1}$$
73
+ Notation. R(z) and I(z) respectively denote the real and the imaginary part of a complex number z. The spectrum of a matrix M is denoted by Sp(M), and its spectral radius by ρ(M) := max{|λ| : λ ∈ Sp(M)}.
74
+
75
+ M ≻ 0 denotes that M is a positive-definite matrix. C+ denotes the complex plane with positive real part, and R+ denotes positive real numbers.
76
+
77
+ ## 2.1 Related Work
78
+
79
The extragradient method, originally introduced in Korpelevich (1976), is a popular algorithm for solving (unconstrained) variational inequality problems in (1) (Gidel et al., 2018). There are several works that study the convergence rate of EG for (strongly) monotone problems (Tseng, 1995; Solodov & Svaiter, 1999; Nemirovski, 2004; Monteiro & Svaiter, 2010; Mokhtari et al., 2020; Gorbunov et al., 2022). Under similar settings, stochastic variants of EG are studied in Palaniappan & Bach (2016); Hsieh et al. (2019; 2020); Li et al. (2021). However, as mentioned earlier, assumptions like (strong) monotonicity or Lipschitzness may not accurately represent how Jacobian eigenvalues are distributed.
80
+
81
+ Instead, we make more fine-grained assumptions on these eigenvalues, to obtain the optimal hyperparameters and convergence rates for MEG via polynomial-based analysis. Such analysis dates back to the development of the conjugate gradient method (Hestenes & Stiefel, 1952), and is still actively used; for instance, to derive lower bounds (Arjevani & Shamir, 2016), to develop accelerated decentralized algorithms (Berthier et al.,
82
+ 2020), and to analyze average-case performance (Pedregosa & Scieur, 2020; Domingo-Enrich et al., 2021).
83
+
84
To that end, we use the following lemma (Chihara, 2011), which elucidates the connection between first-order methods and (residual) polynomials when the vector field $v$ is affine. First-order methods are those in which the iterate $w_t$ lies in the span of previous gradients: $w_t \in w_0 + \mathrm{span}\{v(w_0), \dots, v(w_{t-1})\}$.
85
+
86
+ 2Formulating Nash equilibrium search as a nonlinear complementarity problem makes it inherently difficult, classified as PPAD-hard (Daskalakis et al., 2009; Letcher et al., 2019).
87
Lemma 1 (Chihara (2011)). *Let $w_t$ be the iterate generated by a first-order method after $t$ iterations, with $v(w) = Aw + b$. Then, there exists a real polynomial $P_t$, of degree at most $t$, satisfying:*

$$w_t - w^\star = P_t(A)(w_0 - w^\star)\,,\tag{2}$$

*where $P_t(0) = 1$, and $v(w^\star) = Aw^\star + b = 0$.*
94
+
95
By taking $\ell_2$-norms, (2) further implies the following worst-case convergence rate:

$$\|w_t - w^\star\| = \|P_t(A)(w_0 - w^\star)\| \leqslant \|P_t(Z\Lambda Z^{-1})\| \cdot \|w_0 - w^\star\| \leqslant \sup_{\lambda \in \mathcal{S}^\star} |P_t(\lambda)| \cdot \|Z\|\|Z^{-1}\| \cdot \|w_0 - w^\star\|,\tag{3}$$

where $A = Z\Lambda Z^{-1}$ is the diagonalization of $A$,3 and the constant $\|Z\|\|Z^{-1}\|$ disappears if $A$ is a normal matrix. Hence, the worst-case convergence rate of a first-order method can be analyzed by studying the associated residual polynomial $P_t$ evaluated at the eigenvalues $\lambda$ of the Jacobian $\nabla v = A$, distributed over the set $\mathcal{S}^\star$.
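As a quick numerical illustration of Lemma 1 (a minimal sketch with an arbitrary random affine field, not part of the original analysis), one can check that gradient descent on $v(w) = Aw + b$ satisfies (2) with the residual polynomial $P_t(\lambda) = (1 - h\lambda)^t$ noted later in Remark 1:

```python
import numpy as np

# Minimal check of Lemma 1 for gradient descent on an affine field v(w) = A w + b:
# the iterates satisfy w_t - w_star = P_t(A)(w_0 - w_star) with P_t(lambda) = (1 - h*lambda)^t.
rng = np.random.default_rng(0)
d = 5
A = np.diag(rng.uniform(1.0, 10.0, size=d))   # simple diagonal (hence normal) Jacobian
b = rng.normal(size=d)
w_star = -np.linalg.solve(A, b)               # stationary point: A w_star + b = 0

h, t_max = 0.05, 20
w = rng.normal(size=d)
w0 = w.copy()
for _ in range(t_max):
    w = w - h * (A @ w + b)                   # gradient step on v(w) = A w + b

# Residual-polynomial prediction: P_t(A) = (I - h A)^t applied to the initial error.
P_t_of_A = np.linalg.matrix_power(np.eye(d) - h * A, t_max)
predicted = w_star + P_t_of_A @ (w0 - w_star)
print(np.allclose(w, predicted))              # True
```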
109
+
110
Unlocking Faster Rates Through Fine-Grained Spectral Shapes. While Azizian et al. (2020b) characterized lower bounds and optimality for certain first-order methods under simple spectral shapes, we posit that a more granular understanding of $\mathcal{S}^\star$ could unlock even faster convergence rates. By meticulously analyzing the residual polynomials of MEG, we identify specific spectral shapes where MEG exhibits optimal performance. This approach resonates with recent advancements in the optimization literature (Oymak, 2021; Goujaud et al., 2022), which demonstrate that knowledge beyond merely the largest and smallest eigenvalues (i.e., smoothness and strong convexity) can lead to accelerated convergence in convex smooth minimization.
112
+
113
+ ## 3 Momentum Extragradient Via Chebyshev Polynomials
114
+
115
+ In this section, we delve into the intricate dynamics of the momentum extragradient method (MEG) by harnessing the power of residual polynomials and Chebyshev polynomials.
116
+
117
+ MEG iterates according to the following update rule:
118
+
119
$$\mathrm{(MEG)}\quad w_{t+1}=w_{t}-h v(w_{t}-\gamma v(w_{t}))+m(w_{t}-w_{t-1})\,,\tag{4}$$
121
+
122
+ where h is the step size, γ is the extrapolation step size, and m is the momentum parameter.
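For concreteness, a minimal sketch of the update (4) is given below. The toy matrix and the hyperparameter values are illustrative choices only, and the first iterate follows the convention $w_1 = w_0 - \frac{h}{1+m}v(w_0 - \gamma v(w_0))$ used later in Section 6.

```python
import numpy as np

def momentum_extragradient(v, w0, h, gamma, m, n_iters):
    """Run the MEG update (4) for n_iters steps on a vector field v."""
    # First step: w_1 = w_0 - h/(1+m) * v(w_0 - gamma * v(w_0)), the convention of Section 6.
    w_prev = np.asarray(w0, dtype=float)
    w = w_prev - (h / (1 + m)) * v(w_prev - gamma * v(w_prev))
    for _ in range(n_iters - 1):
        # Extrapolation step inside v, then an update with heavy-ball momentum.
        w_next = w - h * v(w - gamma * v(w)) + m * (w - w_prev)
        w_prev, w = w, w_next
    return w

# Example usage on a hypothetical affine field v(w) = A w + b with real eigenvalues {1, 4}.
A = np.diag([1.0, 4.0])
b = np.array([2.0, -4.0])
v = lambda w: A @ w + b
w_star = -np.linalg.solve(A, b)
w_out = momentum_extragradient(v, np.zeros(2), h=0.4, gamma=0.1, m=0.05, n_iters=200)
print(np.linalg.norm(w_out - w_star))  # distance to the solution; small for these illustrative values
```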
123
+
124
+ The extragradient method (EG), which serves as the foundation for MEG, was originally proposed by Korpelevich (1976) for saddle point problems. It has garnered renewed interest due to its remarkable ability to converge in certain differentiable games, such as bilinear games, where the standard gradient method falters (Gidel et al., 2019; Azizian et al., 2020b;a).
125
+
126
For completeness, we recall the gradient method with momentum (GDM):

$$(\mathrm{GDM})\quad w_{t+1}=w_{t}-h v(w_{t})+m(w_{t}-w_{t-1})\,,\tag{5}$$
133
+ from which the gradient method (GD) can be obtained by setting m = 0.
134
+
135
+ As a first-order method (Arjevani & Shamir, 2016; Azizian et al., 2020b), MEG's behavior can be elegantly analyzed through the lens of residual polynomials, as established in Lemma 1. The following theorem unveils the specific residual polynomials associated with MEG:
136
+ Theorem 1 (Residual polynomials of MEG and their Chebyshev representation). Consider the momentum extragradient method (MEG) in (4) *with a vector field of the form* v(w) = Aw + b. The residual polynomials associated with MEG can be expressed as follows:
137
+
138
$$\tilde{P}_{0}(\lambda)=1,\quad\tilde{P}_{1}(\lambda)=1-\frac{h\lambda(1-\gamma\lambda)}{1+m},\quad\text{and}\quad\tilde{P}_{t+1}(\lambda)=(1+m-h\lambda(1-\gamma\lambda))\tilde{P}_{t}(\lambda)-m\tilde{P}_{t-1}(\lambda).$$

*Remarkably, these polynomials can be elegantly rewritten in terms of Chebyshev polynomials of the first and second kind, denoted by $T_t(\cdot)$ and $U_t(\cdot)$, respectively:*

$$P_{t}^{\mathrm{MEG}}(\lambda)=m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right),\quad\text{where}\quad\sigma(\lambda)\equiv\sigma(\lambda;h,\gamma,m)=\frac{1+m-h\lambda(1-\gamma\lambda)}{2\sqrt{m}}.\tag{6}$$

*The term $\sigma(\lambda)$, which encapsulates the interplay between step sizes, momentum, and eigenvalues, is referred to as the* link function.

3Note that almost all matrices are diagonalizable over C, in the sense that the set of non-diagonalizable matrices has Lebesgue measure zero (Hetzel et al., 2007).
147
+ The residual polynomials of MEG and GDM, intriguingly, share a similar structure but differ in their link functions. Below are the residual polynomials of GDM, expressed in Chebyshev polynomials (Pedregosa, 2020):
148
+
149
+ $$P_{t}^{\rm GDM}(\lambda)=m^{t/2}\left(\frac{2m}{1+m}T_{t}(\xi(\lambda))+\frac{1-m}{1+m}U_{t}(\xi(\lambda))\right),\quad\mbox{where}\quad\xi(\lambda)=\frac{1+m-h\lambda}{2\sqrt{m}}.\tag{7}$$
150
+
151
+ Notice that the residual polynomials of MEG in (6) and that of GDM in (7) are identical, except for the link functions σ(λ) and ξ(λ), which enter as arguments in Tt(·) and Ut(·).
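The equivalence between the recurrence of Theorem 1 and the Chebyshev form (6) can also be checked numerically; the following is a small sketch with arbitrary illustrative values of $h$, $\gamma$, $m$, and $\lambda$, relying on SciPy's Chebyshev evaluators:

```python
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

# Sanity check (not from the paper): the MEG recurrence of Theorem 1 matches its Chebyshev form (6).
h, gamma, m, lam = 0.3, 0.1, 0.2, 2.0
a = 1 + m - h * lam * (1 - gamma * lam)      # recurrence coefficient
sigma = a / (2 * np.sqrt(m))                 # link function sigma(lambda) in (6)

# Residual polynomials via the three-term recurrence of Theorem 1.
P_prev, P = 1.0, 1 - h * lam * (1 - gamma * lam) / (1 + m)
for _ in range(1, 10):
    P_prev, P = P, a * P - m * P_prev        # after the loop, P holds P_tilde_10(lambda)

# Closed form (6): m^{t/2} * (2m/(1+m) T_t(sigma) + (1-m)/(1+m) U_t(sigma)).
t = 10
closed = m ** (t / 2) * (2 * m / (1 + m) * eval_chebyt(t, sigma)
                         + (1 - m) / (1 + m) * eval_chebyu(t, sigma))
print(np.isclose(P, closed))                 # True
```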
152
+
153
+ The differences in these link functions are paramount because the behavior of Chebyshev polynomials hinges decisively on their argument's domain:
154
Lemma 2 (Goujaud & Pedregosa (2022)). *Let $z$ be a complex number, and let $T_t(\cdot)$ and $U_t(\cdot)$ be the Chebyshev polynomials of the first and second kind, respectively. The sequence $\left\{\frac{2m}{1+m}T_t(z)+\frac{1-m}{1+m}U_t(z)\right\}_{t\geqslant 0}$ grows exponentially in $t$ for $z\notin[-1,1]$, while for $z\in[-1,1]$, the following bounds hold:*

$$|T_{t}(z)|\leqslant1\quad\text{and}\quad|U_{t}(z)|\leqslant t+1.\tag{8}$$
160
+ Therefore, to study the optimal convergence behavior of MEG, we are interested in the case where the set of step sizes and the momentum parameters lead to |σ(λ; h, γ, m)| ⩽ 1 so that we can use the bounds in (8).
161
+
162
+ We will refer to those sets of eigenvalues and hyperparameters as the *robust region*, as defined below.
163
+
164
Definition 1 (Robust region of MEG). *Consider the MEG method in (4) expressed via Chebyshev polynomials, as in (6). We define the set of eigenvalues and hyperparameters such that the image of the link function $\sigma(\lambda; h, \gamma, m)$ lies in the interval $[-1, 1]$ as the **robust region**, and denote it with $\sigma^{-1}([-1,1])$.*
166
+
167
+ Although polynomial-based analysis requires the assumption that the vector field is affine, it captures intuitive insights into how various algorithms behave in different settings, as we remark below.
168
+
169
Remark 1. *From the definition of ξ(λ) in (7), one can infer why negative momentum can help the convergence of GDM (Gidel et al., 2019) when λ ∈ R+: it forces GDM to stay within the robust region, |ξ(λ)| ⩽ 1. One can also infer the divergence of GDM in the presence of complex eigenvalues, unless, for instance, complex momentum is used (Lorraine et al., 2022). Similarly, the residual polynomial of GD is $P^{\mathrm{GD}}_t(\lambda) = (1-h\lambda)^t$ (Goujaud & Pedregosa, 2022, Example 4.2), and it can easily diverge in the presence of complex eigenvalues, which can potentially be alleviated by using complex step sizes. On the contrary, thanks to the quadratic link function of MEG in (6), MEG can converge for much wider subsets of complex eigenvalues.*

By analyzing the residual polynomials of MEG, we can also characterize the asymptotic convergence rate of MEG for any combination of hyperparameters, as summarized in the next theorem.

Theorem 2 (Asymptotic convergence rate of MEG). *Suppose $v(w) = Aw + b$. The asymptotic convergence rate of MEG in (4) is:*4

$$\limsup_{t\to\infty}\sqrt[2t]{\frac{\|w_{t}-w^{\star}\|}{\|w_{0}-w^{\star}\|}}=\begin{cases}\sqrt[4]{m},&\text{if}\quad\bar{\sigma}\leqslant1\quad\text{(robust region)};\\ \sqrt[4]{m}\big(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\big)^{1/2},&\text{if}\quad\bar{\sigma}\in\Big(1,\frac{1+m}{2\sqrt{m}}\Big);\\ \geqslant1\ \text{(no convergence)},&\text{otherwise},\end{cases}\tag{9}$$

*where $\bar{\sigma}=\sup_{\lambda\in\mathcal{S}^{\star}}|\sigma(\lambda;h,\gamma,m)|$, and $\sigma(\lambda;h,\gamma,m)\equiv\sigma(\lambda)$ is the link function of MEG defined in (6).*

4The reason why we take the 2t-th root is to normalize by the number of vector field computations; we compare in Section 4 the asymptotic rate of MEG in (9) with other gradient methods that use a single vector field computation in their recurrences, such as GD and GDM.

Optimal hyperparameters for MEG that we obtain in Section 4 minimize the asymptotic convergence rate above. Note that the optimal hyperparameters vary based on the set $\mathcal{S}^\star$, which we detail in Section 3.2.
180
+
181
+ ## 3.1 Three Modes Of The Momentum Extragradient
182
+
183
+ Within the robust region of MEG, we can compute its worst-case convergence rate based on (3) as follows:
184
+
185
+ $$\sup_{\lambda\in\mathcal{S}^{*}}|P_{t}^{\text{MEG}}(\lambda)|\leqslant\,m^{t/2}\Big{(}\frac{2m}{1+m}\sup_{\lambda\in\mathcal{S}^{*}}|T_{t}(\sigma(\lambda))|+\frac{1-m}{1+m}\sup_{\lambda\in\mathcal{S}^{*}}|U_{t}(\sigma(\lambda))|\Big{)}\tag{10}$$ $$\leqslant\,m^{t/2}\Big{(}\frac{2m}{1+m}+\frac{1-m}{1+m}(t+1)\Big{)}\leqslant m^{t/2}(t+1).$$
186
+
187
+ Since the Chebyshev polynomial expressions of MEG in (6) and that of GDM5 are identical except for the link functions, the convergence rate in (10) applies to both MEG and GDM, as long as the link functions |σ(λ)| and |ξ(λ)| are bounded by 1. As a result, we see that the asymptotic convergence rate in (9) only depends on the momentum parameter m, when the hyperparameters are restricted to the robust region. This fact was utilized in tuning GDM for strongly convex quadratic minimization (Zhang & Mitliagkas, 2019).
188
+
189
+ The robust region of MEG can be described with the four extreme points below (derivation in the appendix):
190
+
191
+ $$\sigma^{-1}(-1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\quad\text{and}\quad\sigma^{-1}(1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}.\tag{11}$$
192
+
193
The above four points and their intermediate values characterize the set of Jacobian eigenvalues λ that can be mapped to [−1, 1]. The distribution of these eigenvalues can vary in three different modes depending on the selected hyperparameters of MEG, as stated in the following theorem.

Theorem 3. *Consider the momentum extragradient method in (4), expressed with the Chebyshev polynomials as in (6). Then, the robust region (c.f., Definition 1) has the following three modes:*

- *Case 1:* If $\frac{h}{4\gamma}\geqslant(1+\sqrt{m})^2$, then $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ are all real numbers;
- *Case 2:* If $(1-\sqrt{m})^{2}\leqslant\frac{h}{4\gamma}<(1+\sqrt{m})^{2}$, then $\sigma^{-1}(-1)$ are complex, and $\sigma^{-1}(1)$ are real;
- *Case 3:* If $(1-\sqrt{m})^{2}>\frac{h}{4\gamma}$, then $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ are all complex numbers.

Remark 2. *Theorem 3 offers guidance on how to set up the hyperparameters for MEG. This depends on the Jacobian spectrum of the game problem being considered. For instance, if one observes only real eigenvalues (i.e., the problem is in fact minimization), the main step size h should be at least 4× larger than the extrapolation step size γ, based on the condition $\frac{h}{4\gamma} \geqslant (1+\sqrt{m})^2$.*
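A small helper illustrating (11) and the three modes of Theorem 3 is sketched below; the function name and the example hyperparameter values are ours, for illustration only.

```python
import cmath, math

def robust_region_endpoints(h, gamma, m):
    """Return the extreme points sigma^{-1}(-1), sigma^{-1}(+1) of (11) and the mode of Theorem 3."""
    sm = math.sqrt(m)
    def endpoints(c):  # c is (1 + sm)**2 for sigma^{-1}(-1), (1 - sm)**2 for sigma^{-1}(+1)
        disc = cmath.sqrt(1.0 / (4 * gamma**2) - c / (h * gamma))
        return (1.0 / (2 * gamma) - disc, 1.0 / (2 * gamma) + disc)
    minus_one = endpoints((1 + sm) ** 2)
    plus_one = endpoints((1 - sm) ** 2)
    ratio = h / (4 * gamma)
    if ratio >= (1 + sm) ** 2:
        mode = 1   # all four points real: minimization-like spectrum (Case 1)
    elif ratio >= (1 - sm) ** 2:
        mode = 2   # sigma^{-1}(-1) complex, sigma^{-1}(+1) real: cross shape (Case 2)
    else:
        mode = 3   # all four points complex: shifted imaginary spectrum (Case 3)
    return minus_one, plus_one, mode

# e.g., a step-size ratio h/(4*gamma) below (1 - sqrt(m))**2 lands in Case 3:
print(robust_region_endpoints(h=0.1, gamma=0.5, m=0.25))
```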
207
+
208
We illustrate Theorem 3 in Figure 1. We first set the hyperparameters according to each condition in Theorem 3. We then discretize the interval [−1, 1], and plot $\sigma^{-1}([-1,1])$ for each case, represented by red lines. We can see the quadratic link function induced by MEG allows interesting eigenvalue dynamics to be mapped onto the [−1, 1] segment, such as the cross shape observed in Case 2. Moreover, although MEG exhibits the best rates within the robust region, it does not necessarily diverge outside of it, as in the second case of Theorem 2. We illustrate the convergence region of MEG, measured by $\sqrt[2t]{|P_t(\lambda)|} < 1$ from (6) for $t = 2000$, with different colors indicating varying convergence rates, which slow down as one moves away from the robust region. Interestingly, Figure 1 (right) shows that MEG can also converge in the absence of monotonicity (i.e., in the presence of Jacobian eigenvalues with negative real part) (Gorbunov et al., 2023).
211
+
212
+ ## 3.2 Robust Region-Induced Problem Cases
213
+
214
+ We classify problem classes into three distinct cases based on Theorem 3, each reflecting a different mode of the robust region (Figure 2):
215
5Asymptotically, GDM enjoys a $\sqrt{m}$ convergence rate instead of the $\sqrt[4]{m}$ of MEG, as it uses a single vector field computation per iteration instead of two. However, these are not directly comparable, as the values of m that correspond to the robust region are not the same.
217
+
218
+ ![6_image_0.png](6_image_0.png)
219
+
220
+ Figure 2: *Illustration of the three spectrum models where MEG achieves accelerated convergence rates.*
221
+ Case 1: The problem reduces to minimization, where the Jacobian eigenvalues are distributed on the (positive) real line, but as a *union* of two intervals. We can model such spectrum as:
222
+
223
$$\operatorname{Sp}(\nabla v)\subset{\mathcal{S}}_{1}^{\star}=[\mu_{1},L_{1}]\cup[\mu_{2},L_{2}]\subset\mathbb{R}_{+}.\tag{12}$$
226
The above generalizes the Hessian spectrum that arises in minimizing µ-strongly convex and L-smooth functions, i.e., λ ∈ [µ, L]. This spectrum can be obtained from (12) by setting µ1 = µ, L2 = L, and L1 = µ2. It was empirically observed that, during DNN training, sometimes a few eigenvalues of the Hessian have significantly larger magnitudes (Papyan, 2020). In such cases, (12) can be more precise than a single interval [µ, L]. In particular, Goujaud et al. (2022) utilized (12), and showed that GDM with alternating step sizes can achieve a (constant factor) improvement over the traditional lower bound for strongly convex and smooth quadratic objectives.
227
+
228
+ In Section 4, we show that MEG enjoys similar improvement. To show that, we define the following quantities following Goujaud et al. (2022), which will be used to obtain the convergence rate of MEG in (18) for this problem class.
229
+
230
$$\zeta:={\frac{L_{2}+\mu_{1}}{L_{2}-\mu_{1}}}={\frac{1+\tau}{1-\tau}},\quad{\text{and}}\quad R:={\frac{\mu_{2}-L_{1}}{L_{2}-\mu_{1}}}\in[0,1).\tag{13}$$

Here, ζ is the ratio between the center of $\mathcal{S}_1^\star$ and its radius, and $\tau := \mu_1/L_2$ is the inverse condition number. $R$ is the relative gap of $\mu_2 - L_1$ and $L_2 - \mu_1$, which becomes 0 if $\mu_2 = L_1$ (i.e., $\mathcal{S}_1^\star$ becomes $[\mu_1, L_2]$).
240
+
241
+ Case 2: In this case, the Jacobian eigenvalues are distributed both on the real line and as complex conjugates, exhibiting a *cross-shaped* spectrum. We model this spectrum as:
242
+
243
$$\mathrm{Sp}(\nabla v)\subset{\mathcal{S}}_{2}^{\star}=[\mu,L]\cup\{z\in\mathbb{C}:\Re(z)=c^{\prime}>0,\ \Im(z)\in[-c,c]\}.\tag{14}$$
244
+
245
The first set [µ, L] denotes a segment on the real line, reminiscent of the Hessian spectrum for minimizing µ-strongly convex and L-smooth functions. The second set has a fixed real component (c′ > 0), along with imaginary components symmetric across the real line (i.e., complex conjugates), as the Jacobian is real. This is a strict generalization of the purely imaginary interval ±[ai, bi] commonly considered in the bilinear games literature (Liang & Stokes, 2019; Azizian et al., 2020b; Mokhtari et al., 2020). While many recent papers on bilinear games cite GANs (Goodfellow et al., 2014) as a motivation, the work in Berard et al. (2020, Figure 4) empirically shows that the spectrum of GANs is not contained in the imaginary axis; the cross-shaped spectrum model above might be closer to some of the observed GAN spectra.

Case 3: In this case, the Jacobian eigenvalues are distributed only as complex conjugates, with a fixed real component, exhibiting a *shifted imaginary* spectrum. We model this spectrum as:

$$\operatorname{Sp}(\nabla v)\subset{\mathcal{S}}_{3}^{\star}=[c+ai,c+bi]\cup[c-ai,c-bi]\subset\mathbb{C}_{+}.\tag{15}$$

Again, (15) generalizes bilinear games, where the spectrum reduces to ±[ai, bi] with c = 0.
258
+
259
Examples of Cases 2 and 3 in quadratic games. To understand these spectra better, we provide examples using quadratic games. Consider the following two-player quadratic game, where $x \in \mathbb{R}^{d_1}$ and $y \in \mathbb{R}^{d_2}$ are the parameters controlled by each player, whose loss functions respectively are:
262
+
263
+ $$\ell_{1}(x,y)=\frac{1}{2}x^{\top}S_{1}x+x^{\top}M_{12}y+x^{\top}b_{1}\quad\text{and}\quad\ell_{2}(x,y)=\frac{1}{2}y^{\top}S_{2}y+y^{\top}M_{21}x+y^{\top}b_{2},\tag{16}$$
264
+
265
+ where S1, S2 ≻ 0. Then, the vector field can be written as:
266
+
267
+ $$v(x,y)=\begin{bmatrix}S_{1}x+M_{12}y+b_{1}\\ M_{21}x+S_{2}y+b_{2}\end{bmatrix}=Aw+b,\text{where}A=\begin{bmatrix}S_{1}&M_{12}\\ M_{21}&S_{2}\end{bmatrix},\text{}w=\begin{bmatrix}x\\ y\end{bmatrix},\text{and}b=\begin{bmatrix}b_{1}\\ b_{2}\end{bmatrix}.\tag{17}$$
268
+
269
If $S_1 = S_2 = 0$ and $M_{12} = -M_{21}^\top$, the game Jacobian $\nabla v = A$ has only purely imaginary eigenvalues (Azizian et al., 2020b, Lemma 7), recovering bilinear games.

As the second and the third spectrum models in (14) and (15) generalize bilinear games, we can consider more complex quadratic games, where $S_1$ and $S_2$ do not have to be 0. Specifically, when $M_{12} = -M_{21}^\top$ and it shares common bases with $S_1$ and $S_2$ as specified in the proposition below, then Sp(A) has a cross-shaped spectrum as in (14) of Case 2, or a shifted imaginary spectrum as in (15) of Case 3.
275
+
276
Proposition 1. *Let A be a matrix of the form $\begin{bmatrix} S_1 & B \\ -B^\top & S_2 \end{bmatrix}$, where $S_1, S_2 \succ 0$. Without loss of generality, assume that $\dim(S_1) > \dim(S_2) = d$. Then,*

- *Case 2:* Sp(A) has a cross shape if there exist orthonormal matrices $U, V$ and diagonal matrices $D_1, D_2$ such that $S_1 = U\,\mathrm{diag}(a, \dots, a, D_1)\,U^\top$, $S_2 = V\,\mathrm{diag}(a, \dots, a)\,V^\top$, and $B = U D_2 V^\top$.
- *Case 3:* Sp(A) has a shifted imaginary shape if there exist orthonormal matrices $U, V$ and a diagonal matrix $D_2$ such that $S_1 = U\,\mathrm{diag}(a, \dots, a)\,U^\top$, $S_2 = V\,\mathrm{diag}(a, \dots, a)\,V^\top$, and $B = U D_2 V^\top$.
288
+
289
We can interpret Case 3 as a *regularized* bilinear game, where $S_1$ and $S_2$ are diagonal matrices with a constant eigenvalue. This implies that the players cannot control their parameters $x$ and $y$ arbitrarily, which can be seen in the loss functions in (16), where $S_1$ and $S_2$ appear in the terms $x^\top S_1 x$ and $y^\top S_2 y$. Case 2 can be interpreted similarly, but player 1 (without loss of generality) has more flexibility in its parameter choice due to the additional diagonal matrix $D_1$ in the eigenvalue decomposition of $S_1$.
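As a quick numerical illustration of Proposition 1 (a sketch only; the values of $a$ and $D_2$ are arbitrary, and equal block sizes are used for simplicity), one can build such a matrix for Case 3 and inspect its spectrum:

```python
import numpy as np

# Sketch: build a Proposition 1, Case 3 matrix and check its shifted imaginary spectrum.
rng = np.random.default_rng(0)
d, a = 4, 2.0                                   # block size and the shared diagonal value a
U, _ = np.linalg.qr(rng.normal(size=(d, d)))    # orthonormal U
V, _ = np.linalg.qr(rng.normal(size=(d, d)))    # orthonormal V
D2 = np.diag(rng.uniform(0.5, 3.0, size=d))

S1 = a * np.eye(d)          # U diag(a, ..., a) U^T = a I, regardless of U
S2 = a * np.eye(d)
B = U @ D2 @ V.T
A = np.block([[S1, B], [-B.T, S2]])

eigs = np.linalg.eigvals(A)
print(np.allclose(eigs.real, a))                # all eigenvalues share real part a (the shift c)
print(np.sort(np.abs(eigs.imag)))               # imaginary parts come in +- pairs given by D2
```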
292
+
293
+ ## 4 Optimal Parameters And Convergence Rates
294
+
295
+ In this section, we obtain the optimal hyperparameters of MEG (in the sense that they achieve the fastest asymptotic convergence rate), for each spectrum model discussed in the previous section.
296
+
297
Case 1: minimization. When the condition in Case 1 of Theorem 3 holds (i.e., $\frac{h}{4\gamma} \geqslant (1+\sqrt{m})^2$), both $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ (and their intermediate values) lie on the real line, forming a union of two intervals (see Figure 1, left). The robust region in this case, denoted $\sigma^{-1}_{\mathrm{Case 1}}([-1,1])$, is expressed as:
304
+
305
$$\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]\bigcup\left[\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]\subset\mathbb{R}_{+}.$$
306
+
307
+ For this case, the optimal hyperparameters of MEG in terms of the worst-case asymptotic convergence rate in (9) can be set as below. Theorem 4 (Case 1). *Consider solving* (1) *for games where the Jacobian has the spectrum in* (12). For this problem, the optimal hyperparameters for the momentum extragradient method in (4) *are:*
308
+
309
$$h=\frac{4(\mu_{1}+L_{2})}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}},\ \ \gamma=\frac{1}{\mu_{1}+L_{2}}=\frac{1}{\mu_{2}+L_{1}},\ \ \ \text{and}\ \ \ m=\left(\frac{\sqrt{\mu_{2}L_{1}}-\sqrt{\mu_{1}L_{2}}}{\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}}}\right)^{2}=\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{2}.$$
310
+
311
+ Recalling (9), we immediately get the asymptotic convergence rate from Theorem 4. Further, this formula can be simplified in the ill-conditioned regime, where the inverse condition number τ := µ1/L2 → 0:
312
+
313
+ $$\sqrt[4]{m}=\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{1/2}\underset{\tau\to0}{=}1-\frac{2\sqrt{\tau}}{\sqrt{1-R^{2}}}+o(\sqrt{\tau}).\tag{18}$$
314
+
315
From (18), we see that MEG achieves an accelerated convergence rate $1 - O(\sqrt{\tau})$, which is known to be "optimal" for this function class, and can be asymptotically achieved by GDM6 (Polyak, 1987) (see also Theorem 8 with θ = 1). Surprisingly, this rate can be further improved by the factor $\sqrt{1-R^2}$, exhibiting the "super-acceleration" phenomenon enjoyed by GDM with (optimal) cyclical step sizes (Goujaud et al., 2022). Note that achieving this improvement is possible by having additional information beyond just the largest ($L_2$) and smallest ($\mu_1$) eigenvalues of the Hessian.
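A small sketch applying Theorem 4 is given below; the helper name and the example interval endpoints are ours. Note that, as written, the formulas presume $\mu_1 + L_2 = \mu_2 + L_1$, since $\gamma = 1/(\mu_1 + L_2) = 1/(\mu_2 + L_1)$.

```python
import math

def meg_case1_hyperparams(mu1, L1, mu2, L2):
    """Optimal MEG hyperparameters from Theorem 4 for Sp(grad v) in [mu1, L1] U [mu2, L2].
    Assumes the two intervals satisfy mu1 + L2 == mu2 + L1, as required by gamma in Theorem 4."""
    assert abs((mu1 + L2) - (mu2 + L1)) < 1e-12
    s_inner, s_outer = math.sqrt(mu2 * L1), math.sqrt(mu1 * L2)
    h = 4 * (mu1 + L2) / (s_inner + s_outer) ** 2
    gamma = 1.0 / (mu1 + L2)
    m = ((s_inner - s_outer) / (s_inner + s_outer)) ** 2
    return h, gamma, m

# Example (hypothetical spectrum): mu1=1, L1=40, mu2=161, L2=200, so mu1 + L2 = mu2 + L1 = 201.
h, gamma, m = meg_case1_hyperparams(1.0, 40.0, 161.0, 200.0)
rate = m ** 0.25                                     # asymptotic rate in (9), cf. (18)
print(h, gamma, m, rate)
print(h / (4 * gamma) >= (1 + math.sqrt(m)) ** 2)    # sanity check: Case 1 condition of Theorem 3
```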
322
+
323
Case 2: cross-shaped spectrum. If the condition in Case 2 of Theorem 3 is satisfied (i.e., $(1-\sqrt{m})^2 \leqslant \frac{h}{4\gamma} < (1+\sqrt{m})^2$), then $\sigma^{-1}(-1)$ are complex, while $\sigma^{-1}(1)$ are real (c.f., Figure 1, middle). We can write the robust region $\sigma^{-1}_{\mathrm{Case 2}}([-1,1])$ as:

$$\underbrace{\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]}_{\subset\mathbb{R}_{+}}\bigcup\underbrace{\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]}_{\subset\mathbb{C}_{+}}.$$

Here, the first interval lies on $\mathbb{R}_+$, as the square root term is real; conversely, in the second interval, the square root term is imaginary, with the fixed real component $\frac{1}{2\gamma}$. We summarize the optimal hyperparameters for this case in the next theorem.

Theorem 5 (Case 2). *Consider solving (1) for games where the Jacobian has a cross-shaped spectrum as in (14). For this problem, the optimal hyperparameters for the momentum extragradient method in (4) are:*
338
+
339
+ $$h=\frac{16(\mu+L)}{(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L})^{2}},\quad\ \gamma=\frac{1}{\mu+L},\quad\ \mathrm{and}\quad\ m=\left(\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}\right)^{2}.$$
340
+
341
+ We get the asymptotic rate from Theorem 5, which simplifies in the ill-conditioned regime τ := µ/L → 0 as:
342
+
343
+ $$\sqrt[4]{m}=\left(\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}\right)^{1/2}\underset{\tau\to0}{=}1-\frac{2\sqrt{\tau}}{\sqrt{(2c/L)^{2}+1}}+o(\sqrt{\tau}).\tag{19}$$
344
+
345
We see that MEG achieves an accelerated convergence rate $1 - O(\sqrt{\mu/L})$, as long as $c = O(L)$. We remark that this rate is optimal in the following sense. The lower bound for problems with the cross-shaped spectrum in (14) must be slower than the existing ones for minimizing µ-strongly convex and L-smooth functions, as the former class is strictly more general. Since we reach the same asymptotic optimal rate, this must be optimal.

Case 3: shifted imaginary spectrum. Lastly, if the condition in Case 3 of Theorem 3 is satisfied (i.e., $\frac{h}{4\gamma} < (1-\sqrt{m})^2$), then $\sigma^{-1}(-1)$ and $\sigma^{-1}(1)$ (and the intermediate values) are all complex conjugates (c.f., Figure 1, right). We can write the robust region $\sigma^{-1}_{\mathrm{Case 3}}([-1,1])$ as:
354
+
355
$$\left[\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]\bigcup\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]\subset\mathbb{C}_{+}.$$
356
+
357
We modeled such a spectrum in (15), which generalizes bilinear games, where the spectrum reduces to ±[ai, bi] (i.e., with c = 0). We summarize the optimal hyperparameters for this case below.

6Precisely, GDM with optimal step size and momentum asymptotically achieves a $1 - 2\sqrt{\tau} + o(\sqrt{\tau})$ convergence rate, as τ → 0 (Goujaud & Pedregosa, 2022, Proposition 3.3).
364
+ Theorem 6 (Case 3). *Consider solving* (1) *for games where the Jacobian has a shifted imaginary spectrum* in (15)*. For this problem, the optimal hyperparameters for the momentum extragradient method in* (4) *are:*
365
+
366
$$h=\frac{8c}{(\sqrt{c^{2}+a^{2}}+\sqrt{c^{2}+b^{2}})^{2}},\quad\gamma=\frac{1}{2c},\quad\text{and}\quad m=\left(\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}\right)^{2}.$$
367
+
368
Similarly to before, we compute the asymptotic convergence rate from Theorem 6 using (9):

$${\sqrt[4]{m}}=\left({\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}}\right)^{1/2}=\left(1-{\frac{2{\sqrt{c^{2}+a^{2}}}}{{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}}}\right)^{1/2}.\tag{20}$$
372
+
373
Note that by setting c = 0, the rate in (20) matches the lower bound of bilinear games, $\sqrt{\frac{b-a}{b+a}}$ (Azizian et al., 2020b, Proposition 5). Further, with c > 0, the convergence rate in (20) improves, highlighting the contrast between vanilla bilinear games and their regularized counterpart.

Remark 3. *Notice that the optimal momentum m in both Theorems 5 and 6 is positive. This is in contrast to Gidel et al. (2019), where the **gradient** method with negative momentum is studied. This difference elucidates the distinct dynamics of how momentum interacts with the **gradient** and the **extragradient** methods.*
376
+
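A small sketch applying Theorem 6, and checking the c = 0 bilinear limit against (20), is given below (the helper name and example values are ours, for illustration only):

```python
import math

def meg_case3_hyperparams(a, b, c):
    """Optimal MEG hyperparameters from Theorem 6 for the shifted imaginary spectrum (15).
    Note gamma = 1/(2c) requires c > 0; the c -> 0 bilinear limit is only used for the rate below."""
    ra, rb = math.sqrt(c**2 + a**2), math.sqrt(c**2 + b**2)
    h = 8 * c / (ra + rb) ** 2
    gamma = 1.0 / (2 * c)
    m = ((rb - ra) / (rb + ra)) ** 2
    return h, gamma, m

a, b = 1.0, 10.0
for c in (1e-6, 1.0, 5.0):
    h, gamma, m = meg_case3_hyperparams(a, b, c)
    print(c, m ** 0.25)                # asymptotic rate (20); it improves (decreases) as c grows

# As c -> 0 the rate approaches the bilinear lower bound sqrt((b - a)/(b + a)).
print(math.sqrt((b - a) / (b + a)))
```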
377
+ ## 5 Comparison With Other Methods
378
+
379
+ Having established MEG's asymptotic convergence rates for various spectrum models, we now compare it with other first-order methods, including GD, GDM, and EG.
380
+
381
+ Comparison with GD and EG. Building upon the fixed-point iteration framework established by Polyak (1987), Azizian et al. (2020a) interpret both GD and EG as fixed-point iterations. Within this framework, iterates of a method are generated according to:
382
+
383
$$w_{t+1}=F(w_{t}),\quad\forall t\geqslant0,\tag{21}$$
387
where $F: \mathbb{R}^d \to \mathbb{R}^d$ is an operator representing the method. However, analyzing this scheme in general settings poses challenges due to the potential nonlinearity of F. To address this, under conditions of twice differentiability of F and proximity of w to the stationary point $w^\star$, the analysis can be simplified by linearizing F around $w^\star$:
392
+
393
+ $$F(w)\approx F(w^{\star})+\nabla F(w^{\star})(w-w^{\star}).$$
394
+
395
Then, for $w_0$ in a neighborhood of $w^\star$, one can obtain an asymptotic convergence rate of (21) by studying the spectral radius of the Jacobian at the solution: $\rho(\nabla F(w^\star)) \leqslant \rho^\star < 1$. This implies that (21) locally converges linearly to $w^\star$ at the rate $O((\rho^\star + \varepsilon)^t)$ for $\varepsilon \geqslant 0$. Further, if F is linear, ε = 0 (Polyak, 1987).
402
+
403
The corresponding fixed point operators $F_h^{\mathrm{GD}}$ and $F_h^{\mathrm{EG}}$ of GD and EG7 respectively are:

$$\text{(GD)}\quad w_{t+1}=w_{t}-hv(w_{t})=F_{h}^{\mathrm{GD}}(w_{t}),\tag{22}$$
$$\text{(EG)}\quad w_{t+1}=w_{t}-hv(w_{t}-hv(w_{t}))=F_{h}^{\mathrm{EG}}(w_{t}).\tag{23}$$
410
The local convergence rate can then be obtained by bounding the spectral radius of the Jacobian of the operators under certain assumptions. We summarize the relevant results below.

Theorem 7 (Azizian et al. (2020a); Gidel et al. (2019)). *Let $w^\star$ be a stationary point of v. Further, assume the eigenvalues of $\nabla v(w^\star)$ all have positive real parts. Then, denoting $\mathcal{S}^\star := \mathrm{Sp}(\nabla v(w^\star))$,*

1. *For the gradient method in (22) with step size $h = \min_{\lambda\in\mathcal{S}^\star} \Re(1/\lambda)$, it satisfies:*8

$$\rho(\nabla F_{h}^{\mathrm{GD}}(w^{\star}))^{2}\leqslant1-\min_{\lambda\in \mathcal{S}^{\star}}\Re\left(\frac{1}{\lambda}\right)\min_{\lambda\in \mathcal{S}^{\star}}\Re(\lambda).\tag{24}$$

2. *For the extragradient method in (23) with step size $h = (4 \sup_{\lambda\in\mathcal{S}^\star} |\lambda|)^{-1}$, it satisfies:*

$$\rho(\nabla F_{h}^{\mathrm{EG}}(w^{\star}))^{2}\leqslant1-\frac{1}{4}\left(\frac{\min_{\lambda\in\mathcal{S}^{\star}}\Re(\lambda)}{\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|}+\frac{1}{16}\frac{\min_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}}{\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}}\right).\tag{25}$$

7Azizian et al. (2020a) assumes that EG uses the same step size h for both the main and the extrapolation steps.

8Note that the spectral radius ρ is squared, but asymptotically the rate is almost the same, as $\sqrt{1-x} \leqslant 1 - x/2$.
432
+
433
We can determine the convergence rate of GD and EG by using Theorem 7, since all three cases of our spectrum models in (12), (14), and (15) meet the condition that the eigenvalues of $\nabla v(w^\star)$ have positive real parts. The following corollary summarizes this result.
435
+
436
Corollary 1. *With the conditions in Theorem 7, for each case of the Jacobian spectrum $\mathcal{S}^\star_1$, $\mathcal{S}^\star_2$, and $\mathcal{S}^\star_3$, the gradient method in (22) and the extragradient method in (23) satisfy the following:*

- Case 1: $\mathrm{Sp}(\nabla v) \subset \mathcal{S}^\star_1 = [\mu_1, L_1] \cup [\mu_2, L_2] \subset \mathbb{R}_+$:

$$\rho(\nabla F^{\mathrm{GD}}_h(w^\star))^2 \leqslant 1 - \frac{\mu_1}{L_2}, \quad\text{and}\quad \rho(\nabla F^{\mathrm{EG}}_h(w^\star))^2 \leqslant 1 - \frac{1}{4}\left(\frac{\mu_1}{L_2} + \frac{\mu_1^2}{16 L_2^2}\right).\tag{26}$$

- Case 2: $\mathrm{Sp}(\nabla v) \subset \mathcal{S}^\star_2 = [\mu, L] \cup \{z \in \mathbb{C} : \Re(z) = c' > 0,\ \Im(z) \in [-c, c]\}$:

$$\rho(\nabla F^{\mathrm{GD}}_h(w^\star))^2 \leqslant \begin{cases} 1 - \dfrac{2\mu}{4c^2/(L-\mu) + (L-\mu)} & \text{if } c \geqslant \sqrt{\tfrac{L^2 - \mu^2}{4}}, \\ 1 - \dfrac{\mu}{L} & \text{otherwise,} \end{cases}\tag{27}$$
$$\rho(\nabla F^{\mathrm{EG}}_h(w^\star))^2 \leqslant \begin{cases} 1 - \dfrac{1}{4}\left(\dfrac{\mu}{\sqrt{c^2 + ((L-\mu)/2)^2}} + \dfrac{\mu^2}{16\,(c^2 + ((L-\mu)/2)^2)}\right) & \text{if } c \geqslant \sqrt{\tfrac{3L^2 + 2L\mu - \mu^2}{4}}, \\ 1 - \dfrac{1}{4}\left(\dfrac{\mu}{L} + \dfrac{\mu^2}{16 L^2}\right) & \text{otherwise.} \end{cases}$$

- Case 3: $\mathrm{Sp}(\nabla v) \subset \mathcal{S}^\star_3 = [c + ai, c + bi] \cup [c - ai, c - bi] \subset \mathbb{C}_+$:

$$\rho(\nabla F^{\mathrm{GD}}_h(w^\star))^2 \leqslant 1 - \frac{c^2}{c^2 + b^2}, \quad\text{and}\quad \rho(\nabla F^{\mathrm{EG}}_h(w^\star))^2 \leqslant 1 - \frac{1}{4}\left(\frac{c}{\sqrt{c^2 + b^2}} + \frac{c^2 + a^2}{16\,(c^2 + b^2)}\right).\tag{28}$$
454
In Case 1, we see from (26) that both GD and EG have convergence rates $1 - O(\mu_1/L_2) = 1 - O(\tau)$. MEG, on the other hand, has an accelerated convergence rate of $1 - O(\sqrt{\tau})$, as well as an additional constant improvement by a factor of $\sqrt{1-R^2}$, as we showed in (18). Moving on to Case 2, we showed in (19) that MEG enjoys an accelerated convergence rate of $1 - O(\sqrt{\mu/L})$ as long as c = O(L). However, both GD and EG in (27) have non-accelerated convergence under the same condition. Lastly, for Case 3, we showed in (20) that MEG achieves an asymptotic rate that matches the known lower bound for bilinear games, $\sqrt{\frac{b-a}{b+a}}$, with c = 0; further, the rate of MEG improves if c > 0. On the contrary, GD and EG suffer from slower rates, as shown in (28).
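For a rough numerical sense of this gap in Case 1, the following sketch uses hypothetical interval endpoints; the GD and EG values are the squared spectral-radius bounds of (26), while the MEG value is the leading-order rate of (18):

```python
import math

# Hypothetical Case 1 spectrum S_1 = [mu1, L1] U [mu2, L2].
mu1, L1, mu2, L2 = 1.0, 20.0, 40.0, 200.0
tau = mu1 / L2                                  # inverse condition number
R = (mu2 - L1) / (L2 - mu1)                     # relative gap, as defined in (13)

rate_gd = 1 - mu1 / L2                                          # bound on rho^2 from (26)
rate_eg = 1 - 0.25 * (mu1 / L2 + mu1**2 / (16 * L2**2))         # bound on rho^2 from (26)
rate_meg = 1 - 2 * math.sqrt(tau) / math.sqrt(1 - R**2)         # leading-order rate of (18)
print(f"GD: {rate_gd:.4f}, EG: {rate_eg:.4f}, MEG: {rate_meg:.4f}")
```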
460
+
461
+ Comparison with GDM. We now compare the convergence rate of MEG with that of GDM, which iterates as in (5). In Azizian et al. (2020b), it was shown that GD is the optimal method for games where the Jacobian eigenvalues are within a *disc* in the complex plane. This suggests that acceleration is not possible for this type of problem.9 On the other hand, it is well-known that GDM achieves an accelerated convergence rate for strongly-convex (quadratic) minimization, where the eigenvalues of the Hessian lie on the (strictly positive)
462
+ real line segment (Polyak, 1987). Hence, Azizian et al. (2020b) studies the intermediate case, where the Jacobian eigenvalues are within an ellipse, which can be thought of as the real segment [*µ, L*] perturbed with ϵ in an elliptic way. That is, they consider the spectral shape:10
463
+
464
$$K_{\epsilon}=\left\{z\in\mathbb{C}:\left(\frac{\Re z-(\mu+L)/2}{(L-\mu)/2}\right)^{2}+\left(\frac{\Im z}{\epsilon}\right)^{2}\leq1\right\}.$$
465
Similarly to GD and EG above, in Azizian et al. (2020b), GDM is interpreted as a fixed point iteration:11

$$w_{t+1} = w_t - hv(w_t) + m(w_t - w_{t-1}) = F^{\mathrm{GDM}}(w_t, w_{t-1}).\tag{29}$$

To study the convergence rate of GDM, we use the following theorem from Azizian et al. (2020b):

9Yet, one can consider the case where, e.g., a cross shape is contained in a disc. Then, by knowing the more fine-grained structure of the Jacobian spectrum, MEG can have faster convergence, as in (19).

10A visual illustration of this ellipse can be found in Azizian et al. (2020b, Figure 2).

11As GDM updates wt+1 using both wt and wt−1, Azizian et al. (2020b) uses an augmented fixed point operator; see Lemma 2 in that work for details.
476
+
477
Theorem 8 (Azizian et al. (2020b)). *Define ϵ(µ, L) as $\epsilon(\mu, L)/L = (\mu/L)^\theta = \tau^\theta$ with θ > 0, where $a \wedge b := \min(a, b)$. If $\mathrm{Sp}(\nabla v(w^\star)) \subset K_\epsilon$, then as τ → 0, it satisfies:*

$$\rho(\nabla F^{\mathrm{GDM}}(w^{\star},w^{\star}))\leqslant\begin{cases}1-2\sqrt{\tau}+O\left(\tau^{\theta\wedge1}\right),&\text{if}\ \ \theta>\frac{1}{2},\\ 1-2(\sqrt{2}-1)\sqrt{\tau}+O\left(\tau\right),&\text{if}\ \ \theta=\frac{1}{2},\\ 1-\tau^{1-\theta}+O\left(\tau^{1\wedge(2-3\theta)}\right),&\text{if}\ \ \theta<\frac{1}{2},\end{cases}\tag{30}$$

*where the hyperparameters h and m are functions of µ, L, and ϵ only.*
486
For Case 1, GDM converges at the rate $1 - 2\sqrt{\tau} + O(\tau)$ (i.e., with θ = 1 from the above), which is always slower than the rate of MEG in (18) by the factor of $\sqrt{1-R^2}$. For Case 2, we see from Theorem 8 that GDM achieves an accelerated rate, i.e., $1 - O(\sqrt{\tau})$, only up to $\theta = \frac{1}{2}$. In other words, the biggest elliptic perturbation ϵ for which GDM permits the accelerated rate is $\epsilon = \sqrt{\mu L}$.12 We interpret Theorem 8 for games with the cross-shaped Jacobian spectrum in (14) and the shifted imaginary spectrum in (15) in the following corollary.
493
+
494
Corollary 2. *Consider the gradient method with momentum, interpreted as a fixed point iteration as in (29). For games with the cross-shaped Jacobian spectrum in (14) with $c = \frac{L-\mu}{2}$, GDM cannot achieve an accelerated rate when $\frac{L-\mu}{2} = c > \epsilon = \sqrt{\mu L}$. Since L > µ, this further implies $\frac{L}{\mu} > \sqrt{5}$. That is, when the condition number exceeds √5 ≈ 2.236, GDM cannot achieve an accelerated convergence rate. On the contrary, as we showed in (19), MEG can converge at an accelerated rate in the ill-conditioned regime.*

The convergence rate of GDM for Case 3 cannot be determined from Theorem 8, as this theorem assumes the spectrum model of the real line segment [µ, L] with an ϵ perturbation (along the imaginary axis), while $\mathcal{S}^\star_3$ in (15) has a fixed real component. Instead, we utilize the link function of GDM in (7) to show that it is unlikely for GDM to stay in the robust region $\xi^{-1}([-1,1])$.
505
Proposition 2. *Consider solving (1) for games where the Jacobian has a shifted imaginary spectrum in (15), using the gradient method with momentum in (5). For any complex number $z = p + qi \in \mathbb{C}_+$, if $\frac{2(1+m)}{h} < p$, then GDM cannot stay in the robust region, i.e., $|\xi(\lambda)| > 1$.*

Note that the condition $\frac{2(1+m)}{h} < p$ is hard to avoid even for small p, considering h is usually a small value.
511
+
512
+ ## 6 Local Convergence For Non-Affine Vector Fields
513
+
514
+ The optimal hyperparameters of MEG for each spectrum model and the associated convergence rate we obtained in Section 4 are attainable when the vector field is affine. A natural question is, then, what can we say about the convergence rate of MEG when the vector field is not affine? To that end, we provide the local convergence of MEG by restarting the momentum, as detailed below. Let us consider the operator G representing the MEG in (4) such that:
515
+
516
$$[w_{t+1}, w_t] = G([w_t, w_{t-1}]) \quad\text{and}\quad G([w^{\star},w^{\star}])=[w^{\star},w^{\star}].$$

In addition, we assume that $w_1 = w_0 - \frac{h}{1+m} v(w_0 - \gamma v(w_0))$, in order to induce the residual polynomials from Theorem 1; see also its proof and Algorithm 1 in the appendix. Now let us consider the following algorithm:
522
+
523
+ $$[w_{tk+i+1},w_{tk+i}]=G\big{(}[w_{tk+i},w_{tk+i-1}]\big{)}\quad\text{for}\quad1\leqslant i\leqslant k-1,\quad\text{and then}\tag{31}$$ $$w_{(t+1)k+1}=w_{(t+1)k}-\frac{h}{1+m}v\big{(}w_{(t+1)k}-\gamma v\big{(}w_{(t+1)k}\big{)}\big{)}.$$
524
+
525
+ In other words, we repeat MEG for k steps, and then restart the momentum at [w(t+1)k+1, w(t+1)k]. The local convergence of the restarted MEG is established in the next theorem.
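A minimal sketch of the restarting scheme (31) is given below; the function name is ours, and v is assumed to be given as a Python callable.

```python
import numpy as np

def restarted_meg(v, w0, h, gamma, m, k, n_restarts):
    """Sketch of the restarted MEG scheme (31): run MEG for k steps, then reset the momentum."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_restarts):
        # Restart: the first step uses w_1 = w_0 - h/(1+m) v(w_0 - gamma v(w_0)), as in Section 6.
        w_prev = w
        w = w - (h / (1 + m)) * v(w - gamma * v(w))
        # Then k - 1 plain MEG steps (4) before the next restart.
        for _ in range(k - 1):
            w_next = w - h * v(w - gamma * v(w)) + m * (w - w_prev)
            w_prev, w = w, w_next
    return w
```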
526
+
527
12Observe that $(\mu/L)^{1/2} = \epsilon(\mu, L)/L \implies \epsilon(\mu, L) = \sqrt{\mu L}$.

Theorem 9 (Local convergence). *Let $G: \mathbb{R}^{2d} \to \mathbb{R}^{2d}$ be the continuously differentiable operator representing the momentum extragradient method (MEG) in (4). Let $w^\star$ be a stationary point. Let $w_t$ denote the output of MEG, which enjoys a convergence rate of the form $\|w_t - w^\star\| = C(1-\varphi)^t (t+1) \|w_0 - w^\star\|$ for some $0 < \varphi < 1$ when the vector field is affine. Further, consider restarting the momentum of MEG after running k steps, as in (31). Then, for each ε > 0, there exist k > 0 and δ > 0 such that, for all initializations $w_0$ satisfying $\|w_0 - w^\star\| \leqslant \delta$, the restarted MEG satisfies:*
538
+
539
+ $$\|w_{t}-w^{\star}\|=O((1-\varphi+\varepsilon)^{t})\|w_{0}-w^{\star}\|.$$
540
+
541
+ ## 7 Experiments
542
+
543
![12_image_0.png](12_image_0.png)
544
+
545
+ Figure 3: *Illustration of the game Jacobian spectra and the performance of different algorithms considered.*
546
The Jacobian spectrum in the first plot matches $\mathcal{S}^\star_2$ in (14) precisely, while that in the third plot inexactly follows $\mathcal{S}^\star_2$. The second (fourth) plot shows the performance of different algorithms for solving quadratic games in (16) with the Jacobian spectrum following the first (third) plot.
553
+
554
In this section, we perform numerical experiments to optimize a game when the Jacobian has a cross-shaped spectrum as in (14). We focus on this spectrum as it may be the most challenging case, involving both real and complex eigenvalues (c.f., Theorem 3). To test robustness, we consider two cases: one where the Jacobian spectrum exactly follows $\mathcal{S}^\star_2$ in (14), and an inexact case. We illustrate them in Figure 3.
557
+
558
We focus on two-player quadratic games, where player 1 controls $x \in \mathbb{R}^{d_1}$ and player 2 controls $y \in \mathbb{R}^{d_2}$, with loss functions as in (16). In our setting, the corresponding vector field in (17) satisfies $M_{12} = -M_{21}^\top$, but $S_1$ and $S_2$ can be nonzero symmetric matrices. Further, the Jacobian $\nabla v = A$ has the cross-shaped eigenvalue structure in (14), with $c = \frac{L-\mu}{2}$ (c.f., Proposition 1, Case 2). For the problem constants, we use µ = 1 and L = 200. The optimum $[x^\star\ y^\star]^\top = w^\star \in \mathbb{R}^{200}$ is generated using the standard normal distribution. For simplicity, we assume $b = [b_1\ b_2]^\top = [0\ 0]^\top$. For the algorithms, we compare GD in (22), GDM in (5), EG in (23), and MEG in (4). All algorithms are initialized with 0. We plot the experimental results in Figure 3.
570
+
571
For MEG (optimal), we set the hyperparameters using Theorem 5. For GD (theory) and EG (theory), we set the hyperparameters using Theorem 7, both for the exact and the inexact settings. For GDM (grid search), we perform a grid search over $h^{\mathrm{GDM}}$ and $m^{\mathrm{GDM}}$ and choose the best-performing values, as Theorem 8 does not give a specific form for the hyperparameter setup. Specifically, we consider $0.005 \leqslant h^{\mathrm{GDM}} \leqslant 0.015$ with increment $10^{-3}$, and $0.01 \leqslant m^{\mathrm{GDM}} \leqslant 0.99$ with increment $10^{-2}$. In addition, as Theorem 7 might be conservative, we conduct grid searches for GD and EG as well. For GD (grid search), we use the same setup as for $h^{\mathrm{GDM}}$. For EG (grid search), we use $0.001 \leqslant h^{\mathrm{EG}} \leqslant 0.05$ with increment $10^{-4}$.
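A simplified sketch of this setup is given below. It is not the exact experiment: rather than sampling the quadratic game of (16)-(17), it builds a normal matrix with a cross-shaped spectrum directly, placing the complex branch at real part (µ + L)/2 (the fixed real component 1/(2γ) of the robust region under Theorem 5's γ = 1/(µ + L)), and then runs MEG with Theorem 5's hyperparameters.

```python
import numpy as np
from scipy.linalg import block_diag

# Simplified sketch (hypothetical instance): normal matrix A with a cross-shaped spectrum as in (14).
rng = np.random.default_rng(0)
mu, L = 1.0, 200.0
c = (L - mu) / 2

real_eigs = np.geomspace(mu, L, 50)                        # real branch of the cross
imag_parts = rng.uniform(0.0, c, size=50)                  # imaginary half-extent of the complex branch
blocks = [np.array([[r]]) for r in real_eigs]
blocks += [np.array([[(mu + L) / 2, u], [-u, (mu + L) / 2]]) for u in imag_parts]
A = block_diag(*blocks)
v = lambda w: A @ w                                        # b = 0, so w_star = 0

# Theorem 5 hyperparameters for the cross-shaped spectrum.
P, Q = np.sqrt(4 * c**2 + (mu + L) ** 2), np.sqrt(4 * mu * L)
h, gamma, m = 16 * (mu + L) / (P + Q) ** 2, 1.0 / (mu + L), ((P - Q) / (P + Q)) ** 2

w_prev = rng.normal(size=A.shape[0])
w = w_prev - h / (1 + m) * v(w_prev - gamma * v(w_prev))
for _ in range(200):
    w, w_prev = w - h * v(w - gamma * v(w)) + m * (w - w_prev), w
print(np.linalg.norm(w))                                   # distance to w_star = 0; should be very small
```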
574
+
575
There are several remarks to make. First, although the third plot in Figure 3 does not exactly follow the spectrum model in (14), MEG still works well with the optimal hyperparameters from Theorem 5. As expected, MEG (optimal) required more iterations in the inexact case compared to the exact case. Second, compared to the other algorithms, MEG (optimal) indeed exhibits a significantly faster rate of convergence, even against methods tuned by grid search, supporting our theoretical findings in Section 4. Third, while EG (theory) is slower than GD (theory), which confirms Corollary 1, EG (grid search) can be tuned to converge faster. Lastly, even though the best performance of GDM (grid search) is obtained through grid search, one can see that GD (grid search) obtains a slightly faster convergence rate than GDM (grid search), confirming Corollary 2.
576
+
577
+ ## 8 Conclusion
578
+
579
In the study of differentiable games, finding stationary points efficiently is crucial. This work analyzes the momentum extragradient method, revealing three distinct convergence modes dependent on the Jacobian eigenvalue distribution. Through a polynomial-based analysis, we derive optimal hyperparameters for each mode, achieving accelerated asymptotic convergence rates. We compared the obtained rates with those of other first-order methods and showed that the latter do not achieve accelerated convergence rates. Notably, our initial analysis for affine vector fields extends to guarantee local convergence rates for twice-differentiable vector fields. Numerical experiments on quadratic games validate our theoretical findings.
582
+
583
+ ## Acknowledgments
584
+
585
+ The authors would like to thank Fangshuo Liao, Baptiste Goujaud, Damien Scieur, Miri Son, and Giorgio Young for their useful discussions and feedback. This work is supported by NSF FET: Small No. 1907936, NSF MLWiNS CNS No. 2003137 (in collaboration with Intel), NSF CMMI No. 2037545, NSF CAREER award No. 2145629, NSF CIF No. 2008555, Rice InterDisciplinary Excellence Award (IDEA), and the Canada CIFAR AI Chairs program.
586
+
587
+ ## References
588
+
589
+ Yossi Arjevani and Ohad Shamir. On the iteration complexity of oblivious first-order optimization algorithms.
590
+
591
+ In *International Conference on Machine Learning*. PMLR, 2016.
592
+
593
+ Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games. In *International Conference on* Artificial Intelligence and Statistics. PMLR, 2020a.
594
+
595
+ Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. Accelerating smooth games by manipulating spectral shapes. In International Conference on Artificial Intelligence and Statistics. PMLR, 2020b.
596
+
597
+ David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In *International Conference on Machine Learning*. PMLR,
598
+ 2018.
599
+
600
+ Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, and Simon Lacoste-Julien. A closer look at the optimization landscapes of generative adversarial networks. In International Conference on Learning Representations, 2020.
601
+
602
+ Raphaël Berthier, Francis Bach, and Pierre Gaillard. Accelerated gossip in networks of given dimension using Jacobi polynomial iterations. *SIAM Journal on Mathematics of Data Science*, 2020.
603
+
604
+ Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, Valentin Samokhin, Sebastian U Stich, and Alexander Gasnikov. Decentralized local stochastic extra-gradient for variational inequalities. *arXiv* preprint arXiv:2106.08315, 2021.
605
+
606
+ Theodore S Chihara. *An introduction to orthogonal polynomials*. Courier Corporation, 2011.
607
+
608
+ Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. *Advances in neural information processing systems*, 31, 2018.
609
+
610
+ Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a nash equilibrium. *SIAM Journal on Computing*, 2009.
611
+
612
+ Carles Domingo-Enrich, Fabian Pedregosa, and Damien Scieur. Average-case acceleration for bilinear games and normal matrices. In *International Conference on Learning Representations*, 2021.
613
+
614
+ Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In *International Conference on Learning Representations*, 2021.
615
+
616
+ Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. *arXiv preprint arXiv:1802.10551*, 2018.
617
+
618
+ Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
619
+
620
+ Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in Neural Information Processing* Systems, volume 27, 2014.
621
+
622
+ Eduard Gorbunov, Nicolas Loizou, and Gauthier Gidel. Extragradient method: O (1/k) last-iterate convergence for monotone variational inequalities and connections with cocoercivity. In International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
623
+
624
+ Eduard Gorbunov, Adrien Taylor, Samuel Horváth, and Gauthier Gidel. Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity. In International Conference on Machine Learning. PMLR, 2023.
625
+
626
+ Baptiste Goujaud and Fabian Pedregosa. Cyclical step-sizes, 2022. URL http://fa.bianp.net/blog/2022/
627
+ cyclical/.
628
+
629
+ Baptiste Goujaud, Damien Scieur, Aymeric Dieuleveut, Adrien B Taylor, and Fabian Pedregosa. Superacceleration with cyclical step-sizes. In *International Conference on Artificial Intelligence and Statistics*.
630
+
631
+ PMLR, 2022.
632
+
633
Magnus R Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems. *Journal of Research of the National Bureau of Standards*, 49(6):409, 1952.
634
+
635
Andrew J Hetzel, Jay S Liew, and Kent E Morrison. The probability that a matrix of integers is diagonalizable. *The American Mathematical Monthly*, 114(6):491–499, 2007.

Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. On the convergence of single-call stochastic extra-gradient methods. *Advances in Neural Information Processing Systems*, 32, 2019.
636
+
637
+ Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling. *Advances in Neural* Information Processing Systems, 33, 2020.
638
+
639
+ Junhyung Lyle Kim, Gauthier Gidel, Anastasios Kyrillidis, and Fabian Pedregosa. Momentum extragradient is optimal for games with cross-shaped spectrum. In *OPT 2022: Optimization for Machine Learning*
640
+ (NeurIPS 2022 Workshop), 2022.
641
+
642
+ Galina M Korpelevich. The extragradient method for finding saddle points and other problems. *Matecon*,
643
+ 1976.
644
+
645
+ Peter Lancaster and Hanafi K Farahat. Norms on direct sums and tensor products. *mathematics of computation*, 1972.
646
+
647
+ Alistair Letcher, David Balduzzi, Sébastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. *The Journal of Machine Learning Research*, 2019.
648
+
649
+ Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, and Michael I Jordan.
650
+
651
+ On the convergence of stochastic extragradient for bilinear games with restarted iteration averaging. arXiv preprint arXiv:2107.00464, 2021.
652
+
653
+ Tengyuan Liang and James Stokes. Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. In *The 22nd International Conference on Artificial Intelligence and* Statistics, pp. 907–915. PMLR, 2019.
654
+
655
+ Mingrui Liu, Wei Zhang, Youssef Mroueh, Xiaodong Cui, Jarret Ross, Tianbao Yang, and Payel Das. A
656
+ decentralized parallel algorithm for training generative adversarial nets. *Advances in Neural Information* Processing Systems, 2020.
657
+
658
+ Jonathan P. Lorraine, David Acuna, Paul Vicol, and David Duvenaud. Complex momentum for optimization in games. In *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*, volume 151 of *Proceedings of Machine Learning Research*, 2022.
659
+
660
+ Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In *International conference on machine learning*. PMLR, 2018.
661
+
662
+ Aryan Mokhtari, Asuman Ozdaglar, and Sarath Pattathil. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
663
+
664
+ Renato DC Monteiro and Benar Fux Svaiter. On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. *SIAM Journal on Optimization*, 20(6):2755–2787, 2010.
665
+
666
+ Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Andrea Michi, et al. Nash learning from human feedback. *arXiv preprint arXiv:2312.00886*, 2023.
667
+
668
+ Arkadi Nemirovski. Prox-method with rate of convergence o (1/t) for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 2004.
669
+
670
+ Samet Oymak. Provable super-convergence with a large cyclical learning rate. *IEEE Signal Processing* Letters, 28, 2021.
671
+
672
+ Balamurugan Palaniappan and Francis Bach. Stochastic variance reduction methods for saddle-point problems. *Advances in Neural Information Processing Systems*, 2016.
673
+
674
+ Vardan Papyan. Traces of class/cross-class structure pervade deep learning spectra. The Journal of Machine Learning Research, 21, 2020.
675
+
676
+ Fabian Pedregosa. Momentum: when Chebyshev meets Chebyshev, 2020. URL http://fa.bianp.net/
677
+ blog/2020/momentum/.
678
+
679
+ Fabian Pedregosa and Damien Scieur. Acceleration through spectral density estimation. In Proceedings of the 37th International Conference on Machine Learning. PMLR, November 2020.
680
+
681
+ David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. arXiv preprint arXiv:1610.01945, 2016.
682
+
683
Boris T Polyak. *Introduction to Optimization*. Optimization Software, Inc., Publications Division, New York, 1987.
685
+
686
+ Ernest K Ryu, Kun Yuan, and Wotao Yin. ODE analysis of stochastic gradient methods with optimism and anchoring for minimax problems. *arXiv preprint arXiv:1905.10899*, 2019.
687
+
688
+ Yoav Shoham and Kevin Leyton-Brown. *Multiagent systems: Algorithmic, game-theoretic, and logical foundations*. Cambridge University Press, 2008.
689
+
690
+ Mikhail V Solodov and Benar F Svaiter. A hybrid approximate extragradient–proximal point algorithm using the enlargement of a maximal monotone operator. *Set-Valued Analysis*, 7(4):323–345, 1999.
691
+
692
+ Paul Tseng. On linear convergence of iterative methods for the variational inequality problem. *Journal of* Computational and Applied Mathematics, 1995.
693
+
694
+ Jian Zhang and Ioannis Mitliagkas. Yellowfin and the art of momentum tuning. Proceedings of Machine Learning and Systems, 1, 2019.
695
+
696
+ ## A Missing Proofs In Section 3 A.1 Proof Of Lemma 1
697
+
698
+ Proof of Lemma 1 can be found for example in Azizian et al. (2020b, Section B).
699
+
700
+ To obtain the residual polynomials of MEG, w1 has to be set slightly differently from the rest of the iterates, as we write in the pseudocode below:
701
**Algorithm 1**: Momentum extragradient (MEG) method
**Input**: initialization $w_0$; hyperparameters $h,\gamma,m$.
**Set** $w_1 = w_0 - \frac{h}{1+m}\, v(w_0 - \gamma v(w_0))$.
**For** $t = 1, 2, \ldots$: $\quad w_{t+1} = w_t - h\, v(w_t - \gamma v(w_t)) + m(w_t - w_{t-1})$.
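For reference, here is a minimal NumPy sketch of Algorithm 1 (not the authors' implementation); `v` is any callable implementing the vector field, and the hyperparameters are passed in explicitly.

```python
# Direct transcription of Algorithm 1 (MEG) for a generic vector field v.
import numpy as np

def meg(v, w0, h, gamma, m, iters=100):
    w = w0 - h / (1 + m) * v(w0 - gamma * v(w0))  # special first step for w_1
    w_prev = w0
    for _ in range(1, iters):
        w, w_prev = w - h * v(w - gamma * v(w)) + m * (w - w_prev), w
    return w
```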
706
+
707
+ ## Derivation Of The First Part
708
+
709
+ Proof. We want to find the residual polynomial P˜t(A) of the extragradient with momentum (MEG) in (4).
710
+
711
+ That is, we want to find
712
+
713
$$w_{t}-w^{\star}=\tilde{P}_{t}(A)(w_{0}-w^{\star}),\tag{32}$$
716
where $\{w_t\}_{t\geqslant 0}$ are the iterates generated by MEG, which is possible by Lemma 1, as MEG is a first-order method (Arjevani & Shamir, 2016; Azizian et al., 2020b). We now prove this by induction. To do so, we will use the following properties. First, note that as we are looking for a stationary point, it holds that $v(w^{\star}) = 0$. Further, as $v$ is linear by the assumption of Lemma 1, it holds that $v(w) = A(w - w^{\star})$.
719
+
720
Base case. For $t = 0$, $\tilde{P}_0(A)$ is a degree-zero polynomial, and hence equals $I_d$, which denotes the identity matrix. Thus, $w_0 - w^{\star} = I_d(w_0 - w^{\star})$ holds true.
723
+
724
For completeness, we also prove the case $t = 1$. In that case, observe that MEG proceeds as $w_1 = w_0 - \frac{h}{1+m}\, v(w_0 - \gamma v(w_0))$. Subtracting $w^{\star}$ on both sides, we have:
727
+
728
$$\begin{aligned}
w_{1}-w^{\star} &= w_{0}-w^{\star}-\tfrac{h}{1+m}\, v(w_{0}-\gamma v(w_{0}))\\
&= w_{0}-w^{\star}-\tfrac{h}{1+m}\, v(w_{0}-\gamma A(w_{0}-w^{\star}))\\
&= w_{0}-w^{\star}-\tfrac{h}{1+m}\, A(w_{0}-\gamma A(w_{0}-w^{\star})-w^{\star})\\
&= w_{0}-w^{\star}-\tfrac{h}{1+m}\, A(w_{0}-w^{\star})+\tfrac{h\gamma}{1+m}\, A^{2}(w_{0}-w^{\star})\\
&= \left(I_{d}-\tfrac{h}{1+m}A+\tfrac{h\gamma}{1+m}A^{2}\right)(w_{0}-w^{\star})\\
&= \left(I_{d}-\tfrac{h}{1+m}A(I_{d}-\gamma A)\right)(w_{0}-w^{\star}) = \tilde{P}_{1}(A)(w_{0}-w^{\star}).
\end{aligned}$$
729
+ Induction step. As the induction hypothesis, assume P˜t satisfies (32). We want to prove this holds for t + 1. We have:
730
+
731
+ $$w_{t+1}=w_{t}-hv(w_{t}-\gamma v(w_{t}))+m(w_{t}-w_{t-1})$$ $$=w_{t}-hv(w_{t}-\gamma A(w_{t}-w^{\star}))+m(w_{t}-w_{t-1})$$ $$=w_{t}-hA(w_{t}-\gamma A(w_{t}-w^{\star})-w^{\star})+m(w_{t}-w_{t-1})$$ $$=w_{t}-hA(w_{t}-w^{\star})+h\gamma A^{2}(w_{t}-w^{\star})+m(w_{t}-w_{t-1})$$ $$=w_{t}-hA(I_{d}-\gamma A)(w_{t}-w^{\star})+m(w_{t}-w_{t-1}).$$
732
+
733
Subtracting $w^{\star}$ on both sides, we have:
735
+
736
$$\begin{aligned}
w_{t+1}-w^{\star} &= w_{t}-w^{\star}-hA(I_{d}-\gamma A)(w_{t}-w^{\star})+m(w_{t}-w_{t-1})\\
&= \left(I_{d}-hA(I_{d}-\gamma A)\right)(w_{t}-w^{\star})+m\big(w_{t}-w^{\star}-(w_{t-1}-w^{\star})\big)\\
&\overset{(32)}{=} \left(I_{d}-hA(I_{d}-\gamma A)\right)\tilde{P}_{t}(A)(w_{0}-w^{\star})+m\big(\tilde{P}_{t}(A)(w_{0}-w^{\star})-\tilde{P}_{t-1}(A)(w_{0}-w^{\star})\big)\\
&= \left((1+m)I_{d}-hA(I_{d}-\gamma A)\right)\tilde{P}_{t}(A)(w_{0}-w^{\star})-m\tilde{P}_{t-1}(A)(w_{0}-w^{\star})\\
&= \tilde{P}_{t+1}(A)(w_{0}-w^{\star}),
\end{aligned}$$
737
+ where in the third equality, we used the induction hypothesis in (32).
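As a sanity check (a sketch, not part of the paper), the scalar version of this identity can be verified numerically: for a one-dimensional affine field, the recurrence $\tilde{P}_{t+1}(\lambda)=(1+m-h\lambda(1-\gamma\lambda))\tilde{P}_{t}(\lambda)-m\tilde{P}_{t-1}(\lambda)$ derived above must reproduce the MEG iterates exactly; the constants below are arbitrary.

```python
# Check that w_t - w_star = P~_t(lam) * (w_0 - w_star) on a scalar problem.
import numpy as np

lam, h, gamma, m = 3.0, 0.2, 0.1, 0.5
w_star, w0 = 1.0, 4.0
v = lambda w: lam * (w - w_star)

p_prev, p = 1.0, 1.0 - h * lam * (1 - gamma * lam) / (1 + m)   # P~_0, P~_1
w_prev, w = w0, w0 - h / (1 + m) * v(w0 - gamma * v(w0))       # w_0, w_1
for _ in range(20):
    p, p_prev = (1 + m - h * lam * (1 - gamma * lam)) * p - m * p_prev, p
    w, w_prev = w - h * v(w - gamma * v(w)) + m * (w - w_prev), w
    assert np.isclose(w - w_star, p * (w0 - w_star))
```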
738
+
739
+ ## Derivation Of The Second Part In (6)
740
+
741
+ Proof. We show Pt = P˜t for all t via induction.
742
+
743
+ Base case. For t = 0, by the definition of Chebyshev polynomials of the first and the second kinds, we have T0(λ) = U0(λ) = 1. Thus,
744
+
745
+ $$P_{0}(\lambda)=m^{0}\left(\frac{2m}{1+m}T_{0}(\sigma(\lambda))+\frac{1-m}{1+m}U_{0}(\sigma(\lambda))\right)$$ $$=\frac{2m}{1+m}+\frac{1-m}{1+m}=1=\tilde{P}_{0}(\lambda).$$
746
+
747
+ Again, for completeness, we prove when t = 1 as well. In that case, by the definition of Chebyshev polynomials of the first and the second kinds, we have T1(λ) = λ, and U1(λ) = 2λ. Therefore,
748
+
749
$$P_{1}(\lambda)=m^{1/2}\left(\frac{2m}{1+m}T_{1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{1}(\sigma(\lambda))\right)=m^{1/2}\left(\frac{2m}{1+m}\,\sigma(\lambda)+\frac{1-m}{1+m}\cdot2\,\sigma(\lambda)\right)=m^{1/2}\left(\frac{2\sigma(\lambda)}{1+m}\right)=1-\frac{h\lambda(1-\gamma\lambda)}{1+m}=\tilde{P}_{1}(\lambda).$$
750
+
751
+ Induction step. As the induction hypothesis, assume that Pt = P˜t for t. In this step, we show that the same holds for t + 1.
752
+
753
$$\begin{aligned}
P_{t+1}(\lambda) &= m^{(t+1)/2}\left(\frac{2m}{1+m}T_{t+1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t+1}(\sigma(\lambda))\right)\\
&= m^{(t+1)/2}\left(\frac{2m}{1+m}\big(2\sigma(\lambda)T_{t}(\sigma(\lambda))-T_{t-1}(\sigma(\lambda))\big)+\frac{1-m}{1+m}\big(2\sigma(\lambda)U_{t}(\sigma(\lambda))-U_{t-1}(\sigma(\lambda))\big)\right)\\
&= 2\sigma(\lambda)\cdot m^{1/2}\cdot\underbrace{m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right)}_{P_{t}(\lambda)}-m\cdot\underbrace{m^{(t-1)/2}\left(\frac{2m}{1+m}T_{t-1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t-1}(\sigma(\lambda))\right)}_{P_{t-1}(\lambda)}\\
&= 2\sigma(\lambda)\cdot\sqrt{m}\cdot\tilde{P}_{t}(\lambda)-m\cdot\tilde{P}_{t-1}(\lambda)\\
&= (1+m-h\lambda(1-\gamma\lambda))\tilde{P}_{t}(\lambda)-m\tilde{P}_{t-1}(\lambda),
\end{aligned}$$
where in the second-to-last equality we use the induction hypothesis. $\square$
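The closed form can also be checked numerically (a sketch assuming SciPy is available; the constants are arbitrary), by comparing it against the recurrence that defines $\tilde{P}_{t}$, with $\sigma(\lambda)=\frac{1+m-h\lambda(1-\gamma\lambda)}{2\sqrt{m}}$.

```python
# Compare P_t(lam) = m^{t/2} (2m/(1+m) T_t(sigma) + (1-m)/(1+m) U_t(sigma))
# with the recurrence-defined P~_t(lam).
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu  # Chebyshev T_n and U_n

h, gamma, m, lam = 0.2, 0.1, 0.5, 3.0
sigma = (1 + m - h * lam * (1 - gamma * lam)) / (2 * np.sqrt(m))

p_prev, p = 1.0, 1.0 - h * lam * (1 - gamma * lam) / (1 + m)  # P~_0, P~_1
for t in range(2, 15):
    p, p_prev = (1 + m - h * lam * (1 - gamma * lam)) * p - m * p_prev, p
    closed = m ** (t / 2) * (2 * m / (1 + m) * eval_chebyt(t, sigma)
                             + (1 - m) / (1 + m) * eval_chebyu(t, sigma))
    assert np.isclose(p, closed)
```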
756
+
757
+ ## A.3 Proof Of Lemma 2
758
+
759
+ Proof of Lemma 2 can be found in Goujaud & Pedregosa (2022).
760
+
761
+ Proof. We first recall that using (3), we can upper bound the worst-case convergence rate as:
762
+
763
$$\sup_{\lambda\in\mathcal{S}^{\star}}|P_{t}(\lambda)|=\sup_{\lambda\in\mathcal{S}^{\star}}\left|m^{t/2}\bigg(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\bigg)\right|\leqslant m^{t/2}\bigg(\frac{2m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|T_{t}(\sigma(\lambda))|+\frac{1-m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|U_{t}(\sigma(\lambda))|\bigg)\tag{33}$$
764
+ Now, denote σ¯ := supλ∈S⋆ |σ(λ; *h, γ, m*)|. For the first case, if σ¯ ⩽ 1, both Tt(x) and Ut(x) behave nicely, per Lemma 2. Thus, we have
765
+
766
+ $$(33)\stackrel{{(8)}}{{\leqslant}}m^{t/2}\bigg{(}\frac{2m}{1+m}+\frac{1-m}{1+m}(t+1)\bigg{)}\leqslant m^{t/2}(t+1)\implies\limsup_{t\to\infty}\left(m^{t/2}(t+1)\right)^{\frac{1}{2t}}=\sqrt[4]{m}.\tag{33}$$
767
+
768
+ For the second case, we use the following expressions of Chebyshev polynomials:
769
+
770
+ $$T_{n}(x)={\frac{\left(x-{\sqrt{x^{2}-1}}\right)^{n}+\left(x+{\sqrt{x^{2}-1}}\right)^{n}}{2}},\quad{\mathrm{and}}$$ $$U_{n}(x)={\frac{\left(x+{\sqrt{x^{2}-1}}\right)^{n+1}-\left(x-{\sqrt{x^{2}-1}}\right)^{n+1}}{2{\sqrt{x^{2}-1}}}}.$$
771
+
772
Therefore, in the second case where $\bar{\sigma}>1$, both $T_{n}(x)$ and $U_{n}(x)$ grow at rate $\left(x+\sqrt{x^{2}-1}\right)^{n}$.
774
+
775
+ Hence, we have:
776
+
777
$$(33)\leqslant O\left(m^{t/2}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{t}\right)\implies\limsup_{t\to\infty}\left(m^{t/2}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{t}\right)^{\frac{1}{2t}}=\sqrt[4]{m}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{1/2}.$$
779
+ Finally, in order for MEG to converge in the second case, we need:
780
+
781
+ $$\sqrt[4]{m}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{1/2}<1$$
782
+
783
+ which is equivalent to
784
+
785
+ $$\bar{\sigma}\leqslant\frac{\sqrt{m}(m+1)}{2m}=\frac{m+1}{2\sqrt{m}}.$$
786
+
787
+ ## A.5 Derivation Of Extreme Points Of Robust Region In (11)
788
+
789
+ We first write a general formula for inverting a quadratic function. For f(x) = ax2 + bx + c, its inverse is given by:
790
+
791
+ $$f(x)=a x^{2}+b x+c:=y$$ $$f^{-1}(y)={\frac{-b\pm{\sqrt{b^{2}-4a(c-y)}}}{2a}},$$
792
+
793
with some abuse of notation (i.e., $f^{-1}$ above is not a function).
795
+
796
+ Applying the above to the link function of MEG in (6), we get
797
+
798
+ $$\sigma^{-1}(y)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{1+m}{h\gamma}+\frac{2\sqrt{m}}{h\gamma}\cdot y}.$$
799
+
800
+ With this formula, we can plug in 1 and −1 to get:
801
+
802
+ $$\sigma^{-1}(-1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\quad\mathrm{and}\quad\sigma^{-1}(1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}.$$
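These two expressions can be sanity-checked numerically (a sketch with arbitrary constants): plugging $\sigma^{-1}(\pm 1)$ back into the link function $\sigma(\lambda)=\frac{1+m-h\lambda(1-\gamma\lambda)}{2\sqrt{m}}$ should return $\pm 1$, including when the discriminant is negative and the roots are complex.

```python
# Verify sigma(sigma^{-1}(+-1)) = +-1 for the MEG link function.
import numpy as np

h, gamma, m = 0.05, 0.01, 0.25
sigma = lambda lam: (1 + m - h * lam * (1 - gamma * lam)) / (2 * np.sqrt(m))

for target, factor in [(-1.0, 1 + np.sqrt(m)), (1.0, 1 - np.sqrt(m))]:
    disc = 1 / (4 * gamma**2) - factor**2 / (h * gamma)
    for root in (1 / (2 * gamma) - np.sqrt(disc + 0j),
                 1 / (2 * gamma) + np.sqrt(disc + 0j)):
        assert np.isclose(sigma(root), target)
```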
803
+
804
+ Proof. We analyze each case separately.
805
+
806
Case 1: There are two square roots: $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}$ and $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}$. The second one is real if:
$$\frac{1}{4\gamma^{2}}\geqslant\frac{(1+\sqrt{m})^{2}}{h\gamma}\implies\frac{h}{4\gamma}\geqslant(1+\sqrt{m})^{2},$$
which implies the first is real, as $(1+\sqrt{m})^{2}\geqslant(1-\sqrt{m})^{2}$.
813
+
814
**Case 3:** There are two square roots: $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}$ and $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}$. The first one is complex if:
$$\frac{1}{4\gamma^{2}}<\frac{(1-\sqrt{m})^{2}}{h\gamma}\implies\frac{h}{4\gamma}<(1-\sqrt{m})^{2},$$
which implies the second is complex, as $(1+\sqrt{m})^{2}\geqslant(1-\sqrt{m})^{2}$.
819
+
820
+ Case 2: This case follows automatically from the above two cases.
821
+
822
+ ## A.7 Proof Of Proposition 1
823
+
824
+ Proof. Define D3 = diag(*a, . . . , a*) of dimensions d × d. Let us prove that if there exists *U, V* orthonormal matrices and D1, D2 matrices with non-zeros coefficients only on the diagonal such that (with a slight abuse of notation)
825
+
826
+ $$S_{1}=U\mathrm{diag}(D_{3},D_{1})U^{\top},S_{2}=V D_{3}V^{\top},\quad\mathrm{and}\quad B=U D_{2}V^{\top},$$
827
+
828
then the spectrum of $A$ is cross-shaped. In that case, we have
829
+
830
+ $$\begin{array}{r l}{A={\left[\begin{array}{l l l}{U[D_{3};D_{1}]U^{\top}}&{U D_{2}V^{\top},}\\ {-V D_{2}^{\top}U^{\top}}&{V D_{3}V^{\top}}\end{array}\right]}}\\ {={\left[\begin{array}{l l l}{U}&{0}\\ {0}&{V}\end{array}\right]{\left[\begin{array}{l l l}{[D_{3};D_{1}]}&{D_{2}}\\ {-D_{2}^{\top}}&{D_{3}}\end{array}\right]}{\left[\begin{array}{l l l}{U}&{0}\\ {0}&{V}\end{array}\right]}^{\top}.}\end{array}$$
831
+
832
Now, by considering the basis $W=\big((U_{1},0),(0,V_{1}),\ldots,(U_{d_{v}},0),(0,V_{d_{v}}),(U_{d_{v}+1},0),\ldots,(U_{d},0)\big)$, we have that $A$ can be block diagonalized in that basis as
835
+
836
$$A=W\,\mathrm{diag}\left(\begin{bmatrix}a&[D_{2}]_{11}\\ -[D_{2}]_{11}&a\end{bmatrix},\ldots,\begin{bmatrix}a&[D_{2}]_{d_{v},d_{v}}\\ -[D_{2}]_{d_{v},d_{v}}&a\end{bmatrix},[D_{1}]_{1},\ldots,[D_{1}]_{d_{u}-d_{v}}\right)W^{\top}.\tag{34}$$
838
+ Now, notice that
839
+
840
$$\mathrm{Sp}\left(\begin{bmatrix}a&-b\\ b&a\end{bmatrix}\right)=\{a\pm bi\},\tag{35}$$
843
+
844
+ since the associated characteristic polynomial of the above matrix is:
845
+
846
$$(a-\lambda)^{2}+b^{2}=0\implies a-\lambda=\pm bi\implies\lambda=a\pm bi.$$
849
+
850
+ Hence, using (35) in the formulation of A in (34), we have that the spectrum of A is cross-shaped.
851
+
852
+ ## B Missing Proofs In Section 4 B.1 Proof Of Theorem 4
853
+
854
+ Proof. We write the conditions required for Theorem 5 below:
855
+
856
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu_{1},\tag{36}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=L_{1},\tag{37}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=\mu_{2},\quad\text{and}\tag{38}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=L_{2}.\tag{39}$$
857
+
858
By adding (37) and (38) (or equivalently by adding (36) and (39)), we get
859
+
860
$$\gamma=\frac{1}{\mu_{1}+L_{2}}=\frac{1}{\mu_{2}+L_{1}}.\tag{40}$$
864
+
865
+ From (36), we have:
866
+
867
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu_{1}$$
$$\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}=\left(\frac{1}{2\gamma}-\mu_{1}\right)^{2}$$
$$\frac{(1-\sqrt{m})^{2}}{h}=\mu_{1}(1-\gamma\mu_{1})$$
$$h=\frac{(1-\sqrt{m})^{2}}{\mu_{1}(1-\gamma\mu_{1})}=\frac{(1-\sqrt{m})^{2}(\mu_{1}+L_{2})}{\mu_{1}L_{2}}\tag{41}$$
869
+
870
+ Similarly, from (38), we have:
871
+
872
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=\mu_{2}$$
$$\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}=\left(\frac{\mu_{2}-L_{1}}{2}\right)^{2}$$
$$\left(\frac{\mu_{2}+L_{1}}{2}\right)^{2}-\left(\frac{\mu_{2}-L_{1}}{2}\right)^{2}=\mu_{2}L_{1}=\frac{(1+\sqrt{m})^{2}}{h\gamma}\tag{42}$$
874
+ $$(43)$$
875
+
876
Combining (41) and (42) and solving for $m$, we get $\mu_{2}L_{1}(1-\sqrt{m})^{2}=\mu_{1}L_{2}(1+\sqrt{m})^{2}$, which yields
$$m=\left(\frac{\sqrt{\mu_{2}L_{1}}-\sqrt{\mu_{1}L_{2}}}{\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}}}\right)^{2}\stackrel{(13)}{=}\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{2}.\tag{43}$$
883
Finally, plugging (43) back into (41), we get:
$$h=\frac{(1-\sqrt{m})^{2}(\mu_{1}+L_{2})}{\mu_{1}L_{2}}=\frac{4\mu_{1}L_{2}}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}}\cdot\frac{\mu_{1}+L_{2}}{\mu_{1}L_{2}}=\frac{4(\mu_{1}+L_{2})}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}}.$$
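As a numerical sanity check (a sketch; the endpoint values below are arbitrary, chosen so that $\mu_{1}+L_{2}=\mu_{2}+L_{1}$ and (40) is consistent), the hyperparameters in (40), (43), and (41) indeed place $\mu_{1},L_{1},\mu_{2},L_{2}$ on the boundary of the robust region, i.e. $|\sigma(\lambda)|=1$ there.

```python
# Verify that the derived (gamma, m, h) map the four endpoints to |sigma| = 1.
import numpy as np

mu1, L1, mu2, L2 = 1.0, 2.0, 4.0, 5.0           # assumed bimodal spectrum endpoints
assert np.isclose(mu1 + L2, mu2 + L1)            # required for (40) to be consistent
gamma = 1 / (mu1 + L2)                                                        # (40)
m = ((np.sqrt(mu2 * L1) - np.sqrt(mu1 * L2)) /
     (np.sqrt(mu2 * L1) + np.sqrt(mu1 * L2))) ** 2                            # (43)
h = (1 - np.sqrt(m)) ** 2 * (mu1 + L2) / (mu1 * L2)                           # (41)

sigma = lambda lam: (1 + m - h * lam * (1 - gamma * lam)) / (2 * np.sqrt(m))
for lam in (mu1, L1, mu2, L2):
    assert np.isclose(abs(sigma(lam)), 1.0)
```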
886
Proof. We write the conditions required for Theorem 5 below:
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu,\tag{44}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=L,\quad\text{and}\tag{45}$$
$$\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}=c.\tag{46}$$
First, by adding (44) and (45), we get:
$$\frac{1}{\gamma}=\mu+L\implies\gamma=\frac{1}{\mu+L}.\tag{47}$$
896
+
897
+ Plugging (47) back into (44), we have:
898
+
899
+ $$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu$$ $$\frac{\mu+L}{2}-\mu=\sqrt{\left(\frac{\mu+L}{2}\right)^{2}-\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}}$$ $$\left(\frac{L-\mu}{2}\right)^{2}=\left(\frac{\mu+L}{2}\right)^{2}-\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}$$ $$\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}=\left(\frac{\mu+L}{2}\right)^{2}-\left(\frac{L-\mu}{2}\right)^{2}=\mu L$$ $$h=\frac{(1-\sqrt{m})^{2}(\mu+L)}{\mu L}.\tag{48}$$
900
+
901
+ Plugging (47) and (48) into (46), we have:
902
+
903
$$\begin{aligned}
\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}&=c\\
\sqrt{\frac{(1+\sqrt{m})^{2}\,\mu L}{(1-\sqrt{m})^{2}}-\left(\frac{\mu+L}{2}\right)^{2}}&=c\\
\frac{(1+\sqrt{m})^{2}\,\mu L}{(1-\sqrt{m})^{2}}&=c^{2}+\left(\frac{\mu+L}{2}\right)^{2}=\frac{4c^{2}+(\mu+L)^{2}}{4}\\
\frac{(1+\sqrt{m})^{2}}{(1-\sqrt{m})^{2}}&=\frac{4c^{2}+(\mu+L)^{2}}{4\mu L}\\
(1+\sqrt{m})\sqrt{4\mu L}&=(1-\sqrt{m})\sqrt{4c^{2}+(\mu+L)^{2}}\\
\sqrt{m}\left(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}\right)&=\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}\\
\sqrt{m}&=\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}.
\end{aligned}\tag{49}$$
904
+ Finally, to simplify (48) further, from (49), we have:
905
+
906
+ $$1-\sqrt{m}=\frac{4\sqrt{\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}.$$
907
+
908
+ Hence, from (48),
909
+
910
$$h=\frac{(\mu+L)(1-\sqrt{m})^{2}}{\mu L}=\frac{16\mu L(\mu+L)}{\mu L\left(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}\right)^{2}}=\frac{16(\mu+L)}{\left(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}\right)^{2}}.\tag{50}$$
913
+
914
+ Proof. We write the conditions required for (6) below:
915
+
916
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=c+bi,\tag{51}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=c+ai,\tag{52}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=c-ai,\quad\text{and}\tag{53}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=c-bi.\tag{54}$$
First, we can see from all cases that the optimal γ is
$$\gamma=\frac{1}{2c}.\tag{55}$$
927
+ (51) and (54) equivalently imply
928
+
929
+ $$\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}=b$$ $$(1+\sqrt{m})^{2}=h\gamma b^{2}+\frac{h}{4\gamma}=\frac{h(c^{2}+b^{2})}{2c}$$ $$h=\frac{2c(1+\sqrt{m})^{2}}{c^{2}+b^{2}}.$$
930
+ $$(56)$$
931
+
932
+ Similarly, (52) and (53) imply
933
+
934
$$\sqrt{\frac{(1-\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}=a$$
$$\frac{(1-\sqrt{m})^{2}}{h\gamma}=a^{2}+\frac{1}{4\gamma^{2}}=a^{2}+c^{2}$$
$$\frac{(1-\sqrt{m})^{2}(c^{2}+b^{2})}{(1+\sqrt{m})^{2}}=a^{2}+c^{2}$$
$$(1-\sqrt{m})\sqrt{c^{2}+b^{2}}=(1+\sqrt{m})\sqrt{c^{2}+a^{2}}$$
$$\sqrt{m}=\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}=1-\frac{2\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}\tag{57}$$
935
Plugging (57) into (56), we get
936
+
937
+ $$h={\frac{2c(1+{\sqrt{m}})^{2}}{c^{2}+b^{2}}}={\frac{8c}{({\sqrt{c^{2}+b^{2}}}+{\sqrt{c^{2}+a^{2}}})^{2}}}.$$
938
+
939
+ ## C Missing Proofs In Sections 5 C.1 Proof Of Corollary 1
940
+
941
+ Proof. To compute the convergence rates of GD and EG from Theorem 7 applied to each spectrum model in (12), (14), and (15), we need to compute minλ∈∆⋆ R(1/λ) and minλ∈∆⋆ R(λ) for GD. Similarly for EG,
942
+ we need to compute additionally minλ∈∆⋆ |λ|, minλ∈∆⋆ |λ| 2, and supλ∈∆⋆ |λ| 2.
943
+
944
+ $\square$
945
+ Case 1: It's straightforward to compute
946
+
947
+ $$\operatorname*{min}_{\lambda\in{\mathcal{S}}_{1}^{*}}\Re(1/\lambda)=1/L_{2},\quad\mathrm{and}\quad\operatorname*{min}_{\lambda\in{\mathcal{S}}_{1}^{*}}\Re(\lambda)=\mu_{1}$$
948
+
949
+ Thus, GD for Case 1 has the rate
950
+
951
+ $$1-{\frac{\mu_{1}}{L_{2}}}=1-\tau.$$
952
+
953
+ For EG, it's also simple to obtain
954
+
955
$$\sup_{\lambda\in{\mathcal{S}}_{1}^{\star}}|\lambda|=L_{2},\quad\operatorname*{min}_{\lambda\in{\mathcal{S}}_{1}^{\star}}|\lambda|^{2}=\mu_{1}^{2},\quad\mathrm{and}\quad\sup_{\lambda\in{\mathcal{S}}_{1}^{\star}}|\lambda|^{2}=L_{2}^{2}.$$
956
+
957
+ Thus, EG for Case 1 has the rate
958
+
959
+ $$1-\frac{1}{4}\left(\frac{\mu_{1}}{L_{2}}+\frac{1}{16}\left(\frac{\mu_{1}}{L_{2}}\right)^{2}\right).$$
960
+
961
+ Case 2: For a complex number z = p + qi ∈ C, we can compute R(1/z) as:
962
+
963
+ $${\frac{1}{z}}={\frac{1}{p+q i}}={\frac{p-q i}{p^{2}+q^{2}}}={\frac{p}{p^{2}+q^{2}}}-{\frac{q}{p^{2}+q^{2}}}i\implies\Re\left({\frac{1}{z}}\right)={\frac{p}{p^{2}+q^{2}}}.$$
964
+
965
+ The four extreme points of the cross-shaped spectrum model in (14) are:
966
+
967
$$\mu=\mu+0i,\quad L=L+0i,\quad\text{and}\quad\frac{L-\mu}{2}\pm ci.$$
970
+
971
+ Hence, R(1/z) for each of the above points is:
972
+
973
+ $$\Re\left(\frac{1}{\mu}\right)=\frac{\mu}{\mu^{2}}=\frac{1}{\mu},$$ $$\Re\left(\frac{1}{L}\right)=\frac{L}{L^{2}}=\frac{1}{L},\quad\text{and}$$ $$\Re\left(\frac{1}{\frac{L-\mu}{2}\pm ci}\right)=\frac{\frac{L-\mu}{2}}{\left(\frac{L-\mu}{2}\right)^{2}+c^{2}}$$ $$=\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}.$$
974
+
975
Therefore, $\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re\left(\frac{1}{\lambda}\right)\leqslant\frac{1}{L}$. As $\mu<L$, we only need to compare the last two values. Observe that:
980
+
981
+ $$\begin{array}{c}{{c>\sqrt{\frac{L^{2}-\mu^{2}}{4}}}}\\ {{4c^{2}>(L-\mu)(L+\mu)}}\\ {{4c^{2}>2L(L-\mu)-(L-\mu)^{2}}}\\ {{\frac{1}{L}>\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}.}}\end{array}$$
982
+
983
+ Therefore,
984
+
985
+ $$\operatorname*{min}_{\lambda\in S_{2}^{*}}\Re\left({\frac{1}{\lambda}}\right)={\begin{cases}{\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}}&{{\mathrm{if}}\quad c>{\sqrt{\frac{L^{2}-\mu^{2}}{4}}}}\\ {{\frac{1}{L}}}&{{\mathrm{otherwise.}}}\end{cases}}$$
986
+
987
+ For minλ∈S⋆
988
+ 2 R(λ), it's straightforward from the definition that
989
+
990
+ $$\operatorname*{min}_{\lambda\in{\mathcal{S}}_{2}^{*}}\Re(\lambda)=\mu.$$
991
+
992
+ Thus, GD for Case 2 has the rate
993
+
994
+ $$\begin{cases}1-{\frac{2\mu(L-\mu)}{4c^{2}+(L-\mu)^{2}}}&\quad{\mathrm{if}}\quad c>{\sqrt{\frac{L^{2}-\mu^{2}}{4}}}\\ 1-{\frac{\mu}{L}}&\quad{\mathrm{otherwise.}}\end{cases}$$
995
+
996
Similarly for EG, we need to compute $\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re(\lambda)$, which was computed above; additionally, we need to compute $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|$, $\min_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$, and $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$. For $z=p+qi\in\mathbb{C}$, $|z|=\sqrt{p^{2}+q^{2}}$. Hence, we have
1001
+
1002
+ $$|\mu+0i|=\mu,\quad|L+0i|=L,\quad{\mathrm{and}}\quad\left|{\frac{L-\mu}{2}}\pm c i\right|={\sqrt{c^{2}+\left({\frac{L-\mu}{2}}\right)^{2}}}\,.$$
1003
+
1004
+ Observe that:
1005
+
1006
+ $$c>\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}}$$ $$c^{2}>L^{2}-\frac{L^{2}-2L\mu+\mu^{2}}{4}$$ $$c^{2}+\left(\frac{L-\mu}{2}\right)^{2}>L^{2}$$
1007
+
1008
Thus, for $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|$, we have
1010
+
1011
+ $$\operatorname*{sup}_{\lambda\in{\mathcal{S}}_{2}^{*}}|\lambda|={\begin{cases}{\sqrt{c^{2}+\left({\frac{L-\mu}{2}}\right)^{2}}}&{{\mathrm{if}}\quad c>{\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}}}}\\ L&{{\mathrm{otherwise,}}}\end{cases}}$$
1012
+
1013
from which $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$ can also be obtained. Lastly, $\min_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}=\mu^{2}$, as we know $\mu<L$, and $(L-\mu)/2$ is the center of $[\mu,L]$.
1016
+
1017
Combining all three, we get that the rate of EG for Case 2 is
$$\begin{cases}1-\frac{1}{4}\left(\frac{\mu}{\sqrt{c^{2}+\left(\frac{L-\mu}{2}\right)^{2}}}+\frac{\mu^{2}}{16\left(c^{2}+\left(\frac{L-\mu}{2}\right)^{2}\right)}\right)&\text{if }c\geqslant\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}},\\ 1-\frac{1}{4}\left(\frac{\mu}{L}+\frac{\mu^{2}}{16L^{2}}\right)&\text{otherwise.}\end{cases}$$
1020
Case 3: Since (15) has a fixed real component, $\min_{\lambda\in\mathcal{S}_{3}^{\star}}\Re(\lambda)=c$.
1022
+
1023
For $\min_{\lambda\in\mathcal{S}_{3}^{\star}}\Re(1/\lambda)$, we can compare
1025
+
1026
+ $$\Re\left(\frac{1}{c+ai}\right)=\frac{c}{c^{2}+a^{2}}>\frac{c}{c^{2}+b^{2}}=\Re\left(\frac{1}{c+bi}\right),$$ since $a<b$ from (15). Thus, GD for Case 3 has the rate $$1-\frac{c^{2}}{c^{2}+b^{2}}.$$
1027
+
1028
+ For EG, it's also simple to obtain
1029
+
1030
$$\sup_{\lambda\in{\mathcal{S}}_{3}^{\star}}|\lambda|=\sqrt{c^{2}+b^{2}},\quad\operatorname*{min}_{\lambda\in{\mathcal{S}}_{3}^{\star}}|\lambda|^{2}=c^{2}+a^{2},\quad\mathrm{and}\quad\sup_{\lambda\in{\mathcal{S}}_{3}^{\star}}|\lambda|^{2}=c^{2}+b^{2}.$$
1031
+
1032
+ Thus, EG has the rate
1033
+
1034
+ $$1-{\frac{1}{4}}\left({\frac{c}{\sqrt{c^{2}+b^{2}}}}+{\frac{1}{16}}{\frac{(c^{2}+a^{2})}{(c^{2}+b^{2})}}\right).$$
1035
+
1036
+ ## C.2 Proof Of Corollary 2
1037
+
1038
Proof. Per Theorem 8, the largest $\epsilon$ that permits acceleration for GDM is $\epsilon=\sqrt{\mu L}$. Therefore, in the special case of (14) we consider, i.e., when $c=\frac{L-\mu}{2}$, GDM *cannot* achieve acceleration if $\frac{L-\mu}{2}>\sqrt{\mu L}$. Hence, we have:
1042
+
1043
+ $$\begin{array}{c}{{\frac{L-\mu}{2}>\sqrt{\mu L}}}\\ {{L-\mu>2\sqrt{\mu L}}}\\ {{L^{2}+\mu^{2}>6\mu L>6\mu^{2}\ \ (\because L>\mu)}}\\ {{L>\sqrt{5}\mu.}}\end{array}$$
1044
+
1045
+ ## C.3 Proof Of Proposition 2
1046
+
1047
+ Proof. For an arbitrary complex number p + qi with p > 0, and using the link function of GDM from (7), we have
1048
+
1049
+ $$|\xi(p+qi)|=\sqrt{\left(\frac{1+m-hp}{2\sqrt{m}}\right)^{2}+\left(\frac{hq}{2\sqrt{m}}\right)^{2}}\leqslant1$$ $$\frac{(1+m-hp)^{2}+h^{2}q^{2}}{4m}\leqslant1$$ $$(1+m-hp)^{2}+h^{2}q^{2}\leqslant4m$$ $$(1-m)^{2}+hp(hp-2(1+m))+h^{2}q^{2}\leqslant0$$ $$\frac{(1-m)^{2}+h^{2}q^{2}}{hp}\leqslant2(1+m)-hp$$
1050
+ $$(58)$$
1051
+
1052
Notice that the LHS is positive. Therefore, if the RHS is negative, the above inequality cannot hold. In other words, if $\frac{2(1+m)}{h}<p$, GDM cannot stay in the robust region. This is very hard to satisfy, even with a small $p$.
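A small numerical illustration of this point (a sketch; per the display above, the GDM link function reads $\xi(p+qi)=\frac{1+m-h(p+qi)}{2\sqrt{m}}$): any eigenvalue whose real part exceeds $\frac{2(1+m)}{h}$ has $|\xi|>1$, regardless of its imaginary part.

```python
# Eigenvalues with real part p > 2(1+m)/h fall outside the GDM robust region.
import numpy as np

h, m = 0.01, 0.9
xi = lambda lam: (1 + m - h * lam) / (2 * np.sqrt(m))

p = 2 * (1 + m) / h * 1.01          # real part just above the threshold 2(1+m)/h
for q in (0.0, 1.0, 50.0):          # any imaginary part only increases |xi|
    assert abs(xi(p + 1j * q)) > 1
```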
1054
+
1055
+ ## D Missing Proofs In Section 6
1056
+
1057
+ Let us consider an affine vector field v(w) = Aw + b and its associated augmented MEG linear operator J:
1058
+
1059
$$\begin{bmatrix}w_{t+1}-w^{\star}\\ w_{t}-w^{\star}\end{bmatrix}=J\begin{bmatrix}w_{t}-w^{\star}\\ w_{t-1}-w^{\star}\end{bmatrix}\quad\text{with}\quad J=\begin{bmatrix}(1+\beta)I_{d}-hA(I_{d}-\gamma A)&-\beta I_{d}\\ I_{d}&0_{d}\end{bmatrix},\tag{58}$$
1061
+ where Id and 0d respectively stand for the identity and the null matrices. To show the local convergence of (restarted) MEG for non-affine vector fields in Theorem 9, we first establish the following lemma, which connects the augmented state and the non-augmented one.
1062
+
1063
Lemma 3. *Let* $P^{\mathrm{MEG}}_{t}$ *be the residual polynomial associated with* $t$ *updates of MEG (c.f., Theorem 1). Let* $J$ *be defined as in* (58). *If* $w_{1}=w_{0}-\frac{h}{1+m}v(w_{0}-\gamma v(w_{0}))$, *we then have*
1065
+
1066
+ $$J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}=\begin{bmatrix}P_{t}^{M E G}(A)(w_{1}-w^{\star})\\ P_{t}^{M E G}(A)(w_{0}-w^{\star})\end{bmatrix}.$$
1067
+
1068
Consequently, if we denote $z_{t+1}:=[w_{t+1},w_{t}]$ and $z_{*}:=[w^{\star},w^{\star}]$, *we have*
$$\|z_{t+1}-z_{*}\|\leqslant C(t+1)(1-\varphi)^{t}\|z_{0}-z_{*}\|.\tag{60}$$
1075
Proof. Let us express $J^{t}$ such that
$$J^{t}=\begin{bmatrix}P^{11}_{t}(A)&P^{12}_{t}(A)\\ P^{21}_{t}(A)&P^{22}_{t}(A)\end{bmatrix},\quad\text{and}\quad J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}=\begin{bmatrix}P^{11}_{t}(A)(w_{0}-w^{\star})+P^{12}_{t}(A)(w_{0}-w^{\star})\\ P^{21}_{t}(A)(w_{0}-w^{\star})+P^{22}_{t}(A)(w_{0}-w^{\star})\end{bmatrix}.\tag{61}$$
By writing $J^{t+1}=JJ^{t}$ and using the block-matrix form of $J$ in (58), we get that for any $t\geqslant0$,
$$P^{11}_{t+1}(A)=\big((1+\beta)I_{d}-hA(I_{d}-\gamma A)\big)P^{11}_{t}(A)-\beta P^{21}_{t}(A),\qquad P^{21}_{t+1}(A)=P^{11}_{t}(A),\tag{62}$$
$$P^{12}_{t+1}(A)=\big((1+\beta)I_{d}-hA(I_{d}-\gamma A)\big)P^{12}_{t}(A)-\beta P^{22}_{t}(A),\qquad P^{22}_{t+1}(A)=P^{12}_{t}(A).\tag{63}$$
1081
+ Hence, we have that,
1082
+
1083
$$P_{t+1}^{11}(A)\stackrel{(62)}{=}\big((1+\beta)I_{d}-hA(I_{d}-\gamma A)\big)P_{t}^{11}(A)-\beta P_{t-1}^{11}(A),\tag{64}$$
$$P_{t+1}^{12}(A)\stackrel{(63)}{=}\big((1+\beta)I_{d}-hA(I_{d}-\gamma A)\big)P_{t}^{12}(A)-\beta P_{t-1}^{12}(A).\tag{65}$$
1084
+
1085
We claim that
$$P_{t}^{11}(A)+P_{t}^{12}(A)=P_{t}^{\mathrm{MEG}}(A)\quad\text{for all}\quad t\geqslant0.\tag{66}$$
1092
+ We prove this via induction.
1093
+
1094
For the base case, using the fact that $w_{1}=w_{0}-\frac{h}{1+m}v(w_{0}-\gamma v(w_{0}))$, we have that
1095
+
1096
+ $(P_{1}^{11}(A)+P_{1}^{12}(A))(w_{0}-w^{\star})=w_{1}-w^{\star}=(I_{d}-\frac{1}{1+m}(I_{d}-\gamma A))(w_{0}-w^{\star})=P_{0}^{\rm MEG}(A)(w_{0}-w^{\star})\,,$ $(P_{0}^{11}(A)+P_{0}^{22}(A))(w_{0}-w^{\star})=(P_{1}^{21}(A)+P_{1}^{22}(A))(w_{0}-w^{\star})=I_{d}(w_{0}-w^{\star})=P_{0}^{\rm MEG}(A)(w_{0}-w^{\star})$.
1097
+ To show the induction step, by adding (64) and (65), we get
1098
+
1099
+ $P_{t+1}^{11}(A)+P_{t+1}^{12}(A)=((1+\beta)I_{d}-hA(I_{d}-\gamma A))(P_{t}^{11}(A)+P_{t}^{12}(A))-\beta(P_{t-1}^{11}(A)+P_{t-1}^{12}(A))$ $\stackrel{{\rm()}}{{=}}((1+\beta)I_{d}-hA(I_{d}-\gamma A))P_{t}^{\rm MECG}(A)-\beta P_{t-1}^{\rm MEG}(A),$
1100
where in the last step we used the induction hypothesis. Also notice that $P^{11}_{t+1}(A)+P^{12}_{t+1}(A)=P^{\mathrm{MEG}}_{t+1}(A)$ on the left-hand side.
1105
+
1106
+ Hence we have for any t ⩾ 0,
1107
+
1108
+ $$(P_{t}^{11}(A)+P_{t}^{12}(A))(w_{0}-w^{\star})=P_{t}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}).$$
1109
+
1110
Therefore, going back to (61), we have:
1111
+
1112
$$\begin{aligned}
\begin{bmatrix}w_{t+1}-w^{\star}\\ w_{t}-w^{\star}\end{bmatrix}=J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}
&=\begin{bmatrix}(P^{11}_{t}(A)+P^{12}_{t}(A))(w_{0}-w^{\star})\\ (P^{21}_{t}(A)+P^{22}_{t}(A))(w_{0}-w^{\star})\end{bmatrix}
\overset{(62),(63)}{=}\begin{bmatrix}(P^{11}_{t}(A)+P^{12}_{t}(A))(w_{0}-w^{\star})\\ (P^{11}_{t-1}(A)+P^{12}_{t-1}(A))(w_{0}-w^{\star})\end{bmatrix}\\
&\overset{(66)}{=}\begin{bmatrix}P^{\mathrm{MEG}}_{t}(A)(w_{0}-w^{\star})\\ P^{\mathrm{MEG}}_{t-1}(A)(w_{0}-w^{\star})\end{bmatrix}
\overset{\text{Thm.~1}}{=}\begin{bmatrix}P^{\mathrm{MEG}}_{t}(A)(w_{0}-w^{\star})\\ P^{\mathrm{MEG}}_{t}(A)(w_{-1}-w^{\star})\end{bmatrix}\\
&=\begin{bmatrix}P^{\mathrm{MEG}}_{t}(A)&0\\ 0&P^{\mathrm{MEG}}_{t}(A)\end{bmatrix}\begin{bmatrix}w_{0}-w^{\star}\\ w_{-1}-w^{\star}\end{bmatrix}
=\left(P^{\mathrm{MEG}}_{t}(A)\otimes I_{2}\right)\begin{bmatrix}w_{0}-w^{\star}\\ w_{-1}-w^{\star}\end{bmatrix},
\end{aligned}\tag{67}$$
1113
+ $$(67)$$
1114
+ where we use the convention that w0 = w−1. Finally, using the fact that ∥A ⊗ B∥ = ∥A∥∥B∥ for ℓ2-operator norm (Lancaster & Farahat, 1972), we have
1115
+
1116
+ $$\|z_{t+1}-z_{*}\|\leqslant\|P_{t}^{\mathrm{MEG}}(A)\|\|z_{0}-z_{*}\|\,\stackrel{(10)}{\leqslant}\,C(t+1)(1-\varphi)^{t}\|z_{0}-z_{*}\|.$$
1117
+
1118
+ Proof. We first recall the restarted MEG algorithm we consider in (31):
1119
+
1120
$$[w_{tk+i+1},w_{tk+i}]=G([w_{tk+i},w_{tk+i-1}])\quad\text{for}\quad1\leqslant i\leqslant k-1,\quad\text{and then}\quad w_{(t+1)k+1}=w_{(t+1)k}-\frac{h}{1+m}v\big(w_{(t+1)k}-\gamma v(w_{(t+1)k})\big).$$
1122
+ In other words, we repeat MEG for k steps, and then re-start the momentum at [w(t+1)k+1, w(t+1)k].
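A minimal sketch of this restarted scheme (assuming, as above, an arbitrary callable `v` for the vector field):

```python
# Restarted MEG: run MEG for k steps, then restart the momentum by recomputing
# the special first step from the current point.
import numpy as np

def restarted_meg(v, w0, h, gamma, m, k, restarts):
    w = w0
    for _ in range(restarts):
        w_prev = w
        w = w - h / (1 + m) * v(w - gamma * v(w))   # momentum restart (special step)
        for _ in range(k - 1):                       # k - 1 plain MEG updates
            w, w_prev = w - h * v(w - gamma * v(w)) + m * (w - w_prev), w
    return w
```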
1123
+
1124
We can analyze this method as follows, where we denote $z_{t}:=[w_{t},w_{t-1}]$ and $z_{*}=[w^{\star},w^{\star}]$:
1126
+
1127
$$\begin{aligned}
\|z_{(t+1)k}-z_{*}\| &= \|G^{(k)}(z_{tk})-z_{*}\|\\
&= \|\nabla G^{(k)}(\tilde{z}_{tk})(z_{tk}-z_{*})\|\\
&\leqslant \|\nabla G^{(k)}(z_{*})(z_{tk}-z_{*})\|+\|(\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*}))(z_{tk}-z_{*})\|\\
&\overset{(60)}{\leqslant} C(k+1)(1-\varphi)^{k}\|z_{tk}-z_{*}\|+\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\,\|z_{tk}-z_{*}\|,
\end{aligned}\tag{68}$$
where in the second line we use the Mean Value Theorem:
1128
+ $$\exists\xi_{tk}\in[z_{tk},z_{*}]\quad\text{such that}\quad G^{(k)}(z_{tk})=G^{(k)}(z_{*})+\nabla G^{(k)}(\xi_{tk})(z_{tk}-z_{*})$$ $$=z_{*}+\nabla G^{(k)}(\hat{z}_{tk})(z_{tk}-z_{*})\quad\text{(since$z_{*}$is the fixed point).}$$
1129
+ In the fourth line we used the fact that ∇G(k)(z∗)(ztk − z∗) exactly correspond to k updates of MEG when the vector field is affine, as well as Lemma 3 to account for the augmented state.
1130
+
1131
Now let us consider $\varphi>\varepsilon>0$ and $k$ large enough such that $C(k+1)(1-\varphi)^{k}\leqslant(1-\varphi+\tfrac{\varepsilon}{2})^{k}$. Since $\nabla G$ is assumed to be continuous, $\nabla G^{(k)}$ is continuous too. Therefore, there exists $\delta>0$ such that $\|z_{tk}-z_{*}\|\leqslant\delta$ implies $\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\leqslant\varepsilon^{\prime}$. In particular, choose $\varepsilon^{\prime}=(1-\varphi+\varepsilon)^{k}-(1-\varphi+\tfrac{\varepsilon}{2})^{k}\sim\frac{k\varepsilon}{2(1-\varphi)}$.
1143
+
1144
+ Then, we have
1145
+
1146
+ $$\|z_{(t+1)k}-z_{*}\|\leqslant C(k+1)(1-\varphi)^{k}\|z_{tk}-z_{*}\|+\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\|z_{tk}-z_{*}\|$$ $$\leqslant(1-\varphi+\frac{\pi}{2})^{k}\|z_{tk}-z_{*}\|+\varepsilon^{\prime}\|z_{tk}-z_{*}\|$$ $$\leqslant(1-\varphi+\varepsilon)^{k}\|z_{tk}-z_{*}\|<\|z_{tk}-z_{*}\|<\|z_{0}-z_{*}\|.$$
1147
+
1148
From the above, we can conclude that for all $\varepsilon>0$, there exist $k>0$ and $\delta>0$ such that, for any initialization satisfying $\|w_{0}-w^{\star}\|\leqslant\delta$, the restarted MEG described above satisfies:
1150
+
1151
+ $$\|w_{t}-w^{\star}\|=O((1-\varphi+\varepsilon)^{t})\|w_{0}-w^{\star}\|.$$