
When Is Momentum Extragradient Optimal? A Polynomial-Based Analysis

Junhyung Lyle Kim⋆ (jlylekim@rice.edu), Rice University, Department of Computer Science; Gauthier Gidel (gauthier.gidel@umontreal.ca), Université de Montréal, Department of Computer Science and Operations Research, Mila - Quebec AI Institute, Canada CIFAR AI Chair; Anastasios Kyrillidis (anastasios@rice.edu), Rice University, Department of Computer Science; Fabian Pedregosa (pedregosa@google.com), Google DeepMind.

Reviewed on OpenReview: https://openreview.net/forum?id=ZLVbQEu4Ab

Abstract

The extragradient method has gained popularity due to its robust convergence properties for differentiable games. Unlike single-objective optimization, game dynamics involve complex interactions reflected by the eigenvalues of the game vector field's Jacobian scattered across the complex plane. This complexity can cause the simple gradient method to diverge, even for bilinear games, while the extragradient method achieves convergence. Building on the recently proven accelerated convergence of the momentum extragradient method for bilinear games (Azizian et al., 2020b), we use a polynomial-based analysis to identify three distinct scenarios where this method exhibits further accelerated convergence. These scenarios encompass situations where the eigenvalues reside on the (positive) real line, lie on the real line alongside complex conjugates, or exist solely as complex conjugates. Furthermore, we derive the hyperparameters for each scenario that achieve the fastest convergence rate.

1 Introduction

While most machine learning problems are formulated as minimization problems, a growing number of works rely instead on game formulations that involve multiple players and objectives. Examples of such problems include generative adversarial networks (GANs) (Goodfellow et al., 2014), actor-critic algorithms (Pfau & Vinyals, 2016), sharpness aware minimization (Foret et al., 2021), and fine-tuning language models from human feedback (Munos et al., 2023). This increasing interest in game formulations motivates further theoretical exploration of differentiable games.

Optimizing differentiable games presents challenges absent in minimization problems due to the interplay of multiple players and objectives. Notably, the game Jacobian's eigenvalues are distributed on the complex plane, exhibiting richer dynamics compared to single-objective minimization, where the Hessian eigenvalues are restricted to the real line. Consequently, even for simple bilinear games, standard algorithms like the gradient method fail to converge (Mescheder et al., 2018; Balduzzi et al., 2018; Gidel et al., 2019).

Fortunately, the extragradient method (EG), originally introduced by Korpelevich (1976), offers a solution. Unlike the gradient method, EG demonstrably converges for bilinear games (Tseng, 1995). This has sparked extensive research analyzing EG from different perspectives, including variational inequality (Gidel et al., 2018; Gorbunov et al., 2022), stochastic (Li et al., 2021), and distributed (Liu et al., 2020; Beznosikov et al., 2021) settings.

⋆Authors after JLK are listed in alphabetical order. This paper extends Kim et al. (2022), presented at the NeurIPS 2022 Optimization for Machine Learning Workshop.

Most existing works, including those mentioned earlier, analyze EG and relevant algorithms by assuming some structure on the objectives, such as (strong) monotonicity or Lipschitzness (Solodov & Svaiter, 1999; Tseng, 1995; Daskalakis & Panageas, 2018; Ryu et al., 2019; Azizian et al., 2020a). Such assumptions, in the context of differentiable games, confine the distribution of the eigenvalues of the game Jacobian; for instance, strong monotonicity implies a lower bound on the real part of the eigenvalues, and the Lipschitz assumption implies an upper bound on the magnitude of the eigenvalues of the Jacobian.

Building upon the limitations of prior assumptions, Azizian et al. (2020b) showed that the key factor for effectively analyzing game dynamics lies in the spectrum of the Jacobian on the complex plane. Through a polynomial-based analysis, they demonstrated that first-order methods can sometimes achieve faster rates using momentum. This is achieved by replacing the smoothness and monotonicity assumptions with more precise assumptions on the distribution of the Jacobian eigenvalues, represented by simple shapes like ellipses or line segments. Notably, Azizian et al. (2020b) proved that for bilinear games, the extragradient method with momentum achieves an accelerated convergence rate.

In this work, we take a different approach by asking the reverse question: for what shapes of the Jacobian spectrum does the momentum extragradient (MEG) method achieve optimal performance? This reverse analysis allows us to study the behavior of MEG in specific settings depending on the hyperparameter setup, encompassing:

  • Minimization, where all Jacobian eigenvalues lie on the positive real line.

  • Regularized bilinear games, where all eigenvalues are complex conjugates.

  • Intermediate case, where eigenvalues are both on the real line and as complex conjugates (illustrated in Figure 1).

Our contributions can be summarized as follows:

  • Characterizing MEG convergence modes: We derive the residual polynomials of MEG for affine game vector fields and identify three distinct convergence modes based on hyperparameter settings. This analysis can then be applied to different eigenvalue structures of the Jacobian (see Theorem 3).

  • Optimal hyperparameters and convergence rates: For each eigenvalue structure, we derive the optimal hyperparameters of MEG and its (asymptotic) convergence rates. For minimization, MEG exhibits "super-acceleration," where a constant improvement upon classical lower bound rate is attained,1 similarly to the gradient method with momentum (GDM) with cyclical step sizes (Goujaud et al., 2022).

For the other two cases involving imaginary eigenvalues, MEG exhibits accelerated convergence rates with the derived optimal hyperparameters.

  • Comparison with other methods. We compare MEG's convergence rates with gradient (GD), GDM, and extragradient (EG) methods. For the considered game classes, none of these methods achieve (asymptotically) accelerated rates (Corollaries 1 and 2), unlike MEG. In Section 7, we validate our findings through numerical experiments, including scenarios with slight deviations from our initial assumptions.

2 Problem Setup And Related Work

Following Letcher et al. (2019); Balduzzi et al. (2018), we define the n-player differentiable game as a family of twice continuously differentiable losses ℓ_i : R^d → R, for i = 1, . . . , n. The player i controls the parameter w^(i) ∈ R^{d_i}. We denote the concatenated parameters by w = [w^(1), . . . , w^(n)] ∈ R^d, where d = Σ_{i=1}^n d_i.

1Note that achieving this improvement is possible by having additional information beyond just the largest (smoothness) and smallest (strong convexity) eigenvalues of the Jacobian.


Figure 1: Convergence rates of MEG in terms of the game Jacobian eigenvalues. The step sizes for MEG, h and γ, and the momentum parameter m are set up according to each case of Theorem 3, illustrating three distinct convergence modes of MEG. For each case, the red line indicates the robust region (c.f., Definition 1) where MEG achieves the optimal convergence rate.

For this problem, a Nash equilibrium satisfies: w^(i)⋆ ∈ arg min_{w^(i) ∈ R^{d_i}} ℓ_i(w^(i), w^(¬i)⋆) for all i ∈ {1, . . . , n}, where the notation ·^(¬i) denotes all indices except for i. We also define the vector field v of the game as the concatenation of the individual gradients: v(w) = [∇_{w^(1)} ℓ_1(w), . . . , ∇_{w^(n)} ℓ_n(w)]^⊤, and denote its associated Jacobian by ∇v.

Unfortunately, finding Nash equilibria for general games remains an intractable problem (Shoham & Leyton-Brown, 2008; Letcher et al., 2019).2 Therefore, instead of directly searching for Nash equilibria, we focus on finding stationary points of the game's vector field v. This approach is motivated by the fact that any Nash equilibrium necessarily corresponds to a stationary point of the gradient dynamics. In other words, we aim to solve the following problem:

$$\mathrm{Find}\quad w^{\star}\in\mathbb{R}^{d}\quad\mathrm{such~that}\quad v(w^{\star})=0.\tag{1}$$

Notation. ℜ(z) and ℑ(z) respectively denote the real and the imaginary part of a complex number z. The spectrum of a matrix M is denoted by Sp(M), and its spectral radius by ρ(M) := max{|λ| : λ ∈ Sp(M)}.

M ≻ 0 denotes that M is a positive-definite matrix. C+ denotes the complex plane with positive real part, and R+ denotes positive real numbers.

2.1 Related Work

The extragradient method, originally introduced in Korpelevich (1976), is a popular algorithm for solving (unconstrained) variational inequality problems in (1) (Gidel et al., 2018). There are several works that study the convergence rate of EG for (strongly) monotone problems (Tseng, 1995; Solodov & Svaiter, 1999; Nemirovski, 2004; Monteiro & Svaiter, 2010; Mokhtari et al., 2020; Gorbunov et al., 2022). Under similar settings, stochastic variants of EG are studied in Palaniappan & Bach (2016); Hsieh et al. (2019; 2020); Li et al. (2021). However, as mentioned earlier, assumptions like (strong) monotonicity or Lipschitzness may not accurately represent how Jacobian eigenvalues are distributed.

Instead, we make more fine-grained assumptions on these eigenvalues, to obtain the optimal hyperparameters and convergence rates for MEG via polynomial-based analysis. Such analysis dates back to the development of the conjugate gradient method (Hestenes & Stiefel, 1952), and is still actively used; for instance, to derive lower bounds (Arjevani & Shamir, 2016), to develop accelerated decentralized algorithms (Berthier et al., 2020), and to analyze average-case performance (Pedregosa & Scieur, 2020; Domingo-Enrich et al., 2021).

On that end, we use the following lemma (Chihara, 2011), which elucidates the connection between first-order methods and (residual) polynomials when the vector field v is affine. First-order methods are the ones in which the sequence of iterates w_t lies in the span of previous gradients: w_t ∈ w_0 + span{v(w_0), . . . , v(w_{t−1})}.

2Formulating Nash equilibrium search as a nonlinear complementarity problem makes it inherently difficult, classified as PPAD-hard (Daskalakis et al., 2009; Letcher et al., 2019).

Lemma 1 (Chihara (2011)). Let w_t be the iterate generated by a first-order method after t iterations, with v(w) = Aw + b. Then, there exists a real polynomial P_t, of degree at most t, satisfying:

$$w_{t}-w^{\star}=P_{t}(A)(w_{0}-w^{\star})\,,\tag{2}$$

where P_t(0) = 1, and v(w⋆) = Aw⋆ + b = 0.

By taking ℓ2-norms, (2) further implies the following worst-case convergence rate:

$$\|w_{t}-w^{\star}\|=\|P_{t}(A)(w_{0}-w^{\star})\|\leqslant\|P_{t}(Z\Lambda Z^{-1})\|\cdot\|w_{0}-w^{\star}\|\leqslant\sup_{\lambda\in\mathcal{S}^{\star}}|P_{t}(\lambda)|\cdot\|Z\|\|Z^{-1}\|\cdot\|w_{0}-w^{\star}\|,\tag{3}$$

where A = ZΛZ^{−1} is the diagonalization of A,3 and the constant ∥Z∥∥Z^{−1}∥ disappears if A is a normal matrix. Hence, the worst-case convergence rate of a first-order method can be analyzed by studying the associated residual polynomial P_t evaluated at the eigenvalues λ of the Jacobian ∇v = A, distributed over the set S⋆.

Unlocking Faster Rates Through Fine-Grained Spectral Shapes. While Azizian et al. (2020b) characterized lower bounds and optimality for certain first-order methods under simple spectral shapes, we posit that a more granular understanding of S ⋆could unlock even faster convergence rates. By meticulously analyzing the residual polynomials of MEG, we identify specific spectral shapes where MEG exhibits optimal performance. This approach resonates with recent advancements in optimization literature (Oymak, 2021; Goujaud et al., 2022), which demonstrate that knowledge beyond merely the largest and smallest eigenvalues (i.e., smoothness and strong convexity) can lead to accelerated convergence in convex smooth minimization.

3 Momentum Extragradient Via Chebyshev Polynomials

In this section, we delve into the intricate dynamics of the momentum extragradient method (MEG) by harnessing the power of residual polynomials and Chebyshev polynomials.

MEG iterates according to the following update rule:

$$\mathrm{(MEG)}\quad w_{t+1}=w_{t}-h\,v(w_{t}-\gamma v(w_{t}))+m(w_{t}-w_{t-1})\,,\tag{4}$$

where h is the step size, Ξ³ is the extrapolation step size, and m is the momentum parameter.
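As a concrete illustration of the recursion in (4), the following sketch runs MEG on a small affine vector field v(w) = Aw + b; the matrix, the offset, and the hyperparameter values below are placeholders chosen only so that the iteration converges, not values prescribed by our analysis.

```python
import numpy as np

def meg(v, w0, h, gamma, m, n_iters):
    """Momentum extragradient (MEG), eq. (4):
    w_{t+1} = w_t - h v(w_t - gamma v(w_t)) + m (w_t - w_{t-1})."""
    w_prev, w = w0.copy(), w0.copy()
    for _ in range(n_iters):
        w_extra = w - gamma * v(w)                       # extrapolation step
        w_next = w - h * v(w_extra) + m * (w - w_prev)   # main step with momentum
        w_prev, w = w, w_next
    return w

# Toy affine vector field with eigenvalues 1 ± 2i (a regularized bilinear game).
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
b = np.array([1.0, -1.0])
v = lambda w: A @ w + b
w_star = np.linalg.solve(A, -b)                          # stationary point: v(w_star) = 0

w_out = meg(v, w0=np.zeros(2), h=0.4, gamma=0.5, m=0.05, n_iters=100)
print(np.linalg.norm(w_out - w_star))                    # should be close to zero
```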

The extragradient method (EG), which serves as the foundation for MEG, was originally proposed by Korpelevich (1976) for saddle point problems. It has garnered renewed interest due to its remarkable ability to converge in certain differentiable games, such as bilinear games, where the standard gradient method falters (Gidel et al., 2019; Azizian et al., 2020b;a).

For completeness, we recall the gradient method with momentum (GDM):

$$\mathrm{(GDM)}\quad w_{t+1}=w_{t}-h\,v(w_{t})+m(w_{t}-w_{t-1})\,,\tag{5}$$

from which the gradient method (GD) can be obtained by setting m = 0.

As a first-order method (Arjevani & Shamir, 2016; Azizian et al., 2020b), MEG's behavior can be elegantly analyzed through the lens of residual polynomials, as established in Lemma 1. The following theorem unveils the specific residual polynomials associated with MEG.

Theorem 1 (Residual polynomials of MEG and their Chebyshev representation). Consider the momentum extragradient method (MEG) in (4) with a vector field of the form v(w) = Aw + b. The residual polynomials associated with MEG can be expressed as follows:

$$\tilde{P}_{0}(\lambda)=1,\quad\tilde{P}_{1}(\lambda)=1-\frac{h\lambda(1-\gamma\lambda)}{1+m},\quad\text{and}\quad\tilde{P}_{t+1}(\lambda)=(1+m-h\lambda(1-\gamma\lambda))\tilde{P}_{t}(\lambda)-m\tilde{P}_{t-1}(\lambda).$$

3Note that almost all matrices are diagonalizable over C, in the sense that the set of non-diagonalizable matrices has Lebesgue measure zero (Hetzel et al., 2007).

Remarkably, these polynomials can be elegantly rewritten in terms of Chebyshev polynomials of the first and second kind, denoted by Tt(Β·) and Ut(Β·), respectively:

$$P_{t}^{\rm MEG}(\lambda)=m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right),\quad\text{where}\quad\sigma(\lambda)\equiv\sigma(\lambda;h,\gamma,m)=\frac{1+m-h\lambda(1-\gamma\lambda)}{2\sqrt{m}}.\tag{6}$$

The term σ(λ), which encapsulates the interplay between step sizes, momentum, and eigenvalues, is referred to as the link function. The residual polynomials of MEG and GDM, intriguingly, share a similar structure but differ in their link functions. Below are the residual polynomials of GDM, expressed in Chebyshev polynomials (Pedregosa, 2020):

$$P_{t}^{\rm GDM}(\lambda)=m^{t/2}\left(\frac{2m}{1+m}T_{t}(\xi(\lambda))+\frac{1-m}{1+m}U_{t}(\xi(\lambda))\right),\quad\text{where}\quad\xi(\lambda)=\frac{1+m-h\lambda}{2\sqrt{m}}.\tag{7}$$

Notice that the residual polynomials of MEG in (6) and that of GDM in (7) are identical, except for the link functions Οƒ(Ξ») and ΞΎ(Ξ»), which enter as arguments in Tt(Β·) and Ut(Β·).
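The identity between the recurrence of Theorem 1 and the Chebyshev form (6) can be checked numerically. The sketch below evaluates both expressions at a complex eigenvalue using the standard three-term recurrences for T_t and U_t; the hyperparameter values are arbitrary placeholders.

```python
def cheb_T(t, z):
    """Chebyshev polynomial of the first kind, T_t(z), via its three-term recurrence."""
    if t == 0:
        return 1.0
    T_prev, T = 1.0, z
    for _ in range(t - 1):
        T_prev, T = T, 2 * z * T - T_prev
    return T

def cheb_U(t, z):
    """Chebyshev polynomial of the second kind, U_t(z)."""
    if t == 0:
        return 1.0
    U_prev, U = 1.0, 2 * z
    for _ in range(t - 1):
        U_prev, U = U, 2 * z * U - U_prev
    return U

def P_meg_recurrence(t, lam, h, gamma, m):
    """Residual polynomial of MEG from the recurrence in Theorem 1."""
    if t == 0:
        return 1.0
    P_prev, P = 1.0, 1.0 - h * lam * (1 - gamma * lam) / (1 + m)
    for _ in range(t - 1):
        P_prev, P = P, (1 + m - h * lam * (1 - gamma * lam)) * P - m * P_prev
    return P

def P_meg_chebyshev(t, lam, h, gamma, m):
    """The same polynomial via the Chebyshev representation (6)."""
    sigma = (1 + m - h * lam * (1 - gamma * lam)) / (2 * m ** 0.5)   # link function
    return m ** (t / 2) * (2 * m / (1 + m) * cheb_T(t, sigma)
                           + (1 - m) / (1 + m) * cheb_U(t, sigma))

h, gamma, m, lam = 0.4, 0.5, 0.05, 1.0 + 2.0j            # placeholder values
for t in [0, 1, 2, 5, 10]:
    print(t, abs(P_meg_recurrence(t, lam, h, gamma, m) - P_meg_chebyshev(t, lam, h, gamma, m)))
```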

The differences in these link functions are paramount because the behavior of Chebyshev polynomials hinges decisively on their argument's domain.

Lemma 2 (Goujaud & Pedregosa (2022)). Let z be a complex number, and let T_t(·) and U_t(·) be the Chebyshev polynomials of the first and second kind, respectively. The sequence {(2m/(1+m)) T_t(z) + ((1−m)/(1+m)) U_t(z)}_{t⩾0} grows exponentially in t for z ∉ [−1, 1], while for z ∈ [−1, 1], the following bounds hold:

$$|T_{t}(z)|\leqslant1\quad\text{and}\quad|U_{t}(z)|\leqslant t+1.\tag{8}$$

Therefore, to study the optimal convergence behavior of MEG, we are interested in the case where the set of step sizes and the momentum parameters lead to |σ(λ; h, γ, m)| ⩽ 1, so that we can use the bounds in (8).

We will refer to those sets of eigenvalues and hyperparameters as the robust region, as defined below.

Definition 1 (Robust region of MEG). Consider the MEG method in (4) expressed via Chebyshev polynomials, as in (6). We define the set of eigenvalues and hyperparameters such that the image of the link function Οƒ(Ξ»; h, Ξ³, m) lies in the interval [βˆ’1, 1] as the robust region, and denote it with Οƒ βˆ’1([βˆ’1, 1]).

Although polynomial-based analysis requires the assumption that the vector field is affine, it captures intuitive insights into how various algorithms behave in different settings, as we remark below.

Remark 1. From the definition of ξ(λ) in (7), one can infer why negative momentum can help the convergence of GDM (Gidel et al., 2019) when λ ∈ R+: it forces GDM to stay within the robust region, |ξ(λ)| ⩽ 1. One can also infer the divergence of GDM in the presence of complex eigenvalues, unless, for instance, complex momentum is used (Lorraine et al., 2022). Similarly, the residual polynomial of GD is P_t^GD(λ) = (1 − hλ)^t (Goujaud & Pedregosa, 2022, Example 4.2), and can easily diverge in the presence of complex eigenvalues, which can potentially be alleviated by using complex step sizes. On the contrary, thanks to the quadratic link function of MEG in (6), it can converge for much wider subsets of complex eigenvalues.

By analyzing the residual polynomials of MEG, we can also characterize the asymptotic convergence rate of MEG for any combination of hyperparameters, as summarized in the next theorem.

Theorem 2 (Asymptotic convergence rate of MEG). Suppose v(w) = Aw + b. The asymptotic convergence rate of MEG in (4) is:4

$$\limsup_{t\to\infty}\sqrt[2t]{\frac{\|w_{t}-w^{\star}\|}{\|w_{0}-w^{\star}\|}}=\begin{cases}\sqrt[4]{m},&\text{if}\quad\bar{\sigma}\leqslant1\quad\text{(robust region)};\\ \sqrt[4]{m}\,\big(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\big)^{1/2},&\text{if}\quad\bar{\sigma}\in\Big(1,\frac{1+m}{2\sqrt{m}}\Big);\\ \geqslant1\ \text{(no convergence)},&\text{otherwise},\end{cases}\tag{9}$$

where σ̄ = sup_{λ∈S⋆} |σ(λ; h, γ, m)|, and σ(λ; h, γ, m) ≡ σ(λ) is the link function of MEG defined in (6).

4The reason why we take the 2t-th root is to normalize by the number of vector field computations; we compare in Section 4 the asymptotic rate of MEG in (9) with other gradient methods that use a single vector field computation in the recurrences, such as GD and GDM.

The optimal hyperparameters for MEG that we obtain in Section 4 minimize the asymptotic convergence rate above. Note that the optimal hyperparameters vary based on the set S⋆, which we detail in Section 3.2.
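For a given spectrum and a given set of hyperparameters, the rate in (9) is straightforward to evaluate; the helper below is a minimal sketch of that computation (the function name and the example spectrum are ours, and the hyperparameters are deliberately non-optimal placeholders).

```python
import math

def meg_asymptotic_rate(eigenvalues, h, gamma, m):
    """Asymptotic rate of MEG per vector-field evaluation, following eq. (9)."""
    def sigma(lam):                                   # link function, eq. (6)
        return (1 + m - h * lam * (1 - gamma * lam)) / (2 * math.sqrt(m))
    sigma_bar = max(abs(sigma(lam)) for lam in eigenvalues)
    if sigma_bar <= 1:                                # robust region
        return m ** 0.25
    if sigma_bar < (1 + m) / (2 * math.sqrt(m)):      # convergent, but slower
        return m ** 0.25 * (sigma_bar + math.sqrt(sigma_bar ** 2 - 1)) ** 0.5
    return float("inf")                               # no convergence guarantee

# Placeholder spectrum of a regularized bilinear game and non-optimal hyperparameters.
spectrum = [1 + 2j, 1 - 2j, 1 + 0.5j, 1 - 0.5j]
print(meg_asymptotic_rate(spectrum, h=0.4, gamma=0.5, m=0.05))
```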

3.1 Three Modes Of The Momentum Extragradient

Within the robust region of MEG, we can compute its worst-case convergence rate based on (3) as follows:

$$\sup_{\lambda\in\mathcal{S}^{\star}}|P_{t}^{\text{MEG}}(\lambda)|\leqslant m^{t/2}\Big(\frac{2m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|T_{t}(\sigma(\lambda))|+\frac{1-m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|U_{t}(\sigma(\lambda))|\Big)\leqslant m^{t/2}\Big(\frac{2m}{1+m}+\frac{1-m}{1+m}(t+1)\Big)\leqslant m^{t/2}(t+1).\tag{10}$$

Since the Chebyshev polynomial expressions of MEG in (6) and that of GDM5 are identical except for the link functions, the convergence rate in (10) applies to both MEG and GDM, as long as the link functions |Οƒ(Ξ»)| and |ΞΎ(Ξ»)| are bounded by 1. As a result, we see that the asymptotic convergence rate in (9) only depends on the momentum parameter m, when the hyperparameters are restricted to the robust region. This fact was utilized in tuning GDM for strongly convex quadratic minimization (Zhang & Mitliagkas, 2019).

The robust region of MEG can be described with the four extreme points below (derivation in the appendix):

$$\sigma^{-1}(-1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\quad\text{and}\quad\sigma^{-1}(1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}.\tag{11}$$

The above four points and their intermediate values characterize the set of Jacobian eigenvalues λ that can be mapped to [−1, 1]. The distribution of these eigenvalues can vary in three different modes depending on the selected hyperparameters of MEG, as stated in the following theorem.

Theorem 3. Consider the momentum extragradient method in (4), expressed with the Chebyshev polynomials as in (6). Then, the robust region (c.f., Definition 1) has the following three modes:

  • Case 1: If h/(4γ) ⩾ (1 + √m)², then σ^{−1}(−1) and σ^{−1}(1) are all real numbers;
  • Case 2: If (1 − √m)² ⩽ h/(4γ) < (1 + √m)², then σ^{−1}(−1) are complex, and σ^{−1}(1) are real;
  • Case 3: If (1 − √m)² > h/(4γ), then σ^{−1}(−1) and σ^{−1}(1) are all complex numbers.

Remark 2. Theorem 3 offers guidance on how to set up the hyperparameters for MEG. This depends on the Jacobian spectra of the game problem being considered. For instance, if one observes only real eigenvalues (i.e., the problem is in fact minimization), the main step size h should be at least 4× larger than the extrapolation step size γ, based on the condition h/(4γ) ⩾ (1 + √m)².

We illustrate Theorem 3 in Figure 1. We first set the hyperparameters according to each condition in Theorem 3. We then discretize the interval [−1, 1], and plot σ^{−1}([−1, 1]) for each case, represented by red lines. We can see the quadratic link function induced by MEG allows interesting eigenvalue dynamics to be mapped onto the [−1, 1] segment, such as the cross-shape observed in Case 2. Moreover, although MEG exhibits the best rates within the robust region, it does not necessarily diverge outside of it, as in the second case of Theorem 2. We illustrate the convergence region of MEG, measured by $\sqrt[2t]{|P_t(\lambda)|} < 1$ from (6) for t = 2000, with different colors indicating varying convergence rates, which slow down as one moves away from the robust region. Interestingly, Figure 1 (right) shows that MEG can also converge in the absence of monotonicity (i.e., in the presence of Jacobian eigenvalues with negative real part) (Gorbunov et al., 2023).
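The following sketch classifies the mode of the robust region from the hyperparameters (Theorem 3) and returns the four extreme points in (11); the helper names and the example hyperparameter triples are ours.

```python
import cmath
import math

def robust_region_endpoints(h, gamma, m):
    """Extreme points sigma^{-1}(±1) of the robust region, eq. (11)."""
    center = 1.0 / (2.0 * gamma)
    def pair(shift):   # shift = (1 ± sqrt(m))**2
        root = cmath.sqrt(1.0 / (4.0 * gamma ** 2) - shift / (h * gamma))
        return center - root, center + root
    return {"sigma_inv(-1)": pair((1 + math.sqrt(m)) ** 2),
            "sigma_inv(+1)": pair((1 - math.sqrt(m)) ** 2)}

def meg_mode(h, gamma, m):
    """Which case of Theorem 3 the hyperparameters (h, gamma, m) fall into."""
    ratio = h / (4.0 * gamma)
    if ratio >= (1 + math.sqrt(m)) ** 2:
        return "Case 1 (all real: minimization-like)"
    if ratio >= (1 - math.sqrt(m)) ** 2:
        return "Case 2 (cross-shaped)"
    return "Case 3 (shifted imaginary)"

for h, gamma, m in [(0.9, 0.05, 0.1), (0.2, 0.1, 0.1), (0.05, 0.5, 0.1)]:  # placeholders
    print(meg_mode(h, gamma, m), robust_region_endpoints(h, gamma, m))
```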

3.2 Robust Region-Induced Problem Cases

We classify the problems into three distinct cases based on Theorem 3, each reflecting a different mode of the robust region (Figure 2):

5Asymptotically, GDM enjoys a √m convergence rate instead of the m^{1/4} of MEG, as it uses a single vector field computation per iteration instead of two. However, these are not directly comparable, as the values of m that correspond to the robust region are not the same.


Figure 2: Illustration of the three spectrum models where MEG achieves accelerated convergence rates.

Case 1: The problem reduces to minimization, where the Jacobian eigenvalues are distributed on the (positive) real line, but as a union of two intervals. We can model such a spectrum as:

$$\operatorname{Sp}(\nabla v)\subset\mathcal{S}_{1}^{\star}=[\mu_{1},L_{1}]\cup[\mu_{2},L_{2}]\subset\mathbb{R}_{+}.\tag{12}$$

The above generalizes the Hessian spectrum that arises in minimizing µ-strongly convex and L-smooth functions, i.e., λ ∈ [µ, L]. This spectrum can be obtained from (12) by setting µ1 = µ, L2 = L, and L1 = µ2. It was empirically observed that, during DNN training, sometimes a few eigenvalues of the Hessian have significantly larger magnitudes (Papyan, 2020). In such cases, (12) can be more precise than a single interval [µ, L]. In particular, Goujaud et al. (2022) utilized (12), and showed that GDM with alternating step sizes can achieve a (constant factor) improvement over the traditional lower bound for strongly convex and smooth quadratic objectives.

In Section 4, we show that MEG enjoys similar improvement. To show that, we define the following quantities following Goujaud et al. (2022), which will be used to obtain the convergence rate of MEG in (18) for this problem class.

$$\zeta:=\frac{L_{2}+\mu_{1}}{L_{2}-\mu_{1}}=\frac{1+\tau}{1-\tau},\quad\text{and}\quad R:=\frac{\mu_{2}-L_{1}}{L_{2}-\mu_{1}}\in[0,1).\tag{13}$$

Here, ζ is the ratio between the center of S⋆1 and its radius, and τ := µ1/L2 is the inverse condition number.

R is the relative gap of Β΅2 βˆ’ L1 and L2 βˆ’ Β΅1, which becomes 0 if Β΅2 = L1 (i.e., S ⋆ 1 becomes [Β΅1, L2]).

Case 2: In this case, the Jacobian eigenvalues are distributed both on the real line and as complex conjugates, exhibiting a cross-shaped spectrum. We model this spectrum as:

$$\mathrm{Sp}(\nabla v)\subset\mathcal{S}_{2}^{\star}=[\mu,L]\cup\{z\in\mathbb{C}:\Re(z)=c^{\prime}>0,\ \Im(z)\in[-c,c]\}.\tag{14}$$

The first set [µ, L] denotes a segment on the real line, reminiscent of the Hessian spectrum for minimizing µ-strongly convex and L-smooth functions. The second set has a fixed real component (c′ > 0), along with imaginary components symmetric across the real line (i.e., complex conjugates), as the Jacobian is real.

This is a strict generalization of the purely imaginary interval ±[ai, bi] commonly considered in the bilinear games literature (Liang & Stokes, 2019; Azizian et al., 2020b; Mokhtari et al., 2020). While many recent papers on bilinear games cite GANs (Goodfellow et al., 2014) as a motivation, the work in Berard et al. (2020, Figure 4) empirically shows that the spectrum of GANs is not contained in the imaginary axis; the cross-shaped spectrum model above might be closer to some of the observed GAN spectra.

Case 3: In this case, the Jacobian eigenvalues are distributed only as complex conjugates, with a fixed real component, exhibiting a shifted imaginary spectrum. We model this spectrum as:

$$\operatorname{Sp}(\nabla v)\subset\mathcal{S}_{3}^{\star}=[c+ai,c+bi]\cup[c-ai,c-bi]\subset\mathbb{C}_{+}.\tag{15}$$

Again, (15) generalizes bilinear games, where the spectrum reduces to ±[ai, bi] with c = 0.


Examples of Cases 2 and 3 in quadratic games. To understand these spectra better, we provide examples using quadratic games. Consider the following two player quadratic game, where x ∈ R d1 and y ∈ R d2 are the parameters controlled by each player, whose loss functions respectively are:

β„“1(x,y)=12x⊀S1x+x⊀M12y+x⊀b1andβ„“2(x,y)=12y⊀S2y+y⊀M21x+y⊀b2,(16)\ell_{1}(x,y)=\frac{1}{2}x^{\top}S_{1}x+x^{\top}M_{12}y+x^{\top}b_{1}\quad\text{and}\quad\ell_{2}(x,y)=\frac{1}{2}y^{\top}S_{2}y+y^{\top}M_{21}x+y^{\top}b_{2},\tag{16}

where S1, S2 ≻ 0. Then, the vector field can be written as:

v(x,y)=[S1x+M12y+b1M21x+S2y+b2]=Aw+b,whereA=[S1M12M21S2],w=[xy],andb=[b1b2].(17)v(x,y)=\begin{bmatrix}S_{1}x+M_{12}y+b_{1}\\ M_{21}x+S_{2}y+b_{2}\end{bmatrix}=Aw+b,\text{where}A=\begin{bmatrix}S_{1}&M_{12}\\ M_{21}&S_{2}\end{bmatrix},\text{}w=\begin{bmatrix}x\\ y\end{bmatrix},\text{and}b=\begin{bmatrix}b_{1}\\ b_{2}\end{bmatrix}.\tag{17}

If S1 = S2 = 0 and M12 = βˆ’M⊀ 21, the game Jacobian βˆ‡v = A has only purely imaginary eigenvalues (Azizian et al., 2020b, Lemma 7), recovering bilinear games.

As the second and the third spectrum models in (14) and (15) generalize bilinear games, we can consider more complex quadratic games, where S1 and S2 do not have to be 0. Specifically, when M12 = −M21⊤ and they share common bases with S1 and S2 as specified in the proposition below, then Sp(A) has the cross-shaped spectrum in (14) of Case 2 or the shifted imaginary spectrum in (15) of Case 3.

Proposition 1. Let A be a matrix of the form $\begin{bmatrix} S_1 & B \\ -B^{\top} & S_2 \end{bmatrix}$, where S1, S2 ≻ 0. Without loss of generality, assume that dim(S1) > dim(S2) = d. Then,

  • Case 2: Sp(A) has a cross-shape if there exist orthonormal matrices U, V and diagonal matrices D1, D2 such that S1 = Udiag(a, . . . , a, D1)U ⊀, S2 = V diag(a, . . . , a)V ⊀, and B = UD2V ⊀.

  • Case 3: Sp(A) has a shifted imaginary shape if there exist orthonormal matrices U, V and diagonal matrix D2 such that S1 = Udiag(a, . . . , a)U ⊀, S2 = V diag(a, . . . , a)V ⊀, and B = UD2V ⊀.

We can interpret Case 3 as a regularized bilinear game, where S1 and S2 are diagonal matrices with a constant eigenvalue. This implies that the players cannot control their parameters x and y arbitrarily, which can be seen in the loss functions in (16), where S1 and S2 appear in the terms x⊤S1x and y⊤S2y. Case 2 can be interpreted similarly, but player 1 (without loss of generality) has more flexibility in its parameter choice due to the additional diagonal matrix D1 in the eigenvalue decomposition of S1.
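To make Proposition 1 concrete, the sketch below builds block matrices of the stated form with random orthonormal bases and checks their spectra numerically; the dimensions, the shared diagonal value a, and the ranges of the diagonal entries are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, a = 6, 4, 2.0                                  # dim(S1) > dim(S2), shared value a

U, _ = np.linalg.qr(rng.standard_normal((d1, d1)))     # orthonormal bases U, V
V, _ = np.linalg.qr(rng.standard_normal((d2, d2)))
D2 = np.zeros((d1, d2))
np.fill_diagonal(D2, rng.uniform(0.5, 1.5, size=d2))   # coupling strengths
B = U @ D2 @ V.T

def game_jacobian(S1, S2, B):
    """A = [[S1, B], [-B^T, S2]], as in Proposition 1."""
    return np.block([[S1, B], [-B.T, S2]])

# Case 3 (shifted imaginary): S1 = a I, S2 = a I gives eigenvalues a ± i * diag(D2),
# plus the value a itself for the d1 - d2 leftover coordinates of the first block.
S1 = U @ (a * np.eye(d1)) @ U.T
S2 = V @ (a * np.eye(d2)) @ V.T
print(np.round(np.sort_complex(np.linalg.eigvals(game_jacobian(S1, S2, B))), 3))

# Case 2 (cross-shaped): an extra diagonal block D1 in S1 adds purely real eigenvalues.
d1_extra = rng.uniform(1.0, 3.0, size=d1 - d2)         # real eigenvalues around a = 2
S1_cross = U @ np.diag(np.concatenate([a * np.ones(d2), d1_extra])) @ U.T
print(np.round(np.sort_complex(np.linalg.eigvals(game_jacobian(S1_cross, S2, B))), 3))
```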

4 Optimal Parameters And Convergence Rates

In this section, we obtain the optimal hyperparameters of MEG (in the sense that they achieve the fastest asymptotic convergence rate), for each spectrum model discussed in the previous section.

Case 1: minimization. When the condition in Case 1 of Theorem 3 holds (i.e., h/(4γ) ⩾ (1 + √m)²), both σ^{−1}(−1) and σ^{−1}(1) (and their intermediate values) lie on the real line, forming a union of two intervals (see Figure 1, left). The robust region in this case, denoted σ^{−1}_{Case 1}([−1, 1]), is expressed as:

$$\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]\bigcup\left[\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]\subset\mathbb{R}_{+}.$$

For this case, the optimal hyperparameters of MEG in terms of the worst-case asymptotic convergence rate in (9) can be set as below.

Theorem 4 (Case 1). Consider solving (1) for games where the Jacobian has the spectrum in (12). For this problem, the optimal hyperparameters for the momentum extragradient method in (4) are:

$$h=\frac{4(\mu_{1}+L_{2})}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}},\quad\gamma=\frac{1}{\mu_{1}+L_{2}}=\frac{1}{\mu_{2}+L_{1}},\quad\text{and}\quad m=\left(\frac{\sqrt{\mu_{2}L_{1}}-\sqrt{\mu_{1}L_{2}}}{\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}}}\right)^{2}=\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{2}.$$

Recalling (9), we immediately get the asymptotic convergence rate from Theorem 4. Further, this formula can be simplified in the ill-conditioned regime, where the inverse condition number τ := µ1/L2 → 0:

$$\sqrt[4]{m}=\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{1/2}\underset{\tau\to0}{=}1-\frac{2\sqrt{\tau}}{\sqrt{1-R^{2}}}+o(\sqrt{\tau}).\tag{18}$$

From (18), we see that MEG achieves an accelerated convergence rate of 1 − O(√τ), which is known to be "optimal" for this function class, and can be asymptotically achieved by GDM6 (Polyak, 1987) (see also Theorem 8 with θ = 1). Surprisingly, this rate can be further improved by the factor √(1 − R²), exhibiting the "super-acceleration" phenomenon enjoyed by GDM with (optimal) cyclical step sizes (Goujaud et al., 2022).

Note that achieving this improvement is possible by having additional information beyond just the largest (L2) and smallest (Β΅1) eigenvalues of the Hessian.
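A small sketch of the Case 1 formulas of Theorem 4, as stated above; the helper name and the example spectrum are ours, and the symmetry check reflects the assumption γ = 1/(µ1 + L2) = 1/(µ2 + L1).

```python
import math

def meg_hyperparams_case1(mu1, L1, mu2, L2):
    """Optimal MEG hyperparameters for the two-interval spectrum (12), per Theorem 4."""
    assert abs((mu1 + L2) - (mu2 + L1)) < 1e-12, "Theorem 4 assumes mu1 + L2 = mu2 + L1"
    s_out, s_in = math.sqrt(mu1 * L2), math.sqrt(mu2 * L1)
    h = 4 * (mu1 + L2) / (s_in + s_out) ** 2
    gamma = 1.0 / (mu1 + L2)
    m = ((s_in - s_out) / (s_in + s_out)) ** 2
    return h, gamma, m

# Example spectrum [1, 40] ∪ [61, 100] (placeholder values, symmetric about 50.5).
h, gamma, m = meg_hyperparams_case1(1.0, 40.0, 61.0, 100.0)
print(h, gamma, m, m ** 0.25)    # m ** 0.25 is the asymptotic rate (18) in the robust region
```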

Case 2: cross-shaped spectrum. If the condition in Case 2 of Theorem 3 is satisfied (i.e., (1 − √m)² ⩽ h/(4γ) < (1 + √m)²), then σ^{−1}(−1) are complex, while σ^{−1}(1) are real (c.f., Figure 1, middle). We can write the robust region σ^{−1}_{Case 2}([−1, 1]) as:

$$\underbrace{\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]}_{\subset\,\mathbb{R}_{+}}\bigcup\underbrace{\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]}_{\subset\,\mathbb{C}_{+}}.$$

Here, the first interval lies on R+, as the square root term is real; conversely, in the second interval, the square root term is imaginary, with the fixed real component 1/(2γ). We summarize the optimal hyperparameters for this case in the next theorem.

Theorem 5 (Case 2). Consider solving (1) for games where the Jacobian has a cross-shaped spectrum as in (14). For this problem, the optimal hyperparameters for the momentum extragradient method in (4) are:

h=16(ΞΌ+L)(4c2+(ΞΌ+L)2+4ΞΌL)2, Ξ³=1ΞΌ+L, and m=(4c2+(ΞΌ+L)2βˆ’4ΞΌL4c2+(ΞΌ+L)2+4ΞΌL)2.h=\frac{16(\mu+L)}{(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L})^{2}},\quad\ \gamma=\frac{1}{\mu+L},\quad\ \mathrm{and}\quad\ m=\left(\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}\right)^{2}.

We get the asymptotic rate from Theorem 5, which simplifies in the ill-conditioned regime τ := µ/L → 0 as:

m4=(4c2+(ΞΌ+L)2βˆ’4ΞΌL4c2+(ΞΌ+L)2+4ΞΌL)1/21βˆ’2Ο„(2c/L)2+1+o(Ο„).(19)\sqrt[4]{m}=\left(\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}\right)^{1/2}\underset{\tau\to0}{=}1-\frac{2\sqrt{\tau}}{\sqrt{(2c/L)^{2}+1}}+o(\sqrt{\tau}).\tag{19}

We see that MEG achieves an accelerated convergence rate of 1 − O(√(µ/L)), as long as c = O(L). We remark that this rate is optimal in the following sense. The lower bound for problems with the cross-shaped spectrum in (14) must be slower than the existing ones for minimizing µ-strongly convex and L-smooth functions, as the former is strictly more general. Since we reach the same asymptotic optimal rate, this must be optimal.

Case 3: shifted imaginary spectrum. Lastly, if the condition in Case 3 of Theorem 3 is satisfied (i.e., h/(4γ) < (1 − √m)²), then σ^{−1}(−1) and σ^{−1}(1) (and the intermediate values) are all complex conjugates (c.f., Figure 1, right). We can write the robust region σ^{−1}_{Case 3}([−1, 1]) as:

$$\left[\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}\right]\bigcup\left[\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}},\ \frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\right]\subset\mathbb{C}_{+}.$$

We modeled such a spectrum in (15), which generalizes bilinear games, where the spectrum reduces to ±[ai, bi] (i.e., with c = 0). We summarize the optimal hyperparameters for this case below.

6Precisely, GDM with optimal step size and momentum asymptotically achieves a 1 − 2√τ + o(√τ) convergence rate, as τ → 0 (Goujaud & Pedregosa, 2022, Proposition 3.3).

Theorem 6 (Case 3). Consider solving (1) for games where the Jacobian has a shifted imaginary spectrum in (15). For this problem, the optimal hyperparameters for the momentum extragradient method in (4) are:

$$h=\frac{8c}{(\sqrt{c^{2}+a^{2}}+\sqrt{c^{2}+b^{2}})^{2}},\quad\gamma=\frac{1}{2c},\quad\text{and}\quad m=\left(\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}\right)^{2}.$$

Similarly to before, we compute the asymptotic convergence rate from Theorem 6 using (9):

$$\sqrt[4]{m}=\left(\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}\right)^{1/2}=\left(1-\frac{2\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}\right)^{1/2}.\tag{20}$$

Note that by setting c = 0, the rate in (20) matches the lower bound for bilinear games, $\sqrt{\tfrac{b-a}{b+a}}$ (Azizian et al., 2020b, Proposition 5). Further, with c > 0, the convergence rate in (20) improves, highlighting the contrast between vanilla bilinear games and their regularized counterpart.

Remark 3. Notice that the optimal momentum m in both Theorems 5 and 6 is positive. This is in contrast to Gidel et al. (2019), where the gradient method with negative momentum is studied. This difference elucidates the distinct dynamics of how momentum interacts with the gradient and the extragradient methods.
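Analogously to Case 1, the hyperparameter formulas of Theorems 5 and 6 translate directly into code; the sketch below is ours, with placeholder problem constants, and m ** 0.25 gives the corresponding asymptotic rates (19) and (20).

```python
import math

def meg_hyperparams_case2(mu, L, c):
    """Theorem 5: cross-shaped spectrum [mu, L] with imaginary extent [-c, c]."""
    s = math.sqrt(4 * c ** 2 + (mu + L) ** 2)
    r = math.sqrt(4 * mu * L)
    return 16 * (mu + L) / (s + r) ** 2, 1.0 / (mu + L), ((s - r) / (s + r)) ** 2

def meg_hyperparams_case3(a, b, c):
    """Theorem 6: shifted imaginary spectrum [c ± ai, c ± bi]."""
    ra, rb = math.sqrt(c ** 2 + a ** 2), math.sqrt(c ** 2 + b ** 2)
    return 8 * c / (ra + rb) ** 2, 1.0 / (2 * c), ((rb - ra) / (rb + ra)) ** 2

for h, gamma, m in [meg_hyperparams_case2(mu=1.0, L=200.0, c=99.5),
                    meg_hyperparams_case3(a=1.0, b=10.0, c=0.5)]:
    print(h, gamma, m, m ** 0.25)
```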

5 Comparison With Other Methods

Having established MEG's asymptotic convergence rates for various spectrum models, we now compare it with other first-order methods, including GD, GDM, and EG.

Comparison with GD and EG. Building upon the fixed-point iteration framework established by Polyak (1987), Azizian et al. (2020a) interpret both GD and EG as fixed-point iterations. Within this framework, iterates of a method are generated according to:

$$w_{t+1}=F(w_{t}),\quad\forall t\geqslant0,\tag{21}$$

where F : R^d → R^d is an operator representing the method. However, analyzing this scheme in general settings poses challenges due to the potential nonlinearity of F. To address this, under conditions of twice differentiability of F and proximity of w to the stationary point w⋆, the analysis can be simplified by linearizing F around w⋆:

$$F(w)\approx F(w^{\star})+\nabla F(w^{\star})(w-w^{\star}).$$

Then, for w_0 in a neighborhood of w⋆, one can obtain an asymptotic convergence rate of (21) by studying the spectral radius of the Jacobian at the solution: ρ(∇F(w⋆)) ⩽ ρ⋆ < 1. This implies that (21) locally converges linearly to w⋆ at the rate O((ρ⋆ + ε)^t) for ε ⩾ 0. Further, if F is linear, ε = 0 (Polyak, 1987).

The corresponding fixed-point operators F_h^GD and F_h^EG of GD and EG7 respectively are:

$$\text{(GD)}\quad w_{t+1}=w_{t}-h\,v(w_{t})=F_{h}^{\rm GD}(w_{t}),\tag{22}$$
$$\text{(EG)}\quad w_{t+1}=w_{t}-h\,v(w_{t}-h\,v(w_{t}))=F_{h}^{\rm EG}(w_{t}).\tag{23}$$

The local convergence rate can then be obtained by bounding the spectral radius of the Jacobian of these operators under certain assumptions. We summarize the relevant results below.

Theorem 7 (Azizian et al. (2020a); Gidel et al. (2019)). Let w⋆ be a stationary point of v. Further, assume the eigenvalues of ∇v(w⋆) all have positive real parts. Then, denoting S⋆ := Sp(∇v(w⋆)),

  1. For the gradient method in (22) with step size h = min_{λ∈S⋆} ℜ(1/λ), it satisfies:8

$$\rho(\nabla F_{h}^{\rm GD}(w^{\star}))^{2}\leqslant1-\min_{\lambda\in\mathcal{S}^{\star}}\Re\left(\frac{1}{\lambda}\right)\min_{\lambda\in\mathcal{S}^{\star}}\Re(\lambda).\tag{24}$$

7Azizian et al. (2020a) assumes that EG uses the same step size h for both the main and the extrapolation steps.

8Note that the spectral radius ρ is squared, but asymptotically this is almost the same, as √(1 − x) ⩽ 1 − x/2.


  2. For the extragradient method in (23) with step size h = (4 sup_{λ∈S⋆} |λ|)^{−1}, it satisfies:

$$\rho(\nabla F_{h}^{\rm EG}(w^{\star}))^{2}\leqslant1-\frac{1}{4}\left(\frac{\min_{\lambda\in\mathcal{S}^{\star}}\Re(\lambda)}{\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|}+\frac{1}{16}\frac{\min_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}}{\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}}\right).\tag{25}$$
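When v(w) = Aw + b, the operators in (22) and (23) are affine, so the quantities bounded in Theorem 7 can be computed exactly; the sketch below does so for a placeholder Jacobian with one real eigenvalue and one complex-conjugate pair.

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Placeholder Jacobian: eigenvalues 1 ± 2i and 3 (all with positive real part).
A = np.array([[1.0, 2.0, 0.0],
              [-2.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
eigs = np.linalg.eigvals(A)

h_gd = min(np.real(1.0 / lam) for lam in eigs)        # Theorem 7: h = min Re(1/lambda)
h_eg = 1.0 / (4.0 * max(abs(lam) for lam in eigs))    # Theorem 7: h = (4 sup |lambda|)^{-1}

I = np.eye(A.shape[0])
F_gd = I - h_gd * A                                   # Jacobian of F_h^GD, eq. (22)
F_eg = I - h_eg * A @ (I - h_eg * A)                  # Jacobian of F_h^EG, eq. (23)

gd_bound = 1 - h_gd * min(np.real(lam) for lam in eigs)   # right-hand side of (24)
print("GD:", spectral_radius(F_gd) ** 2, "<=", gd_bound)
print("EG:", spectral_radius(F_eg) ** 2)
```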

We can determine the convergence rate of GD and EG by using Theorem 7 since all three cases of our spectrum models in (12), (14), and (15) meet the condition that the eigenvalues of βˆ‡v(w ⋆) have positive real parts. The following corollary summarizes this result.

Corollary 1. With the conditions in Theorem 7, for each case of the Jacobian spectrum S ⋆ 1 , S ⋆ 2 , and S ⋆ 3 , the gradient method in (22) and the extragradient method in (23) satisfy the following:

  • Case 1: Sp(∇v) ⊂ S⋆1 = [µ1, L1] ∪ [µ2, L2] ⊂ R+:

$$\rho(\nabla F_{h}^{\rm GD}(w^{\star}))^{2}\leqslant1-\frac{\mu_{1}}{L_{2}},\quad\text{and}\quad\rho(\nabla F_{h}^{\rm EG}(w^{\star}))^{2}\leqslant1-\frac{1}{4}\left(\frac{\mu_{1}}{L_{2}}+\frac{\mu_{1}^{2}}{16L_{2}^{2}}\right).\tag{26}$$

  • Case 2: Sp(∇v) ⊂ S⋆2 = [µ, L] ∪ {z ∈ C : ℜ(z) = c′ > 0, ℑ(z) ∈ [−c, c]}:

$$\rho(\nabla F_{h}^{\rm GD}(w^{\star}))^{2}\leqslant\begin{cases}1-\frac{2\mu}{4c^{2}/(L-\mu)+(L-\mu)},&\text{if}\ \ c\geqslant\sqrt{\frac{L^{2}-\mu^{2}}{4}},\\ 1-\frac{\mu}{L},&\text{otherwise},\end{cases}\qquad\rho(\nabla F_{h}^{\rm EG}(w^{\star}))^{2}\leqslant\begin{cases}1-\frac{1}{4}\left(\frac{\mu}{\sqrt{c^{2}+((L-\mu)/2)^{2}}}+\frac{\mu^{2}}{16(c^{2}+((L-\mu)/2)^{2})}\right),&\text{if}\ \ c\geqslant\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}},\\ 1-\frac{1}{4}\left(\frac{\mu}{L}+\frac{\mu^{2}}{16L^{2}}\right),&\text{otherwise}.\end{cases}\tag{27}$$

  • Case 3: Sp(∇v) ⊂ S⋆3 = [c + ai, c + bi] ∪ [c − ai, c − bi] ⊂ C+:

$$\rho(\nabla F_{h}^{\rm GD}(w^{\star}))^{2}\leqslant1-\frac{c^{2}}{c^{2}+b^{2}},\quad\text{and}\quad\rho(\nabla F_{h}^{\rm EG}(w^{\star}))^{2}\leqslant1-\frac{1}{4}\left(\frac{c}{\sqrt{c^{2}+b^{2}}}+\frac{c^{2}+a^{2}}{16(c^{2}+b^{2})}\right).\tag{28}$$

In Case 1, we see from (26) that both GD and EG have convergence rates 1 − O(µ1/L2) = 1 − O(τ). MEG, on the other hand, has an accelerated convergence rate of 1 − O(√τ), as well as an additional constant improvement by a factor of √(1 − R²), as we showed in (18). Moving on to Case 2, we showed in (19) that MEG enjoys an accelerated convergence rate of 1 − O(√(µ/L)) as long as c = O(L). However, both GD and EG in (27) have non-accelerated convergence under the same condition. Lastly, for Case 3, we showed in (20) that MEG achieves an asymptotic rate that matches the known lower bound for bilinear games, $\sqrt{\tfrac{b-a}{b+a}}$, with c = 0; further, the rate of MEG improves if c > 0. On the contrary, GD and EG suffer from slower rates, as shown in (28).

Comparison with GDM. We now compare the convergence rate of MEG with that of GDM, which iterates as in (5). In Azizian et al. (2020b), it was shown that GD is the optimal method for games where the Jacobian eigenvalues are within a disc in the complex plane. This suggests that acceleration is not possible for this type of problem.9 On the other hand, it is well-known that GDM achieves an accelerated convergence rate for strongly-convex (quadratic) minimization, where the eigenvalues of the Hessian lie on the (strictly positive) real line segment (Polyak, 1987). Hence, Azizian et al. (2020b) studies the intermediate case, where the Jacobian eigenvalues are within an ellipse, which can be thought of as the real segment [Β΅, L] perturbed with Ο΅ in an elliptic way. That is, they consider the spectral shape:10

$$K_{\epsilon}=\left\{z\in\mathbb{C}:\left(\frac{\Re z-(\mu+L)/2}{(L-\mu)/2}\right)^{2}+\left(\frac{\Im z}{\epsilon}\right)^{2}\leqslant1\right\}.$$
Similarly to GD and EG above, in Azizian et al. (2020b), GDM is interpreted as a fixed-point iteration:11

$$w_{t+1}=w_{t}-h\,v(w_{t})+m(w_{t}-w_{t-1})=F^{\rm GDM}(w_{t},w_{t-1}).\tag{29}$$

To study the convergence rate of GDM, we use the following theorem from Azizian et al. (2020b):

9Yet, one can consider the case where, e.g., a cross-shape is contained in a disc. Then, by knowing more fine-grained structure of the Jacobian spectrum, MEG can have faster convergence in (19).

10A visual illustration of this ellipse can be found in Azizian et al. (2020b, Figure 2).

11As GDM updates wt+1 using both wt and wtβˆ’1, Azizian et al. (2020b) uses an augmented fixed point operator; see Lemma 2 in that work for details.


Theorem 8 (Azizian et al. (2020b)). Define ϵ(µ, L) as ϵ(µ, L)/L = (µ/L)^θ = τ^θ with θ > 0, and let a ∧ b = min(a, b). If Sp(∇F^GDM(w⋆, w⋆)) ⊂ K_ϵ, then as τ → 0, it satisfies:

ρ(βˆ‡FGDM(wβˆ—,wβˆ—))β©½{1βˆ’2Ο„+O(Ο„ΞΈβˆ§1),if  ΞΈ>121βˆ’2(2βˆ’1)Ο„+O(Ο„),if  ΞΈ=121βˆ’Ο„1βˆ’ΞΈ+O(Ο„1∧(2βˆ’3ΞΈ)),if  ΞΈ<12,(30)\rho(\nabla F^{GDM}(w^{*},w^{*}))\leqslant\begin{cases}1-2\sqrt{\tau}+O\left(\tau^{\theta\wedge1}\right),&\text{if}\ \ \theta>\frac{1}{2}\\ 1-2(\sqrt{2}-1)\sqrt{\tau}+O\left(\tau\right),&\text{if}\ \ \theta=\frac{1}{2}\\ 1-\tau^{1-\theta}+O\left(\tau^{1\wedge(2-3\theta)}\right),&\text{if}\ \ \theta<\frac{1}{2},\end{cases}\tag{30}

where the hyperparameters h and m are functions of µ, L, and ϵ only.

For Case 1, GDM converges at the rate 1 − 2√τ + O(τ) (i.e., with θ = 1 from the above), which is always slower than the rate of MEG in (18) by the factor of √(1 − R²). For Case 2, we see from Theorem 8 that GDM achieves an accelerated rate, i.e., 1 − O(√τ), until θ = 1/2. In other words, the biggest elliptic perturbation ϵ for which GDM permits the accelerated rate is ϵ = √(µL).12 We interpret Theorem 8 for games with the cross-shaped Jacobian spectrum in (14) and the shifted imaginary spectrum in (15) in the following corollary.

12Observe that (µ/L)^{1/2} = ϵ(µ, L)/L ⟹ ϵ(µ, L) = √(µL).

Corollary 2. Consider the gradient method with momentum, interpreted as a fixed-point iteration as in (29). For games with the cross-shaped Jacobian spectrum in (14) with c = (L − µ)/2, GDM cannot achieve an accelerated rate when (L − µ)/2 = c > ϵ = √(µL). Since L > µ, this further implies L/µ > √5. That is, when the condition number exceeds √5 ≈ 2.236, GDM cannot achieve an accelerated convergence rate. On the contrary, as we showed in (19), MEG can converge at an accelerated rate in the ill-conditioned regime.

The convergence rate of GDM for Case 3 cannot be determined from Theorem 8, as this theorem assumes the spectrum model of the real line segment [µ, L] with an ϵ perturbation (along the imaginary axis), while S⋆3 in (15) has a fixed real component. Instead, we utilize the link function of GDM in (7) to show that it is unlikely for GDM to stay in the robust region ξ^{−1}([−1, 1]).

Proposition 2. Consider solving (1) for games where the Jacobian has a shifted imaginary spectrum in (15), using the gradient method with momentum in (5). For any complex number z = p + qi ∈ C+, if 2(1+m)/h < p, then GDM cannot stay in the robust region, i.e., |ξ(z)| > 1.

Note that the condition 2(1+m)/h < p is hard to avoid even for small p, considering that h is usually a small value.
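A quick numerical check of this condition, using the GDM link function ξ from (7); the step size, momentum, and eigenvalues below are placeholders.

```python
import math

def gdm_link(z, h, m):
    """GDM link function from eq. (7): xi(z) = (1 + m - h z) / (2 sqrt(m))."""
    return (1 + m - h * z) / (2 * math.sqrt(m))

h, m = 0.01, 0.5
for p in [50.0, 500.0]:                    # real parts of eigenvalues z = p + i
    z = complex(p, 1.0)
    outside = abs(gdm_link(z, h, m)) > 1   # is GDM outside its robust region?
    print(p, 2 * (1 + m) / h < p, outside)
```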

6 Local Convergence For Non-Affine Vector Fields

The optimal hyperparameters of MEG for each spectrum model and the associated convergence rate we obtained in Section 4 are attainable when the vector field is affine. A natural question is, then, what can we say about the convergence rate of MEG when the vector field is not affine? To that end, we provide the local convergence of MEG by restarting the momentum, as detailed below. Let us consider the operator G representing the MEG in (4) such that:

$$[w_{t+1},w_{t}]=G([w_{t},w_{t-1}])\quad\text{and}\quad G([w^{\star},w^{\star}])=[w^{\star},w^{\star}].$$

In addition, we assume that w_1 = w_0 − (h/(1+m)) v(w_0 − γ v(w_0)), in order to induce the residual polynomials from Theorem 1; see also its proof and Algorithm 1 in the appendix. Now let us consider the following algorithm:

[w_{tk+i+1},w_{tk+i}]=G\big{(}[w_{tk+i},w_{tk+i-1}]\big{)}\quad\text{for}\quad1\leqslant i\leqslant k-1,\quad\text{and then}\tag{31} $$w_{(t+1)k+1}=w_{(t+1)k}-\frac{h}{1+m}v\big{(}w_{(t+1)k}-\gamma v\big{(}w_{(t+1)k}\big{)}\big{)}.$$

In other words, we repeat MEG for k steps, and then restart the momentum at [w(t+1)k+1, w(t+1)k]. The local convergence of the restarted MEG is established in the next theorem.
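A minimal sketch of this restart scheme, assuming the momentum-consistent first step described above; the toy non-affine vector field (a regularized bilinear game plus a small cubic term) and all constants are placeholders of ours.

```python
import numpy as np

def meg_restarted(v, w0, h, gamma, m, k, n_restarts):
    """MEG with momentum restarts every k steps, following (31)."""
    w = w0.copy()
    for _ in range(n_restarts):
        w_prev = w.copy()
        w = w - h / (1 + m) * v(w - gamma * v(w))        # momentum-free first step
        for _ in range(k - 1):                           # k - 1 full MEG steps, eq. (4)
            w_prev, w = w, w - h * v(w - gamma * v(w)) + m * (w - w_prev)
    return w

def v(w):                                                # non-affine placeholder vector field
    x, y = w
    return np.array([0.5 * x + 2.0 * y + 0.1 * x ** 3,
                     -2.0 * x + 0.5 * y + 0.1 * y ** 3])

w_out = meg_restarted(v, np.array([1.0, -1.0]), h=0.25, gamma=1.0, m=0.05, k=20, n_restarts=10)
print(np.linalg.norm(w_out))             # the stationary point is w* = 0
```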

Theorem 9 (Local convergence). Let G : R^{2d} → R^{2d} be the continuously differentiable operator representing the momentum extragradient method (MEG) in (4). Let w⋆ be a stationary point. Let w_t denote the output of MEG, which enjoys a convergence rate of the form ∥w_t − w⋆∥ = C(1 − φ)^t (t + 1)∥w_0 − w⋆∥ for some 0 < φ < 1 when the vector field is affine. Further, consider restarting the momentum of MEG after running k steps, as in (31). Then, for each ε > 0, there exist k > 0 and δ > 0 such that, for all initializations w_0 satisfying ∥w_0 − w⋆∥ ⩽ δ, the restarted MEG satisfies:

$$\|w_{t}-w^{\star}\|=O((1-\varphi+\varepsilon)^{t})\,\|w_{0}-w^{\star}\|.$$

7 Experiments


Figure 3: Illustration of the game Jacobian spectra and the performance of different algorithms considered. Jacobian spectrum in the first plot matches S ⋆ 2 in (14) precisely, while that in the third plot inexactly follows S ⋆ 2 . The second (fourth) plot shows the performance of different algorithms for solving quadratic games in (16) with the Jacobian spectrum following the first (third) plot.

In this section, we perform numerical experiments to optimize a game when the Jacobian has a cross-shaped spectrum in (14). We focus on this spectrum as it may be the most challenging case, involving both real and complex eigenvalues (c.f., Theorem 3). To test the robustness, we consider two cases where the Jacobian spectrum exactly follows S ⋆ 2 in (14), as well as the inexact case. We illustrate them in Figure 3.

We focus on two-player quadratic games, where player 1 controls x ∈ R^{d1} and player 2 controls y ∈ R^{d2}, with loss functions as in (16). In our setting, the corresponding vector field in (17) satisfies M12 = −M21⊤, but S1 and S2 can be nonzero symmetric matrices. Further, the Jacobian ∇v = A has the cross-shaped eigenvalue structure in (14), with c = (L − µ)/2 (c.f., Proposition 1, Case 2). For the problem constants, we use µ = 1 and L = 200. The optimum [x⋆ y⋆]⊤ = w⋆ ∈ R^{200} is generated using the standard normal distribution. For simplicity, we assume b = [b1 b2]⊤ = [0 0]⊤. For the algorithms, we compare GD in (22), GDM in (5), EG in (23), and MEG in (4). All algorithms are initialized with 0. We plot the experimental results in Figure 3.

For MEG (optimal), we set the hyperparameters using Theorem 5. For GD (theory) and EG (theory), we set the hyperparameters using Theorem 7, both for the exact and the inexact settings. For GDM (grid search), we perform a grid search over h^GDM and m^GDM, and choose the best-performing ones, as Theorem 8 does not give a specific form for the hyperparameter setup. Specifically, we consider 0.005 ⩽ h^GDM ⩽ 0.015 with 10^{−3} increments, and 0.01 ⩽ m^GDM ⩽ 0.99 with 10^{−2} increments. In addition, as Theorem 7 might be conservative, we conduct grid searches for GD and EG as well. For GD (grid search), we use the same setup as h^GDM. For EG (grid search), we use 0.001 ⩽ h^EG ⩽ 0.05 with 10^{−4} increments.
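A compact sketch in the spirit of this experiment: it builds a matrix A with the cross-shaped spectrum (14), applies the Theorem 5 hyperparameters to MEG and the Theorem 7 step sizes to GD and EG, and reports the final relative errors. The particular construction of A, the dimensions, and the choice b = −Aw⋆ are ours and do not reproduce the exact experimental code.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, L = 1.0, 200.0
c = (L - mu) / 2.0                                     # cross height, as in Section 7

# Real eigenvalues in [mu, L] and complex pairs (mu+L)/2 ± i q, assembled block-diagonally.
blocks = [np.array([[r]]) for r in np.linspace(mu, L, 50)]
blocks += [np.array([[(mu + L) / 2, q], [-q, (mu + L) / 2]]) for q in np.linspace(0.5, c, 50)]
A = np.zeros((150, 150))
i = 0
for Bk in blocks:
    A[i:i + Bk.shape[0], i:i + Bk.shape[0]] = Bk
    i += Bk.shape[0]
Q, _ = np.linalg.qr(rng.standard_normal(A.shape))      # hide the block structure
A = Q @ A @ Q.T

w_star = rng.standard_normal(A.shape[0])
b = -A @ w_star                                        # so that v(w_star) = 0
v = lambda w: A @ w + b

# Theorem 5 hyperparameters for MEG; Theorem 7 step sizes for GD and EG.
s, r = np.sqrt(4 * c ** 2 + (mu + L) ** 2), np.sqrt(4 * mu * L)
h_meg, gamma, m = 16 * (mu + L) / (s + r) ** 2, 1.0 / (mu + L), ((s - r) / (s + r)) ** 2
eigs = np.linalg.eigvals(A)
h_gd = min(np.real(1.0 / lam) for lam in eigs)
h_eg = 1.0 / (4.0 * max(abs(lam) for lam in eigs))

w_gd = w_eg = w_meg = w_meg_prev = np.zeros(A.shape[0])
for _ in range(500):
    w_gd = w_gd - h_gd * v(w_gd)
    w_eg = w_eg - h_eg * v(w_eg - h_eg * v(w_eg))
    w_meg, w_meg_prev = (w_meg - h_meg * v(w_meg - gamma * v(w_meg))
                         + m * (w_meg - w_meg_prev)), w_meg

for name, w in [("GD (theory)", w_gd), ("EG (theory)", w_eg), ("MEG (optimal)", w_meg)]:
    print(name, np.linalg.norm(w - w_star) / np.linalg.norm(w_star))
```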

There are several remarks to make. First, although the third plot in Figure 3 does not exactly follow the spectrum model in (14), MEG still works well with the optimal hyperparameters from Theorem 5. As expected, MEG (optimal) requires more iterations in the inexact case compared to the exact case. Second, compared to other algorithms, MEG (optimal) indeed exhibits a significantly faster rate of convergence, even when compared to other methods that use grid-search hyperparameter tuning, supporting our theoretical findings in Section 4. Third, while EG (theory) is slower than GD (theory), which confirms Corollary 1, EG (grid search) can be tuned to converge faster via grid search. Lastly, even though the best performance of GDM (grid search) is obtained through grid search, one can see that GD (grid search) obtains a slightly faster convergence rate than GDM (grid search), confirming Corollary 2.

8 Conclusion

In the study of differentiable games, finding stationary points efficiently is crucial. This work analyzes the momentum extragradient method, revealing three distinct convergence modes dependent on the Jacobian eigenvalue distribution. Through a polynomial-based analysis, we derive optimal hyperparameters for each mode, achieving accelerated asymptotic convergence rates. We compared the obtained rates with those of other first-order methods and showed that the considered methods do not achieve the accelerated convergence rate.

Notably, our initial analysis for affine vector fields extends to guarantee local convergence rates on twice-differentiable vector fields. Numerical experiments on quadratic games validate our theoretical findings.

Acknowledgments

The authors would like to thank Fangshuo Liao, Baptiste Goujaud, Damien Scieur, Miri Son, and Giorgio Young for their useful discussions and feedback. This work is supported by NSF FET: Small No. 1907936, NSF MLWiNS CNS No. 2003137 (in collaboration with Intel), NSF CMMI No. 2037545, NSF CAREER award No. 2145629, NSF CIF No. 2008555, Rice InterDisciplinary Excellence Award (IDEA), and the Canada CIFAR AI Chairs program.

References

Yossi Arjevani and Ohad Shamir. On the iteration complexity of oblivious first-order optimization algorithms.

In International Conference on Machine Learning. PMLR, 2016.

Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games. In International Conference on Artificial Intelligence and Statistics. PMLR, 2020a.

Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. Accelerating smooth games by manipulating spectral shapes. In International Conference on Artificial Intelligence and Statistics. PMLR, 2020b.

David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In International Conference on Machine Learning. PMLR, 2018.

Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, and Simon Lacoste-Julien. A closer look at the optimization landscapes of generative adversarial networks. In International Conference on Learning Representations, 2020.

Raphaël Berthier, Francis Bach, and Pierre Gaillard. Accelerated gossip in networks of given dimension using Jacobi polynomial iterations. SIAM Journal on Mathematics of Data Science, 2020.

Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, Valentin Samokhin, Sebastian U Stich, and Alexander Gasnikov. Decentralized local stochastic extra-gradient for variational inequalities. arXiv preprint arXiv:2106.08315, 2021.

Theodore S Chihara. An introduction to orthogonal polynomials. Courier Corporation, 2011.

Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. Advances in neural information processing systems, 31, 2018.

Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a nash equilibrium. SIAM Journal on Computing, 2009.

Carles Domingo-Enrich, Fabian Pedregosa, and Damien Scieur. Average-case acceleration for bilinear games and normal matrices. In International Conference on Learning Representations, 2021.

Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2021.

Gauthier Gidel, Hugo Berard, GaΓ«tan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. arXiv preprint arXiv:1802.10551, 2018.

Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, volume 27, 2014.

Eduard Gorbunov, Nicolas Loizou, and Gauthier Gidel. Extragradient method: O (1/k) last-iterate convergence for monotone variational inequalities and connections with cocoercivity. In International Conference on Artificial Intelligence and Statistics. PMLR, 2022.

Eduard Gorbunov, Adrien Taylor, Samuel HorvΓ‘th, and Gauthier Gidel. Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity. In International Conference on Machine Learning. PMLR, 2023.

Baptiste Goujaud and Fabian Pedregosa. Cyclical step-sizes, 2022. URL http://fa.bianp.net/blog/2022/cyclical/.

Baptiste Goujaud, Damien Scieur, Aymeric Dieuleveut, Adrien B Taylor, and Fabian Pedregosa. Super-acceleration with cyclical step-sizes. In International Conference on Artificial Intelligence and Statistics. PMLR, 2022.

Magnus R Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 49(6):409, 1952.

Andrew J Hetzel, Jay S Liew, and Kent E Morrison. The probability that a matrix of integers is diagonalizable. The American Mathematical Monthly, 114(6):491–499, 2007.

Yu-Guan Hsieh, Franck Iutzeler, JΓ©rΓ΄me Malick, and Panayotis Mertikopoulos. On the convergence of single-call stochastic extra-gradient methods. Advances in Neural Information Processing Systems, 32, 2019.

Yu-Guan Hsieh, Franck Iutzeler, JΓ©rΓ΄me Malick, and Panayotis Mertikopoulos. Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling. Advances in Neural Information Processing Systems, 33, 2020.

Junhyung Lyle Kim, Gauthier Gidel, Anastasios Kyrillidis, and Fabian Pedregosa. Momentum extragradient is optimal for games with cross-shaped spectrum. In OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022.

Galina M Korpelevich. The extragradient method for finding saddle points and other problems. Matecon, 1976.

Peter Lancaster and Hanafi K Farahat. Norms on direct sums and tensor products. Mathematics of Computation, 1972.

Alistair Letcher, David Balduzzi, SΓ©bastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. The Journal of Machine Learning Research, 2019.

Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, and Michael I Jordan. On the convergence of stochastic extragradient for bilinear games with restarted iteration averaging. arXiv preprint arXiv:2107.00464, 2021.

Tengyuan Liang and James Stokes. Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 907–915. PMLR, 2019.

Mingrui Liu, Wei Zhang, Youssef Mroueh, Xiaodong Cui, Jarret Ross, Tianbao Yang, and Payel Das. A decentralized parallel algorithm for training generative adversarial nets. Advances in Neural Information Processing Systems, 2020.

Jonathan P. Lorraine, David Acuna, Paul Vicol, and David Duvenaud. Complex momentum for optimization in games. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, 2022.

Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In International conference on machine learning. PMLR, 2018.

Aryan Mokhtari, Asuman Ozdaglar, and Sarath Pattathil. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In International Conference on Artificial Intelligence and Statistics. PMLR, 2020.

Renato DC Monteiro and Benar Fux Svaiter. On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM Journal on Optimization, 20(6):2755–2787, 2010.

RΓ©mi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Andrea Michi, et al. Nash learning from human feedback. arXiv preprint arXiv:2312.00886, 2023.

Arkadi Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 2004.

Samet Oymak. Provable super-convergence with a large cyclical learning rate. IEEE Signal Processing Letters, 28, 2021.

Balamurugan Palaniappan and Francis Bach. Stochastic variance reduction methods for saddle-point problems. Advances in Neural Information Processing Systems, 2016.

Vardan Papyan. Traces of class/cross-class structure pervade deep learning spectra. The Journal of Machine Learning Research, 21, 2020.

Fabian Pedregosa. Momentum: when Chebyshev meets Chebyshev, 2020. URL http://fa.bianp.net/blog/2020/momentum/.

Fabian Pedregosa and Damien Scieur. Acceleration through spectral density estimation. In Proceedings of the 37th International Conference on Machine Learning. PMLR, November 2020.

David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. arXiv preprint arXiv:1610.01945, 2016.

Boris T Polyak. Introduction to Optimization. Optimization Software, Inc., Publications Division, New York, 1987.

Ernest K Ryu, Kun Yuan, and Wotao Yin. ODE analysis of stochastic gradient methods with optimism and anchoring for minimax problems. arXiv preprint arXiv:1905.10899, 2019.

Yoav Shoham and Kevin Leyton-Brown. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press, 2008.

Mikhail V Solodov and Benar F Svaiter. A hybrid approximate extragradient–proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Analysis, 7(4):323–345, 1999.

Paul Tseng. On linear convergence of iterative methods for the variational inequality problem. Journal of Computational and Applied Mathematics, 1995.

Jian Zhang and Ioannis Mitliagkas. Yellowfin and the art of momentum tuning. Proceedings of Machine Learning and Systems, 1, 2019.

A Missing Proofs In Section 3

A.1 Proof Of Lemma 1

Proof of Lemma 1 can be found for example in Azizian et al. (2020b, Section B).

To obtain the residual polynomials of MEG, $w_1$ has to be set slightly differently from the rest of the iterates, as we write in the pseudocode below:

Algorithm 1: Momentum extragradient (MEG) method
Input: Initialization $w_0$, hyperparameters $h, \gamma, m$.
Set: $w_1 = w_0 - \frac{h}{1+m}\, v(w_0 - \gamma v(w_0))$
for $t = 1, 2, \ldots$ do
    $w_{t+1} = w_t - h\, v(w_t - \gamma v(w_t)) + m(w_t - w_{t-1})$
end

Derivation Of The First Part

Proof. We want to find the residual polynomial P˜t(A) of the extragradient with momentum (MEG) in (4).

That is, we want to find

$$w_{t}-w^{\star}=\tilde{P}_{t}(A)(w_{0}-w^{\star}),\tag{32}$$
where $\{w_t\}_{t=0}^{\infty}$ are the iterates generated by MEG, which is possible by Lemma 1, as MEG is a first-order method (Arjevani & Shamir, 2016; Azizian et al., 2020b). We now prove this by induction. To do so, we will use the following properties. First, note that as we are looking for a stationary point, it holds that $v(w^{\star}) = 0$. Further, as $v$ is linear by the assumption of Lemma 1, it holds that $v(w) = A(w - w^{\star})$.

Base case. For t = 0, P˜0(A) is a degree-zero polynomial, and hence equals Id, which denotes the identity matrix. Thus, w0 βˆ’ w ⋆ = Id(w0 βˆ’ w ⋆) holds true.

For completeness, we also prove the case $t = 1$. In that case, observe that MEG proceeds as $w_1 = w_0 - \frac{h}{1+m}\, v(w_0 - \gamma v(w_0))$. Subtracting $w^{\star}$ on both sides, we have:

$$\begin{aligned}
w_1 - w^{\star} &= w_0 - w^{\star} - \tfrac{h}{1+m}\, v(w_0 - \gamma v(w_0)) \\
&= w_0 - w^{\star} - \tfrac{h}{1+m}\, v(w_0 - \gamma A(w_0 - w^{\star})) \\
&= w_0 - w^{\star} - \tfrac{h}{1+m}\, A(w_0 - \gamma A(w_0 - w^{\star}) - w^{\star}) \\
&= w_0 - w^{\star} - \tfrac{h}{1+m}\, A(w_0 - w^{\star}) + \tfrac{h\gamma}{1+m}\, A^2(w_0 - w^{\star}) \\
&= \left(I_d - \tfrac{h}{1+m} A + \tfrac{h\gamma}{1+m} A^2\right)(w_0 - w^{\star}) \\
&= \left(I_d - \tfrac{h}{1+m} A(I_d - \gamma A)\right)(w_0 - w^{\star}) = \tilde{P}_1(A)(w_0 - w^{\star}).
\end{aligned}$$
Induction step. As the induction hypothesis, assume $\tilde{P}_t$ satisfies (32). We want to prove this holds for $t+1$. We have:

$$\begin{aligned}
w_{t+1} &= w_t - h\, v(w_t - \gamma v(w_t)) + m(w_t - w_{t-1}) \\
&= w_t - h\, v(w_t - \gamma A(w_t - w^{\star})) + m(w_t - w_{t-1}) \\
&= w_t - h A(w_t - \gamma A(w_t - w^{\star}) - w^{\star}) + m(w_t - w_{t-1}) \\
&= w_t - h A(w_t - w^{\star}) + h\gamma A^2(w_t - w^{\star}) + m(w_t - w_{t-1}) \\
&= w_t - h A(I_d - \gamma A)(w_t - w^{\star}) + m(w_t - w_{t-1}).
\end{aligned}$$

Subtracting w ⋆ on both sides, we have:

$$\begin{aligned}
w_{t+1} - w^{\star} &= w_t - w^{\star} - hA(I_d - \gamma A)(w_t - w^{\star}) + m(w_t - w_{t-1}) \\
&= \left(I_d - hA(I_d - \gamma A)\right)(w_t - w^{\star}) + m\left(w_t - w^{\star} - (w_{t-1} - w^{\star})\right) \\
&\overset{(32)}{=} \left(I_d - hA(I_d - \gamma A)\right)\tilde{P}_t(A)(w_0 - w^{\star}) + m\left(\tilde{P}_t(A)(w_0 - w^{\star}) - \tilde{P}_{t-1}(A)(w_0 - w^{\star})\right) \\
&= \left(I_d + mI_d - hA(I_d - \gamma A)\right)\tilde{P}_t(A)(w_0 - w^{\star}) - m\tilde{P}_{t-1}(A)(w_0 - w^{\star}) \\
&= \tilde{P}_{t+1}(A)(w_0 - w^{\star}),
\end{aligned}$$
where in the third equality we used the induction hypothesis in (32).

Derivation Of The Second Part In (6)

Proof. We show Pt = P˜t for all t via induction.

Base case. For t = 0, by the definition of Chebyshev polynomials of the first and the second kinds, we have T0(Ξ») = U0(Ξ») = 1. Thus,

$$P_{0}(\lambda)=m^{0}\left(\frac{2m}{1+m}T_{0}(\sigma(\lambda))+\frac{1-m}{1+m}U_{0}(\sigma(\lambda))\right)=\frac{2m}{1+m}+\frac{1-m}{1+m}=1=\tilde{P}_{0}(\lambda).$$

Again, for completeness, we prove when t = 1 as well. In that case, by the definition of Chebyshev polynomials of the first and the second kinds, we have T1(Ξ») = Ξ», and U1(Ξ») = 2Ξ». Therefore,

$$\begin{aligned}
P_{1}(\lambda) &= m^{1/2}\left(\frac{2m}{1+m}T_{1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{1}(\sigma(\lambda))\right) \\
&= m^{1/2}\left(\frac{2m}{1+m}\,\sigma(\lambda)+\frac{1-m}{1+m}\cdot 2\cdot\sigma(\lambda)\right) \\
&= m^{1/2}\left(\frac{2\sigma(\lambda)}{1+m}\right) \\
&= 1-\frac{h\lambda(1-\gamma\lambda)}{1+m}=\tilde{P}_{1}(\lambda).
\end{aligned}$$

Induction step. As the induction hypothesis, assume that Pt = P˜t for t. In this step, we show that the same holds for t + 1.

$$\begin{aligned}
P_{t+1}(\lambda) &= m^{(t+1)/2}\left(\frac{2m}{1+m}T_{t+1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t+1}(\sigma(\lambda))\right) \\
&= m^{(t+1)/2}\left(\frac{2m}{1+m}\big(2\sigma(\lambda)T_{t}(\sigma(\lambda))-T_{t-1}(\sigma(\lambda))\big)+\frac{1-m}{1+m}\big(2\sigma(\lambda)U_{t}(\sigma(\lambda))-U_{t-1}(\sigma(\lambda))\big)\right) \\
&= 2\sigma(\lambda)\cdot m^{1/2}\cdot\underbrace{m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right)}_{P_{t}(\lambda)} - m\cdot\underbrace{m^{(t-1)/2}\left(\frac{2m}{1+m}T_{t-1}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t-1}(\sigma(\lambda))\right)}_{P_{t-1}(\lambda)} \\
&= 2\sigma(\lambda)\cdot\sqrt{m}\cdot\tilde{P}_{t}(\lambda) - m\cdot\tilde{P}_{t-1}(\lambda) \\
&= (1+m-h\lambda(1-\gamma\lambda))\tilde{P}_{t}(\lambda) - m\tilde{P}_{t-1}(\lambda) = \tilde{P}_{t+1}(\lambda),
\end{aligned}$$
where in the second-to-last equality we use the induction hypothesis, and the last equality is the recurrence satisfied by $\tilde{P}_{t+1}$ established in the first part. $\square$

A.3 Proof Of Lemma 2

Proof of Lemma 2 can be found in Goujaud & Pedregosa (2022).

Proof. We first recall that using (3), we can upper bound the worst-case convergence rate as:

$$\sup_{\lambda\in\mathcal{S}^{\star}}|P_{t}(\lambda)| = \sup_{\lambda\in\mathcal{S}^{\star}}\left|m^{t/2}\left(\frac{2m}{1+m}T_{t}(\sigma(\lambda))+\frac{1-m}{1+m}U_{t}(\sigma(\lambda))\right)\right|$$
$$\leqslant m^{t/2}\left(\frac{2m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|T_{t}(\sigma(\lambda))|+\frac{1-m}{1+m}\sup_{\lambda\in\mathcal{S}^{\star}}|U_{t}(\sigma(\lambda))|\right).\tag{33}$$
Now, denote $\bar{\sigma} := \sup_{\lambda\in\mathcal{S}^{\star}}|\sigma(\lambda; h, \gamma, m)|$. For the first case, if $\bar{\sigma} \leqslant 1$, both $T_t(x)$ and $U_t(x)$ behave nicely, per Lemma 2. Thus, we have

$$(33)\overset{(8)}{\leqslant} m^{t/2}\left(\frac{2m}{1+m}+\frac{1-m}{1+m}(t+1)\right)\leqslant m^{t/2}(t+1)\implies\limsup_{t\to\infty}\left(m^{t/2}(t+1)\right)^{\frac{1}{2t}}=\sqrt[4]{m}.$$

For the second case, we use the following expressions of Chebyshev polynomials:

$$T_{n}(x)=\frac{\left(x-\sqrt{x^{2}-1}\right)^{n}+\left(x+\sqrt{x^{2}-1}\right)^{n}}{2},\quad\mathrm{and}\quad U_{n}(x)=\frac{\left(x+\sqrt{x^{2}-1}\right)^{n+1}-\left(x-\sqrt{x^{2}-1}\right)^{n+1}}{2\sqrt{x^{2}-1}}.$$

Therefore, in the second case where $\bar{\sigma} > 1$, we have both $T_n(x)$ and $U_n(x)$ growing at rate $\left(x + \sqrt{x^2 - 1}\right)^n$.

Hence, we have:

$$(33)\leqslant O\left(m^{t/2}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{t}\right)\implies\limsup_{t\to\infty}\left(m^{t/2}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{t}\right)^{\frac{1}{2t}}=\sqrt[4]{m}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{1/2}.$$
Finally, in order for MEG to converge in the second case, we need:

$$\sqrt[4]{m}\left(\bar{\sigma}+\sqrt{\bar{\sigma}^{2}-1}\right)^{1/2}<1,$$

which is equivalent to

$$\bar{\sigma}\leqslant\frac{\sqrt{m}(m+1)}{2m}=\frac{m+1}{2\sqrt{m}}.$$

A.5 Derivation Of Extreme Points Of Robust Region In (11)

We first write a general formula for inverting a quadratic function. For f(x) = ax2 + bx + c, its inverse is given by:

$$f(x)=ax^{2}+bx+c:=y,\qquad f^{-1}(y)=\frac{-b\pm\sqrt{b^{2}-4a(c-y)}}{2a},$$

with some abuse of notation (i.e., f βˆ’1 above is not a function).

Applying the above to the link function of MEG in (6), we get

$$\sigma^{-1}(y)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{1+m}{h\gamma}+\frac{2\sqrt{m}}{h\gamma}\cdot y}.$$

With this formula, we can plug in 1 and βˆ’1 to get:

$$\sigma^{-1}(-1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}\quad\mathrm{and}\quad\sigma^{-1}(1)=\frac{1}{2\gamma}\pm\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}.$$

Proof. We analyze each case separately.

Case 1: There are two square roots: $\sqrt{\frac{1}{4\gamma^2}-\frac{(1-\sqrt{m})^2}{h\gamma}}$ and $\sqrt{\frac{1}{4\gamma^2}-\frac{(1+\sqrt{m})^2}{h\gamma}}$. The second one is real if:
$$\frac{1}{4\gamma^2}\geqslant\frac{(1+\sqrt{m})^2}{h\gamma}\implies\frac{h\gamma}{4\gamma^2}=\frac{h}{4\gamma}\geqslant(1+\sqrt{m})^2,$$

which implies the first is real, as (1 + √m) 2 β©Ύ (1 βˆ’ √m) 2.

Case 3: There are two square roots: $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}$ and $\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}$. The first one is complex if:
$$\frac{1}{4\gamma^{2}}<\frac{(1-\sqrt{m})^{2}}{h\gamma}\implies\frac{h\gamma}{4\gamma^{2}}=\frac{h}{4\gamma}<(1-\sqrt{m})^{2},$$
which implies the second is complex, as $(1+\sqrt{m})^{2}\geqslant(1-\sqrt{m})^{2}$.

Case 2: This case follows automatically from the above two cases.

A.7 Proof Of Proposition 1

Proof. Define D3 = diag(a, . . . , a) of dimensions d Γ— d. Let us prove that if there exists U, V orthonormal matrices and D1, D2 matrices with non-zeros coefficients only on the diagonal such that (with a slight abuse of notation)

$$S_{1}=U\mathrm{diag}(D_{3},D_{1})U^{\top},\quad S_{2}=VD_{3}V^{\top},\quad\mathrm{and}\quad B=UD_{2}V^{\top},$$

then the spectrum of $A$ is cross-shaped. In that case, we have

$$A=\begin{bmatrix}U[D_{3};D_{1}]U^{\top} & UD_{2}V^{\top}\\ -VD_{2}^{\top}U^{\top} & VD_{3}V^{\top}\end{bmatrix}=\begin{bmatrix}U & 0\\ 0 & V\end{bmatrix}\begin{bmatrix}[D_{3};D_{1}] & D_{2}\\ -D_{2}^{\top} & D_{3}\end{bmatrix}\begin{bmatrix}U & 0\\ 0 & V\end{bmatrix}^{\top}.$$

Now, by considering the basis $W = ((U_1, 0), (0, V_1), \ldots, (U_{d_v}, 0), (0, V_{d_v}), (U_{d_v+1}, 0), \ldots, (U_{d}, 0))$, we have that $A$ can be block-diagonalized in that basis as

$$A=W\mathrm{diag}\left(\begin{bmatrix}a & [D_{2}]_{11}\\ -[D_{2}]_{11} & a\end{bmatrix},\ldots,\begin{bmatrix}a & [D_{2}]_{d_{v}d_{v}}\\ -[D_{2}]_{d_{v}d_{v}} & a\end{bmatrix},[D_{1}]_{1},\ldots,[D_{1}]_{d_{u}-d_{v}}\right)W^{\top}.\tag{34}$$
Now, notice that

$${\rm Sp}\left(\begin{bmatrix}a & -b\\ b & a\end{bmatrix}\right)=\{a\pm bi\},\tag{35}$$

since the associated characteristic polynomial of the above matrix is:

$$(a-\lambda)^{2}+b^{2}=0\implies a-\lambda=\pm bi\implies\lambda=a\pm bi.$$

Hence, using (35) in the formulation of A in (34), we have that the spectrum of A is cross-shaped.

B Missing Proofs In Section 4

B.1 Proof Of Theorem 4

Proof. We write the conditions required for Theorem 5 below:

$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu_{1},\tag{36}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=L_{1},\tag{37}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=\mu_{2},\quad\text{and}\tag{38}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=L_{2}.\tag{39}$$

By adding (37) and (38) (or equivalently by adding (36) and (39)), we get

$$\gamma=\frac{1}{\mu_{1}+L_{2}}=\frac{1}{\mu_{2}+L_{1}}.\tag{40}$$

From (36), we have:

$$\begin{aligned}
\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}&=\mu_{1}\\
\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}&=\left(\frac{1}{2\gamma}-\mu_{1}\right)^{2}\\
\frac{(1-\sqrt{m})^{2}}{h}&=\mu_{1}(1-\gamma\mu_{1})\\
h&=\frac{(1-\sqrt{m})^{2}}{\mu_{1}(1-\gamma\mu_{1})}=\frac{(1-\sqrt{m})^{2}(\mu_{1}+L_{2})}{\mu_{1}L_{2}}.
\end{aligned}\tag{41}$$

Similarly, from (38), we have:

$$\begin{aligned}
\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}&=\mu_{2}\\
\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}&=\left(\frac{\mu_{2}-L_{1}}{2}\right)^{2}\\
\frac{(1+\sqrt{m})^{2}}{h\gamma}&=\left(\frac{\mu_{2}+L_{1}}{2}\right)^{2}-\left(\frac{\mu_{2}-L_{1}}{2}\right)^{2}=\mu_{2}L_{1}.
\end{aligned}\tag{42}$$

Combining (41) and (42), and solving for $m$, we get
$$\mu_{2}L_{1}(1-\sqrt{m})^{2}=\mu_{1}L_{2}(1+\sqrt{m})^{2}$$
$$\sqrt{\mu_{2}L_{1}}(1-\sqrt{m})=\sqrt{\mu_{1}L_{2}}(1+\sqrt{m})$$
$$m=\left(\frac{\sqrt{\mu_{2}L_{1}}-\sqrt{\mu_{1}L_{2}}}{\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}}}\right)^{2}\stackrel{(13)}{=}\left(\frac{\sqrt{\zeta^{2}-R^{2}}-\sqrt{\zeta^{2}-1}}{\sqrt{\zeta^{2}-R^{2}}+\sqrt{\zeta^{2}-1}}\right)^{2}.\tag{43}$$
Finally, plugging (43) back into (41), we get:
$$h=\frac{(1-\sqrt{m})^{2}(\mu_{1}+L_{2})}{\mu_{1}L_{2}}=\frac{4\mu_{1}L_{2}}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}}\cdot\frac{\mu_{1}+L_{2}}{\mu_{1}L_{2}}=\frac{4(\mu_{1}+L_{2})}{(\sqrt{\mu_{2}L_{1}}+\sqrt{\mu_{1}L_{2}})^{2}}.$$

Proof. We write the conditions required for Theorem 5 below:
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=\mu,\tag{44}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=L,\quad\text{and}\tag{45}$$
$$\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}=c.\tag{46}$$
First, by adding (44) and (45), we get:
$$\frac{1}{\gamma}=\mu+L\implies\gamma=\frac{1}{\mu+L}.\tag{47}$$

Plugging (47) back into (44), we have:

$$\begin{aligned}
\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}&=\mu\\
\frac{\mu+L}{2}-\mu&=\sqrt{\left(\frac{\mu+L}{2}\right)^{2}-\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}}\\
\left(\frac{L-\mu}{2}\right)^{2}&=\left(\frac{\mu+L}{2}\right)^{2}-\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}\\
\frac{(1-\sqrt{m})^{2}(\mu+L)}{h}&=\left(\frac{\mu+L}{2}\right)^{2}-\left(\frac{L-\mu}{2}\right)^{2}=\mu L\\
h&=\frac{(1-\sqrt{m})^{2}(\mu+L)}{\mu L}.
\end{aligned}\tag{48}$$

Plugging (47) and (48) into (46), we have:

$$\begin{aligned}
\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}&=c\\
\sqrt{\frac{(1+\sqrt{m})^{2}\,\mu L}{(1-\sqrt{m})^{2}}-\left(\frac{\mu+L}{2}\right)^{2}}&=c\\
\frac{(1+\sqrt{m})^{2}\,\mu L}{(1-\sqrt{m})^{2}}&=c^{2}+\left(\frac{\mu+L}{2}\right)^{2}=\frac{4c^{2}+(\mu+L)^{2}}{4}\\
\frac{(1+\sqrt{m})^{2}}{(1-\sqrt{m})^{2}}&=\frac{4c^{2}+(\mu+L)^{2}}{4\mu L}\\
(1+\sqrt{m})\sqrt{4\mu L}&=(1-\sqrt{m})\sqrt{4c^{2}+(\mu+L)^{2}}\\
\sqrt{m}\left(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}\right)&=\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}\\
\sqrt{m}&=\frac{\sqrt{4c^{2}+(\mu+L)^{2}}-\sqrt{4\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}.
\end{aligned}\tag{49}$$
Finally, to simplify (48) further, from (49), we have:

$$1-\sqrt{m}=\frac{4\sqrt{\mu L}}{\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L}}.$$

Hence, from (48),

$$h=\frac{(\mu+L)(1-\sqrt{m})^{2}}{\mu L}=\frac{\frac{16\mu L(\mu+L)}{(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L})^{2}}}{\mu L}=\frac{16(\mu+L)}{(\sqrt{4c^{2}+(\mu+L)^{2}}+\sqrt{4\mu L})^{2}}.\tag{50}$$

Proof. We write the conditions required for Theorem 6 below:

$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=c+bi,\tag{51}$$
$$\frac{1}{2\gamma}+\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=c+ai,\tag{52}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1-\sqrt{m})^{2}}{h\gamma}}=c-ai,\quad\text{and}\tag{53}$$
$$\frac{1}{2\gamma}-\sqrt{\frac{1}{4\gamma^{2}}-\frac{(1+\sqrt{m})^{2}}{h\gamma}}=c-bi.\tag{54}$$
First, we can see from all cases that the optimal $\gamma$ is

$$\gamma=\frac{1}{2c}.\tag{55}$$
(51) and (54) equivalently imply

$$\begin{aligned}
\sqrt{\frac{(1+\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}&=b\\
(1+\sqrt{m})^{2}&=h\gamma b^{2}+\frac{h}{4\gamma}=\frac{h(c^{2}+b^{2})}{2c}\\
h&=\frac{2c(1+\sqrt{m})^{2}}{c^{2}+b^{2}}.
\end{aligned}\tag{56}$$

Similarly, (52) and (53) imply

$$\begin{aligned}
\sqrt{\frac{(1-\sqrt{m})^{2}}{h\gamma}-\frac{1}{4\gamma^{2}}}&=a\\
\frac{(1-\sqrt{m})^{2}}{h\gamma}&=a^{2}+\frac{1}{4\gamma^{2}}=a^{2}+c^{2}\\
\frac{(1-\sqrt{m})^{2}(c^{2}+b^{2})}{(1+\sqrt{m})^{2}}&=a^{2}+c^{2}\\
(1-\sqrt{m})\sqrt{c^{2}+b^{2}}&=(1+\sqrt{m})\sqrt{c^{2}+a^{2}}\\
\sqrt{m}&=\frac{\sqrt{c^{2}+b^{2}}-\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}=1-\frac{2\sqrt{c^{2}+a^{2}}}{\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}}}.
\end{aligned}\tag{57}$$
Plugging (57) into (56), we get

$$h=\frac{2c(1+\sqrt{m})^{2}}{c^{2}+b^{2}}=\frac{8c}{(\sqrt{c^{2}+b^{2}}+\sqrt{c^{2}+a^{2}})^{2}}.$$

C Missing Proofs In Section 5

C.1 Proof Of Corollary 1

Proof. To compute the convergence rates of GD and EG from Theorem 7 applied to each spectrum model in (12), (14), and (15), we need to compute $\min_{\lambda\in\mathcal{S}^{\star}}\Re(1/\lambda)$ and $\min_{\lambda\in\mathcal{S}^{\star}}\Re(\lambda)$ for GD. Similarly for EG, we additionally need to compute $\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|$, $\min_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}$, and $\sup_{\lambda\in\mathcal{S}^{\star}}|\lambda|^{2}$.

Case 1: It's straightforward to compute

$$\min_{\lambda\in\mathcal{S}_{1}^{\star}}\Re(1/\lambda)=1/L_{2},\quad\mathrm{and}\quad\min_{\lambda\in\mathcal{S}_{1}^{\star}}\Re(\lambda)=\mu_{1}.$$

Thus, GD for Case 1 has the rate

$$1-\frac{\mu_{1}}{L_{2}}=1-\tau.$$

For EG, it's also simple to obtain

$$\sup_{\lambda\in\mathcal{S}_{1}^{\star}}|\lambda|=L_{2},\quad\min_{\lambda\in\mathcal{S}_{1}^{\star}}|\lambda|^{2}=\mu_{1}^{2},\quad\mathrm{and}\quad\sup_{\lambda\in\mathcal{S}_{1}^{\star}}|\lambda|^{2}=L_{2}^{2}.$$

Thus, EG for Case 1 has the rate

$$1-\frac{1}{4}\left(\frac{\mu_{1}}{L_{2}}+\frac{1}{16}\left(\frac{\mu_{1}}{L_{2}}\right)^{2}\right).$$

Case 2: For a complex number z = p + qi ∈ C, we can compute R(1/z) as:

$$\frac{1}{z}=\frac{1}{p+qi}=\frac{p-qi}{p^{2}+q^{2}}=\frac{p}{p^{2}+q^{2}}-\frac{q}{p^{2}+q^{2}}i\implies\Re\left(\frac{1}{z}\right)=\frac{p}{p^{2}+q^{2}}.$$

The four extreme points of the cross-shaped spectrum model in (14) are:

$$\mu=\mu+0i,\quad L=L+0i,\quad\mathrm{and}\quad\frac{L-\mu}{2}\pm ci.$$

Hence, R(1/z) for each of the above points is:

$$\Re\left(\frac{1}{\mu}\right)=\frac{\mu}{\mu^{2}}=\frac{1}{\mu},\qquad\Re\left(\frac{1}{L}\right)=\frac{L}{L^{2}}=\frac{1}{L},\quad\text{and}$$
$$\Re\left(\frac{1}{\frac{L-\mu}{2}\pm ci}\right)=\frac{\frac{L-\mu}{2}}{\left(\frac{L-\mu}{2}\right)^{2}+c^{2}}=\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}.$$

Therefore, $\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re\left(\frac{1}{\lambda}\right)$ is either $\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}$ or $\frac{1}{L}$. As $\mu < L$, we only need to compare these last two values. Observe that:

$$\begin{aligned}
c&>\sqrt{\frac{L^{2}-\mu^{2}}{4}}\\
4c^{2}&>(L-\mu)(L+\mu)\\
4c^{2}&>2L(L-\mu)-(L-\mu)^{2}\\
\frac{1}{L}&>\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}.
\end{aligned}$$

Therefore,

$$\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re\left(\frac{1}{\lambda}\right)=\begin{cases}\frac{2(L-\mu)}{4c^{2}+(L-\mu)^{2}}&\text{if}\quad c>\sqrt{\frac{L^{2}-\mu^{2}}{4}},\\ \frac{1}{L}&\text{otherwise.}\end{cases}$$

For minλ∈S⋆ 2 R(Ξ»), it's straightforward from the definition that

$$\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re(\lambda)=\mu.$$

Thus, GD for Case 2 has the rate

$$\begin{cases}1-\frac{2\mu(L-\mu)}{4c^{2}+(L-\mu)^{2}}&\text{if}\quad c>\sqrt{\frac{L^{2}-\mu^{2}}{4}},\\ 1-\frac{\mu}{L}&\text{otherwise.}\end{cases}$$

Similarly for EG, we need $\min_{\lambda\in\mathcal{S}_{2}^{\star}}\Re(\lambda)$, which was computed above; additionally, we need to compute $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|$, $\min_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$, and $\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|^{2}$. For $z = p + qi \in \mathbb{C}$, $|z| = \sqrt{p^{2}+q^{2}}$. Hence, we have

$$|\mu+0i|=\mu,\quad|L+0i|=L,\quad\mathrm{and}\quad\left|\frac{L-\mu}{2}\pm ci\right|=\sqrt{c^{2}+\left(\frac{L-\mu}{2}\right)^{2}}.$$

Observe that:

$$\begin{aligned}
c&>\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}}\\
c^{2}&>L^{2}-\frac{L^{2}-2L\mu+\mu^{2}}{4}\\
c^{2}+\left(\frac{L-\mu}{2}\right)^{2}&>L^{2}.
\end{aligned}$$

Thus, for supλ∈S⋆ 2 |Ξ»|, we have

$$\sup_{\lambda\in\mathcal{S}_{2}^{\star}}|\lambda|=\begin{cases}\sqrt{c^{2}+\left(\frac{L-\mu}{2}\right)^{2}}&\text{if}\quad c>\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}},\\ L&\text{otherwise,}\end{cases}$$

from which supλ∈S⋆ 2 |Ξ»| 2can also be obtained. Lastly, minλ∈S⋆ 2 |Ξ»| 2 = Β΅ 2, as we know Β΅ < L, and (L βˆ’ Β΅)/2 is the center of [Β΅, L].

Combining all three, we get that the rate of EG for Case 2 is

$$\begin{cases}1-\frac{1}{4}\left(\frac{\mu}{\sqrt{c^{2}+\left(\frac{L-\mu}{2}\right)^{2}}}+\frac{\mu^{2}}{16\left(c^{2}+\left(\frac{L-\mu}{2}\right)^{2}\right)}\right)&\text{if}\quad c\geqslant\sqrt{\frac{3L^{2}+2L\mu-\mu^{2}}{4}},\\ 1-\frac{1}{4}\left(\frac{\mu}{L}+\frac{\mu^{2}}{16L^{2}}\right)&\text{otherwise.}\end{cases}$$

Case 3: Since (15) has a fixed real component, $\min_{\lambda\in\mathcal{S}_{3}^{\star}}\Re(\lambda)=c$.

For $\min_{\lambda\in\mathcal{S}_{3}^{\star}}\Re(1/\lambda)$, we can compare

$$\Re\left(\frac{1}{c+ai}\right)=\frac{c}{c^{2}+a^{2}}>\frac{c}{c^{2}+b^{2}}=\Re\left(\frac{1}{c+bi}\right),$$
since $a<b$ from (15). Thus, GD for Case 3 has the rate
$$1-\frac{c^{2}}{c^{2}+b^{2}}.$$

For EG, it's also simple to obtain

$$\sup_{\lambda\in\mathcal{S}_{3}^{\star}}|\lambda|=\sqrt{c^{2}+b^{2}},\quad\min_{\lambda\in\mathcal{S}_{3}^{\star}}|\lambda|^{2}=c^{2}+a^{2},\quad\mathrm{and}\quad\sup_{\lambda\in\mathcal{S}_{3}^{\star}}|\lambda|^{2}=c^{2}+b^{2}.$$

Thus, EG has the rate

$$1-\frac{1}{4}\left(\frac{c}{\sqrt{c^{2}+b^{2}}}+\frac{1}{16}\frac{c^{2}+a^{2}}{c^{2}+b^{2}}\right).$$

C.2 Proof Of Corollary 2

Proof. Per Theorem 8, the largest $\epsilon$ that permits acceleration for GDM is $\epsilon = \sqrt{\mu L}$. Therefore, in the special case of (14) we consider, i.e., when $c = \frac{L-\mu}{2}$, GDM cannot achieve acceleration if $\frac{L-\mu}{2} > \sqrt{\mu L}$. Hence, we have:

$$\begin{aligned}
\frac{L-\mu}{2}&>\sqrt{\mu L}\\
L-\mu&>2\sqrt{\mu L}\\
L^{2}+\mu^{2}&>6\mu L>6\mu^{2}\quad(\because L>\mu)\\
L&>\sqrt{5}\mu.
\end{aligned}$$

C.3 Proof Of Proposition 2

Proof. For an arbitrary complex number p + qi with p > 0, and using the link function of GDM from (7), we have

$$\begin{aligned}
|\xi(p+qi)|=\sqrt{\left(\frac{1+m-hp}{2\sqrt{m}}\right)^{2}+\left(\frac{hq}{2\sqrt{m}}\right)^{2}}&\leqslant 1\\
\frac{(1+m-hp)^{2}+h^{2}q^{2}}{4m}&\leqslant 1\\
(1+m-hp)^{2}+h^{2}q^{2}&\leqslant 4m\\
(1-m)^{2}+hp(hp-2(1+m))+h^{2}q^{2}&\leqslant 0\\
\frac{(1-m)^{2}+h^{2}q^{2}}{hp}&\leqslant 2(1+m)-hp.
\end{aligned}$$

Notice that the LHS is positive. Therefore, if the RHS is negative, the above inequality cannot hold. In other words, if $\frac{2(1+m)}{h} < p$, GDM cannot stay in the robust region. This is very hard to satisfy, even with a small $p$.

D Missing Proofs In Section 6

Let us consider an affine vector field v(w) = Aw + b and its associated augmented MEG linear operator J:

$$\begin{bmatrix}w_{t+1}-w^{\star}\\ w_{t}-w^{\star}\end{bmatrix}=J\begin{bmatrix}w_{t}-w^{\star}\\ w_{t-1}-w^{\star}\end{bmatrix}\quad\mathrm{with}\quad J=\begin{bmatrix}(1+\beta)I_{d}-hA(I_{d}-\gamma A) & -\beta I_{d}\\ I_{d} & 0_{d}\end{bmatrix},\tag{58}$$
where $I_d$ and $0_d$ respectively stand for the identity and the null matrices. To show the local convergence of (restarted) MEG for non-affine vector fields in Theorem 9, we first establish the following lemma, which connects the augmented state and the non-augmented one.

Lemma 3. Let $P^{\mathrm{MEG}}_{t}$ be the residual polynomial associated with $t$ updates of MEG (c.f., Theorem 1). Let $J$ be defined as in (58). If $w_1 = w_0 - \frac{h}{1+m}\, v(w_0 - \gamma v(w_0))$, we then have

$$J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}=\begin{bmatrix}P_{t}^{\mathrm{MEG}}(A)(w_{1}-w^{\star})\\ P_{t}^{\mathrm{MEG}}(A)(w_{0}-w^{\star})\end{bmatrix}.\tag{59}$$

Consequently, if we denote zt+1 := [wt+1, wt] and zβˆ— := [w ⋆, w⋆], we have

$$\|z_{t+1}-z_{*}\|\leqslant C(t+1)(1-\varphi)^{t}\|z_{0}-z_{*}\|.\tag{60}$$

Proof. Let us express $J^{t}$ such that
$$J^{t}=\begin{bmatrix}P^{11}_{t}(A) & P^{12}_{t}(A)\\ P^{21}_{t}(A) & P^{22}_{t}(A)\end{bmatrix},\quad\text{and}\quad J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}=\begin{bmatrix}P^{11}_{t}(A)(w_{0}-w^{\star})+P^{12}_{t}(A)(w_{0}-w^{\star})\\ P^{21}_{t}(A)(w_{0}-w^{\star})+P^{22}_{t}(A)(w_{0}-w^{\star})\end{bmatrix}.\tag{61}$$
By writing $J^{t+1}=JJ^{t}$ and using the block-matrix form of $J$ in (58), we get that for any $t\geqslant 0$,
$$P^{11}_{t+1}(A)=((1+\beta)I_{d}-hA(I_{d}-\gamma A))P^{11}_{t}(A)-\beta P^{21}_{t}(A),\qquad P^{21}_{t+1}(A)=P^{11}_{t}(A),\tag{62}$$
$$P^{12}_{t+1}(A)=((1+\beta)I_{d}-hA(I_{d}-\gamma A))P^{12}_{t}(A)-\beta P^{22}_{t}(A),\qquad P^{22}_{t+1}(A)=P^{12}_{t}(A).\tag{63}$$
Hence, we have that,

$$P_{t+1}^{11}(A)\overset{(62)}{=}((1+\beta)I_{d}-hA(I_{d}-\gamma A))P_{t}^{11}(A)-\beta P_{t-1}^{11}(A),\tag{64}$$
$$P_{t+1}^{12}(A)\overset{(63)}{=}((1+\beta)I_{d}-hA(I_{d}-\gamma A))P_{t}^{12}(A)-\beta P_{t-1}^{12}(A).\tag{65}$$

We claim that

$$P_{t}^{11}(A)+P_{t}^{12}(A)=P_{t}^{\mathrm{MEG}}(A)\quad\mathrm{for~all}\quad t\geqslant 0.\tag{66}$$
We prove this via induction.

For the base case, using the fact that $w_1 = w_0 - \frac{h}{1+m}\, v(w_0 - \gamma v(w_0))$, we have that

$$(P_{1}^{11}(A)+P_{1}^{12}(A))(w_{0}-w^{\star})=w_{1}-w^{\star}=\left(I_{d}-\tfrac{h}{1+m}A(I_{d}-\gamma A)\right)(w_{0}-w^{\star})=P_{1}^{\rm MEG}(A)(w_{0}-w^{\star}),$$
$$(P_{0}^{11}(A)+P_{0}^{12}(A))(w_{0}-w^{\star})=(P_{1}^{21}(A)+P_{1}^{22}(A))(w_{0}-w^{\star})=I_{d}(w_{0}-w^{\star})=P_{0}^{\rm MEG}(A)(w_{0}-w^{\star}).$$
To show the induction step, by adding (64) and (65), we get

$$\begin{aligned}
P_{t+1}^{11}(A)+P_{t+1}^{12}(A)&=((1+\beta)I_{d}-hA(I_{d}-\gamma A))(P_{t}^{11}(A)+P_{t}^{12}(A))-\beta(P_{t-1}^{11}(A)+P_{t-1}^{12}(A))\\
&\overset{(66)}{=}((1+\beta)I_{d}-hA(I_{d}-\gamma A))P_{t}^{\rm MEG}(A)-\beta P_{t-1}^{\rm MEG}(A),
\end{aligned}$$
where in the last step we used the induction hypothesis. Also notice that $P_{t+1}^{11}(A)+P_{t+1}^{12}(A)=P_{t+1}^{\rm MEG}(A)$ on the left-hand side.

Hence we have for any t β©Ύ 0,

$$(P_{t}^{11}(A)+P_{t}^{12}(A))(w_{0}-w^{\star})=P_{t}^{\mathrm{MEG}}(A)(w_{0}-w^{\star}).$$

Therefore, going back to (61), we have:

$$\begin{aligned}
\begin{bmatrix}w_{t+1}-w^{\star}\\ w_{t}-w^{\star}\end{bmatrix}=J^{t}\begin{bmatrix}w_{1}-w^{\star}\\ w_{0}-w^{\star}\end{bmatrix}
&=\begin{bmatrix}(P^{11}_{t}(A)+P^{12}_{t}(A))(w_{0}-w^{\star})\\ (P^{21}_{t}(A)+P^{22}_{t}(A))(w_{0}-w^{\star})\end{bmatrix}\\
&\overset{(62),(63)}{=}\begin{bmatrix}(P^{11}_{t}(A)+P^{12}_{t}(A))(w_{0}-w^{\star})\\ (P^{11}_{t-1}(A)+P^{12}_{t-1}(A))(w_{0}-w^{\star})\end{bmatrix}\\
&\overset{(66)}{=}\begin{bmatrix}P^{\rm MEG}_{t}(A)(w_{0}-w^{\star})\\ P^{\rm MEG}_{t-1}(A)(w_{0}-w^{\star})\end{bmatrix}
\overset{\text{Thm. 1}}{=}\begin{bmatrix}P^{\rm MEG}_{t}(A)(w_{0}-w^{\star})\\ P^{\rm MEG}_{t}(A)(w_{-1}-w^{\star})\end{bmatrix}\\
&=\begin{bmatrix}P^{\rm MEG}_{t}(A) & 0\\ 0 & P^{\rm MEG}_{t}(A)\end{bmatrix}\begin{bmatrix}w_{0}-w^{\star}\\ w_{-1}-w^{\star}\end{bmatrix}
=\left(P^{\rm MEG}_{t}(A)\otimes I_{2}\right)\begin{bmatrix}w_{0}-w^{\star}\\ w_{-1}-w^{\star}\end{bmatrix},
\end{aligned}\tag{67}$$
where we use the convention that $w_{0}=w_{-1}$. Finally, using the fact that $\|A\otimes B\|=\|A\|\|B\|$ for the $\ell_2$-operator norm (Lancaster & Farahat, 1972), we have

$$\|z_{t+1}-z_{*}\|\leqslant\|P_{t}^{\mathrm{MEG}}(A)\|\,\|z_{0}-z_{*}\|\overset{(10)}{\leqslant}C(t+1)(1-\varphi)^{t}\|z_{0}-z_{*}\|.$$

Proof. We first recall the restarted MEG algorithm we consider in (31):

$$[w_{tk+i+1},w_{tk+i}]=G([w_{tk+i},w_{tk+i-1}])\quad\text{for}\quad 1\leqslant i\leqslant k-1,\quad\text{and then}$$
$$w_{(t+1)k+1}=w_{(t+1)k}-\frac{h}{1+m}\,v\big(w_{(t+1)k}-\gamma v(w_{(t+1)k})\big).$$
In other words, we repeat MEG for $k$ steps, and then restart the momentum at $[w_{(t+1)k+1},w_{(t+1)k}]$.

We can analyze this method as follows, where we denote zt := [wt, wtβˆ’1] and zβˆ— = [w ⋆, w⋆]:

$$\begin{aligned}
\|z_{(t+1)k}-z_{*}\|&=\|G^{(k)}(z_{tk})-z_{*}\|\\
&=\|\nabla G^{(k)}(\tilde{z}_{tk})(z_{tk}-z_{*})\|\\
&\leqslant\|\nabla G^{(k)}(z_{*})(z_{tk}-z_{*})\|+\|(\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*}))(z_{tk}-z_{*})\|\\
&\overset{(60)}{\leqslant}C(k+1)(1-\varphi)^{k}\|z_{tk}-z_{*}\|+\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\,\|z_{tk}-z_{*}\|,
\end{aligned}\tag{68}$$
where in the second line we use the Mean Value Theorem:
$$\exists\,\tilde{z}_{tk}\in[z_{tk},z_{*}]\quad\text{such that}\quad G^{(k)}(z_{tk})=G^{(k)}(z_{*})+\nabla G^{(k)}(\tilde{z}_{tk})(z_{tk}-z_{*})=z_{*}+\nabla G^{(k)}(\tilde{z}_{tk})(z_{tk}-z_{*}),$$
since $z_{*}$ is the fixed point. In the fourth line, we used the fact that $\nabla G^{(k)}(z_{*})(z_{tk}-z_{*})$ exactly corresponds to $k$ updates of MEG when the vector field is affine, as well as Lemma 3 to account for the augmented state.

Now let us consider $\varphi > \varepsilon > 0$ and $k$ large enough such that $C(k+1)(1-\varphi)^{k}\leqslant\left(1-\varphi+\tfrac{\varepsilon}{2}\right)^{k}$. Since $\nabla G$ is assumed to be continuous, $\nabla G^{(k)}$ is continuous too. Therefore, there exists $\delta>0$ such that $\|z_{tk}-z_{*}\|\leqslant\delta$ implies $\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\leqslant\varepsilon'$. In particular, choose $\varepsilon'=(1-\varphi+\varepsilon)^{k}-\left(1-\varphi+\tfrac{\varepsilon}{2}\right)^{k}\sim\frac{k\varepsilon}{2(1-\varphi)}$.

Then, we have

$$\begin{aligned}
\|z_{(t+1)k}-z_{*}\|&\leqslant C(k+1)(1-\varphi)^{k}\|z_{tk}-z_{*}\|+\|\nabla G^{(k)}(\tilde{z}_{tk})-\nabla G^{(k)}(z_{*})\|\,\|z_{tk}-z_{*}\|\\
&\leqslant\left(1-\varphi+\tfrac{\varepsilon}{2}\right)^{k}\|z_{tk}-z_{*}\|+\varepsilon'\|z_{tk}-z_{*}\|\\
&\leqslant(1-\varphi+\varepsilon)^{k}\|z_{tk}-z_{*}\|<\|z_{tk}-z_{*}\|<\|z_{0}-z_{*}\|.
\end{aligned}$$

From the above, we can conclude that for all Ξ΅ > 0, there exists k > 0 and Ξ΄ > 0 such that, for all initialization satisfying βˆ₯w0 βˆ’ w ⋆βˆ₯ β©½ Ξ΄, the restarted MEG described above satisfies:

$$\|w_{t}-w^{\star}\|=O\left((1-\varphi+\varepsilon)^{t}\right)\|w_{0}-w^{\star}\|.$$