# Federated Minimax Optimization With Client Heterogeneity

Pranay Sharma *pranaysh@andrew.cmu.edu* Department of Electrical and Computer Engineering, Carnegie Mellon University

Rohan Panda *rohanpan@andrew.cmu.edu* Department of Electrical and Computer Engineering, Carnegie Mellon University

Gauri Joshi *gaurij@andrew.cmu.edu* Department of Electrical and Computer Engineering, Carnegie Mellon University

Reviewed on OpenReview: *https://openreview.net/forum?id=NnUmg1chLL*

## Abstract

Minimax optimization has seen a surge in interest with the advent of modern applications such as GANs, and it is inherently more challenging than simple minimization. The difficulty is exacerbated by the training data residing at multiple edge devices or *clients*, especially when these clients can have heterogeneous datasets and heterogeneous local computation capabilities. We propose a general federated minimax optimization framework that subsumes such settings and several existing methods like Local SGDA. We show that naive aggregation of model updates made by clients running an unequal number of local steps can result in optimizing a mismatched objective function, a phenomenon previously observed in standard federated minimization. To fix this problem, we propose normalizing the client updates by the number of local steps. We analyze the convergence of the proposed algorithm for classes of nonconvex-concave and nonconvex-nonconcave functions and characterize the impact of heterogeneous client data, partial client participation, and heterogeneous local computations. For all the function classes considered, we significantly improve the existing computation and communication complexity results. Experimental results support our theoretical claims.

## 1 Introduction

The massive surge in machine learning (ML) research in the past decade has brought forth new applications that cannot be modeled as simple minimization problems. Many of these problems, including generative adversarial networks (GANs) Goodfellow et al. (2014); Arjovsky et al. (2017); Sanjabi et al. (2018), adversarial neural network training Madry et al. (2018), robust optimization Namkoong & Duchi (2016); Mohajerin Esfahani & Kuhn (2018), and fair machine learning Madras et al. (2018); Mohri et al. (2019), have an underlying min-max structure. However, the underlying problem is often nonconvex, while classical minimax theory deals almost exclusively with convex-concave problems.

Another feature of modern ML applications is the inherently distributed nature of the training data Xing et al. (2016). The data collection is often outsourced to edge devices or *clients*. However, the clients may then be unable (due to resource constraints) or unwilling (due to privacy concerns) to share their data with a central server. Federated Learning (FL) Konečný et al. (2016); Kairouz et al. (2019) was proposed to alleviate this problem. In exchange for retaining control of their data, the clients shoulder some of the computational load, and run part of the training process locally, using only their own data. The communication with the server is infrequent, leading to further resource savings. Since its introduction, FL has been an active area of research, with some remarkable successes Li et al. (2020); Wang et al. (2021). Research has shown practical benefits of, and provided theoretical justifications for, commonly used practical techniques, such as multiple local updates at the clients Stich (2018); Khaled et al. (2020); Koloskova et al. (2020); Wang & Joshi (2021), partial client participation Yang et al. (2021), and communication compression Hamer et al. (2020); Chen et al. (2021). Further, the impact of heterogeneity in the clients' local data Zhao et al. (2018); Sattler et al. (2019), as well as in their system capabilities Wang et al. (2020); Mitra et al. (2021), has been studied. However, all this research has focused almost solely on simple minimization problems.

Table 1: Comparison of (**per client**) stochastic gradient complexity and the number of communication rounds needed to reach an ϵ-stationary solution (Definition 1), for different classes of nonconvex minimax problems. Here, n is the total number of clients. For a fair comparison with existing works, our results in this table are specialized to the case when all clients (i) have equal weights (p_i = 1/n), (ii) perform an equal number of local updates (τ_i = τ), and (iii) use the same local update algorithm SGDA. The complexity results are stated under full client participation (FCP). See Table 2 for a comparison under more general settings, when (i)-(iii) do not hold.

| Work | System Heterogeneity^a | Partial Client Participation | Stochastic Gradient Complexity | Communication Rounds |
|---|---|---|---|---|
| **Nonconvex-Strongly-concave (NC-SC) / Nonconvex-Polyak-Łojasiewicz (NC-PL): Theorem 1** | | | | |
| (n = 1) Lin et al. (2020a) | - | - | O(1/ϵ⁴) | - |
| Sharma et al. (2022) | ✗ | ✗ | O(1/(nϵ⁴)) | O(1/ϵ³) |
| Yang et al. (2022a) | ✗ | ✓ | O(1/(nϵ⁴)) | O(1/ϵ²) |
| **Ours** (Corollary 1.2, Remark 3) | ✓ | ✓ | O(1/(nϵ⁴)) | O(1/ϵ²) |
| **Nonconvex-Concave (NC-C): Theorem 2** | | | | |
| (n = 1) Lin et al. (2020a) | - | - | O(1/ϵ⁸) | - |
| Sharma et al. (2022) | ✗ | ✗ | O(1/(nϵ⁸)) | O(1/ϵ⁷) |
| **Ours** (Corollary 2.2) | ✓ | ✓ | O(1/(nϵ⁸)) | O(1/ϵ⁴) |
| **Nonconvex-One-point-concave (NC-1PC): Theorem 2** | | | | |
| Deng & Mahdavi (2021) | ✗ | ✗ | O(1/ϵ¹²) | O(n^{1/6}/ϵ⁸) |
| Sharma et al. (2022) | ✗ | ✗ | O(1/ϵ⁸) | O(1/ϵ⁷) |
| **Ours** (Remark 5) | ✓ | ✓ | O(1/(nϵ⁸)) | O(1/ϵ⁴) |

^a Individual clients can run an unequal number of local iterations, using different local optimizers (see Section 4).

With its increasing usage in large-scale applications, FL systems must adapt to a wide range of clients. Data heterogeneity has received significant attention from the community. However, system-level heterogeneity remains relatively unexplored. The effect of client variability or *heterogeneity* can be controlled by forcing all the clients to carry out an equal number of local updates and utilize the same local optimizer Yu et al. (2019); Haddadpour et al. (2019). However, this approach is inefficient if the client dataset sizes are widely different. It would also entail faster clients sitting idle for long durations Reisizadeh et al. (2022); Tziotis et al. (2022), waiting for stragglers to finish. Additionally, using the same optimizer might be inefficient or expensive for some clients, depending on their system capabilities. Therefore, adapting to system-level heterogeneity is a desideratum for real-world FL schemes.

**Contributions.** We consider a general federated minimax optimization framework, in the presence of both inter-client data and system heterogeneity. **System heterogeneity** means the participating clients can run an unequal number of local steps and utilize different local solvers.
We consider the problem

$$\min_{\mathbf{x}\in\mathbb{R}^{d_x}}\max_{\mathbf{y}\in\mathcal{Y}}\left\{F(\mathbf{x},\mathbf{y}):=\sum_{i=1}^{n}p_{i}f_{i}(\mathbf{x},\mathbf{y})\right\},\tag{1}$$

where $f_i$ is the local loss of client i, $p_i$ is the weight assigned to client i (e.g., the relative sample size), and n is the total number of clients. We study several classes of nonconvex minimax problems (see Table 1). Further,

- In our proposed algorithm, the participating clients may each perform a different number of local steps, with different local optimizers. In this setting, naive aggregation of local model updates (as done in existing methods like Local Stochastic Gradient Descent Ascent) may lead to convergence in terms of a mismatched global objective (Corollaries 1.1, 2.1). We propose a simple normalization strategy to fix this problem.
- We achieve order-optimal or state-of-the-art computation complexity and significantly improve the communication complexity of existing methods (Corollaries 1.2, 2.2).
- In the special case where all the clients (i) are assigned equal weights p_i = 1/n in (1), (ii) carry out an equal number of local updates (τ_i = τ for all i), and (iii) utilize the same local-update algorithm, our results become directly comparable with existing work (see Table 1) and improve upon it as follows.
  1. For nonconvex-strongly-concave (NC-SC, Corollary 1.2) and nonconvex-PL (NC-PL, Remark 3) problems, our method has the order-optimal gradient complexity O(1/(nϵ⁴)). Further, we improve the communication cost from O(1/ϵ³) in Sharma et al. (2022) to O(1/ϵ²).¹
  2. For nonconvex-concave (NC-C, Corollary 2.2) and nonconvex-one-point-concave (NC-1PC, Remark 5) problems, we achieve state-of-the-art gradient complexity, while significantly improving the communication cost from O(1/ϵ⁷) in Sharma et al. (2022) to O(1/ϵ⁴). For NC-1PC functions, we prove the linear speedup in gradient complexity with n that was conjectured in Sharma et al. (2022).
  3. As an intermediate result in our proof, we prove the theoretical convergence of Local SGD for one-point-convex function minimization (see Lemma C.5 in Appendix C.4). The achieved convergence rate is the same as that shown for convex minimization in the existing literature Khaled et al. (2020).

It is worth pointing out that our proof technique differs from the existing minimax literature (e.g., Sharma et al. (2022); Yang et al. (2022b)). With all the clients carrying out the same number of local steps, existing federated analyses rely on virtual sequences of average iterates to mimic the proof steps of centralized settings Lin et al. (2020a); Yang et al. (2022c). In our case, since different clients run different numbers of local steps, this strategy is no longer viable (see Remark 9).

¹The recent work Yang et al. (2022a) proposes the FSGDA algorithm and also achieves O(1/ϵ²) communication cost for NC-PL functions. However, our work is more general, since we allow a different number of local steps and different local solvers at the clients.

## 2 Related Work

## 2.1 Single-Client Minimax

**Nonconvex-Strongly-concave (NC-SC).** To our knowledge, Lin et al. (2020a) is the first work to analyze a single-loop algorithm for stochastic (and deterministic) NC-SC problems. Although the O(κ³/ϵ⁴) complexity shown is optimal in ϵ, the algorithm requires an O(ϵ⁻²) batch size. Qiu et al. (2020) utilized momentum to achieve O(ϵ⁻⁴) convergence with O(1) batch size. Recent works Yang et al. (2022c); Sharma et al. (2022) achieve the same rate without momentum. Yang et al. (2022c) also improved the dependence on the condition number κ. Second-order stationarity for NC-SC has recently been studied in Luo & Chen (2021).
Lower bounds for this problem class have appeared in Luo et al. (2020); Li et al. (2021); Zhang et al. (2021).

**Nonconvex-Concave (NC-C).** Again, Lin et al. (2020a) was the first to analyze a single-loop algorithm for stochastic NC-C problems, proving O(ϵ⁻⁸) complexity. In deterministic problems, this has been improved using nested Nouiehed et al. (2019); Thekumparampil et al. (2019) as well as single-loop Xu et al. (2020); Zhang et al. (2020) algorithms. For stochastic problems, Rafique et al. (2021) and the recent work Zhang et al. (2022) improved the complexity to O(ϵ⁻⁶). However, both algorithms have a nested structure, which at every step solves a simpler problem iteratively. Achieving O(ϵ⁻⁶) complexity with a single-loop algorithm has so far proved elusive.

## 2.2 Distributed/Federated Minimax

Recent years have also seen an increasing body of work on distributed minimax optimization. Some of this work is focused on decentralized settings, as in Rogozin et al. (2021); Beznosikov et al. (2021b,c); Metelev et al. (2022). Of immediate relevance to us is the federated setting, where clients carry out multiple local updates between successive communication rounds. The relevant works which focused on convex-concave problems include Reisizadeh et al. (2020); Hou et al. (2021); Liao et al. (2021); Sun & Wei (2022). Special classes of nonconvex minimax problems in the federated setting have been studied in recent works, such as nonconvex-linear Deng et al. (2020), nonconvex-PL Deng & Mahdavi (2021); Xie et al. (2021), and nonconvex-one-point-concave Deng & Mahdavi (2021). The complexity guarantees for several function classes considered in Deng & Mahdavi (2021) were further improved in Sharma et al. (2022). However, all these works consider specialized federated settings, either assuming full client participation, or system-wise identical clients, each carrying out an equal number of local updates. As we see in this paper, partial client participation is the most significant source of error in simple FL algorithms. Also, system-level heterogeneity can have crucial implications for algorithm performance.

**Comparison with Wang et al. (2020); Sharma et al. (2022); Yang et al. (2022a).** Wang et al. (2020) was, to our knowledge, the first work to consider the problem of system heterogeneity in simple minimization problems, and proposed a normalized averaging scheme to avoid optimizing an inconsistent objective. Compared to Wang et al. (2020), we consider a more challenging problem and achieve higher communication savings (Table 1).² Sharma et al. (2022) studied minimax problems in the federated setting but assumed an equal number of SGDA-like local updates, with full client participation. The recent work Yang et al. (2022a) considers the NC-SC problem with full and partial client participation and achieves communication savings similar to ours. In comparison, our work considers a more general minimax FL framework with partial client participation, clients running an unequal number of local updates, and using different local solvers. Further, we analyze multiple classes of nonconvex-concave and nonconvex-nonconcave functions, improving the communication and computation complexity of existing minimax methods.

²Under the conditions p_i = 1/n, τ_i = τ for all i, for smooth minimization problems, Wang et al. (2020) requires O(1/ϵ³) communication rounds. For NC-SC problems (a harder problem class), we show an improved O(1/ϵ²) communication rounds.
## 3 Preliminaries

**Notations.** We let ∥·∥ denote the Euclidean norm ∥·∥₂. Given a positive integer m, the set {1, 2, ..., m} is denoted by [m]. Vectors at client i are denoted with subscript i, e.g., x_i, while iteration indices are denoted using superscripts, e.g., y^{(t)} or y^{(t,k)}. Given a function g, we define its gradient vector as $[\nabla_x g(\mathbf{x},\mathbf{y})^\top, \nabla_y g(\mathbf{x},\mathbf{y})^\top]^\top$, and its stochastic gradient as ∇g(x, y; ξ), where ξ denotes the randomness.

**Convergence Metrics.** In the presence of nonconvexity, we can only prove convergence to an *approximate* stationary point, which is defined next.

**Definition 1** (ϵ-Stationarity). A point x is an ϵ-stationary point of a differentiable function g if ∥∇g(x)∥ ≤ ϵ.

**Definition 2** (Stochastic Gradient Complexity). Stochastic gradient (SG) complexity is the total number of gradients computed by all the clients during the course of the algorithm. In special cases, where all the clients are weighted equally (p_i = 1/n, for all i ∈ [n]) and carry out an equal number of local steps τ, we state the *per-client* gradient complexity for comparison with existing work. See Table 1 and Corollaries 1.2 and 2.2.

**Definition 3** (Communication Rounds). During a single communication round, the server sends its *global* model to a set of clients, which carry out multiple local updates starting from the same model, and return their *local* vectors to the server. The server then aggregates these local vectors to arrive at a new global model. Throughout this paper, we denote the number of communication rounds by T.

Next, we discuss some assumptions used in the paper.

**Assumption 1** (Smoothness). Each local function f_i is differentiable and has Lipschitz continuous gradients. That is, there exists a constant L_f > 0 such that at each client i ∈ [n], for all x, x′ ∈ R^{d_x} and y, y′ ∈ Y,

$$\left\|\nabla f_{i}(\mathbf{x},\mathbf{y})-\nabla f_{i}(\mathbf{x}',\mathbf{y}')\right\|\leq L_{f}\left\|(\mathbf{x},\mathbf{y})-(\mathbf{x}',\mathbf{y}')\right\|.$$

**Assumption 2** (Bounded Diameter). The constraint set Y is convex and bounded.

**Assumption 3** (*Local* Variance). The stochastic gradient oracle at each client is *unbiased*. Also, there exist constants σ_L, β_L ≥ 0 such that at each client i ∈ [n], for all x, y,

$$\mathbb{E}_{\xi_{i}}[\nabla f_{i}(\mathbf{x},\mathbf{y};\xi_{i})]=\nabla f_{i}(\mathbf{x},\mathbf{y}),\qquad\mathbb{E}_{\xi_{i}}\left\|\nabla f_{i}(\mathbf{x},\mathbf{y};\xi_{i})-\nabla f_{i}(\mathbf{x},\mathbf{y})\right\|^{2}\leq\beta_{L}^{2}\left\|\nabla f_{i}(\mathbf{x},\mathbf{y})\right\|^{2}+\sigma_{L}^{2}.$$

**Assumption 4** (*Global* Heterogeneity). For any set of non-negative weights $\{w_i\}_{i=1}^{n}$ such that $\sum_{i=1}^{n}w_i=1$, there exist constants β_G ≥ 1, σ_G ≥ 0 such that for all x, y,

$$\sum_{i=1}^{n}w_{i}\left\|\nabla_{x}f_{i}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}\leq\beta_{G}^{2}\left\|\sum_{i=1}^{n}w_{i}\nabla_{x}f_{i}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}+\sigma_{G}^{2},\qquad\sum_{i=1}^{n}w_{i}\left\|\nabla_{y}f_{i}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}\leq\beta_{G}^{2}\left\|\sum_{i=1}^{n}w_{i}\nabla_{y}f_{i}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}+\sigma_{G}^{2}.$$

If all f_i's are identical, we have β_G = 1 and σ_G = 0. Most existing work uses simplified versions of Assumptions 3, 4, assuming β_L = 0 and/or β_G = 0.
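Assumption 4 can be checked numerically at any given point. The following minimal numpy sketch (our illustration, not part of the analysis; the helper name `heterogeneity_gap` and the toy gradients are ours) compares the two sides of the first inequality in Assumption 4 at a single point (x, y).

```python
import numpy as np

def heterogeneity_gap(grads, weights):
    """Empirical check of Assumption 4 at one point (x, y).

    grads: (n, d) array, row i = client i's gradient of f_i at (x, y).
    weights: (n,) nonnegative weights summing to 1.
    Returns (weighted avg of squared norms, squared norm of weighted avg);
    Assumption 4 requires the first <= beta_G^2 * second + sigma_G^2.
    """
    avg_sq_norm = np.sum(weights * np.sum(grads**2, axis=1))
    sq_norm_of_avg = np.sum((weights[:, None] * grads).sum(axis=0)**2)
    return avg_sq_norm, sq_norm_of_avg

rng = np.random.default_rng(0)
n, d = 15, 10
# Toy client gradients: a shared component plus client-specific offsets;
# the offsets model inter-client data heterogeneity.
grads = rng.normal(size=d) + 0.5 * rng.normal(size=(n, d))
weights = np.full(n, 1.0 / n)
lhs, rhs = heterogeneity_gap(grads, weights)
print(f"avg ||grad_i||^2 = {lhs:.3f}, ||avg grad||^2 = {rhs:.3f}")
# With identical clients (zero offsets), lhs == rhs: beta_G = 1, sigma_G = 0.
```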
## 4 Algorithm For Heterogeneous Federated Minimax Optimization

In this section, we propose a federated minimax algorithm to handle system heterogeneity across clients.

## 4.1 Limitations Of Local SGDA

Following the success of FedAvg McMahan et al. (2017) in FL, Deng & Mahdavi (2021) was the first to explore a simple extension, Local Stochastic Gradient Descent-Ascent (Local SGDA), for minimax problems. Between successive communication rounds, clients take multiple simultaneous descent/ascent steps to respectively update the min-variable x and max-variable y. Subsequent work in Sharma et al. (2022) improved the convergence results and showed that Local SGDA achieves optimal gradient complexity for several classes of nonconvex minimax problems.

However, existing work on Local SGDA also assumes the participation of all n clients in every communication round. More crucially, as observed with simple minimization problems Wang et al. (2020), if clients carry out an unequal number of local updates, or if their local optimizers are not all the same, Local SGDA (like FedAvg) might converge to the stationary point of a different objective. This is further discussed in Sections 5.1 and 5.2, and illustrated in Figure 1, where the learning process gets disproportionately skewed towards the clients carrying out more local updates.

![4_image_0.png](4_image_0.png)

Figure 1: FedAvg with heterogeneous local updates. The green (red) triangle represents the local optimizer of f₁ (f₂), while (x*, y*) is the global optimizer. The number of local updates at the clients is τ₁ = 2, τ₂ = 5.

**Generalized Local SGDA Update Rule.** To understand this mismatched convergence phenomenon with naive aggregation in Local SGDA, recall that Local SGDA updates are of the form

$$\mathbf{x}^{(t+1)}=\mathbf{x}^{(t)}+\gamma_{x}^{s}\sum_{i=1}^{n}p_{i}\Delta_{\mathbf{x},i}^{(t)},\qquad\mathbf{y}^{(t+1)}=\mathbf{y}^{(t)}+\gamma_{y}^{s}\sum_{i=1}^{n}p_{i}\Delta_{\mathbf{y},i}^{(t)},$$

where $\gamma_x^s,\gamma_y^s$ are the server learning rates, $\Delta_{\mathbf{x},i}^{(t)}=\frac{1}{\eta_x^c}\big(\mathbf{x}_i^{(t,\tau_i^{(t)})}-\mathbf{x}^{(t)}\big)$ and $\Delta_{\mathbf{y},i}^{(t)}=\frac{1}{\eta_y^c}\big(\mathbf{y}_i^{(t,\tau_i^{(t)})}-\mathbf{y}^{(t)}\big)$ are the scaled local updates, $\mathbf{x}_i^{(t,\tau_i^{(t)})}$ is the iterate at client i after taking $\tau_i^{(t)}$ local steps, and $\eta_x^c,\eta_y^c$ are the client learning rates. Let us consider a generalized version of this update rule where $\Delta_{\mathbf{x},i}^{(t)},\Delta_{\mathbf{y},i}^{(t)}$ are linear combinations of the local stochastic gradients computed by client i, e.g., $\Delta_{\mathbf{y},i}^{(t)}=\sum_{k=0}^{\tau_i^{(t)}-1}a_i^{(t,k)}\nabla_y f_i(\mathbf{x}_i^{(t,k)},\mathbf{y}_i^{(t,k)};\xi_i^{(t,k)})$, where $a_i^{(t,k)}\geq0$. Commonly used client optimizers, such as SGD, local momentum, and variable local learning rates, can be accommodated in this general form (see Appendix A.1 for some examples). For this more general form, we can rewrite the x, y updates at the server as follows:

$$\mathbf{x}^{(t+1)}=\mathbf{x}^{(t)}-\gamma_{x}^{s}\sum_{i=1}^{n}p_{i}\,G_{\mathbf{x},i}^{(t)}\bar{a}_{i}^{(t)}=\mathbf{x}^{(t)}-\underbrace{\Big(\sum_{j=1}^{n}p_{j}\|\bar{a}_{j}^{(t)}\|_{1}\Big)}_{\tau_{\text{eff}}^{(t)}}\gamma_{x}^{s}\sum_{i=1}^{n}\underbrace{\frac{p_{i}\|\bar{a}_{i}^{(t)}\|_{1}}{\sum_{j=1}^{n}p_{j}\|\bar{a}_{j}^{(t)}\|_{1}}}_{w_{i}}\underbrace{\frac{G_{\mathbf{x},i}^{(t)}\bar{a}_{i}^{(t)}}{\|\bar{a}_{i}^{(t)}\|_{1}}}_{\mathbf{g}_{\mathbf{x},i}^{(t)}},\qquad\mathbf{y}^{(t+1)}=\mathbf{y}^{(t)}+\tau_{\text{eff}}^{(t)}\gamma_{y}^{s}\sum_{i=1}^{n}w_{i}\,\mathbf{g}_{\mathbf{y},i}^{(t)},\tag{2}$$

where $G_{\mathbf{x},i}^{(t)}=[\nabla_x f_i(\mathbf{x}_i^{(t,k)},\mathbf{y}_i^{(t,k)};\xi_i^{(t,k)})]_{k=0}^{\tau_i^{(t)}-1}\in\mathbb{R}^{d_x\times\tau_i^{(t)}}$ contains the $\tau_i^{(t)}$ stochastic gradients stacked column-wise, $\bar{a}_i^{(t)}=[a_i^{(t,0)},a_i^{(t,1)},\ldots,a_i^{(t,\tau_i^{(t)}-1)}]^\top$, $\mathbf{g}_{\mathbf{x},i}^{(t)},\mathbf{g}_{\mathbf{y},i}^{(t)}$ are the normalized aggregates of the stochastic gradients, and $\tau_{\text{eff}}^{(t)}$ is the *effective* number of local steps. Note that for simplicity, we assume that the constraint set Y has a large diameter. However, our algorithm can be easily modified to accommodate projection steps.

![5_image_0.png](5_image_0.png)

Figure 2: Generalized update rule in (2). Note that $(\mathbf{g}_{\mathbf{x},i}^{(t)},\mathbf{g}_{\mathbf{y},i}^{(t)})=\frac{1}{\tau_i}(\Delta_{\mathbf{x},i}^{(t)},\Delta_{\mathbf{y},i}^{(t)})$. Also, at the server, the weighted sum $\sum_{i=1}^{n}w_i\mathbf{g}_{\mathbf{x},i}^{(t)}$ gets scaled by $\tau_{\text{eff}}^{(t)}$.
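Before presenting the fix, it is instructive to simulate the mismatch. The following self-contained Python sketch (our toy construction, mirroring Figure 1: two quadratic clients with τ₁ = 2, τ₂ = 5 and exact gradients) compares naive FedAvg-style aggregation of the raw updates Δᵢ against the normalized aggregation of (2) with wᵢ = pᵢ, introduced formally in Section 4.2.

```python
import numpy as np

c = [-1.0, 2.0]                  # client optima: f_i(x) = 0.5 * (x - c_i)^2
tau = [2, 5]                     # heterogeneous numbers of local steps
eta, rounds = 0.01, 4000         # client lr; server lr set equal to eta

def run(normalize):
    x = 0.0
    for _ in range(rounds):
        grad_sums = []
        for ci, ti in zip(c, tau):
            xi, gsum = x, 0.0
            for _ in range(ti):
                g = xi - ci              # exact local gradient of f_i
                xi -= eta * g            # local GD step
                gsum += g
            grad_sums.append(gsum)
        if normalize:                    # normalized: average per-step
            tau_eff = np.mean(tau)       # directions, scale by tau_eff
            x -= eta * tau_eff * np.mean([s / t for s, t in zip(grad_sums, tau)])
        else:                            # naive Local-SGDA/FedAvg aggregation
            x -= eta * np.mean(grad_sums)
    return x

print(f"true optimum : {np.mean(c):.3f}")   # 0.500, minimizer of (f1 + f2)/2
print(f"naive        : {run(False):.3f}")   # ~1.13, near tau-weighted point 8/7
print(f"normalized   : {run(True):.3f}")    # ~0.49, recovers the true optimum
```

As the naive variant shows, the fixed point drifts toward the client running more local steps, exactly the skew illustrated in Figure 1.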
Similar to the observation for simple minimization problems in Wang et al. (2020), we see in Theorems 1, 2 that the resulting iterates of this general algorithm end up converging to the stationary point of a different objective $\widetilde{F}=\sum_{i=1}^{n}w_i f_i$. Further, in Corollary 1.1, we observe that this mismatch is a result of using the weights $w_i$ in (2) to weigh the clients' contributions.

## 4.2 Proposed Normalized Federated Minimax Algorithm

From the generalized update rule, we can see that setting the weights w_i equal to p_i will ensure that the surrogate objective F̃ matches the original global objective F. Setting w_i = p_i results in normalization of the local progress at each client before aggregation at the server. As a result, we can preserve convergence to a stationary point of the original objective function F, even with heterogeneous $\{\tau_i^{(t)}\}$, as we see in Theorem 1 and Theorem 2.

**Algorithm 1** Fed-Norm-SGDA and Fed-Norm-SGDA+

1: **Input:** initialization x⁽⁰⁾, y⁽⁰⁾, number of communication rounds T, learning rates: client {η_x^c, η_y^c}, server {γ_x^s, γ_y^s}, #local-updates {τ_i^{(t)}}_{i,t}, S, s = −1
2: **for** t = 0 to T − 1 **do**
3: &nbsp;&nbsp;Server selects client set C^{(t)}; sends them (x^{(t)}, y^{(t)})
4: &nbsp;&nbsp;**if** t mod S = 0 **then**
5: &nbsp;&nbsp;&nbsp;&nbsp;s ← s + 1
6: &nbsp;&nbsp;&nbsp;&nbsp;Server sends x̂^{(s)} = x^{(t)} to clients in C^{(t)}
7: &nbsp;&nbsp;**end if**
8: &nbsp;&nbsp;x_i^{(t,0)} = x^{(t)}, y_i^{(t,0)} = y^{(t)} for i ∈ C^{(t)}
9: &nbsp;&nbsp;**for** k = 0, ..., τ_i^{(t)} − 1 **do**
10: &nbsp;&nbsp;&nbsp;&nbsp;x_i^{(t,k+1)} = x_i^{(t,k)} − η_x^c a_i^{(t,k)} ∇_x f_i(x_i^{(t,k)}, y_i^{(t,k)}; ξ_i^{(t,k)})
11: &nbsp;&nbsp;&nbsp;&nbsp;y_i^{(t,k+1)} = y_i^{(t,k)} + η_y^c a_i^{(t,k)} ∇_y f_i(x̂^{(s)}, y_i^{(t,k)}; ξ_i^{(t,k)}) &nbsp;# y-update for Fed-Norm-SGDA+
12: &nbsp;&nbsp;&nbsp;&nbsp;y_i^{(t,k+1)} = y_i^{(t,k)} + η_y^c a_i^{(t,k)} ∇_y f_i(x_i^{(t,k)}, y_i^{(t,k)}; ξ_i^{(t,k)}) &nbsp;# y-update for Fed-Norm-SGDA
13: &nbsp;&nbsp;**end for**
14: &nbsp;&nbsp;Client i aggregates its gradients to compute g_{x,i}^{(t)}, g_{y,i}^{(t)}:
15: &nbsp;&nbsp;&nbsp;&nbsp;g_{x,i}^{(t)} = Σ_{k=0}^{τ_i^{(t)}−1} (a_i^{(t,k)}/∥ā_i^{(t)}∥₁) ∇_x f_i(x_i^{(t,k)}, y_i^{(t,k)}; ξ_i^{(t,k)})
16: &nbsp;&nbsp;&nbsp;&nbsp;g_{y,i}^{(t)} = Σ_{k=0}^{τ_i^{(t)}−1} (a_i^{(t,k)}/∥ā_i^{(t)}∥₁) ∇_y f_i(x̂^{(s)}, y_i^{(t,k)}; ξ_i^{(t,k)}) &nbsp;# Fed-Norm-SGDA+
17: &nbsp;&nbsp;&nbsp;&nbsp;g_{y,i}^{(t)} = Σ_{k=0}^{τ_i^{(t)}−1} (a_i^{(t,k)}/∥ā_i^{(t)}∥₁) ∇_y f_i(x_i^{(t,k)}, y_i^{(t,k)}; ξ_i^{(t,k)}) &nbsp;# Fed-Norm-SGDA
18: &nbsp;&nbsp;Clients i ∈ C^{(t)} communicate {g_{x,i}^{(t)}, g_{y,i}^{(t)}} to the server
19: &nbsp;&nbsp;Server computes aggregate vectors {g_x^{(t)}, g_y^{(t)}} using (3)
20: &nbsp;&nbsp;Server step: x^{(t+1)} = x^{(t)} − τ_eff^{(t)} γ_x^s g_x^{(t)}, y^{(t+1)} = y^{(t)} + τ_eff^{(t)} γ_y^s g_y^{(t)}
21: **end for**
22: **Return:** x̄^{(T)} drawn uniformly at random from {x^{(t)}}_{t=1}^T

The algorithm follows the steps given in Algorithm 1. In each communication round t, the server selects a client set C^{(t)} and communicates its model parameters (x^{(t)}, y^{(t)}) to these clients. The selected clients then run multiple local stochastic gradient steps. The number of local steps {τ_i^{(t)}} can vary across clients and across rounds. At the end of τ_i^{(t)} local steps, client i aggregates its local stochastic gradients into {g_{x,i}^{(t)}, g_{y,i}^{(t)}}, which are then sent to the server. Note that the gradients at client i, $\{\nabla f_i(\cdot,\cdot;\xi_i^{(t,k)})\}_{k=0}^{\tau_i^{(t)}-1}$, are normalized by $\|\bar{a}_i^{(t)}\|_1$, where $\bar{a}_i^{(t)}=[a_i^{(t,0)},a_i^{(t,1)},\ldots,a_i^{(t,\tau_i^{(t)}-1)}]^\top$ is the vector of weights assigned to the individual stochastic gradients in the local updates.³ The server aggregates these local vectors to compute global direction estimates g_x^{(t)}, g_y^{(t)}, which are then used to update the server model parameters (x^{(t)}, y^{(t)}).

³For Local SGDA Deng & Mahdavi (2021); Sharma et al. (2022), $a_i^{(t,k)}=1$ for all i ∈ [n], t ∈ [T], k ∈ [τ_i^{(t)}], and $\|\bar{a}_i^{(t)}\|_1=\tau_i^{(t)}$. Therefore, g_{x,i}^{(t)}, g_{y,i}^{(t)} are simply the averages of the stochastic gradients computed in the t-th round.
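For readers who prefer code, the following Python sketch outlines one way to implement a communication round of Algorithm 1. It is our paraphrase under simplifying assumptions (the `clients[i].grad` oracle and weight vector `clients[i].a` are hypothetical interfaces), not the reference implementation, and it folds in the client subsampling and unbiased scaling defined in (3) below.

```python
import numpy as np

def fed_norm_sgda_round(x, y, clients, p, P, eta_x, eta_y,
                        gamma_x, gamma_y, rng, x_snapshot=None):
    """One round of Algorithm 1 (a sketch, not reference code).

    Each element of `clients` exposes grad(x, y) -> (gx, gy), a stochastic
    gradient oracle, and a weight vector client.a of length tau_i. `p` holds
    the client weights p_i (we set w_i = p_i, as in Section 4.2). For
    Fed-Norm-SGDA+, the caller refreshes `x_snapshot` every S rounds.
    """
    n = len(clients)
    chosen = rng.choice(n, size=P, replace=False)          # WOR client sampling
    tau_eff = sum(p[i] * clients[i].a.sum() for i in range(n))
    g_x, g_y = np.zeros_like(x), np.zeros_like(y)
    for i in chosen:
        cl, a = clients[i], clients[i].a
        xi, yi = x.copy(), y.copy()
        acc_x, acc_y = np.zeros_like(x), np.zeros_like(y)
        for k in range(len(a)):
            if x_snapshot is None:                         # Fed-Norm-SGDA
                gx, gy = cl.grad(xi, yi)
            else:                                          # Fed-Norm-SGDA+
                gx, _ = cl.grad(xi, yi)
                _, gy = cl.grad(x_snapshot, yi)            # y-grad at snapshot
            xi -= eta_x * a[k] * gx                        # line 10
            yi += eta_y * a[k] * gy                        # lines 11-12
            acc_x += a[k] * gx                             # accumulate, line 15
            acc_y += a[k] * gy                             # lines 16-17
        w_tilde = p[i] * n / P                             # unbiased weight, eq. (3)
        g_x += w_tilde * acc_x / a.sum()                   # normalize by ||a_i||_1
        g_y += w_tilde * acc_y / a.sum()
    # Server step, line 20
    return x - tau_eff * gamma_x * g_x, y + tau_eff * gamma_y * g_y
```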
**Client Selection.** In each round t, the server samples |C^{(t)}| clients uniformly at random *without replacement* (WOR). While aggregating client updates at the server, the update of client i is weighed by $\tilde{w}_i=w_i\,n/|\mathcal{C}^{(t)}|$, i.e.,

$$\mathbf{g}_{\mathbf{x}}^{(t)}=\sum_{i\in\mathcal{C}^{(t)}}\tilde{w}_{i}\,\mathbf{g}_{\mathbf{x},i}^{(t)},\qquad\mathbf{g}_{\mathbf{y}}^{(t)}=\sum_{i\in\mathcal{C}^{(t)}}\tilde{w}_{i}\,\mathbf{g}_{\mathbf{y},i}^{(t)}.\tag{3}$$

Note that $\mathbb{E}_{\mathcal{C}^{(t)}}[\mathbf{g}_{\mathbf{x}}^{(t)}]=\sum_{i=1}^{n}w_i\mathbf{g}_{\mathbf{x},i}^{(t)}$ and $\mathbb{E}_{\mathcal{C}^{(t)}}[\mathbf{g}_{\mathbf{y}}^{(t)}]=\sum_{i=1}^{n}w_i\mathbf{g}_{\mathbf{y},i}^{(t)}$.
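The unbiasedness claim above is easy to verify numerically. The snippet below (ours, not from the paper) averages the estimator in (3) over many sampled client sets and compares it with the full weighted sum.

```python
import numpy as np

rng = np.random.default_rng(1)
n, P, d, trials = 15, 5, 4, 100_000
w = rng.dirichlet(np.ones(n))            # aggregation weights w_i, sum to 1
g = rng.normal(size=(n, d))              # per-client vectors g_{x,i}

est = np.zeros(d)
for _ in range(trials):
    chosen = rng.choice(n, size=P, replace=False)        # uniform WOR sampling
    est += (n / P) * (w[chosen, None] * g[chosen]).sum(axis=0)  # eq. (3)
est /= trials

print(np.allclose(est, w @ g, atol=2e-2))  # True: the estimator is unbiased
```

The n/P factor compensates for each client appearing in C^{(t)} with probability P/n, which is exactly why the expectation recovers the full weighted sum.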
## 5 Convergence Results

Next, we present the convergence results for different classes of nonconvex minimax problems. For simplicity, throughout this section we assume the parameters utilized in Algorithm 1 to be fixed across t. Therefore, $a_i^{(t,k)}\equiv a_i^{(k)}$, $\bar{a}_i^{(t)}\equiv a_i$, $\tau_i^{(t)}\equiv\tau_i$, $\tau_{\text{eff}}^{(t)}\equiv\tau_{\text{eff}}$, and $|\mathcal{C}^{(t)}|=P$, for all t.

## 5.1 Nonconvex-Strongly-Concave (NC-SC) Case

**Assumption 5** (µ-Strong-concavity (SC) in y). A function f is µ-strongly concave (µ > 0) in y if

$$-f(\mathbf{x},\mathbf{y})\geq-f(\mathbf{x},\mathbf{y}')-\langle\nabla_{y}f(\mathbf{x},\mathbf{y}'),\mathbf{y}-\mathbf{y}'\rangle+\frac{\mu}{2}\left\|\mathbf{y}-\mathbf{y}'\right\|^{2},\qquad\text{for all }\mathbf{x}\in\mathbb{R}^{d_x}\text{ and }\mathbf{y},\mathbf{y}'\in\mathbb{R}^{d_y}.$$

**General Convergence Result.** We first show that the iterates of Algorithm 1 converge to the stationary point of a surrogate objective F̃, where $\widetilde{F}(\mathbf{x},\mathbf{y})\triangleq\sum_{i=1}^{n}w_i f_i(\mathbf{x},\mathbf{y})$, and $\{w_i\}_{i=1}^{n}$ are the aggregation weights used by the server (Line 19). See Appendix B for the full statement and proof.

**Theorem 1.** Suppose the local loss functions {f_i}_i satisfy Assumptions 1, 2, 3, 4, 5. Suppose the server selects |C^{(t)}| = P clients in each round t. Given appropriate choices of client and server learning rates, (η_x^c, η_y^c) and (γ_x^s, γ_y^s) respectively (see Appendix B.2), the iterates generated by Fed-Norm-SGDA satisfy

$$\min_{t\in[T]}\mathbb{E}\|\nabla\widetilde{\Phi}(\mathbf{x}^{(t)})\|^{2}\leq\underbrace{\mathcal{O}\left(\kappa^{2}\sigma_{G}\sqrt{\frac{n-P}{n-1}\frac{E_{w}}{PT}}\right)}_{\text{Partial participation error}}+\underbrace{\mathcal{O}\left(\kappa^{2}\sqrt{\frac{\Delta_{\widetilde{\Phi}}+A_{w}\sigma_{L}^{2}+B_{w}\beta_{L}^{2}\sigma_{G}^{2}}{P\tau_{\text{eff}}T}}\right)}_{\text{Error with full participation}}+\underbrace{\mathcal{O}\left(\kappa^{2}\,\frac{C_{w}\sigma_{L}^{2}+D\sigma_{G}^{2}}{\bar{\tau}^{2}T}\right)}_{\text{Local updates error}},\tag{4}$$

where κ = L_f/µ is the condition number, $\widetilde{\Phi}(\mathbf{x})\triangleq\max_{\mathbf{y}}\widetilde{F}(\mathbf{x},\mathbf{y})$ is the envelope function, $\Delta_{\widetilde{\Phi}}\triangleq\widetilde{\Phi}(\mathbf{x}^{(0)})-\min_{\mathbf{x}}\widetilde{\Phi}(\mathbf{x})$, $\bar{\tau}=\frac{1}{n}\sum_{i=1}^{n}\tau_i$, $\tau_{\text{eff}}=\sum_{i=1}^{n}p_i\|a_i\|_1$, $A_w\triangleq n\tau_{\text{eff}}\sum_{i=1}^{n}w_i^2\frac{\|a_i\|_2^2}{\|a_i\|_1^2}$, $B_w\triangleq n\tau_{\text{eff}}\max_i w_i\frac{\|a_i\|_2^2}{\|a_i\|_1^2}$, $C_w\triangleq\sum_{i=1}^{n}w_i\big(\|a_i\|_2^2-[a_i^{(\tau_i-1)}]^2\big)$, $D\triangleq\max_i\big(\beta_L^2\|a_{i,-1}\|_2^2+\|a_{i,-1}\|_1^2\big)$, where $a_{i,-1}\triangleq[a_i^{(0)},a_i^{(1)},\ldots,a_i^{(\tau_i-2)}]^\top$, and $E_w\triangleq n\max_i w_i$.

**Remark 1.** The *first* term in (4) results from client subsampling (P < n). This explains its dependence on the data heterogeneity σ_G. The *second* term represents the optimization error for a centralized algorithm (see Appendix C.3 in Lin et al. (2020a)). The *last* term represents *client drift*, the error incurred when clients run multiple local updates.

Theorem 1 states convergence for a surrogate objective F̃. Next, we see convergence for the true objective F.

**Corollary 1.1** (Convergence in terms of F). Given $\Phi(\mathbf{x})\triangleq\max_{\mathbf{y}}F(\mathbf{x},\mathbf{y})$, under the conditions of Theorem 1,

$$\min_{t\in[T]}\left\|\nabla\Phi(\mathbf{x}^{(t)})\right\|^{2}\leq2\left(2\chi_{p\|w}^{2}\beta_{G}^{2}+1\right)\epsilon_{opt}+4\chi_{p\|w}^{2}\sigma_{G}^{2}+\frac{4L_{f}^{2}}{T}\sum_{t=0}^{T-1}\left\|\mathbf{y}^{*}(\mathbf{x}^{(t)})-\widetilde{\mathbf{y}}^{*}(\mathbf{x}^{(t)})\right\|^{2},\tag{5}$$

where $\chi_{p\|w}^{2}\triangleq\sum_{i=1}^{n}\frac{(p_i-w_i)^2}{w_i}$, and $\epsilon_{opt}\triangleq\frac{1}{T}\sum_{t=0}^{T-1}\|\nabla\widetilde{\Phi}(\mathbf{x}^{(t)})\|^{2}$ denotes the optimization error in (4). If p_i = w_i for all i ∈ [n], then χ²_{p∥w} = 0. Also, then F̃(x, y) ≡ F(x, y). Therefore, y*(x) = arg max_y F(x, y) and ỹ*(x) = arg max_y F̃(x, y) are identical, for all x. Hence, (5) yields min_{t∈[T]} ∥∇Φ(x^{(t)})∥² ≤ 2ϵ_opt.

It follows from Corollary 1.1 that if in Algorithm 1 the server aggregation weights {w_i} (Line 19) are the same as {p_i}, we get convergence to a stationary point of the true objective F. For the rest of this subsection, we assume w_i = p_i for all i ∈ [n].

Table 2: Comparison of convergence rates of Fed-Norm-SGDA (Theorem 1) and Fed-Norm-SGDA+ (Theorem 2), when all the clients run SGDA/SGDA+-based local updates, i.e., $a_i^{(t,k)}=1$ for all i, k, t. The results are stated for (i) p_i = 1/n, τ_i = τ, for all i ∈ [n]; and (ii) p_i ≠ p_j, τ_i ≠ τ_j, where $\bar{\tau}=\frac{1}{n}\sum_{i=1}^{n}\tau_i$. We state the results under partial client participation (PCP); FCP results follow by choosing P = n. For simplicity, we assume uniformly bounded local variance (β_L = 0 in Assumption 3). A_w and F_w are as defined in Theorems 1 and 2, evaluated at w = p.

**Nonconvex-Strongly-concave (NC-SC) / Nonconvex-Polyak-Łojasiewicz (NC-PL): Theorem 1, Remark 3**

| System Setting | Work | Convergence Rate |
|---|---|---|
| $p_i=1/n$, $\tau_i=\tau$, $\forall i\in[n]$ | Sharma et al. (2022), P = n | $\mathcal{O}\left(\kappa^{2}\sqrt{\frac{\sigma_{L}^{2}}{n\tau T}}+\frac{\kappa^{2}}{T}\left[\frac{\sigma_{L}^{2}}{\tau}+\sigma_{G}^{2}\right]\right)$ |
| | Yang et al. (2022a), P < n | $\mathcal{O}\left(\sigma_{G}\sqrt{\frac{1-P/n}{PT}}+\frac{\sigma_{L}}{\sqrt{P\tau T}}+\frac{1}{T}\left[\frac{\sigma_{L}^{2}}{\tau}+\sigma_{G}^{2}\right]\right)$ |
| | Ours, P < n | $\mathcal{O}\left(\kappa^{2}\sigma_{G}\sqrt{\frac{n-P}{(n-1)PT}}+\frac{\kappa\sigma_{L}}{\sqrt{P\tau T}}+\frac{\kappa}{T}\left[\frac{\sigma_{L}^{2}}{\tau}+\sigma_{G}^{2}\right]\right)$ |
| $p_i\neq p_j$, $\tau_i\neq\tau_j$ | Ours | $\mathcal{O}\left(\kappa^{2}\sigma_{G}\sqrt{\frac{(n-P)\,n\max_{i}p_{i}}{(n-1)PT}}+\kappa\sigma_{L}\sqrt{\frac{n\sum_{i=1}^{n}p_{i}^{2}/\tau_{i}}{PT}}+\frac{\kappa}{T}\left[\frac{\sigma_{L}^{2}\sum_{i=1}^{n}p_{i}\tau_{i}}{\bar{\tau}^{2}}+\sigma_{G}^{2}\max_{i}\frac{\tau_{i}^{2}}{\bar{\tau}^{2}}\right]\right)$ |

**Nonconvex-Concave (NC-C) / Nonconvex-One-Point-Concave (NC-1PC): Theorem 2, Remark 5**

| System Setting | Work | Convergence Rate |
|---|---|---|
| $p_i=1/n$, $\tau_i=\tau$, $\forall i\in[n]$ | Sharma et al. (2022), P = n | $\mathcal{O}\left(\frac{1}{(\tau nT)^{1/4}}\right)+\mathcal{O}\left(\frac{(n\tau)^{3/2}}{\sqrt{T}}\right)$ |
| | Ours, P < n | $\mathcal{O}\left(\sqrt{\sigma_{G}}\left[\frac{n-P}{n-1}\frac{1}{PT}\right]^{1/4}+\sqrt{\sigma_{L}}\left[\frac{1}{\tau PT}\right]^{1/4}+\frac{1}{T^{3/4}}\left[\frac{1}{\tau P}+\tau\frac{n-P}{n-1}\right]^{1/4}\right)+\mathcal{O}\left(\frac{1}{T^{3/4}}\left[\frac{\sigma_{L}^{2}}{\tau}+G_{x}^{2}+\sigma_{G}^{2}\right]\right)$ |
| $p_i\neq p_j$, $\tau_i\neq\tau_j$ | Ours | $\mathcal{O}\left(\sqrt{\sigma_{G}}\left[\frac{n-P}{n-1}\frac{n\max_{i}p_{i}}{PT}\right]^{1/4}\sqrt[8]{1+F_{w}}+\sqrt{\sigma_{L}}\left[\frac{A_{w}}{\tau_{\text{eff}}PT}\right]^{1/4}\sqrt[8]{1+F_{w}}+\frac{1+F_{w}}{T^{3/4}}\left[\frac{A_{w}}{\tau_{\text{eff}}P}+\tau_{\text{eff}}\frac{n-P}{n-1}n\max_{i}p_{i}\right]^{1/4}\right)+\mathcal{O}\left(\frac{1}{T^{3/4}}\left[\frac{\sigma_{L}^{2}\sum_{i=1}^{n}p_{i}\tau_{i}}{\bar{\tau}^{2}}+(G_{x}^{2}+\sigma_{G}^{2})\max_{i}\frac{\tau_{i}^{2}}{\bar{\tau}^{2}}\right]\right)$ |

In Table 2, we specialize the bound in (4) to SGDA-based local updates. We compare the bound under two cases. **Case 1**: equally weighted clients (p_i = 1/n, for all i), all running τ_i ≡ τ local updates; and **Case 2**: unequally weighted clients (p_i ≠ p_j), running unequal local updates (τ_i ≠ τ_j). The setting in **Case 1** has previously been considered in Sharma et al. (2022) (under full participation) and in Yang et al. (2022a).⁴

⁴The condition number κ dependence is not explicitly stated in the results in Yang et al. (2022a).
Compared to Sharma et al. (2022), our bound has a smaller local-updates error term. This results in an improved communication cost (see Corollary 1.2). Table 2 also shows the additional factors incurred going from **Case 1** to the more general **Case 2**. The following insights can be drawn from Table 2.

- **Partial client participation error:** $O\left(\frac{\sigma_G}{\sqrt{PT}}\right)$ is the *most significant* component of the convergence error. Unlike the other two errors, it does not decrease with increasing local updates τ_eff. Consequently, we do not observe communication savings by performing multiple local updates at the clients. It remains an open problem to achieve speedup in terms of local updates in partial participation settings.
- **Unequal client weights:** if the clients are weighted disparately, we observe an increase in the stochastic gradient complexity. To see this, let τ_i ≡ τ. The resulting bound is $O\left(\sigma_G\sqrt{\frac{(n-P)\,n\|p\|_\infty}{(n-1)PT}}+\frac{\sigma_L\sqrt{n}\|p\|_2}{\sqrt{P\tau T}}+\frac{1}{T}\left[\frac{\sigma_L^2}{\tau}+\sigma_G^2\right]\right)$. Since ∥p∥∞, ∥p∥₂ ≤ 1, in the worst case (when only one of the clients has all the weight), the complexity is worse by a factor of n. This happens because the client sampling is not done in proportion to the client weights. Rather, the server first samples the clients uniformly, and then scales their updates to get an unbiased estimator (3). We leave exploring non-uniform WOR sampling as a future direction.

**Corollary 1.2** (Improved Communication Cost). Suppose all the clients are weighted equally (p_i = 1/n for all i), with each carrying out τ local steps of SGDA. Further, assume Φ is bounded from below. Then, to reach x such that E∥∇Φ(x)∥ ≤ ϵ,

- Under full participation, the per-client gradient complexity of Fed-Norm-SGDA is Tτ = O(κ⁴/(nϵ⁴)). The number of communication rounds required is T = O(κ²/ϵ²).
- Under partial participation, the per-client gradient complexity of Fed-Norm-SGDA is O(κ⁴/(Pϵ⁴)). In general, running multiple local updates does not yield any communication savings. However, in the special case when the inter-client data heterogeneity σ_G = 0, the communication cost is O(κ²/ϵ²).

**Remark 2.** The gradient complexity in Corollary 1.2 is optimal in ϵ, and achieves linear speedup in the number of participating clients. The communication complexity improves the corresponding results in Deng & Mahdavi (2021); Sharma et al. (2022). We match the communication cost of the recent work Yang et al. (2022a). In addition, our work considers a more general FL setting with unequally weighted clients (p_i ≠ p_j), running unequal local updates (τ_i ≠ τ_j), using distinct local solvers (a_i ≠ a_j).

## Extending the Results to Nonconvex-PL Functions

**Assumption 6.** A function f satisfies the µ-PL condition in y (µ > 0) if for any fixed x: 1) max_{y′} f(x, y′) has a nonempty solution set; and 2) for all y,

$$\|\nabla_{y}f(\mathbf{x},\mathbf{y})\|^{2}\geq2\mu\left(\max_{\mathbf{y}^{\prime}}f(\mathbf{x},\mathbf{y}^{\prime})-f(\mathbf{x},\mathbf{y})\right).$$

**Remark 3.** If Assumptions 1, 2, 3, 4 hold, and the global function F satisfies Assumption 6, then for appropriately chosen learning rates (Appendix B.5), the bound in Theorem 1 holds.
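For completeness, we note why Assumption 6 generalizes Assumption 5 (a standard argument, see, e.g., Karimi et al. (2016), which we spell out here): if f(x, ·) is µ-strongly concave, then for any y,

$$\max_{\mathbf{y}'}f(\mathbf{x},\mathbf{y}')\leq\max_{\mathbf{y}'}\left\{f(\mathbf{x},\mathbf{y})+\langle\nabla_{y}f(\mathbf{x},\mathbf{y}),\mathbf{y}'-\mathbf{y}\rangle-\frac{\mu}{2}\left\|\mathbf{y}'-\mathbf{y}\right\|^{2}\right\}=f(\mathbf{x},\mathbf{y})+\frac{1}{2\mu}\left\|\nabla_{y}f(\mathbf{x},\mathbf{y})\right\|^{2},$$

where the inner maximum is attained at y′ = y + (1/µ)∇_y f(x, y). Rearranging gives exactly Assumption 6 with the same µ, so the NC-PL results subsume the NC-SC case.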
## 5.2 Nonconvex-Concave (NC-C) Case

In this subsection, we consider smooth nonconvex functions which satisfy the following assumptions.

**Assumption 7** (Concavity). The function f is concave in y if, for a fixed x ∈ R^{d_x} and all y, y′ ∈ R^{d_y},

$$f(\mathbf{x},\mathbf{y})\leq f(\mathbf{x},\mathbf{y}')+\langle\nabla_{y}f(\mathbf{x},\mathbf{y}'),\mathbf{y}-\mathbf{y}'\rangle.$$

**Assumption 8** (Lipschitz continuity in x). Given a function f, there exists a constant G_x such that for each y ∈ R^{d_y} and all x, x′ ∈ R^{d_x},

$$|f(\mathbf{x},\mathbf{y})-f(\mathbf{x}',\mathbf{y})|\leq G_{x}\left\|\mathbf{x}-\mathbf{x}'\right\|.$$

The envelope function Φ(x) = max_y f(x, y) used so far may no longer be smooth in the absence of a unique maximizer. However, Φ(·) is weakly convex (Lin et al., 2020a, Lemma 4.7). Therefore, we use the alternate definition of stationarity proposed in Davis & Drusvyatskiy (2019), utilizing the Moreau envelope of Φ.

**Definition 4** (Moreau Envelope). The function ϕ_λ is the λ-Moreau envelope of ϕ, for λ > 0, if for all x ∈ R^{d_x},

$$\phi_{\lambda}(\mathbf{x})=\min_{\mathbf{x}'}\phi(\mathbf{x}')+\frac{1}{2\lambda}\left\|\mathbf{x}'-\mathbf{x}\right\|^{2}.$$

Drusvyatskiy & Paquette (2019) showed that a small ∥∇ϕ_λ(x)∥ indicates the existence of some point x̃ in the vicinity of x that is *nearly stationary* for ϕ. Hence, in our case, we focus on minimizing ∥∇Φ_λ(x)∥.

**Proposed Algorithm.** For nonconvex-concave functions, we use Fed-Norm-SGDA+. The x-updates are identical to Fed-Norm-SGDA. For the y-updates, however, the clients compute stochastic gradients $\nabla_y f_i(\widehat{\mathbf{x}}^{(s)},\mathbf{y}_i^{(t,k)};\xi_i^{(t,k)})$, keeping the x-component fixed at $\widehat{\mathbf{x}}^{(s)}$ for S communication rounds. This *trick*, originally proposed in Deng & Mahdavi (2021), gives the analytical benefit of a double-loop algorithm (which updates y several times before updating x once) while also updating x simultaneously.

**Theorem 2.** Suppose the local loss functions {f_i} satisfy Assumptions 1, 2, 3, 4, 7, 8, the y iterates are bounded, and the server selects |C^{(t)}| = P clients for all t. With appropriate choices of client and server learning rates, (η_x^c, η_y^c) and (γ_x^s, γ_y^s) respectively (see Appendix C.2), the iterates of Fed-Norm-SGDA+ satisfy

$$\min_{t\in[T]}\mathbb{E}\left\|\nabla\widetilde{\Phi}_{1/2L_{f}}(\mathbf{x}^{(t)})\right\|^{2}\leq\underbrace{\mathcal{O}\left(\left[\sigma_{G}^{2}\,\frac{n-P}{n-1}\,\frac{\bar{\Delta}_{\widetilde{\Phi}}E_{w}}{PT}\sqrt{1+F_{w}}\right]^{1/4}\right)}_{\text{Partial participation error}}+\underbrace{\mathcal{O}\left(\left[\frac{\bar{\Delta}_{\widetilde{\Phi}}\,\sigma_{L}^{2}A_{w}}{\tau_{\text{eff}}PT}\sqrt{1+F_{w}}\right]^{1/4}+\frac{\bar{\Delta}_{\widetilde{\Phi}}(1+F_{w})}{T^{3/4}}\left[\frac{A_{w}}{\tau_{\text{eff}}P}+\tau_{\text{eff}}\,\frac{n-P}{n-1}E_{w}\right]^{1/4}\right)}_{\text{Error with full synchronization}}+\underbrace{\mathcal{O}\left(\frac{C_{w}\sigma_{L}^{2}+D(G_{x}^{2}+\sigma_{G}^{2})}{\bar{\tau}^{2}\,T^{3/4}}\right)}_{\text{Local updates error}},\tag{6}$$

where $\widetilde{\Phi}_{1/2L_f}$ is the Moreau envelope of $\widetilde{\Phi}$, and $\bar{\Delta}_{\widetilde{\Phi}}\triangleq\widetilde{\Phi}_{1/2L_f}(\mathbf{x}^{(0)})-\min_{\mathbf{x}}\widetilde{\Phi}_{1/2L_f}(\mathbf{x})$. The constants A_w, C_w, D, E_w, τ̄, τ_eff are defined in Theorem 1, and $F_w\triangleq\frac{n(n-P)}{P(n-1)}\sum_{i=1}^{n}w_i^2$. See Appendix C for the proof.

Theorem 2 states convergence for a surrogate objective F̃. Next, we see convergence for the true objective F.

**Corollary 2.1** (Convergence in terms of F). Given envelope functions Φ(x) ≜ max_y F(x, y) and Φ̃(x) ≜ max_y F̃(x, y), under the conditions of Theorem 2,

$$\min_{t\in[T]}\left\|\nabla\Phi_{1/2L_{f}}(\mathbf{x}^{(t)})\right\|^{2}\leq\epsilon_{opt}'+\frac{8L_{f}^{2}}{T}\sum_{t=0}^{T-1}\left\|\widetilde{\mathbf{x}}^{(t)}-\bar{\mathbf{x}}^{(t)}\right\|^{2},$$

where Φ_{1/2L_f} is the Moreau envelope of Φ, $\widetilde{\mathbf{x}}^{(t)}\triangleq\arg\min_{\mathbf{x}'}\{\widetilde{\Phi}(\mathbf{x}')+L_f\|\mathbf{x}'-\mathbf{x}^{(t)}\|^{2}\}$ and $\bar{\mathbf{x}}^{(t)}\triangleq\arg\min_{\mathbf{x}'}\{\Phi(\mathbf{x}')+L_f\|\mathbf{x}'-\mathbf{x}^{(t)}\|^{2}\}$ for all t, and ϵ′_opt is the error bound in (6).

Similar to Corollary 1.1, if we replace {w_i} with {p_i} for all i ∈ [n] in the server updates in Algorithm 1, then F̃ ≡ F, and x̃^{(t)} and x̄^{(t)} are identical for all t. Consequently, Theorem 2 gives us convergence in terms of the true objective F. For the rest of this subsection, we assume w_i = p_i for all i ∈ [n].

**Remark 4.** Some existing works do not require Assumption 8 for NC-C functions, and also improve the convergence rate.
However, these methods either have a double-loop structure Rafique et al. (2021); Zhang et al. (2022), or work with deterministic problems Xu et al. (2020); Zhang et al. (2020). Proposing a single-loop method for stochastic NC-C problems with the same advantages is an open problem.

Again, in Table 2, we specialize the bound in (6) to SGDA+-based local updates. As in the last section:

- Partial client participation is the *most significant* source of convergence error.
- Unequal client weights (p_i ≠ p_j) can increase the stochastic gradient complexity, due to the presence of the n∥p∥∞ and n∥p∥₂² factors.

**Corollary 2.2** (Improved Communication Cost). Suppose all the clients are weighted equally (p_i = 1/n for all i), with each carrying out τ local steps of SGDA+. Further, assume that Φ_{1/2L_f} is bounded from below. Then, to reach x such that E∥∇Φ_{1/2L_f}(x)∥ ≤ ϵ,

- Under full participation, the per-client gradient complexity of Fed-Norm-SGDA+ is Tτ = O(1/(nϵ⁸)). The number of communication rounds required is T = O(1/ϵ⁴).
- Under partial participation, the per-client gradient complexity of Fed-Norm-SGDA+ is O(1/(Pϵ⁸)). In general, running multiple local updates does not yield any communication savings. However, in the special case when the inter-client data heterogeneity σ_G = 0, the communication cost is O(1/ϵ⁴).

In terms of communication requirements, we achieve massive savings (compared to O(1/ϵ⁷) in Sharma et al. (2022)). Our gradient complexity results achieve linear speedup in the number of participating clients. Further, as stated earlier, our work considers a more general FL setting with unequally weighted clients (p_i ≠ p_j), running unequal local updates (τ_i ≠ τ_j), using distinct local solvers (a_i ≠ a_j).

**Extending the Results to Nonconvex-One-Point-Concave Functions.** One-point-convexity has been observed in SGD dynamics during neural network training Li & Yuan (2017); Kleinberg et al. (2018).

**Assumption 9** (One-point-Concavity in y). The function f is said to be one-point-concave in y if, fixing x ∈ R^{d_x}, for all y ∈ R^{d_y},

$$\langle\nabla_{y}f(\mathbf{x},\mathbf{y}),\mathbf{y}-\mathbf{y}^{*}(\mathbf{x})\rangle\leq f(\mathbf{x},\mathbf{y})-f(\mathbf{x},\mathbf{y}^{*}(\mathbf{x})),$$

where y*(x) ∈ arg max_y f(x, y).

It turns out that Theorem 2 holds for the more general class of nonconvex-one-point-concave (NC-1PC) functions. See Appendix C.4 for more details.

**Remark 5.** Suppose Assumptions 1, 2, 3, 4, 8 hold. Suppose for all x, all the f_i's satisfy Assumption 9 at a common global maximizer y*(x). Then, the bound in Theorem 2 holds.

**Remark 6.** Hence, we settle the conjecture posed in Sharma et al. (2022) that *linear speedup* can be achieved for NC-1PC functions. As an intermediate step in our proof, we show convergence of Local SGD for one-point-convex functions. This extends the convex result for Local SGD to a larger class of functions.

## 6 Experiments

In this section, we evaluate the empirical performance of the proposed algorithms. We consider a robust neural network training problem Sinha et al. (2017); Nouiehed et al. (2019), and a fair classification problem Mohri et al. (2019); Deng et al. (2020). Due to space constraints, additional details of our experiments and some additional results are included in Appendix D. Our experiments were run on a network of n = 15 clients, each equipped with an NVIDIA TitanX GPU. We model data heterogeneity across clients using the Dirichlet distribution Wang et al. (2019) with parameter α, Dir_n(α). Smaller α implies higher heterogeneity across clients.
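As a concrete reference for this setup, the following numpy sketch shows one common way to implement Dirichlet-based label partitioning (our simplified version; the experiments follow the procedure of Wang et al. (2019), and the helper name is ours).

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Split sample indices across clients, class by class.

    For each class c, a proportion vector ~ Dir(alpha * 1_n) decides how many
    of that class's samples each client receives. Smaller alpha gives more
    skewed (heterogeneous) per-client class distributions.
    """
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        # cumulative proportions -> split points within this class
        splits = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=50_000)      # toy CIFAR10-like label array
parts = dirichlet_partition(labels, n_clients=15, alpha=0.1, rng=rng)
print([len(p) for p in parts])                  # uneven sizes at small alpha
```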
**Robust NN training.** We consider the following robust neural network (NN) training problem:

$$\min_{\mathbf{x}}\;\max_{\|\mathbf{y}\|^{2}\leq1}\;\sum_{i=1}^{N}\ell\left(h_{\mathbf{x}}(\mathbf{a}_{i}+\mathbf{y}),b_{i}\right),\tag{7}$$

where x denotes the NN parameters, (a_i, b_i) denote the feature and label of the i-th sample, y denotes the adversarially added feature perturbation, and h_x denotes the NN output.

**Impact of system heterogeneity.** In Figure 3, we compare the effect of a heterogeneous number of local updates across clients on the performance of our proposed Fed-Norm-SGDA+. We compare with Local SGDA+ Deng & Mahdavi (2021), and Local SGDA+ with momentum Sharma et al. (2022). Clients sample the number of epochs they run locally uniformly over the set {2, ..., E}, i.e., τ_i ∼ Unif[2 : E]. We observe that Fed-Norm-SGDA+ adapts well to system heterogeneity and outperforms both existing methods.

**Impact of partial participation and heterogeneity.** Next, we compare the impact of different levels of partial client participation on performance. We compare the full participation setting (n = 15) with P = 5, 10. Clients sample the number of epochs they run locally via τ_i ∼ Unif[2, 5]. We plot the results for two different values of the data heterogeneity parameter, α = 0.1, 1.0. Consistent with all our theoretical results, where partial participation was the most significant component of convergence error, smaller values of P result in performance loss. Further, higher inter-client heterogeneity (modeled by smaller values of α) results in worse performance. We further explore the impact of α on performance in Appendix D.

![12_image_0.png](12_image_0.png)

Figure 3: Comparison of the effect of a heterogeneous number of local updates {τ_i} on the performance of Fed-Norm-SGDA+ (Algorithm 1), Local SGDA+, and Local SGDA+ with momentum, while solving (7) on the CIFAR10 dataset, with the VGG11 model. The solid (dashed) curves are for E = 5 (E = 7), and α = 0.1.

![12_image_1.png](12_image_1.png)

Figure 4: Comparison of the effects of partial client participation (PCP) on the performance of Fed-Norm-SGDA+, for the robust NN training problem on the CIFAR10 dataset, with the VGG11 model. The figure shows the robust test accuracy. The solid (dashed) curves are for α = 0.1 (α = 1.0).

![12_image_2.png](12_image_2.png)

Figure 5: Comparison of Local SGDA, Local SGDA with momentum, and Fed-Norm-SGDA, for the fair classification task on the CIFAR10 dataset, with the VGG11 model. The solid (dashed) curves are for E = 5 (E = 7), α = 0.1.

**Fair Classification.** We consider the minimax formulation of the fair classification problem Mohri et al. (2019); Nouiehed et al. (2019):

$$\min_{\mathbf{x}}\max_{\mathbf{y}\in\Delta_{C}}\sum_{c=1}^{C}y_{c}F_{c}(\mathbf{x})-\frac{\lambda}{2}\left\|\mathbf{y}\right\|^{2},\tag{8}$$

where x denotes the parameters of the NN, $\{F_c\}_{c=1}^{C}$ denote the losses corresponding to the C classes, and Δ_C is the C-dimensional probability simplex. In Figure 5, we plot the worst-distribution test accuracy achieved by Fed-Norm-SGDA, Local SGDA Deng & Mahdavi (2021), and Local SGDA with momentum Sharma et al. (2022). As in Figure 3, clients sample τ_i ∼ Unif[2, E]. We plot the test accuracy on the worst distribution in each case. Again, Fed-Norm-SGDA outperforms existing methods.
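To make the updates for (8) concrete, the following numpy sketch (ours, not the experiment code) performs one local SGDA step on the fair-classification objective: a descent step on x, an ascent step on y, and a Euclidean projection of y back onto the simplex Δ_C via the standard sorting-based algorithm. The loss and gradient oracles are hypothetical placeholders for minibatch estimates.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def local_sgda_step(x, y, per_class_loss, per_class_grad, eta_x, eta_y, lam):
    """One SGDA step on (8): F(x, y) = sum_c y_c F_c(x) - (lam/2)||y||^2.

    per_class_loss(x) -> array of C losses F_c(x);
    per_class_grad(x) -> (C, dim) array of gradients of the F_c.
    """
    losses = per_class_loss(x)                     # F_c(x), c = 1..C
    x = x - eta_x * per_class_grad(x).T @ y        # descent on x: sum_c y_c grad F_c
    y = y + eta_y * (losses - lam * y)             # ascent on y: grad_y F
    return x, project_simplex(y)                   # keep y in Delta_C
```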
## 7 Conclusion

In this work, we considered nonconvex minimax problems in the federated setting, where in addition to inter-client data heterogeneity and partial client participation, there is system heterogeneity as well. Clients may run an unequal number of local update steps, using different local solvers. In such settings, we observed that existing methods, such as Local SGDA, might converge to the stationary point of an objective quite different from the originally intended one. We showed that normalizing individual client contributions solves this problem. Using our generalized framework, we analyzed several classes of nonconvex minimax functions and significantly improved existing computation and communication complexity results. Potential future directions include analyzing federated systems with unpredictable client presence Yang et al. (2022b).

## Acknowledgments

This work was supported in part by NSF grants CCF 2045694, CNS-2112471, CPS-2111751, and ONR N00014-23-1-2149. Jiarui Li helped with plotting figures for some experiments.

## References

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In *International Conference on Machine Learning*, pp. 214–223. PMLR, 2017.

Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, and Alexander Gasnikov. Distributed methods with compressed communication for solving variational inequalities, with theoretical guarantees. *arXiv preprint arXiv:2110.03313*, 2021a.

Aleksandr Beznosikov, Alexander Rogozin, Dmitry Kovalev, and Alexander Gasnikov. Near-optimal decentralized algorithms for saddle point problems over time-varying networks. In *International Conference on Optimization and Applications*, pp. 246–257. Springer, 2021b.

Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, and Alexander Gasnikov. Distributed saddle-point problems under similarity. In *Advances in Neural Information Processing Systems*, volume 34, 2021c.

Aleksandr Beznosikov, Vadim Sushko, Abdurakhmon Sadiev, and Alexander Gasnikov. Decentralized personalized federated min-max problems. *arXiv preprint arXiv:2106.07289*, 2021d.

Mingzhe Chen, Nir Shlezinger, H Vincent Poor, Yonina C Eldar, and Shuguang Cui. Communication-efficient federated learning. *Proceedings of the National Academy of Sciences*, 118(17):e2024789118, 2021.

Ziyi Chen, Yi Zhou, Tengyu Xu, and Yingbin Liang. Proximal gradient descent-ascent: Variable convergence under KŁ geometry. In *International Conference on Learning Representations*, 2020.

Hanseul Cho and Chulhee Yun. SGDA with shuffling: Faster convergence for nonconvex-PŁ minimax optimization. *arXiv preprint arXiv:2210.05995*, 2022.

Damek Davis and Dmitriy Drusvyatskiy. Stochastic model-based minimization of weakly convex functions. *SIAM Journal on Optimization*, 29(1):207–239, 2019.

Yuyang Deng and Mehrdad Mahdavi. Local stochastic gradient descent ascent: Convergence analysis and communication efficiency. In *International Conference on Artificial Intelligence and Statistics*, pp. 1387–1395. PMLR, 2021.

Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Distributionally robust federated averaging. In *Advances in Neural Information Processing Systems*, volume 33, pp. 15111–15122, 2020.

Dmitriy Drusvyatskiy and Courtney Paquette. Efficiency of minimizing compositions of convex functions and smooth maps. *Mathematical Programming*, 178(1):503–558, 2019.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in Neural Information Processing Systems*, volume 27, 2014.

Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, and Viveck Cadambe. Local SGD with periodic averaging: Tighter analysis and adaptive synchronization. *Advances in Neural Information Processing Systems*, 32:11082–11094, 2019.

Jenny Hamer, Mehryar Mohri, and Ananda Theertha Suresh. FedBoost: A communication-efficient algorithm for federated learning. In *International Conference on Machine Learning*, pp. 3973–3983. PMLR, 2020.

Charlie Hou, Kiran K Thekumparampil, Giulia Fanti, and Sewoong Oh. Efficient algorithms for federated saddle point optimization. *arXiv preprint arXiv:2102.06333*, 2021.

Divyansh Jhunjhunwala, Pranay Sharma, Aushim Nagarkatti, and Gauri Joshi. FedVARP: Tackling the variance due to partial client participation in federated learning. In *The 38th Conference on Uncertainty in Artificial Intelligence*, 2022.

Chi Jin, Praneeth Netrapalli, and Michael Jordan. What is local optimality in nonconvex-nonconcave minimax optimization? In *International Conference on Machine Learning*, pp. 4880–4889. PMLR, 2020.

Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. *arXiv preprint arXiv:1912.04977*, 2019.

Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, pp. 795–811. Springer, 2016.

Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Tighter theory for local SGD on identical and heterogeneous data. In *International Conference on Artificial Intelligence and Statistics*, pp. 4519–4529. PMLR, 2020.

Bobby Kleinberg, Yuanzhi Li, and Yang Yuan. An alternative view: When does SGD escape local minima? In *International Conference on Machine Learning*, pp. 2698–2707. PMLR, 2018.

Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized SGD with changing topology and local updates. In *International Conference on Machine Learning*, pp. 5381–5393. PMLR, 2020.

Jakub Konečný, H Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. *arXiv preprint arXiv:1610.02527*, 2016.

Sucheol Lee and Donghwan Kim. Fast extra gradient methods for smooth structured nonconvex-nonconcave minimax problems. In *Advances in Neural Information Processing Systems*, volume 34, 2021.

Yunwen Lei, Zhenhuan Yang, Tianbao Yang, and Yiming Ying. Stability and generalization of stochastic gradient methods for minimax problems. In *International Conference on Machine Learning*, pp. 6175–6186. PMLR, 2021.

Haochuan Li, Yi Tian, Jingzhao Zhang, and Ali Jadbabaie. Complexity lower bounds for nonconvex-strongly-concave min-max optimization. In *Advances in Neural Information Processing Systems*, volume 34, 2021.

Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. *IEEE Signal Processing Magazine*, 37(3):50–60, 2020.

Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with ReLU activation.
In Advances in Neural Information Processing Systems, volume 30, 2017. Luofeng Liao, Li Shen, Jia Duan, Mladen Kolar, and Dacheng Tao. Local adagrad-type algorithm for stochastic convex-concave minimax problems. *arXiv preprint arXiv:2106.10022*, 2021. Tianyi Lin, Chi Jin, and Michael Jordan. On gradient descent ascent for nonconvex-concave minimax problems. In *International Conference on Machine Learning*, pp. 6083–6093. PMLR, 2020a. Tianyi Lin, Chi Jin, and Michael I Jordan. Near-optimal algorithms for minimax optimization. In Conference on Learning Theory, pp. 2738–2779. PMLR, 2020b. Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil, Zebang Shen, and Nenggan Zheng. A decentralized proximal point-type method for saddle point problems. *arXiv preprint arXiv:1910.14380*, 2019. Songtao Lu, Ioannis Tsaknakis, Mingyi Hong, and Yongxin Chen. Hybrid block successive approximation for one-sided non-convex min-max problems: algorithms and applications. *IEEE Transactions on Signal* Processing, 68:3676–3691, 2020. Luo Luo and Cheng Chen. Finding second-order stationary point for nonconvex-strongly-concave minimax problem. *arXiv preprint arXiv:2110.04814*, 2021. Luo Luo, Haishan Ye, Zhichao Huang, and Tong Zhang. Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems. In *Advances in Neural Information Processing* Systems, volume 33, pp. 20566–20577, 2020. Luo Luo, Guangzeng Xie, Tong Zhang, and Zhihua Zhang. Near optimal stochastic algorithms for finite-sum unbalanced convex-concave minimax optimization. *arXiv preprint arXiv:2106.01761*, 2021. David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In *International Conference on Machine Learning*, pp. 3384–3393. PMLR, 2018. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In *International Conference on Learning Representations*, 2018. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communicationefficient learning of deep networks from decentralized data. In *Artificial Intelligence and Statistics*, pp. 1273–1282. PMLR, 2017. Dmitriy Metelev, Alexander Rogozin, Alexander Gasnikov, and Dmitry Kovalev. Decentralized saddle-point problems with different constants of strong convexity and strong concavity. *arXiv preprint arXiv:2206.00090*, 2022. Aritra Mitra, Rayana Jaafar, George J Pappas, and Hamed Hassani. Linear convergence in federated learning: Tackling client heterogeneity and sparse gradients. *Advances in Neural Information Processing Systems*, 34: 14606–14619, 2021. Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations. *Mathematical Programming*, 171 (1):115–166, 2018. Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. Agnostic federated learning. In *International* Conference on Machine Learning, pp. 4615–4625. PMLR, 2019. Hongseok Namkoong and John C Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In *Advances in Neural Information Processing Systems*, volume 29, 2016. Yurii Nesterov. *Lectures on convex optimization*, volume 137. Springer, 2018. Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D Lee, and Meisam Razaviyayn. 
Solving a class of non-convex min-max games using iterative first order methods. In Advances in Neural Information Processing Systems, volume 32, pp. 14934–14942, 2019. Yuyuan Ouyang and Yangyang Xu. Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems. *Mathematical Programming*, 185(1):1–35, 2021. Shuang Qiu, Zhuoran Yang, Xiaohan Wei, Jieping Ye, and Zhaoran Wang. Single-timescale stochastic nonconvex-concave optimization for smooth nonlinear TD learning. *arXiv preprint arXiv:2008.10103*, 2020. Hassan Rafique, Mingrui Liu, Qihang Lin, and Tianbao Yang. Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning. *Optimization Methods and Software*, pp. 1–35, 2021. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, and Ali Jadbabaie. Robust federated learning: The case of affine distribution shifts. In *Advances in Neural Information Processing Systems*, volume 33, pp. 21554–21565, 2020. Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, and Ramtin Pedarsani. Stragglerresilient federated learning: Leveraging the interplay between statistical accuracy and system heterogeneity. IEEE Journal on Selected Areas in Information Theory, 2022. Alexander Rogozin, Alexander Beznosikov, Darina Dvinskikh, Dmitry Kovalev, Pavel Dvurechensky, and Alexander Gasnikov. Decentralized distributed optimization for saddle point problems. *arXiv preprint* arXiv:2102.07758, 2021. Maziar Sanjabi, Jimmy Ba, Meisam Razaviyayn, and Jason D Lee. On the convergence and robustness of training gans with regularized optimal transport. *Advances in Neural Information Processing Systems*, 31, 2018. Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek. Robust and communicationefficient federated learning from non-iid data. *IEEE transactions on neural networks and learning systems*, 31(9):3400–3413, 2019. Pranay Sharma, Rohan Panda, Gauri Joshi, and Pramod Varshney. Federated minimax optimization: Improved convergence analyses and algorithms. In *International Conference on Machine Learning*, pp. 19683–19730. PMLR, 2022. Aman Sinha, Hongseok Namkoong, and John Duchi. Certifiable distributional robustness with principled adversarial training. In *International Conference on Learning Representations*, 2017. Sebastian U Stich. Local sgd converges fast and communicates little. In International Conference on Learning Representations, 2018. Zhenyu Sun and Ermin Wei. A communication-efficient algorithm with linear convergence for federated minimax learning. *arXiv preprint arXiv:2206.01132*, 2022. Kiran K Thekumparampil, Prateek Jain, Praneeth Netrapalli, and Sewoong Oh. Efficient algorithms for smooth minimax optimization. In *Advances in Neural Information Processing Systems*, volume 32, 2019. Quoc Tran-Dinh, Deyi Liu, and Lam M Nguyen. Hybrid variance-reduced sgd algorithms for minimax problems with nonconvex-linear function. In *Advances in Neural Information Processing Systems*, volume 33, pp. 11096–11107, 2020. Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, and Aryan Mokhtari. Straggler-resilient personalized federated learning. *arXiv preprint arXiv:2206.02078*, 2022. Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, and Yasaman Khazaeni. Federated learning with matched averaging. In *International Conference on Learning Representations*, 2019. Jianyu Wang and Gauri Joshi. 
Cooperative SGD: A unified framework for the design and analysis of local-update SGD algorithms. *Journal of Machine Learning Research*, 22(213):1–50, 2021.

Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. In *Advances in Neural Information Processing Systems*, volume 33, pp. 7611–7623, 2020.

Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. A field guide to federated optimization. *arXiv preprint arXiv:2107.06917*, 2021.

Yuanhao Wang and Jian Li. Improved algorithms for convex-concave minimax optimization. In *Advances in Neural Information Processing Systems*, volume 33, pp. 4800–4810, 2020.

Blake E Woodworth, Kumar Kshitij Patel, and Nati Srebro. Minibatch vs local SGD for heterogeneous distributed learning. In *Advances in Neural Information Processing Systems*, volume 33, pp. 6281–6292, 2020.

Guangzeng Xie, Luo Luo, Yijiang Lian, and Zhihua Zhang. Lower complexity bounds for finite-sum convex-concave minimax optimization problems. In *International Conference on Machine Learning*, pp. 10504–10513. PMLR, 2020.

Jiahao Xie, Chao Zhang, Yunsong Zhang, Zebang Shen, and Hui Qian. A federated learning framework for nonconvex-PL minimax problems. *arXiv preprint arXiv:2105.14216*, 2021.

Eric P Xing, Qirong Ho, Pengtao Xie, and Dai Wei. Strategies and principles of distributed machine learning on big data. *Engineering*, 2(2):179–195, 2016.

Zi Xu, Huiling Zhang, Yang Xu, and Guanghui Lan. A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems. *arXiv preprint arXiv:2006.02032*, 2020.

Haibo Yang, Minghong Fang, and Jia Liu. Achieving linear speedup with partial worker participation in non-iid federated learning. In *International Conference on Learning Representations*, 2021.

Haibo Yang, Zhuqing Liu, Xin Zhang, and Jia Liu. SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ communication complexity in federated min-max learning. *arXiv preprint arXiv:2210.00611*, 2022a.

Haibo Yang, Xin Zhang, Prashant Khanduri, and Jia Liu. Anarchic federated learning. In *International Conference on Machine Learning*, pp. 25331–25363. PMLR, 2022b.

Junchi Yang, Siqi Zhang, Negar Kiyavash, and Niao He. A catalyst framework for minimax optimization. In *Advances in Neural Information Processing Systems*, volume 33, pp. 5667–5678, 2020.

Junchi Yang, Antonio Orvieto, Aurelien Lucchi, and Niao He. Faster single-loop algorithms for minimax optimization without strong concavity. In *International Conference on Artificial Intelligence and Statistics*, pp. 5485–5517. PMLR, 2022c.

TaeHo Yoon and Ernest K Ryu. Accelerated algorithms for smooth convex-concave minimax problems with $\mathcal{O}(1/k^2)$ rate on squared gradient norm. In *International Conference on Machine Learning*, pp. 12098–12109. PMLR, 2021.

Hao Yu, Rong Jin, and Sen Yang. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. In *International Conference on Machine Learning*, pp. 7184–7193. PMLR, 2019.

Chulhee Yun, Shashank Rajput, and Suvrit Sra. Minibatch vs local SGD with shuffling: Tight convergence bounds and beyond. In *International Conference on Learning Representations*, 2022.

Jiawei Zhang, Peijun Xiao, Ruoyu Sun, and Zhiquan Luo. A single-loop smoothed gradient descent-ascent algorithm for nonconvex-concave min-max problems.
In *Advances in Neural Information Processing Systems*, volume 33, pp. 7377–7389, 2020.

Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, and Niao He. The complexity of nonconvex-strongly-concave minimax optimization. In *Conference on Uncertainty in Artificial Intelligence*, pp. 482–492. PMLR, 2021.

Xuan Zhang, Necdet Serhat Aybat, and Mert Gurbuzbalaban. SAPD+: An accelerated stochastic method for nonconvex-concave minimax problems. *arXiv preprint arXiv:2205.15084*, 2022.

Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-iid data. *arXiv preprint arXiv:1806.00582*, 2018.

## Contents

1 Introduction
2 Related Work
  2.1 Single-client minimax
  2.2 Distributed/Federated Minimax
3 Preliminaries
4 Algorithm for Heterogeneous Federated Minimax Optimization
  4.1 Limitations of Local SGDA
  4.2 Proposed Normalized Federated Minimax Algorithm
5 Convergence Results
  5.1 Non-convex-Strongly-Concave (NC-SC) Case
  5.2 Non-convex-Concave (NC-C) Case
6 Experiments
7 Conclusion
A Background
  A.1 Gradient Aggregation with Different Solvers at Clients
  A.2 Auxiliary Results
B Convergence of Fed-Norm-SGDA for Nonconvex-Strongly-Concave Functions (Theorem 1)
  B.1 Intermediate Lemmas
  B.2 Proof of Theorem 1
  B.3 Proofs of the Intermediate Lemmas
  B.4 Auxiliary Lemmas
  B.5 Convergence under the Polyak-Łojasiewicz (PŁ) Condition
C Convergence of Fed-Norm-SGDA+ for Nonconvex-Concave Functions (Theorem 2)
  C.1 Intermediate Lemmas
  C.2 Proof of Theorem 2
  C.3 Proofs of the Intermediate Lemmas
  C.4 Extending the Result to Nonconvex One-Point-Concave (NC-1PC) Functions
D Additional Experiments

## Appendix

The appendix is organized as follows. In Appendix A, we collect some basic mathematical results and inequalities used throughout the paper. In Appendix B, we prove the non-asymptotic convergence of Fed-Norm-SGDA (Algorithm 1) for smooth nonconvex-strongly-concave (and nonconvex-PŁ) functions, and derive the gradient complexity and communication cost of the algorithm to reach an ϵ-stationary point. In Appendix C, we prove the non-asymptotic convergence of Fed-Norm-SGDA+ (Algorithm 1) for smooth nonconvex-concave and nonconvex-one-point-concave functions. Finally, in Appendix D, we provide the details of the additional experiments we performed.
## A Background

## A.1 Gradient Aggregation With Different Solvers At Clients

**Local SGDA.** Suppose $\tau_i^{(t)} = \tau_{\text{eff}}^{(t)} = \tau$ for all $i \in [n]$, $t \in [T]$, and $a_i^{(t,k)} = 1$ for all $k \in [\tau]$ and all $t$. Then the local iterate updates in Algorithm 1 (Fed-Norm-SGDA) reduce to (the updates for Fed-Norm-SGDA+ are analogous)

$$\begin{aligned}
\mathbf{x}_{i}^{(t,k+1)}&=\mathbf{x}_{i}^{(t,k)}-\eta_{x}^{c}\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}),\\
\mathbf{y}_{i}^{(t,k+1)}&=\mathbf{y}_{i}^{(t,k)}+\eta_{y}^{c}\nabla_{y}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}),
\end{aligned}$$

for $k \in \{0,\ldots,\tau-1\}$, and the gradient aggregate vectors $(\mathbf{g}_{\mathbf{x},i}^{(t)}, \mathbf{g}_{\mathbf{y},i}^{(t)})$ are simply the averages of the individual gradients:

$$\mathbf{g}_{\mathbf{x},i}^{(t)}=\frac{1}{\tau}\sum_{k=0}^{\tau-1}\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}),\qquad\mathbf{g}_{\mathbf{y},i}^{(t)}=\frac{1}{\tau}\sum_{k=0}^{\tau-1}\nabla_{y}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}).$$

Note that these are precisely the iterates of Local SGDA, proposed in Deng & Mahdavi (2021); Sharma et al. (2022), with the only difference that in Local SGDA the clients communicate the iterates $\{\mathbf{x}_i^{(t,\tau)}, \mathbf{y}_i^{(t,\tau)}\}$ to the server, which averages them to compute $\{\mathbf{x}^{(t+1)}, \mathbf{y}^{(t+1)}\}$, whereas here the clients communicate $\{\mathbf{g}_{\mathbf{x},i}^{(t)}, \mathbf{g}_{\mathbf{y},i}^{(t)}\}$. Also, in Fed-Norm-SGDA, the clients and the server use separate learning rates, which results in tighter bounds on the local-updates error.

**With Momentum in Local Updates.** Suppose each client uses a local momentum buffer with momentum scale $\rho$. Then, for $k \in \{0,\ldots,\tau_i^{(t)}-1\}$,

$$\begin{aligned}
\mathbf{d}_{\mathbf{x},i}^{(t,k+1)}&=\rho\,\mathbf{d}_{\mathbf{x},i}^{(t,k)}+\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}), &\qquad \mathbf{x}_{i}^{(t,k+1)}&=\mathbf{x}_{i}^{(t,k)}-\eta_{x}^{c}\,\mathbf{d}_{\mathbf{x},i}^{(t,k+1)},\\
\mathbf{d}_{\mathbf{y},i}^{(t,k+1)}&=\rho\,\mathbf{d}_{\mathbf{y},i}^{(t,k)}+\nabla_{y}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}), &\qquad \mathbf{y}_{i}^{(t,k+1)}&=\mathbf{y}_{i}^{(t,k)}+\eta_{y}^{c}\,\mathbf{d}_{\mathbf{y},i}^{(t,k+1)}.
\end{aligned}$$

Simple calculations show that the coefficient of $\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)})$ (and likewise of $\nabla_{y}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)})$) in the gradient aggregate vector $(\mathbf{g}_{\mathbf{x},i}^{(t)}, \mathbf{g}_{\mathbf{y},i}^{(t)})$ is

$$\sum_{j=k}^{\tau_{i}^{(t)}-1}\rho^{j-k}=1+\rho+\cdots+\rho^{\tau_{i}^{(t)}-1-k}=\frac{1-\rho^{\tau_{i}^{(t)}-k}}{1-\rho}.$$

Therefore, the aggregation vector is $\bar{\mathbf{a}}_{i}^{(t)}=\frac{1}{1-\rho}\big[1-\rho^{\tau_{i}^{(t)}},\,1-\rho^{\tau_{i}^{(t)}-1},\,\ldots,\,1-\rho\big]$, and

$$\|\bar{\mathbf{a}}_{i}^{(t)}\|_{1}=\sum_{k=0}^{\tau_{i}^{(t)}-1}\frac{1-\rho^{\tau_{i}^{(t)}-k}}{1-\rho}=\frac{1}{1-\rho}\left[\tau_{i}^{(t)}-\frac{\rho(1-\rho^{\tau_{i}^{(t)}})}{1-\rho}\right].$$
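For concreteness, the following sketch (our own illustration, not part of the paper's artifact; the function and variable names are ours) computes the momentum aggregation weights $\bar{\mathbf{a}}_{i}^{(t)}$ and numerically checks the closed form for $\|\bar{\mathbf{a}}_{i}^{(t)}\|_{1}$ derived above; as $\rho \to 0$, the weights reduce to the all-ones vector of Local SGDA.

```python
import numpy as np

def momentum_aggregation_weights(tau: int, rho: float) -> np.ndarray:
    """Weight of the k-th stochastic gradient in the client aggregate when the
    client runs `tau` local steps with momentum `rho`:
        sum_{j=k}^{tau-1} rho^(j-k) = (1 - rho^(tau - k)) / (1 - rho)."""
    k = np.arange(tau)
    return (1.0 - rho ** (tau - k)) / (1.0 - rho)

tau, rho = 10, 0.9
a_bar = momentum_aggregation_weights(tau, rho)

# Closed form for ||a_bar||_1 derived above.
l1_closed_form = (tau - rho * (1.0 - rho**tau) / (1.0 - rho)) / (1.0 - rho)
assert np.isclose(a_bar.sum(), l1_closed_form)

# As rho -> 0, the weights tend to the all-ones vector, recovering Local SGDA.
assert np.allclose(momentum_aggregation_weights(tau, 1e-12), np.ones(tau))

# Normalized weights a_bar / ||a_bar||_1, as used in g_{x,i}^{(t)}, g_{y,i}^{(t)}.
print(a_bar / a_bar.sum())
```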
## A.2 Auxiliary Results

Remark 7 (Impact of heterogeneity $\sigma_G$ even with $\tau = 1$). Consider two simple minimization problems:

$$\textbf{(P1):}\ \min_{\mathbf{x}}\frac{1}{n}\sum_{i=1}^{n}f_{i}(\mathbf{x})\qquad\text{and}\qquad\textbf{(P2):}\ \min_{\mathbf{x}}f(\mathbf{x}).$$

(P1) is a simple distributed minimization problem with $n$ clients, which we solve using synchronous distributed SGD: at iteration $t$, each client $i$ computes the stochastic gradient $\nabla f_{i}(\mathbf{x}^{(t)};\xi_{i}^{(t)})$ and sends it to the server, which averages these and takes a step in the direction $\frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}(\mathbf{x}^{(t)};\xi_{i}^{(t)})$. On the other hand, (P2) is a centralized minimization problem, where at each iteration $t$ the agent computes a stochastic gradient estimator with batch size $n$, $\frac{1}{n}\sum_{i=1}^{n}\nabla f(\mathbf{x}^{(t)};\xi_{i}^{(t)})$. We compare the variances of the two global gradient estimators as follows:

$$\textbf{(P1):}\quad\mathbb{E}\Big\|\frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}(\mathbf{x}^{(t)};\xi_{i}^{(t)})-\nabla f(\mathbf{x}^{(t)})\Big\|^{2}\leq\frac{1}{n^{2}}\sum_{i=1}^{n}\Big[\sigma_{L}^{2}+\beta_{L}^{2}\,\mathbb{E}\big\|\nabla f_{i}(\mathbf{x}^{(t)})\big\|^{2}\Big]\leq\frac{\sigma_{L}^{2}}{n}+\frac{\beta_{L}^{2}}{n}\Big[\beta_{G}^{2}\,\mathbb{E}\big\|\nabla f(\mathbf{x}^{(t)})\big\|^{2}+\sigma_{G}^{2}\Big],$$

$$\textbf{(P2):}\quad\mathbb{E}\Big\|\frac{1}{n}\sum_{i=1}^{n}\nabla f(\mathbf{x}^{(t)};\xi_{i}^{(t)})-\nabla f(\mathbf{x}^{(t)})\Big\|^{2}\leq\frac{\sigma_{L}^{2}}{n}+\frac{\beta_{L}^{2}}{n}\,\mathbb{E}\big\|\nabla f(\mathbf{x}^{(t)})\big\|^{2}.$$

Since almost all existing works consider the local variance bound (Assumption 3) with $\beta_L = 0$, the global gradient estimators in both synchronous distributed SGD (P1) and single-agent minibatch SGD (P2) have the same $\sigma_{L}^{2}/n$ variance bound. Therefore, in most existing federated works on minimization Wang et al. (2020); Yang et al. (2021) and minimax problems Sharma et al. (2022), the full-synchronization error only depends on the local variance $\sigma_{L}^{2}$. However, as seen above, for $\beta_L > 0$ this *apparent equivalence* breaks down. Koloskova et al. (2020), which considers a similar local variance assumption for minimization problems, shows a similar dependence on the heterogeneity $\sigma_G$.
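The phenomenon in Remark 7 is easy to check numerically. In the sketch below (our own illustration; the quadratic losses $f_i(x) = (x - c_i)^2/2$ and the Gaussian noise model are assumptions, chosen so that Assumption 3 holds with equality), the variance of the (P1) estimator matches $\sigma_L^2/n$ when the clients are identical, and strictly exceeds it when they are heterogeneous and $\beta_L > 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_L, beta_L, x = 20, 1.0, 1.0, 0.0

def p1_estimator_variance(c, trials=100_000):
    """Monte Carlo estimate of E|| (1/n) sum_i grad f_i(x; xi_i) - grad f(x) ||^2
    for f_i(x) = (x - c_i)^2 / 2, with per-client gradient-noise variance
    sigma_L^2 + beta_L^2 * |grad f_i(x)|^2 (Assumption 3 with equality)."""
    g = x - c                                    # exact per-client gradients
    noise_std = np.sqrt(sigma_L**2 + beta_L**2 * g**2)
    samples = g + noise_std * rng.standard_normal((trials, n))
    return np.mean((samples.mean(axis=1) - g.mean()) ** 2)

c_homog = np.zeros(n)              # identical clients: sigma_G = 0
c_heter = rng.normal(0.0, 3.0, n)  # heterogeneous clients: sigma_G > 0

print(p1_estimator_variance(c_homog))  # ~ sigma_L^2 / n = 0.05
print(p1_estimator_variance(c_heter))  # noticeably larger: heterogeneity enters
```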
Lemma A.1 (Young's inequality). *Given two vectors* $\mathbf{u}, \mathbf{v} \in \mathbb{R}^{d}$, *the Euclidean inner product can be bounded as*

$$\langle\mathbf{u},\mathbf{v}\rangle\leq\frac{\|\mathbf{u}\|^{2}}{2\gamma}+\frac{\gamma\|\mathbf{v}\|^{2}}{2}$$

*for every constant* $\gamma > 0$.

Lemma A.2 (Strong concavity). *A function* $g : \mathcal{X}\times\mathcal{Y}\to\mathbb{R}$ *is* $\mu$*-strongly concave in* $\mathbf{y}$ *if there exists a constant* $\mu > 0$ *such that for all* $\mathbf{x}\in\mathcal{X}$ *and all* $\mathbf{y},\mathbf{y}'\in\mathcal{Y}$*, the following inequality holds:*

$$g(\mathbf{x},\mathbf{y})\leq g(\mathbf{x},\mathbf{y}')+\langle\nabla_{y}g(\mathbf{x},\mathbf{y}'),\mathbf{y}-\mathbf{y}'\rangle-\frac{\mu}{2}\|\mathbf{y}-\mathbf{y}'\|^{2}.$$

Lemma A.3 (Jensen's inequality). *Given a convex function* $f$ *and a random variable* $X$*, the following holds:*

$$f(\mathbb{E}[X])\leq\mathbb{E}[f(X)].$$

Lemma A.4 (Sum of squares). *For a positive integer* $K$ *and a set of vectors* $\mathbf{x}_{1},\ldots,\mathbf{x}_{K}$*, the following holds:*

$$\Big\|\sum_{k=1}^{K}\mathbf{x}_{k}\Big\|^{2}\leq K\sum_{k=1}^{K}\|\mathbf{x}_{k}\|^{2}.$$

Lemma A.5 (Quadratic growth condition, Karimi et al. (2016)). *If a function* $g$ *satisfies Assumptions 1, 5, then for all* $\mathbf{x}$*, the following conditions hold:*

$$g(\mathbf{x})-\min_{\mathbf{z}}g(\mathbf{z})\geq\frac{\mu}{2}\|\mathbf{x}_{p}-\mathbf{x}\|^{2},\qquad\|\nabla g(\mathbf{x})\|^{2}\geq2\mu\Big(g(\mathbf{x})-\min_{\mathbf{z}}g(\mathbf{z})\Big),$$

*where* $\mathbf{x}_{p}$ *is the projection of* $\mathbf{x}$ *onto the set of minimizers of* $g$.

Lemma A.6. *For an* $L$*-smooth, convex function* $g$*, the following inequality holds:*

$$\|\nabla g(\mathbf{y})-\nabla g(\mathbf{x})\|^{2}\leq2L\big[g(\mathbf{y})-g(\mathbf{x})-\nabla g(\mathbf{x})^{\top}(\mathbf{y}-\mathbf{x})\big].\tag{9}$$

Lemma A.7 (Proposition 6 in Cho & Yun (2022)). *For an* $L$*-smooth function* $g$ *which is bounded below by* $g^{*}$*, the following inequality holds for all* $\mathbf{x}$*:*

$$\|\nabla g(\mathbf{x})\|^{2}\leq2L\big[g(\mathbf{x})-g^{*}\big].\tag{10}$$

## B Convergence Of Fed-Norm-SGDA For Nonconvex-Strongly-Concave Functions (Theorem 1)

We organize this section as follows. First, in Appendix B.1, we present some intermediate results, which we use to prove the main theorem. Next, in Appendix B.2, we present the proof of Theorem 1, followed by the proofs of the intermediate results in Appendix B.3. Appendix B.4 contains some auxiliary results. Finally, in Appendix B.5, we discuss the convergence result for nonconvex-PŁ functions.

The problem we solve is

$$\min_{\mathbf{x}}\max_{\mathbf{y}}\Big\{\widetilde F(\mathbf{x},\mathbf{y})\triangleq\sum_{i=1}^{n}w_{i}f_{i}(\mathbf{x},\mathbf{y})\Big\}.$$

We define $\widetilde\Phi(\mathbf{x})\triangleq\max_{\mathbf{y}}\widetilde F(\mathbf{x},\mathbf{y})$ and $\widetilde{\mathbf{y}}^{*}(\mathbf{x})\in\arg\max_{\mathbf{y}}\widetilde F(\mathbf{x},\mathbf{y})$. Since $\widetilde F(\mathbf{x},\cdot)$ is $\mu$-strongly concave, $\widetilde{\mathbf{y}}^{*}(\mathbf{x})$ is unique. In Fed-Norm-SGDA (Algorithm 1), the client updates are given by

$$\begin{aligned}
\mathbf{x}_{i}^{(t,k)}&=\mathbf{x}^{(t)}-\eta_{x}^{c}\sum_{j=0}^{k-1}a_{i}^{(j)}(k)\,\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,j)},\mathbf{y}_{i}^{(t,j)};\xi_{i}^{(t,j)}),\\
\mathbf{y}_{i}^{(t,k)}&=\mathbf{y}^{(t)}+\eta_{y}^{c}\sum_{j=0}^{k-1}a_{i}^{(j)}(k)\,\nabla_{y}f_{i}(\mathbf{x}_{i}^{(t,j)},\mathbf{y}_{i}^{(t,j)};\xi_{i}^{(t,j)}),
\end{aligned}\tag{11}$$

where $1 \leq k \leq \tau_i$. These client updates are then aggregated to compute $\{\mathbf{g}_{\mathbf{x},i}^{(t)}, \mathbf{g}_{\mathbf{y},i}^{(t)}\}$ and their deterministic counterparts $\{\mathbf{h}_{\mathbf{x},i}^{(t)}, \mathbf{h}_{\mathbf{y},i}^{(t)}\}$:

$$\begin{aligned}
\mathbf{g}_{\mathbf{x},i}^{(t)}&=\frac{1}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,\nabla_{x}f_{i}\big(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}\big); &\qquad \mathbf{h}_{\mathbf{x},i}^{(t)}&=\frac{1}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,\nabla_{x}f_{i}\big(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)}\big);\\
\mathbf{g}_{\mathbf{y},i}^{(t)}&=\frac{1}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,\nabla_{y}f_{i}\big(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}\big); &\qquad \mathbf{h}_{\mathbf{y},i}^{(t)}&=\frac{1}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,\nabla_{y}f_{i}\big(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)}\big).
\end{aligned}$$

Remark 8. Note that we have made explicit the dependence on $k$ in $a_i^{(j)}(k)$ above; this was omitted in the main paper to avoid tedious notation. However, for some local optimizers, such as local momentum at the clients (Appendix A.1), the coefficients $a_i^{(j)}(k)$ change with $k$. We assume in our subsequent analysis that $a_i^{(j)}(k) \leq \alpha$ for all $j \in \{0,1,\ldots,k-1\}$ and all $k \in \{1,2,\ldots,\tau_i\}$. We also use the shorthand $\mathbf{a}_i \triangleq \mathbf{a}_i(\tau_i)$.

At iteration $t$, the server samples $|\mathcal{C}^{(t)}|$ clients *without* replacement **(WOR)** uniformly at random. While aggregating at the server, client $i$'s update is weighted by $\tilde w_i = w_i n/|\mathcal{C}^{(t)}|$. The aggregates $(\mathbf{g}_{\mathbf{x}}^{(t)}, \mathbf{g}_{\mathbf{y}}^{(t)})$ computed at the server are of the form

$$\mathbf{g}_{\mathbf{x}}^{(t)}=\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{g}_{\mathbf{x},i}^{(t)},\quad\text{such that}\quad\mathbb{E}_{\mathcal{C}^{(t)}}\big[\mathbf{g}_{\mathbf{x}}^{(t)}\big]=\mathbb{E}_{\mathcal{C}^{(t)}}\Big[\sum_{i=1}^{n}\mathbb{I}(i\in\mathcal{C}^{(t)})\,\tilde w_{i}\,\mathbf{g}_{\mathbf{x},i}^{(t)}\Big]=\sum_{i=1}^{n}w_{i}\,\mathbf{g}_{\mathbf{x},i}^{(t)},\tag{12}$$

and analogously $\mathbf{g}_{\mathbf{y}}^{(t)}=\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{g}_{\mathbf{y},i}^{(t)}$ with $\mathbb{E}_{\mathcal{C}^{(t)}}\big[\mathbf{g}_{\mathbf{y}}^{(t)}\big]=\sum_{i=1}^{n}w_{i}\,\mathbf{g}_{\mathbf{y},i}^{(t)}$. For simplicity of analysis, unless stated otherwise, we assume that $|\mathcal{C}^{(t)}| = P$ for all $t$. Finally, the server updates the $\mathbf{x}, \mathbf{y}$ variables as

$$\mathbf{x}^{(t+1)}=\mathbf{x}^{(t)}-\tau_{\text{eff}}\gamma_{x}^{s}\mathbf{g}_{\mathbf{x}}^{(t)},\qquad\mathbf{y}^{(t+1)}=\mathbf{y}^{(t)}+\tau_{\text{eff}}\gamma_{y}^{s}\mathbf{g}_{\mathbf{y}}^{(t)}.$$
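The following sketch (our own illustration; all names are invented) implements this server-side sampling and reweighting step and verifies the unbiasedness property (12) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, P, d = 10, 4, 3
w = rng.random(n); w /= w.sum()          # aggregation weights, sum_i w_i = 1
g = rng.standard_normal((n, d))          # per-client aggregates g_{x,i}^{(t)}

def server_aggregate(w, g, P, rng):
    """One WOR sampling round: returns g_x^{(t)} = sum_{i in C} w~_i g_{x,i}."""
    C = rng.choice(len(w), size=P, replace=False)
    w_tilde = w[C] * len(w) / P          # w~_i = w_i * n / P
    return w_tilde @ g[C]

# E_C [ g_x^{(t)} ] should equal sum_i w_i g_{x,i}^{(t)}, per Equation (12).
avg = np.mean([server_aggregate(w, g, P, rng) for _ in range(100_000)], axis=0)
print(np.max(np.abs(avg - w @ g)))       # ~ 0, up to Monte Carlo error
```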
We denote by $\mathcal{F}^{(t')}$ the $\sigma$-algebra generated by $\{\{\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)}\}_{i,k}\}_{t=0}^{t'-1}$. Throughout, we denote the conditional expectation $\mathbb{E}[\,\cdot\,|\,\mathcal{F}^{(t)}]$ by the shorthand $\mathbb{E}_{t}[\,\cdot\,]$.

## B.1 Intermediate Lemmas

We begin with the following result from Nouiehed et al. (2019) about the smoothness of $\widetilde\Phi(\cdot)$.

Lemma B.1. *If a function* $f(\cdot,\cdot)$ *satisfies Assumptions 1, 5 (*$L_f$*-smoothness and* $\mu$*-strong concavity in* $\mathbf{y}$*), then* $\phi(\cdot)\triangleq\max_{\mathbf{y}}f(\cdot,\mathbf{y})$ *is* $L_{\Phi}$*-smooth with* $L_{\Phi}=\kappa L_{f}/2+L_{f}$*, where* $\kappa=L_{f}/\mu$ *is the condition number.*

Lemma B.2. *Suppose the local client loss functions* $\{f_i\}$ *satisfy Assumptions 1, 4, and the stochastic oracles for the local functions satisfy Assumption 3. Suppose the server selects* $P$ *clients in each round. Then the iterates generated by Fed-Norm-SGDA (Algorithm 1) satisfy*

$$\begin{aligned}
\mathbb{E}_{t}\big\|\mathbf{g}_{\mathbf{x}}^{(t)}\big\|^{2}&=\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{g}_{\mathbf{x},i}^{(t)}\Big\|^{2}\\
&\leq\frac{n(P-1)}{P(n-1)}\,\mathbb{E}_{t}\Big\|\sum_{i=1}^{n}w_{i}\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}+\frac{n}{P}\sum_{i=1}^{n}\frac{w_{i}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\sum_{k=0}^{\tau_{i}-1}[a_{i}^{(k)}(\tau_{i})]^{2}\Big(\sigma_{L}^{2}+\beta_{L}^{2}\,\mathbb{E}_{t}\big\|\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)})\big\|^{2}\Big)\\
&\quad+\frac{n(n-P)}{n-1}\Bigg[\frac{2L_{f}^{2}}{P}\sum_{i=1}^{n}\frac{w_{i}^{2}}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,\Delta_{\mathbf{x},\mathbf{y}}^{(t,k)}(i)+\frac{2(\max_{i}w_{i})}{P}\Big(\beta_{G}^{2}\big\|\nabla_{x}\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big\|^{2}+\sigma_{G}^{2}\Big)\Bigg],
\end{aligned}\tag{13}$$

*where* $\Delta_{\mathbf{x},\mathbf{y}}^{(t,k)}(i)\triangleq\mathbb{E}_{t}\big[\|\mathbf{x}_{i}^{(t,k)}-\mathbf{x}^{(t)}\|^{2}+\|\mathbf{y}_{i}^{(t,k)}-\mathbf{y}^{(t)}\|^{2}\big]$ *is the iterate drift for client* $i$ *at local iteration* $k$ *in the* $t$*-th communication round.*

Lemma B.3. *Suppose the local client loss functions* $\{f_i\}$ *satisfy Assumptions 1, 4, 5, and the stochastic oracles for the local functions satisfy Assumption 3. Also, the server learning rate* $\gamma_x^s$ *satisfies*

$$64\tau_{\text{eff}}\gamma_{x}^{s}L_{\Phi}\beta_{L}^{2}\beta_{G}^{2}\,\frac{n}{P}\Big(\max_{i}\frac{w_{i}\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\Big)\leq1,\qquad8\tau_{\text{eff}}\gamma_{x}^{s}L_{\Phi}(\max_{i}w_{i})\,\frac{n}{P}\cdot\frac{n-P}{n-1}\max\{8\beta_{G}^{2},1\}\leq1,\qquad8\tau_{\text{eff}}\gamma_{x}^{s}L_{\Phi}\beta_{L}^{2}\,\frac{n}{P}\Big(\max_{i,k}\frac{w_{i}a_{i}^{(k)}(\tau_{i})}{\|\mathbf{a}_{i}\|_{1}}\Big)\leq1.$$

*Then the iterates generated by Algorithm 1 satisfy*

$$\begin{aligned}
\mathbb{E}_{t}\big[\widetilde\Phi(\mathbf{x}^{(t+1)})-\widetilde\Phi(\mathbf{x}^{(t)})\big]&\leq-\frac{7\tau_{\text{eff}}\gamma_{x}^{s}}{16}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}-\frac{\tau_{\text{eff}}\gamma_{x}^{s}}{2}\Big(1-\frac{n(P-1)}{P(n-1)}\tau_{\text{eff}}\gamma_{x}^{s}L_{\Phi}\Big)\mathbb{E}_{t}\Big\|\sum_{i=1}^{n}w_{i}\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}\\
&\quad+\frac{5}{4}\tau_{\text{eff}}\gamma_{x}^{s}L_{f}^{2}\sum_{i=1}^{n}\frac{w_{i}}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,\Delta_{\mathbf{x},\mathbf{y}}^{(t,k)}(i)+\frac{9\tau_{\text{eff}}\gamma_{x}^{s}L_{f}^{2}}{4\mu}\big[\widetilde\Phi(\mathbf{x}^{(t)})-\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big]\\
&\quad+\frac{\tau_{\text{eff}}^{2}[\gamma_{x}^{s}]^{2}L_{\Phi}}{2}\,\frac{n}{P}\Bigg[\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}^{2}\frac{\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}+\sigma_{G}^{2}\Bigg(2(\max_{i}w_{i})\frac{n-P}{n-1}+2\beta_{L}^{2}\max_{i}\frac{w_{i}\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\Bigg)\Bigg].
\end{aligned}\tag{14}$$

Remark. The bound in Equation (14) looks very similar to the corresponding one-step decay bound for simple smooth minimization problems. The major difference is the presence of $\big[\widetilde\Phi(\mathbf{x}^{(t)})-\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big]$, which quantifies the inaccuracy of $\mathbf{y}^{(t)}$ in solving the max problem $\max_{\mathbf{y}}\widetilde F(\mathbf{x}^{(t)},\mathbf{y})$. The term $\sum_{i=1}^{n}\frac{w_{i}}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\Delta_{\mathbf{x},\mathbf{y}}^{(t,k)}(i)$ is the client drift, and is bounded in Lemma B.4 below.

Lemma B.4. *Suppose the local loss functions* $\{f_i\}$ *satisfy Assumptions 1, 4, 5, and the stochastic oracles for the local functions satisfy Assumption 3. Further, in Algorithm 1, we choose learning rates* $\eta_x^c, \eta_y^c$ *such that* $\max\{\eta_{x}^{c},\eta_{y}^{c}\}\leq\frac{1}{2L_{f}(\max_{i}\|\mathbf{a}_{i}\|_{1})\sqrt{2(1+\beta_{L}^{2})}}$*. Then the iterates* $\{\mathbf{x}_i^{(t)}, \mathbf{y}_i^{(t)}\}$ *generated by Fed-Norm-SGDA (Algorithm 1) satisfy*

$$\begin{aligned}
L_{f}^{2}\sum_{i=1}^{n}\frac{w_{i}}{\|\mathbf{a}_{i}\|_{1}}\sum_{k=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,\Delta_{\mathbf{x},\mathbf{y}}^{(t,k)}(i)&\leq2\big([\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)L_{f}^{2}\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}\|\mathbf{a}_{i,-1}\|_{2}^{2}+4L_{f}^{2}M_{\mathbf{a}_{-1}}\big([\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)\sigma_{G}^{2}\\
&\quad+8L_{f}^{2}M_{\mathbf{a}_{-1}}\beta_{G}^{2}[\eta_{x}^{c}]^{2}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}+8L_{f}^{3}M_{\mathbf{a}_{-1}}\beta_{G}^{2}\big(2\kappa[\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)\big[\widetilde\Phi(\mathbf{x}^{(t)})-\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big],
\end{aligned}$$

*where* $M_{\mathbf{a}_{-1}}\triangleq\max_{i}\big(\|\mathbf{a}_{i,-1}\|_{1}^{2}+\beta_{L}^{2}\|\mathbf{a}_{i,-1}\|_{2}^{2}\big)$.

Lemma B.5. *Suppose the local loss functions* $\{f_i\}$ *satisfy Assumptions 1, 4, 5, and the stochastic oracles for the local functions satisfy Assumption 3. The server learning rates* $\gamma_x^s, \gamma_y^s$ *satisfy the following conditions:*

$$\tau_{\text{eff}}\gamma_{y}^{s}\kappa L_{f}\beta_{G}^{2}\,\frac{n}{P}\max\left\{\beta_{L}^{2}\max_{i}\frac{w_{i}\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}},\ \frac{n-P}{n-1}\max_{i}w_{i}\right\}\leq\frac{1}{64},\qquad\gamma_{x}^{s}\leq\frac{\gamma_{y}^{s}}{81\kappa^{2}},$$

$$8\tau_{\text{eff}}L_{f}\gamma_{y}^{s}\,\frac{n}{P}\max\left\{\frac{n-P}{n-1}\max_{i}w_{i},\ \beta_{L}^{2}\max_{i,k}\frac{w_{i}a_{i}^{(k)}(\tau_{i})}{\|\mathbf{a}_{i}\|_{1}}\right\}\leq1.$$

*The client learning rates* $\eta_x^c, \eta_y^c$ *satisfy* $\eta_{y}^{c}L_{f}\beta_{G}\leq\frac{1}{16\sqrt{\kappa M_{\mathbf{a}_{-1}}}}$ *and* $\eta_{x}^{c}L_{f}\beta_{G}\leq\frac{1}{64\kappa\sqrt{M_{\mathbf{a}_{-1}}}}$*, respectively.*
*Then the iterates generated by Fed-Norm-SGDA (Algorithm 1) satisfy*

$$\begin{aligned}
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big[\widetilde\Phi(\mathbf{x}^{(t)})-\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big]&\leq\frac{4\big[\widetilde\Phi(\mathbf{x}^{(0)})-\widetilde F(\mathbf{x}^{(0)},\mathbf{y}^{(0)})\big]}{\tau_{\text{eff}}\gamma_{y}^{s}\mu T}+\frac{1}{12\mu\kappa^{2}}\,\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}\\
&\quad+\frac{4\tau_{\text{eff}}[\gamma_{x}^{s}]^{2}L_{\Phi}}{\gamma_{y}^{s}\mu}\,\frac{n(P-1)}{P(n-1)}\,\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\Big\|\sum_{i=1}^{n}w_{i}\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}\\
&\quad+8\tau_{\text{eff}}\gamma_{y}^{s}\kappa\,\frac{n}{P}\Bigg[\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}^{2}\frac{\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}+2\sigma_{G}^{2}\Bigg(\frac{n-P}{n-1}\max_{i}w_{i}+\beta_{L}^{2}\max_{i}\frac{w_{i}\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\Bigg)\Bigg]\\
&\quad+8\kappa L_{f}\big([\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)\Bigg[\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}\|\mathbf{a}_{i,-1}\|_{2}^{2}+2\sigma_{G}^{2}M_{\mathbf{a}_{-1}}\Bigg].
\end{aligned}\tag{15}$$

Remark 9. The proof of Lemma B.5 differs from similar results in the existing literature Sharma et al. (2022); Yang et al. (2022a). As in these works, if all the clients run the same number of local steps ($\tau_i = \tau$ for all $i$), we can define virtual sequences of average iterates $\bar{\mathbf{x}}^{(t,k)}=\frac{1}{P}\sum_{i\in\mathcal{C}^{(t)}}\mathbf{x}_{i}^{(t,k)}$, $\bar{\mathbf{y}}^{(t,k)}=\frac{1}{P}\sum_{i\in\mathcal{C}^{(t)}}\mathbf{y}_{i}^{(t,k)}$, for all $k \in [0, \tau-1]$ and all $t$. Define $\mathcal{F}'(t,k)$ as the $\sigma$-algebra

$$\mathcal{F}^{\prime}(t,k)\triangleq\sigma\left\{\{\{\mathbf{x}_{i}^{(s,k)},\mathbf{y}_{i}^{(s,k)}\}_{i,k}\}_{s=0}^{t-1}\,\bigcup\,\big\{\{\mathbf{x}_{i}^{(t,j)},\mathbf{y}_{i}^{(t,j)}\}_{i}\big\}_{j=0}^{k-1}\right\}.$$

Since, conditioned on $\mathcal{F}'(t,k)$, $\bar{\mathbf{x}}^{(t,k+1)}\perp\{\nabla_{y}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)})\}_{i=1}^{n}$, working with $\{\bar{\mathbf{x}}^{(t,k)},\bar{\mathbf{y}}^{(t,k)}\}$ considerably simplifies the analysis. However, with $\tau_i \neq \tau_j$, the virtual sequences $\{\bar{\mathbf{x}}^{(t,k)},\bar{\mathbf{y}}^{(t,k)}\}$ can no longer be defined for all $k$. Hence, we need an alternate proof strategy.

## B.2 Proof Of Theorem 1

For the sake of completeness, we first state the full statement of Theorem 1 here.

Theorem. *Suppose the local loss functions* $\{f_i\}_i$ *satisfy Assumptions 1, 3, 4, 5. Suppose the server selects clients using a without-replacement sampling scheme* **(WOR)**. *Also, the server learning rates* $\gamma_x^s, \gamma_y^s$ *and the client learning rates* $\eta_x^c, \eta_y^c$ *satisfy the conditions specified in Lemma B.5. Then the iterates generated by Fed-Norm-SGDA (Algorithm 1) satisfy*

$$\begin{aligned}
\min_{t\in[0:T-1]}\mathbb{E}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}&\leq\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}\\
&\leq\underbrace{\mathcal{O}\left(\kappa^{2}\left[\frac{\Delta_{\widetilde\Phi}}{\tau_{\text{eff}}\gamma_{y}^{s}T}+\frac{\gamma_{y}^{s}L_{f}}{P}\big(A_{w}\sigma_{L}^{2}+B_{w}\beta_{L}^{2}\sigma_{G}^{2}\big)\right]\right)}_{\text{Error with full synchronization}}+\underbrace{\mathcal{O}\left(\kappa^{2}\,\frac{n-P}{n-1}\,\frac{\gamma_{y}^{s}L_{f}E_{w}\tau_{\text{eff}}\sigma_{G}^{2}}{P}\right)}_{\text{Partial participation error}}+\underbrace{\mathcal{O}\Big(\kappa^{2}\big([\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)L_{f}^{2}\big[C_{w}\sigma_{L}^{2}+D\sigma_{G}^{2}\big]\Big)}_{\text{Error due to local updates}},
\end{aligned}$$

*where* $\kappa = L_f/\mu$ *is the condition number,* $\widetilde\Phi(\mathbf{x})\triangleq\max_{\mathbf{y}}\widetilde F(\mathbf{x},\mathbf{y})$ *is the envelope function,* $\Delta_{\widetilde\Phi}\triangleq\widetilde\Phi(\mathbf{x}^{(0)})-\min_{\mathbf{x}}\widetilde\Phi(\mathbf{x})$*,* $A_{w}\triangleq n\tau_{\text{eff}}\sum_{i=1}^{n}w_{i}^{2}\frac{\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}$*,* $B_{w}\triangleq n\tau_{\text{eff}}\max_{i}\frac{w_{i}\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}$*,* $C_{w}\triangleq\sum_{i=1}^{n}w_{i}\big(\|\mathbf{a}_{i}\|_{2}^{2}-[a_{i}^{(\tau_{i}-1)}]^{2}\big)$*,* $D\triangleq\max_{i}\big(\beta_{L}^{2}\|\mathbf{a}_{i,-1}\|_{2}^{2}+\|\mathbf{a}_{i,-1}\|_{1}^{2}\big)$*, where* $\mathbf{a}_{i,-1}\triangleq[a_{i}^{(0)},a_{i}^{(1)},\ldots,a_{i}^{(\tau_{i}-2)}]^{\top}$ *for all* $i$*, and* $E_{w}\triangleq n\max_{i}w_{i}$.

*Using* $\gamma_{y}^{s}=\Theta\Bigg(\sqrt{\dfrac{P}{\tau_{\text{eff}}L_{f}T\big[\Delta_{\widetilde\Phi}+A_{w}\sigma_{L}^{2}+\big(B_{w}\beta_{L}^{2}+\frac{n-P}{n-1}E_{w}\tau_{\text{eff}}\big)\sigma_{G}^{2}\big]}}\Bigg)$ *and* $\eta_{x}^{c}\leq\eta_{y}^{c}=\Theta\big(\frac{1}{L_{f}\bar\tau\sqrt{T}}\big)$*, where* $\bar\tau=\frac{1}{n}\sum_{i=1}^{n}\tau_{i}$*, in the bounds above, we get*

$$\min_{t\in[T]}\mathbb{E}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}\leq\underbrace{\mathcal{O}\left(\kappa^{2}\sqrt{\frac{\Delta_{\widetilde\Phi}+A_{w}\sigma_{L}^{2}+B_{w}\beta_{L}^{2}\sigma_{G}^{2}}{P\tau_{\text{eff}}T}}\right)}_{\text{Error with full synchronization}}+\underbrace{\mathcal{O}\left(\kappa^{2}\sqrt{\frac{n-P}{n-1}\cdot\frac{E_{w}\sigma_{G}^{2}}{PT}}\right)}_{\text{Partial participation error}}+\underbrace{\mathcal{O}\left(\kappa^{2}\,\frac{C_{w}\sigma_{L}^{2}+D\sigma_{G}^{2}}{\bar\tau^{2}T}\right)}_{\text{Local updates error}}.$$
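As a practical reading of these step-size choices, the helper below (our own sketch; the theorem fixes the rates only up to constants, so setting all $\Theta(\cdot)$ constants to 1 is an assumption) instantiates $\gamma_y^s$ and $\eta_y^c$ from the problem constants.

```python
import math

def theorem1_learning_rates(P, T, tau_eff, tau_bar, L_f, n,
                            delta_Phi, A_w, B_w, E_w,
                            sigma_L, sigma_G, beta_L):
    """Server/client step sizes suggested by Theorem 1 (Theta-constants set to 1)."""
    denom = delta_Phi + A_w * sigma_L**2 \
        + (B_w * beta_L**2 + (n - P) / (n - 1) * E_w * tau_eff) * sigma_G**2
    gamma_y_s = math.sqrt(P / (tau_eff * L_f * T * denom))  # server rate
    eta_y_c = 1.0 / (L_f * tau_bar * math.sqrt(T))          # client rate; eta_x_c <= eta_y_c
    return gamma_y_s, eta_y_c
```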
*Proof.* Using Lemma B.3, and substituting in the bound on the iterate drift from Lemma B.4, we can bound

$$\begin{aligned}
\mathbb{E}_{t}\big[\widetilde\Phi(\mathbf{x}^{(t+1)})-\widetilde\Phi(\mathbf{x}^{(t)})\big]&\leq-\frac{7\tau_{\text{eff}}\gamma_{x}^{s}}{16}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}-\frac{\tau_{\text{eff}}\gamma_{x}^{s}}{2}\Big(1-\frac{n(P-1)}{P(n-1)}\tau_{\text{eff}}\gamma_{x}^{s}L_{\Phi}\Big)\mathbb{E}_{t}\Big\|\sum_{i=1}^{n}w_{i}\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}\\
&\quad+\frac{9\tau_{\text{eff}}\gamma_{x}^{s}L_{f}^{2}}{4\mu}\big[\widetilde\Phi(\mathbf{x}^{(t)})-\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big]\\
&\quad+\frac{\tau_{\text{eff}}^{2}[\gamma_{x}^{s}]^{2}L_{\Phi}}{2}\,\frac{n}{P}\Bigg[\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}^{2}\frac{\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}+\sigma_{G}^{2}\Bigg(2(\max_{i}w_{i})\frac{n-P}{n-1}+2\beta_{L}^{2}\max_{i}\frac{w_{i}\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\Bigg)\Bigg]\\
&\quad+\frac{5}{2}\tau_{\text{eff}}\gamma_{x}^{s}\big([\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)L_{f}^{2}\Bigg[\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}\|\mathbf{a}_{i,-1}\|_{2}^{2}+2\sigma_{G}^{2}M_{\mathbf{a}_{-1}}\Bigg]\\
&\quad+10\tau_{\text{eff}}\gamma_{x}^{s}L_{f}^{2}M_{\mathbf{a}_{-1}}\beta_{G}^{2}\Big([\eta_{x}^{c}]^{2}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}+L_{f}\big(2\kappa[\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)\big[\widetilde\Phi(\mathbf{x}^{(t)})-\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big]\Big).
\end{aligned}\tag{16}$$

Summing (16) over $t = 0,\ldots,T-1$, substituting the bound on $\mathbb{E}\big[\widetilde\Phi(\mathbf{x}^{(t)})-\widetilde F(\mathbf{x}^{(t)},\mathbf{y}^{(t)})\big]$ from Lemma B.5, and rearranging the terms, we get

$$\begin{aligned}
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}&=\mathcal{O}\Bigg(\frac{\kappa^{2}\Delta_{\widetilde\Phi}}{\tau_{\text{eff}}\gamma_{y}^{s}T}+\tau_{\text{eff}}\gamma_{y}^{s}L_{f}\kappa^{2}\,\frac{n}{P}\Bigg[\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}^{2}\frac{\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}+\sigma_{G}^{2}\Bigg(\frac{n-P}{n-1}\max_{i}w_{i}+\beta_{L}^{2}\max_{i}\frac{w_{i}\|\mathbf{a}_{i}\|_{2}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\Bigg)\Bigg]\Bigg)\\
&\quad+\mathcal{O}\Bigg(\kappa^{2}\big([\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)L_{f}^{2}\Bigg[\sigma_{L}^{2}\sum_{i=1}^{n}w_{i}\big(\|\mathbf{a}_{i}\|_{2}^{2}-[a_{i}^{(\tau_{i}-1)}]^{2}\big)+\sigma_{G}^{2}\max_{i}\big(\|\mathbf{a}_{i,-1}\|_{1}^{2}+\beta_{L}^{2}\|\mathbf{a}_{i,-1}\|_{2}^{2}\big)\Bigg]\Bigg).
\end{aligned}\tag{17}$$

Consequently, using the constants $A_w, B_w, C_w, D, E_w$, (17) can be simplified to

$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}\leq\mathcal{O}\Bigg(\kappa^{2}\Bigg[\frac{\Delta_{\widetilde\Phi}}{\tau_{\text{eff}}\gamma_{y}^{s}T}+\frac{\gamma_{y}^{s}L_{f}}{P}\Big(A_{w}\sigma_{L}^{2}+\Big(B_{w}\beta_{L}^{2}+\frac{n-P}{n-1}E_{w}\tau_{\text{eff}}\Big)\sigma_{G}^{2}\Big)\Bigg]\Bigg)+\mathcal{O}\Big(\kappa^{2}\big([\eta_{x}^{c}]^{2}+[\eta_{y}^{c}]^{2}\big)L_{f}^{2}\big[C_{w}\sigma_{L}^{2}+D\sigma_{G}^{2}\big]\Big),$$

which completes the proof.

## Convergence In Terms Of F

*Proof of Corollary 1.1.* By the definitions of $\Phi(\mathbf{x})$ and $\widetilde\Phi(\mathbf{x})$, we have

$$\begin{aligned}
\nabla\Phi(\mathbf{x})-\nabla\widetilde\Phi(\mathbf{x})&=\sum_{i=1}^{n}\big[p_{i}\nabla_{x}f_{i}(\mathbf{x},\mathbf{y}^{*}(\mathbf{x}))-w_{i}\nabla_{x}f_{i}(\mathbf{x},\widetilde{\mathbf{y}}^{*}(\mathbf{x}))\big]\qquad\big(\mathbf{y}^{*}(\mathbf{x})\in\arg\max_{\mathbf{y}}F(\mathbf{x},\mathbf{y})\big)\\
&=\sum_{i=1}^{n}p_{i}\big[\nabla_{x}f_{i}(\mathbf{x},\mathbf{y}^{*}(\mathbf{x}))-\nabla_{x}f_{i}(\mathbf{x},\widetilde{\mathbf{y}}^{*}(\mathbf{x}))\big]+\sum_{i=1}^{n}(p_{i}-w_{i})\,\nabla_{x}f_{i}(\mathbf{x},\widetilde{\mathbf{y}}^{*}(\mathbf{x}))\\
&=\big[\nabla_{x}F(\mathbf{x},\mathbf{y}^{*}(\mathbf{x}))-\nabla_{x}F(\mathbf{x},\widetilde{\mathbf{y}}^{*}(\mathbf{x}))\big]+\sum_{i=1}^{n}\frac{p_{i}-w_{i}}{\sqrt{w_{i}}}\cdot\sqrt{w_{i}}\,\nabla_{x}f_{i}(\mathbf{x},\widetilde{\mathbf{y}}^{*}(\mathbf{x})).
\end{aligned}$$

Taking norms, using $L_f$-smoothness, and applying the Cauchy-Schwarz inequality, we get

$$\begin{aligned}
\big\|\nabla\Phi(\mathbf{x})-\nabla\widetilde\Phi(\mathbf{x})\big\|^{2}&\leq2L_{f}^{2}\big\|\mathbf{y}^{*}(\mathbf{x})-\widetilde{\mathbf{y}}^{*}(\mathbf{x})\big\|^{2}+2\Bigg[\sum_{i=1}^{n}\frac{(p_{i}-w_{i})^{2}}{w_{i}}\Bigg]\Bigg[\sum_{i=1}^{n}w_{i}\big\|\nabla_{x}f_{i}\big(\mathbf{x},\widetilde{\mathbf{y}}^{*}(\mathbf{x})\big)\big\|^{2}\Bigg]\\
&\leq2L_{f}^{2}\big\|\mathbf{y}^{*}(\mathbf{x})-\widetilde{\mathbf{y}}^{*}(\mathbf{x})\big\|^{2}+2\chi_{\mathbf{p}\|\mathbf{w}}^{2}\Big[\beta_{G}^{2}\big\|\nabla\widetilde\Phi(\mathbf{x})\big\|^{2}+\sigma_{G}^{2}\Big],
\end{aligned}$$

where the last inequality uses Assumption 4. Next, note that

$$\big\|\nabla\Phi(\mathbf{x})\big\|^{2}\leq2\big\|\nabla\Phi(\mathbf{x})-\nabla\widetilde\Phi(\mathbf{x})\big\|^{2}+2\big\|\nabla\widetilde\Phi(\mathbf{x})\big\|^{2}.$$

Therefore, we obtain

$$\begin{aligned}
\min_{t\in[T]}\big\|\nabla\Phi(\mathbf{x}^{(t)})\big\|^{2}&\leq\frac{1}{T}\sum_{t=0}^{T-1}\big\|\nabla\Phi(\mathbf{x}^{(t)})\big\|^{2}\\
&\leq2\big[2\chi_{\mathbf{p}\|\mathbf{w}}^{2}\beta_{G}^{2}+1\big]\frac{1}{T}\sum_{t=0}^{T-1}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}+4\Bigg[\chi_{\mathbf{p}\|\mathbf{w}}^{2}\sigma_{G}^{2}+L_{f}^{2}\,\frac{1}{T}\sum_{t=0}^{T-1}\big\|\mathbf{y}^{*}(\mathbf{x}^{(t)})-\widetilde{\mathbf{y}}^{*}(\mathbf{x}^{(t)})\big\|^{2}\Bigg]\\
&=2\big[2\chi_{\mathbf{p}\|\mathbf{w}}^{2}\beta_{G}^{2}+1\big]\epsilon_{\text{opt}}+4\Bigg[\chi_{\mathbf{p}\|\mathbf{w}}^{2}\sigma_{G}^{2}+L_{f}^{2}\,\frac{1}{T}\sum_{t=0}^{T-1}\big\|\mathbf{y}^{*}(\mathbf{x}^{(t)})-\widetilde{\mathbf{y}}^{*}(\mathbf{x}^{(t)})\big\|^{2}\Bigg],
\end{aligned}$$

where $\epsilon_{\text{opt}}$ denotes the optimization error on the right-hand side of (4) in Theorem 1.
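The mismatch factor $\chi_{\mathbf{p}\|\mathbf{w}}^{2}=\sum_{i=1}^{n}(p_{i}-w_{i})^{2}/w_{i}$ appearing above is the chi-square divergence between the desired client weights $\mathbf{p}$ and the weights $\mathbf{w}$ the algorithm actually optimizes. A small helper (our own, for illustration) makes the quantity concrete:

```python
import numpy as np

def chi_square_divergence(p: np.ndarray, w: np.ndarray) -> float:
    """chi^2_{p||w} = sum_i (p_i - w_i)^2 / w_i; zero iff p = w."""
    return float(np.sum((p - w) ** 2 / w))

p = np.array([0.5, 0.3, 0.2])   # desired client weights (e.g., data fractions)
w = np.array([1/3, 1/3, 1/3])   # weights implicitly optimized by the algorithm
print(chi_square_divergence(p, w))  # > 0: the surrogate gap in Corollary 1.1
```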
*Proof of Corollary 1.2.* If the clients are weighted equally ($w_i = p_i = 1/n$ for all $i$), with each carrying out $\tau$ steps of local SGDA, from (4) we get

$$\min_{t\in[T]}\big\|\nabla\Phi(\mathbf{x}^{(t)})\big\|^{2}\leq\mathcal{O}\left(\sqrt{\frac{n-P}{n-1}}\,\frac{\kappa^{2}\sigma_{G}}{\sqrt{PT}}+\kappa^{2}\Big(\frac{\sigma_{L}+\beta_{L}\sigma_{G}}{\sqrt{P\tau T}}+\frac{\sigma_{L}^{2}+\tau\sigma_{G}^{2}}{\tau T}\Big)\right).$$

- For full client participation, this reduces to
$$\min_{t\in[T]}\mathbb{E}\big\|\nabla\widetilde\Phi(\mathbf{x}^{(t)})\big\|^{2}\leq\mathcal{O}\left(\frac{1}{\sqrt{n\tau T}}+\frac{1}{T}\right).$$
To reach an $\epsilon$-stationary point, assuming $n\tau \leq T$, the per-client gradient complexity is $T\tau = \mathcal{O}\big(\frac{\kappa^{4}}{n\epsilon^{4}}\big)$. Since $\tau \leq T/n$, the minimum number of communication rounds required is $T = \mathcal{O}\big(\frac{\kappa^{2}}{\epsilon^{2}}\big)$.
- For partial participation, $\mathcal{O}\big(\frac{n-P}{n-1}\sigma_{G}^{2}\sqrt{\frac{\tau}{PT}}\big)$ is the dominant term, and we do not get any convergence benefit from multiple local updates. Consequently, the per-client gradient complexity and the number of communication rounds are both $T\tau = \mathcal{O}\big(\frac{\kappa^{4}}{P\epsilon^{4}}\big)$, with $\tau = \mathcal{O}(1)$. However, if the data across clients comes from identical distributions ($\sigma_G = 0$), then we recover a per-client gradient complexity of $\mathcal{O}\big(\frac{\kappa^{4}}{P\epsilon^{4}}\big)$ and $\mathcal{O}\big(\frac{\kappa^{2}}{\epsilon^{2}}\big)$ communication rounds.

## B.3 Proofs Of The Intermediate Lemmas

*Proof of Lemma B.2.*

$$\begin{aligned}
\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{g}_{\mathbf{x},i}^{(t)}\Big\|^{2}&=\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\big(\mathbf{g}_{\mathbf{x},i}^{(t)}-\mathbf{h}_{\mathbf{x},i}^{(t)}+\mathbf{h}_{\mathbf{x},i}^{(t)}\big)\Big\|^{2}\\
&\stackrel{(a)}{=}\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\big(\mathbf{g}_{\mathbf{x},i}^{(t)}-\mathbf{h}_{\mathbf{x},i}^{(t)}\big)\Big\|^{2}+\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}\qquad\big(\because\mathbb{E}_{t}[\mathbf{g}_{\mathbf{x},i}^{(t)}]=\mathbf{h}_{\mathbf{x},i}^{(t)}\text{ for all clients }i\in\mathcal{C}^{(t)}\big)\\
&=\mathbb{E}_{t}\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}^{2}\big\|\mathbf{g}_{\mathbf{x},i}^{(t)}-\mathbf{h}_{\mathbf{x},i}^{(t)}\big\|^{2}+\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}\\
&=\frac{n}{P}\sum_{i=1}^{n}w_{i}^{2}\,\mathbb{E}_{t}\big\|\mathbf{g}_{\mathbf{x},i}^{(t)}-\mathbf{h}_{\mathbf{x},i}^{(t)}\big\|^{2}+\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}\qquad\big(\because\tilde w_{i}=w_{i}n/P\text{ and }\mathbb{P}(i\in\mathcal{C}^{(t)})=P/n\big)\\
&\leq\frac{n}{P}\sum_{i=1}^{n}\frac{w_{i}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\sum_{k=0}^{\tau_{i}-1}[a_{i}^{(k)}(\tau_{i})]^{2}\Big(\sigma_{L}^{2}+\beta_{L}^{2}\big\|\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)})\big\|^{2}\Big)+\mathbb{E}_{t}\Big\|\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}\,\mathbf{h}_{\mathbf{x},i}^{(t)}\Big\|^{2}.
\end{aligned}\tag{18}$$

Here, (a) follows from the following reasoning:

$$\begin{aligned}
&\mathbb{E}_{t}\sum_{i,j\in\mathcal{C}^{(t)}}\tilde w_{i}\tilde w_{j}\Big\langle\mathbf{h}_{\mathbf{x},i}^{(t)},\mathbf{g}_{\mathbf{x},j}^{(t)}-\mathbf{h}_{\mathbf{x},j}^{(t)}\Big\rangle\\
&=\mathbb{E}_{t}\sum_{i\in\mathcal{C}^{(t)}}\tilde w_{i}^{2}\,\mathbb{E}\Big[\big\langle\mathbf{h}_{\mathbf{x},i}^{(t)},\mathbf{g}_{\mathbf{x},i}^{(t)}-\mathbf{h}_{\mathbf{x},i}^{(t)}\big\rangle\,\big|\,\mathcal{F}^{(t)},\mathcal{C}^{(t)}\Big]+\underbrace{\sum_{i\neq j}\tilde w_{i}\tilde w_{j}\,\mathbb{E}\Big[\big\langle\mathbf{h}_{\mathbf{x},i}^{(t)},\mathbf{g}_{\mathbf{x},j}^{(t)}-\mathbf{h}_{\mathbf{x},j}^{(t)}\big\rangle\,\big|\,\mathcal{F}^{(t)},\mathcal{C}^{(t)}\Big]}_{=0\ \text{(Assumption 3; independence of stochastic gradients across clients)}}\\
&=\mathbb{E}_{t}\sum_{i\in\mathcal{C}^{(t)}}\frac{\tilde w_{i}^{2}}{\|\mathbf{a}_{i}\|_{1}^{2}}\sum_{k=0}^{\tau_{i}-1}\sum_{j=0}^{\tau_{i}-1}a_{i}^{(k)}(\tau_{i})\,a_{i}^{(j)}(\tau_{i})\,\mathbb{E}\Big[\big\langle\nabla_{x}f_{i}\big(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)}\big)-\nabla_{x}f_{i}\big(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)}\big),\nabla_{x}f_{i}\big(\mathbf{x}_{i}^{(t,j)},\mathbf{y}_{i}^{(t,j)}\big)\big\rangle\,\big|\,\mathcal{F}^{(t)},\mathcal{C}^{(t)}\Big].
\end{aligned}$$

For the diagonal terms ($j = k$), conditioning further on $(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)})$ gives $\mathbb{E}\big[\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)};\xi_{i}^{(t,k)})-\nabla_{x}f_{i}(\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)})\,\big|\,\mathbf{x}_{i}^{(t,k)},\mathbf{y}_{i}^{(t,k)}\big]=0$; the off-diagonal terms ($j \neq k$) vanish by the same tower-rule argument, conditioning on the iterate generated later. Hence the cross term is zero, which proves (a).

## D Additional Experiments

Table 3: Parameter values for the robust NN training experiments.

| Parameter | | | > 200 |
|---|---|---|---|
| Client learning rate ($\eta_{y}^{c}$) | 0.02 | 2 × 10⁻³ | 2 × 10⁻⁴ |
| Client learning rate ($\eta_{x}^{c}$) | 0.016 | 1.6 × 10⁻³ | 1.6 × 10⁻⁴ |
| Server learning rates ($\gamma_{x}^{s} = \gamma_{y}^{s}$) | 1 | 1 | 1 |

**Fair Classification.** We also demonstrate the impact of partial client participation in the fair classification problem.
Figure 10 complements the corresponding figure in the main paper, evaluating the fairness of a VGG11 model on the CIFAR10 dataset. We plot the test accuracy of the model over the worst distribution. With an increasing number of participating clients, the performance consistently improves. A batch size of 32 is used. The momentum parameter 0.9 is used only in Local SGDA (M).

Table 4: Parameter values for the fair classification experiments.

| Parameter | Value |
|---|---|
| Client learning rate ($\eta_{y}^{c}$) | 0.02 |
| Client learning rate ($\eta_{x}^{c}$) | 0.016 |
| Server learning rates ($\gamma_{x}^{s} = \gamma_{y}^{s}$) | 1 |

Figure 6: Comparison of the effect of a heterogeneous number of local updates $\{\tau_i\}$ on the performance of Fed-Norm-SGDA+ (Algorithm 1), Local SGDA+, and Local SGDA+ with momentum, while solving (7) on the CIFAR10 dataset with a VGG11 model. The solid (dashed) curves are for E = 5 (E = 7), and α = 0.1.

Figure 7: Comparison of the effects of partial client participation (PCP) on the performance of Fed-Norm-SGDA+, for the robust NN training problem on the CIFAR10 dataset with the VGG11 model. The figure shows the robust test accuracy. The solid (dashed) curves are for α = 0.1 (α = 1.0).

Figure 8: Effect of inter-client data heterogeneity (quantified by α) on the performance of Fed-Norm-SGDA+ in a robust NN training task.

Figure 9: Effect of an increasing client set on the performance of Fed-Norm-SGDA+ in a robust NN training task.

Figure 10: Effect of partial client participation on the performance of Fed-Norm-SGDA in a fair image classification task.