Venue: NIPS

Title
DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization
Abstract
For optimization of a large sum of functions in a distributed computing environment, we present a novel communication efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods. Our algorithm, DINGO, is derived by optimization of the gradient’s norm as a surrogate function. DINGO does not impose any specific form on the underlying functions and its application range extends far beyond convexity and smoothness. The underlying sub-problems of DINGO are simple linear least-squares, for which a plethora of efficient algorithms exist. DINGO involves a few hyper-parameters that are easy to tune and we theoretically show that a strict reduction in the surrogate objective is guaranteed, regardless of the selected hyper-parameters.
1 Introduction
Consider the optimization problem
$$\min_{w\in\mathbb{R}^d}\ \Big\{ f(w) \triangleq \frac{1}{m}\sum_{i=1}^{m} f_i(w) \Big\}, \qquad (1)$$
in a centralized distributed computing environment involving one driver machine and $m$ worker machines, in which the $i$th worker can only locally access the $i$th component function, $f_i$. Such distributed computing settings arise increasingly often as a result of technological and communication advancements that have enabled the collection of, and access to, large-scale datasets.
As a concrete example, take a data fitting application, in which given $n$ data points, $\{x_i\}_{i=1}^n$, and their corresponding loss, $\ell_i(w; x_i)$, parameterized by $w$, the goal is to minimize the overall loss as $\min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^{n} \ell_i(w; x_i)$. Such problems appear frequently in machine learning, e.g., [1, 2, 3] and scientific computing, e.g., [4, 5, 6]. However, in “big data” regimes where $n \gg 1$, lack of adequate computational resources, in particular storage, can severely limit, or even prevent, any attempts at solving such optimization problems in a traditional stand-alone way, e.g., using a single machine. This can be remedied through distributed computing, in which resources across a network of stand-alone computational nodes are “pooled” together so as to scale to the problem at hand [7]. In such a setting, where $n$ data points are distributed across $m$ workers, one can instead consider (1) with
$$f_i(w) \triangleq \frac{1}{|S_i|}\sum_{j\in S_i} \ell_j(w; x_j), \quad i = 1, 2, \ldots, m, \qquad (2)$$
where the sets $S_i \subseteq \{1, 2, \ldots, n\}$, with cardinalities $|S_i|$, describe the distribution of data across the nodes, i.e., the $i$th node has access to the portion of the data indexed by the set $S_i$.
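To make the setup concrete, the following is a minimal sketch (illustrative only; the quadratic loss, shard sizes, and variable names are our own choices, not from the paper) of how the index sets $S_1, \ldots, S_m$ partition $n$ data points and how $f$ in (1) is the average of the local objectives in (2). With equal shard sizes, this average coincides with the global average loss.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 5, 4                                   # data points, dimension, workers
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)
w = rng.standard_normal(d)

shards = np.array_split(rng.permutation(n), m)         # disjoint index sets S_1, ..., S_m

def f_i(w, idx):
    """Local objective of worker i: average loss over its shard S_i, as in (2)."""
    r = X[idx] @ w - y[idx]
    return 0.5 * np.mean(r ** 2)

# f(w) = (1/m) * sum_i f_i(w) as in (1); with equal shard sizes this equals
# the overall average loss over all n data points.
print(np.mean([f_i(w, idx) for idx in shards]))
```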
In distributed settings, the amount of communication, i.e., messages exchanged across the network, is often considered a major bottleneck of computation (often more so than local computation
times), as they can be expensive in terms of both physical resources and time through latency [8, 9]. First-order methods [10], e.g., stochastic gradient descent (SGD) [11], solely rely on gradient information and as a result are rather easy to implement in distributed settings. They often require the performance of many computationally inexpensive iterations, which can be suitable for execution on a single machine. However, as a direct consequence, they can incur excessive communication costs in distributed environments and, hence, they might not be able to take full advantage of the available distributed computational resources.
By employing curvature information in the form of the Hessian matrix, second-order methods aim at transforming the gradient such that it is a more suitable direction to follow. Compared with first-order alternatives, although second-order methods perform more computations per iteration, they often require far fewer iterations to achieve similar results. In distributed settings, this feature can directly translate to significantly less communication costs. As a result, distributed second-order methods have the potential to become the method of choice for distributed optimization tasks.
Notation
We let $\langle\cdot,\cdot\rangle$ denote the common Euclidean inner product defined by $\langle x, y\rangle = x^T y$ for $x, y \in \mathbb{R}^d$. Given a vector $v$ and matrix $A$, we denote their vector $\ell_2$ norm and matrix spectral norm as $\|v\|$ and $\|A\|$, respectively. For $x, z \in \mathbb{R}^d$ we let $[x, z] \triangleq \{\, x + \tau(z - x) \mid 0 \le \tau \le 1 \,\}$. The range and null space of a matrix $A$ are denoted by $\mathcal{R}(A)$ and $\mathcal{N}(A)$, respectively. The Moore–Penrose inverse [12] of $A$ is denoted by $A^{\dagger}$. We let $w_t \in \mathbb{R}^d$ denote the point at iteration $t$. For notational convenience, we denote $g_{t,i} \triangleq \nabla f_i(w_t)$, $H_{t,i} \triangleq \nabla^2 f_i(w_t)$, $g_t \triangleq \nabla f(w_t)$ and $H_t \triangleq \nabla^2 f(w_t)$. We also let
$$\widetilde{H}_{t,i} \triangleq \begin{bmatrix} H_{t,i} \\ \phi I \end{bmatrix} \in \mathbb{R}^{2d\times d} \quad\text{and}\quad \widetilde{g}_t \triangleq \begin{pmatrix} g_t \\ \mathbf{0} \end{pmatrix} \in \mathbb{R}^{2d}, \qquad (3)$$
where φ > 0, I is the identity matrix, and 0 is the zero vector.
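As a quick numerical illustration (our own sketch; the dimensions and the symmetric toy Hessian are assumptions, not values from the paper), applying the Moore–Penrose inverse of the augmented matrix in (3) to $\widetilde{g}_t$ is the same as solving a $\phi$-damped least-squares system, which is how this quantity is used later in Case 2.

```python
import numpy as np

rng = np.random.default_rng(1)
d, phi = 4, 1e-2
H = rng.standard_normal((d, d)); H = (H + H.T) / 2     # a symmetric local Hessian H_{t,i}
g = rng.standard_normal(d)                             # stands in for g_t

H_tilde = np.vstack([H, phi * np.eye(d)])              # shape (2d, d), as in (3)
g_tilde = np.concatenate([g, np.zeros(d)])             # shape (2d,), as in (3)

x1 = np.linalg.pinv(H_tilde) @ g_tilde                 # the quantity used in Case 2 (up to sign)
x2 = np.linalg.solve(H.T @ H + phi**2 * np.eye(d), H.T @ g)   # phi-damped least-squares solve
print(np.allclose(x1, x2))                             # True
```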
Related Work and Contributions
Owing to the above-mentioned potential, many distributed second-order optimization algorithms have recently emerged to solve (1). Among them, most notable are GIANT [13], DiSCO [9], DANE [14], InexactDANE and AIDE [15]. While having many advantages, each of these methods comes with several disadvantages that can limit its applicability in certain regimes. Namely, some rely on rather stringent (strong) convexity assumptions, while for others the underlying sub-problems involve non-linear optimization problems that are themselves non-trivial to solve. A subtle, yet potentially severe, drawback of many of the above-mentioned methods is that their performance can be sensitive to, and severely affected by, the choice of their corresponding hyper-parameters.
Here, we present a novel communication efficient distributed second-order optimization method that aims to alleviate many of the aforementioned disadvantages. Our approach is inspired by and follows many ideas of recent results on Newton-MR [16], which extends the application range of the classical Newton-CG beyond (strong) convexity and smoothness. More specifically, our algorithm, named DINGO for “DIstributed Newton-type method for Gradient-norm Optimization”, is derived by optimization of the gradient’s norm as a surrogate function for (1), i.e.,
$$\min_{w\in\mathbb{R}^d}\ \left\{ \frac{1}{2}\big\|\nabla f(w)\big\|^2 = \frac{1}{2m^2}\Big\|\sum_{i=1}^{m} \nabla f_i(w)\Big\|^2 \right\}. \qquad (4)$$
When f is invex [17, 18], the problems (1) and (4) have the same solutions. Recall that invexity is a generalization of convexity, which extends the sufficiency of the first-order optimality condition, e.g., the Karush-Kuhn-Tucker conditions, to a broader class of problems than simple convex programming. In other words, invexity is a special case of non-convexity that subsumes convexity as a sub-class. In this light, unlike DiSCO and GIANT, by considering the surrogate function (4), DINGO's application range and theoretical guarantees extend far beyond convex settings to invex problems. Naturally, by considering (4), DINGO may converge to a local maximum or saddle point in non-invex problems.
Similar to GIANT and DiSCO, and in contrast to DANE, InexactDANE and AIDE, our algorithm involves a few hyper-parameters that are easy to tune and the underlying sub-problems are simple linear least-squares, for which a plethora of efficient algorithms exist. However, the theoretical
analysis of both GIANT and DiSCO is limited to the case where each $f_i$ is strongly convex, and for GIANT they are also of the special form where in (2) we have $\ell_j(w; x_j) = \psi_j(\langle w, x_j\rangle) + \gamma\|w\|^2$, where $\gamma > 0$ is a regularization parameter and $\psi_j$ is convex, e.g., linear predictor models. In contrast, DINGO does not impose any specific form on the underlying functions. Also, unlike GIANT, we allow for $|S_i| < d$ in (2). Moreover, we theoretically show that DINGO is not too sensitive to the choice of its hyper-parameters in that a strict reduction in the gradient norm is guaranteed, regardless of the selected hyper-parameters. See Tables 1 and 2 for a summary of high-level algorithm properties. Finally, we note that, unlike GIANT, DiSCO, InexactDANE and AIDE, our theoretical analysis requires exact solutions to the sub-problems. Despite the fact that the sub-problems of DINGO are simple ordinary least-squares, and that DINGO performs well in practice with very crude solutions, this is admittedly a theoretical restriction, which we aim to address in future.
The distributed computing environment that we consider is also assumed by GIANT, DiSCO, DANE, InexactDANE and AIDE. Moreover, as with these methods, we restrict communication to vectors of size linear in $d$, i.e., $\mathcal{O}(d)$. A communication round is performed when the driver uses a broadcast operation to send information to one or more workers in parallel, or uses a reduce operation to receive information from one or more workers in parallel. For example, computing the gradient at iteration $t$, namely $g_t = \frac{1}{m}\sum_{i=1}^m g_{t,i}$, requires two communication rounds, i.e., the driver broadcasts $w_t$ to all workers and then, by a reduce operation, receives $g_{t,i}$ for all $i$. We reiterate that, in the distributed computational model considered here, the main bottleneck is the communication across the network.
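As an illustration of this communication model, the following sketch simulates the two rounds needed to form $g_t$ using mpi4py; the toy quadratic $f_i$, the data layout, and treating rank 0 as the driver are our own assumptions for the example, not details from the paper.

```python
# A minimal sketch of the two communication rounds that form g_t: the driver
# broadcasts w_t, the workers reply with g_{t,i} via a reduce operation.
# Run e.g. with `mpirun -n 5 python this_file.py` (rank 0 acts as the driver).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
d, m = 10, size - 1                          # dimension and number of workers

rng = np.random.default_rng(rank)            # each worker holds its own toy local data
A_i = rng.standard_normal((d, d))
b_i = rng.standard_normal(d)

w = np.zeros(d) if rank == 0 else None
w = comm.bcast(w, root=0)                    # communication round 1: broadcast w_t

# worker i computes g_{t,i} for the toy f_i(w) = 0.5 * ||A_i w - b_i||^2; the driver sends zeros
g_i = A_i.T @ (A_i @ w - b_i) if rank > 0 else np.zeros(d)
g_sum = comm.reduce(g_i, op=MPI.SUM, root=0)  # communication round 2: reduce the g_{t,i}

if rank == 0:
    g_t = g_sum / m                          # g_t = (1/m) * sum_i g_{t,i}
    print("||g_t|| =", np.linalg.norm(g_t))
```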
2 DINGO
In this section, we describe the derivation of DINGO, as depicted in Algorithm 1. Each iteration t involves the computation of two main ingredients: an update direction pt, and an appropriate step-size αt. As usual, our next iterate is then set as wt+1 = wt + αtpt.
Update Direction
We begin iteration $t$ by distributively computing the gradient $g_t$. Thereafter, we distributively compute the Hessian-gradient product $H_t g_t = \frac{1}{m}\sum_{i=1}^m H_{t,i} g_t$ as well as the vectors $\frac{1}{m}\sum_{i=1}^m H_{t,i}^{\dagger} g_t$ and $\frac{1}{m}\sum_{i=1}^m \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$. Computing the update direction $p_t$ involves three cases, all of which involve simple linear least-squares sub-problems:

Case 1. If $\big\langle \frac{1}{m}\sum_{i=1}^m H_{t,i}^{\dagger} g_t,\ H_t g_t\big\rangle \ge \theta\|g_t\|^2$, where $\theta$ is as in Algorithm 1, then we let $p_t = \frac{1}{m}\sum_{i=1}^m p_{t,i}$, with $p_{t,i} = -H_{t,i}^{\dagger} g_t$. Here, we check that the potential update direction $-\frac{1}{m}\sum_{i=1}^m H_{t,i}^{\dagger} g_t$ is a suitable descent direction for our surrogate objective (4). We do this since we have not imposed any restrictive assumptions on (1), e.g., strong convexity of each $f_i$, that would automatically guarantee descent; see Lemma 1 for an example of such restrictive assumptions.

Case 2. If Case 1 fails, we include regularization and check again that the new potential update direction yields suitable descent. Namely, if $\big\langle \frac{1}{m}\sum_{i=1}^m \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t,\ H_t g_t\big\rangle \ge \theta\|g_t\|^2$, then we let $p_t = \frac{1}{m}\sum_{i=1}^m p_{t,i}$, with $p_{t,i} = -\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$.
Case 3. If all else fails, we enforce descent in the norm of the gradient. More specifically, as Case 2 does not hold, the set
$$\mathcal{I}_t \triangleq \big\{\, i = 1, 2, \ldots, m \;\big|\; \langle \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t,\ H_t g_t\rangle < \theta\|g_t\|^2 \,\big\}, \qquad (5)$$
is non-empty. In parallel, the driver broadcasts $H_t g_t$ to each worker $i \in \mathcal{I}_t$ and has it locally compute the solution to
$$\operatorname*{argmin}_{p_{t,i}}\ \frac{1}{2}\|H_{t,i} p_{t,i} + g_t\|^2 + \frac{\phi^2}{2}\|p_{t,i}\|^2, \quad \text{such that } \langle p_{t,i}, H_t g_t\rangle \le -\theta\|g_t\|^2,$$
where $\phi$ is as in (3). It is easy to show that the solution to this problem is
$$p_{t,i} = -\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t - \lambda_{t,i}\big(\widetilde{H}_{t,i}^{T}\widetilde{H}_{t,i}\big)^{-1} H_t g_t, \qquad \lambda_{t,i} = \frac{-g_t^T H_t \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t + \theta\|g_t\|^2}{g_t^T H_t \big(\widetilde{H}_{t,i}^{T}\widetilde{H}_{t,i}\big)^{-1} H_t g_t}. \qquad (6)$$
The term $\lambda_{t,i}$ in (6) is positive by the definition of $\mathcal{I}_t$ and well-defined by Assumption 5, which implies that for $g_t \neq 0$ we have $H_t g_t \neq 0$. In conclusion, for Case 3, each worker $i \in \mathcal{I}_t$ computes (6) and, using a reduce operation, the driver then computes the update direction $p_t = \frac{1}{m}\sum_{i=1}^m p_{t,i}$, which by construction yields descent in the surrogate objective (4). Note that $p_{t,i} = -\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$ for all $i \notin \mathcal{I}_t$ have already been obtained as part of Case 2.

Remark 1. The three cases help avoid the need for any unnecessary assumptions on data distribution or the knowledge of any practically unknowable constants. In fact, given Lemma 1, which imposes a certain assumption on the data distribution, we could have stated our algorithm in its simplest form, i.e., with only Case 1. This would be more in line with some prior works, e.g., GIANT, but it would have naturally restricted the applicability of our method in terms of data distributions.

Remark 2. In practice, like GIANT and DiSCO, our method DINGO never requires the computation or storage of an explicitly formed Hessian. Instead, it only requires Hessian-vector products, which can be computed at a similar cost to computing the gradient itself. Computing matrix pseudo-inverse and vector products, e.g., $H_{t,i}^{\dagger} g_t$, constitute the sub-problems of our algorithm. This, in turn, is done through solving least-squares problems using iterative methods that only require matrix-vector products (see Section 4 for some such methods). Thus DINGO is suitable for large dimension $d$ in (1).
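To ground Remark 2, here is a sketch (our own, with SciPy's LSMR standing in for the solvers discussed in Section 4) of how a worker can approximate $H_{t,i}^{\dagger} g_t$ and $\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$ using only Hessian-vector products; the explicit matrix below merely simulates that matrix-vector oracle.

```python
# Matrix-free sub-problem solves: the worker only ever applies H_{t,i} to vectors.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

rng = np.random.default_rng(2)
d, phi = 50, 1e-2
H_mat = rng.standard_normal((d, d)); H_mat = (H_mat + H_mat.T) / 2   # simulated local Hessian
g = rng.standard_normal(d)                                           # stands in for g_t

hess_vec = lambda v: H_mat @ v                          # the only oracle the solver needs
H = LinearOperator((d, d), matvec=hess_vec, rmatvec=hess_vec)        # symmetric operator

# min_x ||H x - g||            ->  x ~ H^+ g            (Case 1 quantity, up to sign)
x_case1 = lsmr(H, g, atol=1e-10, btol=1e-10)[0]

# min_x ||H x - g||^2 + phi^2 ||x||^2  ->  x ~ H~^+ g~   (Case 2 quantity, up to sign)
x_case2 = lsmr(H, g, damp=phi, atol=1e-10, btol=1e-10)[0]

print(np.linalg.norm(H_mat @ x_case1 - g), np.linalg.norm(x_case2))
```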
Line Search
After computing the update direction pt, DINGO computes the next iterate wt+1 by moving along pt by an appropriate step-size αt and forming wt+1 = wt + αtpt. We use an Armijo-type line search to choose this step-size. Specifically, as we are minimizing the norm of the gradient as a surrogate function, we choose the largest αt ∈ (0, 1] such that
$$\|g_{t+1}\|^2 \le \|g_t\|^2 + 2\alpha_t\rho\langle p_t, H_t g_t\rangle, \qquad (7)$$
for some constant $\rho \in (0, 1)$. By construction of $p_t$ we always have $\langle p_t, H_t g_t\rangle \le -\theta\|g_t\|^2$. Therefore, after each iteration we are strictly decreasing the norm of the gradient, and line-search guarantees that this occurs irrespective of all hyper-parameters of DINGO, i.e., $\theta$, $\phi$ and $\rho$.
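A minimal sketch of this Armijo-type backtracking on the gradient norm is given below; the function interface (a `grad` callable, the precomputed $H_t g_t$) and the halving schedule are our own illustrative choices rather than details prescribed by the paper.

```python
import numpy as np

def gradient_norm_line_search(w, p, g, Hg, grad, rho=1e-4, max_halvings=50):
    """Return the largest alpha in {1, 1/2, 1/4, ...} satisfying condition (7)."""
    base = np.dot(g, g)           # ||g_t||^2
    slope = np.dot(p, Hg)         # <p_t, H_t g_t> <= -theta ||g_t||^2 by construction of p_t
    alpha = 1.0
    for _ in range(max_halvings):
        g_new = grad(w + alpha * p)
        if np.dot(g_new, g_new) <= base + 2.0 * alpha * rho * slope:
            return alpha          # strict decrease of ||grad||^2 achieved
        alpha *= 0.5
    return alpha
```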
Algorithm 1 DINGO
1: input: initial point $w_0 \in \mathbb{R}^d$, gradient tolerance $\delta \ge 0$, maximum iterations $T$, line search parameter $\rho \in (0, 1)$, parameter $\theta > 0$, and regularization parameter $\phi > 0$ as in (3).
2: for $t = 0, 1, 2, \ldots, T-1$ do
3:   Distributively compute the full gradient $g_t$.
4:   if $\|g_t\| \le \delta$ then
5:     return $w_t$
6:   else
7:     The driver broadcasts $g_t$ and, in parallel, each worker $i$ computes $H_{t,i} g_t$, $H_{t,i}^{\dagger} g_t$ and $\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$.
8:     By a reduce operation, the driver computes $H_t g_t = \frac{1}{m}\sum_{i=1}^m H_{t,i} g_t$, $\frac{1}{m}\sum_{i=1}^m H_{t,i}^{\dagger} g_t$ and $\frac{1}{m}\sum_{i=1}^m \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$.
9:     if $\big\langle \frac{1}{m}\sum_{i=1}^m H_{t,i}^{\dagger} g_t,\ H_t g_t\big\rangle \ge \theta\|g_t\|^2$ then
10:      Let $p_t = \frac{1}{m}\sum_{i=1}^m p_{t,i}$, with $p_{t,i} = -H_{t,i}^{\dagger} g_t$.
11:    else if $\big\langle \frac{1}{m}\sum_{i=1}^m \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t,\ H_t g_t\big\rangle \ge \theta\|g_t\|^2$ then
12:      Let $p_t = \frac{1}{m}\sum_{i=1}^m p_{t,i}$, with $p_{t,i} = -\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$.
13:    else
14:      The driver computes $p_{t,i} = -\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$ for all $i$ such that $\langle \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t, H_t g_t\rangle \ge \theta\|g_t\|^2$.
15:      The driver broadcasts $H_t g_t$ to each worker $i$ such that $\langle \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t, H_t g_t\rangle < \theta\|g_t\|^2$ and, in parallel, they compute
         $p_{t,i} = -\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t - \lambda_{t,i}\big(\widetilde{H}_{t,i}^{T}\widetilde{H}_{t,i}\big)^{-1} H_t g_t$, with $\lambda_{t,i} = \dfrac{-g_t^T H_t \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t + \theta\|g_t\|^2}{g_t^T H_t \big(\widetilde{H}_{t,i}^{T}\widetilde{H}_{t,i}\big)^{-1} H_t g_t}$.
16:      Using a reduce operation, the driver computes $p_t = \frac{1}{m}\sum_{i=1}^m p_{t,i}$.
17:    end if
18:    Choose the largest $\alpha_t \in (0, 1]$ such that $\|\nabla f(w_t + \alpha_t p_t)\|^2 \le \|g_t\|^2 + 2\alpha_t\rho\langle p_t, H_t g_t\rangle$.
19:    The driver computes $w_{t+1} = w_t + \alpha_t p_t$.
20:  end if
21: end for
22: return $w_T$.
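The following single-machine simulation of one iteration of Algorithm 1 is a sketch under toy assumptions (small dense quadratics, explicit pseudo-inverses, and our own variable names); it is not the released implementation, but it shows how the three cases and the line search fit together.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, theta, phi, rho = 6, 4, 1e-4, 1e-6, 1e-4
H_loc = [(a := rng.standard_normal((d, d))).T @ a + 0.1 * np.eye(d) for _ in range(m)]  # H_{t,i}
b = [rng.standard_normal(d) for _ in range(m)]
w = rng.standard_normal(d)

grad = lambda v: np.mean([H_loc[i] @ v - b[i] for i in range(m)], axis=0)  # full gradient g_t
g = grad(w)
H = np.mean(H_loc, axis=0)                                # full Hessian H_t
Hg, gnorm2 = H @ g, g @ g

H_til = [np.vstack([Hi, phi * np.eye(d)]) for Hi in H_loc]
g_til = np.concatenate([g, np.zeros(d)])

v1 = np.mean([np.linalg.pinv(Hi) @ g for Hi in H_loc], axis=0)       # (1/m) sum_i H_i^+ g
v2_loc = [np.linalg.pinv(Ht) @ g_til for Ht in H_til]                # H~_i^+ g~ per worker
v2 = np.mean(v2_loc, axis=0)

if v1 @ Hg >= theta * gnorm2:                             # Case 1
    p = -v1
elif v2 @ Hg >= theta * gnorm2:                           # Case 2
    p = -v2
else:                                                     # Case 3, via (5) and (6)
    p_loc = []
    for i in range(m):
        if v2_loc[i] @ Hg >= theta * gnorm2:
            p_loc.append(-v2_loc[i])
        else:
            M = np.linalg.solve(H_til[i].T @ H_til[i], Hg)
            lam = (-(Hg @ v2_loc[i]) + theta * gnorm2) / (Hg @ M)
            p_loc.append(-v2_loc[i] - lam * M)
    p = np.mean(p_loc, axis=0)

alpha = 1.0                                               # Armijo-type backtracking on (7)
while grad(w + alpha * p) @ grad(w + alpha * p) > gnorm2 + 2 * alpha * rho * (p @ Hg):
    alpha *= 0.5
w_next = w + alpha * p
print("descent in ||grad||^2:", grad(w_next) @ grad(w_next) < gnorm2)
```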
3 Theoretical Analysis
In this section, we present convergence results for DINGO. The reader can find proofs of lemmas and theorems in the supplementary material. For notational convenience, in our analysis we define $\mathcal{C}_1 \triangleq \{t \mid \langle \frac{1}{m}\sum_{i=1}^m H_{t,i}^{\dagger} g_t, H_t g_t\rangle \ge \theta\|g_t\|^2\}$, $\mathcal{C}_2 \triangleq \{t \mid \langle \frac{1}{m}\sum_{i=1}^m \widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t, H_t g_t\rangle \ge \theta\|g_t\|^2,\ t \notin \mathcal{C}_1\}$, and $\mathcal{C}_3 \triangleq \{t \mid t \notin (\mathcal{C}_1 \cup \mathcal{C}_2)\}$, which are the sets indexing iterations $t$ that are in Case 1, Case 2 and Case 3, respectively. The convergence analysis under these cases is treated separately in Sections 3.2, 3.3 and 3.4. The unifying result is then simply given in Corollary 1. We begin, in Section 3.1, by establishing general underlying assumptions for our analysis. The analyses of Case 1 and Case 3 require their own specific assumptions, which are discussed in Sections 3.2 and 3.4, respectively.

Remark 3. As long as the presented assumptions are satisfied, our algorithm converges for any choice of $\theta$ and $\phi$, i.e., these hyper-parameters do not require the knowledge of the, practically unknowable, parameters from these assumptions. However, in Lemma 3 we give qualitative guidelines for a better choice of $\theta$ and $\phi$ to avoid Case 2 and Case 3, which are shown to be less desirable than Case 1.
3.1 General Assumptions
As DINGO makes use of Hessian-vector products, we make the following straightforward assumption. Assumption 1 (Twice Differentiability). The functions fi in (1) are twice differentiable.
Notice that we do not require each fi to be twice continuously differentiable. In particular, our analysis carries through even if the Hessian is discontinuous. This is in sharp contrast to popular belief that the application of non-smooth Hessian can hurt more so than it helps, e.g., [19]. Note that
even if the Hessian is discontinuous, Assumption 1 is sufficient in ensuring that $H_{t,i}$ is symmetric, for all $t$ and $i$ [20]. Following [16], we also make the following general assumption on $f$.

Assumption 2 (Moral-Smoothness [16]). For all iterations $t$, there exists a constant $L \in (0,\infty)$ such that
$$\big\|\nabla^2 f(w)\nabla f(w) - \nabla^2 f(w_t)\nabla f(w_t)\big\| \le L\|w - w_t\|,$$
for all $w \in [w_t, w_t + p_t]$, where $p_t$ is the update direction of DINGO at iteration $t$.

As discussed in [16] with explicit examples, Assumption 2 is strictly weaker than the common assumptions of the gradient and Hessian being both Lipschitz continuous. Using [16, Lemma 10], it follows from Assumptions 1 and 2 that
$$\big\|\nabla f(w_t + \alpha p_t)\big\|^2 \le \|g_t\|^2 + 2\alpha\langle p_t, H_t g_t\rangle + \alpha^2 L\|p_t\|^2, \qquad (8)$$
for all $\alpha \in [0, 1]$ and all iterations $t$.
3.2 Analysis of Case 1
In this section, we analyze the convergence of iterations of DINGO that fall under Case 1. For such iterations, we make the following assumption about the action of the pseudo-inverse of $H_{t,i}$ on $g_t$.

Assumption 3 (Pseudo-Inverse Regularity of $H_{t,i}$ on $g_t$). For all $t \in \mathcal{C}_1$ and all $i = 1, 2, \ldots, m$, there exist constants $\gamma_i \in (0,\infty)$ such that $\|H_{t,i}^{\dagger} g_t\| \le \gamma_i\|g_t\|$.
Assumption 3 may appear unconventional. However, it may be seen as more general than the following assumption. Assumption 4 (Pseudo-Inverse Regularity of Ht on its Range Space [16]). There exists a constant γ ∈ (0,∞) such that for all iterates wt we have ‖Htp‖ ≥ γ‖p‖ for all p ∈ R(Ht).
Assumption 4 implies $\|H_t^{\dagger} g_t\| = \|H_t^{\dagger}(U_t U_t^T + U_t^{\perp}(U_t^{\perp})^T) g_t\| = \|H_t^{\dagger} U_t U_t^T g_t\| \le \gamma^{-1}\|g_t\|$, where $U_t$ and $U_t^{\perp}$ denote arbitrary orthonormal bases for $\mathcal{R}(H_t)$ and $\mathcal{R}(H_t)^{\perp}$, respectively, and $\mathcal{R}(H_t)^{\perp} = \mathcal{N}(H_t^T) = \mathcal{N}(H_t^{\dagger})$. Recall that Assumption 4 is a significant relaxation of strong convexity. As an example, an under-determined least-squares problem $f(w) = \|Aw - b\|^2/2$, which is clearly not strongly convex, satisfies Assumption 4 with $\gamma = \sigma_{\min}^2(A)$, where $\sigma_{\min}(A)$ is the smallest non-zero singular value of $A$.

Theorem 1 (Convergence Under Case 1). Suppose we run DINGO. Then under Assumptions 1, 2 and 3, for all $t \in \mathcal{C}_1$ we have $\|g_{t+1}\|^2 \le (1 - 2\tau_1\rho\theta)\|g_t\|^2$, where $\tau_1 = \min\{1,\ 2(1-\rho)\theta/(L\gamma^2)\}$, $\gamma = \frac{1}{m}\sum_{i=1}^m \gamma_i$, $L$ is as in Assumption 2, $\gamma_i$ are as in Assumption 3, and $\rho$ and $\theta$ are as in Algorithm 1.
From the proof of Theorem 1, it is easy to see that for all $t \in \mathcal{C}_1$ we are guaranteed that $0 < 1 - 2\tau_1\rho\theta < 1$. In Theorem 1, the term $\gamma$ is the average of the $\gamma_i$'s. This is beneficial as it "smooths out" non-uniformity in the $\gamma_i$'s; for example, $\gamma \ge \min_i \gamma_i$. Under specific assumptions on (1), we can theoretically guarantee that $t \in \mathcal{C}_1$ for all iterations $t$. The following lemma provides one such example.

Lemma 1. Suppose Assumption 1 holds and that we run DINGO. Furthermore, suppose that for all iterations $t$ and all $i = 1, 2, \ldots, m$, the Hessian matrix $H_{t,i}$ is invertible and there exist constants $\varepsilon_i \in [0,\infty)$ and $\nu_i \in (0,\infty)$ such that $\|H_{t,i} - H_t\| \le \varepsilon_i$ and $\nu_i\|g_t\| \le \|H_{t,i} g_t\|$. If $\frac{1}{m}\sum_{i=1}^m (1 - \varepsilon_i/\nu_i) \ge \theta$ then $t \in \mathcal{C}_1$ for all $t$, where $\theta$ is as in Algorithm 1.
As an example, the assumptions of Lemma 1 trivially hold if each $f_i$ is strongly convex and the data is distributed appropriately. Under the assumptions of Lemma 1, if the Hessian matrix for each worker is on average a reasonable approximation to the full Hessian, i.e., $\varepsilon_i$ is on average sufficiently small so that $\sum_{i=1}^m \varepsilon_i/\nu_i < m$, then we can choose $\theta$ small enough to ensure that $t \in \mathcal{C}_1$ for all $t$. In other words, for the iterates to stay in $\mathcal{C}_1$, we do not require the Hessian matrix of each individual worker to be a high-quality approximation to the full Hessian (which could indeed be hard to enforce in many practical applications). As long as the data is distributed in such a way that the Hessian matrices are on average reasonable approximations, we can guarantee that $t \in \mathcal{C}_1$ for all $t$.
3.3 Analysis of Case 2
We now analyze the convergence of DINGO for iterations that fall under Case 2. For this case, we do not require any additional assumptions beyond Assumptions 1 and 2. Instead, we use the upper bound $\|\widetilde{H}_{t,i}^{\dagger}\| \le 1/\phi$ for all iterations $t$ and all $i = 1, 2, \ldots, m$, where $\phi$ is as in Algorithm 1; see Lemma 4 in the supplementary material for a proof of this upper bound.
Theorem 2 (Convergence Under Case 2). Suppose we run DINGO. Then under Assumptions 1 and 2, for all $t \in \mathcal{C}_2$ we have $\|g_{t+1}\|^2 \le (1 - 2\tau_2\rho\theta)\|g_t\|^2$, where $\tau_2 = \min\{1,\ 2(1-\rho)\phi^2\theta/L\}$, $L$ is as in Assumption 2, and $\rho$, $\theta$ and $\phi$ are as in Algorithm 1.
In our experience, we have found that Case 2 does not occur frequently in practice. It serves more of a theoretical purpose and is used to identify when Case 3 is required. Case 2 may be thought of as a specific instance of Case 3, in which It is empty. However, it merits its own case, as in analysis it does not require additional assumptions to Assumptions 1 and 2, and in practice it may avoid an additional two communication rounds. If we were to bypass Case 2 to Case 3 and allow It to be empty, then Theorem 3 of Section 3.4 with |It| = 0, which states the convergence for Case 3, indeed coincides with Theorem 2.
3.4 Analysis of Case 3
Now we turn to the final case, and analyze the convergence of iterations of DINGO that fall under Case 3. For such iterations, we make the following assumption.

Assumption 5. For all $t \in \mathcal{C}_3$ and all $i = 1, 2, \ldots, m$ there exist constants $\delta_i \in (0,\infty)$ such that $\|(\widetilde{H}_{t,i}^{T})^{\dagger} H_t g_t\| \ge \delta_i\|g_t\|$.

Assumption 5, like Assumption 3, may appear unconventional. In Lemma 2 we show how Assumption 5 is implied by three other reasonable assumptions, one of which is as follows.
Assumption 6 (Gradient-Hessian Null-Space Property [16]). There exists a constant $\nu \in (0, 1]$ such that
$$\big\|(U_w^{\perp})^T \nabla f(w)\big\|^2 \le \frac{1-\nu}{\nu}\big\|U_w^T \nabla f(w)\big\|^2,$$
for all $w \in \mathbb{R}^d$, where $U_w$ and $U_w^{\perp}$ denote any orthonormal bases for $\mathcal{R}(\nabla^2 f(w))$ and its orthogonal complement, respectively.

Assumption 6 implies that, as the iterations progress, the gradient will not become arbitrarily orthogonal to the range space of the Hessian matrix. As an example, any least-squares problem $f(w) = \|Aw - b\|^2/2$ satisfies Assumption 6 with $\nu = 1$; see [16] for detailed discussion and many more examples of Assumption 6.
Lemma 2. Suppose Assumptions 4 and 6 hold and $\|H_{t,i}\|^2 \le \tau_i$ for all $t \in \mathcal{C}_3$ and $i = 1, 2, \ldots, m$, with $\tau_i \in (0,\infty)$, i.e., the local Hessians are bounded. Then Assumption 5 holds with $\delta_i = \gamma\sqrt{\nu/(\tau_i + \phi^2)}$, where $\phi$ is as in Algorithm 1, and $\gamma$ and $\nu$ are as in Assumptions 4 and 6, respectively.
The following theorem provides convergence properties for iterations of DINGO that are in Case 3.

Theorem 3 (Convergence Under Case 3). Suppose we run DINGO. Then under Assumptions 1, 2 and 5, for all $t \in \mathcal{C}_3$ we have $\|g_{t+1}\|^2 \le (1 - 2\omega_t\rho\theta)\|g_t\|^2 \le (1 - 2\tau_3\rho\theta)\|g_t\|^2$, where $\omega_t = \min\{1,\ 2(1-\rho)\theta/(L c_t^2)\}$, $\tau_3 = \min\{1,\ 2(1-\rho)\theta/(L c^2)\}$,
$$c_t = \frac{1}{m\phi}\Big(m + |\mathcal{I}_t| + \theta\sum_{i\in\mathcal{I}_t}\frac{1}{\delta_i}\Big), \qquad c = \frac{2}{\phi} + \frac{\theta}{m\phi}\sum_{i=1}^{m}\frac{1}{\delta_i},$$
$L$ is as in Assumption 2, $\delta_i$ are as in Assumption 5, $\mathcal{I}_t$ is as in (5), and $\rho$, $\theta$ and $\phi$ are as in Algorithm 1.
Note that the convergence in Theorem 3 is given in both iteration dependent and independent format, since the former explicitly relates the convergence rate to the size of It, while the latter simply upper-bounds this, and hence is qualitatively less informative.
Comparing Theorems 2 and 3, iterations of DINGO should have slower convergence if they are in Case 3 rather than Case 2. By Theorem 3, if an iteration t resorts to Case 3 then we may have slower convergence for larger |It|. Moreover, this iteration would require two more communication rounds than if it were to stop in Case 1 or Case 2. Therefore, one may wish to choose θ and φ appropriately to reduce the chances that iteration t falls in Case 3 or that |It| is large. Under this consideration, Lemma 3 presents a necessary condition on a relationship between θ and φ.
Lemma 3. Suppose we run DINGO. Under Assumption 1, if |It| < m for some iteration t, then θφ ≤ ‖Htgt‖/‖gt‖.
Lemma 3 suggests that we should pick $\theta$ and $\phi$ so that their product, $\theta\phi$, is small. Clearly, choosing a smaller $\theta$ will increase the chance of an iteration of DINGO being in Case 1 or Case 2. However, this also gives a lower rate of convergence in Theorems 1 and 2. Choosing a smaller $\phi$ will preserve more curvature information of the Hessian $H_{t,i}$ in $\widetilde{H}_{t,i}^{\dagger}$. However, $\phi$ should still be reasonably large, as making $\phi$ smaller also makes some of the sub-problems of DINGO more ill-conditioned. There is a non-trivial trade-off between $\phi$ and $\theta$, and Lemma 3 gives an appropriate way to set them.
We can finally present a unifying result on the overall worst-case linear convergence rate of DINGO.
Corollary 1 (Overall Linear Convergence of DINGO). Suppose we run DINGO. Then under Assumptions 1, 2, 3 and 5, for all iterations t we have ‖gt+1‖2 ≤ (1−2τρθ)‖gt‖2 with τ = min{τ1, τ2, τ3}, where τ1, τ2 and τ3 are as in Theorems 1, 2, and 3, respectively, and ρ and θ are as in Algorithm 1.
From Corollary 1, DINGO can achieve ‖gt‖ ≤ ε with O(log(ε)/(τρθ)) communication rounds. Moreover, the term τ is a lower bound on the step-size under all cases, which can determine the maximum communication cost needed during line-search. For example, knowing τ could determine the number of step-sizes used in backtracking line-search for DINGO in Section 4.
4 Experiments
In this section, we evaluate the empirical performance of DINGO, GIANT, DiSCO, InexactDANE, AIDE, Asynchronous SGD (Async-SGD) and Synchronous SGD (Sync-SGD) [11] on the strongly convex problem of softmax cross-entropy minimization with regularization on the CIFAR10 dataset [21], see Figure 1. This dataset has 50000 training samples, 10000 test samples and each datapoint xi ∈ R3072 has a label yi ∈ {1, 2, . . . , 10}. This problem has dimension d = 27648. In the supplementary material, the reader can find additional experiments on another softmax regression
as well as on a Gaussian mixture model and autoencoder problem. In all experiments we consider (1) with (2), where the sets S1, S2, . . . , Sm randomly partition the index set {1, 2, . . . , n}, with each having equal size s = n/m. Code is available at https://github.com/RixonC/DINGO.
We describe some implementation details. All sub-problem solvers are limited to 50 iterations and do not employ preconditioning. For DINGO, we use the sub-problem solvers MINRES-QLP [22], LSMR [23] and CG [24] when computing $H_{t,i}^{\dagger} g_t$, $\widetilde{H}_{t,i}^{\dagger}\widetilde{g}_t$ and $(\widetilde{H}_{t,i}^T \widetilde{H}_{t,i})^{-1}(H_t g_t)$, respectively. We choose CG for the latter problem as the approximation $x$ of $(\widetilde{H}_{t,i}^T \widetilde{H}_{t,i})^{-1} H_t g_t$ is guaranteed to satisfy $\langle H_t g_t, x\rangle > 0$ regardless of the number of CG iterations performed. For DINGO, unless otherwise stated, we set $\theta = 10^{-4}$ and $\phi = 10^{-6}$. We use backtracking line search for DINGO and GIANT to select the largest step-size in $\{1, 2^{-1}, 2^{-2}, \ldots, 2^{-50}\}$ which passes, with an Armijo line-search parameter of $10^{-4}$. For InexactDANE, we set $\eta = 1$ and $\mu = 0$, as in [15], and use SVRG [25] as a local solver with the best learning rate from $\{10^{-6}, 10^{-5}, \ldots, 10^{6}\}$. We have each iteration of AIDE invoke one iteration of InexactDANE, with the same parameters as in the stand-alone InexactDANE method, and use the best catalyst acceleration parameter $\tau \in \{10^{-6}, 10^{-5}, \ldots, 10^{6}\}$, as in [15]. For Async-SGD and Sync-SGD we report the best learning rate from $\{10^{-6}, 10^{-5}, \ldots, 10^{6}\}$ and each worker uses a mini-batch of size $n/(5m)$.
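For context on how a worker can supply the Hessian-vector products these solvers need in this experiment, here is a sketch using PyTorch double backward on a regularized softmax cross-entropy loss; the shapes, regularization weight, and parameterization are illustrative assumptions, and the snippet is not taken from the released code.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_i, p, C, gamma = 128, 32, 10, 1e-3          # local samples, features, classes, L2 weight
X = torch.randn(n_i, p)
y = torch.randint(0, C, (n_i,))
w = torch.zeros(p * C, requires_grad=True)    # flattened parameters, d = p * C here

def local_loss(w):
    logits = X @ w.view(p, C)
    return F.cross_entropy(logits, y) + 0.5 * gamma * (w @ w)

loss = local_loss(w)
g, = torch.autograd.grad(loss, w, create_graph=True)     # local gradient g_{t,i}

v = torch.randn(p * C)                                    # e.g. the broadcast vector g_t or H_t g_t
Hv, = torch.autograd.grad(g @ v, w, retain_graph=True)    # Hessian-vector product H_{t,i} v
print(Hv.shape)
```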
DiSCO has consistent performance, regardless of the number of workers, due to the distributed PCG algorithm. This essentially allows DiSCO to perform Newton's method over the full dataset. This is unnecessarily costly, in terms of communication rounds, when $s$ is reasonably large. Thus we see it perform comparatively poorly in Plots 1(a), 1(b), and 1(c). DiSCO outperforms GIANT and DINGO in Plot 1(d). This is likely because the local directions ($-H_{t,i}^{-1} g_t$ and $p_{t,i}$ for GIANT and DINGO, respectively) give poor updates as they are calculated using very small subsets of data, i.e., in Plot 1(d) each worker has access to only 5 data points, while $d = 27648$.
A significant advantage of DINGO over InexactDANE, AIDE, Async-SGD and Sync-SGD is that its hyper-parameters are relatively easy to tune. Namely, making bad choices for $\rho$, $\theta$ and $\phi$ in DINGO will give sub-optimal performance; however, it is still theoretically guaranteed to strictly decrease the norm of the gradient. In contrast, some choices of hyper-parameters in InexactDANE, AIDE, Async-SGD and Sync-SGD will cause divergence and these choices can be problem specific. Moreover, these methods can be very sensitive to the chosen hyper-parameters, with some being very difficult to select. For example, the acceleration parameter $\tau$ in AIDE was found to be difficult and time consuming to tune and the performance of AIDE was sensitive to it; notice the variation in selected $\tau$ in Figure 1. This difficulty was also observed in [13, 15]. We found that simply choosing $\rho$, $\theta$ and $\phi$ to be small, in DINGO, gave high performance. Figure 2 compares different values of $\theta$.
5 Future Work
The following is left for future work. First, extending the analysis of DINGO to include convergence results under inexact updates. Second, finding more efficient methods of line search, for practical implementations of DINGO, than backtracking line search. Under backtracking line search, GIANT and DINGO require the communication of some constant number of scalars and vectors, respectively. Hence, for DINGO, line search may transmit a large amount of data over the network, while still only requiring two communication rounds per iteration. Lastly, considering modifications to DINGO that prevent convergence to a local maximum/saddle point in non-invex problems.
Acknowledgments
Both authors gratefully acknowledge the generous support by the Australian Research Council (ARC) Centre of Excellence for Mathematical & Statistical Frontiers (ACEMS). Fred Roosta was partially supported by DARPA as well as ARC through a Discovery Early Career Researcher Award (DE180100923). Part of this work was done while Fred Roosta was visiting the Simons Institute for the Theory of Computing.

Prompt

1. What is the focus and contribution of the paper regarding distributed Newton methods for gradient-norm optimization?
2. What are the strengths of the proposed approach, particularly in its ability to handle non-convex objectives without specific forms or restrictions?
3. What are the weaknesses or limitations of the method, such as concerns related to convergence or local minima?
4. How does the proposed method compare to existing works in terms of communication vs computation balance, number of hyperparameters, and sensitivity to hyperparameters?
5. What are some minor issues or suggestions for improving the clarity and presentation of the work, such as redundant phrases, unclear statements, or missing words?

Format: Review

Review
In this paper, the authors propose a distributed Newton method for gradient-norm optimization. The method does not impose any specific form on the underlying objective function. The authors present convergence analysis for the method and illustrate the performance of the method on a convex problem (in the main paper).

Originality: The topic of the paper, in my opinion, is very interesting. The paper presents an efficient Newton method that is motivated via the optimization of the norm of the gradient. As a result, no assumptions are made on the objective function. This is a key differentiating feature of the method as compared to other such methods.

Quality: The overall quality of the paper is very good. The motivation is clear, the theory is well thought out and presented, and the numerical results show the superiority of the method (on a convex problem). I state my minor comments/questions/concerns below.

Clarity: The paper is well-written and motivated. Below I state a few minor issues and concerns, as well as a few suggestions to improve the clarity and presentation of the work.

Significance: In my opinion such distributed Newton methods are of interest to both the optimization and machine learning communities. This paper attempts to alleviate some of the issues (communication vs computation balance, number of hyper-parameters and sensitivity to hyper-parameters) of existing works. I believe that this paper, after minor corrections and modifications, should be accepted for publication at NeurIPS.

Issues/Questions/Comments/Suggestions:
- "increasingly more frequently": is this phrase not redundant?
- "regardless of the selected hyper-parameters": The theory shows that this is indeed the case for the method. Of course, the method was designed, in a certain sense, such that that claim would be true. Nevertheless, it could be a bit misleading. Although strict reduction is guaranteed with any hyper-parameter setting, this reduction could be very small if the hyper-parameters are chosen poorly. The authors should comment about this in the manuscript.
- Related Work and Contribution paragraph 1: the authors should more clearly state which disadvantages apply to each method.
- Line 66: "derived by optimization of the": missing word?
- Deriving DINGO from the optimization of the gradient norm is interesting and has many advantages as stated by the authors (e.g., no restrictions on the functions). However, does this approach have any limitations/drawbacks? For example, convergence to a local maximum or saddle point? The authors should add a comment about this in the main paper.
- The discussion about the hyper-parameters is interesting, and shows the strength of the proposed approach. I suggest that the authors present this information in a table. For each method, the authors could clearly show the hyper-parameters associated. Moreover, in the table the authors could clearly state the per iteration cost (in terms of communications) of each method.
- The drawback of the current analysis of DINGO is that it requires exact solutions to the sub-problems. The authors clearly state this. What is the limitation? How would the analysis change to account for inexact solves? The authors should comment about this in the manuscript.
- Per iteration cost of DINGO: the authors should discuss the per iteration cost of DINGO in the manuscript, both in terms of communication and computation, and compare with existing methods.
  If I am not mistaken, in the worst case, DINGO requires 4 rounds of communications per iteration, plus the number of communications associated with satisfying the line search condition.
- Line Search: Usually, the Armijo condition does not have the factor of 2. The authors should comment about this in the paper.
- DINGO algorithm is complicated: Algorithm 1 is complicated (with the three cases). The authors may want to give a high-level description of the method before they present the actual algorithm.
- Effect of theta parameter: The theta parameter controls the search direction chosen by the algorithm. Essentially, it controls that the search direction is not orthogonal to the gradient of (4), and that it is a descent direction. The authors should comment about this in the paper and discuss the role of theta.
- Assumptions 3-6: The authors should add to the discussion of these assumptions. Why are they realistic? Why are they necessary?
- In the experiment presented in the main paper, the function is strongly convex, and thus all iterates fall into Case 1. The authors should discuss the effect of Case 2 and 3 iterates on the performance of the method.
- Future Work: What do the authors mean by "more efficient methods"?
NIPS | Title
DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization
Abstract
For optimization of a large sum of functions in a distributed computing environment, we present a novel communication efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods. Our algorithm, DINGO, is derived by optimization of the gradient’s norm as a surrogate function. DINGO does not impose any specific form on the underlying functions and its application range extends far beyond convexity and smoothness. The underlying sub-problems of DINGO are simple linear least-squares, for which a plethora of efficient algorithms exist. DINGO involves a few hyper-parameters that are easy to tune and we theoretically show that a strict reduction in the surrogate objective is guaranteed, regardless of the selected hyper-parameters.
1 Introduction
Consider the optimization problem
min w∈Rd
{ f(w) , 1
m m∑ i=1 fi(w) } , (1)
in a centralized distributed computing environment involving one driver machine and m worker machines, in which the ith worker can only locally access the ith component function, fi. Such distributed computing settings arise increasingly more frequently as a result of technological and communication advancements that have enabled the collection of and access to large scale datasets.
As a concrete example, take a data fitting application, in which given n data points, {xi}ni=1, and their corresponding loss, `i(w;xi), parameterized by w, the goal is to minimize the overall loss as minw∈Rd ∑n i=1 `i(w;xi)/n. Such problems appear frequently in machine learning, e.g., [1, 2, 3] and scientific computing, e.g., [4, 5, 6]. However, in “big data” regimes where n 1, lack of adequate computational resources, in particular storage, can severely limit, or even prevent, any attempts at solving such optimization problems in a traditional stand-alone way, e.g., using a single machine. This can be remedied through distributed computing, in which resources across a network of stand-alone computational nodes are “pooled” together so as to scale to the problem at hand [7]. In such a setting, where n data points are distributed across m workers, one can instead consider (1) with
fi(w) , 1 |Si| ∑ j∈Si `j(w;xj), i = 1, 2, . . . ,m, (2)
where Si ⊆ {1, 2, . . . , n}, with cardinality denoted by |Si|, correspond to the distribution of data across the nodes, i.e., the ith node has access to a portion of the data indexed by the set Si.
In distributed settings, the amount of communications, i.e., messages exchanged across the network, are often considered a major bottleneck of computations (often more so than local computation
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
times), as they can be expensive in terms of both physical resources and time through latency [8, 9]. First-order methods [10], e.g., stochastic gradient descent (SGD) [11], solely rely on gradient information and as a result are rather easy to implement in distributed settings. They often require the performance of many computationally inexpensive iterations, which can be suitable for execution on a single machine. However, as a direct consequence, they can incur excessive communication costs in distributed environments and, hence, they might not be able to take full advantage of the available distributed computational resources.
By employing curvature information in the form of the Hessian matrix, second-order methods aim at transforming the gradient such that it is a more suitable direction to follow. Compared with first-order alternatives, although second-order methods perform more computations per iteration, they often require far fewer iterations to achieve similar results. In distributed settings, this feature can directly translate to significantly less communication costs. As a result, distributed second-order methods have the potential to become the method of choice for distributed optimization tasks.
Notation
We let 〈·, ·〉 denote the common Euclidean inner product defined by 〈x,y〉 = xTy for x,y ∈ Rd. Given a vector v and matrix A, we denote their vector `2 norm and matrix spectral norm as ‖v‖ and ‖A‖, respectively. For x, z ∈ Rd we let [x, z] , { x+ τ(z− x) | 0 ≤ τ ≤ 1 } . The range and null space of a matrix A is denoted byR(A) and N (A), respectively. The Moore–Penrose inverse [12] of A is denoted by A†. We let wt ∈ Rd denote the point at iteration t. For notational convenience, we denote gt,i , ∇fi(wt), Ht,i , ∇2fi(wt), gt , ∇f(wt) and Ht , ∇2f(wt). We also let
H̃t,i , [ Ht,i φI ] ∈ R2d×d and g̃t , ( gt 0 ) ∈ R2d, (3)
where φ > 0, I is the identity matrix, and 0 is the zero vector.
Related Work and Contributions
Owing to the above-mentioned potential, many distributed second-order optimization algorithms have recently emerged to solve (1). Among them, most notably are GIANT [13], DiSCO [9], DANE [14], InexactDANE and AIDE [15]. While having many advantages, each of these methods respectively come with several disadvantages that can limit their applicability in certain regimes. Namely, some rely on, rather stringent, (strong) convexity assumptions, while for others the underlying subproblems involve non-linear optimization problems that are themselves non-trivial to solve. A subtle, yet potentially severe, draw-back for many of the above-mentioned methods is that their performance can be sensitive to, and severely affected by, the choice of their corresponding hyper-parameters.
Here, we present a novel communication efficient distributed second-order optimization method that aims to alleviate many of the aforementioned disadvantages. Our approach is inspired by and follows many ideas of recent results on Newton-MR [16], which extends the application range of the classical Newton-CG beyond (strong) convexity and smoothness. More specifically, our algorithm, named DINGO for “DIstributed Newton-type method for Gradient-norm Optimization”, is derived by optimization of the gradient’s norm as a surrogate function for (1), i.e.,
min w∈Rd
{ 1
2 ∥∥∇f(w)∥∥2 = 1 2m2 ∥∥∥∥∥ m∑ i=1 ∇fi(w) ∥∥∥∥∥ 2} . (4)
When f is invex, [17, 18], the problems (1) and (4) have the same solutions. Recall that invexity is the generalization of convexity, which extends the sufficiency of the first order optimality condition, e.g., Karush-Kuhn-Tucker conditions, to a broader class of problems than simple convex programming. In other words, invexity is a special case of non-convexity, which subsumes convexity as a sub-class. In this light, unlike DiSCO and GIANT, by considering the surrogate function (4), DINGO’s application range and theoretical guarantees extend far beyond convex settings to invex problems. Naturally, by considering (4), DINGO may converge to a local maximum or saddle point in non-invex problems.
Similar to GIANT and DiSCO, and in contrast to DANE, InexactDANE and AIDE, our algorithm involves a few hyper-parameters that are easy to tune and the underlying sub-problems are simple linear least-squares, for which a plethora of efficient algorithms exist. However, the theoretical
analysis of both GIANT and DiSCO is limited to the case where each fi is strongly convex, and for GIANT they are also of the special form where in (2) we have `j(w;xj) = ψj(〈w,xj〉) + γ‖w‖2, γ > 0 is a regularization parameter and ψj is convex, e.g., linear predictor models. In contrast, DINGO does not impose any specific form on the underlying functions. Also, unlike GIANT, we allow for |Si| < d in (2). Moreover, we theoretically show that DINGO is not too sensitive to the choice of its hyper-parameters in that a strict reduction in the gradient norm is guaranteed, regardless of the selected hyper-parameters. See Tables 1 and 2 for a summary of high-level algorithm properties. Finally, we note that, unlike GIANT, DiSCO, InexactDANE and AIDE, our theoretical analysis requires exact solutions to the sub-problems. Despite the fact that the sub-problems of DINGO are simple ordinary least-squares, and that DINGO performs well in practice with very crude solutions, this is admittedly a theoretical restriction, which we aim to address in future.
The distributed computing environment that we consider is also assumed by GIANT, DiSCO, DANE, InexactDANE and AIDE. Moreover, as with these methods, we restrict communication to vectors of size linear in d, i.e., O(d). A communication round is performed when the driver uses a broadcast operation to send information to one or more workers in parallel, or uses a reduce operation to receive information from one or more workers in parallel. For example, computing the gradient at iteration t, namely gt = ∑m i=1 gt,i/m, requires two communication rounds, i.e., the driver broadcasts wt to all workers and then, by a reduce operation, receives gt,i for all i. We further remind that the distributed computational model considered here is such that the main bottleneck involves the communications across the network.
2 DINGO
In this section, we describe the derivation of DINGO, as depicted in Algorithm 1. Each iteration t involves the computation of two main ingredients: an update direction pt, and an appropriate step-size αt. As usual, our next iterate is then set as wt+1 = wt + αtpt.
Update Direction
We begin iteration t by distributively computing the gradient gt. Thereafter, we distributively compute the Hessian-gradient product Htgt = ∑m i=1 Ht,igt/m as well as the vectors ∑m i=1 H
† t,igt/m and∑m
i=1 H̃ † t,ig̃t/m. Computing the update direction pt involves three cases, all of which involve simple
linear least-squares sub-problems: Case 1 If 〈 ∑m
i=1 H † t,igt/m,Htgt〉 ≥ θ‖gt‖2, where θ is as in Algorithm 1, then we let pt = ∑m i=1 pt,i/m, with pt,i = −H † t,igt. Here, we check that the potential update direction
“− ∑m
i=1 H † t,igt/m” is a suitable descent direction for our surrogate objective (4). We do this since
we have not imposed any restrictive assumptions on (1), e.g., strong convexity of each fi, that would automatically guarantee descent; see Lemma 1 for an example of such restrictive assumptions. Case 2 If Case 1 fails, we include regularization and check again that the new potential update direction yields suitable descent. Namely, if 〈 ∑m i=1 H̃
† t,ig̃t/m,Htgt〉 ≥ θ‖gt‖2, then we let pt =∑m
i=1 pt,i/m, with pt,i = −H̃ † t,ig̃t.
Case 3 If all else fails, we enforce descent in the norm of the gradient. More specifically, as Case 2 does not hold, the set
It , { i = 1, 2, . . . ,m | 〈H̃†t,ig̃t,Htgt〉 < θ‖gt‖ 2 } , (5)
is non-empty. In parallel, the driver broadcasts Htgt to each worker i ∈ It and has it locally compute the solution to
argmin pt,i
1 2 ‖Ht,ipt,i + gt‖2 +
φ2
2 ‖pt,i‖2, such that 〈pt,i,Htgt〉 ≤ −θ‖gt‖2,
where φ is as in (3). It is easy to show that the solution to this problem is
pt,i = −H̃†t,ig̃t − λt,i(H̃ T t,iH̃t,i) −1Htgt, λt,i = −gTt HtH̃ † t,ig̃t + θ‖gt‖2
gTt Ht(H̃ T t,iH̃t,i) −1Htgt . (6)
The term λt,i in (6) is positive by the definition of It and well-defined by Assumption 5, which implies that for gt 6= 0 we have Htgt 6= 0. In conclusion, for Case 3, each worker i ∈ It computes (6) and, using a reduce operation, the driver then computes the update direction pt = ∑m i=1 pt,i/m, which by construction yields descent in the surrogate objective (4). Note that pt,i = −H̃†t,ig̃t for all i /∈ It have already been obtained as part of Case 2. Remark 1. The three cases help avoid the need for any unnecessary assumptions on data distribution or the knowledge of any practically unknowable constants. In fact, given Lemma 1, which imposes a certain assumption on the data distribution, we could have stated our algorithm in its simplest form, i.e., with only Case 1. This would be more in line with some prior works, e.g., GIANT, but it would have naturally restricted the applicability of our method in terms of data distributions. Remark 2. In practice, like GIANT and DiSCO, our method DINGO never requires the computation or storage of an explicitly formed Hessian. Instead, it only requires Hessian-vector products, which can be computed at a similar cost to computing the gradient itself. Computing matrix pseudo-inverse and vector products, e.g., H†t,igt, constitute the sub-problems of our algorithm. This, in turn, is done through solving least-squares problems using iterative methods that only require matrix-vector products (see Section 4 for some such methods). Thus DINGO is suitable for large dimension d in (1).
Line Search
After computing the update direction pt, DINGO computes the next iterate wt+1 by moving along pt by an appropriate step-size αt and forming wt+1 = wt + αtpt. We use an Armijo-type line search to choose this step-size. Specifically, as we are minimizing the norm of the gradient as a surrogate function, we choose the largest αt ∈ (0, 1] such that
‖gt+1‖2 ≤ ‖gt‖2 + 2αtρ〈pt,Htgt〉, (7) for some constant ρ ∈ (0, 1). By construction of pt we always have 〈pt,Htgt〉 ≤ −θ‖gt‖2. Therefore, after each iteration we are strictly decreasing the norm of the gradient, and line-search guarantees that this occurs irrespective of all hyper-parameters of DINGO, i.e., θ, φ and ρ.
Algorithm 1 DINGO 1: input initial point w0 ∈ Rd, gradient tolerance δ ≥ 0, maximum iterations T , line search
parameter ρ ∈ (0, 1), parameter θ > 0, and regularization parameter φ > 0 as in (3). 2: for t = 0, 1, 2, . . . , T − 1 do 3: Distributively compute the full gradient gt. 4: if ‖gt‖ ≤ δ then 5: return wt 6: else 7: The driver broadcasts gt and, in parallel, each worker i computes Ht,igt, H † t,igt and H̃ † t,ig̃t.
8: By a reduce operation, the driver computes Htgt = 1m ∑m i=1 Ht,igt, 1 m ∑m i=1 H † t,igt and
1 m ∑m i=1 H̃ † t,ig̃t.
9: if 〈
1 m ∑m i=1 H † t,igt,Htgt 〉 ≥ θ‖gt‖2 then
10: Let pt = 1m ∑m i=1 pt,i, with pt,i = −H † t,igt.
11: else if 〈
1 m ∑m i=1 H̃ † t,ig̃t,Htgt 〉 ≥ θ‖gt‖2 then
12: Let pt = 1m ∑m i=1 pt,i, with pt,i = −H̃ † t,ig̃t. 13: else 14: The driver computes pt,i = −H̃†t,ig̃t for all i such that 〈H̃ † t,ig̃t,Htgt〉 ≥ θ‖gt‖2. 15: The driver broadcasts Htgt to each worker i such that 〈H̃†t,ig̃t,Htgt〉 < θ‖gt‖2 and, in parallel, they compute
pt,i = −H̃†t,ig̃t − λt,i(H̃ T t,iH̃t,i) −1Htgt, λt,i = −gTt HtH̃ † t,ig̃t + θ‖gt‖2
gTt Ht(H̃ T t,iH̃t,i) −1Htgt .
16: Using a reduce operation, the driver computes pt = 1m ∑m
i=1 pt,i. 17: end if 18: Choose the largest αt ∈ (0, 1] such that ∥∥∇f(wt + αtpt)∥∥2 ≤ ‖gt‖2 + 2αtρ〈pt,Htgt〉. 19: The driver computes wt+1 = wt + αtpt. 20: end if 21: end for 22: return wT .
3 Theoretical Analysis
In this section, we present convergence results for DINGO. The reader can find proofs of lemmas and theorems in the supplementary material. For notational convenience, in our analysis we have C1 , {t | 〈 ∑m i=1 H † t,igt/m,Htgt〉 ≥ θ‖gt‖2}, C2 , {t | 〈 ∑m i=1 H̃ † t,ig̃t/m,Htgt〉 ≥ θ‖gt‖2, t /∈ C1}, and C3 , {t | t /∈ (C1 ∪C2)}, which are sets indexing iterations t that are in Case 1, Case 2 and Case 3, respectively. The convergence analysis under these cases are treated separately in Sections 3.2, 3.3 and 3.4. The unifying result is then simply given in Corollary 1. We begin, in Section 3.1, by establishing general underlying assumptions for our analysis. The analysis of Case 1 and Case 3 require their own specific assumptions, which are discussed in Sections 3.2 and 3.4, respectively. Remark 3. As long as the presented assumptions are satisfied, our algorithm converges for any choice of θ and φ, i.e., these hyper-parameters do not require the knowledge of the, practically unknowable, parameters from these assumptions. However, in Lemma 3 we give qualitative guidelines for a better choice of θ and φ to avoid Case 2 and Case 3, which are shown to be less desirable than Case 1.
3.1 General Assumptions
As DINGO makes use of Hessian-vector products, we make the following straightforward assumption. Assumption 1 (Twice Differentiability). The functions fi in (1) are twice differentiable.
Notice that we do not require each fi to be twice continuously differentiable. In particular, our analysis carries through even if the Hessian is discontinuous. This is in sharp contrast to popular belief that the application of non-smooth Hessian can hurt more so than it helps, e.g., [19]. Note that
even if the Hessian is discontinuous, Assumption 1 is sufficient in ensuring that Ht,i is symmetric, for all t and i, [20]. Following [16], we also make the following general assumption on f . Assumption 2 (Moral-Smoothness [16]). For all iterations t, there exists a constant L ∈ (0,∞) such that
∥∥∇2f(w)∇f(w)−∇2f(wt)∇f(wt)∥∥ ≤ L‖w−wt‖, for all w ∈ [wt,wt + pt], where pt is the update direction of DINGO at iteration t.
As discussed in [16] with explicit examples, Assumption 2 is strictly weaker than the common assumptions of the gradient and Hessian being both Lipschitz continuous. Using [16, Lemma 10], it follows from Assumptions 1 and 2 that∥∥∇f(wt + αpt)∥∥2 ≤ ∥∥gt∥∥2 + 2α〈pt,Htgt〉+ α2L‖pt‖2, (8) for all α ∈ [0, 1] and all iterations t.
3.2 Analysis of Case 1
In this section, we analyze the convergence of iterations of DINGO that fall under Case 1. For such iterations, we make the following assumption about the action of the pseudo-inverse of Ht,i on gt. Assumption 3 (Pseudo-Inverse Regularity of Ht,i on gt). For all t ∈ C1 and all i = 1, 2, . . . ,m, there exists constants γi ∈ (0,∞) such that ‖H†t,igt‖ ≤ γi‖gt‖.
Assumption 3 may appear unconventional. However, it may be seen as more general than the following assumption. Assumption 4 (Pseudo-Inverse Regularity of Ht on its Range Space [16]). There exists a constant γ ∈ (0,∞) such that for all iterates wt we have ‖Htp‖ ≥ γ‖p‖ for all p ∈ R(Ht).
Assumption 4 implies ‖H†tgt‖ = ‖H † t ( UtU T t + U ⊥ t (U ⊥ t ) T ) gt‖ = ‖H†tUtUTt gt‖ ≤ γ−1‖gt‖, where Ut and U⊥t denote arbitrary orthonormal bases for R(Ht) and R(Ht)⊥, respectively, and R(Ht)⊥ = N (HTt ) = N (H † t). Recall that Assumption 4 is a significant relaxation of strong convexity. As an example, an under-determined least-squares problem f(w) = ‖Aw − b‖2/2, which is clearly not strongly convex, satisfies Assumption 4 with γ = σ2min(A), where σmin(A) is the smallest non-zero singular value of A. Theorem 1 (Convergence Under Case 1). Suppose we run DINGO. Then under Assumptions 1, 2 and 3, for all t ∈ C1 we have ‖gt+1‖2 ≤ (1− 2τ1ρθ)‖gt‖2, where τ1 = min { 1, 2(1− ρ)θ/(Lγ2) } ,
γ = ∑m
i=1 γi/m, L is as in Assumption 2, γi are as in Assumption 3, ρ and θ are as in Algorithm 1.
From the proof of Theorem 1, it is easy to see that ∀t ∈ C1 we are guaranteed that 0 < 1−2τ1ρθ < 1. In Theorem 1, the term γ is the average of the γi’s. This is beneficial as it “smooths out” nonuniformity in γi’s; for example, γ ≥ mini γi. Under specific assumptions on (1), we can theoretically guarantee that t ∈ C1 for all iterations t. The following lemma provides one such example. Lemma 1. Suppose Assumption 1 holds and that we run DINGO. Furthermore, suppose that for all iterations t and all i = 1, 2, . . . ,m, the Hessian matrix Ht,i is invertible and there exists constants εi ∈ [0,∞) and νi ∈ (0,∞) such that ‖Ht,i −Ht‖ ≤ εi and νi‖gt‖ ≤ ‖Ht,igt‖. If∑m
i=1(1− εi/νi)/m ≥ θ then t ∈ C1 for all t, where θ is as in Algorithm 1.
As an example, the Assumptions of Lemma 1 trivially hold if each fi is strongly convex and we assume certain data distribution. Under the assumptions of Lemma 1, if the Hessian matrix for each worker is on average a reasonable approximation to the full Hessian, i.e., εi is on average sufficiently small so that ∑m i=1 εi/νi < m, then we can choose θ small enough to ensure that t ∈ C1 for all t. In other words, for the iterates to stay in C1, we do not require the Hessian matrix of each individual worker to be a high-quality approximation to full Hessian (which could indeed be hard to enforce in many practical applications). As long as the data is distributed in such a way that Hessian matrices are on average reasonable approximations, we can guarantee to have t ∈ C1 for all t.
3.3 Analysis of Case 2
We now analyze the convergence of DINGO for iterations that fall under Case 2. For this case, we do not require any additional assumptions to that of Assumptions 1 and 2. Instead, we use the upper
bound: ‖H̃†t,i‖ ≤ 1/φ for all iterations t and all i = 1, 2, . . . ,m, where φ is as in Algorithm 1; see Lemma 4 in the supplementary material for a proof of this upper bound.
Theorem 2 (Convergence Under Case 2). Suppose we run DINGO. Then under Assumptions 1 and 2, for all t ∈ C2 we have ‖gt+1‖2 ≤ (1− 2τ2ρθ)‖gt‖2, where τ2 = min { 1, 2(1− ρ)φ2θ/L } , L is as in Assumption 2, and ρ, θ and φ are as in Algorithm 1.
In our experience, we have found that Case 2 does not occur frequently in practice. It serves more of a theoretical purpose and is used to identify when Case 3 is required. Case 2 may be thought of as a specific instance of Case 3, in which It is empty. However, it merits its own case, as in analysis it does not require additional assumptions to Assumptions 1 and 2, and in practice it may avoid an additional two communication rounds. If we were to bypass Case 2 to Case 3 and allow It to be empty, then Theorem 3 of Section 3.4 with |It| = 0, which states the convergence for Case 3, indeed coincides with Theorem 2.
3.4 Analysis of Case 3
Now we turn to the final case and analyze the convergence of iterations of DINGO that fall under Case 3. For such iterations, we make the following assumption.

Assumption 5. For all $t \in \mathcal{C}_3$ and all $i = 1, 2, \ldots, m$, there exist constants $\delta_i \in (0,\infty)$ such that $\|(\tilde{H}_{t,i}^T)^\dagger H_t g_t\| \geq \delta_i \|g_t\|$.

Assumption 5, like Assumption 3, may appear unconventional. In Lemma 2 we show how Assumption 5 is implied by three other reasonable assumptions, one of which is as follows.
Assumption 6 (Gradient-Hessian Null-Space Property [16]). There exists a constant $\nu \in (0, 1]$ such that $\|(U_w^\perp)^T \nabla f(w)\|^2 \leq (1-\nu)\nu^{-1} \|U_w^T \nabla f(w)\|^2$ for all $w \in \mathbb{R}^d$, where $U_w$ and $U_w^\perp$ denote any orthonormal bases for $\mathcal{R}(\nabla^2 f(w))$ and its orthogonal complement, respectively.
Assumption 6 implies that, as the iterations progress, the gradient will not become arbitrarily orthogonal to the range space of the Hessian matrix. As an example, any least-squares problem $f(w) = \|Aw - b\|^2/2$ satisfies Assumption 6 with $\nu = 1$; see [16] for a detailed discussion and many more examples of Assumption 6.
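For the least-squares example, the gradient $A^T(Aw - b)$ always lies in $\mathcal{R}(A^T A) = \mathcal{R}(\nabla^2 f(w))$, which is exactly why $\nu = 1$ works. The short NumPy sketch below (our own illustration) checks this numerically by projecting the gradient onto the orthogonal complement of the Hessian's range.

import numpy as np

# Our own numerical illustration of Assumption 6 with nu = 1 for least squares.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 8))          # under-determined, so the Hessian A^T A is singular
b = rng.standard_normal(3)
w = rng.standard_normal(8)
grad = A.T @ (A @ w - b)                 # gradient of ||Aw - b||^2 / 2
U, s, _ = np.linalg.svd(A.T @ A)
range_basis = U[:, s > 1e-10]            # orthonormal basis for the range of the Hessian
off_range = grad - range_basis @ (range_basis.T @ grad)
print(np.linalg.norm(off_range))         # ~1e-15: no component outside the range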
Lemma 2. Suppose Assumptions 4 and 6 hold and $\|H_{t,i}\|^2 \leq \tau_i$ for all $t \in \mathcal{C}_3$ and $i = 1, 2, \ldots, m$, with $\tau_i \in (0,\infty)$, i.e., the local Hessians are bounded. Then Assumption 5 holds with $\delta_i = \gamma\sqrt{\nu/(\tau_i + \phi^2)}$, where $\phi$ is as in Algorithm 1, and $\gamma$ and $\nu$ are as in Assumptions 4 and 6, respectively.
The following theorem provides convergence properties for iterations of DINGO that are in Case 3.

Theorem 3 (Convergence Under Case 3). Suppose we run DINGO. Then under Assumptions 1, 2 and 5, for all $t \in \mathcal{C}_3$ we have $\|g_{t+1}\|^2 \leq (1 - 2\omega_t\rho\theta)\|g_t\|^2 \leq (1 - 2\tau_3\rho\theta)\|g_t\|^2$, where $\omega_t = \min\{1, 2(1-\rho)\theta/(L c_t^2)\}$, $\tau_3 = \min\{1, 2(1-\rho)\theta/(L c^2)\}$,
$$c_t = \frac{1}{m\phi}\Big(m + |I_t| + \theta \sum_{i \in I_t} \frac{1}{\delta_i}\Big), \qquad c = \frac{2}{\phi} + \frac{\theta}{m\phi} \sum_{i=1}^{m} \frac{1}{\delta_i},$$
$L$ is as in Assumption 2, $\delta_i$ are as in Assumption 5, $I_t$ is as in (5), and $\rho$, $\theta$ and $\phi$ are as in Algorithm 1.
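For reference, here is a hypothetical helper (our own naming, not from the paper's code) that evaluates the two constants of Theorem 3 from given $\delta_i$'s; it also makes explicit that $c$ upper-bounds $c_t$ for every possible $I_t$.

# Hypothetical helper evaluating the Case 3 constants of Theorem 3; values below are illustrative.
def case3_constants(deltas, I_t, phi, theta):
    m = len(deltas)
    c_t = (m + len(I_t) + theta * sum(1.0 / deltas[i] for i in I_t)) / (m * phi)
    c = 2.0 / phi + (theta / (m * phi)) * sum(1.0 / d for d in deltas)
    return c_t, c   # c_t <= c, so the rate with c is the worst case over I_t

print(case3_constants(deltas=[0.5, 1.0, 2.0], I_t=[0, 2], phi=0.1, theta=1e-4))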
Note that the convergence in Theorem 3 is given in both an iteration-dependent and an iteration-independent form, since the former explicitly relates the convergence rate to the size of $I_t$, while the latter simply upper-bounds this, and hence is qualitatively less informative.
Comparing Theorems 2 and 3, iterations of DINGO should have slower convergence if they are in Case 3 rather than Case 2. By Theorem 3, if an iteration t resorts to Case 3 then we may have slower convergence for larger |It|. Moreover, this iteration would require two more communication rounds than if it were to stop in Case 1 or Case 2. Therefore, one may wish to choose θ and φ appropriately to reduce the chances that iteration t falls in Case 3 or that |It| is large. Under this consideration, Lemma 3 presents a necessary condition on a relationship between θ and φ.
Lemma 3. Suppose we run DINGO. Under Assumption 1, if $|I_t| < m$ for some iteration $t$, then $\theta\phi \leq \|H_t g_t\|/\|g_t\|$.
Lemma 3 suggests that we should pick $\theta$ and $\phi$ so that their product $\theta\phi$ is small. Clearly, choosing a smaller $\theta$ increases the chance of an iteration of DINGO being in Case 1 or Case 2; however, it also yields a slower worst-case rate in Theorems 1 and 2. Choosing a smaller $\phi$ preserves more curvature information of the Hessian $H_{t,i}$ in $\tilde{H}_{t,i}^\dagger$; however, $\phi$ should still be reasonably large, as making $\phi$ smaller also makes some of the sub-problems of DINGO more ill-conditioned. There is a non-trivial trade-off between $\phi$ and $\theta$, and Lemma 3 gives an appropriate way to set them.
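Since $H_t g_t$ and $g_t$ are computed at every iteration anyway, the necessary condition of Lemma 3 can be checked cheaply at the current iterate before committing to a pair $(\theta, \phi)$; a one-line sketch of this check (our own, with hypothetical names) is given below.

import numpy as np

# Our own check of the necessary condition in Lemma 3 at the current iterate.
def theta_phi_admissible(Hg, g, theta, phi):
    return theta * phi <= np.linalg.norm(Hg) / np.linalg.norm(g)

# With the defaults of Section 4 (theta = 1e-4, phi = 1e-6) the product is 1e-10,
# which satisfies the condition unless H_t g_t is almost zero relative to g_t.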
We can finally present a unifying result on the overall worst-case linear convergence rate of DINGO.
Corollary 1 (Overall Linear Convergence of DINGO). Suppose we run DINGO. Then under Assumptions 1, 2, 3 and 5, for all iterations $t$ we have $\|g_{t+1}\|^2 \leq (1 - 2\tau\rho\theta)\|g_t\|^2$ with $\tau = \min\{\tau_1, \tau_2, \tau_3\}$, where $\tau_1$, $\tau_2$ and $\tau_3$ are as in Theorems 1, 2, and 3, respectively, and $\rho$ and $\theta$ are as in Algorithm 1.
From Corollary 1, DINGO can achieve $\|g_t\| \leq \varepsilon$ within $\mathcal{O}\big(\log(1/\varepsilon)/(\tau\rho\theta)\big)$ communication rounds. Moreover, the term $\tau$ is a lower bound on the step-size under all cases, which can determine the maximum communication cost needed during line-search. For example, knowing $\tau$ could determine the number of step-sizes used in the backtracking line-search for DINGO in Section 4.
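The line-search itself is simple to implement; a minimal sketch of the Armijo-type backtracking rule (7) is given below (our own code, assuming callables grad(w) for the gradient and hess_vec(w, v) for Hessian-vector products; these names are hypothetical).

import numpy as np

# Our own sketch of backtracking line-search on the Armijo-type condition (7).
def dingo_line_search(w, p, grad, hess_vec, rho=1e-4, max_halvings=50):
    g = grad(w)
    descent = np.dot(p, hess_vec(w, g))        # <p_t, H_t g_t>, negative by construction of p_t
    alpha, g_norm_sq = 1.0, np.dot(g, g)
    for _ in range(max_halvings + 1):
        g_new = grad(w + alpha * p)
        if np.dot(g_new, g_new) <= g_norm_sq + 2 * alpha * rho * descent:
            return alpha
        alpha *= 0.5
    return alpha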
4 Experiments
In this section, we evaluate the empirical performance of DINGO, GIANT, DiSCO, InexactDANE, AIDE, Asynchronous SGD (Async-SGD) and Synchronous SGD (Sync-SGD) [11] on the strongly convex problem of softmax cross-entropy minimization with regularization on the CIFAR10 dataset [21]; see Figure 1. This dataset has 50000 training samples and 10000 test samples, and each data point $x_i \in \mathbb{R}^{3072}$ has a label $y_i \in \{1, 2, \ldots, 10\}$. This problem has dimension $d = 27648$. In the supplementary material, the reader can find additional experiments on another softmax regression problem, as well as on a Gaussian mixture model and an autoencoder problem. In all experiments we consider (1) with (2), where the sets $S_1, S_2, \ldots, S_m$ randomly partition the index set $\{1, 2, \ldots, n\}$, each having equal size $s = n/m$. Code is available at https://github.com/RixonC/DINGO.
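A minimal sketch of this partitioning step (ours, not the released code): the index set is permuted uniformly at random and split into $m$ equally sized blocks, one per worker.

import numpy as np

# Our own sketch of the random, equally sized data partition used in the experiments.
def partition_indices(n, m, seed=0):
    perm = np.random.default_rng(seed).permutation(n)
    return np.array_split(perm, m)        # m blocks of size n/m when m divides n

blocks = partition_indices(n=50000, m=16)
print(len(blocks), len(blocks[0]))        # 16 blocks of 3125 indices each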
We describe some implementation details. All sub-problem solvers are limited to 50 iterations and do not employ preconditioning. For DINGO, we use the sub-problem solvers MINRES-QLP [22], LSMR [23] and CG [24] when computing $H_{t,i}^\dagger g_t$, $\tilde{H}_{t,i}^\dagger \tilde{g}_t$ and $(\tilde{H}_{t,i}^T \tilde{H}_{t,i})^{-1} H_t g_t$, respectively. We choose CG for the latter problem as the approximation $x$ of $(\tilde{H}_{t,i}^T \tilde{H}_{t,i})^{-1} H_t g_t$ is guaranteed to satisfy $\langle H_t g_t, x\rangle > 0$ regardless of the number of CG iterations performed. For DINGO, unless otherwise stated, we set $\theta = 10^{-4}$ and $\phi = 10^{-6}$. We use backtracking line search for DINGO and GIANT to select the largest step-size in $\{1, 2^{-1}, 2^{-2}, \ldots, 2^{-50}\}$ which passes, with an Armijo line-search parameter of $10^{-4}$. For InexactDANE, we set $\eta = 1$ and $\mu = 0$, as in [15], and use SVRG [25] as a local solver with the best learning rate from $\{10^{-6}, 10^{-5}, \ldots, 10^{6}\}$. We have each iteration of AIDE invoke one iteration of InexactDANE, with the same parameters as in the stand-alone InexactDANE method, and use the best catalyst acceleration parameter $\tau \in \{10^{-6}, 10^{-5}, \ldots, 10^{6}\}$, as in [15]. For Async-SGD and Sync-SGD we report the best learning rate from $\{10^{-6}, 10^{-5}, \ldots, 10^{6}\}$ and each worker uses a mini-batch of size $n/(5m)$.
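To illustrate the Hessian-free nature of these sub-problems, the following sketch (our own, using SciPy rather than the released PyTorch code) shows how a worker could apply an LSMR-style solver to approximate $H_{t,i}^\dagger g_t$ from Hessian-vector products alone, without ever forming $H_{t,i}$.

import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

# Our own sketch: approximate pinv(H_{t,i}) @ g_t using only Hessian-vector products.
def local_pseudoinverse_apply(hess_vec, g, maxiter=50):
    d = g.shape[0]
    # H_{t,i} is symmetric, so the same callable serves as matvec and rmatvec.
    H_op = LinearOperator((d, d), matvec=hess_vec, rmatvec=hess_vec)
    return lsmr(H_op, g, maxiter=maxiter)[0]   # least-squares solution approximating pinv(H) @ g

# Example with an explicit (symmetric, singular) Hessian standing in for hess_vec.
H = np.diag([3.0, 1.0, 0.0])
g = np.array([3.0, 2.0, 0.0])
print(local_pseudoinverse_apply(lambda v: H @ v, g))   # approximately [1.0, 2.0, 0.0]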
DiSCO has consistent performance, regardless of the number of workers, due to the distributed PCG algorithm. This essentially allows DiSCO to perform Newton's method over the full dataset. This is unnecessarily costly, in terms of communication rounds, when $s$ is reasonably large. Thus we see it perform comparatively poorly in Plots 1(a), 1(b), and 1(c). DiSCO outperforms GIANT and DINGO in Plot 1(d). This is likely because the local directions ($-H_{t,i}^{-1} g_t$ and $p_{t,i}$ for GIANT and DINGO, respectively) give poor updates, as they are calculated using very small subsets of data, i.e., in Plot 1(d) each worker has access to only 5 data points, while $d = 27648$.
A significant advantage of DINGO over InexactDANE, AIDE, Async-SGD and Sync-SGD is that its hyper-parameters are relatively easy to tune. Namely, making bad choices for $\rho$, $\theta$ and $\phi$ in DINGO will give sub-optimal performance; however, DINGO is still theoretically guaranteed to strictly decrease the norm of the gradient. In contrast, some choices of hyper-parameters in InexactDANE, AIDE, Async-SGD and Sync-SGD will cause divergence, and these choices can be problem specific. Moreover, these methods can be very sensitive to the chosen hyper-parameters, with some being very difficult to select. For example, the acceleration parameter $\tau$ in AIDE was found to be difficult and time consuming to tune, and the performance of AIDE was sensitive to it; notice the variation in selected $\tau$ in Figure 1. This difficulty was also observed in [13, 15]. We found that simply choosing $\rho$, $\theta$ and $\phi$ to be small, in DINGO, gave high performance. Figure 2 compares different values of $\theta$.
5 Future Work
The following is left for future work. First, extending the analysis of DINGO to include convergence results under inexact updates. Second, finding more efficient line-search methods, for practical implementations of DINGO, than backtracking line search. Backtracking line search for GIANT requires the communication of a constant number of scalars, whereas for DINGO it requires a constant number of vectors. Hence, DINGO may transmit a large amount of data over the network, while still requiring only two communication rounds per iteration. Lastly, considering modifications to DINGO that prevent convergence to a local maximum or saddle point in non-invex problems.
Acknowledgments
Both authors gratefully acknowledge the generous support by the Australian Research Council (ARC) Centre of Excellence for Mathematical & Statistical Frontiers (ACEMS). Fred Roosta was partially supported by DARPA as well as ARC through a Discovery Early Career Researcher Award (DE180100923). Part of this work was done while Fred Roosta was visiting the Simons Institute for the Theory of Computing.

Review 1

1. What is the novelty of the proposed approach compared to prior works?
2. How does the reviewer assess the quality and clarity of the paper's content?
3. Are there any concerns regarding the method's theoretical analysis or practical significance?
Originality. As far as I understood, the paper proposes an extension of [15] to the distributed setup. This requires careful analysis, but not so many new ideas. I think a comparison with https://ieeexplore.ieee.org/abstract/document/8675539 should be added.

Quality. The numerical experiments seem convincing. I didn't check the proofs thoroughly, but the high-level intuition is not clear to me. As far as I can see, the method uses the inverses of the individual Hessians $H_{t,i}$, but not the inverse of the full Hessian $H_t$. I don't understand why this works, given that $H_t^{-1} \neq \sum_i H_{t,i}^{-1}$. Also, it is not clear to me why the three cases considered are mutually exclusive.

Clarity. Here I have the same questions as in the quality part. It would be nice to provide more explanations. At the same time, I very much appreciate that the authors give some intuition and illustration for their quite complicated assumptions and their connection to the standard assumptions.

Significance. I think the method can be useful in practice, especially since it has a smaller number of hyper-parameters and a wider range of applications, including non-convex functions.

After rebuttal: I can't say that the authors addressed all my questions from the review, especially the ones in the quality part. But I found some answers in the GIANT paper, so I leave my score unchanged.
Review 2

1. How does the reviewer feel about the method proposed in the paper?
2. What is the main concern of the reviewer regarding the method?
3. What other comments does the reviewer have regarding the paper?
4. Are there any questions from the reviewer regarding the computational costs of the method?
5. Does the reviewer have concerns about the convergence analysis of the method?
6. Can the authors provide more information on how to tune the hyperparameters of the algorithm?
7. Can the authors explain why they chose to use iterative solvers without preconditioners in their implementation?
8. Does the reviewer think that the assumptions made in the paper are necessary for the convergence of the algorithm?
As is stated in question 1, I like the idea of not using the strong convexity assumption. My main concern is associated with the computational costs of computing $H_{t,i}^\dagger g$ and/or $\widetilde{H}_{t,i} g$. It seems to me that (Section 4) iterative methods are used to compute these matrix-vector multiplications. However, the convergence analysis seems to require exact computation. The authors are also encouraged to elaborate more on the communication cost associated with the line-search step.

Other comments:
1. The algorithm DINGO involves a few hyper-parameters. It would be good if the authors could discuss how these hyper-parameters are tuned so that the algorithm achieves better performance.
2. I am not sure whether the step length $\alpha_t$ can eventually be chosen as 1.
3. For the convergence of the algorithm, many assumptions are needed (e.g., Assumptions 1, 2, 3, 5). I am not sure whether the example considered in Section 4 satisfies these assumptions or not.
4. In the implementation, the authors use iterative solvers without preconditioners. If the sub-problems have poor condition numbers, I do not know whether these iterative methods can obtain solutions with sufficient accuracy to guarantee the progress of the algorithm.
NIPS | Title
DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization
Abstract
For optimization of a large sum of functions in a distributed computing environment, we present a novel communication efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods. Our algorithm, DINGO, is derived by optimization of the gradient’s norm as a surrogate function. DINGO does not impose any specific form on the underlying functions and its application range extends far beyond convexity and smoothness. The underlying sub-problems of DINGO are simple linear least-squares, for which a plethora of efficient algorithms exist. DINGO involves a few hyper-parameters that are easy to tune and we theoretically show that a strict reduction in the surrogate objective is guaranteed, regardless of the selected hyper-parameters.
1 Introduction
Consider the optimization problem
min w∈Rd
{ f(w) , 1
m m∑ i=1 fi(w) } , (1)
in a centralized distributed computing environment involving one driver machine and m worker machines, in which the ith worker can only locally access the ith component function, fi. Such distributed computing settings arise increasingly more frequently as a result of technological and communication advancements that have enabled the collection of and access to large scale datasets.
As a concrete example, take a data fitting application, in which given n data points, {xi}ni=1, and their corresponding loss, `i(w;xi), parameterized by w, the goal is to minimize the overall loss as minw∈Rd ∑n i=1 `i(w;xi)/n. Such problems appear frequently in machine learning, e.g., [1, 2, 3] and scientific computing, e.g., [4, 5, 6]. However, in “big data” regimes where n 1, lack of adequate computational resources, in particular storage, can severely limit, or even prevent, any attempts at solving such optimization problems in a traditional stand-alone way, e.g., using a single machine. This can be remedied through distributed computing, in which resources across a network of stand-alone computational nodes are “pooled” together so as to scale to the problem at hand [7]. In such a setting, where n data points are distributed across m workers, one can instead consider (1) with
fi(w) , 1 |Si| ∑ j∈Si `j(w;xj), i = 1, 2, . . . ,m, (2)
where Si ⊆ {1, 2, . . . , n}, with cardinality denoted by |Si|, correspond to the distribution of data across the nodes, i.e., the ith node has access to a portion of the data indexed by the set Si.
In distributed settings, the amount of communications, i.e., messages exchanged across the network, are often considered a major bottleneck of computations (often more so than local computation
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
times), as they can be expensive in terms of both physical resources and time through latency [8, 9]. First-order methods [10], e.g., stochastic gradient descent (SGD) [11], solely rely on gradient information and as a result are rather easy to implement in distributed settings. They often require the performance of many computationally inexpensive iterations, which can be suitable for execution on a single machine. However, as a direct consequence, they can incur excessive communication costs in distributed environments and, hence, they might not be able to take full advantage of the available distributed computational resources.
By employing curvature information in the form of the Hessian matrix, second-order methods aim at transforming the gradient such that it is a more suitable direction to follow. Compared with first-order alternatives, although second-order methods perform more computations per iteration, they often require far fewer iterations to achieve similar results. In distributed settings, this feature can directly translate to significantly less communication costs. As a result, distributed second-order methods have the potential to become the method of choice for distributed optimization tasks.
Notation
We let 〈·, ·〉 denote the common Euclidean inner product defined by 〈x,y〉 = xTy for x,y ∈ Rd. Given a vector v and matrix A, we denote their vector `2 norm and matrix spectral norm as ‖v‖ and ‖A‖, respectively. For x, z ∈ Rd we let [x, z] , { x+ τ(z− x) | 0 ≤ τ ≤ 1 } . The range and null space of a matrix A is denoted byR(A) and N (A), respectively. The Moore–Penrose inverse [12] of A is denoted by A†. We let wt ∈ Rd denote the point at iteration t. For notational convenience, we denote gt,i , ∇fi(wt), Ht,i , ∇2fi(wt), gt , ∇f(wt) and Ht , ∇2f(wt). We also let
H̃t,i , [ Ht,i φI ] ∈ R2d×d and g̃t , ( gt 0 ) ∈ R2d, (3)
where φ > 0, I is the identity matrix, and 0 is the zero vector.
Related Work and Contributions
Owing to the above-mentioned potential, many distributed second-order optimization algorithms have recently emerged to solve (1). Among them, most notably are GIANT [13], DiSCO [9], DANE [14], InexactDANE and AIDE [15]. While having many advantages, each of these methods respectively come with several disadvantages that can limit their applicability in certain regimes. Namely, some rely on, rather stringent, (strong) convexity assumptions, while for others the underlying subproblems involve non-linear optimization problems that are themselves non-trivial to solve. A subtle, yet potentially severe, draw-back for many of the above-mentioned methods is that their performance can be sensitive to, and severely affected by, the choice of their corresponding hyper-parameters.
Here, we present a novel communication efficient distributed second-order optimization method that aims to alleviate many of the aforementioned disadvantages. Our approach is inspired by and follows many ideas of recent results on Newton-MR [16], which extends the application range of the classical Newton-CG beyond (strong) convexity and smoothness. More specifically, our algorithm, named DINGO for “DIstributed Newton-type method for Gradient-norm Optimization”, is derived by optimization of the gradient’s norm as a surrogate function for (1), i.e.,
min w∈Rd
{ 1
2 ∥∥∇f(w)∥∥2 = 1 2m2 ∥∥∥∥∥ m∑ i=1 ∇fi(w) ∥∥∥∥∥ 2} . (4)
When f is invex, [17, 18], the problems (1) and (4) have the same solutions. Recall that invexity is the generalization of convexity, which extends the sufficiency of the first order optimality condition, e.g., Karush-Kuhn-Tucker conditions, to a broader class of problems than simple convex programming. In other words, invexity is a special case of non-convexity, which subsumes convexity as a sub-class. In this light, unlike DiSCO and GIANT, by considering the surrogate function (4), DINGO’s application range and theoretical guarantees extend far beyond convex settings to invex problems. Naturally, by considering (4), DINGO may converge to a local maximum or saddle point in non-invex problems.
Similar to GIANT and DiSCO, and in contrast to DANE, InexactDANE and AIDE, our algorithm involves a few hyper-parameters that are easy to tune and the underlying sub-problems are simple linear least-squares, for which a plethora of efficient algorithms exist. However, the theoretical
analysis of both GIANT and DiSCO is limited to the case where each fi is strongly convex, and for GIANT they are also of the special form where in (2) we have `j(w;xj) = ψj(〈w,xj〉) + γ‖w‖2, γ > 0 is a regularization parameter and ψj is convex, e.g., linear predictor models. In contrast, DINGO does not impose any specific form on the underlying functions. Also, unlike GIANT, we allow for |Si| < d in (2). Moreover, we theoretically show that DINGO is not too sensitive to the choice of its hyper-parameters in that a strict reduction in the gradient norm is guaranteed, regardless of the selected hyper-parameters. See Tables 1 and 2 for a summary of high-level algorithm properties. Finally, we note that, unlike GIANT, DiSCO, InexactDANE and AIDE, our theoretical analysis requires exact solutions to the sub-problems. Despite the fact that the sub-problems of DINGO are simple ordinary least-squares, and that DINGO performs well in practice with very crude solutions, this is admittedly a theoretical restriction, which we aim to address in future.
The distributed computing environment that we consider is also assumed by GIANT, DiSCO, DANE, InexactDANE and AIDE. Moreover, as with these methods, we restrict communication to vectors of size linear in d, i.e., O(d). A communication round is performed when the driver uses a broadcast operation to send information to one or more workers in parallel, or uses a reduce operation to receive information from one or more workers in parallel. For example, computing the gradient at iteration t, namely gt = ∑m i=1 gt,i/m, requires two communication rounds, i.e., the driver broadcasts wt to all workers and then, by a reduce operation, receives gt,i for all i. We further remind that the distributed computational model considered here is such that the main bottleneck involves the communications across the network.
2 DINGO
In this section, we describe the derivation of DINGO, as depicted in Algorithm 1. Each iteration t involves the computation of two main ingredients: an update direction pt, and an appropriate step-size αt. As usual, our next iterate is then set as wt+1 = wt + αtpt.
Update Direction
We begin iteration t by distributively computing the gradient gt. Thereafter, we distributively compute the Hessian-gradient product Htgt = ∑m i=1 Ht,igt/m as well as the vectors ∑m i=1 H
† t,igt/m and∑m
i=1 H̃ † t,ig̃t/m. Computing the update direction pt involves three cases, all of which involve simple
linear least-squares sub-problems: Case 1 If 〈 ∑m
i=1 H † t,igt/m,Htgt〉 ≥ θ‖gt‖2, where θ is as in Algorithm 1, then we let pt = ∑m i=1 pt,i/m, with pt,i = −H † t,igt. Here, we check that the potential update direction
“− ∑m
i=1 H † t,igt/m” is a suitable descent direction for our surrogate objective (4). We do this since
we have not imposed any restrictive assumptions on (1), e.g., strong convexity of each fi, that would automatically guarantee descent; see Lemma 1 for an example of such restrictive assumptions. Case 2 If Case 1 fails, we include regularization and check again that the new potential update direction yields suitable descent. Namely, if 〈 ∑m i=1 H̃
† t,ig̃t/m,Htgt〉 ≥ θ‖gt‖2, then we let pt =∑m
i=1 pt,i/m, with pt,i = −H̃ † t,ig̃t.
Case 3 If all else fails, we enforce descent in the norm of the gradient. More specifically, as Case 2 does not hold, the set
It , { i = 1, 2, . . . ,m | 〈H̃†t,ig̃t,Htgt〉 < θ‖gt‖ 2 } , (5)
is non-empty. In parallel, the driver broadcasts Htgt to each worker i ∈ It and has it locally compute the solution to
argmin pt,i
1 2 ‖Ht,ipt,i + gt‖2 +
φ2
2 ‖pt,i‖2, such that 〈pt,i,Htgt〉 ≤ −θ‖gt‖2,
where φ is as in (3). It is easy to show that the solution to this problem is
pt,i = −H̃†t,ig̃t − λt,i(H̃ T t,iH̃t,i) −1Htgt, λt,i = −gTt HtH̃ † t,ig̃t + θ‖gt‖2
gTt Ht(H̃ T t,iH̃t,i) −1Htgt . (6)
The term λt,i in (6) is positive by the definition of It and well-defined by Assumption 5, which implies that for gt 6= 0 we have Htgt 6= 0. In conclusion, for Case 3, each worker i ∈ It computes (6) and, using a reduce operation, the driver then computes the update direction pt = ∑m i=1 pt,i/m, which by construction yields descent in the surrogate objective (4). Note that pt,i = −H̃†t,ig̃t for all i /∈ It have already been obtained as part of Case 2. Remark 1. The three cases help avoid the need for any unnecessary assumptions on data distribution or the knowledge of any practically unknowable constants. In fact, given Lemma 1, which imposes a certain assumption on the data distribution, we could have stated our algorithm in its simplest form, i.e., with only Case 1. This would be more in line with some prior works, e.g., GIANT, but it would have naturally restricted the applicability of our method in terms of data distributions. Remark 2. In practice, like GIANT and DiSCO, our method DINGO never requires the computation or storage of an explicitly formed Hessian. Instead, it only requires Hessian-vector products, which can be computed at a similar cost to computing the gradient itself. Computing matrix pseudo-inverse and vector products, e.g., H†t,igt, constitute the sub-problems of our algorithm. This, in turn, is done through solving least-squares problems using iterative methods that only require matrix-vector products (see Section 4 for some such methods). Thus DINGO is suitable for large dimension d in (1).
Line Search
After computing the update direction pt, DINGO computes the next iterate wt+1 by moving along pt by an appropriate step-size αt and forming wt+1 = wt + αtpt. We use an Armijo-type line search to choose this step-size. Specifically, as we are minimizing the norm of the gradient as a surrogate function, we choose the largest αt ∈ (0, 1] such that
‖gt+1‖2 ≤ ‖gt‖2 + 2αtρ〈pt,Htgt〉, (7) for some constant ρ ∈ (0, 1). By construction of pt we always have 〈pt,Htgt〉 ≤ −θ‖gt‖2. Therefore, after each iteration we are strictly decreasing the norm of the gradient, and line-search guarantees that this occurs irrespective of all hyper-parameters of DINGO, i.e., θ, φ and ρ.
Algorithm 1 DINGO
1: input: initial point w_0 ∈ R^d, gradient tolerance δ ≥ 0, maximum iterations T, line-search parameter ρ ∈ (0, 1), parameter θ > 0, and regularization parameter φ > 0 as in (3).
2: for t = 0, 1, 2, . . . , T − 1 do
3:   Distributively compute the full gradient g_t.
4:   if ‖g_t‖ ≤ δ then
5:     return w_t
6:   else
7:     The driver broadcasts g_t and, in parallel, each worker i computes H_{t,i} g_t, H_{t,i}^† g_t and H̃_{t,i}^† g̃_t.
8:     By a reduce operation, the driver computes H_t g_t = (1/m) ∑_{i=1}^m H_{t,i} g_t, (1/m) ∑_{i=1}^m H_{t,i}^† g_t and (1/m) ∑_{i=1}^m H̃_{t,i}^† g̃_t.
9:     if 〈(1/m) ∑_{i=1}^m H_{t,i}^† g_t, H_t g_t〉 ≥ θ‖g_t‖² then
10:      Let p_t = (1/m) ∑_{i=1}^m p_{t,i}, with p_{t,i} = −H_{t,i}^† g_t.
11:    else if 〈(1/m) ∑_{i=1}^m H̃_{t,i}^† g̃_t, H_t g_t〉 ≥ θ‖g_t‖² then
12:      Let p_t = (1/m) ∑_{i=1}^m p_{t,i}, with p_{t,i} = −H̃_{t,i}^† g̃_t.
13:    else
14:      The driver computes p_{t,i} = −H̃_{t,i}^† g̃_t for all i such that 〈H̃_{t,i}^† g̃_t, H_t g_t〉 ≥ θ‖g_t‖².
15:      The driver broadcasts H_t g_t to each worker i such that 〈H̃_{t,i}^† g̃_t, H_t g_t〉 < θ‖g_t‖² and, in parallel, they compute p_{t,i} = −H̃_{t,i}^† g̃_t − λ_{t,i}(H̃_{t,i}^T H̃_{t,i})^{−1} H_t g_t with λ_{t,i} = (−g_t^T H_t H̃_{t,i}^† g̃_t + θ‖g_t‖²)/(g_t^T H_t (H̃_{t,i}^T H̃_{t,i})^{−1} H_t g_t), as in (6).
16:      Using a reduce operation, the driver computes p_t = (1/m) ∑_{i=1}^m p_{t,i}.
17:    end if
18:    Choose the largest α_t ∈ (0, 1] such that ‖∇f(w_t + α_t p_t)‖² ≤ ‖g_t‖² + 2α_t ρ〈p_t, H_t g_t〉.
19:    The driver computes w_{t+1} = w_t + α_t p_t.
20:  end if
21: end for
22: return w_T.
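The case selection in lines 9-16 can be summarized by the following driver-side sketch, written under the assumption that the reduce operations of line 8 have already produced the averaged vectors and that the per-worker vectors of Case 3 are available to the driver; `case3_solve(i)` stands in for the extra broadcast/compute round of Case 3 and is hypothetical. This is a sketch of the control flow, not the reference implementation.

```python
import numpy as np

def choose_direction(g, Hg, avg_Hinv_g, avg_Haug_inv_g, local_Haug_inv_g,
                     case3_solve, theta):
    """Dispatch between Cases 1-3 of Algorithm 1.

    Hg              : H_t g_t
    avg_Hinv_g      : (1/m) sum_i H_{t,i}^dagger g_t
    avg_Haug_inv_g  : (1/m) sum_i augmented H~_{t,i}^dagger g~_t
    local_Haug_inv_g: list of the m per-worker vectors H~_{t,i}^dagger g~_t
    """
    thresh = theta * g.dot(g)
    if avg_Hinv_g.dot(Hg) >= thresh:          # Case 1
        return -avg_Hinv_g
    if avg_Haug_inv_g.dot(Hg) >= thresh:      # Case 2
        return -avg_Haug_inv_g
    # Case 3: keep workers already giving descent, ask the rest to re-solve via Eq. (6).
    directions = [(-v if v.dot(Hg) >= thresh else case3_solve(i))
                  for i, v in enumerate(local_Haug_inv_g)]
    return np.mean(directions, axis=0)
```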
3 Theoretical Analysis
In this section, we present convergence results for DINGO. The reader can find proofs of lemmas and theorems in the supplementary material. For notational convenience, in our analysis we define C1 ≜ {t | 〈(1/m)∑_{i=1}^m H_{t,i}^† g_t, H_t g_t〉 ≥ θ‖g_t‖²}, C2 ≜ {t | 〈(1/m)∑_{i=1}^m H̃_{t,i}^† g̃_t, H_t g_t〉 ≥ θ‖g_t‖², t ∉ C1}, and C3 ≜ {t | t ∉ (C1 ∪ C2)}, which are the sets indexing iterations t that fall under Case 1, Case 2 and Case 3, respectively. The convergence analysis under these cases is treated separately in Sections 3.2, 3.3 and 3.4. The unifying result is then simply given in Corollary 1. We begin, in Section 3.1, by establishing general underlying assumptions for our analysis. The analyses of Case 1 and Case 3 require their own specific assumptions, which are discussed in Sections 3.2 and 3.4, respectively.

Remark 3. As long as the presented assumptions are satisfied, our algorithm converges for any choice of θ and φ, i.e., these hyper-parameters do not require knowledge of the practically unknowable parameters from these assumptions. However, in Lemma 3 we give qualitative guidelines for a better choice of θ and φ to avoid Case 2 and Case 3, which are shown to be less desirable than Case 1.
3.1 General Assumptions
As DINGO makes use of Hessian-vector products, we make the following straightforward assumption. Assumption 1 (Twice Differentiability). The functions fi in (1) are twice differentiable.
Notice that we do not require each f_i to be twice continuously differentiable. In particular, our analysis carries through even if the Hessian is discontinuous. This is in sharp contrast to the popular belief that using a non-smooth Hessian can hurt more than it helps, e.g., [19]. Note that even if the Hessian is discontinuous, Assumption 1 is sufficient to ensure that H_{t,i} is symmetric, for all t and i [20]. Following [16], we also make the following general assumption on f.

Assumption 2 (Moral-Smoothness [16]). For all iterations t, there exists a constant L ∈ (0,∞) such that
\[
\big\|\nabla^2 f(w)\nabla f(w) - \nabla^2 f(w_t)\nabla f(w_t)\big\| \le L\|w - w_t\|, \quad \text{for all } w \in [w_t, w_t + p_t],
\]
where p_t is the update direction of DINGO at iteration t.

As discussed in [16] with explicit examples, Assumption 2 is strictly weaker than the common assumption that the gradient and Hessian are both Lipschitz continuous. Using [16, Lemma 10], it follows from Assumptions 1 and 2 that
\[
\|\nabla f(w_t + \alpha p_t)\|^2 \le \|g_t\|^2 + 2\alpha\langle p_t, H_t g_t\rangle + \alpha^2 L\|p_t\|^2, \tag{8}
\]
for all α ∈ [0, 1] and all iterations t.
3.2 Analysis of Case 1
In this section, we analyze the convergence of iterations of DINGO that fall under Case 1. For such iterations, we make the following assumption about the action of the pseudo-inverse of H_{t,i} on g_t. Assumption 3 (Pseudo-Inverse Regularity of H_{t,i} on g_t). For all t ∈ C1 and all i = 1, 2, . . . ,m, there exist constants γ_i ∈ (0,∞) such that ‖H_{t,i}^† g_t‖ ≤ γ_i‖g_t‖.
Assumption 3 may appear unconventional. However, it may be seen as more general than the following assumption. Assumption 4 (Pseudo-Inverse Regularity of Ht on its Range Space [16]). There exists a constant γ ∈ (0,∞) such that for all iterates wt we have ‖Htp‖ ≥ γ‖p‖ for all p ∈ R(Ht).
Assumption 4 implies ‖H_t^† g_t‖ = ‖H_t^†(U_t U_t^T + U_t^⊥(U_t^⊥)^T)g_t‖ = ‖H_t^† U_t U_t^T g_t‖ ≤ γ^{-1}‖g_t‖, where U_t and U_t^⊥ denote arbitrary orthonormal bases for R(H_t) and R(H_t)^⊥, respectively, and R(H_t)^⊥ = N(H_t^T) = N(H_t^†). Recall that Assumption 4 is a significant relaxation of strong convexity. As an example, an under-determined least-squares problem f(w) = ‖Aw − b‖²/2, which is clearly not strongly convex, satisfies Assumption 4 with γ = σ_min²(A), where σ_min(A) is the smallest non-zero singular value of A.

Theorem 1 (Convergence Under Case 1). Suppose we run DINGO. Then under Assumptions 1, 2 and 3, for all t ∈ C1 we have ‖g_{t+1}‖² ≤ (1 − 2τ_1ρθ)‖g_t‖², where τ_1 = min{1, 2(1 − ρ)θ/(Lγ²)}, γ = (1/m)∑_{i=1}^m γ_i, L is as in Assumption 2, γ_i are as in Assumption 3, and ρ and θ are as in Algorithm 1.
From the proof of Theorem 1, it is easy to see that for all t ∈ C1 we are guaranteed that 0 < 1 − 2τ_1ρθ < 1. In Theorem 1, the term γ is the average of the γ_i's. This is beneficial as it "smooths out" non-uniformity in the γ_i's; for example, γ ≥ min_i γ_i. Under specific assumptions on (1), we can theoretically guarantee that t ∈ C1 for all iterations t. The following lemma provides one such example.

Lemma 1. Suppose Assumption 1 holds and that we run DINGO. Furthermore, suppose that for all iterations t and all i = 1, 2, . . . ,m, the Hessian matrix H_{t,i} is invertible and there exist constants ε_i ∈ [0,∞) and ν_i ∈ (0,∞) such that ‖H_{t,i} − H_t‖ ≤ ε_i and ν_i‖g_t‖ ≤ ‖H_{t,i} g_t‖. If (1/m)∑_{i=1}^m (1 − ε_i/ν_i) ≥ θ, then t ∈ C1 for all t, where θ is as in Algorithm 1.
As an example, the assumptions of Lemma 1 trivially hold if each f_i is strongly convex and we assume a certain data distribution. Under the assumptions of Lemma 1, if the Hessian matrix for each worker is on average a reasonable approximation to the full Hessian, i.e., ε_i is on average sufficiently small so that ∑_{i=1}^m ε_i/ν_i < m, then we can choose θ small enough to ensure that t ∈ C1 for all t. In other words, for the iterates to stay in C1, we do not require the Hessian matrix of each individual worker to be a high-quality approximation to the full Hessian (which could indeed be hard to enforce in many practical applications). As long as the data is distributed in such a way that the Hessian matrices are on average reasonable approximations, we can guarantee that t ∈ C1 for all t.
3.3 Analysis of Case 2
We now analyze the convergence of DINGO for iterations that fall under Case 2. For this case, we do not require any assumptions beyond Assumptions 1 and 2. Instead, we use the upper
bound: ‖H̃†t,i‖ ≤ 1/φ for all iterations t and all i = 1, 2, . . . ,m, where φ is as in Algorithm 1; see Lemma 4 in the supplementary material for a proof of this upper bound.
Theorem 2 (Convergence Under Case 2). Suppose we run DINGO. Then under Assumptions 1 and 2, for all t ∈ C2 we have ‖gt+1‖2 ≤ (1− 2τ2ρθ)‖gt‖2, where τ2 = min { 1, 2(1− ρ)φ2θ/L } , L is as in Assumption 2, and ρ, θ and φ are as in Algorithm 1.
In our experience, we have found that Case 2 does not occur frequently in practice. It serves more of a theoretical purpose and is used to identify when Case 3 is required. Case 2 may be thought of as a specific instance of Case 3 in which I_t is empty. However, it merits its own case: in the analysis it does not require assumptions beyond Assumptions 1 and 2, and in practice it may avoid an additional two communication rounds. If we were to skip Case 2 and go directly to Case 3, allowing I_t to be empty, then Theorem 3 of Section 3.4 with |I_t| = 0, which states the convergence for Case 3, indeed coincides with Theorem 2.
3.4 Analysis of Case 3
Now we turn to the final case and analyze the convergence of iterations of DINGO that fall under Case 3. For such iterations, we make the following assumption. Assumption 5. For all t ∈ C3 and all i = 1, 2, . . . ,m there exist constants δ_i ∈ (0,∞) such that ‖(H̃_{t,i}^T)^† H_t g_t‖ ≥ δ_i‖g_t‖. Assumption 5, like Assumption 3, may appear unconventional. In Lemma 2 we show how Assumption 5 is implied by three other reasonable assumptions, one of which is as follows.
Assumption 6 (Gradient-Hessian Null-Space Property [16]). There exists a constant ν ∈ (0, 1] such that
\[
\big\|(U_w^{\perp})^T \nabla f(w)\big\|^2 \le \frac{1-\nu}{\nu}\,\big\|U_w^T \nabla f(w)\big\|^2, \quad \text{for all } w \in \mathbb{R}^d,
\]
where U_w and U_w^⊥ denote any orthonormal bases for R(∇²f(w)) and its orthogonal complement, respectively.

Assumption 6 implies that, as the iterations progress, the gradient will not become arbitrarily orthogonal to the range space of the Hessian matrix. As an example, any least-squares problem f(w) = ‖Aw − b‖²/2 satisfies Assumption 6 with ν = 1; see [16] for a detailed discussion and many more examples of Assumption 6.
Lemma 2. Suppose Assumptions 4 and 6 hold and ‖Ht,i‖2 ≤ τi, ∀t ∈ C3, i = 1, 2, . . . ,m, τi ∈ (0,∞), i.e., local Hessians are bounded. Then, Assumption 5 holds with δi = γ √ ν/(τi + φ2), where φ is as in Algorithm 1, and γ and ν are as in Assumptions 4 and 6, respectively.
The following theorem provides convergence properties for iterations of DINGO that are in Case 3.

Theorem 3 (Convergence Under Case 3). Suppose we run DINGO. Then under Assumptions 1, 2 and 5, for all t ∈ C3 we have ‖g_{t+1}‖² ≤ (1 − 2ω_tρθ)‖g_t‖² ≤ (1 − 2τ_3ρθ)‖g_t‖², where ω_t = min{1, 2(1 − ρ)θ/(Lc_t²)}, τ_3 = min{1, 2(1 − ρ)θ/(Lc²)},
\[
c_t = \frac{1}{m\phi}\Big(m + |I_t| + \theta \sum_{i\in I_t}\frac{1}{\delta_i}\Big), \qquad c = \frac{2}{\phi} + \frac{\theta}{m\phi}\sum_{i=1}^{m}\frac{1}{\delta_i},
\]
L is as in Assumption 2, δ_i are as in Assumption 5, I_t is as in (5), and ρ, θ and φ are as in Algorithm 1.
Note that the convergence in Theorem 3 is given in both iteration dependent and independent format, since the former explicitly relates the convergence rate to the size of It, while the latter simply upper-bounds this, and hence is qualitatively less informative.
Comparing Theorems 2 and 3, iterations of DINGO should have slower convergence if they are in Case 3 rather than Case 2. By Theorem 3, if an iteration t resorts to Case 3 then we may have slower convergence for larger |It|. Moreover, this iteration would require two more communication rounds than if it were to stop in Case 1 or Case 2. Therefore, one may wish to choose θ and φ appropriately to reduce the chances that iteration t falls in Case 3 or that |It| is large. Under this consideration, Lemma 3 presents a necessary condition on a relationship between θ and φ.
Lemma 3. Suppose we run DINGO. Under Assumption 1, if |It| < m for some iteration t, then θφ ≤ ‖Htgt‖/‖gt‖.
Lemma 3 suggests that we should pick θ and φ so that their product, θφ, is small. Clearly, choosing smaller θ will increase the chance of an iteration of DINGO being in Case 1 or Case 2. However, this also gives a lower rate of convergence in Theorems 1 and 2. Choosing smaller φ will preserve more curvature information of the Hessian Ht,i in H̃ † t,i. However, φ should still be reasonably large, as making φ smaller also makes some of the sub-problems of DINGO more ill-conditioned. There is a non-trivial trade-off between φ and θ, and Lemma 3 gives an appropriate way to set them.
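In practice, the necessary condition of Lemma 3 is easy to monitor: if θφ exceeds ‖H_t g_t‖/‖g_t‖ at some iterate, then every worker must take the more expensive Case 3 route at that iteration. A trivial check, with names of our choosing and intended only as an illustration, might look as follows.

```python
import numpy as np

def lemma3_condition_holds(theta, phi, Hg, g):
    """Return True if theta * phi <= ||H_t g_t|| / ||g_t||, the necessary
    condition of Lemma 3 for having |I_t| < m at the current iterate."""
    return theta * phi <= np.linalg.norm(Hg) / np.linalg.norm(g)
```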
We can finally present a unifying result on the overall worst-case linear convergence rate of DINGO.
Corollary 1 (Overall Linear Convergence of DINGO). Suppose we run DINGO. Then under Assumptions 1, 2, 3 and 5, for all iterations t we have ‖gt+1‖2 ≤ (1−2τρθ)‖gt‖2 with τ = min{τ1, τ2, τ3}, where τ1, τ2 and τ3 are as in Theorems 1, 2, and 3, respectively, and ρ and θ are as in Algorithm 1.
From Corollary 1, DINGO can achieve ‖g_t‖ ≤ ε with O(log(1/ε)/(τρθ)) communication rounds. Moreover, the term τ is a lower bound on the step-size under all cases, which can determine the maximum communication cost needed during line search. For example, knowing τ could determine the number of step-sizes used in backtracking line search for DINGO in Section 4.
4 Experiments
In this section, we evaluate the empirical performance of DINGO, GIANT, DiSCO, InexactDANE, AIDE, Asynchronous SGD (Async-SGD) and Synchronous SGD (Sync-SGD) [11] on the strongly convex problem of softmax cross-entropy minimization with regularization on the CIFAR10 dataset [21], see Figure 1. This dataset has 50000 training samples, 10000 test samples and each datapoint xi ∈ R3072 has a label yi ∈ {1, 2, . . . , 10}. This problem has dimension d = 27648. In the supplementary material, the reader can find additional experiments on another softmax regression
as well as on a Gaussian mixture model and autoencoder problem. In all experiments we consider (1) with (2), where the sets S1, S2, . . . , Sm randomly partition the index set {1, 2, . . . , n}, with each having equal size s = n/m. Code is available at https://github.com/RixonC/DINGO.
We describe some implementation details. All sub-problem solvers are limited to 50 iterations and do not employ preconditioning. For DINGO, we use the sub-problem solvers MINRES-QLP [22], LSMR [23] and CG [24] when computing H_{t,i}^† g_t, H̃_{t,i}^† g̃_t and (H̃_{t,i}^T H̃_{t,i})^{-1}(H_t g_t), respectively. We choose CG for the latter problem as the approximation x of (H̃_{t,i}^T H̃_{t,i})^{-1} H_t g_t is guaranteed to satisfy 〈H_t g_t, x〉 > 0 regardless of the number of CG iterations performed. For DINGO, unless otherwise stated, we set θ = 10^{-4} and φ = 10^{-6}. We use backtracking line search for DINGO and GIANT to select the largest step-size in {1, 2^{-1}, 2^{-2}, ..., 2^{-50}} which passes, with an Armijo line-search parameter of 10^{-4}. For InexactDANE, we set η = 1 and µ = 0, as in [15], and use SVRG [25] as a local solver with the best learning rate from {10^{-6}, 10^{-5}, ..., 10^{6}}. We have each iteration of AIDE invoke one iteration of InexactDANE, with the same parameters as in the stand-alone InexactDANE method, and use the best catalyst acceleration parameter τ ∈ {10^{-6}, 10^{-5}, ..., 10^{6}}, as in [15]. For Async-SGD and Sync-SGD we report the best learning rate from {10^{-6}, 10^{-5}, ..., 10^{6}} and each worker uses a mini-batch of size n/(5m).
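As an illustration of how such sub-problems can be solved with Hessian-vector products only, the sketch below approximates H̃_{t,i}^† g̃_t with SciPy's LSMR, using its damping parameter to account for φ under the convention H̃_{t,i} = [H_{t,i}; φI], g̃_t = [g_t; 0]. The actual experiments use the solvers listed above with their own implementations, so this is an assumption-laden simplification rather than the code behind the figures.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

def local_augmented_direction(hvp, g, phi, d, iters=50):
    """Approximate the augmented least-squares solution using only Hessian-vector products.

    hvp : callable with hvp(v) = H_{t,i} v (e.g., obtained by automatic
          differentiation on the worker's local data); H_{t,i} is symmetric.
    """
    H = LinearOperator((d, d), matvec=hvp, rmatvec=hvp, dtype=float)
    # LSMR with damping solves  min_x ||H x - g||^2 + phi^2 ||x||^2,
    # which is the least-squares problem defining the augmented direction.
    return lsmr(H, g, damp=phi, maxiter=iters)[0]
```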
DiSCO has consistent performance, regardless of the number of workers, due to the distributed PCG algorithm. This essentially allows DiSCO to perform Newton’s method over the full dataset. This is unnecessarily costly, in terms of communication rounds, when s is reasonably large. Thus we see it perform comparatively poorly in Plots 1(a), 1(b), and 1(c). DiSCO outperforms GIANT and DINGO in Plot 1(d). This is likely because the local directions (−H−1t,i gt and pt,i for GIANT and DINGO, respectively) give poor updates as they are calculated using very small subsets of data, i.e., in Plot 1(d) each worker has access to only 5 data points, while d = 27648.
A significant advantage of DINGO over InexactDANE, AIDE, Async-SGD and Sync-SGD is that it is relatively easy to tune its hyper-parameters. Namely, making bad choices for ρ, θ and φ in DINGO will give sub-optimal performance; however, it is still theoretically guaranteed to strictly decrease the norm of the gradient. In contrast, some choices of hyper-parameters in InexactDANE, AIDE, Async-SGD and Sync-SGD will cause divergence, and these choices can be problem specific. Moreover, these methods can be very sensitive to the chosen hyper-parameters, with some being very difficult to select. For example, the acceleration parameter τ in AIDE was found to be difficult and time-consuming to tune, and the performance of AIDE was sensitive to it; notice the variation in selected τ in Figure 1. This difficulty was also observed in [13, 15]. We found that simply choosing ρ, θ and φ to be small in DINGO gave high performance. Figure 2 compares different values of θ.
5 Future Work
The following is left for future work. First, extending the analysis of DINGO to include convergence results under inexact update. Second, finding more efficient methods of line search, for practical implementations of DINGO, than backtracking line search. Using backtracking line search for GIANT and DINGO requires the communication of some constant number of scalars and vectors, respectively. Hence, for DINGO, it may transmit a large amount of data over the network, while still only requiring two communication rounds per iteration of DINGO. Lastly, considering modifications to DINGO that prevent convergence to a local maximum/saddle point in non-invex problems.
Acknowledgments
Both authors gratefully acknowledge the generous support by the Australian Research Council (ARC) Centre of Excellence for Mathematical & Statistical Frontiers (ACEMS). Fred Roosta was partially supported by DARPA as well as ARC through a Discovery Early Career Researcher Award (DE180100923). Part of this work was done while Fred Roosta was visiting the Simons Institute for the Theory of Computing.

1. What are the strengths and weaknesses of the paper's proposed method?
2. Are there any questions or suggestions regarding the efficiency and scalability of the algorithm, particularly in a distributed environment?
3. Do you have any concerns about the assumptions made in the paper, such as Assumption 3 and the Hessian matrix invertibility?
4. How does the reviewer assess the novelty and communication efficiency of the proposed method compared to other optimization methods like DiSCO?
5. What clarifications or additional information would you like the authors to provide regarding the implementation and practical distribution of the algorithm?
6. How does the reviewer evaluate the effectiveness of the proposed method in terms of wall-clock time and number of communication rounds?
7. Are there any suggestions for improving the figures and comparisons presented in the paper, such as considering the cost of line search or providing more details about the worker nodes?

Review
Strengths/weaknesses/questions/suggestions:
1- The paper is well-written and it is also structured properly.
2- In Algorithm 1, the product of the pseudo-inverse of $\hat{H}$ with the vectors $g_t$ and $\tilde{g}_t$ needs to be computed, which can be costly. It would be clearer if the authors elaborated on this.
3- In equation (6), it is expensive to calculate $P_{t,i}$: there is an inverse calculation (the required quantity can also be obtained by solving a system of equations, which is still expensive), and each iteration of the algorithm contains the expensive parts mentioned above plus the "Line Search".
4- In Algorithm 1, there is no explanation of how the "Line Search" is carried out. Is it done in the distributed environment, or only on the master machine? Also, the "Line Search" mentioned in this paper is very expensive (a full gradient calculation is needed at each step), and if this "Line Search" is done on the master node, there may be cases where the master node becomes very busy; equivalently, the algorithm would not scale well (according to Amdahl's law).
5- Assumption 3 seems to be a strong assumption. Also, the assumption in Lemma 1 that "the Hessian matrix $H_{t,i}$ is invertible" is strong too. Is the latter assumption based on strong convexity? The reviewer did not see why this assumption should be true.
6- In several parts of the paper, "a novel communication efficient distributed second-order optimization method" is claimed; however, there is no analysis of the required number of communication rounds to reach the solution that would show its efficiency (similar to the DiSCO algorithm).
7- The reviewer could not find any information about how DINGO is distributed in practice. Do the authors use GPUs or CPUs for their calculations? The reviewer was eager to see the code for the proposed algorithm; however, no code was available to check how DINGO is distributed.
8- In Figure 1, it is suggested to compare the results based on wall-clock time, not just communication rounds. In some cases the expensive calculations may be done on the master node, so fewer communication rounds would be needed but the wall-clock time would be very high. Another reason is that the number of communication rounds is somehow equivalent to the number of iterations (not a good measure), so it is suggested to compare distributed algorithms based on the true measure, i.e., wall-clock time.
9- In Figure 1, rows 1, 2 and 4, did the authors consider the cost of line search when the x-axis is "Communication Rounds" for DINGO?
10- In Figure 1, what do the authors mean by "Worker"? Details should be provided.
11- It is clear from row 3 of Figure 1 that the "Line Search" mentioned in Algorithm 1 is expensive (many gradient evaluations are needed) and not efficient if the authors want a well-scaled algorithm.
============= After Rebuttal =============
I read the rebuttal carefully, and the authors answered my main concerns. I am generally satisfied with the rebuttal, and thus increased my score.
NIPS | Title
FasterRisk: Fast and Accurate Interpretable Risk Scores
Abstract
Over the last century, risk scores have been the most popular form of predictive model used in healthcare and criminal justice. Risk scores are sparse linear models with integer coefficients; often these models can be memorized or placed on an index card. Typically, risk scores have been created either without data or by rounding logistic regression coefficients, but these methods do not reliably produce high-quality risk scores. Recent work used mathematical programming, which is computationally slow. We introduce an approach for efficiently producing a collection of high-quality risk scores learned from data. Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm. Each of these continuous solutions is transformed into a separate risk score through a “star ray” search, where a range of multipliers are considered before rounding the coefficients sequentially to maintain low logistic loss. Our algorithm returns all of these high-quality risk scores for the user to consider. This method completes within minutes and can be valuable in a broad variety of applications.
1 Introduction
Risk scores are sparse linear models with integer coefficients that predict risks. They are arguably the most popular form of predictive model for high stakes decisions through the last century and are the standard form of model used in criminal justice [4, 22] and medicine [19, 27, 34, 31, 41].
Their history dates back to at least the criminal justice work of Burgess [8], where, based on their criminal history and demographics, individuals were assigned integer point scores between 0 and 21 that determined the probability of their “making good or of failing upon parole.” Other famous risk scores are arguably the most widely used predictive models in healthcare. These include the APGAR score [3], developed in 1952 and given to newborns, and the CHADS2 score [18], which estimates stroke risk for atrial fibrillation patients. Figures 1 and 2 show example risk scores, which estimate risk of a breast lesion being malignant.

*These authors contributed equally.

36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Risk scores have the benefit of being easily memorized; usually their names reveal the full model – for instance, the factors in CHADS2 are past Chronic heart failure, Hypertension, Age 75 years, Diabetes, and past Stroke (where past stroke receives 2 points and the others each receive 1 point). For risk scores, counterfactuals are often trivial to compute, even without a calculator. Also, checking that the data and calculations are correct is easier with risk scores than with other approaches. In short, risk scores have been created by humans for a century to support a huge spectrum of
applications [2, 23, 30, 43, 44, 47], because humans find them easy to understand.
Traditionally, risk scores have been created in two main ways: (1) without data, with expert knowledge only (and validated only afterwards on data), and (2) using a semi-manual process involving manual feature selection and rounding of logistic regression coefficients. That is, these approaches rely heavily on domain expertise and rely little on data. Unfortunately, the alternative of building a model directly from data leads to computationally hard problems: optimizing risk scores over a global objective on data is NP-hard, because in order to produce integer-valued scores, the feasible region must be the integer lattice. There have been only a few approaches to design risk scores automatically [5, 6, 9, 10, 16, 32, 33, 38, 39, 40], but each of these has a flaw that limits its use in practice: the optimization-based approaches use mathematical programming solvers (which require a license) that are slow and scale poorly, and the other methods are randomized greedy algorithms, producing fast but much lower-quality solutions. We need an approach that exhibits the best of both worlds: speed fast enough to operate in a few minutes on a laptop and optimization/search capability as powerful as that of the mathematical programming tools. Our method, FasterRisk, lies at this intersection. It is fast enough to enable interactive model design and can rapidly produce a large pool of models from which users can choose rather than producing only a single model.
One may wonder why simple rounding of `1-regularized logistic regression coefficients does not yield sufficiently good risk scores. Past works [37, 39] explain this as follows: the sheer amount of `1 regularization needed to get a very sparse solution leads to large biases and worse loss values, and rounding goes against the performance gradient. For example, consider the following coefficients from `1 regularization: [1.45, .87, .83, .47, .23, .15, ... ]. This model is worse than its unregularized counterpart due to the bias induced by the large `1 term. Its rounded solution is [1,1,1,0,0,0,..], which leads to even worse loss. Instead, one could multiply all the coefficients by a constant and then round, but which constant is best? There are an infinite number of choices. Even if some value of the multiplier leads to minimal loss due to rounding, the bias from the `1 term still limits the quality of the solution.
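The multiplier question above is easy to state programmatically. The sketch below scores a naive "scale and round" over a grid of candidate multipliers, assuming labels in {−1, +1} and names of our choosing; FasterRisk keeps the multiplier grid but replaces the naive np.round with the sequential rounding of Section 3.3, so this is only a motivating illustration.

```python
import numpy as np

def scaled_logistic_loss(w, w0, X, y, m=1.0):
    """Logistic loss of the risk score f(x) = (w^T x + w0) / m, labels y in {-1, +1}."""
    z = y * ((X @ w + w0) / m)
    return np.logaddexp(0.0, -z).sum()

def naive_scale_and_round(w, w0, X, y, multipliers):
    """Try each multiplier, round the scaled coefficients to the nearest integers,
    and keep the multiplier giving the smallest loss."""
    best = None
    for m in multipliers:
        w_int, w0_int = np.round(m * w), np.round(m * w0)
        loss = scaled_logistic_loss(w_int, w0_int, X, y, m)
        if best is None or loss < best[0]:
            best = (loss, w_int, w0_int, m)
    return best
```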
The algorithm presented here does not have these disadvantages. The steps are: (1) Fast subset search with `0 optimization (avoiding the bias from `1). This requires the solution of an NP-hard problem, but our fast subset selection algorithm is able to solve this quickly. We proceed from this accurate sparse continuous solution, preserving both sparseness and accuracy in the next steps. (2) Find a pool of diverse continuous sparse solutions that are almost as good as the solution found in (1) but with different support sets. (3) A “star ray” search, where we search for feasible integer-valued solutions along multipliers of each item in the pool from (2). By using multipliers, the search space resembles the rays of a star, because it extends each coefficient in the pool outward from the origin to search for solutions. To find integer solutions, we perform a local search (a form of sequential rounding). This method yields high performance solutions: we provide a theoretical upper bound on the loss difference between the continuous sparse solution and the rounded integer sparse solution.
Through extensive experiments, we show that our proposed method is computationally fast and produces high-quality integer solutions. This work thus provides valuable and novel tools to create risk scores for professionals in many different fields, such as healthcare, finance, and criminal justice.
Contributions: Our contributions include the three-step framework for producing risk scores, a beam-search-based algorithm for logistic regression with bounded coefficients (for Step 1), the search algorithm to find pools of diverse high-quality continuous solutions (for Step 2), the star ray search technique using multipliers (Step 3), and a theorem guaranteeing the quality of the star ray search.
2 Related Work
Optimization-based approaches: Risk scores, which model P(y = 1|x), are different from threshold classifiers, which predict either y = 1 or y = −1 given x. Most work in the area of optimization of integer-valued sparse linear models focuses on classifiers, not risk scores [5, 6, 9, 32, 33, 37, 40, 46]. This difference is important, because a classifier generally cannot be calibrated well for use in risk scoring: only its single decision point is optimized. Despite this, several works use the hinge loss to calibrate predictions [6, 9, 32]. All of these optimization-based algorithms use mathematical programming solvers (i.e., integer programming solvers), which tend to be slow and cannot be used on larger problems. However, they can handle both feature selection and integer constraints.
To directly optimize risk scores, typically the logistic loss is used. The RiskSLIM algorithm [39] optimizes the logistic loss regularized with `0 regularization, subject to integer constraints on the coefficients. RiskSLIM uses callbacks to a MIP solver, alternating between solving linear programs and using branch-and-cut to divide and reduce the search space. The branch-and-cut procedure needs to keep track of unsolved nodes, whose number increases exponentially with the size of the feature space. Thus, RiskSLIM’s major challenge is scalability.
Local search-based approaches: As discussed earlier, a natural way to produce a scoring system or risk score is by selecting features manually and rounding logistic regression coefficients or hinge-loss solutions to integers [10, 11, 39]. While rounding is fast, rounding errors can cause the solution quality to be much worse than that of the optimization-based approaches. Several works have proposed improvements over traditional rounding. In Randomized Rounding [10], each coefficient is rounded up or down randomly, based on its continuous coefficient value. However, randomized rounding does not seem to perform well in practice. Chevaleyre [10] also proposed Greedy Rounding, where coefficients are rounded sequentially. While this technique aimed to provide theoretical guarantees for the hinge loss, we identified a serious flaw in the argument, rendering the bounds incorrect (see Appendix B). The RiskSLIM paper [39] proposed SequentialRounding, which, at each iteration, chooses a coefficient to round up or down, making the best choice according to the regularized logistic loss. This gives better solutions than other types of rounding, because the coefficients are considered together through their performance on the loss function, not independently.
A drawback of SequentialRounding is that it considers rounding up or down only to the nearest integer from the continuous solution. By considering multipliers, we consider a much larger space of possible solutions. The idea of multipliers (i.e., “scale and round”) is used for medical scoring systems [11], though, as far as we know, it has been used only with traditional rounding rather than SequentialRounding, which could easily lead to poor performance, and we have seen no previous work that studies how to perform scale-and-round in a systematic, computationally efficient way. While the general idea of scale-and-round seems simple, it is not: there are an infinite number of possible multipliers, and, for each one, a number of possible nearby integer coefficient vectors that is the size of a hypercube, expanding exponentially in the search space.
Sampling Methods: The Bayesian method of Ertekin et al. [16] samples scoring systems, favoring those that are simpler and more accurate, according to a prior. “Pooling” [39] creates multiple models through sampling along the regularization path of ElasticNet. As discussed, when regularization is tuned high enough to induce sparse solutions, it results in substantial bias and low-quality solutions (see [37, 39] for numerous experiments on this point). Note that there is a literature on finding diverse solutions to mixed-integer optimization problems [e.g., 1], but it focuses only on linear objective functions.
Algorithm 1 FasterRisk(D, k, C, B, ε, T, N_m) → {(w^{+t}, w^{+t}_0, m_t)}_t
Input: dataset D (consisting of feature matrix X ∈ R^{n×p} and labels y ∈ R^n), sparsity constraint k, coefficient constraint C = 5, beam search size B = 10, tolerance level ε = 0.3, number of attempts T = 50, number of multipliers to try N_m = 20.
Output: a pool P of scoring systems {(w^t, w^t_0), m_t}, where t indexes all found scoring systems with ‖w^t‖_0 ≤ k and ‖w^t‖_∞ ≤ C, and m_t is the corresponding multiplier.
1: Call Algorithm 2 SparseBeamLR(D, k, C, B) to find a high-quality solution (w^*, w^*_0) to the sparse logistic regression problem with continuous coefficients satisfying a box constraint, i.e., solve Problem (3). (Algorithm SparseBeamLR calls Algorithm ExpandSuppBy1 as a subroutine, which grows the solution by beam search.)
2: Call Algorithm 5 CollectSparseDiversePool((w^*, w^*_0), ε, T), which solves Problem (4). Place its output {(w^t, w^t_0)}_t in pool P = {(w^*, w^*_0)}; P ← P ∪ {(w^t, w^t_0)}_t.
3: Send each member (w^t, w^t_0) of the pool P to Algorithm 3 StarRaySearch(D, (w^t, w^t_0), C, N_m) to perform a line search among possible multiplier values and obtain an integer solution (w^{+t}, w^{+t}_0) with multiplier m_t. Algorithm 3 calls Algorithm 6 AuxiliaryLossRounding, which conducts the rounding step.
4: Return the collection of risk scores {(w^{+t}, w^{+t}_0, m_t)}_t. If desired, return only the best model according to the logistic loss.
3 Methodology
Define the dataset D = {(1, x_i, y_i)}_{i=1}^n (1 is a static feature corresponding to the intercept) and the scaled dataset as (1/m) × D = {(1/m, x_i/m, y_i)}_{i=1}^n, for a real-valued m. Our goal is to produce high-quality risk scores within a few minutes on a small personal computer. We start with an optimization problem similar to RiskSLIM's [39], which minimizes the logistic loss subject to sparsity constraints and integer coefficients:
\[
\min_{w, w_0} L(w, w_0, D), \quad \text{where } L(w, w_0, D) = \sum_{i=1}^{n} \log\!\big(1 + \exp(-y_i(x_i^T w + w_0))\big) \tag{1}
\]
such that ‖w‖_0 ≤ k and w ∈ Z^p, ∀j ∈ [1, ..., p]: w_j ∈ [−5, 5], w_0 ∈ Z. In practice, the range of these box constraints [−5, 5] is user-defined and can be different for each coefficient. (We use 5 for ease of exposition.) The sparsity constraint ‖w‖_0 ≤ k or integer constraints w ∈ Z^p make the problem NP-hard, and this is a difficult mixed-integer nonlinear program. Transforming the original features to all possible dummy variables, which is a standard type of preprocessing [e.g., 24], changes the model into a (flexible) generalized additive model; such models can be as accurate as the best machine learning models [39, 42]. Thus, we generally process variables in x to be binary.
To make the solution space substantially larger than [−5, −4, ..., 4, 5]^p, we use multipliers. The problem becomes:
\[
\min_{w, w_0, m} L\Big(w, w_0, \tfrac{1}{m}D\Big), \quad \text{where } L\Big(w, w_0, \tfrac{1}{m}D\Big) = \sum_{i=1}^{n} \log\!\left(1 + \exp\!\left(-y_i\,\frac{x_i^T w + w_0}{m}\right)\right) \tag{2}
\]
such that ‖w‖_0 ≤ k, w ∈ Z^p, ∀j ∈ [1, ..., p]: w_j ∈ [−5, 5], w_0 ∈ Z, m > 0. Note that the use of multipliers does not weaken the interpretability of the risk score: the user still sees integer risk scores composed of values w_j ∈ {−5, −4, ..., 4, 5}, w_0 ∈ Z. Only the risk conversion table is calculated differently, as P(Y = 1|x) = 1/(1 + e^{−f(x)}) where f(x) = (w^T x + w_0)/m.
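For example, once an integer solution (w, w_0) and a multiplier m are fixed, the risk conversion table is just the logistic transform of the rescaled total score. A small helper, with hypothetical example values, is shown below.

```python
import numpy as np

def risk_from_score(total_scores, w0, m):
    """Predicted risk P(Y = 1 | x) for each achievable integer point total,
    using f(x) = (score + w0) / m as in Eq. (2)."""
    s = np.asarray(total_scores, dtype=float)
    return 1.0 / (1.0 + np.exp(-(s + w0) / m))

# Hypothetical score card: totals 0..5, intercept -2, multiplier 1.8.
# risk_from_score(range(6), w0=-2, m=1.8)
```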
Our method proceeds in three steps, as outlined in Algorithm 1. In the first step, it approximately solves the following sparse logistic regression problem with a box constraint (but not integer constraints), detailed in Section 3.1 and Algorithm 2:
\[
(w^*, w^*_0) \in \operatorname*{argmin}_{w, w_0} L(w, w_0, D), \quad \|w\|_0 \le k,\; w \in \mathbb{R}^p,\; \forall j \in [1, ..., p]:\; w_j \in [-5, 5],\; w_0 \in \mathbb{R}. \tag{3}
\]
The algorithm gives an accurate and sparse real-valued solution (w^*, w^*_0).
The second step produces many near-optimal sparse logistic regression solutions, again without integer constraints, detailed in Section 3.2 and Algorithm 5. Algorithm 5 uses (w^*, w^*_0) from the first step to find a set {(w^t, w^t_0)}_t such that for all t and a given threshold ε_{w^*}:
\[
(w^t, w^t_0) \text{ obeys } L(w^t, w^t_0, D) \le L(w^*, w^*_0, D) \times (1 + \epsilon_{w^*}), \tag{4}
\]
\[
\|w^t\|_0 \le k, \; w^t \in \mathbb{R}^p, \; \forall j \in [1, ..., p]:\; w^t_j \in [-5, 5], \; w^t_0 \in \mathbb{R}.
\]
After these steps, we have a pool of almost-optimal sparse logistic regression models. In the third step, for each coefficient vector in the pool, we compute a risk score. It is a feasible integer solution (w+t, w+t0 ) to the following, which includes a positive multiplier mt > 0:
\[
L\Big(w^{+t}, w^{+t}_0, \tfrac{1}{m_t}D\Big) \le L(w^t, w^t_0, D) + \epsilon_t, \tag{5}
\]
\[
w^{+t} \in \mathbb{Z}^p, \; \forall j \in [1, ..., p]:\; w^{+t}_j \in [-5, 5], \; w^{+t}_0 \in \mathbb{Z},
\]
where we derive a tight theoretical upper bound on ✏t. A detailed solution to (5) is shown in Algorithm 6 in Appendix A. We solve the optimization problem for a large range of multipliers in Algorithm 3 for each coefficient vector in the pool, choosing the best multiplier for each coefficient vector. This third step yields a large collection of risk scores, all of which are approximately as accurate as the best sparse logistic regression model that can be obtained. All steps in this process are fast and scalable.
Algorithm 2 SparseBeamLR(D, k, C, B) → (w, w_0)
Input: dataset D, sparsity constraint k, coefficient constraint C, and beam search size B.
Output: a sparse continuous coefficient vector (w, w_0) with ‖w‖_0 ≤ k, ‖w‖_∞ ≤ C.
1: Define N_+ and N_− as the numbers of positive and negative labels, respectively.
2: w_0 ← log(N_+/N_−), w ← 0.   ▷ Initialize the intercept and coefficients.
3: F ← ∅.   ▷ Initialize the collection of found supports as an empty set.
4: (W, F) ← ExpandSuppBy1(D, (w, w_0), F, B).   ▷ Returns B models of support size 1.
5: for t = 2, ..., k do   ▷ Beam search to expand the support.
6:   W_tmp ← ∅
7:   for (w′, w′_0) ∈ W do   ▷ Each of these has support size t − 1.
8:     (W′, F) ← ExpandSuppBy1(D, (w′, w′_0), F, B).   ▷ Returns B models with support size t.
9:     W_tmp ← W_tmp ∪ W′
10:  end for
11:  Reset W to be the B solutions in W_tmp with the smallest logistic loss values.
12: end for
13: Pick (w, w_0) from W with the smallest logistic loss.
14: Return (w, w_0).
3.1 High-quality Sparse Continuous Solution
There are many different approaches for sparse logistic regression, including ℓ1 regularization [35], ElasticNet [48], ℓ0 regularization [13, 24], and orthogonal matching pursuit (OMP) [14, 25], but none of these approaches seem to be able to handle both the box constraints and the sparsity constraint in Problem 3, so we developed a new approach. This approach, in Algorithm 2, SparseBeamLR, uses beam search for best subset selection: each iteration contains several coordinate descent steps to determine whether a new variable should be added to the support, and it clips coefficients to the box [−5, 5] as it proceeds. Hence the algorithm is able to determine, before committing to the new variable, whether it is likely to decrease the loss while obeying the box constraints. This beam search algorithm for solving (3) implicitly uses the assumption that one of the best models of size k contains the variables of one of the best models of size k − 1. This type of assumption has been studied in the sparse learning literature [14] (Theorem 5). However, we are not aware of any other work that applies box constraints or beam search for sparse logistic regression. In Appendix E, we show that our method produces better solutions than the OMP method presented in [14].
Algorithm 2 calls the ExpandSuppBy1 Algorithm, which has two major steps. The detailed algorithm can be found in Appendix A. For the first step, given a solution w, we perform optimization on each single coordinate j outside of the current support supp(w):
\[
d_j^* \in \operatorname*{argmin}_{d \in [-5, 5]} L(w + d\, e_j, w_0, D) \quad \text{for all } j \text{ where } w_j = 0. \tag{6}
\]
Vector e_j is 1 in the j-th coordinate and 0 otherwise. We find d_j^* for each j through an iterative thresholding operation, which is done on all coordinates in parallel, iterating several (∼10) times:
\[
\text{for iteration } i:\quad d_j \leftarrow \mathrm{Threshold}(j, d_j, w, w_0, D) := \min(\max(\hat{d}_j, -5), 5), \tag{7}
\]
where \hat{d}_j = d_j − (1/l_j)\nabla_j L(w + d_j e_j, w_0, D), and l_j is a Lipschitz constant on coordinate j [24]. Importantly, we can perform Equation 7 on all j where w_j = 0 in parallel using matrix form.
For the second step, after the parallel single coordinate optimization is done, we pick the top B indices (j’s) with the smallest logistic losses L(w + d⇤jej) and fine tune on the new support:
\[
(w^j_{\mathrm{new}}, w^j_{0,\mathrm{new}}) \in \operatorname*{argmin}_{a \in [-5,5]^p,\; b} L(a, b, D) \quad \text{with } \operatorname{supp}(a) = \operatorname{supp}(w) \cup \{j\}. \tag{8}
\]
This can be done again using a variant of Equation 7 iteratively on all the coordinates in the new support. We obtain B pairs (w^j_new, w^j_{0,new}) through this ExpandSuppBy1 procedure, and the collection of these pairs forms the set W′ in Line 8 of Algorithm 2. At the end, Algorithm 2 (SparseBeamLR) returns the best model with the smallest logistic loss found by the beam search procedure. This model satisfies both the sparsity and box constraints.
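A compact NumPy sketch of this subroutine is given below: it performs the parallel thresholding steps of Equation 7 for all coordinates outside the support and returns the B most promising candidates by their single-coordinate loss. The fine-tuning step of Equation 8 and the bookkeeping of already-visited supports are omitted, labels are assumed to be ±1, and all names are our own rather than those of the released code.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expand_supp_by_1(X, y, w, w0, box=5.0, beam=10, inner_iters=10):
    """Return the `beam` coordinates outside supp(w) whose one-coordinate addition
    (with coefficient found by thresholded gradient steps) gives the smallest loss."""
    margins = y * (X @ w + w0)                    # current margins, shape (n,)
    outside = np.flatnonzero(w == 0)
    Xo = X[:, outside] * y[:, None]               # signed candidate features, shape (n, q)
    L = 0.25 * (Xo ** 2).sum(axis=0) + 1e-12      # per-coordinate Lipschitz bounds
    d = np.zeros(outside.size)
    for _ in range(inner_iters):
        Z = margins[:, None] + Xo * d             # candidate margins for each coordinate
        grad = -(_sigmoid(-Z) * Xo).sum(axis=0)   # d(loss)/d(d_j) for every candidate j
        d = np.clip(d - grad / L, -box, box)      # thresholding step of Eq. (7)
    losses = np.logaddexp(0.0, -(margins[:, None] + Xo * d)).sum(axis=0)
    top = np.argsort(losses)[:beam]
    return outside[top], d[top]
```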
3.2 Collect Sparse Diverse Pool (Rashomon Set)
We now collect the sparse diverse pool. In Section 3.1, our goal was to find a sparse model (w^*, w^*_0) with the smallest logistic loss. For high-dimensional features or in the presence of highly correlated features, there could exist many sparse models with almost equally good performance [7]. This set of models is also known as the Rashomon set. Let us find those and turn them into risk scores. We first predefine a tolerance gap level ε (a hyperparameter, usually set to 0.3). Then, we delete a feature with index j_− from the support supp(w^*) and add a new feature with index j_+. We select each new index j_+ whose logistic loss is within the tolerance gap:
\[
\text{Find all } j_+ \text{ s.t. } \min_{a \in [-5, 5]} L\big(w^* - w^*_{j_-} e_{j_-} + a\, e_{j_+},\, w^*_0, D\big) \le L(w^*, w^*_0, D)(1 + \epsilon). \tag{9}
\]
We fine-tune the coefficients on each of the new supports and then save the new solution in our pool. Details can be found in Algorithm 5. Swapping one feature at a time is computationally efficient, and our experiments show it produces sufficiently diverse pools over many datasets. We call this method the CollectSparseDiversePool Algorithm.
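The swap step of Equation 9 can be sketched as follows. For simplicity the inner one-dimensional problem is solved here by a coarse grid search over [−5, 5] rather than the thresholding used in Algorithm 5, labels are assumed to be ±1, and the function names are ours.

```python
import numpy as np

def _logistic_loss(X, y, v, v0):
    return np.logaddexp(0.0, -y * (X @ v + v0)).sum()

def swap_candidates(X, y, w, w0, j_minus, eps=0.3, box=5.0, grid=51):
    """One swap step of CollectSparseDiversePool (Eq. 9): drop feature j_minus and
    list every feature j_plus whose best single replacement coefficient keeps the
    loss within a factor (1 + eps) of the original."""
    budget = _logistic_loss(X, y, w, w0) * (1.0 + eps)
    support = np.flatnonzero(w)
    w_drop = w.copy()
    w_drop[j_minus] = 0.0
    keep = []
    for j_plus in np.setdiff1d(np.arange(X.shape[1]), support):
        best = np.inf
        for a in np.linspace(-box, box, grid):
            trial = w_drop.copy()
            trial[j_plus] = a
            best = min(best, _logistic_loss(X, y, trial, w0))
        if best <= budget:
            keep.append(j_plus)
    return keep
```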
3.3 “Star Ray” Search for Integer Solutions
The last challenge is how to get an integer solution from a continuous solution. To achieve this, we use a “star ray” search that searches along each “ray” of the star, extending each continuous solution outward from the origin using many values of a multiplier, as shown in Algorithm 3. The star ray search provides much more flexibility in finding a good integer solution than simple rounding. The largest multiplier mmax is set to 5/maxj(|w⇤j |) which will take one of the coefficients to the boundary of the box constraint at 5. We set the smallest multiplier to be 1.0 and pick Nm (usually 20) equally spaced points from [mmin,mmax]. If mmax = 1, we set mmin = 0.5 to allow shrinkage of the coefficients. We scale the coefficients and datasets with each multiplier and round the coefficients to integers using the sequential rounding technique in Algorithm 6. For each continuous solution (each “ray” of the “star”), we report the integer solution and multiplier with the smallest logistic loss. This process yields our collection of risk scores. Note here that a standard line search along the multiplier does not work, because the rounding error is highly non-convex.
We briefly discuss how the sequential rounding technique works. Details of this method can be found in Appendix A. We initialize w^+ = w. Then we round the fractional part of w^+ one coordinate at a time. At each step, some of the w^+_j's are already integer-valued (so w^+_j − w_j is nonzero), and we pick the coordinate and rounding operation (either floor or ceil) that minimizes the following objective function, where we will round to an integer at coordinate r^*:
\[
r^*, v^* \in \operatorname*{argmin}_{r, v}\; \sum_{i=1}^{n} l_i^2 \Big( x_{ir}(v - w_r) + \sum_{j \ne r} x_{ij}(w^+_j - w_j) \Big)^{\!2}, \tag{10}
\]
\[
\text{subject to } r \in \{j \mid w^+_j \notin \mathbb{Z}\} \text{ and } v \in \{\lfloor w^+_r \rfloor, \lceil w^+_r \rceil\},
\]
Algorithm 3 StarRaySearch(D, (w, w_0), C, N_m) → (w^+, w^+_0), m
Input: dataset D, a sparse continuous solution (w, w_0), coefficient constraint C, and number of multipliers to try N_m.
Output: a sparse integer solution (w^+, w^+_0) with ‖w^+‖_∞ ≤ C and multiplier m.
1: Define m_max ← C/max_j |w_j| as discussed in Section 3.3. If m_max = 1, set m_min ← 0.5; if m_max > 1, set m_min ← 1.
2: Pick N_m equally spaced multiplier values m_l ∈ [m_min, m_max] for l ∈ [1, ..., N_m] and call this set M = {m_l}_l.
3: Use each multiplier to scale the good continuous solution (w, w_0), obtaining (m_l w, m_l w_0), which is a good continuous solution to the rescaled dataset (1/m_l)D.
4: Send each rescaled solution (m_l w, m_l w_0) and its rescaled dataset (1/m_l)D to Algorithm 6 AuxiliaryLossRounding((1/m_l)D, m_l w, m_l w_0) for rounding. It returns (w^{+l}, w^{+l}_0, m_l), where (w^{+l}, w^{+l}_0) is close to (m_l w, m_l w_0) and has a small logistic loss on (1/m_l)D.
5: Evaluate the logistic loss to pick the best multiplier l^* ∈ argmin_l L(w^{+l}, w^{+l}_0, (1/m_l)D).
6: Return (w^{+l^*}, w^{+l^*}_0) and m_{l^*}.
where l_i is the Lipschitz constant restricted to the rounding interval and can be computed as l_i = 1/(1 + exp(y_i x_i^T γ_i)) with γ_{ij} = ⌊w_j⌋ if y_i x_{ij} > 0 and γ_{ij} = ⌈w_j⌉ otherwise. (The Lipschitz constant here is much smaller than the one in Section 3.1 due to the interval restriction.) After we select r^* and find value v^*, we update w^+ by setting w^+_{r^*} = v^*. We repeat this process until w^+ is on the integer lattice: w^+ ∈ Z^p. The objective function in Equation 10 can be understood as an auxiliary upper bound on the logistic loss. Our algorithm provides an upper bound on the difference between the logistic losses of the continuous solution and the final rounded solution before we start the rounding algorithm (Theorem 3.1 below). Additionally, during the sequential rounding procedure, we do not need to perform expensive operations such as logarithms or exponentials as required by the logistic loss function; the bound and auxiliary function require only sums of squares, not logarithms or exponentials. Its derivation and proof are in Appendix C.

Theorem 3.1. Let w be the real-valued coefficients for the logistic regression model with objective function L(w) = ∑_{i=1}^n log(1 + exp(−y_i x_i^T w)) (the intercept is incorporated). Let w^+ be the integer-valued coefficients returned by the AuxiliaryLossRounding method. Furthermore, let u_j = w_j − ⌊w_j⌋. Let l_i = 1/(1 + exp(y_i x_i^T γ_i)) with γ_{ij} = ⌊w_j⌋ if y_i x_{ij} > 0 and γ_{ij} = ⌈w_j⌉ otherwise. Then, we have an upper bound on the difference between the loss L(w) and the loss L(w^+):
\[
L(w^+) - L(w) \le \sqrt{\, n \sum_{i=1}^{n}\sum_{j=1}^{p} (l_i x_{ij})^2\, u_j (1 - u_j) \,}. \tag{11}
\]
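The sequential rounding step itself reduces to repeated evaluations of the quadratic surrogate in Eq. (10). The sketch below mirrors that procedure under simplifying assumptions: dense O(p²n) search, intercept folded into X as a constant column, and labels in {−1, +1}. The released implementation is organized differently, so treat this only as a readable reference for the update rule.

```python
import numpy as np

def auxiliary_loss_rounding(X, y, w):
    """Round the fractional coordinates of w one at a time, each time choosing the
    coordinate and direction (floor or ceil) minimizing the surrogate of Eq. (10)."""
    w_plus = w.copy()
    # Per-sample weights l_i, evaluated at the worst-case corner of the rounding box.
    corners = np.where(y[:, None] * X > 0, np.floor(w), np.ceil(w))
    l = 1.0 / (1.0 + np.exp(y * np.einsum('ij,ij->i', X, corners)))
    resid = np.zeros(X.shape[0])                  # sum_j x_ij (w+_j - w_j) over rounded j
    frac = [j for j in range(w.size) if w[j] != np.floor(w[j])]
    while frac:
        best = None
        for j in frac:
            for v in (np.floor(w[j]), np.ceil(w[j])):
                obj = np.sum((l * (resid + X[:, j] * (v - w[j]))) ** 2)
                if best is None or obj < best[0]:
                    best = (obj, j, v)
        _, j, v = best
        resid += X[:, j] * (v - w[j])
        w_plus[j] = v
        frac.remove(j)
    return w_plus
```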
Note. Our method has a higher prediction capacity than RiskSLIM: its search space is much larger. Compared to RiskSLIM, our use of the multiplier permits a number of solutions that grows exponentially in k as we increase the multiplier. To see this, consider that for each support of k features, since the logistic loss is convex, it contains a hypersphere in coefficient space. The volume of that hypersphere is (as usual) V = π^{k/2} r^k / Γ(k/2 + 1), where r is the radius of the hypersphere. If we increase the multiplier to 2, the grid becomes finer by a factor of 2, which is equivalent to increasing the radius by a factor of 2. Thus, the volume increases by a factor of 2^k. In general, for maximum multiplier m, the search space is increased by a factor of m^k over RiskSLIM.
4 Experiments
We experimentally focus on two questions: (1) How good is FasterRisk’s solution quality compared to baselines? (§4.1) (2) How fast is FasterRisk compared with the state-of-the-art? (§4.2) In the appendix, we address three more questions: (3) How much do the sparse beam search, diverse pools, and multipliers contribute to our solution quality? (E.4) (4) How well-calibrated are the models produced by FasterRisk? (E.9) (5) How sensitive is FasterRisk to each of the hyperparameters in the algorithm? (E.10)
We compare with RiskSLIM (the current state-of-the-art), as well as the algorithms Pooled-PLR-RD, Pooled-PLR-RSRD, Pooled-PLR-RDSP, Pooled-PLR-Rand and Pooled-PLR-RDP. These algorithms were all previously shown to be inferior to RiskSLIM [39]. These methods first find a pool of sparse continuous solutions using different regularizations of ElasticNet (hence the name "Pooled Penalized Logistic Regression" – Pooled-PLR) and then round the coefficients with different techniques. Details are in Appendix D.3. The best solution is chosen from this pool of integer solutions that obeys the sparsity and box constraints and has the smallest logistic loss. We also compare with the baseline AutoScore [44]. However, on some datasets, the results produced by AutoScore are so poor that they distort the AUC scale, so we show those results only in Appendix E.11. As there is no publicly
available code for any of [10, 16, 32, 33], they do not appear in the experiments. For each dataset, we perform 5-fold cross validation and report training and test AUC. Appendix D presents details of the datasets, experimental setup, evaluation metrics, loss values, and computing platform/environment. More experimental results appear in Appendix E.
4.1 Solution Quality
We first evaluate FasterRisk’s solution quality. Figure 3 shows the training and test AUC on six datasets (results for training loss appear in Appendix E). FasterRisk (the red line) outperforms all baselines, consistently obtaining the highest AUC scores on both the training and test sets. Notably, our method obtains better results than RiskSLIM, which uses a mathematical solver and is the current state-of-the-art method for scoring systems. This superior performance is due to the use of multipliers, which increases the complexity of the hypothesis space. Figure 4 provides a more detailed comparison between FasterRisk and RiskSLIM. One may wonder whether running RiskSLIM longer would make this MIP-based method comparable to our FasterRisk, since the current running time limit for RiskSLIM is only 15 minutes. We extended RiskSLIM’s running time limit up to 1 hour and show the comparison in Appendix E.8; FasterRisk still outperforms RiskSLIM by a large margin.
FasterRisk performs significantly better than the other baselines for two reasons. First, the continuous sparse solutions produced by ElasticNet are low quality for very sparse models. Second, it is difficult to obtain an exact model size by controlling `1 regularization. For example, Pooled-PLR-RD and Pooled-PLR-RDSP do not have results for model size 10 on the mammo datasets, because no such model size exists in the pooled solutions after rounding.
4.2 Runtime Comparison
The major drawback of RiskSLIM is its limited scalability. Runtime is important to allow interactive model development and to handle larger datasets. Figure 5 shows that FasterRisk (red bars) is significantly faster than RiskSLIM (blue bars) in general. We ran these experiments with a 900 second (15 minute) timeout. RiskSLIM finishes running on the small dataset mammo, but it times out on the larger datasets, timing out on models larger than 4 features for adult, larger than 3 features for bank, larger than 7 features for mushroom, larger than 2 features for COMPAS, and larger than 1
feature for FICO. RiskSLIM times out early on COMPAS and FICO datasets, suggesting that the MIP-based method struggles with high-dimensional and highly-correlated features. Thus, we see that FasterRisk tends to be both faster and more accurate than RiskSLIM.
4.3 Example Scoring Systems
The main benefit of risk scores is their interpretability. We place a few example risk scores in Table 1 to allow the reader to judge for themselves. More risk scores examples can be found in Appendix F.1. Additionally, we provide a pool of solutions for the top 12 models on the bank, mammo, and Netherlands datasets in Appendix F.2. Prediction performance is generally not the only criteria users consider when deciding to deploy a model. Provided with a pool of solutions that perform equally well, a user can choose the one that best incorporates domain knowledge [45]. After the pool of models is generated, interacting with the pool is essentially computationally instantaneous. Finally, we can reduce some models to relatively prime coefficients or transform some features for better interpretability. Examples of such transformations are given in Appendix G.1.
5 Conclusion
FasterRisk produces a collection of high-quality risk scores within minutes. Its performance owes to three key ideas: a new algorithm for sparsity- and box-constrained continuous models, using a pool of diverse solutions, and the use of the star ray search, which leverages multipliers and a new sequential rounding technique. FasterRisk is suitable for high-stakes decisions, and permits domain experts a collection of interpretable models to choose from.
Code Availability
Implementations of FasterRisk discussed in this paper are available at https://github.com/ jiachangliu/FasterRisk.
Acknowledgements
The authors acknowledge funding from the National Science Foundation under grants IIS-2147061 and IIS-2130250, National Institute on Drug Abuse under grant R01 DA054994, Department of Energy under grants DE-SC0021358 and DE-SC0023194, and National Research Traineeship Program under NSF grants DGE-2022040 and CCF-1934964. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). Nous remercions le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG) de son soutien.

1. What is the main contribution of the paper regarding risk scores generation?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of speed and accuracy?
3. How does the reviewer assess the significance and quality of the paper's content, including the choice of baselines and the presentation of results?
4. What are the questions raised by the reviewer regarding the paper's approach, such as considerations of societal impact and interpretability?
5. What are the limitations of the paper, especially concerning its potential negative societal impacts?
The authors propose a method for efficiently automatically generating a pool of “risk scores” (sparse linear models with integer coefficients), involving (1) a beam search algorithm to identify a sparse set of features, (2) given the original set of features, identify a pool of sparse solutions with similar performance (but “diverse” set of features), and (3) “star ray” search to choose integer coefficients. They evaluate both the speed and accuracy of their approach on multiple benchmark datasets.
Strengths And Weaknesses
The authors describe an interesting method for quickly identifying risk scores in a diverse range of settings. The main improvement seems to be speed (which wasn’t very well quantified with respect to baselines), since performance-wise it was similar to a previous approach – and I wonder how useful this speed would actually be in this high-stakes offline setting.
Originality/clarity: this seems to be a creative combination of past algorithms, and they described their algorithms in detail
Quality/significance: the authors evaluated their methods and baselines along both accuracy and speed metrics, and also shared extensive additional experiments in their supplement. One aspect that seemed problematic was that for speed plots, they cut off algorithms after 15 minutes, and it’s unclear exactly how their baselines scale because they tend to be censored after just one or two points along the x-axis. It would also be helpful to see if quantitatively, there are any significant differences in AUC between FasterRisk and alternative methods. Based on these two results, it would be easier to assess whether their method provides a meaningful contribution to real-world use cases of risk scores.
The authors also do not describe any considerations of societal impact which is an important factor given their suggested use cases.
Questions
You mention (line 54) “We need an approach that exhibits the best of both worlds: speed fast enough to operate in a few minutes on a laptop and optimization and search capability as powerful as that of the mathematical programming tools. Our method, FasterRisk, lies at this intersection.” — Of course, it makes sense to have a goal of developing an accurate model, but I’m wondering why it is important that risk scores must operate in a few minutes on a laptop. If these are risk scores for a high-impact situation, why can’t we run an analysis for an hour? Or even a week?
Figure 4: I find this figure a bit frustrating, because it’s not really allowing me to see how the baseline method scales compared to yours. A 15 minute time-out seems kind of silly/arbitrary (in the real world, I’d expect people to be willing to train their methods for quite a while if it’s for a high stakes setting), and I would highly recommend allowing your baselines to run for longer (at least several hours) to show how the times actually scale (and then possibly display with a log scale as needed). It would also be helpful to share the number of rows and columns in each of the datasets to give a sense of scale.
Section 4.3 Example scoring systems: how were these examples selected? Also, you mention that risk scores offer interpretability – is this something that you argue is unique to your approach and not your baselines?
Your algorithm is often described to return a “pool” of solutions, but I didn’t see much discussion of what that pool actually looks like. For example, I would want to see how many solutions were produced, how different they are from each other, etc. In a use case, would you expect the user to look through all of them and then choose one, or just defer to the lowest-error one?
Limitations
The authors describe some limitations related to their algorithms, which is appreciated. However, they describe the potential negative societal impacts as “[N/A]” which seems like a huge oversight to me. As they say, “[risk scores] are possibly the most popular form of predictive model for high stakes decisions through the last century and are the standard form of model used in criminal justice and medicine,” it seems obvious that any contribution they make to this field could have serious societal impacts, for better or worse.
Here’s an example of how this approach could be used problematically: Let’s say we have a criminal justice scenario in which we have access to race and some other features that are essentially proxies for race (e.g., zip code). Now let’s assume a user is given a pool of “diverse” solutions by the FasterRisk approach, and they know that they don’t want a model that’s “racist”. They may notice one risk score has a highest accuracy but relies on race, so they decide they shouldn’t use that model. They then notice an almost identical model that has all the same features except race has been replaced by zip code, and they choose this model instead and deploy it in some real world scenario (e.g., recommending whether someone should be placed on parole). |
NIPS | Title
FasterRisk: Fast and Accurate Interpretable Risk Scores
Abstract
Over the last century, risk scores have been the most popular form of predictive model used in healthcare and criminal justice. Risk scores are sparse linear models with integer coefficients; often these models can be memorized or placed on an index card. Typically, risk scores have been created either without data or by rounding logistic regression coefficients, but these methods do not reliably produce high-quality risk scores. Recent work used mathematical programming, which is computationally slow. We introduce an approach for efficiently producing a collection of high-quality risk scores learned from data. Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm. Each of these continuous solutions is transformed into a separate risk score through a “star ray” search, where a range of multipliers are considered before rounding the coefficients sequentially to maintain low logistic loss. Our algorithm returns all of these high-quality risk scores for the user to consider. This method completes within minutes and can be valuable in a broad variety of applications.
1 Introduction
Risk scores are sparse linear models with integer coefficients that predict risks. They are arguably the most popular form of predictive model for high stakes decisions through the last century and are the standard form of model used in criminal justice [4, 22] and medicine [19, 27, 34, 31, 41].
Their history dates back to at least the criminal justice work of Burgess [8], where, based on their criminal history and demographics, individuals were assigned integer point scores between 0 and 21 that determined the probability of their “making good or of failing upon parole.” Other famous risk scores are arguably the most widely used predictive models in healthcare. These include the APGAR score [3], developed in 1952 and given to newborns, and the CHADS2 score [18], which estimates stroke risk for atrial fibrillation patients. Figures 1 and 2 show example risk scores, which estimate risk of a breast lesion being malignant.
∗These authors contributed equally.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Risk scores have the benefit of being easily memorized; usually their names reveal the full model – for instance, the factors in CHADS2 are past Chronic heart failure, Hypertension, Age ≥ 75 years, Diabetes, and past Stroke (where past stroke receives 2 points and the others each receive 1 point). For risk scores, counterfactuals are often trivial to compute, even without a calculator. Also, checking that the data and calculations are correct is easier with risk scores than with other approaches. In short, risk scores have been created by humans for a century to support a huge spectrum of applications [2, 23, 30, 43, 44, 47], because humans find them easy to understand.
Traditionally, risk scores have been created in two main ways: (1) without data, with expert knowledge only (and validated only afterwards on data), and (2) using a semi-manual process involving manual feature selection and rounding of logistic regression coefficients. That is, these approaches rely heavily on domain expertise and rely little on data. Unfortunately, the alternative of building a model directly from data leads to computationally hard problems: optimizing risk scores over a global objective on data is NP-hard, because in order to produce integer-valued scores, the feasible region must be the integer lattice. There have been only a few approaches to design risk scores automatically [5, 6, 9, 10, 16, 32, 33, 38, 39, 40], but each of these has a flaw that limits its use in practice: the optimization-based approaches use mathematical programming solvers (which require a license) that are slow and scale poorly, and the other methods are randomized greedy algorithms, producing fast but much lower-quality solutions. We need an approach that exhibits the best of both worlds: speed fast enough to operate in a few minutes on a laptop and optimization/search capability as powerful as that of the mathematical programming tools. Our method, FasterRisk, lies at this intersection. It is fast enough to enable interactive model design and can rapidly produce a large pool of models from which users can choose rather than producing only a single model.
One may wonder why simple rounding of ℓ1-regularized logistic regression coefficients does not yield sufficiently good risk scores. Past works [37, 39] explain this as follows: the sheer amount of ℓ1 regularization needed to get a very sparse solution leads to large biases and worse loss values, and rounding goes against the performance gradient. For example, consider the following coefficients from ℓ1 regularization: [1.45, .87, .83, .47, .23, .15, ... ]. This model is worse than its unregularized counterpart due to the bias induced by the large ℓ1 term. Its rounded solution is [1,1,1,0,0,0,..], which leads to even worse loss. Instead, one could multiply all the coefficients by a constant and then round, but which constant is best? There are an infinite number of choices. Even if some value of the multiplier leads to minimal loss due to rounding, the bias from the ℓ1 term still limits the quality of the solution.
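To make the multiplier question concrete, here is a small illustration (ours, not from the paper) using the coefficient vector quoted above: rounding directly keeps only the three largest coefficients, while scaling by different candidate multipliers before rounding preserves progressively more of the small coefficients. Which multiplier is best still has to be decided by the loss on data, which is exactly what the star ray search described later does.

```python
import numpy as np

# l1-regularized coefficients quoted in the text above.
w = np.array([1.45, 0.87, 0.83, 0.47, 0.23, 0.15])

print("naive rounding:   ", np.round(w).astype(int))   # -> [1 1 1 0 0 0], small features lost
for m in (2.0, 3.0, 5.0 / w.max()):                    # candidate multipliers
    print(f"multiplier {m:4.2f}: ", np.round(m * w).astype(int))
```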
The algorithm presented here does not have these disadvantages. The steps are: (1) Fast subset search with ℓ0 optimization (avoiding the bias from ℓ1). This requires the solution of an NP-hard problem, but our fast subset selection algorithm is able to solve this quickly. We proceed from this accurate sparse continuous solution, preserving both sparseness and accuracy in the next steps. (2) Find a pool of diverse continuous sparse solutions that are almost as good as the solution found in (1) but with different support sets. (3) A “star ray” search, where we search for feasible integer-valued solutions along multipliers of each item in the pool from (2). By using multipliers, the search space resembles the rays of a star, because it extends each coefficient in the pool outward from the origin to search for solutions. To find integer solutions, we perform a local search (a form of sequential rounding). This method yields high performance solutions: we provide a theoretical upper bound on the loss difference between the continuous sparse solution and the rounded integer sparse solution.
Through extensive experiments, we show that our proposed method is computationally fast and produces high-quality integer solutions. This work thus provides valuable and novel tools to create risk scores for professionals in many different fields, such as healthcare, finance, and criminal justice.
Contributions: Our contributions include the three-step framework for producing risk scores, a beam-search-based algorithm for logistic regression with bounded coefficients (for Step 1), the search algorithm to find pools of diverse high-quality continuous solutions (for Step 2), the star ray search technique using multipliers (Step 3), and a theorem guaranteeing the quality of the star ray search.
2 Related Work
Optimization-based approaches: Risk scores, which model P(y = 1|x), are different from threshold classifiers, which predict either y = 1 or y = −1 given x. Most work in the area of optimization of integer-valued sparse linear models focuses on classifiers, not risk scores [5, 6, 9, 32, 33, 37, 40, 46]. This difference is important, because a classifier generally cannot be calibrated well for use in risk scoring: only its single decision point is optimized. Despite this, several works use the hinge loss to calibrate predictions [6, 9, 32]. All of these optimization-based algorithms use mathematical programming solvers (i.e., integer programming solvers), which tend to be slow and cannot be used on larger problems. However, they can handle both feature selection and integer constraints.
To directly optimize risk scores, typically the logistic loss is used. The RiskSLIM algorithm [39] optimizes the logistic loss regularized with ℓ0 regularization, subject to integer constraints on the coefficients. RiskSLIM uses callbacks to a MIP solver, alternating between solving linear programs and using branch-and-cut to divide and reduce the search space. The branch-and-cut procedure needs to keep track of unsolved nodes, whose number increases exponentially with the size of the feature space. Thus, RiskSLIM’s major challenge is scalability.
Local search-based approaches: As discussed earlier, a natural way to produce a scoring system or risk score is by selecting features manually and rounding logistic regression coefficients or hinge-loss solutions to integers [10, 11, 39]. While rounding is fast, rounding errors can cause the solution quality to be much worse than that of the optimization-based approaches. Several works have proposed improvements over traditional rounding. In Randomized Rounding [10], each coefficient is rounded up or down randomly, based on its continuous coefficient value. However, randomized rounding does not seem to perform well in practice. Chevaleyre [10] also proposed Greedy Rounding, where coefficients are rounded sequentially. While this technique aimed to provide theoretical guarantees for the hinge loss, we identified a serious flaw in the argument, rendering the bounds incorrect (see Appendix B). The RiskSLIM paper [39] proposed SequentialRounding, which, at each iteration, chooses a coefficient to round up or down, making the best choice according to the regularized logistic loss. This gives better solutions than other types of rounding, because the coefficients are considered together through their performance on the loss function, not independently.
A drawback of SequentialRounding is that it considers rounding up or down only to the nearest integer from the continuous solution. By considering multipliers, we consider a much larger space of possible solutions. The idea of multipliers (i.e., “scale and round”) is used for medical scoring systems [11], though, as far as we know, it has been used only with traditional rounding rather than SequentialRounding, which could easily lead to poor performance, and we have seen no previous work that studies how to perform scale-and-round in a systematic, computationally efficient way. While the general idea of scale-and-round seems simple, it is not: there are an infinite number of possible multipliers, and, for each one, a number of possible nearby integer coefficient vectors that is the size of a hypercube, expanding exponentially in the search space.
Sampling Methods: The Bayesian method of Ertekin et al. [16] samples scoring systems, favoring those that are simpler and more accurate, according to a prior. “Pooling” [39] creates multiple models through sampling along the regularization path of ElasticNet. As discussed, when regularization is tuned high enough to induce sparse solutions, it results in substantial bias and low-quality solutions (see [37, 39] for numerous experiments on this point). Note that there is a literature on finding diverse solutions to mixed-integer optimization problems [e.g., 1], but it focuses only on linear objective functions.
Algorithm 1 FasterRisk(D, k, C, B, ε, T, Nm) → {(w+t, w+t0, mt)}t
Input: dataset D (consisting of feature matrix X ∈ ℝ^(n×p) and labels y ∈ ℝ^n), sparsity constraint k, coefficient constraint C = 5, beam search size B = 10, tolerance level ε = 0.3, number of attempts T = 50, number of multipliers to try Nm = 20.
Output: a pool P of scoring systems {(wt, wt0), mt}, where t is the index enumerating all found scoring systems with ‖wt‖0 ≤ k and ‖wt‖∞ ≤ C, and mt is the corresponding multiplier.
1: Call Algorithm 2 SparseBeamLR(D, k, C, B) to find a high-quality solution (w*, w*0) to the sparse logistic regression problem with continuous coefficients satisfying a box constraint, i.e., solve Problem (3). (Algorithm SparseBeamLR will call Algorithm ExpandSuppBy1 as a subroutine, which grows the solution by beam search.)
2: Call Algorithm 5 CollectSparseDiversePool((w*, w*0), ε, T), which solves Problem (4). Place its output {(wt, wt0)}t in pool P = {(w*, w*0)}: P ← P ∪ {(wt, wt0)}t.
3: Send each member t in the pool P, which is (wt, wt0), to Algorithm 3 StarRaySearch(D, (wt, wt0), C, Nm) to perform a line search among possible multiplier values and obtain an integer solution (w+t, w+t0) with multiplier mt. Algorithm 3 calls Algorithm 6 AuxiliaryLossRounding, which conducts the rounding step.
4: Return the collection of risk scores {(w+t, w+t0, mt)}t. If desired, return only the best model according to the logistic loss.
3 Methodology
Define dataset D = {(1, xi, yi)}_{i=1}^n (1 is a static feature corresponding to the intercept) and the scaled dataset as (1/m)·D = {(1/m, xi/m, yi)}_{i=1}^n, for a real-valued m. Our goal is to produce high-quality risk scores within a few minutes on a small personal computer. We start with an optimization problem similar to RiskSLIM’s [39], which minimizes the logistic loss subject to sparsity constraints and integer coefficients:

min_{w, w0} L(w, w0, D),  where  L(w, w0, D) = Σ_{i=1}^n log(1 + exp(−yi(xi^T w + w0)))    (1)

such that ‖w‖0 ≤ k and w ∈ ℤ^p, ∀j ∈ [1, ..., p]: wj ∈ [−5, 5], w0 ∈ ℤ.

In practice, the range of these box constraints [−5, 5] is user-defined and can be different for each coefficient. (We use 5 for ease of exposition.) The sparsity constraint ‖w‖0 ≤ k or integer constraints w ∈ ℤ^p make the problem NP-hard, and this is a difficult mixed-integer nonlinear program. Transforming the original features to all possible dummy variables, which is a standard type of preprocessing [e.g., 24], changes the model into a (flexible) generalized additive model; such models can be as accurate as the best machine learning models [39, 42]. Thus, we generally process variables in x to be binary.
To make the solution space substantially larger than [−5, −4, ..., 4, 5]^p, we use multipliers. The problem becomes:

min_{w, w0, m} L(w, w0, (1/m)·D),  where  L(w, w0, (1/m)·D) = Σ_{i=1}^n log(1 + exp(−yi (xi^T w + w0)/m))    (2)

such that ‖w‖0 ≤ k, w ∈ ℤ^p, ∀j ∈ [1, ..., p]: wj ∈ [−5, 5], w0 ∈ ℤ, m > 0.

Note that the use of multipliers does not weaken the interpretability of the risk score: the user still sees integer risk scores composed of values wj ∈ {−5, −4, ..., 4, 5}, w0 ∈ ℤ. Only the risk conversion table is calculated differently, as P(Y = 1|x) = 1/(1 + e^(−f(x))) where f(x) = (1/m)(w^T x + w0).
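A minimal NumPy sketch (ours, not the authors’ implementation) of the multiplier-adjusted logistic loss in Equation (2): the integer coefficients are only divided by m when point totals are converted to probabilities, so the interpretability of the point values is unchanged.

```python
import numpy as np

def logistic_loss(w, w0, X, y, m=1.0):
    """Eq. (2): sum_i log(1 + exp(-y_i (x_i^T w + w0) / m)), with y_i in {-1, +1}."""
    z = y * (X @ w + w0) / m
    return np.logaddexp(0.0, -z).sum()          # numerically stable log(1 + exp(-z))

# Tiny usage with binary features, as the paper assumes after dummy-variable preprocessing.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 6)).astype(float)
y = rng.choice([-1.0, 1.0], size=100)
w_int, w0, m = np.array([2.0, -1.0, 3.0, 0.0, 1.0, -2.0]), -1.0, 2.5
print(logistic_loss(w_int, w0, X, y, m))
```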
Our method proceeds in three steps, as outlined in Algorithm 1. In the first step, it approximately solves the following sparse logistic regression problem with a box constraint (but not integer constraints), detailed in Section 3.1 and Algorithm 2:

(w*, w*0) ∈ argmin_{w, w0} L(w, w0, D),  ‖w‖0 ≤ k, w ∈ ℝ^p, ∀j ∈ [1, ..., p]: wj ∈ [−5, 5], w0 ∈ ℝ.    (3)

The algorithm gives an accurate and sparse real-valued solution (w*, w*0).
The second step produces many near-optimal sparse logistic regression solutions, again without integer constraints, detailed in Section 3.2 and Algorithm 5. Algorithm 5 uses (w*, w*0) from the first step to find a set {(wt, wt0)}t such that, for all t and a given threshold ε_w*:

(wt, wt0) obeys L(wt, wt0, D) ≤ L(w*, w*0, D) × (1 + ε_w*)    (4)

‖wt‖0 ≤ k, wt ∈ ℝ^p, ∀j ∈ [1, ..., p]: wtj ∈ [−5, 5], wt0 ∈ ℝ.
After these steps, we have a pool of almost-optimal sparse logistic regression models. In the third step, for each coefficient vector in the pool, we compute a risk score. It is a feasible integer solution (w+t, w+t0) to the following, which includes a positive multiplier mt > 0:

L(w+t, w+t0, (1/mt)·D) ≤ L(wt, wt0, D) + εt,    (5)

w+t ∈ ℤ^p, ∀j ∈ [1, ..., p]: w+tj ∈ [−5, 5], w+t0 ∈ ℤ,

where we derive a tight theoretical upper bound on εt. A detailed solution to (5) is shown in Algorithm 6 in Appendix A. We solve the optimization problem for a large range of multipliers in Algorithm 3 for each coefficient vector in the pool, choosing the best multiplier for each coefficient vector. This third step yields a large collection of risk scores, all of which are approximately as accurate as the best sparse logistic regression model that can be obtained. All steps in this process are fast and scalable.
Algorithm 2 SparseBeamLR(D, k, C, B) → (w, w0)
Input: dataset D, sparsity constraint k, coefficient constraint C, and beam search size B.
Output: a sparse continuous coefficient vector (w, w0) with ‖w‖0 ≤ k, ‖w‖∞ ≤ C.
1: Define N+ and N− as numbers of positive and negative labels, respectively.
2: w0 ← log(N+/N−), w ← 0.    ▷ Initialize the intercept and coefficients.
3: F ← ∅.    ▷ Initialize the collection of found supports as an empty set.
4: (W, F) ← ExpandSuppBy1(D, (w, w0), F, B).    ▷ Returns B models of support 1.
5: for t = 2, ..., k do    ▷ Beam search to expand the support
6:    Wtmp ← ∅
7:    for (w′, w′0) ∈ W do    ▷ Each of these has support t − 1
8:        (W′, F) ← ExpandSuppBy1(D, (w′, w′0), F, B).    ▷ Returns B models with support t.
9:        Wtmp ← Wtmp ∪ W′
10:   end for
11:   Reset W to be the B solutions in Wtmp with the smallest logistic loss values.
12: end for
13: Pick (w, w0) from W with the smallest logistic loss.
14: Return (w, w0).
3.1 High-quality Sparse Continuous Solution
There are many different approaches for sparse logistic regression, including ℓ1 regularization [35], ElasticNet [48], ℓ0 regularization [13, 24], and orthogonal matching pursuit (OMP) [14, 25], but none of these approaches seem to be able to handle both the box constraints and the sparsity constraint in Problem 3, so we developed a new approach. This approach, in Algorithm 2, SparseBeamLR, uses beam search for best subset selection: each iteration contains several coordinate descent steps to determine whether a new variable should be added to the support, and it clips coefficients to the box [−5, 5] as it proceeds. Hence the algorithm is able to determine, before committing to the new variable, whether it is likely to decrease the loss while obeying the box constraints. This beam search algorithm for solving (3) implicitly uses the assumption that one of the best models of size k contains the variables of one of the best models of size k − 1. This type of assumption has been studied in the sparse learning literature [14] (Theorem 5). However, we are not aware of any other work that applies box constraints or beam search for sparse logistic regression. In Appendix E, we show that our method produces better solutions than the OMP method presented in [14].
Algorithm 2 calls the ExpandSuppBy1 Algorithm, which has two major steps. The detailed algorithm can be found in Appendix A. For the first step, given a solution w, we perform optimization on each single coordinate j outside of the current support supp(w):

d*j ∈ argmin_{d ∈ [−5, 5]} L(w + d·ej, w0, D)  for all j where wj = 0.    (6)

Vector ej is 1 for the jth coordinate and 0 otherwise. We find d*j for each j through an iterative thresholding operation, which is done on all coordinates in parallel, iterating several (∼10) times:

for iteration i:  dj ← Threshold(j, dj, w, w0, D) := min(max(c_dj, −5), 5),    (7)

where c_dj = dj − (1/lj)·∇j L(w + dj·ej, w0, D), and lj is a Lipschitz constant on coordinate j [24]. Importantly, we can perform Equation 7 on all j where wj = 0 in parallel using matrix form.
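The parallel thresholding step can be written in a few vectorized lines. The sketch below is ours; in particular, the per-coordinate Lipschitz constant is taken as the standard logistic-curvature bound l_j = (1/4) Σ_i x_ij², not the constant from [24], which we do not reproduce here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def threshold_scan(w, w0, X, y, box=5.0, n_iter=10):
    """Eqs. (6)-(7): a few clipped gradient steps on d_j for every coordinate j outside
    supp(w), all coordinates updated in parallel."""
    off = np.flatnonzero(w == 0)                          # coordinates outside the support
    base = X @ w + w0                                     # fixed contribution of the current support
    lip = 0.25 * (X[:, off] ** 2).sum(axis=0) + 1e-12     # assumed per-coordinate Lipschitz bound
    d = np.zeros(off.size)
    for _ in range(n_iter):
        margins = base[:, None] + X[:, off] * d           # shape (n, |off|): one single-coordinate model per column
        grad = -np.sum(y[:, None] * X[:, off] * sigmoid(-y[:, None] * margins), axis=0)
        d = np.clip(d - grad / lip, -box, box)            # c_{d_j} = d_j - grad_j / l_j, then clip to the box
    losses = np.logaddexp(0.0, -y[:, None] * (base[:, None] + X[:, off] * d)).sum(axis=0)
    return off, d, losses                                 # candidate coordinates, their steps, and their losses
```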
For the second step, after the parallel single coordinate optimization is done, we pick the top B indices (j’s) with the smallest logistic losses L(w + d*j·ej) and fine-tune on the new support:

(wj_new, w0j_new) ∈ argmin_{a ∈ [−5, 5]^p, b} L(a, b, D)  with supp(a) = supp(w) ∪ {j}.    (8)

This can be done again using a variant of Equation 7 iteratively on all the coordinates in the new support. We get B pairs of (wj_new, w0j_new) through this ExpandSuppBy1 procedure, and the collection of these pairs forms the set W′ in Line 8 of Algorithm 2. At the end, Algorithm 2 (SparseBeamLR) returns the best model with the smallest logistic loss found by the beam search procedure. This model satisfies both the sparsity and box constraints.
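Putting the pieces together, the following is a condensed, self-contained sketch of the beam search in Algorithm 2 (ours, not the released code). For brevity it refits every candidate support with a plain projected-gradient loop instead of the thresholding pre-screen of ExpandSuppBy1, so it is slower, but it follows the same keep-the-best-B-supports logic.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

def loss(w, w0, X, y):
    return np.logaddexp(0.0, -y * (X @ w + w0)).sum()

def fit_on_support(S, X, y, box=5.0, steps=200):
    """Box-constrained logistic regression restricted to support S (projected gradient)."""
    w, w0 = np.zeros(X.shape[1]), 0.0
    Xs = X[:, S]
    lr = 4.0 / (np.linalg.norm(Xs, 2) ** 2 + len(y))      # step size from a crude Lipschitz bound
    for _ in range(steps):
        r = -y * sigmoid(-y * (Xs @ w[S] + w0))           # d(loss)/d(margin) per sample
        w[S] = np.clip(w[S] - lr * (Xs.T @ r), -box, box)
        w0 -= lr * r.sum()
    return w, w0

def sparse_beam_lr(X, y, k, B=10, box=5.0):
    """Beam search over supports of growing size, keeping the B best supports at each size."""
    p = X.shape[1]
    beam, best = [tuple()], None
    for _ in range(k):
        cands = {tuple(sorted(S + (j,))) for S in beam for j in range(p) if j not in S}
        scored = []
        for S in cands:
            w, w0 = fit_on_support(list(S), X, y, box)
            scored.append((loss(w, w0, X, y), S, w, w0))
        scored.sort(key=lambda t: t[0])
        beam = [S for _, S, _, _ in scored[:B]]
        best = scored[0]
    return best[2], best[3], best[1]                       # coefficients, intercept, chosen support
```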
3.2 Collect Sparse Diverse Pool (Rashomon Set)
We now collect the sparse diverse pool. In Section 3.1, our goal was to find a sparse model (w*, w*0) with the smallest logistic loss. For high-dimensional features or in the presence of highly correlated features, there could exist many sparse models with almost equally good performance [7]. This set of models is also known as the Rashomon set. Let us find those and turn them into risk scores. We first predefine a tolerance gap level ε (a hyperparameter, usually set to 0.3). Then, we delete a feature with index j− in the support supp(w*) and add a new feature with index j+. We select each new index j+ whose logistic loss is within the tolerance gap:

Find all j+ s.t.  min_{a ∈ [−5, 5]} L(w* − w*_{j−}·e_{j−} + a·e_{j+}, w0, D) ≤ L(w*, w*0, D)·(1 + ε).    (9)
We fine-tune the coefficients on each of the new supports and then save the new solution in our pool. Details can be found in Algorithm 5. Swapping one feature at a time is computationally efficient, and our experiments show it produces sufficiently diverse pools over many datasets. We call this method the CollectSparseDiversePool Algorithm.
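A sketch (ours) of the support-swapping idea behind CollectSparseDiversePool: delete one support feature, try every replacement, and keep any swap whose fine-tuned loss stays within the (1 + ε) tolerance of Equation (9). Unlike Algorithm 5, this version fine-tunes every candidate swap directly rather than pre-screening with a single-coordinate fit, which is simpler but slower.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

def fit(S, X, y, box=5.0, steps=300):
    """Tiny box-constrained logistic fit on support S; returns (w, w0, loss)."""
    w, w0, Xs = np.zeros(X.shape[1]), 0.0, X[:, S]
    lr = 4.0 / (np.linalg.norm(Xs, 2) ** 2 + len(y))
    for _ in range(steps):
        r = -y * sigmoid(-y * (Xs @ w[S] + w0))
        w[S] = np.clip(w[S] - lr * (Xs.T @ r), -box, box)
        w0 -= lr * r.sum()
    return w, w0, np.logaddexp(0.0, -y * (X @ w + w0)).sum()

def collect_diverse_pool(S_star, X, y, eps=0.3):
    """Swap one feature of the best support at a time and keep near-optimal supports."""
    S_star = list(S_star)
    w, w0, base = fit(S_star, X, y)
    pool = [(S_star, w, w0, base)]
    for j_minus in S_star:
        for j_plus in range(X.shape[1]):
            if j_plus in S_star:
                continue
            S_new = [j for j in S_star if j != j_minus] + [j_plus]
            w2, w02, l2 = fit(S_new, X, y)                 # fine-tune on the swapped support
            if l2 <= base * (1.0 + eps):                   # tolerance test of Eq. (9)
                pool.append((S_new, w2, w02, l2))
    return pool
```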
3.3 “Star Ray” Search for Integer Solutions
The last challenge is how to get an integer solution from a continuous solution. To achieve this, we use a “star ray” search that searches along each “ray” of the star, extending each continuous solution outward from the origin using many values of a multiplier, as shown in Algorithm 3. The star ray search provides much more flexibility in finding a good integer solution than simple rounding. The largest multiplier mmax is set to 5/maxj(|w*j|), which will take one of the coefficients to the boundary of the box constraint at 5. We set the smallest multiplier to be 1.0 and pick Nm (usually 20) equally spaced points from [mmin, mmax]. If mmax = 1, we set mmin = 0.5 to allow shrinkage of the coefficients. We scale the coefficients and datasets with each multiplier and round the coefficients to integers using the sequential rounding technique in Algorithm 6. For each continuous solution (each “ray” of the “star”), we report the integer solution and multiplier with the smallest logistic loss. This process yields our collection of risk scores. Note here that a standard line search along the multiplier does not work, because the rounding error is highly non-convex.
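Here is how the multiplier grid and the per-ray selection could look in code (a sketch under our own simplifications: plain nearest-integer rounding stands in for the sequential rounding of Algorithm 6, which is described next).

```python
import numpy as np

def logistic_loss(w, w0, X, y, m=1.0):
    return np.logaddexp(0.0, -y * (X @ w + w0) / m).sum()

def star_ray_search(w, w0, X, y, C=5.0, Nm=20):
    """One 'ray': scan Nm multipliers of a continuous solution, round at each, keep the best."""
    m_max = C / np.max(np.abs(w))                   # pushes one coefficient to the box boundary
    m_min = 0.5 if m_max == 1.0 else 1.0            # allow shrinkage only when m_max is already 1
    best = None
    for m in np.linspace(m_min, m_max, Nm):
        w_int = np.clip(np.round(m * w), -C, C)     # the paper uses sequential rounding here instead
        w0_int = np.round(m * w0)
        l = logistic_loss(w_int, w0_int, X, y, m)
        if best is None or l < best[0]:
            best = (l, w_int.astype(int), int(w0_int), m)
    return best                                     # (loss, integer coefficients, intercept, multiplier)
```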
We briefly discuss how the sequential rounding technique works. Details of this method can be found in Appendix A. We initialize w+ = w. Then we round the fractional part of w+ one coordinate at a time. At each step, some of the w+j’s are integer-valued (so w+j − wj is nonzero), and we pick the coordinate and rounding operation (either floor or ceil) based on which can minimize the following objective function, where we will round to an integer at coordinate r*:

(r*, v*) ∈ argmin_{r, v}  Σ_{i=1}^n  l_i² ( x_{ir}(v − w_r) + Σ_{j ≠ r} x_{ij}(w+_j − w_j) )²,    (10)

subject to r ∈ {j | w+_j ∉ ℤ} and v ∈ {⌊w+_r⌋, ⌈w+_r⌉},
Algorithm 3 StarRaySearch(D, (w, w0), C, Nm) → (w+, w+0), m
Input: dataset D, a sparse continuous solution (w, w0), coefficient constraint C, and number of multipliers to try Nm.
Output: a sparse integer solution (w+, w+0) with ‖w+‖∞ ≤ C and multiplier m.
1: Define mmax ← C/max|w| as discussed in Section 3.3. If mmax = 1, set mmin ← 0.5; if mmax > 1, set mmin ← 1.
2: Pick Nm equally spaced multiplier values ml ∈ [mmin, mmax] for l ∈ [1, ..., Nm] and call this set M = {ml}l.
3: Use each multiplier to scale the good continuous solution (w, w0), to obtain (ml·w, ml·w0), which is a good continuous solution to the rescaled dataset (1/ml)·D.
4: Send each rescaled solution (ml·w, ml·w0) and its rescaled dataset (1/ml)·D to Algorithm 6 AuxiliaryLossRounding((1/ml)·D, ml·w, ml·w0) for rounding. It returns (w+l, w+l0, ml), where (w+l, w+l0) is close to (ml·w, ml·w0), and where (w+l, w+l0) on (1/ml)·D has a small logistic loss.
5: Evaluate the logistic loss to pick the best multiplier l* ∈ argmin_l L(w+l, w+l0, (1/ml)·D).
6: Return (w+l*, w+l*0) and ml*.
where li is the Lipschitz constant restricted to the rounding interval and can be computed as li = 1/(1 + exp(yi·xi^T·γi)) with γij = ⌊wj⌋ if yi·xij > 0 and γij = ⌈wj⌉ otherwise. (The Lipschitz constant here is much smaller than the one in Section 3.1 due to the interval restriction.) After we select r* and find value v*, we update w+ by setting w+_{r*} = v*. We repeat this process until w+ is on the integer lattice: w+ ∈ ℤ^p. The objective function in Equation 10 can be understood as an auxiliary upper bound of the logistic loss. Our algorithm provides an upper bound on the difference between the logistic losses of the continuous solution and the final rounded solution before we start the rounding algorithm (Theorem 3.1 below). Additionally, during the sequential rounding procedure, we do not need to perform expensive operations such as logarithms or exponentials as required by the logistic loss function; the bound and auxiliary function require only sums of squares, not logarithms or exponentials. Its derivation and proof are in Appendix C.

Theorem 3.1. Let w be the real-valued coefficients for the logistic regression model with objective function L(w) = Σ_{i=1}^n log(1 + exp(−yi·xi^T·w)) (the intercept is incorporated). Let w+ be the integer-valued coefficients returned by the AuxiliaryLossRounding method. Furthermore, let uj = wj − ⌊wj⌋. Let li = 1/(1 + exp(yi·xi^T·γi)) with γij = ⌊wj⌋ if yi·xij > 0 and γij = ⌈wj⌉ otherwise. Then, we have an upper bound on the difference between the loss L(w) and the loss L(w+):

L(w+) − L(w) ≤ sqrt( n · Σ_{i=1}^n Σ_{j=1}^p (li·xij)² · uj·(1 − uj) ).    (11)
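The sequential rounding objective of Equation (10) and the bound of Equation (11) are both simple to compute; the sketch below is our reading of Algorithm 6 and Theorem 3.1, not the released implementation. Following the theorem, the intercept is treated as an ordinary coordinate by appending a constant-1 column to X, and the per-sample weights l_i are computed once from the worst-case corner γ_i of the rounding hypercube. The final print compares the realized loss increase to the theoretical bound.

```python
import numpy as np

def lipschitz_weights(w, X, y):
    """l_i = 1 / (1 + exp(y_i x_i^T gamma_i)), gamma_ij = floor(w_j) if y_i x_ij > 0 else ceil(w_j)."""
    gamma = np.where(y[:, None] * X > 0, np.floor(w), np.ceil(w))
    return 1.0 / (1.0 + np.exp(y * (X * gamma).sum(axis=1)))

def auxiliary_loss_rounding(w, X, y):
    """Greedy sequential rounding: at each step pick (coordinate, floor/ceil) minimizing Eq. (10)."""
    l2 = lipschitz_weights(w, X, y) ** 2
    w_plus = w.astype(float).copy()
    todo = [j for j in range(len(w)) if w_plus[j] != np.round(w_plus[j])]
    resid = np.zeros(X.shape[0])                       # running sum_j x_ij (w+_j - w_j)
    while todo:
        best = None
        for r in todo:
            for v in (np.floor(w_plus[r]), np.ceil(w_plus[r])):
                obj = np.sum(l2 * (resid + X[:, r] * (v - w[r])) ** 2)
                if best is None or obj < best[0]:
                    best = (obj, r, v)
        _, r, v = best
        resid += X[:, r] * (v - w[r])
        w_plus[r] = v
        todo.remove(r)
    return w_plus

def theorem_bound(w, X, y):
    """Right-hand side of Eq. (11)."""
    u = w - np.floor(w)
    l = lipschitz_weights(w, X, y)
    return np.sqrt(X.shape[0] * np.sum((l[:, None] * X) ** 2 * (u * (1.0 - u))))

rng = np.random.default_rng(2)
X = np.hstack([rng.integers(0, 2, (100, 6)).astype(float), np.ones((100, 1))])  # last column = intercept
y = rng.choice([-1.0, 1.0], 100)
w = rng.uniform(-3.0, 3.0, 7)                          # a continuous solution already scaled by its multiplier
w_int = auxiliary_loss_rounding(w, X, y)
loss = lambda v: np.logaddexp(0.0, -y * (X @ v)).sum()
print(loss(w_int) - loss(w), "should be at most", theorem_bound(w, X, y))
```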
Note. Our method has a higher prediction capacity than RiskSLIM: its search space is much larger. Compared to RiskSLIM, our use of the multiplier permits a number of solutions that grows exponentially in k as we increase the multiplier. To see this, consider that for each support of k features, since the logistic loss is convex, it contains a hypersphere in coefficient space. The volume of that hypersphere is (as usual) V = π^(k/2) / Γ(k/2 + 1) · r^k, where r is the radius of the hypersphere. If we increase the multiplier to 2, the grid becomes finer by a factor of 2, which is equivalent to increasing the radius by a factor of 2. Thus, the volume increases by a factor of 2^k. In general, for maximum multiplier m, the search space is increased by a factor of m^k over RiskSLIM.
4 Experiments
We experimentally focus on two questions: (1) How good is FasterRisk’s solution quality compared to baselines? (§4.1) (2) How fast is FasterRisk compared with the state-of-the-art? (§4.2) In the appendix, we address three more questions: (3) How much do the sparse beam search, diverse pools, and multipliers contribute to our solution quality? (E.4) (4) How well-calibrated are the models produced by FasterRisk? (E.9) (5) How sensitive is FasterRisk to each of the hyperparameters in the algorithm? (E.10)
We compare with RiskSLIM (the current state-of-the-art), as well as algorithms Pooled-PLR-RD, Pooled-PLR-RSRD, Pooled-PLR-RDSP, Pooled-PLR-Rand and Pooled-PLR-RDP. These algorithms were all previously shown to be inferior to RiskSLIM [39]. These methods first find a pool of sparse continuous solutions using different regularizations of ElasticNet (hence the name “Pooled Penalized Logistic Regression” – Pooled-PLR) and then round the coefficients with different techniques. Details are in Appendix D.3. The best solution is chosen from this pool of integer solutions that obeys the sparsity and box constraints and has the smallest logistic loss. We also compare with the baseline AutoScore [44]. However, on some datasets, the results produced by AutoScore are so poor that they distort the AUC scale, so we show those results only in Appendix E.11. As there is no publicly
available code for any of [10, 16, 32, 33], they do not appear in the experiments. For each dataset, we perform 5-fold cross validation and report training and test AUC. Appendix D presents details of the datasets, experimental setup, evaluation metrics, loss values, and computing platform/environment. More experimental results appear in Appendix E.
4.1 Solution Quality
We first evaluate FasterRisk’s solution quality. Figure 3 shows the training and test AUC on six datasets (results for training loss appear in Appendix E). FasterRisk (the red line) outperforms all baselines, consistently obtaining the highest AUC scores on both the training and test sets. Notably, our method obtains better results than RiskSLIM, which uses a mathematical solver and is the current state-of-the-art method for scoring systems. This superior performance is due to the use of multipliers, which increases the complexity of the hypothesis space. Figure 4 provides a more detailed comparison between FasterRisk and RiskSLIM. One may wonder whether running RiskSLIM longer would make this MIP-based method comparable to our FasterRisk, since the current running time limit for RiskSLIM is only 15 minutes. We extended RiskSLIM’s running time limit up to 1 hour and show the comparison in Appendix E.8; FasterRisk still outperforms RiskSLIM by a large margin.
FasterRisk performs significantly better than the other baselines for two reasons. First, the continuous sparse solutions produced by ElasticNet are low quality for very sparse models. Second, it is difficult to obtain an exact model size by controlling `1 regularization. For example, Pooled-PLR-RD and Pooled-PLR-RDSP do not have results for model size 10 on the mammo datasets, because no such model size exists in the pooled solutions after rounding.
4.2 Runtime Comparison
The major drawback of RiskSLIM is its limited scalability. Runtime is important to allow interactive model development and to handle larger datasets. Figure 5 shows that FasterRisk (red bars) is significantly faster than RiskSLIM (blue bars) in general. We ran these experiments with a 900 second (15 minute) timeout. RiskSLIM finishes running on the small dataset mammo, but it times out on the larger datasets, timing out on models larger than 4 features for adult, larger than 3 features for bank, larger than 7 features for mushroom, larger than 2 features for COMPAS, and larger than 1
feature for FICO. RiskSLIM times out early on COMPAS and FICO datasets, suggesting that the MIP-based method struggles with high-dimensional and highly-correlated features. Thus, we see that FasterRisk tends to be both faster and more accurate than RiskSLIM.
4.3 Example Scoring Systems
The main benefit of risk scores is their interpretability. We place a few example risk scores in Table 1 to allow the reader to judge for themselves. More risk scores examples can be found in Appendix F.1. Additionally, we provide a pool of solutions for the top 12 models on the bank, mammo, and Netherlands datasets in Appendix F.2. Prediction performance is generally not the only criteria users consider when deciding to deploy a model. Provided with a pool of solutions that perform equally well, a user can choose the one that best incorporates domain knowledge [45]. After the pool of models is generated, interacting with the pool is essentially computationally instantaneous. Finally, we can reduce some models to relatively prime coefficients or transform some features for better interpretability. Examples of such transformations are given in Appendix G.1.
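For readers who want to see how a risk conversion table follows from one of these models, here is a small script for a hypothetical score card (the features, points, intercept, and multiplier below are made up for illustration and are not one of the models in Table 1):

```python
import numpy as np

points = {"age >= 60": 2, "prior event": 3, "biomarker high": 1, "on medication": -2}
w0, m = -4, 1.8                                      # integer intercept and the model's multiplier

lo = sum(v for v in points.values() if v < 0)
hi = sum(v for v in points.values() if v > 0)
for total in range(lo, hi + 1):
    risk = 1.0 / (1.0 + np.exp(-(total + w0) / m))   # P(Y=1 | score) = 1 / (1 + e^{-(score + w0)/m})
    print(f"total points {total:+d}  ->  predicted risk {100 * risk:5.1f}%")
```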
5 Conclusion
FasterRisk produces a collection of high-quality risk scores within minutes. Its performance owes to three key ideas: a new algorithm for sparsity- and box-constrained continuous models, using a pool of diverse solutions, and the use of the star ray search, which leverages multipliers and a new sequential rounding technique. FasterRisk is suitable for high-stakes decisions, and permits domain experts a collection of interpretable models to choose from.
Code Availability
Implementations of FasterRisk discussed in this paper are available at https://github.com/jiachangliu/FasterRisk.
Acknowledgements
The authors acknowledge funding from the National Science Foundation under grants IIS-2147061 and IIS-2130250, National Institute on Drug Abuse under grant R01 DA054994, Department of Energy under grants DE-SC0021358 and DE-SC0023194, and National Research Traineeship Program under NSF grants DGE-2022040 and CCF-1934964. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). Nous remercions le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG) de son soutien.
1. What is the focus and contribution of the paper regarding generating high-quality risk scores?
2. What are the strengths of the proposed approach, particularly in terms of its efficiency and reliability?
3. What are the weaknesses of the paper, especially regarding its examples and explanations?
4. Do you have any concerns about the significance of the proposed method's performance improvement?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This study proposed a novel method to accurately and efficiently generate a collection of high-quality risk scores, built on the integration of a beam-search-based algorithm for logistic regression, the generation of diverse high-quality solutions with different support sets, and the star ray search for integer solutions. It achieved SOTA performance at a lower time cost on several datasets.
Strengths And Weaknesses
Strengths: 1. The proposed three-step framework includes a beam-search-based algorithm for logistic regression with box constraints and L0 regularization, a search algorithm to collect the sparse diverse pool with different support sets, a star ray search technique using multipliers, and a theorem guaranteeing the quality of the star ray search results. The whole methodology was solid and efficient. 2. The introductions of the research context and related work were well-organized and clear. 3. The proposed method achieved SOTA performance with significantly less time (as shown in Figure 4), showing its reliability and efficiency. 4. Their theoretical discussion and supplementary material were abundant. Moreover, they also conducted extensive experiments on performance, including performance comparison, efficiency, and ablation experiments.
Weaknesses: 1. The examples of scoring systems in the Introduction seem out of date; there are many newer and widely recognized clinical scoring systems. The paper should also briefly introduce the traditional framework for building scoring systems and how the proposed method differs from it in methodology and performance. 2. As shown in Figure 3, the performance improvement of the proposed method seems not so significant; the biggest improvement, on the bank dataset, was ~0.02. Additionally, using tables to directly show the key improvements may be more intuitive and detailed. 3. Although there are extensive experiments and discussion on performance, in my opinion the most significant improvement is efficiency, and there are few discussions or ablation experiments on efficiency. 4. The model AUC can assess discriminative ability, i.e., whether a positive case receives a higher predicted probability than a negative case, but it may not show the consistency between the predicted score and the actual risk. However, this consistency may be more crucial for a clinical scoring system (as opposed to a classification task). Therefore, related studies are encouraged to report calibration curves to show this agreement; it would be better to demonstrate the feasibility of the generated scoring systems in this way. The difference between the traditional method and the proposed method could also be discussed in the paper.
Questions
None
Limitations
None |
NIPS | Title
FasterRisk: Fast and Accurate Interpretable Risk Scores
Abstract
Over the last century, risk scores have been the most popular form of predictive model used in healthcare and criminal justice. Risk scores are sparse linear models with integer coefficients; often these models can be memorized or placed on an index card. Typically, risk scores have been created either without data or by rounding logistic regression coefficients, but these methods do not reliably produce high-quality risk scores. Recent work used mathematical programming, which is computationally slow. We introduce an approach for efficiently producing a collection of high-quality risk scores learned from data. Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm. Each of these continuous solutions is transformed into a separate risk score through a “star ray” search, where a range of multipliers are considered before rounding the coefficients sequentially to maintain low logistic loss. Our algorithm returns all of these high-quality risk scores for the user to consider. This method completes within minutes and can be valuable in a broad variety of applications.
1 Introduction
Risk scores are sparse linear models with integer coefficients that predict risks. They are arguably the most popular form of predictive model for high stakes decisions through the last century and are the standard form of model used in criminal justice [4, 22] and medicine [19, 27, 34, 31, 41].
Their history dates back to at least the criminal justice work of Burgess [8], where, based on their criminal history and demographics, individuals were assigned integer point scores between 0 and 21 that determined the probability of their “making good or of failing upon parole.” Other famous risk scores are arguably the most widelyused predictive models in healthcare. These include the APGAR score [3], developed in 1952 and given to newborns, and the CHADS2 score [18], which estimates stroke risk for atrial fibrillation patients. Figures 1 and 2 show example risk scores, which es-
⇤These authors contributed equally.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
timate risk of a breast lesion being malignant.
Risk scores have the benefit of being easily memorized; usually their names reveal the full model – for instance, the factors in CHADS2 are past Chronic heart failure, Hypertension, Age 75 years, Diabetes, and past Stroke (where past stroke receives 2 points and the others each receive 1 point). For risk scores, counterfactuals are often trivial to compute, even without a calculator. Also, checking that the data and calculations are correct is easier with risk scores than with other approaches. In short, risk scores have been created by humans for a century to support a huge spectrum of
applications [2, 23, 30, 43, 44, 47], because humans find them easy to understand.
Traditionally, risk scores have been created in two main ways: (1) without data, with expert knowledge only (and validated only afterwards on data), and (2) using a semi-manual process involving manual feature selection and rounding of logistic regression coefficients. That is, these approaches rely heavily on domain expertise and rely little on data. Unfortunately, the alternative of building a model directly from data leads to computationally hard problems: optimizing risk scores over a global objective on data is NP-hard, because in order to produce integer-valued scores, the feasible region must be the integer lattice. There have been only a few approaches to design risk scores automatically [5, 6, 9, 10, 16, 32, 33, 38, 39, 40], but each of these has a flaw that limits its use in practice: the optimization-based approaches use mathematical programming solvers (which require a license) that are slow and scale poorly, and the other methods are randomized greedy algorithms, producing fast but much lower-quality solutions. We need an approach that exhibits the best of both worlds: speed fast enough to operate in a few minutes on a laptop and optimization/search capability as powerful as that of the mathematical programming tools. Our method, FasterRisk, lies at this intersection. It is fast enough to enable interactive model design and can rapidly produce a large pool of models from which users can choose rather than producing only a single model.
One may wonder why simple rounding of `1-regularized logistic regression coefficients does not yield sufficiently good risk scores. Past works [37, 39] explain this as follows: the sheer amount of `1 regularization needed to get a very sparse solution leads to large biases and worse loss values, and rounding goes against the performance gradient. For example, consider the following coefficients from `1 regularization: [1.45, .87, .83, .47, .23, .15, ... ]. This model is worse than its unregularized counterpart due to the bias induced by the large `1 term. Its rounded solution is [1,1,1,0,0,0,..], which leads to even worse loss. Instead, one could multiply all the coefficients by a constant and then round, but which constant is best? There are an infinite number of choices. Even if some value of the multiplier leads to minimal loss due to rounding, the bias from the `1 term still limits the quality of the solution.
The algorithm presented here does not have these disadvantages. The steps are: (1) Fast subset search with `0 optimization (avoiding the bias from `1). This requires the solution of an NP-hard problem, but our fast subset selection algorithm is able to solve this quickly. We proceed from this accurate sparse continuous solution, preserving both sparseness and accuracy in the next steps. (2) Find a pool of diverse continuous sparse solutions that are almost as good as the solution found in (1) but with different support sets. (3) A “star ray” search, where we search for feasible integer-valued solutions along multipliers of each item in the pool from (2). By using multipliers, the search space resembles the rays of a star, because it extends each coefficient in the pool outward from the origin to search for solutions. To find integer solutions, we perform a local search (a form of sequential rounding). This method yields high performance solutions: we provide a theoretical upper bound on the loss difference between the continuous sparse solution and the rounded integer sparse solution.
Through extensive experiments, we show that our proposed method is computationally fast and produces high-quality integer solutions. This work thus provides valuable and novel tools to create risk scores for professionals in many different fields, such as healthcare, finance, and criminal justice.
Contributions: Our contributions include the three-step framework for producing risk scores, a beam-search-based algorithm for logistic regression with bounded coefficients (for Step 1), the search algorithm to find pools of diverse high-quality continuous solutions (for Step 2), the star ray search technique using multipliers (Step 3), and a theorem guaranteeing the quality of the star ray search.
2 Related Work
Optimization-based approaches: Risk scores, which model P (y = 1|x), are different from threshold classifiers, which predict either y = 1 or y = 1 given x. Most work in the area of optimization of integer-valued sparse linear models focuses on classifiers, not risk scores [5, 6, 9, 32, 33, 37, 40, 46]. This difference is important, because a classifier generally cannot be calibrated well for use in risk scoring: only its single decision point is optimized. Despite this, several works use the hinge loss to calibrate predictions [6, 9, 32]. All of these optimization-based algorithms use mathematical programming solvers (i.e., integer programming solvers), which tend to be slow and cannot be used on larger problems. However, they can handle both feature selection and integer constraints.
To directly optimize risk scores, typically the logistic loss is used. The RiskSLIM algorithm [39] optimizes the logistic loss regularized with `0 regularization, subject to integer constraints on the coefficients. RiskSLIM uses callbacks to a MIP solver, alternating between solving linear programs and using branch-and-cut to divide and reduce the search space. The branch-and-cut procedure needs to keep track of unsolved nodes, whose number increases exponentially with the size of the feature space. Thus, RiskSLIM’s major challenge is scalability.
Local search-based approaches: As discussed earlier, a natural way to produce a scoring system or risk score is by selecting features manually and rounding logistic regression coefficients or hinge-loss solutions to integers [10, 11, 39]. While rounding is fast, rounding errors can cause the solution quality to be much worse than that of the optimization-based approaches. Several works have proposed improvements over traditional rounding. In Randomized Rounding [10], each coefficient is rounded up or down randomly, based on its continuous coefficient value. However, randomized rounding does not seem to perform well in practice. Chevaleyre [10] also proposed Greedy Rounding, where coefficients are rounded sequentially. While this technique aimed to provide theoretical guarantees for the hinge loss, we identified a serious flaw in the argument, rendering the bounds incorrect (see Appendix B). The RiskSLIM paper [39] proposed SequentialRounding, which, at each iteration, chooses a coefficient to round up or down, making the best choice according to the regularized logistic loss. This gives better solutions than other types of rounding, because the coefficients are considered together through their performance on the loss function, not independently.
A drawback of SequentialRounding is that it considers rounding up or down only to the nearest integer from the continuous solution. By considering multipliers, we consider a much larger space of possible solutions. The idea of multipliers (i.e., “scale and round”) is used for medical scoring systems [11], though, as far as we know, it has been used only with traditional rounding rather than SequentialRounding, which could easily lead to poor performance, and we have seen no previous work that studies how to perform scale-and-round in a systematic, computationally efficient way. While the general idea of scale-and-round seems simple, it is not: there are an infinite number of possible multipliers, and, for each one, a number of possible nearby integer coefficient vectors that is the size of a hypercube, expanding exponentially in the search space.
Sampling Methods: The Bayesian method of Ertekin et al. [16] samples scoring systems, favoring those that are simpler and more accurate, according to a prior. “Pooling” [39] creates multiple models through sampling along the regularization path of ElasticNet. As discussed, when regularization is tuned high enough to induce sparse solutions, it results in substantial bias and low-quality solutions (see [37, 39] for numerous experiments on this point). Note that there is a literature on finding diverse solutions to mixed-integer optimization problems [e.g., 1], but it focuses only on linear objective functions.
Algorithm 1 FasterRisk(D,k,C,B,✏,T ,Nm)! {(w+t, w+t0 ,mt)}t Input: dataset D (consisting of feature matrix X 2 Rn⇥p and labels y 2 Rn), sparsity constraint k, coefficient constraint C = 5, beam search size B = 10, tolerance level ✏ = 0.3, number of attempts T = 50, number of multipliers to try Nm = 20. Output: a pool P of scoring systems {(wt, wt0),mt} where t is the index enumerating all found scoring systems with kwtk0 k and kwtk1 C and mt is the corresponding multiplier.
1: Call Algorithm 2 SparseBeamLR(D, k, C,B) to find a high-quality solution (w⇤, w⇤0) to the sparse logistic regression problem with continuous coefficients satisfying a box constraint, i.e., solve Problem (3). (Algorithm SparseBeamLR will call Algorithm ExpandSuppBy1 as a subroutine, which grows the solution by beam search.) 2: Call Algorithm 5 CollectSparseDiversePool((w⇤, w⇤0), ✏, T ), which solves Problem (4). Place its output {(wt, wt0)}t in pool P = {w⇤, w⇤0}. P P [ {(wt, wt0)}t. 3: Send each member t in the pool P , which is (wt, wt0), to Algorithm 3 StarRaySearch (D, (wt, wt0), C,Nm) to perform a line search among possible multiplier values and obtain an integer solution (w+t, w+t0 ) with multiplier mt. Algorithm 3 calls Algorithm 6 AuxiliaryLossRounding which conducts the rounding step. 4: Return the collection of risk scores {(w+t, w+t0 ,mt)}t. If desired, return only the best model according to the logistic loss.
3 Methodology
Define dataset D = {1,xi, yi}ni=1 (1 is a static feature corresponding to the intercept) and scaled dataset as 1m ⇥D = { 1 m , 1 mxi, yi} n i=1, for a real-valued m. Our goal is to produce high-quality risk scores within a few minutes on a small personal computer. We start with an optimization problem similar to RiskSLIM’s [39], which minimizes the logistic loss subject to sparsity constraints and integer coefficients:
min w,w0
L(w, w0,D), where L(w, w0,D) = Pn i=1 log(1 + exp( yi(xTi w + w0))) (1)
such that kwk0 k and w 2 Zp, 8j 2 [1, .., p] wj 2 [ 5, 5], w0 2 Z. In practice, the range of these box constraints [ 5, 5] is user-defined and can be different for each coefficient. (We use 5 for ease of exposition.) The sparsity constraint kwk0 k or integer constraints w 2 Zp make the problem NP-hard, and this is a difficult mixed-integer nonlinear program. Transforming the original features to all possible dummy variables, which is a standard type of preprocessing [e.g., 24], changes the model into a (flexible) generalized additive model; such models can be as accurate as the best machine learning models [39, 42]. Thus, we generally process variables in x to be binary.
To make the solution space substantially larger than [ 5, 4, ..., 4, 5]p, we use multipliers. The problem becomes:
min w,w0,m L
✓ w, w0, 1 m D ◆ , where L ✓ w, w0, 1 m D ◆ = nX
i=1
log ✓ 1 + exp ✓ yi
xTi w + w0 m
◆◆ (2)
such that kwk0 k,w 2 Zp, 8j 2 [1, .., p], wj 2 [ 5, 5], w0 2 Z, m > 0. Note that the use of multipliers does not weaken the interpretability of the risk score: the user still sees integer risk scores composed of values wj 2 { 5, 4, .., 4, 5}, w0 2 Z. Only the risk conversion table is calculated differently, as P (Y = 1|x) = 1/(1 + e f(x)) where f(x) = 1m (w Tx+ w0).
Our method proceeds in three steps, as outlined in Algorithm 1. In the first step, it approximately solves the following sparse logistic regression problem with a box constraint (but not integer constraints), detailed in Section 3.1 and Algorithm 2. (w⇤, w⇤0) 2 argmin
w,w0 L(w, w0,D), kwk0 k,w 2 Rp, 8j 2 [1, ..., p], wj 2 [ 5, 5], w0 2 R.
(3) The algorithm gives an accurate and sparse real-valued solution (w⇤, w⇤0).
The second step produces many near-optimal sparse logistic regression solutions, again without integer constraints, detailed in Section 3.2 and Algorithm 5. Algorithm 5 uses (w⇤, w⇤0) from the
first step to find a set {(wt, wt0)}t such that for all t and a given threshold ✏w:
(wt, wt0) obeys L(w t, wt0,D) L(w⇤, w⇤0 ,D)⇥ (1 + ✏w⇤) (4)
kwtk0 k, wt 2 Rp, 8j 2 [1, ..., p], wtj 2 [ 5, 5], wt0 2 R.
After these steps, we have a pool of almost-optimal sparse logistic regression models. In the third step, for each coefficient vector in the pool, we compute a risk score. It is a feasible integer solution (w+t, w+t0 ) to the following, which includes a positive multiplier mt > 0:
L ✓ w+t, w+t0 , 1 mt D ◆ L(wt, wt0,D) + ✏t, (5)
w+t 2 Zp, 8j 2 [1, ..., p], w+tj 2 [ 5, 5], w +t 0 2 Z,
where we derive a tight theoretical upper bound on ✏t. A detailed solution to (5) is shown in Algorithm 6 in Appendix A. We solve the optimization problem for a large range of multipliers in Algorithm 3 for each coefficient vector in the pool, choosing the best multiplier for each coefficient vector. This third step yields a large collection of risk scores, all of which are approximately as accurate as the best sparse logistic regression model that can be obtained. All steps in this process are fast and scalable.
Algorithm 2 SparseBeamLR(D,k,C,B)! (w, w0) Input: dataset D, sparsity constraint k, coefficient constraint C, and beam search size B. Output: a sparse continuous coefficient vector (w, w0) with kwk0 k, kwk1 C.
1: Define N+ and N as numbers of positive and negative labels, respectively. 2: w0 log( N+/N ),w 0 .Initialize the intercept and coefficients. 3: F ; .Initialize the collection of found supports as an empty set 4: (W,F) ExpandSuppBy1(D, (w, w0),F , B). .Returns B models of support 1 5: for t = 2, ..., k do .Beam search to expand the support 6: Wtmp ; 7: for (w0, w00) 2W do .Each of these has support t 1 8: (W 0,F) ExpandSuppBy1(D, (w0, w00),F , B). .Returns B models with supp. t. 9: Wtmp Wtmp [W 0
10: end for 11: Reset W to be the B solutions in Wtmp with the smallest logistic loss values. 12: end for 13: Pick (w, w0) from W with the smallest logistic loss. 14: Return (w, w0).
3.1 High-quality Sparse Continuous Solution
There are many different approaches for sparse logistic regression, including `1 regularization [35], ElasticNet [48], `0 regularization [13, 24], and orthogonal matching pursuit (OMP) [14, 25], but none of these approaches seem to be able to handle both the box constraints and the sparsity constraint in Problem 3, so we developed a new approach. This approach, in Algorithm 2, SparseBeamLR, uses beam search for best subset selection: each iteration contains several coordinate descent steps to determine whether a new variable should be added to the support, and it clips coefficients to the box [ 5, 5] as it proceeds. Hence the algorithm is able to determine, before committing to the new variable, whether it is likely to decrease the loss while obeying the box constraints. This beam search algorithm for solving (3) implicitly uses the assumption that one of the best models of size k implicitly contains variables of one of the best models of size k 1. This type of assumption has been studied in the sparse learning literature [14] (Theorem 5). However, we are not aware of any other work that applies box constraints or beam search for sparse logistic regression. In Appendix E, we show that our method produces better solutions than the OMP method presented in [14].
Algorithm 2 calls the ExpandSuppBy1 Algorithm, which has two major steps. The detailed algorithm can be found in Appendix A. For the first step, given a solution w, we perform optimization on each single coordinate j outside of the current support supp(w):
d⇤j 2 argmin d2[ 5,5] L(w + dej , w0,D) for 8j where wj = 0. (6)
Vector ej is 1 for the jth coordinate and 0 otherwise. We find d⇤j for each j through an iterative thresholding operation, which is done on all coordinates in parallel, iterating several (⇠ 10) times:
for iteration i: dj Threshold(j, dj ,w, w0,D) := min(max(cdj , 5), 5), (7) where cdj = dj 1ljrjL(w + djej , w0,D), and lj is a Lipschitz constant on coordinate j [24]. Importantly, we can perform Equation 7 on all j where wj = 0 in parallel using matrix form.
For the second step, after the parallel single coordinate optimization is done, we pick the top B indices (j’s) with the smallest logistic losses L(w + d⇤jej) and fine tune on the new support:
wjnew, w0 j new 2 argmin a2[ 5,5]p,b L(a, b,D) with supp(a) = supp(w) [ {j}. (8)
This can be done again using a variant of Equation 7 iteratively on all the coordinates in the new support. We get B pairs of (wjnew, w0jnew) through this ExpandSuppBy1 procedure, and the collection of these pairs form the set W 0 in Line 8 of Algorithm 2. At the end, Algorithm 2 (SparseBeamLR) returns the best model with the smallest logistic loss found by the beam search procedure. This model satisfies both the sparsity and box constraints.
3.2 Collect Sparse Diverse Pool (Rashomon Set)
We now collect the sparse diverse pool. In Section 3.1, our goal was to find a sparse model (w*, w*_0) with the smallest logistic loss. For high-dimensional features or in the presence of highly correlated features, there could exist many sparse models with almost equally good performance [7]. This set of models is also known as the Rashomon set. Let us find those and turn them into risk scores. We first predefine a tolerance gap level ε (hyperparameter, usually set to 0.3). Then, we delete a feature with index j− in the support supp(w*) and add a new feature with index j+. We select each new index j+ to be one whose logistic loss is within the tolerance gap:
Find all j+ s.t.  min_{a ∈ [−5,5]} L(w* − w*_{j−}·e_{j−} + a·e_{j+}, w*_0, D) ≤ L(w*, w*_0, D)(1 + ε).   (9)
We fine-tune the coefficients on each of the new supports and then save the new solution in our pool. Details can be found in Algorithm 5. Swapping one feature at a time is computationally efficient, and our experiments show it produces sufficiently diverse pools over many datasets. We call this method the CollectSparseDiversePool Algorithm.
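A rough sketch of the swap-one-feature search is given below; it is a simplification of Algorithm 5 rather than the released code, and it collapses the single-coordinate screening of Equation 9 and the subsequent fine-tuning into one hypothetical helper `fit_on_support` (which re-optimizes box-constrained coefficients on a fixed support, e.g. by iterating Equation 7).

```python
import numpy as np

def collect_sparse_diverse_pool(X, y, w_star, w0_star, loss_fn, fit_on_support, eps=0.3):
    """For each feature j_minus in the support, admit every replacement j_plus whose
    refit loss stays within a (1 + eps) factor of the best loss (Equation 9)."""
    best_loss = loss_fn(X, y, w_star, w0_star)
    pool = [(w_star, w0_star)]
    support = list(np.flatnonzero(w_star))
    for j_minus in support:
        rest = [j for j in support if j != j_minus]
        for j_plus in range(X.shape[1]):
            if j_plus in support:
                continue
            w_try, w0_try = fit_on_support(X, y, rest + [j_plus])
            if loss_fn(X, y, w_try, w0_try) <= best_loss * (1.0 + eps):
                pool.append((w_try, w0_try))
    return pool
```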
3.3 “Star Ray” Search for Integer Solutions
The last challenge is how to get an integer solution from a continuous solution. To achieve this, we use a “star ray” search that searches along each “ray” of the star, extending each continuous solution outward from the origin using many values of a multiplier, as shown in Algorithm 3. The star ray search provides much more flexibility in finding a good integer solution than simple rounding. The largest multiplier m_max is set to 5/max_j(|w*_j|), which will take one of the coefficients to the boundary of the box constraint at 5. We set the smallest multiplier to be 1.0 and pick N_m (usually 20) equally spaced points from [m_min, m_max]. If m_max = 1, we set m_min = 0.5 to allow shrinkage of the coefficients. We scale the coefficients and datasets with each multiplier and round the coefficients to integers using the sequential rounding technique in Algorithm 6. For each continuous solution (each “ray” of the “star”), we report the integer solution and multiplier with the smallest logistic loss. This process yields our collection of risk scores. Note here that a standard line search along the multiplier does not work, because the rounding error is highly non-convex.
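The multiplier grid and the per-ray search can be sketched as follows (our sketch, not the released code); `round_fn` is a placeholder for the sequential rounding of Algorithm 6 and `loss_fn` for the logistic loss.

```python
import numpy as np

def multiplier_grid(w_cont, box=5.0, n_mult=20):
    """Multipliers for one continuous solution (one 'ray' of the star)."""
    m_max = box / np.max(np.abs(w_cont))
    m_min = 0.5 if np.isclose(m_max, 1.0) else 1.0
    return np.linspace(m_min, m_max, n_mult)

def star_ray_search(X, y, w_cont, w0_cont, round_fn, loss_fn, box=5.0, n_mult=20):
    """For each multiplier m, round (m*w, m*w0) on the rescaled data X/m and keep the best."""
    best = None
    for m in multiplier_grid(w_cont, box, n_mult):
        w_int, w0_int = round_fn(X / m, y, m * w_cont, m * w0_cont)
        loss = loss_fn(X / m, y, w_int, w0_int)
        if best is None or loss < best[0]:
            best = (loss, w_int, w0_int, m)
    return best   # (loss, integer w, integer w0, multiplier)
```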
We briefly discuss how the sequential rounding technique works. Details of this method can be found in Appendix A. We initialize w⁺ = w. Then we round the fractional part of w⁺ one coordinate at a time. At each step, some of the w⁺_j’s have already been rounded to integers (so w⁺_j − w_j is nonzero), and we pick the coordinate and rounding operation (either floor or ceil) based on which can minimize the following objective function, where we will round to an integer at coordinate r*:
(r*, v*) ∈ argmin_{r,v} Σ_{i=1}^{n} l_i² ( x_{ir}(v − w_r) + Σ_{j≠r} x_{ij}(w⁺_j − w_j) )²,   (10)
subject to r ∈ {j | w⁺_j ∉ ℤ} and v ∈ {⌊w⁺_r⌋, ⌈w⁺_r⌉},
Algorithm 3 StarRaySearch(D, (w, w0), C, N_m) → (w⁺, w⁺_0), m
Input: dataset D, a sparse continuous solution (w, w0), coefficient constraint C, and number of multipliers to try N_m.
Output: a sparse integer solution (w⁺, w⁺_0) with ‖w⁺‖∞ ≤ C and multiplier m.
1: Define m_max ← C/max_j |w_j| as discussed in Section 3.3. If m_max = 1, set m_min ← 0.5; if m_max > 1, set m_min ← 1.
2: Pick N_m equally spaced multiplier values m_l ∈ [m_min, m_max] for l ∈ [1, ..., N_m] and call this set M = {m_l}_l.
3: Use each multiplier to scale the good continuous solution (w, w0), obtaining (m_l·w, m_l·w0), which is a good continuous solution to the rescaled dataset (1/m_l)·D.
4: Send each rescaled solution (m_l·w, m_l·w0) and its rescaled dataset (1/m_l)·D to Algorithm 6 AuxiliaryLossRounding((1/m_l)·D, m_l·w, m_l·w0) for rounding. It returns (w⁺ˡ, w⁺ˡ_0, m_l), where (w⁺ˡ, w⁺ˡ_0) is close to (m_l·w, m_l·w0), and where (w⁺ˡ, w⁺ˡ_0) on (1/m_l)·D has a small logistic loss.
5: Evaluate the logistic loss to pick the best multiplier l* ∈ argmin_l L(w⁺ˡ, w⁺ˡ_0, (1/m_l)·D).
6: Return (w⁺ˡ*, w⁺ˡ*_0) and m_{l*}.
where l_i is the Lipschitz constant restricted to the rounding interval and can be computed as l_i = 1/(1 + exp(y_i·x_iᵀγ_i)), with γ_{ij} = ⌊w_j⌋ if y_i·x_{ij} > 0 and γ_{ij} = ⌈w_j⌉ otherwise. (The Lipschitz constant here is much smaller than the one in Section 3.1 due to the interval restriction.) After we select r* and find value v*, we update w⁺ by setting w⁺_{r*} = v*. We repeat this process until w⁺ is on the integer lattice: w⁺ ∈ ℤ^p. The objective function in Equation 10 can be understood as an auxiliary upper bound of the logistic loss. Our algorithm provides an upper bound on the difference between the logistic losses of the continuous solution and the final rounded solution before we start the rounding algorithm (Theorem 3.1 below). Additionally, during the sequential rounding procedure, we do not need to perform expensive operations such as logarithms or exponentials as required by the logistic loss function; the bound and auxiliary function require only sums of squares, not logarithms or exponentials. Its derivation and proof are in Appendix C.
Theorem 3.1. Let w be the real-valued coefficients for the logistic regression model with objective function L(w) = Σ_{i=1}^{n} log(1 + exp(−y_i·x_iᵀw)) (the intercept is incorporated). Let w⁺ be the integer-valued coefficients returned by the AuxiliaryLossRounding method. Furthermore, let u_j = w_j − ⌊w_j⌋. Let l_i = 1/(1 + exp(y_i·x_iᵀγ_i)) with γ_{ij} = ⌊w_j⌋ if y_i·x_{ij} > 0 and γ_{ij} = ⌈w_j⌉ otherwise. Then, we have an upper bound on the difference between the loss L(w) and the loss L(w⁺):
L(w⁺) − L(w) ≤ sqrt( n · Σ_{i=1}^{n} Σ_{j=1}^{p} (l_i·x_{ij})² u_j(1 − u_j) ).   (11)
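For illustration, here is our sketch of the sequential rounding step driven by the surrogate objective in Equation 10 and the weights l_i of Theorem 3.1; it is not the authors' AuxiliaryLossRounding implementation, and it assumes the intercept has been folded into X as a constant column.

```python
import numpy as np

def auxiliary_loss_rounding(X, y, w):
    """Greedily round w to integers, choosing at each step the coordinate and direction
    (floor or ceil) that minimize the sum-of-squares surrogate of Equation 10."""
    n, p = X.shape
    w_plus = w.astype(float).copy()
    # weights l_i restricted to the rounding box, as in Theorem 3.1
    gamma = np.where((y[:, None] * X) > 0, np.floor(w)[None, :], np.ceil(w)[None, :])
    l = 1.0 / (1.0 + np.exp(np.clip(y * np.sum(X * gamma, axis=1), -30, 30)))
    resid = np.zeros(n)              # sum_j x_ij (w+_j - w_j) over already-rounded coordinates
    todo = [j for j in range(p) if w_plus[j] != np.floor(w_plus[j])]
    while todo:
        best = None
        for r in todo:
            for v in (np.floor(w[r]), np.ceil(w[r])):
                obj = np.sum((l * (resid + X[:, r] * (v - w[r]))) ** 2)
                if best is None or obj < best[0]:
                    best = (obj, r, v)
        _, r_star, v_star = best
        resid += X[:, r_star] * (v_star - w[r_star])
        w_plus[r_star] = v_star
        todo.remove(r_star)
    return w_plus
```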
Note. Our method has a higher prediction capacity than RiskSLIM: its search space is much larger. Compared to RiskSLIM, our use of the multiplier permits a number of solutions that grows exponentially in k as we increase the multiplier. To see this, consider that for each support of k features, since the logistic loss is convex, it contains a hypersphere in coefficient space. The volume of that hypersphere is (as usual) V = π^{k/2} r^k / Γ(k/2 + 1), where r is the radius of the hypersphere. If we increase the multiplier to 2, the grid becomes finer by a factor of 2, which is equivalent to increasing the radius by a factor of 2. Thus, the volume increases by a factor of 2^k. In general, for maximum multiplier m, the search space is increased by a factor of m^k over RiskSLIM.
4 Experiments
We experimentally focus on two questions: (1) How good is FasterRisk’s solution quality compared to baselines? (§4.1) (2) How fast is FasterRisk compared with the state-of-the-art? (§4.2) In the appendix, we address three more questions: (3) How much do the sparse beam search, diverse pools, and multipliers contribute to our solution quality? (E.4) (4) How well-calibrated are the models produced by FasterRisk? (E.9) (5) How sensitive is FasterRisk to each of the hyperparameters in the algorithm? (E.10)
We compare with RiskSLIM (the current state-of-the-art), as well as algorithms Pooled-PLR-RD, Pooled-PLR-RSRD, Pooled-PLR-RDSP, Pooled-PLR-Rand and Pooled-PLR-RDP. These algorithms were all previously shown to be inferior to RiskSLIM [39]. These methods first find a pool of sparse continuous solutions using different regularizations of ElasticNet (hence the name “Pooled Penalized Logistic Regression” – Pooled-PLR) and then round the coefficients with different techniques. Details are in Appendix D.3. The best solution is chosen from this pool of integer solutions that obeys the sparsity and box constraints and has the smallest logistic loss. We also compare with the baseline AutoScore [44]. However, on some datasets, the results produced by AutoScore are so poor that they distort the AUC scale, so we show those results only in Appendix E.11. As there is no publicly
available code for any of [10, 16, 32, 33], they do not appear in the experiments. For each dataset, we perform 5-fold cross validation and report training and test AUC. Appendix D presents details of the datasets, experimental setup, evaluation metrics, loss values, and computing platform/environment. More experimental results appear in Appendix E.
4.1 Solution Quality
We first evaluate FasterRisk’s solution quality. Figure 3 shows the training and test AUC on six datasets (results for training loss appear in Appendix E). FasterRisk (the red line) outperforms all baselines, consistently obtaining the highest AUC scores on both the training and test sets. Notably, our method obtains better results than RiskSLIM, which uses a mathematical solver and is the current state-of-the-art method for scoring systems. This superior performance is due to the use of multipliers, which increases the complexity of the hypothesis space. Figure 4 provides a more detailed comparison between FasterRisk and RiskSLIM. One may wonder whether running RiskSLIM longer would make this MIP-based method comparable to our FasterRisk, since the current running time limit for RiskSLIM is only 15 minutes. We extended RiskSLIM’s running time limit up to 1 hour and show the comparison in Appendix E.8; FasterRisk still outperforms RiskSLIM by a large margin.
FasterRisk performs significantly better than the other baselines for two reasons. First, the continuous sparse solutions produced by ElasticNet are low quality for very sparse models. Second, it is difficult to obtain an exact model size by controlling `1 regularization. For example, Pooled-PLR-RD and Pooled-PLR-RDSP do not have results for model size 10 on the mammo datasets, because no such model size exists in the pooled solutions after rounding.
4.2 Runtime Comparison
The major drawback of RiskSLIM is its limited scalability. Runtime is important to allow interactive model development and to handle larger datasets. Figure 5 shows that FasterRisk (red bars) is significantly faster than RiskSLIM (blue bars) in general. We ran these experiments with a 900 second (15 minute) timeout. RiskSLIM finishes running on the small dataset mammo, but it times out on the larger datasets, timing out on models larger than 4 features for adult, larger than 3 features for bank, larger than 7 features for mushroom, larger than 2 features for COMPAS, and larger than 1
feature for FICO. RiskSLIM times out early on COMPAS and FICO datasets, suggesting that the MIP-based method struggles with high-dimensional and highly-correlated features. Thus, we see that FasterRisk tends to be both faster and more accurate than RiskSLIM.
4.3 Example Scoring Systems
The main benefit of risk scores is their interpretability. We place a few example risk scores in Table 1 to allow the reader to judge for themselves. More risk score examples can be found in Appendix F.1. Additionally, we provide a pool of solutions for the top 12 models on the bank, mammo, and Netherlands datasets in Appendix F.2. Prediction performance is generally not the only criterion users consider when deciding to deploy a model. Provided with a pool of solutions that perform equally well, a user can choose the one that best incorporates domain knowledge [45]. After the pool of models is generated, interacting with the pool is essentially computationally instantaneous. Finally, we can reduce some models to relatively prime coefficients or transform some features for better interpretability. Examples of such transformations are given in Appendix G.1.
5 Conclusion
FasterRisk produces a collection of high-quality risk scores within minutes. Its performance owes to three key ideas: a new algorithm for sparsity- and box-constrained continuous models, using a pool of diverse solutions, and the use of the star ray search, which leverages multipliers and a new sequential rounding technique. FasterRisk is suitable for high-stakes decisions, and permits domain experts a collection of interpretable models to choose from.
Code Availability
Implementations of FasterRisk discussed in this paper are available at https://github.com/jiachangliu/FasterRisk.
Acknowledgements
The authors acknowledge funding from the National Science Foundation under grants IIS-2147061 and IIS-2130250, National Institute on Drug Abuse under grant R01 DA054994, Department of Energy under grants DE-SC0021358 and DE-SC0023194, and National Research Traineeship Program under NSF grants DGE-2022040 and CCF-1934964. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). Nous remercions le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG) de son soutien. | 1. What are the strengths and weaknesses of the paper's proposed approach in developing sparse risk scores?
2. How does the reviewer assess the scope, quality, and significance of the study?
3. What are some recent works that have partially addressed some of the limitations the authors proposed to address?
4. How does the reviewer evaluate the performance of FasterRisk compared to its competitors?
5. What are some concerns regarding the clarity of mathematical notations used in the paper?
6. How does the reviewer view the discussion of alternative approaches in the paper?
7. Can the authors provide empirical evidence or literature support for their choice of hyperparameters, such as the tolerance gap level?
8. How can the pool of "equally good" scores generated by the algorithm be utilized to balance performance and fairness in risk scores? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper aims to provide a fast algorithm to derive sparse risk scores that scales to high-dimensional datasets. The authors identified a few major limitations in current methods, and described how these were addressed by the three components in their proposed algorithm. In several experiments with low and high dimension data, the authors showed that their method outperformed the current state-of-the-art and several other baseline methods. The algorithm is implemented in stand-alone Python code, which is advantageous over competitors that rely on mathematical programming solvers.
Strengths And Weaknesses
This paper describes the authors’ original work to resolve several methodological difficulties in current development of risk scores. The writing is clear in defining several major challenges the authors aimed to address, and the proposed algorithm consists of separate components to address them. In addition to evaluating the algorithm with respect to baselines, the authors also showed the importance of each component by assessing the reduction in performance without them.
My major concern is the scope of this study, which affects its quality and significance. The authors aim to develop a fast and well-performing algorithm, FasterRisk, to develop sparse risk scores, which, if successful, would be very useful to healthcare applications. But when discussing related work, the authors did not include some recent works that have partially addressed some of the limitations the authors proposed to address (elaborated in Question 1 below). When evaluating FasterRisk, there is practically only one competitor algorithm, and FasterRisk only had marginal advantage in performance (in most experiments). FasterRisk is indeed much faster than the competitor, but by timing out the run time at 15 minutes, I am not convinced that the competitor is slow enough to be concerning in practice.
The clarity in mathematics notations can be improved. The equations became difficult to follow when the author used some notations without introducing them. For example, ε_w in equation (4) and ε_t in equation (5) lack bounds, and it is difficult to understand what the arbitrary a, b, c, d, e in equations (6)-(9) stand for. These affect my trust in the work.
Questions
The authors focused on developing scores by finding integer sparse solutions, and showed some advantages of FasterRisk over the state-of-the-art. But I find the discussion of alternative approaches inadequate, therefore I could not fully appreciate the contribution of this work. For example, the authors pointed out two major limitations of building scores by rounding logistic regression coefficients: (i) ℓ1 and ℓ0 regularizations not being able to get sparse solutions, and (ii) rounding of coefficients worsens performance by making the scores too coarse. But (i) may be resolved by using alternative variable selection methods and (ii) by using the smallest non-zero coefficient to scale all coefficients and then rounding to larger integer values (e.g., total score ranging from 0 to 100). For example, a 2020 paper (https://doi.org/10.2196/21798) describes such an alternative approach that worked reasonably well in several clinical applications, and by separating variable selection from score development, domain experts are more easily engaged in the development process to ensure clinical meaningfulness and fairness. I find this paper lacking in discussion on this general approach. Could the authors include such more recent works in their discussion and method evaluation?
The authors stated in appendix that the choice of hyperparameters does not have much impact on performance, but did not provide empirical evidence. I am particularly concerned with the choice of tolerance gap level ε = 0.3 (equation (9)), meaning we are willing to tolerate up to 30% increase in loss when expanding to “equally good” scores. Without detailed explanation, 30% seems too large to me. Can the authors justify their choice empirically or by citing related literature? I would also like to see some empirical results regarding change in other hyperparameters.
Although the authors generated a pool of “equally good” scores, they did not seem to make use of them other than selecting the best-performing one to report. This pool of scores could be useful for users to select well-performing AND fair scores. This is related to Limitations below.
Limitations
Fairness of scores developed from the proposed algorithm is not adequately discussed. Related to Question 3 above, a naïve application of the proposed method may lead to unfair risk scores. For example, in Table 3 of Appendix F, the method generated a 3-variable risk score to predict salary>30K using education level and marital status, and this direct link of being married with salary level is highly debatable. Marital status might represent a mixed effect of age and socio-economic status, and it may be better to use the latter in the score for more meaningful interpretation. Since the authors have generated a pool of “equally good” scores that make use of alternative predictors in a step of the algorithm, I suggest the authors make use of this pool to help users balance performance and fairness. |
NIPS | Title
FasterRisk: Fast and Accurate Interpretable Risk Scores
Abstract
Over the last century, risk scores have been the most popular form of predictive model used in healthcare and criminal justice. Risk scores are sparse linear models with integer coefficients; often these models can be memorized or placed on an index card. Typically, risk scores have been created either without data or by rounding logistic regression coefficients, but these methods do not reliably produce high-quality risk scores. Recent work used mathematical programming, which is computationally slow. We introduce an approach for efficiently producing a collection of high-quality risk scores learned from data. Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm. Each of these continuous solutions is transformed into a separate risk score through a “star ray” search, where a range of multipliers are considered before rounding the coefficients sequentially to maintain low logistic loss. Our algorithm returns all of these high-quality risk scores for the user to consider. This method completes within minutes and can be valuable in a broad variety of applications.
1 Introduction
Risk scores are sparse linear models with integer coefficients that predict risks. They are arguably the most popular form of predictive model for high stakes decisions through the last century and are the standard form of model used in criminal justice [4, 22] and medicine [19, 27, 34, 31, 41].
Their history dates back to at least the criminal justice work of Burgess [8], where, based on their criminal history and demographics, individuals were assigned integer point scores between 0 and 21 that determined the probability of their “making good or of failing upon parole.” Other famous risk scores are arguably the most widely used predictive models in healthcare. These include the APGAR score [3], developed in 1952 and given to newborns, and the CHADS2 score [18], which estimates stroke risk for atrial fibrillation patients. Figures 1 and 2 show example risk scores, which estimate risk of a breast lesion being malignant.
*These authors contributed equally.
Risk scores have the benefit of being easily memorized; usually their names reveal the full model – for instance, the factors in CHADS2 are past Chronic heart failure, Hypertension, Age ≥ 75 years, Diabetes, and past Stroke (where past stroke receives 2 points and the others each receive 1 point). For risk scores, counterfactuals are often trivial to compute, even without a calculator. Also, checking that the data and calculations are correct is easier with risk scores than with other approaches. In short, risk scores have been created by humans for a century to support a huge spectrum of
applications [2, 23, 30, 43, 44, 47], because humans find them easy to understand.
Traditionally, risk scores have been created in two main ways: (1) without data, with expert knowledge only (and validated only afterwards on data), and (2) using a semi-manual process involving manual feature selection and rounding of logistic regression coefficients. That is, these approaches rely heavily on domain expertise and rely little on data. Unfortunately, the alternative of building a model directly from data leads to computationally hard problems: optimizing risk scores over a global objective on data is NP-hard, because in order to produce integer-valued scores, the feasible region must be the integer lattice. There have been only a few approaches to design risk scores automatically [5, 6, 9, 10, 16, 32, 33, 38, 39, 40], but each of these has a flaw that limits its use in practice: the optimization-based approaches use mathematical programming solvers (which require a license) that are slow and scale poorly, and the other methods are randomized greedy algorithms, producing fast but much lower-quality solutions. We need an approach that exhibits the best of both worlds: speed fast enough to operate in a few minutes on a laptop and optimization/search capability as powerful as that of the mathematical programming tools. Our method, FasterRisk, lies at this intersection. It is fast enough to enable interactive model design and can rapidly produce a large pool of models from which users can choose rather than producing only a single model.
One may wonder why simple rounding of ℓ1-regularized logistic regression coefficients does not yield sufficiently good risk scores. Past works [37, 39] explain this as follows: the sheer amount of ℓ1 regularization needed to get a very sparse solution leads to large biases and worse loss values, and rounding goes against the performance gradient. For example, consider the following coefficients from ℓ1 regularization: [1.45, .87, .83, .47, .23, .15, ...]. This model is worse than its unregularized counterpart due to the bias induced by the large ℓ1 term. Its rounded solution is [1, 1, 1, 0, 0, 0, ...], which leads to even worse loss. Instead, one could multiply all the coefficients by a constant and then round, but which constant is best? There are an infinite number of choices. Even if some value of the multiplier leads to minimal loss due to rounding, the bias from the ℓ1 term still limits the quality of the solution.
The algorithm presented here does not have these disadvantages. The steps are: (1) Fast subset search with ℓ0 optimization (avoiding the bias from ℓ1). This requires the solution of an NP-hard problem, but our fast subset selection algorithm is able to solve this quickly. We proceed from this accurate sparse continuous solution, preserving both sparseness and accuracy in the next steps. (2) Find a pool of diverse continuous sparse solutions that are almost as good as the solution found in (1) but with different support sets. (3) A “star ray” search, where we search for feasible integer-valued solutions along multipliers of each item in the pool from (2). By using multipliers, the search space resembles the rays of a star, because it extends each coefficient in the pool outward from the origin to search for solutions. To find integer solutions, we perform a local search (a form of sequential rounding). This method yields high performance solutions: we provide a theoretical upper bound on the loss difference between the continuous sparse solution and the rounded integer sparse solution.
Through extensive experiments, we show that our proposed method is computationally fast and produces high-quality integer solutions. This work thus provides valuable and novel tools to create risk scores for professionals in many different fields, such as healthcare, finance, and criminal justice.
Contributions: Our contributions include the three-step framework for producing risk scores, a beam-search-based algorithm for logistic regression with bounded coefficients (for Step 1), the search algorithm to find pools of diverse high-quality continuous solutions (for Step 2), the star ray search technique using multipliers (Step 3), and a theorem guaranteeing the quality of the star ray search.
2 Related Work
Optimization-based approaches: Risk scores, which model P(y = 1|x), are different from threshold classifiers, which predict either y = 1 or y = −1 given x. Most work in the area of optimization of integer-valued sparse linear models focuses on classifiers, not risk scores [5, 6, 9, 32, 33, 37, 40, 46]. This difference is important, because a classifier generally cannot be calibrated well for use in risk scoring: only its single decision point is optimized. Despite this, several works use the hinge loss to calibrate predictions [6, 9, 32]. All of these optimization-based algorithms use mathematical programming solvers (i.e., integer programming solvers), which tend to be slow and cannot be used on larger problems. However, they can handle both feature selection and integer constraints.
To directly optimize risk scores, typically the logistic loss is used. The RiskSLIM algorithm [39] optimizes the ℓ0-regularized logistic loss, subject to integer constraints on the coefficients. RiskSLIM uses callbacks to a MIP solver, alternating between solving linear programs and using branch-and-cut to divide and reduce the search space. The branch-and-cut procedure needs to keep track of unsolved nodes, whose number increases exponentially with the size of the feature space. Thus, RiskSLIM’s major challenge is scalability.
Local search-based approaches: As discussed earlier, a natural way to produce a scoring system or risk score is by selecting features manually and rounding logistic regression coefficients or hinge-loss solutions to integers [10, 11, 39]. While rounding is fast, rounding errors can cause the solution quality to be much worse than that of the optimization-based approaches. Several works have proposed improvements over traditional rounding. In Randomized Rounding [10], each coefficient is rounded up or down randomly, based on its continuous coefficient value. However, randomized rounding does not seem to perform well in practice. Chevaleyre [10] also proposed Greedy Rounding, where coefficients are rounded sequentially. While this technique aimed to provide theoretical guarantees for the hinge loss, we identified a serious flaw in the argument, rendering the bounds incorrect (see Appendix B). The RiskSLIM paper [39] proposed SequentialRounding, which, at each iteration, chooses a coefficient to round up or down, making the best choice according to the regularized logistic loss. This gives better solutions than other types of rounding, because the coefficients are considered together through their performance on the loss function, not independently.
A drawback of SequentialRounding is that it considers rounding up or down only to the nearest integer from the continuous solution. By considering multipliers, we consider a much larger space of possible solutions. The idea of multipliers (i.e., “scale and round”) is used for medical scoring systems [11], though, as far as we know, it has been used only with traditional rounding rather than SequentialRounding, which could easily lead to poor performance, and we have seen no previous work that studies how to perform scale-and-round in a systematic, computationally efficient way. While the general idea of scale-and-round seems simple, it is not: there are an infinite number of possible multipliers, and, for each one, a number of possible nearby integer coefficient vectors that is the size of a hypercube, expanding exponentially in the search space.
Sampling Methods: The Bayesian method of Ertekin et al. [16] samples scoring systems, favoring those that are simpler and more accurate, according to a prior. “Pooling” [39] creates multiple models through sampling along the regularization path of ElasticNet. As discussed, when regularization is tuned high enough to induce sparse solutions, it results in substantial bias and low-quality solutions (see [37, 39] for numerous experiments on this point). Note that there is a literature on finding diverse solutions to mixed-integer optimization problems [e.g., 1], but it focuses only on linear objective functions.
Algorithm 1 FasterRisk(D, k, C, B, ε, T, N_m) → {(w⁺ᵗ, w⁺ᵗ_0, m_t)}_t
Input: dataset D (consisting of feature matrix X ∈ ℝ^{n×p} and labels y ∈ ℝⁿ), sparsity constraint k, coefficient constraint C = 5, beam search size B = 10, tolerance level ε = 0.3, number of attempts T = 50, number of multipliers to try N_m = 20.
Output: a pool P of scoring systems {(wᵗ, wᵗ_0), m_t}, where t is the index enumerating all found scoring systems with ‖wᵗ‖₀ ≤ k and ‖wᵗ‖∞ ≤ C, and m_t is the corresponding multiplier.
1: Call Algorithm 2 SparseBeamLR(D, k, C, B) to find a high-quality solution (w*, w*_0) to the sparse logistic regression problem with continuous coefficients satisfying a box constraint, i.e., solve Problem (3). (Algorithm SparseBeamLR will call Algorithm ExpandSuppBy1 as a subroutine, which grows the solution by beam search.)
2: Call Algorithm 5 CollectSparseDiversePool((w*, w*_0), ε, T), which solves Problem (4). Place its output {(wᵗ, wᵗ_0)}_t in pool P = {(w*, w*_0)}: P ← P ∪ {(wᵗ, wᵗ_0)}_t.
3: Send each member t in the pool P, which is (wᵗ, wᵗ_0), to Algorithm 3 StarRaySearch(D, (wᵗ, wᵗ_0), C, N_m) to perform a line search among possible multiplier values and obtain an integer solution (w⁺ᵗ, w⁺ᵗ_0) with multiplier m_t. Algorithm 3 calls Algorithm 6 AuxiliaryLossRounding, which conducts the rounding step.
4: Return the collection of risk scores {(w⁺ᵗ, w⁺ᵗ_0, m_t)}_t. If desired, return only the best model according to the logistic loss.
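Putting the pieces together, the pipeline of Algorithm 1 can be sketched as below; the three step functions are placeholders for any implementations of SparseBeamLR, CollectSparseDiversePool and StarRaySearch (such as the sketches shown with Section 3 earlier), and this sketch is ours rather than the released package.

```python
def faster_risk(X, y, k, step1, step2, step3, C=5.0, B=10, eps=0.3, T=50, n_mult=20):
    """Pipeline view of Algorithm 1. step1/step2/step3 are placeholders for
    SparseBeamLR, CollectSparseDiversePool and StarRaySearch implementations."""
    w_star, w0_star = step1(X, y, k, B, C)                            # step 1: sparse continuous fit
    pool = [(w_star, w0_star)] + step2(X, y, w_star, w0_star, eps, T) # step 2: diverse near-optimal pool
    return [step3(X, y, w, w0, C, n_mult) for (w, w0) in pool]        # step 3: one risk score per ray
```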
3 Methodology
Define dataset D = {1, x_i, y_i}_{i=1}^{n} (1 is a static feature corresponding to the intercept) and the scaled dataset as (1/m)·D = {1/m, x_i/m, y_i}_{i=1}^{n}, for a real-valued m. Our goal is to produce high-quality risk scores within a few minutes on a small personal computer. We start with an optimization problem similar to RiskSLIM’s [39], which minimizes the logistic loss subject to sparsity constraints and integer coefficients:
min_{w,w0} L(w, w0, D),  where L(w, w0, D) = Σ_{i=1}^{n} log(1 + exp(−y_i(x_iᵀw + w0)))   (1)
such that ‖w‖₀ ≤ k and w ∈ ℤ^p, ∀j ∈ [1, ..., p]: w_j ∈ [−5, 5], w0 ∈ ℤ.
In practice, the range of these box constraints [−5, 5] is user-defined and can be different for each coefficient. (We use 5 for ease of exposition.) The sparsity constraint ‖w‖₀ ≤ k or the integer constraints w ∈ ℤ^p make the problem NP-hard, and this is a difficult mixed-integer nonlinear program. Transforming the original features to all possible dummy variables, which is a standard type of preprocessing [e.g., 24], changes the model into a (flexible) generalized additive model; such models can be as accurate as the best machine learning models [39, 42]. Thus, we generally process variables in x to be binary.
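For concreteness, a small sketch (ours) of evaluating the objective and the feasibility conditions of Problem (1), assuming y_i ∈ {−1, +1}:

```python
import numpy as np

def objective_and_feasible(X, y, w, w0, k, box=5.0):
    """Logistic loss of (w, w0) plus a check of the sparsity, box and integrality
    constraints of Problem (1)."""
    loss = np.sum(np.logaddexp(0.0, -y * (X @ w + w0)))
    feasible = (np.count_nonzero(w) <= k
                and np.all(np.abs(w) <= box)
                and np.all(w == np.round(w))
                and w0 == np.round(w0))
    return loss, feasible
```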
To make the solution space substantially larger than [−5, −4, ..., 4, 5]^p, we use multipliers. The problem becomes:
min_{w,w0,m} L(w, w0, (1/m)·D),  where L(w, w0, (1/m)·D) = Σ_{i=1}^{n} log(1 + exp(−y_i (x_iᵀw + w0)/m))   (2)
such that ‖w‖₀ ≤ k, w ∈ ℤ^p, ∀j ∈ [1, ..., p]: w_j ∈ [−5, 5], w0 ∈ ℤ, m > 0.
Note that the use of multipliers does not weaken the interpretability of the risk score: the user still sees integer risk scores composed of values w_j ∈ {−5, −4, ..., 4, 5}, w0 ∈ ℤ. Only the risk conversion table is calculated differently, as P(Y = 1|x) = 1/(1 + e^{−f(x)}) where f(x) = (wᵀx + w0)/m.
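The risk conversion with a multiplier then amounts to the following sketch (ours):

```python
import numpy as np

def predicted_risk(X, w_int, w0_int, m):
    """P(Y = 1 | x) for an integer scoring system (w_int, w0_int) with multiplier m."""
    score = X @ w_int + w0_int          # integer total score shown to the user
    return 1.0 / (1.0 + np.exp(-score / m))
```

The user adds up integer points as usual; only the score-to-risk table divides the total by m before applying the sigmoid.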
Our method proceeds in three steps, as outlined in Algorithm 1. In the first step, it approximately solves the following sparse logistic regression problem with a box constraint (but not integer constraints), detailed in Section 3.1 and Algorithm 2:
(w*, w*_0) ∈ argmin_{w,w0} L(w, w0, D),  ‖w‖₀ ≤ k, w ∈ ℝ^p, ∀j ∈ [1, ..., p]: w_j ∈ [−5, 5], w0 ∈ ℝ.   (3)
The algorithm gives an accurate and sparse real-valued solution (w*, w*_0).
The second step produces many near-optimal sparse logistic regression solutions, again without integer constraints, detailed in Section 3.2 and Algorithm 5. Algorithm 5 uses (w*, w*_0) from the first step to find a set {(wᵗ, wᵗ_0)}_t such that for all t and a given threshold ε_{w*}:
(wᵗ, wᵗ_0) obeys L(wᵗ, wᵗ_0, D) ≤ L(w*, w*_0, D) × (1 + ε_{w*})   (4)
‖wᵗ‖₀ ≤ k, wᵗ ∈ ℝ^p, ∀j ∈ [1, ..., p]: wᵗ_j ∈ [−5, 5], wᵗ_0 ∈ ℝ.
After these steps, we have a pool of almost-optimal sparse logistic regression models. In the third step, for each coefficient vector in the pool, we compute a risk score. It is a feasible integer solution (w⁺ᵗ, w⁺ᵗ_0) to the following, which includes a positive multiplier m_t > 0:
L(w⁺ᵗ, w⁺ᵗ_0, (1/m_t)·D) ≤ L(wᵗ, wᵗ_0, D) + ε_t,   (5)
w⁺ᵗ ∈ ℤ^p, ∀j ∈ [1, ..., p]: w⁺ᵗ_j ∈ [−5, 5], w⁺ᵗ_0 ∈ ℤ,
where we derive a tight theoretical upper bound on ε_t. A detailed solution to (5) is shown in Algorithm 6 in Appendix A. We solve the optimization problem for a large range of multipliers in Algorithm 3 for each coefficient vector in the pool, choosing the best multiplier for each coefficient vector. This third step yields a large collection of risk scores, all of which are approximately as accurate as the best sparse logistic regression model that can be obtained. All steps in this process are fast and scalable.
| 1. What is the focus and contribution of the paper regarding risk scores learning?
2. What are the strengths and weaknesses of the proposed approach, particularly its computational efficiency and potential inconveniences?
3. Do you have any questions or concerns about the method's steps, such as the division by m instead of multiplication, and the efficiency of swapping one feature at a time?
4. Is Section 4.3 necessary, or can the example provided in the introduction and appendix suffice?
5. What are the ranges of m values, the number of intermediate pool models, and the percentage of reasonable final integer models used in the experiments? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper is focused on risk scores learning which are simple but efficient (in terms of performance) models. The main idea is to produce a pool of almost-optimal sparse continuous solutions with different support sets using a beam-search algorithm. Each of these solutions is explored: the real-values models are transformed into feasible integer-valued solutions along multipliers (what allows for a large space of possible solutions). The method is computationally efficient.
Strengths And Weaknesses
Strengths. The paper clearly describes a novel three step framework to learn simple interpretable models. The numerical results are convincing.
Weaknesses. The method has three separate steps, which can lead to some inconveniences (some kind of error accumulation is possible; coordinate descent can be long, as well as the line search).
As also mentioned by the authors, real scores are not considered in the current contribution.
Questions
It seems that m is defined quite late in text, and it is not clear from the beginning of Section 3 that it is the multiplier. Why did you decide to divide by m and not to multiply (if it is a multiplier)?
In Section 3.2. it is mentioned that "swapping one feature at a time is computationally efficient". I would say rather not efficient, if there are a lot of features.
I am not sure whether Section 4.3. is necessary. It underlines that there are not any results on real scores in the current submission. I guess the example provided in the Introduction (and Appendix) is enough.
I am curious to know what the range of m (multiplier) values was in your experiments? How many intermediate (pool) models did you generate? And what is the percentage of reasonable final (integer) models?
Limitations
The authors provide the limitations on page 3. |
NIPS | Title
Exponentially Weighted Imitation Learning for Batched Historical Data
Abstract
We consider deep policy learning with only batched historical trajectories. The main challenge of this problem is that the learner no longer has a simulator or “environment oracle” as in most reinforcement learning settings. To solve this problem, we propose a monotonic advantage reweighted imitation learning strategy that is applicable to problems with complex nonlinear function approximation and works well with hybrid (discrete and continuous) action space. The method does not rely on the knowledge of the behavior policy, thus can be used to learn from data generated by an unknown policy. Under mild conditions, our algorithm, though surprisingly simple, has a policy improvement bound and outperforms most competing methods empirically. Thorough numerical results are also provided to demonstrate the efficacy of the proposed methodology.
1 Introduction
In this article, we consider the problem of learning a deep policy with batched historical trajectories. This problem is important and challenging. As in many real-world tasks, we usually have numerous historical data generated by different policies, but lack a perfect simulator of the environment. In this case, we want to learn a good policy from these data, to make decisions in a complex environment with possibly continuous state space and hybrid action space of discrete and continuous parts.
Several existing fields of research concern the problem of policy learning from batched data. In particular, imitation learning (IL) aims to find a policy whose performance is close to that of the data-generating policy [Abbeel and Ng, 2004]. On the other hand, off-policy reinforcement learning (RL) concerns the problem of learning a good (or possibly better) policy with data collected from a behavior policy [Sutton and Barto, 1998]. However, to the best of our knowledge, previous methods do not have satisfactory performance or are not directly applicable in a complex environment such as ours, with continuous state and hybrid action space.
In this work, we propose a novel yet simple method to imitate a better policy by monotonic advantage reweighting. From theoretical analysis and empirical results, we find the proposed method has several advantages:
• From theoretical analysis, we show that the algorithm as proposed has a policy improvement lower bound under mild conditions.
• Empirically, the proposed method works well with function approximation and hybrid action space, which is crucial for the success of deep RL in practical problems.
• For off-policy learning, the method does not rely on knowledge of the action probabilities of the behavior policy, thus can be used to learn from data generated by an unknown policy, and is robust when the current policy deviates from the behavior policy.
In our real-world problem of a complex MOBA game, the proposed method has been successfully applied to human replay data, which validates the effectiveness of the method.
The article is organized as follows: we first state some preliminaries (Sec. 2) and related work (Sec. 3). Then we present our main method of imitating a better policy (Sec. 4), with theoretical analysis (Sec. 5) and empirical experiments (Sec. 6). Finally, we conclude our discussion (Sec. 7).
2 Preliminaries
Consider a Markov decision process (MDP) with infinite horizon, denoted by $M = (S, A, P, r, d_0, \gamma)$, where $S$ is the state space, $A$ is the action space, $P$ is the transition probability defined on $S \times A \times S \to [0, 1]$, $r$ is the reward function $S \times A \to \mathbb{R}$, $d_0$ is the distribution of the initial state $s_0$, and $\gamma \in (0, 1)$ is the discount factor. A trajectory $\tau$ is a sequence of triplets of state, action and reward, i.e., $\tau = \{(s_t, a_t, r_t)\}_{t=1,\dots,T}$, where $T$ is the terminal step number. A stochastic policy denoted by $\pi$ is defined as $S \times A \to [0, 1]$. We use the following standard notation of state-value $V^\pi(s_t)$, action-value $Q^\pi(s_t, a_t)$ and advantage $A^\pi(s_t, a_t)$, defined as
$$V^\pi(s_t) = \mathbb{E}_{\pi|s_t} \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}), \qquad Q^\pi(s_t, a_t) = \mathbb{E}_{\pi|s_t, a_t} \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}), \qquad A^\pi(s_t, a_t) = Q^\pi(s_t, a_t) - V^\pi(s_t),$$
where $\mathbb{E}_{\pi|s_t}$ means $a_l \sim \pi(a|s_l)$, $s_{l+1} \sim P(s_{l+1}|s_l, a_l)$, $\forall l \geq t$, and $\mathbb{E}_{\pi|s_t, a_t}$ means $s_{l+1} \sim P(s_{l+1}|s_l, a_l)$, $a_{l+1} \sim \pi(a|s_{l+1})$, $\forall l \geq t$. As the state space $S$ may be prohibitively large, we approximate the policy and state-value with parameterized forms $\pi_\theta(s, a)$ and $V^\pi_\theta(s)$ with parameter $\theta \in \Theta$. We denote the original policy space as $\Pi = \{\pi \,|\, \pi(s, a) \in [0, 1], \sum_{a \in A} \pi(s, a) = 1, \forall s \in S, a \in A\}$ and the parametrized policy space as $\Pi_\Theta = \{\pi_\theta \,|\, \theta \in \Theta\}$. To measure the similarity between two policies $\pi$ and $\pi'$, we consider the Kullback–Leibler divergence and total variation (TV) distance defined as
$$D^d_{KL}(\pi'\,\|\,\pi) = \sum_s d(s) \sum_a \pi'(a|s) \log \frac{\pi'(a|s)}{\pi(a|s)}, \qquad D^d_{TV}(\pi', \pi) = \frac{1}{2} \sum_s d(s) \sum_a \left|\pi'(a|s) - \pi(a|s)\right|,$$
where d(s) is a probability distribution of states. The performance of a policy π is measured by its expected discounted reward:
$$\eta(\pi) = \mathbb{E}_{d_0, \pi} \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t),$$
where $\mathbb{E}_{d_0,\pi}$ means $s_0 \sim d_0$, $a_t \sim \pi(a_t|s_t)$, and $s_{t+1} \sim P(s_{t+1}|s_t, a_t)$. We omit the subscript $d_0$ when there is no ambiguity. In [Kakade and Langford, 2002], a useful identity has been proved:
$$\eta(\pi') - \eta(\pi) = \frac{1}{1-\gamma} \sum_s d_{\pi'}(s) \sum_a \pi'(a|s) A^\pi(s, a),$$
where $d_\pi$ is the discounted visiting frequency defined as $d_\pi(s) = (1-\gamma)\, \mathbb{E}_{d_0,\pi} \sum_{t=0}^{\infty} \gamma^t \mathbf{1}(s_t = s)$ and $\mathbf{1}(\cdot)$ is an indicator function. In addition, define $L^{d,\pi}(\pi')$ as
$$L^{d,\pi}(\pi') = \frac{1}{1-\gamma} \sum_s d(s) \sum_a \pi'(a|s) A^\pi(s, a);$$
then from [Schulman et al., 2015, Theorem 1], the difference of $\eta(\pi')$ and $\eta(\pi)$ can be approximated by $L^{d_\pi,\pi}(\pi')$, where the approximation error is bounded by the total variation distance $D^{d_\pi}_{TV}(\pi', \pi)$, which can be further bounded by $D^{d_\pi}_{KL}(\pi'\|\pi)$ or $D^{d_\pi}_{KL}(\pi\|\pi')$.
In the following sections, we mainly focus on maximizing $L^{d_\pi,\pi}(\pi_\theta)$ as a proxy for optimizing the policy performance $\eta(\pi_\theta)$, for $\pi_\theta \in \Pi_\Theta$.
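As a concrete illustration of the quantities defined above, the short sketch below computes discounted returns $R_t$ and the state-weighted KL and TV divergences for a toy discrete problem; all arrays and numbers here are made up for illustration and are not taken from the paper:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{l>=t} gamma^(l-t) * r_l, computed backwards in one pass."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def kl_and_tv(d, pi_new, pi_old):
    """State-weighted KL(pi_new || pi_old) and TV distance.
    d: (S,) state distribution; pi_*: (S, A) row-stochastic policies."""
    kl = np.sum(d[:, None] * pi_new * np.log(pi_new / pi_old))
    tv = 0.5 * np.sum(d[:, None] * np.abs(pi_new - pi_old))
    return kl, tv

# Toy numbers (purely illustrative).
rewards = np.array([0.0, 0.0, 1.0, 0.0, 5.0])
print(discounted_returns(rewards, gamma=0.9))

d = np.array([0.6, 0.4])                       # distribution over 2 states
pi_old = np.array([[0.5, 0.5], [0.8, 0.2]])    # 2 states x 2 actions
pi_new = np.array([[0.7, 0.3], [0.6, 0.4]])
print(kl_and_tv(d, pi_new, pi_old))
```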
3 Related Work
Off-policy learning [Sutton and Barto, 1998] is a broad area of research. Policy improvement methods with performance guarantees, such as conservative policy iteration [Kakade and Langford, 2002] and safe policy iteration [Pirotta et al., 2013], have long been an interesting topic in the literature. The terms “safety” and “conservative” usually mean that the algorithm described is guaranteed to produce a series of monotonically improved policies. Exact or high-probability bounds of policy improvement are often provided in these previous works [Thomas and Brunskill, 2016, Jiang and Li, 2016, Thomas et al., 2015, Ghavamzadeh et al., 2016]. We refer readers to [García and Fernández, 2015] for a comprehensive survey of safe RL. However, to the best of our knowledge, these prior methods cannot be directly applied to our problem of learning in a complex game environment with large-scale replay data, as they either need full knowledge of the MDP or mainly consider the tabular case with finite states and discrete actions, with prohibitive computational complexity.
Constrained policy optimization problems in the parameter space are considered in previous works [Schulman et al., 2015, Peters et al., 2010]. In [Peters et al., 2010], the policy is constrained on the distribution $p^\pi(s, a) = \mu^\pi(s)\pi(a|s)$, while in [Schulman et al., 2015] the constraint is on $\pi(a|s)$, with a fixed state-wise weight $d(s)$. Also, in [Schulman et al., 2015] the authors considered $D^{d_\pi}_{KL}(\pi\|\pi_\theta)$ as a policy divergence constraint, while in [Peters et al., 2010] the authors considered $D_{KL}(\mu^\pi \pi \| q)$. The connection with our proposed method is elaborated in Appendix B.1. A closely related work is [Abdolmaleki et al., 2018], which presents exponential advantage weighting from an EM perspective. Independently, we further generalize to monotonic advantage re-weighting and also derive a lower bound for imitation learning.
Besides off-policy policy iteration algorithms, value iteration algorithms can also be used in off-policy settings. For deep reinforcement learning, DQN [Mnih et al., 2013] and DQfD [Hester et al., 2018] work primarily with discrete actions, while DDPG [Lillicrap et al., 2016] works well with continuous actions. For hybrid action spaces, there are also works combining the ideas of DQN and DDPG [Hausknecht and Stone, 2016]. In our preliminary experiments, we found that value iteration methods failed to converge for the tasks in the HFO environment. It seems that the discrepancy between the behavior policy and the target policy (the arg max policy in DQN) should be properly restrained, which we think is worth further research and investigation.
Also, there are existing related methods in the field of imitation learning. For example, when expert data is available, we can learn a policy directly by predicting the expert action [Bain and Sommut, 1999, Ross et al., 2011]. Another related idea is to imitate an MCTS policy [Guo et al., 2014, Silver et al., 2016]. In the work of [Silver et al., 2016], the authors propose to use Monte-Carlo Tree Search (MCTS) to form a new policy π̃ = MCTS(π) where π is the base policy of network, then imitate the better policy π̃ by minimizing DKL(π̃||πθ). Also in [Guo et al., 2014], the authors use UCT as a policy improvement operator and generate data from π̃ = UCT(π), then perform regression or classification with the dataset, which can be seen as approximating the policy under normal distribution or multinomial distribution parametrization.
4 Monotonic Advantage Re-Weighted Imitation Learning (MARWIL)
To learn a policy from data, the most straightforward approach is imitation learning (behavior cloning). Suppose we have state-action pairs $(s_t, a_t)$ in the data generated by a behavior policy $\pi$; then we can minimize the KL divergence between $\pi$ and $\pi_\theta$. To be specific, we would like to minimize
$$D^d_{KL}(\pi\|\pi_\theta) = -\mathbb{E}_{s \sim d(s), a \sim \pi(a|s)}\left(\log \pi_\theta(a|s) - \log \pi(a|s)\right) \qquad (1)$$
under some state distribution $d(s)$. However, this method makes no distinction between “good” and “bad” actions: the learned $\pi_\theta$ simply imitates all the actions generated by $\pi$. Actually, if we also have the reward $r_t$ in the data, we can know the consequence of taking action $a_t$ by looking at the future state $s_{t+1}$ and reward $r_t$. Suppose we have an estimate of the advantage of action $a_t$, denoted $\hat{A}^\pi(s_t, a_t)$; we can then put a higher sample weight on actions with higher advantage, thus imitating good actions more often. Inspired by this idea, we propose a monotonic advantage reweighted imitation learning method (Algorithm 1) which maximizes
$$\mathbb{E}_{s \sim d_\pi(s), a \sim \pi(a|s)} \exp\left(\beta \hat{A}^\pi(s, a)\right) \log \pi_\theta(a|s) \qquad (2)$$
where $\beta$ is a hyper-parameter. When $\beta = 0$ the algorithm degenerates to ordinary imitation learning. Ideally we would like to estimate the advantage function $A^\pi(s_t, a_t) = \mathbb{E}_{\pi|s_t, a_t}(R_t - V^\pi(s_t))$ using the cumulative discounted future reward $R_t = \sum_{l=t}^{T} \gamma^{l-t} r_l$. For example, one possible solution is to use a neural network to estimate $A^\pi(s_t, a_t)$, by minimizing $\mathbb{E}_{\pi|s_t, a_t}\left(A_\theta(s_t, a_t) - (R_t - V_\theta(s_t))\right)^2$ for $R_t$ computed from different trajectories, where $V_\theta(s_t)$ is also estimated with a neural network. In practice we find that good results can be achieved by simply using the single-path estimate $\hat{A}(s_t, a_t) = (R_t - V_\theta(s_t))/c$, where we normalize the advantage by its average norm $c$¹ in order to keep the scale of $\beta$ stable across different environments. We use this method in our experiments as it greatly simplifies the computation.

Algorithm 1 Monotonic Advantage Re-Weighted Imitation Learning (MARWIL)
  Input: Historical data $D$ generated by $\pi$, hyper-parameter $\beta$.
  For each trajectory $\tau$ in $D$, estimate advantages $\hat{A}^\pi(s_t, a_t)$ for time $t = 1, \dots, T$.
  Maximize $\mathbb{E}_{(s_t, a_t) \in D} \exp(\beta \hat{A}^\pi(s_t, a_t)) \log \pi_\theta(a_t|s_t)$ with respect to $\theta$.
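To make Algorithm 1 concrete, here is a minimal sketch of the reweighted imitation loss with the single-path advantage estimate described above. The batch arrays, the normalization constant, and the hyper-parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def marwil_weights(returns, values, beta=1.0, c=1.0):
    """Exponential advantage weights exp(beta * (R_t - V(s_t)) / c)."""
    adv = (returns - values) / c
    return np.exp(beta * adv)

def marwil_loss(log_pi_taken, returns, values, beta=1.0, c=1.0):
    """Negative of the Algorithm-1 objective:
    -E[ exp(beta * A_hat) * log pi_theta(a_t | s_t) ]."""
    w = marwil_weights(returns, values, beta, c)
    return -np.mean(w * log_pi_taken)

# Illustrative batch: log-probs of the taken actions, returns, value estimates.
log_pi_taken = np.log(np.array([0.2, 0.7, 0.1, 0.5]))
returns = np.array([1.0, 3.0, -1.0, 2.0])
values = np.array([0.5, 1.0, 0.5, 2.5])
print(marwil_loss(log_pi_taken, returns, values, beta=0.5))
# beta = 0 recovers plain behavior cloning (all weights equal to 1).
print(marwil_loss(log_pi_taken, returns, values, beta=0.0))
```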
Although the algorithm has a very simple formulation, it has several strengths:
1. Under mild conditions, we show by theoretical analysis that the proposed algorithm has a policy improvement bound. Specifically, the policy $\tilde{\pi}$ is uniformly as good as, or better than, the behavior policy $\pi$.
2. The method works well with function approximation by a complex neural network, as suggested by theoretical analysis and validated empirically. The method is naturally compatible with hybrid actions of discrete and continuous parts, which are common in practical problems.
3. In contrast to most off-policy methods, the algorithm does not rely on importance sampling with the value of $\pi(a_t|s_t)$ – the action probability of the behavior policy – and thus can be used to learn from an unknown policy; it is also robust when the current policy deviates from the behavior policy. We validate this with several empirical experiments.
In Section 5 we establish a policy improvement proposition by theoretical analysis, and in Section 6 we give experimental results of the proposed algorithm in off-policy settings.
5 Theoretical Analysis
In this section, we firstly show that in the ideal case Algorithm 1 is equivalent to imitating a new policy π̃. Then we show that the policy π̃ is indeed uniformly better than π. Thus Algorithm 1 can also be regarded as imitating a better policy (IBP). For function approximation, we also provide a policy improvement lower bound under mild conditions.
5.1 Equivalence to Imitating a New Policy
In this subsection, we show that in the ideal case, when we know the advantage $A^\pi(s_t, a_t)$, Algorithm 1 is equivalent to minimizing the KL divergence between $\pi_\theta$ and a hypothetical $\tilde{\pi}$. Consider the problem
$$\tilde{\pi} = \arg\max_{\pi' \in \Pi} \left( (1-\gamma)\beta L^{d_\pi, \pi}(\pi') - D^{d_\pi}_{KL}(\pi'\|\pi) \right) \qquad (3)$$
which has an analytical solution in the policy space $\Pi$ [Azar et al., 2012, Appendix A, Proposition 1]:
$$\tilde{\pi}(a|s) = \pi(a|s) \exp\left(\beta A^\pi(s, a) + C(s)\right) \qquad (4)$$
where $C(s)$ is a normalizing factor to ensure that $\sum_{a \in A} \tilde{\pi}(a|s) = 1$ for each state $s$. Then
$$\arg\min_\theta D^d_{KL}(\tilde{\pi}\|\pi_\theta) = \arg\max_\theta \sum_s d(s) \sum_a \tilde{\pi}(a|s) \log \pi_\theta(a|s) = \arg\max_\theta \sum_s d(s) \exp(C(s)) \sum_a \pi(a|s) \exp(\beta A^\pi(s, a)) \log \pi_\theta(a|s) \qquad (5)$$
Thus Algorithm 1 is equivalent to minimizing $D^d_{KL}(\tilde{\pi}\|\pi_\theta)$ for $d(s) \propto d_\pi(s) \exp(-C(s))$.²
¹In our experiments, the average norm of the advantage is approximated with a moving-average estimate, $c^2 \leftarrow c^2 + 10^{-8}\left((R_t - V_\theta(s_t))^2 - c^2\right)$.
²In the implementation of the algorithm, we omit the step discount in $d_\pi$, i.e., we use $d'_\pi(s) = \mathbb{E}_{d_0,\pi} \sum_{t=0}^{T} \mathbf{1}(s_t = s)$ where $T$ is the terminal step. Sampling from $d_\pi(s)$ is possible, but usually leads to inferior performance according to our preliminary experiments.
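The following toy computation (with made-up probabilities and advantages) shows how Eq. (4) shifts probability mass towards high-advantage actions; $\exp(C(s))$ is realized here simply by renormalizing:

```python
import numpy as np

def reweighted_policy(pi_s, adv_s, beta):
    """pi_tilde(a|s) proportional to pi(a|s) * exp(beta * A^pi(s, a)), Eq. (4)."""
    unnorm = pi_s * np.exp(beta * adv_s)
    return unnorm / unnorm.sum()   # the normalizer plays the role of exp(C(s))

pi_s = np.array([0.5, 0.3, 0.2])      # behavior policy at one state
adv_s = np.array([-1.0, 0.0, 2.0])    # advantages of the three actions
for beta in (0.0, 0.5, 2.0):
    print(beta, reweighted_policy(pi_s, adv_s, beta))
# beta = 0 leaves pi unchanged; larger beta concentrates mass on the
# highest-advantage action, interpolating towards a greedy improvement step.
```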
5.2 Monotonic Advantage Reweighting
In Subsection 5.1, we have shown that the $\tilde{\pi}$ defined in (4) is the analytical solution to problem (3). In this section, we further show that $\tilde{\pi}$ is indeed uniformly as good as, or better than, $\pi$. To be rigorous, a policy $\pi'$ is considered uniformly as good as, or better than, $\pi$ if $\forall s \in S$ we have $V^{\pi'}(s) \geq V^\pi(s)$. In Proposition 1, we give a family of $\tilde{\pi}$ which are uniformly as good as, or better than, $\pi$. To be specific, we have Proposition 1. Suppose two policies $\pi$ and $\tilde{\pi}$ satisfy
$$g(\tilde{\pi}(a|s)) = g(\pi(a|s)) + h(s, A^\pi(s, a)) \qquad (6)$$
where $g(\cdot)$ is a monotonically increasing function, and $h(s, \cdot)$ is monotonically increasing for any fixed $s$. Then we have
$$V^{\tilde{\pi}}(s) \geq V^\pi(s), \quad \forall s \in S, \qquad (7)$$
that is, $\tilde{\pi}$ is uniformly as good as or better than $\pi$.
The idea behind this proposition is simple. The condition (6) requires that the policy π̃ has positive advantages for the actions where π̃(a|s) ≥ π(a|s). Then it follows directly from the well-known policy improvement theorem as stated in [Sutton and Barto, 1998, Equation 4.8]. A short proof is provided in Appendix A.1 for completeness.
When $g(\cdot)$ and $h(s, \cdot)$ in (6) are chosen as $g(\pi) = \log(\pi)$ and $h(s, A^\pi(s, a)) = \beta A^\pi(s, a) + C(s)$, we recover the formula in (4). By Proposition 1 we have thus shown that the $\tilde{\pi}$ defined in (4) is as good as, or better than, the policy $\pi$.
We note that there are other choices of $g(\cdot)$ and $h(s, \cdot)$ as well. For example, we can choose $g(\pi) = \log(\pi)$ and $h(s, A^\pi(s, a)) = \log\left((\beta A^\pi(s, a))_+ + \epsilon\right) + C(s)$, where $(\cdot)_+$ is a positive truncation, $\epsilon$ is a small positive number, and $C(s)$ is a normalizing factor to ensure $\sum_{a \in A} \tilde{\pi}(a|s) = 1$. In this case, we can minimize $D^d_{KL}(\tilde{\pi}\|\pi_\theta) = -\sum_s d(s) \exp(C(s)) \sum_a \pi(a|s)\left((\beta A^\pi(s, a))_+ + \epsilon\right) \log \pi_\theta(a|s) + C$.
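As a sketch of these two monotone weighting choices (the truncation and epsilon values below are illustrative), the corresponding per-sample weights are simply $\exp(\beta A)$ versus $(\beta A)_+ + \epsilon$:

```python
import numpy as np

def exp_weights(adv, beta):
    """Exponential weighting from Eq. (4)."""
    return np.exp(beta * adv)

def truncated_linear_weights(adv, beta, eps=1e-2):
    """Weights (beta * A)_+ + eps from the alternative g, h choice above."""
    return np.maximum(beta * adv, 0.0) + eps

adv = np.array([-2.0, -0.1, 0.3, 1.5])
print(exp_weights(adv, beta=1.0))
print(truncated_linear_weights(adv, beta=1.0))
# Both weightings are monotone in the advantage, so Proposition 1 applies;
# the truncated-linear form effectively drops negative-advantage actions.
```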
5.3 Lower bound under Approximation
For practical usage, we usually seek a parametric approximation of $\tilde{\pi}$. The following proposition gives a lower bound on the policy improvement for the parametric policy $\pi_\theta$. Proposition 2. Suppose we use the parametric policy $\pi_\theta$ to approximate the improved policy $\tilde{\pi}$ defined in (3); then we have the following lower bound on the policy improvement:
$$\eta(\pi_\theta) - \eta(\pi) \geq -\frac{\sqrt{2}}{1-\gamma}\, \delta_1^{\frac{1}{2}}\, M^{\pi_\theta} + \frac{1}{(1-\gamma)\beta}\, \delta_2 - \frac{\sqrt{2}\,\gamma\, \epsilon^{\tilde{\pi}}_\pi}{(1-\gamma)^2}\, \delta_2^{\frac{1}{2}} \qquad (8)$$
where $\delta_1 = \min\left(D^{d_{\tilde{\pi}}}_{KL}(\pi_\theta\|\tilde{\pi}),\, D^{d_{\tilde{\pi}}}_{KL}(\tilde{\pi}\|\pi_\theta)\right)$, $\delta_2 = D^{d_\pi}_{KL}(\tilde{\pi}\|\pi)$, $\epsilon^{\pi'}_\pi = \max_s \left|\mathbb{E}_{a \sim \pi'} A^\pi(s, a)\right|$, and $M^\pi = \max_{s,a} |A^\pi(s, a)| \leq \max_{s,a} |r(s, a)|/(1-\gamma)$.
A short proof can be found in Appendix A.2. Note that we would like to approximate π̃ under state distribution dπ̃ in theory. However in practice we use a heuristic approximation to sample data from trajectories generated by the base policy π as in Algorithm 1, which is equivalent to imitating π̃ under a slightly different state distribution d as discussed in Sec.5.1.
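For intuition on how the three terms of the bound trade off, the snippet below plugs purely hypothetical values of $\delta_1$, $\delta_2$, $M^{\pi_\theta}$ and $\epsilon^{\tilde{\pi}}_\pi$ into the right-hand side of (8) as reconstructed above; the numbers are not from the paper:

```python
import numpy as np

def improvement_lower_bound(delta1, delta2, M_pitheta, eps_tilde, gamma, beta):
    """Right-hand side of Eq. (8): approximation penalty + gain - distribution-shift penalty."""
    term_approx = -np.sqrt(2.0) / (1.0 - gamma) * np.sqrt(delta1) * M_pitheta
    term_gain = delta2 / ((1.0 - gamma) * beta)
    term_shift = -np.sqrt(2.0) * gamma * eps_tilde / (1.0 - gamma) ** 2 * np.sqrt(delta2)
    return term_approx + term_gain + term_shift

# Hypothetical numbers: a small approximation error delta1 and a moderate
# divergence delta2 between pi_tilde and the behavior policy.
print(improvement_lower_bound(delta1=1e-4, delta2=0.05, M_pitheta=1.0,
                              eps_tilde=0.1, gamma=0.9, beta=1.0))
```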
6 Experimental Results
In this section, we provide empirical evidence that the algorithm is well suited for off-policy RL tasks, as it does not need to know the action probabilities of the behavior policy and is thus robust when learning from replays generated by an unknown policy. We evaluate the proposed algorithm in the HFO environment under different settings (Sec. 6.1). Furthermore, we also use two other environments (TORCS and a mobile MOBA game) to evaluate the algorithm on learning from replay data (Sec. 6.2, 6.3).
Denote the behavior policy as $\pi$ and the desired parametrized policy as $\pi_\theta$. The policy losses $L_p$ for the policy iteration algorithms considered are listed as follows ($C$ is a $\theta$-independent constant):
• (IL) Imitation learning, minimizing $D^{d_\pi}_{KL}(\pi\|\pi_\theta)$:
$$L_p = D^{d_\pi}_{KL}(\pi\|\pi_\theta) = -\mathbb{E}_{s \sim d_\pi(s), a \sim \pi(a|s)} \log \pi_\theta(a|s) + C \qquad (9)$$
• (PG) Policy gradient with baseline and $D^{d_\pi}_{KL}(\pi\|\pi_\theta)$ regularization:
$$L_p = -\mathbb{E}_{s \sim d_\pi(s), a \sim \pi(a|s)} \left(\beta A^\pi(s, a) + 1\right) \log \pi_\theta(a|s) + C \qquad (10)$$
• (PGIS) Policy gradient with baseline and $D^{d_\pi}_{KL}(\pi\|\pi_\theta)$ regularization, with off-policy correction by importance sampling (IS), as in TRPO [Schulman et al., 2015] and CPO [Achiam et al., 2017]. Here we simply use a penalized gradient algorithm to optimize the objective, instead of the delegated optimization method of [Schulman et al., 2015]:
$$L_p = D^{d_\pi}_{KL}(\pi\|\pi_\theta) - (1-\gamma)\beta L^{d_\pi, \pi}(\pi_\theta) = -\mathbb{E}_{s \sim d_\pi(s), a \sim \pi(a|s)} \left( \frac{\pi_\theta(a|s)}{\pi(a|s)} \beta A^\pi(s, a) + \log \pi_\theta(a|s) \right) + C \qquad (11)$$
• (MARWIL) Minimizing $D^d_{KL}(\tilde{\pi}\|\pi_\theta)$ as in (5) and Algorithm 1:
$$L_p = D^d_{KL}(\tilde{\pi}\|\pi_\theta) = -\mathbb{E}_{s \sim d_\pi(s), a \sim \pi(a|s)} \exp(\beta A^\pi(s, a)) \log \pi_\theta(a|s) + C \qquad (12)$$
Note that IL simply imitates all the actions in the data, while PG needs the on-policy assumption to be a reasonable algorithm. Both PGIS and MARWIL are derived under the off-policy setting. However, the importance ratio $\pi_\theta/\pi$ used to correct the off-policy bias for PG usually has large variance and may cause severe problems when $\pi_\theta$ deviates far from $\pi$ [Sutton and Barto, 1998]. Several methods have been proposed to alleviate this problem [Schulman et al., 2017, Munos et al., 2016, Precup et al., 2000]. On the other hand, we note that the MARWIL algorithm is naturally off-policy, instead of relying on the importance sampling ratio $\pi_\theta/\pi$ to do off-policy correction. We expect the proposed algorithm to work better when learning from a possibly unknown behavior policy.
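For reference, the four losses (9)–(12) differ essentially in the per-sample weight attached to $\log\pi_\theta(a|s)$ (plus the importance ratio term for PGIS). A minimal sketch with made-up batch arrays, intended only to make the comparison concrete:

```python
import numpy as np

def policy_loss(log_pi_theta, adv, ratio=None, beta=1.0, method="MARWIL"):
    """Monte-Carlo estimates of losses (9)-(12), up to theta-independent constants."""
    if method == "IL":        # Eq. (9): plain behavior cloning
        w = np.ones_like(adv)
    elif method == "PG":      # Eq. (10): advantage-weighted, on-policy flavor
        w = beta * adv + 1.0
    elif method == "PGIS":    # Eq. (11): importance-sampled PG term + KL term
        return -np.mean(ratio * beta * adv + log_pi_theta)
    elif method == "MARWIL":  # Eq. (12): exponential advantage weights
        w = np.exp(beta * adv)
    return -np.mean(w * log_pi_theta)

log_pi_theta = np.log(np.array([0.3, 0.6, 0.1]))
adv = np.array([0.5, -0.2, 1.0])
ratio = np.array([1.1, 0.8, 1.4])   # pi_theta / pi, only needed for PGIS
for m in ("IL", "PG", "PGIS", "MARWIL"):
    print(m, policy_loss(log_pi_theta, adv, ratio, beta=1.0, method=m))
```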
6.1 Experiments with Half Field Offense (HFO)
To compare the aforementioned algorithms, we employ Half Field Offense (HFO) as our primary experiment environment. HFO is an abstraction of the full RoboCup 2D game, where an agent plays soccer in a half field. The HFO environment has continuous state space and hybrid (discrete and continuous) action space, which is similar to our task in a MOBA game (Sec. 6.3). In this simplified environment, we validate the effectiveness and efficiency of the proposed learning method.
6.1.1 Environment Settings
As in [Hausknecht and Stone, 2016], we let the agent try to score a goal without a goalkeeper. We follow [Hausknecht and Stone, 2016] for the settings, as briefly described below.
The observation is a 59-d feature vector, encoding the relative position of several critical objects such as the ball, the goal and other landmarks (see [Hausknecht, 2017]). In our experiments, we use a hybrid action space of discrete actions and continuous actions. Three types of actions are considered in our setting, corresponding to {“Dash”, “Turn”, “Kick”}. For each type $k$ of action, we require the policy to output a parameter $x_k \in \mathbb{R}^2$. For the actions “Dash” and “Kick”, the parameter $x_k$ is interpreted as $(r\cos\alpha, r\sin\alpha)$, with $r$ truncated to 1 when it exceeds 1. Then $\alpha \in [0, 2\pi]$ is interpreted as the relative direction of that action, while $r \in [0, 1]$ is interpreted as the power/force of that action. For the action “Turn”, the parameter $x_k$ is first normalized to $(\cos\alpha, \sin\alpha)$ and then $\alpha$ is interpreted as the relative degree of turning. The reward is hand-crafted, written as:
$$r_t = d_t(b, a) - d_{t+1}(b, a) + I^{kick}_{t+1} + 3\left(d_t(b, g) - d_{t+1}(b, g)\right) + 5\, I^{goal}_{t+1},$$
where $d_t(b, a)$ (or $d_t(b, g)$) is the distance between the ball and the agent (or the center of the goal), $I^{kick}_t = 1$ if the agent is close enough to kick the ball, and $I^{goal}_t = 1$ if a successful goal happens. We use the Winning Rate $= N_G/(N_G + N_F)$ to evaluate the final performance, where $N_G$ is the number of goals (G) achieved and $N_F$ is the number of failures (F), due to either out-of-time (the agent does not kick the ball within 100 frames or does not score within 500 frames) or out-of-bound (the ball goes out of the half field).
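A direct transcription of this hand-crafted reward and evaluation metric; the distances and indicator flags passed in below are placeholder inputs for illustration:

```python
def hfo_reward(d_ball_agent_t, d_ball_agent_t1,
               d_ball_goal_t, d_ball_goal_t1,
               kicked_t1, goal_t1):
    """r_t = [d_t(b,a) - d_{t+1}(b,a)] + I^kick_{t+1}
           + 3 * [d_t(b,g) - d_{t+1}(b,g)] + 5 * I^goal_{t+1}."""
    return ((d_ball_agent_t - d_ball_agent_t1)
            + float(kicked_t1)
            + 3.0 * (d_ball_goal_t - d_ball_goal_t1)
            + 5.0 * float(goal_t1))

def winning_rate(n_goals, n_failures):
    """Winning Rate = N_G / (N_G + N_F)."""
    return n_goals / (n_goals + n_failures)

# Placeholder values: the agent moved 0.2 closer to the ball and pushed the
# ball 0.1 towards the goal without kicking or scoring on this step.
print(hfo_reward(1.0, 0.8, 5.0, 4.9, kicked_t1=False, goal_t1=False))
print(winning_rate(n_goals=70, n_failures=30))
```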
When learning from data, the historical experience is generated with a mixture of a perfect (100% winning rate) policy $\pi_{perfect}$ and a random policy $\pi_{random}$. For the continuous part of the action, Gaussian noise with $\sigma = 0.2$ or $0.4$ is added to the model output, respectively. The mixture coefficient $\epsilon$ is used to adjust the proportion of “good” actions and “bad” actions. To be specific, for each step the action is taken as
$$a_t \sim \begin{cases} \pi_{perfect}(\cdot|s_t) + N(0, \sigma) & \text{w.p. } \epsilon \\ \pi_{random}(\cdot|s_t) + N(0, \sigma) & \text{w.p. } 1 - \epsilon \end{cases} \qquad (13)$$
The parameter $\epsilon$ is adjusted from 0.1 to 0.5. Smaller $\epsilon$ means greater noise, in which case it is harder for the algorithms to find a good policy from the noisy data.

Algorithm 2 Stochastic Gradient Algorithm for MARWIL
  Input: Policy loss $L_p$ being one of (9) to (12), base policy $\pi$, parameters $m$, $c_v$.
  Randomly initialize $\pi_\theta$. Empty replay memory $D$.
  Fill $D$ with trajectories from $\pi$ and calculate $R_t$ for each $(s_t, a_t)$ in $D$.
  for $i = 1$ to $N$ do
    Sample a batch $B = \{(s_k, a_k, R_k)\}_m$ from $D$.
    Compute the mini-batch gradients $\nabla_\theta \hat{L}_p, \nabla_\theta \hat{L}_v$ on $B$.
    Update $\theta$: $-\Delta\theta \propto \nabla_\theta \hat{L}_p + c_v \nabla_\theta \hat{L}_v$.
  end for
6.1.2 Algorithm Setting
For the HFO game, we model the 3 discrete actions with multinomial probabilities and the 2 continuous parameters for each action with normal distributions of known σ = 0.2 but unknown µ. Parameters for different types of action are modeled separately. In total we have 3 output nodes for discrete action probabilities and 6 output nodes for continuous action parameters, in the form of
$$\pi_\theta((k, x_k)|s) = p_\theta(k|s)\, N(x_k \,|\, \mu_{\theta,k}, \sigma), \qquad k \in \{1, 2, 3\}, \; x_k \in \mathbb{R}^2$$
where $p_\theta(\cdot|s)$ is computed as a soft-max over the discrete actions and $N(\cdot|\mu_\theta, \sigma)$ is the probability density function of the Gaussian distribution.
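A minimal sketch of this hybrid parameterization, with the network replaced by fixed, made-up outputs: the log-probability of a hybrid action $(k, x_k)$ is the sum of the discrete log-probability and the Gaussian log-density.

```python
import numpy as np

def hybrid_log_prob(logits, mus, k, x_k, sigma=0.2):
    """log pi_theta((k, x_k)|s) = log p_theta(k|s) + log N(x_k | mu_theta_k, sigma^2 I).
    logits: (3,) discrete-action scores; mus: (3, 2) continuous parameter means."""
    log_p_discrete = logits - np.log(np.sum(np.exp(logits)))   # log-softmax
    diff = x_k - mus[k]
    log_p_continuous = (-0.5 * np.sum(diff ** 2) / sigma ** 2
                        - x_k.size * np.log(sigma * np.sqrt(2.0 * np.pi)))
    return log_p_discrete[k] + log_p_continuous

# Made-up network outputs for one state: 3 logits and a 2-d mean per action type.
logits = np.array([1.0, 0.2, -0.5])
mus = np.array([[0.5, 0.1], [0.0, 0.0], [-0.3, 0.8]])
print(hybrid_log_prob(logits, mus, k=0, x_k=np.array([0.4, 0.2])))
```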
When learning from data, the base policy (13) is used to generate trajectories into a replay memory $D$, and the policy network is updated by the different algorithms, respectively. We denote the policy loss objective as $L_p$, being one of the formulas (9)–(12). We then optimize the policy loss $L_p$ and the value loss $L_v$ simultaneously, with a mixture coefficient $c_v$ as a hyper-parameter (by default $c_v = 1$). The value loss $L_v$ is defined as $L_v = \mathbb{E}_{d,\pi}(R_t - V_\theta(s_t))^2$. A stochastic gradient algorithm is given in Algorithm 2. Each experiment is repeated 3 times and the average of the scores is reported in Figure 1. Additional details of the algorithm settings are given in Appendix B.2.
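For illustration, here is a minimal runnable sketch of the Algorithm 2 loop for a purely tabular softmax policy and value table; the replay data, state/action sizes, and hyper-parameters below are all made up (the paper itself uses deep networks), and the gradients are derived analytically for this toy model:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, beta, c_v, lr, n_iters, batch = 4, 3, 1.0, 1.0, 0.5, 200, 32

# Replay memory D: made-up (s, a, R) triples from some unknown behavior policy,
# where action 0 happens to receive higher returns in every state.
states = rng.integers(0, S, size=1000)
actions = rng.integers(0, A, size=1000)
returns = np.where(actions == 0, 1.0, 0.0) + 0.1 * rng.standard_normal(1000)

theta = np.zeros((S, A))   # tabular softmax policy logits
values = np.zeros(S)       # tabular value estimates V_theta(s)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

for _ in range(n_iters):
    idx = rng.integers(0, len(states), size=batch)
    s, a, R = states[idx], actions[idx], returns[idx]
    adv = R - values[s]
    w = np.exp(beta * adv)                     # MARWIL weights, Eq. (12)
    pi = softmax(theta[s])                     # (batch, A)
    # Gradient of L_p = -mean(w * log pi(a|s)) w.r.t. the logits.
    g_logits = pi.copy()
    g_logits[np.arange(batch), a] -= 1.0
    g_logits *= w[:, None] / batch
    # Gradient of L_v = mean((R - V(s))^2) w.r.t. V(s).
    g_v = -2.0 * (R - values[s]) / batch
    np.add.at(theta, s, -lr * g_logits)
    np.add.at(values, s, -lr * c_v * g_v)

print(softmax(theta))   # action 0 should end up with the highest probability
```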
We note that the explicit value $\pi(a_t|s_t)$ is crucial for the correction used by most off-policy policy iteration methods [Sutton and Barto, 1998], including [Munos et al., 2016, Wang et al., 2016, Schulman et al., 2017, Wu et al., 2017] and many other works [Geist and Scherrer, 2014]. Here, for a comparable experiment between the policy gradient method and our proposed method, we consider a simple off-policy correction by importance sampling as in (11). We test the performance of the proposed method and previous works under different settings in Figure 1. We can see that the proposed MARWIL achieves consistently better performance than the other methods.³
6.2 Experiments with TORCS
We also evaluate imitation learning and the proposed method within the TORCS [Wymann et al., 2014] environment. In the TORCS environment, the observation is the raw screen with an image size of $64 \times 64 \times 3$, the action is a scalar indicating the steering angle in $[-\pi, \pi]$, and the reward $r_t$ is the momentary speed. When the car crashes, a $-1$ reward is received and the game terminates. For the TORCS environment, a simple rule is leveraged to keep the car running and to prevent it from crashing. Therefore, we can use the rule as the optimal policy to generate expert trajectories. In addition, we generate noisy trajectories with random actions to intentionally confuse the learning algorithms, and see whether the proposed method can learn a better policy from the data generated by the deteriorated policy. We make the training data by generating 10 matches with the optimal policy and another 10 matches with random actions.

³We note that the gap between the behavior policy and IL is partly due to the approximation we used. As we have a continuous action space, we use a Gaussian model with fixed $\sigma$, thus the variance of the learned policy may be lower than that of the behavior policy. A fair comparison should be made among IL, PG, PGIS, and MARWIL.
We train imitation learning and the proposed method for 5 epochs to compare their performance. Table 1 shows the test scores when varying the parameter $\beta$. From the results, we see that our proposed algorithm is effective at learning a better policy from these noisy trajectories.
6.3 Experiments with King of Glory
We also evaluate the proposed algorithm with King of Glory – a mobile MOBA (Multi-player Online Battle Arena) game popular in China. In the experiments, we collect millions of human replay files, amounting to tens of billions of time steps in total. Evaluation is performed in the “solo” game mode, where an agent fights against another AI on the opposite side. A DNN-based function approximator is adopted. In a proprietary test, we find that our AI agent, trained with the proposed method, can reach the level of an experienced human player in a solo game. Additional details of the algorithm settings for King of Glory are given in Appendix B.3.
7 Conclusion
In this article, we present an off-policy learning algorithm that can form a better policy from trajectories generated by a possibly unknown policy. When learning from replay data, the proposed algorithm does not require the behavior probability $\pi$ over the actions, which is usually missing in human-generated data, and it also works well with function approximation and hybrid action spaces. The algorithm is preferable in real-world applications, including playing video games. Experimental results over several real-world datasets validate the effectiveness of the proposed algorithm. We note that the proposed MARWIL algorithm can also work as a full reinforcement learning method, when applied iteratively on self-generated replay data. Due to space limitations, a thorough study of our method for full reinforcement learning is left to future work.
Acknowledgement We are grateful to the anonymous reviewers for their detailed and helpful comments on this work. We also thank our colleagues in the project of King of Glory AI, particularly Haobo Fu and Tengfei Shi, for their assistance on the game environment and parsing replay data. | 1. What is the focus of the paper regarding deep policy learning?
2. What are the strengths of the proposed approach, particularly in its application to imitation learning?
3. Do you have any concerns or suggestions regarding the experimental setup and analysis?
4. How does the reviewer assess the significance and novelty of the introduced method?
5. Are there any limitations or potential drawbacks of the approach that should be addressed? | Review | Review
A method for learning deep policies from data recorded in demonstrations is introduced. The method uses exponentially weighted learning that can learn policies from data generated by another policy. The proposed approach is interesting and well presented. It opens possibilities for future work, more specifically for RL, as stated in the conclusions. That would be even more interesting than the presented imitation learning scheme; however, the paper gives the introduction, background and discussion for that future work. How is the data for the HFO environment generated? Why are PG and PGIS not used in the experiments with TORCS and King of Glory? I suggest specifying in Table 1 that B=0.0 is the case of IL, since the approach is considered imitation learning. I consider a sensitivity analysis (like the one carried out for HFO in Fig 1) of noisy data very important, because most of the time that is the case with the demonstrations. I suggest including it for the TORCS and King of Glory environments. I found it very troublesome that in section 6.3 Table 2 shows results of EWIL1, EWIL2, IL1, and IL2, yet it is never mentioned what the differences are between settings 1 and 2, i.e. that does not contribute anything. I consider that section 6.3 can be eliminated since the results do not illustrate anything richer than: EWIL is somehow better than IL. If it had comparisons with other methods and also varying parameters like section 6.1, that would be better. Similarly to the previous comment, the experiments with TORCS could have shown more interesting observations, so those results contribute very little. Additionally, it lacks conclusions about the results; based only on that table, it seems to me that playing with that parameter is worthless since the best score is with the highest value of B=1. Does this always happen for all learning problems? Is there any disadvantage to setting B=1? Again, more results and analysis could have been provided with the environments used in the experiments.
1. What is the focus of the review on the paper regarding imitation learning?
2. What are the strengths of the proposed algorithm, particularly in its ability to improve over the expert policy?
3. What are the concerns regarding the use of KL divergence and its relation to state distribution?
4. How does the reviewer assess the theoretical guarantees of the proposed method, especially when compared to previous works such as Dagger and GAIL?
5. Are there any suggestions for improvements or further analyses that could enhance the paper's contributions? | Review | Review
Summary: The paper considers imitation learning with only batched historical trajectories and without any further access to simulators or environment models. The proposed imitation learning is reward-aware, in the sense that it weights the expert's actions based on the advantage of the expert policy A^{\pi}. The paper also analyzes the proposed algorithm and shows a policy improvement over the expert policy.

Strengths: The setting considered in the paper is challenging: the learning algorithm only has access to batched trajectories from the expert but no further access to the simulator or model. Hence most previous imitation learning works (e.g., DAgger, GAIL) won't apply in this setting. Simple behavior cloning will work in this setting, but the paper shows that by leveraging the advantage of the expert, it can outperform behavior cloning.

Main comments below:
1. The definition and the use of the KL divergence between two policies, i.e., D_{KL}(\pi'||\pi), is not clear. Specifically, the definition in Eq (1) is different from the definition of KL below line 56. Which definition did you use in the rest of the paper, say, in Eq (3)?
2. Regarding the state distribution rho(s), I assume it always refers to the state distribution of the behavior policy \pi. However, below line 146, when you say that it is imitating a new hypothetical policy \tilde{\pi}, shouldn't the state distribution become the state distribution resulting from \tilde{\pi}? The state distributions of \tilde{\pi} and \pi could be different.
3. Line 147: Why can C(s) be omitted? It is state-dependent. Ignoring C(s) could affect the solution of the optimization. Could the authors elaborate on why it is ok to ignore C(s)?
4. Even assuming that we have a rich enough policy space such that we can drive D_{KL}(\tilde{\pi} || \pi_{\theta}) to zero, how does this guarantee that \pi_{\theta} is also as good as, or even better than, \pi? Note that D_{KL}(\tilde{\pi}|| \pi) = 0 does not imply that \tilde{\pi} and \pi are the same under every state: it depends on whether or not the state distribution rho(s) has non-zero probability on every state (consider the case where a behavior policy can only cover a small part of a large state space). Also, no matter how deep the network is, we can never make D_{KL}(\tilde{\pi}|| \pi_{\theta}) reach zero, simply because we do not have infinitely many data. Assuming, for instance, D_{KL}(\tilde{\pi} || \pi_{\theta}) <= epsilon for some small epsilon is reasonable, but then we need to analyze how epsilon affects the performance of the learned \pi_{\theta}.
5. Notations in Eq (7) - (10) are not well-defined. Could you point the readers to the places where, for instance, E_{\pi} \log \pi_{\theta} is defined? What is even meant by log(\pi_{\theta}), as \pi_{\theta} is a function? Also the KL used in (7) and (10) is not consistent. Following the definition in (7), shouldn't the expectation in (10) be defined with respect to \tilde{\pi}?

After rebuttal: Thanks a lot for the rebuttal. I read all reviews and the rebuttal. Unfortunately I'm not going to raise my score. Regarding the theory section, essentially the paper is trying to imitate an ideal policy (\tilde{\pi} in Eq 4) using function approximation (\pi_{\theta}). The paper claims Algorithm 1 essentially is minimizing D_{KL}(\tilde{\pi} || \pi_{\theta}), but I'm not sure if it's a good idea. Again, the state distribution problem plays an important role here.
Note that in Alg 1, it is using states from the behavior policy, which is neither \tilde{\pi} nor \pi_{\theta}!. (Existing analysis of Behavior cloning assumes we generate state-action pairs from the policy that we are going to imitate! However here we do not have data from \tilde{\pi}---the policy that we are supposed to imitate). To behavior clone this ideal policy, one should minimize KL under the state-distribution resulting from the ideal policy \tilde{\pi}. This is extremely important for the discussion in section 5.2. While it's not hard to see \tilde{\pi} is a uniformly better policy than the behavior policy \pi, it is definitely not clear to me that the learned policy pi_{\theta} would be better than or at least as good as \pi, considering that the state-distribution for measuring KL-divergence between \tilde{\pi} and \pi_{\theta} is the state-distribution of the behavior policy \pi. Overall, I might be wrong but I personally think that the paper somehow implicitly kept using max_{s} KL(\pi(. | s) || \pi'(. | s) ) as the default-to-go definition of KL-divergence between two policies, at least throughout the analysis (if we can drive this kl-divergence to zero or small number epsilon, then pi and pi' will be close to each other for any states, hence in analysis we do not need to worry about the state-action distribution). But in reality, we can only measure KL under the state-distribution of behavior policy (i.e., max_{s} is impossible to evaluate in large state space). Theoretical guarantees of imitation learning where state-action distribution induced neither from the expert policy, i.e., \tilde{\pi} (classic Behavior cloning analysis considers this setting), nor from the learned policy, i.e. \pi_{\theta} (method like DAgger considers this setting) seem not trivial and need more careful analysis. |
NIPS | Title
Exponentially Weighted Imitation Learning for Batched Historical Data
Abstract
We consider deep policy learning with only batched historical trajectories. The main challenge of this problem is that the learner no longer has a simulator or “environment oracle” as in most reinforcement learning settings. To solve this problem, we propose a monotonic advantage reweighted imitation learning strategy that is applicable to problems with complex nonlinear function approximation and works well with hybrid (discrete and continuous) action space. The method does not rely on the knowledge of the behavior policy, thus can be used to learn from data generated by an unknown policy. Under mild conditions, our algorithm, though surprisingly simple, has a policy improvement bound and outperforms most competing methods empirically. Thorough numerical results are also provided to demonstrate the efficacy of the proposed methodology.
1 Introduction
In this article, we consider the problem of learning a deep policy with batched historical trajectories. This problem is important and challenging. As in many real-world tasks, we usually have abundant historical data generated by different policies, but lack a perfect simulator of the environment. In this case, we want to learn a good policy from these data, to make decisions in a complex environment with a possibly continuous state space and a hybrid action space of discrete and continuous parts.
Several existing fields of research concern the problem of policy learning from batched data. In particular, imitation learning (IL) aims to find a policy whose performance is close to that of the data-generating policy [Abbeel and Ng, 2004]. On the other hand, off-policy reinforcement learning (RL) concerns the problem of learning a good (or possibly better) policy with data collected from a behavior policy [Sutton and Barto, 1998]. However, to the best of our knowledge, previous methods do not have satisfactory performance or are not directly applicable in a complex environment such as ours, with a continuous state space and a hybrid action space.
In this work, we propose a novel yet simple method to imitate a better policy by monotonic advantage reweighting. From theoretical analysis and empirical results, we find the proposed method has several advantages:
• From theoretical analysis, we show that the proposed algorithm has a policy improvement lower bound under mild conditions.
• Empirically, the proposed method works well with function approximation and hybrid action space, which is crucial for the success of deep RL in practical problems.
• For off-policy learning, the method does not rely on the knowledge of action probability of the behavior policy, thus can be used to learn from data generated by an unknown policy, and is robust when current policy is deviated from the behavior policy.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
In our real-world problem of a complex MOBA game, the proposed method has been successfully applied on human replay data, which validates the effectiveness of the method.
The article is organized as follows: We firstly state some preliminaries (Sec. 2) and related works (Sec. 3). Then we present our main method of imitating a better policy (Sec. 4), with theoretical analysis (Sec. 5) and empirical experiments (Sec. 6). Finally we conclude our discussion (Sec. 7).
2 Preliminaries
Consider a Markov decision process (MDP) with infinite horizon, denoted by M = (S, A, P, r, d0, γ), where S is the state space, A is the action space, P is the transition probability defined on S×A×S → [0, 1], r is the reward function S×A → R, d0 is the distribution of the initial state s0, and γ ∈ (0, 1) is the discount factor. A trajectory τ is a sequence of triplets of state, action and reward, i.e., τ = {(st, at, rt)}t=1,...,T, where T is the terminal step number. A stochastic policy denoted by π is defined as S×A → [0, 1]. We use the following standard notation of state-value V^π(st), action-value Q^π(st, at) and advantage A^π(st, at), defined as V^π(st) = E_{π|st} ∑_{l=0}^∞ γ^l r(st+l, at+l), Q^π(st, at) = E_{π|st,at} ∑_{l=0}^∞ γ^l r(st+l, at+l), and A^π(st, at) = Q^π(st, at) − V^π(st), where E_{π|st} means al ∼ π(a|sl), sl+1 ∼ P(sl+1|sl, al), ∀l ≥ t, and E_{π|st,at} means sl+1 ∼ P(sl+1|sl, al), al+1 ∼ π(a|sl+1), ∀l ≥ t. As the state space S may be prohibitively large, we approximate the policy and state-value with parameterized forms πθ(s, a) and V^{πθ}(s) with parameter θ ∈ Θ. We denote the original policy space as Π = {π | π(s, a) ∈ [0, 1], ∑_{a∈A} π(s, a) = 1, ∀s ∈ S, a ∈ A} and the parametrized policy space as ΠΘ = {πθ | θ ∈ Θ}. To measure the similarity between two policies π and π′, we consider the Kullback–Leibler divergence and total variation (TV) distance defined as
D^d_KL(π′||π) = ∑_s d(s) ∑_a π′(a|s) log[π′(a|s) / π(a|s)]
D^d_TV(π′, π) = (1/2) ∑_s d(s) ∑_a |π′(a|s) − π(a|s)|
where d(s) is a probability distribution of states. The performance of a policy π is measured by its expected discounted reward:
η(π) = E_{d0,π} ∑_{t=0}^∞ γ^t r(st, at)
where Ed0,π means s0 ∼ d0, at ∼ π(at|st), and st+1 ∼ P (st+1|st, at). We omit the subscript d0 when there is no ambiguity. In [Kakade and Langford, 2002], a useful equation has been proved that
η(π′) − η(π) = (1/(1 − γ)) ∑_s d_{π′}(s) ∑_a π′(a|s) A^π(s, a)
where dπ is the discounted visiting frequencies defined as dπ(s) = (1 − γ) E_{d0,π} ∑_{t=0}^∞ γ^t 1(st = s) and 1(·) is an indicator function. In addition, define L^{d,π}(π′) as
L^{d,π}(π′) = (1/(1 − γ)) ∑_s d(s) ∑_a π′(a|s) A^π(s, a)
then from [Schulman et al., 2015, Theorem 1], the difference of η(π′) and η(π) can be approximated by L^{dπ,π}(π′), where the approximation error is bounded by the total variation distance D^{dπ}_TV(π′, π), which can be further bounded by D^{dπ}_KL(π′||π) or D^{dπ}_KL(π||π′).
In the following sections, we mainly focus on maximizing Ldπ,π(πθ) as a proxy for optimizing policy performance η(πθ), for πθ ∈ ΠΘ.
3 Related Work
Off-policy learning [Sutton and Barto, 1998] is a broad region of research. For policy improvement method with performance guarantee, conservative policy iteration [Kakade and Langford, 2002] or
safe policy iteration [Pirotta et al., 2013] has long been an interesting topic in the literature. The term “safety” or “conservative” usually means the algorithm described is guaranteed to produce a series of monotonic improved policies. Exact or high-probability bounds of policy improvement are often provided in these previous works [Thomas and Brunskill, 2016, Jiang and Li, 2016, Thomas et al., 2015, Ghavamzadeh et al., 2016]. We refer readers to [Garcıa and Fernández, 2015] for a comprehensive survey of safe RL. However, to the best of our knowledge, these prior methods cannot be directly applied in our problem of learning in a complex game environment with large scale replay data, as they either need full-knowledge of the MDP or consider tabular case mainly for finite states and discrete actions, with prohibitive computational complexity.
Constrained policy optimization problems in the parameter space are considered in previous works [Schulman et al., 2015, Peters et al., 2010]. In [Peters et al., 2010], they constrain the policy on the distribution of pπ(s, a) = µ^π(s)π(a|s), while in [Schulman et al., 2015], the constraint is on π(a|s), with fixed state-wise weight d(s). Also, in [Schulman et al., 2015], the authors have considered D^{dπ}_KL(π||πθ) as a policy divergence constraint, while in [Peters et al., 2010] the authors considered D_KL(µ^π π||q). The connection with our proposed method is elaborated in Appendix B.1. A closely related work is [Abdolmaleki et al., 2018], which presents exponential advantage weighting from an EM perspective. Independently, we further generalize to monotonic advantage re-weighting and also derive a lower bound for imitation learning.
Besides off-policy policy iteration algorithm, value iteration algorithm can also be used in off-policy settings. For deep reinforcement learning, DQN [Mnih et al., 2013], DQfD [Hester et al., 2018] works primarily with discrete actions, while DDPG [Lillicrap et al., 2016] works well with continuous actions. For hybrid action space, there are also works combining the idea of DQN and DDPG [Hausknecht and Stone, 2016]. In our preliminary experiments, we found value iteration method failed to converge for the tasks in the HFO environment. It seems that the discrepancy between behavior policy and the target policy (arg max policy in DQN) should be properly restrained, which we think worth further research and investigation.
Also, there are existing related methods in the field of imitation learning. For example, when expert data is available, we can learn a policy directly by predicting the expert action [Bain and Sommut, 1999, Ross et al., 2011]. Another related idea is to imitate an MCTS policy [Guo et al., 2014, Silver et al., 2016]. In the work of [Silver et al., 2016], the authors propose to use Monte-Carlo Tree Search (MCTS) to form a new policy π̃ = MCTS(π) where π is the base policy of network, then imitate the better policy π̃ by minimizing DKL(π̃||πθ). Also in [Guo et al., 2014], the authors use UCT as a policy improvement operator and generate data from π̃ = UCT(π), then perform regression or classification with the dataset, which can be seen as approximating the policy under normal distribution or multinomial distribution parametrization.
4 Monotonic Advantage Re-Weighted Imitation Learning (MARWIL)
To learn a policy from data, the most straight forward way is imitation learning (behavior cloning). Suppose we have state-action pairs (st, at) in the data generated by a behavior policy π, then we can minimize the KL divergence between π and πθ. To be specific, we would like to minimize
D^d_KL(π||πθ) = −E_{s∼d(s), a∼π(a|s)}[log πθ(a|s) − log π(a|s)]    (1)
under some state distribution d(s). However, this method makes no distinction between “good” and “bad” actions. The learned πθ simply imitates all the actions generated by π. Actually, if we also have reward rt in the data, we can know the consequence of taking action at, by looking at future state st+1 and reward rt. Suppose we have estimation of the advantage of action at as Âπ(st, at), we can put higher sample weight on the actions with higher advantage, thus imitating good actions more often. Inspired by this idea, we propose a monotonic advantage reweighted imitation learning method (Algorithm 1) which maximizes
E_{s∼dπ(s), a∼π(a|s)} exp(β Â^π(s, a)) log πθ(a|s)    (2)
where β is a hyper-parameter. When β = 0 the algorithm degenerates to ordinary imitation learning. Ideally we would like to estimate the advantage function A(st, at) = E_{π|st,at}(Rt − V^π(st)) using the cumulative discounted future reward Rt = ∑_{l=t}^T γ^{l−t} r_l. For example, one possible solution is to use a neural network to estimate A(st, at), by minimizing E_{π|st,at}(Aθ(st, at) − (Rt − Vθ(st)))² for Rt computed from different trajectories, where Vθ(st) is also estimated with a neural network, respectively. In practice we find that good results can be achieved by simply using a single-path estimate Â(st, at) = (Rt − Vθ(st))/c, where we normalize the advantage by its average norm c¹ in order to make the scale of β stable across different environments. We use this method in our experiments as it greatly simplifies the computation.
Algorithm 1 Monotonic Advantage Re-Weighted Imitation Learning (MARWIL)
Input: Historical data D generated by π, hyper-parameter β.
For each trajectory τ in D, estimate advantages Â^π(st, at) for time t = 1, . . . , T.
Maximize E_{(st,at)∈D} exp(β Â^π(st, at)) log πθ(at|st) with respect to θ.
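For concreteness, the following is a minimal NumPy sketch of the objective in Algorithm 1 for a discrete action space; the linear softmax policy, the array names, and the pre-computed normalizer c are illustrative assumptions of this sketch rather than the implementation used in our experiments.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def marwil_loss(theta, states, actions, returns, values, beta=1.0, c=1.0):
    # states: (N, d); actions: (N,) integer action indices
    # returns: (N,) cumulative discounted rewards R_t; values: (N,) V_theta(s_t)
    # c: running estimate of the average advantage norm (kept fixed here)
    logits = states @ theta                      # linear policy, for illustration only
    log_pi = np.log(softmax(logits) + 1e-12)
    adv = (returns - values) / c                 # single-path advantage estimate
    weights = np.exp(beta * adv)                 # exponential advantage weights
    picked = log_pi[np.arange(len(actions)), actions]
    return -np.mean(weights * picked)            # minimizing this maximizes the weighted log-likelihood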
Although the algorithm has a very simple formulation, it has many strengths as
1. Under mild conditions, we show that the proposed algorithm has policy improvement bound by theoretical analysis. Specifically, the policy π̃ is uniformly as good as, or better than the behavior policy π.
2. The method works well with function approximation as a complex neural network, as suggested by theoretical analysis and validated empirically. The method is naturally compatible with hybrid action of discrete and continuous parts, which is common in practical problems.
3. In contrast to most off-policy methods, the algorithm does not rely on importance sampling with the value of π(at|st) – the action probability of the behavior policy, thus can be used to learn from an unknown policy, and is also robust when current policy is deviated from the behavior policy. We validate this with several empirical experiments.
In Section 5 we give a proposition of policy improvement by theoretical analysis. And in Section 6 we give experimental results of the proposed algorithm in off-policy settings.
5 Theoretical Analysis
In this section, we firstly show that in the ideal case Algorithm 1 is equivalent to imitating a new policy π̃. Then we show that the policy π̃ is indeed uniformly better than π. Thus Algorithm 1 can also be regarded as imitating a better policy (IBP). For function approximation, we also provide a policy improvement lower bound under mild conditions.
5.1 Equivalence to Imitating a New Policy
In this subsection, we show that in the ideal case when we know the advantage Aπ(st, at), Algorithm 1 is equivalent to minimizing KL divergence between πθ and a hypothetic π̃. Consider the problem
π̃ = arg max_{π′∈Π} ((1 − γ)β L^{dπ,π}(π′) − D^{dπ}_KL(π′||π))    (3)
which has an analytical solution in the policy space Π [Azar et al., 2012, Appendix A, Proposition 1]
π̃(a|s) = π(a|s) exp(βA^π(s, a) + C(s))    (4)
where C(s) is a normalizing factor to ensure that ∑_{a∈A} π̃(a|s) = 1 for each state s. Then
arg min_θ D^d_KL(π̃||πθ) = arg max_θ ∑_s d(s) ∑_a π̃(a|s) log πθ(a|s)
= arg max_θ ∑_s d(s) exp(C(s)) ∑_a π(a|s) exp(βA^π(s, a)) log πθ(a|s)    (5)
Thus Algorithm 1 is equivalent to minimizing D^d_KL(π̃||πθ) for d(s) ∝ dπ(s) exp(−C(s)).²
¹In our experiments, the average norm of advantage is approximated with a moving average estimation, by c² ← c² + 10⁻⁸((Rt − Vθ(st))² − c²).
²In the implementation of the algorithm, we omit the step discount in dπ, i.e., using d′π(s) = E_{d0,π} ∑_{t=0}^T 1(st = s) where T is the terminal step. Sampling from dπ(s) is possible, but usually leads to inferior performance according to our preliminary experiments.
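To illustrate the construction in (4), the following sketch computes the hypothetical policy π̃(·|s) ∝ π(·|s) exp(βA^π(s, ·)) for a single state with a small discrete action set; the numerical values are made up purely for illustration.

import numpy as np

def improved_policy(pi_s, adv_s, beta=1.0):
    # pi_s: (K,) behavior probabilities pi(.|s); adv_s: (K,) advantages A^pi(s, .)
    unnorm = pi_s * np.exp(beta * adv_s)
    return unnorm / unnorm.sum()   # the normalization plays the role of exp(C(s))

pi_s = np.array([0.5, 0.3, 0.2])
adv_s = np.array([-0.1, 0.0, 0.4])
print(improved_policy(pi_s, adv_s, beta=1.0))  # the high-advantage action is up-weighted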
5.2 Monotonic Advantage Reweighting
In subsection 5.1, we have shown that the π̃ defined in (4) is the analytical solution to problem (3). In this section, we further show that π̃ is indeed uniformly as good as, or better than π. To be rigorous, a policy π′ is considered uniformly as good as, or better than π, if ∀s ∈ S, we have V^{π′}(s) ≥ V^π(s). In Proposition 1, we give a family of π̃ which are uniformly as good as, or better than π. To be specific, we have
Proposition 1. Suppose two policies π and π̃ satisfy
g(π̃(a|s)) = g(π(a|s)) + h(s, A^π(s, a))    (6)
where g(·) is a monotonically increasing function, and h(s, ·) is monotonically increasing for any fixed s. Then we have
V^π̃(s) ≥ V^π(s), ∀s ∈ S,    (7)
that is, π̃ is uniformly as good as, or better than, π.
The idea behind this proposition is simple. The condition (6) requires that the policy π̃ has positive advantages for the actions where π̃(a|s) ≥ π(a|s). Then it follows directly from the well-known policy improvement theorem as stated in [Sutton and Barto, 1998, Equation 4.8]. A short proof is provided in Appendix A.1 for completeness.
When g(·) and h(s, ·) in (6) are chosen as g(π) = log(π) and h(s, A^π(s, a)) = βA^π(s, a) + C(s), then we recover the formula in (4). By Proposition 1 we have shown that the π̃ defined in (4) is as good as, or better than, the policy π.
We note that there are other choices of g(·) and h(s, ·) as well. For example we can choose g(π) = log(π) and h(s, A^π(s, a)) = log((βA^π(s, a))+ + ε) + C(s), where (·)+ is a positive truncation, ε is a small positive number, and C(s) is a normalizing factor to ensure ∑_{a∈A} π̃(s, a) = 1. In this case,
we can minimize D^d_KL(π̃||πθ) = −∑_s d(s) exp(C(s)) ∑_a π(a|s)((βA^π(s, a))+ + ε) log πθ(a|s) + C.
5.3 Lower bound under Approximation
For practical usage, we usually seek a parametric approximation of π̃. The following proposition gives a lower bound of policy improvement for the parametric policy πθ.
Proposition 2. Suppose we use a parametric policy πθ to approximate the improved policy π̃ defined in (3); then we have the following lower bound on the policy improvement:
η(πθ) − η(π) ≥ −(√2/(1 − γ)) δ_1^{1/2} M^{πθ} + (1/((1 − γ)β)) δ_2 − (√2 γ ε^{π̃}_π/(1 − γ)²) δ_2^{1/2}    (8)
where δ_1 = min(D^{dπ̃}_KL(πθ||π̃), D^{dπ̃}_KL(π̃||πθ)), δ_2 = D^{dπ}_KL(π̃||π), ε^{π′}_π = max_s |E_{a∼π′} A^π(s, a)|, and M^π = max_{s,a} |A^π(s, a)| ≤ max_{s,a} |r(s, a)|/(1 − γ).
A short proof can be found in Appendix A.2. Note that we would like to approximate π̃ under state distribution dπ̃ in theory. However in practice we use a heuristic approximation to sample data from trajectories generated by the base policy π as in Algorithm 1, which is equivalent to imitating π̃ under a slightly different state distribution d as discussed in Sec.5.1.
6 Experimental Results
In this section, we provide empirical evidence that the algorithm is well suited for off-policy RL tasks, as it does not need to know the probability of the behavior policy, thus is robust when learning from replays from an unknown policy. We evaluate the proposed algorithm with HFO environment under different settings (Sec. 6.1). Furthermore, we also provide two other environments (TORCS and mobile MOBA game) to evaluate the algorithm in learning from replay data (Sec. 6.2, 6.3).
Denote the behavior policy as π, the desired parametrized policy as πθ, the policy loss Lp for the policy iteration algorithms considered are listed as following: (C is a θ-independent constant)
• (IL) Imitation learning, minimizing D^{dπ}_KL(π||πθ).
L_p = D^{dπ}_KL(π||πθ) = −E_{s∼dπ(s), a∼π(a|s)} log πθ(a|s) + C    (9)
• (PG) Policy gradient with baseline and D^{dπ}_KL(π||πθ) regularization.
L_p = −E_{s∼dπ(s), a∼π(a|s)} (βA^π(s, a) + 1) log πθ(a|s) + C    (10)
• (PGIS) Policy gradient with baseline and D^{dπ}_KL(π||πθ) regularization, with off-policy correction by importance sampling (IS), as in TRPO [Schulman et al., 2015] and CPO [Achiam et al., 2017]. Here we simply use a penalized gradient algorithm to optimize the objective, instead of using a delegated optimization method as in [Schulman et al., 2015].
L_p = D^{dπ}_KL(π||πθ) − (1 − γ)β L^{dπ,π}(πθ) = −E_{s∼dπ(s), a∼π(a|s)} ( (πθ(a|s)/π(a|s)) βA^π(s, a) + log πθ(a|s) ) + C    (11)
• (MARWIL) Minimizing D^d_KL(π̃||πθ) as in (5) and Algorithm 1.
L_p = D^d_KL(π̃||πθ) = −E_{s∼dπ(s), a∼π(a|s)} exp(βA^π(s, a)) log πθ(a|s) + C    (12)
Note that IL simply imitates all the actions in the data, while PG needs the on-policy assumption to be a reasonable algorithm. Both PGIS and MARWIL are derived under off-policy setting. However, the importance ratio πθ/π used to correct off-policy bias for PG usually has large variance and may cause severe problems when πθ is deviated far away from π [Sutton and Barto, 1998]. Several methods are proposed to alleviate this problem [Schulman et al., 2017, Munos et al., 2016, Precup et al., 2000]. On the other hand, we note that the algorithm MARWIL is naturally off-policy, instead of relying on the importance sampling ratio πθ/π to do off-policy correction. We expect the proposed algorithm to work better when learning from a possibly unknown behavior policy.
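For reference, the following sketch evaluates per-sample versions of the four policy losses (9)–(12) for discrete actions, dropping the θ-independent constant C; the argument names are assumptions made for illustration (PGIS is the only loss that needs the behavior probabilities).

import numpy as np

def policy_losses(log_pi_theta, pi_theta, pi_b, adv, beta=1.0):
    # log_pi_theta: (N,) log pi_theta(a_t|s_t); pi_theta: (N,) pi_theta(a_t|s_t)
    # pi_b: (N,) behavior probabilities pi(a_t|s_t); adv: (N,) advantage estimates
    il     = -np.mean(log_pi_theta)                                   # (9)  imitation learning
    pg     = -np.mean((beta * adv + 1.0) * log_pi_theta)              # (10) policy gradient + KL
    pgis   = -np.mean((pi_theta / pi_b) * beta * adv + log_pi_theta)  # (11) importance-sampled PG
    marwil = -np.mean(np.exp(beta * adv) * log_pi_theta)              # (12) MARWIL
    return il, pg, pgis, marwil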
6.1 Experiments with Half Field Offense (HFO)
To compare the aforementioned algorithms, we employ Half Field Offense (HFO) as our primary experiment environment. HFO is an abstraction of the full RoboCup 2D game, where an agent plays soccer in a half field. The HFO environment has continuous state space and hybrid (discrete and continuous) action space, which is similar to our task in a MOBA game (Sec. 6.3). In this simplified environment, we validate the effectiveness and efficiency of the proposed learning method.
6.1.1 Environment Settings
Like in [Hausknecht and Stone, 2016], we let the agent try to goal without a goalkeeper. We follow [Hausknecht and Stone, 2016] for the settings, as is briefed below.
The observation is a 59-d feature vector, encoding the relative position of several critical objects such as the ball, the goal and other landmarks (see [Hausknecht, 2017]). In our experiments, we use a hybrid action space of discrete actions and continuous actions. Three types of actions are considered in our setting, which correspond to {“Dash”, “Turn”, “Kick”}. For each type k of action, we require the policy to output a parameter xk ∈ R². For the actions “Dash” and “Kick”, the parameter xk is interpreted as (r cos α, r sin α), with r truncated to 1 when exceeding. Then α ∈ [0, 2π] is interpreted as the relative direction of that action, while r ∈ [0, 1] is interpreted as the power/force of that action. For the action “Turn”, the parameter xk is first normalized to (cos α, sin α) and then α is interpreted as the relative degree of turning. The reward is hand-crafted, written as:
r_t = d_t(b, a) − d_{t+1}(b, a) + I^kick_{t+1} + 3(d_t(b, g) − d_{t+1}(b, g)) + 5 I^goal_{t+1},
where d_t(b, a) (or d_t(b, g)) is the distance between the ball and the agent (or the center of goal). I^kick_t = 1 if the agent is close enough to kick the ball. I^goal_t = 1 if a successful goal happens. We leverage Winning Rate = N_G/(N_G + N_F) to evaluate the final performance, where N_G is the number of goals (G) achieved, N_F is the number of failures (F), due to either out-of-time (the agent does not kick the ball in 100 frames or does not goal in 500 frames) or out-of-bound (the ball is out of the half field).
When learning from data, the historical experience is generated with a mixture of a perfect (100% winning rate) policy πperfect and a random policy πrandom. For the continuous part of the action, a Gaussian distribution of σ = 0.2 or 0.4 is added to the model output, respectively. The mixture
Algorithm 2 Stochastic Gradient Algorithm for MARWIL
Input: Policy loss Lp being one of (9) to (12), base policy π, parameters m, cv.
Randomly initialize πθ. Empty replay memory D.
Fill D with trajectories from π and calculate Rt for each (st, at) in D.
for i = 1 to N do
    Sample a batch B = {(sk, ak, Rk)}m from D.
    Compute mini-batch gradients ∇θL̂p, ∇θL̂v of B.
    Update θ: −∆θ ∝ ∇θL̂p + cv ∇θL̂v.
end for
coefficient is used to adjust the proportion of “good” actions and “bad” actions. To be specific, for each step, the action is taken as
a_t ∼ π_perfect(·|s_t) + N(0, σ) w.p. ε, and a_t ∼ π_random(·|s_t) + N(0, σ) w.p. 1 − ε.    (13)
The parameter ε is adjusted from 0.1 to 0.5. Smaller ε means greater noise, in which case it is harder for the algorithms to find a good policy from the noisy data.
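The data-generating process of (13) can be sketched as below; perfect_policy and random_policy stand for the two base controllers and are assumptions of this illustration rather than part of the environment code.

import numpy as np

def behavior_action(state, perfect_policy, random_policy, eps, sigma=0.2,
                    rng=np.random.default_rng()):
    # With probability eps act with the perfect policy, otherwise with the random one;
    # Gaussian noise of scale sigma is added to the continuous action output.
    base = perfect_policy if rng.random() < eps else random_policy
    action = np.asarray(base(state), dtype=float)
    return action + rng.normal(scale=sigma, size=action.shape)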
6.1.2 Algorithm Setting
For the HFO game, we model the 3 discrete actions with multinomial probabilities and the 2 continuous parameters for each action with normal distributions of known σ = 0.2 but unknown µ. Parameters for different types of action are modeled separately. In total we have 3 output nodes for discrete action probabilities and 6 output nodes for continuous action parameters, in the form of
πθ((k, xk)|s) = pθ(k|s)N(xk|µθ,k, σ), k ∈ {1, 2, 3}, xk ∈ R2
where pθ(·|s) is computed as a soft-max for discrete actions and N(·|µθ, σ) is the probability density function of Gaussian distribution.
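A sketch of the resulting hybrid log-density, assuming a 3-way discrete head and a 2-d Gaussian parameter head with fixed σ, is given below; the variable names are illustrative.

import numpy as np

def hybrid_log_prob(disc_logits, mus, k, x_k, sigma=0.2):
    # log pi_theta((k, x_k)|s) = log p_theta(k|s) + log N(x_k | mu_{theta,k}, sigma^2 I)
    z = disc_logits - disc_logits.max()
    log_p_k = z[k] - np.log(np.exp(z).sum())            # log-softmax for the discrete part
    diff = x_k - mus[k]
    log_normal = -0.5 * np.sum(diff ** 2) / sigma ** 2 - np.log(2 * np.pi * sigma ** 2)
    return log_p_k + log_normal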
When learning from data, the base policy (13) is used to generate trajectories into a replay memory D, and the policy network is updated by the different algorithms, respectively. We denote the policy loss objective as Lp, being one of the formulas (9)–(12). Then we optimize the policy loss Lp and the value loss Lv simultaneously, with a mixture coefficient cv as a hyper-parameter (by default cv = 1). The value loss Lv is defined as Lv = E_{d,π}(Rt − Vθ(st))². A stochastic gradient algorithm is given in Algorithm 2. Each experiment is repeated 3 times and the average of scores is reported in Figure 1. Additional details of the algorithm settings are given in Appendix B.2.
We note that the explicit value π(at|st) is crucial for the correction used by most off-policy policy iteration methods [Sutton and Barto, 1998], including [Munos et al., 2016, Wang et al., 2016, Schulman et al., 2017, Wu et al., 2017] and many other works [Geist and Scherrer, 2014]. Here for a comparable experiment between policy gradient method and our proposed method, we consider a simple off-policy correction by importance sampling as in (11). We test the performance of the proposed method and previous works under different settings in Figure 1. We can see that the proposed MARWIL achieves consistently better performance than other methods.3
6.2 Experiments with TORCS
We also evaluate the imitation learning and the proposed method within the TORCS [Wymann et al., 2014] environment. In the TORCS environment, the observation is the raw screen with image size of 64 × 64 × 3, the action is a scalar indicating the steering angle in [−π, π], and the reward rt is the momentary speed. When the car crashes, a −1 reward is received and the game terminates. For the TORCS environment, a simple rule is leveraged to keep the car running and to prevent it from crashing. Therefore, we can use the rule as the optimal policy to generate expert trajectories. In addition, we generate noisy trajectories with random actions to intentionally confuse the learning algorithms, and see whether the proposed method can learn a better policy from the data generated by the deteriorated policy. We make the training data by generating 10 matches with the optimal policy and another 10 matches with the random actions.
³We note that the gap between the behavior policy and IL is partly due to the approximation we used. As we have a continuous action space, we use a Gaussian model with fixed σ, thus the variance of the learned policy may be lower than that of the behavior policy. A fair comparison should be made among IL, PG, PGIS, and MARWIL.
We train the imitation learning and the proposed method for 5 epochs to compare their performance. Table 1 shows the test scores when varying the parameter β. From the results, we see that our proposed algorithm is effective at learning a better policy from these noisy trajectories.
6.3 Experiments with King of Glory
We also evaluate the proposed algorithm with King of Glory – a mobile MOBA (Multi-player Online Battle Arena) game popular in China. In the experiments, we collect human replay files numbering in the millions, amounting to tens of billions of time steps in total. Evaluation is performed in the “solo” game mode, where an agent fights against another AI on the opposite side. A DNN-based function approximator is adopted. In a proprietary test, we find that our AI agent, trained with the proposed method, can reach the level of an experienced human player in a solo game. Additional details of the algorithm settings for King of Glory are given in Appendix B.3.
7 Conclusion
In this article, we present an off-policy learning algorithm that can form a better policy from trajectories generated by a possibly unknown policy. When learning from replay data, the proposed algorithm does not require the behavior probability π over the actions, which is usually missing in human-generated data, and also works well with function approximation and hybrid action spaces. The algorithm is preferable in real-world applications, including playing video games. Experimental results over several real-world datasets validate the effectiveness of the proposed algorithm. We note that the proposed MARWIL algorithm can also work as a full reinforcement learning method, when applied iteratively on self-generated replay data. Due to the space limitation, a thorough study of our method for full reinforcement learning is left to future work.
Acknowledgement We are grateful for the anonymous reviewers for their detailed and helpful comments on this work. We also thank our colleagues in the project of King of Glory AI, particularly Haobo Fu and Tengfei Shi, for their assistance on the game environment and parsing replay data. | 1. What is the main contribution of the paper regarding policy learning from historical data?
2. How does the proposed method differ from previous approaches, particularly in handling complex action spaces and unknown policies?
3. Can you provide more clarity or strictness in certain formulas, such as (7) - (10) in Section 6?
4. Why is there a gap between the performance of the cloned policy and the behavior policy in Figure 1, despite both not using reward information? Is this related to function approximation?
5. Are there any additional insights or justifications for the assumptions made in the paper, such as the ergodicity of the MDP with infinite horizon? | Review | Review
The paper tackles an important problem of learning a policy from historical data. In its setting, the data may be generated by an unknown policy, and have complex action space, thus making existing methods hard to apply. The authors propose a novel method which successfully solves these problems. The effectiveness is validated with theoretical justifications and experimental results. In a nutshell, the authors provide an elegant solution for the problem considered, where previous methods are likely to fail. Related works are discussed in Section 3, with the difference from their work, as well as the reasons the existing methods cannot be directly applied in their problem. From the analysis and the experiments, I think the proposed method is simple and welly suited for their problems of deep learning in complex games. Clarity: The writing is mostly clear and rigorous. Most notations and symbols used in this paper are defined in Section 2. Personally, I think formula (7) - (10) in Section 6 can be written more strictly without omitting s and a, although I get what the authors mean. Significance: I think the authors tackled an interesting and important problem in practice. I like the solution the authors proposed, which is both simple and effective. From the results in their experiments, the improvement over baseline methods is significant. In my opinion the work is likely to inspire a few future works. Questions: In Figure 1, I understand that a fair comparison should be made between EWIL and IL, which clearly shows EWIL a winner. However I am a little troubled by the gap between IL and the behavior policy. Do you have any analysis about why the performance of the cloned policy is better than the behavior policy? Since in IL you do not use any information from reward r, I expect their performance should be comparable. I guess this gap is related to the function approximation adopted? Overall, I think the paper tackles an important and interesting problem in a creative and effective way. The proposed method is analytic justifiable and empirical validated. The writing is clear and correct. For the above reasons, I think the paper ought to be accepted. ######################After Rebuttal ################################ After reading the feedback from the authors and the comments from other reviewers, I want to append some extra opinions in my comments. 1) I agree with the authors that the state distribution does not change the optimal discriminative classifier, which is the optimal policy conditioned on each state. 2) In Sec 5.1, the authors show the connection of Alg 1 and imitating a better policy. The approximation error may not be averaged on the state distribution of the improved policy. However the equivalence still holds with their definition of KL divergence with specific state distribution \rho(s). 3) For the function approximation part, personally, I think it is not important whether a policy is better than another in some states with zero state distribution probability. To me, it is a common assumption that every state considered in the state space will be visited many times, as in an ergodic MDP with infinite horizon. |
NIPS | Title
The Hessian Screening Rule
Abstract
Predictor screening rules, which discard predictors before fitting a model, have had considerable impact on the speed with which sparse regression problems, such as the lasso, can be solved. In this paper we present a new screening rule for solving the lasso path: the Hessian Screening Rule. The rule uses second-order information from the model to provide both effective screening, particularly in the case of high correlation, as well as accurate warm starts. The proposed rule outperforms all alternatives we study on simulated data sets with both low and high correlation for `1-regularized least-squares (the lasso) and logistic regression. It also performs best in general on the real data sets that we examine.
1 Introduction
High-dimensional data, where the number of features (p) exceeds the number of observations (n), poses a challenge for many classical statistical models. A common remedy for this issue is to regularize the model by penalizing the regression coefficients such that the solution becomes sparse. A popular choice of such a penalization is the `1-norm, which, when the objective is least-squares, leads to the well-known lasso [1]. More specifically, we will focus on the following convex optimization problem:
minimize_{β∈R^p} { f(β;X) + λ‖β‖₁ },    (1)
where f(β;X) is smooth and convex. We let β̂ be the solution vector for this problem and, abusing notation, equivalently let β̂ : R → R^p be a function that returns this vector for a given λ. Our focus lies in solving (1) along a regularization path λ1, λ2, . . . , λm with λ1 ≥ λ2 ≥ · · · ≥ λm. We start the path at λmax, which corresponds to the null (all-sparse) model¹, and finish at some fraction of λmax for which the model is either almost saturated (in the p ≥ n setting), or for which the solution approaches the ordinary least-squares estimate. The motivation for this focus is that the optimal λ is typically unknown and must be estimated through model tuning, such as cross-validation. This involves repeated refitting of the model to new batches of data, which is computationally demanding.
Fortunately, the introduction of so-called screening rules has improved this situation remarkably. Screening rules use tests that screen and possibly discard predictors from the model before it is fit, which effectively reduces the dimensions of the problem and leads to improvements in performance and memory usage. There are, generally speaking, two types of screening rules: safe and heuristic rules. Safe rules guarantee that discarded predictors are inactive at the optimum—heuristic rules do not and may therefore cause violations: discarding active predictors. The possibility of violations mean that heuristic methods need to validate the solution through checks of the Karush–Kuhn–Tucker (KKT) optimality conditions after optimization has concluded and, whenever there are violations, rerun optimization, which can be costly particularly because the KKT checks themselves are expensive. This means that the distinction between safe and heuristic rules only matters in regards to algorithmic
1λmax is in fact available in closed form—for the lasso it is maxj |xTj y|.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
details—all heuristic methods that we study here use KKT checks to catch these violations, which means that these methods are in fact also safe.
Screening rules can moreover also be classified as basic, sequential, or dynamic. Basic rules screen predictors based only on information available from the null model. Sequential rules use information from the previous step(s) on the regularization path to screen predictors for the next step. Finally, dynamic rules screen predictors during optimization, reducing the set of screened predictors repeatedly throughout optimization.
Notable examples of safe rules include the basic SAFE rule [2], the sphere tests [3], the R-region test [4], Slores [5], Gap Safe [6, 7], and Dynamic Sasvi [8]. There is also a group of dual polytope projection rules, most prominently Enhanced Dual Polytope Projection (EDPP) [9]. As noted by Fercoq, Gramfort, and Salmon [6], however, the sequential version of EDPP relies on exact knowledge of the optimal solution at the previous step along the path to be safe in practice, which is only available for λmax. Among the heuristic rules, we have the Strong Rule [10], SIS [11], and ExSIS [12]. But the latter two of these are not sequential rules and solve a potentially reduced form of the problem in (1)—we will not discuss them further here. In addition to these two types of rules, there has also recently been attempts to combine safe and heuristic rules into so-called hybrid rules [13].
There are various methods for employing these rules in practice. Of particular interest are so-called working set strategies, which use a subset of the screened set during optimization, iteratively updating the set based on some criterion. Tibshirani et al. [10] introduced the first working set strategy, which we in this paper will refer to simply as the working set strategy. It uses the set of predictors that have ever been active as an initial working set. After convergence on this set, it checks the KKT optimality conditions on the set of predictors selected by the strong rule, and then adds predictors that violate the conditions to the working set. This procedure is then repeated until there are no violations, at which point the optimality conditions are checked for the entire set, possibly triggering additional iterations of the procedure. Blitz [14] and Celer [15] are two other methods that use both Gap Safe screening and working sets. Instead of choosing previously active predictors as a working set, however, both Blitz and Celer assign priorities to each feature based on how close each feature is of violating the Gap Safe check and construct the working set based on this prioritization. In addition to this, Celer uses dual point acceleration to improve Gap Safe screening and speed up convergence. Both Blitz and Celer are heuristic methods.
One problem with current screening rules is that they often become conservative—including large numbers of predictors into the screened set—when dealing with predictors that are strongly correlated. Tibshirani et al. [10], for instance, demonstrated this to be the case with the strong rule, which was the motivation behind the working set strategy. (See Appendix F.4 for additional experiments verifying this). Yet because the computational complexity of the KKT checks in the working set strategy still depends on the strong rule, the effectiveness of the rule may nevertheless be hampered in this situation. A possible and—as we will soon show—powerful solution to this problem is to make use of the second-order information available from (1), and in this paper we present a novel screening rule based on this idea. Methods using second-order information (the Hessian) are often computationally infeasible for high-dimensional problems. We utilize two properties of the problem to remedy this issue: first, we need only to compute the Hessian for the active set, which is often much smaller than the full set of predictors. Second, we avoid constructing the Hessian (and its inverse) from scratch for each λ along the path, instead updating it sequentially by means of the Schur complement. The availability of the Hessian also enables us to improve the warm starts (the initial coefficient estimate at the start of each optimization run) used when fitting the regularization path, which plays a key role in our method.
We present our main results in Section 3, beginning with a reformulation of the strong rule and working set strategy before we arrive at the screening rule that represents the main result of this paper. In Section 4, we present numerical experiments on simulated and real data to showcase the effectiveness of the screening rule, demonstrating that the rule is effective both when p n and n p, out-performing the other alternatives that we study. Finally, in Section 5 we wrap up with a discussion on these results, indicating possible ways in which they may be extended.
2 Preliminaries
We use lower-case letters to denote scalars and vectors and upper-case letters for matrices. We use 0 and 1 to denote vectors with elements all equal to 0 or 1 respectively, with dimensions inferred from context. Furthermore, we let sign be the standard signum function with range {−1, 0, 1}, allowing it to be overloaded for vectors.
Let c(λ) := −∇βf ( β̂(λ);X ) be the negative gradient, or so-called correlation, and denote Aλ = {i : |c(λ)i| > λ} as the active set at λ: the support set of the non-zero regression coefficients corresponding to β̂(λ). In the interest of brevity, we will let A := Aλ. We will consider β a solution to (1) if it satisfies the stationary criterion
0 ∈ ∇βf(β;X) + λ∂. (2)
Here ∂ is the subdifferential of ‖β‖1, defined as
∂_j ∈ {sign(β̂_j)} if β̂_j ≠ 0, and ∂_j ∈ [−1, 1] otherwise.
This means that there must be a ∂̃ ∈ ∂ for a given λ such that
∇βf(β;X) + λ∂̃ = 0. (3)
3 Main Results
In this section we derive the main result of this paper: the Hessian screening rule. First, however, we now introduce a non-standard perspective on screening rules. In this approach, we note that (2) suggests a simple and general formulation for a screening rule, namely: we substitute the gradient vector in the optimality condition of a `1-regularized problem with an estimate. More precisely, we discard the jth predictor for the problem at a given λ if the magnitude of the jth component of the gradient vector estimate is smaller than this λ, that is
|c̃(λ)j | < λ. (4)
In the following sections, we review the strong rule and working set method for this problem from this perspective, that is, by viewing both methods as gradient approximations. We start with the case of the standard lasso (ℓ1-regularized least-squares), where we have f(β;X) = ½‖Xβ − y‖²₂.
3.1 The Strong Rule
The sequential strong rule for ℓ1-penalized least-squares regression [10] discards the jth predictor at λ = λk+1 if |x_j^T(Xβ̂(λk) − y)| = |c(λk)_j| < 2λk+1 − λk. This is equivalent to checking that
c̃^S(λk+1) = c(λk) + (λk − λk+1) sign(c(λk))    (5)
satisfies (4). The strong rule gradient approximation (5) is also known as the unit bound, since it assumes the gradient of the correlation vector to be bounded by one.
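A minimal NumPy sketch of this check for the lasso is given below; it assumes centered, dense data and is only meant to illustrate the rule.

import numpy as np

def strong_rule_screen(X, y, beta_prev, lam_prev, lam_next):
    # Keep predictor j if the unit-bound estimate exceeds lam_next in magnitude,
    # i.e. if |c(lam_prev)_j| >= 2 * lam_next - lam_prev.
    c_prev = X.T @ (y - X @ beta_prev)                       # correlation at lam_prev
    c_est = c_prev + (lam_prev - lam_next) * np.sign(c_prev)
    return np.where(np.abs(c_est) >= lam_next)[0]            # indices that survive screening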
3.2 The Working Set Method
A simple but remarkably effective alternative to direct use of the strong rule is the working set heuristic [10]. It begins by estimating β at the (k + 1)th step using only the coefficients that have been previously active at any point along the path, i.e. A_{1:k} = ∪_{i=1}^k A_i. The working set method can be viewed as a gradient estimate in the sense that
c̃^W(λk+1) = X^T ( y − X_{A_{1:k}} β̃(λk+1, A_{1:k}) ) = −∇f ( β̃(λk+1, A_{1:k}); X ),
where β̃(λ, A) = arg min_β ½‖y − X_A β‖² + λ‖β‖₁.
3.3 The Hessian Screening Rule
We have shown that both the strong screening rule and the working set strategy can be expressed as estimates of the correlation (negative gradient) for the next step of the regularization path. As we have discussed previously, however, basing this estimate on the strong rule can lead to conservative approximations. Fortunately, it turns out that we can produce a better estimate by utilizing secondorder information.
We start by noting that (3), in the case of the standard lasso, can be formulated as
[ X_A^T X_A        X_A^T X_{A^c}     ] [ β̂_A ]       [ sign(β̂(λ)_A) ]   [ X_A^T y     ]
[ X_{A^c}^T X_A    X_{A^c}^T X_{A^c} ] [  0   ]  + λ  [ ∂_{A^c}       ] = [ X_{A^c}^T y ],
and consequently that β̂(λ)_A = (X_A^T X_A)^{-1}(X_A^T y − λ sign(β̂_A)). Note that, for an interval [λ_l, λ_u] in which the active set is unchanged, that is, A_λ = A for all λ ∈ [λ_l, λ_u], β̂(λ) is a continuous linear function in λ (Theorem 3.1)².
Theorem 3.1. Let β̂(λ) be the solution of (1) where f(β;X) = ½‖Xβ − y‖²₂. Define
β̂^{λ∗}(λ)_{A_{λ∗}} = β̂(λ∗)_{A_{λ∗}} + (λ∗ − λ) (X_{A_{λ∗}}^T X_{A_{λ∗}})^{-1} sign(β̂(λ∗)_{A_{λ∗}})  and  β̂^{λ∗}(λ)_{A^c_{λ∗}} = 0. If for λ ∈ [λ0, λ∗] it holds that (i) sign(β̂^{λ∗}(λ)) = sign(β̂(λ∗)) and (ii) max |∇f(β̂^{λ∗}(λ))_{A^c_{λ∗}}| < λ, then β̂(λ) = β̂^{λ∗}(λ) for λ ∈ [λ0, λ∗].
See Appendix A for a full proof. Using Theorem 3.1, we have the following second-order approximation of c(λk+1):
ĉ^H(λk+1) = −∇f(β̂^{λk}(λk+1)) = c(λk) + (λk+1 − λk) X^T X_{A_k} (X_{A_k}^T X_{A_k})^{-1} sign(β̂(λk)_{A_k}).    (6)
Remark 3.2. If no changes in the active set occur in [λk+1, λk], (6) is in fact an exact expression for the correlation at the next step, that is, ĉ^H(λk+1) = c(λk+1).
One problem with using the gradient estimate in (6) is that it is expensive to compute due to the inner products involving the full design matrix. To deal with this, we use the following modification, in which we restrict the computation of these inner products to the set indexed by the strong rule, assuming that predictors outside this set remain inactive:
c̃^H(λk+1)_j := λk+1 sign(β̂(λk)_j) if j ∈ A_{λk}; 0 if |c̃^S(λk+1)_j| < λk+1 and j ∉ A_{λk}; and ĉ^H(λk+1)_j otherwise.
For high-dimensional problems, this modification leads to large computational gains and seldom proves inaccurate, given that the strong rule only rarely causes violations [10]. Lastly, we make one more adjustment to the rule, which is to add a proportion of the unit bound (used in the strong rule) to the gradient estimate:
č^H(λk+1)_j := c̃^H(λk+1)_j + γ(λk − λk+1) sign(c(λk)_j),
where γ ∈ R+. Without this adjustment there would be no upwards bias on the estimate, which would cause more violations than would be desirable. In our experiments, we have used γ = 0.01, which has worked well for most problems we have encountered. This finally leads us to the Hessian screening rule: discard the jth predictor at λk+1 if |čH(λk+1)j | < λk+1. We make one more modification in our implementation of the Hessian Screening Rule, which is to use the union of the ever-active predictors and those screened by the screening rule as our final set of screened predictors. We note that this is a marginal improvement to the rule, since violations of the rule are already quite infrequent. But it is included nonetheless, given that it comes at no cost and occasionally prevents violations.
2This result is not a new discovery [16], but is included here for convenience because the following results depend on it.
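A NumPy sketch of the rule, without the strong-rule restriction of the inner products and without the preconditioning discussed below, might look as follows; it is an illustration under these simplifying assumptions rather than our implementation.

import numpy as np

def hessian_rule_screen(X, y, beta_prev, active, lam_prev, lam_next, gamma=0.01):
    # active: indices of the active set at lam_prev
    c_prev = X.T @ (y - X @ beta_prev)
    s = np.sign(beta_prev[active])
    H_inv_s = np.linalg.solve(X[:, active].T @ X[:, active], s)
    c_est = c_prev + (lam_next - lam_prev) * (X.T @ (X[:, active] @ H_inv_s))  # estimate (6)
    c_est += gamma * (lam_prev - lam_next) * np.sign(c_prev)                   # upward bias term
    keep = np.union1d(np.where(np.abs(c_est) >= lam_next)[0], active)          # union with the active set
    return c_est, keep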
As an example of how the Hessian Screening Rule performs, we examine the screening performance of several different strategies. We fit a full regularization path to a design with n = 200, p = 20 000, and pairwise correlation between predictors of ρ. (See Section 4 and Appendix F.4 for more information on the setup.) We compute the average number of screened predictors across iterations of the coordinate descent solver. The results are displayed in Figure 1 and demonstrate that our method gracefully handles high correlation among predictors, offering a screened set that is many times smaller than those produced by the other screening strategies. In Appendix F.4 we extend these results to `1-regularized logistic regression as well and report the frequency of violations.
Recall that the strong rule bounds the gradient of its correlation vector estimate at one. For the Hessian rule, there is no such bound. This means that it is theoretically possible for the Hessian rule to include more predictors than the strong rule³. In fact, it is even possible to design special cases where the Hessian rule is more conservative than the strong rule. In practice, however, we have not encountered any situation in which this is the case.
3.3.1 Updating the Hessian
A potential drawback to using the Hessian screening rule is the computational cost of computing the Hessian and its inverse. Let A_k be the active set at step k on the lasso path. In order to use the Hessian screening rule we need H_k^{-1} = (X_{A_k}^T X_{A_k})^{-1}. Computing (X_{A_k}^T X_{A_k})^{-1} directly, however, has numerical complexity O(|A_k|³ + |A_k|²n). But if we have stored (H_{k−1}^{-1}, H_{k−1}) previously, we can utilize it to compute (H_k^{-1}, H_k) more efficiently via the so-called sweep operator [17]. We outline this technique in Algorithm 1 (Appendix B). The algorithm has a reduction step and an augmentation step; in the reduction step, we reduce the Hessian and its inverse to remove the presence of any predictors that are no longer active. In the augmentation step, we update the Hessian and its inverse to account for predictors that have just become active.
The complexity of the steps depends on the size of the sets C = A_{k−1} \ A_k, D = A_k \ A_{k−1}, and E = A_k ∩ A_{k−1}. The complexity of the reduction step is O(|C|³ + |C|²|E| + |C||E|²) and the complexity of the augmentation step is O(|D|²n + n|D||E| + |D|²|E| + |D|³) since n ≥ max(|E|, |D|). An iteration of Algorithm 1 therefore has complexity O(|D|²n + n|D||E| + |C|³ + |C||E|²). In most applications, the computationally dominant term will be n|D||E| (since, typically, n > |E| > |D| > |C|), which could be compared to evaluating the gradient for β_{A_k}, which is n(|D| + |E|) when β_{A_k^c} = 0. Note that we have so far assumed that the inverse of the Hessian exists, but this need not be the case. To deal with this issue we precondition the Hessian. See Appendix C for details.
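The augmentation step can be sketched with the standard block-inverse identity; X_E denotes the columns of the predictors kept from the previous step and X_D the newly activated ones (both names, and the use of dense NumPy arrays, are assumptions of this sketch).

import numpy as np

def augment_inverse(X_E, X_D, H_E_inv):
    # Return the inverse of [[X_E^T X_E, X_E^T X_D], [X_D^T X_E, X_D^T X_D]]
    # given H_E_inv = (X_E^T X_E)^{-1}, via the Schur complement.
    B = X_E.T @ X_D                      # cross products: the dominant n|D||E| cost
    C = X_D.T @ X_D
    A_inv_B = H_E_inv @ B
    S = C - B.T @ A_inv_B                # Schur complement
    S_inv = np.linalg.inv(S)
    top_left = H_E_inv + A_inv_B @ S_inv @ A_inv_B.T
    top_right = -A_inv_B @ S_inv
    return np.block([[top_left, top_right], [top_right.T, S_inv]])

# quick consistency check against a direct inverse
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
X_E, X_D = X[:, :5], X[:, 5:]
assert np.allclose(augment_inverse(X_E, X_D, np.linalg.inv(X_E.T @ X_E)),
                   np.linalg.inv(X.T @ X))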
3.3.2 Warm Starts
The availability of the Hessian and its inverse offers a coefficient warm start that is more accurate than the standard, naive, approach of using the estimate from the previous step. With the Hessian screening rule, we use the following warm start.
β̂(λk+1)_{A_k} := β̂(λk)_{A_k} + (λk − λk+1) H_{A_k}^{-1} sign(β̂(λk)_{A_k}),    (7)
3The chance of this happening is tied to the setting of γ.
where H_{A_k} is the Hessian matrix for the differentiable part of the objective. Our warm start is equivalent to the one used in Park and Hastie [18], but is made much more efficient here due to the efficient updates of the Hessian and its inverse that we use.
Remark 3.3. The warm start given by (7) is the exact solution at λk+1 if the active set remains constant in [λk+1, λk].
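In code, the warm start amounts to a single matrix-vector product with the stored inverse; the sketch below is illustrative and assumes dense arrays.

import numpy as np

def hessian_warm_start(beta_prev, H_inv, active, lam_prev, lam_next):
    # Shift the active coefficients along the locally linear lasso path, as in (7).
    beta = beta_prev.copy()
    beta[active] += (lam_prev - lam_next) * (H_inv @ np.sign(beta_prev[active]))
    return beta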
As a first demonstration of the value of this warm start, we look at two data sets: YearPredictionMSD and colon-cancer. We fit a full regularization path using the setup as outlined in Section 4, with or without Hessian warm starts. For YearPredictionMSD we use the standard lasso, and for colon-cancer ℓ1-regularized logistic regression.
The Hessian warm starts offer sizable reductions in the number of passes of the solver (Figure 2), for many steps requiring only a single pass to reach convergence. On inspection, this is not a surprising find. There are no changes in the active set for many of these steps, which means that the warm start is almost exact—“almost” due to the use of a preconditioner for the Hessian (see Appendix C).
3.3.3 General Loss Functions
We now venture beyond the standard lasso and consider loss functions of the form
f(β;X) = ∑_{i=1}^n f_i(x_i^T β)    (8)
where fi is convex and twice differentiable. This, for instance, includes logistic, multinomial, and Poisson loss functions. For the strong rule and working set strategy, this extension does not make much of a difference. With the Hessian screening rule, however, the situation is different.
To see this, we start by noting that our method involving the Hessian is really a quadratic Taylor approximation of (1) around a specific point β0. For loss functions of the type (8), this approximation is equal to
Q(β, β0) = f(β0;X) + ∑_{i=1}^n ( f′_i(x_i^T β0) x_i^T (β − β0) + ½ (β − β0)^T x_i f″_i(x_i^T β0) x_i^T (β − β0) )
= ½ ( ỹ(x_i^T β0) − Xβ )^T D(w(β0)) ( ỹ(x_i^T β0) − Xβ ) + C(β0),
where D(w(β0)) is a diagonal matrix with diagonal entries w(β0), where w(β0)_i = f″_i(x_i^T β0) and ỹ(z)_i = f′_i(z)/f″_i(z) − x_i^T β0, whilst C(β0) is a constant with respect to β.
Suppose that we are on the lasso path at λk and want to approximate c(λk+1). In this case, we simply replace f(β;X) in (1) with Q(β, β̂(λk)), which leads to the following gradient approximation:
c^H(λk+1) = c(λk) + (λk+1 − λk) X^T D(w) X_{A_k} (X_{A_k}^T D(w) X_{A_k})^{-1} sign(β̂(λk)_{A_k}),
where w = w ( β̂(λk) ) . Unfortunately, we cannot use Algorithm 1 to update XTAkD(w)XAk . This means that we are forced to either update the Hessian directly at each step, which can be computationally demanding when |Ak| is large and inefficient when X is very sparse, or to approximate
D(w) with an upper bound. In logistic regression, for instance, we can use 1/4 as such a bound, which also means that we once again can use Algorithm 1.
In our experiments, we have employed the following heuristic to decide whether to use an upper bound or compute the full Hessian in these cases: we use full updates at each step if sparsity(X) · n / max{n, p} < 10⁻³ and the upper bound otherwise.
3.3.4 Reducing the Impact of KKT Checks
The Hessian Screening Rule is heuristic, which means there may be violations. This necessitates that we verify the KKT conditions after having reached convergence for the screened set of predictors, and add predictors back into the working set for which these checks fail. When the screened set is small relative to p, the cost of optimization is often in large part consumed by these checks. Running these checks for the full set of predictors always needs to be done once, but if there are violations during this step, then we need repeat this check, which is best avoided. Here we describe two methods to tackle this issue.
We employ a procedure equivalent to the one used in Tibshirani et al. [10] for the working set strategy: we first check the KKT conditions for the set of predictors singled out by the strong rule and then, if there are no violations in that set, check the full set of predictors for violations. This works well because the strong rule is conservative—violations are rare—which means that we seldom need to run the KKT checks for the entire set more than once.
If we, in spite of the augmentation of the rule, run into violations when checking the full set of predictors, that is, when the strong rule fails to capture the active set, then we can still avoid repeating the full KKT check by relying on Gap Safe screening: after having run the KKT checks and have failed to converge, we screen the set of predictors using the Gap Safe rule. Because this is a safe rule, we can be sure that the predictors we discard will be inactive, which means that we will not need to include them in our upcoming KKT checks. Because Gap Safe screening and the KKT checks rely on exactly the same quantity—the correlation vector–we can do so at marginal extra cost. To see how this works, we now briefly introduce Gap Safe screening. For details, please see Fercoq, Gramfort, and Salmon [6].
For the ordinary lasso (ℓ1-regularized least squares), the primal (1) is P(β) = ½‖y − Xβ‖²₂ + λ‖β‖₁ and the corresponding dual is
D(θ) = ½‖y‖²₂ − (λ²/2) ‖θ − y/λ‖²₂    (9)
subject to ‖XT θ‖∞ ≤ 1. The duality gap is then G(β, θ) = P (β)−D(θ) and the relation between the primal and dual problems is given by y = λθ̂+Xβ̂, where θ̂ is the maximizer to the dual problem (9). In order to use Gap Safe screening, we need a feasible dual point, which can be obtained via dual point scaling, taking θ = (y−Xβ) / max ( λ, ‖XT (y−Xβ)‖∞ ) . The Gap Safe screening rule then
discards the jth feature if |x_j^T θ| < 1 − ‖x_j‖₂ √(2G(β, θ)/λ²). Since we have computed X^T(y − Xβ) as part of the KKT checks, we can perform Gap Safe screening at an additional (and marginal) cost amounting to O(n) + O(p).
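The check can be sketched as follows for the lasso; the function recomputes the residual correlations here for clarity, although in our solver they would be reused from the KKT check.

import numpy as np

def gap_safe_keep(X, y, beta, lam):
    r = y - X @ beta
    corr = X.T @ r
    theta = r / max(lam, np.abs(corr).max())                  # dual point scaling
    primal = 0.5 * r @ r + lam * np.abs(beta).sum()
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    radius = np.sqrt(2.0 * gap / lam ** 2)
    # keep feature j unless |x_j^T theta| < 1 - ||x_j||_2 * radius
    return np.where(np.abs(X.T @ theta) >= 1.0 - np.linalg.norm(X, axis=0) * radius)[0]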
Since this augmentation benefits the working set strategy too, we adopt it in our implementation of this method as well. To avoid ambiguity, we call this version working+. Note that this makes the working set strategy quite similar to Blitz. In Appendix F.8 we show the benefit of adding this type of screening.
3.3.5 Final Algorithm
The Hessian screening method is presented in full in Algorithm 2 (Appendix B).
Lemma 3.4. Let β ∈ Rp×m be the output of Algorithm 2 for a path of length m and convergence threshold ε > 0. For each step k along the path and corresponding solution β(k) ∈ Rp, there is a dual-feasible point θ(k) such that G(β(k), θ(k)) < ζε.
Proof. First note that Gap Safe screening [7, Theorem 6] ensures that G ⊇ A_k. Next, note that the algorithm guarantees that the working set, W, grows with each iteration until |x_j^T r| < λ_k for all j ∈ G \ W, at which point

max(λ_k, ‖X_W^T(y − X_W β^(k)_W)‖∞) = max(λ_k, ‖X_G^T(y − X_G β^(k)_G)‖∞).

At this iteration, convergence at line 2, for the subproblem (X_W, y), guarantees convergence for the full problem, (X, y), since

θ^(k) = (y − X_W β^(k)_W) / max(λ_k, ‖X_W^T(y − X_W β^(k)_W)‖∞)

is dual-feasible for the full problem.
3.3.6 Extensions
Approximate Homotopy In addition to improved screening and warm starts, the Hessian also allows us to construct the regularization path adaptively via approximate homotopy [19]. In brief, the Hessian screening rule allows us to choose the next λ along the path adaptively, in effect distributing the grid of λs to better approach the exact (homotopy) solution for the lasso, avoiding the otherwise heuristic choice, which can be inappropriate for some data sets.
Elastic Net Our method can be extended to the elastic net [20], which corresponds to adding a quadratic penalty φ‖β‖²₂/2 to (1). The Hessian now takes the form X_A^T X_A + φI. Loosely speaking, the addition of this term makes the problem "more" quadratic, which in turn improves both the accuracy and stability of the screening and warm starts we use in our method. As far as we know, however, there is unfortunately no way to update the inverse of the Hessian efficiently in the case of the elastic net. More research in this area would be welcome.
4 Experiments
Throughout the following experiments, we center and scale predictors with the mean and uncorrected sample standard deviation, respectively. For the lasso, we also center the response vector, y, with the mean.
To construct the regularization path, we adopt the default settings from glmnet: we use a log-spaced path of 100 λ values from λmax to ξλmax, where ξ = 10^{-2} if p > n and 10^{-4} otherwise. We stop the path whenever the deviance ratio, 1 − dev/dev_null, reaches 0.999 or the fractional decrease in deviance is less than 10^{-5}. Finally, we also stop the path whenever the number of predictors that have ever been active reaches p.
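A small sketch of this path construction (variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def lambda_path(X, y, n_lambda=100):
    """Log-spaced regularization path from lam_max down to xi * lam_max."""
    n, p = X.shape
    lam_max = np.max(np.abs(X.T @ y))        # closed form for the lasso (centered y)
    xi = 1e-2 if p > n else 1e-4
    return np.geomspace(lam_max, xi * lam_max, num=n_lambda)
```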
We compare our method against working+ (the modified version of the working set strategy from Tibshirani et al. [10]), Celer [15], and Blitz [14]. We initially also ran our comparisons against EDPP [9], the Gap Safe rule [6], and Dynamic Sasvi [8], yet these methods performed so poorly that we omit the results in the main part of this work. The interested reader may nevertheless consult Appendix F.6, where results from simulated data have been included for these methods too.
We use cyclical coordinate descent with shuffling and consider the model to converge when the duality gap G(β, θ) ≤ εζ, where we take ζ to be ‖y‖²₂ when fitting the ordinary lasso, and n log 2 when fitting ℓ1-regularized logistic regression. Unless specified, we let ε = 10^{-4}. These are standard settings and, for instance, resemble the defaults used in Celer. For all of the experiments, we employ the line search algorithm used in Blitz⁴.
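For concreteness, the convergence threshold εζ described above could be computed as in this short, illustrative sketch:

```python
import numpy as np

def gap_threshold(y, task="lasso", eps=1e-4):
    """Return eps * zeta, with zeta = ||y||_2^2 for the lasso and n*log(2) for logistic regression."""
    zeta = float(y @ y) if task == "lasso" else y.shape[0] * np.log(2.0)
    return eps * zeta
```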
The code used in these experiments was, for every method, programmed in C++ using the Armadillo library [21, 22] and organized as an R package via Rcpp [23]. We used the renv package [24] to maintain dependencies. The source code, including a Singularity [25] container and its recipe for reproducing the results, is available at https://github.com/jolars/HessianScreening. Additional details of the computational setup are provided in Appendix D.
⁴Without the line search, all of the tested methods ran into convergence issues, particularly for the high-correlation setting and logistic regression.
4.1 Simulated Data
Let X ∈ Rn×p, β ∈ Rp, and y ∈ Rn be the predictor matrix, coefficient vector, and response vector respectively. We draw the rows of the predictor matrix independently and identically distributed from N (0,Σ) and generate the response from N (Xβ, σ2I) with σ2 = βTΣβ/SNR, where SNR is the signal-to-noise ratio. We set s coefficients, equally spaced throughout the coefficient vector, to 1 and the rest to zero.
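A sketch of this data-generating process follows; the covariance is assumed here, purely for illustration, to be equicorrelated with off-diagonal entries ρ (the text only states that rows are drawn from N(0, Σ)).

```python
import numpy as np

def simulate(n, p, s, snr, rho, seed=0):
    """Draw X ~ N(0, Sigma) row-wise and y ~ N(X beta, sigma^2 I), sigma^2 = beta' Sigma beta / SNR."""
    rng = np.random.default_rng(seed)
    Sigma = np.full((p, p), rho)
    np.fill_diagonal(Sigma, 1.0)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[np.linspace(0, p - 1, num=s, dtype=int)] = 1.0     # s equally spaced nonzeros
    sigma2 = beta @ Sigma @ beta / snr
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    return X, beta, y

X, beta, y = simulate(n=400, p=500, s=20, snr=2, rho=0.5)
```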
In our simulations, we consider two scenarios: a low-dimensional scenario and a high-dimensional scenario. In the former, we set n = 10 000, p = 100, s = 5, and the SNR to 1. In the high-dimensional scenario, we take n = 400, p = 40 000, s = 20, and set the SNR to 2. These SNR values are inspired by the discussion in Hastie, Tibshirani, and Tibshirani [26] and are intended to cover the middle ground in terms of signal strength. We run our simulations for 20 iterations.
From Figure 3, it is clear that the Hessian screening rule performs best, taking the least time in every setting examined. The difference is largest for the high-correlation context in the low-dimensional setting and otherwise roughly the same across levels of correlation.
The differences between the other methods are on average small, with the working+ strategy performing slightly better in the p > n scenario. Celer and Blitz perform largely on par with one another, although Celer sees an improvement in a few of the experiments, for instance in logistic regression when p > n.
4.2 Real Data
In this section, we conduct experiments on real data sets. We run 20 iterations for the smaller data sets studied and three for the larger ones. For information on the sources of these data sets, please see Appendix E. For more detailed results of these experiments, please see Appendix F.5.
Starting with the case of `1-regularized least-squares regression, we observe that the Hessian screening rule performs best for all five data sets tested here (Table 1), in all but one instance taking less than half the time compared to the runner-up, which in each case is the working+ strategy. The difference is particularly large for the YearPredictionMSD and e2006-tfidf data sets.
In the case of `1-regularized logistic regression, the Hessian method again performs best for most of the examined data sets, for instance completing the regularization path for the madelon data set around five times faster than the working+ strategy. The exception is the arcene data set, for which the working+ strategy performs best out of the four methods.
We have provided additional results related to the effectiveness of our method in Appendix F.
5 Discussion
We have presented the Hessian Screening Rule: a new heuristic predictor screening rule for ℓ1-regularized generalized linear models. We have shown that our screening rule offers large performance improvements over competing methods, both in simulated experiments and in the majority of the real data sets that we study here. The improved performance of the rule appears to come not only from improved effectiveness in screening, particularly in the high-correlation setting, but also from the much-improved warm starts, which enable our method to dominate in the n ≫ p setting. Note that although we have focused on ℓ1-regularized least-squares and logistic regression here, our rule is applicable to any composite objective for which the differentiable part is twice-differentiable.
One limitation of our method is that it consumes more memory than its competitors owing to the storage of the Hessian and its inverse. This cost may become prohibitive for cases when min{n, p} is large. In these situations the next-best choice may instead be the working set strategy. Note also that we, in this paper, focus entirely on the lasso path. The Hessian Screening Rule is a sequential rule and may therefore not prove optimal when solving for a single λ, in which case a dynamic strategy such as Celer and Blitz likely performs better.
With respect to the relative performance of the working set strategy, Celer, and Blitz, we note that our results deviate somewhat from previous comparisons [15, 14]. We speculate that these differences might arise from the fact that we have used equivalent implementations for all of the methods and from the modification that we have used for the working set strategy.
Many avenues remain to be explored in the context of Hessian-based screening rules and algorithms, such as developing more efficient methods for updating the Hessian matrix for non-least-squares objectives (such as logistic regression), and using second-order information to further improve the optimization method used. Other interesting directions include adapting the rules to more complicated regularization problems, such as the fused lasso [27], SLOPE [28], SCAD [29], and MCP [30]. Although the latter two of these are non-convex problems, they are locally convex for intervals of the regularization path [31], which enables the use of our method. Adapting the method for use in batch stochastic gradient descent would also be an interesting topic for further study, for instance by using methods such as the ones outlined in Asar et al. [32] to ensure that the Hessian remains positive definite.
Finally, we do not expect there to be any negative societal consequences of our work given that it is aimed solely at improving the performance of an optimization method.
Acknowledgments and Disclosure of Funding
We would like to thank Małgorzata Bogdan for valuable comments. This work was funded by the Swedish Research Council through grant agreement no. 2020-05081 and no. 2018-01726. The computations were enabled by resources provided by LUNARC. The results shown here are in part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga. | 1. What is the focus of the paper regarding l1-regularized estimation?
2. What are the strengths of the proposed method, particularly in its formulation and experimental performance?
3. Are there any weaknesses or areas for improvement regarding the proposed method's applicability and scalability? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper proposes a heuristic (non-safe) screening rule to deal with l1-regularized estimation problems such as linear regression and logistic regression. The proposed method can be viewed as a generalization of the strong rule used in glmnet, which only uses first-order information. The paper reports experiments on both synthetic data and real-world data to illustrate the performance of the proposed method.
Strengths And Weaknesses
Strengths:
The paper addresses an interesting problem in l1-regularized estimation for linear regression and logistic regression. Screening rules are an effective strategy to speed up such estimation.
Although the proposed rule is heuristic in nature, the simplicity of its formulation and the effectiveness shown in the experiments are advantages of the proposed method. This is also a not-so-common approach in that it makes use of second-order information for screening.
Experiments are exhaustive: six well-known alternative methods are compared on a wide variety of synthetic and real-world data.
Weakness:
The authors may consider carrying out experiments on other settings covered by the proposed method, such as Poisson regression and the elastic net. It may also be interesting (somewhat orthogonal) to understand how the proposed method performs or scales compared to SGD-based approaches on very large datasets.
Questions
I don't have additional questions for the authors.
Limitations
Limitations are discussed in the paper. |
NIPS | Title
The Hessian Screening Rule
Abstract
Predictor screening rules, which discard predictors before fitting a model, have had considerable impact on the speed with which sparse regression problems, such as the lasso, can be solved. In this paper we present a new screening rule for solving the lasso path: the Hessian Screening Rule. The rule uses second-order information from the model to provide both effective screening, particularly in the case of high correlation, as well as accurate warm starts. The proposed rule outperforms all alternatives we study on simulated data sets with both low and high correlation for `1-regularized least-squares (the lasso) and logistic regression. It also performs best in general on the real data sets that we examine.
1 Introduction
High-dimensional data, where the number of features (p) exceeds the number of observations (n), poses a challenge for many classical statistical models. A common remedy for this issue is to regularize the model by penalizing the regression coefficients such that the solution becomes sparse. A popular choice of such a penalization is the `1-norm, which, when the objective is least-squares, leads to the well-known lasso [1]. More specifically, we will focus on the following convex optimization problem:
minimize_{β ∈ R^p} { f(β;X) + λ‖β‖₁ }, (1)

where f(β;X) is smooth and convex. We let β̂ be the solution vector for this problem and, abusing notation, equivalently let β̂ : R ↦ R^p be a function that returns this vector for a given λ. Our focus lies in solving (1) along a regularization path λ1, λ2, . . . , λm with λ1 ≥ λ2 ≥ · · · ≥ λm. We start the path at λmax, which corresponds to the null (all-sparse) model¹, and finish at some fraction of λmax for which the model is either almost saturated (in the p ≥ n setting), or for which the solution approaches the ordinary least-squares estimate. The motivation for this focus is that the optimal λ is typically unknown and must be estimated through model tuning, such as cross-validation. This involves repeated refitting of the model to new batches of data, which is computationally demanding.
Fortunately, the introduction of so-called screening rules has improved this situation remarkably. Screening rules use tests that screen and possibly discard predictors from the model before it is fit, which effectively reduces the dimensions of the problem and leads to improvements in performance and memory usage. There are, generally speaking, two types of screening rules: safe and heuristic rules. Safe rules guarantee that discarded predictors are inactive at the optimum—heuristic rules do not and may therefore cause violations: discarding active predictors. The possibility of violations means that heuristic methods need to validate the solution through checks of the Karush–Kuhn–Tucker (KKT) optimality conditions after optimization has concluded and, whenever there are violations, rerun optimization, which can be costly, particularly because the KKT checks themselves are expensive. This means that the distinction between safe and heuristic rules only matters with regard to algorithmic
¹λmax is in fact available in closed form; for the lasso it is max_j |x_j^T y|.
details—all heuristic methods that we study here use KKT checks to catch these violations, which means that these methods are in fact also safe.
Screening rules can also be classified as basic, sequential, or dynamic. Basic rules screen predictors based only on information available from the null model. Sequential rules use information from the previous step(s) on the regularization path to screen predictors for the next step. Finally, dynamic rules screen predictors during optimization, reducing the set of screened predictors repeatedly throughout optimization.
Notable examples of safe rules include the basic SAFE rule [2], the sphere tests [3], the R-region test [4], Slores [5], Gap Safe [6, 7], and Dynamic Sasvi [8]. There is also a group of dual polytope projection rules, most prominently Enhanced Dual Polytope Projection (EDPP) [9]. As noted by Fercoq, Gramfort, and Salmon [6], however, the sequential version of EDPP relies on exact knowledge of the optimal solution at the previous step along the path to be safe in practice, which is only available for λmax. Among the heuristic rules, we have the Strong Rule [10], SIS [11], and ExSIS [12]. But the latter two of these are not sequential rules and solve a potentially reduced form of the problem in (1)—we will not discuss them further here. In addition to these two types of rules, there has also recently been attempts to combine safe and heuristic rules into so-called hybrid rules [13].
There are various methods for employing these rules in practice. Of particular interest are so-called working set strategies, which use a subset of the screened set during optimization, iteratively updating the set based on some criterion. Tibshirani et al. [10] introduced the first working set strategy, which we in this paper will refer to simply as the working set strategy. It uses the set of predictors that have ever been active as an initial working set. After convergence on this set, it checks the KKT optimality conditions on the set of predictors selected by the strong rule, and then adds predictors that violate the conditions to the working set. This procedure is then repeated until there are no violations, at which point the optimality conditions are checked for the entire set, possibly triggering additional iterations of the procedure. Blitz [14] and Celer [15] are two other methods that use both Gap Safe screening and working sets. Instead of choosing previously active predictors as a working set, however, both Blitz and Celer assign priorities to each feature based on how close each feature is of violating the Gap Safe check and construct the working set based on this prioritization. In addition to this, Celer uses dual point acceleration to improve Gap Safe screening and speed up convergence. Both Blitz and Celer are heuristic methods.
One problem with current screening rules is that they often become conservative—including large numbers of predictors into the screened set—when dealing with predictors that are strongly correlated. Tibshirani et al. [10], for instance, demonstrated this to be the case with the strong rule, which was the motivation behind the working set strategy. (See Appendix F.4 for additional experiments verifying this). Yet because the computational complexity of the KKT checks in the working set strategy still depends on the strong rule, the effectiveness of the rule may nevertheless be hampered in this situation. A possible and—as we will soon show—powerful solution to this problem is to make use of the second-order information available from (1), and in this paper we present a novel screening rule based on this idea. Methods using second-order information (the Hessian) are often computationally infeasible for high-dimensional problems. We utilize two properties of the problem to remedy this issue: First, we need only to compute the Hessian for the active set, which is often much smaller than the full set of predictors. Second, we avoid constructing the Hessian (and its inverse) from scratch for each λ along the path, instead updating it sequentially by means of the Schur complement. The availability of the Hessian also enables us to improve the warm starts (the initial coefficient estimate at the start of each optimization run) used when fitting the regularization path, which plays a key role in our method.
We present our main results in Section 3, beginning with a reformulation of the strong rule and working set strategy before we arrive at the screening rule that represents the main result of this paper. In Section 4, we present numerical experiments on simulated and real data to showcase the effectiveness of the screening rule, demonstrating that the rule is effective both when p ≫ n and when n ≫ p, out-performing the other alternatives that we study. Finally, in Section 5 we wrap up with a discussion on these results, indicating possible ways in which they may be extended.
2 Preliminaries
We use lower-case letters to denote scalars and vectors and upper-case letters for matrices. We use 0 and 1 to denote vectors with elements all equal to 0 or 1 respectively, with dimensions inferred from context. Furthermore, we let sign be the standard signum function with range {−1, 0, 1}, allowing it to be overloaded for vectors.
Let c(λ) := −∇βf ( β̂(λ);X ) be the negative gradient, or so-called correlation, and denote Aλ = {i : |c(λ)i| > λ} as the active set at λ: the support set of the non-zero regression coefficients corresponding to β̂(λ). In the interest of brevity, we will let A := Aλ. We will consider β a solution to (1) if it satisfies the stationary criterion
0 ∈ ∇βf(β;X) + λ∂. (2)
Here ∂ is the subdifferential of ‖β‖1, defined as
∂_j ∈ {sign(β̂_j)} if β̂_j ≠ 0, and ∂_j ∈ [−1, 1] otherwise.
This means that there must be a ∂̃ ∈ ∂ for a given λ such that
∇βf(β;X) + λ∂̃ = 0. (3)
3 Main Results
In this section we derive the main result of this paper: the Hessian screening rule. First, however, we now introduce a non-standard perspective on screening rules. In this approach, we note that (2) suggests a simple and general formulation for a screening rule, namely: we substitute the gradient vector in the optimality condition of a `1-regularized problem with an estimate. More precisely, we discard the jth predictor for the problem at a given λ if the magnitude of the jth component of the gradient vector estimate is smaller than this λ, that is
|c̃(λ)j | < λ. (4)
In the following sections, we review the strong rule and working set method for this problem from this perspective, that is, by viewing both methods as gradient approximations. We start with the case of the standard lasso (ℓ1-regularized least-squares), where we have f(β;X) = (1/2)‖Xβ − y‖²₂.
3.1 The Strong Rule
The sequential strong rule for ℓ1-penalized least-squares regression [10] discards the jth predictor at λ = λ_{k+1} if |x_j^T(Xβ̂(λ_k) − y)| = |c(λ_k)_j| < 2λ_{k+1} − λ_k. This is equivalent to checking that
c̃S(λk+1) = c(λk) + (λk − λk+1) sign(c(λk)) (5)
satisfies (4). The strong rule gradient approximation (5) is also known as the unit bound, since it assumes the gradient of the correlation vector to be bounded by one.
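For illustration (a sketch only, with placeholder inputs), the sequential strong rule check for one path step can be written as:

```python
import numpy as np

def strong_rule_keep(X, y, beta_prev, lam_prev, lam_next):
    """Indices kept by the sequential strong rule: |c(lam_prev)_j| >= 2*lam_next - lam_prev."""
    c_prev = X.T @ (y - X @ beta_prev)
    return np.flatnonzero(np.abs(c_prev) >= 2.0 * lam_next - lam_prev)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = rng.standard_normal(50)
lam_max = np.max(np.abs(X.T @ y))
kept = strong_rule_keep(X, y, np.zeros(200), lam_max, 0.9 * lam_max)
print(kept.size)
```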
3.2 The Working Set Method
A simple but remarkably effective alternative to direct use of the strong rule is the working set heuristic [10]. It begins by estimating β at the (k + 1)th step using only the coefficients that have been previously active at any point along the path, i.e. A_{1:k} = ∪_{i=1}^{k} A_i. The working set method can be viewed as a gradient estimate in the sense that
c̃^W(λ_{k+1}) = X^T( y − X_{A_{1:k}} β̃(λ_{k+1}, A_{1:k}) ) = −∇f( β̃(λ_{k+1}, A_{1:k}); X ),

where β̃(λ, A) = argmin_β (1/2)‖y − X_A β‖²₂ + λ‖β‖₁.
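For illustration, the working-set gradient estimate can be formed from any lasso solver for the subproblem restricted to the ever-active set; the bare-bones ISTA routine below is introduced only to make the sketch self-contained and is not the coordinate descent solver used in the paper.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimal ISTA solver for 0.5 * ||y - X b||_2^2 + lam * ||b||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b + X.T @ (y - X @ b) / L, lam / L)
    return b

def working_set_correlation(X, y, ever_active, lam_next):
    """c_tilde^W(lam_next): full gradient evaluated at a fit on the ever-active set."""
    beta_sub = lasso_ista(X[:, ever_active], y, lam_next)
    return X.T @ (y - X[:, ever_active] @ beta_sub)
```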
3.3 The Hessian Screening Rule
We have shown that both the strong screening rule and the working set strategy can be expressed as estimates of the correlation (negative gradient) for the next step of the regularization path. As we have discussed previously, however, basing this estimate on the strong rule can lead to conservative approximations. Fortunately, it turns out that we can produce a better estimate by utilizing second-order information.
We start by noting that (3), in the case of the standard lasso, can be formulated as

[ X_A^T X_A    X_A^T X_{A^c} ;  X_{A^c}^T X_A    X_{A^c}^T X_{A^c} ] [ β̂_A ; 0 ] + λ [ sign(β̂(λ)_A) ; ∂_{A^c} ] = [ X_A^T y ; X_{A^c}^T y ],

and consequently that β̂(λ)_A = (X_A^T X_A)^{-1}(X_A^T y − λ sign(β̂_A)). Note that, for an interval [λ_l, λ_u] in which the active set is unchanged, that is, A_λ = A for all λ ∈ [λ_l, λ_u], β̂(λ) is a continuous linear function of λ (Theorem 3.1)².

Theorem 3.1. Let β̂(λ) be the solution of (1) where f(β;X) = (1/2)‖Xβ − y‖²₂. Define

β̂^{λ*}(λ)_{A_{λ*}} = β̂(λ*)_{A_{λ*}} − (λ* − λ)(X_{A_{λ*}}^T X_{A_{λ*}})^{-1} sign(β̂(λ*)_{A_{λ*}})  and  β̂^{λ*}(λ)_{A^c_{λ*}} = 0.

If for λ ∈ [λ_0, λ*] it holds that (i) sign(β̂^{λ*}(λ)) = sign(β̂(λ*)) and (ii) max|∇f(β̂^{λ*}(λ))_{A^c_{λ*}}| < λ, then β̂(λ) = β̂^{λ*}(λ) for λ ∈ [λ_0, λ*].
See Appendix A for a full proof. Using Theorem 3.1, we have the following second-order approximation of c(λk+1):
ĉ^H(λ_{k+1}) = −∇f( β̂^{λ_k}(λ_{k+1}) ) = c(λ_k) + (λ_{k+1} − λ_k) X^T X_{A_k}(X_{A_k}^T X_{A_k})^{-1} sign( β̂(λ_k)_{A_k} ). (6)
Remark 3.2. If no changes in the active set occur in [λ_{k+1}, λ_k], (6) is in fact an exact expression for the correlation at the next step, that is, ĉ^H(λ_{k+1}) = c(λ_{k+1}).
One problem with using the gradient estimate in (6) is that it is expensive to compute due to the inner products involving the full design matrix. To deal with this, we use the following modification, in which we restrict the computation of these inner products to the set indexed by the strong rule, assuming that predictors outside this set remain inactive:
c̃^H(λ_{k+1})_j := λ_{k+1} sign(β̂(λ_k)_j) if j ∈ A_{λ_k};  0 if |c̃^S(λ_{k+1})_j| < λ_{k+1} and j ∉ A_{λ_k};  ĉ^H(λ_{k+1})_j otherwise.
For high-dimensional problems, this modification leads to large computational gains and seldom proves inaccurate, given that the strong rule only rarely causes violations [10]. Lastly, we make one more adjustment to the rule, which is to add a proportion of the unit bound (used in the strong rule) to the gradient estimate:
č^H(λ_{k+1})_j := c̃^H(λ_{k+1})_j + γ(λ_k − λ_{k+1}) sign(c(λ_k)_j),
where γ ∈ R+. Without this adjustment there would be no upwards bias on the estimate, which would cause more violations than would be desirable. In our experiments, we have used γ = 0.01, which has worked well for most problems we have encountered. This finally leads us to the Hessian screening rule: discard the jth predictor at λk+1 if |čH(λk+1)j | < λk+1. We make one more modification in our implementation of the Hessian Screening Rule, which is to use the union of the ever-active predictors and those screened by the screening rule as our final set of screened predictors. We note that this is a marginal improvement to the rule, since violations of the rule are already quite infrequent. But it is included nonetheless, given that it comes at no cost and occasionally prevents violations.
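Putting the pieces together, a minimal NumPy sketch of one screening step might look as follows; it forms the Hessian block directly for clarity (rather than updating it as in Algorithm 1) and uses γ = 0.01 as in the text.

```python
import numpy as np

def hessian_rule_screen(X, y, beta_prev, active, lam_prev, lam_next, gamma=0.01):
    """Return the indices kept at lam_next by the Hessian screening rule (standard lasso)."""
    c_prev = X.T @ (y - X @ beta_prev)                 # c(lam_prev)
    XA = X[:, active]
    H = XA.T @ XA                                      # Hessian block X_A^T X_A
    s = np.sign(beta_prev[active])
    direction = X.T @ (XA @ np.linalg.solve(H, s))     # X^T X_A H^{-1} s
    c_hat = c_prev + (lam_next - lam_prev) * direction              # estimate (6)
    c_check = c_hat + gamma * (lam_prev - lam_next) * np.sign(c_prev)  # upward-bias term
    keep = np.abs(c_check) >= lam_next
    keep[np.asarray(active)] = True                    # union with the active set
    return np.flatnonzero(keep)
```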
2This result is not a new discovery [16], but is included here for convenience because the following results depend on it.
As an example of how the Hessian Screening Rule performs, we examine the screening performance of several different strategies. We fit a full regularization path to a design with n = 200, p = 20 000, and pairwise correlation between predictors of ρ. (See Section 4 and Appendix F.4 for more information on the setup.) We compute the average number of screened predictors across iterations of the coordinate descent solver. The results are displayed in Figure 1 and demonstrate that our method gracefully handles high correlation among predictors, offering a screened set that is many times smaller than those produced by the other screening strategies. In Appendix F.4 we extend these results to `1-regularized logistic regression as well and report the frequency of violations.
Recall that the Strong rule bounds the gradient of its correlation vector estimate at one. For the Hessian rule, there is no such bound. This means that it is theoretically possible for the Hessian rule to include more predictors than the Strong rule³. In fact, it is even possible to design special cases where the Hessian rule could be more conservative than the Strong rule. In practice, however, we have not encountered any situation in which this is the case.
3.3.1 Updating the Hessian
A potential drawback to using the Hessian screening rule is the computational costs of computing the Hessian and its inverse. Let A_k be the active set at step k on the lasso path. In order to use the Hessian screening rule we need H_k^{-1} = (X_{A_k}^T X_{A_k})^{-1}. Computing (X_{A_k}^T X_{A_k})^{-1} directly, however, has numerical complexity O(|A_k|³ + |A_k|²n). But if we have stored (H_{k−1}^{-1}, H_{k−1}) previously, we can utilize it to compute (H_k^{-1}, H_k) more efficiently via the so-called sweep operator [17]. We outline this technique in Algorithm 1 (Appendix B). The algorithm has a reduction step and an augmentation step; in the reduction step, we reduce the Hessian and its inverse to remove the presence of any predictors that are no longer active. In the augmentation step, we update the Hessian and its inverse to account for predictors that have just become active.
The complexity of the steps depends on the size of the sets C = A_{k−1} \ A_k, D = A_k \ A_{k−1}, and E = A_k ∩ A_{k−1}. The complexity of the reduction step is O(|C|³ + |C|²|E| + |C||E|²) and the complexity of the augmentation step is O(|D|²n + n|D||E| + |D|²|E| + |D|³) since n ≥ max(|E|, |D|). An iteration of Algorithm 1 therefore has complexity O(|D|²n + n|D||E| + |C|³ + |C||E|²). In most applications, the computationally dominant term will be n|D||E| (since, typically, n > |E| > |D| > |C|), which could be compared to evaluating the gradient for β_{A_k}, which is n(|D| + |E|) when β_{A_k^c} = 0. Note that we have so far assumed that the inverse of the Hessian exists, but this need not be the case. To deal with this issue we precondition the Hessian. See Appendix C for details.
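The augmentation step can be illustrated with the standard block-inverse (Schur complement) identity; this is a sketch of the underlying linear algebra only, not a transcription of Algorithm 1, and it assumes the new block is well conditioned.

```python
import numpy as np

def augment_inverse(H, H_inv, X_old, X_new):
    """Update (H, H_inv) for H = X^T X when the columns X_new join X_old.

    Uses the block-inverse identity with Schur complement
    S = B - A^T H^{-1} A, where A = X_old^T X_new and B = X_new^T X_new.
    """
    A = X_old.T @ X_new
    B = X_new.T @ X_new
    HinvA = H_inv @ A
    S_inv = np.linalg.inv(B - A.T @ HinvA)           # inverse of the Schur complement
    top_left = H_inv + HinvA @ S_inv @ HinvA.T
    top_right = -HinvA @ S_inv
    H_new = np.block([[H, A], [A.T, B]])
    H_inv_new = np.block([[top_left, top_right], [top_right.T, S_inv]])
    return H_new, H_inv_new

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 6))
H = X[:, :4].T @ X[:, :4]
H2, H2_inv = augment_inverse(H, np.linalg.inv(H), X[:, :4], X[:, 4:])
print(np.allclose(H2 @ H2_inv, np.eye(6)))           # True
```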
3.3.2 Warm Starts
The availability of the Hessian and its inverse offers a coefficient warm start that is more accurate than the standard, naive, approach of using the estimate from the previous step. With the Hessian screening rule, we use the following warm start.
β̂(λ_{k+1})_{A_k} := β̂(λ_k)_{A_k} + (λ_k − λ_{k+1}) H_{A_k}^{-1} sign( β̂(λ_k)_{A_k} ), (7)
3The chance of this happening is tied to the setting of γ.
where H_{A_k}^{-1} is the inverse of the Hessian of the differentiable part of the objective, restricted to A_k. Our warm start is equivalent to the one used in Park and Hastie [18], but is here made much more efficient due to the efficient updates of the Hessian and its inverse that we use. Remark 3.3. The warm start given by (7) is the exact solution at λ_{k+1} if the active set remains constant in [λ_{k+1}, λ_k].
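A sketch of the warm start (7); `H_inv_active` is the stored inverse Hessian block for the current active set, and all names are illustrative.

```python
import numpy as np

def hessian_warm_start(beta_prev, active, H_inv_active, lam_prev, lam_next):
    """Warm start for lam_next from the solution at lam_prev, following (7)."""
    beta0 = beta_prev.copy()
    beta0[active] += (lam_prev - lam_next) * (H_inv_active @ np.sign(beta_prev[active]))
    return beta0
```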
As a first demonstration of the value of this warm start, we look at two data sets: YearPredictionMSD and colon-cancer. We fit a full regularization path using the setup as outlined in Section 4, with or without Hessian warm starts. For YearPredictionMSD we use the standard lasso, and for colon-cancer ℓ1-regularized logistic regression.
The Hessian warm starts offer sizable reductions in the number of passes of the solver (Figure 2), for many steps requiring only a single pass to reach convergence. On inspection, this is not a surprising finding. There are no changes in the active set for many of these steps, which means that the warm start is almost exact—"almost" due to the use of a preconditioner for the Hessian (see Appendix C).
3.3.3 General Loss Functions
We now venture beyond the standard lasso and consider loss functions of the form
f(β;X) = Σ_{i=1}^{n} f_i(x_i^T β) (8)
where fi is convex and twice differentiable. This, for instance, includes logistic, multinomial, and Poisson loss functions. For the strong rule and working set strategy, this extension does not make much of a difference. With the Hessian screening rule, however, the situation is different.
To see this, we start by noting that our method involving the Hessian is really a quadratic Taylor approximation of (1) around a specific point β0. For loss functions of the type (8), this approximation is equal to
Q(β, β0) = f(β0;X) + Σ_{i=1}^{n} ( x_i^T f_i'(x_i^T β0)(β − β0) + (1/2)(β − β0)^T x_i f_i''(x_i^T β0) x_i^T (β − β0) )
= (1/2)( ỹ(Xβ0) − Xβ )^T D(w(β0)) ( ỹ(Xβ0) − Xβ ) + C(β0),

where D(w(β0)) is a diagonal matrix with diagonal entries w(β0), where w(β0)_i = f_i''(x_i^T β0) and ỹ(z)_i = f_i'(z_i)/f_i''(z_i) − x_i^T β0, whilst C(β0) is a constant with respect to β.
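As a concrete (hedged) example, for a logistic loss of the form f_i(z) = log(1 + e^z) − y_i z with y_i ∈ {0, 1} (one common parameterization, assumed here only for illustration), the pieces of the quadratic approximation can be computed as in this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quadratic_pieces_logistic(X, y, beta0, use_upper_bound=False):
    """Weights w(beta0) and working response y_tilde for Q(., beta0).

    Assumes f_i(z) = log(1 + exp(z)) - y_i * z, so that
    f_i'(z) = sigmoid(z) - y_i and f_i''(z) = sigmoid(z) * (1 - sigmoid(z)).
    With use_upper_bound=True the curvature is replaced by the bound 1/4.
    """
    z = X @ beta0
    g = sigmoid(z) - y
    w = np.full_like(z, 0.25) if use_upper_bound else sigmoid(z) * (1.0 - sigmoid(z))
    y_tilde = g / w - z                  # matches the definition of y_tilde in the text
    return w, y_tilde
```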
| 1. What is the focus and contribution of the paper on Hessian screening rule for lasso and logistic regression?
2. What are the strengths of the proposed approach, particularly in terms of taking advantage of high-order information?
3. What are the weaknesses of the paper, especially regarding conservation and performance comparison?
4. Do you have any concerns about the implementation and application of the Hessian screening rule in real datasets?
5. How does the Hessian screening rule compare with other recent screening rules, such as those mentioned in the review? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
A Hessian screening rule for lasso and its generalized linear model extension for logistic regression was presented to take advantage of the high-order information for more efficient screening, specifically in cases with highly correlated predictors/covariates. The proposed Hessian screening rule together with several speedup tricks has been shown to be effective in both simulated and real datasets.
Strengths And Weaknesses
The proposed Hessian screening rule extends the Strong Rule and Working Set by taking advantage of the high-order information for more efficient screening, specifically in cases with highly correlated predictors/covariates. The Hessian screening rule together with several speedup tricks has been shown to be effective in both simulated and real datasets.
Based on the current presentation, it is not clear from Section 3 why the Hessian rule can be less conservative. In addition to Theorem 3.1, some theoretical analysis of this may help further improve the quality of the submission. The actual final screening is in fact based on the modifications described from line 133 to 151. It may also be interesting to have an ablation comparison to see clearly what led to the improved efficiency.
The screening rules are not "safe", in particular for logistic regression. In addition to investigating efficiency, the authors may therefore also need to provide a performance comparison with respect to both predictor selection and model prediction accuracy.
In the real data experiments, the authors may want to provide some explanation of why the Hessian rule performs significantly worse on the arcene and rcv2 datasets, for which p is much larger than n, especially for arcene. This is clearly not the case when p is similar to n, as discussed in Section 5.
Finally, there are language problems in the submission. For example, in line 188-189 (page 5): "... this is not a surprising find." In line 257-258 (page 7), "we also stop the path whenever the number of coefficients ever to be active predictors exceeds p." The number of coefficients can be equal to p but will never exceed p. The authors may need to improve the presentation of the submission.
Questions
Are there any theoretical guarantee that the Hessian rule will be always less conservative than the Strong rule or Working Set method?
How does the corresponding prediction accuracy of the resulting models compare across the different screening rules? Are the final "screened" predictors all the same as the ones from the optimal solution obtained by model fitting without screening?
There are multiple screening rules published more recently, including https://proceedings.neurips.cc/paper/2020/hash/11348e03e23b137d55d94464250a67a2-Abstract.html; how does the Hessian screening rule compare with these newer rules?
Limitations
N/A. |
NIPS | Title
The Hessian Screening Rule
Abstract
Predictor screening rules, which discard predictors before fitting a model, have had considerable impact on the speed with which sparse regression problems, such as the lasso, can be solved. In this paper we present a new screening rule for solving the lasso path: the Hessian Screening Rule. The rule uses second-order information from the model to provide both effective screening, particularly in the case of high correlation, as well as accurate warm starts. The proposed rule outperforms all alternatives we study on simulated data sets with both low and high correlation for `1-regularized least-squares (the lasso) and logistic regression. It also performs best in general on the real data sets that we examine.
1 Introduction
High-dimensional data, where the number of features (p) exceeds the number of observations (n), poses a challenge for many classical statistical models. A common remedy for this issue is to regularize the model by penalizing the regression coefficients such that the solution becomes sparse. A popular choice of such a penalization is the `1-norm, which, when the objective is least-squares, leads to the well-known lasso [1]. More specifically, we will focus on the following convex optimization problem:
minimize β∈Rp
{ f(β;X) + λ‖β‖1 } , (1)
where f(β;X) is smooth and convex. We let β̂ be the solution vector for this problem and, abusing notation, equivalently let β̂ : R 7→ Rp be a function that returns this vector for a given λ. Our focus lies in solving (1) along a regularization path λ1, λ2 . . . , λm with λ1 ≥ λ2 ≥ · · · ≥ λm. We start the path at λmax, which corresponds to the null (all-sparse) model1, and finish at some fraction of λmax for which the model is either almost saturated (in the p ≥ n setting), or for which the solution approaches the ordinary least-squares estimate. The motivation for this focus is that the optimal λ is typically unknown and must be estimated through model tuning, such as cross-validation. This involves repeated refitting of the model to new batches of data, which is computationally demanding.
Fortunately, the introduction of so-called screening rules has improved this situation remarkably. Screening rules use tests that screen and possibly discard predictors from the model before it is fit, which effectively reduces the dimensions of the problem and leads to improvements in performance and memory usage. There are, generally speaking, two types of screening rules: safe and heuristic rules. Safe rules guarantee that discarded predictors are inactive at the optimum—heuristic rules do not and may therefore cause violations: discarding active predictors. The possibility of violations mean that heuristic methods need to validate the solution through checks of the Karush–Kuhn–Tucker (KKT) optimality conditions after optimization has concluded and, whenever there are violations, rerun optimization, which can be costly particularly because the KKT checks themselves are expensive. This means that the distinction between safe and heuristic rules only matters in regards to algorithmic
1λmax is in fact available in closed form—for the lasso it is maxj |xTj y|.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
details—all heuristic methods that we study here use KKT checks to catch these violations, which means that these methods are in fact also safe.
Screening rules can moreover also be classified as basic, sequential, or dynamic. Basic rules screen predictors based only on information available from the null model. Sequential rules use information from the previous step(s) on the regularization path to screen predictors for the next step. Finally, dynamic rules screen predictors during optimization, reducing the set of screened predictors repeatedly throughout optimization.
Notable examples of safe rules include the basic SAFE rule [2], the sphere tests [3], the R-region test [4], Slores [5], Gap Safe [6, 7], and Dynamic Sasvi [8]. There is also a group of dual polytope projection rules, most prominently Enhanced Dual Polytope Projection (EDPP) [9]. As noted by Fercoq, Gramfort, and Salmon [6], however, the sequential version of EDPP relies on exact knowledge of the optimal solution at the previous step along the path to be safe in practice, which is only available for λmax. Among the heuristic rules, we have the Strong Rule [10], SIS [11], and ExSIS [12]. But the latter two of these are not sequential rules and solve a potentially reduced form of the problem in (1)—we will not discuss them further here. In addition to these two types of rules, there has also recently been attempts to combine safe and heuristic rules into so-called hybrid rules [13].
There are various methods for employing these rules in practice. Of particular interest are so-called working set strategies, which use a subset of the screened set during optimization, iteratively updating the set based on some criterion. Tibshirani et al. [10] introduced the first working set strategy, which we in this paper will refer to simply as the working set strategy. It uses the set of predictors that have ever been active as an initial working set. After convergence on this set, it checks the KKT optimality conditions on the set of predictors selected by the strong rule, and then adds predictors that violate the conditions to the working set. This procedure is then repeated until there are no violations, at which point the optimality conditions are checked for the entire set, possibly triggering additional iterations of the procedure. Blitz [14] and Celer [15] are two other methods that use both Gap Safe screening and working sets. Instead of choosing previously active predictors as a working set, however, both Blitz and Celer assign priorities to each feature based on how close each feature is of violating the Gap Safe check and construct the working set based on this prioritization. In addition to this, Celer uses dual point acceleration to improve Gap Safe screening and speed up convergence. Both Blitz and Celer are heuristic methods.
One problem with current screening rules is that they often become conservative—including large numbers of predictors into the screened set—when dealing with predictors that are strongly correlated. Tibshirani et al. [10], for instance, demonstrated this to be the case with the strong rule, which was the motivation behind the working set strategy. (See Appendix F.4 for additional experiments verifying this). Yet because the computational complexity of the KKT checks in the working set strategy still depends on the strong rule, the effectiveness of the rule may nevertheless be hampered in this situation. A possible and—as we will soon show—powerful solution to this problem is to make use of the second-order information available from (1), and in this paper we present a novel screening rule based on this idea. Methods using second-order information (the Hessian) are often computationally infeasible for high-dimensional problems. We utilize two properties of the problem to remedy this issue: first, we need only to compute the Hessian for the active set, which is often much smaller than the full set of predictors. Second, we avoid constructing the Hessian (and it’s inverse) from scratch for each λ along the path, instead updating it sequentially by means of the Schur complement. The availability of the Hessian also enables us to improve the warm starts (the initial coefficient estimate at the start of each optimization run) used when fitting the regularization path, which plays a key role in our method.
We present our main results in Section 3, beginning with a reformulation of the strong rule and working set strategy before we arrive at the screening rule that represents the main result of this paper. In Section 4, we present numerical experiments on simulated and real data to showcase the effectiveness of the screening rule, demonstrating that the rule is effective both when p n and n p, out-performing the other alternatives that we study. Finally, in Section 5 we wrap up with a discussion on these results, indicating possible ways in which they may be extended.
2 Preliminaries
We use lower-case letters to denote scalars and vectors and upper-case letters for matrices. We use 0 and 1 to denote vectors with elements all equal to 0 or 1 respectively, with dimensions inferred from context. Furthermore, we let sign be the standard signum function with range {−1, 0, 1}, allowing it to be overloaded for vectors.
Let c(λ) := −∇βf ( β̂(λ);X ) be the negative gradient, or so-called correlation, and denote Aλ = {i : |c(λ)i| > λ} as the active set at λ: the support set of the non-zero regression coefficients corresponding to β̂(λ). In the interest of brevity, we will let A := Aλ. We will consider β a solution to (1) if it satisfies the stationary criterion
0 ∈ ∇βf(β;X) + λ∂. (2)
Here ∂ is the subdifferential of ‖β‖1, defined component-wise as
∂j ∈ {sign(β̂j)} if β̂j ≠ 0, and ∂j ∈ [−1, 1] otherwise.
This means that there must be a ∂̃ ∈ ∂ for a given λ such that
∇βf(β;X) + λ∂̃ = 0. (3)
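To make these conditions concrete, the sketch below checks (2)–(3) numerically for a candidate solution in the least-squares case: active coefficients must satisfy c(λ)j = λ sign(βj), while inactive ones must satisfy |c(λ)j| ≤ λ. This is an illustration of ours in NumPy, not code from the reference implementation, and all function and variable names are hypothetical.

```python
import numpy as np

def kkt_violations(X, y, beta, lam, tol=1e-6):
    """Indices where the lasso stationarity conditions (2)-(3) fail, up to tol."""
    c = X.T @ (y - X @ beta)              # correlation c = -grad f for least squares
    active = beta != 0
    bad_active = active & (np.abs(c - lam * np.sign(beta)) > tol)
    bad_inactive = ~active & (np.abs(c) > lam + tol)
    return np.flatnonzero(bad_active | bad_inactive)
```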
3 Main Results
In this section we derive the main result of this paper: the Hessian screening rule. First, however, we introduce a non-standard perspective on screening rules. In this approach, we note that (2) suggests a simple and general formulation for a screening rule, namely: we substitute the gradient vector in the optimality condition of an ℓ1-regularized problem with an estimate. More precisely, we discard the jth predictor for the problem at a given λ if the magnitude of the jth component of the gradient vector estimate is smaller than this λ, that is
|c̃(λ)j | < λ. (4)
In the following sections, we review the strong rule and working set method for this problem from this perspective, that is, by viewing both methods as gradient approximations. We start with the case of the standard lasso (ℓ1-regularized least-squares), where we have f(β;X) = ½‖Xβ − y‖₂².
3.1 The Strong Rule
The sequential strong rule for ℓ1-penalized least-squares regression [10] discards the jth predictor at λ = λk+1 if |xjᵀ(Xβ̂(λk) − y)| = |c(λk)j| < 2λk+1 − λk. This is equivalent to checking that
c̃S(λk+1) = c(λk) + (λk − λk+1) sign(c(λk)) (5)
satisfies (4). The strong rule gradient approximation (5) is also known as the unit bound, since it assumes the gradient of the correlation vector to be bounded by one.
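As a concrete illustration (ours, not taken from the reference implementation), the sequential strong rule check can be written in a few lines; the function below returns a mask of the predictors retained at λk+1.

```python
import numpy as np

def strong_rule_keep(c_prev, lam_prev, lam_next):
    """Predictors kept by the sequential strong rule at lam_next <= lam_prev."""
    c_tilde = c_prev + (lam_prev - lam_next) * np.sign(c_prev)  # unit-bound estimate (5)
    return np.abs(c_tilde) >= lam_next                          # discard when (4) holds
```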
3.2 The Working Set Method
A simple but remarkably effective alternative to direct use of the strong rule is the working set heuristic [10]. It begins by estimating β at the (k + 1)th step using only the coefficients that have been previously active at any point along the path, i.e. A1:k = ∪ki=1Ai. The working set method can be viewed as a gradient estimate in the sense that
c̃W (λk+1) = X T ( y −XA1:k β̃(λk+1,A1:k) ) = −∇f ( β̃(λk+1,A1:k);X ) ,
where β̃(λ,A) = arg minβ ½‖y − XAβ‖₂² + λ‖β‖₁.
3.3 The Hessian Screening Rule
We have shown that both the strong screening rule and the working set strategy can be expressed as estimates of the correlation (negative gradient) for the next step of the regularization path. As we have discussed previously, however, basing this estimate on the strong rule can lead to conservative approximations. Fortunately, it turns out that we can produce a better estimate by utilizing second-order information.
We start by noting that (3), in the case of the standard lasso, can be formulated as
\[
\begin{bmatrix} X_A^T X_A & X_A^T X_{A^c} \\ X_{A^c}^T X_A & X_{A^c}^T X_{A^c} \end{bmatrix}
\begin{bmatrix} \hat\beta_A \\ 0 \end{bmatrix}
+ \lambda
\begin{bmatrix} \operatorname{sign}(\hat\beta(\lambda)_A) \\ \partial_{A^c} \end{bmatrix}
=
\begin{bmatrix} X_A^T y \\ X_{A^c}^T y \end{bmatrix},
\]
and consequently that β̂(λ)A = (XAᵀXA)⁻¹(XAᵀy − λ sign(β̂A)). Note that, for an interval [λl, λu] in which the active set is unchanged, that is, Aλ = A for all λ ∈ [λl, λu], β̂(λ) is a continuous linear function of λ (Theorem 3.1)².

Theorem 3.1. Let β̂(λ) be the solution of (1) where f(β;X) = ½‖Xβ − y‖₂². Define
β̂^{λ*}(λ)Aλ* = β̂(λ*)Aλ* + (λ* − λ)(XAλ*ᵀXAλ*)⁻¹ sign(β̂(λ*)Aλ*)
and β̂^{λ*}(λ)Aᶜλ* = 0. If for all λ ∈ [λ0, λ*] it holds that (i) sign(β̂^{λ*}(λ)) = sign(β̂(λ*)) and (ii) max |∇f(β̂^{λ*}(λ))Aᶜλ*| < λ, then β̂(λ) = β̂^{λ*}(λ) for λ ∈ [λ0, λ*].
See Appendix A for a full proof. Using Theorem 3.1, we have the following second-order approximation of c(λk+1):
ĉH(λk+1) = −∇f(β̂^{λk}(λk+1)) = c(λk) + (λk+1 − λk) XᵀXAk(XAkᵀXAk)⁻¹ sign(β̂(λk)Ak). (6)

Remark 3.2. If no changes in the active set occur in [λk+1, λk], (6) is in fact an exact expression for the correlation at the next step, that is, ĉH(λk+1) = c(λk+1).
One problem with using the gradient estimate in (6) is that it is expensive to compute due to the inner products involving the full design matrix. To deal with this, we use the following modification, in which we restrict the computation of these inner products to the set indexed by the strong rule, assuming that predictors outside this set remain inactive:
c̃H(λk+1)j :=
  λk+1 sign(β̂(λk)j)   if j ∈ Aλk,
  0                    if |c̃S(λk+1)j| < λk+1 and j ∉ Aλk,
  ĉH(λk+1)j            otherwise.
For high-dimensional problems, this modification leads to large computational gains and seldom proves inaccurate, given that the strong rule only rarely causes violations [10]. Lastly, we make one more adjustment to the rule, which is to add a proportion of the unit bound (used in the strong rule) to the gradient estimate:
čH(λk+1)j := c̃H(λk+1)j + γ(λk − λk+1) sign(c(λk)j),
where γ ∈ R+. Without this adjustment there would be no upwards bias on the estimate, which would cause more violations than would be desirable. In our experiments, we have used γ = 0.01, which has worked well for most problems we have encountered. This finally leads us to the Hessian screening rule: discard the jth predictor at λk+1 if |čH(λk+1)j | < λk+1. We make one more modification in our implementation of the Hessian Screening Rule, which is to use the union of the ever-active predictors and those screened by the screening rule as our final set of screened predictors. We note that this is a marginal improvement to the rule, since violations of the rule are already quite infrequent. But it is included nonetheless, given that it comes at no cost and occasionally prevents violations.
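The sketch below summarizes how the rule can be evaluated for the standard lasso. It is an illustration only: the names are ours, the Hessian system is solved from scratch rather than updated sequentially (see Section 3.3.1), and we assume XAkᵀXAk is invertible.

```python
import numpy as np

def hessian_rule_keep(X, c_prev, beta_prev, strong_keep, lam_prev, lam_next, gamma=0.01):
    """Predictors kept by the Hessian screening rule at lam_next (standard lasso)."""
    A = np.flatnonzero(beta_prev)                              # active set at lam_prev
    XA = X[:, A]
    d = np.linalg.solve(XA.T @ XA, np.sign(beta_prev[A]))      # H^{-1} sign(beta_A)
    c_est = np.zeros(X.shape[1])
    c_est[A] = lam_next * np.sign(beta_prev[A])
    rest = strong_keep & (beta_prev == 0)                      # survivors of the strong rule
    c_est[rest] = c_prev[rest] + (lam_next - lam_prev) * (X[:, rest].T @ (XA @ d))
    c_est += gamma * (lam_prev - lam_next) * np.sign(c_prev)   # upward bias on the estimate
    keep = np.abs(c_est) >= lam_next
    keep[A] = True                                             # union with the active predictors
    return keep
```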
2This result is not a new discovery [16], but is included here for convenience because the following results depend on it.
As an example of how the Hessian Screening Rule performs, we examine the screening performance of several different strategies. We fit a full regularization path to a design with n = 200, p = 20 000, and pairwise correlation between predictors of ρ. (See Section 4 and Appendix F.4 for more information on the setup.) We compute the average number of screened predictors across iterations of the coordinate descent solver. The results are displayed in Figure 1 and demonstrate that our method gracefully handles high correlation among predictors, offering a screened set that is many times smaller than those produced by the other screening strategies. In Appendix F.4 we extend these results to ℓ1-regularized logistic regression as well and report the frequency of violations.
Recall that the Strong rule bounds the gradient of its correlation vector estimate by one. For the Hessian rule, there is no such bound. This means that it is theoretically possible for the Hessian rule to include more predictors than the Strong rule³. In fact, it is even possible to design special cases where the Hessian rule could be more conservative than the Strong rule. In practice, however, we have not encountered any situation in which this is the case.
3.3.1 Updating the Hessian
A potential drawback to using the Hessian screening rule is the computational cost of computing the Hessian and its inverse. Let Ak be the active set at step k on the lasso path. In order to use the Hessian screening rule we need Hk⁻¹ = (XAkᵀXAk)⁻¹. Computing (XAkᵀXAk)⁻¹ directly, however, has numerical complexity O(|Ak|³ + |Ak|²n). But if we have stored the pair (Hk−1, Hk−1⁻¹) from the previous step, we can utilize it to compute (Hk, Hk⁻¹) more efficiently via the so-called sweep operator [17]. We outline this technique in Algorithm 1 (Appendix B). The algorithm has a reduction step and an augmentation step; in the reduction step, we reduce the Hessian and its inverse to remove the presence of any predictors that are no longer active. In the augmentation step, we update the Hessian and its inverse to account for predictors that have just become active.
The complexity of the steps depends on the size of the sets C = Ak−1 \ Ak, D = Ak \ Ak−1, and E = Ak ∩ Ak−1. The complexity of the reduction step is O(|C|³ + |C|²|E| + |C||E|²) and the complexity of the augmentation step is O(|D|²n + n|D||E| + |D|²|E| + |D|³) since n ≥ max(|E|, |D|). An iteration of Algorithm 1 therefore has complexity O(|D|²n + n|D||E| + |C|³ + |C||E|²). In most applications, the computationally dominant term will be n|D||E| (since, typically, n > |E| > |D| > |C|), which can be compared to the cost of evaluating the gradient for βAk, which is n(|D| + |E|) when βAkᶜ = 0. Note that we have so far assumed that the inverse of the Hessian exists, but this need not be the case. To deal with this issue we precondition the Hessian. See Appendix C for details.
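To give a sense of the augmentation step, the sketch below (our own illustration, with hypothetical names) extends the inverse when new predictors D enter the active set, using the standard block-inverse identity based on the Schur complement. It is not the sweep-operator implementation of Algorithm 1, and it assumes the Schur complement is invertible.

```python
import numpy as np

def augment_inverse(X_E, X_D, H_inv):
    """Given H_inv = (X_E^T X_E)^{-1}, return the inverse of the Hessian for [X_E, X_D]."""
    B = X_E.T @ X_D                                      # cross products, O(n |D| |E|)
    HinvB = H_inv @ B
    S_inv = np.linalg.inv(X_D.T @ X_D - B.T @ HinvB)     # inverse of the Schur complement
    top_left = H_inv + HinvB @ S_inv @ HinvB.T
    top_right = -HinvB @ S_inv
    return np.block([[top_left, top_right], [top_right.T, S_inv]])
```

The reduction step is the mirror image: rows and columns of the inverse are dropped and the remaining block is corrected with the corresponding Schur complement.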
3.3.2 Warm Starts
The availability of the Hessian and its inverse offers a coefficient warm start that is more accurate than the standard, naive, approach of using the estimate from the previous step. With the Hessian screening rule, we use the following warm start.
β̂(λk+1)Ak := β̂(λk)Ak + (λk − λk+1)H −1 Ak sign ( β̂(λk)Ak ) , (7)
3The chance of this happening is tied to the setting of γ.
where HAk⁻¹ is the inverse of the Hessian matrix for the differentiable part of the objective. Our warm start is equivalent to the one used in Park and Hastie [18], but is here made much more efficient due to the efficient updates of the Hessian and its inverse that we use. Remark 3.3. The warm start given by (7) is the exact solution at λk+1 if the active set remains constant in [λk+1, λk].
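In code, the warm start is a single matrix–vector product; the small sketch below (ours, with hypothetical names) assumes the coefficients are passed restricted to the current active set.

```python
import numpy as np

def hessian_warm_start(beta_A, H_inv, lam_prev, lam_next):
    """Warm start (7) for the coefficients on the current active set."""
    return beta_A + (lam_prev - lam_next) * (H_inv @ np.sign(beta_A))
```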
As a first demonstration of the value of this warm start, we look at two data sets: YearPredictionMSD and colon-cancer. We fit a full regularization path using the setup as outlined in Section 4, with or without Hessian warm starts. For YearPredictionMSD we use the standard lasso, and for colon-cancer ℓ1-regularized logistic regression.
The Hessian warm starts offer sizable reductions in the number of passes of the solver (Figure 2), for many steps requiring only a single pass to reach convergence. On inspection, this is not a surprising find. There are no changes in the active set for many of these steps, which means that the warm start is almost exact—“almost” due to the use of a preconditioner for the Hessian (see Appendix C).
3.3.3 General Loss Functions
We now venture beyond the standard lasso and consider loss functions of the form
f(β;X) = ∑_{i=1}^{n} fi(xiᵀβ) (8)
where fi is convex and twice differentiable. This, for instance, includes logistic, multinomial, and Poisson loss functions. For the strong rule and working set strategy, this extension does not make much of a difference. With the Hessian screening rule, however, the situation is different.
To see this, we start by noting that our method involving the Hessian is really a quadratic Taylor approximation of (1) around a specific point β0. For loss functions of the type (8), this approximation is equal to
Q(β, β0) = f(β0;X) + ∑_{i=1}^{n} ( fi′(xiᵀβ0) xiᵀ(β − β0) + ½ fi′′(xiᵀβ0) (xiᵀ(β − β0))² )
         = ½ (ỹ(Xβ0) − Xβ)ᵀ D(w(β0)) (ỹ(Xβ0) − Xβ) + C(β0),
where D(w(β0)) is a diagonal matrix with diagonal entries w(β0), with w(β0)i = fi′′(xiᵀβ0) and ỹ(z)i = zi − fi′(zi)/fi′′(zi), whilst C(β0) is a constant with respect to β.
Suppose that we are on the lasso path at λk and want to approximate c(λk+1). In this case, we simply replace f(β;X) in (1) with Q(β, β̂(λk)), which leads to the following gradient approximation:
cH(λk+1) = c(λk) + (λk+1 − λk)XTD(w)XAk(XTAkD(w)XAk) −1 sign ( β̂(λk)Ak ) ,
where w = w ( β̂(λk) ) . Unfortunately, we cannot use Algorithm 1 to update XTAkD(w)XAk . This means that we are forced to either update the Hessian directly at each step, which can be computationally demanding when |Ak| is large and inefficient when X is very sparse, or to approximate
D(w) with an upper bound. In logistic regression, for instance, we can use 1/4 as such a bound, which also means that we once again can use Algorithm 1.
In our experiments, we have employed the following heuristic to decide whether to use an upper bound or compute the full Hessian in these cases: we use full updates at each step if sparsity(X) · n / max{n, p} < 10⁻³ and the upper bound otherwise.
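As an illustration of the two options, the sketch below (our own, with hypothetical names) computes the gradient estimate for ℓ1-regularized logistic regression using either the exact weights fi′′(xiᵀβ) = pi(1 − pi), where pi are the fitted probabilities, or the 1/4 upper bound.

```python
import numpy as np

def logistic_hessian_estimate(X, beta_prev, c_prev, lam_prev, lam_next, exact=True):
    """Correlation estimate at lam_next for l1-regularized logistic regression."""
    A = np.flatnonzero(beta_prev)
    p = 1.0 / (1.0 + np.exp(-X @ beta_prev))                   # fitted probabilities
    w = p * (1.0 - p) if exact else np.full(X.shape[0], 0.25)  # f'' or its upper bound
    XA_w = X[:, A] * w[:, None]                                # D(w) X_A
    d = np.linalg.solve(X[:, A].T @ XA_w, np.sign(beta_prev[A]))
    return c_prev + (lam_next - lam_prev) * (X.T @ (XA_w @ d))
```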
3.3.4 Reducing the Impact of KKT Checks
The Hessian Screening Rule is heuristic, which means there may be violations. This necessitates that we verify the KKT conditions after having reached convergence for the screened set of predictors, and add predictors back into the working set for which these checks fail. When the screened set is small relative to p, the cost of optimization is often in large part consumed by these checks. Running these checks for the full set of predictors always needs to be done once, but if there are violations during this step, then we need to repeat this check, which is best avoided. Here we describe two methods to tackle this issue.
We employ a procedure equivalent to the one used in Tibshirani et al. [10] for the working set strategy: we first check the KKT conditions for the set of predictors singled out by the strong rule and then, if there are no violations in that set, check the full set of predictors for violations. This works well because the strong rule is conservative—violations are rare—which means that we seldom need to run the KKT checks for the entire set more than once.
If we, in spite of the augmentation of the rule, run into violations when checking the full set of predictors, that is, when the strong rule fails to capture the active set, then we can still avoid repeating the full KKT check by relying on Gap Safe screening: after having run the KKT checks and failed to converge, we screen the set of predictors using the Gap Safe rule. Because this is a safe rule, we can be sure that the predictors we discard will be inactive, which means that we will not need to include them in our upcoming KKT checks. Because Gap Safe screening and the KKT checks rely on exactly the same quantity—the correlation vector—we can do so at marginal extra cost. To see how this works, we now briefly introduce Gap Safe screening. For details, please see Fercoq, Gramfort, and Salmon [6].
For the ordinary lasso (ℓ1-regularized least squares), the primal (1) is P(β) = ½‖y − Xβ‖₂² + λ‖β‖₁ and the corresponding dual is
D(θ) = ½‖y‖₂² − (λ²/2)‖θ − y/λ‖₂², (9)
subject to ‖XT θ‖∞ ≤ 1. The duality gap is then G(β, θ) = P (β)−D(θ) and the relation between the primal and dual problems is given by y = λθ̂+Xβ̂, where θ̂ is the maximizer to the dual problem (9). In order to use Gap Safe screening, we need a feasible dual point, which can be obtained via dual point scaling, taking θ = (y−Xβ) / max ( λ, ‖XT (y−Xβ)‖∞ ) . The Gap Safe screening rule then
discards the jth feature if |xjᵀθ| < 1 − ‖xj‖₂ √(2G(β, θ)/λ²). Since we have computed Xᵀ(y − Xβ) as part of the KKT checks, we can perform Gap Safe screening at an additional (and marginal) cost amounting to O(n) + O(p).
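Putting the pieces together, a minimal sketch (ours, with hypothetical names) of the dual point scaling and the Gap Safe test for the lasso looks as follows; the correlation vector can be reused from the KKT check.

```python
import numpy as np

def gap_safe_keep(X, y, beta, lam):
    """Predictors that survive Gap Safe screening at lam (lasso)."""
    resid = y - X @ beta
    corr = X.T @ resid                               # shared with the KKT check
    scale = max(lam, np.max(np.abs(corr)))
    theta = resid / scale                            # dual-feasible point
    primal = 0.5 * resid @ resid + lam * np.sum(np.abs(beta))
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)                    # guard against round-off
    radius = np.sqrt(2.0 * gap) / lam
    return np.abs(corr) / scale >= 1.0 - np.linalg.norm(X, axis=0) * radius
```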
Since this augmentation benefits the working set strategy too, we adopt it in our implementation of this method as well. To avoid ambiguity, we call this version working+. Note that this makes the working set strategy quite similar to Blitz. In Appendix F.8 we show the benefit of adding this type of screening.
3.3.5 Final Algorithm
The Hessian screening method is presented in full in Algorithm 2 (Appendix B).
Lemma 3.4. Let β ∈ Rp×m be the output of Algorithm 2 for a path of length m and convergence threshold ε > 0. For each step k along the path and corresponding solution β(k) ∈ Rp, there is a dual-feasible point θ(k) such that G(β(k), θ(k)) < ζε.
Proof. First note that Gap Safe screening [7, Theorem 6] ensures that G ⊇ Ak. Next, note that the algorithm guarantees that the working set, W, grows with each iteration until |xjᵀr| < λk for all j ∈ G \ W, at which point
max(λk, ‖XWᵀ(y − XWβ(k)W)‖∞) = max(λk, ‖XGᵀ(y − XGβ(k)G)‖∞).
At this iteration, convergence at line 2, for the subproblem (XW, y), guarantees convergence for the full problem, (X, y), since
θ(k) = (y − XWβ(k)W) / max(λk, ‖XWᵀ(y − XWβ(k)W)‖∞)
is dual-feasible for the full problem.
3.3.6 Extensions
Approximate Homotopy In addition to improved screening and warm starts, the Hessian also allows us to construct the regularization path adaptively via approximate homotopy [19]. In brief, the Hessian screening rule allows us to choose the next λ along the path adaptively, in effect distributing the grid of λs to better approach the exact (homotopy) solution for the lasso, avoiding the otherwise heuristic choice, which can be inappropriate for some data sets.
Elastic Net Our method can be extended to the elastic net [20], which corresponds to adding a quadratic penalty φ‖β‖₂²/2 to (1). The Hessian now takes the form XAᵀXA + φI. Loosely speaking, the addition of this term makes the problem "more" quadratic, which in turn improves both the accuracy and stability of the screening and warm starts we use in our method. As far as we know, however, there is unfortunately no way to update the inverse of the Hessian efficiently in the case of the elastic net. More research in this area would be welcome.
4 Experiments
Throughout the following experiments, we center and scale predictors with the mean and uncorrected sample standard deviation, respectively. For the lasso, we also center the response vector, y, with the mean.
To construct the regularization path, we adopt the default settings from glmnet: we use a log-spaced path of 100 λ values from λmax to ξλmax, where ξ = 10⁻² if p > n and 10⁻⁴ otherwise. We stop the path whenever the deviance ratio, 1 − dev/devnull, reaches 0.999 or the fractional decrease in deviance is less than 10⁻⁵. Finally, we also stop the path whenever the number of predictors that have ever been active exceeds p.
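For reference, the path construction amounts to the short sketch below (ours, with hypothetical names; the deviance-based stopping criteria are applied while fitting and are omitted here). For the lasso with centered data, λmax is max_j |xjᵀy|.

```python
import numpy as np

def lambda_path(X, y, m=100):
    n, p = X.shape
    lam_max = np.max(np.abs(X.T @ y))      # smallest lambda giving the all-zero solution
    xi = 1e-2 if p > n else 1e-4
    return np.geomspace(lam_max, xi * lam_max, num=m)   # log-spaced path
```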
We compare our method against working+ (the modified version of the working set strategy from Tibshirani et al. [10]), Celer [15], and Blitz [14]. We initially also ran our comparisons against EDPP [9], the Gap Safe rule [6], and Dynamic Sasvi [8], yet these methods performed so poorly that we omit the results in the main part of this work. The interested reader may nevertheless consult Appendix F.6, where results from simulated data have been included for these methods too.
We use cyclical coordinate descent with shuffling and consider the model to converge when the duality gap G(β, θ) ≤ εζ, where we take ζ to be ‖y‖₂² when fitting the ordinary lasso, and n log 2 when fitting ℓ1-regularized logistic regression. Unless specified, we let ε = 10⁻⁴. These settings are standard settings and, for instance, resemble the defaults used in Celer. For all of the experiments, we employ the line search algorithm used in Blitz⁴.
The code used in these experiments was, for every method, programmed in C++ using the Armadillo library [21, 22] and organized as an R package via Rcpp [23]. We used the renv package [24] to maintain dependencies. The source code, including a Singularity [25] container and its recipe for reproducing the results, are available at https://github.com/jolars/HessianScreening. Additional details of the computational setup are provided in Appendix D.
4Without the line search, all of the tested methods ran into convergence issues, particularly for the highcorrelation setting and logistic regression.
4.1 Simulated Data
Let X ∈ Rn×p, β ∈ Rp, and y ∈ Rn be the predictor matrix, coefficient vector, and response vector respectively. We draw the rows of the predictor matrix independently and identically distributed from N (0,Σ) and generate the response from N (Xβ, σ2I) with σ2 = βTΣβ/SNR, where SNR is the signal-to-noise ratio. We set s coefficients, equally spaced throughout the coefficient vector, to 1 and the rest to zero.
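The generating process can be sketched as below (our own code, with hypothetical names). We assume the equicorrelated covariance Σ = (1 − ρ)I + ρ11ᵀ used in the correlation experiments, which allows sampling X without forming Σ explicitly.

```python
import numpy as np

def simulate(n, p, s, rho, snr, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, 1))
    E = rng.standard_normal((n, p))
    X = np.sqrt(rho) * Z + np.sqrt(1.0 - rho) * E    # rows ~ N(0, (1 - rho) I + rho 11^T)
    beta = np.zeros(p)
    beta[np.linspace(0, p - 1, s, dtype=int)] = 1.0  # s unit coefficients, equally spaced
    sigma2 = ((1.0 - rho) * np.sum(beta**2) + rho * np.sum(beta)**2) / snr  # beta' Sigma beta / SNR
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    return X, beta, y
```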
In our simulations, we consider two scenarios: a low-dimensional scenario and a high-dimensional scenario. In the former, we set n = 10 000, p = 100, s = 5, and the SNR to 1. In the high-dimensional scenario, we take n = 400, p = 40 000, s = 20, and set the SNR to 2. These SNR values are inspired by the discussion in Hastie, Tibshirani, and Tibshirani [26] and intend to cover the middle-ground in terms of signal strength. We run our simulations for 20 iterations.
From Figure 3, it is clear that the Hessian screening rule performs best, taking the least time in every setting examined. The difference is largest for the high-correlation context in the low-dimensional setting and otherwise roughly the same across levels of correlation.
The differences between the other methods are on average small, with the working+ strategy performing slightly better in the p > n scenario. Celer and Blitz perform largely on par with one another, although Celer sees an improvement in a few of the experiments, for instance in logistic regression when p > n.
4.2 Real Data
In this section, we conduct experiments on real data sets. We run 20 iterations for the smaller data sets studied and three for the larger ones. For information on the sources of these data sets, please see Appendix E. For more detailed results of these experiments, please see Appendix F.5.
Starting with the case of ℓ1-regularized least-squares regression, we observe that the Hessian screening rule performs best for all five data sets tested here (Table 1), in all but one instance taking less than half the time compared to the runner-up, which in each case is the working+ strategy. The difference is particularly large for the YearPredictionMSD and e2006-tfidf data sets.
In the case of ℓ1-regularized logistic regression, the Hessian method again performs best for most of the examined data sets, for instance completing the regularization path for the madelon data set around five times faster than the working+ strategy. The exception is the arcene data set, for which the working+ strategy performs best out of the four methods.
We have provided additional results related to the effectiveness of our method in Appendix F.
5 Discussion
We have presented the Hessian Screening Rule: a new heuristic predictor screening rule for ℓ1-regularized generalized linear models. We have shown that our screening rule offers large performance improvements over competing methods, not only in simulated experiments but also in the majority of the real data sets that we study here. The improved performance of the rule appears to come not only from improved effectiveness in screening, particularly in the high-correlation setting, but also from the much-improved warm starts, which enable our method to dominate in the n ≫ p setting. Note that although we have focused on ℓ1-regularized least-squares and logistic regression here, our rule is applicable to any composite objective for which the differentiable part is twice-differentiable.
One limitation of our method is that it consumes more memory than its competitors owing to the storage of the Hessian and its inverse. This cost may become prohibitive for cases when min{n, p} is large. In these situations the next-best choice may instead be the working set strategy. Note also that we, in this paper, focus entirely on the lasso path. The Hessian Screening Rule is a sequential rule and may therefore not prove optimal when solving for a single λ, in which case a dynamic strategy such as Celer and Blitz likely performs better.
With respect to the relative performance of the working set strategy, Celer, and Blitz, we note that our results deviate somewhat from previous comparisons [15, 14]. We speculate that these differences might arise from the fact that we have used equivalent implementations for all of the methods and from the modification that we have used for the working set strategy.
Many avenues remain to be explored in the context of Hessian-based screening rules and algorithms, such as developing more efficient methods for updating the Hessian matrix for non-least-squares objectives (such as logistic regression) and using second-order information to further improve the optimization method itself. Other interesting directions also include adapting the rules to more complicated regularization problems, such as the fused lasso [27], SLOPE [28], SCAD [29], and MCP [30]. Although the latter two of these are non-convex problems, they are locally convex for intervals of the regularization path [31], which enables the use of our method. Adapting the method for use in batch stochastic gradient descent would also be an interesting topic for further study, for instance by using methods such as the ones outlined in Asar et al. [32] to ensure that the Hessian remains positive definite.
Finally, we do not expect there to be any negative societal consequences of our work given that it is aimed solely at improving the performance of an optimization method.
Acknowledgments and Disclosure of Funding
We would like to thank Małgorzata Bogdan for valuable comments. This work was funded by the Swedish Research Council through grant agreement no. 2020-05081 and no. 2018-01726. The computations were enabled by resources provided by LUNARC. The results shown here are in part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga. | 1. What is the focus and contribution of the paper regarding predictor screening rules?
2. What are the strengths of the proposed approach, particularly in utilizing second-order information?
3. What are the weaknesses of the paper, especially concerning the study method and the significance of the work?
4. Do you have any concerns or questions regarding the proposed Hessian screening rule?
5. Are there any limitations to the approach, such as its applicability only to the lasso optimization problem? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies the predictor screening rules over the lasso optimization problem. It proposes a Hessian screening rule which utilizes the second-order information. This rule is effective not only in screening but also in accurate warm starts. Updating the second-order information has high computational complexity, and to deal with it, this work relies on the sweep operator. In the experiments, the proposed rule is compared to many baselines and outperforms them significantly on both simulated and real-world data.
Strengths And Weaknesses
Originality
Previous works on screening rules overlooked the use of second-order information. Thus I think the direction of this work is novel. The proposed rule takes the warm start and the Hessian matrix computation problems into account, and resolves them soundly. Besides, insightful discussions on the proposed screening rule are provided. They are helpful to understand the rule and to differentiate this approach from previous methods. Therefore, I think the contributions of the work are novel.
Quality
I am satisfied with most of the contents of this paper. The paper presents a clear overview of the question and previous rules both in words and math. The proposed rule is based on the Hessian matrix and is actually in the form of a second-order Taylor approximation. Speeding up the Hessian matrix computation is based on the sweep operator and the warm starts also benefit from the Hessian matrix. These arguments are demonstrated in the experiments. As far as I checked, the theoretical analyses have no problem.
However, I have a concern with the study method. Normally, people select the predictors for better fitting accuracy, but this paper relies heavily on the time cost and the minimum number of active predictors to measure the performance. I am curious why fitting accuracy is not included.
Clarity
The paper is clearly written for me. I can easily follow the contents on the approach and the experiment.
Significance
The idea of the approach is sound and practical performance is better than baselines. However, because only the lasso problem is involved, I am afraid that the audience will be very limited. Therefore, I think the significance of this paper is limited.
Questions
See the concerns in the above section.
Limitations
Yes |
Notable examples of safe rules include the basic SAFE rule [2], the sphere tests [3], the R-region test [4], Slores [5], Gap Safe [6, 7], and Dynamic Sasvi [8]. There is also a group of dual polytope projection rules, most prominently Enhanced Dual Polytope Projection (EDPP) [9]. As noted by Fercoq, Gramfort, and Salmon [6], however, the sequential version of EDPP relies on exact knowledge of the optimal solution at the previous step along the path to be safe in practice, which is only available for λmax. Among the heuristic rules, we have the Strong Rule [10], SIS [11], and ExSIS [12]. But the latter two of these are not sequential rules and solve a potentially reduced form of the problem in (1)—we will not discuss them further here. In addition to these two types of rules, there has also recently been attempts to combine safe and heuristic rules into so-called hybrid rules [13].
There are various methods for employing these rules in practice. Of particular interest are so-called working set strategies, which use a subset of the screened set during optimization, iteratively updating the set based on some criterion. Tibshirani et al. [10] introduced the first working set strategy, which we in this paper will refer to simply as the working set strategy. It uses the set of predictors that have ever been active as an initial working set. After convergence on this set, it checks the KKT optimality conditions on the set of predictors selected by the strong rule, and then adds predictors that violate the conditions to the working set. This procedure is then repeated until there are no violations, at which point the optimality conditions are checked for the entire set, possibly triggering additional iterations of the procedure. Blitz [14] and Celer [15] are two other methods that use both Gap Safe screening and working sets. Instead of choosing previously active predictors as a working set, however, both Blitz and Celer assign priorities to each feature based on how close each feature is of violating the Gap Safe check and construct the working set based on this prioritization. In addition to this, Celer uses dual point acceleration to improve Gap Safe screening and speed up convergence. Both Blitz and Celer are heuristic methods.
One problem with current screening rules is that they often become conservative—including large numbers of predictors into the screened set—when dealing with predictors that are strongly correlated. Tibshirani et al. [10], for instance, demonstrated this to be the case with the strong rule, which was the motivation behind the working set strategy. (See Appendix F.4 for additional experiments verifying this). Yet because the computational complexity of the KKT checks in the working set strategy still depends on the strong rule, the effectiveness of the rule may nevertheless be hampered in this situation. A possible and—as we will soon show—powerful solution to this problem is to make use of the second-order information available from (1), and in this paper we present a novel screening rule based on this idea. Methods using second-order information (the Hessian) are often computationally infeasible for high-dimensional problems. We utilize two properties of the problem to remedy this issue: first, we need only to compute the Hessian for the active set, which is often much smaller than the full set of predictors. Second, we avoid constructing the Hessian (and it’s inverse) from scratch for each λ along the path, instead updating it sequentially by means of the Schur complement. The availability of the Hessian also enables us to improve the warm starts (the initial coefficient estimate at the start of each optimization run) used when fitting the regularization path, which plays a key role in our method.
We present our main results in Section 3, beginning with a reformulation of the strong rule and working set strategy before we arrive at the screening rule that represents the main result of this paper. In Section 4, we present numerical experiments on simulated and real data to showcase the effectiveness of the screening rule, demonstrating that the rule is effective both when p n and n p, out-performing the other alternatives that we study. Finally, in Section 5 we wrap up with a discussion on these results, indicating possible ways in which they may be extended.
2 Preliminaries
We use lower-case letters to denote scalars and vectors and upper-case letters for matrices. We use 0 and 1 to denote vectors with elements all equal to 0 or 1 respectively, with dimensions inferred from context. Furthermore, we let sign be the standard signum function with domain {−1, 0, 1}, allowing it to be overloaded for vectors.
Let c(λ) := −∇βf ( β̂(λ);X ) be the negative gradient, or so-called correlation, and denote Aλ = {i : |c(λ)i| > λ} as the active set at λ: the support set of the non-zero regression coefficients corresponding to β̂(λ). In the interest of brevity, we will let A := Aλ. We will consider β a solution to (1) if it satisfies the stationary criterion
0 ∈ ∇βf(β;X) + λ∂. (2)
Here ∂ is the subdifferential of ‖β‖1, defined as
∂j ∈ { {sign(β̂j)} if β̂j 6= 0, [−1, 1] otherwise.
This means that there must be a ∂̃ ∈ ∂ for a given λ such that
∇βf(β;X) + λ∂̃ = 0. (3)
3 Main Results
In this section we derive the main result of this paper: the Hessian screening rule. First, however, we now introduce a non-standard perspective on screening rules. In this approach, we note that (2) suggests a simple and general formulation for a screening rule, namely: we substitute the gradient vector in the optimality condition of a `1-regularized problem with an estimate. More precisely, we discard the jth predictor for the problem at a given λ if the magnitude of the jth component of the gradient vector estimate is smaller than this λ, that is
|c̃(λ)j | < λ. (4)
In the following sections, we review the strong rule and working set method for this problem from this perspective, that is, by viewing both methods as gradient approximations. We start with the case of the standard lasso (`1-regularized least-squares), where we have f(β;X) = 12‖Xβ − y‖ 2 2.
3.1 The Strong Rule
The sequential strong rule for `1-penalized least-squares regression [10] discards the jth predictor at λ = λk+1 if ∣∣xTj (Xβ̂(λk)− y)∣∣ = |c(λk)j | < 2λk+1 − λk. This is equivalent to checking that
c̃S(λk+1) = c(λk) + (λk − λk+1) sign(c(λk)) (5)
satisfies (4). The strong rule gradient approximation (5) is also known as the unit bound, since it assumes the gradient of the correlation vector to be bounded by one.
3.2 The Working Set Method
A simple but remarkably effective alternative to direct use of the strong rule is the working set heuristic [10]. It begins by estimating β at the (k + 1)th step using only the coefficients that have been previously active at any point along the path, i.e. A1:k = ∪ki=1Ai. The working set method can be viewed as a gradient estimate in the sense that
c̃W (λk+1) = X T ( y −XA1:k β̃(λk+1,A1:k) ) = −∇f ( β̃(λk+1,A1:k);X ) ,
where β̃(λ,A) = arg minβ 12 ||y −XAβ|| 2 + λ|β|.
3.3 The Hessian Screening Rule
We have shown that both the strong screening rule and the working set strategy can be expressed as estimates of the correlation (negative gradient) for the next step of the regularization path. As we have discussed previously, however, basing this estimate on the strong rule can lead to conservative approximations. Fortunately, it turns out that we can produce a better estimate by utilizing secondorder information.
We start by noting that (3), in the case of the standard lasso, can be formulated as[ XTAXA X T AXAc
XTAcXA X T AcXAc
] [ β̂A 0 ] + λ [ sign(β̂(λ)A)
∂Ac
] = [ XTAy XTAcy ] ,
and consequently that β̂(λ)A = (X T AXA) −1(XTAy − λ sign (β̂A)). Note that, for an interval [λl, λu] in which the active set is unchanged, that is, Aλ = A for all λ ∈ [λu, λk], then β̂(λ) is a continuous linear function in λ (Theorem 3.1)2. Theorem 3.1. Let β̂(λ) be the solution of (1) where f(β;X) = 12‖Xβ − y‖ 2 2. Define
β̂λ ∗ (λ)Aλ∗ = β̂(λ ∗)Aλ∗ − (λ ∗ − λ) ( XTAλ∗XAλ∗ )−1 sign ( β̂(λ∗)Aλ∗ ) and β̂λ ∗ (λ)Ac
λ∗ = 0. If it for λ ∈ [λ0, λ∗] holds that (i) sign
( β̂λ ∗ (λ) ) = sign ( β̂(λ∗) ) and (ii)
max |∇f(β̂λ∗(λ))Aλ∗ | < λ, then β̂(λ) = β̂λ ∗ (λ) for λ ∈ [λ0, λ∗].
See Appendix A for a full proof. Using Theorem 3.1, we have the following second-order approximation of c(λk+1):
ĉH(λk+1) = −∇f ( β̂λk(λk+1)Aλk ) = c(λk)+(λk+1−λk)XTXAk(XTAkXAk) −1 sign ( β̂(λk)Ak ) . (6) Remark 3.2. If no changes in the active set occur in [λk+1, λk], (6) is in fact an exact expression for the correlation at the next step, that is, ĉH(λk+1) = c(λk+1).
One problem with using the gradient estimate in (6) is that it is expensive to compute due to the inner products involving the full design matrix. To deal with this, we use the following modification, in which we restrict the computation of these inner products to the set indexed by the strong rule, assuming that predictors outside this set remain inactive:
c̃H(λk+1)j := λk+1 sign β̂(λk)j if j ∈ Aλk , 0 if |c̃S(λk+1)j | < λk+1 and j /∈ Aλk , ĉH(λk+1)j else.
For high-dimensional problems, this modification leads to large computational gains and seldom proves inaccurate, given that the strong rule only rarely causes violations [10]. Lastly, we make one more adjustment to the rule, which is to add a proportion of the unit bound (used in the strong rule) to the gradient estimate:
čH(λk+1)j := c̃ H(λk+1)j + γ(λk+1 − λk) sign(c(λk)j),
where γ ∈ R+. Without this adjustment there would be no upwards bias on the estimate, which would cause more violations than would be desirable. In our experiments, we have used γ = 0.01, which has worked well for most problems we have encountered. This finally leads us to the Hessian screening rule: discard the jth predictor at λk+1 if |čH(λk+1)j | < λk+1. We make one more modification in our implementation of the Hessian Screening Rule, which is to use the union of the ever-active predictors and those screened by the screening rule as our final set of screened predictors. We note that this is a marginal improvement to the rule, since violations of the rule are already quite infrequent. But it is included nonetheless, given that it comes at no cost and occasionally prevents violations.
2This result is not a new discovery [16], but is included here for convenience because the following results depend on it.
As an example of how the Hessian Screening Rule performs, we examine the screening performance of several different strategies. We fit a full regularization path to a design with n = 200, p = 20 000, and pairwise correlation between predictors of ρ. (See Section 4 and Appendix F.4 for more information on the setup.) We compute the average number of screened predictors across iterations of the coordinate descent solver. The results are displayed in Figure 1 and demonstrate that our method gracefully handles high correlation among predictors, offering a screened set that is many times smaller than those produced by the other screening strategies. In Appendix F.4 we extend these results to `1-regularized logistic regression as well and report the frequency of violations.
Recall that the Strong rule bounds its gradient of the correlation vector estimate at one. For the Hessian rule, there is no such bound. This means that it is theoretically possible for the Hessian rule to include more predictors than the Strong rule3. In fact, it is even possible to design special cases where the Hessian rule could be more conservative than the Strong rule. In practice, however, we have not encountered any situation in which this is the case.
3.3.1 Updating the Hessian
A potential drawback to using the Hessian screening rule is the computational costs of computing the Hessian and its inverse. LetAk be the active set at step k on the lasso path. In order to use the Hessian screening rule we need H−1k = (X T AkXAk) −1. Computing (XTAkXAk) −1 directly, however, has numerical complexity O(|Ak|3 + |Ak|2n). But if we have stored (H−1k−1, Hk−1) previously, we can utilize it to compute (H−1k , Hk) more efficiently via the so-called sweep operator [17]. We outline this technique in Algorithm 1 (Appendix B). The algorithm has a reduction step and an augmentation step; in the reduction step, we reduce the Hessian and its inverse to remove the presence of any predictors that are no longer active. In the augmentation step, we update the Hessian and its inverse to account for predictors that have just become active.
The complexity of the steps depends on the size of the sets C = Ak−1 \ Ak,D = Ak \ Ak−1, and E = Ak ∩ Ak−1 The complexity of the reduction step is O(|C|3 + |C|2|E| + |C||E|2) and the complexity of the augmentation step isO(|D|2n+n|D||E|+|D|2|E|+|D|3) since n ≥ max(|E|, |D|). An iteration of Algorithm 1 therefore has complexity O(|D|2n+ n|D||E|+ |C|3 + |C||E|2). In most applications, the computationally dominant term will be n|D||E| (since, typically, n > |E| > D > C) which could be compared to evaluating the gradient for βAk , which is n (|D|+ |E|) when βAck = 0. Note that we have so far assumed that the inverse of the Hessian exists, but this need not be the case. To deal with this issue we precondition the Hessian. See Appendix C for details.
3.3.2 Warm Starts
The availability of the Hessian and its inverse offers a coefficient warm start that is more accurate than the standard, naive, approach of using the estimate from the previous step. With the Hessian screening rule, we use the following warm start.
β̂(λk+1)Ak := β̂(λk)Ak + (λk − λk+1)H −1 Ak sign ( β̂(λk)Ak ) , (7)
3The chance of this happening is tied to the setting of γ.
where H−1Ak is the Hessian matrix for the differentiable part of the objective. Our warm start is equivalent to the one used in Park and Hastie [18], but is here made much more efficient due due to the efficient updates of the Hessian and its inverse that we use. Remark 3.3. The warm start given by (7) is the exact solution at λk if the active set remains constant in [λk+1, λk].
As a first demonstration of the value of this warm start, we look at two data sets: YearPredicitionMSD and colon-cancer. We fit a full regularization path using the setup as outlined in Section 4, with or without Hessian warm starts. For YearPredictionMSD we use the standard lasso, and for colon-cancer `1-regularized logistic regression.
The Hessian warm starts offer sizable reductions in the number of passes of the solver (Figure 2), for many steps requiring only a single pass to reach convergence. On inspection, this is not a surprising find. There are no changes in the active set for many of these steps, which means that the warm start is almost exact—“almost” due to the use of a preconditioner for the Hessian (see Appendix C).
3.3.3 General Loss Functions
We now venture beyond the standard lasso and consider loss functions of the form
f(β;X) = n∑ i=1 fi(x T i β) (8)
where fi is convex and twice differentiable. This, for instance, includes logistic, multinomial, and Poisson loss functions. For the strong rule and working set strategy, this extension does not make much of a difference. With the Hessian screening rule, however, the situation is different.
To see this, we start by noting that our method involving the Hessian is really a quadratic Taylor approximation of (1) around a specific point β0. For loss functions of the type (8), this approximation is equal to
Q(β, β0) = f(β0;X) + n∑ i=1 ( xTi f ′ i(x T i β0)(β − β0) + 1 2 (β − β0)TxTi f ′′i (xTi β0)xi(β − β0) )
= 1
2
( ỹ(xTi β0)−Xβ )T D (w(β0)) ( ỹ(xTi β0)−Xβ ) + C(β0),
where D(w(β0)) is a diagonal matrix with diagonal entries w(β0) where w(β0)i = f ′′(xTi β0) and ỹ(z)i = f ′ i(z) / f ′′i (z)− xTi β0, whilst C(β0) is a constant with respect to β.
Suppose that we are on the lasso path at λk and want to approximate c(λk+1). In this case, we simply replace f(β;X) in (1) with Q(β, β̂(λk)), which leads to the following gradient approximation:
cH(λk+1) = c(λk) + (λk+1 − λk)XTD(w)XAk(XTAkD(w)XAk) −1 sign ( β̂(λk)Ak ) ,
where w = w ( β̂(λk) ) . Unfortunately, we cannot use Algorithm 1 to update XTAkD(w)XAk . This means that we are forced to either update the Hessian directly at each step, which can be computationally demanding when |Ak| is large and inefficient when X is very sparse, or to approximate
D(w) with an upper bound. In logistic regression, for instance, we can use 1/4 as such a bound, which also means that we once again can use Algorithm 1.
In our experiments, we have employed the following heuristic to decide whether to use an upper bound or compute the full Hessian in these cases: we use full updates at each step if sparsity(X)n/max{n, p} < 10−3 and the upper bound otherwise.
3.3.4 Reducing the Impact of KKT Checks
The Hessian Screening Rule is heuristic, which means there may be violations. This necessitates that we verify the KKT conditions after having reached convergence for the screened set of predictors, and add predictors back into the working set for which these checks fail. When the screened set is small relative to p, the cost of optimization is often in large part consumed by these checks. Running these checks for the full set of predictors always needs to be done once, but if there are violations during this step, then we need repeat this check, which is best avoided. Here we describe two methods to tackle this issue.
We employ a procedure equivalent to the one used in Tibshirani et al. [10] for the working set strategy: we first check the KKT conditions for the set of predictors singled out by the strong rule and then, if there are no violations in that set, check the full set of predictors for violations. This works well because the strong rule is conservative—violations are rare—which means that we seldom need to run the KKT checks for the entire set more than once.
If we, in spite of the augmentation of the rule, run into violations when checking the full set of predictors, that is, when the strong rule fails to capture the active set, then we can still avoid repeating the full KKT check by relying on Gap Safe screening: after having run the KKT checks and have failed to converge, we screen the set of predictors using the Gap Safe rule. Because this is a safe rule, we can be sure that the predictors we discard will be inactive, which means that we will not need to include them in our upcoming KKT checks. Because Gap Safe screening and the KKT checks rely on exactly the same quantity—the correlation vector–we can do so at marginal extra cost. To see how this works, we now briefly introduce Gap Safe screening. For details, please see Fercoq, Gramfort, and Salmon [6].
For the ordinary lasso (`1-regularized least squares), the primal (1) is P (β) = 12‖y−Xβ‖ 2 2 + λ‖β‖1 and the corresponding dual is
D(θ) = 1
2 ‖y‖22 −
λ2
2 ∥∥∥θ − y λ ∥∥∥2 2
(9)
subject to ‖XT θ‖∞ ≤ 1. The duality gap is then G(β, θ) = P (β)−D(θ) and the relation between the primal and dual problems is given by y = λθ̂+Xβ̂, where θ̂ is the maximizer to the dual problem (9). In order to use Gap Safe screening, we need a feasible dual point, which can be obtained via dual point scaling, taking θ = (y−Xβ) / max ( λ, ‖XT (y−Xβ)‖∞ ) . The Gap Safe screening rule then
discards the jth feature if |xTj θ| < 1−‖xj‖2 √
2G(β, θ)/λ2. Since we have computed XT (y−Xβ) as part of the KKT checks, we can perform Gap Safe screening at an additional (and marginal) cost amounting to O(n) +O(p).
Since this augmentation benefits the working set strategy too, we adopt it in our implementation of this method as well. To avoid ambiguity, we call this version working+. Note that this makes the working set strategy quite similar to Blitz. In Appendix F.8 we show the benefit of adding this type of screening.
3.3.5 Final Algorithm
The Hessian screening method is presented in full in Algorithm 2 (Appendix B).
Lemma 3.4. Let β ∈ Rp×m be the output of Algorithm 2 for a path of length m and convergence threshold ε > 0. For each step k along the path and corresponding solution β(k) ∈ Rp, there is a dual-feasible point θ(k) such that G(β(k), θ(k)) < ζε.
Proof. First note that Gap safe screening [7, Theorem 6] ensures that G ⊇ Ak. Next, note that the algorithm guarantees that the working set, W , grows with each iteration until |xTj r| < λk for all
j ∈ G \W , at which point
max ( λk, ‖XTW(y −XWβ (k) W )‖∞ ) = max ( λk, ‖XTG (y −XGβ (k) G )‖∞ ) .
At this iteration, convergence at line 2, for the subproblem (XW , y), guarantees convergence for the full problem, (X, y), since
θ(k) = y −XWβ(k)W max ( λk, ‖XTW(y −XWβ (k) W )‖∞ ) is dual-feasible for the full problem.
3.3.6 Extensions
Approximate Homotopy In addition to improved screening and warm starts, the Hessian also allows us to construct the regularization path adaptively via approximate homotopy [19]. In brief, the Hessian screening rule allows us to choose the next λ along the path adaptively, in effect distributing the grid of λs to better approach the exact (homotopy) solution for the lasso, avoiding the otherwise heuristic choice, which can be inappropriate for some data sets.
Elastic Net Our method can be extended to the elastic net [20], which corresponds to adding a quadratic penalty φ‖β‖22/2 to (1). The Hessian now takes the form XTAXA + φI . Loosely speaking, the addition of this term makes the problem “more“ quadratic, which in turn improves both the accuracy and stability of the screening and warm starts we use in our method. As far as we know, however, there is unfortunately no way to update the inverse of the Hessian efficiently in the case of the elastic net. More research in this area would be welcome.
4 Experiments
Throughout the following experiments, we scale and center predictors with the mean and uncorrected sample standard deviation respectively. For the lasso, we also center the response vector, y, with the mean.
To construct the regularization path, we adopt the default settings from glmnet: we use a log-spaced path of 100 λ values from λmax to ξλmax, where ξ = 10−2 if p > n and 10−4 otherwise. We stop the path whenever the deviance ratio, 1 − dev/devnull, reaches 0.999 or the fractional decrease in deviance is less than 10−5. Finally, we also stop the path whenever the number of coefficients ever to be active predictors exceeds p.
We compare our method against working+ (the modified version of the working set strategy from Tibshirani et al. [10]), Celer [15], and Blitz [14]. We initially also ran our comparisons against EDPP [9], the Gap Safe rule [6], and Dynamic Sasvi [8] too, yet these methods performed so poorly that we omit the results in the main part of this work. The interested reader may nevertheless consult Appendix F.6 where results from simulated data has been included for these methods too.
We use cyclical coordinate descent with shuffling and consider the model to converge when the duality gap G(β, θ) ≤ εζ, where we take ζ to be ‖y‖22 when fitting the ordinary lasso, and n log 2 when fitting `1-regularized logistic regression. Unless specified, we let ε = 10−4. These settings are standard settings and, for instance, resemble the defaults used in Celer. For all of the experiments, we employ the line search algorithm used in Blitz4.
The code used in these experiments was, for every method, programmed in C++ using the Armadillo library [21, 22] and organized as an R package via Rcpp [23]. We used the renv package [24] to maintain dependencies. The source code, including a Singularity [25] container and its recipe for reproducing the results, are available at https://github.com/jolars/HessianScreening. Additional details of the computational setup are provided in Appendix D.
4Without the line search, all of the tested methods ran into convergence issues, particularly for the highcorrelation setting and logistic regression.
4.1 Simulated Data
Let X ∈ Rn×p, β ∈ Rp, and y ∈ Rn be the predictor matrix, coefficient vector, and response vector respectively. We draw the rows of the predictor matrix independently and identically distributed from N (0,Σ) and generate the response from N (Xβ, σ2I) with σ2 = βTΣβ/SNR, where SNR is the signal-to-noise ratio. We set s coefficients, equally spaced throughout the coefficient vector, to 1 and the rest to zero.
In our simulations, we consider two scenarios: a low-dimensional scenario and a high-dimensional scenario. In the former, we set n = 10 000, p = 100, s = 5, and the SNR to 1. In the highdimensional scenario, we take n = 400, p = 40 000, s = 20, and set the SNR to 2. These SNR values are inspired by the discussion in Hastie, Tibshirani, and Tibshirani [26] and intend to cover the middle-ground in terms of signal strength. We run our simulations for 20 iterations.
From Figure 3, it is clear that the Hessian screening rule performs best, taking the least time in every setting examined. The difference is largest for the high-correlation context in the low-dimensional setting and otherwise roughly the same across levels of correlation.
The differences between the other methods are on average small, with the working+ strategy performing slightly better in the p > n scenario. Celer and Blitz perform largely on par with one another, although Celer sees an improvement in a few of the experiments, for instance in logistic regression when p > n.
4.2 Real Data
In this section, we conduct experiments on real data sets. We run 20 iterations for the smaller data sets studied and three for the larger ones. For information on the sources of these data sets, please see Appendix E. For more detailed results of these experiments, please see Appendix F.5.
Starting with the case of ℓ1-regularized least-squares regression, we observe that the Hessian screening rule performs best for all five data sets tested here (Table 1), in all but one instance taking less than half the time compared to the runner-up, which in each case is the working+ strategy. The difference is particularly large for the YearPredictionMSD and e2006-tfidf data sets.
In the case of ℓ1-regularized logistic regression, the Hessian method again performs best for most of the examined data sets, for instance completing the regularization path for the madelon data set around five times faster than the working+ strategy. The exception is the arcene data set, for which the working+ strategy performs best out of the four methods.
We have provided additional results related to the effectiveness of our method in Appendix F.
5 Discussion
We have presented the Hessian Screening Rule: a new heuristic predictor screening rule for ℓ1-regularized generalized linear models. We have shown that our screening rule offers large performance improvements over competing methods, both in simulated experiments and in the majority of the real data sets that we study here. The improved performance of the rule appears to come not only from improved effectiveness in screening, particularly in the high-correlation setting, but also from the much-improved warm starts, which enable our method to dominate in the n ≫ p setting. Note that although we have focused on ℓ1-regularized least-squares and logistic regression here, our rule is applicable to any composite objective for which the differentiable part is twice-differentiable.
One limitation of our method is that it consumes more memory than its competitors owing to the storage of the Hessian and its inverse. This cost may become prohibitive for cases when min{n, p} is large. In these situations the next-best choice may instead be the working set strategy. Note also that we, in this paper, focus entirely on the lasso path. The Hessian Screening Rule is a sequential rule and may therefore not prove optimal when solving for a single λ, in which case a dynamic strategy such as Celer and Blitz likely performs better.
With respect to the relative performance of the working set strategy, Celer, and Blitz, we note that our results deviate somewhat from previous comparisons [15, 14]. We speculate that these differences might arise from the fact that we have used equivalent implementations for all of the methods and from the modification that we have used for the working set strategy.
Many avenues remain to be explored in the context of Hessian-based screening rules and algorithms, such as developing more efficient methods for updating the Hessian matrix for non-least-squares objectives such as logistic regression, and using second-order information to further improve the optimization method used. Other interesting directions include adapting the rules to more complicated regularization problems, such as the fused lasso [27], SLOPE [28], SCAD [29], and MCP [30]. Although the latter two of these are non-convex problems, they are locally convex for intervals of the regularization path [31], which enables the use of our method. Adapting the method for use in batch stochastic gradient descent would also be an interesting topic for further study, for instance by using methods such as the ones outlined in Asar et al. [32] to ensure that the Hessian remains positive definite.
Finally, we do not expect there to be any negative societal consequences of our work given that it is aimed solely at improving the performance of an optimization method.
Acknowledgments and Disclosure of Funding
We would like to thank Małgorzata Bogdan for valuable comments. This work was funded by the Swedish Research Council through grant agreement no. 2020-05081 and no. 2018-01726. The computations were enabled by resources provided by LUNARC. The results shown here are in part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga. | 1. What is the main contribution of the paper regarding L1 sparse modeling?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works?
3. Do you have any questions regarding the novelty and technical aspects of the paper?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. Are there any concerns or suggestions for improving the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper proposes a screening strategy for L1 sparse modeling. The basic idea is to use a prediction of the optimal solution as a function of the regularization parameter, derived from the optimality conditions (i.e., the solution path). The authors further propose combining working-set screening based on the predicted solution with the so-called strong rule to make screening more efficient.
Strengths And Weaknesses
Overall, the paper is easy to follow and the technical quality is fine. The purpose is clear and the procedure is described in detail. However, a critical issue is that the main idea of the proposal (Hessian screening) is not novel. Approaches based on a similar idea have been studied in the context of path following, though this is not fully acknowledged. Detailed comments are as follows.
Closely related papers in the path-following literature are missing. For example, since the following two papers contain conceptually quite similar approaches, the differences should have been discussed in detail, though currently nothing is mentioned: [Rosset2004] S. Rosset, Following Curved Regularized Optimization Solution Paths, NeurIPS 2004. [Park2006] M. Y. Park and T. Hastie, L1 Regularization Path Algorithm for Generalized Linear Models, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2006.
The proposed algorithm can be seen as a so-called 'predictor-corrector' method in general (e.g., discussed in [Park2006]). Further, [Park2006] also discusses the working set selection based on the update equation (6).
Therefore, and most importantly, the idea of using Theorem 3.1 to predict the variable c (the working-set criterion) at the next λ is already known. I therefore do not think the concept of 'Hessian screening' is novel.
[Rosset2004] also discusses a similar approach (a Hessian-based update) based on a theoretical property essentially identical to Theorem 3.1. Further, that paper also provides an error analysis of the predictor.
The combination with the strong rule and the additional adjustments would be novel, but their technical significance is a bit marginal because these are quite simple heuristics.
The techniques in the 'Updating the Hessian' paragraph are also already known (the same techniques are repeatedly discussed in the path-following literature).
The 'Warm Starts' technique is also widely known (e.g., [Park2006]).
Minor comments:
Since Theorem 3.1 is widely known, this should be clarified more explicitly rather than noted only in a footnote.
Questions
Convergence to the optimal solution is not discussed. Could you give any information about the optimality of the solution in the sense of the KKT conditions of (1)?
Limitations
In Section 5, limitations were discussed. |
NIPS | Title
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
Abstract
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5× smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code is available at: github.com/NVlabs/SegFormer.
1 Introduction
Semantic segmentation is a fundamental task in computer vision and enables many downstream applications. It is related to image classification since it produces per-pixel category prediction instead of image-level prediction. This relationship is pointed out and systematically studied in a seminal work [1], where the authors used fully convolutional networks (FCNs) for semantic segmentation tasks. Since then, FCN has inspired many follow-up works and has become a predominant design choice for dense prediction.
Since there is a strong relation between classification and semantic segmentation, many stateof-the-art semantic segmentation frameworks are variants of popular architectures for image classification on ImageNet. Therefore, designing backbone architectures has remained an active area
∗Corresponding authors: Zhiding Yu and Ping Luo
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
in semantic segmentation. Indeed, starting from early methods using VGGs [1, 2], to the latest methods with significantly deeper and more powerful backbones [3], the evolution of backbones has dramatically pushed the performance boundary of semantic segmentation. Besides backbone architectures, another line of work formulates semantic segmentation as a structured prediction problem, and focuses on designing modules and operators, which can effectively capture contextual information. A representative example in this area is dilated convolution [4, 5], which increases the receptive field by “inflating” the kernel with holes.
Witnessing the great success in natural language processing (NLP), there has been a recent surge of interest to introduce Transformers to vision tasks. Dosovitskiy et al. [6] proposed vision Transformer (ViT) for image classification. Following the Transformer design in NLP, the authors split an image into multiple linearly embedded patches and feed them into a standard Transformer with positional embeddings (PE), leading to an impressive performance on ImageNet. In semantic segmentation, Zheng et al. [7] proposed SETR to demonstrate the feasibility of using Transformers in this task.
SETR adopts ViT as a backbone and incorporates several CNN decoders to enlarge feature resolution. Despite the good performance, ViT has two important limitations: 1) ViT outputs single-scale lowresolution features instead of multi-scale ones, and 2) it has very high computational cost on large images. To address these limitations, Wang et al. [8] proposed a pyramid vision Transformer (PVT), a natural extension of ViT with pyramid structures for dense prediction. PVT shows considerable improvements over the ResNet counterpart on object detection and semantic segmentation. However, together with other emerging methods such as Swin Transformer [9] and Twins [10], these methods mainly consider the design of the Transformer encoder, neglecting the contribution of the decoder for further improvements.
This paper introduces SegFormer, a cutting-edge Transformer framework for semantic segmentation that jointly considers efficiency, accuracy, and robustness. In contrast to previous methods, our framework redesigns both the encoder and the decoder. The key novelties of our approach are:
• A novel positional-encoding-free and hierarchical Transformer encoder.
• A lightweight All-MLP decoder design that yields a powerful representation without complex and computationally demanding modules.
• As shown in Figure 1, SegFormer sets a new state-of-the-art in terms of efficiency, accuracy and robustness on three publicly available semantic segmentation datasets.
First, the proposed encoder avoids interpolating positional codes when performing inference on images with resolutions different from the training one. As a result, our encoder can easily adapt to arbitrary test resolutions without impacting the performance. In addition, the hierarchical part enables the encoder to generate both high-resolution fine features and low-resolution coarse features; this is in contrast to ViT, which can only produce a single low-resolution feature map at a fixed resolution. Second, we propose a lightweight MLP decoder where the key idea is to take advantage of the Transformer-induced features where the attentions of lower layers tend to stay local, whereas the ones of the highest layers are highly non-local. By aggregating the information from different layers, the MLP decoder combines both local and global attention. As a result, we obtain a simple and straightforward decoder that renders powerful representations.
We demonstrate the advantages of SegFormer in terms of model size, run-time, and accuracy on three publicly available datasets: ADE20K, Cityscapes, and COCO-Stuff. On Cityscapes, our lightweight model, SegFormer-B0, without accelerated implementations such as TensorRT, yields 71.9% mIoU at 48 FPS, which, compared to ICNet [11], represents a relative improvement of 60% and 4.2% in latency and performance, respectively. Our largest model, SegFormer-B5, yields 84.0% mIoU, which represents a relative 1.8% mIoU improvement while being 5 × faster than SETR [7]. On ADE20K, this model sets a new state-of-the-art of 51.8% mIoU while being 4 × smaller than SETR. Moreover, our approach is significantly more robust to common corruptions and perturbations than existing methods, therefore being suitable for safety-critical applications. Code will be publicly available.
2 Related Work
Semantic Segmentation. Semantic segmentation can be seen as an extension of image classification from image level to pixel level. In the deep learning era [12–14], FCN [1] is the fundamental work of semantic segmentation, which is a fully convolution network that performs pixel-to-pixel classification in an end-to-end manner. After that, researchers focused on improving FCN from different aspects such as: enlarging the receptive field [15–17, 5, 2, 4, 18]; refining the contextual information [19– 27]; introducing boundary information [28–35]; designing various attention modules [36–44]; or using AutoML technologies [45–49]. These methods significantly improve semantic segmentation performance at the expense of introducing many empirical modules, making the resulting framework computationally demanding and complicated. More recent methods have proved the effectiveness of Transformer-based architectures for semantic segmentation [7, 44]. However, these methods are still computationally demanding.
Transformer backbones. ViT [6] is the first work to prove that a pure Transformer can achieve state-of-the-art performance in image classification. ViT treats each image as a sequence of tokens and then feeds them to multiple Transformer layers to make the classification. Subsequently, DeiT [50] further explores a data-efficient training strategy and a distillation approach for ViT. More recent methods such as T2T ViT [51], CPVT [52], TNT [53], CrossViT [54] and LocalViT [55] introduce tailored changes to ViT to further improve image classification performance.
Beyond classification, PVT [8] is the first work to introduce a pyramid structure in Transformer, demonstrating the potential of a pure Transformer backbone compared to CNN counterparts in dense prediction tasks. After that, methods such as Swin [9], CvT [56], CoaT [57], LeViT [58] and Twins [10] enhance the local continuity of features and remove fixed size position embedding to improve the performance of Transformers in dense prediction tasks.
Transformers for specific tasks. DETR [50] is the first to use Transformers for end-to-end object detection framework without non-maximum suppression (NMS). Other works have also used Transformers in tasks such as tracking [59, 60], super-resolution [61], re-id [62], colorization [63], retrieval [64] and multi-modal learning [65, 66]. For semantic segmentation, SETR [7] adopts ViT [6] as a backbone to extract features, achieving impressive performance. However, these Transformer-based methods have very low efficiency and, thus, difficult to deploy in real-time applications.
3 Method
As depicted in Figure 2, SegFormer consists of two main modules: (1) a hierarchical Transformer encoder; and (2) a lightweight All-MLP decoder to predict the final mask. Given an image with size H ×W × 3, we first divide it into patches of size 4 × 4. Unlike ViT which uses 16 × 16, using fine-grained patches favors semantic segmentation. Second, we use these patches as input to the hierarchical Transformer encoder to get multi-level features with resolution {1/4, 1/8, 1/16, 1/32}
of the original image. We then pass these multi-level features to the All-MLP decoder to predict the segmentation mask at a H/4 × W/4 × Ncls resolution, where Ncls is the number of categories. In the remainder of this section, we first detail the proposed encoder and decoder designs and then summarize the main differences of our approach compared to SETR.
3.1 Hierarchical Transformer Encoder
We design a series of Mix Transformer encoders (MiT), MiT-B0 to MiT-B5, with the same architecture but different sizes. On top of the hierarchical architecture and efficient self-attention module in PVT [8], we further propose several novel features including overlapped patch merging and positional-encoding-free design which will be shown to greatly benefit the segmentation tasks.
Hierarchical Feature Representation. Unlike ViT [6], our encoder generates multi-level, multi-scale features given an input image. These features provide both high-resolution fine-grained features and low-resolution coarse features that boost the performance of semantic segmentation. Specifically, given an input image of size H × W × 3, we perform patch merging to obtain a hierarchical feature map Fi with a resolution of H/2^{i+1} × W/2^{i+1} × Ci, where i ∈ {1, 2, 3, 4}, and Ci+1 is larger than Ci.
Efficient Self-Attention. A major bottleneck of the above hierarchical feature representation is the quadratic self-attention complexity with long sequence inputs from higher-resolution features. Recall that in the original multi-head self-attention, each of the heads Q, K, V has the same dimensions N × C, where N = H × W is the length of the sequence, and the self-attention is estimated as:
Attention(Q, K, V) = Softmax(QKᵀ/√d_head) V.   (1)
We instead adopt the sequence reduction process introduced in [8]. This process uses a reduction ratio R to reduce the length of the sequence as follows:
K̂ = Reshape(N/R, C·R)(K),
K = Linear(C·R, C)(K̂),   (2)
where K is the sequence to be reduced, Reshape(N/R, C·R)(K) denotes reshaping K into one of shape (N/R) × (C·R), and Linear(Cin, Cout)(·) refers to a linear layer taking a Cin-dimensional tensor as input and generating a Cout-dimensional tensor as output. Therefore, the new K has dimensions N/R × C. As a result, the complexity of the self-attention mechanism is reduced from O(N²) to O(N²/R). In our experiments, we set R to [64, 16, 4, 1] from stage-1 to stage-4.
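A PyTorch-style sketch of this efficient self-attention is given below. It follows the common PVT-style implementation, in which the reduction of Eq. (2) is realised as a strided convolution over the 2-D feature map; the spatial stride is √R, so a stride of 8 corresponds to the sequence reduction R = 64 used in stage-1. Normalisation and other details are illustrative and may differ from the released code:

```python
import torch
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    """Multi-head self-attention with the sequence reduction of Eq. (2).
    sr_ratio is the spatial stride used for keys/values, so R = sr_ratio ** 2."""
    def __init__(self, dim, num_heads, sr_ratio):
        super().__init__()
        self.num_heads, self.sr_ratio = num_heads, sr_ratio
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        if sr_ratio > 1:
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):  # x: (B, N, C) with N = H * W
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.num_heads, -1).transpose(1, 2)
        if self.sr_ratio > 1:
            x = x.transpose(1, 2).reshape(B, C, H, W)
            x = self.sr(x).reshape(B, C, -1).transpose(1, 2)  # (B, N / R, C)
            x = self.norm(x)
        kv = self.kv(x).reshape(B, -1, 2, self.num_heads, C // self.num_heads)
        k, v = kv.permute(2, 0, 3, 1, 4)             # each (B, heads, N/R, C/heads)
        attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```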
Overlapped Patch Merging. Given an image patch, the patch merging process used in ViT unifies an N × N × 3 patch into a 1 × 1 × C vector. This can easily be extended to unify a 2 × 2 × Ci feature patch into a 1 × 1 × Ci+1 vector to obtain hierarchical feature maps. Using this, we can shrink our hierarchical features from F1 (H/4 × W/4 × C1) to F2 (H/8 × W/8 × C2), and then iterate for any other feature map in the hierarchy. This process was initially designed to combine non-overlapping image or feature patches. Therefore, it fails to preserve the local continuity around those patches. Instead, we use an overlapping patch merging process. To this end, we define K, S, and P, where K is the patch size, S is the stride between two adjacent patches, and P is the padding size. In our experiments, we set K = 7, S = 4, P = 3, and K = 3, S = 2, P = 1 to perform overlapping patch merging that produces features of the same size as the non-overlapping process. Similar to the original patch embedding in ViT [6], this operation can be implemented by “nn.Conv2D” in PyTorch.
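A minimal sketch of overlapped patch merging as a strided convolution follows; the LayerNorm after the projection is a common choice and is included here for illustration:

```python
import torch.nn as nn

class OverlapPatchMerging(nn.Module):
    """Overlapped patch merging: (K, S, P) = (7, 4, 3) for the first stage
    and (3, 2, 1) for the remaining stages."""
    def __init__(self, in_ch, out_ch, kernel, stride, padding):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=kernel,
                              stride=stride, padding=padding)
        self.norm = nn.LayerNorm(out_ch)

    def forward(self, x):                  # x: (B, C_in, H, W)
        x = self.proj(x)                   # (B, C_out, H / S, W / S)
        _, _, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)   # (B, H * W, C_out) token sequence
        return self.norm(x), H, W
```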
Positional-Encoding-Free Design. The resolution of the PE in ViT is fixed. One thus needs to interpolate the PE when the test resolution differs from the training one. This leads to a drop in accuracy, which is undesirable since resolution mismatch is common in semantic segmentation. We instead introduce Mix-FFN, in which we exploit the fact that zero padding leaks location information [67] by directly using a 3 × 3 Conv in the feed-forward network (FFN). Mix-FFN is formulated as:
xout = MLP(GELU(Conv3×3(MLP(xin)))) + xin, (3)
where xin is the feature from the self-attention module. Mix-FFN mixes a 3 × 3 convolution and an MLP into each FFN. In our experiments, we will show that a 3 × 3 convolution is sufficient to provide positional information for Transformers. In particular, we use depth-wise convolutions for reducing the number of parameters and improving efficiency.
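A sketch of Mix-FFN following Eq. (3) is shown below; the hidden expansion ratio and the exact placement of the residual are illustrative choices:

```python
import torch.nn as nn

class MixFFN(nn.Module):
    """Mix-FFN of Eq. (3): MLP -> 3x3 depth-wise Conv -> GELU -> MLP, plus the
    residual; the zero padding of the convolution leaks location information."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                padding=1, groups=hidden_dim)  # depth-wise
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, H, W):            # x: (B, N, C) with N = H * W
        out = self.fc1(x)
        B, N, C_h = out.shape
        out = out.transpose(1, 2).reshape(B, C_h, H, W)
        out = self.dwconv(out).flatten(2).transpose(1, 2)
        return x + self.fc2(self.act(out))  # residual as in Eq. (3)
```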
It should be mentioned that CPVT [52] also alleviates this issue by using a 3 × 3 Conv to generate conditional PE at different resolutions and then add it to the feature map. Our work conceptually goes one step further as we argue that adding PE to feature map is not necessary in semantic segmentation. Another recent work CvT [56] introduced 3× 3 Convs to model the spatial relationship among tokens. Despite the converging design, our work differs in both motivation and application as we aim to totally remove PEs to handle the training/testing resolution mismatch issue in semantic segmentation. Our intuition started from [67] whereas the same intuition was not discussed in CvT.
3.2 Lightweight All-MLP Decoder
SegFormer incorporates a lightweight decoder consisting only of MLP layers, thus avoiding the hand-crafted and computationally demanding components typically used in other methods. The key to enabling such a simple decoder is that our hierarchical Transformer encoder has a larger effective receptive field (ERF) than traditional CNN encoders.
The proposed All-MLP decoder consists of four main steps. First, multi-level features Fi from the MiT encoder go through an MLP layer to unify the channel dimension. Then, in a second step, features are up-sampled to 1/4th of the original resolution and concatenated together. Third, an MLP layer is adopted to fuse the concatenated features F. Finally, another MLP layer takes the fused feature to predict the segmentation mask M with a H/4 × W/4 × Ncls resolution, where Ncls is the number of categories. This lets us formulate the decoder as:
F̂i = Linear(Ci, C)(Fi), ∀i
F̂i = Upsample(H/4 × W/4)(F̂i), ∀i
F = Linear(4C, C)(Concat(F̂i)), ∀i
M = Linear(C, Ncls)(F),   (4)
where M refers to the predicted mask, and Linear(Cin, Cout)(·) refers to a linear layer with Cin and Cout as input and output vector dimensions respectively.
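The following PyTorch-style sketch mirrors Eq. (4). Variable names are illustrative, and the released implementation may realise the linear layers as 1 × 1 convolutions or add normalisation and dropout:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AllMLPDecoder(nn.Module):
    """All-MLP decoder of Eq. (4): unify channels, upsample to the 1/4 scale,
    concatenate, fuse, and predict the per-pixel class scores."""
    def __init__(self, in_dims, embed_dim, num_classes):
        super().__init__()
        self.linears = nn.ModuleList(nn.Linear(c, embed_dim) for c in in_dims)
        self.fuse = nn.Linear(4 * embed_dim, embed_dim)
        self.classify = nn.Linear(embed_dim, num_classes)

    def forward(self, feats):                # feats: list of (B, C_i, H_i, W_i)
        target = feats[0].shape[2:]          # spatial size of the 1/4-scale stage
        outs = []
        for f, lin in zip(feats, self.linears):
            x = lin(f.flatten(2).transpose(1, 2))             # (B, N_i, C)
            x = x.transpose(1, 2).reshape(f.shape[0], -1, *f.shape[2:])
            outs.append(F.interpolate(x, size=target, mode="bilinear",
                                      align_corners=False))
        x = torch.cat(outs, dim=1).flatten(2).transpose(1, 2)  # (B, N, 4C)
        x = self.classify(self.fuse(x))
        # Reshape to (B, num_classes, H/4, W/4) to obtain the mask M.
        return x.transpose(1, 2).reshape(x.shape[0], -1, *target)
```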
Effective Receptive Field Analysis. For semantic segmentation, maintaining large receptive field to include context information has been a central issue [5, 17, 18]. Here, we use effective receptive field (ERF) [68] as a toolkit to visualize and interpret why our MLP decoder design is so effective on Transformers. In Figure 3, we visualize ERFs of the four encoder stages and the decoder heads for both DeepLabv3+ and SegFormer. We can make the following observations:
• The ERF of DeepLabv3+ is relatively small even at Stage-4, the deepest stage.
• SegFormer’s encoder naturally produces local attentions which resemble convolutions at lower stages, while able to output highly non-local attentions that effectively capture contexts at Stage-4.
• As shown with the zoom-in patches in Figure 3, the ERF of the MLP head (blue box) differs from Stage-4 (red box) with a significant stronger local attention besides the non-local attention.
The limited receptive field in CNN requires one to resort to context modules such as ASPP [16] that enlarge the receptive field but inevitably become heavy. Our decoder design benefits from the non-local attention in Transformers and leads to a larger receptive field without being complex. The same decoder design, however, does not work well on CNN backbones since the overall receptive field is upper bounded by the limited one at Stage-4, as we will verify later in Table 1d.
More importantly, our decoder design essentially takes advantage of a Transformer induced feature that produces both highly local and non-local attention at the same time. By unifying them, our MLP decoder renders complementary and powerful representations by adding few parameters. This is
another key reason that motivated our design. Taking the non-local attention from Stage-4 alone is not enough to produce good results, as will be verified in Table 1d.
3.3 Relationship to SETR.
SegFormer contains multiple more efficient and powerful designs compared with SETR [7]:
• We only use ImageNet-1K for pre-training. ViT in SETR is pre-trained on larger ImageNet-22K.
• SegFormer’s encoder has a hierarchical architecture, which is smaller than ViT and can capture both high-resolution fine and low-resolution coarse features. In contrast, SETR’s ViT encoder can only generate a single low-resolution feature map.
• We remove the Positional Embedding in the encoder, while SETR uses a fixed-shape Positional Embedding, which decreases the accuracy when the resolution at inference differs from the training one.
• Our MLP decoder is more compact and less computationally demanding than the one in SETR. This leads to a negligible computational overhead. In contrast, SETR requires heavy decoders with multiple 3×3 convolutions.
4 Experiments
4.1 Experimental Settings
Datasets: We used three public datasets: Cityscapes [69], ADE20K [70], and COCO-Stuff [71]. ADE20K is a dataset covering 150 fine-grained semantic concepts consisting of 20210 images. Cityscapes is a driving dataset for semantic segmentation consisting of 5000 fine-annotated high-resolution images with 19 categories. COCO-Stuff covers 172 labels and consists of 164k images: 118k for training, 5k for validation, 20k for test-dev and 20k for the test-challenge.
Implementation details: We used the mmsegmentation2 codebase and train on a server with 8 Tesla V100. We pre-train the encoder on the Imagenet-1K dataset and randomly initialize the decoder. During training, we applied data augmentation through random resize with ratio 0.5-2.0, random horizontal flipping, and random cropping to 512 × 512, 1024×1024, 512 × 512 for ADE20K, Cityscapes and COCO-Stuff. Following [9] we set crop size to 640 × 640 on ADE20K for our largest model B5. We trained the models using AdamW optimizer for 160K iterations on ADE20K, Cityscapes, and 80K iterations on COCO-Stuff. Exceptionally, for the ablation studies, we trained the models for 40K iterations. We used a batch size of 16 for ADE20K, COCO-Stuff and a batch size of 8 for Cityscapes. The learning rate was set to an initial value of 0.00006 and then used a “poly” LR schedule with factor 1.0 by default. For simplicity, we did not adopt widely-used tricks such as OHEM, auxiliary losses or class balance loss. During evaluation, we rescale the short side of the image to training cropping size and keep the aspect ratio for ADE20K and COCO-Stuff. For Cityscapes, we do inference using sliding window test by cropping 1024× 1024 windows. We report semantic segmentation performance using mean Intersection over Union (mIoU).
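A sketch of the optimizer and the "poly" schedule described above follows (weight decay and other per-parameter-group settings are omitted here, as they are not specified in the text):

```python
import torch

def poly_lr(max_iters, power=1.0):
    """Multiplicative factor of the "poly" schedule: (1 - t / max_iters) ** power."""
    return lambda t: (1.0 - t / max_iters) ** power

model = torch.nn.Linear(8, 8)  # stand-in for the segmentation network
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, poly_lr(160_000))

# One optimizer step and one scheduler step per training iteration:
# loss.backward(); optimizer.step(); optimizer.zero_grad(); scheduler.step()
```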
4.2 Ablation Studies
Influence of the size of model. We first analyze the effect of increasing the size of the encoder on the performance and model efficiency. Figure 1 shows the performance vs. model efficiency for ADE20K as a function of the encoder size and, Table 1a summarizes the results for the three datasets. The first thing to observe here is the size of the decoder compared to the encoder. As shown, for the lightweight model, the decoder has only 0.4M parameters. For MiT-B5 encoder, the decoder only takes up to 4% of the total number of parameters in the model. In terms of performance, we can observe that, overall, increasing the size of the encoder yields consistent improvements on all the datasets. Our lightweight model, SegFormer-B0, is compact and efficient while maintaining a competitive performance, showing that our method is very convenient for real-time applications. On the other hand, our SegFormer-B5, the largest model, achieves state-of-the-art results on all three datasets, showing the potential of our Transformer encoder.
2https://github.com/open-mmlab/mmsegmentation
Influence of C, the MLP decoder channel dimension. We now analyze the influence of the channel dimension C in the MLP decoder, see Section 3.2. In Table 1b we show performance, flops, and parameters as a function of this dimension. We can observe that setting C = 256 provides a very competitive performance and computational cost. The performance increases as C increases; however, it leads to larger and less efficient models. Interestingly, this performance plateaus for channel dimensions wider than 768. Given these results, we choose C = 256 for our real-time models SegFormer-B0, B1 and C = 768 for the rest.
Mix-FFN vs. Positional Encoder (PE). In this experiment, we analyze the effect of removing the positional encoding in the Transformer encoder in favor of using the proposed Mix-FFN. To this end, we train Transformer encoders with a positional encoding (PE) and the proposed Mix-FFN and perform inference on Cityscapes with two different image resolutions: 768×768 using a sliding window, and 1024×2048 using the whole image. Table 1c shows the results for this experiment. As shown, for a given resolution, our approach using Mix-FFN clearly outperforms using a positional encoding. Moreover, our approach is less sensitive to differences in the test resolution: the accuracy drops 3.3% when using a positional encoding with a
lower resolution. In contrast, when we use the proposed Mix-FFN the performance drop is reduced to only 0.7%. From these results, we can conclude using the proposed Mix-FFN produces better and more robust encoders than those using positional encoding.
Effective receptive field evaluation. In Section 3.2, we argued that our MLP decoder benefits from Transformers having a larger effective receptive field compared to other CNN models. To quantify this effect, in this experiment, we compare the performance of our MLP-decoder when used with CNN-based encoders such as ResNet or ResNeXt. As shown in Table 1d, coupling our MLP-decoder with a CNN-based encoder yields a significantly lower accuracy compared to coupling it with the proposed Transformer encoder. Intuitively, as a CNN has a smaller receptive field than the Transformer (see the analysis in Section 3.2), the MLP-decoder is not enough for global reasoning. In contrast, coupling our Transformer encoder with the MLP decoder leads to the best performance. Moreover, for Transformer encoder, it is necessary to combine low-level local features and high-level non-local features instead of only high-level feature.
Influence of different encoders. We select two representative Transformer encoders, ViT [6] and Swin [9], and compare them with our MiT encoder. As shown in Table 3, with the same decoder, e.g. the MLP decoder, MiT-B2 is 3.1% higher than Swin-T with similar encoder parameters. Moreover, MiT-B5 has far fewer encoder parameters than ViT-large, but is 3+% mIoU higher than ViT-large. These experiments show that our MiT encoder is better suited than Swin and ViT for semantic segmentation.
Influence of different decoders. We also test the MiT encoder with different decoders. As shown in Table 3, the mIoUs are similar across decoders, while the proposed MLP decoder has the fewest parameters and is only 1/8 the size of the UperNet decoder used in Swin. The MLP decoder is thus an important design towards efficient segmentation.
4.3 Comparison to state of the art methods
We now compare our results with existing approaches on the ADE20K [70] and Cityscapes [69]. More experiments about COCO-Stuff [71] are in appendix. ADE20K and Cityscapes: Table 2 summarizes our results including parameters, FLOPS, latency, and accuracy for ADE20K and Cityscapes. In the top part of the table, we report real-time approaches where we include state-of-the-art methods and our results using the MiT-B0 lightweight encoder. In the bottom part, we focus on performance and report the results of our approach and related works using stronger encoders.
On ADE20K, SegFormer-B0 yields 37.4% mIoU using only 3.8M parameters and 8.4G FLOPs, outperforming all other real-time counterparts in terms of parameters, flops, and latency. For instance, compared to DeeplabV3+ (MobileNetV2), SegFormer-B0 is 7.4 FPS faster and keeps a 3.4% better mIoU.
Moreover, SegFormer-B5 outperforms all other approaches, including the previous best SETR, and establishes a new state-of-the-art of 51.8%, which is 1.6% mIoU better than SETR while being significantly more efficient.
As shown in Table 2, our results also hold on Cityscapes. SegFormer-B0 yields 15.2 FPS and 76.2% mIoU (with the shorter side of the input image being 1024), which represents a 1.3% mIoU improvement and a 2× speedup compared to DeeplabV3+. Moreover, with the shorter side of the input image being 512, SegFormer-B0 runs at 47.6 FPS and yields 71.9% mIoU, which is 17.3 FPS faster and 4.2% better than ICNet. SegFormer-B5 achieves the best mIoU of 84.0%, outperforming all existing
methods by at least 1.8% mIoU, and it runs 5 × faster while being 4 × smaller than SETR [7]. On the Cityscapes test set, we follow the common setting [18], merge the validation images into the train set, and report results using ImageNet-1K pre-training and also using Mapillary Vistas [74]. As reported in Table 4, using only Cityscapes fine data and ImageNet-1K pre-training, our method achieves 82.2% mIoU, outperforming all other methods including SETR, which uses ImageNet-22K pre-training and the additional Cityscapes coarse data. Using Mapillary pre-training, ours sets a new state-of-the-art result of 83.1% mIoU.
4.4 Robustness to natural corruptions
Model robustness is important for many safety-critical tasks such as autonomous driving [75]. In this experiment, we evaluate the robustness of SegFormer to common corruptions and perturbations. To this end, we follow [75] and generate Cityscapes-C, which expands the Cityscapes validation set with 16 types of algorithmically generated corruptions from noise, blur, weather and digital categories. We compare our method to DeeplabV3+ and other methods as reported in [75]. We also compare with SETR with DeiT Transformer backbone. The results for this experiment are summarized in Table 5.
Our method significantly outperforms previous CNN-based methods, yielding a relative improvement of up to 588% on Gaussian Noise and up to 295% on snow weather. SegFormer also outperforms SETR in general except for one corruption (snow). The results indicate the strong robustness of SegFormer, which we envision to benefit safety-critical applications where robustness is important.
5 Conclusion
In this paper, we present SegFormer, a simple, clean yet powerful semantic segmentation method which contains a positional-encoding-free, hierarchical Transformer encoder and a lightweight AllMLP decoder. It avoids common complex designs in previous methods, leading to both high efficiency and performance. SegFormer not only achieves new state of the art results on common datasets, but also shows strong zero-shot robustness. We hope our method can serve as a solid baseline for semantic segmentation and motivate further research. One potential limitation is that even our lightest model may still be too heavy for some edge devices. Thus mixed-precision training, pruning, hardware-friendly attention designs and energy consumption are important parts of our future work.
Broader Impact
Efficiency, accuracy, and robustness are important aspects of AI models. Our work pushes the boundary of semantic segmentation models in these three aspects. We envision that the work will benefit a wide range of safety-critical applications, such as autonomous driving and robot navigation. The proposed method improves the “in-the-wild” robustness of these applications, ultimately leading to better safety. Despite such improvement, we fully understand this work is by no means perfect and there are still many challenges towards reliable real world application. Our models may be subject to biases and other possible undesired mistakes, depending on how they are trained in reality. Our
model may also be used for surveillance similar to other AI recognition methods, even though it is not mainly designed for surveillance applications.
Acknowledgments and Disclosure of Funding
We thank Ding Liang, Zhe Chen and Yaojun Liu for insightful discussion without which this paper would not be possible. Ping Luo is supported by the General Research Fund of Hong Kong No.27208720. | 1. What is the focus and contribution of the paper on semantic segmentation?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and simplicity?
3. What are the weaknesses of the paper, especially regarding its similarity to other works?
4. Do you have any concerns about the effectiveness and robustness of the method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This work proposed an efficient Transformer-based semantic segmentation model called SegFormer. In essence, it consists of a hierarchical Transformer encoder and a lightweight MLP decoder. Compared to ViT, the transformer encoder employs overlapped patch merging, self-attention with reduced (key, value) for efficient computation, and a depthwise convolution inserted between two MLP layers in FFN to replace position embedding. The authors also conducted an analysis based on effective receptive field and revealed that transformer's encoder with large receptive field enables relative lightweight decoder design. Extensive experiments demonstrate that SegFormer achieves strong speed-accuracy trade-off and better robustness.
Review
Overall writing is clear and easy to follow. As for the method, the architectural design is simple yet effective as proven by extensive experiments and analysis. As a result, this could serve as a new baseline for semantic segmentation task.
In L165, it was stated that "CPVT [54] uses 3x3 Conv together with PE to implement a data-driven PE". To my best understanding, [54] does not require PE. Instead, similar to this work, a 2D Conv with zero padding is employed to replace PE.
Swin Transformers [9] is missing from the comparison with state-of-the-arts.
The results in ablation experiment (Tab.1(b)(d)) does not match the final performance in Tab.1(a). For example, in Tab.1(d), the model variant MiT-B3 (S1-4) achieves 48.6 mIoU on ADE20K whereas MiT-B3 in Tab.1(a) attains 49.4/50.0 mIoU on the same ADE20K dataset despite having the same FLOPs and number of parameters. Why is this so?
UPDATE AFTER REBUTTAL
I appreciate the authors' detailed rebuttal. Here are my responses after reading other reviewers' comments and the authors' response:
I have to admit that when I first read this paper, I had the same concern as other reviewers that this work shares high similarity with some recent vision transformer works, especially PvT [8] in terms of its hierarchical representation structure and efficient self-attention design. However, the efficient self-attention design was not claimed as the contribution of this work. In fact, the authors admitted in L155 that such design was taken from [8]. Nevertheless, I agree with reviewer GD12 that some of the presented components (e.g. overlapped patch merging, hierarchical representation) are intermixed with techniques from prior works.
The main reason for me to recommend an accept are two-fold: 1) the effectiveness of the proposed Mix-FFN for addressing train-test resolution mismatch; 2) the simplicity and efficiency of the overall design. The former is similar to LocalViT [57] which employs a 3x3 depthwise conv between two consecutive MLPs in FFN. Nevertheless, [57] retains the use of position encoding. On the other hand, this work demonstrates their Mix-FFN not only removes the need of positional encoding, but also fixes train-test resolution disparity while a fixed shape positional encoding fails to. I believe this finding is important for dense prediction task such as semantic segmentation where the testing resolution might differ from the training ones.
Secondly, the simplicity and efficiency of the design makes it serve as a strong baseline for semantic segmentation. SETR has previously shown that transformer backbones work well on semantic segmentation. Differently, this work demonstrates a compact and efficient solution for the task while performing comparably or even better. Although each component may not be entirely novel or inspiring, combining them to offer a simple yet effective solution is beneficial to the community, similar to the role of DeepLab series is playing in the segmentation community.
After reading other reviewers' comments and authors' response, I think most of the concerns were well addressed or at least to a certain extent (e.g. different combinations of encoder and decoder, comparison with more real-time segmentation models, variance of different runs, experiments on PASCAL Context dataset etc.). I agree with reviewer n5uX that the technical contribution may be insufficient for both vision and semantic segmentation community. However, as explained earlier, although each component may not be entirely novel, I think offering a feasible compact solution that could serve as a strong baseline for semantic segmentation would be beneficial to the community. After taking all these into accounts, I have decided to lower my rating to 7. |
NIPS | Title
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
Abstract
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5× smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code is available at: github.com/NVlabs/SegFormer.
N/A
1 Introduction
Semantic segmentation is a fundamental task in computer vision and enables many downstream applications. It is related to image classification since it produces per-pixel category prediction instead of image-level prediction. This relationship is pointed out and systematically studied in a seminal work [1], where the authors used fully convolutional networks (FCNs) for semantic segmentation tasks. Since then, FCN has inspired many follow-up works and has become a predominant design choice for dense prediction.
Since there is a strong relation between classification and semantic segmentation, many stateof-the-art semantic segmentation frameworks are variants of popular architectures for image classification on ImageNet. Therefore, designing backbone architectures has remained an active area
∗Corresponding authors: Zhiding Yu and Ping Luo
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
in semantic segmentation. Indeed, starting from early methods using VGGs [1, 2], to the latest methods with significantly deeper and more powerful backbones [3], the evolution of backbones has dramatically pushed the performance boundary of semantic segmentation. Besides backbone architectures, another line of work formulates semantic segmentation as a structured prediction problem, and focuses on designing modules and operators, which can effectively capture contextual information. A representative example in this area is dilated convolution [4, 5], which increases the receptive field by “inflating” the kernel with holes.
Witnessing the great success in natural language processing (NLP), there has been a recent surge of interest to introduce Transformers to vision tasks. Dosovitskiy et al. [6] proposed vision Transformer (ViT) for image classification. Following the Transformer design in NLP, the authors split an image into multiple linearly embedded patches and feed them into a standard Transformer with positional embeddings (PE), leading to an impressive performance on ImageNet. In semantic segmentation, Zheng et al. [7] proposed SETR to demonstrate the feasibility of using Transformers in this task.
SETR adopts ViT as a backbone and incorporates several CNN decoders to enlarge feature resolution. Despite the good performance, ViT has two important limitations: 1) ViT outputs single-scale lowresolution features instead of multi-scale ones, and 2) it has very high computational cost on large images. To address these limitations, Wang et al. [8] proposed a pyramid vision Transformer (PVT), a natural extension of ViT with pyramid structures for dense prediction. PVT shows considerable improvements over the ResNet counterpart on object detection and semantic segmentation. However, together with other emerging methods such as Swin Transformer [9] and Twins [10], these methods mainly consider the design of the Transformer encoder, neglecting the contribution of the decoder for further improvements.
This paper introduces SegFormer, a cutting-edge Transformer framework for semantic segmentation that jointly considers efficiency, accuracy, and robustness. In contrast to previous methods, our framework redesigns both the encoder and the decoder. The key novelties of our approach are:
• A novel positional-encoding-free and hierarchical Transformer encoder.
• A lightweight All-MLP decoder design that yields a powerful representation without complex and computationally demanding modules.
• As shown in Figure 1, SegFormer sets new a state-of-the-art in terms of efficiency, accuracy and robustness in three publicly available semantic segmentation datasets.
First, the proposed encoder avoids interpolating positional codes when performing inference on images with resolutions different from the training one. As a result, our encoder can easily adapt to arbitrary test resolutions without impacting the performance. In addition, the hierarchical part enables the encoder to generate both high-resolution fine features and low-resolution coarse features, this is in contrast to ViT that can only produce single low-resolution feature maps with fixed resolutions. Second, we propose a lightweight MLP decoder where the key idea is to take advantage of the Transformer-induced features where the attentions of lower layers tend to stay local, whereas the ones of the highest layers are highly non-local. By aggregating the information from different layers, the MLP decoder combines both local and global attention. As a result, we obtain a simple and straightforward decoder that renders powerful representations.
We demonstrate the advantages of SegFormer in terms of model size, run-time, and accuracy on three publicly available datasets: ADE20K, Cityscapes, and COCO-Stuff. On Citysapces, our lightweight model, SegFormer-B0, without accelerated implementations such as TensorRT, yields 71.9% mIoU at 48 FPS, which, compared to ICNet [11], represents a relative improvement of 60% and 4.2% in latency and performance, respectively. Our largest model, SegFormer-B5, yields 84.0% mIoU, which represents a relative 1.8% mIoU improvement while being 5 × faster than SETR [7]. On ADE20K, this model sets a new state-of-the-art of 51.8% mIoU while being 4 × smaller than SETR. Moreover, our approach is significantly more robust to common corruptions and perturbations than existing methods, therefore being suitable for safety-critical applications. Code will be publicly available.
2 Related Work
Semantic Segmentation. Semantic segmentation can be seen as an extension of image classification from image level to pixel level. In the deep learning era [12–14], FCN [1] is the fundamental work of semantic segmentation, which is a fully convolution network that performs pixel-to-pixel classification in an end-to-end manner. After that, researchers focused on improving FCN from different aspects such as: enlarging the receptive field [15–17, 5, 2, 4, 18]; refining the contextual information [19– 27]; introducing boundary information [28–35]; designing various attention modules [36–44]; or using AutoML technologies [45–49]. These methods significantly improve semantic segmentation performance at the expense of introducing many empirical modules, making the resulting framework computationally demanding and complicated. More recent methods have proved the effectiveness of Transformer-based architectures for semantic segmentation [7, 44]. However, these methods are still computationally demanding.
Transformer backbones. ViT [6] is the first work to prove that a pure Transformer can achieve state-of-the-art performance in image classification. ViT treats each image as a sequence of tokens and then feeds them to multiple Transformer layers to make the classification. Subsequently, DeiT [50] further explores a data-efficient training strategy and a distillation approach for ViT. More recent methods such as T2T ViT [51], CPVT [52], TNT [53], CrossViT [54] and LocalViT [55] introduce tailored changes to ViT to further improve image classification performance.
Beyond classification, PVT [8] is the first work to introduce a pyramid structure in Transformer, demonstrating the potential of a pure Transformer backbone compared to CNN counterparts in dense prediction tasks. After that, methods such as Swin [9], CvT [56], CoaT [57], LeViT [58] and Twins [10] enhance the local continuity of features and remove fixed size position embedding to improve the performance of Transformers in dense prediction tasks.
Transformers for specific tasks. DETR [50] is the first to use Transformers for end-to-end object detection framework without non-maximum suppression (NMS). Other works have also used Transformers in tasks such as tracking [59, 60], super-resolution [61], re-id [62], colorization [63], retrieval [64] and multi-modal learning [65, 66]. For semantic segmentation, SETR [7] adopts ViT [6] as a backbone to extract features, achieving impressive performance. However, these Transformer-based methods have very low efficiency and, thus, difficult to deploy in real-time applications.
3 Method
As depicted in Figure 2, SegFormer consists of two main modules: (1) a hierarchical Transformer encoder; and (2) a lightweight All-MLP decoder to predict the final mask. Given an image with size H ×W × 3, we first divide it into patches of size 4 × 4. Unlike ViT which uses 16 × 16, using fine-grained patches favors semantic segmentation. Second, we use these patches as input to the hierarchical Transformer encoder to get multi-level features with resolution {1/4, 1/8, 1/16, 1/32}
of the original image. We then pass these multi-level features to the All-MLP decoder to predict the segmentation mask with a H4 × W 4 ×Ncls resolution, where Ncls is the number of categories. In the remainder of this section, we first detail the proposed encoder and decoder designs and then summarize the main differences of our approach compared to SETR.
3.1 Hierarchical Transformer Encoder
We design a series of Mix Transformer encoders (MiT), MiT-B0 to MiT-B5, with the same architecture but different sizes. On top of the hierarchical architecture and efficient self-attention module in PVT [8], we further propose several novel features including overlapped patch merging and positional-encoding-free design which will be shown to greatly benefit the segmentation tasks.
Hierarchical Feature Representation. Unlike ViT [6], our encoder generates multi-level multi-scale features given an input image. These features provide both high-resolution coarse features and lowresolution fine-grained features that boost the performance of semantic segmentation. Specifically, given an input image with size H×W ×3, we perform patch merging to obtain a hierarchical feature map Fi with a resolution of H2i+1 × W 2i+1 × Ci, where i ∈ {1, 2, 3, 4}, and Ci+1 is larger than Ci.
Efficient Self-Attention. A major bottleneck of the above hierarchical feature representation is the quadratic self-attention complexity with long sequence inputs from higher resolution features. Recall that in the original multi-head self-attention, each of the heads Q,K, V have the same dimensions N × C, where N = H ×W is the length of the sequence, the self-attention is estimated as:
Attention(Q,K, V ) = Softmax( QKT√ dhead )V. (1)
We instead adopt the sequence reduction process introduced in [8]. This process uses a reduction ratio R to reduce the length of the sequence of as follows:
K̂ = Reshape( N
R ,C ·R)(K)
K = Linear(C ·R,C)(K̂), (2)
where K is the sequence to be reduced, Reshape(NR , C ·R)(K) refers to reshape K to the one with shape of NR × (C · R), and Linear(Cin, Cout)(·) refers to a linear layer taking a Cin-dimensional tensor as input and generating a Cout-dimensional tensor as output. Therefore, the new K has dimensions NR × C. As a result, the complexity of the self-attention mechanism is reduced from O(N2) to O(N 2
R ). In our experiments, we set R to [64, 16, 4, 1] from stage-1 to stage-4.
Overlapped Patch Merging. Given an image patch, the patch merging process used in ViT, unifies a N × N × 3 patch into a 1 × 1 × C vector. This can easily be extended to unify a 2 × 2 × Ci feature path into a 1× 1×Ci+1 vector to obtain hierarchical feature maps. Using this, we can shrink our hierarchical features from F1 (H4 × W 4 × C1) to F2 ( H 8 × W 8 × C2), and then iterate for any other feature map in the hierarchy. This process was initially designed to combine non-overlapping image or feature patches. Therefore, it fails to preserve the local continuity around those patches. Instead, we use an overlapping patch merging process. To this end, we define K, S, and P , where K is the patch size, S is the stride between two adjacent patches, and P is the padding size. In our experiments, we set K = 7, S = 4, P = 3 ,and K = 3, S = 2, P = 1 to perform overlapping patch merging to produces features with the same size as the non-overlapping process. Similar to the original patch embedding in ViT [6], this operation can be implemented by “nn.Conv2D” in PyTorch.
Positional-Encoding-Free Design. The resolution of the PE in ViT is fixed. One thus needs to interpolate the PE when the test resolution differs from training. This leads to the drop of accuracy, which is undesirable since the resolution mismatch is common in semantic segmentation. We instead introduce Mix-FFN where we consider the effect of zero padding to the leak location information [67] by directly using a 3 × 3 Conv in the feed-forward network (FFN). Mix-FFN is formulated as:
xout = MLP(GELU(Conv3×3(MLP(xin)))) + xin, (3)
where xin is the feature from the self-attention module. Mix-FFN mixes a 3 × 3 convolution and an MLP into each FFN. In our experiments, we will show that a 3 × 3 convolution is sufficient to provide positional information for Transformers. In particular, we use depth-wise convolutions for reducing the number of parameters and improving efficiency.
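As a concrete but illustrative sketch, Mix-FFN could be implemented in PyTorch as follows; only the structure of Eq. (3) is taken from the text, while the hidden width and class name are our own choices.

```python
import torch.nn as nn

class MixFFN(nn.Module):
    """Mix-FFN of Eq. (3): MLP -> 3x3 depth-wise conv -> GELU -> MLP, with residual (sketch)."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        # Depth-wise 3x3 convolution; its zero padding is what leaks positional information.
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                padding=1, groups=hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, H, W):
        B, N, C = x.shape                                   # token sequence, N = H * W
        h = self.fc1(x)                                     # B x N x hidden
        h = h.transpose(1, 2).reshape(B, -1, H, W)          # back to a spatial map
        h = self.dwconv(h)
        h = h.flatten(2).transpose(1, 2)                    # back to tokens
        h = self.act(h)
        return x + self.fc2(h)                              # residual connection of Eq. (3)
```

The depth-wise grouping keeps the parameter overhead of the 3 × 3 kernel negligible, which matches the efficiency argument above.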
It should be mentioned that CPVT [52] also alleviates this issue by using a 3 × 3 Conv to generate a conditional PE at different resolutions and then adding it to the feature map. Our work conceptually goes one step further, as we argue that adding a PE to the feature map is not necessary in semantic segmentation. Another recent work, CvT [56], introduced 3 × 3 Convs to model the spatial relationship among tokens. Despite the similar design, our work differs in both motivation and application, as we aim to remove PEs entirely to handle the training/testing resolution mismatch in semantic segmentation. Our intuition comes from [67], whereas this aspect was not discussed in CvT.
3.2 Lightweight All-MLP Decoder
SegFormer incorporates a lightweight decoder consisting only of MLP layers, thus avoiding the hand-crafted and computationally demanding components typically used in other methods. The key to enabling such a simple decoder is that our hierarchical Transformer encoder has a larger effective receptive field (ERF) than traditional CNN encoders.
The proposed All-MLP decoder consists of four main steps. First, the multi-level features F_i from the MiT encoder go through an MLP layer to unify the channel dimension. Second, the features are up-sampled to 1/4th of the input resolution and concatenated together. Third, an MLP layer is adopted to fuse the concatenated features F. Finally, another MLP layer takes the fused feature to predict the segmentation mask M at a H/4 × W/4 × N_cls resolution, where N_cls is the number of categories. This lets us formulate the decoder as:
F̂_i = Linear(C_i, C)(F_i), ∀i,
F̂_i = Upsample(H/4 × W/4)(F̂_i), ∀i,
F = Linear(4C, C)(Concat(F̂_i)), ∀i,
M = Linear(C, N_cls)(F),  (4)
where M refers to the predicted mask, and Linear(C_in, C_out)(·) refers to a linear layer with C_in and C_out as the input and output vector dimensions, respectively.
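A simplified PyTorch sketch of Eq. (4) is given below, assuming bilinear up-sampling and 1 × 1 convolutions as the per-pixel linear layers; normalization layers and other details of the released code are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AllMLPDecoder(nn.Module):
    """Lightweight All-MLP decoder of Eq. (4) (illustrative sketch)."""
    def __init__(self, in_channels, embed_dim, num_classes):
        # in_channels: list [C1, C2, C3, C4] of the four encoder stages.
        super().__init__()
        self.linears = nn.ModuleList(
            [nn.Conv2d(c, embed_dim, kernel_size=1) for c in in_channels])  # Linear(C_i, C) per pixel
        self.fuse = nn.Conv2d(4 * embed_dim, embed_dim, kernel_size=1)      # Linear(4C, C)
        self.classifier = nn.Conv2d(embed_dim, num_classes, kernel_size=1)  # Linear(C, N_cls)

    def forward(self, features):
        # features: [F1, F2, F3, F4] with strides 4, 8, 16, 32.
        target_size = features[0].shape[2:]                  # H/4 x W/4
        outs = []
        for f, linear in zip(features, self.linears):
            f = linear(f)                                    # unify channel dimension
            f = F.interpolate(f, size=target_size, mode='bilinear', align_corners=False)
            outs.append(f)
        fused = self.fuse(torch.cat(outs, dim=1))            # fuse concatenated features
        return self.classifier(fused)                        # B x N_cls x H/4 x W/4
```

Note that a 1 × 1 convolution applied to a B × C × H × W tensor is exactly a per-pixel linear layer, so the sketch follows Eq. (4) step by step.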
Effective Receptive Field Analysis. For semantic segmentation, maintaining a large receptive field to include context information has been a central issue [5, 17, 18]. Here, we use the effective receptive field (ERF) [68] as a tool to visualize and interpret why our MLP decoder design is so effective on Transformers. In Figure 3, we visualize the ERFs of the four encoder stages and the decoder heads for both DeepLabv3+ and SegFormer. We can make the following observations:
• The ERF of DeepLabv3+ is relatively small even at Stage-4, the deepest stage.
• SegFormer’s encoder naturally produces local attentions that resemble convolutions at the lower stages, while being able to output highly non-local attentions that effectively capture context at Stage-4.
• As shown by the zoomed-in patches in Figure 3, the ERF of the MLP head (blue box) differs from that of Stage-4 (red box), exhibiting significantly stronger local attention in addition to the non-local attention.
The limited receptive field of CNNs requires one to resort to context modules such as ASPP [16], which enlarge the receptive field but inevitably become heavy. Our decoder design benefits from the non-local attention in Transformers and leads to a larger receptive field without added complexity. The same decoder design, however, does not work well on CNN backbones, since the overall receptive field is upper-bounded by the limited one at Stage-4; we will verify this later in Table 1d.
More importantly, our decoder design essentially takes advantage of a Transformer-induced feature that produces both highly local and non-local attention at the same time. By unifying them, our MLP decoder renders complementary and powerful representations while adding few parameters. This is
another key reason that motivated our design. Taking the non-local attention from Stage-4 alone is not enough to produce good results, as will be verified in Table 1d.
3.3 Relationship to SETR
SegFormer contains several designs that are more efficient and powerful than those of SETR [7]:
• We only use ImageNet-1K for pre-training. ViT in SETR is pre-trained on larger ImageNet-22K.
• SegFormer’s encoder has a hierarchical architecture, which is smaller than ViT and can capture both high-resolution coarse and low-resolution fine features. In contrast, SETR’s ViT encoder can only generate a single low-resolution feature map.
• We remove the positional embedding in the encoder, while SETR uses a fixed-shape positional embedding, which decreases accuracy when the resolution at inference differs from the training one.
• Our MLP decoder is more compact and less computationally demanding than the one in SETR. This leads to a negligible computational overhead. In contrast, SETR requires heavy decoders with multiple 3×3 convolutions.
4 Experiments
4.1 Experimental Settings
Datasets: We used three public datasets: Cityscapes [69], ADE20K [70], and COCO-Stuff [71]. ADE20K is a dataset covering 150 fine-grained semantic concepts and consisting of 20,210 images. Cityscapes is a driving dataset for semantic segmentation consisting of 5,000 fine-annotated high-resolution images with 19 categories. COCO-Stuff covers 172 labels and consists of 164k images: 118k for training, 5k for validation, 20k for test-dev and 20k for the test-challenge.
Implementation details: We used the mmsegmentation2 codebase and trained on a server with 8 Tesla V100 GPUs. We pre-train the encoder on the ImageNet-1K dataset and randomly initialize the decoder. During training, we applied data augmentation through random resizing with a ratio of 0.5–2.0, random horizontal flipping, and random cropping to 512 × 512, 1024 × 1024, and 512 × 512 for ADE20K, Cityscapes and COCO-Stuff, respectively. Following [9], we set the crop size to 640 × 640 on ADE20K for our largest model B5. We trained the models using the AdamW optimizer for 160K iterations on ADE20K and Cityscapes, and 80K iterations on COCO-Stuff. Exceptionally, for the ablation studies, we trained the models for 40K iterations. We used a batch size of 16 for ADE20K and COCO-Stuff and a batch size of 8 for Cityscapes. The learning rate was set to an initial value of 0.00006 and decayed with a “poly” LR schedule with factor 1.0 by default. For simplicity, we did not adopt widely used tricks such as OHEM, auxiliary losses or class-balance loss. During evaluation, we rescale the short side of the image to the training crop size and keep the aspect ratio for ADE20K and COCO-Stuff. For Cityscapes, we perform inference using a sliding-window test with 1024 × 1024 crops. We report semantic segmentation performance using mean Intersection over Union (mIoU).
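As a rough illustration of this optimization setup, the AdamW optimizer and the “poly” schedule can be written in plain PyTorch as below; the stand-in model and the omitted loss computation are placeholders, and the actual experiments rely on the mmsegmentation implementation of this schedule.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 19, kernel_size=1)      # hypothetical stand-in for a SegFormer model
max_iters = 160_000                          # 160K iterations for ADE20K / Cityscapes

optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5)

# "poly" schedule with factor (power) 1.0: lr_t = lr_0 * (1 - t / max_iters) ** 1.0
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda t: (1.0 - t / max_iters) ** 1.0)

for t in range(max_iters):
    optimizer.zero_grad()
    # ... forward pass on a batch and cross-entropy loss.backward() would go here ...
    optimizer.step()
    scheduler.step()
```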
4.2 Ablation Studies
Influence of the model size. We first analyze the effect of increasing the size of the encoder on performance and model efficiency. Figure 1 shows the performance vs. model efficiency on ADE20K as a function of the encoder size, and Table 1a summarizes the results for the three datasets. The first thing to observe is the size of the decoder compared to the encoder. As shown, for the lightweight model, the decoder has only 0.4M parameters. For the MiT-B5 encoder, the decoder only takes up to 4% of the total number of parameters in the model. In terms of performance, we can observe that, overall, increasing the size of the encoder yields consistent improvements on all the datasets. Our lightweight model, SegFormer-B0, is compact and efficient while maintaining competitive performance, showing that our method is very convenient for real-time applications. On the other hand, SegFormer-B5, our largest model, achieves state-of-the-art results on all three datasets, showing the potential of our Transformer encoder.
2https://github.com/open-mmlab/mmsegmentation
Influence of C, the MLP decoder channel dimension. We now analyze the influence of the channel dimension C in the MLP decoder, see Section 3.2. In Table 1b we show performance, flops, and parameters as a function of this dimension. We can observe that setting C = 256 provides a very competitive performance and computational cost. The performance increases as C increases; however, it leads to larger and less efficient models. Interestingly, this performance plateaus for channel dimensions wider than 768. Given these results, we choose C = 256 for our real-time models SegFormer-B0, B1 and C = 768 for the rest.
Mix-FFN vs. Positional Encoder (PE). In this experiment, we analyze the effect of removing the positional encoding in the Transformer encoder in favor of using the proposed Mix-FFN. To this end, we train Transformer encoders with a positional encoding (PE) and the proposed Mix-FFN and perform inference on Cityscapes with two different image resolutions: 768×768 using a sliding window, and 1024×2048 using the whole image. Table 1c shows the results for this experiment. As shown, for a given resolution, our approach using Mix-FFN clearly outperforms using a positional encoding. Moreover, our approach is less sensitive to differences in the test resolution: the accuracy drops 3.3% when using a positional encoding with a
lower resolution. In contrast, when we use the proposed Mix-FFN, the performance drop is reduced to only 0.7%. From these results, we can conclude that the proposed Mix-FFN produces better and more robust encoders than those using positional encoding.
Effective receptive field evaluation. In Section 3.2, we argued that our MLP decoder benefits from Transformers having a larger effective receptive field than CNN models. To quantify this effect, in this experiment we compare the performance of our MLP decoder when used with CNN-based encoders such as ResNet or ResNeXt. As shown in Table 1d, coupling our MLP decoder with a CNN-based encoder yields significantly lower accuracy than coupling it with the proposed Transformer encoder. Intuitively, as a CNN has a smaller receptive field than the Transformer (see the analysis in Section 3.2), the MLP decoder alone is not enough for global reasoning. In contrast, coupling our Transformer encoder with the MLP decoder leads to the best performance. Moreover, for the Transformer encoder, it is necessary to combine low-level local features and high-level non-local features instead of only the high-level features.
Influence of different encoders. We select two representative Transformer encoders, ViT [6] and Swin [9], and compare them with our MiT encoder. As shown in Table 3, with the same decoder, e.g., the MLP decoder, MiT-B2 is 3.1% higher than Swin-T with similar encoder parameters. Moreover, MiT-B5 has far fewer encoder parameters than ViT-Large, yet is more than 3% mIoU higher. These experiments show that our MiT encoder is better suited than Swin and ViT for semantic segmentation.
Influence of different decoders. We also test the MiT encoder with different decoders. As shown in Table 3, the mIoUs are similar across decoders, while the proposed MLP decoder has the fewest parameters and is only 1/8 the size of the UperNet decoder used in Swin. The MLP decoder is thus an important design choice towards efficient segmentation.
4.3 Comparison to state-of-the-art methods
We now compare our results with existing approaches on ADE20K [70] and Cityscapes [69]. More experiments on COCO-Stuff [71] are in the appendix. ADE20K and Cityscapes: Table 2 summarizes our results, including parameters, FLOPs, latency, and accuracy for ADE20K and Cityscapes. In the top part of the table, we report real-time approaches, where we include state-of-the-art methods and our results using the MiT-B0 lightweight encoder. In the bottom part, we focus on performance and report the results of our approach and related works using stronger encoders.
On ADE20K, SegFormer-B0 yields 37.4% mIoU using only 3.8M parameters and 8.4G FLOPs, outperforming all other real-time counterparts in terms of parameters, FLOPs, and latency. For instance, compared to DeeplabV3+ (MobileNetV2), SegFormer-B0 runs at 7.4 FPS, which is faster, while keeping 3.4% better mIoU.
Moreover, SegFormer-B5 outperforms all other approaches, including the previous best SETR, and establishes a new state-of-the-art of 51.8%, which is 1.6% mIoU better than SETR while being significantly more efficient.
As also shown in Table 2, our results hold on Cityscapes. SegFormer-B0 yields 15.2 FPS and 76.2% mIoU (with the shorter side of the input image being 1024), which represents a 1.3% mIoU improvement and a 2× speedup compared to DeeplabV3+. Moreover, with the shorter side of the input image being 512, SegFormer-B0 runs at 47.6 FPS and yields 71.9% mIoU, which is 17.3 FPS faster and 4.2% better than ICNet. SegFormer-B5 achieves the best mIoU of 84.0%, outperforming all existing
methods by at least 1.8% mIoU, and it runs 5× faster and is 4× smaller than SETR [7]. On the Cityscapes test set, we follow the common setting [18] and merge the validation images into the train set, and we report results using ImageNet-1K pre-training and also using Mapillary Vistas [74]. As reported in Table 4, using only Cityscapes fine data and ImageNet-1K pre-training, our method achieves 82.2% mIoU, outperforming all other methods including SETR, which uses ImageNet-22K pre-training and the additional Cityscapes coarse data. Using Mapillary pre-training, our method sets a new state-of-the-art result of 83.1% mIoU.
4.4 Robustness to natural corruptions
Model robustness is important for many safety-critical tasks such as autonomous driving [75]. In this experiment, we evaluate the robustness of SegFormer to common corruptions and perturbations. To this end, we follow [75] and generate Cityscapes-C, which expands the Cityscapes validation set with 16 types of algorithmically generated corruptions from noise, blur, weather and digital categories. We compare our method to DeeplabV3+ and other methods as reported in [75]. We also compare with SETR with DeiT Transformer backbone. The results for this experiment are summarized in Table 5.
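For concreteness, a minimal sketch of one of these corruption types (additive Gaussian noise at increasing severities) is shown below; the actual Cityscapes-C corruptions follow the implementations of [75], and the per-severity noise scales used here are assumptions for illustration only.

```python
import numpy as np

def gaussian_noise(image, severity=1):
    """Apply additive Gaussian noise to an HxWx3 uint8 image (illustrative only)."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]   # assumed per-severity scales
    x = image.astype(np.float32) / 255.0
    noisy = x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).astype(np.uint8)

# Example: corrupt one stand-in validation image at every severity level.
image = np.random.randint(0, 256, size=(1024, 2048, 3), dtype=np.uint8)
corrupted = [gaussian_noise(image, s) for s in range(1, 6)]
```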
Our method significantly outperforms previous CNN-based methods, yielding a relative improvement of up to 588% on Gaussian Noise and up to 295% on snow weather. SegFormer also outperforms SETR in general except for one corruption (snow). The results indicate the strong robustness of SegFormer, which we envision to benefit safety-critical applications where robustness is important.
5 Conclusion
In this paper, we present SegFormer, a simple, clean yet powerful semantic segmentation method which contains a positional-encoding-free, hierarchical Transformer encoder and a lightweight All-MLP decoder. It avoids the complex designs common in previous methods, leading to both high efficiency and high performance. SegFormer not only achieves new state-of-the-art results on common datasets, but also shows strong zero-shot robustness. We hope our method can serve as a solid baseline for semantic segmentation and motivate further research. One potential limitation is that even our lightest model may still be too heavy for some edge devices. Thus, mixed-precision training, pruning, hardware-friendly attention designs and energy consumption are important parts of our future work.
Broader Impact
Efficiency, accuracy, and robustness are important aspects of AI models. Our work pushes the boundary of semantic segmentation models in these three aspects. We envision that the work will benefit a wide range of safety-critical applications, such as autonomous driving and robot navigation. The proposed method improves the “in-the-wild” robustness of these applications, ultimately leading to better safety. Despite such improvement, we fully understand this work is by no means perfect and there are still many challenges towards reliable real world application. Our models may be subject to biases and other possible undesired mistakes, depending on how they are trained in reality. Our
model may also be used for surveillance similar to other AI recognition methods, even though it is not mainly designed for surveillance applications.
Acknowledgments and Disclosure of Funding
We thank Ding Liang, Zhe Chen and Yaojun Liu for insightful discussion without which this paper would not be possible. Ping Luo is supported by the General Research Fund of Hong Kong No.27208720. | 1. What is the focus of the paper on semantic segmentation?
2. What are the strengths of the proposed approach, particularly regarding efficiency and flexibility?
3. Do you have any concerns about the novelty of the SegFormer?
4. How does the reviewer assess the comparisons made in the paper with other works, such as SETR, Swin, PVT, ESPNet, ESPNetv2, BiSeNet, and ICNet?
5. What are some suggestions for improving the experimental analysis, such as comparing Unet-like networks and transformer-based networks? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. The proposed SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The extensive experiments show that the proposed method achieves the state-of-the-art performance on semantic segmentation task.
Review
The paper presents a transformer-based network for the semantic segmentation task. Different from the recent transformer-based network SETR, the proposed SegFormer uses a positional-encoding-free, hierarchical Transformer encoder and a lightweight All-MLP decoder, which is more efficient and flexible than SETR.
My concerns and suggestions about this paper are as follows: 1) The authors claim that SegFormer is novel, but the overlapped patch embedding, efficient attention, and multi-stage structure are basically taken from PVT and PVTv2; no genuinely novel module or technique is proposed in this paper, so I think the novelty is limited. 2) The authors mainly use SETR as the baseline method, but the recent Swin and PVT (cited in the paper) also show state-of-the-art performance and are not compared in the experimental report. 3) Lightweight semantic segmentation models are compared in the experimental part, but only MobileNet-based networks are listed; ESPNet, ESPNetv2, BiSeNet, ICNet and other lightweight models are not reported or compared. I recommend the authors provide a more detailed comparison. 4) In addition, UNet-like networks are also popular in semantic segmentation. Do they work for the transformer-based network? It would be better to provide experiments on this.
NIPS | Title
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
Abstract
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, thus combining both local and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5× smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code is available at: github.com/NVlabs/SegFormer.
1 Introduction
Semantic segmentation is a fundamental task in computer vision and enables many downstream applications. It is related to image classification since it produces per-pixel category prediction instead of image-level prediction. This relationship is pointed out and systematically studied in a seminal work [1], where the authors used fully convolutional networks (FCNs) for semantic segmentation tasks. Since then, FCN has inspired many follow-up works and has become a predominant design choice for dense prediction.
Since there is a strong relation between classification and semantic segmentation, many state-of-the-art semantic segmentation frameworks are variants of popular architectures for image classification on ImageNet. Therefore, designing backbone architectures has remained an active area
in semantic segmentation. Indeed, starting from early methods using VGGs [1, 2], to the latest methods with significantly deeper and more powerful backbones [3], the evolution of backbones has dramatically pushed the performance boundary of semantic segmentation. Besides backbone architectures, another line of work formulates semantic segmentation as a structured prediction problem, and focuses on designing modules and operators, which can effectively capture contextual information. A representative example in this area is dilated convolution [4, 5], which increases the receptive field by “inflating” the kernel with holes.
Witnessing the great success of Transformers in natural language processing (NLP), researchers have recently shown a surge of interest in introducing Transformers to vision tasks. Dosovitskiy et al. [6] proposed the Vision Transformer (ViT) for image classification. Following the Transformer design in NLP, the authors split an image into multiple linearly embedded patches and feed them into a standard Transformer with positional embeddings (PE), leading to impressive performance on ImageNet. In semantic segmentation, Zheng et al. [7] proposed SETR to demonstrate the feasibility of using Transformers in this task.
SETR adopts ViT as a backbone and incorporates several CNN decoders to enlarge the feature resolution. Despite the good performance, ViT has two important limitations: 1) ViT outputs single-scale low-resolution features instead of multi-scale ones, and 2) it has a very high computational cost on large images. To address these limitations, Wang et al. [8] proposed the Pyramid Vision Transformer (PVT), a natural extension of ViT with pyramid structures for dense prediction. PVT shows considerable improvements over its ResNet counterpart on object detection and semantic segmentation. However, together with other emerging methods such as the Swin Transformer [9] and Twins [10], these methods mainly consider the design of the Transformer encoder, neglecting the contribution of the decoder for further improvements.
This paper introduces SegFormer, a cutting-edge Transformer framework for semantic segmentation that jointly considers efficiency, accuracy, and robustness. In contrast to previous methods, our framework redesigns both the encoder and the decoder. The key novelties of our approach are:
• A novel positional-encoding-free and hierarchical Transformer encoder.
• A lightweight All-MLP decoder design that yields a powerful representation without complex and computationally demanding modules.
• As shown in Figure 1, SegFormer sets a new state-of-the-art in terms of efficiency, accuracy and robustness on three publicly available semantic segmentation datasets.
First, the proposed encoder avoids interpolating positional codes when performing inference on images with resolutions different from the training one. As a result, our encoder can easily adapt to arbitrary test resolutions without impacting the performance. In addition, the hierarchical design enables the encoder to generate both high-resolution fine features and low-resolution coarse features; this is in contrast to ViT, which can only produce a single low-resolution feature map of fixed resolution. Second, we propose a lightweight MLP decoder whose key idea is to take advantage of the Transformer-induced features, where the attentions of the lower layers tend to stay local, whereas those of the highest layers are highly non-local. By aggregating the information from different layers, the MLP decoder combines both local and global attention. As a result, we obtain a simple and straightforward decoder that renders powerful representations.
We demonstrate the advantages of SegFormer in terms of model size, run-time, and accuracy on three publicly available datasets: ADE20K, Cityscapes, and COCO-Stuff. On Cityscapes, our lightweight model, SegFormer-B0, without accelerated implementations such as TensorRT, yields 71.9% mIoU at 48 FPS, which, compared to ICNet [11], represents a relative improvement of 60% and 4.2% in latency and performance, respectively. Our largest model, SegFormer-B5, yields 84.0% mIoU, which represents a relative 1.8% mIoU improvement while being 5× faster than SETR [7]. On ADE20K, this model sets a new state-of-the-art of 51.8% mIoU while being 4× smaller than SETR. Moreover, our approach is significantly more robust to common corruptions and perturbations than existing methods, and is therefore suitable for safety-critical applications. Code will be publicly available.
2 Related Work
Semantic Segmentation. Semantic segmentation can be seen as an extension of image classification from the image level to the pixel level. In the deep learning era [12–14], FCN [1] is the fundamental work of semantic segmentation: a fully convolutional network that performs pixel-to-pixel classification in an end-to-end manner. After that, researchers focused on improving FCN from different aspects, such as enlarging the receptive field [15–17, 5, 2, 4, 18]; refining the contextual information [19–27]; introducing boundary information [28–35]; designing various attention modules [36–44]; or using AutoML technologies [45–49]. These methods significantly improve semantic segmentation performance at the expense of introducing many empirical modules, making the resulting frameworks computationally demanding and complicated. More recent methods have proved the effectiveness of Transformer-based architectures for semantic segmentation [7, 44]. However, these methods are still computationally demanding.
Transformer backbones. ViT [6] is the first work to prove that a pure Transformer can achieve state-of-the-art performance in image classification. ViT treats each image as a sequence of tokens and then feeds them to multiple Transformer layers to make the classification. Subsequently, DeiT [50] further explores a data-efficient training strategy and a distillation approach for ViT. More recent methods such as T2T ViT [51], CPVT [52], TNT [53], CrossViT [54] and LocalViT [55] introduce tailored changes to ViT to further improve image classification performance.
Beyond classification, PVT [8] is the first work to introduce a pyramid structure in Transformer, demonstrating the potential of a pure Transformer backbone compared to CNN counterparts in dense prediction tasks. After that, methods such as Swin [9], CvT [56], CoaT [57], LeViT [58] and Twins [10] enhance the local continuity of features and remove fixed size position embedding to improve the performance of Transformers in dense prediction tasks.
Transformers for specific tasks. DETR [50] is the first work to use Transformers in an end-to-end object detection framework without non-maximum suppression (NMS). Other works have also used Transformers in tasks such as tracking [59, 60], super-resolution [61], re-id [62], colorization [63], retrieval [64] and multi-modal learning [65, 66]. For semantic segmentation, SETR [7] adopts ViT [6] as a backbone to extract features, achieving impressive performance. However, these Transformer-based methods have very low efficiency and are, thus, difficult to deploy in real-time applications.
3 Method
As depicted in Figure 2, SegFormer consists of two main modules: (1) a hierarchical Transformer encoder; and (2) a lightweight All-MLP decoder to predict the final mask. Given an image with size H ×W × 3, we first divide it into patches of size 4 × 4. Unlike ViT which uses 16 × 16, using fine-grained patches favors semantic segmentation. Second, we use these patches as input to the hierarchical Transformer encoder to get multi-level features with resolution {1/4, 1/8, 1/16, 1/32}
of the original image. We then pass these multi-level features to the All-MLP decoder to predict the segmentation mask at a H/4 × W/4 × N_cls resolution, where N_cls is the number of categories. In the remainder of this section, we first detail the proposed encoder and decoder designs and then summarize the main differences of our approach compared to SETR.
1. What are the strengths and weaknesses of the SegFormer model compared to prior works?
2. How does the reviewer assess the novelty and impact of the paper's contributions?
3. Are there any questions or concerns regarding the paper's experimental design and results?
4. How does the reviewer evaluate the clarity and quality of the paper's writing?
5. Are there any suggestions for improving the paper or its contributions? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces the SegFormer model, a Transformer-based model for semantic segmentation (i.e., dense pixel classification) in images. The model uses prior architectural innovations such as a pyramidal structure (progressive downsampling) and a factorized version of self-attention for computational efficiency. The main novelties over prior work on Transformer-based semantic segmentation models and image classification models are: 1) the use of convolutional filters to propagate position information from the image boundaries (as opposed to using explicit positional encodings) and 2) skip-connections between individual layers directly to the output layer, using spatial upsampling. Experimental evaluation demonstrates improved mIoU scores on standard semantic segmentation benchmarks compared to a baseline SETR (Segmentation Transformer) model, while achieving computational benefits thanks to the adoption of a more efficient Transformer architecture.
Review
The paper is overall well-written, well-structured and easy to follow, although clarity could be improved in several aspects (see below). The architectural choices are well-motivated and experimental results demonstrate clear advantages over prior approaches. The code / model implementation looks clean, and providing this implementation is a big plus and will allow for easy reproduction. Transformer architectures for applications in visual processing are very popular right now, and so the paper should be relevant for a NeurIPS audience, although the presented scope of the method (semantic pixel classification) is rather narrow.
Overall, despite the positive points mentioned above, I am arguing for rejecting this paper in its current form. I lay out the reasons for this decision in the following.
The presented novelties (1. use of convolutional filters to propagate position information from the image boundaries; 2. skip-connections between individual layers directly to the output layer) are intermixed with techniques from prior work, such as using a pyramidal Transformer architecture (as in [1]) and the use of an efficient, factorized form of self-attention (also from [1]). In the experimental section, it is unclear whether the performance benefits stem from utilizing these components from prior work [1] or whether the benefits stem from the novel contributions 1. and 2. -- the computational efficiency gains appear to be largely due to the use of the architecture from [1], which does not merit a separate conference paper (swapping out an image classification with a pixel classification target is of limited novelty).
The effective receptive field analysis and the analysis of robustness to natural corruptions are interesting, but they do not compare to the more recent SETR baseline, but instead to a weaker ConvNet-based baseline published in 2018. They thus unfortunately add little to convince the reader about the benefits of the proposed novelties over the current state of the art. The showcased qualitative improvements on Cityscapes in Figure 4 compared to SETR appear to be quite marginal (mask boundaries are marginally sharper, but otherwise unchanged). If this is a representative example of the obtained improvement, then it is unclear how this method will improve prediction quality in downstream applications.
The statement that the proposed "encoder can easily adapt to arbitrary test resolutions without impacting the performance" is not adequately verified. In fact, I would expect this to be false: if I increase the resolution 100x of the input image, I would very much expect the model to struggle (not just computationally, but also in terms of propagating position information throughout the model). The experiment in Table 1c suggests that increasing the resolution and aspect ratio by a small factor leads to only little loss in performance, but this does not justify the above statement.
Since the novel form of (implicit) positional encoding is a core contribution, it should be investigated more. For example, one could try to measure the model's ability to predict positional encodings by either: 1) giving the model PE as inputs (i.e. trivial auto-encoding) vs. 2) using the proposed position-free approach. This would allow for a good comparison of how 1) and 2) compare in terms of generalization to new image sizes.
Regarding the second novelty, i.e. skip-connections to the decoder: SETR introduces a related strategy termed "Multi-Level feature Aggregation (MLA)". A direct comparison against this technique (+ upsampling where necessary) would strengthen this contribution.
Clarity of the paper could be improved. There are many typos and grammatical mistakes, which should be easy to fix by doing another pass over the paper. The "Overlapped patch merging" strategy is described in a confusing way and parameters such as the padding size P are not explained. Looking at the code, this operation is simply implemented by a convolutional layer, and hence it would greatly simplify the description if the authors replaced this paragraph by simply saying that they use a convolutional layer after Transformer blocks, if I understand the method correctly.
Lastly, the impact of this method could be greatly increased by considering (and comparing on) other pixel-level prediction tasks such as depth prediction, which the architecture should support out-of-the box.
Other comments / questions, suggestions for improvement:
The references [2,3] as cited in the sentence "In the deep learning era [...]" (Section 2, first sentence) look quite out of place compared to the other (seminal) works cited in this context. Were these cited by accident? I recommend explaining their relevance in the text (or dropping them).
SETR uses SGD with momentum as optimizer whereas the present method uses AdamW. How much of the performance difference can be explained by the different choice of optimizers? Doing an ablation study over this choice would help clarify where the performance benefits come from (i.e. train with SGD and momentum in a setting similar to SETR).
What is the unit for Params in Table 2?
Why did the authors not compare on PASCAL Context (as in the SETR paper)?
No error bars are reported and the authors state that the variance between seeds was low ("quite stable mIoU results"). Would the authors be able to quantify this variance on at least one experiment, to get an idea whether results are significant?
[1] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions, 2021 [2] Xiang Li, Wenhai Wang, Xiaolin Hu, and Jian Yang. Selective kernel networks. In CVPR, 2019. [3] Wenhai Wang, Xiang Li, Tong Lu, and Jian Yang. Mixed link networks. In IJCAI, 2018.
----- UPDATE AFTER REBUTTAL ----
I would like to thank the authors for their very extensive and detailed response to my review. My concerns about positioning of the paper still remain, i.e. it is difficult to clearly pinpoint what is the main contribution of this work, as the method clearly benefits from a careful combination of several architectural contributions in conjunction with careful optimization. In this point I largely agree with the concerns raised by reviewer n5uX.
At the same time, the authors have done an impeccable job at addressing my other concerns and at carefully investigating various aspects and components of the proposed model. Given the detailed experimental evaluation and the clear usefulness of this model as a strong baseline for the semantic segmentation community (as pointed out by reviewer LMud), I believe that this paper lies marginally above the acceptance threshold despite concerns around positioning and significance of the individual technical contributions. |
NIPS | Title
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
Abstract
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5× smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code is available at: github.com/NVlabs/SegFormer.
1 Introduction
Semantic segmentation is a fundamental task in computer vision and enables many downstream applications. It is related to image classification since it produces per-pixel category prediction instead of image-level prediction. This relationship is pointed out and systematically studied in a seminal work [1], where the authors used fully convolutional networks (FCNs) for semantic segmentation tasks. Since then, FCN has inspired many follow-up works and has become a predominant design choice for dense prediction.
Since there is a strong relation between classification and semantic segmentation, many state-of-the-art semantic segmentation frameworks are variants of popular architectures for image classification on ImageNet. Therefore, designing backbone architectures has remained an active area
∗Corresponding authors: Zhiding Yu and Ping Luo
in semantic segmentation. Indeed, starting from early methods using VGGs [1, 2], to the latest methods with significantly deeper and more powerful backbones [3], the evolution of backbones has dramatically pushed the performance boundary of semantic segmentation. Besides backbone architectures, another line of work formulates semantic segmentation as a structured prediction problem, and focuses on designing modules and operators, which can effectively capture contextual information. A representative example in this area is dilated convolution [4, 5], which increases the receptive field by “inflating” the kernel with holes.
Witnessing the great success in natural language processing (NLP), there has been a recent surge of interest to introduce Transformers to vision tasks. Dosovitskiy et al. [6] proposed vision Transformer (ViT) for image classification. Following the Transformer design in NLP, the authors split an image into multiple linearly embedded patches and feed them into a standard Transformer with positional embeddings (PE), leading to an impressive performance on ImageNet. In semantic segmentation, Zheng et al. [7] proposed SETR to demonstrate the feasibility of using Transformers in this task.
SETR adopts ViT as a backbone and incorporates several CNN decoders to enlarge feature resolution. Despite the good performance, ViT has two important limitations: 1) ViT outputs single-scale low-resolution features instead of multi-scale ones, and 2) it has very high computational cost on large images. To address these limitations, Wang et al. [8] proposed a pyramid vision Transformer (PVT), a natural extension of ViT with pyramid structures for dense prediction. PVT shows considerable improvements over the ResNet counterpart on object detection and semantic segmentation. However, together with other emerging methods such as Swin Transformer [9] and Twins [10], these methods mainly consider the design of the Transformer encoder, neglecting the contribution of the decoder for further improvements.
This paper introduces SegFormer, a cutting-edge Transformer framework for semantic segmentation that jointly considers efficiency, accuracy, and robustness. In contrast to previous methods, our framework redesigns both the encoder and the decoder. The key novelties of our approach are:
• A novel positional-encoding-free and hierarchical Transformer encoder.
• A lightweight All-MLP decoder design that yields a powerful representation without complex and computationally demanding modules.
• As shown in Figure 1, SegFormer sets a new state-of-the-art in terms of efficiency, accuracy and robustness on three publicly available semantic segmentation datasets.
First, the proposed encoder avoids interpolating positional codes when performing inference on images with resolutions different from the training one. As a result, our encoder can easily adapt to arbitrary test resolutions without impacting the performance. In addition, the hierarchical part enables the encoder to generate both high-resolution fine features and low-resolution coarse features; this is in contrast to ViT, which can only produce single low-resolution feature maps with fixed resolutions. Second, we propose a lightweight MLP decoder where the key idea is to take advantage of the Transformer-induced features in which the attentions of lower layers tend to stay local, whereas the ones of the highest layers are highly non-local. By aggregating the information from different layers, the MLP decoder combines both local and global attention. As a result, we obtain a simple and straightforward decoder that renders powerful representations.
We demonstrate the advantages of SegFormer in terms of model size, run-time, and accuracy on three publicly available datasets: ADE20K, Cityscapes, and COCO-Stuff. On Cityscapes, our lightweight model, SegFormer-B0, without accelerated implementations such as TensorRT, yields 71.9% mIoU at 48 FPS, which, compared to ICNet [11], represents a relative improvement of 60% and 4.2% in latency and performance, respectively. Our largest model, SegFormer-B5, yields 84.0% mIoU, which represents a relative 1.8% mIoU improvement while being 5× faster than SETR [7]. On ADE20K, this model sets a new state-of-the-art of 51.8% mIoU while being 4× smaller than SETR. Moreover, our approach is significantly more robust to common corruptions and perturbations than existing methods, therefore being suitable for safety-critical applications. Code will be publicly available.
2 Related Work
Semantic Segmentation. Semantic segmentation can be seen as an extension of image classification from image level to pixel level. In the deep learning era [12–14], FCN [1] is the fundamental work of semantic segmentation, which is a fully convolutional network that performs pixel-to-pixel classification in an end-to-end manner. After that, researchers focused on improving FCN from different aspects such as: enlarging the receptive field [15–17, 5, 2, 4, 18]; refining the contextual information [19–27]; introducing boundary information [28–35]; designing various attention modules [36–44]; or using AutoML technologies [45–49]. These methods significantly improve semantic segmentation performance at the expense of introducing many empirical modules, making the resulting framework computationally demanding and complicated. More recent methods have proved the effectiveness of Transformer-based architectures for semantic segmentation [7, 44]. However, these methods are still computationally demanding.
Transformer backbones. ViT [6] is the first work to prove that a pure Transformer can achieve state-of-the-art performance in image classification. ViT treats each image as a sequence of tokens and then feeds them to multiple Transformer layers to make the classification. Subsequently, DeiT [50] further explores a data-efficient training strategy and a distillation approach for ViT. More recent methods such as T2T ViT [51], CPVT [52], TNT [53], CrossViT [54] and LocalViT [55] introduce tailored changes to ViT to further improve image classification performance.
Beyond classification, PVT [8] is the first work to introduce a pyramid structure in Transformer, demonstrating the potential of a pure Transformer backbone compared to CNN counterparts in dense prediction tasks. After that, methods such as Swin [9], CvT [56], CoaT [57], LeViT [58] and Twins [10] enhance the local continuity of features and remove fixed size position embedding to improve the performance of Transformers in dense prediction tasks.
Transformers for specific tasks. DETR [50] is the first to use Transformers for an end-to-end object detection framework without non-maximum suppression (NMS). Other works have also used Transformers in tasks such as tracking [59, 60], super-resolution [61], re-id [62], colorization [63], retrieval [64] and multi-modal learning [65, 66]. For semantic segmentation, SETR [7] adopts ViT [6] as a backbone to extract features, achieving impressive performance. However, these Transformer-based methods have very low efficiency and are, thus, difficult to deploy in real-time applications.
3 Method
As depicted in Figure 2, SegFormer consists of two main modules: (1) a hierarchical Transformer encoder; and (2) a lightweight All-MLP decoder to predict the final mask. Given an image with size H ×W × 3, we first divide it into patches of size 4 × 4. Unlike ViT which uses 16 × 16, using fine-grained patches favors semantic segmentation. Second, we use these patches as input to the hierarchical Transformer encoder to get multi-level features with resolution {1/4, 1/8, 1/16, 1/32}
of the original image. We then pass these multi-level features to the All-MLP decoder to predict the segmentation mask with a H/4 × W/4 × N_cls resolution, where N_cls is the number of categories. In the remainder of this section, we first detail the proposed encoder and decoder designs and then summarize the main differences of our approach compared to SETR.
3.1 Hierarchical Transformer Encoder
We design a series of Mix Transformer encoders (MiT), MiT-B0 to MiT-B5, with the same architecture but different sizes. On top of the hierarchical architecture and efficient self-attention module in PVT [8], we further propose several novel features including overlapped patch merging and positional-encoding-free design which will be shown to greatly benefit the segmentation tasks.
Hierarchical Feature Representation. Unlike ViT [6], our encoder generates multi-level multi-scale features given an input image. These features provide both high-resolution coarse features and low-resolution fine-grained features that boost the performance of semantic segmentation. Specifically, given an input image with size H × W × 3, we perform patch merging to obtain a hierarchical feature map F_i with a resolution of H/2^(i+1) × W/2^(i+1) × C_i, where i ∈ {1, 2, 3, 4}, and C_(i+1) is larger than C_i.
Efficient Self-Attention. A major bottleneck of the above hierarchical feature representation is the quadratic self-attention complexity with long sequence inputs from higher resolution features. Recall that in the original multi-head self-attention, each of the heads Q, K, V has the same dimensions N × C, where N = H × W is the length of the sequence. The self-attention is estimated as:

Attention(Q, K, V) = Softmax(QK^T / √d_head) V. (1)
We instead adopt the sequence reduction process introduced in [8]. This process uses a reduction ratio R to reduce the length of the sequence as follows:

K̂ = Reshape(N/R, C·R)(K)

K = Linear(C·R, C)(K̂), (2)

where K is the sequence to be reduced, Reshape(N/R, C·R)(K) refers to reshaping K into one of shape N/R × (C·R), and Linear(C_in, C_out)(·) refers to a linear layer taking a C_in-dimensional tensor as input and generating a C_out-dimensional tensor as output. Therefore, the new K has dimensions N/R × C. As a result, the complexity of the self-attention mechanism is reduced from O(N^2) to O(N^2/R). In our experiments, we set R to [64, 16, 4, 1] from stage-1 to stage-4.
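To make the reduction step concrete, the following is a minimal PyTorch sketch of such a sequence-reduction self-attention layer; the class and variable names are our own, it assumes N is divisible by R, and the official implementation may differ in details.

import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    # Self-attention with the sequence-reduction step of Eq. (2): the key/value
    # sequence of length N is reshaped to N/R rows of C*R channels and projected
    # back to C channels, so the attention cost drops from O(N^2) to O(N^2/R).
    def __init__(self, dim, num_heads, reduction_ratio):
        super().__init__()
        assert dim % num_heads == 0
        self.h = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.R = reduction_ratio
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.reduce = nn.Linear(dim * reduction_ratio, dim)   # Linear(C*R, C)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                     # x: B x N x C tokens
        B, N, C = x.shape
        q = self.q(x).view(B, N, self.h, C // self.h).transpose(1, 2)
        # Eq. (2): Reshape(N/R, C*R) followed by Linear(C*R, C)
        x_r = self.reduce(x.reshape(B, N // self.R, C * self.R))
        kv = self.kv(x_r).view(B, -1, 2, self.h, C // self.h).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]                                    # B x heads x N/R x C/heads
        attn = (q @ k.transpose(-2, -1)) * self.scale          # B x heads x N x N/R
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)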
Overlapped Patch Merging. Given an image patch, the patch merging process used in ViT unifies an N × N × 3 patch into a 1 × 1 × C vector. This can easily be extended to unify a 2 × 2 × C_i feature patch into a 1 × 1 × C_(i+1) vector to obtain hierarchical feature maps. Using this, we can shrink our hierarchical features from F_1 (H/4 × W/4 × C_1) to F_2 (H/8 × W/8 × C_2), and then iterate for any other feature map in the hierarchy. This process was initially designed to combine non-overlapping image or feature patches. Therefore, it fails to preserve the local continuity around those patches. Instead, we use an overlapping patch merging process. To this end, we define K, S, and P, where K is the patch size, S is the stride between two adjacent patches, and P is the padding size. In our experiments, we set K = 7, S = 4, P = 3, and K = 3, S = 2, P = 1 to perform overlapping patch merging that produces features with the same size as the non-overlapping process. Similar to the original patch embedding in ViT [6], this operation can be implemented by “nn.Conv2D” in PyTorch.
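Because the operation reduces to a single strided convolution, a minimal PyTorch sketch is given below; the class name and the choice of normalization are our own assumptions.

import torch.nn as nn

class OverlapPatchMerging(nn.Module):
    # Overlapping patch merging as one strided convolution:
    # K = patch (kernel) size, S = stride between adjacent patches, P = padding,
    # e.g. (K=7, S=4, P=3) for the first stage and (K=3, S=2, P=1) afterwards.
    def __init__(self, in_ch, out_ch, K, S, P):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=K, stride=S, padding=P)
        self.norm = nn.LayerNorm(out_ch)

    def forward(self, x):                                # x: B x C_in x H x W
        x = self.proj(x)                                 # B x C_out x H/S x W/S
        B, C, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)                 # B x (H*W) x C_out token sequence
        return self.norm(x), H, W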
Positional-Encoding-Free Design. The resolution of the PE in ViT is fixed. One thus needs to interpolate the PE when the test resolution differs from training. This leads to a drop in accuracy, which is undesirable since resolution mismatch is common in semantic segmentation. We instead introduce Mix-FFN, where we consider the effect of zero padding to leak location information [67] by directly using a 3 × 3 Conv in the feed-forward network (FFN). Mix-FFN is formulated as:
xout = MLP(GELU(Conv3×3(MLP(xin)))) + xin, (3)
where xin is the feature from the self-attention module. Mix-FFN mixes a 3 × 3 convolution and an MLP into each FFN. In our experiments, we will show that a 3 × 3 convolution is sufficient to provide positional information for Transformers. In particular, we use depth-wise convolutions for reducing the number of parameters and improving efficiency.
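A minimal PyTorch sketch of a Mix-FFN block following Eq. (3) is shown below; the hidden dimension and class name are our own assumptions.

import torch.nn as nn

class MixFFN(nn.Module):
    # Eq. (3): x_out = MLP(GELU(Conv3x3(MLP(x_in)))) + x_in, where the 3x3
    # depth-wise convolution supplies (implicit) positional information.
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1, groups=hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, H, W):                          # x: B x (H*W) x dim
        y = self.fc1(x)
        B, N, C = y.shape
        y = y.transpose(1, 2).reshape(B, C, H, W)        # back to a 2D map for the conv
        y = self.dwconv(y).flatten(2).transpose(1, 2)
        y = self.fc2(self.act(y))
        return x + y                                     # residual connection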
It should be mentioned that CPVT [52] also alleviates this issue by using a 3 × 3 Conv to generate conditional PE at different resolutions and then adds it to the feature map. Our work conceptually goes one step further as we argue that adding PE to the feature map is not necessary in semantic segmentation. Another recent work, CvT [56], introduced 3 × 3 Convs to model the spatial relationship among tokens. Despite the converging design, our work differs in both motivation and application as we aim to totally remove PEs to handle the training/testing resolution mismatch issue in semantic segmentation. Our intuition started from [67], whereas the same intuition was not discussed in CvT.
3.2 Lightweight All-MLP Decoder
SegFormer incorporates a lightweight decoder consisting only of MLP layers, thus avoiding the hand-crafted and computationally demanding components typically used in other methods. The key to enabling such a simple decoder is that our hierarchical Transformer encoder has a larger effective receptive field (ERF) than traditional CNN encoders.
The proposed All-MLP decoder consists of four main steps. First, multi-level features F_i from the MiT encoder go through an MLP layer to unify the channel dimension. Then, in a second step, features are up-sampled to 1/4th and concatenated together. Third, an MLP layer is adopted to fuse the concatenated features F. Finally, another MLP layer takes the fused feature to predict the segmentation mask M with a H/4 × W/4 × N_cls resolution, where N_cls is the number of categories. This lets us formulate the decoder as:
F̂_i = Linear(C_i, C)(F_i), ∀i

F̂_i = Upsample(H/4 × W/4)(F̂_i), ∀i

F = Linear(4C, C)(Concat(F̂_i)), ∀i

M = Linear(C, N_cls)(F), (4)
where M refers to the predicted mask, and Linear(Cin, Cout)(·) refers to a linear layer with Cin and Cout as input and output vector dimensions respectively.
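A minimal PyTorch sketch of this decoder, following Eq. (4), is shown below; per-location linear layers are applied by flattening the spatial dimensions (equivalent to 1×1 convolutions), and all names are our own assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AllMLPDecoder(nn.Module):
    # Eq. (4): unify the channels of each F_i with a linear layer, upsample all
    # features to 1/4 resolution, fuse the concatenation with another linear
    # layer, and predict the mask with a final linear layer.
    def __init__(self, in_channels, C, num_classes):
        super().__init__()
        self.unify = nn.ModuleList([nn.Linear(c, C) for c in in_channels])
        self.fuse = nn.Linear(4 * C, C)
        self.classify = nn.Linear(C, num_classes)

    def forward(self, feats):                            # feats: list of B x C_i x H_i x W_i
        target_size = feats[0].shape[2:]                 # 1/4 resolution of the input image
        unified = []
        for f, lin in zip(feats, self.unify):
            B, _, H, W = f.shape
            f = lin(f.flatten(2).transpose(1, 2))        # B x (H_i*W_i) x C
            f = f.transpose(1, 2).reshape(B, -1, H, W)
            f = F.interpolate(f, size=target_size, mode='bilinear', align_corners=False)
            unified.append(f)
        x = torch.cat(unified, dim=1)                    # B x 4C x H/4 x W/4
        x = self.fuse(x.flatten(2).transpose(1, 2))      # B x (H/4*W/4) x C
        mask = self.classify(x)                          # B x (H/4*W/4) x N_cls
        H, W = target_size
        return mask.transpose(1, 2).reshape(mask.shape[0], -1, H, W)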
Effective Receptive Field Analysis. For semantic segmentation, maintaining large receptive field to include context information has been a central issue [5, 17, 18]. Here, we use effective receptive field (ERF) [68] as a toolkit to visualize and interpret why our MLP decoder design is so effective on Transformers. In Figure 3, we visualize ERFs of the four encoder stages and the decoder heads for both DeepLabv3+ and SegFormer. We can make the following observations:
• The ERF of DeepLabv3+ is relatively small even at Stage-4, the deepest stage.
• SegFormer’s encoder naturally produces local attentions which resemble convolutions at lower stages, while able to output highly non-local attentions that effectively capture contexts at Stage-4.
• As shown with the zoom-in patches in Figure 3, the ERF of the MLP head (blue box) differs from Stage-4 (red box) with a significant stronger local attention besides the non-local attention.
The limited receptive field in CNNs requires one to resort to context modules such as ASPP [16] that enlarge the receptive field but inevitably become heavy. Our decoder design benefits from the non-local attention in Transformers and leads to a larger receptive field without being complex. The same decoder design, however, does not work well on CNN backbones since the overall receptive field is upper bounded by the limited one at Stage-4, and we will verify this later in Table 1d.
More importantly, our decoder design essentially takes advantage of a Transformer induced feature that produces both highly local and non-local attention at the same time. By unifying them, our MLP decoder renders complementary and powerful representations by adding few parameters. This is
another key reason that motivated our design. Taking the non-local attention from Stage-4 alone is not enough to produce good results, as will be verified in Table 1d.
3.3 Relationship to SETR.
SegFormer contains multiple more efficient and powerful designs compared with SETR [7]:
• We only use ImageNet-1K for pre-training. ViT in SETR is pre-trained on larger ImageNet-22K.
• SegFormer’s encoder has a hierarchical architecture, which is smaller than ViT and can capture both high-resolution coarse and low-resolution fine features. In contrast, SETR’s ViT encoder can only generate single low-resolution feature map.
• We remove Positional Embedding in encoder, while SETR uses fixed shape Positional Embedding which decreases the accuracy when the resolution at inference differs from the training ones.
• Our MLP decoder is more compact and less computationally demanding than the one in SETR. This leads to a negligible computational overhead. In contrast, SETR requires heavy decoders with multiple 3×3 convolutions.
4 Experiments
4.1 Experimental Settings
Datasets: We used three public datasets: Cityscapes [69], ADE20K [70], and COCO-Stuff [71]. ADE20K is a dataset covering 150 fine-grained semantic concepts consisting of 20210 images. Cityscapes is a driving dataset for semantic segmentation consisting of 5000 fine-annotated high resolution images with 19 categories. COCO-Stuff covers 172 labels and consists of 164k images: 118k for training, 5k for validation, 20k for test-dev and 20k for the test-challenge.
Implementation details: We used the mmsegmentation2 codebase and train on a server with 8 Tesla V100. We pre-train the encoder on the Imagenet-1K dataset and randomly initialize the decoder. During training, we applied data augmentation through random resize with ratio 0.5-2.0, random horizontal flipping, and random cropping to 512 × 512, 1024×1024, 512 × 512 for ADE20K, Cityscapes and COCO-Stuff. Following [9] we set crop size to 640 × 640 on ADE20K for our largest model B5. We trained the models using AdamW optimizer for 160K iterations on ADE20K, Cityscapes, and 80K iterations on COCO-Stuff. Exceptionally, for the ablation studies, we trained the models for 40K iterations. We used a batch size of 16 for ADE20K, COCO-Stuff and a batch size of 8 for Cityscapes. The learning rate was set to an initial value of 0.00006 and then used a “poly” LR schedule with factor 1.0 by default. For simplicity, we did not adopt widely-used tricks such as OHEM, auxiliary losses or class balance loss. During evaluation, we rescale the short side of the image to training cropping size and keep the aspect ratio for ADE20K and COCO-Stuff. For Cityscapes, we do inference using sliding window test by cropping 1024× 1024 windows. We report semantic segmentation performance using mean Intersection over Union (mIoU).
4.2 Ablation Studies
Influence of the size of model. We first analyze the effect of increasing the size of the encoder on the performance and model efficiency. Figure 1 shows the performance vs. model efficiency for ADE20K as a function of the encoder size and, Table 1a summarizes the results for the three datasets. The first thing to observe here is the size of the decoder compared to the encoder. As shown, for the lightweight model, the decoder has only 0.4M parameters. For MiT-B5 encoder, the decoder only takes up to 4% of the total number of parameters in the model. In terms of performance, we can observe that, overall, increasing the size of the encoder yields consistent improvements on all the datasets. Our lightweight model, SegFormer-B0, is compact and efficient while maintaining a competitive performance, showing that our method is very convenient for real-time applications. On the other hand, our SegFormer-B5, the largest model, achieves state-of-the-art results on all three datasets, showing the potential of our Transformer encoder.
2https://github.com/open-mmlab/mmsegmentation
Influence of C, the MLP decoder channel dimension. We now analyze the influence of the channel dimension C in the MLP decoder, see Section 3.2. In Table 1b we show performance, flops, and parameters as a function of this dimension. We can observe that setting C = 256 provides a very competitive performance and computational cost. The performance increases as C increases; however, it leads to larger and less efficient models. Interestingly, this performance plateaus for channel dimensions wider than 768. Given these results, we choose C = 256 for our real-time models SegFormer-B0, B1 and C = 768 for the rest.
Mix-FFN vs. Positional Encoder (PE). In this experiment, we analyze the effect of removing the positional encoding in the Transformer encoder in favor of using the proposed Mix-FFN. To this end, we train Transformer encoders with a positional encoding (PE) and the proposed Mix-FFN and perform inference on Cityscapes with two different image resolutions: 768×768 using a sliding window, and 1024×2048 using the whole image. Table 1c shows the results for this experiment. As shown, for a given resolution, our approach using Mix-FFN clearly outperforms using a positional encoding. Moreover, our approach is less sensitive to differences in the test resolution: the accuracy drops 3.3% when using a positional encoding with a
lower resolution. In contrast, when we use the proposed Mix-FFN, the performance drop is reduced to only 0.7%. From these results, we can conclude that using the proposed Mix-FFN produces better and more robust encoders than those using positional encoding.
Effective receptive field evaluation. In Section 3.2, we argued that our MLP decoder benefits from Transformers having a larger effective receptive field compared to other CNN models. To quantify this effect, in this experiment, we compare the performance of our MLP-decoder when used with CNN-based encoders such as ResNet or ResNeXt. As shown in Table 1d, coupling our MLP-decoder with a CNN-based encoder yields a significantly lower accuracy compared to coupling it with the proposed Transformer encoder. Intuitively, as a CNN has a smaller receptive field than the Transformer (see the analysis in Section 3.2), the MLP-decoder is not enough for global reasoning. In contrast, coupling our Transformer encoder with the MLP decoder leads to the best performance. Moreover, for Transformer encoder, it is necessary to combine low-level local features and high-level non-local features instead of only high-level feature.
Influence of different encoders. We select two representative Transformer encoders, ViT [6] and Swin [9], and compare with our MiT encoder. As shown in Table 3, with the same decoder, e.g., the MLP decoder, MiT-B2 is 3.1% higher than Swin-T with similar encoder parameters. Moreover, MiT-B5 has much fewer encoder parameters than ViT-large, but is 3+% mIoU higher than ViT-large. These experiments show our MiT encoder is better than Swin and ViT for semantic segmentation.
Influence of different decoders. We also test the MiT encoder with different decoders. As shown in Table 3, the mIoUs are similar with different decoders, while the proposed MLP decoder has the fewest parameters and is only 1/8 of the UperNet decoder in Swin. The MLP decoder is thus an important design towards efficient segmentation.
4.3 Comparison to state of the art methods
We now compare our results with existing approaches on the ADE20K [70] and Cityscapes [69]. More experiments about COCO-Stuff [71] are in appendix. ADE20K and Cityscapes: Table 2 summarizes our results including parameters, FLOPS, latency, and accuracy for ADE20K and Cityscapes. In the top part of the table, we report real-time approaches where we include state-of-the-art methods and our results using the MiT-B0 lightweight encoder. In the bottom part, we focus on performance and report the results of our approach and related works using stronger encoders.
On ADE20K, SegFormer-B0 yields 37.4% mIoU using only 3.8M parameters and 8.4G FLOPs, outperforming all other real-time counterparts in terms of parameters, flops, and latency. For instance, compared to DeeplabV3+ (MobileNetV2), SegFormer-B0 is 7.4 FPS faster and 3.4% better in mIoU.
Moreover, SegFormer-B5 outperforms all other approaches, including the previous best SETR, and establishes a new state-of-the-art of 51.8%, which is 1.6% mIoU better than SETR while being significantly more efficient.
As also shown in Table 2, our results hold on Cityscapes as well. SegFormer-B0 yields 15.2 FPS and 76.2% mIoU (the shorter side of input image being 1024), which represents a 1.3% mIoU improvement and a 2× speedup compared to DeeplabV3+. Moreover, with the shorter side of input image being 512, SegFormer-B0 runs at 47.6 FPS and yields 71.9% mIoU, which is 17.3 FPS faster and 4.2% better than ICNet. SegFormer-B5 achieves the best mIoU of 84.0%, outperforming all existing
methods by at least 1.8% mIoU, and it runs 5× faster and is 4× smaller than SETR [7]. On the Cityscapes test set, we follow the common setting [18] and merge the validation images into the train set, reporting results using ImageNet-1K pre-training and also using Mapillary Vistas [74]. As reported in Table 4, using only Cityscapes fine data and ImageNet-1K pre-training, our method achieves 82.2% mIoU, outperforming all other methods including SETR, which uses ImageNet-22K pre-training and the additional Cityscapes coarse data. Using Mapillary pre-training, our method sets a new state-of-the-art result of 83.1% mIoU.
4.4 Robustness to natural corruptions
Model robustness is important for many safety-critical tasks such as autonomous driving [75]. In this experiment, we evaluate the robustness of SegFormer to common corruptions and perturbations. To this end, we follow [75] and generate Cityscapes-C, which expands the Cityscapes validation set with 16 types of algorithmically generated corruptions from noise, blur, weather and digital categories. We compare our method to DeeplabV3+ and other methods as reported in [75]. We also compare with SETR with DeiT Transformer backbone. The results for this experiment are summarized in Table 5.
Our method significantly outperforms previous CNN-based methods, yielding a relative improvement of up to 588% on Gaussian Noise and up to 295% on snow weather. SegFormer also outperforms SETR in general except for one corruption (snow). The results indicate the strong robustness of SegFormer, which we envision to benefit safety-critical applications where robustness is important.
5 Conclusion
In this paper, we present SegFormer, a simple, clean yet powerful semantic segmentation method which contains a positional-encoding-free, hierarchical Transformer encoder and a lightweight AllMLP decoder. It avoids common complex designs in previous methods, leading to both high efficiency and performance. SegFormer not only achieves new state of the art results on common datasets, but also shows strong zero-shot robustness. We hope our method can serve as a solid baseline for semantic segmentation and motivate further research. One potential limitation is that even our lightest model may still be too heavy for some edge devices. Thus mixed-precision training, pruning, hardware-friendly attention designs and energy consumption are important parts of our future work.
Broader Impact
Efficiency, accuracy, and robustness are important aspects of AI models. Our work pushes the boundary of semantic segmentation models in these three aspects. We envision that the work will benefit a wide range of safety-critical applications, such as autonomous driving and robot navigation. The proposed method improves the “in-the-wild” robustness of these applications, ultimately leading to better safety. Despite such improvement, we fully understand this work is by no means perfect and there are still many challenges towards reliable real world application. Our models may be subject to biases and other possible undesired mistakes, depending on how they are trained in reality. Our
model may also be used for surveillance similar to other AI recognition methods, even though it is not mainly designed for surveillance applications.
Acknowledgments and Disclosure of Funding
We thank Ding Liang, Zhe Chen and Yaojun Liu for insightful discussion without which this paper would not be possible. Ping Luo is supported by the General Research Fund of Hong Kong No.27208720. | 1. What are the strengths and weaknesses of the proposed SegFormer model compared to other transformer models?
2. How does the author justify the claim that their Transformer encoder and MLP decoder are equally important?
3. Why did the author choose not to compare their model with other transformer models on ImageNet classification?
4. Is the proposed model suitable only for semantic segmentation or can it perform well on other tasks?
5. What are the limitations of the experiments conducted in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces SegFormer for semantic segmentation. SegFormer consists of: (1) A Transformer encoder that extracts multi-scale features. The authors claim their Transformer encoder to be novel in the sense that it is "positional-encoding-free" and "hierarchical". (2) An "MLP"-based decoder that aggregates information from different layers. Although the authors claim it to be "MLP", the decoder is essentially a stack of several 1x1 convolutions with bilinear interpolation to upsample low-resolution features.
The authors further explore scaling the Transformer encoder which ends up with a series of models with different number of parameters and FLOPs. The authors show their models are competitive with SOTA on multiple semantic segmentation datasets and claim that their model is more robust on corrupted images than DeepLabV3+ (on Cityscapes-C).
Review
Limitations of the paper
First of all, I don't agree with the position of this paper. The authors claim that both the Transformer encoder and the new "MLP" decoder are equally important to make SegFormer work. However, I don't see any reason why other decoders cannot be used with the SegFormer Transformer encoder, and why other Transformer encoder cannot be used with Segformer decoder. For me, this paper is more like a study of Transformer design that is not making a proper comparison with other Transformers: the authors did not show their number on ImageNet classification, which is fine if the model only works well for semantic segmentation. However, the comparison with other Transformers on semantic segmentation is completely unfair which I will explain later.
Although this paper shows good performance on multiple semantic segmentation datasets, it has two major problems: (A) the authors overclaimed their contribution (or did not clearly summarize their contribution) and (B) the experiments in this paper cannot support claims made by the authors. Next, I will explain my reasons.
A. The authors overclaimed their contribution (or did not clearly summarize their contribution)
The authors claim they propose a "novel positional-encoding-free and hierarchical Transformer encoder" (L61).
In L165, it looks like the authors follow CPVT [54] by using 3x3 convolution to replace positional encoding. Furthermore, in L166-167, the authors argue "positional encoding is actually not necessary for semantic segmentation" without any explanation. I'm not sure how the authors come up with this conclusion, since it is not obvious. In Table 1 (c), the authors only compare "PE" with "Mix-FFN", but this is not enough to say positional encoding is not necessary. Given these facts, I don't think "positional-encoding-free" or "Mix-FFN" is something contributed by the authors, as there are many works that already uncover this fact, such as CPVT [54] and CvT [58].
As for the hierarchical Transformer, I find it is very hard to understand what is the key difference with other hierarchical Transformers like PvT [8] from current description (the current text only describes "Hierarchical Feature Representation", "Overlapped Patch Merging", "Efficient Self-Attention" which seems to be already used by multiple hierarchical Transformer papers).
The authors claim their "All-MLP" decoder is lightweight and less complex than other decoder. First of all, "complex" is very subjective. Why one decoder is more complex than another decoder without a proper metric? Second, the decoder used in the paper is not "All-MLP": the decoder is essentially several 1x1 convolutions. And here is the question again: why 1x1 convolutions are less "complex" than 3x3 convolutions?
B. The experiments in this paper cannot support claims made by the authors
The experiments do not show both proposed components (new Transformer encoder and "MLP" decoder) are necessary.
In Table 1, the authors only compare their Transformers with CNN backbones, which is not a very fair comparison. In order to really show the benefit of this new Transformer, the authors should compare their Transformer with other Transformer designs (e.g., PvT, CvT, Swin Transformer, etc.) to really show this is a better Transformer.
As for the decoder design, the authors only compare their decoder with different channel dimensions. However, the claim made by the authors is the "MLP" decoder is better than other "more complex" decoder. In order to backup this claim, the authors should compare the "MLP" decoder with the "more complex" decoder they refer to.
Design of the experiments is not consistent with the position of the paper. The authors position their paper as a Transformer design for semantic segmentation. Thus, it is okay not to compare with other Transformer models on ImageNet. However, the authors should at least have a proper comparison with existing Transformers to show why their design is better than other Transformers. For example, the authors can compare their Transformer with Swin Transformer using the same decoder and the same training recipe. However, from the description of the paper, I do not see why this special design is more suitable than other Transformers specifically for semantic segmentation.
Summary
In summary: (1) I believe the claims made by the authors in this paper is not strong enough, (2) experiments cannot support the claims made by the authors as well; and (3) the position of this paper is weird. I'm especially concerned about (2) and (3). I do not see why the design of this Transformer is specifically suitable for semantic segmentation, if the model does not perform better in ImageNet classification. Since the paper never offers an apple-to-apple comparison with other Transformer models for semantic segmentation task, I'm afraid the authors might have tuned training parameters very hard to make it work on semantic segmentation (while other Transformers mainly focus on reporting numbers for classification). If this is the case, I don't feel this paper is very valuable and I think it is a clear rejection. But I hope to see authors explanation if I make any mistake.
#### post-rebuttal update ####
I have read carefully all reviews and author response. The authors have addressed most concerns well and the major disagreement between reviewers (e.g., me and Reviewer GD12) and the authors lies in the position of this paper. I have to say that I'm still not fully convinced and I believe it is better to position the paper as a new ViT design instead of a specific model design for semantic segmentation. However, the authors argue they mainly show improvements on semantic segmentation which is the reason that they position it as a model for semantic segmentation specifically, which is a fair point and I respect their decision.
Besides the problem of "novelty" raised by all other reviewers, another major problem in the original submission is that there is no fair comparison which leads to my original decision of rejection. The authors have addressed this problem by running the suggested experiments and thus I will raise my score to borderline. However, I believe the paper requires a major revision to take into account all these discussions and I'm not sure if the authors will make the revision as they promised since I cannot see the revised version. So I will change my score to 5 and let the AC to decide whether this should be taken into account. |
NIPS | Title
CATs: Cost Aggregation Transformers for Visual Correspondence
Abstract
We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images with additional challenges posed by large intra-class appearance and geometric variations. Cost aggregation is a highly important process in matching tasks, as matching accuracy depends on the quality of its output. Compared to hand-crafted or CNN-based methods for cost aggregation, which either lack robustness to severe deformations or inherit the limitation of CNNs that fail to discriminate incorrect matches due to limited receptive fields, CATs explores global consensus among the initial correlation map with the help of several architectural designs that allow us to fully leverage the self-attention mechanism. Specifically, we include appearance affinity modeling to aid the cost aggregation process in order to disambiguate the noisy initial correlation maps, and propose multi-level aggregation to efficiently capture different semantics from hierarchical feature representations. We then combine these with a swapping self-attention technique and residual connections, not only to enforce consistent matching but also to ease the learning process, and we find that these result in an apparent performance boost. We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models are available at https://sunghwanhong.github.io/CATs/.
1 Introduction
Establishing dense correspondences across semantically similar images can facilitate many Computer Vision applications, including semantic segmentation [46, 54, 36], object detection [29], and image editing [53, 30, 28, 25]. Unlike classical dense correspondence problems that consider visually similar images taken under the geometrically constrained settings [16, 19, 50, 18], semantic correspondence poses additional challenges from large intra-class appearance and geometric variations caused by the unconstrained settings of given image pair.
Recent approaches [42, 43, 45, 34, 37, 39, 31, 58, 47, 57, 51, 35] addressed these challenges by carefully designing deep convolutional neural networks (CNNs)-based models analogously to the classical matching pipeline [48, 41], feature extraction, cost aggregation, and flow estimation. Several works [24, 9, 37, 39, 47, 51] focused on the feature extraction stage, as it has been proven that the more powerful feature representation the model learns, the more robust matching is obtained [24, 9, 51]. However, solely relying on the matching similarity between features without any prior often suffers
from the challenges due to ambiguities generated by repetitive patterns or background clutters [42, 24, 26]. On the other hand, some methods [42, 49, 43, 23, 26, 58] focused on flow estimation stage either by designing additional CNN as an ad-hoc regressor that predicts the parameters of a single global transformation [42, 43], finding confident matches from correlation maps [20, 26], or directly feeding the correlation maps into the decoder to infer dense correspondences [58]. However, these methods highly rely on the quality of the initial correlation maps.
The latest methods [45, 37, 44, 21, 31, 27, 35] have focused on the second stage, highlighting the importance of cost aggregation. Since the quality of correlation maps is of prime importance, they proposed to refine the matching scores by formulating the task as optimal transport problem [47, 31], re-weighting matching scores by Hough space voting for geometric consistency [37, 39], or utilizing high-dimensional 4D or 6D convolutions to find locally consistent matches [45, 44, 27, 35]. Although formulated variously, these methods either use hand-crafted techniques that are neither learnable nor robust to severe deformations, or inherit the limitation of CNNs, e.g., limited receptive fields, failing to discriminate incorrect matches that are locally consistent.
In this work, we focus on the cost aggregation stage, and propose a novel cost aggregation network to tackle aforementioned issues. Our network, called Cost Aggregation with Transformers (CATs), is based on Transformer [61, 10], which is renowned for its global receptive field. By considering all the matching scores computed between features of input images globally, our aggregation networks explore global consensus and thus refine the ambiguous or noisy matching scores effectively.
Specifically, based on the observation that desired correspondence should be aligned at discontinuities with appearance of images, we concatenate an appearance embedding with the correlation map, which helps to disambiguate the correlation map within the Transformer. To benefit from hierarchical feature representations, following [26, 39, 58], we use a stack of correlation maps constructed from multilevel features, and propose to effectively aggregate the scores across the multi-level correlation maps. Furthermore, we consider bidirectional nature of correlation map, and leverage the correlation map from both directions, obtaining reciprocal scores by swapping the pair of dimensions of correlation map in order to allow global consensus in both perspective. In addition to all these combined, we provide residual connections around aggregation networks in order to ease the learning process.
We demonstrate our method on several benchmarks [38, 11, 12]. Experimental results on various benchmarks prove the effectiveness of the proposed model over the latest methods for semantic correspondence. We also provide an extensive ablation study to validate and analyze components in CATs.
2 Related Work
Semantic Correspondence. Methods for semantic correspondence generally follow the classical matching pipeline [48, 41], including feature extraction, cost aggregation, and flow estimation. Most early efforts [7, 30, 11] leveraged the hand-crafted features which are inherently limited in capturing high-level semantics. Though using deep CNN-based features [5, 24, 42, 43, 23, 49, 26] has become increasingly popular thanks to their invariance to deformations, without a means to refine the matching scores independently computed between the features, the performance would be rather limited.
To alleviate this, several methods focused on flow estimation stage. Rocco et al. [42, 43] proposed an end-to-end network to predict global transformation parameters from the matching scores, and their success inspired many variants [49, 23, 25]. RTNs [23] obtain semantic correspondences through an iterative process of estimating spatial transformations. DGC-Net [34], Semantic-GLU-Net [58] and DMP [15] utilize a CNN-based decoder to directly find correspondence fields. PDC-Net [59] proposed a flexible probabilistic model that jointly learns the flow estimation and its uncertainty. Arguably, directly regressing correspondences from the initial matching scores highly relies on the quality of them.
Recent numerous methods [45, 37, 39, 31, 47, 51, 35] thus have focused on cost aggregation stage to refine the initial matching scores. Among hand-crafted methods, SCOT [31] formulates semantic correspondence as an optimal transport problem and attempts to solve two issues, namely many to one matching and background matching. HPF [37] first computes appearance matching confidence using hyperpixel features and then uses Regularized Hough Matching (RHM) algorithm for cost aggregation to enforce geometric consistency. DHPF [39], that replaces feature selection algorithm
of HPF [37] with trainable networks, also uses RHM. However, these hand-crafted techniques for refining the matching scores are neither learnable nor robust to severe deformations. As learningbased approaches, NC-Net [45] utilizes 4D convolution to achieve local neighborhood consensus by finding locally consistent matches, and its variants [44, 27] proposed more efficient methods. GOCor [57] proposed aggregation module that directly improves the correlation maps. GSF [21] formulated pruning module to suppress false positives of correspondences in order to refine the initial correlation maps. CHM [35] goes one step further, proposing a learnable geometric matching algorithm which utilizes 6D convolution. However, they are all limited in the sense that they inherit limitation of CNN-based architectures, which is local receptive fields.
Transformers in Vision. Transformer [61], the de facto standard for Natural Language Processing (NLP) tasks, has recently imposed significant impact on various tasks in Computer Vision fields such as image classification [10, 55], object detection [3, 62], tracking and matching [52, 51]. ViT [10], the first work to propose an end-to-end Transformer-based architecture for the image classification task, successfully extended the receptive field, owing to its self-attention nature that can capture global relationship between features. For visual correspondence, LoFTR [51] uses cross and self-attention module to refine the feature maps conditioned on both input images, and formulate the hand-crafted aggregation layer with dual-softmax [45, 60] and optimal transport [47] to infer correspondences. COTR [22] takes coordinates as an input and addresses dense correspondence task without the use of correlation map. Unlike these, for the first time, we propose a Transformer-based cost aggregation module.
3 Methodology
3.1 Motivation and Overview
Let us denote a pair of images, i.e., source and target, as Is and It, which represent semantically similar images, and features extracted from Is and It as Ds and Dt, respectively. Here, our goal is to establish a dense correspondence field F (i) between two images that is defined for each pixel i, which warps It towards Is.
Estimating the correspondence with sole reliance on matching similarities betweenDs andDt is often challenged by the ambiguous matches due to the repetitive patterns or background clutters [42, 24, 26]. To address this, numerous methods proposed cost aggregation techniques that focus on refining the initial matching similarities either by formulating the task as optimal transport problem [47, 31], using regularized Hough matching to re-weight the costs [37, 39], or 4D or 6D convolutions [45, 27, 44, 35]. However, these methods either use hand-crafted techniques that are weak to severe deformations, or fail to discriminate incorrect matches due to limited receptive fields.
To overcome these, we present Transformer-based cost aggregation networks that effectively integrate information present in all pairwise matching costs, dubbed CATs, as illustrated in Fig. 1. As done widely in other works [42, 45, 50, 34, 37], we follow the common practice for feature extraction and cost computation. In the following, we first explain feature extraction and cost computation, and then describe several critical design choices we made for effective aggregation of the matching costs.
3.2 Feature Extraction and Cost Computation
To extract dense feature maps from images, we follow [26, 37, 39] that use multi-level features for construction of correlation maps. We use CNNs that produce a sequence of L feature maps, and Dl represents a feature map at l-th level. As done in [37], we use different combination of multi-level features depending on the dataset trained on, e.g., PF-PASCAL [12] or SPair-71k [38]. Given a sequence of feature maps, we resize all the selected feature maps to Rh×w×c, with height h, width w, and c channels. The resized features then undergo l-2 normalization.
Given resized dense features Ds and Dt, we compute a correlation map C ∈ Rhw×hw using the inner product between features: C(i, j) = Dt(i) · Ds(j) with points i and j in the target and source features, respectively. In this way, all pairwise feature matches are computed and stored. However, raw matching scores contain numerous ambiguous matching points as exemplified in Fig. 2, which results in inaccurate correspondences. To remedy this, we propose cost aggregation networks in the following that aim to refine the ambiguous or noisy matching scores.
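A minimal PyTorch sketch of this cost computation is given below; the function name is our own, and it assumes l2-normalized h × w × c features as described above.

import torch
import torch.nn.functional as F

def compute_correlation(feat_s, feat_t):
    # feat_s, feat_t: B x c x h x w source / target feature maps.
    # Returns C of shape B x (h*w) x (h*w) with C[:, i, j] = D_t(i) . D_s(j).
    src = F.normalize(feat_s.flatten(2), p=2, dim=1)     # l2-normalize each feature vector
    tgt = F.normalize(feat_t.flatten(2), p=2, dim=1)
    return torch.einsum('bci,bcj->bij', tgt, src)        # inner product for all pairs (i, j)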
3.3 Transformer Aggregator
Renowned for its global receptive fields, one of the key elements of Transformer [61] is the selfattention mechanism, which enables finding the correlated input tokens by first feeding into scaled dot product attention function, normalizing with Layer Normalization (LN) [1], and passing the normalized values to a MLP. Several works [10, 3, 62, 51] have shown that given images or features as input, Transformers [61] integrate the global information in a flexible manner by learning to find the attention scores for all pairs of tokens.
In this paper, we leverage the Transformers to integrate the matching scores to discover global consensus by considering global context information. Specifically, we obtain a refined cost C′ by feeding the raw cost C to the Transformer T , consisting of self-attention, LN, and MLP modules:
C′ = T (C + Epos), (1) where Epos denotes positional embedding. The standard Transformer receives as input a 1D sequence of token embeddings. In our context, we reshape the correlation map C into a sequence of vectors C(k) ∈ R1×hw for k ∈ {1, ..., hw}. We visualize the refined correlation map with self-attention in Fig. 2, where the ambiguities are significantly resolved.
Appearance Affinity Modeling. When only matching costs are considered for aggregation, selfattention layer processes the correlation map itself disregarding the noise involved in the correlation map, which may lead to inaccurate correspondences. Rather than solely relying on raw correlation map, we additionally provide an appearance embedding from input features to disambiguate the correlation map aided by appearance affinity within the Transformer. Intuition behind is that visually similar points in an image, e.g., color or feature, have similar correspondences, as proven in stereo matching literature, e.g., Cost Volume Filtering (CVF) [16, 50].
To provide appearance affinity, we propose to concatenate embedded features projected from input features with the correlation map. We first feed the features D into linear projection networks, and then concatenate the output along corresponding dimension, so that the correlation map is augmented such that [C,P(D)] ∈ Rhw×(hw+p), where [ · ] denotes concatenation, P denotes linear projection networks, and p is channel dimension of embedded feature. Within the Transformer, self-attention layer aggregates the correlation map and passes the output to the linear projection networks to retain the size of original correlation C.
Multi-Level Aggregation. As shown in [37, 34, 39, 58, 31], leveraging multi-level features allows capturing hierarchical semantic feature representations. Thus we also use multi-level features from different levels of convolutional layers to construct a stack of correlation maps. Each correlation map Cl computed between Dls and Dlt is concatenated with corresponding embedded features and fed into the aggregation networks. The aggregation networks now consider multiple correlations, aiming to effectively aggregates the matches by the hierarchical semantic representations.
As shown in Fig. 3, a stack of L augmented correlation maps, [C^l, P(D^l)]_{l=1}^{L} ∈ Rhw×(hw+p)×L, undergoes the Transformer aggregator. For each l-th augmented correlation map, we aggregate with a self-attention layer across all the points in the augmented correlation map, and we refer to this as intra-correlation self-attention. In addition, subsequent to this, the correlation map undergoes inter-correlation self-attention across multi-level dimensions. Contrary to HPF [37], which concatenates all the multi-level features and computes a correlation map, thereby disregarding the level-wise similarities, within the inter-correlation layer of the proposed model the similar matching scores are explored across multi-level dimensions. In this way, we can embrace richer semantics in different levels of feature maps, as shown in Fig. 4.
3.4 Cost Aggregation with Transformers
By leveraging the Transformer aggregator, we present cost aggregation framework with following additional techniques to improve the performance.
Swapping Self-Attention. To obtain a refined correlation map invariant to order of the input images and impose consistent matching scores, we argue that reciprocal scores should be used as aids to infer confident correspondences. As correlation map contains bidirectional matching scores, from both target and source perspective, we can leverage matching similarities from both directions in order to obtain more reciprocal scores as done similarly in other works [45, 26].
As shown in Fig. 1, we first feed the augmented correlation map to the aforementioned Transformer aggregator. Then we transpose the output, swapping the pair of dimensions in order to concatenate with the embedded feature from the other image, and feed into the subsequent another aggregator. Note that we share the parameters of the Transformer aggregators to obtain reciprocal scores. Formally, we define the whole process as following:
S = T([C^l, P(D_t^l)]_{l=1}^{L} + E_pos),

C′ = T([(S^l)^T, P(D_s^l)]_{l=1}^{L} + E_pos), (2)

where C^T(i, j) = C(j, i) denotes swapping the pair of dimensions corresponding to the source and target images; S denotes the intermediate correlation map before swapping the axes. Note that NC-Net [45] proposed a similar procedure, but instead of processing serially, they separately process the correlation map and its transposed version and add the outputs, which is designed to produce
a correlation map invariant to the particular order of the input images. Unlike this, we process the correlation map serially, first aggregating one pair of dimensions and then further aggregating with respect to the other pair. In this way, the subsequent attention layer is given more consistent matching scores as an input, allowing further reduction of inconsistent matching scores. We include an ablation study to justify our choice in Section 4.4.
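A minimal sketch of this serial, weight-shared aggregation (following Eq. (2) above) is given below; aggregator stands for the shared Transformer aggregator, which is assumed to internally project the augmented map back to size hw × hw, and the positional embedding is assumed to broadcast over the concatenated input; all names are our own.

import torch

def swapping_aggregation(corr, emb_t, emb_s, aggregator, pos_embed):
    # corr:  B x L x hw x hw   stack of level-wise correlation maps
    # emb_*: B x L x hw x p    projected (embedded) target / source features
    # aggregator: shared Transformer aggregator mapping B x L x hw x (hw+p) -> B x L x hw x hw
    s = aggregator(torch.cat([corr, emb_t], dim=-1) + pos_embed)
    # swap the pair of dimensions (source <-> target) and aggregate again with shared weights
    refined = aggregator(torch.cat([s.transpose(-2, -1), emb_s], dim=-1) + pos_embed)
    return refined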
Residual Connection. At the initial phase, when the correlation map is fed into the Transformers, noisy score maps are inferred due to the randomly-initialized parameters, which could complicate the learning process. To stabilize the learning process and provide a better initialization for the matching, we employ a residual connection. Specifically, we enforce the cost aggregation networks to estimate the residual correlation by adding a residual connection around the aggregation networks.
3.5 Training
Data Augmentation. The Transformer is well known for lacking some inductive biases, and its data-hungry nature thus necessitates a large quantity of training data [61, 10]. Recent methods [55, 56, 32] that employ the Transformer to address Computer Vision tasks have empirically shown that data augmentation techniques have a positive impact on performance. However, in the correspondence task, the question of to what extent data augmentation can affect the performance has not yet been properly addressed. From the experiments, we empirically find that data augmentation has positive impacts on performance in semantic correspondence with Transformers, as reported in Section 4.4. We apply data augmentation [6, 2] to input images at random with predetermined probabilities. Specifically, 50% of the time, we randomly crop the input image, and independently for each augmentation function used in [6], we set the probability for applying the augmentation to 20%. More details can be found in the supplementary material.
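A sketch of how such probabilistic augmentation could be wired up is shown below; the specific photometric transforms are placeholders standing in for the augmentation functions of [6] (only the 50% crop and 20% per-augmentation probabilities come from the text), and the keypoint adjustment required by the geometric crop is omitted for brevity.

```python
from torchvision import transforms as T

# Placeholder photometric augmentations; each applied independently with p = 0.2.
photometric = [
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.2),
    T.RandomApply([T.GaussianBlur(kernel_size=5)], p=0.2),
    T.RandomApply([T.RandomGrayscale(p=1.0)], p=0.2),
]

augment = T.Compose([
    T.RandomApply([T.RandomResizedCrop(256, scale=(0.7, 1.0))], p=0.5),  # 50% random crop
    *photometric,
    T.Resize((256, 256)),
    T.ToTensor(),
])
```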
Training Objective. As in [37, 39, 35], we assume that the ground-truth keypoints are given for each pair of images. We first average the stack of refined correlation maps C′ ∈ Rhw×hw×L to obtain C′′ ∈ Rhw×hw and then transform it into a dense flow field Fpred using the soft-argmax operator [26]. Subsequently, we compare the predicted dense flow field with the ground-truth flow field FGT obtained by following the protocol of [37] using the input keypoints. For the training objective, we utilize the Average End-Point Error (AEPE) [34], computed by averaging the Euclidean distance between the ground-truth and estimated flow. We thus formulate the objective function as L = ‖FGT − Fpred‖2.
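A minimal sketch of the soft-argmax flow conversion and the AEPE objective is given below; the softmax temperature and grid conventions are illustrative rather than the values used in the actual model.

```python
import torch
import torch.nn.functional as F

def soft_argmax_flow(corr, h, w, temperature=0.02):
    """Sketch: convert a (B, hw, hw) correlation map into a dense flow field.

    For every target position, a softmax over source positions gives the expected
    source coordinate; the flow is that coordinate minus the target position.
    """
    B = corr.shape[0]
    prob = F.softmax(corr / temperature, dim=-1)                   # (B, hw, hw)
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(1, h * w, 2)      # pixel coordinates
    expected = prob @ grid.expand(B, -1, -1)                       # (B, hw, 2)
    return (expected - grid).reshape(B, h, w, 2)                   # displacement field

def aepe_loss(flow_pred, flow_gt):
    """Average End-Point Error: mean Euclidean distance between flow fields."""
    return torch.norm(flow_pred - flow_gt, dim=-1).mean()

flow = soft_argmax_flow(torch.randn(2, 256, 256), h=16, w=16)
loss = aepe_loss(flow, torch.zeros_like(flow))
```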
4 Experiments
4.1 Implementation Details
For the backbone feature extractor, we use ResNet-101 [14] pre-trained on ImageNet [8] and, following [37], extract the features from the best subset of layers. Other backbone features can also be used; we analyze the effect of various backbone features in the following ablation study. For the hyper-parameters of the Transformer encoder, we set the depth to 1 and the number of heads to 6. We resize the spatial size of the input image pairs to 256×256, and the sequence of selected features is resized to 16×16. We use a learnable positional embedding [10] instead of a fixed one [61]. We implemented our network using PyTorch [40] and use the AdamW [33] optimizer with an initial learning rate of 3e−5 for the CATs layers and 3e−6 for the backbone features, which we gradually decrease during training.
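As an illustration, the per-module learning rates can be set with AdamW parameter groups as in the sketch below; the `backbone` and `cats` attribute names and the decay schedule are assumptions of this sketch.

```python
import torch
import torch.nn as nn

# Dummy stand-in for any model exposing `backbone` and `cats` (aggregation) sub-modules.
model = nn.Module()
model.backbone = nn.Linear(8, 8)
model.cats = nn.Linear(8, 8)

optimizer = torch.optim.AdamW([
    {"params": model.cats.parameters(), "lr": 3e-5},      # CATs layers
    {"params": model.backbone.parameters(), "lr": 3e-6},  # backbone features
])
# Gradually decrease the learning rates during training (schedule illustrative).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```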
4.2 Experimental Settings
In this section, we conduct comprehensive experiments for semantic correspondence, by evaluating our approach through comparisons to state-of-the-art methods including CNNGeo [42], A2Net [49], WeakAlign [43], NC-Net [45], RTNs [23], SFNet [26], HPF [37], DCC-Net [17], ANC-Net [27], DHPF [39], SCOT [31], GSF [21], and CHMNet [35]. In Section 4.3, we first evaluate matching results on several benchmarks with quantitative measures, and then provide an analysis of each component in our framework in Section 4.4. For more implementation details, please refer to our implementation available at https://github.com/SunghwanHong/CATs.
Datasets. SPair-71k [38] provides a total of 70,958 image pairs with extreme and diverse viewpoint and scale variations, and rich annotations for each image pair, e.g., keypoints, scale difference, truncation and occlusion difference, and a clear data split. Previously, for semantic matching, most of the datasets were limited to a small quantity of pairs with similar viewpoints and scales [11, 12]. As our network relies on a Transformer, which requires a large amount of data for training, SPair-71k [38] makes the use of the Transformer in our model feasible. We also consider PF-PASCAL [12], containing 1,351 image pairs from 20 categories, and PF-WILLOW [11], containing 900 image pairs from 4 categories, each dataset providing corresponding ground-truth annotations.
Evaluation Metric. For evaluation on SPair-71k [38], PF-WILLOW [11], and PF-PASCAL [12], we employ the percentage of correct keypoints (PCK), computed as the ratio of estimated keypoints within a threshold from the ground-truths to the total number of keypoints. Given a predicted keypoint kpred and a ground-truth keypoint kGT, we count the number of predicted keypoints that satisfy the following condition: d(kpred, kGT) ≤ α ·max(H,W ), where d( · ) denotes Euclidean distance; α denotes a threshold, which we set to αimg on PF-PASCAL and to αbbox on SPair-71k and PF-WILLOW; H and W denote the height and width of the object bounding box or of the entire image, respectively.
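For reference, a small sketch of this PCK computation is given below; variable names are illustrative.

```python
import torch

def pck(pred_kps, gt_kps, sizes, alpha=0.1):
    """Percentage of correct keypoints, as defined above.

    pred_kps, gt_kps: (N, 2) predicted / ground-truth keypoints in pixels
    sizes: (N, 2) per-keypoint (H, W), either of the object bounding box
           (alpha_bbox) or of the entire image (alpha_img)
    """
    dist = torch.norm(pred_kps - gt_kps, dim=-1)
    threshold = alpha * sizes.max(dim=-1).values
    return (dist <= threshold).float().mean()
```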
4.3 Matching Results
For a fair comparison, we follow the evaluation protocol of [37] for SPair-71k, in which our network is trained on the training split and evaluated on the test split. Similarly, for PF-PASCAL and PF-WILLOW, following the common evaluation protocol of [13, 23, 17, 37, 39], we train our network on the training split of PF-PASCAL [12] and then evaluate on the test split of PF-PASCAL [12] and on PF-WILLOW [11]. All the results of other methods are reported under the identical setting.
Table 1 summarizes quantitative results on SPair-71k [38], PF-PASCAL [12] and PF-WILLOW [11]. We note whether each method leverages multi-level features and fine-tunes the backbone features in order to ensure a fair comparison. We additionally denote the types of cost aggregation. Generally, our CATs outperform other methods over all the benchmarks. This is also confirmed by the results on SPair-71k, as shown in Table 2, where the proposed method outperforms other methods by a large margin. Note that CATs† reports lower PCK than that of CHM, and this is because CHM fine-tunes its backbone networks while CATs† does not. Fig. 5 visualizes qualitative results for extremely challenging image pairs. We observe that, compared to current state-of-the-art methods [31, 39], our method is capable of suppressing noisy scores and finding accurate correspondences in cases with large scale and geometric variations.
It is notable that CATs generally report lower PCK on PF-WILLOW [11] compared to other state-of-the-art methods. This is because the Transformer is well known for lacking some inductive biases. When we evaluate on PF-WILLOW, we infer with the model trained on the training split of PF-PASCAL, which only contains 1,351 image pairs, and as only a relatively small quantity of image pairs is available within the PF-PASCAL training split, the Transformer shows low generalization power. This demonstrates that the Transformer-based architecture indeed requires a means to compensate for the lack of inductive bias, e.g., data augmentation.
4.4 Ablation Study
In this section, we show an ablation analysis to validate the critical components of our architecture, and provide an analysis of the use of different backbone features and of data augmentation. We train all the variants on the training split of SPair-71k [38] when evaluating on SPair-71k, and train on PF-PASCAL [12] when evaluating on PF-PASCAL. We measure the PCK, and each ablation experiment is conducted under the same experimental setting for a fair comparison.
Network Architecture. Table 3 shows the analysis of key components in our architecture. There are four key components we analyze in the ablation study: appearance modelling, multi-level aggregation, swapping self-attention, and the residual connection.
We first define the model without any of these as the baseline, which simply feeds the correlation map into the self-attention layer. We evaluate on the SPair-71k benchmark by progressively adding each key component. From I to V, we observe a consistent increase in performance as each component is added. II shows a large improvement in performance, which demonstrates that the appearance modelling enabled the model to refine the ambiguous or noisy matching scores. Although III yields a relatively small increase in PCK, it shows that the proposed model successfully aggregates the multi-level correlation maps. Furthermore, IV and V show an apparent increase, proving the significance of both components.
Feature Backbone. As shown in Table 4, we explore the impact of different feature backbones on the performance on SPair-71k [38] and PF-PASCAL [12]. We report the results of models with the backbone networks frozen. The top two rows are models with DeiT-B [55], the next two rows use DINO [4], and the rest use ResNet-101 [14] as the backbone. Specifically, for the subscript single of DeiT-B and DINO, we use the feature map extracted at the last layer, while for the subscript all, every feature map from the 12 layers is used for cost construction. For ResNet-101 with subscript single, we use a single-level feature cropped at conv4−23, while for multi, we use the best layer subset provided by [37]. Summarizing the results, we observe that leveraging multi-level features shows apparent improvements in performance, proving the effectiveness of the multi-level aggregation introduced by our method. It is worth noting that DINO, which excels more at dense tasks than DeiT-B, outperforms DeiT-B when applied to semantic matching. This indicates that fine-tuning the feature could enhance the performance. To the best of our knowledge, we are the first to employ Transformer-based features for semantic matching. It would be an interesting setup to train an end-to-end Transformer-based network, and we hope this work draws attention from the community and proves useful for future works.
Data Augmentation. In Table 5, we compare the PCK performance between our variants and DHPF [39]. We note whether each model is trained with augmentation. For a fair comparison, we evaluate both DHPF [39] and CATs trained on SPair-71k [38] using strong supervision, which assumes that the ground-truth keypoints are given. The results show that, compared to DHPF, a CNN-based method, data augmentation has a larger influence on CATs in terms of performance. This demonstrates that not only did we ease the data-hunger problem inherent in Transformers, but we also found that applying augmentations for matching has positive effects. Augmentation techniques are thus likely to bring improvements in performance, and we hope that future works benefit from this.
Serial swapping. It is apparent that Equation 2 is not designed for an order-invariant output. Different from NC-Net [45], we let the correlation map undergo the self-attention module in a serial manner. We conducted a simple experiment to compare the two approaches. From the experiments, we obtained PCKs of 40.8 and 42.4 for parallel and serial processing on SPair-71k with αbbox = 0.1, respectively. In light of this, although CATs may not support order invariance, adopting serial processing obtains higher PCK, as it has a better capability to reduce inconsistent matching scores by additionally processing the already processed cost map; we therefore finalize the architecture to include serial processing.
4.5 Analysis
Visualizing Self-Attention. We visualize the multi-level attention maps obtained from the Transformer aggregator. As shown in Fig. 6, the learned self-attention map at each level exhibits a different aspect. With these self-attention maps, our networks can leverage multi-level correlations to capture hierarchical semantic feature representations effectively.
Memory and run-time. In Table 6, we show the memory and run-time comparison of CATs to NC-Net [45], SCOT [31], DHPF [39] and CHM [35]. For a fair comparison, the results are obtained using a single NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i7-10700 CPU. We measure the inference time both for the process excluding feature extraction and for the whole process. Thanks to the Transformer's fast computation, our method runs considerably faster than the other methods. We also find that, compared to other cost aggregation methods including 4D and 6D convolutions, OT-RHM and RHM, ours shows comparable efficiency in terms of computational cost. Note that NC-Net utilizes a single feature map while the other methods utilize multi-level feature maps. We used the standard self-attention module for our implementation, but more advanced and efficient Transformer [32] architectures could reduce the overall memory consumption.
4.6 Limitations
One obvious limitation of CATs is that, when applied to non-corresponding images, the proposed method would still deliver correspondences, as it lacks the power to ignore pixels that have no correspondence at all. A straightforward solution would be to include a module that accounts for pixel-wise matching confidence. Another limitation of CATs is its inability to address the task of finding accurate correspondences given multiple objects or non-corresponding objects. Addressing such challenges would be a promising direction for future work.
5 Conclusion
In this paper, we have proposed, for the first time, Transformer-based cost aggregation networks for semantic correspondence, dubbed CATs, which aggregate the matching scores computed between input features. We have made several architectural design choices, including appearance affinity modelling, multi-level aggregation, swapping self-attention, and residual correlation. We have shown that our method surpasses the current state-of-the-art on several benchmarks. Moreover, we have conducted extensive ablation studies to validate our choices and explore its capacity. A natural next step, which we leave for future work, is to examine how CATs could extend its domain to tasks including 3-D reconstruction, semantic segmentation and stitching, and to explore self-supervised learning.
Acknowledgements
This research was supported by the MSIT, Korea, under the ICT Creative Consilience program (IITP-2021-2020-0-01819) and (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) supervised by the IITP and National Research Foundation of Korea (NRF-2021R1C1C1006897). | 1. What is the main contribution of the paper in the field of semantic correspondence?
2. What are the strengths of the proposed method, particularly regarding its technical soundness, intuition, and novelty?
3. How does the reviewer assess the experimental results and qualitative analysis of the proposed model's performance?
4. What are the concerns or suggestions regarding the paper's writing quality and potential improvements? | Summary Of The Paper
Review | Summary Of The Paper
This paper addresses the problem of finding dense correspondences between semantically similar images. Given a feature extractor, the primary focus of this work is on the design of a matching algorithm to find the optimal matches for each spatial location. The proposed method is a learned transformer based architecture that takes advantage of multi-scale features to resolve ambiguous local matches. Results demonstrate that the performance of the proposed model is comparable to existing methods on multiple datasets.
Review
The paper addresses an important problem of finding correspondences across semantically similar images. The majority of the research in this direction focuses on improving feature extraction methods. However, this work shows that building smarter post-processing algorithms for finding matches could bring significant improvements in the quality of correspondences.
The proposed method of using a transformer-based architecture for computing feature correlation scores is technically sound, intuitive and novel. The motivation for the presented design of the architecture has been explained thoroughly. The idea of including multi-scale appearance features and processing the intra-level and inter-level features sequentially is also novel and could be adopted in other domains.
The experimental results show that the proposed model is at least as effective as existing state-of-the-art models for finding semantic correspondences. The qualitative results presented provide insight into the functioning of the self-attention mechanisms and demonstrate that the model can generate high-quality correspondences. Furthermore, the ablative studies show that all the design choices in the transformer-based architecture lead to improved correspondences.
Concerns/Suggestions
The quality of writing of the paper could be significantly improved. Especially the abstract and introduction are very difficult to follow. For example, Line 48-56 uses too much jargon without any context provided. It only makes sense after reading the approach section. So I think the paper would benefit from thorough proof-reading.
Since the proposed cost-aggregation method relies on a transformer-based architecture, it would be good to know the computational cost compared to existing methods like CVF. |
NIPS | Title
CATs: Cost Aggregation Transformers for Visual Correspondence
Abstract
We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images with additional challenges posed by large intra-class appearance and geometric variations. Cost aggregation is a highly important process in matching tasks, as the matching accuracy depends on the quality of its output. Compared to hand-crafted or CNN-based methods addressing cost aggregation, which either lack robustness to severe deformations or inherit the limitation of CNNs that fail to discriminate incorrect matches due to limited receptive fields, CATs explore global consensus among the initial correlation map with the help of some architectural designs that allow us to fully leverage the self-attention mechanism. Specifically, we include appearance affinity modeling to aid the cost aggregation process in order to disambiguate the noisy initial correlation maps, and propose multi-level aggregation to efficiently capture different semantics from hierarchical feature representations. We then combine these with a swapping self-attention technique and residual connections not only to enforce consistent matching but also to ease the learning process, and we find that these result in an apparent performance boost. We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models are available at https://sunghwanhong.github.io/CATs/.
1 Introduction
Establishing dense correspondences across semantically similar images can facilitate many Computer Vision applications, including semantic segmentation [46, 54, 36], object detection [29], and image editing [53, 30, 28, 25]. Unlike classical dense correspondence problems that consider visually similar images taken under geometrically constrained settings [16, 19, 50, 18], semantic correspondence poses additional challenges from large intra-class appearance and geometric variations caused by the unconstrained settings of a given image pair.
Recent approaches [42, 43, 45, 34, 37, 39, 31, 58, 47, 57, 51, 35] addressed these challenges by carefully designing deep convolutional neural network (CNN)-based models analogously to the classical matching pipeline [48, 41]: feature extraction, cost aggregation, and flow estimation. Several works [24, 9, 37, 39, 47, 51] focused on the feature extraction stage, as it has been proven that the more powerful the feature representation the model learns, the more robust the matching obtained [24, 9, 51]. However, solely relying on the matching similarity between features without any prior often suffers
from the challenges due to ambiguities generated by repetitive patterns or background clutter [42, 24, 26]. On the other hand, some methods [42, 49, 43, 23, 26, 58] focused on the flow estimation stage, either by designing an additional CNN as an ad-hoc regressor that predicts the parameters of a single global transformation [42, 43], finding confident matches from correlation maps [20, 26], or directly feeding the correlation maps into the decoder to infer dense correspondences [58]. However, these methods rely highly on the quality of the initial correlation maps.
The latest methods [45, 37, 44, 21, 31, 27, 35] have focused on the second stage, highlighting the importance of cost aggregation. Since the quality of correlation maps is of prime importance, they proposed to refine the matching scores by formulating the task as an optimal transport problem [47, 31], re-weighting matching scores by Hough space voting for geometric consistency [37, 39], or utilizing high-dimensional 4D or 6D convolutions to find locally consistent matches [45, 44, 27, 35]. Although formulated variously, these methods either use hand-crafted techniques that are neither learnable nor robust to severe deformations, or inherit the limitation of CNNs, e.g., limited receptive fields, failing to discriminate incorrect matches that are locally consistent.
In this work, we focus on the cost aggregation stage and propose a novel cost aggregation network to tackle the aforementioned issues. Our network, called Cost Aggregation with Transformers (CATs), is based on the Transformer [61, 10], which is renowned for its global receptive field. By considering all the matching scores computed between features of the input images globally, our aggregation networks explore global consensus and thus refine the ambiguous or noisy matching scores effectively.
Specifically, based on the observation that the desired correspondences should be aligned with appearance discontinuities in the images, we concatenate an appearance embedding with the correlation map, which helps to disambiguate the correlation map within the Transformer. To benefit from hierarchical feature representations, following [26, 39, 58], we use a stack of correlation maps constructed from multi-level features, and propose to effectively aggregate the scores across the multi-level correlation maps. Furthermore, we consider the bidirectional nature of the correlation map and leverage the correlation map from both directions, obtaining reciprocal scores by swapping the pair of dimensions of the correlation map in order to allow global consensus from both perspectives. On top of all these combined, we provide residual connections around the aggregation networks in order to ease the learning process.
We demonstrate our method on several benchmarks [38, 11, 12]. Experimental results on various benchmarks prove the effectiveness of the proposed model over the latest methods for semantic correspondence. We also provide an extensive ablation study to validate and analyze components in CATs.
2 Related Work
Semantic Correspondence. Methods for semantic correspondence generally follow the classical matching pipeline [48, 41], including feature extraction, cost aggregation, and flow estimation. Most early efforts [7, 30, 11] leveraged hand-crafted features, which are inherently limited in capturing high-level semantics. Though using deep CNN-based features [5, 24, 42, 43, 23, 49, 26] has become increasingly popular thanks to their invariance to deformations, without a means to refine the matching scores independently computed between the features, the performance would be rather limited.
To alleviate this, several methods focused on the flow estimation stage. Rocco et al. [42, 43] proposed an end-to-end network to predict global transformation parameters from the matching scores, and their success inspired many variants [49, 23, 25]. RTNs [23] obtain semantic correspondences through an iterative process of estimating spatial transformations. DGC-Net [34], Semantic-GLU-Net [58] and DMP [15] utilize a CNN-based decoder to directly find correspondence fields. PDC-Net [59] proposed a flexible probabilistic model that jointly learns the flow estimation and its uncertainty. Arguably, directly regressing correspondences from the initial matching scores relies highly on their quality.
Numerous recent methods [45, 37, 39, 31, 47, 51, 35] have thus focused on the cost aggregation stage to refine the initial matching scores. Among hand-crafted methods, SCOT [31] formulates semantic correspondence as an optimal transport problem and attempts to solve two issues, namely many-to-one matching and background matching. HPF [37] first computes appearance matching confidence using hyperpixel features and then uses the Regularized Hough Matching (RHM) algorithm for cost aggregation to enforce geometric consistency. DHPF [39], which replaces the feature selection algorithm of HPF [37] with trainable networks, also uses RHM. However, these hand-crafted techniques for refining the matching scores are neither learnable nor robust to severe deformations. As learning-based approaches, NC-Net [45] utilizes 4D convolution to achieve local neighborhood consensus by finding locally consistent matches, and its variants [44, 27] proposed more efficient methods. GOCor [57] proposed an aggregation module that directly improves the correlation maps. GSF [21] formulated a pruning module to suppress false positives of correspondences in order to refine the initial correlation maps. CHM [35] goes one step further, proposing a learnable geometric matching algorithm which utilizes 6D convolution. However, they are all limited in the sense that they inherit the limitation of CNN-based architectures, namely local receptive fields.
Transformers in Vision. The Transformer [61], the de facto standard for Natural Language Processing (NLP) tasks, has recently made a significant impact on various tasks in Computer Vision fields such as image classification [10, 55], object detection [3, 62], and tracking and matching [52, 51]. ViT [10], the first work to propose an end-to-end Transformer-based architecture for the image classification task, successfully extended the receptive field, owing to its self-attention nature that can capture global relationships between features. For visual correspondence, LoFTR [51] uses cross- and self-attention modules to refine the feature maps conditioned on both input images, and formulates the hand-crafted aggregation layer with dual-softmax [45, 60] and optimal transport [47] to infer correspondences. COTR [22] takes coordinates as an input and addresses the dense correspondence task without the use of a correlation map. Unlike these, for the first time, we propose a Transformer-based cost aggregation module.
3 Methodology
3.1 Motivation and Overview
Let us denote a pair of images, i.e., source and target, as Is and It, which represent semantically similar images, and features extracted from Is and It as Ds and Dt, respectively. Here, our goal is to establish a dense correspondence field F (i) between two images that is defined for each pixel i, which warps It towards Is.
Estimating the correspondence with sole reliance on matching similarities between Ds and Dt is often challenged by ambiguous matches due to repetitive patterns or background clutter [42, 24, 26]. To address this, numerous methods proposed cost aggregation techniques that focus on refining the initial matching similarities, either by formulating the task as an optimal transport problem [47, 31], using regularized Hough matching to re-weight the costs [37, 39], or using 4D or 6D convolutions [45, 27, 44, 35]. However, these methods either use hand-crafted techniques that are not robust to severe deformations, or fail to discriminate incorrect matches due to limited receptive fields.
To overcome these, we present Transformer-based cost aggregation networks that effectively integrate information present in all pairwise matching costs, dubbed CATs, as illustrated in Fig. 1. As done widely in other works [42, 45, 50, 34, 37], we follow the common practice for feature extraction and cost computation. In the following, we first explain feature extraction and cost computation, and then describe several critical design choices we made for effective aggregation of the matching costs.
3.2 Feature Extraction and Cost Computation
To extract dense feature maps from images, we follow [26, 37, 39], which use multi-level features for the construction of correlation maps. We use CNNs that produce a sequence of L feature maps, where Dl represents the feature map at the l-th level. As done in [37], we use a different combination of multi-level features depending on the dataset trained on, e.g., PF-PASCAL [12] or SPair-71k [38]. Given a sequence of feature maps, we resize all the selected feature maps to Rh×w×c, with height h, width w, and c channels. The resized features then undergo l2 normalization.
Given resized dense features Ds and Dt, we compute a correlation map C ∈ Rhw×hw using the inner product between features: C(i, j) = Dt(i) ·Ds(j), with points i and j in the target and source features, respectively. In this way, all pairwise feature matches are computed and stored. However, the raw matching scores contain numerous ambiguous matching points, as exemplified in Fig. 2, which results in inaccurate correspondences. To remedy this, we propose cost aggregation networks in the following that aim to refine the ambiguous or noisy matching scores.
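A minimal sketch of this cost computation over l2-normalized dense features is shown below; shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def correlation_map(feat_s, feat_t):
    """Sketch of C(i, j) = D_t(i) . D_s(j) for resized, l2-normalized features.

    feat_s, feat_t: (B, h, w, c) source / target feature maps
    returns: (B, hw, hw) correlation map (rows: target points, columns: source points)
    """
    B, h, w, c = feat_s.shape
    d_s = F.normalize(feat_s.reshape(B, h * w, c), dim=-1)
    d_t = F.normalize(feat_t.reshape(B, h * w, c), dim=-1)
    return torch.einsum("bic,bjc->bij", d_t, d_s)

corr = correlation_map(torch.randn(2, 16, 16, 1024), torch.randn(2, 16, 16, 1024))
```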
3.3 Transformer Aggregator
Renowned for its global receptive fields, one of the key elements of the Transformer [61] is the self-attention mechanism, which enables finding the correlated input tokens by first feeding them into a scaled dot-product attention function, normalizing with Layer Normalization (LN) [1], and passing the normalized values to an MLP. Several works [10, 3, 62, 51] have shown that, given images or features as input, Transformers [61] integrate the global information in a flexible manner by learning to find the attention scores for all pairs of tokens.
In this paper, we leverage the Transformers to integrate the matching scores to discover global consensus by considering global context information. Specifically, we obtain a refined cost C′ by feeding the raw cost C to the Transformer T , consisting of self-attention, LN, and MLP modules:
C′ = T(C + E_pos),    (1)
where E_pos denotes the positional embedding. The standard Transformer receives as input a 1D sequence of token embeddings. In our context, we reshape the correlation map C into a sequence of vectors C(k) ∈ R1×hw for k ∈ {1, ..., hw}. We visualize the refined correlation map with self-attention in Fig. 2, where the ambiguities are significantly resolved.
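A minimal sketch of Eq. (1) with a standard Transformer encoder layer is given below; the head count and feed-forward width are illustrative rather than the values used in the actual model.

```python
import torch
import torch.nn as nn

class CorrelationTransformer(nn.Module):
    """Sketch of Eq. (1): C' = T(C + E_pos), with C treated as hw tokens of size hw."""
    def __init__(self, hw: int, heads: int = 4, depth: int = 1):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, hw, hw))  # learnable positional embedding
        layer = nn.TransformerEncoderLayer(d_model=hw, nhead=heads,
                                           dim_feedforward=4 * hw, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, corr: torch.Tensor) -> torch.Tensor:  # corr: (B, hw, hw)
        return self.encoder(corr + self.pos)

refined = CorrelationTransformer(hw=16 * 16)(torch.randn(2, 256, 256))
```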
Appearance Affinity Modeling. When only matching costs are considered for aggregation, the self-attention layer processes the correlation map itself, disregarding the noise involved in the correlation map, which may lead to inaccurate correspondences. Rather than solely relying on the raw correlation map, we additionally provide an appearance embedding from the input features to disambiguate the correlation map, aided by appearance affinity within the Transformer. The intuition behind this is that visually similar points in an image, e.g., in color or feature, have similar correspondences, as proven in the stereo matching literature, e.g., Cost Volume Filtering (CVF) [16, 50].
To provide appearance affinity, we propose to concatenate embedded features projected from the input features with the correlation map. We first feed the features D into linear projection networks, and then concatenate the output along the corresponding dimension, so that the correlation map is augmented such that [C,P(D)] ∈ Rhw×(hw+p), where [ · ] denotes concatenation, P denotes the linear projection networks, and p is the channel dimension of the embedded feature. Within the Transformer, the self-attention layer aggregates the augmented correlation map and passes the output to linear projection networks to retain the size of the original correlation map C.
Multi-Level Aggregation. As shown in [37, 34, 39, 58, 31], leveraging multi-level features allows capturing hierarchical semantic feature representations. Thus, we also use multi-level features from different levels of convolutional layers to construct a stack of correlation maps. Each correlation map Cl, computed between Dls and Dlt, is concatenated with the corresponding embedded features and fed into the aggregation networks. The aggregation networks now consider multiple correlations, aiming to effectively aggregate the matches using the hierarchical semantic representations.
As shown in Fig. 3, a stack of L augmented correlation maps, [Cl,P(Dl)]Ll=1 ∈ Rhw×(hw+p)×L, undergoes the Transformer aggregator. For each l-th augmented correlation map, we aggregate with a self-attention layer across all the points in the augmented correlation map, which we refer to as intra-correlation self-attention. Subsequent to this, the correlation map undergoes inter-correlation self-attention across the multi-level dimensions. Contrary to HPF [37], which concatenates all the multi-level features and computes a single correlation map, thereby disregarding the level-wise similarities, the inter-correlation layer of the proposed model explores similar matching scores across the multi-level dimensions. In this way, we can embrace richer semantics in different levels of feature maps, as shown in Fig. 4.
3.4 Cost Aggregation with Transformers
By leveraging the Transformer aggregator, we present a cost aggregation framework with the following additional techniques to improve the performance.
Swapping Self-Attention. To obtain a refined correlation map that is invariant to the order of the input images and to impose consistent matching scores, we argue that reciprocal scores should be used as aids to infer confident correspondences. As the correlation map contains bidirectional matching scores, from both the target and the source perspective, we can leverage matching similarities from both directions in order to obtain more reciprocal scores, as done similarly in other works [45, 26].
As shown in Fig. 1, we first feed the augmented correlation map to the aforementioned Transformer aggregator. Then we transpose the output, swapping the pair of dimensions in order to concatenate it with the embedded feature from the other image, and feed it into a subsequent aggregator. Note that we share the parameters of the Transformer aggregators to obtain reciprocal scores. Formally, we define the whole process as follows:
S = T([C^l, P(D^l_t)]^L_{l=1} + E_pos),    C′ = T([(S^l)^T, P(D^l_s)]^L_{l=1} + E_pos),    (2)
where C^T(i, j) = C(j, i) denotes swapping the pair of dimensions corresponding to the source and target images; S denotes the intermediate correlation map before swapping the axes. Note that NC-Net [45] proposed a similar procedure, but instead of processing serially, they separately process the correlation map and its transposed version and add the outputs, which is designed to produce a correlation map invariant to the particular order of the input images. Unlike this, we process the correlation map serially, first aggregating one pair of dimensions and then further aggregating with respect to the other pair. In this way, the subsequent attention layer is given more consistent matching scores as an input, allowing further reduction of inconsistent matching scores. We include an ablation study to justify our choice in Section 4.4.
Residual Connection. At the initial phase, when the correlation map is fed into the Transformers, noisy score maps are inferred due to the randomly-initialized parameters, which could complicate the learning process. To stabilize the learning process and provide a better initialization for the matching, we employ a residual connection. Specifically, we enforce the cost aggregation networks to estimate the residual correlation by adding a residual connection around the aggregation networks.
3.5 Training
Data Augmentation. The Transformer is well known for lacking some inductive biases, and its data-hungry nature thus necessitates a large quantity of training data [61, 10]. Recent methods [55, 56, 32] that employ the Transformer to address Computer Vision tasks have empirically shown that data augmentation techniques have a positive impact on performance. However, in the correspondence task, the question of to what extent data augmentation can affect the performance has not yet been properly addressed. From the experiments, we empirically find that data augmentation has positive impacts on performance in semantic correspondence with Transformers, as reported in Section 4.4. We apply data augmentation [6, 2] to input images at random with predetermined probabilities. Specifically, 50% of the time, we randomly crop the input image, and independently for each augmentation function used in [6], we set the probability for applying the augmentation to 20%. More details can be found in the supplementary material.
Training Objective. As in [37, 39, 35], we assume that the ground-truth keypoints are given for each pair of images. We first average the stack of refined correlation maps C′ ∈ Rhw×hw×L to obtain C′′ ∈ Rhw×hw and then transform it into a dense flow field Fpred using the soft-argmax operator [26]. Subsequently, we compare the predicted dense flow field with the ground-truth flow field FGT obtained by following the protocol of [37] using the input keypoints. For the training objective, we utilize the Average End-Point Error (AEPE) [34], computed by averaging the Euclidean distance between the ground-truth and estimated flow. We thus formulate the objective function as L = ‖FGT − Fpred‖2.
4 Experiments
4.1 Implementation Details
For the backbone feature extractor, we use ResNet-101 [14] pre-trained on ImageNet [8] and, following [37], extract the features from the best subset of layers. Other backbone features can also be used; we analyze the effect of various backbone features in the following ablation study. For the hyper-parameters of the Transformer encoder, we set the depth to 1 and the number of heads to 6. We resize the spatial size of the input image pairs to 256×256, and the sequence of selected features is resized to 16×16. We use a learnable positional embedding [10] instead of a fixed one [61]. We implemented our network using PyTorch [40] and use the AdamW [33] optimizer with an initial learning rate of 3e−5 for the CATs layers and 3e−6 for the backbone features, which we gradually decrease during training.
4.2 Experimental Settings
In this section, we conduct comprehensive experiments for semantic correspondence, by evaluating our approach through comparisons to state-of-the-art methods including CNNGeo [42], A2Net [49], WeakAlign [43], NC-Net [45], RTNs [23], SFNet [26], HPF [37], DCC-Net [17], ANC-Net [27], DHPF [39], SCOT [31], GSF [21], and CHMNet [35]. In Section 4.3, we first evaluate matching results on several benchmarks with quantitative measures, and then provide an analysis of each component in our framework in Section 4.4. For more implementation details, please refer to our implementation available at https://github.com/SunghwanHong/CATs.
Datasets. SPair-71k [38] provides a total of 70,958 image pairs with extreme and diverse viewpoint and scale variations, and rich annotations for each image pair, e.g., keypoints, scale difference, truncation and occlusion difference, and a clear data split. Previously, for semantic matching, most of the datasets were limited to a small quantity of pairs with similar viewpoints and scales [11, 12]. As our network relies on a Transformer, which requires a large amount of data for training, SPair-71k [38] makes the use of the Transformer in our model feasible. We also consider PF-PASCAL [12], containing 1,351 image pairs from 20 categories, and PF-WILLOW [11], containing 900 image pairs from 4 categories, each dataset providing corresponding ground-truth annotations.
Evaluation Metric. For evaluation on SPair-71k [38], PF-WILLOW [11], and PF-PASCAL [12], we employ the percentage of correct keypoints (PCK), computed as the ratio of estimated keypoints within a threshold from the ground-truths to the total number of keypoints. Given a predicted keypoint kpred and a ground-truth keypoint kGT, we count the number of predicted keypoints that satisfy the following condition: d(kpred, kGT) ≤ α ·max(H,W ), where d( · ) denotes Euclidean distance; α denotes a threshold, which we set to αimg on PF-PASCAL and to αbbox on SPair-71k and PF-WILLOW; H and W denote the height and width of the object bounding box or of the entire image, respectively.
4.3 Matching Results
For a fair comparison, we follow the evaluation protocol of [37] for SPair-71k, in which our network is trained on the training split and evaluated on the test split. Similarly, for PF-PASCAL and PF-WILLOW, following the common evaluation protocol of [13, 23, 17, 37, 39], we train our network on the training split of PF-PASCAL [12] and then evaluate on the test split of PF-PASCAL [12] and on PF-WILLOW [11]. All the results of other methods are reported under the identical setting.
Table 1 summarizes quantitative results on SPair-71k [38], PF-PASCAL [12] and PF-WILLOW [11]. We note whether each method leverages multi-level features and fine-tunes the backbone features in order to ensure a fair comparison. We additionally denote the types of cost aggregation. Generally, our CATs outperform other methods over all the benchmarks. This is also confirmed by the results on SPair-71k, as shown in Table 2, where the proposed method outperforms other methods by a large margin. Note that CATs† reports lower PCK than that of CHM, and this is because CHM fine-tunes its backbone networks while CATs† does not. Fig. 5 visualizes qualitative results for extremely challenging image pairs. We observe that, compared to current state-of-the-art methods [31, 39], our method is capable of suppressing noisy scores and finding accurate correspondences in cases with large scale and geometric variations.
It is notable that CATs generally report lower PCK on PF-WILLOW [11] compared to other state-of-the-art methods. This is because the Transformer is well known for lacking some inductive biases. When we evaluate on PF-WILLOW, we infer with the model trained on the training split of PF-PASCAL, which only contains 1,351 image pairs, and as only a relatively small quantity of image pairs is available within the PF-PASCAL training split, the Transformer shows low generalization power. This demonstrates that the Transformer-based architecture indeed requires a means to compensate for the lack of inductive bias, e.g., data augmentation.
4.4 Ablation Study
In this section, we show an ablation analysis to validate the critical components of our architecture, and provide an analysis of the use of different backbone features and of data augmentation. We train all the variants on the training split of SPair-71k [38] when evaluating on SPair-71k, and train on PF-PASCAL [12] when evaluating on PF-PASCAL. We measure the PCK, and each ablation experiment is conducted under the same experimental setting for a fair comparison.
Network Architecture. Table 3 shows the analysis of key components in our architecture. There are four key components we analyze in the ablation study: appearance modelling, multi-level aggregation, swapping self-attention, and the residual connection.
We first define the model without any of these as the baseline, which simply feeds the correlation map into the self-attention layer. We evaluate on the SPair-71k benchmark by progressively adding each key component. From I to V, we observe a consistent increase in performance as each component is added. II shows a large improvement in performance, which demonstrates that the appearance modelling enabled the model to refine the ambiguous or noisy matching scores. Although III yields a relatively small increase in PCK, it shows that the proposed model successfully aggregates the multi-level correlation maps. Furthermore, IV and V show an apparent increase, proving the significance of both components.
Feature Backbone. As shown in Table 4, we explore the impact of different feature backbones on the performance on SPair-71k [38] and PF-PASCAL [12]. We report the results of models with the backbone networks frozen. The top two rows are models with DeiT-B [55], the next two rows use DINO [4], and the rest use ResNet-101 [14] as the backbone. Specifically, for the subscript single of DeiT-B and DINO, we use the feature map extracted at the last layer, while for the subscript all, every feature map from the 12 layers is used for cost construction. For ResNet-101 with subscript single, we use a single-level feature cropped at conv4−23, while for multi, we use the best layer subset provided by [37]. Summarizing the results, we observe that leveraging multi-level features shows apparent improvements in performance, proving the effectiveness of the multi-level aggregation introduced by our method. It is worth noting that DINO, which excels more at dense tasks than DeiT-B, outperforms DeiT-B when applied to semantic matching. This indicates that fine-tuning the feature could enhance the performance. To the best of our knowledge, we are the first to employ Transformer-based features for semantic matching. It would be an interesting setup to train an end-to-end Transformer-based network, and we hope this work draws attention from the community and proves useful for future works.
Data Augmentation. In Table 5, we compare the PCK performance between our variants and DHPF [39]. We note whether each model is trained with augmentation. For a fair comparison, we evaluate both DHPF [39] and CATs trained on SPair-71k [38] using strong supervision, which assumes that the ground-truth keypoints are given. The results show that, compared to DHPF, a CNN-based method, data augmentation has a larger influence on CATs in terms of performance. This demonstrates that not only did we ease the data-hunger problem inherent in Transformers, but we also found that applying augmentations for matching has positive effects. Augmentation techniques are thus likely to bring improvements in performance, and we hope that future works benefit from this.
Serial swapping. It is apparent that Equation 2 is not designed for an order-invariant output. Different from NC-Net [45], we let the correlation map undergo the self-attention module in a serial manner. We conducted a simple experiment to compare the two approaches. From the experiments, we obtained PCKs of 40.8 and 42.4 for parallel and serial processing on SPair-71k with αbbox = 0.1, respectively. In light of this, although CATs may not support order invariance, adopting serial processing obtains higher PCK, as it has a better capability to reduce inconsistent matching scores by additionally processing the already processed cost map; we therefore finalize the architecture to include serial processing.
4.5 Analysis
Visualizing Self-Attention. We visualize the multi-level attention maps obtained from the Transformer aggregator. As shown in Fig. 6, the learned self-attention map at each level exhibits a different aspect. With these self-attention maps, our networks can leverage multi-level correlations to capture hierarchical semantic feature representations effectively.
Memory and run-time. In Table 6, we show the memory and run-time comparison of CATs to NC-Net [45], SCOT [31], DHPF [39] and CHM [35]. For a fair comparison, the results are obtained using a single NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i7-10700 CPU. We measure the inference time both for the process excluding feature extraction and for the whole process. Thanks to the Transformer's fast computation, our method runs considerably faster than the other methods. We also find that, compared to other cost aggregation methods including 4D and 6D convolutions, OT-RHM and RHM, ours shows comparable efficiency in terms of computational cost. Note that NC-Net utilizes a single feature map while the other methods utilize multi-level feature maps. We used the standard self-attention module for our implementation, but more advanced and efficient Transformer [32] architectures could reduce the overall memory consumption.
4.6 Limitations
One obvious limitation of CATs is that, when applied to non-corresponding images, the proposed method would still deliver correspondences, as it lacks the power to ignore pixels that have no correspondence at all. A straightforward solution would be to include a module that accounts for pixel-wise matching confidence. Another limitation of CATs is its inability to address the task of finding accurate correspondences given multiple objects or non-corresponding objects. Addressing such challenges would be a promising direction for future work.
5 Conclusion
In this paper, we have proposed, for the first time, Transformer-based cost aggregation networks for semantic correspondence, dubbed CATs, which aggregate the matching scores computed between input features. We have made several architectural design choices, including appearance affinity modelling, multi-level aggregation, swapping self-attention, and residual correlation. We have shown that our method surpasses the current state-of-the-art on several benchmarks. Moreover, we have conducted extensive ablation studies to validate our choices and explore its capacity. A natural next step, which we leave for future work, is to examine how CATs could extend its domain to tasks including 3-D reconstruction, semantic segmentation and stitching, and to explore self-supervised learning.
Acknowledgements
This research was supported by the MSIT, Korea, under the ICT Creative Consilience program (IITP-2021-2020-0-01819) and (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) supervised by the IITP and National Research Foundation of Korea (NRF-2021R1C1C1006897). | 1. What is the focus and contribution of the paper on semantic correspondence?
2. What are the strengths of the proposed approach, particularly in terms of transformer-based cost aggregation?
3. What are the weaknesses of the paper, especially regarding the design of the aggregator module?
4. Do you have any concerns about the natural effectiveness of cost aggregation itself?
5. Are there any minor comments or suggestions for improving the paper? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors propose transformer-based cost aggregation networks for semantic correspondence with challenges of large intra-class appearance and geometric variations. The overall learning pipeline hierarchically aggregates the matching scores computed between disambiguated input features from semantically similar images by combining with swapping self-attention and residual connections. Extensive experimental results on three datasets, SPairk-71k, PF-PASCAL, and PF-WILLOW, show its efficacy on finding dense semantic correspondences.
Review
[Motivation and reasoning about transformer] First, the motivation of the proposed system is described clearly, considering the necessity of leveraging transformer-based cost aggregation for dense correspondence estimation between semantically similar images. Overall, the flow of the paper is easy to understand, and the goal is straightforward.
[About architectural design – aggregator] The overall network architecture consists of feature extraction, cost aggregation, and flow estimation steps. Specifically, on the stage of cost aggregation, each feature is transformed with the proposed module, CAT, and aggregated with intra- and inter-correlation self-attention mechanism. Here, I have a question about the role of each self-attention module. Although in L165, the operation of each module is described well, the motivation for disentangling each module is not clear. I would like to see the reasoning of their design, and what happens if the modules are not decomposed.
[Natural effectiveness of cost aggregation itself] Inherently, the cost aggregation by inner product has the capability of finding correlation between two input features. As described in Figure 2 (c), the raw correlation map shows its strong capability of finding semantically similar features. Therefore, these results raise a doubt as to whether the proposed CAT module is effectively designed, considering the complexity of the module. I wonder if the effect of CAT can be maximized when finding semantic correspondence for cluttered scenes with high complexity, such as traffic driving environments. Currently, most of the demonstrated dataset images contain only a single object.
[Minor comment] The title is too ambiguous to deliver/represent the contribution of the works.
UPDATE AFTER REBUTTAL
I appreciate the authors' feedback and valuable comments from other reviewers. Parts of my main concerns (e.g., reasoning about the design of the aggregator) are eased with the authors' fair reasonings. However, as commented by FB1A, those ablation studies (including the justification of transformers and cost volume) need to be discussed in the main paper. In addition, It would be better to include some failure cases or qualitative results with multiple objects for further discussion (although I appreciate the reply from the authors for the issue of multiple objects). Overall, I update my final score from 7 to 6, but still leaning towards acceptance. |
NIPS | Title
CATs: Cost Aggregation Transformers for Visual Correspondence
Abstract
We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images with additional challenges posed by large intra-class appearance and geometric variations. Cost aggregation is a highly important process in matching tasks, as the matching accuracy depends on the quality of its output. Compared to hand-crafted or CNN-based methods addressing cost aggregation, which either lack robustness to severe deformations or inherit the limitation of CNNs that fail to discriminate incorrect matches due to limited receptive fields, CATs explore global consensus among the initial correlation map with the help of some architectural designs that allow us to fully leverage the self-attention mechanism. Specifically, we include appearance affinity modeling to aid the cost aggregation process in order to disambiguate the noisy initial correlation maps, and propose multi-level aggregation to efficiently capture different semantics from hierarchical feature representations. We then combine these with a swapping self-attention technique and residual connections not only to enforce consistent matching but also to ease the learning process, and we find that these result in an apparent performance boost. We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models are available at https://sunghwanhong.github.io/CATs/.
1 Introduction
Establishing dense correspondences across semantically similar images can facilitate many Computer Vision applications, including semantic segmentation [46, 54, 36], object detection [29], and image editing [53, 30, 28, 25]. Unlike classical dense correspondence problems that consider visually similar images taken under geometrically constrained settings [16, 19, 50, 18], semantic correspondence poses additional challenges from large intra-class appearance and geometric variations caused by the unconstrained settings of a given image pair.
Recent approaches [42, 43, 45, 34, 37, 39, 31, 58, 47, 57, 51, 35] addressed these challenges by carefully designing deep convolutional neural network (CNN)-based models analogously to the classical matching pipeline [48, 41]: feature extraction, cost aggregation, and flow estimation. Several works [24, 9, 37, 39, 47, 51] focused on the feature extraction stage, as it has been proven that the more powerful the feature representation the model learns, the more robust the matching obtained [24, 9, 51]. However, solely relying on the matching similarity between features without any prior often suffers
from the challenges due to ambiguities generated by repetitive patterns or background clutter [42, 24, 26]. On the other hand, some methods [42, 49, 43, 23, 26, 58] focused on the flow estimation stage, either by designing an additional CNN as an ad-hoc regressor that predicts the parameters of a single global transformation [42, 43], finding confident matches from correlation maps [20, 26], or directly feeding the correlation maps into the decoder to infer dense correspondences [58]. However, these methods rely highly on the quality of the initial correlation maps.
The latest methods [45, 37, 44, 21, 31, 27, 35] have focused on the second stage, highlighting the importance of cost aggregation. Since the quality of correlation maps is of prime importance, they proposed to refine the matching scores by formulating the task as an optimal transport problem [47, 31], re-weighting matching scores by Hough space voting for geometric consistency [37, 39], or utilizing high-dimensional 4D or 6D convolutions to find locally consistent matches [45, 44, 27, 35]. Although formulated in various ways, these methods either use hand-crafted techniques that are neither learnable nor robust to severe deformations, or inherit the limitation of CNNs, e.g., limited receptive fields, failing to discriminate incorrect matches that are locally consistent.
In this work, we focus on the cost aggregation stage and propose a novel cost aggregation network to tackle the aforementioned issues. Our network, called Cost Aggregation with Transformers (CATs), is based on the Transformer [61, 10], which is renowned for its global receptive field. By considering all the matching scores computed between features of the input images globally, our aggregation networks explore global consensus and thus refine the ambiguous or noisy matching scores effectively.
Specifically, based on the observation that the desired correspondence should be aligned with appearance discontinuities in the images, we concatenate an appearance embedding with the correlation map, which helps to disambiguate the correlation map within the Transformer. To benefit from hierarchical feature representations, following [26, 39, 58], we use a stack of correlation maps constructed from multi-level features, and propose to effectively aggregate the scores across the multi-level correlation maps. Furthermore, we consider the bidirectional nature of the correlation map, and leverage the correlation map from both directions, obtaining reciprocal scores by swapping the pair of dimensions of the correlation map in order to allow global consensus in both perspectives. On top of all these combined, we provide residual connections around the aggregation networks in order to ease the learning process.
We demonstrate our method on several benchmarks [38, 11, 12]. Experimental results on various benchmarks prove the effectiveness of the proposed model over the latest methods for semantic correspondence. We also provide an extensive ablation study to validate and analyze components in CATs.
2 Related Work
Semantic Correspondence. Methods for semantic correspondence generally follow the classical matching pipeline [48, 41], including feature extraction, cost aggregation, and flow estimation. Most early efforts [7, 30, 11] leveraged hand-crafted features, which are inherently limited in capturing high-level semantics. Though using deep CNN-based features [5, 24, 42, 43, 23, 49, 26] has become increasingly popular thanks to their invariance to deformations, without a means to refine the matching scores independently computed between the features, the performance would remain rather limited.
To alleviate this, several methods focused on the flow estimation stage. Rocco et al. [42, 43] proposed an end-to-end network to predict global transformation parameters from the matching scores, and their success inspired many variants [49, 23, 25]. RTNs [23] obtain semantic correspondences through an iterative process of estimating spatial transformations. DGC-Net [34], Semantic-GLU-Net [58] and DMP [15] utilize a CNN-based decoder to directly find correspondence fields. PDC-Net [59] proposed a flexible probabilistic model that jointly learns the flow estimation and its uncertainty. Arguably, directly regressing correspondences from the initial matching scores relies highly on their quality.
Numerous recent methods [45, 37, 39, 31, 47, 51, 35] have thus focused on the cost aggregation stage to refine the initial matching scores. Among hand-crafted methods, SCOT [31] formulates semantic correspondence as an optimal transport problem and attempts to solve two issues, namely many-to-one matching and background matching. HPF [37] first computes appearance matching confidence using hyperpixel features and then uses the Regularized Hough Matching (RHM) algorithm for cost aggregation to enforce geometric consistency. DHPF [39], which replaces the feature selection algorithm
of HPF [37] with trainable networks, also uses RHM. However, these hand-crafted techniques for refining the matching scores are neither learnable nor robust to severe deformations. As learning-based approaches, NC-Net [45] utilizes 4D convolution to achieve local neighborhood consensus by finding locally consistent matches, and its variants [44, 27] proposed more efficient methods. GOCor [57] proposed an aggregation module that directly improves the correlation maps. GSF [21] formulated a pruning module to suppress false positives of correspondences in order to refine the initial correlation maps. CHM [35] goes one step further, proposing a learnable geometric matching algorithm which utilizes 6D convolution. However, they are all limited in the sense that they inherit a limitation of CNN-based architectures, namely local receptive fields.
Transformers in Vision. The Transformer [61], the de facto standard for Natural Language Processing (NLP) tasks, has recently had a significant impact on various tasks in Computer Vision, such as image classification [10, 55], object detection [3, 62], and tracking and matching [52, 51]. ViT [10], the first work to propose an end-to-end Transformer-based architecture for the image classification task, successfully extended the receptive field, owing to its self-attention nature that can capture global relationships between features. For visual correspondence, LoFTR [51] uses cross- and self-attention modules to refine the feature maps conditioned on both input images, and formulates the hand-crafted aggregation layer with dual-softmax [45, 60] and optimal transport [47] to infer correspondences. COTR [22] takes coordinates as an input and addresses the dense correspondence task without the use of a correlation map. Unlike these, for the first time, we propose a Transformer-based cost aggregation module.
3 Methodology
3.1 Motivation and Overview
Let us denote a pair of images, i.e., source and target, as Is and It, which represent semantically similar images, and features extracted from Is and It as Ds and Dt, respectively. Here, our goal is to establish a dense correspondence field F (i) between two images that is defined for each pixel i, which warps It towards Is.
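As a concrete illustration of what such a correspondence field does, the PyTorch sketch below warps It towards Is given a dense flow expressed in pixel displacements on the source grid. This is not the authors' code; the function name and the bilinear-sampling choice are our own assumptions.

```python
import torch
import torch.nn.functional as F

def warp_target_to_source(img_trg, flow):
    """Sketch: warp I_t towards I_s given a dense flow field.

    img_trg: (B, 3, H, W) target image
    flow:    (B, H, W, 2) per-pixel displacement (x, y) defined on the source
             grid, so the warped result at pixel i is I_t(i + F(i)).
    """
    b, _, h, w = img_trg.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    base = torch.stack([xs, ys], dim=-1).float().to(img_trg.device)  # (H, W, 2)
    coords = base.unsqueeze(0) + flow                                # absolute sampling positions
    # normalize to [-1, 1] as expected by grid_sample (x first, then y)
    gx = 2.0 * coords[..., 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[..., 1] / max(h - 1, 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)
    return F.grid_sample(img_trg, grid, align_corners=True)
```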
Estimating the correspondence with sole reliance on matching similarities between Ds and Dt is often challenged by the ambiguous matches due to the repetitive patterns or background clutters [42, 24, 26]. To address this, numerous methods proposed cost aggregation techniques that focus on refining the initial matching similarities either by formulating the task as optimal transport problem [47, 31], using regularized Hough matching to re-weight the costs [37, 39], or 4D or 6D convolutions [45, 27, 44, 35]. However, these methods either use hand-crafted techniques that are weak to severe deformations, or fail to discriminate incorrect matches due to limited receptive fields.
To overcome these, we present Transformer-based cost aggregation networks that effectively integrate information present in all pairwise matching costs, dubbed CATs, as illustrated in Fig. 1. As done widely in other works [42, 45, 50, 34, 37], we follow the common practice for feature extraction and cost computation. In the following, we first explain feature extraction and cost computation, and then describe several critical design choices we made for effective aggregation of the matching costs.
3.2 Feature Extraction and Cost Computation
To extract dense feature maps from images, we follow [26, 37, 39] that use multi-level features for the construction of correlation maps. We use CNNs that produce a sequence of L feature maps, and Dl represents the feature map at the l-th level. As done in [37], we use a different combination of multi-level features depending on the dataset trained on, e.g., PF-PASCAL [12] or SPair-71k [38]. Given a sequence of feature maps, we resize all the selected feature maps to Rh×w×c, with height h, width w, and c channels. The resized features then undergo l-2 normalization.
Given resized dense features Ds and Dt, we compute a correlation map C ∈ Rhw×hw using the inner product between features: C(i, j) = Dt(i) ·Ds(j) with points i and j in the target and source features, respectively. In this way, all pairwise feature matches are computed and stored. However, raw matching scores contain numerous ambiguous matching points as exemplified in Fig. 2, which results in inaccurate correspondences. To remedy this, we propose cost aggregation networks in the following that aim to refine the ambiguous or noisy matching scores.
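Below is a minimal PyTorch sketch of this cost computation, assuming the multi-level features have already been resized to a common h x w resolution; the function name and tensor layout are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def compute_correlation(feat_src, feat_trg):
    """Sketch: raw correlation map C(i, j) = D_t(i) . D_s(j).

    feat_src, feat_trg: (B, c, h, w) source/target features, resized to a
    common spatial size. Features are l2-normalized along channels first.
    Returns a (B, h*w, h*w) map of target points against source points.
    """
    src = F.normalize(feat_src.flatten(2), dim=1)   # (B, c, h*w)
    trg = F.normalize(feat_trg.flatten(2), dim=1)   # (B, c, h*w)
    corr = torch.einsum('bci,bcj->bij', trg, src)   # inner product of all pairs
    return corr
```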
3.3 Transformer Aggregator
Renowned for its global receptive fields, one of the key elements of the Transformer [61] is the self-attention mechanism, which enables finding the correlated input tokens by first feeding them into a scaled dot-product attention function, normalizing with Layer Normalization (LN) [1], and passing the normalized values to an MLP. Several works [10, 3, 62, 51] have shown that given images or features as input, Transformers [61] integrate the global information in a flexible manner by learning to find the attention scores for all pairs of tokens.
In this paper, we leverage the Transformers to integrate the matching scores to discover global consensus by considering global context information. Specifically, we obtain a refined cost C′ by feeding the raw cost C to the Transformer T , consisting of self-attention, LN, and MLP modules:
C′ = T (C + Epos), (1) where Epos denotes positional embedding. The standard Transformer receives as input a 1D sequence of token embeddings. In our context, we reshape the correlation map C into a sequence of vectors C(k) ∈ R1×hw for k ∈ {1, ..., hw}. We visualize the refined correlation map with self-attention in Fig. 2, where the ambiguities are significantly resolved.
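A minimal sketch of this aggregator is given below, treating each of the hw rows of the correlation map as a token and adding a learnable positional embedding before a standard Transformer encoder layer. The head count is set to 8 here only so that it divides hw; the paper reports 6 heads on the feature-augmented dimension, and this is not the authors' exact module.

```python
import torch
import torch.nn as nn

class TransformerAggregator(nn.Module):
    """Sketch: refine a (B, hw, hw) correlation map with self-attention."""
    def __init__(self, hw, num_heads=8, depth=1):
        super().__init__()
        self.pos_embed = nn.Parameter(torch.zeros(1, hw, hw))   # learnable E_pos
        layer = nn.TransformerEncoderLayer(d_model=hw, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, corr):          # corr: (B, hw, hw)
        return self.encoder(corr + self.pos_embed)

# usage on a 16x16 grid (hw = 256), e.g.
# agg = TransformerAggregator(hw=256)
# refined = agg(torch.randn(2, 256, 256))
```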
Appearance Affinity Modeling. When only matching costs are considered for aggregation, the self-attention layer processes the correlation map itself while disregarding the noise involved in the correlation map, which may lead to inaccurate correspondences. Rather than solely relying on the raw correlation map, we additionally provide an appearance embedding from the input features to disambiguate the correlation map, aided by appearance affinity within the Transformer. The intuition behind this is that visually similar points in an image, e.g., in color or feature, have similar correspondences, as shown in the stereo matching literature, e.g., Cost Volume Filtering (CVF) [16, 50].
To provide appearance affinity, we propose to concatenate embedded features projected from input features with the correlation map. We first feed the features D into linear projection networks, and then concatenate the output along corresponding dimension, so that the correlation map is augmented such that [C,P(D)] ∈ Rhw×(hw+p), where [ · ] denotes concatenation, P denotes linear projection networks, and p is channel dimension of embedded feature. Within the Transformer, self-attention layer aggregates the correlation map and passes the output to the linear projection networks to retain the size of original correlation C.
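The sketch below illustrates this augmentation: the features are linearly projected, concatenated with the correlation map along the token dimension, processed with self-attention, and projected back to the original hw width. The dimensions and module names are assumptions made for the example, not the authors' exact layers.

```python
import torch
import torch.nn as nn

class AppearanceAugmentedAttention(nn.Module):
    """Sketch: self-attention over [C, P(D)] of shape (B, hw, hw + p),
    then a linear projection back to the original correlation width hw."""
    def __init__(self, hw, feat_dim, proj_dim=128, num_heads=8):
        super().__init__()
        self.embed = nn.Linear(feat_dim, proj_dim)                 # P(D)
        self.attn = nn.MultiheadAttention(hw + proj_dim, num_heads,
                                          batch_first=True)
        self.out_proj = nn.Linear(hw + proj_dim, hw)               # keep size of C

    def forward(self, corr, feat):
        # corr: (B, hw, hw) raw correlation, feat: (B, hw, feat_dim) features
        x = torch.cat([corr, self.embed(feat)], dim=-1)            # (B, hw, hw + p)
        x, _ = self.attn(x, x, x)
        return self.out_proj(x)                                    # (B, hw, hw)
```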
Multi-Level Aggregation. As shown in [37, 34, 39, 58, 31], leveraging multi-level features allows capturing hierarchical semantic feature representations. Thus, we also use multi-level features from different levels of convolutional layers to construct a stack of correlation maps. Each correlation map Cl computed between Dls and Dlt is concatenated with the corresponding embedded features and fed into the aggregation networks. The aggregation networks now consider multiple correlations, aiming to effectively aggregate the matches using the hierarchical semantic representations.
As shown in Fig. 3, a stack of L augmented correlation maps, [Cl,P(Dl)]Ll=1 ∈ Rhw×(hw+p)×L, undergoes the Transformer aggregator. For each l-th augmented correlation map, we aggregate with a self-attention layer across all the points in the augmented correlation map, which we refer to as intra-correlation self-attention. Subsequent to this, the correlation map undergoes inter-correlation self-attention across the multi-level dimensions. Contrary to HPF [37], which concatenates all the multi-level features and computes a correlation map, thereby disregarding the level-wise similarities, the inter-correlation layer of the proposed model explores similar matching scores across the multi-level dimensions. In this way, we can embrace richer semantics in different levels of feature maps, as shown in Fig. 4.
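A simplified sketch of the two attention stages is shown below: intra-correlation attention operates within each level's augmented correlation map, and inter-correlation attention then attends across the L levels at every token position. The encoder layers stand in for the aggregator described above and are not the authors' exact blocks; the token width d_model is assumed to be divisible by the head count.

```python
import torch
import torch.nn as nn

class MultiLevelAggregation(nn.Module):
    """Sketch: intra-correlation attention within each level, then
    inter-correlation attention across the L levels per token."""
    def __init__(self, d_model, num_heads=8):
        super().__init__()
        self.intra = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.inter = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)

    def forward(self, corr_stack):
        # corr_stack: (B, L, hw, d) stack of augmented correlation maps
        b, L, hw, d = corr_stack.shape
        x = self.intra(corr_stack.reshape(b * L, hw, d)).reshape(b, L, hw, d)
        # treat the L level-wise scores at each position as a short sequence
        x = x.permute(0, 2, 1, 3).reshape(b * hw, L, d)
        x = self.inter(x).reshape(b, hw, L, d).permute(0, 2, 1, 3)
        return x                                                    # (B, L, hw, d)
```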
3.4 Cost Aggregation with Transformers
By leveraging the Transformer aggregator, we present a cost aggregation framework with the following additional techniques to improve the performance.
Swapping Self-Attention. To obtain a refined correlation map invariant to the order of the input images and to impose consistent matching scores, we argue that reciprocal scores should be used as aids to infer confident correspondences. As the correlation map contains bidirectional matching scores, from both the target and source perspectives, we can leverage matching similarities from both directions in order to obtain more reciprocal scores, as done similarly in other works [45, 26].
As shown in Fig. 1, we first feed the augmented correlation map to the aforementioned Transformer aggregator. We then transpose the output, swapping the pair of dimensions, concatenate it with the embedded feature from the other image, and feed it into a subsequent aggregator. Note that we share the parameters of the Transformer aggregators to obtain reciprocal scores. Formally, we define the whole process as follows:
S = T([C^l, P(D^l_t)]^L_{l=1} + E_pos),   C′ = T([(S^l)^T, P(D^l_s)]^L_{l=1} + E_pos),   (2)
where C^T(i, j) = C(j, i) denotes swapping the pair of dimensions corresponding to the source and target images; S denotes the intermediate correlation map before swapping the axes. Note that NC-Net [45] proposed a similar procedure, but instead of processing serially, they separately process the correlation map and its transposed version and add the outputs, which is designed to produce
a correlation map invariant to the particular order of the input images. Unlike this, we process the correlation map serially, first aggregating one pair of dimensions and then further aggregating with respect to the other pair. In this way, the subsequent attention layer is given more consistent matching scores as an input, allowing further reduction of inconsistent matching scores. We include an ablation study to justify our choice in Section 4.4.
Residual Connection. At the initial phase, when the correlation map is fed into the Transformers, noisy score maps are inferred due to randomly-initialized parameters, which could complicate the learning process. To stabilize the learning process and provide a better initialization for the matching, we employ a residual connection. Specifically, we enforce the cost aggregation networks to estimate the residual correlation by adding a residual connection around the aggregation networks.
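The whole aggregation step, including the serial swapping of Eq. (2) and the residual connection, can be summarized by the sketch below for a single level (the level index is dropped for clarity). Here `aggregator` stands for any module mapping an appearance-augmented correlation map of shape (B, hw, hw + p) back to (B, hw, hw), with the same weights reused for both passes; the names and the transpose back to the input orientation before the residual are choices of this sketch, not the authors' API.

```python
import torch

def aggregate_with_swapping(aggregator, corr, emb_src, emb_trg):
    """Sketch of Eq. (2) plus the residual connection.

    corr:     (B, hw, hw) raw correlation (target points x source points)
    emb_src:  (B, hw, p) projected appearance embedding of the source image
    emb_trg:  (B, hw, p) projected appearance embedding of the target image
    """
    s = aggregator(torch.cat([corr, emb_trg], dim=-1))       # first pass
    s = s.transpose(1, 2)                                     # swap source/target axes
    refined = aggregator(torch.cat([s, emb_src], dim=-1))     # second pass, shared weights
    refined = refined.transpose(1, 2)                          # back to the original orientation
    return corr + refined                                      # residual around the aggregator
```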
3.5 Training
Data Augmentation. The Transformer is well known for lacking some inductive biases, and its data-hungry nature thus necessitates a large quantity of training data [61, 10]. Recent methods [55, 56, 32] that employ the Transformer to address Computer Vision tasks have empirically shown that data augmentation techniques have a positive impact on performance. However, for the correspondence task, the question of to what extent data augmentation can affect performance has not yet been properly addressed. From the experiments, we empirically find that data augmentation has a positive impact on performance in semantic correspondence with Transformers, as reported in Section 4.4. We apply data augmentation [6, 2] to the input images at random with predetermined probabilities. Specifically, 50% of the time, we randomly crop the input image, and independently for each augmentation function used in [6], we set the probability for applying the augmentation as 20%. More details can be found in the supplementary material.
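A sketch of this stochastic policy is given below; the specific photometric operations and the crop scale are illustrative stand-ins rather than the exact set from [6], and in practice any geometric augmentation (such as the random crop) must also be propagated to the ground-truth keypoints.

```python
import random
import torchvision.transforms as T

# Illustrative photometric operations; not the exact list from [6].
PHOTOMETRIC_OPS = [
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    T.RandomGrayscale(p=1.0),
    T.GaussianBlur(kernel_size=5),
]

def augment(img, crop_p=0.5, op_p=0.2):
    """Sketch: random crop 50% of the time, and each photometric op applied
    independently with probability 20%, as described above."""
    if random.random() < crop_p:
        img = T.RandomResizedCrop(size=256, scale=(0.6, 1.0))(img)  # crop scale is an assumption
    for op in PHOTOMETRIC_OPS:
        if random.random() < op_p:
            img = op(img)
    return img
```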
Training Objective. As in [37, 39, 35], we assume that the ground-truth keypoints are given for each pair of images. We first average the stack of refined correlation maps C′ ∈ Rhw×hw×L to obtain C′′ ∈ Rhw×hw and then transform it into a dense flow field Fpred using the soft-argmax operator [26]. Subsequently, we compare the predicted dense flow field with the ground-truth flow field FGT, obtained by following the protocol of [37] using the input keypoints. For the training objective, we utilize the Average End-Point Error (AEPE) [34], computed by averaging the Euclidean distance between the ground-truth and estimated flow. We thus formulate the objective function as L = ‖FGT − Fpred‖2.
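The sketch below illustrates how an averaged correlation map can be turned into a dense flow with a soft-argmax over source positions, and how the AEPE objective is computed; the temperature value and function names are assumptions of this example, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def soft_argmax_flow(corr, h, w, temperature=0.02):
    """Sketch: dense flow from a (B, h*w, h*w) correlation map.
    Row i holds the scores of grid position i against all source positions;
    the expected source coordinate minus the grid coordinate gives the flow."""
    b = corr.size(0)
    prob = F.softmax(corr / temperature, dim=-1)                  # (B, hw, hw)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack([xs, ys], dim=-1).float().reshape(1, 1, h * w, 2)
    expected = (prob.unsqueeze(-1) * grid).sum(dim=2)             # (B, hw, 2)
    flow = expected - grid.reshape(1, h * w, 2)                   # displacement per position
    return flow.reshape(b, h, w, 2)

def aepe_loss(flow_pred, flow_gt, valid_mask=None):
    """Average end-point error: mean Euclidean distance between flows."""
    epe = torch.norm(flow_pred - flow_gt, dim=-1)                 # (B, h, w)
    if valid_mask is not None:                                    # e.g. keypoint-derived validity
        epe = epe[valid_mask]
    return epe.mean()
```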
4 Experiments
4.1 Implementation Details
For the backbone feature extractor, we use ResNet-101 [14] pre-trained on ImageNet [8], and following [37], extract the features from the best subset of layers. Other backbone features can also be used, and we analyze the effect of various backbone features in the following ablation study. For the hyper-parameters of the Transformer encoder, we set the depth as 1 and the number of heads as 6. We resize the spatial size of the input image pairs to 256×256, and the sequence of selected features is resized to 16×16. We use a learnable positional embedding [10] instead of a fixed one [61]. We implemented our network using PyTorch [40], and the AdamW [33] optimizer with an initial learning rate of 3e−5 for the CATs layers and 3e−6 for the backbone features is used, which we gradually decrease during training.
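A sketch of this optimizer setup is shown below; the attribute names (model.backbone, model.cats) and the cosine schedule are assumptions made for illustration, while the two learning rates follow the values stated above.

```python
import torch

def build_optimizer(model, lr_cats=3e-5, lr_backbone=3e-6, max_epochs=100):
    """Sketch: AdamW with separate learning rates for the backbone features
    and the CATs aggregation layers, gradually decayed during training."""
    param_groups = [
        {'params': model.backbone.parameters(), 'lr': lr_backbone},
        {'params': model.cats.parameters(), 'lr': lr_cats},
    ]
    optimizer = torch.optim.AdamW(param_groups)
    # one possible "gradual decrease" policy; the exact schedule is not specified here
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=max_epochs)
    return optimizer, scheduler
```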
4.2 Experimental Settings
In this section, we conduct comprehensive experiments for semantic correspondence, by evaluating our approach through comparisons to state-of-the-art methods including CNNGeo [42], A2Net [49], WeakAlign [43], NC-Net [45], RTNs [23], SFNet [26], HPF [37], DCC-Net [17], ANC-Net [27], DHPF [39], SCOT [31], GSF [21], and CHMNet [35]. In Section 4.3, we first evaluate matching results on several benchmarks with quantitative measures, and then provide an analysis of each component in our framework in Section 4.4. For more implementation details, please refer to our implementation available at https://github.com/SunghwanHong/CATs.
Datasets. SPair-71k [38] provides a total of 70,958 image pairs with extreme and diverse viewpoint and scale variations, and rich annotations for each image pair, e.g., keypoints, scale difference, truncation and occlusion difference, and a clear data split. Previously, for semantic matching, most of the datasets were limited to a small quantity with similar viewpoints and scales [11, 12]. As our network relies on the Transformer, which requires a large amount of data for training, SPair-71k [38] makes the use of the Transformer in our model feasible. We also consider PF-PASCAL [12], containing 1,351 image pairs from 20 categories, and PF-WILLOW [11], containing 900 image pairs from 4 categories, each dataset providing corresponding ground-truth annotations.
Evaluation Metric. For evaluation on SPair-71k [38], PF-WILLOW [11], and PF-PASCAL [12], we employ the percentage of correct keypoints (PCK), computed as the ratio of estimated keypoints within a threshold from the ground-truths to the total number of keypoints. Given a predicted keypoint kpred and a ground-truth keypoint kGT, we count the number of predicted keypoints that satisfy the following condition: d(kpred, kGT) ≤ α ·max(H,W ), where d( · ) denotes Euclidean distance; α
denotes a threshold which we evaluate on PF-PASCAL with αimg, SPair-71k and PF-WILLOW with αbbox; H and W denote height and width of the object bounding box or entire image, respectively.
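For completeness, a small sketch of the PCK computation as defined above; the function signature is ours, and whether H and W come from the object bounding box (α_bbox) or the whole image (α_img) depends on the benchmark.

```python
import torch

def pck(kp_pred, kp_gt, height, width, alpha=0.1):
    """Percentage of correct keypoints: a prediction is correct when its
    distance to the ground truth is within alpha * max(H, W).

    kp_pred, kp_gt: (N, 2) keypoint coordinates (x, y)
    height, width:  bounding-box size (alpha_bbox) or image size (alpha_img)
    """
    dist = torch.norm(kp_pred.float() - kp_gt.float(), dim=-1)   # (N,)
    return (dist <= alpha * max(height, width)).float().mean().item()
```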
4.3 Matching Results
For a fair comparison, we follow the evaluation protocol of [37] for SPair-71k, where our network is trained on the training split and evaluated on the test split. Similarly, for PF-PASCAL and PF-WILLOW, following the common evaluation protocol of [13, 23, 17, 37, 39], we train our network on the training split of PF-PASCAL [12] and then evaluate on the test splits of PF-PASCAL [12] and PF-WILLOW [11]. All the results of other methods are reported under identical settings.
Table 1 summarizes the quantitative results on SPair-71k [38], PF-PASCAL [12] and PF-WILLOW [11]. We note whether each method leverages multi-level features and fine-tunes the backbone features in order to ensure a fair comparison. We additionally denote the types of cost aggregation. Generally, our CATs outperform other methods over all the benchmarks. This is also confirmed by the results on SPair-71k, as shown in Table 2, where the proposed method outperforms other methods by a large margin. Note that CATs† reports a lower PCK than that of CHM, and this is because CHM fine-tunes its backbone networks while CATs† does not. Fig. 5 visualizes qualitative results for extremely challenging image pairs. We observe that compared to current state-of-the-art methods [31, 39], our method is capable of suppressing noisy scores and finding accurate correspondences in cases with large scale and geometric variations.
It is notable that CATs generally report lower PCK on PF-WILLOW [11] compared to other state-of-the-art methods. This is because the Transformer is well known for lacking some inductive biases. When we evaluate on PF-WILLOW, we infer with the model trained on the training split of PF-PASCAL, which only contains 1,351 image pairs, and as only a relatively small quantity of image pairs is available within the PF-PASCAL training split, the Transformer shows low generalization power. This demonstrates that the Transformer-based architecture indeed requires a means to compensate for the lack of inductive bias, e.g., data augmentation.
4.4 Ablation Study
In this section, we present an ablation analysis to validate the critical components of our architecture, and provide an analysis of the use of different backbone features and data augmentation. We train all the variants on the training split of SPair-71k [38] when evaluating on SPair-71k, and train on PF-PASCAL [12] when evaluating on PF-PASCAL. We measure the PCK, and each ablation experiment is conducted under the same experimental setting for a fair comparison.
Network Architecture. Table 3 shows the analysis of the key components in our architecture. There are four key components we analyze for the ablation study: appearance modelling, multi-level aggregation, swapping self-attention, and residual connection.
We first define the model without any of these as the baseline, which simply feeds the correlation map into the self-attention layer. We evaluate on the SPair-71k benchmark by progressively adding each key component. From I to V, we observe a consistent increase in performance when each component is added. II shows a large improvement in performance, which demonstrates that the appearance modelling enabled the model to refine the ambiguous or noisy matching scores. Although III yields a relatively small increase in PCK, it shows that the proposed model successfully aggregates the multi-level correlation maps. Furthermore, IV and V show apparent increases, proving the significance of both components.
Feature Backbone. As shown in Table 4, we explore the impact of different feature backbones on the performance on SPair-71k [38] and PF-PASCAL [12]. We report the results of models with the backbone networks frozen. The top two rows are models with DeiT-B [55], the next two rows use DINO [4], and the rest use ResNet-101 [14] as the backbone. Specifically, for the subscript single for DeiT-B and DINO, we use the feature map extracted at the last layer, while for the subscript all, every feature map from the 12 layers is used for cost construction. For ResNet-101 with subscript single, we use a single-level feature cropped at conv4−23, while for multi, we use the best layer subset provided by [37]. Summarizing the results, we observe that leveraging multi-level features shows apparent improvements in performance, proving the effectiveness of the multi-level aggregation introduced by our method. It is worth noting that DINO, which excels more at dense tasks than DeiT-B, outperforms DeiT-B when applied to semantic matching. This indicates that fine-tuning the features could enhance the performance. To the best of our knowledge, we are the first to employ Transformer-based features for semantic matching. It would be an interesting setup to train an end-to-end Transformer-based network, and we hope this work draws attention from the community and proves useful for future works.
Data Augmentation. In Table 5, we compare the PCK performance between our variants and DHPF [39]. We note whether each model is trained with augmentation. For a fair comparison, we evaluate both DHPF [39] and CATs trained on SPair-71k [38] using strong supervision, which assumes that the ground-truth keypoints are given. The results show that compared to DHPF, a CNN-based method, data augmentation has a larger influence on CATs in terms of performance. This demonstrates not only that we eased the data-hunger problem inherent in Transformers, but also that applying augmentations for matching has positive effects. Augmentation techniques are thus likely to bring improvements in performance, and we hope that future works benefit from this.
Serial swapping. It is apparent that Equation 2 is not designed for an order-invariant output. Different from NC-Net [45], we let the correlation map undergo the self-attention module in a serial manner. We conducted a simple experiment to compare the two approaches. From the experiments, we obtained the results of parallel and serial processing on SPair-71k with αbbox = 0.1, which are PCK of 40.8 and 42.4, respectively. In light of this, although CATs may not support order invariance, adopting serial processing obtains higher PCK, as it has a better capability to reduce inconsistent matching scores by additionally processing the already processed cost map; we therefore finalize the architecture to include serial processing.
4.5 Analysis
Visualizing Self-Attention. We visualize the multi-level attention maps obtained from the Transformer aggregator. As shown in Fig. 6, the learned self-attention map at each level exhibits a different aspect. With these self-attentions, our networks can leverage multi-level correlations to capture hierarchical semantic feature representations effectively.
Memory and run-time. In Table 6, we show the memory and run-time comparison of NC-Net [45], SCOT [31], DHPF [39] and CHM [35] with CATs. For a fair comparison, the results are obtained using a single NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i7-10700 CPU. We measure the inference time both for the process without counting feature extraction and for the whole process. Thanks to the Transformers' fast computation nature, compared to other methods, our method is beyond compare. We also find that compared to other cost aggregation methods, including 4D and 6D convolutions, OT-RHM and RHM, ours shows comparable efficiency in terms of computational cost. Note that NC-Net utilizes a single feature map while other methods utilize multi-level feature maps. We used the standard self-attention module for our implementation, but more advanced and efficient Transformer architectures [32] could reduce the overall memory consumption.
4.6 Limitations
One obvious limitation of CATs is that, when applied to non-corresponding images, the proposed method would still deliver correspondences, as it lacks the ability to ignore pixels that have no correspondence at all. A straightforward solution would be to include a module that accounts for pixel-wise matching confidence. Another limitation of CATs is its inability to address the task of finding accurate correspondences given multiple objects or non-corresponding objects. Addressing such challenges would be a promising direction for future work.
5 Conclusion
In this paper, we have proposed, for the first time, Transformer-based cost aggregation networks for semantic correspondence, which enable aggregating the matching scores computed between input features, dubbed CATs. We have made several architectural design choices, including appearance affinity modelling, multi-level aggregation, swapping self-attention, and residual connections. We have shown that our method surpasses the current state-of-the-art on several benchmarks. Moreover, we have conducted extensive ablation studies to validate our choices and explore the model's capacity. A natural next step, which we leave for future work, is to examine how CATs could extend its domain to tasks including 3-D reconstruction, semantic segmentation and stitching, and to explore self-supervised learning.
Acknowledgements
This research was supported by the MSIT, Korea, under the ICT Creative Consilience program (IITP-2021-2020-0-01819) and (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) supervised by the IITP and National Research Foundation of Korea (NRF-2021R1C1C1006897).
1. What is the focus of the paper on semantic correspondence?
2. What are the strengths and weaknesses of the proposed approach regarding transformer usage?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the concerns regarding the cost volume refinement method?
5. Do you have any questions or suggestions regarding the presentation and organization of the paper?
Summary Of The Paper
The paper proposes a cost-volume refinement method using transformers for use in a cost-volume-based semantic correspondence framework. The main innovations of the method are the use of transformers, how the cost volume is aggregated with appearance features, and the serial way in which appearance features are used. The method is tested on the SPair-71K, PF-Pascal, and PF-Willow datasets.
Review
The paper provides decent performance, outperforming the competitors on SPair-71K, while performing slightly worse on PF-Pascal and PF-Willow. However, as the main argument of the paper is about the use of Transformers for cost volume refinement, this has to be well justified, which the current paper lacks. Considering this, together with the presentation quality issues, the reviewer is currently leaning towards rejection. The reviewer, however, would be very happy to be convinced otherwise.
Why Transformers?
Contrary to the argument made in the paper, e.g. lines 87--88, CNN-based architectures do not necessarily suffer from local receptive fields. For example, even the featuremap in this paper, which is 16x16 at the end, would have a receptive field that roughly covers the entire image. For example, resnet 101 (v1) has a receptive field size of 1027 pixels. This, combined with the post-processing or hierarchical processing, makes the locality argument less convincing. Moreover, a naive way to remove the locality constraint for a 16x16 feature map would be to use MLPs.
In fact, a critical component of transformers is that they actually relate and route features according to the pair-wise relationships -- the core idea behind the self-attention module. Hence I would suggest that the authors reconsider this argument.
Given this, what is most important in this paper is in fact not just achieving state-of-the-art performance, but rather revealing that one should for sure head towards transformers. The architecture suggested in this paper is a perfect playground for this as the T function could be replaced with any deep network -- which is quite rare! One could make use of CNNs (both 2D and 3D), as well as a fully connected layer. This ablation study is a must for the paper to deliver its main message, but is missing. Without a clear justification on why the transformer is necessary, the contributions of the paper itself are quite incremental, as many of the components are similar to existing work (e.g. [35]). The reviewer would like to note that the ablation studies in the current manuscript are great, but they are somewhat tangential to showing that the use of a transformer is essential.
Why cost volume?
As shown in Table 3, the use of appearance, and how the transformer utilizes it, seem to be critical to the final performance. However, since transformers have the ability to perform dot products, at this point, it becomes somewhat questionable whether the cost volume itself is necessary at all. For example, what would happen if everything in the pipeline remains the same but the cost volume is removed? This may create some issues related to connecting between the two images, but then the serial way in which the appearances of the two images are provided may allow information to flow through and let the transformer perform this matching by itself. Hence this is perhaps another ablation study that is required to justify the architecture.
Missing reference
Following up, in fact in COTR [a], the paper does something along these lines where cost volume is omitted and it is left to the transformer to make correlations among features. Their work shows that this is a highly effective strategy in forming correspondences. This work is also the first work that the reviewer is aware of on applying Transformers to the correspondence problem (their work is not on semantic correspondences though), predating also LoFTR in terms of the date it became public. While it is probably unreasonable for the paper to compare against this method as it is aiming for geometric correspondences (COTR code page does show a demo for semantic correspondence across human faces), the paper should at least differentiate itself from this method.
[a] Jiang et al., "COTR: Correspondence Transformer for Matching Across Images", ArXiv 21
Serial swapping
The serial swapping suggested in Eq 2 is not order invariant. Hence, differently from NC-Net, depending on which image is used first, it will give different outcomes. Is there some mitigation strategy to avoid this from happening? Otherwise, this does not seem like a proper way to encourage cycle consistency, but another way to incorporate the two appearances.
Presentation
The quality of the presentation requires improvement. The paper does not read well and has multiple grammatical errors that harm the paper's quality. For example, "appearance affinity modeling" in line 9 is unclear, as well as lines 11--14 unless the reader has already read the paper. In line 26, the paper states that it has been proven that more powerful feature representation... but this is just supported by empirical evidence. A similar mistake is in line 202, where the paper states augmentation guarantees performance boost, which is not always true. Other examples include; In line 22, "unconstrained settings" is unclear; line 40 "formulated variously"; line 65 "without a means to refine..." phrase seems discontinued from the previous phrase; line 131 "by first feeding into scaled dot product attention function" object of the sentence is missing; line 199 "inductive bias" could be anything and not clear as Transformers also have inductive bias coming from their particular structure; line 204 "have empirically found" should be present tense to be consistent; line 218 no need for "basically".
There are some organizational issues as well, for example, lines 104--115 are somewhat repeating what is said in the related works section, and it is the reviewer's personal opinion that they belong also either in the intro or related works as they are not directly relevant to the method. Similarly, 130--136 also discusses Transformers and Layer Normalization in general, which is again more suited in the related works section.
Many important details about the method are also left to reference. For example, in lines 119--121, the paper refers to 35 for how multi-level features are extracted, but without this information, it is hard to understand the paper. For completeness, this information should be included. It seems that some of the details are present in the supplementary appendix, but in this case, the main paper should say so. Another example where detail is missing is related to augmentation in lines 206--207, as without reading [6], it's impossible to replicate the paper.
Promises in the checklist are not delivered
1(c) should be Yes, with a pointer to the appendix.
3(d), 4(a), 4(b) are promised, but not provided.
==== Post Rebuttal Update ====
I am convinced by the new experiments that the authors have added, and agree that they are now enough to empirically justify the proposed method's design. However, I am concerned that incorporating these changes would require a significant amount of rewriting, whose quality we have no means of verifying after the edit. I thus still vote for rejecting the paper. It seems like the submission was an unfinished product that was completed during the rebuttal period.
NIPS | Title
CATs: Cost Aggregation Transformers for Visual Correspondence
Abstract
We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images with additional challenges posed by large intra-class appearance and geometric variations. Cost aggregation is a highly important process in matching tasks, which the matching accuracy depends on the quality of its output. Compared to handcrafted or CNN-based methods addressing the cost aggregation, in that either lacks robustness to severe deformations or inherit the limitation of CNNs that fail to discriminate incorrect matches due to limited receptive fields, CATs explore global consensus among initial correlation map with the help of some architectural designs that allow us to fully leverage self-attention mechanism. Specifically, we include appearance affinity modeling to aid the cost aggregation process in order to disambiguate the noisy initial correlation maps and propose multi-level aggregation to efficiently capture different semantics from hierarchical feature representations. We then combine with swapping self-attention technique and residual connections not only to enforce consistent matching, but also to ease the learning process, which we find that these result in an apparent performance boost. We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models are available at https://sunghwanhong.github.io/CATs/.
1 Introduction
Establishing dense correspondences across semantically similar images can facilitate many Computer Vision applications, including semantic segmentation [46, 54, 36], object detection [29], and image editing [53, 30, 28, 25]. Unlike classical dense correspondence problems that consider visually similar images taken under the geometrically constrained settings [16, 19, 50, 18], semantic correspondence poses additional challenges from large intra-class appearance and geometric variations caused by the unconstrained settings of given image pair.
Recent approaches [42, 43, 45, 34, 37, 39, 31, 58, 47, 57, 51, 35] addressed these challenges by carefully designing deep convolutional neural networks (CNNs)-based models analogously to the classical matching pipeline [48, 41], feature extraction, cost aggregation, and flow estimation. Several works [24, 9, 37, 39, 47, 51] focused on the feature extraction stage, as it has been proven that the more powerful feature representation the model learns, the more robust matching is obtained [24, 9, 51]. However, solely relying on the matching similarity between features without any prior often suffers ∗Equal contribution †Corresponding author
35th Conference on Neural Information Processing Systems (NeurIPS 2021)
from the challenges due to ambiguities generated by repetitive patterns or background clutters [42, 24, 26]. On the other hand, some methods [42, 49, 43, 23, 26, 58] focused on flow estimation stage either by designing additional CNN as an ad-hoc regressor that predicts the parameters of a single global transformation [42, 43], finding confident matches from correlation maps [20, 26], or directly feeding the correlation maps into the decoder to infer dense correspondences [58]. However, these methods highly rely on the quality of the initial correlation maps.
The latest methods [45, 37, 44, 21, 31, 27, 35] have focused on the second stage, highlighting the importance of cost aggregation. Since the quality of correlation maps is of prime importance, they proposed to refine the matching scores by formulating the task as optimal transport problem [47, 31], re-weighting matching scores by Hough space voting for geometric consistency [37, 39], or utilizing high-dimensional 4D or 6D convolutions to find locally consistent matches [45, 44, 27, 35]. Although formulated variously, these methods either use hand-crafted techniques that are neither learnable nor robust to severe deformations, or inherit the limitation of CNNs, e.g., limited receptive fields, failing to discriminate incorrect matches that are locally consistent.
In this work, we focus on the cost aggregation stage, and propose a novel cost aggregation network to tackle aforementioned issues. Our network, called Cost Aggregation with Transformers (CATs), is based on Transformer [61, 10], which is renowned for its global receptive field. By considering all the matching scores computed between features of input images globally, our aggregation networks explore global consensus and thus refine the ambiguous or noisy matching scores effectively.
Specifically, based on the observation that desired correspondence should be aligned at discontinuities with appearance of images, we concatenate an appearance embedding with the correlation map, which helps to disambiguate the correlation map within the Transformer. To benefit from hierarchical feature representations, following [26, 39, 58], we use a stack of correlation maps constructed from multilevel features, and propose to effectively aggregate the scores across the multi-level correlation maps. Furthermore, we consider bidirectional nature of correlation map, and leverage the correlation map from both directions, obtaining reciprocal scores by swapping the pair of dimensions of correlation map in order to allow global consensus in both perspective. In addition to all these combined, we provide residual connections around aggregation networks in order to ease the learning process.
We demonstrate our method on several benchmarks [38, 11, 12]. Experimental results on various benchmarks prove the effectiveness of the proposed model over the latest methods for semantic correspondence. We also provide an extensive ablation study to validate and analyze components in CATs.
2 Related Work
Semantic Correspondence. Methods for semantic correspondence generally follow the classical matching pipeline [48, 41], including feature extraction, cost aggregation, and flow estimation. Most early efforts [7, 30, 11] leveraged the hand-crafted features which are inherently limited in capturing high-level semantics. Though using deep CNN-based features [5, 24, 42, 43, 23, 49, 26] has become increasingly popular thanks to their invariance to deformations, without a means to refine the matching scores independently computed between the features, the performance would be rather limited.
To alleviate this, several methods focused on flow estimation stage. Rocco et al. [42, 43] proposed an end-to-end network to predict global transformation parameters from the matching scores, and their success inspired many variants [49, 23, 25]. RTNs [23] obtain semantic correspondences through an iterative process of estimating spatial transformations. DGC-Net [34], Semantic-GLU-Net [58] and DMP [15] utilize a CNN-based decoder to directly find correspondence fields. PDC-Net [59] proposed a flexible probabilistic model that jointly learns the flow estimation and its uncertainty. Arguably, directly regressing correspondences from the initial matching scores highly relies on the quality of them.
Recent numerous methods [45, 37, 39, 31, 47, 51, 35] thus have focused on cost aggregation stage to refine the initial matching scores. Among hand-crafted methods, SCOT [31] formulates semantic correspondence as an optimal transport problem and attempts to solve two issues, namely many to one matching and background matching. HPF [37] first computes appearance matching confidence using hyperpixel features and then uses Regularized Hough Matching (RHM) algorithm for cost aggregation to enforce geometric consistency. DHPF [39], that replaces feature selection algorithm
of HPF [37] with trainable networks, also uses RHM. However, these hand-crafted techniques for refining the matching scores are neither learnable nor robust to severe deformations. As learningbased approaches, NC-Net [45] utilizes 4D convolution to achieve local neighborhood consensus by finding locally consistent matches, and its variants [44, 27] proposed more efficient methods. GOCor [57] proposed aggregation module that directly improves the correlation maps. GSF [21] formulated pruning module to suppress false positives of correspondences in order to refine the initial correlation maps. CHM [35] goes one step further, proposing a learnable geometric matching algorithm which utilizes 6D convolution. However, they are all limited in the sense that they inherit limitation of CNN-based architectures, which is local receptive fields.
Transformers in Vision. Transformer [61], the de facto standard for Natural Language Processing (NLP) tasks, has recently imposed significant impact on various tasks in Computer Vision fields such as image classification [10, 55], object detection [3, 62], tracking and matching [52, 51]. ViT [10], the first work to propose an end-to-end Transformer-based architecture for the image classification task, successfully extended the receptive field, owing to its self-attention nature that can capture global relationship between features. For visual correspondence, LoFTR [51] uses cross and self-attention module to refine the feature maps conditioned on both input images, and formulate the hand-crafted aggregation layer with dual-softmax [45, 60] and optimal transport [47] to infer correspondences. COTR [22] takes coordinates as an input and addresses dense correspondence task without the use of correlation map. Unlike these, for the first time, we propose a Transformer-based cost aggregation module.
3 Methodology
3.1 Motivation and Overview
Let us denote a pair of images, i.e., source and target, as Is and It, which represent semantically similar images, and features extracted from Is and It as Ds and Dt, respectively. Here, our goal is to establish a dense correspondence field F (i) between two images that is defined for each pixel i, which warps It towards Is.
Estimating the correspondence with sole reliance on matching similarities betweenDs andDt is often challenged by the ambiguous matches due to the repetitive patterns or background clutters [42, 24, 26]. To address this, numerous methods proposed cost aggregation techniques that focus on refining the initial matching similarities either by formulating the task as optimal transport problem [47, 31], using regularized Hough matching to re-weight the costs [37, 39], or 4D or 6D convolutions [45, 27, 44, 35]. However, these methods either use hand-crafted techniques that are weak to severe deformations, or fail to discriminate incorrect matches due to limited receptive fields.
To overcome these, we present Transformer-based cost aggregation networks that effectively integrate information present in all pairwise matching costs, dubbed CATs, as illustrated in Fig. 1. As done widely in other works [42, 45, 50, 34, 37], we follow the common practice for feature extraction and cost computation. In the following, we first explain feature extraction and cost computation, and then describe several critical design choices we made for effective aggregation of the matching costs.
3.2 Feature Extraction and Cost Computation
To extract dense feature maps from images, we follow [26, 37, 39] that use multi-level features for construction of correlation maps. We use CNNs that produce a sequence of L feature maps, and Dl represents a feature map at l-th level. As done in [37], we use different combination of multi-level features depending on the dataset trained on, e.g., PF-PASCAL [12] or SPair-71k [38]. Given a sequence of feature maps, we resize all the selected feature maps to Rh×w×c, with height h, width w, and c channels. The resized features then undergo l-2 normalization.
Given resized dense features Ds and Dt, we compute a correlation map C ∈ Rhw×hw using the inner product between features: C(i, j) = Dt(i) ·Ds(j) with points i and j in the target and source features, respectively. In this way, all pairwise feature matches are computed and stored. However, raw matching scores contain numerous ambiguous matching points as exemplified in Fig. 2, which results inaccurate correspondences. To remedy this, we propose cost aggregation networks in the following that aim to refine the ambiguous or noisy matching scores.
3.3 Transformer Aggregator
Renowned for its global receptive fields, one of the key elements of Transformer [61] is the selfattention mechanism, which enables finding the correlated input tokens by first feeding into scaled dot product attention function, normalizing with Layer Normalization (LN) [1], and passing the normalized values to a MLP. Several works [10, 3, 62, 51] have shown that given images or features as input, Transformers [61] integrate the global information in a flexible manner by learning to find the attention scores for all pairs of tokens.
In this paper, we leverage the Transformers to integrate the matching scores to discover global consensus by considering global context information. Specifically, we obtain a refined cost C′ by feeding the raw cost C to the Transformer T , consisting of self-attention, LN, and MLP modules:
C′ = T (C + Epos), (1) where Epos denotes positional embedding. The standard Transformer receives as input a 1D sequence of token embeddings. In our context, we reshape the correlation map C into a sequence of vectors C(k) ∈ R1×hw for k ∈ {1, ..., hw}. We visualize the refined correlation map with self-attention in Fig. 2, where the ambiguities are significantly resolved.
Appearance Affinity Modeling. When only matching costs are considered for aggregation, selfattention layer processes the correlation map itself disregarding the noise involved in the correlation map, which may lead to inaccurate correspondences. Rather than solely relying on raw correlation map, we additionally provide an appearance embedding from input features to disambiguate the correlation map aided by appearance affinity within the Transformer. Intuition behind is that visually similar points in an image, e.g., color or feature, have similar correspondences, as proven in stereo matching literature, e.g., Cost Volume Filtering (CVF) [16, 50].
To provide appearance affinity, we propose to concatenate embedded features projected from input features with the correlation map. We first feed the features D into linear projection networks, and then concatenate the output along corresponding dimension, so that the correlation map is augmented such that [C,P(D)] ∈ Rhw×(hw+p), where [ · ] denotes concatenation, P denotes linear projection networks, and p is channel dimension of embedded feature. Within the Transformer, self-attention layer aggregates the correlation map and passes the output to the linear projection networks to retain the size of original correlation C.
Multi-Level Aggregation. As shown in [37, 34, 39, 58, 31], leveraging multi-level features allows capturing hierarchical semantic feature representations. Thus we also use multi-level features from different levels of convolutional layers to construct a stack of correlation maps. Each correlation map Cl computed between Dls and Dlt is concatenated with corresponding embedded features and fed into the aggregation networks. The aggregation networks now consider multiple correlations, aiming to effectively aggregates the matches by the hierarchical semantic representations.
As shown in Fig. 3, a stack of L augmented correlation maps, [Cl,P(Dl)]Ll=1 ∈ Rhw×(hw+p)×L, undergo the Transformer aggregator. For each l-th augmented correlation map, we aggregate with self-attention layer across all the points in the augmented correlation map, and we refer this as intra-correlation self-attention. In addition, subsequent to this, the correlation map undergoes intercorrelation self-attention across multi-level dimensions. Contrary to HPF [37] that concatenates all the multi-level features and compute a correlation map, which disregards the level-wise similarities, within the inter-correlation layer of the proposed model, the similar matching scores are explored across multi-level dimensions. In this way, we can embrace richer semantics in different levels of feature maps, as shown in Fig. 4.
3.4 Cost Aggregation with Transformers
By leveraging the Transformer aggregator, we present cost aggregation framework with following additional techniques to improve the performance.
Swapping Self-Attention. To obtain a refined correlation map invariant to order of the input images and impose consistent matching scores, we argue that reciprocal scores should be used as aids to infer confident correspondences. As correlation map contains bidirectional matching scores, from both target and source perspective, we can leverage matching similarities from both directions in order to obtain more reciprocal scores as done similarly in other works [45, 26].
As shown in Fig. 1, we first feed the augmented correlation map to the aforementioned Transformer aggregator. Then we transpose the output, swapping the pair of dimensions in order to concatenate with the embedded feature from the other image, and feed into the subsequent another aggregator. Note that we share the parameters of the Transformer aggregators to obtain reciprocal scores. Formally, we define the whole process as following:
S = T ([Cl,P(Dlt)]Ll=1 + Epos), C′ = T ([(Sl)T,P(Dls)]Ll=1 + Epos),
(2)
where CT(i, j) = C(j, i) denotes swapping the pair of dimensions corresponding to the source and target images; S denotes the intermediate correlation map before swapping the axis. Note that NC-Net [45] proposed a similar procedure, but instead of processing serially, they separately process the correlation map and its transposed version and add the outputs, which is designed to produce
a correlation map invariant to the particular order of the input images. Unlike this, we process the correlation map serially, first aggregating one pair of dimensions and then further aggregating with respect to the other pair. In this way, the subsequent attention layer is given more consistent matching scores as an input, allowing further reduction of inconsistent matching scores. We include an ablation study to justify our choice in Section 4.4
Residual Connection. At the initial phase when the correlation map is fed into the Transformers, noisy score maps are inferred due to randomly-initialized parameters, which could complicate the learning process. To stabilize the learning process and provide a better initialization for the matching, we employ the residual connection. Specifically, we enforce the cost aggregation networks to estimate the residual correlation by adding residual connection around aggregation networks.
3.5 Training
Data Augmentation. Transformer is well known for lacking some of inductive bias and its datahungry nature thus necessitates a large quantity of training data to be fed [61, 10]. Recent methods [55, 56, 32] that employ the Transformer to address Computer Vision tasks have empirically shown that data augmentation techniques have positive impact on performance. However, in correspondence task, the question of to what extent can data augmentation affect the performance has not yet been properly addressed. From the experiments, we empirically find that data augmentation has positive impacts on performance in semantic correspondence with Transformers as reported in Section 4.4. To apply data augmentation [6, 2] with predetermined probabilities to input images at random. Specifically, 50% of the time, we randomly crop the input image, and independently for each augmentation function used in [6], we set the probability for applying the augmentation as 20%. More details can be found in supplementary material.
Training Objective. As in [37, 39, 35], we assume that the ground-truth keypoints are given for each pair of images. We first average the stack of refined correlation maps C′ ∈ Rhw×hw×L to obtain C′′ ∈ Rhw×hw and then transform it into a dense flow field Fpred using soft-argmax operator [26]. Subsequently, we compare the predicted dense flow field with the ground-truth flow field FGT obtained by following the protocol of [37] using input keypoints. For the training objective, we utilize Average End-Point Error (AEPE) [34], computed by averaging the Euclidean distance between the ground-truth and estimated flow. We thus formulate the objective function as L = ‖FGT − Fpred‖2.
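For concreteness, a minimal sketch of the soft-argmax flow conversion and the AEPE objective is given below; the temperature value and coordinate conventions are illustrative assumptions rather than the exact settings of [26, 34].

```python
import torch
import torch.nn.functional as F

def soft_argmax_flow(corr, h, w, temperature=0.02):
    """corr: (B, hw, hw) averaged correlation map C''. Returns a dense flow
    field (B, 2, h, w) as the softmax-weighted average of candidate target
    coordinates minus the source coordinates (soft-argmax)."""
    B = corr.size(0)
    prob = F.softmax(corr / temperature, dim=-1)                      # (B, hw, hw)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float()       # (hw, 2)
    expected = prob @ grid                                            # (B, hw, 2)
    return (expected - grid.unsqueeze(0)).transpose(1, 2).reshape(B, 2, h, w)

def aepe(flow_pred, flow_gt):
    """Average End-Point Error: mean Euclidean distance between flow fields."""
    return torch.norm(flow_pred - flow_gt, p=2, dim=1).mean()
```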
4 Experiments
4.1 Implementation Details
For the backbone feature extractor, we use ResNet-101 [14] pre-trained on ImageNet [8] and, following [37], extract the features from the best subset of layers. Other backbone features can also be used, and we analyze the effect of various backbone features in the following ablation study. For the hyper-parameters of the Transformer encoder, we set the depth to 1 and the number of heads to 6. We resize the spatial size of the input image pairs to 256×256, and the sequence of selected feature maps is resized to 16×16. We use a learnable positional embedding [10] instead of a fixed one [61]. We implemented our network using PyTorch [40], and the AdamW [33] optimizer with an initial learning rate of 3e−5 for the CATs layers and 3e−6 for the backbone features is used, which we gradually decrease during training.
4.2 Experimental Settings
In this section, we conduct comprehensive experiments for semantic correspondence, by evaluating our approach through comparisons to state-of-the-art methods including CNNGeo [42], A2Net [49], WeakAlign [43], NC-Net [45], RTNs [23], SFNet [26], HPF [37], DCC-Net [17], ANC-Net [27], DHPF [39], SCOT [31], GSF [21], and CHMNet [35]. In Section 4.3, we first evaluate matching results on several benchmarks with quantitative measures, and then provide an analysis of each component in our framework in Section 4.4. For more implementation details, please refer to our implementation available at https://github.com/SunghwanHong/CATs.
Datasets. SPair-71k [38] provides a total of 70,958 image pairs with extreme and diverse viewpoint and scale variations, and rich annotations for each image pair, e.g., keypoints, scale difference, truncation and occlusion difference, and a clear data split. Previously, for semantic matching, most of the datasets were limited to a small quantity with similar viewpoints and scales [11, 12]. As our network relies on the Transformer, which requires a large amount of data for training, SPair-71k [38] makes the use of the Transformer in our model feasible. We also consider PF-PASCAL [12], containing 1,351 image pairs from 20 categories, and PF-WILLOW [11], containing 900 image pairs from 4 categories, with each dataset providing corresponding ground-truth annotations.
Evaluation Metric. For evaluation on SPair-71k [38], PF-WILLOW [11], and PF-PASCAL [12], we employ a percentage of correct keypoints (PCK), computed as the ratio of estimated keypoints within the threshold from ground-truths to the total number of keypoints. Given predicted keypoint kpred and ground-truth keypoint kGT, we count the number of predicted keypoints that satisfy following condition: d(kpred, kGT) ≤ α ·max(H,W ), where d( · ) denotes Euclidean distance; α
denotes a threshold which we evaluate on PF-PASCAL with αimg, SPair-71k and PF-WILLOW with αbbox; H and W denote height and width of the object bounding box or entire image, respectively.
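The PCK measure defined above can be computed, for instance, as in the short sketch below (a hedged illustration; variable names are placeholders).

```python
import numpy as np

def pck(kp_pred, kp_gt, alpha, ref_size):
    """kp_pred, kp_gt: (N, 2) arrays of predicted / ground-truth keypoints;
    ref_size: max(H, W) of the object bounding box (alpha_bbox) or of the image
    (alpha_img). Returns the fraction of keypoints within the threshold."""
    dist = np.linalg.norm(kp_pred - kp_gt, axis=1)
    return float((dist <= alpha * ref_size).mean())

# Example: PCK@0.1 with a 120x80 object bounding box
# score = pck(pred, gt, alpha=0.1, ref_size=max(120, 80))
```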
4.3 Matching Results
For a fair comparison, we follow the evaluation protocol of [37] for SPair-71k, in which our network is trained on the training split and evaluated on the test split. Similarly, for PF-PASCAL and PF-WILLOW, following the common evaluation protocol of [13, 23, 17, 37, 39], we train our network on the training split of PF-PASCAL [12] and then evaluate on the test splits of PF-PASCAL [12] and PF-WILLOW [11]. All the results of other methods are reported under the identical setting.
Table 1 summarizes quantitative results on SPair-71k [38], PF-PASCAL [12] and PF-WILLOW [11]. We note whether each method leverages multi-level features and fine-tunes the backbone features in order to ensure a fair comparison. We additionally denote the types of cost aggregation. Generally, our CATs outperforms other methods over all the benchmarks. This is also confirmed by the results on SPair-71k, as shown in Table 2, where the proposed method outperforms other methods by a large margin. Note that CATs† reports a lower PCK than that of CHM, and this is because CHM fine-tunes its backbone networks while CATs† does not. Fig. 5 visualizes qualitative results for extremely challenging image pairs. We observe that, compared to current state-of-the-art methods [31, 39], our method is capable of suppressing noisy scores and finding accurate correspondences in cases with large scale and geometric variations.
It is notable that CATs generally reports a lower PCK on PF-WILLOW [11] compared to other state-of-the-art methods. This is because the Transformer is well known for lacking some inductive biases. When we evaluate on PF-WILLOW, we infer with the model trained on the training split of PF-PASCAL, which only contains 1,351 image pairs, and as only a relatively small quantity of image pairs is available within the PF-PASCAL training split, the Transformer shows low generalization power. This demonstrates that the Transformer-based architecture indeed requires a means to compensate for the lack of inductive bias, e.g., data augmentation.
4.4 Ablation Study
In this section we present an ablation analysis to validate the critical components of our architecture, and provide an analysis on the use of different backbone features and data augmentation. We train all the variants on the training split of SPair-71k [38] when evaluating on SPair-71k, and train on PF-PASCAL [12] when evaluating on PF-PASCAL. We measure the PCK, and each ablation experiment is conducted under the same experimental setting for a fair comparison.
Network Architecture. Table 3 shows the analysis on key components in our architecture. There are four key components we analyze for the ablation study, including appearance modelling, multilevel aggregation, swapping self-attention, and residual connection.
We first define the model without any of these components as the baseline, which simply feeds the correlation map into the self-attention layer. We evaluate on the SPair-71k benchmark by progressively adding each key component. From I to V, we observe a consistent increase in performance as each component is added. II shows a large improvement in performance, which demonstrates that the appearance modelling enables the model to refine ambiguous or noisy matching scores. Although III brings a relatively small increase in PCK, it shows that the proposed model successfully aggregates the multi-level correlation maps. Furthermore, IV and V show an apparent increase, proving the significance of both components.
Feature Backbone. As shown in Table 4, we explore the impact of different feature backbones on the performance on SPair-71k [38] and PF-PASCAL [12]. We report the results of models with the backbone networks frozen. The top two rows are models with DeiT-B [55], the next two rows use DINO [4], and the rest use ResNet-101 [14] as the backbone. Specifically, for the subscript single with DeiT-B and DINO, we use the feature map extracted at the last layer as the single-level feature, while for the subscript all, every feature map from the 12 layers is used for cost construction. For ResNet-101 with subscript single, we use a single-level feature cropped at conv4-23, while for multi, we use the best layer subset provided by [37]. Summarizing the results, we observe that leveraging multi-level features shows apparent improvements in performance, proving the effectiveness of the multi-level aggregation introduced by our method. It is worth noting that DINO, which excels more at dense tasks than DeiT-B, outperforms DeiT-B when applied to semantic matching. This indicates that fine-tuning the features could further enhance the performance. To the best of our knowledge, we are the first to employ Transformer-based features for semantic matching. It would be an interesting setup to train an end-to-end Transformer-based network, and we hope this work draws attention from the community and proves useful for future work.
Data Augmentation. In Table 5, we compare the PCK performance between our variants and DHPF [39]. We note whether each model is trained with augmentation. For a fair comparison, we evaluate both DHPF [39] and CATs trained on SPair-71k [38] using strong supervision, which assumes that the ground-truth keypoints are given. The results show that, compared to DHPF, a CNN-based method, data augmentation has a larger influence on CATs in terms of performance. This demonstrates not only that we eased the data-hunger problem inherent in Transformers, but also that applying augmentations for matching has positive effects. Augmentation techniques are highly likely to bring improvements in performance, and we hope that future works benefit from this.
Serial swapping. It is apparent that Equation 2 is not designed for an order-invariant output. Different from NC-Net [45], we let the correlation map undergo the self-attention module in a serial manner. We conducted a simple experiment to compare the two approaches. From the experiments, parallel and serial processing on SPair-71k with αbbox = 0.1 obtain a PCK of 40.8 and 42.4, respectively. In light of this, although CATs may not support order invariance, serial processing obtains a higher PCK, as it has a better capability to reduce inconsistent matching scores by additionally processing the already processed cost map; we therefore finalize the architecture to include serial processing.
4.5 Analysis
Visualizing Self-Attention. We visualize the multi-level attention maps obtained from the Transformer aggregator. As shown in Fig. 6, the learned self-attention map at each level exhibits a different aspect. With these self-attentions, our network can leverage multi-level correlations to capture hierarchical semantic feature representations effectively.
Memory and run-time. In Table 6, we show the memory and run-time comparison of CATs to NC-Net [45], SCOT [31], DHPF [39] and CHM [35]. For a fair comparison, the results are obtained using a single NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i7-10700 CPU. We measure the inference time both for the process without counting feature extraction and for the whole process. Thanks to the fast computation nature of Transformers, our method runs considerably faster than the other methods. We also find that, compared to other cost aggregation methods including 4D and 6D convolutions, OT-RHM and RHM, ours shows comparable efficiency in terms of computational cost. Note that NC-Net utilizes a single feature map while the other methods utilize multi-level feature maps. We used the standard self-attention module for implementation, but more advanced and efficient Transformer [32] architectures could reduce the overall memory consumption.
4.6 Limitations
One obvious limitation of CATs is that when the method is applied to non-corresponding images, it would still deliver correspondences, as it lacks the ability to ignore pixels that have no correspondence at all. A straightforward solution would be to include a module that accounts for pixel-wise matching confidence. Another limitation of CATs is its inability to address the task of finding accurate correspondences given multiple objects or non-corresponding objects. Addressing such challenges would be a promising direction for future work.
5 Conclusion
In this paper, we have proposed, for the first time, Transformer-based cost aggregation networks for semantic correspondence which enables aggregating the matching scores computed between input features, dubbed CATs. We have made several architectural designs in the network architecture, including appearance affinity modelling, multi-level aggregation, swapping self-attention, and residual correlation. We have shown that our method surpasses the current state-of-the-art in several benchmarks. Moreover, we have conducted extensive ablation studies to validate our choices and explore its capacity. A natural next step, which we leave for future work, is to examine how CATs could extend its domain to tasks including 3-D reconstruction, semantic segmentation and stitching, and to explore self-supervised learning.
Acknowledgements
This research was supported by the MSIT, Korea, under the ICT Creative Consilience program (IITP-2021-2020-0-01819) and (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) supervised by the IITP and National Research Foundation of Korea (NRF-2021R1C1C1006897). | 1. What is the main contribution of the paper in the field of semantic correspondence?
2. How does the proposed approach utilize Transformers, and what benefits do they bring to the task?
3. Are there any limitations or areas for improvement regarding the use of Transformers in this context?
4. Why do the authors propose concatenating correlation between pixels and appearance embedding, and how does this help disambiguate matches?
5. Can you provide more information about the experiments conducted in the paper, such as the datasets used and the evaluation metrics employed?
6. How do the results of the proposed method compare to those of other state-of-the-art methods, particularly CHM?
7. What are some minor suggestions for improving the clarity and readability of the paper?
8. How does the reviewer assess the originality and overall quality of the paper? | Summary Of The Paper
Review | Summary Of The Paper
The paper deals with the task of semantic correspondence, i.e., finding correspondence between semantically similar images, e.g., the front left paw of two different dogs. In a nutshell, the paper explores the use of Transformers for this problem and demonstrate that the built-in global receptive field is very beneficial for this task. Specifically, the authors improve the cost-aggregation part of a semantic correspondence pipeline, which refines the initial matching costs. The authors propose to concatenate the correlation between pixels (on a feature map from a neural network) with the corresponding appearance embedding, which helps to disambiguate noisy or ambiguous matches (like the left and right eye of a cat as in Figure 2). Results are demonstrated on standard benchmarks.
Review
Originality
Although the paper is not ground-breaking, it demonstrates how Transformer modules can be beneficial for the task of semantic correspondence.
Clarity
The paper is well written, apart from a few typos.
Experiments
In Table 1, I think it makes sense to see an improvement for the dataset SPair-71k because it contains more data. But compared to CHM, the performance of the proposed method is not great; a rank-based metric to compare CHM and CATs would actually go in favor of CHM, when only considering columns where both methods have valid results. This brings me to the next question; why are no results of CHM for PCK threshold of 0.15?
Minor comments
Line 4: Please split the sentence "Compared to previous ..." to make it easier to comprehend.
The abstract is vague and hard to understand without already knowing the typical pipeline for semantic correspondence frameworks.
Post-rebuttal update
After having read all other reviews and the authors' response, I have decided to keep my initial rating and recommend accepting the submission. However, I also agree with the comments from other reviews on the missing justification of using Transformers and cost volumes. Irrespective of the acceptance decision, I strongly encourage the authors to integrate these additional discussions to make the paper stronger. |
NIPS | Title
The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings
Abstract
We examine a class of embeddings based on structured random matrices with orthogonal rows which can be applied in many machine learning applications including dimensionality reduction and kernel approximation. For both the JohnsonLindenstrauss transform and the angular kernel, we show that we can select matrices yielding guaranteed improved performance in accuracy and/or speed compared to earlier methods. We introduce matrices with complex entries which give significant further accuracy improvement. We provide geometric and Markov chain-based perspectives to help understand the benefits, and empirical results which suggest that the approach is helpful in a wider range of applications.
1 Introduction
Embedding methods play a central role in many machine learning applications by projecting feature vectors into a new space (often nonlinearly), allowing the original task to be solved more efficiently. The new space might have more or fewer dimensions depending on the goal. Applications include the Johnson-Lindenstrauss Transform for dimensionality reduction (JLT, Johnson and Lindenstrauss, 1984) and kernel methods with random feature maps (Rahimi and Recht, 2007). The embedding can be costly hence many fast methods have been developed, see §1.1 for background and related work.
We present a general class of random embeddings based on particular structured random matrices with orthogonal rows, which we call random ortho-matrices (ROMs); see §2. We show that ROMs may be used for the applications above, in each case demonstrating improvements over previous methods in statistical accuracy (measured by mean squared error, MSE), in computational efficiency (while providing similar accuracy), or both. We highlight the following contributions:
• In §3: The Orthogonal Johnson-Lindenstrauss Transform (OJLT) for dimensionality reduction. We prove this has strictly smaller MSE than the previous unstructured JLT mechanisms. Further, OJLT is as fast as the fastest previous JLT variants (which are structured).
• In §4: Estimators for the angular kernel (Sidorov et al., 2014) which guarantee better MSE. The angular kernel is important for many applications, including natural language processing (Sidorov et al., 2014), image analysis (Jégou et al., 2011), speaker representations (Schmidt et al., 2014) and tf-idf data sets (Sundaram et al., 2013).
• In §5: Two perspectives on the effectiveness of ROMs to help build intuitive understanding. In §6 we provide empirical results which support our analysis, and show that ROMs are effective for a still broader set of applications. Full details and proofs of all results are in the Appendix. ∗equal contribution
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.1 Background and related work
Our ROMs can have two forms (see §2 for details): (i) a Gort is a random Gaussian matrix conditioned on rows being orthogonal; or (ii) an SD-product matrix is formed by multiplying some number k of SD blocks, each of which is highly structured, typically leading to fast computation of products. Here S is a particular structured matrix, and D is a random diagonal matrix; see §2 for full details. Our SD block generalizes an HD block, where H is a Hadamard matrix, which received previous attention. Earlier approaches to embeddings have explored using various structured matrices, including particular versions of one or other of our two forms, though in different contexts.
For dimensionality reduction, Ailon and Chazelle (2006) used a single HD block as a way to spread out the mass of a vector over all dimensions before applying a sparse Gaussian matrix. Choromanski and Sindhwani (2016) also used just one HD block as part of a larger structure. Bojarski et al. (2017) discussed using k = 3 HD blocks for locality-sensitive hashing methods but gave no concrete results for their application to dimensionality reduction or kernel approximation. All these works, and other earlier approaches (Hinrichs and Vybíral, 2011; Vybíral, 2011; Zhang and Cheng, 2013; Le et al., 2013; Choromanska et al., 2016), provided computational benefits by using structured matrices with less randomness than unstructured iid Gaussian matrices, but none demonstrated accuracy gains.
Yu et al. (2016) were the first to show that Gort-type matrices can yield improved accuracy, but their theoretical result applies only asymptotically for many dimensions, only for the Gaussian kernel and for just one specific orthogonal transformation, which is one instance of the larger class we consider. Their theoretical result does not yield computational benefits. Yu et al. (2016) did explore using a number k of HD blocks empirically, observing good computational and statistical performance for k = 3, but without any theoretical accuracy guarantees. It was left as an open question why matrices formed by a small number of HD blocks can outperform non-discrete transforms.
In contrast, we are able to prove that ROMs yield improved MSE in several settings and for many of them for any number of dimensions. In addition, SD-product matrices can deliver computational speed benefits. We provide initial analysis to understand why k = 3 can outperform the state-ofthe-art, why odd k yields better results than even k, and why higher values of k deliver decreasing additional benefits (see §3 and §5).
2 The family of Random Ortho-Matrices (ROMs)
Random ortho-matrices (ROMs) are taken from two main classes of distributions defined below that require the rows of sampled matrices to be orthogonal. A central theme of the paper is that this orthogonal structure can yield improved statistical performance. We shall use bold uppercase (e.g. M) to denote matrices and bold lowercase (e.g. x) for vectors.
Gaussian orthogonal matrices. Let G be a random matrix taking values in Rm×n with iid N (0, 1) elements, which we refer to as an unstructured Gaussian matrix. The first ROM distribution we consider yields the random matrix Gort, which is defined as a random Rn×n matrix given by first taking the rows of the matrix to be a uniformly random orthonormal basis, and then independently scaling each row, so that the rows marginally have multivariate Gaussian N (0, I) distributions. The random variable Gort can then be extended to non-square matrices by either stacking independent copies of the Rn×n random matrices, and deleting superfluous rows if necessary. The orthogonality of the rows of this matrix has been observed to yield improved statistical properties for randomized algorithms built from the matrix in a variety of applications.
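For illustration, one way to sample such a matrix (a sketch, not necessarily the construction used in the experiments below) is to orthogonalize an unstructured Gaussian matrix and rescale the rows so that they marginally have Gaussian norms:

```python
import numpy as np

def sample_g_ort(m, n, rng=np.random.default_rng()):
    """Sample an m x n Gort matrix: rows form (part of) a uniformly random
    orthonormal basis, independently rescaled to have the marginal norm of an
    N(0, I_n) vector. For m > n, independent n x n blocks are stacked."""
    blocks = []
    while sum(b.shape[0] for b in blocks) < m:
        Q, R = np.linalg.qr(rng.standard_normal((n, n)))
        Q = Q * np.sign(np.diag(R))                        # sign fix -> Haar-distributed Q
        norms = np.sqrt(rng.chisquare(df=n, size=n))       # chi_n marginal row norms
        blocks.append(norms[:, None] * Q)
    return np.vstack(blocks)[:m]
```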
SD-product matrices. Our second class of distributions is motivated by the desire to obtain similar statistical benefits of orthogonality to Gort, whilst gaining computational efficiency by employing more structured matrices. We call this second class SD-product matrices. These take the more structured form \prod_{i=1}^{k} SD_i, where S = \{s_{i,j}\} ∈ R^{n×n} has orthogonal rows and |s_{i,j}| = \frac{1}{\sqrt{n}} for all i, j ∈ \{1, \ldots, n\}; and the (D_i)_{i=1}^{k} are independent diagonal matrices described below. By \prod_{i=1}^{k} SD_i, we mean the matrix product (SD_k) \ldots (SD_1). This class includes as particular cases several recently introduced random matrices (e.g. Andoni et al., 2015; Yu et al., 2016), where good empirical performance was observed. We go further to establish strong theoretical guarantees, see §3 and §4.
A prominent example of an S matrix is the normalized Hadamard matrix H, defined recursively by
H_1 = (1), and then for i > 1, H_i = \frac{1}{\sqrt{2}} \begin{pmatrix} H_{i-1} & H_{i-1} \\ H_{i-1} & -H_{i-1} \end{pmatrix}. Importantly, matrix-vector products with H are computable in O(n \log n) time via the fast Walsh-Hadamard transform, yielding large computational savings. In addition, H matrices enable a significant space advantage: since the fast Walsh-Hadamard transform can be computed without explicitly storing H, only O(n) space is required to store the diagonal elements of (D_i)_{i=1}^{k}. Note that these H_n matrices are defined only for n a power of 2, but if needed, one can always adjust data by padding with 0s to enable the use of ‘the next larger’ H, doubling the number of dimensions in the worst case.
Matrices H are representatives of a much larger family in S which also attains computational savings. These are L2-normalized versions of Kronecker-product matrices of the form A_1 ⊗ \ldots ⊗ A_l ∈ R^{n×n} for l ∈ N, where ⊗ stands for a Kronecker product and blocks A_i ∈ R^{d×d} have entries of the same magnitude and pairwise orthogonal rows each. For these matrices, matrix-vector products are computable in O(n(2d-1)\log_d(n)) time (Zhang et al., 2015). S includes also the Walsh matrices W = \{w_{i,j}\} ∈ R^{n×n}, where w_{i,j} = \frac{1}{\sqrt{n}}(-1)^{i_{N-1}j_0 + \ldots + i_0 j_{N-1}} and i_{N-1}\ldots i_0, j_{N-1}\ldots j_0 are binary representations of i and j respectively.
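As a minimal sketch, the fast Walsh-Hadamard transform below applies the normalized H to a vector in O(n log n) time, with n a power of 2:

```python
import numpy as np

def fwht(x):
    """Multiply the normalized Hadamard matrix H by the vector x in O(n log n);
    n = len(x) must be a power of 2."""
    y = np.array(x, dtype=float)
    n, h = len(y), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b                # butterfly step of the Sylvester recursion
            y[i + h:i + 2 * h] = a - b
        h *= 2
    return y / np.sqrt(n)                     # apply the overall 1/sqrt(n) normalization
```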
For diagonal (Di)ki=1, we mainly consider Rademacher entries leading to the following matrices.
Definition 2.1. The S-Rademacher random matrix with k ∈ N blocks is below, where (D^{(R)}_i)_{i=1}^{k} are diagonal with iid Rademacher random variables [i.e. Unif({±1})] on the diagonals:
M^{(k)}_{SR} = \prod_{i=1}^{k} S D^{(R)}_i. \quad (1)
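Taking S = H, applying an S-Rademacher matrix to a vector then only requires the k diagonal sign vectors and repeated calls to the fwht sketch above (an illustration; the function names are ours):

```python
import numpy as np

def sd_product_apply(x, sign_diags):
    """Compute M_SR^(k) x = (S D_k) ... (S D_1) x with S = H: the diagonals are
    applied right-to-left, i.e. sign_diags[0] = D_1 acts first. Storage is O(kn)."""
    y = np.asarray(x, dtype=float)
    for d in sign_diags:
        y = fwht(d * y)
    return y

# Example with k = 3 blocks in dimension n = 8:
# rng = np.random.default_rng(0)
# diags = [rng.choice([-1.0, 1.0], size=8) for _ in range(3)]
# y = sd_product_apply(np.ones(8), diags)
```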
Having established the two classes of ROMs, we next apply them to dimensionality reduction.
3 The Orthogonal Johnson-Lindenstrauss Transform (OJLT)
Let X ⊂ R^n be a dataset of n-dimensional real vectors. The goal of dimensionality reduction via random projections is to transform linearly each x ∈ X by a random mapping x \overset{F}{\mapsto} x', where F : R^n → R^m for m < n, such that for any x, y ∈ X the following holds: (x')^\top y' ≈ x^\top y. If we furthermore have E[(x')^\top y'] = x^\top y then the dot-product estimator is unbiased. In particular, this dimensionality reduction mechanism should in expectation preserve information about vectors' norms, i.e. we should have E[‖x'‖_2^2] = ‖x‖_2^2 for any x ∈ X. The standard JLT mechanism uses the randomized linear map F = \frac{1}{\sqrt{m}} G, where G ∈ R^{m×n} is as
in §2, requiring mn multiplications to evaluate. Several fast variants (FJLTs) have been proposed by replacing G with random structured matrices, such as sparse or circulant Gaussian matrices (Ailon and Chazelle, 2006; Hinrichs and Vybíral, 2011; Vybíral, 2011; Zhang and Cheng, 2013). The fastest of these variants has O(n log n) time complexity, but at a cost of higher MSE for dot-products.
Our Orthogonal Johnson-Lindenstrauss Transform (OJLT) is obtained by replacing the unstructured random matrix G with a sub-sampled ROM from §2: either Gort, or a sub-sampled version M (k),sub SR of the S-Rademacher ROM, given by sub-sampling rows from the left-most S matrix in the product. We sub-sample since m < n. We typically assume uniform sub-sampling without replacement. The resulting dot-product estimators for vectors x,y ∈ X are given by:
\hat{K}^{base}_m(x,y) = \frac{1}{m}(Gx)^\top(Gy) \quad \text{[unstructured iid baseline, previous state-of-the-art accuracy]},
\hat{K}^{ort}_m(x,y) = \frac{1}{m}(G_{ort}x)^\top(G_{ort}y), \qquad \hat{K}^{(k)}_m(x,y) = \frac{1}{m}\big(M^{(k),sub}_{SR}x\big)^\top\big(M^{(k),sub}_{SR}y\big). \quad (2)
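An illustrative implementation of the three estimators in Eq. (2) is sketched below; it reuses sample_g_ort and fwht from the earlier sketches, materializes the required Hadamard rows explicitly for clarity, and the function names are placeholders.

```python
import numpy as np

def hadamard_rows(n, rows):
    """Selected rows of the normalized Hadamard matrix,
    H[r, c] = (-1)^popcount(r & c) / sqrt(n); O(mn) construction, fine for a sketch."""
    parity = np.array([[bin(r & c).count("1") & 1 for c in range(n)] for r in rows])
    return (1.0 - 2.0 * parity) / np.sqrt(n)

def dot_base(x, y, m, rng):
    """Unstructured iid baseline estimate of <x, y>."""
    G = rng.standard_normal((m, len(x)))
    return (G @ x) @ (G @ y) / m

def dot_ort(x, y, m, rng):
    """Gort estimate, reusing sample_g_ort from the earlier sketch."""
    G = sample_g_ort(m, len(x), rng)
    return (G @ x) @ (G @ y) / m

def dot_ojlt(x, y, m, sign_diags, rng):
    """OJLT estimate with k = len(sign_diags) SD blocks; rows of the left-most
    H are sub-sampled uniformly without replacement."""
    n = len(x)
    Hsub = hadamard_rows(n, rng.choice(n, size=m, replace=False))
    def project(v):
        z = np.asarray(v, dtype=float)
        for d in sign_diags[:-1]:              # inner SD blocks, applied in full via FWHT
            z = fwht(d * z)
        return Hsub @ (sign_diags[-1] * z)     # sub-sampled outer SD block
    return project(x) @ project(y) / m
```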
We contribute the following closed-form expressions, which exactly quantify the mean-squared error (MSE) for these three estimators. Precisely, the MSE of an estimator \hat{K}(x,y) of the inner product 〈x,y〉 for x,y ∈ X is defined to be MSE(\hat{K}(x,y)) = \mathbb{E}\big[(\hat{K}(x,y) - 〈x,y〉)^2\big]. See the Appendix
for detailed proofs of these results and all others in this paper.
Lemma 3.1. The unstructured JLT dot-product estimator \hat{K}^{base}_m of x,y ∈ R^n using m-dimensional random feature maps is unbiased, with MSE(\hat{K}^{base}_m(x,y)) = \frac{1}{m}\big((x^\top y)^2 + ‖x‖_2^2‖y‖_2^2\big).
Theorem 3.2. The estimator \hat{K}^{ort}_m is unbiased and satisfies, for n ≥ 4:
MSE(\hat{K}^{ort}_m(x,y)) = MSE(\hat{K}^{base}_m(x,y)) + \frac{m}{m-1}\Bigg[\frac{‖x‖_2^2‖y‖_2^2 n^2}{4 I(n-3) I(n-4)}\Bigg(\Big(\frac{1}{n} - \frac{1}{n+2}\Big)\big(I(n-3) - I(n-1)\big) I(n-4)\Big[\cos^2(\theta) + \frac{1}{2}\Big] + I(n-1)\big(I(n-4) - I(n-2)\big)\Big(\frac{1}{n-2} - \frac{1}{n}\Big)\Big[\cos^2(\theta) - \frac{1}{2}\Big]\Bigg) - 〈x,y〉^2\Bigg], \quad (3)
where I(n) = \int_0^{\pi} \sin^n(x)\,dx = \frac{\sqrt{\pi}\,\Gamma((n+1)/2)}{\Gamma(n/2+1)}.
Theorem 3.3 (Key result). The OJLT estimator K̂(k)m (x,y) with k blocks, using m-dimensional random feature maps and uniform sub-sampling policy without replacement, is unbiased with
MSE(\hat{K}^{(k)}_m(x,y)) = \frac{1}{m}\Big(\frac{n-m}{n-1}\Big)\Bigg(\big((x^\top y)^2 + ‖x‖^2‖y‖^2\big) + \sum_{r=1}^{k-1}\frac{(-1)^r 2^r}{n^r}\big(2(x^\top y)^2 + ‖x‖^2‖y‖^2\big) + \frac{(-1)^k 2^k}{n^{k-1}}\sum_{i=1}^{n} x_i^2 y_i^2\Bigg). \quad (4)
Proof (Sketch). For k = 1, the random projection matrix is given by sub-sampling rows from SD_1, and the computation can be carried out directly. For k > 1, the proof proceeds by induction. The random projection matrix in the general case is given by sub-sampling rows of the matrix SD_k · · · SD_1. By writing the MSE as an expectation and using the law of conditional expectations, conditioning on the values of the first k − 1 random matrices D_{k−1}, . . . , D_1, the statement of the theorem for 1 SD block and for k − 1 SD blocks can be neatly combined to yield the result.
To our knowledge, it has not previously been possible to provide theoretical guarantees that SD-product matrices outperform iid matrices. Combining Lemma 3.1 with Theorem 3.3 yields the following important result.
Corollary 3.4 (Theoretical guarantee of improved performance). Estimators K̂(k)m (subsampling without replacement) yield guaranteed lower MSE than K̂basem .
It is not yet clear when \hat{K}^{ort}_m is better or worse than \hat{K}^{(k)}_m; we explore this empirically in §6. Theorem 3.3 shows that there are diminishing MSE benefits to using a large number k of SD blocks. Interestingly, odd k is better than even: it is easy to observe that MSE(\hat{K}^{(2k-1)}_m(x,y)) < MSE(\hat{K}^{(2k)}_m(x,y)) > MSE(\hat{K}^{(2k+1)}_m(x,y)). These observations, and those in §5, help to understand why empirically k = 3 was previously observed to work well (Yu et al., 2016).
If we take S to be a normalized Hadamard matrix H, then even though we are using sub-sampling, and hence the full computational benefits of the Walsh-Hadamard transform are not available, still K̂ (k) m achieves improved MSE compared to the base method with less computational effort, as follows.
Lemma 3.5. There exists an algorithm (see Appendix for details) which computes an embedding for a given datapoint x using \hat{K}^{(k)}_m with S set to H and uniform sub-sampling policy in expected time \min\{O((k-1)n\log(n) + nm - \frac{(m-1)m}{2}),\, O(kn\log(n))\}. Note that for m = ω(k\log(n)) or if k = 1, the time complexity is smaller than the brute force Θ(nm). The algorithm uses a simple observation that one can reuse calculations conducted for the upper half of the Hadamard matrix while performing computations involving rows from its other half, instead of running these calculations from scratch (details in the Appendix).
An alternative to sampling without replacement is deterministically to choose the first m rows. In our experiments in §6, these two approaches yield the same empirical performance, though we expect
that the deterministic method could perform poorly on adversarially chosen data. The first m rows approach can be realized in time O(n log(m) + (k − 1)n log(n)) per datapoint.
Theorem 3.3 is a key result in this paper, demonstrating that SD-product matrices yield both statistical and computational improvements compared to the base iid procedure, which is widely used in practice. We next show how to obtain further gains in accuracy.
3.1 Complex variants of the OJLT
We show that the MSE benefits of Theorem 3.3 may be markedly improved by using SD-product matrices with complex entries M^{(k)}_{SH}. Specifically, we consider the variant S-Hybrid random matrix below, where D^{(U)}_k is a diagonal matrix with iid Unif(S^1) random variables on the diagonal, independent of (D^{(R)}_i)_{i=1}^{k-1}, and S^1 is the unit circle of C. We use the real part of the Hermitian product between projections as a dot-product estimator; recalling the definitions of §2, we use:
M^{(k)}_{SH} = S D^{(U)}_k \prod_{i=1}^{k-1} S D^{(R)}_i, \qquad \hat{K}^{H,(k)}_m(x,y) = \frac{1}{m}\,\mathrm{Re}\Big[\big(M^{(k),sub}_{SH} x\big)^{*}\big(M^{(k),sub}_{SH} y\big)\Big]. \quad (5)
Remarkably, this complex variant yields exactly half the MSE of the OJLT estimator.
Theorem 3.6. The estimator \hat{K}^{H,(k)}_m(x,y), applying uniform sub-sampling without replacement, is unbiased and satisfies: MSE(\hat{K}^{H,(k)}_m(x,y)) = \frac{1}{2}\,MSE(\hat{K}^{(k)}_m(x,y)).
This large factor of 2 improvement could instead be obtained by doubling m for \hat{K}^{(k)}_m. However, this would require doubling the number of parameters for the transform, whereas the S-Hybrid estimator requires additional storage only for the complex parameters in the matrix D^{(U)}_k. Strikingly, it is straightforward to extend the proof of Theorem 3.6 (see Appendix) to show that rather than taking the complex random variables in M^{(k),sub}_{SH} to be Unif(S^1), it is possible to take them to be Unif({1, -1, i, -i}) and still obtain exactly the same benefit in MSE.
Theorem 3.7. For the estimator K̂H,(k)m defined in Equation (5): replacing the random matrix D(U)k (which has iid Unif(S1) elements on the diagonal) with instead a random diagonal matrix having iid Unif({1,−1, i,−i}) elements on the diagonal, does not affect the MSE of the estimator.
It is natural to wonder if using an SD-product matrix with more complex random variables (for all SD blocks) would improve performance still further. However, interestingly, this appears not to be the case; details are provided in the Appendix §8.7.
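A sketch of the S-Hybrid estimator of Eq. (5) follows, reusing fwht and hadamard_rows from the earlier sketches; the Unif(S^1) diagonal and the Hermitian product are as described above, and the function name is a placeholder.

```python
import numpy as np

def dot_hybrid(x, y, m, k, rng=np.random.default_rng()):
    """S-Hybrid OJLT estimate: k-1 Rademacher SD blocks, then one SD block with
    iid Unif(S^1) complex signs, sub-sampled without replacement; the real part
    of the Hermitian product of the projections is returned."""
    n = len(x)
    real_diags = [rng.choice([-1.0, 1.0], size=n) for _ in range(k - 1)]
    complex_diag = np.exp(2j * np.pi * rng.random(n))      # Unif(S^1) entries
    Hsub = hadamard_rows(n, rng.choice(n, size=m, replace=False))
    def project(v):
        z = np.asarray(v, dtype=float)
        for d in real_diags:
            z = fwht(d * z)
        return Hsub @ (complex_diag * z)
    return float(np.real(np.vdot(project(x), project(y)))) / m   # vdot conjugates its first arg
```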
3.2 Sub-sampling with replacement
Our results above focus on SD-product matrices where rows have been sub-sampled without replacement. Sometimes (e.g. for parallelization) it can be convenient instead to sub-sample with replacement. As might be expected, this leads to worse MSE, which we can quantify precisely.
Theorem 3.8. For each of the estimators \hat{K}^{(k)}_m and \hat{K}^{H,(k)}_m, if uniform sub-sampling with (rather than without) replacement is used then the MSE is worsened by a multiplicative constant of \frac{n-1}{n-m}.
4 Kernel methods with ROMs
ROMs can also be used to construct high-quality random feature maps for non-linear kernel approximation. We analyze here the angular kernel, an important example of a Pointwise Nonlinear Gaussian kernel (PNG), discussed in more detail at the end of this section.
Definition 4.1. The angular kernel K_{ang} is defined on R^n by K_{ang}(x,y) = 1 - \frac{2\theta_{x,y}}{\pi}, where θ_{x,y} is the angle between x and y.
To employ random feature style approximations to this kernel, we first observe it may be rewritten as
Kang(x,y) = E [sign(Gx)sign(Gy)] ,
where G ∈ R1×n is an unstructured isotropic Gaussian vector. This motivates approximations of the form:
\hat{K}^{ang}_m(x,y) = \frac{1}{m}\,\mathrm{sign}(Mx)^\top \mathrm{sign}(My), \quad (6)
where M ∈ Rm×n is a random matrix, and the sign function is applied coordinate-wise. Such kernel estimation procedures are heavily used in practice (Rahimi and Recht, 2007), as they allow fast approximate linear methods to be used (Joachims, 2006) for inference tasks. If M = G, the unstructured Gaussian matrix, then we obtain the standard random feature estimator. We shall contrast this approach against the use of matrices from the ROMs family.
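As a small illustration of Eq. (6) and of Definition 4.1, the sketch below estimates the angular kernel with an arbitrary projection matrix M (e.g. an unstructured Gaussian matrix, or a Gort sample from the earlier sketch); names are placeholders.

```python
import numpy as np

def angular_kernel(x, y):
    """Exact K_ang(x, y) = 1 - 2 * theta_{x,y} / pi."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return 1.0 - 2.0 * np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

def angular_kernel_estimate(x, y, M):
    """Random-feature estimate of Eq. (6) for any m x n random matrix M."""
    return float(np.sign(M @ x) @ np.sign(M @ y)) / M.shape[0]

# rng = np.random.default_rng(0)
# x, y = rng.standard_normal(64), rng.standard_normal(64)
# M = rng.standard_normal((512, 64))   # unstructured baseline; swap in sample_g_ort(512, 64, rng)
# print(angular_kernel(x, y), angular_kernel_estimate(x, y, M))
```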
When constructing random feature maps for kernels, very often m > n. In this case, our structured mechanism can be applied by concatenating some number of independent structured blocks. Our theoretical guarantees will be given just for one block, but can easily be extended to a larger number of blocks since different blocks are independent.
The standard random feature approximation K̂ang,basem for approximating the angular kernel is defined by taking M to be G, the unstructured Gaussian matrix, in Equation (6), and satisfies the following.
Lemma 4.2. The estimator \hat{K}^{ang,base}_m is unbiased and MSE(\hat{K}^{ang,base}_m(x,y)) = \frac{4\theta_{x,y}(\pi - \theta_{x,y})}{m\pi^2}.
The MSE of an estimator K̂ang(x,y) of the true angular kernel Kang(x,y) is defined analogously to the MSE of an estimator of the dot product, given in §3. Our main result regarding angular kernels states that if we instead take M = Gort in Equation (6), then we obtain an estimator K̂ang,ortm with strictly smaller MSE, as follows.
Theorem 4.3. Estimator \hat{K}^{ang,ort}_m is unbiased and satisfies:
MSE(\hat{K}^{ang,ort}_m(x,y)) < MSE(\hat{K}^{ang,base}_m(x,y)).
We also derive a formula for the MSE of an estimator \hat{K}^{ang,M}_m of the angular kernel which replaces G with an arbitrary random matrix M and uses m random feature maps. The formula is helpful to see how the quality of the estimator depends on the probabilities that the projections of the rows of M are contained in some particular convex regions of the 2-dimensional space L_{x,y} spanned by datapoints x and y. For an illustration of the geometric definitions introduced in this Section, see Figure 1. The formula depends on probabilities involving events A_i = \{\mathrm{sgn}((r^i)^\top x) ≠ \mathrm{sgn}((r^i)^\top y)\}, where r^i stands for the ith row of the structured matrix. Notice that A_i = \{r^i_{proj} ∈ C_{x,y}\}, where r^i_{proj} stands for the projection of r^i into L_{x,y} and C_{x,y} is the union of two cones in L_{x,y}, each of angle θ_{x,y}.
Theorem 4.4. Estimator \hat{K}^{ang,M}_m satisfies the following, where δ_{i,j} = P[A_i ∩ A_j] - P[A_i] P[A_j]:
MSE(\hat{K}^{ang,M}_m(x,y)) = \frac{1}{m^2}\Big[m - \sum_{i=1}^{m}(1 - 2P[A_i])^2\Big] + \frac{4}{m^2}\sum_{i=1}^{m}\Big(P[A_i] - \frac{\theta_{x,y}}{\pi}\Big)^2 + \sum_{i \neq j}\delta_{i,j}.
Note that the probabilities P[A_i] and δ_{i,j} depend on the choice of M. It is easy to prove that for unstructured G and for G_{ort} we have P[A_i] = \frac{\theta_{x,y}}{\pi}. Further, from the independence of the rows of G, δ_{i,j} = 0 for i ≠ j. For unstructured G we obtain Lemma 4.2. Interestingly, we see that to prove Theorem 4.3, it suffices to show δ_{i,j} < 0, which is the approach we take (see Appendix). If we replace G with M^{(k)}_{SR}, then the quantity Δ = P[A_i] - \frac{\theta_{x,y}}{\pi} does not depend on i. Hence, the angular kernel estimator based on Hadamard matrices gives a smaller MSE than the unstructured estimator if and only if \sum_{i \neq j}\delta_{i,j} + mΔ^2 < 0. It is not yet clear if this holds in general.
As alluded to at the beginning of this section, the angular kernel may be viewed as a member of a wide family of kernels known as Pointwise Nonlinear Gaussian kernels.
Definition 4.5. For a given function f , the Pointwise Nonlinear Gaussian kernel (PNG) Kf is defined by Kf (x,y) = E [ f(gTx)f(gTy) ] , where g is a Gaussian vector with i.i.d N (0, 1) entries.
Many prominent examples of kernels (Williams, 1998; Cho and Saul, 2009) are PNGs. Wiener’s tauberian theorem shows that all stationary kernels may be approximated arbitrarily well by sums of PNGs (Samo and Roberts, 2015). In future work we hope to explore whether ROMs can be used to achieve statistical benefit in estimation tasks associated with a wider range of PNGs.
5 Understanding the effectiveness of orthogonality
Here we build intuitive understanding for the effectiveness of ROMs. We examine geometrically the angular kernel (see §4), then discuss a connection to random walks over orthogonal matrices.
Angular kernel. As noted above for the Gort-mechanism, smaller MSE than that for unstructured G is implied by the inequality P[Ai ∩Aj ] < P[Ai]P[Aj ], which is equivalent to: P[Aj |Ai] < P[Aj ]. Now it becomes clear why orthogonality is crucial. Without loss of generality take: i = 1, j = 2, and let g1 and g2 be the first two rows of Gort.
Consider first the extreme case (middle of left part of Figure 1), where all vectors are 2-dimensional. Recall definitions from just after Theorem 4.3. If g^1 is in C_{x,y} then it is much less probable for g^2 also to belong to C_{x,y}. In particular, if θ < π/2 then the probability is zero. That implies the inequality. On the other hand, if g^1 is perpendicular to L_{x,y} then conditioning on A_i does not have any effect on the probability that g^2 belongs to C_{x,y} (left subfigure of Figure 1). In practice, with high probability the angle φ between g^1 and L_{x,y} is close to π/2, but is not exactly π/2. That again implies that, conditioned on the projection g^1_p of g^1 into L_{x,y} being in C_{x,y}, the more probable directions of g^2_p are perpendicular to g^1_p (see: ellipsoid-like shape in the right subfigure of Figure 1, which is the projection of the sphere taken from the (n-1)-dimensional space orthogonal to g^1 into L_{x,y}). This makes it less probable for g^2_p to be also in C_{x,y}. The effect is subtle since φ ≈ π/2, but this is what provides superiority of the orthogonal transformations over state-of-the-art ones in the angular kernel approximation setting.
Markov chain perspective. We focus on Hadamard-Rademacher random matrices HDk...HD1, a special case of the SD-product matrices described in Section 2. Our aim is to provide intuition for how the choice of k affects the quality of the random matrix, following our earlier observations just after Corollary 3.4, which indicated that for SD-product matrices, odd values of k yield greater benefits than even values, and that there are diminishing benefits from higher values of k. We proceed by casting the random matrices into the framework of Markov chains.
Definition 5.1. The Hadamard-Rademacher process in n dimensions is the Markov chain (Xk)∞k=0 taking values in the orthogonal group O(n), with X0 = I almost surely, and Xk = HDkXk−1 almost surely, where H is the normalized Hadamard matrix in n dimensions, and (Dk)∞k=1 are iid diagonal matrices with independent Rademacher random variables on their diagonals.
Constructing an estimator based on Hadamard-Rademacher matrices is equivalent to simulating several time steps from the Hadamard-Rademacher process. The quality of estimators based on Hadamard-Rademacher random matrices comes from a quick mixing property of the corresponding
Markov chain. The following demonstrates attractive properties of the chain in low dimensions.
Proposition 5.2. The Hadamard-Rademacher process in two dimensions: explores a state-space of 16 orthogonal matrices, is ergodic with respect to the uniform distribution on this set, has period 2, the diameter of the Cayley graph of its state space is 3, and the chain is fully mixed after 3 time steps.
This proposition, and the Cayley graph corresponding to the Markov chain’s state space (Figure 1 right), illustrate the fast mixing properties of the Hadamard-Rademacher process in low dimensions; this agrees with the observations in §3 that there are diminishing returns associated with using a large number k of HD blocks in an estimator. The observation in Proposition 5.2 that the Markov chain has period 2 indicates that we should expect different behavior for estimators based on odd and even numbers of blocks of HD matrices, which is reflected in the analytic expressions for MSE derived in Theorems 3.3 and 3.6 for the dimensionality reduction setup.
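The process of Definition 5.1 can be simulated directly, as in the short sketch below (using the hadamard_rows helper from §3 to materialize H; an illustration only):

```python
import numpy as np

def hadamard_rademacher_chain(n, k, rng=np.random.default_rng()):
    """Return X_k = (H D_k) ... (H D_1), i.e. k steps of the Hadamard-Rademacher
    process started at X_0 = I; n must be a power of 2."""
    H = hadamard_rows(n, np.arange(n))
    X = np.eye(n)
    for _ in range(k):
        X = H @ (rng.choice([-1.0, 1.0], size=n)[:, None] * X)   # H D_t X_{t-1}
    return X

# In n = 2 dimensions, sampling many chains for k = 1, 2, 3 and collecting the
# distinct matrices illustrates the 16-state space and fast mixing of Proposition 5.2.
```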
6 Experiments
We present comparisons of estimators introduced in §3 and §4, illustrating our theoretical results, and further demonstrating the empirical success of ROM-based estimators at the level of Gram matrix approximation. We compare estimators based on: unstructured Gaussian matrices G, matrices Gort, S-Rademacher and S-Hybrid matrices with k = 3 and different sub-sampling strategies. Results for k > 3 do not show additional statistical gains empirically. Additional experimental results, including a comparison of estimators using different numbers of SD blocks, are in the Appendix §10. Throughout, we use the normalized Hadamard matrix H for the structured matrix S.
6.1 Pointwise kernel approximation
Complementing the theoretical results of §3 and §4, we provide several salient comparisons of the various methods introduced - see Figure 2 top. Plots presented here (and in the Appendix) compare MSE for the dot-product and the angular kernel. They show that estimators based on Gort, S-Hybrid and S-Rademacher matrices without replacement, or using the first m rows, beat the state-of-the-art unstructured G approach on accuracy for all our different datasets in the JLT setup. Interestingly, the latter two approaches also give smaller MSE than the Gort-estimators. For angular kernel estimation, where sampling is not relevant, we see that the Gort and S-Rademacher approaches again outperform the ones based on matrices G.
6.2 Gram matrix approximation
Moving beyond the theoretical guarantees established in §3 and §4, we show empirically that the superiority of estimators based on ROMs is maintained at the level of Gram matrix approximation. We compute Gram matrix approximations (with respect to both standard dot-product, and angular kernel) for a variety of datasets. We use the normalized Frobenius norm error ‖K− K̂‖2/‖K‖2 as our metric (as used by Choromanski and Sindhwani, 2016), and plot the mean error based on 1,000 repetitions of each random transform - see Figure 2 bottom. The Gram matrices are computed on a randomly selected subset of 550 data points from each dataset. As can be seen, the S-Hybrid estimators using the “no-replacement” or “first m rows” sub-sampling strategies outperform even the orthogonal Gaussian ones in the dot-product case. For the angular case, the Gort-approach and S-Rademacher approach are practically indistinguishable.
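The Gram-matrix metric used here can be computed as in the following sketch (for the dot-product case, with any m x n projection matrix M; an illustration only):

```python
import numpy as np

def gram_frobenius_error(X, M):
    """Relative Frobenius error ||K - K_hat|| / ||K|| for the dot-product Gram
    matrix of the rows of X, with K_hat built from the projections X M^T / sqrt(m)."""
    K = X @ X.T
    Z = X @ M.T / np.sqrt(M.shape[0])
    return np.linalg.norm(K - Z @ Z.T) / np.linalg.norm(K)
```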
7 Conclusion
We defined the family of random ortho-matrices (ROMs). This contains the SD-product matrices, which include a number of recently proposed structured random matrices. We showed theoretically and empirically that ROMs have strong statistical and computational properties (in several cases outperforming previous state-of-the-art) for algorithms performing dimensionality reduction and random feature approximations of kernels. We highlight Corollary 3.4, which provides a theoretical guarantee that SD-product matrices yield better accuracy than iid matrices in an important dimensionality reduction application (we believe the first result of this kind). Intriguingly, for dimensionality reduction, using just one complex structured matrix yields random features of much better quality. We provided perspectives to help understand the benefits of ROMs, and to help explain the behavior of SD-product matrices for various numbers of blocks. Our empirical findings suggest that our theoretical results might be further strengthened, particularly in the kernel setting.
Acknowledgements
We thank Vikas Sindhwani at Google Brain Robotics and Tamas Sarlos at Google Research for inspiring conversations that led to this work. We thank Matej Balog, Maria Lomeli, Jiri Hron and Dave Janz for helpful comments. MR acknowledges support by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016516/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis. AW acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI. | 1. What is the focus of the paper regarding embeddings and their reconstruction error?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its contributions and improvements over prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
The paper examines embeddings based on structured matrices. In particular the paper analyzes the expected reconstruction error of a class of pointwise non-linear gaussian kernels computed using the embedded vectors.
Embeddings based on structured matrices are well known in literature, [Sarlos, Smola '13, Yu et al '16] and have been studied from a practical and a theoretical viewpoint. In particular it is proven that they achieve an error, that is equal, up to constants to the one of unstructured matrices. The main contribution of this paper is to show that the constant is smaller than one ( it tends to 1 when the ambient dimension tend to infinity).
The paper is technically correct.
Note that the crucial aspect for which the structured methods are preferred with respect to the unstructured ones is that they require O(n log n) instead O(n^2) to be computed, while having a comparable accuracy with respect to the unstructured ones, as widely proven in literature.
The proposed bound give a constant smaller than one. However the constants of the previous bounds comparing the error of structured and unstructured methods are already small and universal and the proposed bound does not reduce the error rate w.r.t. the ambient dimension or the number of random features. So the contribution consists in a minor improvement on the knowledge of the topic.
-------
Reading the rebuttal didn't change my point of view on the paper. Again I remark that the paper provides a result that is of interest and I think it should be accepted. However the proposed result is more on the technical side and does not consist in a major improvement on the topic (e.g. compared to [Yu et al '16], which indeed received an oral presentation). |
NIPS | Title
The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings
Abstract
We examine a class of embeddings based on structured random matrices with orthogonal rows which can be applied in many machine learning applications including dimensionality reduction and kernel approximation. For both the JohnsonLindenstrauss transform and the angular kernel, we show that we can select matrices yielding guaranteed improved performance in accuracy and/or speed compared to earlier methods. We introduce matrices with complex entries which give significant further accuracy improvement. We provide geometric and Markov chain-based perspectives to help understand the benefits, and empirical results which suggest that the approach is helpful in a wider range of applications.
1 Introduction
Embedding methods play a central role in many machine learning applications by projecting feature vectors into a new space (often nonlinearly), allowing the original task to be solved more efficiently. The new space might have more or fewer dimensions depending on the goal. Applications include the Johnson-Lindenstrauss Transform for dimensionality reduction (JLT, Johnson and Lindenstrauss, 1984) and kernel methods with random feature maps (Rahimi and Recht, 2007). The embedding can be costly hence many fast methods have been developed, see §1.1 for background and related work.
We present a general class of random embeddings based on particular structured random matrices with orthogonal rows, which we call random ortho-matrices (ROMs); see §2. We show that ROMs may be used for the applications above, in each case demonstrating improvements over previous methods in statistical accuracy (measured by mean squared error, MSE), in computational efficiency (while providing similar accuracy), or both. We highlight the following contributions:
• In §3: The Orthogonal Johnson-Lindenstrauss Transform (OJLT) for dimensionality reduction. We prove this has strictly smaller MSE than the previous unstructured JLT mechanisms. Further, OJLT is as fast as the fastest previous JLT variants (which are structured).
• In §4: Estimators for the angular kernel (Sidorov et al., 2014) which guarantee better MSE. The angular kernel is important for many applications, including natural language processing (Sidorov et al., 2014), image analysis (Jégou et al., 2011), speaker representations (Schmidt et al., 2014) and tf-idf data sets (Sundaram et al., 2013).
• In §5: Two perspectives on the effectiveness of ROMs to help build intuitive understanding. In §6 we provide empirical results which support our analysis, and show that ROMs are effective for a still broader set of applications. Full details and proofs of all results are in the Appendix. ∗equal contribution
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.1 Background and related work
Our ROMs can have two forms (see §2 for details): (i) a Gort is a random Gaussian matrix conditioned on rows being orthogonal; or (ii) an SD-product matrix is formed by multiplying some number k of SD blocks, each of which is highly structured, typically leading to fast computation of products. Here S is a particular structured matrix, and D is a random diagonal matrix; see §2 for full details. Our SD block generalizes an HD block, where H is a Hadamard matrix, which received previous attention. Earlier approaches to embeddings have explored using various structured matrices, including particular versions of one or other of our two forms, though in different contexts.
For dimensionality reduction, Ailon and Chazelle (2006) used a single HD block as a way to spread out the mass of a vector over all dimensions before applying a sparse Gaussian matrix. Choromanski and Sindhwani (2016) also used just one HD block as part of a larger structure. Bojarski et al. (2017) discussed using k = 3 HD blocks for locality-sensitive hashing methods but gave no concrete results for their application to dimensionality reduction or kernel approximation. All these works, and other earlier approaches (Hinrichs and Vybíral, 2011; Vybíral, 2011; Zhang and Cheng, 2013; Le et al., 2013; Choromanska et al., 2016), provided computational benefits by using structured matrices with less randomness than unstructured iid Gaussian matrices, but none demonstrated accuracy gains.
Yu et al. (2016) were the first to show that Gort-type matrices can yield improved accuracy, but their theoretical result applies only asymptotically for many dimensions, only for the Gaussian kernel and for just one specific orthogonal transformation, which is one instance of the larger class we consider. Their theoretical result does not yield computational benefits. Yu et al. (2016) did explore using a number k of HD blocks empirically, observing good computational and statistical performance for k = 3, but without any theoretical accuracy guarantees. It was left as an open question why matrices formed by a small number of HD blocks can outperform non-discrete transforms.
In contrast, we are able to prove that ROMs yield improved MSE in several settings and for many of them for any number of dimensions. In addition, SD-product matrices can deliver computational speed benefits. We provide initial analysis to understand why k = 3 can outperform the state-ofthe-art, why odd k yields better results than even k, and why higher values of k deliver decreasing additional benefits (see §3 and §5).
2 The family of Random Ortho-Matrices (ROMs)
Random ortho-matrices (ROMs) are taken from two main classes of distributions defined below that require the rows of sampled matrices to be orthogonal. A central theme of the paper is that this orthogonal structure can yield improved statistical performance. We shall use bold uppercase (e.g. M) to denote matrices and bold lowercase (e.g. x) for vectors.
Gaussian orthogonal matrices. Let G be a random matrix taking values in Rm×n with iid N (0, 1) elements, which we refer to as an unstructured Gaussian matrix. The first ROM distribution we consider yields the random matrix Gort, which is defined as a random Rn×n matrix given by first taking the rows of the matrix to be a uniformly random orthonormal basis, and then independently scaling each row, so that the rows marginally have multivariate Gaussian N (0, I) distributions. The random variable Gort can then be extended to non-square matrices by either stacking independent copies of the Rn×n random matrices, and deleting superfluous rows if necessary. The orthogonality of the rows of this matrix has been observed to yield improved statistical properties for randomized algorithms built from the matrix in a variety of applications.
SD-product matrices. Our second class of distributions is motivated by the desire to obtain similar statistical benefits of orthogonality to Gort, whilst gaining computational efficiency by employing more structured matrices. We call this second class SD-product matrices. These take the more structured form \prod_{i=1}^{k} SD_i, where S = \{s_{i,j}\} ∈ R^{n×n} has orthogonal rows and |s_{i,j}| = \frac{1}{\sqrt{n}} for all i, j ∈ \{1, \ldots, n\}; and the (D_i)_{i=1}^{k} are independent diagonal matrices described below. By \prod_{i=1}^{k} SD_i, we mean the matrix product (SD_k) \ldots (SD_1). This class includes as particular cases several recently introduced random matrices (e.g. Andoni et al., 2015; Yu et al., 2016), where good empirical performance was observed. We go further to establish strong theoretical guarantees, see §3 and §4.
A prominent example of an S matrix is the normalized Hadamard matrix H, defined recursively by
H_1 = (1), and then for i > 1, H_i = \frac{1}{\sqrt{2}} \begin{pmatrix} H_{i-1} & H_{i-1} \\ H_{i-1} & -H_{i-1} \end{pmatrix}. Importantly, matrix-vector products with H are computable in O(n \log n) time via the fast Walsh-Hadamard transform, yielding large computational savings. In addition, H matrices enable a significant space advantage: since the fast Walsh-Hadamard transform can be computed without explicitly storing H, only O(n) space is required to store the diagonal elements of (D_i)_{i=1}^{k}. Note that these H_n matrices are defined only for n a power of 2, but if needed, one can always adjust data by padding with 0s to enable the use of ‘the next larger’ H, doubling the number of dimensions in the worst case.
Matrices H are representatives of a much larger family in S which also attains computational savings. These are L2-normalized versions of Kronecker-product matrices of the form A_1 ⊗ \ldots ⊗ A_l ∈ R^{n×n} for l ∈ N, where ⊗ stands for a Kronecker product and blocks A_i ∈ R^{d×d} have entries of the same magnitude and pairwise orthogonal rows each. For these matrices, matrix-vector products are computable in O(n(2d-1)\log_d(n)) time (Zhang et al., 2015). S includes also the Walsh matrices W = \{w_{i,j}\} ∈ R^{n×n}, where w_{i,j} = \frac{1}{\sqrt{n}}(-1)^{i_{N-1}j_0 + \ldots + i_0 j_{N-1}} and i_{N-1}\ldots i_0, j_{N-1}\ldots j_0 are binary representations of i and j respectively.
For the diagonal matrices (D_i)_{i=1}^{k}, we mainly consider Rademacher entries, leading to the following matrices.
Definition 2.1. The S-Rademacher random matrix with k ∈ N blocks is given below, where the (D^{(R)}_i)_{i=1}^{k} are diagonal with iid Rademacher random variables [i.e. Unif({±1})] on the diagonals:

M^{(k)}_{SR} = ∏_{i=1}^{k} S D^{(R)}_i .    (1)
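A short sketch of how M^{(k)}_{SR} can be applied to a vector without ever forming the matrix, taking S = H and reusing the FWHT routine above; drawing fresh Rademacher diagonals inside the function is an illustrative choice of this sketch.

def apply_sd_product(x, k, rng=None):
    """Compute M_SR^(k) x = (S D_k) ... (S D_1) x with S = H and D_i diagonal
    Rademacher matrices, in O(k n log n) time (fresh diagonals on every call)."""
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=(k, len(x)))   # diagonals of D_1, ..., D_k
    y = np.asarray(x, dtype=float)
    for i in range(k):
        y = fwht_normalized(signs[i] * y)                # apply D_i, then S = H
    return y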
Having established the two classes of ROMs, we next apply them to dimensionality reduction.
3 The Orthogonal Johnson-Lindenstrauss Transform (OJLT)
Let X ⊂ R^n be a dataset of n-dimensional real vectors. The goal of dimensionality reduction via random projections is to linearly transform each x ∈ X by a random mapping x ↦ x′ = F(x), where F : R^n → R^m for m < n, such that for any x, y ∈ X the following holds: (x′)^⊤ y′ ≈ x^⊤ y. If we furthermore have E[(x′)^⊤ y′] = x^⊤ y then the dot-product estimator is unbiased. In particular, this dimensionality reduction mechanism should in expectation preserve information about vectors' norms, i.e. we should have E[‖x′‖₂²] = ‖x‖₂² for any x ∈ X.
The standard JLT mechanism uses the randomized linear map F = (1/√m) G, where G ∈ R^{m×n} is as in §2, requiring mn multiplications to evaluate. Several fast variants (FJLTs) have been proposed by replacing G with random structured matrices, such as sparse or circulant Gaussian matrices (Ailon and Chazelle, 2006; Hinrichs and Vybíral, 2011; Vybíral, 2011; Zhang and Cheng, 2013). The fastest of these variants has O(n log n) time complexity, but at a cost of higher MSE for dot-products.
Our Orthogonal Johnson-Lindenstrauss Transform (OJLT) is obtained by replacing the unstructured random matrix G with a sub-sampled ROM from §2: either G_ort, or a sub-sampled version M^{(k),sub}_{SR} of the S-Rademacher ROM, given by sub-sampling rows from the left-most S matrix in the product. We sub-sample since m < n. We typically assume uniform sub-sampling without replacement. The resulting dot-product estimators for vectors x, y ∈ X are given by:
K̂^{base}_m(x,y) = (1/m) (Gx)^⊤ (Gy)   [unstructured iid baseline, previous state-of-the-art accuracy],

K̂^{ort}_m(x,y) = (1/m) (G_{ort} x)^⊤ (G_{ort} y),    K̂^{(k)}_m(x,y) = (1/m) (M^{(k),sub}_{SR} x)^⊤ (M^{(k),sub}_{SR} y).    (2)
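To make Equation (2) concrete, the hedged sketch below computes the three dot-product estimates for a pair of vectors, sub-sampling m rows uniformly without replacement; it reuses sample_g_ort and fwht_normalized from the earlier sketches, and the n/m scaling of the structured estimator is a convention chosen here (absorbing the row-selection scaling) so that the estimate is unbiased.

def ojlt_estimators(x, y, m, k, rng=None):
    """Illustrative implementations of the three estimators in Equation (2)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)

    G = rng.standard_normal((m, n))                      # unstructured iid baseline
    k_base = (G @ x) @ (G @ y) / m

    G_ort = sample_g_ort(m, n, rng)                      # Gaussian orthogonal
    k_ort = (G_ort @ x) @ (G_ort @ y) / m

    signs = rng.choice([-1.0, 1.0], size=(k, n))         # the same D_i act on x and y
    def project(v):
        z = np.asarray(v, dtype=float)
        for s in signs:
            z = fwht_normalized(s * z)
        return z
    rows = rng.choice(n, size=m, replace=False)          # uniform sub-sampling w/o replacement
    px, py = project(x)[rows], project(y)[rows]
    k_sd = (n / m) * (px @ py)                           # n/m absorbs the row-selection scaling
    return k_base, k_ort, k_sd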
We contribute the following closed-form expressions, which exactly quantify the mean-squared error (MSE) for these three estimators. Precisely, the MSE of an estimator K̂(x,y) of the inner product 〈x,y〉 for x, y ∈ X is defined to be MSE(K̂(x,y)) = E[(K̂(x,y) − 〈x,y〉)²]. See the Appendix
for detailed proofs of these results and all others in this paper.
Lemma 3.1. The unstructured JLT dot-product estimator K̂^{base}_m of x, y ∈ R^n using m-dimensional random feature maps is unbiased, with MSE(K̂^{base}_m(x,y)) = (1/m) ((x^⊤y)² + ‖x‖₂²‖y‖₂²).
Theorem 3.2. The estimator K̂^{ort}_m is unbiased and satisfies, for n ≥ 4:
MSE(K̂^{ort}_m(x,y)) = MSE(K̂^{base}_m(x,y)) + (m/(m−1)) [ (‖x‖₂² ‖y‖₂² n²) / (4 I(n−3) I(n−4)) × ( (1/n − 1/(n+2)) (I(n−3) − I(n−1)) I(n−4) [cos²(θ) + 1/2] + I(n−1) (I(n−4) − I(n−2)) (1/(n−2) − 1/n) [cos²(θ) − 1/2] ) − 〈x,y〉² ] ,    (3)

where I(n) = ∫₀^π sinⁿ(x) dx = √π Γ((n+1)/2) / Γ(n/2 + 1).
Theorem 3.3 (Key result). The OJLT estimator K̂^{(k)}_m(x,y) with k blocks, using m-dimensional random feature maps and a uniform sub-sampling policy without replacement, is unbiased with

MSE(K̂^{(k)}_m(x,y)) = (1/m) ((n−m)/(n−1)) ( ((x^⊤y)² + ‖x‖²‖y‖²) + ∑_{r=1}^{k−1} ((−1)^r 2^r / n^r) (2(x^⊤y)² + ‖x‖²‖y‖²) + ((−1)^k 2^k / n^{k−1}) ∑_{i=1}^{n} x_i² y_i² ).    (4)
Proof (Sketch). For k = 1, the random projection matrix is given by sub-sampling rows from S D_1, and the computation can be carried out directly. For k > 1, the proof proceeds by induction. The random projection matrix in the general case is given by sub-sampling rows of the matrix S D_k · · · S D_1. By writing the MSE as an expectation and using the law of conditional expectations, conditioning on the value of the first k − 1 random matrices D_{k−1}, . . . , D_1, the statement of the theorem for 1 SD block and for k − 1 SD blocks can be neatly combined to yield the result.
To our knowledge, it has not previously been possible to provide theoretical guarantees that SD-product matrices outperform iid matrices. Combining Lemma 3.1 with Theorem 3.3 yields the following important result.
Corollary 3.4 (Theoretical guarantee of improved performance). Estimators K̂(k)m (subsampling without replacement) yield guaranteed lower MSE than K̂basem .
It is not yet clear when K̂ortm is better or worse than K̂ (k) m ; we explore this empirically in §6. Theorem 3.3 shows that there are diminishing MSE benefits to using a large number k of SD blocks. Interestingly, odd k is better than even: it is easy to observe that MSE(K̂(2k−1)m (x,y)) < MSE(K̂ (2k) m (x,y)) > MSE(K̂ (2k+1) m (x,y)). These observations, and those in §5, help to understand why empirically k = 3 was previously observed to work well (Yu et al., 2016).
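Since Lemma 3.1 and Theorem 3.3 are closed-form, they can be transcribed into a few lines of code, which makes the odd/even oscillation and the diminishing returns in k easy to check numerically; the sketch below is a direct transcription of the formulas as reconstructed above (numpy as np, as in the earlier sketches), not additional theory.

def mse_base(x, y, m):
    """Lemma 3.1, transcribed directly."""
    return ((x @ y) ** 2 + (x @ x) * (y @ y)) / m

def mse_ojlt(x, y, m, k):
    """Theorem 3.3 as stated above (uniform sub-sampling without replacement)."""
    n = len(x)
    dot2, norms = (x @ y) ** 2, (x @ x) * (y @ y)
    total = dot2 + norms
    for r in range(1, k):
        total += (-1) ** r * 2 ** r / n ** r * (2 * dot2 + norms)
    total += (-1) ** k * 2 ** k / n ** (k - 1) * np.sum(x ** 2 * y ** 2)
    return (n - m) / (n - 1) * total / m

# Comparing mse_ojlt(x, y, m, k) for k = 1, 2, 3, ... against mse_base(x, y, m)
# makes the odd/even alternation and the diminishing benefits in k visible.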
If we take S to be a normalized Hadamard matrix H, then even though we are using sub-sampling, and hence the full computational benefits of the Walsh-Hadamard transform are not available, still K̂ (k) m achieves improved MSE compared to the base method with less computational effort, as follows.
Lemma 3.5. There exists an algorithm (see Appendix for details) which computes an embedding for a given datapoint x using K̂^{(k)}_m with S set to H and a uniform sub-sampling policy in expected time min{O((k − 1)n log(n) + nm − (m−1)m/2), O(kn log(n))}. Note that for m = ω(k log(n)) or if k = 1, the time complexity is smaller than the brute-force Θ(nm). The algorithm uses a simple observation that one can reuse calculations conducted for the upper half of the Hadamard matrix while performing computations involving rows from its other half, instead of running these calculations from scratch (details in the Appendix).
An alternative to sampling without replacement is to deterministically choose the first m rows. In our experiments in §6, these two approaches yield the same empirical performance, though we expect
that the deterministic method could perform poorly on adversarially chosen data. The first m rows approach can be realized in time O(n log(m) + (k − 1)n log(n)) per datapoint.
Theorem 3.3 is a key result in this paper, demonstrating that SD-product matrices yield both statistical and computational improvements compared to the base iid procedure, which is widely used in practice. We next show how to obtain further gains in accuracy.
3.1 Complex variants of the OJLT
We show that the MSE benefits of Theorem 3.3 may be markedly improved by using SD-product matrices with complex entries, M^{(k)}_{SH}. Specifically, we consider the variant S-Hybrid random matrix below, where D^{(U)}_k is a diagonal matrix with iid Unif(S¹) random variables on the diagonal, independent of (D^{(R)}_i)_{i=1}^{k−1}, and S¹ is the unit circle of C. We use the real part of the Hermitian product between projections as a dot-product estimator; recalling the definitions of §2, we use:

M^{(k)}_{SH} = S D^{(U)}_k ∏_{i=1}^{k−1} S D^{(R)}_i ,    K̂^{H,(k)}_m(x,y) = (1/m) Re[ 〈 M^{(k),sub}_{SH} x , M^{(k),sub}_{SH} y 〉 ] .    (5)
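A hedged sketch of the S-Hybrid estimator of Equation (5), using the discrete Unif({1, −1, i, −i}) diagonal permitted by Theorem 3.7 below; it reuses fwht_normalized, and the n/m scaling follows the convention of the earlier real-valued sketch rather than any prescription from the paper.

def shybrid_estimator(x, y, m, k, rng=None):
    """Illustrative S-Hybrid dot-product estimator (Equation (5))."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    real_signs = rng.choice([-1.0, 1.0], size=(k - 1, n))     # D_1^(R), ..., D_{k-1}^(R)
    phases = rng.choice(np.array([1, -1, 1j, -1j]), size=n)   # D_k^(U), discrete variant
    def project(v):
        z = np.asarray(v, dtype=complex)
        for s in real_signs:
            z = fwht_normalized(s * z)
        return fwht_normalized(phases * z)
    rows = rng.choice(n, size=m, replace=False)
    px, py = project(x)[rows], project(y)[rows]
    return (n / m) * np.real(np.conj(px) @ py)                # real part of the Hermitian product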
Remarkably, this complex variant yields exactly half the MSE of the OJLT estimator.
Theorem 3.6. The estimator K̂^{H,(k)}_m(x,y), applying uniform sub-sampling without replacement, is unbiased and satisfies: MSE(K̂^{H,(k)}_m(x,y)) = (1/2) MSE(K̂^{(k)}_m(x,y)).
This large factor-of-2 improvement could instead be obtained by doubling m for K̂^{(k)}_m. However, this would require doubling the number of parameters for the transform, whereas the S-Hybrid estimator requires additional storage only for the complex parameters in the matrix D^{(U)}_k. Strikingly, it is straightforward to extend the proof of Theorem 3.6 (see Appendix) to show that rather than taking the complex random variables in M^{(k),sub}_{SH} to be Unif(S¹), it is possible to take them to be Unif({1, −1, i, −i}) and still obtain exactly the same benefit in MSE.
Theorem 3.7. For the estimator K̂^{H,(k)}_m defined in Equation (5): replacing the random matrix D^{(U)}_k (which has iid Unif(S¹) elements on the diagonal) with a random diagonal matrix having iid Unif({1, −1, i, −i}) elements on the diagonal does not affect the MSE of the estimator.
It is natural to wonder if using an SD-product matrix with more complex random variables (for all SD blocks) would improve performance still further. However, interestingly, this appears not to be the case; details are provided in the Appendix §8.7.
3.2 Sub-sampling with replacement
Our results above focus on SD-product matrices where rows have been sub-sampled without replacement. Sometimes (e.g. for parallelization) it can be convenient instead to sub-sample with replacement. As might be expected, this leads to worse MSE, which we can quantify precisely.
Theorem 3.8. For each of the estimators K̂^{(k)}_m and K̂^{H,(k)}_m, if uniform sub-sampling with (rather than without) replacement is used then the MSE is worsened by a multiplicative constant of (n−1)/(n−m).
4 Kernel methods with ROMs
ROMs can also be used to construct high-quality random feature maps for non-linear kernel approximation. We analyze here the angular kernel, an important example of a Pointwise Nonlinear Gaussian kernel (PNG), discussed in more detail at the end of this section.
Definition 4.1. The angular kernel K_ang is defined on R^n by K_ang(x,y) = 1 − 2θ_{x,y}/π, where θ_{x,y} is the angle between x and y.
To employ random feature style approximations to this kernel, we first observe it may be rewritten as
Kang(x,y) = E [sign(Gx)sign(Gy)] ,
where G ∈ R1×n is an unstructured isotropic Gaussian vector. This motivates approximations of the form:
K̂^{ang}_m(x,y) = (1/m) sign(Mx)^⊤ sign(My),    (6)
where M ∈ Rm×n is a random matrix, and the sign function is applied coordinate-wise. Such kernel estimation procedures are heavily used in practice (Rahimi and Recht, 2007), as they allow fast approximate linear methods to be used (Joachims, 2006) for inference tasks. If M = G, the unstructured Gaussian matrix, then we obtain the standard random feature estimator. We shall contrast this approach against the use of matrices from the ROMs family.
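For illustration, the exact kernel and the estimator of Equation (6) can be written as follows; M may be the unstructured Gaussian matrix, G_ort, or an explicitly realized SD-product matrix, and the function names are hypothetical (numpy as np, as before).

def angular_kernel(x, y):
    """Exact angular kernel K_ang(x, y) = 1 - 2 * theta_{x,y} / pi."""
    cos_t = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return 1.0 - 2.0 * theta / np.pi

def angular_estimator(x, y, M):
    """Random-feature estimate from Equation (6); M is any m x n random matrix."""
    sx, sy = np.sign(M @ x), np.sign(M @ y)
    return (sx @ sy) / M.shape[0]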
When constructing random feature maps for kernels, very often m > n. In this case, our structured mechanism can be applied by concatenating some number of independent structured blocks. Our theoretical guarantees will be given just for one block, but can easily be extended to a larger number of blocks since different blocks are independent.
The standard random feature approximation K̂ang,basem for approximating the angular kernel is defined by taking M to be G, the unstructured Gaussian matrix, in Equation (6), and satisfies the following.
Lemma 4.2. The estimator K̂^{ang,base}_m is unbiased and MSE(K̂^{ang,base}_m(x,y)) = 4θ_{x,y}(π − θ_{x,y}) / (mπ²).
The MSE of an estimator K̂ang(x,y) of the true angular kernel Kang(x,y) is defined analogously to the MSE of an estimator of the dot product, given in §3. Our main result regarding angular kernels states that if we instead take M = Gort in Equation (6), then we obtain an estimator K̂ang,ortm with strictly smaller MSE, as follows.
Theorem 4.3. Estimator K̂ang,ortm is unbiased and satisfies:
MSE(K̂ang,ortm (x,y)) < MSE(K̂ ang,base m (x,y)).
We also derive a formula for the MSE of an estimator K̂^{ang,M}_m of the angular kernel which replaces G with an arbitrary random matrix M and uses m random feature maps. The formula is helpful to see how the quality of the estimator depends on the probabilities that the projections of the rows of M are contained in some particular convex regions of the 2-dimensional space L_{x,y} spanned by the datapoints x and y. For an illustration of the geometric definitions introduced in this section, see Figure 1. The formula depends on probabilities involving events A_i = {sgn((r^i)^⊤ x) ≠ sgn((r^i)^⊤ y)}, where r^i stands for the ith row of the structured matrix. Notice that A_i = {r^i_proj ∈ C_{x,y}}, where r^i_proj stands for the projection of r^i into L_{x,y} and C_{x,y} is the union of two cones in L_{x,y}, each of angle θ_{x,y}.
Theorem 4.4. Estimator K̂^{ang,M}_m satisfies the following, where δ_{i,j} = P[A_i ∩ A_j] − P[A_i]P[A_j]:

MSE(K̂^{ang,M}_m(x,y)) = (1/m²) [ m − ∑_{i=1}^{m} (1 − 2P[A_i])² ] + (4/m²) ∑_{i=1}^{m} (P[A_i] − θ_{x,y}/π)² + ∑_{i≠j} δ_{i,j} .

Note that the probabilities P[A_i] and δ_{i,j} depend on the choice of M. It is easy to prove that for unstructured G and for G_ort we have P[A_i] = θ_{x,y}/π. Further, from the independence of the rows of G, δ_{i,j} = 0 for i ≠ j. For unstructured G we obtain Lemma 4.2. Interestingly, we see that to prove Theorem 4.3 it suffices to show δ_{i,j} < 0, which is the approach we take (see Appendix). If we replace G with M^{(k)}_{SR}, then the expression ∆ = P[A_i] − θ_{x,y}/π does not depend on i. Hence, the angular kernel estimator based on Hadamard matrices gives a smaller MSE if and only if ∑_{i≠j} δ_{i,j} + m²∆² < 0. It is not yet clear if this holds in general.
As alluded to at the beginning of this section, the angular kernel may be viewed as a member of a wide family of kernels known as Pointwise Nonlinear Gaussian kernels.
Definition 4.5. For a given function f , the Pointwise Nonlinear Gaussian kernel (PNG) Kf is defined by Kf (x,y) = E [ f(gTx)f(gTy) ] , where g is a Gaussian vector with i.i.d N (0, 1) entries.
Many prominent examples of kernels (Williams, 1998; Cho and Saul, 2009) are PNGs. Wiener’s tauberian theorem shows that all stationary kernels may be approximated arbitrarily well by sums of PNGs (Samo and Roberts, 2015). In future work we hope to explore whether ROMs can be used to achieve statistical benefit in estimation tasks associated with a wider range of PNGs.
5 Understanding the effectiveness of orthogonality
Here we build intuitive understanding for the effectiveness of ROMs. We examine geometrically the angular kernel (see §4), then discuss a connection to random walks over orthogonal matrices.
Angular kernel. As noted above for the Gort-mechanism, smaller MSE than that for unstructured G is implied by the inequality P[Ai ∩Aj ] < P[Ai]P[Aj ], which is equivalent to: P[Aj |Ai] < P[Aj ]. Now it becomes clear why orthogonality is crucial. Without loss of generality take: i = 1, j = 2, and let g1 and g2 be the first two rows of Gort.
Consider first the extreme case (middle of left part of Figure 1), where all vectors are 2-dimensional. Recall the definitions from just after Theorem 4.3. If g¹ is in C_{x,y} then it is much less probable for g² also to belong to C_{x,y}. In particular, if θ < π/2 then the probability is zero. That implies the inequality. On the other hand, if g¹ is perpendicular to L_{x,y} then conditioning on A_i does not have any effect on the probability that g² belongs to C_{x,y} (left subfigure of Figure 1). In practice, with high probability the angle φ between g¹ and L_{x,y} is close to π/2, but is not exactly π/2. That again implies that, conditioned on the projection g¹_p of g¹ into L_{x,y} being in C_{x,y}, the more probable directions of g²_p are perpendicular to g¹_p (see the ellipsoid-like shape in the right subfigure of Figure 1, which is the projection of the sphere taken from the (n−1)-dimensional space orthogonal to g¹ into L_{x,y}). This makes it less probable for g²_p to also be in C_{x,y}. The effect is subtle since φ ≈ π/2, but this is what provides the superiority of the orthogonal transformations over state-of-the-art ones in the angular kernel approximation setting.
Markov chain perspective. We focus on Hadamard-Rademacher random matrices HDk...HD1, a special case of the SD-product matrices described in Section 2. Our aim is to provide intuition for how the choice of k affects the quality of the random matrix, following our earlier observations just after Corollary 3.4, which indicated that for SD-product matrices, odd values of k yield greater benefits than even values, and that there are diminishing benefits from higher values of k. We proceed by casting the random matrices into the framework of Markov chains.
Definition 5.1. The Hadamard-Rademacher process in n dimensions is the Markov chain (X_k)_{k=0}^{∞} taking values in the orthogonal group O(n), with X_0 = I almost surely, and X_k = H D_k X_{k−1} almost surely, where H is the normalized Hadamard matrix in n dimensions, and (D_k)_{k=1}^{∞} are iid diagonal matrices with independent Rademacher random variables on their diagonals.
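A small sketch that simulates this chain explicitly, which is convenient for building intuition in low dimensions; the function names are illustrative, and the explicit-matrix construction is only sensible for small n.

import numpy as np

def hadamard_matrix(n):
    """Explicit normalized Hadamard matrix for n a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]]) / np.sqrt(2.0)
    return H

def simulate_hadamard_rademacher(n, k, rng=None):
    """Return X_k from Definition 5.1: X_0 = I and X_j = H D_j X_{j-1}."""
    rng = np.random.default_rng() if rng is None else rng
    H, X = hadamard_matrix(n), np.eye(n)
    for _ in range(k):
        D = np.diag(rng.choice([-1.0, 1.0], size=n))
        X = H @ D @ X
    return X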
Constructing an estimator based on Hadamard-Rademacher matrices is equivalent to simulating several time steps from the Hadamard-Rademacher process. The quality of estimators based on Hadamard-Rademacher random matrices comes from a quick mixing property of the corresponding
Markov chain. The following demonstrates attractive properties of the chain in low dimensions.
Proposition 5.2. The Hadamard-Rademacher process in two dimensions: explores a state-space of 16 orthogonal matrices, is ergodic with respect to the uniform distribution on this set, has period 2, the diameter of the Cayley graph of its state space is 3, and the chain is fully mixed after 3 time steps.
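The size of this state space can be checked numerically by enumerating products of HD steps, as in the sketch below; this is a numerical sanity check that reuses hadamard_matrix from the previous sketch, not a proof.

from itertools import product

def count_reachable_states(n=2, max_steps=4):
    """Enumerate the matrices reachable by the Hadamard-Rademacher process."""
    H = hadamard_matrix(n)
    diagonals = [np.diag(d) for d in product([1.0, -1.0], repeat=n)]
    frontier = [np.eye(n)]
    seen = {(np.round(np.eye(n), 6) + 0.0).tobytes()}
    for step in range(1, max_steps + 1):
        frontier = [H @ D @ X for X in frontier for D in diagonals]
        for X in frontier:
            seen.add((np.round(X, 6) + 0.0).tobytes())
        print(f"after {step} steps: {len(seen)} distinct matrices seen")

# For n = 2 the count should saturate at the 16 matrices of Proposition 5.2.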
This proposition, and the Cayley graph corresponding to the Markov chain’s state space (Figure 1 right), illustrate the fast mixing properties of the Hadamard-Rademacher process in low dimensions; this agrees with the observations in §3 that there are diminishing returns associated with using a large number k of HD blocks in an estimator. The observation in Proposition 5.2 that the Markov chain has period 2 indicates that we should expect different behavior for estimators based on odd and even numbers of blocks of HD matrices, which is reflected in the analytic expressions for MSE derived in Theorems 3.3 and 3.6 for the dimensionality reduction setup.
6 Experiments
We present comparisons of estimators introduced in §3 and §4, illustrating our theoretical results, and further demonstrating the empirical success of ROM-based estimators at the level of Gram matrix approximation. We compare estimators based on: unstructured Gaussian matrices G, matrices Gort, S-Rademacher and S-Hybrid matrices with k = 3 and different sub-sampling strategies. Results for k > 3 do not show additional statistical gains empirically. Additional experimental results, including a comparison of estimators using different numbers of SD blocks, are in the Appendix §10. Throughout, we use the normalized Hadamard matrix H for the structured matrix S.
6.1 Pointwise kernel approximation
Complementing the theoretical results of §3 and §4, we provide several salient comparisons of the various methods introduced - see Figure 2 top. Plots presented here (and in the Appendix) compare MSE for the dot-product and the angular kernel. They show that estimators based on G_ort, S-Hybrid and S-Rademacher matrices without replacement, or using the first m rows, beat the state-of-the-art unstructured G approach on accuracy for all our different datasets in the JLT setup. Interestingly, the latter two approaches also give smaller MSE than the G_ort-estimators. For angular kernel estimation, where sampling is not relevant, we see that the G_ort and S-Rademacher approaches again outperform the ones based on matrices G.
6.2 Gram matrix approximation
Moving beyond the theoretical guarantees established in §3 and §4, we show empirically that the superiority of estimators based on ROMs is maintained at the level of Gram matrix approximation. We compute Gram matrix approximations (with respect to both standard dot-product, and angular kernel) for a variety of datasets. We use the normalized Frobenius norm error ‖K− K̂‖2/‖K‖2 as our metric (as used by Choromanski and Sindhwani, 2016), and plot the mean error based on 1,000 repetitions of each random transform - see Figure 2 bottom. The Gram matrices are computed on a randomly selected subset of 550 data points from each dataset. As can be seen, the S-Hybrid estimators using the “no-replacement” or “first m rows” sub-sampling strategies outperform even the orthogonal Gaussian ones in the dot-product case. For the angular case, the Gort-approach and S-Rademacher approach are practically indistinguishable.
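The error metric itself is a one-liner; a hypothetical helper is sketched below, where K denotes the exact Gram matrix and K_hat its random-feature approximation (numpy as np, as in the earlier sketches).

def normalized_frobenius_error(K, K_hat):
    """|| K - K_hat ||_F / || K ||_F, the Gram-matrix error metric used in §6.2."""
    return np.linalg.norm(K - K_hat) / np.linalg.norm(K)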
7 Conclusion
We defined the family of random ortho-matrices (ROMs). This contains the SD-product matrices, which include a number of recently proposed structured random matrices. We showed theoretically and empirically that ROMs have strong statistical and computational properties (in several cases outperforming previous state-of-the-art) for algorithms performing dimensionality reduction and random feature approximations of kernels. We highlight Corollary 3.4, which provides a theoretical guarantee that SD-product matrices yield better accuracy than iid matrices in an important dimensionality reduction application (we believe the first result of this kind). Intriguingly, for dimensionality reduction, using just one complex structured matrix yields random features of much better quality. We provided perspectives to help understand the benefits of ROMs, and to help explain the behavior of SD-product matrices for various numbers of blocks. Our empirical findings suggest that our theoretical results might be further strengthened, particularly in the kernel setting.
Acknowledgements
We thank Vikas Sindhwani at Google Brain Robotics and Tamas Sarlos at Google Research for inspiring conversations that led to this work. We thank Matej Balog, Maria Lomeli, Jiri Hron and Dave Janz for helpful comments. MR acknowledges support by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016516/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis. AW acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI. | 1. What are the main contributions and findings of the paper regarding random projections and inner products?
2. How do the proposed methods compare to existing approaches, particularly the Johnson-Lindenstrauss transform?
3. What are some limitations or areas for improvement in the paper's analysis and discussion?
4. How does the reviewer perceive the organization and clarity of the paper's content?
5. Are there any concerns or questions regarding the paper's use of terminology or definitions? | Review | Review
The paper analyses the theoretical properties of a family of random-projection approaches to approximate inner products in high dimensional spaces. In particular the authors focus on methods based on random structured matrices, namely Gaussian orthogonal matrices and SD-matrices. The latter are indeed appealing since they require significantly less computation to perform the projections thanks to their underlying structure. The authors show that the methods considered perform comparably well (or better) with respect to the Johnson-Lindenstrauss transform (baseline based on unstructured Gaussian matrices). Moreover they show that further improvements can be achieved by extending SD-matrices to the complex domain. The authors extend their analysis to the case of random-feature-based approximation of angular kernels.
The paper is well written and clear to read; however, the discussion of some of the results could be elaborated more. For instance, after Thm 3.3, which characterizes the MSE of the orthogonal JL transform based on SD-matrices, it is not discussed in much detail how this compares to the standard JL baseline. Cor. 3.4 does not really help much since it simply states that Thm 3.3 yields lower MSE, without clarifying the extent of such improvement. In particular it appears that the improvement in performance of the OJLT over the JLT is only in terms of constants (w.r.t. the number of sub-sampled dimensions m).
I found it a bit misleading in Sec. 4 to introduce a general family of kernels, namely the pointwise nonlinear Gaussian kernels, but then immediately focus on a specific instance of that class. The reader expects the following results to apply to the whole family, but this is not the case. Reversing the order, and discussing PNG kernels only at the end of the section, would probably help the discussion.
I found the term 'Unreasonable' in the title not explained in the text. Is it not reasonable to expect that adding structure to an estimator could make it more effective?
The Mean Squared Error (MSE) is never defined. Although being a standard concept, it is also critical to the theoretical analysis presented in the paper, so it should be defined nevertheless. |
NIPS | Title
The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings
Abstract
We examine a class of embeddings based on structured random matrices with orthogonal rows which can be applied in many machine learning applications including dimensionality reduction and kernel approximation. For both the Johnson-Lindenstrauss transform and the angular kernel, we show that we can select matrices yielding guaranteed improved performance in accuracy and/or speed compared to earlier methods. We introduce matrices with complex entries which give significant further accuracy improvement. We provide geometric and Markov chain-based perspectives to help understand the benefits, and empirical results which suggest that the approach is helpful in a wider range of applications.
1 Introduction
Embedding methods play a central role in many machine learning applications by projecting feature vectors into a new space (often nonlinearly), allowing the original task to be solved more efficiently. The new space might have more or fewer dimensions depending on the goal. Applications include the Johnson-Lindenstrauss Transform for dimensionality reduction (JLT, Johnson and Lindenstrauss, 1984) and kernel methods with random feature maps (Rahimi and Recht, 2007). The embedding can be costly hence many fast methods have been developed, see §1.1 for background and related work.
We present a general class of random embeddings based on particular structured random matrices with orthogonal rows, which we call random ortho-matrices (ROMs); see §2. We show that ROMs may be used for the applications above, in each case demonstrating improvements over previous methods in statistical accuracy (measured by mean squared error, MSE), in computational efficiency (while providing similar accuracy), or both. We highlight the following contributions:
• In §3: The Orthogonal Johnson-Lindenstrauss Transform (OJLT) for dimensionality reduction. We prove this has strictly smaller MSE than the previous unstructured JLT mechanisms. Further, OJLT is as fast as the fastest previous JLT variants (which are structured).
• In §4: Estimators for the angular kernel (Sidorov et al., 2014) which guarantee better MSE. The angular kernel is important for many applications, including natural language processing (Sidorov et al., 2014), image analysis (Jégou et al., 2011), speaker representations (Schmidt et al., 2014) and tf-idf data sets (Sundaram et al., 2013).
• In §5: Two perspectives on the effectiveness of ROMs to help build intuitive understanding. In §6 we provide empirical results which support our analysis, and show that ROMs are effective for a still broader set of applications. Full details and proofs of all results are in the Appendix. ∗equal contribution
1.1 Background and related work
Our ROMs can have two forms (see §2 for details): (i) a Gort is a random Gaussian matrix conditioned on rows being orthogonal; or (ii) an SD-product matrix is formed by multiplying some number k of SD blocks, each of which is highly structured, typically leading to fast computation of products. Here S is a particular structured matrix, and D is a random diagonal matrix; see §2 for full details. Our SD block generalizes an HD block, where H is a Hadamard matrix, which received previous attention. Earlier approaches to embeddings have explored using various structured matrices, including particular versions of one or other of our two forms, though in different contexts.
For dimensionality reduction, Ailon and Chazelle (2006) used a single HD block as a way to spread out the mass of a vector over all dimensions before applying a sparse Gaussian matrix. Choromanski and Sindhwani (2016) also used just one HD block as part of a larger structure. Bojarski et al. (2017) discussed using k = 3 HD blocks for locality-sensitive hashing methods but gave no concrete results for their application to dimensionality reduction or kernel approximation. All these works, and other earlier approaches (Hinrichs and Vybíral, 2011; Vybíral, 2011; Zhang and Cheng, 2013; Le et al., 2013; Choromanska et al., 2016), provided computational benefits by using structured matrices with less randomness than unstructured iid Gaussian matrices, but none demonstrated accuracy gains.
Yu et al. (2016) were the first to show that Gort-type matrices can yield improved accuracy, but their theoretical result applies only asymptotically for many dimensions, only for the Gaussian kernel and for just one specific orthogonal transformation, which is one instance of the larger class we consider. Their theoretical result does not yield computational benefits. Yu et al. (2016) did explore using a number k of HD blocks empirically, observing good computational and statistical performance for k = 3, but without any theoretical accuracy guarantees. It was left as an open question why matrices formed by a small number of HD blocks can outperform non-discrete transforms.
In contrast, we are able to prove that ROMs yield improved MSE in several settings and for many of them for any number of dimensions. In addition, SD-product matrices can deliver computational speed benefits. We provide initial analysis to understand why k = 3 can outperform the state-ofthe-art, why odd k yields better results than even k, and why higher values of k deliver decreasing additional benefits (see §3 and §5).
| 1. What are the main contributions and extensions proposed by the paper regarding random orthogonal embeddings?
2. What are the key results shown by the authors regarding unbiasedness and MSE of the embeddings?
3. Can you provide more details about the technical developments and proofs in the paper?
4. Do you have any suggestions for improving the paper, such as providing a concrete mathematical definition of the Gaussian orthogonal matrix? | Review | Review
This paper analyzes a few extensions of the random orthogonal embeddings proposed by Yu et al. (2016). In particular the authors focus on the Gaussian orthogonal matrices and SD-product matrices (which slightly generalize the HD-product matrices seen in the past). The authors are able to show several results regarding the unbiasedness and the MSE of the embeddings, for the linear kernel and the angular kernel (recall that the Yu et al. paper performed its analysis on the Gaussian kernel). One interesting result is that for SD-product matrices, the MSE for the linear kernel oscillates as the number of SD blocks alternates between odd and even. Such a result supports the empirical finding that taking 3 blocks works well.
There are quite a few technical developments in the paper (30 pages of appendix) and it is impossible to verify every piece given the short turnaround. I read some proofs and they are correct. I would suggest giving a concrete mathematical definition of the Gaussian orthogonal matrix in Section 2. |
NIPS | Title
Systematic improvement of neural network quantum states using Lanczos
Abstract
The quantum many-body problem lies at the center of the most important open challenges in condensed matter, quantum chemistry, atomic, nuclear, and high-energy physics. While quantum Monte Carlo, when applicable, remains the most powerful numerical technique capable of treating dozens or hundreds of degrees of freedom with high accuracy, it is restricted to models that are not afflicted by the infamous sign problem. A powerful alternative that has emerged in recent years is the use of neural networks as variational estimators for quantum states. In this work, we propose a symmetry-projected variational solution in the form of linear combinations of simple restricted Boltzmann machines. This construction allows one to explore states outside of the original variational manifold and increase the representation power with moderate computational effort. Besides allowing one to restore spatial symmetries, an expansion in terms of Krylov states using a Lanczos recursion offers a solution that can further improve the quantum state accuracy. We illustrate these ideas with an application to the Heisenberg J1 − J2 model on the square lattice, a paradigmatic problem under debate in condensed matter physics, and achieve state-of-the-art accuracy in the representation of the ground state.
1 Introduction
Understanding correlated quantum systems requires dealing with a large configuration space: datasets are comprised of all possible electronic configurations σ⃗ and cannot be stored in the memory of the largest supercomputer. Hence, the quantum many-body problem can be interpreted as an “extreme data science” problem [13] from an information processing perspective. In a quantum wave function, each electronic or spin configuration has an associated complex amplitude ψ(σ⃗) determined by solving for the eigenvectors of the Hamiltonian operator. In particular, if one is interested in the zero temperature properties of the system, the solution is given by the eigenvector with the smallest eigenvalue. Finding the exact solution of an N quantum bit system with interactions requires solving for the eigenvectors of a 2^N × 2^N matrix. Alternatively, one can formulate the calculation as an optimization problem in which an “energy functional” E(ψ) has to be minimized with respect to all the 2^N complex amplitudes.
Since the number of configurations d grows exponentially with the number of degrees of freedom (electrons, spins), this problem quickly becomes intractable. A solution consists of “compressing” the wave function by proposing a suitable guess for the amplitudes based on some variational parameters α⃗ = (α1, α2, · · · , αm). Typically, a functional form ψ(σ⃗) = f(σ⃗, α⃗) based on some physical intuition is utilized to represent the amplitude of a given configuration/state σ⃗. The optimal parameters αi are determined by solving the system of equations ∇αE = 0. The objective of this solution is to achieve the lowest possible energy with a number of parameters m ≪ d.
Some relatively simple wave functions have enjoyed various degrees of success in the past, such as those of the Jastrow type where the amplitudes can be written as pair products f(σ⃗, α⃗) = ∏_{ij} U(α_{ij} σ_i σ_j). However, in recent years we have witnessed impressive developments based on the use of neural network (NN) wave functions as variational estimators [4], which have jump-started a new vibrant field of research dubbed “quantum machine learning”. Notice that the optimization of the wave function parameters now translates into the “training” of the NN by minimizing the energy function, which becomes a cost function (we describe the training process below). The power of NN wave functions lies in the complex non-linear structure that provides them with remarkable expressivity to represent arbitrary complex many-body states while, at the same time, being completely agnostic to the physics.
Since restricted Boltzmann machines (RBM) were originally used as a variational ansatz for finding the ground state of quantum many-body systems [4], there has been a growing effort to investigate other forms of neural networks, including convolutional neural networks (CNN) [9, 23], recurrent neural networks (RNN) [19], graph networks [22], and transformers [25], to mention a few. Thus, neural network quantum states (NNQS) have become the most appealing numerical alternative to treat quantum many-body systems since they can be systematically improved by adding new layers or hidden variables, for instance. In addition to the ground state search, the applications of NNQS range from classical simulation of quantum circuits [1, 5, 32], calculation of spectral functions [17, 18], and thermodynamics simulations [16, 31], to quantum tomography [44].
Contributions In this work, we show how one can use a mathematically simple structure, a restricted Boltzmann machine (RBM), and yet obtain values of the ground state energy that beat all previous estimates by a range of numerical methods, including those using convolutional neural networks. As we describe below, instead of increasing the number of layers or hidden variables, the solution lies in considering linear combinations of RBMs. The new wave function allows one to explore a much larger space of solutions. In particular, one can use this construction to restore spatial symmetries [40, 9, 28, 29]. In addition, we propose implementing a projection method based on a Lanczos recursion using a “Krylov basis” of RBMs obtained by sequentially applying powers of the Hamiltonian operator.
The paper is organized as follows: In Sec.2.1 we describe the quantum many-body problem in the context of the Heisenberg model; in Sec.2.2 we summarize prior attempts to study this problem using NNQS; in Sec.3 we review the basic formalism, including the structure of neural network wave functions, how to incorporate the symmetries of the problem into the quantum many-body state, and the numerical training procedure to optimize it. In Sec.4 we present results of state-of-the-art calculations for the J1 − J2 Heisenberg model on the square lattice and compare to other numerical techniques. We finally close with a summary and conclusions.
2 The quantum many-body problem
2.1 Model
In the following, we will focus on quantum spin problems where the degrees of freedom σ_i can assume two possible values ±1/2 (or “up” and “down”). Similarly, one can think of them as generic two-level systems or “qubits”. In particular, we will benchmark our methods in the context of the spin-1/2 antiferromagnetic Heisenberg model with nearest and next-nearest neighbor interactions, the so-called J1 − J2 model defined by the Hamiltonian:
Ĥ = J1 ∑_{⟨ij⟩} S⃗_i · S⃗_j + J2 ∑_{⟨⟨ij⟩⟩} S⃗_i · S⃗_j , (1)
where S⃗ = (Ŝ^x, Ŝ^y, Ŝ^z) are spin operators, the first term runs over nearest neighboring sites ⟨ij⟩ on a square lattice and the second term runs over next-nearest pairs ⟨⟨ij⟩⟩ along the diagonals of the plaquettes. For convenience, in the following, we set J1 = 1 as the unit of energy. In this problem, the number of possible configurations grows as d = 2^N. However, the ground state wave function lies in the sector with the same number of up and down spins, constraining our search to a smaller subset of states, albeit still exponentially large.
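For concreteness, the short sketch below (an illustration we add here, not code from the paper; the function name and the site-indexing convention i = x + Ly are our own choices) enumerates the J1 and J2 bonds of Eq. (1) on an L × L square lattice with periodic boundary conditions.

```python
def j1_j2_bonds(L):
    """Bond lists for the J1-J2 model on an L x L periodic square lattice.

    Sites are indexed as i = x + L * y.  J1 bonds connect nearest neighbors
    (right and up), J2 bonds connect next-nearest neighbors (the two diagonals),
    so every bond of Eq. (1) appears exactly once.
    """
    def idx(x, y):
        return (x % L) + L * (y % L)

    j1, j2 = [], []
    for y in range(L):
        for x in range(L):
            i = idx(x, y)
            j1.append((i, idx(x + 1, y)))      # right neighbor
            j1.append((i, idx(x, y + 1)))      # upper neighbor
            j2.append((i, idx(x + 1, y + 1)))  # up-right diagonal
            j2.append((i, idx(x + 1, y - 1)))  # down-right diagonal
    return j1, j2

# A 6 x 6 torus has 72 nearest-neighbor and 72 next-nearest-neighbor bonds.
b1, b2 = j1_j2_bonds(6)
print(len(b1), len(b2))
```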
Without the J2 term, the problem can be numerically solved for hundreds of spins using quantum Monte Carlo (QMC) [38]. However, the method cannot be applied to problems with frustration since
it is noticeably affected by the infamous sign problem [24]. In our case, this is due to the presence of the J2 term that makes some transition probabilities ill-defined (negative). The ground states of this model are well established in two extreme cases: at small J2/J1 the system orders antiferromagnetically with wave vector q = (π, π); at large J2/J1 the spins prefer columnar order q = (π, 0), (0, π), in which they are aligned antiparallel in one direction, but ferromagnetically in the other. However, in the maximally frustrated regime J2 ∼ 0.5J1, the system does not display any apparent order and the nature of this spin liquid state remains controversial despite significant research efforts over the past three decades [3, 6, 11, 10, 35, 39, 41, 34, 37, 26, 20, 15, 21, 45].
Therefore, we choose this Hamiltonian for two reasons: (i) it realizes a quantum spin liquid in a parameter regime near J2 ∼ 0.5J1 and (ii) conventional Monte Carlo methods fail, making the model an ideal testing ground to benchmark new techniques. Variational Monte Carlo (VMC) provides a suitable alternative that can be scaled up to large two-dimensional systems without being affected by the sign problem. The quest for relatively simple yet powerful variational states has focused on neural network states, which have shown a great deal of promise. The complexity of the problem lies in the fact that many states with similar energy have very different physical properties. Therefore, an accurate representation of the ground state becomes the key to studying the nature of the quantum phase.
2.2 Related work
Before the concept of NNQS became a popular new alternative for simulating many-body systems, the most successful numerical techniques to treat the 2D J1 − J2 model have been the density matrix renormalization group (DMRG)[15], VMC based on a projected fermionic ansatz[20], and tensor product states[45]. Recently, some research has focused on improving the accuracy of NNQS by using deep neural networks such as CNN[9] and group-CNN[36]. The idea of applying quantum number projection to recover the symmetries of the wave function[40, 46] has proven to be effective in improving the performance of NNQS[9, 28, 29, 36]. In addition, other alternatives that enhance the quality of the approximations consist of combining NNQS with Gutzwiller-projected fermionic wave functions[12], or pair-product wave functions[30].
3 Method
3.1 Neural Network Wave Function with symmetry
An RBM wave function takes a spin configuration – a sequence of N values ±1/2 – and returns a complex coefficient corresponding to the wave function amplitude. In other words, it is a function ψ : {−1/2, +1/2}^N → ℂ. This function is highly non-linear and is parametrized by biases a⃗, b⃗ and weights W as:
ψ(σ⃗^z, a⃗, b⃗, W) = e^{∑_{i=1}^{N} a_i σ^z_i} ∏_{i=1}^{M} 2 cosh( ∑_{j=1}^{N} W_{ij} σ^z_j + b_i ). (2)
In this expression, the number of “hidden variables” M is a tunable parameter. While RBMs have remained a simple example of a basic neural network for many decades, it was only recently that their potential as variational wave functions was appreciated [4]. In this case, unlike conventional machine learning applications, the biases and weights are complex valued.
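As a minimal illustration of Eq. (2) (an added sketch, not code from the paper; the sizes and initialization below are placeholders rather than the settings used later in Sec. 3.4), the unnormalized amplitude of a configuration can be evaluated directly:

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    """Unnormalized RBM amplitude of Eq. (2) for sigma in {-1/2, +1/2}^N.

    a: (N,) complex visible biases, b: (M,) complex hidden biases,
    W: (M, N) complex weights.
    """
    theta = W @ sigma + b                      # hidden-unit arguments
    return np.exp(a @ sigma) * np.prod(2.0 * np.cosh(theta))

N, M = 16, 32                                  # placeholder sizes
rng = np.random.default_rng(0)
a = rng.uniform(-0.01, 0.01, N) + 1j * rng.uniform(-0.01, 0.01, N)
b = rng.uniform(-0.01, 0.01, M) + 1j * rng.uniform(-0.01, 0.01, M)
W = rng.uniform(-0.01, 0.01, (M, N)) + 1j * rng.uniform(-0.01, 0.01, (M, N))
sigma = rng.choice([-0.5, 0.5], size=N)
print(rbm_amplitude(sigma, a, b, W))
```

In practice one usually works with log-amplitudes (a sum of log-cosh terms) to avoid overflow at large N; the product form is shown only to mirror Eq. (2).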
It is possible to account for certain symmetries [27] of the problem directly within the internal mathematical structure of the RBM. In particular:
• Spin flip symmetry: If the z-component of the total magnetization is zero (∑_i σ^z_i = 0), the global spin flip operation σ^z_i → −σ^z_i preserves this property. Notice that since cosh(x) is an even function, we can easily restore the global flip symmetry in the RBM wave function by removing the “magnetic field” terms associated with the biases a⃗, b⃗ in Eq. (2). Thus, the RBM wave function coefficients become:
ψs(σ⃗^z, W) = ∏_{i=1}^{M} 2 cosh( ∑_{j=1}^{N} W_{ij} σ^z_j ). (3)
Notice that even though the computational cost of optimizing and evaluating observables with the symmetrized wave function has increased, the resulting state has a much larger expressivity than the original one, translating into a remarkable accuracy as we shall demonstrate. We should highlight here that the new states, by being linear combinations of RBMs, are no longer RBMs, and therefore allow one to explore a much larger space outside the original manifold defined by ψs, Eq. (3).
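A sketch of this construction, under simplifying assumptions of our own: the spin-flip-symmetric amplitude of Eq. (3), plus a generic projection that averages amplitudes over a list of lattice symmetry operations (given as site permutations) weighted by the characters of the chosen irreducible representation. This only illustrates the idea of a linear combination of RBMs; the exact form of the projected state ψKL used below may differ in detail.

```python
import numpy as np

def rbm_amplitude_symmetric(sigma, W):
    """Spin-flip-symmetric amplitude of Eq. (3); cosh is even, so psi(sigma) = psi(-sigma)."""
    return np.prod(2.0 * np.cosh(W @ sigma))

def projected_amplitude(sigma, W, permutations, characters):
    """Schematic symmetry projection: a linear combination of RBM amplitudes
    evaluated on symmetry-transformed configurations."""
    sigma = np.asarray(sigma)
    return sum(c * rbm_amplitude_symmetric(sigma[list(p)], W)
               for p, c in zip(permutations, characters)) / len(permutations)
```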
3.2 Wave Function Optimization
The goal of the calculation is to minimize the cost function defined by the expectation value of the energy:
Evar = ⟨ψKL|H|ψKL⟩ / ⟨ψKL|ψKL⟩ (7)
     = ∑_{σ⃗} P_{σ⃗} Eloc(σ⃗), (8)
where the probability distribution is determined by the normalized wave function coefficients
P_{σ⃗} = |⟨σ⃗|ψKL⟩|² / ∑_{σ⃗′} |⟨σ⃗′|ψKL⟩|² (9)
and the local energy is given by
Eloc(σ⃗) = ⟨σ⃗|H|ψKL⟩ / ⟨σ⃗|ψKL⟩. (10)
By formulating the problem in probabilistic terms, one can resort to Metropolis-Hastings Markov Chain Monte Carlo to evaluate the averages. The sampling over the spin configurations σ⃗ is carried out by randomly flipping pairs of anti-aligned spins, and using von Neumann rejection according to a transition probability W = |⟨σ⃗_new|ψKL⟩|² / |⟨σ⃗_old|ψKL⟩|². The wave function optimization can be implemented by a variety of methods. Since the energy landscape is extremely complex, simple gradient descent tends to get trapped in metastable solutions. More sophisticated strategies are usually employed, such as natural gradient descent or “stochastic reconfiguration” [42]. In contrast to the “standard” natural gradient descent method, the Fubini-Study metric [33], which is the complex-valued form of the Fisher information, is used to measure the “distance” between wave functions |ψ⟩ and |φ⟩:
γ(ψ, φ) = arccos √( ⟨ψ|φ⟩⟨φ|ψ⟩ / (⟨ψ|ψ⟩⟨φ|φ⟩) ). (11)
The procedure to update the variational parameters using natural gradient descent is well described in the literature [4, 8, 30], and we summarize it here. The optimization is done by minimizing the Fubini-Study metric between |e^{−dτH}ψ(θ)⟩ and |ψ(θ + δθ)⟩, where dτ is a small step in imaginary time and can be viewed as a learning rate in the training of the neural network. The optimal choice for δθ is given by the solution of a system of equations:
∑_{k′} [ ⟨O†_k O_{k′}⟩ − ⟨O†_k⟩⟨O_{k′}⟩ ] δθ_{k′} = −dτ [ ⟨O†_k H⟩ − ⟨O†_k⟩⟨H⟩ ], (12)
where the log derivative O_k = (1/ψ(θ)) ∂ψ(θ)/∂θ_k and ⟨· · ·⟩ denotes an average over samples. We update the parameters by θ_k = θ′_k + δθ_k, where θ′ are the previous values, and repeat until convergence is reached.
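A compact sketch of one update of Eq. (12) is given below, assuming the log-derivatives O_k and the local energies have already been estimated on a batch of samples. The dense solve is used here only for brevity; the paper solves the same system with conjugate gradient and a small ridge term (see Sec. 3.4).

```python
import numpy as np

def sr_update(O, E_loc, dtau=1e-2, ridge=1e-6):
    """One stochastic-reconfiguration step, Eq. (12).

    O:     (n_samples, n_params) complex log-derivatives O_k on the samples
    E_loc: (n_samples,) complex local energies
    Returns the parameter update delta_theta.
    """
    n = O.shape[0]
    Oc = O - O.mean(axis=0)                    # centered log-derivatives
    Ec = E_loc - E_loc.mean()
    S = (Oc.conj().T @ Oc) / n                 # <O^dag O> - <O^dag><O>
    F = (Oc.conj().T @ Ec) / n                 # <O^dag H> - <O^dag><H>
    S = S + ridge * np.eye(S.shape[0])         # ridge regularization for stability
    return -dtau * np.linalg.solve(S, F)
```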
3.3 Lanczos recursion
Using the symmetrized RBM wave function combined with the stochastic reconfiguration method, a good approximation of the ground state can be achieved after hundreds or thousands of iterations. However, due to the limited representation power of neural network wave functions, and the errors stemming from the Monte Carlo sampling and the optimization method, the true ground state of the Hamiltonian H can still differ significantly from the variational one. One possible way to increase the expressivity of the wave function is to introduce additional hidden variables or layers. However, an alternative to systematically improve the neural network wave function consists of applying a modified Lanczos recursion [14, 2, 20]. The procedure begins with a (normalized) trial wave function ψ0, which in our case is an initial guess for the ground state, ψ0 = ψKL . Then, a new state ψ1 is constructed by applying the Hamiltonian on ψ0 and subtracting the projection over ψ0 in order to preserve orthogonality:
ψ1 = (Hψ0 − ⟨H⟩ψ0) / (⟨H²⟩ − ⟨H⟩²)^{1/2}, (13)
where ⟨H^n⟩ = ⟨ψ0|H^n|ψ0⟩. Notice that ψ1 is orthogonal to ψ0 and also normalized. In the usual Lanczos method, this recursion can be continued such that a new complete orthogonal basis can be constructed. In this representation, the Hamiltonian will have a tri-diagonal form. However, we only use ψ0 and ψ1 as our basis, and thus the Hamiltonian will be a 2 × 2 matrix.
The eigenvector ψ̃0 that corresponds to the lowest eigenvalue Ẽ0 of this matrix will be a better approximation of the true ground state of Hamiltonian compared to ψ0. The lowest eigenvalue and corresponding eigenvector are
Ẽ0 = ⟨H⟩ + vα, (14)
ψ̃0 = (1/(1 + α²)^{1/2}) ψ0 + (α/(1 + α²)^{1/2}) ψ1, (15)
where
v = (⟨H²⟩ − ⟨H⟩²)^{1/2}, (16)
r = (⟨H³⟩ − 3⟨H²⟩⟨H⟩ + 2⟨H⟩³) / (2(⟨H²⟩ − ⟨H⟩²)^{3/2}), (17)
α = r − (r² + 1)^{1/2}. (18)
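These expressions follow from diagonalizing H in the two-dimensional orthonormal basis {ψ0, ψ1}: the matrix elements are ⟨ψ0|H|ψ0⟩ = ⟨H⟩, ⟨ψ0|H|ψ1⟩ = ⟨ψ1|H|ψ0⟩ = v, and ⟨ψ1|H|ψ1⟩ = ⟨H⟩ + 2rv, with v and r defined in Eqs. (16)–(17). The lowest eigenvalue of this 2 × 2 matrix is ⟨H⟩ + v(r − (r² + 1)^{1/2}) = ⟨H⟩ + vα, which is Eq. (14) with α given by Eq. (18), and the corresponding eigenvector is proportional to ψ0 + αψ1, which after normalization gives Eq. (15).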
The eigenvector ψ̃0, being a linear combination of ψ0 and ψ1, is the improved neural network wave function, and Ẽ0 is the new improved variational energy. By considering ψ̃0 as the new trial wave function replacing ψ0, this method can be repeated to further improve the wave function. The neural network wave function obtained during the Lanczos recursion can be generalized as
|Ψp⟩ = (1 + ∑_{i=1}^{p} βi H^i)|ψ0⟩, (19)
where p is the maximum number of Lanczos steps, and βi is the wave function coefficient corresponding to H^i|ψ0⟩. In this form, one can easily identify the wave function as an expansion on a Krylov basis.
In practice, taking into account the fact that the computational complexity increases dramatically with increasing p, only a few steps can be calculated for a large quantum many-body system. In this study, and for illustration purposes, we shall consider only the p = 1 or p = 2 cases.
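As an illustration (ours, not the authors' implementation), a single p = 1 step only requires Monte Carlo estimates of the moments ⟨H⟩, ⟨H²⟩ and ⟨H³⟩ in the state ψ0:

```python
import numpy as np

def lanczos_p1_step(h1, h2, h3):
    """Improved energy and mixing coefficients from Eqs. (14)-(18).

    h1, h2, h3: Monte Carlo estimates of <H>, <H^2>, <H^3> in psi_0.
    Returns (E0_tilde, c0, c1) with psi_0_tilde = c0 * psi_0 + c1 * psi_1.
    """
    v = np.sqrt(h2 - h1 ** 2)                                  # Eq. (16)
    r = (h3 - 3.0 * h2 * h1 + 2.0 * h1 ** 3) / (2.0 * v ** 3)  # Eq. (17)
    alpha = r - np.sqrt(r ** 2 + 1.0)                          # Eq. (18)
    norm = np.sqrt(1.0 + alpha ** 2)
    return h1 + v * alpha, 1.0 / norm, alpha / norm            # Eqs. (14)-(15)
```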
3.4 Implementation details
In this work, we focus on the 2D J1 − J2 Heisenberg model on L × L square lattices where L is an even number. For the neural network, we use ψKL in all simulations and consider three different values for the number of hidden variables M, namely 2, 2.5, and 3 times the number of spins N = L² in the system. The parameters W in the RBM are initialized with random numbers drawn from a uniform distribution on [−0.01, 0.01] for both the real and imaginary parts. The ground state can belong to the A1 or B1 irreducible representations of the C4v point group, depending on the value of J2/J1. In our calculations we consider both cases near the transition between the spin liquid phase and the columnar phase with K = (π, 0), i.e., for J2/J1 ≥ 0.5. Due to the large number of parameters and the numerical noise in sampling, we implement the conjugate gradient method to solve the system of equations, Eq. (12). To stabilize the method, we introduce a ridge parameter λ = 10^{-6}. For each training step, we collect 10000 samples to evaluate the averages mentioned in Sec. 3.2, including the variational energy and log derivatives. Since adjacent states in the Markov chain are highly correlated, the number of skipped states between samples Nskip is chosen according to the relation Nskip = 5 × 1.0/r, where r is the acceptance rate in the previous training step. The typical value for Nskip ranges from 30 to 100. For evaluation, we collect 2 × 10^5 samples to calculate the average and the statistical error. The learning rate used in the training ranges from 5 × 10^{-4} to 3 × 10^{-2}. Once we observe that the variational energy is no longer decreasing, a smaller learning rate (half of the previous one) is used instead. For large L, to save training time, we initialize the parameters W in ψKL using the parameters trained by means of the cheaper wave function ψs. All simulations are performed using Eigen and Intel MKL on Intel E5-2680v4 and AMD Rome 7702 CPU nodes. Source code will be available at: https://github.com/hwchen2017/Lanczos_Neural_Network_Quantum_State.
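A schematic version of the sampling loop described above (a sketch we add for illustration; the amplitude function psi is assumed to return ⟨σ⃗|ψ⟩ for the current parameters), using pair flips of anti-aligned spins and the thinning rule Nskip = 5/r:

```python
def sample_configs(psi, sigma, n_samples, n_skip, rng):
    """Metropolis sampling in the zero-magnetization sector.

    psi(sigma) returns the (complex) amplitude <sigma|psi>.  Proposals exchange a
    randomly chosen pair of anti-aligned spins, which conserves sum_i sigma_i.
    """
    samples, accepted, proposed = [], 0, 0
    amp_old = psi(sigma)
    while len(samples) < n_samples:
        for _ in range(n_skip):
            i, j = rng.integers(len(sigma), size=2)
            if sigma[i] == sigma[j]:
                continue                               # only flip anti-aligned pairs
            proposed += 1
            trial = sigma.copy()
            trial[i], trial[j] = sigma[j], sigma[i]
            amp_new = psi(trial)
            if rng.random() < abs(amp_new) ** 2 / abs(amp_old) ** 2:
                sigma, amp_old = trial, amp_new
                accepted += 1
        samples.append(sigma.copy())
    rate = accepted / max(proposed, 1)
    return samples, rate                               # rate can be fed back into n_skip = 5 / rate
```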
4 Results
4.1 Comparison with Exact Diagonalization
We benchmark the accuracy of the neural network wave functions for the ground state mainly on the 6 × 6 and 10 × 10 square lattices with periodic boundary conditions. For the 6 × 6 lattice, the J1 − J2 model is numerically solvable by enumerating the possible spin configurations, constructing the Hamiltonian matrix, and explicitly solving the eigenvalue problem [39]. Once the ground state (or its variational approximation) is obtained, the wave function can be used to calculate other physical quantities besides the energy. Here, for illustration, we compute the spin structure factor, which defines the sublattice magnetization squared for a finite system
S(q) = (1/N²) ∑_{i,j} ⟨σ^z_i σ^z_j⟩ e^{iq·(r_i − r_j)}, (20)
where the wave vector q determines spatial structure of the magnetic order. Notice that in all the tables shown here, we display the results times a factor N for readability.
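For reference, Eq. (20) can be estimated directly from the Monte Carlo samples. The sketch below is our own and assumes an array of sampled configurations with the site convention i = x + Ly:

```python
import numpy as np

def structure_factor(samples, L, q):
    """Monte Carlo estimate of S(q), Eq. (20).

    samples: (n_samples, L*L) array with entries +-1/2, site index i = x + L*y.
    q:       wave vector, e.g. (np.pi, np.pi) for Neel order.
    """
    samples = np.asarray(samples)
    sites = np.arange(L * L)
    xs, ys = sites % L, sites // L
    phase = np.exp(1j * (q[0] * xs + q[1] * ys))
    # For real sigma_i, the double sum over i, j in Eq. (20) equals the sample
    # average of |sum_i sigma_i e^{iq.r_i}|^2, divided by N^2 = L^4.
    fq = samples @ phase
    return np.mean(np.abs(fq) ** 2) / L ** 4
```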
We first focus on the symmetrized RBM wave function without the Lanczos optimization, and we start by comparing the ground state energy for a 6 × 6 lattice as a function of J2/J1, as shown in Fig. 1. In this figure we calculate the relative error as |E_nn − E_exact|/|E_exact| using the exact ground state energy from Ref. [39]. We also include the relative error of the ground state energy obtained using a convolutional neural network wave function from Ref. [9]. While the relative error of the CNN is of order 10^{-3}, our RBM wave function achieves an accuracy of 10^{-4} in the frustrated regime. Even compared with other recent works using CNNs [43, 36, 7], our RBM wave function still outperforms the CNN wave functions. Besides the ground state energy, the spin structure factor computed from the optimized wave functions agrees very well with the exact solution, as shown in Fig. 2, where the differences are smaller than the symbol size, and in data table 2.
4.2 Comparison with state-of-the-art quantum Monte Carlo
For larger lattices, the problem is numerically intractable. However, as mentioned before, it can be solved using QMC [38] for J2 = 0. Thus, for the case without frustration we can compare with QMC results for several different lattice sizes. From table 3, we can see that even on the 10 × 10 lattice the energy difference is about 3 × 10^{-5}, showing the extraordinary accuracy of our RBM wave function.
For the frustrated case, J2 ≠ 0, we compare to other methods, such as those obtained with CNN wave functions, as well as results using the density matrix renormalization group (DMRG) method with SU(2) symmetry from Ref. [15] and VMC using an Abrikosov-fermion mean field with a Z2 gauge structure from Ref. [20]. From the data tables 4 and 5, we observe that our RBM wave function outperforms the CNN wave function again in the entire range of J2/J1. In the frustrated regime, comparisons with VMC and DMRG using all the data available in the literature demonstrate that the RBM wave functions still yield competitive ground state energies except at J2/J1 = 0.55, where DMRG yields a lower value.
4.3 Lanczos optimization
Since the most interesting regime lies around the maximally frustrated point J2 ∼ 0.5J1, we choose 3 different values of J2/J1 using 6× 6 and 10× 10 lattices and perform a few Lanczos steps to further
improve the ground state energy. From data tables 2 and 4, we see that the Lanczos steps are very effective regardless of the system size. Remarkably, by performing a single p = 1 Lanczos step, we obtain a better energy at J2/J1 = 0.55 for the 10 × 10 lattice, improving significantly on the best available data using state-of-the-art DMRG, as shown in data table 4. Besides, compared to the “RBM+PP” results [30], which are generally considered the state-of-the-art NNQS method, we obtain a slightly lower variational energy at J2 = 0.5, 0.55 on a 6 × 6 lattice, while for a 10 × 10 lattice at J2 = 0.5 their variational energy is 8 × 10^{-4} lower than ours. Additionally, with the help of the Lanczos recursion, a better estimate of the energy can be obtained by carrying out a variance extrapolation as illustrated in Refs. [2, 20]. We also tried to improve the estimate of the spin structure factor using Lanczos, but the Monte Carlo sampling error makes the improvement difficult to resolve.
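For reference, such an extrapolation exploits the fact that for a good variational state the energy is approximately linear in the energy variance, so a linear fit of the measured (variance, energy) pairs from successive Lanczos steps and a read-off of the zero-variance intercept gives the improved estimate. A minimal sketch (ours, not the authors' script) is:

```python
import numpy as np

def variance_extrapolation(variances, energies):
    """Linear fit E(sigma^2) to the measured points; return the zero-variance intercept."""
    slope, intercept = np.polyfit(np.asarray(variances), np.asarray(energies), 1)
    return intercept
```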
5 Conclusion
Neural network wave functions hold a great deal of promise due to their ability to compress complex quantum many-body states within a relatively simple mathematical structure that, owing to its nonlinearity, can encode an exponentially large amount of information with polynomial resources. In particular, RBM wave functions, initially deemed too simple, can be used as building blocks for systematically improved wave functions. These improved states obey the internal symmetries of the model and the point group symmetries of the lattice. In addition, they may contain contributions from the state living in a “tangent space” to the original RBM manifold. These tangent vectors are spanned in terms of powers of the Hamiltonian and form a Krylov basis.
We have demonstrated that we can achieve state-of-the-art accuracy that improves on previous results using convolutional neural networks, with a minimal amount of extra computational cost compared to simple RBMs. The combination of Lanczos and symmetrization offers an effective solution to problems previously beyond the reach of the most powerful numerical techniques and provides the
means to bypass the sign problem. These ideas can seamlessly translate to other areas of research ranging from materials science to quantum chemistry. Besides, our variational solution can be adopted to calculate the excitation spectrum of a quantum many-body system[17, 18], providing valuable information that can be directly compared to experiments.
Limitations The computational cost of a single training step scales as O(N_sample × MN²), where the number of hidden variables M is usually proportional to the system size N. Thus, the computation time may become a bottleneck for applications to larger lattices. In particular, we find that even though the results for the energy are very accurate, correlation functions have relatively larger errors. This behavior might be improved by using variational forms with better representation power. Besides, the Lanczos step procedure is not size consistent, which means that the energy improvement with respect to the original wave function |ψ0⟩ vanishes for fixed p and N → ∞. Also, the Lanczos correction becomes smaller and smaller as p increases. Nevertheless, a sizable improvement is obtained even for rather large clusters with 100 sites, as shown in data table 4.
Negative Societal Impact Our work presents the theoretical simulation of the quantum many-body problems without any foreseeable negative societal impacts.
Acknowledgments and Disclosure of Funding
AEF and HC acknowledge the National Science Foundation for support under grant No. DMR2120501. DH is partially supported by a Northeastern Tier 1 grant. | 1. What is the main contribution of the paper regarding neural network quantum states?
2. What are the strengths and weaknesses of the proposed approach compared to previous works?
3. How does the paper address the issue of symmetries in variational states?
4. Can the authors provide further explanations or comparisons regarding the use of brute-force symmetrization vs. inherently symmetric models?
5. What is the property that the authors wish to enforce in their symmetrized anzatz, and why?
6. Can the authors elaborate on the specific meaning of "point group symmetries" in their context?
7. Would including a derivation of equations 14-18 in the appendix be helpful for readers?
8. Are there any additional limitations to the approach that the authors have not addressed? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper demonstrates that by leveraging symmetrization of variational states and employing a Lanczos recursion, neural network quantum states can surpass the current state-of-the-art accuracy for representing the ground state of the J1-J2 model on the square lattice. The main finding is that instead of using a more intricate NN ansatz (e.g., ConvNets, Graph NN, or Transformers), the much simpler RBM ansatz coupled with symmetrization and Lanczos steps is sufficient to represent complex wave-functions. With that method they show that they can obtain much better ground state energy estimates across the entire range of the J2/J1 parameter and specifically near the point of highest frustration (~ 0.5).
Strengths And Weaknesses
The paper is well written and offers a simple introduction to the subject matter, which is a bit outside of the common domain of ML. The main strength of the paper is its impressive results, improving on the previous best results by a large factor on a problem that is considered exceptionally difficult. Moreover, it is noteworthy that these improvements are obtained by using relatively simple RBMs, as opposed to more intricate architectures (e.g., ConvNets or Transformers).
However, the presented methods are not novel and are based on previous works. Specifically, it appears that the symmetrization method is identical to those used by prior papers including with NN-based approaches [1,2], and to the extent of my knowledge it is a well-known widely-used technique in the field. Similarly, Lanczos iteration for improving the accuracy of a given ground-state approximation was suggested by other works and was specifically used in the past to improve results on the J1-J2 model [3]. It is worth noting that to the best of my knowledge this is the first use of this method with NN quantum states. While the authors do properly cite prior works, it is not sufficiently emphasized that they merely implement these ideas for NN quantum states -- something that should have been more clearly articulated in the introduction and abstract.
As a final note, while the paper clearly shows the superiority of the proposed method on the given task, not much effort is spent on attributing these gains to the various choices. An ablation study (even if done only on the 6x6 case) could go a long way toward clarifying the contribution of each element (e.g., employing Lanczos on a non-symmetric RBM and the ConvNet model from [1], or using RBMs with a symmetric matrix to account for translation symmetry vs. brute-force symmetrization).
[1] - Choo et al., Two-dimensional frustrated J1−J2 model studied with neural network quantum states, PRB, 100:125124, Sep 2019.
[2] - Sharir et al., Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems, PRL 124:020503, Jan 2020.
[3] - Iqbal et al., Spin liquid nature in the Heisenberg J1-J2 triangular antiferromagnet, PRB 93:144411, Apr 2016.
Questions
On the topic of symmetries, it would have been helpful for the authors to discuss other approaches to brute-force symmetrization, e.g., symmetric constructions of RBMs / ConvNets / Graph NN. It is especially interesting that applying brute-force symmetrization to RBMs worked better than the ConvNet used in [1], which follows translational symmetry by construction and uses the same brute-force symmetrization only for the C4 symmetries. It would be great if the authors could comment on why that might be the case, given that both models follow the same symmetries and one (the ConvNet) is supposedly more expressive (prior to using Lanczos). Moreover, given that the set of methods advocated by the paper could be applied to any ansatz, have the authors considered using ConvNets or any other NN architecture to verify whether the improved results are due to the use of simpler RBMs, or comparing the use of brute-force symmetrization vs. inherently symmetric models (whether using symmetric RBMs, or symmetric ConvNets)?
There is a slight mismatch between the usual meaning of invariance to symmetries and how you define your symmetrized ansatz. Specifically, one usually refers to a symmetry-invariant function as one where f(x) = f(Tx) for all T in some set of symmetry operators, but in this case it appears that this requirement is softened to being equal in amplitude while allowing for a shift in phase, and so it is more akin to an equivariance property. Could the authors please elaborate on this point to clarify what property they wish to enforce and why?
"point group symmetries" might not be clear for all readers, and it would be great to be specific on what it means in this context (rotations + reflections).
While not mandatory, given that it is a method defined by prior works, it would be helpful to include a derivation of equations 14-18 in the appendix to this work.
Limitations
The authors have properly addressed the limitations of their approach, and the difficulty of scaling it to larger lattices. My only comment would be that the limitations paragraph at the end should also include the cost of symmetrization in the training step (so it should be N^3 not N^2), and that they should also repeat the cost of the Lanczos step. I would also add that it might be challenging to scale this method to larger NNs, which might be necessary to solve certain cases with high accuracy.
1. What is the focus and contribution of the paper regarding symmetry-projected variational solutions?
2. What are the strengths of the proposed approach, particularly in terms of its ability to represent quantum states and satisfy internal constraints?
3. Are there any concerns regarding the originality of the method, and how does it differ from other works that also aim to restore symmetry?
4. What are the limitations of the paper, especially regarding the comparison with other methods and the lack of experiments on different lattice structures?
5. Can the proposed neural network wave function be applied to other types of lattices, such as triangular and honeycomb, and what would be the performance like?
6. How does the method address the limitation of scaling up to larger systems, and can it be generalized to different types of lattices or random graphs?
Summary Of The Paper
This paper proposes a symmetry-projected variational solution in the form of linear combinations of simple restricted Boltzmann machines, which allows one to explore states outside of the original variational manifold and thus increase the representation power. Also, an expansion in terms of Krylov states using a Lanczos recursion is used to further improve the quantum state accuracy. Experiments are conducted on the Heisenberg J1-J2 model on a square lattice.
Strengths And Weaknesses
Strengths:
clarity: This paper is overall well written and easy to follow. The method and results are clearly presented.
significance: The proposed neural network quantum state can obey the internal symmetries of the quantum model and the point group symmetries of the lattice. This could make the neural network more effective at representing the quantum state by satisfying internal constraints of the quantum many-body problem. Also, the Heisenberg J1-J2 model has the infamous sign problem due to frustration, which cannot be handled well by traditional numerical methods, so applying neural networks to this problem is of great importance to quantum physics.
Weaknesses:
originality: Based on my background and the paper itself, I cannot judge whether the method of this work is novel, but this work appears to miss some relevant works, such as the following:
Liang, Xiao, et al. "Solving frustrated quantum many-particle models with convolutional neural networks." Physical Review B 98.10 (2018): 104426.
A. Szabó and C. Castelnovo. Neural network wave functions and the sign problem. Physical Review Research, 2(3):033075, 2020.
Kochkov, Dmitrii, et al. "Learning ground states of quantum Hamiltonians with graph networks." arXiv preprint arXiv:2110.06390 (2021).
Also, the authors do not explain the difference between the proposed method and those works (mentioned in related work) that also try to restore symmetry, nor how the performance compares with these other symmetry-aware methods rather than just one CNN method.
quality: The authors primarily compare their results with a specific CNN method. Could the authors explain why they only compare with this specific deep learning method? Also, experiments are only performed on the square lattice. Since the compared CNN method already performs very well on the square lattice, the proposed method can only show very small improvements. I think the authors could show the advantage of the proposed method over other methods on other difficult quantum systems, such as different lattices where the ground state is harder to learn.
Questions
I am aware of another recent work on solving the quantum many-body problem over various lattices [1]. Have the authors tried the proposed method on other types of lattices, such as triangular and honeycomb? How does the proposed neural network wave function perform on these lattices?
[1] Kochkov, Dmitrii, et al. "Learning ground states of quantum Hamiltonians with graph networks." arXiv preprint arXiv:2110.06390 (2021).
Limitations
The limitation of scaling up to larger systems is well addressed. Could the authors explain more about the generality of this method for different types of lattices or even random graphs?
1. What is the focus and contribution of the paper on modeling wave functions of quantum systems?
2. What are the strengths of the proposed approach, particularly in incorporating physical knowledge and combining different concepts?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the limitations of the proposed method regarding its scalability with the system size?
Summary Of The Paper
In the present paper, the authors introduce a method to model the wave function of quantum systems, in particular spin systems in the J1 − J2 model. They use restricted Boltzmann machines (RBMs) and form linear combinations of them in order to incorporate certain symmetries of the investigated system. Furthermore, the authors introduce a method to determine the ground state utilizing the Lanczos recursion to iteratively improve the estimate.
They compare their method on spin lattices of various sizes to baseline methods such as quantum Monte Carlo and a CNN-based method, on the basis of the ground state energies and the spin structure factor obtained through the different procedures. Their method is competitive with or outperforms the baselines.
Strengths And Weaknesses
Strengths
In general, the paper is concise and well written, making it pleasant to read and understand.
The authors utilize the physical knowledge about the symmetries of the system and incorporate it into the model, which is a common theme in applying machine learning models to problems in the natural sciences.
Different concepts from machine learning (RBMs), advanced statistics (the Fubini-Study metric), and numerics (the Lanczos recursion) are combined to form a new algorithm capable of computing the ground state energy of a spin system, which is competitive with or outperforms its baselines.
Weaknesses
I am not familiar with algorithms for estimating the ground state energy of quantum systems, so I cannot judge how big or small the improvement of this procedure over existing baselines is. However, the difference in the results is often very small, only manifesting in the fourth or fifth digit, so it is not clear to me whether this is actually significant.
The authors mention that the key for such algorithms is that they use a small amount of memory and compute. A naive implementation would use an exponential amount of memory, while this procedure has a controllable number of variables. However, they do not discuss in detail how much compute and memory competing procedures use. They only mention in the conclusion that their method is slightly more expensive than the CNN-based method while having a better accuracy, but it is unclear what this means.
Conclusion
Given my criticism and my lack of knowledge in the field, I am unsure whether to accept or reject this article. For me, it is important to clarify the amount of memory and compute needed by different methods, putting their respective performances in perspective. I am willing to raise my score if this concern is addressed appropriately.
Questions
How much memory and compute do other methods use?
How do they scale with the system size?
Limitations
The authors state that the computational cost of their method scales quadratically with the system size, which limits their ability to tackle larger systems.
NIPS | Title
Systematic improvement of neural network quantum states using Lanczos
Abstract
The quantum many-body problem lies at the center of the most important open challenges in condensed matter, quantum chemistry, atomic, nuclear, and highenergy physics. While quantum Monte Carlo, when applicable, remains the most powerful numerical technique capable of treating dozens or hundreds of degrees of freedom with high accuracy, it is restricted to models that are not afflicted by the infamous sign problem. A powerful alternative that has emerged in recent years is the use of neural networks as variational estimators for quantum states. In this work, we propose a symmetry-projected variational solution in the form of linear combinations of simple restricted Boltzmann machines. This construction allows one to explore states outside of the original variational manifold and increase the representation power with moderate computational effort. Besides allowing one to restore spatial symmetries, an expansion in terms of Krylov states using a Lanczos recursion offers a solution that can further improve the quantum state accuracy. We illustrate these ideas with an application to the Heisenberg J1 − J2 model on the square lattice, a paradigmatic problem under debate in condensed matter physics, and achieve state-of-the-art accuracy in the representation of the ground state.
1 Introduction
Understanding correlated quantum systems requires dealing with a large configuration space: datasets are comprised of all possible electronic configurations ~σ and cannot be stored in the memory of the largest supercomputer. Hence, the quantum many-body problem can be interpreted as an “extreme data science” problem [13] from an information processing perspective. In a quantum wave function, each electronic or spin configuration has an associated complex amplitude ψ(~σ) determined by solving for the eigenvectors of the Hamiltonian operator. In particular, if one is interested in the zero temperature properties of the system, the solution is given by the eigenvector with the smallest eigenvalue. Finding the exact solution of a N quantum bit system with interactions requires solving for the eigenvectors of a 2N × 2N matrix. Alternatively, one can formulate the calculation as an optimization problem in which an “energy functional” E(ψ) has to minimized with respect to all the 2N complex amplitudes.
Since the number of configurations d grows exponentially with the number of degrees of freedom (electrons, spins), this problem quickly becomes intractable. A solution consists of “compressing’ the wave function by proposing a suitable guess for the amplitudes based on some variational parameters ~α = (α1, α2, · · · , αm). Typically, a functional form ψ(~σ) = f(~σ, ~α) based on some physical intuition is utilized to represent the amplitude of given configuration/state ~σ. The optimal parameters αi are determined by solving the system of equations ∇αE = 0. The objective of this solution is to achieve the lowest possible energy with a number of parameters m d.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Some relatively simple wave functions have enjoyed various degrees of success in the past, such as those of the Jastrow type where the amplitudes can be written as pair products f(~σ, ~α) =∏ ij U(αijσiσj). However, in recent years we have witnessed impressive developments based on the use of neural network (NN) wave functions as variational estimators [4], which have jump-started a new vibrant field of research dubbed “quantum machine learning”. Notice that the optimization of the wave function parameters now translates into the “training” of the NN by minimizing the energy function that becomes a cost function (we describe the training process below). The power of NN wave functions lies in the complex non-linear structure that provides them with remarkable expressivity to represent arbitrary complex many-body states by, at the same time, being completely agnostic to the physics.
Since restricted Boltzmann machines(RBM) were originally used as a variational ansatz for finding the ground state of the quantum many-body systems [4], there has been a growing effort to investigate other forms of neural networks, including convolutional neural networks(CNN)[9, 23], recurrent neural networks(RNN)[19], graph networks[22], transformers[25], to mention a few. Thus, neural network quantum states(NNQS) become the most appealing numerical alternative to treat quantum many body systems since they can be systematically improved by adding new layers or hidden variables, for instance. In addition to the ground state search, the application of NNQS ranges from classical simulation of quantum circuits[1, 5, 32], calculation of spectral function[17, 18], thermodynamics simulation[16, 31], and quantum tomography[44].
Contributions In this work, we show how one can use a mathematically simple structure, a restricted Boltzmann machine (RBM), and yet obtain values of the ground state energy that beat all previous estimates by a range of numerical methods, including using convolutional neural networks. As we describe below, instead of increasing the number of layers or hidden variables, the solution lies on considering linear combinations of RBMs. The new wave function allows one to explore a much larger space of solutions. In particular, one can use this construction to restore spatial symmetries [40, 9, 28, 29]. In addition, we propose implementing a projection method based on a Lanczos recursion using a “Krylov basis” of RBMs obtained by sequentially applying powers of the Hamiltonian operator.
The paper is organized as follows: In Sec.2.1 we describe the quantum many-body problem in the context of the Heisenberg model; in Sec.2.2 we summarize prior attempts to study this problem using NNQS; in Sec.3 we review the basic formalism, including the structure of neural network wave functions, how to incorporate the symmetries of the problem into the quantum many-body state, and the numerical training procedure to optimize it. In Sec.4 we present results of state-of-the-art calculations for the J1 − J2 Heisenberg model on the square lattice and compare to other numerical techniques. We finally close with a summary and conclusions.
2 The quantum many-body problem
2.1 Model
In the following, we will focus on quantum spin problems where the degrees of freedom σi can assume two possible values ±1/2 (or “up” and “down”). Similarly, one can think of them as generic two-level systems or “qubits”. In particular, we will benchmark our methods in the context of the spin 12 antiferromagnetic Heisenberg model with nearest and next nearest neighbor interactions, the so-called J1 − J2 model defined by the Hamiltonian:
Ĥ = J1 ∑ 〈ij〉 ~Si · ~Sj + J2 ∑ 〈〈ij〉〉 ~Si · ~Sj , (1)
where ~S = (Ŝx, Ŝy, Ŝz) are spin operators, the first term runs over nearest neighboring sites 〈ij〉 on a square lattice and the second term runs over next nearest pairs 〈〈ij〉〉 along the diagonals of the plaquettes. For convenience, in the following, we set J1 = 1 as the unit of energy. In this problem, the number of possible configurations grows as d = 2N . However, the ground state wave function lies on the sector with the same number of up and down spins, constraining our search to a smaller subset of states, albeit still exponentially large.
Without the J2 term, the problem can be numerically solved for hundreds of spins using quantum Monte Carlo (QMC) [38]. However, the method cannot be applied to problems with frustration since
it is noticeably affected by the infamous sign problem[24]. In our case, this is due to the presence of the J2 term that makes some transition probabilities ill-defined (negative). The ground states of this model are well established in two extreme cases: at small J2/J1 the system antiferromagnetically orders with wave vector q = (π, π); at large J2/J1 spins prefer columnar order q = (π, 0), (0, π), in which they aligned antiparallel in one direction, but ferromagnetically in the other. However, in the maximally frustrated regime J1 ∼ 0.5J2, the system does not display any apparent order and the nature of this spin liquid state remains controversial despite significant research efforts over the past three decades[3, 6, 11, 10, 35, 39, 41, 34, 37, 26, 20, 15, 21, 45].
Therefore, we choose this Hamiltonian for two reasons: (i) it realizes a quantum spin liquid in a parameter regime near J2 ∼ 0.5J1 and (ii) conventional Monte Carlo methods fail, making the model an ideal testing ground to benchmark new techniques. Variational Monte Carlo(VMC) provides a suitable alternative that can be scaled up to large two-dimensional systems without being affected by the sign problem. The quest for relatively simple yet powerful variational states has focused on neural network states, which have shown a great deal of promise. The complexity of the problem lies in the fact that many states with similar energy have very different physical properties. Therefore, an accurate representation of the ground state becomes the key to studying the nature of the quantum phase.
2.2 Related work
Before the concept of NNQS became a popular new alternative for simulating many-body systems, the most successful numerical techniques to treat the 2D J1 − J2 model have been the density matrix renormalization group (DMRG)[15], VMC based on a projected fermionic ansatz[20], and tensor product states[45]. Recently, some research has focused on improving the accuracy of NNQS by using deep neural networks such as CNN[9] and group-CNN[36]. The idea of applying quantum number projection to recover the symmetries of the wave function[40, 46] has proven to be effective in improving the performance of NNQS[9, 28, 29, 36]. In addition, other alternatives that enhance the quality of the approximations consist of combining NNQS with Gutzwiller-projected fermionic wave functions[12], or pair-product wave functions[30].
3 Method
3.1 Neural Network Wave Function with symmetry
An RBM wave function takes a spin configuration – a sequence of N values ±1/2 – and returns a complex coefficient corresponding to the wave function amplitude. In other words, it is a function ψ : {−1/2,+1/2}N → IC. This function is highly non-linear and is parametrized by biases ~a,~b and weights W as:
ψ(~σz,~a,~b,W ) = e ∑N i=1 aiσ z i M∏ i=1 2 cosh ( N∑ j=1 Wijσ z j + bi). (2)
In this expression, the number of “hidden variables” M is a tunable parameter. While RBMs have remained a simple example of a basic neural network for many decades, it was only recently that their potential as variational wave functions was appreciated [4]. In this case, unlike conventional machine learning applications, the biases and weights are complex valued.
It is possible to account for certain symmetries [27] of the problem directly within the internal mathematical structure of the RBM. In particular:
• Spin flip symmetry: If the z-component of the total magnetization is zero ( ∑ i σ z i = 0), the
global spin flip operation σzi → −σzi preserves this property. Notice that since cosh(x) is an even function, we can easily restore the global flip symmetry in RBM wave function by removing the “magnetic field” terms associated to biases ~a,~b in Eq.(2). Thus, the RBM wave function coefficients become:
ψs(~σ z,W ) = M∏ i=1 2 cosh ( N∑ j=1 Wijσ z j ). (3)
Notice that even though the computational cost of optimizing and evaluating observables with the symmetrized wave function has increased, the resulting state has a much larger expressivity than the original one, translating into a remarkable accuracy as we shall demonstrate. We should highlight here that the new states, by being linear combinations of RBMs, are no longer RBMs, and therefore allow one to explore a much larger space outside the original manifold defined by ψs, Eq.(2).
3.2 Wave Function Optimization
The goal of the calculation is to minimize the cost function defined by the expectation value of the energy:
Evar = 〈ψKL|H|ψKL〉 〈ψKL|ψKL〉
(7)
= ∑ ~σ P~σEloc(~σ), (8)
where the probability distribution is determined by the normalized wave function coefficients
P~σ = |〈~σ|ψKL〉|2∑ ~σ′ |〈~σ′|ψKL〉|2
(9)
and the local energy is given by
Eloc(~σ) = 〈~σ|H|ψKL〉 〈~σ|ψKL〉 . (10)
By formulating the problem in probabilistic terms, one can resort to Metropolis-Hastings Markov Chain Monte Carlo to evaluate the averages. The sampling over the spin configurations ~σ is carried
out by randomly flipping pairs of anti-aligned spins, and using von Neumann rejection according to a transition probability W = |〈~σnew|ψKL〉|2/|〈~σold|ψKL〉|2. The wave function optimization can be implemented by a variety of methods. Since the energy landscape is extremely complex, simple gradient descent tends to get trapped into metastable solutions. More sophisticated strategies are usually employed, such as natural gradient descent or “stochastic reconfiguration”[42]. In contrast to the "standard" natural gradient descent method, the Fubini-study metric[33], which is the complex-valued form of Fisher information, is used to measure the "distance" between wave functions |ψ〉 and |φ〉:
γ(ψ, φ) = arccos √ 〈ψ|φ〉〈φ|ψ〉 〈ψ|ψ〉〈φ|φ〉 . (11)
The procedure to update variational parameters using natural gradient descent is well described in literature[4, 8, 30], and we hereby summarize it. The optimization is done by minimizing the Fubini-study metric between |e−dτHψ(θ)〉 and ψ(θ + δθ)〉 where dτ is a small step in imaginary time and can be viewed as learning rate in the training of neural network. The optimal choice for δθ is given by the solution of a system of equations:∑
k′
[ 〈O†kOk′〉 − 〈O † k〉〈Ok′〉 ] δθk′ = −dτ [ 〈O†kH〉 − 〈O † k〉〈H〉 ] , (12)
where the log derivative Ok = 1ψ(θ) ∂ψ(θ) ∂θ and 〈· · · 〉 means an average over samples. We update the parameters by θk = θ′k + δθk and repeat until convergence is reached.
3.3 Lanczos recursion
Using the symmetrized RBM wave function combined with the stochastic reconfiguration method, a good approximation of the ground state can be achieved after hundreds or thousands of iterations. However, due to the limited representation power of neural network wave functions, and the errors stemming from the Monte Carlo sampling and the optimization method, the true ground state of the Hamiltonian H can still differ significantly from the variational one. One possible way to increase the expressivity of the wave function is to introduce additional hidden variables or layers. However, an alternative to systematically improve the neural network wave function consists of applying a modified Lanczos recursion [14, 2, 20]. The procedure begins with a (normalized) trial wave function ψ0, which in our case is an initial guess for the ground state, ψ0 = ψKL . Then, a new state ψ1 is constructed by applying the Hamiltonian on ψ0 and subtracting the projection over ψ0 in order to preserve orthogonality:
ψ1 = Hψ0 − 〈H〉ψ0
(〈H2〉 − 〈H〉2)1/2 (13)
where 〈Hn〉 = 〈ψ0|Hn|ψ0〉. Notice that ψ1 is orthogonal to ψ0 and also normalized. In the usual Lanczos method, this recursion can be continued such that a new complete orthogonal basis can be constructed. In this representation, the Hamiltonian will have a tri-diagonal form. However, we only use ψ0 and ψ1 as our basis, and thus the Hamiltonian will be a 2× 2 matrix.
The eigenvector ψ̃0 that corresponds to the lowest eigenvalue Ẽ0 of this matrix will be a better approximation of the true ground state of Hamiltonian compared to ψ0. The lowest eigenvalue and corresponding eigenvector are
Ẽ0 = 〈H〉+ vα, (14)
ψ̃0 = 1
(1 + α2)1/2 ψ0 +
α
(1 + α2)1/2 ψ1, (15)
where
v = (〈H2〉 − 〈H〉2)1/2 (16)
r = 〈H3〉 − 3〈H2〉〈H〉+ 2〈H〉3
2(〈H2〉 − 〈H〉2)3/2 (17)
α = r − (r2 + 1)1/2, (18)
The eigenvector ψ̃0, being a linear combination of ψ0 and ψ1, is the improved neural network wave function, and Ẽ0 is the new improved variational energy. By considering ψ̃0 as the new trial wave function replacing ψ0, this method can be repeated to further improve the wave function. The neural network wave function obtained during the Lanczos recursion can be generalized as
|Ψp〉 = (1 + p∑ i=1 βiH i)|ψ0〉, (19)
where p is the maximum number of Lanczos steps, and βi is the wave function coefficient corresponding to Hi|ψ0〉. In this form, one can easily identify the wave function as an expansion on a Krylov basis.
In practice, taking into account the fact that the computational complexity increases dramatically with increasing p, only a few steps can be calculated for a large quantum many-body system. In this study, and for illustration purposes, we shall consider only the p = 1 or p = 2 cases.
3.4 Implementation details
In this work, we focus on the 2D J1 − J2 Heisenberg model on L× L square lattices where L is an even number. For the neural network, we use ψKL in all simulations and consider three different values for the number of hidden variables M consisting of 2, 2.5, and 3 times of the number of spins N = L2 in the system. The parameters W in the RBM are initialized to be randomly chosen random numbers with a uniform distribution between [−0.01, 0.01] for both real and imaginary parts. The ground state can belong to theA1 orB1 irreducible representations of the C4v point group, depending on the value of J2/J1. In our calculations we consider both cases near the transition between the spin liquid phase and the columnar phase with K = (π, 0), i. e. for J2/J1 ≥ 0.5. Due to a large number of parameters and the numerical noise in sampling, we implement the conjugate gradient method to solve the system of equations, Eq.(12). To stabilize the method, we introduce a ridge parameter λ = 10−6. For each training step, we collect 10000 samples to evaluate averages as mentioned in Sec. 3.2 including the variational energy and log derivatives. Since the adjacent states in the Markov chain are highly correlated, the number of the skipped states between samples Nskip is chosen according to this relation Nskip = 5 × 1.0/r, where r is the acceptance rate in the previous training step. The typical value for Nskip is from 30 to 100. As for evaluation, we collect 2 × 105 samples to calculate the average and statistical error. The learning rate used in the training ranges from 5× 10−4 to 3× 10−2. Once we observe that variational energy is not decreasing, a smaller learning rate(half of the previous one) is used instead. For large L, to save training time, we initialize the parameters W in ψKL using the parameters trained by means of the cheaper wave function ψs. All simulations are performed using Eigen and Intel MKL on Intel E5-2680v4 and AMD Rome 7702 CPU nodes. Source code will be available at: https://github.com/hwchen2017/Lanczos_Neural_Network_Quantum_State.
4 Results
4.1 Comparison with Exact Diagonalization
We benchmark the accuracy of the neural network wave functions for the ground state mainly on the 6× 6 and 10× 10 square lattices with periodic boundary conditions. For the 6× 6 lattice, the J1 − J2 is numerically soluble by enumerating the possible spin configurations, constructing the Hamiltonian matrix, and explicitly solving the eigenvalue problem [39]. Once the ground state (or its variational approximation) is obtained, the wave function can be used to calculate other physical quantities besides the energy. Here, for illustration, we compute the spin structure factor, that defines the sublattice magnetization squared for a finite system
S(q) = 1
N2 ∑ i,j 〈σzi σzj 〉eiq·(ri−rj), (20)
where the wave vector q determines spatial structure of the magnetic order. Notice that in all the tables shown here, we display the results times a factor N for readability.
We first focus on the symmetrized RBM wave function without the Lanczos optimization, and we start by comparing the ground state energy for a 6×6 lattice as a function of J2/J1, as shown in Fig.1. In this figure we calculate the relative error as |Enn − Eexact|/|Eexact| using the exact ground state energy from Ref. [39]. We also include the relative error of the ground state energy obtained using a convolutional neural network wave function from Ref.[9]. While the relative error of the CNNs are in order of 10−3, our RBM wave function achieves an accuracy of 10−4 in the frustrated regime. Even comparing other recent works using CNNs[43, 36, 7], our RBM wave function still outperforms the CNN wave function. Besides the ground state energy, the spin structure factor computed from optimized wave functions agree very well with the exact solution as shown in Fig. 2, where the differences are smaller than the symbol size, and in data table 2.
4.2 Comparison with state-of-the-art quantum Monte Carlo
For larger lattices, the problem is numerically intractable. However, as mentioned before, it can be solved using QMC[38] for J2 = 0. Thus, for the case without frustration we can compare with QMC results for several different lattice sizes. From table 3, we can see that even on the 10× 10 lattice the energy difference is about 3× 10−5, showing the extraordinary accuracy of our RBM wave function.
For the frustrated case, J2 6= 0, we compare to other methods, such as those obtained with CNN wave functions as well as results using the density matrix renormalization groump(DMRG) method with SU(2) symmetry from Ref. [15] and VMC using an Abrikosov-fermion mean field with a Z2 gauge structure from Ref. [20]. From the data tables 4 and 5, we observe that our RBM wave function outperform the CNN wave function again in the entire range of J2/J1. In the frustrated regime, comparisons with VMC and DMRG using all the data available in literature demonstrate that the RBM wave functions still yield competitive ground state energies except at J2/J1 = 0.55 where DMRG yields a lower value.
4.3 Lanczos optimization
Since the most interesting regime lies around the maximally frustrated point J2 ∼ 0.5J1, we choose 3 different values of J2/J1 using 6× 6 and 10× 10 lattices and perform a few Lanczos steps to further
improve the ground state energy. From data table 2 and 4, we see that the Lanczos steps are very effective regardless of the system size. Remarkably, by performing p = 1 Lanczos steps, we obtain better energy at J2/J1 = 0.55 for the 10× 10 lattice that improves significantly the best available data using state-of-the-art DMRG, as shown in data table 4. Besides, compared to the "RBM+PP" results[30], which is generally considered as the start-of-the-art NNQS method, we obtain slightly lower variational energy at J2 = 0.5, 0.55 on a 6× 6 lattice while for a 10× 10 lattice at J2 = 0.5, their variational energy is 8 × 10−4 lower than ours. Additionally, with the help of the Lanczos recursion, a better estimate of the energy can be obtained by carrying out a variance extrapolation as illustrated in Ref. [2, 20]. We also try to improve the estimation of spin structure factor using Lanczos, but the Monte Carlo sampling error makes the improvement not obvious.
5 Conclusion
Neural network wave functions hold a great deal of promise due to their ability to compress complex quantum many-body states within a relatively simple mathematical structure that, owing to its nonlinearity, can encode an exponentially large amount of information with polynomial resources. In particular, RBM wave functions, initially deemed too simple, can be used as building blocks for systematically improved wave functions. These improved states obey the internal symmetries of the model and the point group symmetries of the lattice. In addition, they may contain contributions from the state living in a “tangent space” to the original RBM manifold. These tangent vectors are spanned in terms of powers of the Hamiltonian and form a Krylov basis.
We have demonstrated that we can achieve state-of-the-art accuracy that improves previous results using convolutional neural networks with a minimal amount of extra computational cost compared to simple RBMs. The combination of Lanczos and symmetrization offer an effective solution to problems previously beyond the reach of the most powerful numerical techniques and provide the
means to bypass the sign problem. These ideas can seamlessly translate to other areas of research ranging from materials science to quantum chemistry. Besides, our variational solution can be adopted to calculate the excitation spectrum of a quantum many-body system[17, 18], providing valuable information that can be directly compared to experiments.
Limitations The computational cost of a single training step scales as O(Nsample × MN2), where the number of hidden variables M is usually proportional to the system size N. Thus, the computation time may become a bottleneck for applications to larger lattices. In particular, we find that even though the results for the energy are very accurate, correlation functions have relatively larger errors. This behavior might be improved by using variational forms with greater representation power. In addition, the Lanczos step procedure is not size consistent, which means that the energy improvement with respect to the original wave function |ψ0〉 vanishes for fixed p and N → ∞. Also, the Lanczos correction becomes smaller and smaller as p increases. Nevertheless, a sizable improvement is obtained even for rather large clusters with 100 sites, as shown in data table 4.
Negative Societal Impact Our work presents the theoretical simulation of the quantum many-body problems without any foreseeable negative societal impacts.
Acknowledgments and Disclosure of Funding
AEF and HC acknowledge the National Science Foundation for support under grant No. DMR2120501. DH is partially supported by a Northeastern Tier 1 grant. | 1. What is the focus and contribution of the paper on neural quantum states?
2. What are the strengths of the proposed approach, particularly in terms of Lanczos step improvements?
3. What are the weaknesses of the paper, especially regarding its limitations in scalability?
4. Do you have any concerns or suggestions regarding the comparisons with other works, particularly the current best-known result on the model?
5. What are the effects of symmetries in improving the bare RBM results, and how do they compare to imposing translation symmetries in the weights? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper applies Lanczos step improvements over a shallow neural quantum state based on RBMs. Symmetries are also exploited successfully, thanks to projections in the relevant symmetry sectors. The authors report results that are highly competitive with the state of the art on a challenge benchmark (J1-J2 model in 2d).
Strengths And Weaknesses
To my knowledge, this is the first application of the Lanczos-step style improvements to neural quantum states. This idea has been applied in the past to other variational states, and carries a cost that is exponential with the number of Lanczos steps.
The technique reported here allows one to improve significantly on previously reported "pure" neural quantum state results. The main limitation of the approach is the known lack of "size extensive" scaling of the Lanczos iterations, which makes them ineffective for larger systems approaching the thermodynamic limit. However, there could be cases of finite clusters where the improvement offered is still important.
My main criticism is the lack of comparison with the current best-known result on the model (more in the questions), reported in Nomura and Imada, PHYSICAL REVIEW X 11, 031034 (2021).
Questions
While the authors report on a comparison on the 6x6 model, it is crucial to understand what happens on the larger 10x10 model, if they want to claim new SOTA results on the J1-J2 model. The paper by Nomura and Imada in Table 2 reports the relevant energy to compare to. How does the Lanczos-step approach compare?
The effect of symmetries seems absolutely crucial for improving the bare RBM results. What happens if one imposes translation symmetries in the weights instead of summing over the group explicitly?
Limitations
yes |
NIPS | Title
High Probability Complexity Bounds for Line Search Based on Stochastic Oracles
Abstract
We consider a line-search method for continuous optimization under a stochastic setting where the function values and gradients are available only through inexact probabilistic zeroth and first-order oracles. These oracles capture multiple standard settings including expected loss minimization and zeroth-order optimization. Moreover, our framework is very general and allows the function and gradient estimates to be biased. The proposed algorithm is simple to describe, easy to implement, and uses these oracles in a similar way as the standard deterministic line search uses exact function and gradient values. Under fairly general conditions on the oracles, we derive a high probability tail bound on the iteration complexity of the algorithm when applied to non-convex smooth functions. These results are stronger than those for other existing stochastic line search methods and apply in more general settings.
1 Introduction
In this paper, we analyze a line-search method when applied to the problem of minimizing an unconstrained, differentiable, possibly non-convex function φ : R^n → R. The goal is to find an ε-stationary point for φ; that is, a point x with ‖∇φ(x)‖ ≤ ε. We make the standard assumption that ∇φ is L-Lipschitz, but the knowledge of L is not assumed by the algorithm. We consider a setting where neither the function value φ(x) nor the gradient ∇φ(x) are directly computable. Instead, the algorithm is given black-box access to the following probabilistic oracles:
• Probabilistic zeroth order oracle. Given a point x, the oracle computes f(x, ξ), a (random) estimate of the function value φ(x). ξ is a random variable (whose distribution may depend on x), with probability space (Ω, F_Ω, P). We assume the absolute value of the estimation error e(x) = |f(x, ξ(x)) − φ(x)| (we omit the dependence on ξ for brevity) to be a “one-sided” sub-exponential-like random variable¹ with parameters (ν, b), whose mean is bounded by some constant ε_f > 0. Specifically,

E_ξ[e(x)] ≤ ε_f and E_ξ[exp{λ(e(x) − E[e(x)])}] ≤ exp(λ²ν²/2), ∀λ ∈ [0, 1/b].   (1)
• Probabilistic first order oracle. Given a point x and a constant α > 0, the oracle computes g(x, ξ'), a (random) estimate of the gradient ∇φ(x), such that

P_ξ'(‖g(x, ξ') − ∇φ(x)‖ ≤ max{ε_g, κα‖g(x, ξ')‖}) ≥ 1 − δ.   (2)

Here, ξ' is a random variable (whose distribution may depend on x), with associated probability space (Ω', F_Ω', P'). (1 − δ) ∈ (0, 1) is the probability, intrinsic to the oracle, that the gradient estimate is “sufficiently accurate” with respect to ε_g, κ, and α. Lastly, κ, ε_g ≥ 0 are constants, intrinsic to the oracle, which represent the precision the oracle can achieve. Note that ε_g allows the gradient estimate to be bounded away from the true gradient by a constant distance.

¹This is a weaker requirement than assuming e(x) to be sub-exponential, as one only needs to guard against the possibility of the error e(x) being too large.
Remark We will analyze a line search algorithm that relies on these two oracles. In the zeroth order oracle, the constants ε_f and (ν, b) are intrinsic. In the first order oracle, κ, ε_g, and δ are intrinsic. These values cannot be controlled. On the other hand, α is an input to the first order oracle that can be chosen by the algorithm. In fact, as we shall see in Section 3, α will be the step size of the line search method.
These two oracles cover several settings, including
• Standard supervised learning, where gradients and values of the loss function are computed based on a mini-batch. Here, the random variables ξ and ξ' in the zeroth and first order oracles represent the random set of samples in the mini-batch.
• Zeroth order optimization, where gradients are estimated via randomized finite differences using (possibly noisy) function values. This arises in policy gradients in reinforcement learning, as is used in [SHC+16] and analyzed in [BCCS21].
• A variety of other settings, where the gradients and function estimates may be biased stochastic estimates of the true gradients and function values.
The constants in the oracles determine the precision of the function and gradient estimates. These constants will also dictate the accuracy achievable by the line search method we analyze. Specifically, if ε_f = 0 and ε_g = 0, then the algorithm converges to a stationary point. Otherwise, a precise lower bound is derived for the smallest ‖∇φ(x)‖ the algorithm can achieve, in terms of the constants in the oracles. It is worth noting that the oracles can be biased. Indeed, the zeroth order oracle can incur arbitrarily large error, as long as it satisfies (1). Moreover, the first order oracle only requires g(x, ξ'(x)) to be a “sufficiently accurate” estimate of ∇φ(x) with probability 1 − δ. Thus g(x, ξ'(x)) can be an arbitrary vector with probability δ, so it in principle can have an arbitrarily large bias.
The line-search algorithm is given in Section 3. It is a modification of the standard Armijo-based line search algorithm [NW06], with access to the zeroth and first order oracles. The two small modifications are: 1) The Armijo condition is relaxed by an additive constant 2ε_f, to account for the inexact function evaluations, and 2) The first order oracle is called in each iteration, and a new search direction is generated whenever the step size changes. This allows the method to progress to near-stationary points without assuming the gradient estimates (e.g. the mini-batch gradients in supervised learning) to be Lipschitz continuous.
Our framework and analysis are based on results in [CS17], [GRVZ18] and [BCS19]. However, there are several key differences. In [CS17] and [BCS19] the line search has access to stronger oracles, with ε_g = 0 and |f(x, ξ) − φ(x)| ≤ ε_f deterministically. Under these assumptions, [CS17] and [BCS19] derive an expected iteration complexity bound. In this paper, we provide a high probability tail bound on the iteration complexity, showing that the algorithm is very likely to succeed in a number of iterations on the order of its expected iteration complexity. Moreover, we consider more general oracles, with arbitrary ε_g and possibly unbounded |f(x, ξ) − φ(x)|. Thus, we significantly strengthen the results in [CS17] and [BCS19]. To the best of our knowledge, the only other high probability complexity bound of this kind is derived in [GRVZ18] for a trust region algorithm under the assumption ε_g = 0 and |f(x, ξ) − φ(x)| = 0 deterministically, which are much stronger oracles.
Stochastic line search has also been analyzed in [PS20] and [VML+19]. In [PS20] the assumptions on |f(x, ξ) − φ(x)| are different. On the one hand, they allow for more general distributions than sub-exponential. On the other hand, it is assumed that |f(x, ξ) − φ(x)| can be made arbitrarily small with some fixed probability. An expected iteration complexity bound is then derived for arbitrarily small ε. In contrast, we do not assume this, and analyze the iteration complexity of reaching an ε-stationary point, with ε lower-bounded by a function of the constants in the oracles. Moreover, our analysis and results are much simpler than those in [PS20] and we derive an iteration complexity bound in high probability, not just in expectation.
In [VML+19], the traditional line search is analyzed for empirical loss minimization, where the function oracles are implemented using a random mini-batch of a fixed size. The mini-batch remains
fixed during backtracking until a standard Armijo condition is satisfied. Thus the search direction remains the same until a step is taken. While good computational performance has been reported in [VML+19], its theoretical analysis requires several very restrictive assumptions, especially for nonconvex functions. Also, they bound the expected sum of squared gradient norms, while we bound the iteration complexity with high probability. We note that using similar techniques as in [BCS19], our analysis can be extended to the convex and strongly convex cases.
In summary, we present an analysis of an adaptive line search algorithm under very general conditions on the gradient and function estimates. The results not only subsume most results in the prior literature, but also substantially extend the framework. Moreover, high probability tail bounds on iteration complexity are derived, instead of only expected iteration complexity.
2 Oracles
In this section, we discuss a couple of settings, and show how they are captured by our framework. All norms used are 2-norm.
2.1 Expected loss minimization
Let us first discuss how the oracle definitions apply to expected loss minimization. In this setting, φ(x) = E_{d∼D}[ℓ(x, d)]. Here, x is the model parameters, d is a data sample following distribution D, and ℓ(x, d) is the loss when the model parameterized by x is evaluated on data point d.
In this case, the zeroth and first order oracles can be as follows, where S is a mini-batch sampled from D:
f(x, S) = (1/|S|) Σ_{d∈S} ℓ(x, d),   g(x, S) = (1/|S|) Σ_{d∈S} ∇_x ℓ(x, d).   (3)
In general, S can be chosen to depend on x. We now show how our zeroth and first order oracle conditions are satisfied by selecting an appropriate sample size |S|.
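Before turning to the sample-size analysis, a minimal sketch of the mini-batch oracles (3) is given below; it is only an illustration, with loss and loss_grad standing in for the per-sample loss ℓ(x, d) and its gradient, and data for the samples drawn from D.

import numpy as np

def zeroth_order_oracle(x, data, loss, batch_size, rng):
    """f(x, S): average per-sample loss over a random mini-batch S."""
    S = rng.choice(len(data), size=batch_size, replace=False)
    return np.mean([loss(x, data[j]) for j in S])

def first_order_oracle(x, data, loss_grad, batch_size, rng):
    """g(x, S): average per-sample gradient over a random mini-batch S."""
    S = rng.choice(len(data), size=batch_size, replace=False)
    return np.mean([loss_grad(x, data[j]) for j in S], axis=0)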
Proposition 1. Let ê(x, d) := ℓ(x, d) − φ(x) be a (ν̂(x), b̂(x))-subexponential random variable and Var_{d∼D}[ℓ(x, d)] ≤ ε̂(x)², for some ν̂(x), b̂(x), ε̂(x). Let e(x, S) = |f(x, S) − φ(x)| and N = |S|; then

E_S[e(x, S)] ≤ (1/√N) ε̂(x) and e(x, S) is (ν(x), b(x))-subexponential,

with ν(x) = b(x) = 8e² max{ν̂(x)/√N, b̂(x)}.

In the case when the support of D is bounded, ℓ is Lipschitz, and the set of x we consider is bounded, the assumption of Proposition 1 is satisfied. Thus, f(x, S) is a zeroth order oracle with ε_f = sup_x (1/√N) ε̂(x), ν = sup_x ν(x), and b = sup_x b(x), and ε_f can be made arbitrarily small by taking a large enough sample.
Under standard assumptions on ∇ℓ(x, d), for instance, suppose Assumption 4.3 in [BCN18] holds: for some M_c, M_v ≥ 0 and for all x,

E_{d∼D} ‖∇ℓ(x, d) − ∇φ(x)‖² ≤ M_c + M_v ‖∇φ(x)‖²,   (4)

one can show g(x, S) is a first order oracle with a large enough sample size.

Proposition 2. Let g = g(x, S). Assuming E_{d∼D} ∇ℓ(x, d) = ∇φ(x), then

|S| ≥ ((M_c + M_v ‖∇φ(x)‖²)/δ) · min{1/ε_g², (1 + κα)²/(κ²α² ‖∇φ(x)‖²)}

implies P(‖g − ∇φ(x)‖ ≤ max{ε_g, κα‖g‖}) ≥ 1 − δ.

This bound implies a looser bound of:

|S| ≥ max{2M_c/(δ ε_g²), 2M_v(1 + κα)²/(δ κ²α²)}.
Remark Let us discuss what is required from the minibatch size in this setting. Unlike standard SGD, the minibatch size is chosen dynamically. When convergence to a stationary point is desired, ε_g has to be zero, and the gradient estimate g_k tends to zero. Thus, ‖g − ∇φ(x)‖ also has to tend to zero. If M_c = 0, then fixing the mini-batch size to be at least M_v(1 + κα)²/(δκ²α²) provides a valid first order oracle. Thus, unless α tends to zero, the minibatch size remains bounded from below. This is similar to the interpolation condition used in e.g. [BM11, VML+19]. On the other hand, when M_c > 0, the minibatch size has to grow in order to approach a stationary point. This is similar to dynamic minibatch size selection, discussed, e.g. in [BCNW12]. The difference between our results and those in [BCNW12] is that our batch bound is implementable and guarantees convergence, while the one in [BCNW12] is implementable only as a heuristic. As our computational results show, however, a fixed and small mini-batch size appears to work very well, perhaps because M_c is small.
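For concreteness, a small helper evaluating the looser sample-size bound above (as reconstructed here, so the exact constants should be checked against the original statement) might look as follows; Mc, Mv, kappa, alpha, delta and eps_g are user-supplied problem and oracle constants.

import math

def first_order_batch_size(Mc, Mv, kappa, alpha, delta, eps_g):
    """Mini-batch size sufficient for the first order oracle condition (2),
    via the looser bound |S| >= max{2*Mc/(delta*eps_g^2),
    2*Mv*(1 + kappa*alpha)^2/(delta*(kappa*alpha)^2)}."""
    bias_term = 0.0 if Mc == 0 else 2.0 * Mc / (delta * eps_g ** 2)
    variance_term = 2.0 * Mv * (1.0 + kappa * alpha) ** 2 / (delta * (kappa * alpha) ** 2)
    return math.ceil(max(bias_term, variance_term, 1))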
2.2 Randomized finite difference gradient approximation
Gradient estimates based on randomized finite differences using noisy function evaluations have become popular for zeroth order optimization, particularly for model-free policy optimization in reinforcement learning [SHC+16, FGKM18].
In this setting, the zeroth order oracle is assumed to be available, but with a more strict assumption that e(x) ≤ ε_f deterministically. The first order oracle is obtained using the zeroth order oracle as follows. Let U = {u_i : i = 1, . . . , |U|} be a set of random vectors, with each vector following some “nice” distribution (e.g. standard Gaussian). Then,

g(x, U) = Σ_{i=1}^{|U|} ((f(x + σu_i, ξ) − f(x, ξ))/(σ|U|)) u_i,   (5)

where σ is the sampling radius. The proposition below shows that (5) with a large enough sample size gives a first order oracle.

Proposition 3. Let g = g(x, U), and fix ε_g = 2(√n Lσ + √n ε_f/σ), where n is the dimension of x.
where is the sampling radius. The proposition below shows that (5) with a large enough sample size gives a first order oracle. Proposition 3. Let g = g(x,U), and fix ✏g = 2 ⇣ p nL + p n✏f ⌘ where n is the dimension of x.
Then
|U|
3 4L 2 2n(n+ 2)(n+ 4) + 12✏2f 2 n+ 18n kr (x)k 2
min
8 >< >: 4 ✏2g , 1 ⇣
↵ 1+↵ kr (x)k ✏g 2
⌘2
9 >=
>;
implies P (kg r (x)k max{✏g,↵kgk}) 1 . Note that in the setting, ✏g is a fixed bias dependent on , and cannot be made arbitrarily small.
Remark Note that ε_g defines the neighborhood of convergence for any method that relies on this oracle, and the smallest value for ε_g is achieved by setting σ = O(√ε_f). Let us now discuss the minibatch size. Under the assumption that ε_f is small, (3/4)L²σ²n(n + 2)(n + 4) + 12ε_f²n/σ² is also small. Thus when ‖∇φ(x)‖ is larger than or on the order of ε_g, the sample set size remains constant and is proportional to n. In [NS17] a constant step size stochastic gradient descent is applied using sample size |U| = 1, thus each step requires about n fewer samples. However, the step size has to be roughly n times smaller to account for the variance of the stochastic oracles based on one sample, thus the overall complexity is the same.
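A minimal sketch of the estimator (5) is given below, assuming a noisy function handle f_noisy that stands in for the zeroth order oracle; sigma is the sampling radius and num_samples plays the role of |U|.

import numpy as np

def fd_gradient_estimate(f_noisy, x, sigma, num_samples, rng):
    """Randomized finite-difference gradient estimate, as in (5)."""
    n = x.shape[0]
    g = np.zeros(n)
    f0 = f_noisy(x)                            # f(x, xi), reused for all directions
    for _ in range(num_samples):
        u = rng.normal(size=n)                 # Gaussian direction
        g += (f_noisy(x + sigma * u) - f0) / (sigma * num_samples) * u
    return g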
Other finite difference approximation schemes and their centralized versions (see [BCCS21] for a reference on these) also give suitable first order oracles.
2.3 Other settings
Our oracle framework also fits a variety of other settings, as we allow the randomness ξ and ξ' of the zeroth and first order oracles to be dependent on x and on each other, possibly following different distributions. Moreover, the oracles allow the function and gradient estimates to be arbitrarily bad occasionally, which allows them to capture settings where measurements are corrupted with outliers. The exact derivations of these oracles in these different settings are subjects of future exploration.
3 Algorithm and notation
We consider the line search algorithm proposed by [BCS19], which is an extension of the line search algorithm in [CS17] to the setting of inexact function estimates. In both algorithms, a random gradient estimate is used to attempt a step. We name the algorithm “ALOE”, which stands for Adaptive Line-search with Oracle Estimations. Compared to [CS17], the key modification of the algorithm is the relaxation of the Armijo condition by an additive constant 2ε_f. The difference between this algorithm and the more standard line search methods such as the ones in [NW06] and [VML+19] is that the gradient estimate is recomputed in each iteration, whether or not a step is accepted. Note that all input parameters are user controlled, except for ε_f. In fact, the input ε_f here is only required to be some upper bound for E[e(x)], not necessarily the tightest one. Moreover, our computational results in Section 6 indicate that estimating ε_f is relatively easy in practice, and the algorithm is robust to the choice of ε_f.
Algorithm 1 Adaptive Line-search with Oracle Estimations (ALOE)
Input: Parameter ε_f of the zeroth order oracle, starting point x_0, max step size α_max > 0, initial step size α_0 < α_max, constants θ, γ ∈ (0, 1).
1: for k = 0, 1, 2, . . . do
2: Compute gradient approximation g_k: generate the direction g_k = g(x_k, ξ'_k) using the probabilistic first order oracle, with α = α_k.
3: Check sufficient decrease: let x_k^+ = x_k − α_k g_k. Generate f(x_k, ξ_k) and f(x_k^+, ξ_k^+) using the probabilistic zeroth order oracle. Check the modified Armijo condition:
f(x_k^+, ξ_k^+) ≤ f(x_k, ξ_k) − α_k θ ‖g_k‖² + 2ε_f.   (6)
4: Successful step: if (6) holds, then set x_{k+1} ← x_k^+ and α_{k+1} ← min{α_max, γ^{-1} α_k}.
5: Unsuccessful step: otherwise, set x_{k+1} ← x_k and α_{k+1} ← γ α_k.
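A compact sketch of Algorithm 1 in code (an illustration only, assuming NumPy-array iterates and two oracle callables f_oracle(x) and g_oracle(x, alpha) as placeholders for the probabilistic oracles) could read:

def aloe(x0, f_oracle, g_oracle, eps_f, alpha0=1.0, alpha_max=10.0,
         gamma=0.8, theta=0.2, max_iter=1000):
    """Sketch of ALOE (Algorithm 1): line search with probabilistic oracles."""
    x, alpha = x0, alpha0
    for _ in range(max_iter):
        g = g_oracle(x, alpha)                   # step 2: new direction every iteration
        x_plus = x - alpha * g
        # step 3: modified Armijo condition (6), relaxed by 2*eps_f
        if f_oracle(x_plus) <= f_oracle(x) - alpha * theta * (g @ g) + 2 * eps_f:
            x, alpha = x_plus, min(alpha_max, alpha / gamma)   # step 4: successful step
        else:
            alpha = gamma * alpha                               # step 5: unsuccessful step
    return x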
In this paper we impose the following standard assumption on φ(x).
Assumption 1. ∇φ is L-Lipschitz smooth and φ is bounded from below by some constant φ*.
Let e_k = |f(x_k, ξ_k) − φ(x_k)| and e_k^+ = |f(x_k^+, ξ_k^+) − φ(x_k^+)|. Recall that e_k and e_k^+ satisfy (1) from the definition of the zeroth order oracle. We will consider two cases: 1) e_k and e_k^+ are deterministically bounded by ε_f, in which case ν and b in (1) can be chosen to be 0, and 2) ν and b are not necessarily zero, in which case we assume the random variables e_k + e_k^+ are all independent.
Assumption 2. Either e_0, e_0^+, e_1, e_1^+, . . . are all deterministically bounded by ε_f, or the random variables {e_0 + e_0^+, e_1 + e_1^+, . . .} are independent.
Definition 1 (Definition of a true iteration). We say an iteration k is true if ‖g_k − ∇φ(x_k)‖ ≤ max{ε_g, κα_k‖g_k‖} and e_k + e_k^+ ≤ 2ε_f, and false otherwise.
Let M_k denote the triple {Ξ_k, Ξ_k^+, Ξ'_k}, whose realizations are {ξ_k, ξ_k^+, ξ'_k}. Algorithm 1 generates a stochastic process adapted to the filtration {F_k : k ≥ 0}, where F_k = σ(M_0, M_1, . . . , M_k). We define the following random variables, measurable with respect to F_k.
• I_k := 1{iteration k is true}.
• Θ_k := 1{iteration k is successful}.
• T_ε := min{k : ‖∇φ(x_k)‖ ≤ ε}, the iteration complexity of the algorithm for reaching ε-stationarity.
• Z_k := φ(x_k) − φ* ≥ 0, a measure of progress.
It is easy to see that T_ε is a stopping time of the stochastic process with respect to F_k. We derive a high probability tail bound for T_ε, and obtain an iteration complexity bound in high probability for Algorithm 1 when applied to non-convex functions. The final result is summarized below with simplified constants. The full statement is in Theorem 4.

Theorem 1 (Main convergence result with simplified constants). Suppose Assumptions 1 and 2 hold, and (for simplicity) θ = 1/2, α_max ≥ 1 and κ ≥ max{L, 1}. Then, for any

ε ≥ 4 max{ε_g, (1 + κα_max)√((L + 2κ)ε_f)},

we have the following bound on iteration complexity:

For any s ≥ 0, p = 1 − δ − exp(−min{u²/(2ν²), u/(2b)}), p̂ ∈ (1/2 + (4ε_f + s)/(Cε²), p), and t ≥ R/(p̂ − 1/2 − (4ε_f + s)/(Cε²)),

P(T_ε ≤ t) ≥ 1 − exp(−((p − p̂)²/(2p²)) t) − exp(−min{s²t/(8ν²), st/(4b)}).

Here, u = inf_x{ε_f − E[e(x)]}, R = (φ(x_0) − φ*)/(Cε²) − ln((L + 2κ)α_0)/ln γ, and C = 1/(2(L + 2κ)(1 + κα_max)²).

Remark This theorem essentially shows that the iteration complexity of Algorithm 1 is bounded by a quantity on the order of

(1/(p − 1/2 − (4ε_f + s)/(Cε²))) · ((φ(x_0) − φ*)/(Cε²) − ln((L + 2κ)α_0)/ln γ)

with overwhelmingly high probability. If p = 1 and ε_f = 0, the above quantity essentially recovers the iteration complexity of the deterministic line search algorithm.
4 Analysis framework for the high probability bound
In this section we present the main ideas underlying the theoretical analysis. We first state general conditions on the stochastic process (Assumption 3), from which we are able to derive a high probability tail bound on the iteration complexity. They are listed as assumptions here, and in the next section, we will show that they indeed hold for Algorithm 1 when applied to non-convex smooth functions φ.

Assumption 3 (Properties of the stochastic process). There exist a constant ᾱ > 0 and a nondecreasing function h : R → R, which satisfies h(α) > 0 for any α > 0, such that for any realization of the algorithm, the following hold for all k < T_ε:
(i) h(ᾱ) > 8ε_f.
(ii) P(I_k = 1 | F_{k−1}) ≥ p for all k, with some p ∈ (1/2 + 4ε_f/h(ᾱ), 1].
(iii) If I_kΘ_k = 1 then Z_{k+1} ≤ Z_k − h(α_k) + 4ε_f. (True, successful iterations make progress.)
(iv) If α_k ≤ ᾱ and I_k = 1 then Θ_k = 1.
(v) Z_{k+1} ≤ Z_k + 2ε_f + e_k + e_k^+ for all k.
The following key lemma follows easily from Assumption 3 (ii) and the Azuma-Hoeffding inequality [Azu67] applied to the submartingale Σ_{k=0}^{t−1} I_k − pt.

Lemma 1. For all 1 ≤ t ≤ T_ε, and any p̂ ∈ [0, p), we have

P(Σ_{k=0}^{t−1} I_k ≤ p̂t) ≤ exp(−((p − p̂)²/(2p²)) t).
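As a quick sanity check of Lemma 1, one can simulate i.i.d. true/false indicators with success probability p and compare the empirical tail with the stated bound (a toy illustration only; in the algorithm the I_k are merely conditionally lower-bounded by p rather than i.i.d.):

import numpy as np

p, p_hat, t, trials = 0.9, 0.7, 200, 100_000
rng = np.random.default_rng(1)
sums = rng.binomial(t, p, size=trials)           # sum of t i.i.d. indicators per trial
empirical = np.mean(sums <= p_hat * t)           # empirical P(sum <= p_hat * t)
bound = np.exp(-((p - p_hat) ** 2) / (2 * p ** 2) * t)
print(empirical, bound)                          # empirical tail should not exceed the bound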
We now define another indicator variable that will be used in the analysis.
Definition 2 (Large step). For all integers k ≥ 0, define the random variable U_k as follows:
U_k = 1 if min{α_k, α_{k+1}} ≥ ᾱ, and U_k = 0 if max{α_k, α_{k+1}} ≤ ᾱ.
We will say that step k is a large step if Uk = 1. Otherwise, step k is a small step.
By the dynamics of the process, every step is either a large step or a small step, but not both.
Our analysis will rely on the following key observation: By Assumption 3, if iteration k has U_kΘ_kI_k = 1, then Z_k gets reduced by at least h(ᾱ) − 4ε_f > 0. We call such an iteration a “good” iteration, because it makes progress towards optimality by at least a fixed amount. On the other hand, on any other iteration k, Z_k can increase by at most 2ε_f + e_k + e_k^+. The idea of the analysis is to show that with high probability, the progress made by the good iterations dominates the damage caused by the other iterations. The crux of the proof is to show that with high probability, a large enough constant fraction of the iterations are good (up to another additive constant).
The following key lemma is the engine of the analysis. It shows that if the stopping time has not been reached and a large enough number of iterations are true, then there must be a large number of good iterations.

Lemma 2. For any positive integer t and any p̂ ∈ (1/2, 1], we have

P(T_ε > t and Σ_{k=0}^{t−1} I_k ≥ p̂t and Σ_{k=0}^{t−1} U_kΘ_kI_k < (p̂ − 1/2)t − d/2) = 0,

where d = max{(ln α_0 − ln ᾱ)/(−ln γ), 0}.
4.1 Bounded noise case
In [CS17] and [BCS19], the expected iteration complexity of the line search algorithm is bounded under the assumptions that e(x) = 0 and e(x) ≤ ε_f for all x, respectively. We now derive a high probability tail bound on the iteration complexity under the assumption that e(x) ≤ ε_f for all x. Note that we do not need to assume that the errors e(x) are independent in the bounded noise setting. Thus, this analysis applies even when the noise is deterministic or adversarial.

Under Assumption 3 in the bounded noise setting, we have Z_{k+1} ≤ Z_k + 4ε_f in all iterations, and Z_{k+1} ≤ Z_k − h(ᾱ) + 4ε_f in good iterations. Putting this together with Lemma 2 and the other conditions in Assumption 3, we obtain the following theorem.

Theorem 2 (Iteration complexity in the bounded noise setting). Suppose Assumption 3 holds, and e_k, e_k^+ ≤ ε_f at every iteration. Then for any p̂ ∈ (1/2 + 4ε_f/h(ᾱ), p) and t ≥ R/(p̂ − 1/2 − 4ε_f/h(ᾱ)), we have

P(T_ε ≤ t) ≥ 1 − exp(−((p − p̂)²/(2p²)) t),

where R = Z_0/h(ᾱ) + d/2 and d = max{(ln α_0 − ln ᾱ)/(−ln γ), 0}.
4.2 General sub-exponential noise case
We now present a high probability bound for the iteration complexity with general sub-exponential noise in the zeroth order oracle. The result is very similar to that of Theorem 2. The main difference from the bounded noise analysis is that instead of bounding the “damage” caused on a per-iteration basis, we bound the sum of all such damages over all iterations. The fact that the noises are subexponential and independent allows us to apply Bernstein’s inequality to obtain an upper bound on this sum that holds with high probability.

Theorem 3 (Iteration complexity in the sub-exponential noise setting). Suppose Assumptions 2 and 3 hold. Then for any s ≥ 0, p̂ ∈ (1/2 + (4ε_f + s)/h(ᾱ), p), and t ≥ R/(p̂ − 1/2 − (4ε_f + s)/h(ᾱ)), we have

P(T_ε ≤ t) ≥ 1 − exp(−((p − p̂)²/(2p²)) t) − exp(−min{s²t/(8ν²), st/(4b)}),

where R = Z_0/h(ᾱ) + d/2 and d = max{(ln α_0 − ln ᾱ)/(−ln γ), 0}.
5 Final iteration complexity of the line search algorithm
In the previous section, we presented high probability tail bounds on the iteration complexity, assuming Assumption 3 holds. We now verify that Assumption 3 indeed holds for Algorithm 1 when applied to smooth functions. Together with the results in Section 4, this allows us to derive an explicit high-probability bound on the iteration complexity.
As noted earlier, when either ε_f or ε_g are not zero, Algorithm 1 does not converge to a stationary point, but converges to a neighborhood where ‖∇φ(x)‖ ≤ ε, with ε bounded from below in terms of ε_f or ε_g. The specific relationship is as follows.

Inequality 1 (Lower bound on ε).

ε > max{ ε_g/η, max{1 + κα_max, 1/(1 − η)} · √( 4ε_f/(θ(p − 1/2)) · max{ (0.5L + κ)/(1 − θ), L(1 − η)/(2(1 − 2η − θ(1 − η))) } ) },

for some η ∈ (0, (1 − θ)/(2 − θ)).

Here η can be any value in the interval. p = 1 − δ in the bounded noise setting, and p = 1 − δ − exp(−min{u²/(2ν²), u/(2b)}) otherwise, with u = inf_x{ε_f − E[e(x)]}.
Proposition 4 (Assumption 3 holds for Algorithm 1). If Inequality 1 and Assumptions 1 and 2 hold, then Assumption 3 holds for Algorithm 1 with the following p, ᾱ and h(α):
1. p = 1 − δ when the noise is bounded by ε_f, and p = 1 − δ − exp(−min{u²/(2ν²), u/(2b)}) otherwise. Here u = inf_x{ε_f − E[e(x)]}.
2. ᾱ = min{(1 − θ)/(0.5L + κ), 2(1 − 2η − θ(1 − η))/(L(1 − η))}.
3. h(α) = min{θε²α/(1 + κα_max)², θα(1 − η)²ε²}.
Applying Theorem 3 now gives the explicit complexity bound for Algorithm 1.
Theorem 4. Suppose Inequality 1 on ε is satisfied for some η ∈ (0, (1 − θ)/(2 − θ)), and Assumptions 1 and 2 hold. Then we have the following bound on the iteration complexity: for any s ≥ 0, p̂ ∈ (1/2 + (4ε_f + s)/(Cε²), p), and t ≥ R/(p̂ − 1/2 − (4ε_f + s)/(Cε²)),

P(T_ε ≤ t) ≥ 1 − exp(−((p − p̂)²/(2p²)) t) − exp(−min{s²t/(8ν²), st/(4b)}).

Here, R = (φ(x_0) − φ*)/(Cε²) + max{(ln α_0 − ln ᾱ)/(−ln γ), 0}, C = min{1/(1 + κα_max)², (1 − η)²} ᾱθ, with p and ᾱ as defined in Proposition 4.
Remark Inequality 1 ensures there exists some p̂ ∈ (1/2 + (4ε_f + s)/(Cε²), p) for some s > 0. The above theorem is for the general sub-exponential noise setting. In the bounded noise special case, we have s = 0, and the last term exp(−min{s²t/(8ν²), st/(4b)}) in the probability is not present.
6 Experiments
In this section, we illustrate that the proposed stochastic algorithm ALOE can be at least as efficient in practice as the line search in [VML+19], and much more efficient than full gradient line search. From the experiments, we show that estimating ε_f is not difficult, and taking mini-batches of a fixed size indeed provides good zeroth and first order oracles in practice.
For illustration, we first conduct experiments on all the datasets for binary classification with 150 to 5000 data points from the Penn Machine Learning Benchmarks repository (PMLB) [RLLC+21]. In total, there are 64 such datasets. Each binary classification problem is formulated as a logistic
regression problem with an RBF kernel (with the kernel parameter set to 1). All experiments were conducted on a 2020 MacBook Pro with an M1 chip and 16GB of memory.
We compare the following three algorithms, and they are implemented as follows.
• ALOE. The zeroth and first order oracles are implemented using the same mini-batch of a fixed size within each iteration. Batch sizes are taken to be 128. We estimate ε_f at the beginning of every epoch (i.e. every K iterations, where K equals the total number of data samples divided by 128), by computing 15 times the empirical standard deviation of 30 zeroth order oracle calls with batch size 128 at the current point (a sketch of this estimate appears after this list). We found in practice the algorithm is quite robust to how ε_f is chosen. The relevant plots are in Appendix F. The parameters we used are γ = 0.8, θ = 0.2, α_0 = 1 and α_max = 10.
• SLS. The SLS algorithm (also called “SGD + Armijo”) proposed in [VML+19] differs from ALOE in that ε_f = 0 and that the same mini-batch is used while backtracking until the Armijo condition is satisfied. We implemented the algorithm using mini-batch size 128 and the parameters suggested in their paper. We tried various parameter combinations for SLS and found the performance of the suggested parameters to work best.
• Full gradient line search. The full gradient line search algorithm is implemented using the entire dataset for function and gradient evaluations on each iteration. We take ε_f = 0, and the other parameters are the same as used in ALOE. For fair comparison in our experiments, we allow full gradient line search to make the same number of passes over each dataset as ALOE.
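As referenced in the ALOE bullet above, the ε_f estimate can be obtained, for example, with the following sketch, where f_oracle(x, batch_size) is a placeholder for the mini-batch zeroth order oracle:

import numpy as np

def estimate_eps_f(f_oracle, x, batch_size=128, num_calls=30, scale=15.0):
    """Estimate eps_f as a multiple of the empirical standard deviation of
    repeated zeroth order oracle calls at the current point."""
    values = np.array([f_oracle(x, batch_size) for _ in range(num_calls)])
    return scale * values.std(ddof=1)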
We conducted 5 trials for each dataset and ran each algorithm with initial points taken randomly from a standard Gaussian distribution. In Figure 1 we compare the overall performance of the three algorithms in the following way. For each dataset and algorithm, the average best value is defined as the average of the minimum training loss attained over 5 different trials. For each dataset we record the difference between the average best values achieved by SLS vs. ALOE, and plot the resulting 64 numbers as a histogram. The same is done for full gradient line search vs. ALOE. See Figure 1. Under this metric, ALOE achieves better training loss than SLS algorithm in 62 out of 64 datasets, and is always better than the full gradient line search.
Figure 2 illustrates the decay of training losses using these three algorithms for three datasets. In many cases ALOE decreases the training loss more rapidly than the other two algorithms. Testing set accuracy comparisons are also carried out, using random 80 : 20 splits of datasets, as shown in Figure 3. Test accuracy is defined as the proportion of data points in the testing set classified correctly. The results show that ALOE is competitive in terms of test accuracy as well. More performance and test accuracy plots for different datasets, models and loss functions are in Appendix.
7 Final Remarks
We conclude the paper with a brief overview of our theoretical results with respect to those in [VML+19]. The stochastic line search in [VML+19] is proposed specifically for empirical risk minimization, and the zeroth and first order oracles are implemented using a mini-batch of a fixed size. The same mini-batch is used for all consecutive unsuccessful iterations. This guarantees that a successful iteration is eventually achieved for the Armijo condition with ε_f = 0, under the assumption
that for every mini-batch, g(x, ξ') is Lipschitz continuous. The convergence analysis then assumes that M_c = 0 in (4) (strong growth condition) and, in the case when φ is not convex, the step size parameter is bounded above by 1/(LM_v). Thus, the method itself and its convergence are not better than those of a stochastic gradient descent with a fixed step size bounded by 1/(LM_v) [BCN18]. It is also assumed that the step size is reset to a fixed value at the start of each iteration, which is impractical. Good computational results are reported in [VML+19] for a heuristic version of the algorithm where the restrictions of the step size are removed.
In this paper we analyzed Algorithm 1 under virtually no restriction on the step size parameter. For the sake of simplicity of analysis, we assume the step size parameter is reduced and increased by the same multiplicative factor. This can be relaxed to some degree. We also do not assume that g(x, ξ') is Lipschitz continuous; we only impose this condition on φ. The cost of relaxing all these assumptions is the use of ε_f. For simplicity of the analysis, ε_f is assumed to be fixed throughout the algorithm. In practice, it can be re-estimated regularly. In many applications, ε_f tends to get smaller as the algorithm progresses towards optimality. Our experiments show that estimating ε_f is easy and works well in practice. Moreover, one can use much smaller values for ε_f than theory dictates.
8 Acknowledgments
This work was partially supported by NSF Grants TRIPODS 17-40796, NSF Grant CCF 2008434 and DARPA Lagrange award HR-001117S0039. Miaolan Xie was partially supported by a PhD Fellowship provided by MunchRe. Billy Jin was partially supported by NSERC fellowship PGSD3532673-2019.
The authors are grateful to the anonymous referees for their reviews that helped us improve the paper. | 1. What is the focus and contribution of the paper regarding unconstrained smooth optimization?
2. What are the strengths of the proposed gradient descent with line search method?
3. What are the weaknesses or concerns regarding the numerical experiment and theoretical assumptions?
4. How does the reviewer assess the clarity and presentation of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
Consider unconstrained smooth optimization with probabilistic zeroth- and first-order oracles. This paper proposes a new gradient descent with line search method and provides a high-probability upper bound of the iteration complexity. The authors highlight that their assumptions on the noisy zeroth- and first-order oracles are weaker than those in literature.
Review
I like the idea of querying new noisy gradients even when the Armijo condition is not met in the proposed line search algorithm. I feel this is perhaps the natural approach in the stochastic setting.
Below are some comments.
The numerical experiment does not exactly follow the theory: In the numerical experiment, the mini-batch sizes are fixed, first- and zero-th order oracles use the same minibatch and
γ
dec
≠
1
/
γ
inc
. Why is the theory not followed? What is the empirical result when the theory is strictly followed?
It is assumed that the the error of the probabilistic first-order oracle is bounded from above by the norm of the noisy gradient (scaled by a constant) with high probability. This assumption seems to suggest that the mini-batch size cannot be a constant but should increase when the iterates approach a stationary point. Is this increasing mini-batch size necessary?
The authors highlight that unlike in existing literature, they do not assume the noisy gradient to be Lipschitz. When is this relaxation of the assumption significant?
Line 63--88 addresses related existing results, arguing existing results are restrictive and do not provide high-probability guarantees. It would be good if the authors can clarify the key idea that enables them to make the breakthrough in this paper.
Line 293: Why is resetting the step size in each iteration is "impractical"?
The presentation is clear. Below are some comments on the presentation:
Line 85: Saying the existing assumptions are "very restrictive" is vague. In what sense are they restrictive?
Line 152--156: The definitions of
I
k
,
Θ
k
, and
Z
k
are not used in Section 3. I think their appearance should be postponed to Section 4.
Line 240: The name of the proposed algorithm should be appear in the second last section. |
NIPS | Title
High Probability Complexity Bounds for Line Search Based on Stochastic Oracles
Abstract
We consider a line-search method for continuous optimization under a stochastic setting where the function values and gradients are available only through inexact probabilistic zeroth and first-order oracles. These oracles capture multiple standard settings including expected loss minimization and zeroth-order optimization. Moreover, our framework is very general and allows the function and gradient estimates to be biased. The proposed algorithm is simple to describe, easy to implement, and uses these oracles in a similar way as the standard deterministic line search uses exact function and gradient values. Under fairly general conditions on the oracles, we derive a high probability tail bound on the iteration complexity of the algorithm when applied to non-convex smooth functions. These results are stronger than those for other existing stochastic line search methods and apply in more general settings.
1 Introduction
In this paper, we analyze a line-search method when applied to the problem of minimizing an unconstrained, differentiable, possibly non-convex function : Rn ! R. The goal is to find a "-stationary point for ; that is, a point x with kr (x)k ". We make the standard assumption that r is LLipschitz, but the knowledge of L is not assumed by the algorithm. We consider a setting where neither the function value (x) nor the gradient r (x) are directly computable. Instead, the algorithm is given black-box access to the following probabilistic oracles:
• Probabilistic zeroth order oracle. Given a point x, the oracle computes f(x, ⇠), a (random) estimate of the function value (x). ⇠ is a random variable (whose distribution may depend on x), with probability space (⌦,F⌦, P ). We assume the absolute value of the estimation error e(x) = |f(x, ⇠(x)) (x)| (we omit the dependence on ⇠ for brevity) to be a “one-sided” sub-exponential-like random variable1 with parameters (⌫, b), whose mean is bounded by some constant ✏f > 0. Specifically,
E⇠ [e(x)] ✏f and E⇠ [exp{ (e(x) E[e(x)])}] exp ✓ 2⌫2
2
◆ , 8 2 0, 1
b
. (1)
• Probabilistic first order oracle. Given a point x and a constant ↵ > 0, the oracle computes g(x, ⇠0), a (random) estimate of the gradient r (x), such that
P⇠0 (kg(x, ⇠0) r (x)k max{✏g,↵kg(x, ⇠0)k}) 1 . (2) Here, ⇠0 is a random variable (whose distribution may depend on x), with associated probability space (⌦0,F⌦0 , P 0). (1 ) 2 (0, 1) is the probability, intrinsic to the oracle, that the
1This is a weaker requirement than assuming e(x) to be sub-exponential, as one only needs to guard against the possibility of the error e(x) being too large.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
gradient estimate is “sufficiently accurate” with respect to ✏g,, and ↵. Lastly, , ✏g 0 are constants, intrinsic to the oracle, which represent the precision the oracle can achieve. Note that ✏g allows the gradient estimate to be bounded away from the true gradient by a constant distance.
Remark We will analyze a line search algorithm that relies on these two oracles. In the zeroth order oracle, the constants ✏f and (⌫, b) are intrinsic. In the first order oracle, , ✏g , and are intrinsic. These values cannot be controlled. On the other hand, ↵ is an input to the first order oracle that can be chosen by the algorithm. In fact, as we shall see in Section 3, ↵ will be the step size of the line search method.
These two oracles cover several settings, including
• Standard supervised learning, where gradients and values of the loss function are computed based on a mini-batch. Here, the random variables ⇠ and ⇠0 in the zeroth and first order oracles represent the random set of samples in the mini-batch.
• Zeroth order optimization, where gradients are estimated via randomized finite differences using (possibly noisy) function values. This arises in policy gradients in reinforcement learning, as is used in [SHC+16] and analyzed in [BCCS21].
• A variety of other settings, where the gradients and function estimates may be biased stochastic estimates of the true gradients and function values.
The constants in the oracles determine the precision of the function and gradient estimates. These constants will also dictate the accuracy achievable by the line search method we analyze. Specifically, if ✏f = 0 and ✏g = 0, then the algorithm converges to a stationary point. Otherwise, a precise lower bound is derived for the smallest kr (x)k the algorithm can achieve, in terms of the constants in the oracles. It is worth noting that the oracles can be biased. Indeed, the zeroth order oracle can incur arbitrarily large error, as long as it satisfies 1. Moreover, the first order oracle only requires g(x, ⇠0(x)) to be a “sufficiently accurate” estimate ofr (x) with probability 1 . Thus g(x, ⇠0(x)) can be an arbitrary vector with probability , so it in principle can have an arbitrarily large bias.
The line-search algorithm is given in Section 3. It is a modification of the standard Armijo-based line search algorithm [NW06], with access to the zeroth and first order oracles. The two small modifications are: 1) The Armijo condition is relaxed by an additive constant 2✏f , to account for the inexact function evaluations, and 2) The first order oracle is called in each iteration, and a new search direction is generated whenever the step size changes. This allows the method to progress to near-stationary points without assuming the gradient estimates (e.g. the mini-batch gradients in supervised learning) to be Lipschitz continuous.
Our framework and analysis are based on results in [CS17], [GRVZ18] and [BCS19]. However, there are several key differences. In [CS17] and [BCS19] the line search has access to stronger oracles, with ✏g = 0 and |f(x, ⇠) (x)| ✏f deterministically. Under these assumptions, [CS17] and [BCS19] derive an expected iteration complexity bound. In this paper, we provide a high probability tail bound on the iteration complexity, showing that the algorithm is very likely to succeed in a number of iterations on the order of its expected iteration complexity. Moreover, we consider more general oracles, with arbitrary ✏g and possibly unbounded |f(x, ⇠) (x)|. Thus, we significantly strengthen the results in [CS17] and [BCS19]. To the best of our knowledge, the only other high probability complexity bound of this kind is derived in [GRVZ18] for a trust region algorithm under the assumption ✏g = 0 and |f(x, ⇠) (x)|= 0 deterministically, which are much stronger oracles.
Stochastic line search has also been analyzed in [PS20] and [VML+19]. In [PS20] the assumptions on |f(x, ⇠) (x)| are different. On the one hand, they allow for more general distributions than sub-exponential. On the other hand, it is assumed that |f(x, ⇠) (x)| can be made arbitrarily small with some fixed probability. An expected iteration complexity bound is then derived for arbitrarily small ". In contrast, we do not assume this, and analyze the iteration complexity of reaching an "-stationary point, with " lower-bounded by a function of the constants in the oracles. Moreover, our analysis and results are much simpler than those in [PS20] and we derive an iteration complexity bound in high probability, not just in expectation.
In [VML+19], the traditional line search is analyzed for empirical loss minimization, where the function oracles are implemented using a random mini-batch of a fixed size. The mini-batch remains
fixed during backtracking until a standard Armijo condition is satisfied. Thus the search direction remains the same until a step is taken. While good computational performance has been reported in [VML+19], its theoretical analysis requires several very restrictive assumptions, especially for nonconvex functions. Also, they bound the expected sum of squared gradient norms, while we bound the iteration complexity with high probability. We note that using similar techniques as in [BCS19], our analysis can be extended to the convex and strongly convex cases.
In summary, we present an analysis of an adaptive line search algorithm under very general conditions on the gradient and function estimates. The results not only subsume most results in the prior literature, but also substantially extend the framework. Moreover, high probability tail bounds on iteration complexity are derived, instead of only expected iteration complexity.
2 Oracles
In this section, we discuss a couple of settings, and show how they are captured by our framework. All norms used are 2-norm.
2.1 Expected loss minimization
Let us first discuss how the oracle definitions apply to expected loss minimization. In this setting, (x) = Ed⇠D[`(x, d)]. Here, x is the model parameters, d is a data sample following distribution D, and `(x, d) is the loss when the model parameterized by x is evaluated on data point d.
In this case, the zeroth and first order oracles can be as follows, where S is a mini-batch sampled from D:
f(x,S) = 1
|S|
X d2S `(x, d), g(x,S) = 1 |S| X d2S rx`(x, d). (3)
In general, S can be chosen to depend on x. We now show how our zeroth and first order oracle conditions are satisfied by selecting an appropriate sample size |S|.
Proposition 1. Let ê(x, d) := `(x, d) (x) be a (⌫̂(x), b̂(x))-subexponential random variable and Vard⇠D [`(x, d)] ✏̂(x)2, for some ⌫̂(x), b̂(x), ✏̂(x). Let e(x,S) = |f(x,S) (x)| and N = |S|, then
ES [e(x,S)] 1 p N ✏̂(x) and e(x,S) is (⌫(x), b(x))-subexponential,
with ⌫(x) = b(x) = 8e2 max n
⌫̂(x)p N , b̂(x)
o .
In the case when the support of D is bounded, ` is Lipschitz, and the set of x we consider is bounded, the assumption of Proposition 1 is satisfied. Thus, f(x,S) is a zeroth order oracle with ✏f = supx 1p N ✏̂(x), ⌫ = supx ⌫(x), and b = supx b(x), and ✏f can be made arbitrarily small by taking a large enough sample.
Under standard assumptions on r`(x, d), for instance, suppose Assumption 4.3 in [BCN18] holds: for some Mc,Mv 0 and for all x,
Ed⇠D kr`(x, d) r (x)k2 Mc +Mv kr (x)k2 , (4) one can show g(x,S) is a first order oracle with a large enough sample size. Proposition 2. Let g = g(x,S). Assuming Ed⇠Dr`(x, d) = r (x), then
|S| Mc +Mv kr (x)k
2
min
( 1
✏2g ,
(1 + ↵)2
2↵2 kr (x)k2
)
implies P (kg r (x)k max{✏g,↵kgk}) 1 .
This bound implies a looser bound of:
|S| max ( 2Mc ✏2g , 2Mv(1 + ↵) 2 2↵2 ) .
Remark Let us discuss what is required from the minibatch size in this setting. Unlike standard SGD, the minibatch size is chosen dynamically. When convergence to a stationary point is desired, ✏g has to be zero, and gradient estimate gk tends to zero. Thus, kg r (x)k also has to tend to zero. If Mc = 0, then fixing the mini-batch size to be at least Mv(1+↵) 2
2↵2 provides a valid first order oracle. Thus, unless ↵ tends to zero, the minibatch size remains bounded from below. This is similar to the interpolation condition used in e.g. [BM11, VML+19]. On the other hand, when Mc > 0, the minibatch size has to grow in order to approach a stationary point. This is similar to dynamic minibatch size selection, discussed, e.g. in [BCNW12]. The difference between our results and those in [BCNW12] is that our batch bound is implementable and guarantees convergence, while the one in [BCNW12] is implementable only as a heuristic. As our computational results show, however, a fixed and small mini-batch size appears to work very well, perhaps because Mc is small.
2.2 Randomized finite difference gradient approximation
Gradient estimates based on randomized finite differences using noisy function evaluations have become popular for zeroth order optimization, particularly for model-free policy optimization in reinforcement learning [SHC+16, FGKM18].
In this setting, the zeroth order oracle is assumed to be available, but with a more strict assumption that e(x) ✏f deterministically. The first order oracle is obtained using the zeroth order oracle as follows. Let U = {ui : i = 1, . . . , |U|} be a set of random vectors, with each vector following some “nice” distribution (e.g. standard Gaussian). Then,
g(x,U) =
|U|X
i=1
f(x+ ui, ⇠) f(x, ⇠)
|U| ui, (5)
where is the sampling radius. The proposition below shows that (5) with a large enough sample size gives a first order oracle. Proposition 3. Let g = g(x,U), and fix ✏g = 2 ⇣ p nL + p n✏f ⌘ where n is the dimension of x.
Then
|U|
3 4L 2 2n(n+ 2)(n+ 4) + 12✏2f 2 n+ 18n kr (x)k 2
min
8 >< >: 4 ✏2g , 1 ⇣
↵ 1+↵ kr (x)k ✏g 2
⌘2
9 >=
>;
implies P (kg r (x)k max{✏g,↵kgk}) 1 . Note that in the setting, ✏g is a fixed bias dependent on , and cannot be made arbitrarily small.
Remark Note that ✏g defines the neighborhood of convergence for any method that relies on this oracle, and the smallest value for ✏g is achieved by setting = O( p ✏f ). Let us now discuss the
minibatch size. Under the assumption that ✏f is small, 34L 2 2n(n+ 2)(n+ 4) + 12✏2f 2 is also small. Thus when kr (x)k is larger than or on the order of ✏g , then the sample set size remains constant and is proportional to n. In [NS17] a constant step size stochastic gradient descent is applied using sample size |U| = 1, thus each step requires about n fewer samples. However, the step size has to be roughly n times smaller to account for the variance of the stochastic oracles based on one sample, thus the overall complexity is the same.
Other finite difference approximation schemes and their centralized versions (see [BCCS21] for a reference on these) also give suitable first order oracles.
2.3 Other settings
Our oracle framework also fits a variety of other settings, as we allow the randomness ⇠ and ⇠0 of the zeroth and first order oracles to be dependent on x and on each other, possibly following different distributions. Moreover, the oracles allow the function and gradient estimations to be arbitrarily bad occasionally, which allows them to capture settings where measurements are corrupted with outliers. The exact derivations of these oracles in these different settings are subjects of future exploration.
3 Algorithm and notation
We consider the line search algorithm proposed by [BCS19], which is an extension of the line search algorithm in [CS17] to the setting of inexact function estimates. In both algorithms, a random gradient estimate is used to attempt a step. We name the algorithm “ALOE”, which stands for Adaptive Line-search with Oracle Estimations. Compared to [CS17], the key modification of the algorithm is the relaxation of the Armijo condition by an additive constant 2✏f . The difference between this algorithm and the more standard line search methods such as the ones in [NW06] and [VML+19] is that the gradient estimate is recomputed in each iteration, whether or not a step is accepted. Note that all input parameters are user controlled, except for ✏f . In fact, the input ✏f here is only required to be some upper bound for E[e(x)], not necessarily the tightest one. Moreover, our computational results in Section 6 indicate that estimating ✏f is relatively easy in practice, and the algorithm is robust to the choice of ✏f .
Algorithm 1 Adaptive Line-search with Oracle Estimations (ALOE) Input: Parameter ✏f of the zeroth order oracle, starting point x0, max step size ↵max > 0, initial step size ↵0 < ↵max, constants ✓, 2 (0, 1).
1: for k = 0, 1, 2, . . . do 2: Compute gradient approximation gk:
Generate the direction gk = g(xk, ⇠0k) using the probabilistic first order oracle, with ↵ = ↵k.
3: Check sufficient decrease: Let x+k = xk ↵kgk. Generate f(xk, ⇠k) and f(x + k , ⇠ + k ) using the probabilistic
zeroth order oracle. Check the modified Armijo condition:
f(x+k , ⇠ + k ) f(xk, ⇠k) ↵k✓ kgkk 2 + 2✏f . (6)
4: Successful step: If (6) holds, then set xk+1 x+k and ↵k+1 min{↵max,
1↵k}. 5: Unsuccessful step:
Otherwise, set xk+1 xk and ↵k+1 ↵k.
In this paper we impose the following standard assumption on (x). Assumption 1. r is L-Lipschitz smooth and is bounded from below by some constant ⇤.
Let ek = |f(xk, ⇠k) (xk)| and e+k = |f(x + k , ⇠ + k ) (x + k )|. Recall that ek and e + k satisfy (1) from the definition of the zeroth order oracle. We will consider two cases; 1) ek and e+k are deterministically bounded by ✏f , in which case ⌫ and b in (1) can be chosen to be 0, and 2) ⌫ and b are not necessarily zero, in which case we assume the random variables ek+ e+k are all independent.
Assumption 2. Either e0, e+0 , e1, e + 1 , . . . are all deterministically bounded by ✏f , or the random variables {e0 + e+0 , e1 + e + 1 , . . .} are independent. Definition 1 (Definition of a true iteration). We say an iteration k is true if kgk r (xk)k max{✏g,↵kkgkk} and ek + e+k 2✏f ,
and false otherwise.
Let Mk denotes the triple {⌅k,⌅+k ,⌅ 0 k}, whose realizations are {⇠k, ⇠ + k , ⇠ 0 k}. Algorithm 1 generates a stochastic process adapted to the filtration {Fk : k 0}, where Fk = (M0,M1, . . . ,Mk). We define the following random variables, measurable with respect to Fk.
• Ik := {iteration k is true}. • ⇥k := {iteration k is successful}. • T" := min{k : kr (xk)k "}, the iteration complexity of the algorithm for reaching "-stationarity.
• Zk := (xk) ⇤ 0, a measure of progress.
It is easy to see that T" is a stopping time of the stochastic process with respect to Fk. We derive a high probability tail bound for T✏, and obtain an iteration complexity bound in high probability for Algorithm 1 when applied to non-convex functions. The final result is summarized below with simplified constants. The full statement is in Theorem 4. Theorem 1 (Main convergence result with simplified constants). Suppose Assumptions 1 and 2 hold, and (for simplicity) ✓ = 12 , ↵max 1 and max{L, 1}. Then, for any
" 4max ⇢ ✏g, (1 + ↵max) q (L+ 2)✏f ,
we have the following bound on iteration complexity:
For any s 0, p = 1 e min{ u2 2⌫2 , u2b}, p̂ 2 ( 12 + 4✏f+s C"2 , p), and t
R
p̂ 12 4✏f+s
C"2
,
P (T" t) 1 exp ✓ (p p̂)2
2p2 t
◆ exp ✓ min ⇢ s2t
8⌫2 , st 4b
◆ .
Here, u = infx{✏f E[e(x)]}, R = (x0) ⇤ C"2 ln((L+2)↵0) ln , and C = 1 2(L+2)(1+↵max)2 .
Remark This theorem essentially shows that the iteration complexity of Algorithm 1 is bounded by a quantity on the order of
1
p 12 4✏f+s C"2
✓ (x0) ⇤
C"2
ln((L+ 2)↵0)
ln
◆
with overwhelmingly high probability. If p = 1 and ✏f = 0, the above quantity essentially recovers the iteration complexity of the deterministic line search algorithm.
4 Analysis framework for the high probability bound
In this section we present the main ideas underlying the theoretical analysis. We first state general conditions on the stochastic process (Assumption 3), from which we are able to derive a high probability tail bound on the iteration complexity. They are listed as assumptions here, and in the next section, we will show that they indeed hold for Algorithm 1 when applied to non-convex smooth functions . Assumption 3 (Properties of the stochastic process). There exist a constant ↵̄ > 0 and a nondecreasing function h : R ! R, which satisfies h(↵) > 0 for any ↵ > 0, such that for any realization of the algorithm, the following hold for all k < T":
(i) h(↵̄) > 8✏f .
(ii) P(Ik = 1 | Fk 1) p for all k, with some p 2 ( 12 + 4✏f h(↵̄) , 1].
(iii) If Ik⇥k = 1 then Zk+1 Zk h(↵k) + 4✏f . (True, successful iterations make progress.)
(iv) If ↵k ↵̄ and Ik = 1 then ⇥k = 1.
(v) Zk+1 Zk + 2✏f + ek + e+k for all k.
The following key lemma follows easily from Assumption 3 (ii) and the Azuma-Hoeffding inequality [Azu67] applied to the submartingale Pt 1 k=0 Ik pt.
Lemma 1. For all $1 \le t \le T_\varepsilon$ and any $\hat p \in [0, p)$, we have
\[ \mathbb{P}\left( \sum_{k=0}^{t-1} I_k \le \hat p t \right) \le \exp\left( -\frac{(p - \hat p)^2}{2p^2}\, t \right). \]
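As a quick sanity check of Lemma 1, the following sketch simulates independent Bernoulli($p$) indicators (a special case of the condition $\mathbb{P}(I_k = 1 \mid \mathcal{F}_{k-1}) \ge p$) and compares the empirical tail probability with the bound; the constants are arbitrary and the experiment is purely illustrative.

```python
import numpy as np

# Monte-Carlo check of the tail bound in Lemma 1 (illustrative only).
rng = np.random.default_rng(0)
p, p_hat, t, trials = 0.8, 0.6, 200, 100_000

counts = rng.binomial(t, p, size=trials)               # sum_{k < t} I_k in each trial
empirical = np.mean(counts <= p_hat * t)               # P( sum I_k <= p_hat * t )
bound = np.exp(-(p - p_hat) ** 2 / (2 * p ** 2) * t)   # right-hand side of Lemma 1
print(f"empirical tail = {empirical:.2e}, Lemma 1 bound = {bound:.2e}")
```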
We now define another indicator variable that will be used in the analysis.
Definition 2 (Large step). For all integers $k \ge 0$, define the random variable $U_k$ as follows:
\[ U_k = \begin{cases} 1, & \text{if } \min\{\alpha_k, \alpha_{k+1}\} \ge \bar\alpha, \\ 0, & \text{if } \max\{\alpha_k, \alpha_{k+1}\} \le \bar\alpha. \end{cases} \]
We will say that step $k$ is a large step if $U_k = 1$. Otherwise, step $k$ is a small step. By the dynamics of the process, every step is either a large step or a small step, but not both.
Our analysis will rely on the following key observation: by Assumption 3, if iteration $k$ has $U_k \Theta_k I_k = 1$, then $Z_k$ gets reduced by at least $h(\bar\alpha) - 4\epsilon_f > 0$. We call such an iteration a “good” iteration, because it makes progress towards optimality by at least a fixed amount. On the other hand, on any other iteration $k$, $Z_k$ can increase by at most $2\epsilon_f + e_k + e_k^+$. The idea of the analysis is to show that with high probability, the progress made by the good iterations dominates the damage caused by the other iterations. The crux of the proof is to show that with high probability, a large enough constant fraction of the iterations are good (up to another additive constant).
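This accounting can be made concrete with a back-of-the-envelope computation for the bounded-noise case: since $Z_k \ge 0$ throughout, the number $G$ of good iterations over $t$ steps must satisfy $Z_0 - G(h(\bar\alpha) - 4\epsilon_f) + (t - G)\,4\epsilon_f \ge 0$. The toy numbers below are hypothetical; Lemma 2 supplies the matching lower bound on $G$ that drives the contradiction.

```python
# Upper bound on the number of good iterations implied by Z_k >= 0 (toy numbers):
# Z_0 - G*(h_bar - 4*eps_f) + (t - G)*4*eps_f >= 0  =>  G <= (Z_0 + 4*eps_f*t) / h_bar.
eps_f, h_bar, Z0, t = 1e-3, 0.05, 10.0, 5000   # note h_bar > 8*eps_f, as in Assumption 3(i)
G_max = int((Z0 + 4 * eps_f * t) / h_bar)
print(f"over t = {t} iterations, at most {G_max} can be good")
```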
The following key lemma is the engine of the analysis. It shows that if the stopping time has not been reached and a large enough number of iterations are true, then there must be a large number of good iterations.

Lemma 2. For any positive integer $t$ and any $\hat p \in (\frac12, 1]$, we have
\[ \mathbb{P}\left( T_\varepsilon > t \ \text{ and } \ \sum_{k=0}^{t-1} I_k \ge \hat p t \ \text{ and } \ \sum_{k=0}^{t-1} U_k \Theta_k I_k < \Big(\hat p - \frac12\Big) t - \frac{d}{2} \right) = 0, \]
where $d = \max\left\{ \frac{\ln\alpha_0 - \ln\bar\alpha}{-\ln\gamma},\, 0 \right\}$.
4.1 Bounded noise case
In [CS17] and [BCS19], the expected iteration complexity of the line search algorithm is bounded under the assumptions that $e(x) = 0$ and $e(x) \le \epsilon_f$ for all $x$, respectively. We now derive a high probability tail bound on the iteration complexity under the assumption that $e(x) \le \epsilon_f$ for all $x$. Note that we do not need to assume that the errors $e(x)$ are independent in the bounded noise setting. Thus, this analysis applies even when the noise is deterministic or adversarial.
Under Assumption 3 in the bounded noise setting, we have $Z_{k+1} \le Z_k + 4\epsilon_f$ in all iterations, and $Z_{k+1} \le Z_k - h(\bar\alpha) + 4\epsilon_f$ in good iterations. Putting this together with Lemma 2 and the other conditions in Assumption 3, we obtain the following theorem.

Theorem 2 (Iteration complexity in the bounded noise setting). Suppose Assumption 3 holds, and $e_k, e_k^+ \le \epsilon_f$ at every iteration. Then for any $\hat p \in \big(\frac12 + \frac{4\epsilon_f}{h(\bar\alpha)},\, p\big)$ and
\[ t \ge \frac{R}{\hat p - \frac12 - \frac{4\epsilon_f}{h(\bar\alpha)}}, \]
we have
\[ \mathbb{P}(T_\varepsilon \le t) \ge 1 - \exp\left( -\frac{(p - \hat p)^2}{2p^2}\, t \right), \]
where $R = \frac{Z_0}{h(\bar\alpha)} + \frac{d}{2}$ and $d = \max\left\{ \frac{\ln\alpha_0 - \ln\bar\alpha}{-\ln\gamma},\, 0 \right\}$.
4.2 General sub-exponential noise case
We now present a high probability bound for the iteration complexity with general sub-exponential noise in the zeroth order oracle. The result is very similar to that of Theorem 2. The main difference from the bounded noise analysis is that instead of bounding the “damage” caused on a per-iteration basis, we bound the sum of all such damages over all iterations. The fact that the noises are sub-exponential and independent allows us to apply Bernstein’s inequality to obtain an upper bound on this sum that holds with high probability.

Theorem 3 (Iteration complexity in the sub-exponential noise setting). Suppose Assumptions 2 and 3 hold. Then for any $s \ge 0$, $\hat p \in \big(\frac12 + \frac{4\epsilon_f+s}{h(\bar\alpha)},\, p\big)$, and
\[ t \ge \frac{R}{\hat p - \frac12 - \frac{4\epsilon_f+s}{h(\bar\alpha)}}, \]
we have
\[ \mathbb{P}(T_\varepsilon \le t) \ge 1 - \exp\left( -\frac{(p - \hat p)^2}{2p^2}\, t \right) - e^{-\min\left\{ \frac{s^2 t}{8\nu^2},\, \frac{s t}{4b} \right\}}, \]
where $R = \frac{Z_0}{h(\bar\alpha)} + \frac{d}{2}$ and $d = \max\left\{ \frac{\ln\alpha_0 - \ln\bar\alpha}{-\ln\gamma},\, 0 \right\}$.
5 Final iteration complexity of the line search algorithm
In the previous section, we presented high probability tail bounds on the iteration complexity, assuming Assumption 3 holds. We now verify that Assumption 3 indeed holds for Algorithm 1 when applied to smooth functions. Together with the results in Section 4, this allows us to derive an explicit high-probability bound on the iteration complexity.
As noted earlier, when either $\epsilon_f$ or $\epsilon_g$ is not zero, Algorithm 1 does not converge to a stationary point, but converges to a neighborhood where $\|\nabla\phi(x)\| \le \varepsilon$, with $\varepsilon$ bounded from below in terms of $\epsilon_f$ and $\epsilon_g$. The specific relationship is as follows.

Inequality 1 (Lower bound on $\varepsilon$).
\[ \varepsilon > \max\left\{ \frac{\epsilon_g}{\eta},\ \max\left\{1+\alpha_{\max},\, \frac{1}{1-\eta}\right\} \cdot \sqrt{ \frac{4\epsilon_f}{\theta\big(p-\frac12\big)} \cdot \max\left\{ \frac{0.5L+\kappa}{1-\theta},\ \frac{L(1-\eta)}{2(1-2\eta-\theta(1-\eta))} \right\} } \right\}, \]
for some $\eta \in \big(0, \frac{1-\theta}{2-\theta}\big)$.

Here $\eta$ can be any value in the interval, $p = 1 - \delta$ in the bounded noise setting, and $p = 1 - \delta - \exp\big(-\min\{\frac{u^2}{2\nu^2}, \frac{u}{2b}\}\big)$ otherwise, with $u = \inf_x\{\epsilon_f - \mathbb{E}[e(x)]\}$.
Proposition 4 (Assumption 3 holds for Algorithm 1). If Inequality 1 and Assumptions 1 and 2 hold, then Assumption 3 holds for Algorithm 1 with the following $p$, $\bar\alpha$ and $h(\alpha)$:

1. $p = 1 - \delta$ when the noise is bounded by $\epsilon_f$, and $p = 1 - \delta - \exp\big(-\min\{\frac{u^2}{2\nu^2}, \frac{u}{2b}\}\big)$ otherwise. Here $u = \inf_x\{\epsilon_f - \mathbb{E}[e(x)]\}$.
2. $\bar\alpha = \min\left\{ \frac{1-\theta}{0.5L+\kappa},\ \frac{2(1-2\eta-\theta(1-\eta))}{L(1-\eta)} \right\}$.
3. $h(\alpha) = \min\left\{ \frac{\theta\varepsilon^2\alpha}{(1+\alpha_{\max})^2},\ \theta\alpha(1-\eta)^2\varepsilon^2 \right\}$.
Applying Theorem 3 now gives the explicit complexity bound for Algorithm 1.
Theorem 4. Suppose Inequality 1 on $\varepsilon$ is satisfied for some $\eta \in \big(0, \frac{1-\theta}{2-\theta}\big)$, and Assumptions 1 and 2 hold. Then we have the following bound on the iteration complexity: for any $s \ge 0$, $\hat p \in \big(\frac12 + \frac{4\epsilon_f+s}{C\varepsilon^2},\, p\big)$, and
\[ t \ge \frac{R}{\hat p - \frac12 - \frac{4\epsilon_f+s}{C\varepsilon^2}}, \]
we have
\[ \mathbb{P}(T_\varepsilon \le t) \ge 1 - \exp\left( -\frac{(p-\hat p)^2}{2p^2}\, t \right) - \exp\left( -\min\left\{ \frac{s^2 t}{8\nu^2},\ \frac{s t}{4b} \right\} \right). \]
Here, $R = \frac{\phi(x_0) - \phi^*}{C\varepsilon^2} + \max\left\{ \frac{\ln\alpha_0 - \ln\bar\alpha}{-\ln\gamma},\, 0 \right\}$ and $C = \min\left\{ \frac{1}{(1+\alpha_{\max})^2},\, (1-\eta)^2 \right\} \bar\alpha\,\theta$, with $p$ and $\bar\alpha$ as defined in Proposition 4.
Remark. Inequality 1 makes sure there exists some $\hat p \in \big(\frac12 + \frac{4\epsilon_f+s}{C\varepsilon^2},\, p\big)$ for some $s > 0$. The above theorem is for the general sub-exponential noise setting. In the bounded noise special case, we have $s = 0$, and the last term $\exp\big(-\min\{\frac{s^2 t}{8\nu^2}, \frac{s t}{4b}\}\big)$ in the probability is not present.
6 Experiments
In this section, we illustrate that the proposed stochastic algorithm ALOE can be at least as efficient in practice as the line search in [VML+19], and much more efficient than full gradient line search. The experiments also show that estimating $\epsilon_f$ is not difficult, and that taking mini-batches of a fixed size indeed provides good zeroth and first order oracles in practice.
For illustration, we first conduct experiments on all the datasets for binary classification with 150 to 5000 data points from the Penn Machine Learning Benchmarks repository (PMLB) [RLLC+21]. In total, there are 64 such datasets. Each binary classification problem is formulated as a logistic
regression problem with an RBF kernel (with kernel parameter set to 1). All experiments were conducted on a 2020 MacBook Pro with an M1 chip and 16GB of memory.
We compare the following three algorithms, implemented as follows.
• ALOE. The zeroth and first order oracles are implemented using the same mini-batch of a fixed size within each iteration. Batch sizes are taken to be 128. We estimate $\epsilon_f$ at the beginning of every epoch (i.e. every $K$ iterations, where $K$ equals the total number of data samples divided by 128), by computing 15 times the empirical standard deviation of 30 zeroth order oracle calls with batch size 128 at the current point (a sketch of this procedure is given after this list). We found in practice that the algorithm is quite robust to how $\epsilon_f$ is chosen; the relevant plots are in Appendix F. The parameters we used are $\gamma = 0.8$, $\theta = 0.2$, $\alpha_0 = 1$ and $\alpha_{\max} = 10$.
• SLS. The SLS algorithm (also called “SGD + Armijo”) proposed in [VML+19] differs from ALOE in that $\epsilon_f = 0$ and that the same mini-batch is used while backtracking until the Armijo condition is satisfied. We implemented the algorithm using mini-batch size 128 and the parameters suggested in their paper. We tried various parameter combinations for SLS and found the suggested parameters to work best.
• Full gradient line search. The full gradient line search algorithm is implemented using the entire dataset for function and gradient evaluations on each iteration. We take $\epsilon_f = 0$, and the other parameters are the same as in ALOE. For a fair comparison, we allow full gradient line search to make the same number of passes over each dataset as ALOE.
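The sketch below illustrates, in simplified form, what the ALOE implementation described above amounts to; it is a reconstruction for illustration only, not the authors' code. The callables `batch_loss(x, idx)` and `batch_grad(x, idx)`, which evaluate the mini-batch loss and gradient on the samples indexed by `idx`, are hypothetical placeholders for the user's model.

```python
import numpy as np

def estimate_eps_f(batch_loss, x, n_samples, rng, batch_size=128, n_draws=30, factor=15.0):
    """Estimate eps_f as 'factor' times the empirical standard deviation of 'n_draws'
    mini-batch loss evaluations at x (the recipe described in the ALOE bullet above)."""
    vals = [batch_loss(x, rng.choice(n_samples, batch_size, replace=False))
            for _ in range(n_draws)]
    return factor * np.std(vals)

def aloe_step(x, alpha, batch_loss, batch_grad, n_samples, eps_f, rng,
              batch_size=128, theta=0.2, gamma=0.8, alpha_max=10.0):
    """One ALOE iteration (simplified sketch): the zeroth and first order oracles share
    one freshly drawn mini-batch, and the Armijo condition is relaxed by 2*eps_f."""
    idx = rng.choice(n_samples, batch_size, replace=False)   # one mini-batch per iteration
    g = batch_grad(x, idx)                                   # new search direction
    x_trial = x - alpha * g
    if batch_loss(x_trial, idx) <= batch_loss(x, idx) - alpha * theta * (g @ g) + 2 * eps_f:
        return x_trial, min(alpha_max, alpha / gamma)        # success: expand the step size
    return x, gamma * alpha                                  # failure: shrink the step size
```

Note that, unlike SLS, a fresh mini-batch (and hence a fresh search direction) is drawn on every iteration, whether or not the previous step was accepted.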
We conducted 5 trials for each dataset and ran each algorithm with initial points taken randomly from a standard Gaussian distribution. In Figure 1 we compare the overall performance of the three algorithms in the following way. For each dataset and algorithm, the average best value is defined as the average of the minimum training loss attained over the 5 trials. For each dataset we record the difference between the average best values achieved by SLS vs. ALOE, and plot the resulting 64 numbers as a histogram; the same is done for full gradient line search vs. ALOE. Under this metric, ALOE achieves better training loss than the SLS algorithm in 62 out of 64 datasets, and is always better than the full gradient line search.
Figure 2 illustrates the decay of training losses using these three algorithms for three datasets. In many cases ALOE decreases the training loss more rapidly than the other two algorithms. Testing set accuracy comparisons are also carried out, using random 80:20 splits of the datasets, as shown in Figure 3. Test accuracy is defined as the proportion of data points in the testing set classified correctly. The results show that ALOE is competitive in terms of test accuracy as well. More performance and test accuracy plots for different datasets, models and loss functions are in the Appendix.
7 Final Remarks
We conclude the paper with a brief overview of our theoretical results with respect to those in [VML+19]. The stochastic line search in [VML+19] is proposed specifically for empirical risk minimization, and the zeroth and first order oracles are implemented using mini-batches of a fixed size. The same mini-batch is used for all consecutive unsuccessful iterations. This guarantees that a successful iteration is eventually achieved for the Armijo condition with $\epsilon_f = 0$, under the assumption that for every mini-batch, $g(x, \xi')$ is Lipschitz continuous. The convergence analysis then assumes that $M_c = 0$ in (4) (the strong growth condition) and, in the case when $\phi$ is not convex, that the step size parameter is bounded above by $\frac{1}{L M_v}$. Thus, the method itself and its convergence are not better than those of stochastic gradient descent with a fixed step size bounded by $\frac{1}{L M_v}$ [BCN18]. It is also assumed that the step size is reset to a fixed value at the start of each iteration, which is impractical. Good computational results are reported in [VML+19] for a heuristic version of the algorithm where the restrictions on the step size are removed.
In this paper we analyzed Algorithm 1 under virtually no restriction on the step size parameter. For the sake of simplicity of the analysis, we assume the step size parameter is reduced and increased by the same multiplicative factor; this can be relaxed to some degree. We also do not assume that $g(x, \xi')$ is Lipschitz continuous; we only impose this condition on $\phi$. The cost of relaxing all these assumptions is the use of $\epsilon_f$. For simplicity of the analysis, $\epsilon_f$ is assumed to be fixed throughout the algorithm. In practice, it can be re-estimated regularly. In many applications, $\epsilon_f$ tends to get smaller as the algorithm progresses towards optimality. Our experiments show that estimating $\epsilon_f$ is easy and works well in practice. Moreover, one can use much smaller values for $\epsilon_f$ than theory dictates.
8 Acknowledgments
This work was partially supported by NSF Grants TRIPODS 17-40796, NSF Grant CCF 2008434 and DARPA Lagrange award HR-001117S0039. Miaolan Xie was partially supported by a PhD Fellowship provided by MunchRe. Billy Jin was partially supported by NSERC fellowship PGSD3532673-2019.
The authors are grateful to the anonymous referees for their reviews that helped us improve the paper.

1. What is the focus of the paper regarding nonconvex smooth stochastic optimization?
2. What are the strengths of the proposed Adaptive Line-search with Oracle Estimations (ALOE)?
3. Do you have any concerns or questions about the paper's approach to modified Armijo line-search?
4. How does the paper address the issue of biased oracles in stochastic optimization?
5. Can the algorithm provide high probability complexity bounds without assuming unbiased oracle estimates?
6. Are there any limitations or trade-offs in the algorithm's performance when considering high accuracy scenarios?

Summary Of The Paper
The paper proposed Adaptive Line-search with Oracle Estimations (ALOE) for nonconvex smooth stochastic optimization; the algorithm is based on a modified Armijo condition. The algorithm allows the oracles to be biased and only requires the population function $\phi$ to be Lipschitz smooth, together with some other mild conditions on the stochastic process.
Review
The paper aims to solve a practical problem in nonconvex optimization, namely parameter selection, via a modified Armijo line search. The flow of the work is clear. Compared to the previous closely related work [VML+19], it extends the setting to the stochastic case with biased oracles, and provides a high probability complexity bound.
I have some confusions about this work; I hope the authors can add more discussion to address them:
In stochastic optimization, one generally assumes that the function/gradient estimates are unbiased with bounded variance, i.e. $\mathbb{E}\, f(x,\xi) = \phi(x)$, $\mathbb{E}\, g(x,\xi) = \nabla\phi(x)$, $\mathbb{E}\, \|g(x,\xi) - \nabla\phi(x)\|^2 \le \sigma^2$, instead of the norm form in Eq. (1) and (2). So it seems that the main convergence result cannot recover the standard unbiased line-search result (if any, maybe $O(\epsilon^{-4})$?), because $\epsilon_f = 0$ recovers the deterministic line-search result $O(\epsilon^{-2})$. Is that correct, and is there any solution?
Lower bound requirement on $\epsilon$. It seems that the setting in Eq. (1) and (2) should contain the common unbiased setting above ($\mathbb{E}\, f(x,\xi) = \phi(x)$ but $\mathbb{E}\, |f(x,\xi) - \phi(x)|$ can still be upper bounded), yet common stochastic optimization algorithms with unbiased oracles may not need the lower bound requirement on the accuracy $\epsilon$ (i.e. they converge for arbitrarily small $\epsilon$). Is that correct? Even though chasing high accuracy may not be that important in machine learning, as a theoretical paper in optimization, I am still wondering what the situation is for high accuracy.
In Assumption 2, requiring the function estimation errors $e_i, e_i^+$ to be deterministically bounded by $\epsilon_f$ seems much stronger than the boundedness in expectation in Eq. (1) (the authors also mention it is strong in Line 65). Is that correct? This is especially concerning since the main theorem requires the accuracy $\epsilon$ to be larger than $\mathrm{poly}(\epsilon_f)$.
Based on the confusions above, I currently tend to reject, but I would appreciate it if the authors could address my confusion in the rebuttal, and I will definitely reconsider my decision. Thank you.
1. What is the focus of the paper regarding line-search methods and oracle information?
2. What are the strengths of the proposed approach, particularly in its extensions and relaxations?
3. Do you have any concerns or questions regarding the paper's assumptions and contributions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper studied the line-search method with stochastic zeroth-order and first-order oracles. The author(s) extended prior works by (1) relaxing assumptions and (2) providing a high probability bound (an exponential tail bound) for the stopping time.
Review
Pros:
The structure of the paper is clear and most contents are easy to follow.
Related works are well-addressed to my knowledge.
Cons:
Eq. (4) holds for a finite sum of convex functions and usually does not hold for nonconvex functions. Therefore I don’t think Proposition 2 could hold for nonconvex functions. Assuming that Proposition 2 is satisfied, it means we have an accurate estimate of the true gradient at every saddle point (with probability $1-\delta$). Can the author(s) provide a nonconvex example that satisfies Proposition 2? I understand that similar assumptions are also used in prior works, but I am confused about whether it is an appropriate assumption.
One extension of this work over prior work is the $\epsilon_g$ used in Eq. (2). But this also makes the theoretical result weaker: $\epsilon$ cannot be smaller than $\epsilon_g/\eta$ (see Assumption 4). I agree that this paper relaxed the assumptions (see Eq. (1) and Eq. (2)) used in prior works, but the obtained result is also weaker. I don’t think that the relaxed assumption is a significant contribution. The major contribution of this paper, I think, is the exponential high probability bound obtained (in contrast to the convergence in expectation in prior work). But I am not sure if this result is significant enough to be accepted at NeurIPS.
NIPS | Title
High Probability Complexity Bounds for Line Search Based on Stochastic Oracles
Abstract
We consider a line-search method for continuous optimization under a stochastic setting where the function values and gradients are available only through inexact probabilistic zeroth and first-order oracles. These oracles capture multiple standard settings including expected loss minimization and zeroth-order optimization. Moreover, our framework is very general and allows the function and gradient estimates to be biased. The proposed algorithm is simple to describe, easy to implement, and uses these oracles in a similar way as the standard deterministic line search uses exact function and gradient values. Under fairly general conditions on the oracles, we derive a high probability tail bound on the iteration complexity of the algorithm when applied to non-convex smooth functions. These results are stronger than those for other existing stochastic line search methods and apply in more general settings.
1 Introduction
In this paper, we analyze a line-search method when applied to the problem of minimizing an unconstrained, differentiable, possibly non-convex function : Rn ! R. The goal is to find a "-stationary point for ; that is, a point x with kr (x)k ". We make the standard assumption that r is LLipschitz, but the knowledge of L is not assumed by the algorithm. We consider a setting where neither the function value (x) nor the gradient r (x) are directly computable. Instead, the algorithm is given black-box access to the following probabilistic oracles:
• Probabilistic zeroth order oracle. Given a point x, the oracle computes f(x, ⇠), a (random) estimate of the function value (x). ⇠ is a random variable (whose distribution may depend on x), with probability space (⌦,F⌦, P ). We assume the absolute value of the estimation error e(x) = |f(x, ⇠(x)) (x)| (we omit the dependence on ⇠ for brevity) to be a “one-sided” sub-exponential-like random variable1 with parameters (⌫, b), whose mean is bounded by some constant ✏f > 0. Specifically,
E⇠ [e(x)] ✏f and E⇠ [exp{ (e(x) E[e(x)])}] exp ✓ 2⌫2
2
◆ , 8 2 0, 1
b
. (1)
• Probabilistic first order oracle. Given a point x and a constant ↵ > 0, the oracle computes g(x, ⇠0), a (random) estimate of the gradient r (x), such that
P⇠0 (kg(x, ⇠0) r (x)k max{✏g,↵kg(x, ⇠0)k}) 1 . (2) Here, ⇠0 is a random variable (whose distribution may depend on x), with associated probability space (⌦0,F⌦0 , P 0). (1 ) 2 (0, 1) is the probability, intrinsic to the oracle, that the
1This is a weaker requirement than assuming e(x) to be sub-exponential, as one only needs to guard against the possibility of the error e(x) being too large.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
gradient estimate is “sufficiently accurate” with respect to ✏g,, and ↵. Lastly, , ✏g 0 are constants, intrinsic to the oracle, which represent the precision the oracle can achieve. Note that ✏g allows the gradient estimate to be bounded away from the true gradient by a constant distance.
Remark We will analyze a line search algorithm that relies on these two oracles. In the zeroth order oracle, the constants ✏f and (⌫, b) are intrinsic. In the first order oracle, , ✏g , and are intrinsic. These values cannot be controlled. On the other hand, ↵ is an input to the first order oracle that can be chosen by the algorithm. In fact, as we shall see in Section 3, ↵ will be the step size of the line search method.
These two oracles cover several settings, including
• Standard supervised learning, where gradients and values of the loss function are computed based on a mini-batch. Here, the random variables ⇠ and ⇠0 in the zeroth and first order oracles represent the random set of samples in the mini-batch.
• Zeroth order optimization, where gradients are estimated via randomized finite differences using (possibly noisy) function values. This arises in policy gradients in reinforcement learning, as is used in [SHC+16] and analyzed in [BCCS21].
• A variety of other settings, where the gradients and function estimates may be biased stochastic estimates of the true gradients and function values.
The constants in the oracles determine the precision of the function and gradient estimates. These constants will also dictate the accuracy achievable by the line search method we analyze. Specifically, if ✏f = 0 and ✏g = 0, then the algorithm converges to a stationary point. Otherwise, a precise lower bound is derived for the smallest kr (x)k the algorithm can achieve, in terms of the constants in the oracles. It is worth noting that the oracles can be biased. Indeed, the zeroth order oracle can incur arbitrarily large error, as long as it satisfies 1. Moreover, the first order oracle only requires g(x, ⇠0(x)) to be a “sufficiently accurate” estimate ofr (x) with probability 1 . Thus g(x, ⇠0(x)) can be an arbitrary vector with probability , so it in principle can have an arbitrarily large bias.
The line-search algorithm is given in Section 3. It is a modification of the standard Armijo-based line search algorithm [NW06], with access to the zeroth and first order oracles. The two small modifications are: 1) The Armijo condition is relaxed by an additive constant 2✏f , to account for the inexact function evaluations, and 2) The first order oracle is called in each iteration, and a new search direction is generated whenever the step size changes. This allows the method to progress to near-stationary points without assuming the gradient estimates (e.g. the mini-batch gradients in supervised learning) to be Lipschitz continuous.
Our framework and analysis are based on results in [CS17], [GRVZ18] and [BCS19]. However, there are several key differences. In [CS17] and [BCS19] the line search has access to stronger oracles, with ✏g = 0 and |f(x, ⇠) (x)| ✏f deterministically. Under these assumptions, [CS17] and [BCS19] derive an expected iteration complexity bound. In this paper, we provide a high probability tail bound on the iteration complexity, showing that the algorithm is very likely to succeed in a number of iterations on the order of its expected iteration complexity. Moreover, we consider more general oracles, with arbitrary ✏g and possibly unbounded |f(x, ⇠) (x)|. Thus, we significantly strengthen the results in [CS17] and [BCS19]. To the best of our knowledge, the only other high probability complexity bound of this kind is derived in [GRVZ18] for a trust region algorithm under the assumption ✏g = 0 and |f(x, ⇠) (x)|= 0 deterministically, which are much stronger oracles.
Stochastic line search has also been analyzed in [PS20] and [VML+19]. In [PS20] the assumptions on |f(x, ⇠) (x)| are different. On the one hand, they allow for more general distributions than sub-exponential. On the other hand, it is assumed that |f(x, ⇠) (x)| can be made arbitrarily small with some fixed probability. An expected iteration complexity bound is then derived for arbitrarily small ". In contrast, we do not assume this, and analyze the iteration complexity of reaching an "-stationary point, with " lower-bounded by a function of the constants in the oracles. Moreover, our analysis and results are much simpler than those in [PS20] and we derive an iteration complexity bound in high probability, not just in expectation.
In [VML+19], the traditional line search is analyzed for empirical loss minimization, where the function oracles are implemented using a random mini-batch of a fixed size. The mini-batch remains
fixed during backtracking until a standard Armijo condition is satisfied. Thus the search direction remains the same until a step is taken. While good computational performance has been reported in [VML+19], its theoretical analysis requires several very restrictive assumptions, especially for nonconvex functions. Also, they bound the expected sum of squared gradient norms, while we bound the iteration complexity with high probability. We note that using similar techniques as in [BCS19], our analysis can be extended to the convex and strongly convex cases.
In summary, we present an analysis of an adaptive line search algorithm under very general conditions on the gradient and function estimates. The results not only subsume most results in the prior literature, but also substantially extend the framework. Moreover, high probability tail bounds on iteration complexity are derived, instead of only expected iteration complexity.
2 Oracles
In this section, we discuss a couple of settings, and show how they are captured by our framework. All norms used are 2-norm.
2.1 Expected loss minimization
Let us first discuss how the oracle definitions apply to expected loss minimization. In this setting, (x) = Ed⇠D[`(x, d)]. Here, x is the model parameters, d is a data sample following distribution D, and `(x, d) is the loss when the model parameterized by x is evaluated on data point d.
In this case, the zeroth and first order oracles can be as follows, where S is a mini-batch sampled from D:
f(x,S) = 1
|S|
X d2S `(x, d), g(x,S) = 1 |S| X d2S rx`(x, d). (3)
In general, S can be chosen to depend on x. We now show how our zeroth and first order oracle conditions are satisfied by selecting an appropriate sample size |S|.
Proposition 1. Let ê(x, d) := ℓ(x, d) − φ(x) be a (ν̂(x), b̂(x))-sub-exponential random variable and Var_{d∼D}[ℓ(x, d)] ≤ ε̂(x)², for some ν̂(x), b̂(x), ε̂(x). Let e(x, S) = |f(x, S) − φ(x)| and N = |S|. Then

E_S[e(x, S)] ≤ ε̂(x)/√N, and e(x, S) is (ν(x), b(x))-sub-exponential,

with ν(x) = b(x) = 8e² max{ ν̂(x)/√N, b̂(x) }.
In the case when the support of D is bounded, ℓ is Lipschitz, and the set of x we consider is bounded, the assumption of Proposition 1 is satisfied. Thus, f(x, S) is a zeroth order oracle with ε_f = sup_x ε̂(x)/√N, ν = sup_x ν(x), and b = sup_x b(x), and ε_f can be made arbitrarily small by taking a large enough sample.
Under standard assumptions on ∇ℓ(x, d), for instance Assumption 4.3 in [BCN18], which states that for some M_c, M_v ≥ 0 and for all x,

E_{d∼D} ‖∇ℓ(x, d) − ∇φ(x)‖² ≤ M_c + M_v ‖∇φ(x)‖²,    (4)

one can show g(x, S) is a first order oracle with a large enough sample size.

Proposition 2. Let g = g(x, S). Assuming E_{d∼D} ∇ℓ(x, d) = ∇φ(x), then

|S| ≥ (M_c + M_v ‖∇φ(x)‖²)/δ · min{ 1/ε_g², (1 + α)²/(2α² ‖∇φ(x)‖²) }

implies P( ‖g − ∇φ(x)‖ ≤ max{ε_g, α‖g‖} ) ≥ 1 − δ.

This bound implies a looser bound of:

|S| ≥ max{ 2M_c/(δ ε_g²), 2M_v (1 + α)²/(2δ α²) }.
Remark. Let us discuss what is required from the minibatch size in this setting. Unlike standard SGD, the minibatch size is chosen dynamically. When convergence to a stationary point is desired, ε_g has to be zero, and the gradient estimate g_k tends to zero. Thus, ‖g − ∇φ(x)‖ also has to tend to zero. If M_c = 0, then fixing the mini-batch size to be at least M_v(1 + α)²/(2δα²) provides a valid first order oracle. Thus, unless α tends to zero, the minibatch size remains bounded from below. This is similar to the interpolation condition used in e.g. [BM11, VML+19]. On the other hand, when M_c > 0, the minibatch size has to grow in order to approach a stationary point. This is similar to dynamic minibatch size selection, discussed, e.g., in [BCNW12]. The difference between our results and those in [BCNW12] is that our batch bound is implementable and guarantees convergence, while the one in [BCNW12] is implementable only as a heuristic. As our computational results show, however, a fixed and small mini-batch size appears to work very well, perhaps because M_c is small.
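As a rough illustration of this dynamic selection, the snippet below computes the looser sample-size bound of Proposition 2 as reconstructed above. It is only a sketch: the constants mirror the stated bound, M_c, M_v and δ are assumed known or estimated, and ε_g = 0 with M_c > 0 correctly forces an unbounded batch.

```python
import math

def minibatch_size_prop2(Mc, Mv, eps_g, alpha, delta):
    """Sufficient (not tight) mini-batch size from the looser bound of Proposition 2."""
    if Mc == 0:
        term_c = 0.0                      # interpolation-like case: only the variance term matters
    elif eps_g == 0:
        return math.inf                   # Mc > 0 and eps_g = 0: no finite batch size suffices
    else:
        term_c = 2.0 * Mc / (delta * eps_g ** 2)
    term_v = 2.0 * Mv * (1.0 + alpha) ** 2 / (2.0 * delta * alpha ** 2)
    return math.ceil(max(term_c, term_v))
```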
2.2 Randomized finite difference gradient approximation
Gradient estimates based on randomized finite differences using noisy function evaluations have become popular for zeroth order optimization, particularly for model-free policy optimization in reinforcement learning [SHC+16, FGKM18].
In this setting, the zeroth order oracle is assumed to be available, but with a more strict assumption that e(x) ≤ ε_f deterministically. The first order oracle is obtained using the zeroth order oracle as follows. Let U = {u_i : i = 1, . . . , |U|} be a set of random vectors, with each vector following some “nice” distribution (e.g. standard Gaussian). Then,
g(x, U) = Σ_{i=1}^{|U|} [ f(x + σu_i, ξ) − f(x, ξ) ] / (σ|U|) · u_i,    (5)
where σ is the sampling radius. The proposition below shows that (5) with a large enough sample size gives a first order oracle.

Proposition 3. Let g = g(x, U), and fix ε_g = 2(√n Lσ + √n ε_f/σ), where n is the dimension of x. Then

|U| ≥ (3/δ) · [ 4L²σ²n(n+2)(n+4) + 12 ε_f² n/σ² + 18n ‖∇φ(x)‖² ] · min{ 4/ε_g², 1/( (α/(1+α)) ‖∇φ(x)‖ − ε_g/2 )² }

implies P( ‖g − ∇φ(x)‖ ≤ max{ε_g, α‖g‖} ) ≥ 1 − δ. Note that in this setting, ε_g is a fixed bias dependent on σ, and cannot be made arbitrarily small.
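A minimal sketch of the estimator in (5) is given below, assuming NumPy and a noisy function oracle f_noisy (an illustrative name). It reuses a single evaluation of f_noisy at x across all sampled Gaussian directions.

```python
import numpy as np

def fd_gradient_estimate(x, f_noisy, sigma, num_dirs, rng):
    """Randomized forward-difference gradient estimate, as in (5)."""
    n = x.shape[0]
    f0 = f_noisy(x)
    g = np.zeros(n)
    for _ in range(num_dirs):
        u = rng.standard_normal(n)
        g += (f_noisy(x + sigma * u) - f0) / (sigma * num_dirs) * u
    return g
```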
Remark. Note that ε_g defines the neighborhood of convergence for any method that relies on this oracle, and the smallest value for ε_g is achieved by setting σ = O(√ε_f). Let us now discuss the minibatch size. Under the assumption that ε_f is small, 4L²σ²n(n+2)(n+4) + 12ε_f²n/σ² is also small. Thus when ‖∇φ(x)‖ is larger than or on the order of ε_g, then the sample set size remains constant and is proportional to n. In [NS17] a constant step size stochastic gradient descent is applied using sample size |U| = 1, thus each step requires about n fewer samples. However, the step size has to be roughly n times smaller to account for the variance of the stochastic oracles based on one sample, thus the overall complexity is the same.
Other finite difference approximation schemes and their centralized versions (see [BCCS21] for a reference on these) also give suitable first order oracles.
2.3 Other settings
Our oracle framework also fits a variety of other settings, as we allow the randomness ξ and ξ′ of the zeroth and first order oracles to be dependent on x and on each other, possibly following different distributions. Moreover, the oracles allow the function and gradient estimations to be arbitrarily bad occasionally, which allows them to capture settings where measurements are corrupted with outliers. The exact derivations of these oracles in these different settings are subjects of future exploration.
3 Algorithm and notation
We consider the line search algorithm proposed by [BCS19], which is an extension of the line search algorithm in [CS17] to the setting of inexact function estimates. In both algorithms, a random gradient estimate is used to attempt a step. We name the algorithm “ALOE”, which stands for Adaptive Line-search with Oracle Estimations. Compared to [CS17], the key modification of the algorithm is the relaxation of the Armijo condition by an additive constant 2ε_f. The difference between this algorithm and the more standard line search methods such as the ones in [NW06] and [VML+19] is that the gradient estimate is recomputed in each iteration, whether or not a step is accepted. Note that all input parameters are user controlled, except for ε_f. In fact, the input ε_f here is only required to be some upper bound for E[e(x)], not necessarily the tightest one. Moreover, our computational results in Section 6 indicate that estimating ε_f is relatively easy in practice, and the algorithm is robust to the choice of ε_f.
Algorithm 1 Adaptive Line-search with Oracle Estimations (ALOE)
Input: Parameter ε_f of the zeroth order oracle, starting point x_0, max step size α_max > 0, initial step size α_0 < α_max, constants θ, γ ∈ (0, 1).
1: for k = 0, 1, 2, . . . do
2:   Compute gradient approximation g_k: Generate the direction g_k = g(x_k, ξ′_k) using the probabilistic first order oracle, with α = α_k.
3:   Check sufficient decrease: Let x⁺_k = x_k − α_k g_k. Generate f(x_k, ξ_k) and f(x⁺_k, ξ⁺_k) using the probabilistic zeroth order oracle. Check the modified Armijo condition:
       f(x⁺_k, ξ⁺_k) ≤ f(x_k, ξ_k) − α_k θ ‖g_k‖² + 2ε_f.    (6)
4:   Successful step: If (6) holds, then set x_{k+1} ← x⁺_k and α_{k+1} ← min{α_max, γ⁻¹ α_k}.
5:   Unsuccessful step: Otherwise, set x_{k+1} ← x_k and α_{k+1} ← γ α_k.
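For concreteness, here is a minimal Python sketch of Algorithm 1. It assumes the two probabilistic oracles are provided as callables f_oracle(x) and g_oracle(x, alpha) (illustrative names); it is a sketch, not the authors' reference implementation.

```python
import numpy as np

def aloe(x0, f_oracle, g_oracle, eps_f, alpha0, alpha_max, theta, gamma, max_iters):
    """Sketch of ALOE (Algorithm 1): Armijo condition relaxed by 2 * eps_f."""
    x, alpha = np.asarray(x0, dtype=float), alpha0
    for _ in range(max_iters):
        g = g_oracle(x, alpha)                        # step 2: fresh gradient estimate
        x_plus = x - alpha * g                        # step 3: trial point
        f_x, f_plus = f_oracle(x), f_oracle(x_plus)   # fresh zeroth order estimates
        if f_plus <= f_x - alpha * theta * np.dot(g, g) + 2.0 * eps_f:   # condition (6)
            x = x_plus                                # step 4: successful step
            alpha = min(alpha_max, alpha / gamma)
        else:
            alpha = gamma * alpha                     # step 5: unsuccessful step
    return x
```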
In this paper we impose the following standard assumption on φ(x). Assumption 1. ∇φ is L-Lipschitz, and φ is bounded from below by some constant φ*.
Let e_k = |f(x_k, ξ_k) − φ(x_k)| and e⁺_k = |f(x⁺_k, ξ⁺_k) − φ(x⁺_k)|. Recall that e_k and e⁺_k satisfy (1) from the definition of the zeroth order oracle. We will consider two cases: 1) e_k and e⁺_k are deterministically bounded by ε_f, in which case ν and b in (1) can be chosen to be 0, and 2) ν and b are not necessarily zero, in which case we assume the random variables e_k + e⁺_k are all independent.

Assumption 2. Either e_0, e⁺_0, e_1, e⁺_1, . . . are all deterministically bounded by ε_f, or the random variables {e_0 + e⁺_0, e_1 + e⁺_1, . . .} are independent.

Definition 1 (Definition of a true iteration). We say an iteration k is true if ‖g_k − ∇φ(x_k)‖ ≤ max{ε_g, α_k ‖g_k‖} and e_k + e⁺_k ≤ 2ε_f,
and false otherwise.
Let M_k denote the triple {Ξ_k, Ξ⁺_k, Ξ′_k}, whose realizations are {ξ_k, ξ⁺_k, ξ′_k}. Algorithm 1 generates a stochastic process adapted to the filtration {F_k : k ≥ 0}, where F_k = σ(M_0, M_1, . . . , M_k). We define the following random variables, measurable with respect to F_k.
• I_k := 1{iteration k is true}.
• Θ_k := 1{iteration k is successful}.
• T_ε := min{k : ‖∇φ(x_k)‖ ≤ ε}, the iteration complexity of the algorithm for reaching ε-stationarity.
• Z_k := φ(x_k) − φ* ≥ 0, a measure of progress.
It is easy to see that T_ε is a stopping time of the stochastic process with respect to F_k. We derive a high probability tail bound for T_ε, and obtain an iteration complexity bound in high probability for Algorithm 1 when applied to non-convex functions. The final result is summarized below with simplified constants. The full statement is in Theorem 4.

Theorem 1 (Main convergence result with simplified constants). Suppose Assumptions 1 and 2 hold, and (for simplicity) θ = 1/2, α_max ≤ 1 and L ≥ 1. Then, for any

ε ≥ 4 max{ ε_g, (1 + α_max) √((L + 2) ε_f) },
we have the following bound on iteration complexity:
For any s ≥ 0, p = 1 − e^{−min{u²/(2ν²), u/(2b)}}, p̂ ∈ (1/2 + (4ε_f + s)/(Cε²), p), and t ≥ R/(p̂ − 1/2 − (4ε_f + s)/(Cε²)),

P(T_ε ≤ t) ≥ 1 − exp(−(p − p̂)² t/(2p²)) − exp(−min{ s²t/(8ν²), st/(4b) }).

Here, u = inf_x {ε_f − E[e(x)]}, R = (φ(x_0) − φ*)/(Cε²) − ln((L+2)α_0)/ln γ, and C = 1/(2(L+2)(1+α_max)²).
Remark. This theorem essentially shows that the iteration complexity of Algorithm 1 is bounded by a quantity on the order of

[ 1/(p − 1/2 − (4ε_f + s)/(Cε²)) ] · [ (φ(x_0) − φ*)/(Cε²) − ln((L+2)α_0)/ln γ ]

with overwhelmingly high probability. If p = 1 and ε_f = 0, the above quantity essentially recovers the iteration complexity of the deterministic line search algorithm.
4 Analysis framework for the high probability bound
In this section we present the main ideas underlying the theoretical analysis. We first state general conditions on the stochastic process (Assumption 3), from which we are able to derive a high probability tail bound on the iteration complexity. They are listed as assumptions here, and in the next section, we will show that they indeed hold for Algorithm 1 when applied to non-convex smooth functions φ. Assumption 3 (Properties of the stochastic process). There exist a constant ᾱ > 0 and a nondecreasing function h : R → R, which satisfies h(α) > 0 for any α > 0, such that for any realization of the algorithm, the following hold for all k < T_ε:
(i) h(ᾱ) > 8ε_f.

(ii) P(I_k = 1 | F_{k−1}) ≥ p for all k, with some p ∈ (1/2 + 4ε_f/h(ᾱ), 1].

(iii) If I_k Θ_k = 1 then Z_{k+1} ≤ Z_k − h(α_k) + 4ε_f. (True, successful iterations make progress.)

(iv) If α_k ≤ ᾱ and I_k = 1 then Θ_k = 1.

(v) Z_{k+1} ≤ Z_k + 2ε_f + e_k + e⁺_k for all k.
The following key lemma follows easily from Assumption 3 (ii) and the Azuma-Hoeffding inequality [Azu67] applied to the submartingale Σ_{k=0}^{t−1} I_k − pt.
Lemma 1. For all 1 ≤ t ≤ T_ε and any p̂ ∈ [0, p), we have

P( Σ_{k=0}^{t−1} I_k ≤ p̂ t ) ≤ exp(−(p − p̂)² t/(2p²)).
We now define another indicator variable that will be used in the analysis.
Definition 2 (Large step). For all integers k ≥ 0, define the random variable U_k as follows:

U_k = 1 if min{α_k, α_{k+1}} ≥ ᾱ, and U_k = 0 if max{α_k, α_{k+1}} ≤ ᾱ.
We will say that step k is a large step if Uk = 1. Otherwise, step k is a small step.
By the dynamics of the process, every step is either a large step or a small step, but not both.
Our analysis will rely on the following key observation: By Assumption 3, if iteration k has U_k Θ_k I_k = 1, then Z_k gets reduced by at least h(ᾱ) − 4ε_f > 0. We call such an iteration a “good” iteration, because it makes progress towards optimality by at least a fixed amount. On the other hand, on any other iteration k, Z_k can increase by at most 2ε_f + e_k + e⁺_k. The idea of the analysis is to show that with high probability, the progress made by the good iterations dominates the damage caused by the other iterations. The crux of the proof is to show that with high probability, a large enough constant fraction of the iterations are good (up to another additive constant).
The following key lemma is the engine of the analysis. It shows that if the stopping time has not been reached and a large enough number of iterations are true, then there must be a large number of good iterations.

Lemma 2. For any positive integer t and any p̂ ∈ (1/2, 1], we have

P( T_ε > t and Σ_{k=0}^{t−1} I_k ≥ p̂ t and Σ_{k=0}^{t−1} U_k Θ_k I_k < (p̂ − 1/2) t − d/2 ) = 0,

where d = max{ (ln α_0 − ln ᾱ)/(− ln γ), 0 }.
4.1 Bounded noise case
In [CS17] and [BCS19], the expected iteration complexity of the line search algorithm is bounded under the assumptions that e(x) = 0 and e(x) ≤ ε_f for all x, respectively. We now derive a high probability tail bound on the iteration complexity under the assumption that e(x) ≤ ε_f for all x. Note that we do not need to assume that the errors e(x) are independent in the bounded noise setting. Thus, this analysis applies even when the noise is deterministic or adversarial.
Under Assumption 3 in the bounded noise setting, we have Z_{k+1} ≤ Z_k + 4ε_f in all iterations, and Z_{k+1} ≤ Z_k − h(ᾱ) + 4ε_f in good iterations. Putting this together with Lemma 2 and the other conditions in Assumption 3, we obtain the following theorem.

Theorem 2 (Iteration complexity in the bounded noise setting). Suppose Assumption 3 holds, and e_k, e⁺_k ≤ ε_f at every iteration. Then for any p̂ ∈ (1/2 + 4ε_f/h(ᾱ), p) and t ≥ R/(p̂ − 1/2 − 4ε_f/h(ᾱ)), we have

P(T_ε ≤ t) ≥ 1 − exp(−(p − p̂)² t/(2p²)),

where R = Z_0/h(ᾱ) + d/2 and d = max{ (ln α_0 − ln ᾱ)/(− ln γ), 0 }.
4.2 General sub-exponential noise case
We now present a high probability bound for the iteration complexity with general sub-exponential noise in the zeroth order oracle. The result is very similar to that of Theorem 2. The main difference from the bounded noise analysis is that instead of bounding the “damage” caused on a per-iteration basis, we bound the sum of all such damages over all iterations. The fact that the noises are sub-exponential and independent allows us to apply Bernstein’s inequality to obtain an upper bound on this sum that holds with high probability.

Theorem 3 (Iteration complexity in the sub-exponential noise setting). Suppose Assumptions 2 and 3 hold. Then for any s ≥ 0, p̂ ∈ (1/2 + (4ε_f + s)/h(ᾱ), p), and t ≥ R/(p̂ − 1/2 − (4ε_f + s)/h(ᾱ)), we have

P(T_ε ≤ t) ≥ 1 − exp(−(p − p̂)² t/(2p²)) − exp(−min{ s²t/(8ν²), st/(4b) }),

where R = Z_0/h(ᾱ) + d/2 and d = max{ (ln α_0 − ln ᾱ)/(− ln γ), 0 }.
5 Final iteration complexity of the line search algorithm
In the previous section, we presented high probability tail bounds on the iteration complexity, assuming Assumption 3 holds. We now verify that Assumption 3 indeed holds for Algorithm 1 when applied to smooth functions. Together with the results in Section 4, this allows us to derive an explicit high-probability bound on the iteration complexity.
As noted earlier, when either ε_f or ε_g are not zero, Algorithm 1 does not converge to a stationary point, but converges to a neighborhood where ‖∇φ(x)‖ ≤ ε, with ε bounded from below in terms of ε_f or ε_g. The specific relationship is as follows.

Inequality 1 (Lower bound on ε).

ε > max{ ε_g/η, max{ 1 + α_max, 1/(1 − η) } · √( 4ε_f/(θ(p − 1/2)) · max{ (0.5L + 1)/(1 − θ), L(1 − η)/(2(1 − 2η − θ(1 − η))) } ) },

for some η ∈ (0, (1 − θ)/(2 − θ)).
Here η can be any value in the interval. p = 1 in the bounded noise setting, and p = 1 − exp(−min{u²/ν², u/b}) otherwise, with u = inf_x{ε_f − E[e(x)]}.
Proposition 4 (Assumption 3 holds for Algorithm 1). If Inequality 1 and Assumptions 1 and 2 hold, then Assumption 3 holds for Algorithm 1 with the following p, ᾱ and h(α):

1. p = 1 when the noise is bounded by ε_f, and p = 1 − exp(−min{u²/(2ν²), u/(2b)}) otherwise. Here u = inf_x{ε_f − E[e(x)]}.

2. ᾱ = min{ (1 − θ)/(0.5L + 1), 2(1 − 2η − θ(1 − η))/(L(1 − η)) }.

3. h(α) = min{ θε²α/(1 + α_max)², θα(1 − η)²ε² }.
Applying Theorem 3 now gives the explicit complexity bound for Algorithm 1.
Theorem 4. Suppose the Inequality 1 on ε is satisfied for some η ∈ (0, (1 − θ)/(2 − θ)), and Assumptions 1 and 2 hold. Then we have the following bound on the iteration complexity: For any s ≥ 0, p̂ ∈ (1/2 + (4ε_f + s)/(Cε²), p), and t ≥ R/(p̂ − 1/2 − (4ε_f + s)/(Cε²)),

P(T_ε ≤ t) ≥ 1 − exp(−(p − p̂)² t/(2p²)) − exp(−min{ s²t/(8ν²), st/(4b) }).

Here, R = (φ(x_0) − φ*)/(Cε²) + max{ (ln α_0 − ln ᾱ)/(− ln γ), 0 }, and C = min{ 1/(1 + α_max)², (1 − η)² } · ᾱθ, with p and ᾱ as defined in Proposition 4.
Remark. Inequality 1 makes sure there exists some p̂ ∈ (1/2 + (4ε_f + s)/(Cε²), p) for some s > 0. The above theorem is for the general sub-exponential noise setting. In the bounded noise special case, we have s = 0, and the last term exp(−min{s²t/(8ν²), st/(4b)}) in the probability is not present.
6 Experiments
In this section, we illustrate that the proposed stochastic algorithm ALOE can be at least as efficient in practice as the line search in [VML+19], and much more efficient than full gradient line search. From the experiments, we show that estimating ε_f is not difficult, and taking mini-batches of a fixed size indeed provides good zeroth and first order oracles in practice.
For illustration, we first conduct experiments on all the datasets for binary classification with 150 to 5000 data points from the Penn Machine Learning Benchmarks repository (PMLB) [RLLC+21]. In total, there are 64 such datasets. Each binary classification problem is formulated as a logistic
regression problem with an RBF kernel (with parameter σ = 1). All experiments were conducted on a 2020 MacBook Pro with an M1 chip and 16GB of memory.
We compare the following three algorithms, and they are implemented as follows.
• ALOE. The zeroth and first order oracles are implemented using the same mini-batch of a fixed size within each iteration. Batch sizes are taken to be 128. We estimate ε_f at the beginning of every epoch (i.e. every K iterations, where K equals the total number of data samples divided by 128), by computing 15 times the empirical standard deviation of 30 zeroth order oracle calls with batch size 128 at the current point (a small sketch of this estimate appears after this list). We found in practice the algorithm is quite robust to how ε_f is chosen. The relevant plots are in Appendix F. The parameters we used are γ = 0.8, θ = 0.2, α_0 = 1 and α_max = 10.
• SLS. The SLS algorithm (also called “SGD + Armijo”) proposed in [VML+19] differs from ALOE in that ε_f = 0 and that the same mini-batch is used while backtracking until the Armijo condition is satisfied. We implemented the algorithm using mini-batch size 128 and the parameters suggested in their paper. We tried various parameter combinations for SLS and found the performance of the suggested parameters to work best.
• Full gradient line search. The full gradient line search algorithm is implemented using the entire dataset for function and gradient evaluations on each iteration. We take ε_f = 0, and the other parameters are the same as used in ALOE. For fair comparison in our experiments, we allow full gradient line search to make the same number of passes over each dataset as ALOE.
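The ε_f estimate described for ALOE above is simple to implement. Here is a sketch, assuming NumPy and a zeroth order oracle callable f_oracle (an illustrative name), using the same 30-call, 15x standard deviation heuristic.

```python
import numpy as np

def estimate_eps_f(f_oracle, x, num_calls=30, scale=15.0):
    """Heuristic eps_f estimate: a multiple of the empirical standard deviation
    of repeated zeroth order oracle calls at the current point."""
    samples = np.array([f_oracle(x) for _ in range(num_calls)])
    return scale * samples.std()
```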
We conducted 5 trials for each dataset and ran each algorithm with initial points taken randomly from a standard Gaussian distribution. In Figure 1 we compare the overall performance of the three algorithms in the following way. For each dataset and algorithm, the average best value is defined as the average of the minimum training loss attained over 5 different trials. For each dataset we record the difference between the average best values achieved by SLS vs. ALOE, and plot the resulting 64 numbers as a histogram. The same is done for full gradient line search vs. ALOE. See Figure 1. Under this metric, ALOE achieves better training loss than SLS algorithm in 62 out of 64 datasets, and is always better than the full gradient line search.
Figure 2 illustrates the decay of training losses using these three algorithms for three datasets. In many cases ALOE decreases the training loss more rapidly than the other two algorithms. Testing set accuracy comparisons are also carried out, using random 80 : 20 splits of datasets, as shown in Figure 3. Test accuracy is defined as the proportion of data points in the testing set classified correctly. The results show that ALOE is competitive in terms of test accuracy as well. More performance and test accuracy plots for different datasets, models and loss functions are in Appendix.
7 Final Remarks
We conclude the paper with a brief overview of our theoretical results with respect to those in [VML+19]. The stochastic line search in [VML+19] is proposed specifically for empirical risk minimization, and the zeroth and first order oracles are implemented using a mini-batch of a fixed size. The same mini-batch is used for all consecutive unsuccessful iterations. This guarantees that a successful iteration is eventually achieved for the Armijo condition with ε_f = 0, under the assumption
that for every mini-batch, g(x, ξ′) is Lipschitz continuous. The convergence analysis then assumes that M_c = 0 in (4) (strong growth condition) and, in the case when φ is not convex, the step size parameter is bounded above by 1/(L M_v). Thus, the method itself and its convergence are not better than those of a stochastic gradient descent with a fixed step size bounded by 1/(L M_v) [BCN18]. It is also assumed that the step size is reset to a fixed value at the start of each iteration, which is impractical. Good computational results are reported in [VML+19] for a heuristic version of the algorithm where the restrictions of the step size are removed.
In this paper we analyzed Algorithm 1 under virtually no restriction on the step size parameter. For the sake of simplicity of analysis, we assume the step size parameter is reduced and increased by the same multiplicative factor. This can be relaxed to some degree. We also do not assume that g(x, ξ′) is Lipschitz continuous; we only impose this condition on φ. The cost of relaxing all these assumptions is the use of ε_f. For simplicity of the analysis, ε_f is assumed to be fixed throughout the algorithm. In practice, it can be re-estimated regularly. In many applications, ε_f tends to get smaller as the algorithm progresses towards optimality. Our experiments show that estimating ε_f is easy and works well in practice. Moreover, one can use much smaller values for ε_f than theory dictates.
8 Acknowledgments
This work was partially supported by NSF Grants TRIPODS 17-40796, NSF Grant CCF 2008434 and DARPA Lagrange award HR-001117S0039. Miaolan Xie was partially supported by a PhD Fellowship provided by MunchRe. Billy Jin was partially supported by NSERC fellowship PGSD3532673-2019.
The authors are grateful to the anonymous referees for their reviews that helped us improve the paper. | 1. What is the focus of the paper regarding line search variants for minimizing functions?
2. What are the strengths of the proposed approach, particularly in its generality and analysis?
3. What are the weaknesses of the paper, especially regarding its experimental scope and assumptions?
4. Do you have any questions regarding the methodology, such as minibatch size and learning rate?
5. Are there any concerns about the theorem's parameters and the necessity of recomputing function values and gradients?
6. Can the algorithm's convergence rate be compared to an obvious lower bound for similar oracles?
7. Why use absolute values around |e(x)| in Equations 200-202 when e(x) is already non-negative? | Summary Of The Paper
Review | Summary Of The Paper
The paper considers a new line search variant for minimizing functions with access to function value (zero order) and gradient (first order) estimates only through probabilistic oracles. For both oracles the output is a function of the input point and a random variable, and with probability 1-delta the output is close to the actual value. For the remaining delta probability, the first order oracle assumes nothing while the zero order oracle assumes that large errors are increasingly unlikely, making the requirements for the two oracles different.
The random variable in the oracle, could for instance represent the mini-batch used to estimate the required quantity. The authors propose a variant of Armijo Line search that
Adds some slack on the Armijo condition depending on the quality of the zeroth order oracle
Recompute gradient and function values in each step in the line search instead of using the same one.
For this line search method the authors show high probability iteration bounds for minimizing a Lipshitz bounded smooth function, that converge to an epsilon stationary point (norm gradient less than epsilon) where epsilon is lower bounded by the quality of the oracles. The bounds are more general in the sense that less is assumed about the oracles, and that the paper shows high probability iteration bounds, where the usual is a bound on the expected number of iterations.
Review
The approach and analysis is made as general as possible and analyzed without assuming convexity. However all the experiments consider a convex problem. Nonetheless, the experiments confirm that the proposed line searching method works well in this case. It would have been interesting to see non-convex problem, in particular the case with zeroth order optimization from for instance reinforcement learning as those oracles seem to able to be particularly noisy, albeit such problems may have a lot of moving parts to control that make it hard to isolate the value of the algorithm proposed.
It seems in the general case minibatch size should depend on the learning rate in the algorithm (in the case where the randomness comes from the minibatches), at least for the oracles created from mini batches satisfy the assumptions required? Stated differently, if batch size is too small there seems to be no guarantee for the algorithm. What happens in the experiments if the batch size is made much smaller?
Overall I think the proposed algorithm is nice and fairly simple which is positive albeit there are some parameters that need setting. The actual Theorem has a lot of parameters but the overall message can be extracted, and the result is quite general.
The norms in equation 2, and in general, is that any norm? or maybe the two norm of the vector?
Assumption 2, is that a standard assumption that the errors in different steps are independent (they could depend on the same x here)?
Does the iteration bound (almost) match an obvious lower bound for this kind of oracles?
Can it be shown that for these kinds of oracles the recomputation of the function value and gradient at each step of the line search is necessary in some sense? Could the standard Armijo SGD also converge here, as it may be cheaper to run (data could be expensive to generate, for instance with zeroth order optimization via finite difference approximation in reinforcement learning)?
why use absolute values around |e(x)| on line 200 - 202, e(x) is already non-negative. Am I missing something there? |
NIPS | Title
Learning Affordance Landscapes for Interaction Exploration in 3D Environments
Abstract
Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen). Given an egocentric RGB-D camera and a high-level action space, the agent is rewarded for maximizing successful interactions while simultaneously training an image-based affordance segmentation model. The former yields a policy for acting efficiently in new environments to prepare for downstream interaction tasks, while the latter yields a convolutional neural network that maps image regions to the likelihood they permit each action, densifying the rewards for exploration. We demonstrate our idea with AI2-iTHOR. The results show agents can learn how to use new home environments intelligently and that it prepares them to rapidly address various downstream tasks like “find a knife and put it in the drawer.” Project page: http://vision.cs.utexas.edu/projects/interaction-exploration/
N/A
1 Introduction
The ability to interact with the environment is an essential skill for embodied agents operating in human spaces. Interaction gives agents the capacity to modify their environment, allowing them to move from semantic navigation tasks (e.g., “go to the kitchen; find the coffee cup”) towards complex tasks involving interactions with their surroundings (e.g., “heat some coffee and bring it to me”).
Today’s embodied agents are typically trained to perform specific interactions in a supervised manner. For example, an agent learns to navigate to specified objects [18], a dexterous hand learns to solve a Rubik’s cube [4], a robot learns to manipulate a rope [40]. In these cases and many others, it is known a priori what objects are relevant for the interactions and what the goal of the interaction is, whether expressed through expert demonstrations or a reward crafted to elicit the desired behavior. Despite exciting results, the resulting agents remain specialized to the target interactions and objects for which they were taught.
In contrast, we envision embodied agents that can enter a novel 3D environment, move around to encounter new objects, and autonomously discern the affordance landscape—what are the interactable objects, what actions are relevant to use them, and under what conditions will these interactions succeed? Such an agent could then enter a new kitchen (say), and be primed to address tasks like “wash my coffee cup in the sink.” These capabilities would mimic humans’ ability to efficiently discover the functionality of even unfamiliar objects though a mixture of learned visual priors and exploratory manipulation.
To this end, we introduce the exploration for interaction problem: a mobile agent in a 3D environment must autonomously discover the objects with which it can physically interact, and what actions are valid as interactions with them.
Exploring for interaction presents a challenging search problem over the product of all objects, actions, agent positions, and action histories. Furthermore, many objects are hidden (e.g., in drawers) and need to be discovered, and their interaction dynamics are not straightforward (e.g., cannot open an already opened door, can only slice an apple if a knife is picked up). In contrast, exploration for navigating a static environment involves relatively small action spaces and dynamics governed solely by the presence/absence of obstacles [12, 50, 51, 18, 11, 47].
Towards addressing these challenges, we propose a deep reinforcement learning (RL) approach in which the agent discovers the affordance landscape of a new, unmapped 3D environment. The result is a strong prior for where to explore and what interactions to try. Specifically, we consider an agent equipped with an egocentric RGB-D camera and an action space comprised of navigation and manipulation actions (turn left, open, toggle, etc.), whose effects are initially unknown to the agent. We reward the agent for quickly interacting with all objects in an environment. In parallel, we train an affordance model online to segment images according to the likelihoods for each of the agent’s actions succeeding there, using the partially observed interaction data generated by the exploration policy. The two models work in concert to functionally explore the environment. See Figure 1.
Our experiments with AI2-iTHOR [29] demonstrate the advantages of interaction exploration. Our agents can quickly seek out new objects to interact with in new environments, matching the performance of the best exploration method in 42% fewer timesteps and surpassing them to discover 1.33× more interactions when fully trained. Further, we show our agent and affordance model help train multi-step interaction policies (e.g., washing objects at a sink), improving success rates by up to 16% on various tasks, with fewer training samples, despite sparse rewards and no human demonstrations.
2 Related Work
Visual affordances An affordance is the potential for action [22]. In computer vision, visual affordances are explored in various forms: predicting where to grasp an object from images and video [31, 32, 64, 38, 19, 62, 15, 5], inferring how people might use a space [48, 39] or tool [65], and priors for human body poses [26, 52, 58, 17]. Our work offers a new perspective on learning visual affordances. Rather than learn them passively from a static dataset, the proposed agent actively seeks new affordances via exploratory interactions with a dynamic environment. Furthermore, unlike prior work, our approach yields not just an image model, but also a policy for exploring interactions, which we show accelerates learning new downstream tasks for an embodied agent.
Exploration for navigation in 3D environments Recent embodied AI work in 3D simulators [36, 56, 60, 10] tackles navigation: the agent moves intelligently in an unmapped but static environment to reach a goal (e.g., [12, 11, 36, 6]). Exploration policies for visual navigation efficiently map the environment in an unsupervised “preview” stage [12, 50, 18, 11, 47, 46]. The agent is rewarded for maximizing the area covered in its inferred occupancy map [12, 11, 18], the novelty of the states visited [51], pushing the frontier of explored areas [46], and related metrics [47]. For a game setting in VizDoom, classic frontier-based exploration is improved by learning the visual appearance of hazardous regions (e.g., enemies, lava) where the agent’s health score has previously declined [46].
In contrast to all the above, we study the problem of exploration for interaction in dynamic environments where the agent can modify the environment state (open/close doors, pick up objects etc.). Our
end goal is not to build a top-down occupancy map, but rather to quickly interact with as many objects as possible in a new environment. In other words, whereas exploration for navigation promotes rapidly completing a static environment map, exploration for interaction promotes rapidly completing the agent’s understanding of its interactions in a dynamic environment.
Interaction in 3D environments Beyond navigation, recent work leverages simulated interactionbased environments [21, 29, 55, 45] to develop agents that can also perform actions (e.g., moving objects, opening doors) with the goal of eventually translating policies to real robots [2, 1]. These tasks include answering questions (“how many apples are in the fridge?”) that may require navigation [16] as well as interaction [25]. Towards service robotics, goal driven planning [63], instruction following [55], and cooking [21] agents are trained using imitation learning on expert trajectories.
Our idea to efficiently explore interactions is complementary. Rather than learn a task-specific policy from demonstrations, our approach learns task-agnostic exploration behavior from experience to quickly discover the affordance landscape. Our model can be coupled with a downstream task like those tackled above to accelerate their training, as we demonstrate in the experiments.
Self-supervised interaction learning Prior work studies actively learning manipulation policies through self-supervised training for grasping [44, 42, 33, 35, 61], pushing/poking [3, 41] and drone control [20]. Unstructured play data has also been used to learn subgoal policies [34], which are then sampled to solve complex tasks. Object affordance models are learned for simple objects in table-top environments [23, 24] and for block pushing tasks in gridworlds [28]. We share the general idea of learning through interaction; however, we focus on high-level interaction policies requiring both navigation and manipulation (e.g., moving to the counter and picking up knife) rather than fine-grained manipulation policies (e.g., altering joint angles).
Intrinsic motivation In the absence of external rewards from the environment, reinforcement learning agents can nonetheless focus their behavior to satisfy intrinsic drives [53]. Recent work formulates intrinsic motivation based on curiosity [43, 9, 27], novelty [51, 7], and empowerment [37] to improve video game playing agents (e.g., VizDoom, Super Mario) or increase object attention [27]. Our idea can be seen as a distinct form of intrinsic motivation, where the agent is driven to experience more interactions in the environment. Also, we focus on realistic human-centered 3D environments, rather than video games, and with high-level interactions that can change object state, rather than low-level physical manipulations.
3 Approach
Our goal is to train an interaction exploration agent to enter a new, unseen environment and successfully interact with all objects present. This involves identifying the objects that are interactable, learning to navigate to them, and discovering all valid interactions with them (e.g., discovering that the agent can toggle a light switch, but not a knife).
To address the challenges of a large search space and complex interaction dynamics, our agent learns visual affordances to help it intelligently select regions of the environment to explore and interactions to try. Critically, our agent builds this affordance model through its own experience interacting with the environment during exploration. For example, by successfully opening a cupboard, the agent learns that objects with handles are likely to be “openable". Our method yields an interaction exploration policy that can quickly perform object interactions in new environments, as well as a visual affordance model that captures where each action is likely to succeed in the egocentric view.
In the following, we first define the interaction exploration task (Sec. 3.1). Then, we show how an agent can train an affordance model via interaction experience (Sec. 3.2). Finally, we present our policy learning architecture that integrates interaction exploration and affordance learning, and allows transfer to goal-driven policy learning (Sec. 3.3).
3.1 Learning exploration policies for interaction
We want to train an agent to interact with as many objects as possible in a new environment. Agents can perform actions from a set A = AN ⋃ AI , consisting of navigation actions AN (e.g., move forward, turn left/right) and object interactions AI (e.g., take/put, open/close). The interaction exploration task is set up as a partially observable Markov decision process. The agent is spawned at an initial state s0. At each time step t, the agent in state st receives an observation
(xt, θt) consisting of the RGB image xt and the agent’s odometry1 θt, executes an action at ∼ A and receives a reward rt ∼ R(st, at, st+1). A recurrent network encodes the agent’s observation history over time to arrive at the state representation. The agent is rewarded for each successful interaction with a new object ot:
R(s_t, a_t, s_{t+1}) = 1 if a_t ∈ A_I and c(a_t, o_t) = 0, and 0 otherwise,    (1)
where c(a, o) counts how many times interaction (a, o) has successfully occurred in the past. The goal is to learn an exploration policy πE that maximizes this reward over an episode of length T . See Sec. 3.3 for the policy architecture. The hard, count-based reward formulation only rewards the agent once per interaction, incentivizing broad coverage of interactions, rather than mastery of a few, which is useful for downstream tasks involving arbitrary interactions.
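As a small illustration, the reward in Eq. (1) can be implemented with a simple counter over (action, object) pairs. The sketch below is plain Python; the class and argument names are illustrative rather than taken from the authors' code.

```python
from collections import defaultdict

class InteractionReward:
    """Count-based reward of Eq. (1): +1 the first time an (action, object)
    interaction succeeds, 0 for every repeat."""
    def __init__(self):
        self.counts = defaultdict(int)

    def __call__(self, action, target_object, is_interaction, success):
        if not (is_interaction and success):
            return 0.0
        key = (action, target_object)
        reward = 1.0 if self.counts[key] == 0 else 0.0
        self.counts[key] += 1
        return reward
```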
3.2 Affordance learning via interaction exploration
As the agent explores, it attempts interactions at various locations, only some of which succeed. These attempts partially reveal the affordances of objects — what interactions are possible with them — which we capture in a visual affordance model. An explicit model of affordances helps the agent decide what regions to visit (e.g., most interactions fail at walls, so avoid them) and helps extrapolate possible interactions with unvisited objects (e.g., opening one cupboard suggests that other handles are “openable”), leading to more efficient exploration policies.
At a high level, we train an affordance segmentation model FA to transform an input RGB-D image into a |AI|-channel segmentation map, where each channel is an H×W map over the image indicating regions where a particular interaction is likely to succeed. Training samples for this model come from the agent’s interaction with the environment. For example, if it successfully picks up a kettle, pixels around that kettle are labeled “pickup-able”, and these labels are propagated to all frames where the kettle is visible (both before and after the interaction took place), such that affordances will be recognizable even from far away. See Fig. 2 (right panel).
Specifically, for a trajectory τ = {(st, at)}t=1..T ∼ πE sampled from our exploration policy, we identify time steps t1...tN where interactions occur (at ∈ AI ). For each interaction, the world location pt at the center of the agent’s field of view is calculated by inverse perspective projection and stored along with the interaction type at and success of the interaction zt in memory asM = {(pt, at, zt)}t=t1..tN . This corresponds to “marking” the target of the interaction. At the end of the episode, for each frame x in the trajectory, we generate a corresponding segmentation mask y that highlights the position of all markers from any action that are visible in x. For each interaction ak, the label for each pixel in the k-th segmentation mask slice yk is calculated as:
y^k_{ij} = 0 if min_{(p,a,z)∈M_k} d(p_{ij}, p) < δ and z = 0;  1 if min_{(p,a,z)∈M_k} d(p_{ij}, p) < δ and z = 1;  −1 otherwise,    (2)
where M_k ⊆ M is the subset of markers corresponding to interaction a_k, p_{ij} is the world location at that pixel, d is Euclidean distance, and δ is a fixed distance threshold (20cm). In other words, each pixel is labeled 0 or 1 for affordance k depending on whether any marker has been placed nearby (within distance δ) at any time along the trajectory, and is visible in the current frame. If no markers are placed, the pixel is labeled −1 for unknown. See Fig. 2 (right panel). This results in a |A_I| × H × W dimension segmentation label mask per frame, which we use to train FA. These labels are sparse and noisy, as an interaction may fail with an object despite being valid in other conditions (e.g., opening an already opened cupboard). To account for this, we train two distinct segmentation heads using these labels to minimize a combination of cross entropy losses:
L(ŷ_A, ŷ_I, y) = L_ce(ŷ_A, y, ∀y_{ij} ≠ −1) + L_ce(ŷ_I, 1[y = −1], ∀y_{ij})    (3)

where 1[.] is the indicator function over labels. L_ce is standard cross entropy loss, but is evaluated over a subset of pixels specified by the third argument. Classifier output ŷ_A scores whether each interaction is successful at a location, while ŷ_I scores general interactibility (y = −1 vs. y ≠ −1). The latter acts as a measure of uncertainty to ignore regions where markers are rarely placed, regardless of success (e.g., the ceiling, windows). The final score output by FA is the product ŷ = ŷ_A × (1 − ŷ_I).
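To make the labeling procedure concrete, here is a simplified sketch of how per-frame labels could be built from the stored markers. It assumes NumPy, assumes each pixel's back-projected world coordinate is available, and resolves overlapping success/failure markers by overwriting rather than by the nearest-marker rule of Eq. (2); all names are illustrative.

```python
import numpy as np

def affordance_labels(pixel_world_coords, markers, num_actions, delta=0.2):
    """Per-pixel labels for one frame, following Eq. (2).

    pixel_world_coords: (H, W, 3) world coordinates back-projected from RGB-D.
    markers: iterable of (p, action_idx, success) tuples collected on the trajectory.
    Returns a (num_actions, H, W) array with entries in {1, 0, -1}.
    """
    H, W, _ = pixel_world_coords.shape
    y = -np.ones((num_actions, H, W), dtype=np.int8)   # -1 = unknown
    for p, a, z in markers:
        dist = np.linalg.norm(pixel_world_coords - np.asarray(p), axis=-1)
        y[a][dist < delta] = 1 if z else 0
    return y
```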
1We assume reliable odometry estimates. See Supp for experiments with noisy odometry.
In our experiments, we consider two variants: one that marks interactions with a single point, and one that marks all points on the target object of the interaction. The former translates to fixed scale labels at the exact interaction location, supposing no prior knowledge about object segmentation. The latter is more general and considers the whole object as “interactable”, leading to denser labels. In both cases, the object class and valid interactions are unknown to the agent.
3.3 Policy learning architecture and transfer
Next we put together both pieces—the interaction exploration objective and the affordance segmentations—in our policy learning framework. We adopt an actor-critic policy model and a U-Net [49] architecture for affordances. At each time step, we receive the current egocentric frame x and generate its affordance maps ŷ = FA(x). The visual observations and affordance maps are encoded using a 3-layer convolutional neural network (CNN) each, and then concatenated and merged using a fully connected layer. This is then fed to a gated recurrent unit (GRU) recurrent neural network to aggregate observations over time, and finally to an actor-critic network (fully connected layers) to generate the next action distribution and value. We train this network using PPO [54] for 1M frames, with rollouts of T = 256 time steps. See Fig. 2 (left) and Supp for architecture details.
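A rough PyTorch-style sketch of this policy network is below. The layer sizes, the use of nn.LazyLinear for the merge layer, and the class name are assumptions for illustration; the authors' exact architecture is given in their supplementary material.

```python
import torch
import torch.nn as nn

class ExplorationPolicy(nn.Module):
    """Sketch of the Sec. 3.3 policy: separate CNN encoders for the RGB frame and the
    |A_I|-channel affordance map, a GRU over time, and actor-critic heads."""
    def __init__(self, num_affordances, num_actions, hidden=256):
        super().__init__()
        def cnn(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 32, 3, stride=1), nn.ReLU(), nn.Flatten())
        self.rgb_enc = cnn(3)
        self.aff_enc = cnn(num_affordances)
        self.merge = nn.LazyLinear(hidden)     # fully connected merge layer
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, num_actions)
        self.critic = nn.Linear(hidden, 1)

    def forward(self, rgb, affordance, hidden_state=None):
        feats = torch.cat([self.rgb_enc(rgb), self.aff_enc(affordance)], dim=1)
        feats = torch.relu(self.merge(feats)).unsqueeze(1)   # (B, 1, hidden)
        out, hidden_state = self.gru(feats, hidden_state)
        out = out.squeeze(1)
        return self.actor(out), self.critic(out), hidden_state
```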
We train the policy network and the segmentation model iteratively. As the agent explores, we store episodes drawn from the exploration policy, and create an affordance segmentation dataset as per Sec. 3.2. We train the affordance model using this dataset, and use the updated model to generate ŷ to further train the policy network described above. See Supp for training schedule.
The result of this process is an interaction exploration policy πE that can quickly master object interactions in new environments, as well as a visual affordance model FA, which captures where interactions will likely succeed in the current view. In addition, we show the policy transfers to better learn downstream tasks. Specifically, we freeze the weights of the policy network and FA, and fine-tune only the actor-critic linear layers using the downstream task’s reward (cf. Sec. 4.2).
4 Experiments
We evaluate agents’ ability to interact with as many objects as possible (Sec. 4.1) and enhance policy learning on downstream tasks (Sec. 4.2).
Simulation environment We experiment with AI2-iTHOR [30] (see Fig. 1), since it supports context-specific interactions that can change object states, vs. simple physics-based interactions in other 3D indoor environments [59, 8]. We use all kitchen scenes; kitchens are a valuable domain since many diverse interactions with objects are possible, as also emphasized in prior work [14, 38, 21]. The scenes contain objects from 69 classes, each of which supports 1-5 interactions. We split the 30 scenes into training (20), validation (5), and testing (5) sets. We randomize objects’ positions and states (isOpen, isToggled etc), agent start location, and camera viewpoint when sampling episodes.
Agents can both navigate: AN = {move forward, turn left/right 30◦, look up/down 15◦}, and perform interactions with objects in the center of the agent’s view: AI = {take, put, open, close, toggle-on, toggle-off, slice}. While the simulator knows what actions are valid given where the agent is, what it is holding, and what objects are nearby, all this knowledge is hidden from the agent, who only knows if an action succeeds or fails.
Baselines We compare several methods:
• RANDOM selects actions uniformly at random. RANDOM+ selects random navigation actions from AN to reach unvisited locations, then cycles through all possible object interactions in AI . • CURIOSITY [9, 43] rewards actions that lead to states the agent cannot predict well. • NOVELTY [57, 51, 7] rewards visits to new, unexplored physical locations. We augment this
baseline to cycle through all interactions upon reaching a novel location. • OBJCOVERAGE [18, 47] rewards an agent for visiting new objects (moving close to it, and
centering it in view), but not for interacting with them. We similarly augment this to cycle over all interactions. The above three are standard paradigms for exploration. See Supp for details.
Ablations We examine several variants of the proposed interaction exploration agent. All variants are rewarded for interactions with novel objects (Equation 1) and use the same architecture (Sec. 3.3).
• INTEXP(RGB) uses only the egocentric RGB frames to learn the policy, no affordance map. • INTEXP(SAL) uses RGB plus heatmaps from a pretrained saliency model [13] as input, which
highlight salient objects but are devoid of affordance cues. • INTEXP(GT) uses ground truth affordances from the simulator. • INTEXP(PT) and INTEXP(OBJ) use affordances learned on-the-fly from interaction with the en-
vironment by marking fixed sized points or whole objects, respectively (see Sec. 3.2). INTEXP(PT) is our default model for experiments unless specified.
In short, RANDOM and RANDOM+ test if a learned policy is required at all, given small and easy to navigate environments. NOVELTY, CURIOSITY, and OBJCOVERAGE test whether intelligent interaction policies fall out naturally from traditional exploration methods. Finally, the interaction exploration ablations test how influential learned visual affordances are in driving interaction discovery.
4.1 Affordance driven interaction exploration
First we evaluate how well an agent can locate and interact with all objects in a new environment.
Metrics. For each test environment, we generate 80 randomized episodes of 1024 time steps each. We create an “oracle” agent that takes the shortest path to the next closest object and performs all valid interactions with it, to gauge the maximum number of possible interactions. We report (1) Coverage: the fraction of the maximum number of interactions possible that the agent successfully performs and (2) Precision: the fraction of interactions that the agent attempted that were successful.
Interaction exploration. Fig. 3 (left) shows interaction coverage on new, unseen environments over time, averaged over all episodes and environments. See Supp for environment-specific results. Even though CURIOSITY is trained to seek hard-to-predict states, like the non-trained baselines it risks performing actions that block further interaction (e.g., opening cupboards blocks paths). RANDOM+, NOVELTY, and OBJCOVERAGE seek out new locations/objects but can only cycle through all interactions, leading to slow discovery of new interactions.
Our full model with learned affordance maps leads to the best interaction exploration policies, and discovers 1.33× more unique object interactions than the strongest baseline. Moreover, it performs these interactions quickly — it discovers the same number of interactions as RANDOM+ in 63% fewer time-steps. Our method discovers 2.5× more interactions than NOVELTY at T=256. Fig. 3 (right) shows variants of our method that use different visual priors. INT-EXP(RGB) has no explicit RoI model and performs worst. In INT-EXP(SAL), saliency helps distinguish between objects and walls/ceiling, but does not reveal what interactions are possible with salient objects as our affordance model does. INTEXP(OBJ) performs well during training — 0.236 vs. 0.252 coverage compared to INTEXP(PT) — but suffers more from noisy marker labels as it trains using whole object masks. INTEXP(PT) marks exact target locations and generalizes better to unseen environments, but yields more conservative affordance predictions (see Fig. 5).
Table 1 shows an action-wise breakdown of coverage and precision. In general, many objects can be opened/closed (drawers, fridges, kettles etc.) resulting in more instances covered for those actions. All methods rarely slice objects successfully as it requires first locating and picking up a knife (all have cov <1%). This requires multiple steps that are unlikely to occur randomly, and so is overlooked by trained agents in favor of more accessible objects/interactions. Importantly, methods that cycle through actions eventually interact with objects, leading to moderate coverage, but very low precision since they do not know how to prioritize interactions. This is further exemplified in Fig. 4. NOVELTY tends to seek out new locations, regardless of their potential for interaction, resulting in few successes (green dots) and several failed attempts (yellow dots). Our agent selectively navigates to regions with objects that have potential for interaction. See Supp for more examples.
Affordance prediction. In addition to exploration policies, our method learns an affordance model. Fig. 5 evaluates the INTEXP agents for reconstructing the ground truth affordance landscape of 23,637 uniformly sampled views from unseen test environments. We report mean average precision over all interaction classes. The ALL-ONES baseline assigns equal scores to all pixels. INTEXP(SAL) simply repeats its saliency map |AI | times as the affordance map. Other agents from Fig. 3 do not train affordance models, thus cannot be compared. Our affordance models learn maps tied to the individual actions of the exploring agent and result in the best performance.
4.2 Interaction exploration for downstream tasks
Next we fine-tune our interaction exploration agents for several downstream tasks. The tasks are (1) RETRIEVE: The agent must take any object out of a drawer/cabinet, and set it down in a visible location outside, (2) STORE: The agent must take any object from outside, put it away in a drawer/cabinet and close the door, (3) WASH: The agent must put any object inside the sink, and turn on the tap. (4) HEAT: The agent must put a pan/vessel on the stove-top, and turn on the burner.
These tasks have very sparse rewards, and require agents to successfully perform multiple interactions in sequence involving different objects. Similar tasks are studied in recent work [63, 55], which train imitation learning based agents on expert demonstrations, and report poor performance with pure RL based training [63]. Our idea is to leverage the agent’s policy for intelligent exploration to jumpstart policy learning for the new task without human demonstrations.
We reward the agent (+10) for every subgoal it achieves towards the final task (e.g., for HEAT, these are “put object on stove”, and “turn-on burner”). We fine-tune for 500k frames using PPO, and measure success rate over 400 randomized episodes from the same environments. The results in Fig. 6 (left) show the benefit of the proposed pretraining. Agents trained to be curious or cover more area (CURIOSITY and NOVELTY) are not equipped to seek out useful environment interactions, and suffer due to sparse rewards. OBJCOVERAGE benefits from being trained to visit objects, but falls short of our method, which strives for novel interactions. Our method outperforms others by large margins across all tasks, and it learns much faster than the best baseline (Fig. 6, right).
5 Conclusion
We proposed the task of “interaction exploration” and developed agents that can learn to efficiently act in new environments to prepare for downstream interaction tasks, while simultaneously building an internal model of object affordances. Future work could model more environment state in affordance prediction (e.g., what the agent is holding, or past interactions), and incorporate more complex policy architectures with spatial memory. This line of work is valuable for increasingly autonomous robots that can master new human-centric environments and provide assistance.
Acknowledgments: Thanks to Amy Bush, Kay Nettle, and the UT Systems Administration team for their help setting up experiments on the cluster. Thanks to Santhosh Ramakrishnan for helpful discussions. UT Austin is supported in part by ONR PECASE and DARPA L2M.
6 Broader Impact
Embodied agents that can explore environments in the absence of humans have broader applications in service robotics and assistive technology. Such robots could survey, and then give a quick rundown of a space for new users to, for example, alert them of appliances in a workspace, which of them are functional, and how these can be activated. It could also potentially warn users to avoid interaction with some objects if they are sharp, hot, or otherwise dangerous based on the robot’s own interactions with them.
Deploying embodied agents in human spaces comes with challenges in safety — exploration agents than can “interact with everything” to discover functionality may inadvertently damage their environment or themselves, and privacy — navigating human-centric spaces requires agents to be sensitive of people and personal belongings. Careful consideration of these issues while designing embodied agent policies is essential for deploying these agents in the real world to collaborate with people. | 1. What is the main contribution of the paper regarding robotic exploration and affordance modeling?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and relevance to human learning?
3. What are the weaknesses of the paper, especially regarding the reward function and the lack of fine-grained differentiation between actions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper proposes an approach where a robot explores an unknown environment in order to build a model of its affordances. The robot executes a policy learned through reinforcement learning, in which it is rewarded for maximizing successful novel interactions. At the same time, the agent learns an image segmentation model that predicts the affordances of regions in the image. All experimentation and validation was performed in the AI2-iTHOR environment.
Strengths
The authors correctly identify that learning the properties of an environment with exploration is an important feature of human learning, and it is one of the most promising approaches through which a robot can learn the affordances of the environment. As far as I can tell, learning a segmentation based model for affordances is novel.
Weaknesses
Equation 1: The authors do not discuss the fact that, as written, the reward function is non-Markovian: it depends on the history of states. It could be made Markovian by folding the visitation counts into the state, but then Equation (1) would no longer have the stated form. The approach essentially tries every single object in the environment and checks whether a certain action can be performed on it or not. It does not perform a fine-grained differentiation between actions: taking a knife or an apple is treated the same, and toggling the fireplace and the coffee maker is also the same action. Thus the number of affordances is very low. The paper does not really deal with the question of what the impact on the environment is if one tries every possible action on every possible object. Clearly, this looseness in the definition precludes any real-world evaluation.
NIPS | Title
Learning Affordance Landscapes for Interaction Exploration in 3D Environments
Abstract
Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen). Given an egocentric RGB-D camera and a high-level action space, the agent is rewarded for maximizing successful interactions while simultaneously training an image-based affordance segmentation model. The former yields a policy for acting efficiently in new environments to prepare for downstream interaction tasks, while the latter yields a convolutional neural network that maps image regions to the likelihood they permit each action, densifying the rewards for exploration. We demonstrate our idea with AI2-iTHOR. The results show agents can learn how to use new home environments intelligently and that it prepares them to rapidly address various downstream tasks like “find a knife and put it in the drawer.” Project page: http://vision. cs.utexas.edu/projects/interaction-exploration/
1 Introduction
The ability to interact with the environment is an essential skill for embodied agents operating in human spaces. Interaction gives agents the capacity to modify their environment, allowing them to move from semantic navigation tasks (e.g., “go to the kitchen; find the coffee cup”) towards complex tasks involving interactions with their surroundings (e.g., “heat some coffee and bring it to me”).
Today’s embodied agents are typically trained to perform specific interactions in a supervised manner. For example, an agent learns to navigate to specified objects [18], a dexterous hand learns to solve a Rubik’s cube [4], a robot learns to manipulate a rope [40]. In these cases and many others, it is known a priori what objects are relevant for the interactions and what the goal of the interaction is, whether expressed through expert demonstrations or a reward crafted to elicit the desired behavior. Despite exciting results, the resulting agents remain specialized to the target interactions and objects for which they were taught.
In contrast, we envision embodied agents that can enter a novel 3D environment, move around to encounter new objects, and autonomously discern the affordance landscape—what are the interactable objects, what actions are relevant to use them, and under what conditions will these interactions succeed? Such an agent could then enter a new kitchen (say), and be primed to address tasks like “wash my coffee cup in the sink.” These capabilities would mimic humans’ ability to efficiently discover the functionality of even unfamiliar objects through a mixture of learned visual priors and exploratory manipulation.
To this end, we introduce the exploration for interaction problem: a mobile agent in a 3D environment must autonomously discover the objects with which it can physically interact, and what actions are valid as interactions with them.
Exploring for interaction presents a challenging search problem over the product of all objects, actions, agent positions, and action histories. Furthermore, many objects are hidden (e.g., in drawers) and need to be discovered, and their interaction dynamics are not straightforward (e.g., cannot open an already opened door, can only slice an apple if a knife is picked up). In contrast, exploration for navigating a static environment involves relatively small action spaces and dynamics governed solely by the presence/absence of obstacles [12, 50, 51, 18, 11, 47].
Towards addressing these challenges, we propose a deep reinforcement learning (RL) approach in which the agent discovers the affordance landscape of a new, unmapped 3D environment. The result is a strong prior for where to explore and what interactions to try. Specifically, we consider an agent equipped with an egocentric RGB-D camera and an action space comprised of navigation and manipulation actions (turn left, open, toggle, etc.), whose effects are initially unknown to the agent. We reward the agent for quickly interacting with all objects in an environment. In parallel, we train an affordance model online to segment images according to the likelihoods for each of the agent’s actions succeeding there, using the partially observed interaction data generated by the exploration policy. The two models work in concert to functionally explore the environment. See Figure 1.
Our experiments with AI2-iTHOR [29] demonstrate the advantages of interaction exploration. Our agents can quickly seek out new objects to interact with in new environments, matching the performance of the best exploration method in 42% fewer timesteps and surpassing them to discover 1.33× more interactions when fully trained. Further, we show our agent and affordance model help train multi-step interaction policies (e.g., washing objects at a sink), improving success rates by up to 16% on various tasks, with fewer training samples, despite sparse rewards and no human demonstrations.
2 Related Work
Visual affordances An affordance is the potential for action [22]. In computer vision, visual affordances are explored in various forms: predicting where to grasp an object from images and video [31, 32, 64, 38, 19, 62, 15, 5], inferring how people might use a space [48, 39] or tool [65], and priors for human body poses [26, 52, 58, 17]. Our work offers a new perspective on learning visual affordances. Rather than learn them passively from a static dataset, the proposed agent actively seeks new affordances via exploratory interactions with a dynamic environment. Furthermore, unlike prior work, our approach yields not just an image model, but also a policy for exploring interactions, which we show accelerates learning new downstream tasks for an embodied agent.
Exploration for navigation in 3D environments Recent embodied AI work in 3D simulators [36, 56, 60, 10] tackles navigation: the agent moves intelligently in an unmapped but static environment to reach a goal (e.g., [12, 11, 36, 6]). Exploration policies for visual navigation efficiently map the environment in an unsupervised “preview” stage [12, 50, 18, 11, 47, 46]. The agent is rewarded for maximizing the area covered in its inferred occupancy map [12, 11, 18], the novelty of the states visited [51], pushing the frontier of explored areas [46], and related metrics [47]. For a game setting in VizDoom, classic frontier-based exploration is improved by learning the visual appearance of hazardous regions (e.g., enemies, lava) where the agent’s health score has previously declined [46].
In contrast to all the above, we study the problem of exploration for interaction in dynamic environments where the agent can modify the environment state (open/close doors, pick up objects etc.). Our
end goal is not to build a top-down occupancy map, but rather to quickly interact with as many objects as possible in a new environment. In other words, whereas exploration for navigation promotes rapidly completing a static environment map, exploration for interaction promotes rapidly completing the agent’s understanding of its interactions in a dynamic environment.
Interaction in 3D environments Beyond navigation, recent work leverages simulated interactionbased environments [21, 29, 55, 45] to develop agents that can also perform actions (e.g., moving objects, opening doors) with the goal of eventually translating policies to real robots [2, 1]. These tasks include answering questions (“how many apples are in the fridge?”) that may require navigation [16] as well as interaction [25]. Towards service robotics, goal driven planning [63], instruction following [55], and cooking [21] agents are trained using imitation learning on expert trajectories.
Our idea to efficiently explore interactions is complementary. Rather than learn a task-specific policy from demonstrations, our approach learns task-agnostic exploration behavior from experience to quickly discover the affordance landscape. Our model can be coupled with a downstream task like those tackled above to accelerate their training, as we demonstrate in the experiments.
Self-supervised interaction learning Prior work studies actively learning manipulation policies through self-supervised training for grasping [44, 42, 33, 35, 61], pushing/poking [3, 41] and drone control [20]. Unstructured play data has also been used to learn subgoal policies [34], which are then sampled to solve complex tasks. Object affordance models are learned for simple objects in table-top environments [23, 24] and for block pushing tasks in gridworlds [28]. We share the general idea of learning through interaction; however, we focus on high-level interaction policies requiring both navigation and manipulation (e.g., moving to the counter and picking up knife) rather than fine-grained manipulation policies (e.g., altering joint angles).
Intrinsic motivation In the absence of external rewards from the environment, reinforcement learning agents can nonetheless focus their behavior to satisfy intrinsic drives [53]. Recent work formulates intrinsic motivation based on curiosity [43, 9, 27], novelty [51, 7], and empowerment [37] to improve video game playing agents (e.g., VizDoom, Super Mario) or increase object attention [27]. Our idea can be seen as a distinct form of intrinsic motivation, where the agent is driven to experience more interactions in the environment. Also, we focus on realistic human-centered 3D environments, rather than video games, and with high-level interactions that can change object state, rather than low-level physical manipulations.
3 Approach
Our goal is to train an interaction exploration agent to enter a new, unseen environment and successfully interact with all objects present. This involves identifying the objects that are interactable, learning to navigate to them, and discovering all valid interactions with them (e.g., discovering that the agent can toggle a light switch, but not a knife).
To address the challenges of a large search space and complex interaction dynamics, our agent learns visual affordances to help it intelligently select regions of the environment to explore and interactions to try. Critically, our agent builds this affordance model through its own experience interacting with the environment during exploration. For example, by successfully opening a cupboard, the agent learns that objects with handles are likely to be “openable". Our method yields an interaction exploration policy that can quickly perform object interactions in new environments, as well as a visual affordance model that captures where each action is likely to succeed in the egocentric view.
In the following, we first define the interaction exploration task (Sec. 3.1). Then, we show how an agent can train an affordance model via interaction experience (Sec. 3.2). Finally, we present our policy learning architecture that integrates interaction exploration and affordance learning, and allows transfer to goal-driven policy learning (Sec. 3.3).
3.1 Learning exploration policies for interaction
We want to train an agent to interact with as many objects as possible in a new environment. Agents can perform actions from a set A = AN ⋃ AI , consisting of navigation actions AN (e.g., move forward, turn left/right) and object interactions AI (e.g., take/put, open/close). The interaction exploration task is set up as a partially observable Markov decision process. The agent is spawned at an initial state s0. At each time step t, the agent in state st receives an observation
(xt, θt) consisting of the RGB image xt and the agent’s odometry1 θt, executes an action at ∼ A and receives a reward rt ∼ R(st, at, st+1). A recurrent network encodes the agent’s observation history over time to arrive at the state representation. The agent is rewarded for each successful interaction with a new object ot:
R(s_t, a_t, s_{t+1}) = \begin{cases} 1 & \text{if } a_t \in A_I \text{ and } c(a_t, o_t) = 0 \\ 0 & \text{otherwise} \end{cases} \qquad (1)
where c(a, o) counts how many times interaction (a, o) has successfully occurred in the past. The goal is to learn an exploration policy πE that maximizes this reward over an episode of length T . See Sec. 3.3 for the policy architecture. The hard, count-based reward formulation only rewards the agent once per interaction, incentivizing broad coverage of interactions, rather than mastery of a few, which is useful for downstream tasks involving arbitrary interactions.
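A minimal sketch of this count-based novelty reward is given below; how the environment reports the attempted action, the target object's identifier, and the success flag is an assumption about the simulator wrapper, not something specified in the paper.

```python
# Sketch of the interaction-novelty reward in Eq. (1): +1 the first time a
# given (interaction, object) pair succeeds, 0 afterwards and for navigation.
from collections import Counter

class InteractionNoveltyReward:
    def __init__(self, interaction_actions):
        self.interaction_actions = set(interaction_actions)  # the set A_I
        self.counts = Counter()                               # c(a, o)

    def reset(self):
        self.counts.clear()

    def __call__(self, action, target_object, success):
        if action not in self.interaction_actions or not success:
            return 0.0
        reward = 1.0 if self.counts[(action, target_object)] == 0 else 0.0
        self.counts[(action, target_object)] += 1
        return reward
```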
3.2 Affordance learning via interaction exploration
As the agent explores, it attempts interactions at various locations, only some of which succeed. These attempts partially reveal the affordances of objects — what interactions are possible with them — which we capture in a visual affordance model. An explicit model of affordances helps the agent decide what regions to visit (e.g., most interactions fail at walls, so avoid them) and helps extrapolate possible interactions with unvisited objects (e.g., opening one cupboard suggests that other handles are “openable”), leading to more efficient exploration policies.
At a high level, we train an affordance segmentation model FA to transform an input RGB-D image into a |AI|-channel segmentation map, where each channel is an H×W map over the image indicating regions where a particular interaction is likely to succeed. Training samples for this model come from the agent’s interaction with the environment. For example, if it successfully picks up a kettle, pixels around that kettle are labeled “pickup-able”, and these labels are propagated to all frames where the kettle is visible (both before and after the interaction took place), such that affordances will be recognizable even from far away. See Fig. 2 (right panel).
Specifically, for a trajectory τ = {(st, at)}t=1..T ∼ πE sampled from our exploration policy, we identify time steps t1, ..., tN where interactions occur (at ∈ AI). For each interaction, the world location pt at the center of the agent’s field of view is calculated by inverse perspective projection and stored along with the interaction type at and success of the interaction zt in memory as M = {(pt, at, zt)}t=t1..tN. This corresponds to “marking” the target of the interaction. At the end of the episode, for each frame x in the trajectory, we generate a corresponding segmentation mask y that highlights the position of all markers from any action that are visible in x. For each interaction ak, the label for each pixel in the k-th segmentation mask slice yk is calculated as:
y^k_{ij} = \begin{cases} 0 & \text{if } \min_{(p,a,z) \in M_k} d(p_{ij}, p) < \delta \text{ and } z = 0 \\ 1 & \text{if } \min_{(p,a,z) \in M_k} d(p_{ij}, p) < \delta \text{ and } z = 1 \\ -1 & \text{otherwise} \end{cases} \qquad (2)
where M_k ⊆ M is the subset of markers corresponding to interaction a_k, p_{ij} is the world location at that pixel, d is Euclidean distance, and δ is a fixed distance threshold (20cm). In other words, each pixel is labeled 0 or 1 for affordance k depending on whether any marker has been placed nearby (within distance δ) at any time along the trajectory, and is visible in the current frame. If no markers are placed, the pixel is labeled −1 for unknown. See Fig. 2 (right panel). This results in a |AI| × H × W segmentation label mask per frame, which we use to train FA. These labels are sparse and noisy, as an interaction may fail with an object despite being valid in other conditions (e.g., opening an already opened cupboard). To account for this, we train two distinct segmentation heads using these labels to minimize a combination of cross entropy losses:
L(\hat{y}_A, \hat{y}_I, y) = L_{ce}(\hat{y}_A, y, \forall y_{ij} \neq -1) + L_{ce}(\hat{y}_I, \mathbb{1}[y = -1], \forall y_{ij}) \qquad (3)
where 1[·] is the indicator function over labels. Lce is standard cross entropy loss, but is evaluated over a subset of pixels specified by the third argument. Classifier output ŷA scores whether each interaction is successful at a location, while ŷI scores general interactibility (y = −1 vs. y ≠ −1). The latter acts as a measure of uncertainty to ignore regions where markers are rarely placed, regardless of success (e.g., the ceiling, windows). The final score output by FA is the product ŷ = ŷA × (1 − ŷI).
[Footnote 1] We assume reliable odometry estimates. See Supp for experiments with noisy odometry.
In our experiments, we consider two variants: one that marks interactions with a single point, and one that marks all points on the target object of the interaction. The former translates to fixed scale labels at the exact interaction location, supposing no prior knowledge about object segmentation. The latter is more general and considers the whole object as “interactable”, leading to denser labels. In both cases, the object class and valid interactions are unknown to the agent.
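To make the label construction in Eq. (2) and the loss in Eq. (3) concrete, here are two minimal sketches. The first assumes the per-pixel world coordinates of a frame, back-projected from the depth map and agent pose, are already available as an (H, W, 3) array; the second treats each of the |AI| channels as an independent binary prediction. Both are illustrative readings of the equations, not the authors' code.

```python
# Sketch of Eq. (2): per-pixel labels from the episode's interaction markers.
# `pixel_world` is an (H, W, 3) array of world coordinates for the frame (an
# assumed output of the perception pipeline); `markers` is a list of
# (world_point, interaction_index, success) tuples accumulated as M.
import numpy as np

def affordance_labels(pixel_world, markers, num_interactions, delta=0.20):
    H, W, _ = pixel_world.shape
    labels = -np.ones((num_interactions, H, W), dtype=np.int8)   # -1 = unknown
    for k in range(num_interactions):
        marks = [(np.asarray(p), z) for (p, a, z) in markers if a == k]
        if not marks:
            continue
        points = np.stack([p for p, _ in marks])                  # (M, 3)
        success = np.array([z for _, z in marks], dtype=np.int8)  # (M,)
        dists = np.linalg.norm(
            pixel_world[:, :, None, :] - points[None, None, :, :], axis=-1)
        nearest = dists.argmin(axis=-1)                           # closest marker id
        within = dists.min(axis=-1) < delta                       # within 20 cm
        labels[k][within] = success[nearest[within]]
    return labels
```

Given such labels y in {−1, 0, 1}, the two-head loss of Eq. (3) can be sketched as follows.

```python
# Sketch of Eq. (3): masked cross-entropy for the success head plus a
# "marked vs. unknown" head, and the final combined score.
import torch
import torch.nn.functional as F

def affordance_loss(y_hat_a, y_hat_i, y):
    """y_hat_a, y_hat_i: logits of shape (B, K, H, W); y: labels in {-1, 0, 1}."""
    known = (y != -1)
    loss_a = y_hat_a.new_zeros(())
    if known.any():   # success vs. failure, only where a marker was placed nearby
        loss_a = F.binary_cross_entropy_with_logits(y_hat_a[known], y[known].float())
    # interactability head: predict whether a pixel is "unknown" (y == -1)
    loss_i = F.binary_cross_entropy_with_logits(y_hat_i, (y == -1).float())
    return loss_a + loss_i

def affordance_score(y_hat_a, y_hat_i):
    # final map: success probability down-weighted by predicted "unknown-ness"
    return torch.sigmoid(y_hat_a) * (1.0 - torch.sigmoid(y_hat_i))
```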
3.3 Policy learning architecture and transfer
Next we put together both pieces—the interaction exploration objective and the affordance segmentations—in our policy learning framework. We adopt an actor-critic policy model and a U-Net [49] architecture for affordances. At each time step, we receive the current egocentric frame x and generate its affordance maps ŷ = FA(x). The visual observations and affordance maps are encoded using a 3-layer convolutional neural network (CNN) each, and then concatenated and merged using a fully connected layer. This is then fed to a gated recurrent unit (GRU) recurrent neural network to aggregate observations over time, and finally to an actor-critic network (fully connected layers) to generate the next action distribution and value. We train this network using PPO [54] for 1M frames, with rollouts of T = 256 time steps. See Fig. 2 (left) and Supp for architecture details.
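A simplified sketch of this architecture is below; the layer widths, kernel sizes, and hidden dimension are illustrative assumptions (the actual configuration is in the Supp).

```python
# Sketch of the policy network: separate 3-layer CNN encoders for the RGB frame
# and the affordance maps, a fully connected merge, a GRU over time, and
# actor-critic heads. Sizes are illustrative, not the authors' configuration.
import torch
import torch.nn as nn

def small_cnn(in_channels, out_dim=256):
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
        nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(64 * 4 * 4, out_dim), nn.ReLU(),
    )

class InteractionExplorationPolicy(nn.Module):
    def __init__(self, num_actions, num_interactions, hidden=512):
        super().__init__()
        self.rgb_enc = small_cnn(3)
        self.aff_enc = small_cnn(num_interactions)        # encodes y_hat = F_A(x)
        self.merge = nn.Sequential(nn.Linear(512, hidden), nn.ReLU())
        self.gru = nn.GRUCell(hidden, hidden)
        self.actor = nn.Linear(hidden, num_actions)
        self.critic = nn.Linear(hidden, 1)

    def forward(self, rgb, affordance, h):
        feats = torch.cat([self.rgb_enc(rgb), self.aff_enc(affordance)], dim=-1)
        h = self.gru(self.merge(feats), h)
        dist = torch.distributions.Categorical(logits=self.actor(h))
        return dist, self.critic(h), h
```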
We train the policy network and the segmentation model iteratively. As the agent explores, we store episodes drawn from the exploration policy, and create an affordance segmentation dataset as per Sec. 3.2. We train the affordance model using this dataset, and use the updated model to generate ŷ to further train the policy network described above. See Supp for training schedule.
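The alternating schedule can be summarized with the following skeleton; every collaborator is passed in as a callable because the rollout collection, PPO update, and segmentation training details live in the Supp and are assumptions here.

```python
# Skeleton of the iterative training: explore with the current policy and F_A,
# harvest marker-based labels, update the policy with PPO, and periodically
# refit the affordance model. All callables are hypothetical stand-ins.
def train_interaction_exploration(collect_rollout, ppo_update,
                                  add_affordance_labels, fit_affordance_model,
                                  total_frames=1_000_000, rollout_len=256,
                                  seg_update_every=10):
    frames, it = 0, 0
    while frames < total_frames:
        episode = collect_rollout(rollout_len)   # uses current policy and F_A
        add_affordance_labels(episode)           # markers -> segmentation labels
        ppo_update(episode)                      # improve the exploration policy
        frames += rollout_len
        it += 1
        if it % seg_update_every == 0:
            fit_affordance_model()               # refresh F_A on stored labels
```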
The result of this process is an interaction exploration policy πE that can quickly master object interactions in new environments, as well as a visual affordance model FA, which captures where interactions will likely succeed in the current view. In addition, we show the policy transfers to better learn downstream tasks. Specifically, we freeze the weights of the policy network and FA, and fine-tune only the actor-critic linear layers using the downstream task’s reward (cf. Sec. 4.2).
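For the transfer step, freezing everything except the actor-critic layers can be sketched as follows, assuming the module names from the policy sketch above.

```python
# Sketch of the transfer setup: freeze the encoders, GRU, and F_A, and
# fine-tune only the actor-critic linear layers on the downstream reward.
import itertools

def prepare_for_transfer(policy, affordance_model):
    for p in itertools.chain(policy.parameters(), affordance_model.parameters()):
        p.requires_grad = False
    trainable = []
    for head in (policy.actor, policy.critic):
        for p in head.parameters():
            p.requires_grad = True
            trainable.append(p)
    return trainable   # pass these to the PPO optimizer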
4 Experiments
We evaluate agents’ ability to interact with as many objects as possible (Sec. 4.1) and enhance policy learning on downstream tasks (Sec. 4.2).
Simulation environment We experiment with AI2-iTHOR [30] (see Fig. 1), since it supports context-specific interactions that can change object states, vs. simple physics-based interactions in other 3D indoor environments [59, 8]. We use all kitchen scenes; kitchens are a valuable domain since many diverse interactions with objects are possible, as also emphasized in prior work [14, 38, 21]. The scenes contain objects from 69 classes, each of which supports 1-5 interactions. We split the 30 scenes into training (20), validation (5), and testing (5) sets. We randomize objects’ positions and states (isOpen, isToggled etc), agent start location, and camera viewpoint when sampling episodes.
Agents can both navigate: AN = {move forward, turn left/right 30◦, look up/down 15◦}, and perform interactions with objects in the center of the agent’s view: AI = {take, put, open, close, toggle-on, toggle-off, slice}. While the simulator knows what actions are valid given where the agent is, what it is holding, and what objects are nearby, all this knowledge is hidden from the agent, who only knows if an action succeeds or fails.
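For reference, the full action set enumerated above can be written down directly; the string identifiers are our own naming for illustration, not the simulator's.

```python
# The agent's action space A = A_N ∪ A_I, as listed above.
NAVIGATION_ACTIONS = [
    "move_forward", "turn_left_30", "turn_right_30", "look_up_15", "look_down_15",
]
INTERACTION_ACTIONS = [
    "take", "put", "open", "close", "toggle_on", "toggle_off", "slice",
]
ALL_ACTIONS = NAVIGATION_ACTIONS + INTERACTION_ACTIONS   # 12 discrete actions
```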
Baselines We compare several methods:
• RANDOM selects actions uniformly at random. RANDOM+ selects random navigation actions from AN to reach unvisited locations, then cycles through all possible object interactions in AI.
• CURIOSITY [9, 43] rewards actions that lead to states the agent cannot predict well.
• NOVELTY [57, 51, 7] rewards visits to new, unexplored physical locations. We augment this baseline to cycle through all interactions upon reaching a novel location.
• OBJCOVERAGE [18, 47] rewards an agent for visiting new objects (moving close to it, and centering it in view), but not for interacting with them. We similarly augment this to cycle over all interactions.
The above three are standard paradigms for exploration. See Supp for details.
Ablations We examine several variants of the proposed interaction exploration agent. All variants are rewarded for interactions with novel objects (Equation 1) and use the same architecture (Sec. 3.3).
• INTEXP(RGB) uses only the egocentric RGB frames to learn the policy, no affordance map.
• INTEXP(SAL) uses RGB plus heatmaps from a pretrained saliency model [13] as input, which highlight salient objects but are devoid of affordance cues.
• INTEXP(GT) uses ground truth affordances from the simulator.
• INTEXP(PT) and INTEXP(OBJ) use affordances learned on-the-fly from interaction with the environment by marking fixed-size points or whole objects, respectively (see Sec. 3.2). INTEXP(PT) is our default model for experiments unless specified.
In short, RANDOM and RANDOM+ test if a learned policy is required at all, given small and easy to navigate environments. NOVELTY, CURIOSITY, and OBJCOVERAGE test whether intelligent interaction policies fall out naturally from traditional exploration methods. Finally, the interaction exploration ablations test how influential learned visual affordances are in driving interaction discovery.
4.1 Affordance driven interaction exploration
First we evaluate how well an agent can locate and interact with all objects in a new environment.
Metrics. For each test environment, we generate 80 randomized episodes of 1024 time steps each. We create an “oracle” agent that takes the shortest path to the next closest object and performs all valid interactions with it, to gauge the maximum number of possible interactions. We report (1) Coverage: the fraction of the maximum number of interactions possible that the agent successfully performs and (2) Precision: the fraction of interactions that the agent attempted that were successful.
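Both metrics can be computed from a per-episode log of attempted interactions; the (action, object, success) tuple format and the externally supplied oracle count are assumptions about the evaluation harness.

```python
# Sketch of the episode metrics: Coverage = unique successful (action, object)
# pairs relative to the oracle maximum; Precision = successful / attempted.
def interaction_metrics(attempts, oracle_max_interactions):
    """attempts: list of (action, object_id, success) tuples for one episode."""
    successes = [(a, o) for (a, o, ok) in attempts if ok]
    coverage = len(set(successes)) / max(oracle_max_interactions, 1)
    precision = len(successes) / max(len(attempts), 1)
    return coverage, precision
```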
Interaction exploration. Fig. 3 (left) shows interaction coverage on new, unseen environments over time, averaged over all episodes and environments. See Supp for environment-specific results. Even though CURIOSITY is trained to seek hard-to-predict states, like the non-trained baselines it risks performing actions that block further interaction (e.g., opening cupboards blocks paths). RANDOM+, NOVELTY, and OBJCOVERAGE seek out new locations/objects but can only cycle through all interactions, leading to slow discovery of new interactions.
Our full model with learned affordance maps leads to the best interaction exploration policies, and discovers 1.33× more unique object interactions than the strongest baseline. Moreover, it performs these interactions quickly — it discovers the same number of interactions as RANDOM+ in 63% fewer time-steps. Our method discovers 2.5× more interactions than NOVELTY at T=256. Fig. 3 (right) shows variants of our method that use different visual priors. INT-EXP(RGB) has no explicit RoI model and performs worst. In INT-EXP(SAL), saliency helps distinguish between objects and walls/ceiling, but does not reveal what interactions are possible with salient objects as our affordance model does. INTEXP(OBJ) performs well during training — 0.236 vs. 0.252 coverage compared to INTEXP(PT) — but suffers more from noisy marker labels as it trains using whole object masks. INTEXP(PT) marks exact target locations and generalizes better to unseen environments, but yields more conservative affordance predictions (see Fig. 5).
Table 1 shows an action-wise breakdown of coverage and precision. In general, many objects can be opened/closed (drawers, fridges, kettles etc.) resulting in more instances covered for those actions. All methods rarely slice objects successfully as it requires first locating and picking up a knife (all have cov <1%). This requires multiple steps that are unlikely to occur randomly, and so is overlooked by trained agents in favor of more accessible objects/interactions. Importantly, methods that cycle through actions eventually interact with objects, leading to moderate coverage, but very low precision since they do not know how to prioritize interactions. This is further exemplified in Fig. 4. NOVELTY tends to seek out new locations, regardless of their potential for interaction, resulting in few successes (green dots) and several failed attempts (yellow dots). Our agent selectively navigates to regions with objects that have potential for interaction. See Supp for more examples.
Affordance prediction. In addition to exploration policies, our method learns an affordance model. Fig. 5 evaluates the INTEXP agents for reconstructing the ground truth affordance landscape of 23,637 uniformly sampled views from unseen test environments. We report mean average precision over all interaction classes. The ALL-ONES baseline assigns equal scores to all pixels. INTEXP(SAL) simply repeats its saliency map |AI | times as the affordance map. Other agents from Fig. 3 do not train affordance models, thus cannot be compared. Our affordance models learn maps tied to the individual actions of the exploring agent and result in the best performance.
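The affordance evaluation reduces to mean average precision over the interaction classes; a sketch using scikit-learn is below, assuming dense ground-truth masks and predicted score maps are available for the sampled views.

```python
# Sketch of the affordance-map evaluation: mean average precision over
# interaction classes, flattening all pixels of all sampled views per class.
import numpy as np
from sklearn.metrics import average_precision_score

def affordance_mean_ap(pred_scores, gt_masks):
    """pred_scores, gt_masks: arrays of shape (N, K, H, W); gt_masks in {0, 1}."""
    aps = []
    for k in range(gt_masks.shape[1]):
        y_true = gt_masks[:, k].reshape(-1)
        y_score = pred_scores[:, k].reshape(-1)
        if y_true.max() == 0:        # skip classes with no positive pixels
            continue
        aps.append(average_precision_score(y_true, y_score))
    return float(np.mean(aps)) if aps else 0.0
```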
4.2 Interaction exploration for downstream tasks
Next we fine-tune our interaction exploration agents for several downstream tasks. The tasks are (1) RETRIEVE: The agent must take any object out of a drawer/cabinet, and set it down in a visible location outside, (2) STORE: The agent must take any object from outside, put it away in a drawer/cabinet and close the door, (3) WASH: The agent must put any object inside the sink, and turn on the tap. (4) HEAT: The agent must put a pan/vessel on the stove-top, and turn on the burner.
These tasks have very sparse rewards, and require agents to successfully perform multiple interactions in sequence involving different objects. Similar tasks are studied in recent work [63, 55], which train imitation learning based agents on expert demonstrations, and report poor performance with pure RL based training [63]. Our idea is to leverage the agent’s policy for intelligent exploration to jumpstart policy learning for the new task without human demonstrations.
We reward the agent (+10) for every subgoal it achieves towards the final task (e.g., for HEAT, these are “put object on stove”, and “turn-on burner”). We fine-tune for 500k frames using PPO, and measure success rate over 400 randomized episodes from the same environments. The results in Fig. 6 (left) show the benefit of the proposed pretraining. Agents trained to be curious or cover more area (CURIOSITY and NOVELTY) are not equipped to seek out useful environment interactions, and suffer due to sparse rewards. OBJCOVERAGE benefits from being trained to visit objects, but falls short of our method, which strives for novel interactions. Our method outperforms others by large margins across all tasks, and it learns much faster than the best baseline (Fig. 6, right).
5 Conclusion
We proposed the task of “interaction exploration” and developed agents that can learn to efficiently act in new environments to prepare for downstream interaction tasks, while simultaneously building an internal model of object affordances. Future work could model more environment state in affordance prediction (e.g., what the agent is holding, or past interactions), and incorporate more complex policy architectures with spatial memory. This line of work is valuable for increasingly autonomous robots that can master new human-centric environments and provide assistance.
Acknowledgments: Thanks to Amy Bush, Kay Nettle, and the UT Systems Administration team for their help setting up experiments on the cluster. Thanks to Santhosh Ramakrishnan for helpful discussions. UT Austin is supported in part by ONR PECASE and DARPA L2M.
6 Broader Impact
Embodied agents that can explore environments in the absence of humans have broader applications in service robotics and assistive technology. Such robots could survey a space and then give new users a quick rundown of it, for example alerting them to the appliances in a workspace, which of them are functional, and how they can be activated. They could also warn users to avoid interacting with objects that are sharp, hot, or otherwise dangerous, based on the robot’s own interactions with them.
Deploying embodied agents in human spaces comes with challenges in safety — exploration agents than can “interact with everything” to discover functionality may inadvertently damage their environment or themselves, and privacy — navigating human-centric spaces requires agents to be sensitive of people and personal belongings. Careful consideration of these issues while designing embodied agent policies is essential for deploying these agents in the real world to collaborate with people. | 1. What is the focus and contribution of the paper regarding learning affordances through active interactions?
2. What are the strengths of the proposed approach, particularly in its exploration policy and affordance map joint learning?
3. What are the weaknesses of the paper, especially concerning its reliance on perfect odometry and minor unclear details?
4. Do you have any questions about the experimental comparisons and ablation results presented in the paper?
5. How might the proposed method's performance change if it used noisy estimates of odometry, and what would be the impact of this modification? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper explores the important problem of learning affordances by interaction. Most previous works on learning affordances were based on manual annotations and passive approaches. In contrast, this paper explores an active approach in a dynamic environment to learn affordances. The paper proposes to learn an exploration policy and an affordance map jointly. This is a difficult search problem in the space of all objects, different types of affordances, agent locations, etc. The paper outperforms a number of baseline approaches and also provides ablation results. More interestingly, it shows the effect of pre-training using this method on a set of down-stream tasks.
Strengths
- The paper explores the interesting direction of learning affordances by interaction, which is a novel perspective compared to previous passive approaches. - The proposed approach has a been used as a pre-training step for a set of downstream tasks and shows improvement over alternative ways of pre-training. - The experiment section is comprehensive. It provides comparisons with a set of baseline approaches. It also provides a variety of ablation experiments. - The proposed approach outperforms the baselines in terms of precision and coverage metrics defined in the paper.
Weaknesses
- One of the main drawbacks of the paper is that it uses perfect odometry to compute the 3D world coordinates of the points. It would be much nicer if it used a noisy estimate of the odometry (using SLAM for example). It is interesting to see how the noise affects the results. - Some of the details are not clear: (a) In the IntExp(Obj) scenario, when the agent picks up the kettle, how does it know which pixels are pickupable? How does it know what the extent of the object is? (b) Lines 167-172 are not clear. It says "Classifier output y_A scores whether each interaction is successful at a location", while the condition for the indicator function is y=0 or y=1 (being either successful or unsuccessful). These are inconsistent. - It would be nice to provide the result of training with fully annotated images as an upper bound. I believe it is easy to obtain the annotations in THOR. |
NIPS | Title
Learning Affordance Landscapes for Interaction Exploration in 3D Environments
Abstract
Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen). Given an egocentric RGB-D camera and a high-level action space, the agent is rewarded for maximizing successful interactions while simultaneously training an image-based affordance segmentation model. The former yields a policy for acting efficiently in new environments to prepare for downstream interaction tasks, while the latter yields a convolutional neural network that maps image regions to the likelihood they permit each action, densifying the rewards for exploration. We demonstrate our idea with AI2-iTHOR. The results show agents can learn how to use new home environments intelligently and that it prepares them to rapidly address various downstream tasks like “find a knife and put it in the drawer.” Project page: http://vision. cs.utexas.edu/projects/interaction-exploration/
1 Introduction
The ability to interact with the environment is an essential skill for embodied agents operating in human spaces. Interaction gives agents the capacity to modify their environment, allowing them to move from semantic navigation tasks (e.g., “go to the kitchen; find the coffee cup”) towards complex tasks involving interactions with their surroundings (e.g., “heat some coffee and bring it to me”).
Today’s embodied agents are typically trained to perform specific interactions in a supervised manner. For example, an agent learns to navigate to specified objects [18], a dexterous hand learns to solve a Rubik’s cube [4], a robot learns to manipulate a rope [40]. In these cases and many others, it is known a priori what objects are relevant for the interactions and what the goal of the interaction is, whether expressed through expert demonstrations or a reward crafted to elicit the desired behavior. Despite exciting results, the resulting agents remain specialized to the target interactions and objects for which they were taught.
In contrast, we envision embodied agents that can enter a novel 3D environment, move around to encounter new objects, and autonomously discern the affordance landscape—what are the interactable objects, what actions are relevant to use them, and under what conditions will these interactions succeed? Such an agent could then enter a new kitchen (say), and be primed to address tasks like “wash my coffee cup in the sink.” These capabilities would mimic humans’ ability to efficiently discover the functionality of even unfamiliar objects through a mixture of learned visual priors and exploratory manipulation.
To this end, we introduce the exploration for interaction problem: a mobile agent in a 3D environment must autonomously discover the objects with which it can physically interact, and what actions are valid as interactions with them.
Exploring for interaction presents a challenging search problem over the product of all objects, actions, agent positions, and action histories. Furthermore, many objects are hidden (e.g., in drawers) and need to be discovered, and their interaction dynamics are not straightforward (e.g., cannot open an already opened door, can only slice an apple if a knife is picked up). In contrast, exploration for navigating a static environment involves relatively small action spaces and dynamics governed solely by the presence/absence of obstacles [12, 50, 51, 18, 11, 47].
Towards addressing these challenges, we propose a deep reinforcement learning (RL) approach in which the agent discovers the affordance landscape of a new, unmapped 3D environment. The result is a strong prior for where to explore and what interactions to try. Specifically, we consider an agent equipped with an egocentric RGB-D camera and an action space comprised of navigation and manipulation actions (turn left, open, toggle, etc.), whose effects are initially unknown to the agent. We reward the agent for quickly interacting with all objects in an environment. In parallel, we train an affordance model online to segment images according to the likelihoods for each of the agent’s actions succeeding there, using the partially observed interaction data generated by the exploration policy. The two models work in concert to functionally explore the environment. See Figure 1.
Our experiments with AI2-iTHOR [29] demonstrate the advantages of interaction exploration. Our agents can quickly seek out new objects to interact with in new environments, matching the performance of the best exploration method in 42% fewer timesteps and surpassing them to discover 1.33× more interactions when fully trained. Further, we show our agent and affordance model help train multi-step interaction policies (e.g., washing objects at a sink), improving success rates by up to 16% on various tasks, with fewer training samples, despite sparse rewards and no human demonstrations.
2 Related Work
Visual affordances An affordance is the potential for action [22]. In computer vision, visual affordances are explored in various forms: predicting where to grasp an object from images and video [31, 32, 64, 38, 19, 62, 15, 5], inferring how people might use a space [48, 39] or tool [65], and priors for human body poses [26, 52, 58, 17]. Our work offers a new perspective on learning visual affordances. Rather than learn them passively from a static dataset, the proposed agent actively seeks new affordances via exploratory interactions with a dynamic environment. Furthermore, unlike prior work, our approach yields not just an image model, but also a policy for exploring interactions, which we show accelerates learning new downstream tasks for an embodied agent.
Exploration for navigation in 3D environments Recent embodied AI work in 3D simulators [36, 56, 60, 10] tackles navigation: the agent moves intelligently in an unmapped but static environment to reach a goal (e.g., [12, 11, 36, 6]). Exploration policies for visual navigation efficiently map the environment in an unsupervised “preview” stage [12, 50, 18, 11, 47, 46]. The agent is rewarded for maximizing the area covered in its inferred occupancy map [12, 11, 18], the novelty of the states visited [51], pushing the frontier of explored areas [46], and related metrics [47]. For a game setting in VizDoom, classic frontier-based exploration is improved by learning the visual appearance of hazardous regions (e.g., enemies, lava) where the agent’s health score has previously declined [46].
In contrast to all the above, we study the problem of exploration for interaction in dynamic environments where the agent can modify the environment state (open/close doors, pick up objects etc.). Our
end goal is not to build a top-down occupancy map, but rather to quickly interact with as many objects as possible in a new environment. In other words, whereas exploration for navigation promotes rapidly completing a static environment map, exploration for interaction promotes rapidly completing the agent’s understanding of its interactions in a dynamic environment.
Interaction in 3D environments Beyond navigation, recent work leverages simulated interactionbased environments [21, 29, 55, 45] to develop agents that can also perform actions (e.g., moving objects, opening doors) with the goal of eventually translating policies to real robots [2, 1]. These tasks include answering questions (“how many apples are in the fridge?”) that may require navigation [16] as well as interaction [25]. Towards service robotics, goal driven planning [63], instruction following [55], and cooking [21] agents are trained using imitation learning on expert trajectories.
Our idea to efficiently explore interactions is complementary. Rather than learn a task-specific policy from demonstrations, our approach learns task-agnostic exploration behavior from experience to quickly discover the affordance landscape. Our model can be coupled with a downstream task like those tackled above to accelerate their training, as we demonstrate in the experiments.
Self-supervised interaction learning Prior work studies actively learning manipulation policies through self-supervised training for grasping [44, 42, 33, 35, 61], pushing/poking [3, 41] and drone control [20]. Unstructured play data has also been used to learn subgoal policies [34], which are then sampled to solve complex tasks. Object affordance models are learned for simple objects in table-top environments [23, 24] and for block pushing tasks in gridworlds [28]. We share the general idea of learning through interaction; however, we focus on high-level interaction policies requiring both navigation and manipulation (e.g., moving to the counter and picking up knife) rather than fine-grained manipulation policies (e.g., altering joint angles).
Intrinsic motivation In the absence of external rewards from the environment, reinforcement learning agents can nonetheless focus their behavior to satisfy intrinsic drives [53]. Recent work formulates intrinsic motivation based on curiosity [43, 9, 27], novelty [51, 7], and empowerment [37] to improve video game playing agents (e.g., VizDoom, Super Mario) or increase object attention [27]. Our idea can be seen as a distinct form of intrinsic motivation, where the agent is driven to experience more interactions in the environment. Also, we focus on realistic human-centered 3D environments, rather than video games, and with high-level interactions that can change object state, rather than low-level physical manipulations.
3 Approach
Our goal is to train an interaction exploration agent to enter a new, unseen environment and successfully interact with all objects present. This involves identifying the objects that are interactable, learning to navigate to them, and discovering all valid interactions with them (e.g., discovering that the agent can toggle a light switch, but not a knife).
To address the challenges of a large search space and complex interaction dynamics, our agent learns visual affordances to help it intelligently select regions of the environment to explore and interactions to try. Critically, our agent builds this affordance model through its own experience interacting with the environment during exploration. For example, by successfully opening a cupboard, the agent learns that objects with handles are likely to be “openable". Our method yields an interaction exploration policy that can quickly perform object interactions in new environments, as well as a visual affordance model that captures where each action is likely to succeed in the egocentric view.
In the following, we first define the interaction exploration task (Sec. 3.1). Then, we show how an agent can train an affordance model via interaction experience (Sec. 3.2). Finally, we present our policy learning architecture that integrates interaction exploration and affordance learning, and allows transfer to goal-driven policy learning (Sec. 3.3).
3.1 Learning exploration policies for interaction
We want to train an agent to interact with as many objects as possible in a new environment. Agents can perform actions from a set A = AN ⋃ AI , consisting of navigation actions AN (e.g., move forward, turn left/right) and object interactions AI (e.g., take/put, open/close). The interaction exploration task is set up as a partially observable Markov decision process. The agent is spawned at an initial state s0. At each time step t, the agent in state st receives an observation
(xt, θt) consisting of the RGB image xt and the agent’s odometry1 θt, executes an action at ∼ A and receives a reward rt ∼ R(st, at, st+1). A recurrent network encodes the agent’s observation history over time to arrive at the state representation. The agent is rewarded for each successful interaction with a new object ot:
R(s_t, a_t, s_{t+1}) = \begin{cases} 1 & \text{if } a_t \in A_I \text{ and } c(a_t, o_t) = 0 \\ 0 & \text{otherwise} \end{cases} \qquad (1)
where c(a, o) counts how many times interaction (a, o) has successfully occurred in the past. The goal is to learn an exploration policy πE that maximizes this reward over an episode of length T . See Sec. 3.3 for the policy architecture. The hard, count-based reward formulation only rewards the agent once per interaction, incentivizing broad coverage of interactions, rather than mastery of a few, which is useful for downstream tasks involving arbitrary interactions.
3.2 Affordance learning via interaction exploration
As the agent explores, it attempts interactions at various locations, only some of which succeed. These attempts partially reveal the affordances of objects — what interactions are possible with them — which we capture in a visual affordance model. An explicit model of affordances helps the agent decide what regions to visit (e.g., most interactions fail at walls, so avoid them) and helps extrapolate possible interactions with unvisited objects (e.g., opening one cupboard suggests that other handles are “openable”), leading to more efficient exploration policies.
At a high level, we train an affordance segmentation model FA to transform an input RGB-D image into a |AI|-channel segmentation map, where each channel is an H×W map over the image indicating regions where a particular interaction is likely to succeed. Training samples for this model come from the agent’s interaction with the environment. For example, if it successfully picks up a kettle, pixels around that kettle are labeled “pickup-able”, and these labels are propagated to all frames where the kettle is visible (both before and after the interaction took place), such that affordances will be recognizable even from far away. See Fig. 2 (right panel).
Specifically, for a trajectory τ = {(st, at)}t=1..T ∼ πE sampled from our exploration policy, we identify time steps t1, ..., tN where interactions occur (at ∈ AI). For each interaction, the world location pt at the center of the agent’s field of view is calculated by inverse perspective projection and stored along with the interaction type at and success of the interaction zt in memory as M = {(pt, at, zt)}t=t1..tN. This corresponds to “marking” the target of the interaction. At the end of the episode, for each frame x in the trajectory, we generate a corresponding segmentation mask y that highlights the position of all markers from any action that are visible in x. For each interaction ak, the label for each pixel in the k-th segmentation mask slice yk is calculated as:
y^k_{ij} = \begin{cases} 0 & \text{if } \min_{(p,a,z) \in M_k} d(p_{ij}, p) < \delta \text{ and } z = 0 \\ 1 & \text{if } \min_{(p,a,z) \in M_k} d(p_{ij}, p) < \delta \text{ and } z = 1 \\ -1 & \text{otherwise} \end{cases} \qquad (2)
where M_k ⊆ M is the subset of markers corresponding to interaction a_k, p_{ij} is the world location at that pixel, d is Euclidean distance, and δ is a fixed distance threshold (20cm). In other words, each pixel is labeled 0 or 1 for affordance k depending on whether any marker has been placed nearby (within distance δ) at any time along the trajectory, and is visible in the current frame. If no markers are placed, the pixel is labeled −1 for unknown. See Fig. 2 (right panel). This results in a |AI| × H × W segmentation label mask per frame, which we use to train FA. These labels are sparse and noisy, as an interaction may fail with an object despite being valid in other conditions (e.g., opening an already opened cupboard). To account for this, we train two distinct segmentation heads using these labels to minimize a combination of cross entropy losses:
L(\hat{y}_A, \hat{y}_I, y) = L_{ce}(\hat{y}_A, y, \forall y_{ij} \neq -1) + L_{ce}(\hat{y}_I, \mathbb{1}[y = -1], \forall y_{ij}) \qquad (3)
where 1[·] is the indicator function over labels. Lce is standard cross entropy loss, but is evaluated over a subset of pixels specified by the third argument. Classifier output ŷA scores whether each interaction is successful at a location, while ŷI scores general interactibility (y = −1 vs. y ≠ −1). The latter acts as a measure of uncertainty to ignore regions where markers are rarely placed, regardless of success (e.g., the ceiling, windows). The final score output by FA is the product ŷ = ŷA × (1 − ŷI).
[Footnote 1] We assume reliable odometry estimates. See Supp for experiments with noisy odometry.
In our experiments, we consider two variants: one that marks interactions with a single point, and one that marks all points on the target object of the interaction. The former translates to fixed scale labels at the exact interaction location, supposing no prior knowledge about object segmentation. The latter is more general and considers the whole object as “interactable”, leading to denser labels. In both cases, the object class and valid interactions are unknown to the agent.
3.3 Policy learning architecture and transfer
Next we put together both pieces—the interaction exploration objective and the affordance segmentations—in our policy learning framework. We adopt an actor-critic policy model and a U-Net [49] architecture for affordances. At each time step, we receive the current egocentric frame x and generate its affordance maps ŷ = FA(x). The visual observations and affordance maps are encoded using a 3-layer convolutional neural network (CNN) each, and then concatenated and merged using a fully connected layer. This is then fed to a gated recurrent unit (GRU) recurrent neural network to aggregate observations over time, and finally to an actor-critic network (fully connected layers) to generate the next action distribution and value. We train this network using PPO [54] for 1M frames, with rollouts of T = 256 time steps. See Fig. 2 (left) and Supp for architecture details.
We train the policy network and the segmentation model iteratively. As the agent explores, we store episodes drawn from the exploration policy, and create an affordance segmentation dataset as per Sec. 3.2. We train the affordance model using this dataset, and use the updated model to generate ŷ to further train the policy network described above. See Supp for training schedule.
The result of this process is an interaction exploration policy πE that can quickly master object interactions in new environments, as well as a visual affordance model FA, which captures where interactions will likely succeed in the current view. In addition, we show the policy transfers to better learn downstream tasks. Specifically, we freeze the weights of the policy network and FA, and fine-tune only the actor-critic linear layers using the downstream task’s reward (cf. Sec. 4.2).
4 Experiments
We evaluate agents’ ability to interact with as many objects as possible (Sec. 4.1) and enhance policy learning on downstream tasks (Sec. 4.2).
Simulation environment We experiment with AI2-iTHOR [30] (see Fig. 1), since it supports context-specific interactions that can change object states, vs. simple physics-based interactions in other 3D indoor environments [59, 8]. We use all kitchen scenes; kitchens are a valuable domain since many diverse interactions with objects are possible, as also emphasized in prior work [14, 38, 21]. The scenes contain objects from 69 classes, each of which supports 1-5 interactions. We split the 30 scenes into training (20), validation (5), and testing (5) sets. We randomize objects’ positions and states (isOpen, isToggled etc), agent start location, and camera viewpoint when sampling episodes.
Agents can both navigate: AN = {move forward, turn left/right 30◦, look up/down 15◦}, and perform interactions with objects in the center of the agent’s view: AI = {take, put, open, close, toggle-on, toggle-off, slice}. While the simulator knows what actions are valid given where the agent is, what it is holding, and what objects are nearby, all this knowledge is hidden from the agent, who only knows if an action succeeds or fails.
Baselines We compare several methods:
• RANDOM selects actions uniformly at random. RANDOM+ selects random navigation actions from AN to reach unvisited locations, then cycles through all possible object interactions in AI.
• CURIOSITY [9, 43] rewards actions that lead to states the agent cannot predict well.
• NOVELTY [57, 51, 7] rewards visits to new, unexplored physical locations. We augment this baseline to cycle through all interactions upon reaching a novel location.
• OBJCOVERAGE [18, 47] rewards an agent for visiting new objects (moving close to it, and centering it in view), but not for interacting with them. We similarly augment this to cycle over all interactions.
The above three are standard paradigms for exploration. See Supp for details.
Ablations We examine several variants of the proposed interaction exploration agent. All variants are rewarded for interactions with novel objects (Equation 1) and use the same architecture (Sec. 3.3).
• INTEXP(RGB) uses only the egocentric RGB frames to learn the policy, no affordance map.
• INTEXP(SAL) uses RGB plus heatmaps from a pretrained saliency model [13] as input, which highlight salient objects but are devoid of affordance cues.
• INTEXP(GT) uses ground truth affordances from the simulator.
• INTEXP(PT) and INTEXP(OBJ) use affordances learned on-the-fly from interaction with the environment by marking fixed-sized points or whole objects, respectively (see Sec. 3.2). INTEXP(PT) is our default model for experiments unless specified.
In short, RANDOM and RANDOM+ test if a learned policy is required at all, given small and easy to navigate environments. NOVELTY, CURIOSITY, and OBJCOVERAGE test whether intelligent interaction policies fall out naturally from traditional exploration methods. Finally, the interaction exploration ablations test how influential learned visual affordances are in driving interaction discovery.
4.1 Affordance driven interaction exploration
First we evaluate how well an agent can locate and interact with all objects in a new environment.
Metrics. For each test environment, we generate 80 randomized episodes of 1024 time steps each. We create an “oracle” agent that takes the shortest path to the next closest object and performs all valid interactions with it, to gauge the maximum number of possible interactions. We report (1) Coverage: the fraction of the maximum number of interactions possible that the agent successfully performs and (2) Precision: the fraction of interactions that the agent attempted that were successful.
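Concretely, both metrics can be computed from an episode log; a minimal sketch, assuming the log is a list of (object_id, action, success) tuples:

```python
# Minimal sketch of the two metrics; the log format is an assumption.
def interaction_metrics(attempts, max_possible):
    """Coverage: unique successful (object, action) pairs over the oracle maximum.
    Precision: fraction of attempted interactions that succeeded."""
    successes = {(obj, act) for obj, act, ok in attempts if ok}
    coverage = len(successes) / max_possible if max_possible else 0.0
    precision = sum(ok for _, _, ok in attempts) / len(attempts) if attempts else 0.0
    return coverage, precision

# e.g. interaction_metrics([("fridge", "open", True), ("wall", "open", False)], 40)
```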
Interaction exploration. Fig. 3 (left) shows interaction coverage on new, unseen environments over time, averaged over all episodes and environments. See Supp for environment-specific results. Even though CURIOSITY is trained to seek hard-to-predict states, like the non-trained baselines it risks performing actions that block further interaction (e.g., opening cupboards blocks paths). RANDOM+, NOVELTY, and OBJCOVERAGE seek out new locations/objects but can only cycle through all interactions, leading to slow discovery of new interactions.
Our full model with learned affordance maps leads to the best interaction exploration policies, and discovers 1.33× more unique object interactions than the strongest baseline. Moreover, it performs these interactions quickly: it discovers the same number of interactions as RANDOM+ in 63% fewer time-steps. Our method discovers 2.5× more interactions than NOVELTY at T=256. Fig. 3 (right) shows variants of our method that use different visual priors. INTEXP(RGB) has no explicit RoI model and performs worst. In INTEXP(SAL), saliency helps distinguish between objects and walls/ceiling, but does not reveal what interactions are possible with salient objects, as our affordance model does. INTEXP(OBJ) performs well during training (0.236 vs. 0.252 coverage compared to INTEXP(PT)) but suffers more from noisy marker labels as it trains using whole object masks. INTEXP(PT) marks exact target locations and generalizes better to unseen environments, but yields more conservative affordance predictions (see Fig. 5).
Table 1 shows an action-wise breakdown of coverage and precision. In general, many objects can be opened/closed (drawers, fridges, kettles etc.) resulting in more instances covered for those actions. All methods rarely slice objects successfully as it requires first locating and picking up a knife (all have cov <1%). This requires multiple steps that are unlikely to occur randomly, and so is overlooked by trained agents in favor of more accessible objects/interactions. Importantly, methods that cycle through actions eventually interact with objects, leading to moderate coverage, but very low precision since they do not know how to prioritize interactions. This is further exemplified in Fig. 4. NOVELTY tends to seek out new locations, regardless of their potential for interaction, resulting in few successes (green dots) and several failed attempts (yellow dots). Our agent selectively navigates to regions with objects that have potential for interaction. See Supp for more examples.
Affordance prediction. In addition to exploration policies, our method learns an affordance model. Fig. 5 evaluates the INTEXP agents for reconstructing the ground truth affordance landscape of 23,637 uniformly sampled views from unseen test environments. We report mean average precision over all interaction classes. The ALL-ONES baseline assigns equal scores to all pixels. INTEXP(SAL) simply repeats its saliency map |AI | times as the affordance map. Other agents from Fig. 3 do not train affordance models, thus cannot be compared. Our affordance models learn maps tied to the individual actions of the exploring agent and result in the best performance.
4.2 Interaction exploration for downstream tasks
Next we fine-tune our interaction exploration agents for several downstream tasks. The tasks are (1) RETRIEVE: The agent must take any object out of a drawer/cabinet, and set it down in a visible location outside, (2) STORE: The agent must take any object from outside, put it away in a drawer/cabinet and close the door, (3) WASH: The agent must put any object inside the sink, and turn on the tap. (4) HEAT: The agent must put a pan/vessel on the stove-top, and turn on the burner.
These tasks have very sparse rewards, and require agents to successfully perform multiple interactions in sequence involving different objects. Similar tasks are studied in recent work [63, 55], which train imitation learning based agents on expert demonstrations, and report poor performance with pure RL based training [63]. Our idea is to leverage the agent’s policy for intelligent exploration to jumpstart policy learning for the new task without human demonstrations.
We reward the agent (+10) for every subgoal it achieves towards the final task (e.g., for HEAT, these are “put object on stove”, and “turn-on burner”). We fine-tune for 500k frames using PPO, and measure success rate over 400 randomized episodes from the same environments. The results in Fig. 6 (left) show the benefit of the proposed pretraining. Agents trained to be curious or cover more area (CURIOSITY and NOVELTY) are not equipped to seek out useful environment interactions, and suffer due to sparse rewards. OBJCOVERAGE benefits from being trained to visit objects, but falls short of our method, which strives for novel interactions. Our method outperforms others by large margins across all tasks, and it learns much faster than the best baseline (Fig. 6, right).
5 Conclusion
We proposed the task of “interaction exploration” and developed agents that can learn to efficiently act in new environments to prepare for downstream interaction tasks, while simultaneously building an internal model of object affordances. Future work could model more environment state in affordance prediction (e.g., what the agent is holding, or past interactions), and incorporate more complex policy architectures with spatial memory. This line of work is valuable for increasingly autonomous robots that can master new human-centric environments and provide assistance.
Acknowledgments: Thanks to Amy Bush, Kay Nettle, and the UT Systems Administration team for their help setting up experiments on the cluster. Thanks to Santhosh Ramakrishnan for helpful discussions. UT Austin is supported in part by ONR PECASE and DARPA L2M.
6 Broader Impact
Embodied agents that can explore environments in the absence of humans have broad applications in service robotics and assistive technology. Such robots could survey a space and then give new users a quick rundown of it, for example alerting them to the appliances in a workspace, which of them are functional, and how they can be activated. They could also potentially warn users to avoid interacting with objects that are sharp, hot, or otherwise dangerous, based on the robot’s own interactions with them.
Deploying embodied agents in human spaces comes with challenges in safety (exploration agents that can “interact with everything” to discover functionality may inadvertently damage their environment or themselves) and privacy (navigating human-centric spaces requires agents to be sensitive to people and personal belongings). Careful consideration of these issues while designing embodied agent policies is essential for deploying these agents in the real world to collaborate with people. | 1. What is the primary contribution of the paper regarding indoor embodied agents?
2. What are the strengths of the proposed approach, particularly in terms of technical soundness and clarity?
3. What are the weaknesses of the paper, especially regarding multi-task learning and evaluation gaps?
4. How does the reviewer suggest improving the sufficiency of the evaluation?
5. Do you have any additional questions or suggestions for the author to enhance the paper's quality? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper presents an exploration strategy for indoor embodied agents. Essentially, it leverages an auxiliary 2D affordance map segmentation task on top of the main RL problem and feeds the predicted affordance map as an extra input to the policy network. Experiments on exploration in the AI2-iTHOR simulator demonstrate its effectiveness over heuristic-based counterparts on the proposed interaction coverage and precision metrics.
Strengths
+ The paper is overall clearly written and easy to follow.
+ I can't find any technical issues within the main methodology. The proposed method is technically sound.
+ The baseline comparisons are sufficient and cover a broad range of SOTA exploration methods, especially for embodied agents.
Weaknesses
- Some technical details deserve more elaboration, especially regarding the multi-task learning. The main idea of this paper is to train an RL agent simultaneously with an affordance map segmentation network, yet because these are essentially treated as two orthogonal objectives, the learning procedure as a whole is still unclear to me. The authors should provide more details on how the training proceeds, such as whether the two tasks are trained concurrently or alternately; if their learning processes are not synchronous, how the ratio of learning iterations for each task is chosen; and how these extra parameters affect performance (in an additional ablation study). A comprehensive loss function and pseudocode would be preferred.
- There are still some gaps to be filled in the evaluation to make it more sufficient. To name a few: a) The selected metrics are only evaluated over a limited range. There are only curves over time (training steps) for convergence, while the success rates should also be evaluated this way rather than as final quantities only, since it could be insightful to see how the proposed method improves interaction skills. If it performs as expected (and likewise for the counterparts), there should be a low success rate at the beginning (as it tends to interact more with objects) but faster improvement than the other methods. b) The authors demonstrate how some downstream RL tasks benefit from their proposed method and compare with the seemingly strongest baseline (obj coverage). Given the overall quality of their contribution, more evaluation effort should be included here. I would like to see some endeavor to extend this part in the following direction: * Combine the proposed method with other exploration strategies. I feel most of the considered baselines focus less on interaction than on navigation, which seems somewhat opposite to what this paper specifically works on, so it may be more interesting to see results on how the proposed method can mitigate their drawbacks rather than simply contrasting with them on interaction-oriented tasks. This would also further verify the main motivation of this paper: an efficient solution of exploration for interactions. Nevertheless, I feel the selected downstream tasks could also be more challenging, say with a significant need for both navigation and interaction.
NIPS | Title
Neural Regression, Representational Similarity, Model Zoology & Neural Taskonomy at Scale in Rodent Visual Cortex
Abstract
How well do deep neural networks fare as models of mouse visual cortex? A majority of research to date suggests results far more mixed than those produced in the modeling of primate visual cortex. Here, we perform a large-scale benchmarking of dozens of deep neural network models in mouse visual cortex with both representational similarity analysis and neural regression. Using the Allen Brain Observatory’s 2-photon calcium-imaging dataset of activity in over 6,000 reliable rodent visual cortical neurons recorded in response to natural scenes, we replicate previous findings and resolve previous discrepancies, ultimately demonstrating that modern neural networks can in fact be used to explain activity in the mouse visual cortex to a more reasonable degree than previously suggested. Using our benchmark as an atlas, we offer preliminary answers to overarching questions about levels of analysis (e.g. do models that better predict the representations of individual neurons also predict representational similarity across neural populations?); questions about the properties of models that best predict the visual system overall (e.g. is convolution or category-supervision necessary to better predict neural activity?); and questions about the mapping between biological and artificial representations (e.g. does the information processing hierarchy in deep nets match the anatomical hierarchy of mouse visual cortex?). Along the way, we catalogue a number of models (including vision transformers, MLP-Mixers, normalization free networks, Taskonomy encoders and self-supervised models) outside the traditional circuit of convolutional object recognition. Taken together, our results provide a reference point for future ventures in the deep neural network modeling of mouse visual cortex, hinting at novel combinations of mapping method, architecture, and task to more fully characterize the computational motifs of visual representation in a species so central to neuroscience, but with a perceptual physiology and ecology markedly different from the ones we study in primates.
*Correspondence: conwell@g.harvard.edu; Project Website: github.com/ColinConwell/DeepMouseTrap
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
1 Introduction
To date, the most successful models of biological visual cortex are object-recognizing deep neural networks applied to the prediction of neural activity in primate visual cortex [1–5]. Corresponding to the biology not only at the level of individual layers, but across the feature hierarchy, these models are so powerful they can now effectively be used as neural controllers, synthesizing stimuli that drive neural activity far beyond the range evoked by any handmade experimental stimulus [6]. The correspondence of these same models to mouse visual cortex, on the other hand, has proven a bit more tenuous [7, 8], with a recent finding even suggesting that randomly initialized networks are as predictive of rodent visual cortical activity as trained ones [9].
Often implicit in interpretations of these results is the notion that the visual milieu and machinery of mice is simply different – something characterized more, perhaps, by brute force predator avoidance and the ‘flexible random associations’ thought to define senses like olfaction [10] than by the sophisticated active sampling and representational compositionality enabled by primate central vision. And yet, mice do recognize objects [11, 12] – and do engage in other sophisticated visual behaviors [13] that suggest they must have visual solutions that at least functionally approximate the kinds of solutions learned by modern computer vision algorithms. If these models perform well in monkeys, but not in mice, are we overfitting to an artifact? Are the object recognition capabilities of mice simply the byproduct of a representational competence learned through other (even more behaviorally relevant) tasks? Have mice perhaps converged on solutions to visual problems that fundamentally differ from the solutions that undergird the emergent similarity between monkeys and machines? To even begin to answer these questions, we need substantially more comprehensive modeling statistics than we currently have. Our main goal in this work was to provide exactly that – to re-examine at large scale the state of neural network modeling in the visual cortices of mice, using many thousands of neurons, over 110 distinct neural network models, and two methods of mapping models to brain.
We summarize the statistics from our benchmarking survey in five main results:
1. Training matters. The randomly initialized variants of some convolutional architectures fare well when predicting individual neural responses, but representational similarity is always better captured by features learned in service of some task. (Segmentation seems best.)
2. Features of intermediate complexity dominate in the prediction of all cortical sites, but both our mapping methods do demonstrate an upwards gradient in complexity from primary visual cortex onwards that roughly matches the information processing hierarchy proposed elsewhere in the rodent neurophysiology literature.
3. Taskonomic tools that have previously been shown to approximate functional organization in primates fail to strongly differentiate anatomical regions in mice, with the same kinds of tasks dominant across multiple, distinct neural sites.
4. When aggregated in similar ways, representational similarity and neural regression methods capture similar trends in the kinds of feature spaces that best predict the biology.
5. While still far from the overall noise ceiling for this highly reliable neural data, a variety of the artificial deep net models in our survey make predictions only slightly less accurate than ‘biological conspecific models’ composed of the neurons from other mice.
2 Methods
2.1 Neural Dataset
For neural data, we use the Allen Brain Observatory Visual Coding1 dataset [14] collected with two-photon calcium-imaging from the visual cortex of 256 awake adult transgenic mice and consisting of approximately 59,610 unique, individual neurons. Calcium-imaging fluorescence patterns are preprocessed and deconvolved by the Allen Institute2. The neurons sampled include neurons from 6 visual cortical areas at 4 cortical depths across 12 genetic Cre lines. The visual experiments recorded activity for both artificial images (e.g., diffraction gratings) and 118 natural scenes. We analyze only the latter to ensure comparable inputs to what is typically used in the training of deep nets. Each natural scene is displayed 50 times over the course of an assay.
1Available with a non-commercial license under the Allen Institute terms of use: http://www.alleninstitute.org/legal/terms-use/
2More details are available in the whitepapers released with the observatory data: http://observatory.brain-map.org/visualcoding/transgenic
To ensure an optimal signal to noise ratio, we perform a significant amount of subsetting on the full neural population, beginning by subsetting only excitatory neurons. Recent analyses suggest neural activity throughout mouse visual cortex is often impacted by extraneous, external body movements [15]. For this reason, we subsequently filter out any neurons whose peak responses to the presentation of natural scene images are significantly modulated by the mouse’s running speed, using an ANOVA metric provided by the Allen Institute. We further subselect neurons by assessing their split-half reliability across trials (with each split-half constituting 25 of 50 presentations for each image), keeping only those neurons exhibiting 0.8 reliability and above. This thresholding still leaves 6619 neurons for analysis, is in line with prior work on primates, and supports, for example, the construction of cortical representational dissimilarity matrices (RDMs) with split-half reliabilities as high as 0.93. (More details on the relationship between our metrics and neural reliability, including visualizations of some of our results across many degrees of thresholding, can be found in A.4 of the Appendix.)
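For reference, the split-half screen can be sketched as below; the array layout and the use of repeated random splits are our assumptions, not necessarily the exact procedure used.

```python
# Sketch of the split-half reliability screen; assumes `responses` has shape
# (n_neurons, n_images, n_trials) with n_trials = 50.
import numpy as np

def split_half_reliability(responses, n_splits=100, seed=0):
    rng = np.random.default_rng(seed)
    n_neurons, n_images, n_trials = responses.shape
    rels = np.zeros((n_splits, n_neurons))
    for s in range(n_splits):
        perm = rng.permutation(n_trials)
        half_a = responses[:, :, perm[: n_trials // 2]].mean(axis=2)
        half_b = responses[:, :, perm[n_trials // 2:]].mean(axis=2)
        for n in range(n_neurons):
            rels[s, n] = np.corrcoef(half_a[n], half_b[n])[0, 1]
    return rels.mean(axis=0)   # optionally Spearman-Brown corrected: 2r / (1 + r)

# keep = split_half_reliability(responses) >= 0.8
```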
2.2 Model Zoology
To explore the influence of model architecture on predictive performance, we use 26 model architectures from the Torchvision (PyTorch) model zoo [16] and 65 model architectures from the Timm [17] model zoo [18–52]. These models include convolutional networks, vision transformers, normalization-free networks and MLP-Mixer models. For each of these models, we extract the features from one trained and one randomly initialized variant (using whatever initialization scheme the model authors deemed best) so as to better disentangle what training on object recognition affords us in terms of predictive power.
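Pulling matched trained and untrained variants from these zoos is a one-liner per model; a simplified sketch (the real survey extracts activations from every distinct layer rather than the stage outputs shown here, and 'resnet50' is just an example name):

```python
# Simplified sketch; 'resnet50' is just an example model name from the zoos.
import torch
import timm
import torchvision.models as tvm

trained   = timm.create_model("resnet50", pretrained=True,  features_only=True)
untrained = timm.create_model("resnet50", pretrained=False, features_only=True)
tv_trained = tvm.resnet50(pretrained=True)   # torchvision equivalent

# features_only=True returns one feature map per stage; these serve as candidate
# "layers" whose activations get mapped to the neural data downstream.
with torch.no_grad():
    feats = trained(torch.randn(1, 3, 224, 224))   # list of tensors, one per stage
```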
2.3 Neural Taskonomy
Model zoology provides decent perspective on the computations related to object recognition, but the responsibilities of the visual cortex (no matter the species) extend far beyond identifying the category of an object. To probe a wider range of tasks, we turn to Taskonomy: a single architecture trained on 24 different common computer vision tasks [53], ranging from autoencoding to edge detection. The model weights we use are from updated PyTorch implementations of the original Tensorflow models [54]. Key to the engineering of Taskonomy is the use of an encoder-decoder design in which only the construction of the decoder varies across tasks. While recent analyses using a similar approach in human visual cortex with fMRI data [55] have tended to focus only on the latent space of each task’s encoder, we extract representations across all layers, better situating Taskonomy within the same empirical paradigm that has so far defined the modeling of object recognition in the primate brain. For further clarity, we cluster the 24 tasks according to their ‘Taskonomic’ category — a total of 5 clusters (2D, 3D, semantic, geometric or other) that we further collapse into 4 clusters (lumping the only member of the ‘other’ category — a denoising autoencoder — in with its closest cousin — a vanilla autoencoder in the ‘2D’ category). These purely data-driven clusters are derived from estimates of how effectively a set of features learned for one task transfer to (or boost the performance in) another task [53]. Use of the Taskonomy models provides a unique opportunity to test variance in training regimes without the confound of simultaneous changes in architecture.
2.4 Self-Supervised Models
Full category supervision, while robust in its ability to build representations that transfer well to a variety of tasks, suffers in its neuroscientific relevance as an ethologically plausible mode of learning. Recently, self-supervised models have begun to provide viable alternatives to the representations learned by category-supervised models in both computer vision [56, 57] and neural mapping [58, 59]. Here, we assess 22 self-supervision models from the VISSL model zoo [60], ranging from earlier iterations (e.g. DeepCluster [61]) to modern contrastive learning algorithms (e.g. BarlowTwins and Dino [62–65]). We use these models to assess whether category-supervision, however powerful it is in predicting neural activity, might eventually be supplanted by these more realistic alternatives. 14 of these models have as their base architecture a standard ResNet50; 8 are built atop vision transformers.
2.5 Comparing Representations across Biological & Artificial Networks
Two methods predominate in the comparison of neural recordings to deep neural networks: at the most abstract level, one of these compares representational geometries computed across the activations
of many individual neurons [66, 67]; the other attempts to predict the activity of individual neurons directly [67, 68]. Both of these techniques are grounded in the use of image-computable models and a shared stimulus set, but differ in the types of transformation applied to the neural activity generated by those stimuli. Given the difference in both target (neural populations versus individual neurons) and transforms (correlation matrices versus dimensionality reduction) we attempt a variant of each type of analysis here, comparing the two directly on the exact same neural data, with the same models and the same stimulus set, and in a granular, layer-by-layer fashion. (A more comprehensive review of neural mapping methods is provided in Section A.2 of the Appendix.)
2.5.1 Representational Similarity Analysis
To compare the representational geometries of a given model to the representational geometries of the brain, we begin by computing classic representational dissimilarity matrices (RDMs) [69]. We compute these RDMs by calculating the pairwise correlation coefficients between the neural response vectors for each image (one for each of the 6 cortical areas surveyed). We then repeat this procedure for the artificial networks, aggregating the responses of the artificial neurons in a given layer, before aggregating them once more into a correlation matrix. We then measure the relationship between the RDMs computed from the biological and artificial networks with a second-order Pearson correlation between the flattened upper triangles of each. The resultant coefficient constitutes the score for how well a given model layer predicts the representational similarity of a given cortical area.
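In code, the RSA score for one model layer and one cortical area reduces to a few lines; a sketch assuming trial-averaged response matrices with images as rows:

```python
# RSA sketch: second-order Pearson correlation between flattened RDM upper triangles.
# `brain` is (n_images, n_neurons); `layer` is (n_images, n_units).
import numpy as np

def rdm(responses):
    return 1.0 - np.corrcoef(responses)          # pairwise dissimilarity across images

def rsa_score(brain, layer):
    iu = np.triu_indices(brain.shape[0], k=1)    # upper triangle, excluding the diagonal
    return np.corrcoef(rdm(brain)[iu], rdm(layer)[iu])[0, 1]
```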
2.5.2 Neural Regression (Encoding Models)
To more directly compare the biological and artificial neural activations in our data, we use a style of regression made popular in the modeling of primate visual cortex, epitomized by BrainScore [4]. Variants of this approach abound, but most consist of extracting model activations, performing dimensionality reduction, and then some form of cross-validated penalized or principal components regression. The dimensionality-reduced feature spaces of the model are used as the regressors of the activation patterns in a given neuron. After testing a number of these variants, we settled on sparse random projection for dimensionality reduction (which proved far more computationally efficient than standard PCA, without sacrifice in terms of regression scores), followed by ridge regression (in place of the more frequently used partial least squares regression).
The details of our method (programmed with [70]) are as follows: Given a network, we first extract a predetermined number of sparse random projections (4096, in this case) from the features of each layer — in line with the Johnson-Lindenstrauss lemma for the number of observations (images shown to the mice) in our data set 3. After extracting these projections, we regress them on the activity of each individual neuron using ridge regression (with a default lambda penalty of 1.0). The use of a penalized regression in this case allows us to monopolize generalized cross-validation (a linear algebraic form of leave-one-out cross-validation), yielding a set of predictions for the activity of each neuron for each image4. We then compute the Pearson correlation between the predicted and actual activity for each neuron to obtain a score per neuron per model layer, which we then aggregate by taking the mean of scores per neuron across cortical area.
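A compact sketch of this pipeline is given below, using the dual form of ridge to obtain the leave-one-out predictions in closed form; the 4096 projections and the default penalty of 1.0 follow the description above, while the rest is our own simplification.

```python
# Sketch of the SRP-Ridge encoding score for one model layer.
# X: (n_images, n_features) layer activations; Y: (n_images, n_neurons) responses.
import numpy as np
from sklearn.random_projection import SparseRandomProjection

def srp_ridge_scores(X, Y, n_projections=4096, lam=1.0, seed=0):
    Z = SparseRandomProjection(n_components=n_projections, dense_output=True,
                               random_state=seed).fit_transform(X)
    Z = Z - Z.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Dual-form ridge "hat" matrix: H = K (K + lam * I)^-1 with K = Z Z^T.
    K = Z @ Z.T
    H = K @ np.linalg.inv(K + lam * np.eye(K.shape[0]))
    fitted = H @ Y
    h = np.diag(H)[:, None]
    loo_pred = (fitted - h * Y) / (1.0 - h)      # closed-form leave-one-out predictions
    # Pearson r between predicted and actual activity, one score per neuron.
    loo_c = loo_pred - loo_pred.mean(axis=0)
    num = (loo_c * Y).sum(axis=0)
    den = np.linalg.norm(loo_c, axis=0) * np.linalg.norm(Y, axis=0)
    return num / den     # average these within a cortical area for the reported score
```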
We verify the efficacy of this method on the publicly available benchmarks of primate BrainScore, where (relative to BrainScore’s in-house regression method) we demonstrate provisional gains not only in terms of predictive score (sometimes up to r = 34%), but also in terms of speed and computational efficiency. (Details may be found in Section A.1 of the Appendix.)
2.6 Model Rankings
To rank the models according to how well they predict the variance in a given cortical area, we take the max across layers. In effect, this requires that a model ‘commit’ only one layer to the prediction of each area. In the case of our neural regression metric we call these scores the ‘SRP-Ridge Max’; in the case of our representational similarity metric we call these scores the ‘RSA Max’. A final mean taken over the SRP-Ridge Max and RSA Max scores per model per cortical area yields our overall model rankings, which serve as the basis for the bulk of our analyses.
3Note that in cases where the dimensionality of features is less than the number of projections suggested by the lemma, sparse random projections will actually upsample the feature space, rather than downsample it.
4The use of generalized cross-validation is particularly beneficial in datasets with fewer probe images, where k-fold cross-validation means losing a significant degree of information in each fit.
2.7 Non-Neural Network Baselines
Prior to the ascendancy of neural network models, a significant amount of time and craft was invested in the hand-engineering of features to simultaneously facilitate image recognition and capture meaningful subsets of neural variance. In this work, we test how well a small subset of those features are able to explain the variance in rodent visual cortex, using both our neural encoding and representational similarity metrics. Our non-neural network baselines consist of random fourier features [71] (computed specifically to match the dimensionality of our neural network predictors), handcrafted gabor filters and GIST (spatial envelope) descriptors [72].
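As one concrete example, the random Fourier feature baseline can be generated in a few lines; the bandwidth and the flattened-pixel input are our own assumptions.

```python
# Sketch of the random Fourier feature baseline (4096 features per image); the
# resulting matrix is scored with the same ridge and RSA pipelines as the deep nets.
import numpy as np

def random_fourier_features(X, n_features=4096, bandwidth=1.0, seed=0):
    # X: (n_images, n_pixels) flattened images.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / bandwidth, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```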
3 Results
3.1 How do trained models compare to randomly initialized models?
Previous work in the deep neural network modeling of mouse visual cortex found that a randomly initialized VGG16 predicted neural responses as well as, if not slightly better than, a VGG16 trained on ImageNet [9], suggesting that the neural predictivity of the features produced by a trained object recognition model are perhaps no better than the features produced by a randomly initialized one. Our results, on the other hand, suggest that the neural predictivity of trained versus randomly initialized models more generally depends on both the particular model being tested and the particular method used to produce the mappings between model and brain.
At the level of individual neurons (neural regression), 17 of the 91 model architectures we tested had randomly initialized variants that either matched or outperformed their ImageNet-trained counterparts. Replicating previous findings, we found these 17 architectures to include VGG16, as well as all 3 other VGG variants (11, 13 & 19), AlexNet, the DenseNet architectures (121, 169, 201), and almost all of the normalization-free architectures. Despite this, a paired t-test of the difference in scores across all models demonstrates that ImageNet-trained architectures are still overall more performant than their randomly initialized counterparts (Student’s t = 7.74, p = 1.37e-11, Hedges’ ĝ = 0.81). At the level of emergent representational similarity (RSA), ImageNet-trained models categorically outperform their randomly initialized counterparts, and by a large margin (Student’s t = 22.66, p = 5.81e-39, Hedges’ ĝ = 2.36). Taken together, these results strongly affirm that training matters, and that randomly initialized features can only go so far in the prediction of meaningful neural variance. Differences between ImageNet-trained and randomly initialized models are shown in Figure 1.
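For reference, the trained-versus-random comparisons above amount to a paired t-test plus a standardized effect size; a sketch of that computation (the exact effect-size convention used in the paper may differ):

```python
# Sketch: paired t-test and a Hedges'-corrected effect size over per-model scores.
import numpy as np
from scipy import stats

def paired_comparison(trained_scores, random_scores):
    trained = np.asarray(trained_scores)
    random_ = np.asarray(random_scores)
    t, p = stats.ttest_rel(trained, random_)
    diff = trained - random_
    d = diff.mean() / diff.std(ddof=1)            # Cohen's d on the paired differences
    df = len(diff) - 1
    g = d * (1.0 - 3.0 / (4.0 * df - 1.0))        # Hedges' small-sample correction
    return t, p, g
```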
3.2 What kinds of architectures best predict rodent visual cortex?
The overall best architecture for predicting mouse visual cortex across both individual neurons (SRP-Ridge) and population-level representation (RSA) was an Inception-ResNet hybrid (InceptionResNet-V2). There is a small, positive correlation between the depth of a model (the number of distinct layers) and its score on both the RSA Max metric and the SRP-Ridge metric (Spearman’s r = 0.22, p = 0.001 and r = 0.192, p = 0.007, respectively), and a small, negative correlation between the total number of trainable parameters and the RSA Max score (Spearman’s r = -0.18, p = 0.007). The latter of these is most likely driven by the relatively poor performance of parameter-dense architectures like VGG.
Markedly, trends previously noted in macaques [73] fail to materialize here. In particular, models with higher top-1 accuracies on ImageNet do not perform significantly better than models with lower top-1 accuracies. This relative parity is driven in large part it seems by newer models like EfficientNets, which across the board have dominant scores on ImageNet, but sometimes middling or poor scores in the predictions of rodent visual cortex we’ve tabulated here.
Compared to all other architectures, transformers on average fare slightly worse in the RSA Max metric (Student’s t = 3.96, p = 0.004, Hedges’ ĝ = 0.913), but moderately better in the SRP-Ridge Max metric (Student’s t = 2.45, p = 0.023, Hedges’ ĝ = 0.633). Strikingly, transformers and MLP-Mixers boast the largest differences between ImageNet-trained and randomly initialized variants in the SRP-Ridge Max metric, with all pairwise t-tests significant at alpha = 0.05 after Bonferroni correction for multiple comparisons. This strongly suggests that the advantage of those randomly initialized variants that matched or outperformed their ImageNet-trained counterparts is an advantage conferred by properties of convolutional architectures (e.g., translation invariance), and not necessarily an advantage shared across random feature spaces writ large. (The rankings of these and other architectures may be found in Figure 8 in the Appendix).
3.3 What kinds of tasks best predict rodent visual cortex?
The overall best Taskonomy encoder across both the RSA and SRP-Ridge Max is 2D segmentation (ranking second and first respectively; see Figure 9 in the Appendix). At the level of individual neurons (SRP-Ridge), 2D tasks (keypoints, autoencoding, inpainting) dominate. At the level of representational similarity (RSA), all 2D tasks but 2D segmentation fall to the bottom of the rankings, and Semantic tasks (object recognition and semantic segmentation) rise to 2nd and 3rd place.
This reshifting in rank presents a curious case for interpretation, suggesting most likely that while the representations of individual neurons may be coordinated more by the lower level, less abstract features necessary for performing well on most 2D tasks, the overall neural population codes are coordinated more by the parsing of the visual input into ethologically and spatially relevant units via the segmentation and classification tasks. Notably, the original research from which these PyTorch models were adopted offers an auxiliary data point that may anchor this interpretation more concretely. The top 3 models in our RSA Max metric (2D segmentation, object classification, semantic segmentation) are likewise in the top 5 of a ranking the original researchers produced by pitting the Taskonomy encoders against one another as pretrained ‘perceptual systems’ for reinforcement learning agents learning to navigate a virtual environment (see [54], figure 13 in the appendix). This raises the possibility that the reason these models are optimally predicting the visual neural population code for mice is simply because that code is coordinated in service of navigation.
3.4 How do category-supervised models compare to self-supervised models?
Whether with ResNet50 as their base, or a vision transformer, self-supervised models seem to be verging closer and closer to the predictive power of their category-supervised counterparts. Our most predictive self-supervised ResNet50, for example, (a MocoV2 model) effectively matches its category-supervised counterpart in the SRP-Ridge Max metric (with scores of .182 and .184, respectively), while slightly outperforming its category-supervised counterpart in the RSA Max metric (with scores of .422 and .415, respectively). While this single comparison by no means denotes a statistically significant superiority of self-supervised models (which would require training multiple iterations of each), it does begin to provide preliminary evidence for parity.
3.5 How well do non-neural network baselines predict rodent visual cortex?
Non-neural network baselines somewhat uniformly fail to predict neural activity as accurately as deep net features (though see Section A.10 of the Appendix for a counterexample). We tested three baselines: 1) a bank of Gabor filters applied to 8x8 grids of each image; 2) the PCs of the resultant feature matrices (i.e. the Gist descriptors [72]); and 3) the max across 600 iterations of 4096 random Fourier features (a dimensionality matching that of our SRPs). Ridge regressed with generalized cross-validation, these feature models yield average scores of 0.07, 0.06 and -0.014, respectively. Compared via representational similarity, they yield average scores of 0.20, 0.25 and 0.011.
3.6 How ‘deep’ are the layers that best predict rodent visual cortex?
Echoing previous results [7, 8], we find across all ImageNet-trained architectures, regardless of metric, that the features most predictive of rodent visual cortex are found about a third of the way into the model (though see Section A.5 of the Appendix for some caveats). These early to intermediate visual features go beyond basic edge detection but are far from the highly abstracted representations adjacent to final fully connected layers. Across Taskonomy encoders, 2D & Geometric tasks yield their best features in earlier layers; 3D & Semantic tasks yield their best features in more intermediate and later layers. Note that these aggregate motifs do not preclude subtler differences across cortical area, which we discuss in the section below.
3.7 Are there differences in model predictions across cortical area?
In this work, we address this question from two perspectives: that of hierarchy and that of function.
In primate visual cortex, it is common consensus that there exists a distinct information processing hierarchy along the ventral visual stream [74–76], with posterior sites like V1 and V3 defined by features like oriented edge detectors, and more anterior sites like V4 and IT defined by more complex morphologies. While there continues to be some debate as to whether a similar hierarchy exists in rodent visual cortex, a large body of anatomical, functional and physiological work [77–83] has coalesced around a meaningful hierarchy that consists first of a ventral / dorsal split after primary visual cortex (VISp), with VISp leading to VISl in the ventral stream and VISp leading to VISrl - VISal - VISpm - VISam in the dorsal stream. Strikingly, our modeling does seem to provide corresponding evidence for this circuit in the form of a data-driven hierarchy produced purely by taking the median depths of the model layers that best predict the neural activity in each of these cortical areas, and assessing for difference across them. A nonparametric ANOVA shows an overall difference in depth across cortical area to be significant for both our SRP-Ridge metric (Friedman’s χ² = 34.08, p = 2.29e-06, Kendall’s Ŵ = 0.04) and our RSA metric (Friedman’s χ² = 37.05, p = 5.86e-07, Kendall’s Ŵ = 0.06). Subsequent pairwise comparisons show many of the differences that underlie this group-level effect to be differences between earlier and later layers of the information processing hierarchy established in the literature. (For further details, see Figure 2.)
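The hierarchy test itself is compact; a sketch assuming a (models × areas) matrix of the relative depths of each model's best-predicting layer:

```python
# Sketch: does the depth of the best-predicting layer differ across cortical areas?
import numpy as np
from scipy import stats

def depth_hierarchy_test(depths):
    # depths: (n_models, n_areas) relative depth of each model's best layer per area.
    chi2, p = stats.friedmanchisquare(*depths.T)   # one sample of depths per area
    n, k = depths.shape
    kendalls_w = chi2 / (n * (k - 1))              # Kendall's W effect size
    return chi2, p, kendalls_w
```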
Other differences across cortical area that we might expect are differences driven by function. Research into primate visual cortex over the last two decades has unveiled a significant degree of functional organization over and above purely anatomical organization [84–86], with distinct subregions defined in large part by their differential activity in response to different kinds of stimuli. To try and replicate this in mouse visual cortex we search for Taskonomic organization, a proxy of functional organization wherein distinct neural sites are better or worse predicted by the features from different Taskonomy encoders. Curiously, and in contrast to previous findings in human fMRI [55], it seems to be the case that the scores of different Taskonomic clusters are relatively consistent across cortical area (see Figure 3). This suggests that mouse visual cortex may be more functionally (or Taskonomically) homogeneous than primate visual cortex, with anatomical descriptors providing little to no cue of functional difference – though this seems unlikely given other analyses we’ve performed showing greater similarities of neurons within cortical site than between cortical site (see Section A.11 for details). Another (more likely) alternative is that the tasks of computer vision are just not so neatly mapped onto the tasks of biological vision in mice.
3.8 How do the predictions compare across RSA and neural regression?
While prior work has addressed this question theoretically [87], it’s rarely the case that representational similarity and neural regression are compared directly and empirically. Here, we compare our RSA and SRP-Ridge metric both at the level of overall rankings (taking the max across layers) and at
the level of individual layers, the latter of which provides a much more detailed assessment of how different feature spaces map to cortical representation.
In terms of overall rankings, the Spearman rank order correlation between the two methods is either 0.56 (p = 8.36 × 10^-19) or 0.59 (p = 3.17 × 10^-12), depending on whether you include or exclude the randomly initialized architectures. In terms of layer by layer comparisons, we decompose the Spearman correlation across distinct combinations of model and cortical area. The average coefficient between the two methods, along with bootstrapped 95% confidence intervals is 0.468 [0.447,0.489] or 0.626 [0.613,0.639], again depending on the inclusion or exclusion of the random models. This suggests a significant degree of overlap between the kinds of features that optimally predict the representations of both individual neurons and neural populations. Of course, the averages here do obscure some meaningful subtrends and idiosyncrasies. For details, see Figure 4.
3.9 How well are we doing overall in predicting mouse visual cortex?
The overall best model in any cortical area across either of our metrics is unsupervised 2D segmentation in anterolateral visual area (VISal), with an RSA Max score of 0.538. The (Spearman-Brown) splithalf reliability of the RDM for this area (an effective proxy of its explainable variance) is 0.89. This means our most predictive model in any cortical area across any metric is little more than halfway to the noise ceiling.
Of course, it’s possible this noise ceiling is a bit too strict. Instead of requiring the model to predict the neural data as well as the neural data predicts itself, another possible target to which we might recalibrate is the relative performance we would expect if (instead of an artificial neural network) we used the responses of another biological network as the model to predict neural activity. Inspired by recent work [88], and to better contextualize the scores of our SRP-Ridge metric, we attempted a version of this here. To compute this reference, we proceeded again neuron by neuron using the exact same neural regression method (dimensionality reduction and hyperparameters) described in Section 2.5.2, but instead of using the responses of a deep net layer as the predictors in our ridge regression, we used the responses of the neurons from the same cortical area in all other mice (conspecifics) across the donor sample. Conceptually, this ‘intermouse score’ represents how well we might do if our model of a given mouse brain were other mouse brains.
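Schematically, the intermouse reference reuses the same ridge and leave-one-out machinery, with conspecific neurons in place of deep net features; a sketch using scikit-learn for brevity (the projection step is omitted here):

```python
# Sketch of the intermouse reference score for a single target neuron.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def intermouse_score(target_neuron, conspecific_responses, lam=1.0):
    # target_neuron: (n_images,); conspecific_responses: (n_images, n_other_neurons)
    # responses of same-area neurons from all *other* mice in the donor sample.
    pred = cross_val_predict(Ridge(alpha=lam), conspecific_responses,
                             target_neuron, cv=LeaveOneOut())
    return np.corrcoef(pred, target_neuron)[0, 1]
```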
Averaging across both cortical area and model, the average distance (with 95% bootstrapped confidence intervals) between the best performing deep net feature spaces and the mean of the intermouse scores (expressed in the same units of Pearson’s r we’ve used heretofore) is 0.0985 [0.0940, 0.103]. Compare this to the same distance computed relative to the split-half reliability: 0.728 [0.726, 0.731]. On average, then, while our artificial models are capturing only a fraction of the total explainable variance relative to the split-half noise ceiling, they’re verging increasingly close to the predictive threshold suggested by the reweighting of biological neurons from the same species. The performance of models relative to the intermouse score may be seen in the lower half of Figure 3.
4 Discussion
Our intent with this work was to provide a preliminary atlas for future ventures into the deep neural network modeling of rodent visual cortex. To this end, we have deliberately invested in introspective analyses of the tools we used, as well as the curation of deep neural networks we hope will provide informative waypoints. Obviously, the atlas is far from complete. Other model classes like recurrent models [89, 90], equivariant models [91], and robotic models (e.g. for visual odometry [92]) are promising candidates for inclusion in future benchmarks, and our neural encoding & representational similarity metrics are just two of many variants.
Nevertheless, the results we have presented here indicate that neural recordings from the visual brains of mice can (with care and caution) be compared to deep neural networks using many of the same tools we’ve used to better characterize the visual brains of monkeys. Having as reference two animal models that occupy very different ecological niches and are separated by tens of millions of
years of evolution makes it far more likely that insights into vision gleaned across both are actually fundamental to perceptual meaning-making and not just some idiosyncratic quirk specific to any one evolutionary trajectory. Primate and rodent vision do differ rather drastically, even in fairly basic ways: mice lack a fovea, have a retina dominated by rods for vision under low light, and spatial acuity less than 20/1000 [93], making their primary visual system more akin to the primate peripheral system – and making it all the more curious that the same models explain decent amounts of variance in both. The differences between the species, it seems, may not be so irreconcilable at the level of modeling, but only with future work more carefully controlling for distinct aspects of each organism’s unique physiology (see Section A.5) can more concrete conclusions of this kind be made.
Beyond considerations of distinctive physiology is the indispensable point that perceptual systems should always be considered in service of behavior. It’s possible that mice mostly rely on vision as a sort of broad bandpass filter for lower-frequency, dynamic stimuli the animal can then flee, fight, or further investigate with its whiskers — perhaps its most sophisticated sensory organ [94]. Another possibility is that mice use vision to facilitate navigation. The dominance in our Taskonomy results of 2D segmentation, object recognition and semantic segmentation (all tasks that have elsewhere been shown to provide effective, transferable features for the simulation of robotic navigation) provide some evidence for this. Of course, the behavioral roles of rodent vision may very well be manifold. Understanding this plurality in a readily available model species could in the end be key for bridging the gaps that remain between biological and computer vision [95]. The unparalleled access, resolution, and control afforded by rodent neuroimaging have already revolutionized our understanding of the relationship between perceptual representation and behavioral output. Combined with novel methods like the embedding of neural networks in virtual agents [96] in ecologically realistic environments, this kind of data may well provide a testbed for better situating the tasks of computer vision in the broader behavioral context of agentic scene understanding.
In summary, only novel combinations of architecture, task and mapping will help to explain the highly reliable neural variance we’ve yet to explain in our current survey. Already this recombination is under way: Shi et al. [97] have created a custom CNN designed specifically to match (processing stage by processing stage) the anatomy of rodent visual cortex, while Nayebi et al. [88] have combined the power of self-supervised learning with smaller, shallower architectures to more fully account for the ethological realities of rodent behavior and the differences in computational bandwidth that shape and constrain their visual systems. More work of this variety will be necessary to more fully model the rich diversity and fiendish complexity of biological brains at scale – even the very smallest ones.
4.1 Acknowledgements
We thank Martin Schrimpf, Tiago Marques, Jim DiCarlo, as well as many others on the BrainScore team for helpful discussion, feedback, and inspiration. We would also like to thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support.
4.2 Code Availability
More results and code for the replication of our analysis may be found at this GitHub repository: github.com/ColinConwell/DeepMouseTrap (License GPL v2)
4.3 Compute Required
We used a single machine with 8 Nvidia RTX 3090 GPUs, 755gb of RAM, and 96 CPUs. GPUs were used only for extracting model activations, and could (without major slowdown) be removed from the analytic pipeline. Dimensionality reduction and regression computations were CPU and RAM intensive. Replicating all of our results would take approximately two weeks on a similar machine.
4.4 Ethics Statement
Lest our science forget the life that powers it, we must note that behind the phenomenal dataset provided by the Allen Institute are 256 laboratory mice, each of which was subjected to multiple surgeries, a highly invasive neuroimaging technique and genetic engineering. The moral parameters of this particular praxis of neuroscience are contentious, and not without reason. While we believe centralized, comprehensive and (most importantly) public datasets like those provided by the Allen Institute may actually decrease the total number of laboratory animals required for similar kinds of empirical projects, we acknowledge with solemnity the cost to life required.
4.5 Funding Statement
This work was supported by the Center for Brains, Minds and Machines, NSF STC award 1231216, the MIT CSAIL Systems that Learn Initiative, the CBMM-Siemens Graduate Fellowship, the MITIBM Watson AI Lab, the DARPA Artificial Social Intelligence for Successful Teams (ASIST) program, the United States Air Force Research Laboratory and United States Air Force Artificial Intelligence Accelerator under Cooperative Agreement Number FA8750-19-2-1000, and the Office of Naval Research under Award Number N00014-20-1-2589 and Award Number N00014- 20-1-2643. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. | 1. What is the main contribution of the paper regarding the comparison of neural network models and rodent visual cortex?
2. What are the strengths of the proposed benchmark and evaluation metrics?
3. How does the reviewer assess the comprehensiveness and novelty of the paper's content?
4. Are there any questions or concerns regarding the paper's methodology, results, or conclusions? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a benchmark for the comparison of neural network models of vision to rodent visual cortex. The benchmark compares models with various architectures (including various CNNs, transformers, and other architectures) and one model trained on different tasks to a large database of neural recordings from mouse visual cortex. It relies on newly proposed metrics that are validated on a previous benchmark. These metrics evaluate both the fit to individual neurons and to the entire population of neurons. Previous results are replicated and explained in the context of a large-scale study. The paper provides analysis of the results as well as predictions on the function and structure of the rodent visual cortex areas.
Review
The paper presents a benchmark, new evaluation metrics, a large-scale evaluation of vision models trained on various tasks as models of rodent visual cortex, many analyses of the results, and interesting interpretations of these results.
Strengths:
The benchmark is extensive: it explores a large number of models and tasks compared to previous work in the field.
They propose new metrics for neural regression which are faster and computationally inexpensive. Furthermore, the two metrics evaluate different aspects of the neural code (fitting individual neuron activity and fitting neural populations).
Previous results on randomly initialized models are replicated and extended to other architectures, which offers a new perspective on how inductive bias contributes to the scores.
The layer-by-layer and area-by-area analyses offer many interesting hypotheses and predictions on rodent visual cortex.
The paper is clear, thorough, well detailed and well explained. The code is provided for reproducing the results.
The authors discuss the paper's limitations, provide detailed descriptions of the computational resources they used and acknowledge the ethical concerns behind their work. |
NIPS | Title
Neural Regression, Representational Similarity, Model Zoology & Neural Taskonomy at Scale in Rodent Visual Cortex
Abstract
How well do deep neural networks fare as models of mouse visual cortex? A majority of research to date suggests results far more mixed than those produced in the modeling of primate visual cortex. Here, we perform a large-scale benchmarking of dozens of deep neural network models in mouse visual cortex with both representational similarity analysis and neural regression. Using the Allen Brain Observatory’s 2-photon calcium-imaging dataset of activity in over 6,000 reliable rodent visual cortical neurons recorded in response to natural scenes, we replicate previous findings and resolve previous discrepancies, ultimately demonstrating that modern neural networks can in fact be used to explain activity in the mouse visual cortex to a more reasonable degree than previously suggested. Using our benchmark as an atlas, we offer preliminary answers to overarching questions about levels of analysis (e.g. do models that better predict the representations of individual neurons also predict representational similarity across neural populations?); questions about the properties of models that best predict the visual system overall (e.g. is convolution or category-supervision necessary to better predict neural activity?); and questions about the mapping between biological and artificial representations (e.g. does the information processing hierarchy in deep nets match the anatomical hierarchy of mouse visual cortex?). Along the way, we catalogue a number of models (including vision transformers, MLP-Mixers, normalization free networks, Taskonomy encoders and self-supervised models) outside the traditional circuit of convolutional object recognition. Taken together, our results provide a reference point for future ventures in the deep neural network modeling of mouse visual cortex, hinting at novel combinations of mapping method, architecture, and task to more fully characterize the computational motifs of visual representation in a species so central to neuroscience, but with a perceptual physiology and ecology markedly different from the ones we study in primates.
*Correspondence: conwell@g.harvard.edu; Project Website: github.com/ColinConwell/DeepMouseTrap
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
1 Introduction
To date, the most successful models of biological visual cortex are object-recognizing deep neural networks applied to the prediction of neural activity in primate visual cortex [1–5]. Corresponding to the biology not only at the level of individual layers, but across the feature hierarchy, these models are so powerful they can now effectively be used as neural controllers, synthesizing stimuli that drive neural activity far beyond the range evoked by any handmade experimental stimulus [6]. The correspondence of these same models to mouse visual cortex, on the other hand, has proven a bit more tenuous [7, 8], with a recent finding even suggesting that randomly initialized networks are as predictive of rodent visual cortical activity as trained ones [9].
Often implicit in interpretations of these results is the notion that the visual milieu and machinery of mice is simply different – something characterized more, perhaps, by brute force predator avoidance and the ‘flexible random associations’ thought to define senses like olfaction [10] than by the sophisticated active sampling and representational compositionality enabled by primate central vision. And yet, mice do recognize objects [11, 12] – and do engage in other sophisticated visual behaviors [13] that suggest they must have visual solutions that at least functionally approximate the kinds of solutions learned by modern computer vision algorithms. If these models perform well in monkeys, but not in mice, are we overfitting to an artifact? Are the object recognition capabilities of mice simply the byproduct of a representational competence learned through other (even more behaviorally relevant) tasks? Have mice perhaps converged on solutions to visual problems that fundamentally differ from the solutions that undergird the emergent similarity between monkeys and machines? To even begin to answer these questions, we need substantially more comprehensive modeling statistics than we currently have. Our main goal in this work was to provide exactly that – to re-examine at large scale the state of neural network modeling in the visual cortices of mice, using many thousands of neurons, over 110 distinct neural network models, and two methods of mapping models to brain.
We summarize the statistics from our benchmarking survey in five main results:
1. Training matters. The randomly initialized variants of some convolutional architectures fare well when predicting individual neural responses, but representational similarity is always better captured by features learned in service of some task. (Segmentation seems best.)
2. Features of intermediate complexity dominate in the prediction of all cortical sites, but both our mapping methods do demonstrate an upwards gradient in complexity from primary visual cortex onwards that roughly matches the information processing hierarchy proposed elsewhere in the rodent neurophysiology literature.
3. Taskonomic tools that have previously been shown to approximate functional organization in primates fail to strongly differentiate anatomical regions in mice, with the same kinds of tasks dominant across multiple, distinct neural sites.
4. When aggregated in similar ways, representational similarity and neural regression methods capture similar trends in the kinds of feature spaces that best predict the biology.
5. While still far from the overall noise ceiling for this highly reliable neural data, a variety of the artificial deep net models in our survey make predictions only slightly less accurate than ‘biological conspecific models’ composed of the neurons from other mice.
2 Methods
2.1 Neural Dataset
For neural data, we use the Allen Brain Observatory Visual Coding1 dataset [14] collected with two-photon calcium-imaging from the visual cortex of 256 awake adult transgenic mice and consisting of approximately 59,610 unique, individual neurons. Calcium-imaging fluorescence patterns are preprocessed and deconvolved by the Allen Institute2. The neurons sampled include neurons from 6 visual cortical areas at 4 cortical depths across 12 genetic Cre lines. The visual experiments recorded activity for both artificial images (e.g., diffraction gratings) and 118 natural scenes. We analyze only
1Available with a non-commercial license under the Allen Institute terms of use: http://www.alleninstitute.org/legal/terms-use/
2More details are available in the whitepapers released with the observatory data: http://observatory.brain-map.org/visualcoding/transgenic
the latter to ensure comparable inputs to what is typically used in the training of deep nets. Each natural scene is displayed 50 times over the course of an assay.
To ensure an optimal signal-to-noise ratio, we perform a significant amount of subsetting on the full neural population, beginning by subsetting only excitatory neurons. Recent analyses suggest neural activity throughout mouse visual cortex is often impacted by extraneous, external body movements [15]. For this reason, we subsequently filter out any neurons whose peak responses to the presentation of natural scene images are significantly modulated by the mouse’s running speed, using an ANOVA metric provided by the Allen Institute. We further subselect neurons by assessing their split-half reliability across trials (with each split-half constituting 25 of 50 presentations for each image), keeping only those neurons exhibiting a reliability of 0.8 and above. This thresholding still leaves 6619 neurons for analysis, is in line with prior work on primates, and supports, for example, the construction of cortical representational dissimilarity matrices (RDMs) with split-half reliabilities as high as 0.93. (More details on the relationship between our metrics and neural reliability, including visualizations of some of our results across many degrees of thresholding, can be found in A.4 of the Appendix.)
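To make the reliability screen above concrete, the following is a minimal sketch of a split-half reliability computation, assuming a trial-wise response array of shape (neurons, images, trials); the array, function name, and number of random splits are illustrative rather than taken from the paper's code.

```python
import numpy as np

def split_half_reliability(responses, n_splits=100, seed=0):
    """Mean correlation between half-averaged response profiles per neuron.

    responses: array of shape (n_neurons, n_images, n_trials),
               e.g. trial-wise responses to the 118 natural scenes.
    """
    rng = np.random.default_rng(seed)
    n_neurons, n_images, n_trials = responses.shape
    reliabilities = np.zeros((n_splits, n_neurons))
    for s in range(n_splits):
        order = rng.permutation(n_trials)
        half_a = responses[:, :, order[:n_trials // 2]].mean(axis=2)
        half_b = responses[:, :, order[n_trials // 2:]].mean(axis=2)
        for n in range(n_neurons):
            reliabilities[s, n] = np.corrcoef(half_a[n], half_b[n])[0, 1]
    return reliabilities.mean(axis=0)

# Keep only neurons at or above the 0.8 split-half reliability threshold.
responses = np.random.rand(200, 118, 50)            # placeholder data
reliable_mask = split_half_reliability(responses) >= 0.8
```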
2.2 Model Zoology
To explore the influence of model architecture on predictive performance, we use 26 model architectures from the Torchvision (PyTorch) model zoo [16] and 65 model architectures from the Timm [17] model zoo [18–52]. These models include convolutional networks, vision transformers, normalization-free networks and MLP-Mixer models. For each of these models, we extract the features from one trained and one randomly initialized variant (using whatever initialization scheme the model authors deemed best) so as to better disentangle what training on object recognition affords us in terms of predictive power.
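As a rough illustration of how trained and randomly initialized variants can be instantiated and probed layer by layer, the sketch below uses timm and PyTorch forward hooks; the architecture name and image tensor are placeholders, and this is not necessarily the authors' extraction code.

```python
import timm
import torch

def get_model(name, trained=True):
    # pretrained=False yields the randomly initialized variant of the
    # same architecture, using its default initialization scheme.
    return timm.create_model(name, pretrained=trained).eval()

def layer_activations(model, images):
    """Collect activations from every leaf module via forward hooks."""
    features, hooks = {}, []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:    # leaf layers only
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, key=name: features.__setitem__(key, out.detach())))
    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()
    return features

images = torch.randn(8, 3, 224, 224)                # stand-in for the 118 scenes
trained_feats = layer_activations(get_model('resnet50', trained=True), images)
random_feats = layer_activations(get_model('resnet50', trained=False), images)
```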
2.3 Neural Taskonomy
Model zoology provides decent perspective on the computations related to object recognition, but the responsibilities of the visual cortex (no matter the species) extend far beyond identifying the category of an object. To probe a wider range of tasks, we turn to Taskonomy: a single architecture trained on 24 different common computer vision tasks [53], ranging from autoencoding to edge detection. The model weights we use are from updated PyTorch implementations of the original Tensorflow models [54]. Key to the engineering of Taskonomy is the use of an encoder-decoder design in which only the construction of the decoder varies across tasks. While recent analyses using a similar approach in human visual cortex with fMRI data [55] have tended to focus only on the latent space of each task’s encoder, we extract representations across all layers, better situating Taskonomy within the same empirical paradigm that has so far defined the modeling of object recognition in the primate brain. For further clarity, we cluster the 24 tasks according to their ‘Taskonomic’ category — a total of 5 clusters (2D, 3D, semantic, geometric or other) that we further collapse into 4 clusters (lumping the only member of the ‘other’ category — a denoising autoencoder — in with its closest cousin — a vanilla autoencoder in the ‘2D’ category). These purely data-driven clusters are derived from estimates of how effectively a set of features learned for one task transfer to (or boost the performance in) another task [53]. Use of the Taskonomy models provides a unique opportunity to test variance in training regimes without the confound of simultaneous changes in architecture.
2.4 Self-Supervised Models
Full category supervision, while robust in its ability to build representations that transfer well to a variety of tasks, suffers in its neuroscientific relevance as an ethologically plausible mode of learning. Recently, self-supervised models have begun to provide viable alternatives to the representations learned by category-supervised models in both computer vision [56, 57] and neural mapping [58, 59]. Here, we assess 22 self-supervision models from the VISSL model zoo [60], ranging from earlier iterations (e.g. DeepCluster [61]) to modern contrastive learning algorithms (e.g. BarlowTwins and Dino [62–65]). We use these models to assess whether category-supervision, however powerful it is in predicting neural activity, might eventually be supplanted by these more realistic alternatives. 14 of these models have as their base architecture a standard ResNet50; 8 are built atop vision transformers.
2.5 Comparing Representations across Biological & Artificial Networks
Two methods predominate in the comparison of neural recordings to deep neural networks: at the most abstract level, one of these compares representational geometries computed across the activations
of many individual neurons [66, 67]; the other attempts to predict the activity of individual neurons directly [67, 68]. Both of these techniques are grounded in the use of image-computable models and a shared stimulus set, but differ in the types of transformation applied to the neural activity generated by those stimuli. Given the difference in both target (neural populations versus individual neurons) and transforms (correlation matrices versus dimensionality reduction) we attempt a variant of each type of analysis here, comparing the two directly on the exact same neural data, with the same models and the same stimulus set, and in a granular, layer-by-layer fashion. (A more comprehensive review of neural mapping methods is provided in Section A.2 of the Appendix.)
2.5.1 Representational Similarity Analysis
To compare the representational geometries of a given model to the representational geometries of the brain, we begin by computing classic representational dissimilarity matrices (RDMs) [69]. We compute these RDMs by calculating the pairwise correlation coefficients between the neural response vectors for each image (one for each of the 6 cortical areas surveyed). We then repeat this procedure for the artificial networks, aggregating the responses of the artificial neurons in a given layer, before aggregating them once more into a correlation matrix. We then measure the relationship between the RDMs computed from the biological and artificial networks with a second-order Pearson correlation between the flattened upper triangles of each. The resultant coefficient constitutes the score for how well a given model layer predicts the representational similarity of a given cortical area.
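A minimal NumPy sketch of the RDM construction and second-order comparison just described; the shapes are illustrative (118 images by some number of biological or artificial units), and the use of 1 minus correlation as the dissimilarity follows the classic convention rather than the paper's exact code.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 minus the Pearson correlation
    between the response vectors of every pair of images.
    responses: array of shape (n_images, n_units)."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Second-order Pearson correlation between the flattened
    upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

brain_rdm = rdm(np.random.rand(118, 600))     # one RDM per cortical area
layer_rdm = rdm(np.random.rand(118, 4096))    # one RDM per model layer
print(rsa_score(brain_rdm, layer_rdm))
```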
2.5.2 Neural Regression (Encoding Models)
To more directly compare the biological and artificial neural activations in our data, we use a style of regression made popular in the modeling of primate visual cortex, epitomized by BrainScore [4]. Variants of this approach abound, but most consist of extracting model activations, performing dimensionality reduction, and then some form of cross-validated penalized or principal components regression. The dimensionality-reduced feature spaces of the model are used as the regressors of the activation patterns in a given neuron. After testing a number of these variants, we settled on sparse random projection for dimensionality reduction (which proved far more computationally efficient than standard PCA, without sacrifice in terms of regression scores), followed by ridge regression (in place of the more frequently used partial least squares regression).
The details of our method (programmed with [70]) are as follows: Given a network, we first extract a predetermined number of sparse random projections (4096, in this case) from the features of each layer — in line with the Johnson-Lindenstrauss lemma for the number of observations (images shown to the mice) in our data set 3. After extracting these projections, we regress them on the activity of each individual neuron using ridge regression (with a default lambda penalty of 1.0). The use of a penalized regression in this case allows us to monopolize generalized cross-validation (a linear algebraic form of leave-one-out cross-validation), yielding a set of predictions for the activity of each neuron for each image4. We then compute the Pearson correlation between the predicted and actual activity for each neuron to obtain a score per neuron per model layer, which we then aggregate by taking the mean of scores per neuron across cortical area.
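The scikit-learn sketch below captures the shape of this pipeline; as a simplification it uses k-fold cross_val_predict in place of the generalized (leave-one-out) cross-validation described above, and the array shapes and toy call are illustrative.

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def srp_ridge_scores(layer_features, neural_responses, n_components=4096, alpha=1.0):
    """Score one model layer against every neuron.

    layer_features: (n_images, n_features) activations, flattened per image.
    neural_responses: (n_images, n_neurons) trial-averaged responses.
    Returns one Pearson r per neuron between held-out predictions and data.
    """
    X = SparseRandomProjection(n_components=n_components,
                               random_state=0).fit_transform(layer_features)
    # k-fold stand-in for the generalized (leave-one-out) cross-validation
    # used in the paper; Ridge handles all neurons at once (multi-output y).
    predictions = cross_val_predict(Ridge(alpha=alpha), X, neural_responses, cv=5)
    return np.array([np.corrcoef(predictions[:, n], neural_responses[:, n])[0, 1]
                     for n in range(neural_responses.shape[1])])

# Toy call: 118 images, flattened activations, 50 neurons; fewer projections
# than the paper's 4096 just to keep the example quick.
scores = srp_ridge_scores(np.random.rand(118, 25088),
                          np.random.rand(118, 50), n_components=1024)
```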
We verify the efficacy of this method on the publicly available benchmarks of primate BrainScore, where (relative to BrainScore’s in-house regression method) we demonstrate provisional gains not only in terms of predictive score (sometimes up to r = 34%), but also in terms of speed and computational efficiency. (Details may be found in Section A.1 of the Appendix.)
2.6 Model Rankings
To rank the models according to how well they predict the variance in a given cortical area, we take the max across layers. In effect, this requires that a model ‘commit’ only one layer to the prediction of each area. In the case of our neural regression metric we call these scores the ‘SRP-Ridge Max’; in the case of our representational similarity metric we call these scores the ‘RSA Max’. A final mean taken over the SRP-Ridge Max and RSA Max scores per model per cortical area yields our overall model rankings, which serve as the basis for the bulk of our analyses.
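A small pandas sketch of this max-over-layers, mean-over-metrics aggregation; the column names and toy scores are purely illustrative.

```python
import pandas as pd

# Hypothetical long-format results: one row per (model, layer, area, metric).
results = pd.DataFrame({
    'model':  ['resnet50'] * 4 + ['vgg16'] * 4,
    'layer':  ['layer1', 'layer4'] * 4,
    'area':   ['VISp'] * 8,
    'metric': ['srp_ridge', 'srp_ridge', 'rsa', 'rsa'] * 2,
    'score':  [0.12, 0.18, 0.30, 0.41, 0.10, 0.15, 0.28, 0.33],
})

# 'Commit' one layer per model, area, and metric: the max across layers.
max_scores = (results.groupby(['model', 'area', 'metric'])['score']
                     .max().reset_index())

# Overall ranking: mean of the SRP-Ridge Max and RSA Max scores per model and area.
overall = (max_scores.groupby(['model', 'area'])['score']
                     .mean().sort_values(ascending=False))
print(overall)
```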
3Note that in cases where the dimensionality of features is less than the number of projections suggested by the lemma, sparse random projections will actually upsample the feature space, rather than downsample it.
4The use of generalized cross-validation is particularly beneficial in datasets with fewer probe images, where k-fold cross-validation means losing a significant degree of information in each fit.
2.7 Non-Neural Network Baselines
Prior to the ascendancy of neural network models, a significant amount of time and craft was invested in the hand-engineering of features to simultaneously facilitate image recognition and capture meaningful subsets of neural variance. In this work, we test how well a small subset of those features are able to explain the variance in rodent visual cortex, using both our neural encoding and representational similarity metrics. Our non-neural network baselines consist of random fourier features [71] (computed specifically to match the dimensionality of our neural network predictors), handcrafted gabor filters and GIST (spatial envelope) descriptors [72].
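For concreteness, one common way to build random Fourier features at the stated dimensionality is scikit-learn's RBFSampler, as in the sketch below; the image dimensions, gamma, and the use of raw pixels as input are assumptions, not the authors' exact construction.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler

# Flattened image pixels as the raw input space (dimensions are illustrative).
images = np.random.rand(118, 64 * 64)

# 4096 random Fourier features, matching the dimensionality of the sparse
# random projections used for the deep-net predictors.
rff = RBFSampler(n_components=4096, gamma=1.0, random_state=0)
fourier_features = rff.fit_transform(images)        # shape: (118, 4096)
```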
3 Results
3.1 How do trained models compare to randomly initialized models?
Previous work in the deep neural network modeling of mouse visual cortex found that a randomly initialized VGG16 predicted neural responses as well as, if not slightly better than, a VGG16 trained on ImageNet [9], suggesting that the neural predictivity of the features produced by a trained object recognition model is perhaps no better than that of the features produced by a randomly initialized one. Our results, on the other hand, suggest that the neural predictivity of trained versus randomly initialized models more generally depends on both the particular model being tested and the particular method used to produce the mappings between model and brain.
At the level of individual neurons (neural regression), 17 of the 91 model architectures we tested had randomly initialized variants that either matched or outperformed their ImageNet-trained counterparts. Replicating previous findings, we found these 17 architectures to include VGG16, as well as all 3 other VGG variants (11, 13 & 19), AlexNet, the DenseNet architectures (121, 169, 201), and almost all of the normalization-free architectures. Despite this, a paired t-test of the difference in scores across all models demonstrates that ImageNet-trained architectures are still overall more performant than their randomly initialized counterparts (Student’s t = 7.74, p = 1.37e-11, Hedges’ g = 0.81). At the level of emergent representational similarity (RSA), ImageNet-trained models categorically outperform their randomly initialized counterparts, and by a large margin (Student’s t = 22.66, p = 5.81e-39, Hedges’ g = 2.36). Taken together, these results strongly affirm that training matters, and that randomly initialized features can only go so far in the prediction of meaningful neural variance. Differences between ImageNet-trained and randomly initialized models are shown in Figure 1.
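The trained-versus-random comparison above boils down to a paired t-test with a standardized effect size; a sketch using SciPy follows, where the Hedges' g computation standardizes by the SD of the paired differences (one common convention) and the score vectors are placeholders.

```python
import numpy as np
from scipy import stats

def hedges_g_paired(a, b):
    """Bias-corrected standardized mean difference for paired scores."""
    diff = np.asarray(a) - np.asarray(b)
    d = diff.mean() / diff.std(ddof=1)
    correction = 1 - 3 / (4 * len(diff) - 5)     # small-sample correction
    return d * correction

trained_scores = np.random.rand(91)              # one score per architecture
random_scores = trained_scores - 0.1 * np.random.rand(91)

t, p = stats.ttest_rel(trained_scores, random_scores)
print(t, p, hedges_g_paired(trained_scores, random_scores))
```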
3.2 What kinds of architectures best predict rodent visual cortex?
The overall best architecture for predicting mouse visual cortex across both individual neurons (SRP-Ridge) and population-level representation (RSA) was an Inception-ResNet hybrid (Inception-ResNet-V2). There is a small, positive correlation between the depth of a model (the number of distinct layers) and its score on both the RSA-Max and SRP-Ridge metrics (Spearman’s r = 0.22, p = 0.001 and r = 0.192, p = 0.007, respectively), and a small, negative correlation between the total number of trainable parameters and the RSA Max metric (Spearman’s r = -0.18, p = 0.007). The latter of these is most likely driven by the relatively poor performance of parameter-dense architectures like VGG.
Markedly, trends previously noted in macaques [73] fail to materialize here. In particular, models with higher top-1 accuracies on ImageNet do not perform significantly better than models with lower top-1 accuracies. This relative parity is driven in large part it seems by newer models like EfficientNets, which across the board have dominant scores on ImageNet, but sometimes middling or poor scores in the predictions of rodent visual cortex we’ve tabulated here.
Compared to all other architectures, transformers on average fare slightly worse in the RSA Max metric (Student’s t = 3.96, p = 0.004, Hedges’ g = 0.913), but moderately better in the SRP-Ridge Max metric (Student’s t = 2.45, p = 0.023, Hedges’ g = 0.633). Strikingly, transformers and MLP-Mixers boast the largest differences between ImageNet-trained and randomly initialized variants in the SRP-Ridge Max metric, with all pairwise t-tests significant at alpha = 0.05 after Bonferroni correction for multiple comparisons. This strongly suggests that the advantage of those randomly initialized variants that matched or outperformed their ImageNet-trained counterparts is an advantage conferred by properties of convolutional architectures (e.g., translation invariance), and not necessarily an advantage shared across random feature spaces writ large. (The rankings of these and other architectures may be found in Figure 8 in the Appendix).
3.3 What kinds of tasks best predict rodent visual cortex?
The overall best Taskonomy encoder across both the RSA and SRP-Ridge Max is 2D segmentation (ranking second and first respectively; see Figure 9 in the Appendix). At the level of individual neurons (SRP-Ridge), 2D tasks (keypoints, autoencoding, inpainting) dominate. At the level of representational similarity (RSA), all 2D tasks but 2D segmentation fall to the bottom of the rankings, and Semantic tasks (object recognition and semantic segmentation) rise to 2nd and 3rd place.
This reshifting in rank presents a curious case for interpretation, suggesting most likely that while the representations of individual neurons may be coordinated more by the lower level, less abstract features necessary for performing well on most 2D tasks, the overall neural population codes are coordinated more by the parsing of the visual input into ethologically and spatially relevant units via the segmentation and classification tasks. Notably, the original research from which these PyTorch models were adopted offers an auxiliary data point that may anchor this interpretation more concretely. The top 3 models in our RSA Max metric (2D segmentation, object classification, semantic segmentation) are likewise in the top 5 of a ranking the original researchers produced by pitting the Taskonomy encoders against one another as pretrained ‘perceptual systems’ for reinforcement learning agents learning to navigate a virtual environment (see [54], figure 13 in the appendix). This raises the possibility that the reason these models are optimally predicting the visual neural population code for mice is simply because that code is coordinated in service of navigation.
3.4 How do category-supervised models compare to self-supervised models?
Whether with ResNet50 as their base, or a vision transformer, self-supervised models seem to be verging closer and closer to the predictive power of their category-supervised counterparts. Our most predictive self-supervised ResNet50 (a MocoV2 model), for example, effectively matches its category-supervised counterpart in the SRP-Ridge Max metric (with scores of .182 and .184, respectively), while slightly outperforming its category-supervised counterpart in the RSA Max metric (with scores of .422 and .415, respectively). While this single comparison by no means denotes a statistically significant superiority of self-supervised models (which would require training multiple iterations of each), it does begin to provide preliminary evidence for parity.
3.5 How well do non-neural network baselines predict rodent visual cortex?
Non-neural network baselines somewhat uniformly fail to predict neural activity as accurately as deep net features (though see Section A.10 of the Appendix for a counterexample). We tested three baselines: 1) a bank of Gabor filters applied to 8x8 grids of each image; 2) the PCs of the resultant feature matrices (i.e. the Gist descriptors [72]); and 3) the max across 600 iterations of 4096 random Fourier features (a dimensionality matching that of our SRPs). Ridge regressed with generalized cross-validation, these feature models yield average scores of 0.07, 0.06 and -0.014, respectively. Compared via representational similarity, they yield average scores of 0.20, 0.25 and 0.011.
3.6 How ‘deep’ are the layers that best predict rodent visual cortex?
Echoing previous results [7, 8], we find across all ImageNet-trained architectures, regardless of metric, that the features most predictive of rodent visual cortex are found about a third of the way into the model (though see Section A.5 of the Appendix for some caveats). These early to intermediate visual features go beyond basic edge detection but are far from the highly abstracted representations adjacent to final fully connected layers. Across Taskonomy encoders, 2D & Geometric tasks yield their best features in earlier layers; 3D & Semantic tasks yield their best features in more intermediate and later layers. Note that these aggregate motifs do not preclude subtler differences across cortical area, which we discuss in the section below.
3.7 Are there differences in model predictions across cortical area?
In this work, we address this question from two perspectives: that of hierarchy and that of function.
In primate visual cortex, it is common consensus that there exists a distinct information processing hierarchy along the ventral visual stream [74–76], with posterior sites like V1 and V3 defined by features like oriented edge detectors, and more anterior sites like V4 and IT defined by more complex morphologies. While there continues to be some debate as to whether a similar hierarchy exists in rodent visual cortex, a large body of anatomical, functional and physiological work [77–83] has coalesced around a meaningful hierarchy that consists first of a ventral / dorsal split after primary visual cortex (VISp), with VISp leading to VISl in the ventral stream and VISp leading to VISrl - VISal - VISpm - VISam in the dorsal stream. Strikingly, our modeling does seem to provide corresponding evidence for this circuit in the form of a data-driven hierarchy produced purely by taking the median depths of the model layers that best predict the neural activity in each of these cortical areas, and assessing for difference across them. A nonparametric ANOVA shows an overall difference in depth across cortical area to be significant for both our SRP-Ridge metric (Friedman’s χ² = 34.08, p = 2.29e-06, Kendall’s W = 0.04) and our RSA metric (Friedman’s χ² = 37.05, p = 5.86e-07, Kendall’s W = 0.06). Subsequent pairwise comparisons show many of the differences that underlie this group-level effect to be differences between earlier and later layers of the information processing hierarchy established in the literature. (For further details, see Figure 2.)
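A sketch of the hierarchy test described above: the relative depth of each model's best-predicting layer, one column per cortical area, compared with a Friedman test and Kendall's W; the data here are synthetic and only stand in for the real best-layer depths.

```python
import numpy as np
from scipy import stats

areas = ['VISp', 'VISl', 'VISrl', 'VISal', 'VISpm', 'VISam']
rng = np.random.default_rng(0)

# Relative depth (0 to 1) of the best-predicting layer: one row per model,
# one column per area, with a gentle synthetic upward trend along the hierarchy.
best_depths = np.clip(rng.normal(loc=np.linspace(0.30, 0.40, len(areas)),
                                 scale=0.08, size=(110, len(areas))), 0, 1)

chi2, p = stats.friedmanchisquare(*[best_depths[:, i] for i in range(len(areas))])
kendalls_w = chi2 / (best_depths.shape[0] * (len(areas) - 1))
print(chi2, p, kendalls_w)
```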
Other differences across cortical area that we might expect are differences driven by function. Research into primate visual cortex over the last two decades has unveiled a significant degree of functional organization over and above purely anatomical organization [84–86], with distinct subregions defined in large part by their differential activity in response to different kinds of stimuli. To try and replicate this in mouse visual cortex we search for Taskonomic organization, a proxy of functional organization wherein distinct neural sites are better or worse predicted by the features from different taskonomy encoders. Curiously, and in contrast to previous findings in human fMRI [55], it seems to be the case that the scores of different Taskonomic clusters are relatively consistent across cortical area (see Figure 3). This suggests that mouse visual cortex may be more functionally (or Taskonomically) homogenous than primate visual cortex, with anatomical descriptors providing little to no cue of functional difference – though this seems unlikely given other analyses we’ve performed showing greater similarities of neurons within cortical site than between cortical site (see Section A.11 for details). Another (more likely) alternative is that the tasks of computer vision are just not so neatly mapped onto the tasks of biological vision in mice.
3.8 How do the predictions compare across RSA and neural regression?
While prior work has addressed this question theoretically [87], it’s rarely the case that representational similarity and neural regression are compared directly and empirically. Here, we compare our RSA and SRP-Ridge metric both at the level of overall rankings (taking the max across layers) and at
the level of individual layers, the latter of which provides a much more detailed assessment of how different feature spaces map to cortical representation.
In terms of overall rankings, the Spearman rank-order correlation between the two methods is either 0.56 (p = 8.36e-19) or 0.59 (p = 3.17e-12), depending on whether you include or exclude the randomly initialized architectures. In terms of layer-by-layer comparisons, we decompose the Spearman correlation across distinct combinations of model and cortical area. The average coefficient between the two methods, along with bootstrapped 95% confidence intervals, is 0.468 [0.447,0.489] or 0.626 [0.613,0.639], again depending on the inclusion or exclusion of the random models. This suggests a significant degree of overlap between the kinds of features that optimally predict the representations of both individual neurons and neural populations. Of course, the averages here do obscure some meaningful subtrends and idiosyncrasies. For details, see Figure 4.
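To illustrate the layer-by-layer comparison, the sketch below computes a Spearman correlation between the two metrics' layer-wise scores for one model and area, and a bootstrapped 95% confidence interval over a collection of such coefficients; all values are synthetic placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Layer-wise scores for one (model, area) pair under the two metrics.
rsa_by_layer = rng.random(50)
srp_by_layer = 0.6 * rsa_by_layer + 0.4 * rng.random(50)
rho, p = stats.spearmanr(rsa_by_layer, srp_by_layer)

# Bootstrapped 95% CI over the per-(model, area) coefficients.
coefficients = rng.normal(0.47, 0.10, size=546)     # stand-in for real values
boot_means = [rng.choice(coefficients, size=coefficients.size, replace=True).mean()
              for _ in range(10000)]
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(rho, p, (ci_low, ci_high))
```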
3.9 How well are we doing overall in predicting mouse visual cortex?
The overall best model in any cortical area across either of our metrics is unsupervised 2D segmentation in anterolateral visual area (VISal), with an RSA Max score of 0.538. The (Spearman-Brown) split-half reliability of the RDM for this area (an effective proxy of its explainable variance) is 0.89. This means our most predictive model in any cortical area across any metric is little more than halfway to the noise ceiling.
Of course, it’s possible this noise ceiling is a bit too strict. Instead of requiring the model to predict the neural data as well as the neural data predicts itself, another possible target to which we might recalibrate is the relative performance we would expect if (instead of an artificial neural network) we used the responses of another biological network as the model to predict neural activity. Inspired by recent work [88], and to better contextualize the scores of our SRP-Ridge metric, we attempted a version of this here. To compute this reference, we proceeded again neuron by neuron using the exact same neural regression method (dimensionality reduction and hyperparameters) described in Section 2.5.2, but instead of using the responses of a deep net layer as the predictors in our ridge regression, we used the responses of the neurons from the same cortical area in all other mice (conspecifics) across the donor sample. Conceptually, this ‘intermouse score’ represents how well we might do if our model of a given mouse brain were other mouse brains.
Averaging across both cortical area and model, the average distance (with 95% bootstrapped confidence intervals) between the best performing deep net feature spaces and the mean of the intermouse scores (expressed in the same units of Pearson’s r we’ve used heretofore) is 0.0985 [0.0940, 0.103]. Compare this to the same distance computed relative to the split-half reliability: 0.728 [0.726, 0.731]. On average, then, while our artificial models are capturing only a fraction of the total explainable variance relative to the split-half noise ceiling, they’re verging increasingly close to the predictive threshold suggested by the reweighting of biological neurons from the same species. The performance of models relative to the intermouse score may be seen in the lower half of Figure 3.
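A sketch of the 'intermouse' reference described above, reusing the same simplified ridge pipeline (k-fold in place of generalized cross-validation) with the responses of same-area neurons from other mice as the predictors; the shapes and names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def intermouse_scores(target_responses, conspecific_responses, alpha=1.0):
    """Predict one mouse's neurons from other mice's same-area neurons.

    target_responses: (n_images, n_target_neurons)
    conspecific_responses: (n_images, n_other_neurons), pooled across other mice.
    Returns one Pearson r per target neuron.
    """
    predictions = cross_val_predict(Ridge(alpha=alpha),
                                    conspecific_responses, target_responses, cv=5)
    return np.array([np.corrcoef(predictions[:, n], target_responses[:, n])[0, 1]
                     for n in range(target_responses.shape[1])])

scores = intermouse_scores(np.random.rand(118, 40), np.random.rand(118, 500))
```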
4 Discussion
Our intent with this work was to provide a preliminary atlas for future ventures into the deep neural network modeling of rodent visual cortex. To this end, we have deliberately invested in introspective analyses of the tools we used, as well as the curation of deep neural networks we hope will provide informative waypoints. Obviously, the atlas is far from complete. Other model classes like recurrent models [89, 90], equivariant models [91], and robotic models (e.g. for visual odometry [92]) are promising candidates for inclusion in future benchmarks, and our neural encoding & representational similarity metrics are just two of many variants.
Nevertheless, the results we have presented here indicate that neural recordings from the visual brains of mice can (with care and caution) be compared to deep neural networks using many of the same tools we’ve used to better characterize the visual brains of monkeys. Having as reference two animal models that occupy very different ecological niches and are separated by tens of millions of
years of evolution makes it far more likely that insights into vision gleaned across both are actually fundamental to perceptual meaning-making and not just some idiosyncratic quirk specific to any one evolutionary trajectory. Primate and rodent vision do differ rather drastically, even in fairly basic ways: mice lack a fovea, have a retina dominated by rods for vision under low light, and spatial acuity less than 20/1000 [93], making their primary visual system more akin to the primate peripheral system – and making it all the more curious that the same models explain decent amounts of variance in both. The differences between the species, it seems, may not be so irreconcilable at the level of modeling, but only with future work more carefully controlling for distinct aspects of each organism’s unique physiology (see Section A.5) can more concrete conclusions of this kind be made.
Beyond considerations of distinctive physiology is the indispensable point that perceptual systems should always be considered in service of behavior. It’s possible that mice mostly rely on vision as a sort of broad bandpass filter for lower-frequency, dynamic stimuli the animal can then flee, fight, or further investigate with its whiskers — perhaps its most sophisticated sensory organ [94]. Another possibility is that mice use vision to facilitate navigation. The dominance in our Taskonomy results of 2D segmentation, object recognition and semantic segmentation (all tasks that have elsewhere been shown to provide effective, transferable features for the simulation of robotic navigation) provide some evidence for this. Of course, the behavioral roles of rodent vision may very well be manifold. Understanding this plurality in a readily available model species could in the end be key for bridging the gaps that remain between biological and computer vision [95]. The unparalleled access, resolution, and control afforded by rodent neuroimaging have already revolutionized our understanding of the relationship between perceptual representation and behavioral output. Combined with novel methods like the embedding of neural networks in virtual agents [96] in ecologically realistic environments, this kind of data may well provide a testbed for better situating the tasks of computer vision in the broader behavioral context of agentic scene understanding.
In summary, only novel combinations of architecture, task and mapping will help to explain the highly reliable neural variance we’ve yet to explain in our current survey. Already this recombination is under way: Shi et al. [97] have created a custom CNN designed specifically to match (processing stage by processing stage) the anatomy of rodent visual cortex, while Nayebi et al. [88] have combined the power of self-supervised learning with smaller, shallower architectures to more fully account for the ethological realities of rodent behavior and the differences in computational bandwidth that shape and constrain their visual systems. More work of this variety will be necessary to more fully model the rich diversity and fiendish complexity of biological brains at scale – even the very smallest ones.
4.1 Acknowledgements
We thank Martin Schrimpf, Tiago Marques, Jim DiCarlo, as well as many others on the BrainScore team for helpful discussion, feedback, and inspiration. We would also like to thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support.
4.2 Code Availability
More results and code for the replication of our analysis may be found at this GitHub repository: github.com/ColinConwell/DeepMouseTrap (License GPL v2)
4.3 Compute Required
We used a single machine with 8 Nvidia RTX 3090 GPUs, 755 GB of RAM, and 96 CPUs. GPUs were used only for extracting model activations, and could (without major slowdown) be removed from the analytic pipeline. Dimensionality reduction and regression computations were CPU and RAM intensive. Replicating all of our results would take approximately two weeks on a similar machine.
4.4 Ethics Statement
Lest our science forget the life that powers it, we must note that behind the phenomenal dataset provided by the Allen Institute are 256 laboratory mice, each of which was subjected to multiple surgeries, a highly invasive neuroimaging technique and genetic engineering. The moral parameters of this particular praxis of neuroscience are contentious, and not without reason. While we believe centralized, comprehensive and (most importantly) public datasets like those provided by the Allen Institute may actually decrease the total number of laboratory animals required for similar kinds of empirical projects, we acknowledge with solemnity the cost to life required.
4.5 Funding Statement
This work was supported by the Center for Brains, Minds and Machines, NSF STC award 1231216, the MIT CSAIL Systems that Learn Initiative, the CBMM-Siemens Graduate Fellowship, the MITIBM Watson AI Lab, the DARPA Artificial Social Intelligence for Successful Teams (ASIST) program, the United States Air Force Research Laboratory and United States Air Force Artificial Intelligence Accelerator under Cooperative Agreement Number FA8750-19-2-1000, and the Office of Naval Research under Award Number N00014-20-1-2589 and Award Number N00014- 20-1-2643. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. | 1. What is the main contribution of the paper regarding intermediate feature extraction for modeling mouse two-photon neural responses?
2. What are the strengths of the paper, particularly in terms of experimental quality and significance in visual neuroscience?
3. What are the weaknesses of the paper, such as unclear or hard-to-see figures, and how can they be improved?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor remarks or suggestions for improvement in the paper, such as renaming scikit-learn, dropping unnecessary phrases, or reordering figure references? | Summary Of The Paper
Review | Summary Of The Paper
Extracts intermediate features (across all layers) from models pretrained on ImageNet classification (and other computer vision tasks) and tests their modeling performance on mouse two-photon neural responses (from 6 cortical areas) using regression and RSA. Features at intermediate depths are better, there is a slight hierarchy between neural network depth and cortical area, and there seems to be no relation between the pretraining task and the cortical area the trained network models best.
Review
Originality: Though the techniques used are standard, the extent of the experiments and the animal model (mice) are novel and significant.
Quality: Experiments are sound and enough information is provided to allow replicability.
The main drawback of the paper in its current form is the figures: though they contain the information needed to draw the conclusions drawn in the paper, it is not easy to see. Fig. 1, for instance, shows performance metrics of pretrained vs randomly initialized models, but the models have been sampled (from the total of 110), the red bar goes below the blue bar if the value is lower, and the error bars overlap so it is hard to see where they start or end. Perhaps a two-sided violin plot per model (instead of the bars) could show the distribution for blue and red better. Or maybe a scatter plot (x axis trained, y axis random) where each point is a model could show all models and more easily show which version is better (in comparison to the diagonal); error bars could also be included per point or perhaps omitted for clarity, and important points (i.e., models) could be labelled (with colors or marker styles or text above the point). All results from Sec 3.3 (other than the best network being Inception-ResnetV2) are close to impossible to see in Fig. 1. In Fig. 2, perhaps adding a table with the averages per task group would help show the trends and differences between the two metrics (as done for figure 3). Fig. 3 could have also been a table (or perhaps two); it would be easier to read the numbers and gauge the differences that way (it is also currently unclear what your assumed cortical hierarchy and task hierarchy are). Given the amount of work put into obtaining these good results, spending time finding a better way to convey them would be worthwhile and will surely make the paper more attractive/citable.
I do not think the method comparison with BrainScore regression (Sec 3.1) adds to this paper (and it may take away from it). I understand this was asked of you before but it does not actually show that results from this paper (in mice) will hold if using the BrainScore regression method, it only shows that both methods give similar regression results on the same data (which is expected given that the changes are minor). That the method transfers to primate data (with the very different visual system and recording methods) might be interesting but I presume so will many other methods (including BrainScore’s) and regression performance or transferability across animal models is not the focus of this paper so it feels disconnected. It may be worth considering bumping it to the end of the result section (as an extra transfer result) or even to the appendix.
Clarity: Well written, though sometimes wordy; for instance, the main results of the paper at the end of the introduction could be a lot bolder/more direct even if omitting some details (for instance, 1 → “Training matters: features from trained models predict/model neural responses better than features from randomly initialized models.” or phrases like “features learned in service of some task” → “learned features” or “there may be a slight upwards gradient in complexity” → “complexity increases”). Of course, the work and results still hold, but I think ML readers will appreciate a less hedged writing style.
For reproducibility, can you add a sentence about how images were modified to work as ImageNet images, i.e., how you deal with the aspect ratio difference (if any), sizing, and color?
Perhaps it is worth providing the proportion of cells (out of the 6.6K) from each cortical area (and the name of the areas). A single sentence should do. “Dataset areas are divided as V1 (50%), AM (10%), PM(5%), ….” (could be in supplementary).
For the Taskonomy models, you say “we choose to extract representations across all layers”(l.90), to confirm, this means through all encoder+decoder layers (i.e., different tasks may have different depths)?
I am confused about your ridge regression, do you cross-validate or not? because it says you use “a lambda penalty of 1.0” (l.132) but then in the same line you say you “monopolize generalized cross-validation” (?), I presume you do use LOOCV (if so, I would drop the sentence about using a lambda of 1 and perhaps provide the lambda grid you searched over).
Also, to confirm, once you cross-validate the penalty, you use all the data to fit the final linear regressor (with the chosen lambda), right? if so, the regression correlation may be optimistic (i.e., it reports cross-validation results). This is ok as long as what’s reported is clear.
The interpretation in Sec 3.4 seems premature given that it is only truly one task (Semantic segmentation) that moves up from SRP to RSA (and at the same time, scene classification moves down from SRP to RSA). It is “only” a hypothesis so it is up to the authors to decide whether they are ok with it.
Changes in performance in Sec. 3.6 are so small I do not think one can even say they point in any direction (l.251).
Results from Sec 3.8 are neat, I thought the lack of functional specialization across different cortical areas in mice was well known; maybe you can find some references for it and frame your results as further confirmation. I would also reference some of the discussions about cortical area hierarchies.
In l. 296, pointing out that regression scores are lower than RSA scores seems unimportant. They are both correlation scores but in very different spaces.
Great discussion.
Other minor remarks
the word “later” in l.33 is odd.
you end up using 6619 cells (not 59K as claimed in the abstract)
in l. 132, it will be better to name scikit-learn [52] rather than just the citation or “implemented with a popular machine learning toolkit [57]” if you don’t want to name it.
in l. 152, drop “In this work,”
I would reference the figures before the written results (i.e., closer to the top of each 3.x subsection rather than at the very end); for instance, the line in 195-196 could go up to l.184
l.219 ‘writ large’?
Reference to fig 4 (l.257) appears before reference to fig 3 (l.281); fig 3 should probably be fig 4 and vice versa.
For 3.x sections, perhaps consider putting result directly in the titles, for instance, section 3.5: "how well do non-neural network baselines predict rodent visual cortex" -> "non-neural network baselines fail to predict rodent visual cortex”
Showing the neural RDMs for different areas and perhaps some for a selected layer in a trained model could be interesting (perhaps in appendix)
Significance: It fills a research gap in visual neuroscience and will be a good reference for future studies.
Overall: This is a good paper that is on the verge of acceptance. With some improvements I would have no problem accepting it.
Update: I have bumped my score from 5 to 6 after the authors addressed some of my concerns and those of other reviewers. |
NIPS | Title
Neural Regression, Representational Similarity, Model Zoology & Neural Taskonomy at Scale in Rodent Visual Cortex
Abstract
How well do deep neural networks fare as models of mouse visual cortex? A majority of research to date suggests results far more mixed than those produced in the modeling of primate visual cortex. Here, we perform a large-scale benchmarking of dozens of deep neural network models in mouse visual cortex with both representational similarity analysis and neural regression. Using the Allen Brain Observatory’s 2-photon calcium-imaging dataset of activity in over 6,000 reliable rodent visual cortical neurons recorded in response to natural scenes, we replicate previous findings and resolve previous discrepancies, ultimately demonstrating that modern neural networks can in fact be used to explain activity in the mouse visual cortex to a more reasonable degree than previously suggested. Using our benchmark as an atlas, we offer preliminary answers to overarching questions about levels of analysis (e.g. do models that better predict the representations of individual neurons also predict representational similarity across neural populations?); questions about the properties of models that best predict the visual system overall (e.g. is convolution or category-supervision necessary to better predict neural activity?); and questions about the mapping between biological and artificial representations (e.g. does the information processing hierarchy in deep nets match the anatomical hierarchy of mouse visual cortex?). Along the way, we catalogue a number of models (including vision transformers, MLP-Mixers, normalization free networks, Taskonomy encoders and self-supervised models) outside the traditional circuit of convolutional object recognition. Taken together, our results provide a reference point for future ventures in the deep neural network modeling of mouse visual cortex, hinting at novel combinations of mapping method, architecture, and task to more fully characterize the computational motifs of visual representation in a species so central to neuroscience, but with a perceptual physiology and ecology markedly different from the ones we study in primates.
*Correspondence: conwell@g.harvard.edu; Project Website: github.com/ColinConwell/DeepMouseTrap
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
1 Introduction
To date, the most successful models of biological visual cortex are object-recognizing deep neural networks applied to the prediction of neural activity in primate visual cortex [1–5]. Corresponding to the biology not only at the level of individual layers, but across the feature hierarchy, these models are so powerful they can now effectively be used as neural controllers, synthesizing stimuli that drive neural activity far beyond the range evoked by any handmade experimental stimulus [6]. The correspondence of these same models to mouse visual cortex, on the other hand, has proven a bit more tenuous [7, 8], with a recent finding even suggesting that randomly initialized networks are as predictive of rodent visual cortical activity as trained ones [9].
Often implicit in interpretations of these results is the notion that the visual milieu and machinery of mice is simply different – something characterized more, perhaps, by brute force predator avoidance and the ‘flexible random associations’ thought to define senses like olfaction [10] than by the sophisticated active sampling and representational compositionality enabled by primate central vision. And yet, mice do recognize objects [11, 12] – and do engage in other sophisticated visual behaviors [13] that suggest they must have visual solutions that at least functionally approximate the kinds of solutions learned by modern computer vision algorithms. If these models perform well in monkeys, but not in mice, are we overfitting to an artifact? Are the object recognition capabilities of mice simply the byproduct of a representational competence learned through other (even more behaviorally relevant) tasks? Have mice perhaps converged on solutions to visual problems that fundamentally differ from the solutions that undergird the emergent similarity between monkeys and machines? To even begin to answer these questions, we need substantially more comprehensive modeling statistics than we currently have. Our main goal in this work was to provide exactly that – to re-examine at large scale the state of neural network modeling in the visual cortices of mice, using many thousands of neurons, over 110 distinct neural network models, and two methods of mapping models to brain.
We summarize the statistics from our benchmarking survey in five main results:
1. Training matters. The randomly initialized variants of some convolutional architectures fare well when predicting individual neural responses, but representational similarity is always better captured by features learned in service of some task. (Segmentation seems best.)
2. Features of intermediate complexity dominate in the prediction of all cortical sites, but both our mapping methods do demonstrate an upwards gradient in complexity from primary visual cortex onwards that roughly matches the information processing hierarchy proposed elsewhere in the rodent neurophysiology literature.
3. Taskonomic tools that have previously been shown to approximate functional organization in primates fail to strongly differentiate anatomical regions in mice, with the same kinds of tasks dominant across multiple, distinct neural sites.
4. When aggregated in similar ways, representational similarity and neural regression methods capture similar trends in the kinds of feature spaces that best predict the biology.
5. While still far from the overall noise ceiling for this highly reliable neural data, a variety of the artificial deep net models in our survey make predictions only slightly less accurate than ‘biological conspecific models’ composed of the neurons from other mice.
2 Methods
2.1 Neural Dataset
For neural data, we use the Allen Brain Observatory Visual Coding1 dataset [14] collected with two-photon calcium-imaging from the visual cortex of 256 awake adult transgenic mice and consisting of approximately 59,610 unique, individual neurons. Calcium-imaging fluorescence patterns are preprocessed and deconvolved by the Allen Institute2. The neurons sampled include neurons from 6 visual cortical areas at 4 cortical depths across 12 genetic Cre lines. The visual experiments recorded activity for both artificial images (e.g., diffraction gratings) and 118 natural scenes. We analyze only
1Available with a non-commercial license under the Allen Institute terms of use: http://www.alleninstitute.org/legal/terms-use/
2More details are available in the whitepapers released with the observatory data: http://observatory.brain-map.org/visualcoding/transgenic
the latter to ensure comparable inputs to what is typically used in the training of deep nets. Each natural scene is displayed 50 times over the course of an assay.
To ensure an optimal signal-to-noise ratio, we perform a significant amount of subsetting on the full neural population, beginning by subsetting only excitatory neurons. Recent analyses suggest neural activity throughout mouse visual cortex is often impacted by extraneous, external body movements [15]. For this reason, we subsequently filter out any neurons whose peak responses to the presentation of natural scene images are significantly modulated by the mouse’s running speed, using an ANOVA metric provided by the Allen Institute. We further subselect neurons by assessing their split-half reliability across trials (with each split-half constituting 25 of 50 presentations for each image), keeping only those neurons exhibiting a reliability of 0.8 and above. This thresholding still leaves 6619 neurons for analysis, is in line with prior work on primates, and supports, for example, the construction of cortical representational dissimilarity matrices (RDMs) with split-half reliabilities as high as 0.93. (More details on the relationship between our metrics and neural reliability, including visualizations of some of our results across many degrees of thresholding, can be found in A.4 of the Appendix.)
2.2 Model Zoology
To explore the influence of model architecture on predictive performance, we use 26 model architectures from the Torchvision (PyTorch) model zoo [16] and 65 model architectures from the Timm [17] model zoo [18–52]. These models include convolutional networks, vision transformers, normalization-free networks and MLP-Mixer models. For each of these models, we extract the features from one trained and one randomly initialized variant (using whatever initialization scheme the model authors deemed best) so as to better disentangle what training on object recognition affords us in terms of predictive power.
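As a rough illustration of how such trained and untrained variants can be instantiated and read out, the sketch below uses timm with forward hooks; the model name and the choice of hooking every leaf module are placeholders, not the exact layer selection used in our survey.

```python
import timm
import torch

def layer_activations(model_name, images, pretrained=True):
    """Return a dict mapping module names to flattened activations for a batch
    of images. images: float tensor of shape (batch, 3, H, W), preprocessed."""
    model = timm.create_model(model_name, pretrained=pretrained).eval()
    features = {}

    def hook(name):
        def _hook(module, inputs, output):
            features[name] = output.detach().flatten(start_dim=1)
        return _hook

    handles = [module.register_forward_hook(hook(name))
               for name, module in model.named_modules()
               if len(list(module.children())) == 0]  # leaf modules only
    with torch.no_grad():
        model(images)
    for h in handles:
        h.remove()
    return features

# trained = layer_activations('resnet50', images, pretrained=True)
# random_init = layer_activations('resnet50', images, pretrained=False)
```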
2.3 Neural Taskonomy
Model zoology provides decent perspective on the computations related to object recognition, but the responsibilities of the visual cortex (no matter the species) extend far beyond identifying the category of an object. To probe a wider range of tasks, we turn to Taskonomy: a single architecture trained on 24 different common computer vision tasks [53], ranging from autoencoding to edge detection. The model weights we use are from updated PyTorch implementations of the original Tensorflow models [54]. Key to the engineering of Taskonomy is the use of an encoder-decoder design in which only the construction of the decoder varies across tasks. While recent analyses using a similar approach in human visual cortex with fMRI data [55] have tended to focus only on the latent space of each task’s encoder, we extract representations across all layers, better situating Taskonomy within the same empirical paradigm that has so far defined the modeling of object recognition in the primate brain. For further clarity, we cluster the 24 tasks according to their ‘Taskonomic’ category — a total of 5 clusters (2D, 3D, semantic, geometric or other) that we further collapse into 4 clusters (lumping the only member of the ‘other’ category — a denoising autoencoder — in with its closest cousin — a vanilla autoencoder in the ‘2D’ category). These purely data-driven clusters are derived from estimates of how effectively a set of features learned for one task transfer to (or boost the performance in) another task [53]. Use of the Taskonomy models provides a unique opportunity to test variance in training regimes without the confound of simultaneous changes in architecture.
2.4 Self-Supervised Models
Full category supervision, while robust in its ability to build representations that transfer well to a variety of tasks, suffers in its neuroscientific relevance as an ethologically plausible mode of learning. Recently, self-supervised models have begun to provide viable alternatives to the representations learned by category-supervised models in both computer vision [56, 57] and neural mapping [58, 59]. Here, we assess 22 self-supervision models from the VISSL model zoo [60], ranging from earlier iterations (e.g. DeepCluster [61]) to modern contrastive learning algorithms (e.g. BarlowTwins and Dino [62–65]). We use these models to assess whether category-supervision, however powerful it is in predicting neural activity, might eventually be supplanted by these more realistic alternatives. 14 of these models have as their base architecture a standard ResNet50; 8 are built atop vision transformers.
2.5 Comparing Representations across Biological & Artificial Networks
Two methods predominate in the comparison of neural recordings to deep neural networks: at the most abstract level, one of these compares representational geometries computed across the activations
of many individual neurons [66, 67]; the other attempts to predict the activity of individual neurons directly [67, 68]. Both of these techniques are grounded in the use of image-computable models and a shared stimulus set, but differ in the types of transformation applied to the neural activity generated by those stimuli. Given the difference in both target (neural populations versus individual neurons) and transforms (correlation matrices versus dimensionality reduction) we attempt a variant of each type of analysis here, comparing the two directly on the exact same neural data, with the same models and the same stimulus set, and in a granular, layer-by-layer fashion. (A more comprehensive review of neural mapping methods is provided in Section A.2 of the Appendix.)
2.5.1 Representational Similarity Analysis
To compare the representational geometries of a given model to the representational geometries of the brain, we begin by computing classic representational dissimilarity matrices (RDMs) [69]. We compute these RDMs (one for each of the 6 cortical areas surveyed) by calculating the pairwise correlation coefficients between the neural response vectors evoked by each image. We then repeat this procedure for the artificial networks, aggregating the responses of the artificial neurons in a given layer into a response vector per image, before aggregating those vectors once more into a correlation matrix. We then measure the relationship between the RDMs computed from the biological and artificial networks with a second-order Pearson correlation between the flattened upper triangles of each. The resultant coefficient constitutes the score for how well a given model layer predicts the representational similarity of a given cortical area.
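In code, the whole comparison reduces to a few lines; the sketch below assumes response matrices of shape (images, neurons or units) and uses correlation distance (1 minus Pearson r) for the RDM entries, which is one common convention rather than a detail fixed by the description above.

```python
import numpy as np
from scipy.stats import pearsonr

def rdm(responses):
    """responses: (n_images, n_units). Returns an (n_images, n_images)
    correlation-distance matrix (1 - Pearson r between image response vectors)."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(neural_responses, model_layer_features):
    """Second-order Pearson correlation between the flattened upper triangles
    of the neural and model RDMs."""
    n = neural_responses.shape[0]
    iu = np.triu_indices(n, k=1)
    return pearsonr(rdm(neural_responses)[iu], rdm(model_layer_features)[iu])[0]
```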
2.5.2 Neural Regression (Encoding Models)
To more directly compare the biological and artificial neural activations in our data, we use a style of regression made popular in the modeling of primate visual cortex, epitomized by BrainScore [4]. Variants of this approach abound, but most consist of extracting model activations, performing dimensionality reduction, and then some form of cross-validated penalized or principal components regression. The dimensionality-reduced feature spaces of the model are used as the regressors of the activation patterns in a given neuron. After testing a number of these variants, we settled on sparse random projection for dimensionality reduction (which proved far more computationally efficient than standard PCA, without sacrifice in terms of regression scores), followed by ridge regression (in place of the more frequently used partial least squares regression).
The details of our method (programmed with [70]) are as follows: Given a network, we first extract a predetermined number of sparse random projections (4096, in this case) from the features of each layer — in line with the Johnson-Lindenstrauss lemma for the number of observations (images shown to the mice) in our data set 3. After extracting these projections, we regress the activity of each individual neuron on them using ridge regression (with a default lambda penalty of 1.0). The use of a penalized regression in this case allows us to exploit generalized cross-validation (a linear algebraic form of leave-one-out cross-validation), yielding a set of held-out predictions for the activity of each neuron for each image4. We then compute the Pearson correlation between the predicted and actual activity for each neuron to obtain a score per neuron per model layer, which we then aggregate by taking the mean of scores per neuron across each cortical area.
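A condensed numpy/scikit-learn sketch of this SRP-Ridge pipeline is given below; the closed-form leave-one-out predictions use the standard ridge "hat matrix" identity, while the centering step and the fixed penalty are simplifying assumptions of the sketch rather than guarantees about our exact analysis code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.random_projection import SparseRandomProjection

def srp_ridge_scores(layer_features, neural_responses, n_components=4096, lam=1.0):
    """layer_features: (n_images, n_features); neural_responses: (n_images, n_neurons).
    Returns one leave-one-out Pearson r per neuron for this model layer."""
    X = SparseRandomProjection(n_components=n_components).fit_transform(layer_features)
    X = X - X.mean(axis=0)                       # center so no intercept is needed
    Y = neural_responses - neural_responses.mean(axis=0)
    K = X @ X.T                                  # (n_images, n_images) Gram matrix
    H = K @ np.linalg.inv(K + lam * np.eye(len(K)))  # ridge "hat" matrix
    Y_hat = H @ Y                                # in-sample ridge predictions
    h = np.diag(H)[:, None]
    Y_loo = Y - (Y - Y_hat) / (1.0 - h)          # closed-form leave-one-out predictions
    return np.array([pearsonr(Y_loo[:, j], Y[:, j])[0] for j in range(Y.shape[1])])
```

The dual (Gram-matrix) form keeps the linear algebra at the size of the image set rather than the 4096 projections, which is what makes the leave-one-out predictions cheap for a stimulus set of this size.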
We verify the efficacy of this method on the publicly available benchmarks of primate BrainScore, where (relative to BrainScore’s in-house regression method) we demonstrate provisional gains not only in terms of predictive score (sometimes up to r = 34%), but also in terms of speed and computational efficiency. (Details may be found in Section A.1 of the Appendix.)
2.6 Model Rankings
To rank the models according to how well they predict the variance in a given cortical area, we take the max across layers. In effect, this requires that a model ‘commit’ only one layer to the prediction of each area. In the case of our neural regression metric we call these scores the ‘SRP-Ridge Max’; in the case of our representational similarity metric we call these scores the ‘RSA Max’. A final mean taken over the SRP-Ridge Max and RSA Max scores per model per cortical area yields our overall model rankings, which serve as the basis for the bulk of our analyses.
3Note that in cases where the dimensionality of features is less than the number of projections suggested by the lemma, sparse random projections will actually upsample the feature space, rather than downsample it.
4The use of generalized cross-validation is particularly beneficial in datasets with fewer probe images, where k-fold cross-validation means losing a significant degree of information in each fit.
2.7 Non-Neural Network Baselines
Prior to the ascendancy of neural network models, a significant amount of time and craft was invested in the hand-engineering of features to simultaneously facilitate image recognition and capture meaningful subsets of neural variance. In this work, we test how well a small subset of those features is able to explain the variance in rodent visual cortex, using both our neural encoding and representational similarity metrics. Our non-neural network baselines consist of random Fourier features [71] (computed specifically to match the dimensionality of our neural network predictors), handcrafted Gabor filters and GIST (spatial envelope) descriptors [72].
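As one concrete, hedged example of how such a baseline can be constructed, the sketch below draws random Fourier features with scikit-learn's RBFSampler; the input representation (flattened, standardized pixels), the kernel bandwidth, and the feature count are illustrative choices rather than the exact configuration used in our benchmarks.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler

def random_fourier_features(images, n_components=4096, gamma=1.0, seed=0):
    """images: (n_images, H, W) or (n_images, H, W, C) array.
    Returns an (n_images, n_components) random Fourier feature matrix."""
    pixels = images.reshape(len(images), -1).astype(np.float64)
    pixels = (pixels - pixels.mean()) / (pixels.std() + 1e-8)
    sampler = RBFSampler(gamma=gamma, n_components=n_components, random_state=seed)
    return sampler.fit_transform(pixels)

# These features can then be scored exactly like a model layer, e.g. with the
# SRP-Ridge or RSA functions sketched in Section 2.5 above.
```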
3 Results
3.1 How do trained models compare to randomly initialized models?
Previous work in the deep neural network modeling of mouse visual cortex found that a randomly initialized VGG16 predicted neural responses as well as, if not slightly better than, a VGG16 trained on ImageNet [9], suggesting that the neural predictivity of the features produced by a trained object recognition model are perhaps no better than the features produced by a randomly initialized one. Our results, on the other hand, suggest that the neural predictivity of trained versus randomly initialized models more generally depends on both the particular model being tested and the particular method used to produce the mappings between model and brain.
At the level of individual neurons (neural regression), 17 of the 91 model architectures we tested had randomly initialized variants that either matched or outperformed their ImageNet-trained counterparts. Replicating previous findings, we found these 17 architectures to include VGG16, as well as all 3 other VGG variants (11, 13 & 19), AlexNet, the DenseNet architectures (121, 169, 201), and almost all of the normalization-free architectures. Despite this, a paired t-test of the difference in scores across all models demonstrates that ImageNet-trained architectures are still overall more performant than their randomly initialized counterparts (Student’s t = 7.74, p = 1.37e−11, Hedges’ g = 0.81). At the level of emergent representational similarity (RSA), ImageNet-trained models categorically outperform their randomly initialized counterparts, and by a large margin (Student’s t = 22.66, p = 5.81e−39, Hedges’ g = 2.36). Taken together, these results strongly affirm that training matters, and that randomly initialized features can only go so far in the prediction of meaningful neural variance. Differences between ImageNet-trained and randomly initialized models are shown in Figure 1.
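For reference, the sketch below shows the kind of paired comparison reported here; the particular small-sample correction used for Hedges' g is a common convention and is assumed for illustration rather than taken from our exact analysis code.

```python
import numpy as np
from scipy.stats import ttest_rel

def paired_comparison(trained_scores, random_scores):
    """trained_scores, random_scores: arrays of per-model scores (paired by model).
    Returns the paired t statistic, p-value, and Hedges' g effect size."""
    t, p = ttest_rel(trained_scores, random_scores)
    diff = np.asarray(trained_scores) - np.asarray(random_scores)
    d = diff.mean() / diff.std(ddof=1)             # Cohen's d for paired samples
    j = 1.0 - 3.0 / (4.0 * (len(diff) - 1) - 1.0)  # small-sample correction factor
    return t, p, j * d
```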
3.2 What kinds of architectures best predict rodent visual cortex?
The overall best architecture for predicting mouse visual cortex across both individual neurons (SRP-Ridge) and population-level representation (RSA) was an Inception-ResNet hybrid (Inception-ResNet-V2). There is a small, positive correlation between the depth of a model (the number of distinct layers) and its score on both the RSA Max metric and the SRP-Ridge metric (Spearman’s r = 0.22, p = 0.001 and r = 0.192, p = 0.007, respectively), and a small, negative correlation between the total number of trainable parameters and the RSA Max metric (Spearman’s r = −0.18, p = 0.007). The latter of these is most likely driven by the relatively poor performance of parameter-dense architectures like VGG.
Markedly, trends previously noted in macaques [73] fail to materialize here. In particular, models with higher top-1 accuracies on ImageNet do not perform significantly better than models with lower top-1 accuracies. This relative parity seems to be driven in large part by newer models like EfficientNets, which across the board have dominant scores on ImageNet, but sometimes middling or poor scores in the predictions of rodent visual cortex we’ve tabulated here.
Compared to all other architectures, transformers on average fare slightly worse in the RSA Max metric (Student’s t = 3.96, p = 0.004, Hedges’ g = 0.913), but moderately better in the SRP-Ridge Max metric (Student’s t = 2.45, p = 0.023, Hedges’ g = 0.633). Strikingly, transformers and MLP-Mixers boast the largest differences between ImageNet-trained and randomly initialized variants in the SRP-Ridge Max metric, with all pairwise t-tests significant at alpha = 0.05 after Bonferroni correction for multiple comparisons. This strongly suggests that the advantage of those randomly initialized variants that matched or outperformed their ImageNet-trained counterparts is an advantage conferred by properties of convolutional architectures (e.g., translation invariance), and not necessarily an advantage shared across random feature spaces writ large. (The rankings of these and other architectures may be found in Figure 8 in the Appendix).
3.3 What kinds of tasks best predict rodent visual cortex?
The overall best Taskonomy encoder across both the RSA and SRP-Ridge Max is 2D segmentation (ranking second and first respectively; see Figure 9 in the Appendix). At the level of individual neurons (SRP-Ridge), 2D tasks (keypoints, autoencoding, inpainting) dominate. At the level of representational similarity (RSA), all 2D tasks but 2D segmentation fall to the bottom of the rankings, and Semantic tasks (object recognition and semantic segmentation) rise to 2nd and 3rd place.
This reshifting in rank presents a curious case for interpretation, suggesting most likely that while the representations of individual neurons may be coordinated more by the lower level, less abstract features necessary for performing well on most 2D tasks, the overall neural population codes are coordinated more by the parsing of the visual input into ethologically and spatially relevant units via the segmentation and classification tasks. Notably, the original research from which these PyTorch models were adopted offers an auxiliary data point that may anchor this interpretation more concretely. The top 3 models in our RSA Max metric (2D segmentation, object classification, semantic segmentation) are likewise in the top 5 of a ranking the original researchers produced by pitting the Taskonomy encoders against one another as pretrained ‘perceptual systems’ for reinforcement learning agents learning to navigate a virtual environment (see [54], figure 13 in the appendix). This raises the possibility that the reason these models are optimally predicting the visual neural population code for mice is simply because that code is coordinated in service of navigation.
3.4 How do category-supervised models compare to self-supervised models?
Whether with ResNet50 as their base, or a vision transformer, self-supervised models seem to be verging closer and closer to the predictive power of their category-supervised counterparts. Our most predictive self-supervised ResNet50, for example, (a MocoV2 model) effectively matches its category-supervised counterpart in the SRP-Ridge Max metric (with scores of .182 and .184, respectively), while slightly outperforming its category-supervised counterpart in the RSA Max metric (with scores of .422 and .415, respectively). While this single comparison by no means denotes a statistically significant superiority of self-supervised models (which would require training multiple iterations of each), it does begin to provide preliminary evidence for parity.
3.5 How well do non-neural network baselines predict rodent visual cortex?
Non-neural network baselines somewhat uniformly fail to predict neural activity as accurately as deep net features (though see Section A.10 of the Appendix for a counterexample). We tested three baselines: 1) a bank of Gabor filters applied to 8x8 grids of each image; 2) the PCs of the resultant feature matrices (i.e. the Gist descriptors [72]); and 3) the max across 600 iterations of 4096 random Fourier features (a dimensionality matching that of our SRPs). Ridge regressed with generalized cross-validation, these feature models yield average scores of 0.07, 0.06 and -0.014, respectively. Compared via representational similarity, they yield average scores of 0.20, 0.25 and 0.011.
3.6 How ‘deep’ are the layers that best predict rodent visual cortex?
Echoing previous results [7, 8], we find across all ImageNet-trained architectures, regardless of metric, that the features most predictive of rodent visual cortex are found about a third of the way into the model (though see Section A.5 of the Appendix for some caveats). These early to intermediate visual features go beyond basic edge detection but are far from the highly abstracted representations adjacent to final fully connected layers. Across Taskonomy encoders, 2D & Geometric tasks yield their best features in earlier layers; 3D & Semantic tasks yield their best features in more intermediate and later layers. Note that these aggregate motifs do not preclude subtler differences across cortical area, which we discuss in the section below.
3.7 Are there differences in model predictions across cortical area?
In this work, we address this question from two perspectives: that of hierarchy and that of function.
In primate visual cortex, it is common consensus that there exists a distinct information processing hierarchy along the ventral visual stream [74–76], with posterior sites like V1 and V3 defined by features like oriented edge detectors, and more anterior sites like V4 and IT defined by more complex morphologies. While there continues to be some debate as to whether a similar hierarchy exists in rodent visual cortex, a large body of anatomical, functional and physiological work [77–83] has coalesced around a meaningful hierarchy that consists first of a ventral / dorsal split after primary visual cortex (VISp), with VISp leading to VISl in the ventral stream and VISp leading to VISrl - VISal - VISpm - VISam in the dorsal stream. Strikingly, our modeling does seem to provide corresponding evidence for this circuit in the form of a data-driven hierarchy produced purely by taking the median depths of the model layers that best predict the neural activity in each of these cortical areas, and assessing for difference across them. A nonparametric ANOVA shows an overall difference in depth across cortical area to be significant for both our SRP-Ridge metric (Friedman’s χ2 = 34.08, p = 2.29e−06, Kendall’s W = 0.04) and our RSA metric (Friedman’s χ2 = 37.05, p = 5.86e−07, Kendall’s W = 0.06). Subsequent pairwise comparisons show many of the differences that underlie this group-level effect to be differences between earlier and later layers of the information processing hierarchy established in the literature. (For further details, see Figure 2.)
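The hierarchy analysis above boils down to a Friedman test over best-layer depths; a minimal sketch follows, assuming a models-by-areas matrix of the relative depth of each model's best-predicting layer, with Kendall's W derived from the Friedman statistic in the usual way.

```python
import numpy as np
from scipy.stats import friedmanchisquare

def depth_hierarchy_test(best_layer_depths):
    """best_layer_depths: (n_models, n_areas) matrix of relative depths of the
    best-predicting layer, one row per model and one column per cortical area."""
    n_models, n_areas = best_layer_depths.shape
    chi2, p = friedmanchisquare(*[best_layer_depths[:, j] for j in range(n_areas)])
    kendalls_w = chi2 / (n_models * (n_areas - 1))  # effect size for the Friedman test
    return chi2, p, kendalls_w
```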
Other differences across cortical area that we might expect are differences driven by function. Research into primate visual cortex over the last two decades has unveiled a significant degree of functional organization over and above purely anatomical organization [84–86], with distinct subregions defined in large part by their differential activity in response to different kinds of stimuli. To try and replicate this in mouse visual cortex we search for Taskonomic organization, a proxy of functional organization wherein distinct neural sites are better or worse predicted by the features from different Taskonomy encoders. Curiously, and in contrast to previous findings in human fMRI [55], it seems to be the case that the scores of different Taskonomic clusters are relatively consistent across cortical area (see Figure 3). This suggests that mouse visual cortex may be more functionally (or Taskonomically) homogeneous than primate visual cortex, with anatomical descriptors providing little to no cue of functional difference – though this seems unlikely given other analyses we’ve performed showing greater similarities of neurons within cortical site than between cortical site (see Section A.11 for details). Another (more likely) alternative is that the tasks of computer vision are just not so neatly mapped onto the tasks of biological vision in mice.
3.8 How do the predictions compare across RSA and neural regression?
While prior work has addressed this question theoretically [87], it’s rarely the case that representational similarity and neural regression are compared directly and empirically. Here, we compare our RSA and SRP-Ridge metric both at the level of overall rankings (taking the max across layers) and at
the level of individual layers, the latter of which provides a much more detailed assessment of how different feature spaces map to cortical representation.
In terms of overall rankings, the Spearman rank order correlation between the two methods is either 0.56 (p = 8.36 × 10−19) or 0.59 (p = 3.17 × 10−12), depending on whether you include or exclude the randomly initialized architectures. In terms of layer by layer comparisons, we decompose the Spearman correlation across distinct combinations of model and cortical area. The average coefficient between the two methods, along with bootstrapped 95% confidence intervals, is 0.468 [0.447, 0.489] or 0.626 [0.613, 0.639], again depending on the inclusion or exclusion of the random models. This suggests a significant degree of overlap between the kinds of features that optimally predict the representations of both individual neurons and neural populations. Of course, the averages here do obscure some meaningful subtrends and idiosyncrasies. For details, see Figure 4.
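Bootstrapped intervals of this kind can be reproduced with a simple percentile bootstrap over the per-(model, area) coefficients; a sketch is below, with the resample count an assumption of the sketch rather than a reported detail.

```python
import numpy as np

def bootstrap_mean_ci(coefficients, n_boot=10000, alpha=0.05, seed=0):
    """coefficients: 1-D array of layer-wise Spearman correlations,
    one per (model, cortical area) combination. Returns the mean and a
    percentile-bootstrap (1 - alpha) confidence interval for that mean."""
    rng = np.random.default_rng(seed)
    coefficients = np.asarray(coefficients)
    boots = [rng.choice(coefficients, size=len(coefficients), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return coefficients.mean(), (lo, hi)
```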
3.9 How well are we doing overall in predicting mouse visual cortex?
The overall best model in any cortical area across either of our metrics is unsupervised 2D segmentation in anterolateral visual area (VISal), with an RSA Max score of 0.538. The (Spearman-Brown) split-half reliability of the RDM for this area (an effective proxy of its explainable variance) is 0.89. This means our most predictive model in any cortical area across any metric is little more than halfway to the noise ceiling.
Of course, it’s possible this noise ceiling is a bit too strict. Instead of requiring the model to predict the neural data as well as the neural data predicts itself, another possible target to which we might recalibrate is the relative performance we would expect if (instead of an artificial neural network) we used the responses of another biological network as the model to predict neural activity. Inspired by recent work [88], and to better contextualize the scores of our SRP-Ridge metric, we attempted a version of this here. To compute this reference, we proceeded again neuron by neuron using the exact same neural regression method (dimensionality reduction and hyperparameters) described in Section 2.5.2, but instead of using the responses of a deep net layer as the predictors in our ridge regression, we used the responses of the neurons from the same cortical area in all other mice (conspecifics) across the donor sample. Conceptually, this ‘intermouse score’ represents how well we might do if our model of a given mouse brain were other mouse brains.
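A simplified, self-contained version of this intermouse reference (using plain leave-one-out ridge rather than the exact SRP-Ridge implementation above) might look as follows; pooling all conspecific neurons from the same cortical area as the predictors is the assumption being illustrated.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def intermouse_score(target_responses, conspecific_responses, lam=1.0):
    """target_responses: (n_images, n_target_neurons) from one mouse;
    conspecific_responses: (n_images, n_other_neurons) pooled from the same
    cortical area in all other mice. Returns the mean leave-one-out Pearson r
    across the target neurons."""
    scores = []
    for j in range(target_responses.shape[1]):
        preds = cross_val_predict(Ridge(alpha=lam), conspecific_responses,
                                  target_responses[:, j], cv=LeaveOneOut())
        scores.append(pearsonr(preds, target_responses[:, j])[0])
    return float(np.mean(scores))
```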
Averaging across both cortical area and model, the average distance (with 95% bootstrapped confidence intervals) between the best performing deep net feature spaces and the mean of the intermouse scores (expressed in the same units of Pearson’s r we’ve used heretofore) is 0.0985 [0.0940, 0.103]. Compare this to the same distance computed relative to the split-half reliability: 0.728 [0.726, 0.731]. On average, then, while our artificial models are capturing only a fraction of the total explainable variance relative to the split-half noise ceiling, they’re verging increasingly close to the predictive threshold suggested by the reweighting of biological neurons from the same species. The performance of models relative to the intermouse score may be seen in the lower half of Figure 3.
4 Discussion
Our intent with this work was to provide a preliminary atlas for future ventures into the deep neural network modeling of rodent visual cortex. To this end, we have deliberately invested in introspective analyses of the tools we used, as well as the curation of deep neural networks we hope will provide informative waypoints. Obviously, the atlas is far from complete. Other model classes like recurrent models [89, 90], equivariant models [91], and robotic models (e.g. for visual odometry [92]) are promising candidates for inclusion in future benchmarks, and our neural encoding & representational similarity metrics are just two of many variants.
Nevertheless, the results we have presented here indicate that neural recordings from the visual brains of mice can (with care and caution) be compared to deep neural networks using many of the same tools we’ve used to better characterize the visual brains of monkeys. Having as reference two animal models that occupy very different ecological niches and are separated by tens of millions of
years of evolution makes it far more likely that insights into vision gleaned across both are actually fundamental to perceptual meaning-making and not just some idiosyncratic quirk specific to any one evolutionary trajectory. Primate and rodent vision do differ rather drastically, even in fairly basic ways: mice lack a fovea, have a retina dominated by rods for vision under low light, and spatial acuity less than 20/1000 [93], making their primary visual system more akin to the primate peripheral system – and making it all the more curious that the same models explain decent amounts of variance in both. The differences between the species, it seems, may not be so irreconcilable at the level of modeling, but only with future work more carefully controlling for distinct aspects of each organism’s unique physiology (see Section A.5) can more concrete conclusions of this kind be made.
Beyond considerations of distinctive physiology is the indispensable point that perceptual systems should always be considered in service of behavior. It’s possible that mice mostly rely on vision as a sort of broad bandpass filter for lower-frequency, dynamic stimuli the animal can then flee, fight, or further investigate with its whiskers — perhaps its most sophisticated sensory organ [94]. Another possibility is that mice use vision to facilitate navigation. The dominance in our Taskonomy results of 2D segmentation, object recognition and semantic segmentation (all tasks that have elsewhere been shown to provide effective, transferable features for the simulation of robotic navigation) provide some evidence for this. Of course, the behavioral roles of rodent vision may very well be manifold. Understanding this plurality in a readily available model species could in the end be key for bridging the gaps that remain between biological and computer vision [95]. The unparalleled access, resolution, and control afforded by rodent neuroimaging have already revolutionized our understanding of the relationship between perceptual representation and behavioral output. Combined with novel methods like the embedding of neural networks in virtual agents [96] in ecologically realistic environments, this kind of data may well provide a testbed for better situating the tasks of computer vision in the broader behavioral context of agentic scene understanding.
In summary, only novel combinations of architecture, task and mapping will help to explain the highly reliable neural variance we’ve yet to explain in our current survey. Already this recombination is under way: Shi et al. [97] have created a custom CNN designed specifically to match (processing stage by processing stage) the anatomy of rodent visual cortex, while Nayebi et al. [88] have combined the power of self-supervised learning with smaller, shallower architectures to more fully account for the ethological realities of rodent behavior and the differences in computational bandwidth that shape and constrain their visual systems. More work of this variety will be necessary to more fully model the rich diversity and fiendish complexity of biological brains at scale – even the very smallest ones.
4.1 Acknowledgements
We thank Martin Schrimpf, Tiago Marques, Jim DiCarlo, as well as many others on the BrainScore team for helpful discussion, feedback, and inspiration. We would also like to thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support.
4.2 Code Availability
More results and code for the replication of our analysis may be found at this GitHub repository: github.com/ColinConwell/DeepMouseTrap (License GPL v2)
4.3 Compute Required
We used a single machine with 8 Nvidia RTX 3090 GPUs, 755 GB of RAM, and 96 CPUs. GPUs were used only for extracting model activations, and could (without major slowdown) be removed from the analytic pipeline. Dimensionality reduction and regression computations were CPU and RAM intensive. Replicating all of our results would take approximately two weeks on a similar machine.
4.4 Ethics Statement
Lest our science forget the life that powers it, we must note that behind the phenomenal dataset provided by the Allen Institute are 256 laboratory mice, each of which was subjected to multiple surgeries, a highly invasive neuroimaging technique and genetic engineering. The moral parameters of this particular praxis of neuroscience are contentious, and not without reason. While we believe centralized, comprehensive and (most importantly) public datasets like those provided by the Allen Institute may actually decrease the total number of laboratory animals required for similar kinds of empirical projects, we acknowledge with solemnity the cost to life required.
4.5 Funding Statement
This work was supported by the Center for Brains, Minds and Machines, NSF STC award 1231216, the MIT CSAIL Systems that Learn Initiative, the CBMM-Siemens Graduate Fellowship, the MIT-IBM Watson AI Lab, the DARPA Artificial Social Intelligence for Successful Teams (ASIST) program, the United States Air Force Research Laboratory and United States Air Force Artificial Intelligence Accelerator under Cooperative Agreement Number FA8750-19-2-1000, and the Office of Naval Research under Award Number N00014-20-1-2589 and Award Number N00014-20-1-2643. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Questions
1. What is the main contribution of the paper regarding the comparison between mouse visual system and computer vision models?
2. How does the proposed neural regression method differ from traditional ridge regression, and what advantages does it offer?
3. How do the findings in mice compare to those in monkeys and humans, and what implications does this have for our understanding of visual processing across species?
4. How were the samples used to train the encoding models, and how did the authors handle repetitions of different image stimuli in the leave-one-out cross validation setting?
5. Could the authors provide more clarification on the clustering method used in the study, specifically how they obtained "purely data-driven clusters"?
6. How do the prediction performances of the encoding models and RSA compare when considering chance performance in both settings?
7. What are some potential limitations of the study, such as the small sample size or the use of only one type of visual stimulus?
Summary Of The Paper
Several other reviewers have expressed that they find the current scientific contributions of the paper significant enough, so I will increase my score to a 6, even though the authors did not quite address my major concern about the novelty and utility of the proposed neural regression (i.e. what is the empirical benefit of neural regression over just ridge regression? if that is not understood, the authors should disclose that and not put so much emphasis on the importance of the regression method (in the title, in the abstract, and in the main text)).
I believe that the paper can benefit from reducing the amount of time spent discussing the neural regression method and the RSA vs encoding model comparisons, and more time focusing on the actually novel and significant part which is the comparison of the findings in mice to the previously established findings in monkeys and humans. And just to clarify, I think that only discussing what the findings are in mice is not enough on its own, because all the analyses that the authors carry out have been previously done in other organisms. So to actually have a significant contribution, I think it's important to emphasize the comparisons of the results in mice to the previously established results in other organisms.
=============== post-rebuttal =================
The manuscript presents an exploratory empirical study of the relationships between the mouse visual system and representations extracted from computer vision neural network models. The authors investigate representations from a large number of computer vision models, as well as representations from the same models but when the models are initialized randomly. They also explore two ways of relating the mouse recordings to the representations from the computer vision models: encoding models and representational similarity analysis. The authors additionally propose what they claim is a new encoding model methodology, though I remain unconvinced of its utility.
Review
The manuscript was easy to read and provides a broad exploration of an interesting question. I do however have several concerns about this work. I've summarized these below.
Methodological concerns. The authors claim to propose a "novel, highly optimized neural regression method" for encoding models. However I'm unclear about what parts of the proposed method (sparse random projection + ridge regression via generalized cross validation) the authors are claiming to be novel, and I'm also unclear about which of those parts are actually leading to a boost in prediction performance. The authors claim that ridge regression is a less popular encoding model method, though it is in fact the default method in encoding models in the language literature (Wehbe et al 2014, Huth et al. 2016, Jain and Huth 2018, Toneva and Wehbe 2019,...). The authors use leave one out cross validation which is indeed an uncommon choice, but it's also the default setting on the RidgeCV method in sklearn, so it's unclear what is "highly optimized" about this method. Also, it would be helpful if the authors could provide some clarification (both in the rebuttal and in the paper) about exactly what samples are used to train the encoding models (both in the mouse and the monkey experiments). Were separate encoding models built for different individuals? If separate encoding models were built, it's not clear how the predictions or predictions performance is aggregated across individuals. How were the repetitions of different image stimuli handled in the leave one out CV setting? I imagine that leaving out the data corresponding to one image while keeping data that corresponds to other repetitions in the training set can lead to some unfair underestimate of the generalization error. It would be interesting to compare the zero-shot performance of the leave-one-out proposed method to the zero-shot performance of the default Brain Score encoding model (i.e. zero-shot meaning that all repetitions of a test image are removed from the training set).
Questions about some of the interpretations.
L214-220: The authors interpret the result that transformer-based architectures show the largest differences in prediction performance between the trained and randomly initialized computer vision models as evidence that some of the better performing randomly initialized models may be due to an underlying convolutional architecture. Another possibility is that the randomly initialized models predict the neural recordings well across architectures, but the trained transformer-based architectures are much better at predicting the neural recordings. This possibility is consistent with a large difference in prediction performance between the trained and randomly initialized transformer-based models. To disentangle these, it would be helpful if the authors compared the prediction performances across different randomly initialized architectures with a statistical test.
The authors compare the results from RSA with those of encoding models and state that "with more probe images, we expect that the parametric linear mapping would be more competitive with the nonparameteric distances of representational similarity". It's not clear to me that the correlations from encoding models being lower than those from RSA means that the encoding model results are less "competitive". The results from an encoding model are from generalizing to a previously unseen datapoint, and the RSA results are not. It would be helpful if the authors provide an estimate of the chance performance in both the encoding model and the RSA setting to show that one is not inherently easier than the other.
Some lack of clarity in the methodology and experimental settings. The manuscript has a clear exposition for the most part but there are some methodological details that need to be better explained. Some examples:
The questions that I brought up in the previous point
L92-98: what is being clustered and how? Are these clusters from the original Taskonomy paper or did the authors actually obtain these "purely data-driven clusters".
L111-120: for a reader who is unfamiliar with RSA, the description in these lines may not be easy to parse. For example "We compute RDMs by correlating the activations of all neurons in a given neural site to the 118 images in the stimulus set" sounds like the authors are correlating neural activity to images, instead of computing pairwise correlations across the neural activations. Here it is again not clear how separate subjects are handled.
L185-196: are these results using the mouse or the monkey data?
We summarize the statistics from our benchmarking survey in five main results:
1. Training matters. The randomly initialized variants of some convolutional architectures fare well when predicting individual neural responses, but representational similarity is always better captured by features learned in service of some task. (Segmentation seems best.) 2. Features of intermediate complexity dominate in the prediction of all cortical sites, but both our mapping methods do demonstrate an upwards gradient in complexity from primary visual cortex onwards that roughly matches the information processing hierarchy proposed elsewhere in the rodent neurophysiology literature. 3. Taskonomic tools that have previously been shown to approximate functional organization in primates fail to strongly differentiate anatomical regions in mice, with the same kinds of tasks dominant across multiple, distinct neural sites. 4. When aggregated in similar ways, representational similarity and neural regression methods capture similar trends in the kinds of feature spaces that best predict the biology. 5. While still far from the overall noise ceiling for this highly reliable neural data, a variety of the artificial deep net models in our survey make predictions only slightly less accurate than ‘biological conspecific models’ composed of the neurons from other mice.
2 Methods
2.1 Neural Dataset
For neural data, we use the Allen Brain Observatory Visual Coding1 dataset [14] collected with twophoton calcium-imaging from the visual cortex of 256 awake adult transgenic mice and consisting of approximately 59,610 unique, individual neurons. Calcium-imaging fluorescence patterns are preprocessed and deconvolved by the Allen Institute2. The neurons sampled include neurons from 6 visual cortical areas at 4 cortical depths across 12 genetic cre lines. The visual experiments recorded activity for both artificial images (e.g., diffraction gratings) and 118 natural scenes. We analyze only
1Available with a non-commercial license under the Allen Institute terms of use: http://www.alleninstitute.org/legal/terms-use/
2More details are available in the whitepapers released with the observatory data: http://observatory.brain-map.org/visualcoding/transgenic
the latter to ensure comparable inputs to what is typically used in the training of deep nets. Each natural scene is displayed 50 times over the course of an assay.
To ensure an optimal signal to noise ratio, we perform a significant amount of subsetting on the full neural population, beginning by subsetting only excitatory neurons. Recent analyses suggest neural activity throughout mouse visual cortex is often impacted by extraneous, external body movements [15]. For this reason, we subsequently filter out any neurons whose peak responses to the presentation of natural scene images are significantly modulated by the mouse’s running speed, using an ANOVA metric provided by the Allen Institute. We further subselect neurons by assessing their split-half reliability across trials (with each split-half constituting 25 of 50 presentations for each image), keeping only those neurons exhibiting 0.8 reliability and above. This thresholding still leaves 6619 neurons for analysis, is in line with prior work on primates, and supports, for example, the construction of cortical representational dissimilarity matrices (RDMs) with split-half reliabilities as high as 0.93. (More details on the relationship between our metrics and neural reliability, including visualizations of some of our results across many degrees of thresholding, can be found in A.4 of the Appendix.)
2.2 Model Zoology
To explore the influence of model architecture on predictive performance, we use 26 model architectures from the Torchvision (PyTorch) model zoo [16] and 65 model architectures from the Timm [17] model zoo [18–52]. These models include convolutional networks, vision transformers, normalization-free networks and MLP-Mixer models. For each of these models, we extract the features from one trained and one randomly initialized variant (using whatever initialization scheme the model authors deemed best) so as to better disentangle what training on object recognition affords us in terms of predictive power.
2.3 Neural Taskonomy
Model zoology provides decent perspective on the computations related to object recognition, but the responsibilities of the visual cortex (no matter the species) extend far beyond identifying the category of an object. To probe a wider range of tasks, we turn to Taskonomy: a single architecture trained on 24 different common computer vision tasks [53], ranging from autoencoding to edge detection. The model weights we use are from updated PyTorch implementations of the original Tensorflow models [54]. Key to the engineering of Taskonomy is the use of an encoder-decoder design in which only the construction of the decoder varies across tasks. While recent analyses using a similar approach in human visual cortex with fMRI data [55] have tended to focus only on the latent space of each task’s encoder, we extract representations across all layers, better situating Taskonomy within the same empirical paradigm that has so far defined the modeling of object recognition in the primate brain. For further clarity, we cluster the 24 tasks according to their ‘Taskonomic’ category — a total of 5 clusters (2D, 3D, semantic, geometric or other) that we further collapse into 4 clusters (lumping the only member of the ‘other’ category — a denoising autoencoder — in with its closest cousin — a vanilla autoencoder in the ‘2D’ category). These purely data-driven clusters are derived from estimates of how effectively a set of features learned for one task transfer to (or boost the performance in) another task [53]. Use of the Taskonomy models provides a unique opportunity to test variance in training regimes without the confound of simultaneous changes in architecture.
2.4 Self-Supervised Models
Full category supervision, while robust in its ability to build representations that transfer well to a variety of tasks, suffers in its neuroscientific relevance as an ethologically plausible mode of learning. Recently, self-supervised models have begun to provide viable alternatives to the representations learned by category-supervised models in both computer vision [56, 57] and neural mapping [58, 59]. Here, we assess 22 self-supervision models from the VISSL model zoo [60], ranging from earlier iterations (e.g. DeepCluster [61]) to modern contrastive learning algorithms (e.g. BarlowTwins and Dino [62–65]). We use these models to assess whether category-supervision, however powerful it is in predicting neural activity, might eventually be supplanted by these more realistic alternatives. 14 of these models have as their base architecture a standard ResNet50; 8 are built atop vision transformers.
2.5 Comparing Representations across Biological & Artificial Networks
Two methods predominate in the comparison of neural recordings to deep neural networks: at the most abstract level, one of these compares representational geometries computed across the activations
of many individual neurons [66, 67]; the other attempts to predict the activity of individual neurons directly [67, 68]. Both of these techniques are grounded in the use of image-computable models and a shared stimulus set, but differ in the types of transformation applied to the neural activity generated by those stimuli. Given the difference in both target (neural populations versus individual neurons) and transforms (correlation matrices versus dimensionality reduction) we attempt a variant of each type of analysis here, comparing the two directly on the exact same neural data, with the same models and the same stimulus set, and in a granular, layer-by-layer fashion. (A more comprehensive review of neural mapping methods is provided in Section A.2 of the Appendix.)
2.5.1 Representational Similarity Analysis
To compare the representational geometries of a given model to the representational geometries of the brain, we begin by computing classic representational dissimilarity matrices (RDMs) [69]. We compute these RDMS by calculating the pairwise correlation coefficients between the neural response vectors for each image (one for each of the 6 cortical areas surveyed). We then repeat this procedure for the artificial networks, aggregating the responses of the artificial neurons in a given layer, before aggregating them once more into a correlation matrix. We then measure the relationship between the RDMs computed from the biological and artificial networks with a second-order Pearson correlation between the flattened upper triangles of each. The resultant coefficient constitutes the score for how well a given model layer predicts the representational similarity of a given cortical area.
2.5.2 Neural Regression (Encoding Models)
To more directly compare the biological and artificial neural activations in our data, we use a style of regression made popular in the modeling of primate visual cortex, epitomized by BrainScore [4]. Variants of this approach abound, but most consist of extracting model activations, performing dimensionality reduction, and then some form of cross-validated penalized or principal components regression. The dimensionality-reduced feature spaces of the model are used as the regressors of the activation patterns in a given neuron. After testing a number of these variants, we settled on sparse random projection for dimensionality reduction (which proved far more computationally efficient than standard PCA, without sacrifice in terms of regression scores), followed by ridge regression (in place of the more frequently used partial least squares regression).
The details of our method (programmed with [70]) are as follows: Given a network, we first extract a predetermined number of sparse random projections (4096, in this case) from the features of each layer — in line with the Johnson-Lindenstrauss lemma for the number of observations (images shown to the mice) in our data set 3. After extracting these projections, we regress them on the activity of each individual neuron using ridge regression (with a default lambda penalty of 1.0). The use of a penalized regression in this case allows us to monopolize generalized cross-validation (a linear algebraic form of leave-one-out cross-validation), yielding a set of predictions for the activity of each neuron for each image4. We then compute the Pearson correlation between the predicted and actual activity for each neuron to obtain a score per neuron per model layer, which we then aggregate by taking the mean of scores per neuron across cortical area.
We verify the efficacy of this method on the publicly available benchmarks of primate BrainScore, where (relative to BrainScore’s in-house regression method) we demonstrate provisional gains not only in terms of predictive score (sometimes up to r = 34%), but also in terms of speed and computational efficiency. (Details may be found in Section A.1 of the Appendix.)
2.6 Model Rankings
To rank the models according to how well they predict the variance in a given cortical area, we take the max across layers. In effect, this requires that a model ‘commit’ only one layer to the prediction of each area. In the case of our neural regression metric we call these scores the ‘SRP-Ridge Max’; in the case of our representational similarity metric we call these scores the ‘RSA Max’. A final mean taken over the SRP-Ridge Max and RSA Max scores per model per cortical area yields our overall model rankings, which serve as the basis for the bulk of our analyses.
3Note that in cases where the dimensionality of features is less than the number of projections suggested by the lemma, sparse random projections will actually upsample the feature space, rather than downsample it.
4The use of generalized cross-validation is particularly beneficial in datasets with fewer probe images, where k-fold cross-validation means losing a significant degree of information in each fit.
2.7 Non-Neural Network Baselines
Prior to the ascendancy of neural network models, a significant amount of time and craft was invested in the hand-engineering of features to simultaneously facilitate image recognition and capture meaningful subsets of neural variance. In this work, we test how well a small subset of those features is able to explain the variance in rodent visual cortex, using both our neural encoding and representational similarity metrics. Our non-neural network baselines consist of random Fourier features [71] (computed specifically to match the dimensionality of our neural network predictors), handcrafted Gabor filters, and GIST (spatial envelope) descriptors [72].
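As one example of these baselines, random Fourier features in the style of [71] can be generated as follows (a generic RBF-kernel approximation sketch; the bandwidth and seed are placeholders rather than our exact settings):

```python
# Generic random Fourier features (an RBF-kernel approximation); gamma and seed
# are placeholder values for illustration only.
import numpy as np

def random_fourier_features(X, n_features=4096, gamma=1.0, seed=0):
    """Map flattened images X (n_samples, d) to cosine features approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```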
3 Results
3.1 How do trained models compare to randomly initialized models?
Previous work in the deep neural network modeling of mouse visual cortex found that a randomly initialized VGG16 predicted neural responses as well as, if not slightly better than, a VGG16 trained on ImageNet [9], suggesting that the neural predictivity of the features produced by a trained object recognition model are perhaps no better than the features produced by a randomly initialized one. Our results, on the other hand, suggest that the neural predictivity of trained versus randomly initialized models more generally depends on both the particular model being tested and the particular method used to produce the mappings between model and brain.
At the level of individual neurons (neural regression), 17 of the 91 model architectures we tested had randomly initialized variants that either matched or outperformed their ImageNet-trained counterparts. Replicating previous findings, we found these 17 architectures to include VGG16, as well as all 3 other VGG variants (11, 13 & 19), AlexNet, the DenseNet architectures (121, 169, 201), and almost all of the normalization-free architectures. Despite this, a paired t-test of the difference in scores across all models demonstrates that ImageNet-trained architectures are still overall more performant than their randomly initialized counterparts (Student's t = 7.74, p = 1.37e-11, Hedge's ĝ = 0.81). At the level of emergent representational similarity (RSA), ImageNet-trained models categorically outperform their randomly initialized counterparts, and by a large margin (Student's t = 22.66, p = 5.81e-39, Hedge's ĝ = 2.36). Taken together, these results strongly affirm that training matters, and that randomly initialized features can only go so far in the prediction of meaningful neural variance. Differences between ImageNet-trained and randomly initialized models are shown in Figure 1.
3.2 What kinds of architectures best predict rodent visual cortex?
The overall best architecture for predicting mouse visual cortex across both individual neurons (SRP-Ridge) and population-level representation (RSA) was an Inception-ResNet hybrid (InceptionResNet-V2). There is a small, positive correlation between the depth of a model (the number of distinct layers) and its score on both the RSA Max and SRP-Ridge Max metrics (Spearman's r = 0.22, p = 0.001 and r = 0.192, p = 0.007, respectively), and a small, negative correlation between the total number of trainable parameters and the RSA Max score (Spearman's r = -0.18, p = 0.007). The latter of these is most likely driven by the relatively poor performance of parameter-dense architectures like VGG.
Markedly, trends previously noted in macaques [73] fail to materialize here. In particular, models with higher top-1 accuracies on ImageNet do not perform significantly better than models with lower top-1 accuracies. This relative parity seems to be driven in large part by newer models like EfficientNets, which have dominant scores on ImageNet across the board, but sometimes middling or poor scores in the predictions of rodent visual cortex we've tabulated here.
Compared to all other architectures, transformers on average fare slightly worse in the RSA Max metric (Student's t = 3.96, p = 0.004, Hedge's ĝ = 0.913), but moderately better in the SRP-Ridge Max metric (Student's t = 2.45, p = 0.023, Hedge's ĝ = 0.633). Strikingly, transformers and MLP-Mixers boast the largest differences between ImageNet-trained and randomly initialized variants in the SRP-Ridge Max metric, with all pairwise t-tests significant at alpha = 0.05 after Bonferroni correction for multiple comparisons. This strongly suggests that the advantage of those randomly initialized variants that matched or outperformed their ImageNet-trained counterparts is an advantage conferred by properties of convolutional architectures (e.g., translation invariance), and not necessarily an advantage shared across random feature spaces writ large. (The rankings of these and other architectures may be found in Figure 8 in the Appendix.)
3.3 What kinds of tasks best predict rodent visual cortex?
The overall best Taskonomy encoder across both the RSA and SRP-Ridge Max is 2D segmentation (ranking second and first respectively; see Figure 9 in the Appendix). At the level of individual neurons (SRP-Ridge), 2D tasks (keypoints, autoencoding, inpainting) dominate. At the level of representational similarity (RSA), all 2D tasks but 2D segmentation fall to the bottom of the rankings, and Semantic tasks (object recognition and semantic segmentation) rise to 2nd and 3rd place.
This shift in rank presents a curious case for interpretation, suggesting most likely that while the representations of individual neurons may be coordinated more by the lower-level, less abstract features necessary for performing well on most 2D tasks, the overall neural population codes are coordinated more by the parsing of the visual input into ethologically and spatially relevant units via the segmentation and classification tasks. Notably, the original research from which these PyTorch models were adopted offers an auxiliary data point that may anchor this interpretation more concretely. The top 3 models in our RSA Max metric (2D segmentation, object classification, semantic segmentation) are likewise in the top 5 of a ranking the original researchers produced by pitting the Taskonomy encoders against one another as pretrained 'perceptual systems' for reinforcement learning agents learning to navigate a virtual environment (see [54], figure 13 in the appendix). This raises the possibility that the reason these models optimally predict the visual neural population code for mice is simply that this code is coordinated in service of navigation.
3.4 How do category-supervised models compare to self-supervised models?
Whether with ResNet50 as their base, or a vision transformer, self-supervised models seem to be verging closer and closer to the predictive power of their category-supervised counterparts. Our most predictive self-supervised ResNet50, for example, (a MocoV2 model) effectively matches its category-supervised counterpart in the SRP-Ridge Max metric (with scores of .182 and .184, respectively), while slightly outperforming its category-supervised counterpart in the RSA Max metric (with scores of .422 and .415, respectively). While this single comparison by no means denotes a statistically significant superiority of self-supervised models (which would require training multiple iterations of each), it does begin to provide preliminary evidence for parity.
3.5 How well do non-neural network baselines predict rodent visual cortex?
Non-neural network baselines somewhat uniformly fail to predict neural activity as accurately as deep net features (though see Section A.10 of the Appendix for a counterexample). We tested three baselines: 1) a bank of Gabor filters applied to 8x8 grids of each image; 2) the PCs of the resultant feature matrices (i.e., the GIST descriptors [72]); and 3) the max across 600 iterations of 4096 random Fourier features (a dimensionality matching that of our SRPs). Ridge regressed with generalized cross-validation, these feature models yield average scores of 0.07, 0.06 and -0.014, respectively. Compared via representational similarity, they yield average scores of 0.20, 0.25 and 0.011.
3.6 How ‘deep’ are the layers that best predict rodent visual cortex?
Echoing previous results [7, 8], we find across all ImageNet-trained architectures, regardless of metric, that the features most predictive of rodent visual cortex are found about a third of the way into the model (though see Section A.5 of the Appendix for some caveats). These early to intermediate visual features go beyond basic edge detection but are far from the highly abstracted representations adjacent to final fully connected layers. Across Taskonomy encoders, 2D & Geometric tasks yield their best features in earlier layers; 3D & Semantic tasks yield their best features in more intermediate and later layers. Note that these aggregate motifs do not preclude subtler differences across cortical area, which we discuss in the section below.
3.7 Are there differences in model predictions across cortical area?
In this work, we address this question from two perspectives: that of hierarchy and that of function.
In primate visual cortex, it is common consensus that there exists a distinct information processing hierarchy along the ventral visual stream [74–76], with posterior sites like V1 and V3 defined by features like oriented edge detectors, and more anterior sites like V4 and IT defined by more complex morphologies. While there continues to be some debate as to whether a similar hierarchy exists in rodent visual cortex, a large body of anatomical, functional and physiological work [77–83] has coalesced around a meaningful hierarchy that consists first of a ventral / dorsal split after primary visual cortex (VISp), with VISp leading to VISl in the ventral stream and VISp leading to VISrl - VISal - VISpm - VISam in the dorsal stream. Strikingly, our modeling does seem to provide corresponding evidence for this circuit in the form of a data-driven hierarchy produced purely by taking the median depths of the model layers that best predict the neural activity in each of these cortical areas, and assessing for difference across them. A nonparametric ANOVA shows an overall difference in depth across cortical area to be significant for both our SRP-Ridge metric (Friedman's χ² = 34.08, p = 2.29e-06, Kendall's Ŵ = 0.04) and our RSA metric (Friedman's χ² = 37.05, p = 5.86e-07, Kendall's Ŵ = 0.06). Subsequent pairwise comparisons show many of the differences that underlie this group-level effect to be differences between earlier and later layers of the information processing hierarchy established in the literature. (For further details, see Figure 2.)
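A sketch of the hierarchy analysis just described, assuming a (models × areas) matrix containing the relative depth of each model's best-predicting layer in each cortical area (names and shapes are illustrative):

```python
# Sketch of the data-driven hierarchy test above; best_layer_depths is an assumed
# (n_models, n_areas) array of relative depths of each model's best-predicting layer.
import numpy as np
from scipy.stats import friedmanchisquare

def hierarchy_test(best_layer_depths):
    median_depth_per_area = np.median(best_layer_depths, axis=0)
    stat, p = friedmanchisquare(*best_layer_depths.T)  # one group of depths per cortical area
    return median_depth_per_area, stat, p
```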
Other differences across cortical area that we might expect are differences driven by function. Research into primate visual cortex over the last two decades has unveiled a significant degree of functional organization over and above purely anatomical organization [84–86], with distinct subregions defined in large part by their differential activity in response to different kinds of stimuli. To try to replicate this in mouse visual cortex, we search for Taskonomic organization, a proxy of functional organization wherein distinct neural sites are better or worse predicted by the features from different Taskonomy encoders. Curiously, and in contrast to previous findings in human fMRI [55], it seems to be the case that the scores of different Taskonomic clusters are relatively consistent across cortical area (see Figure 3). This suggests that mouse visual cortex may be more functionally (or Taskonomically) homogeneous than primate visual cortex, with anatomical descriptors providing little to no cue of functional difference – though this seems unlikely given other analyses we've performed showing greater similarities of neurons within cortical site than between cortical site (see Section A.11 for details). Another (more likely) alternative is that the tasks of computer vision are just not so neatly mapped onto the tasks of biological vision in mice.
3.8 How do the predictions compare across RSA and neural regression?
While prior work has addressed this question theoretically [87], it's rarely the case that representational similarity and neural regression are compared directly and empirically. Here, we compare our RSA and SRP-Ridge metrics both at the level of overall rankings (taking the max across layers) and at the level of individual layers, the latter of which provides a much more detailed assessment of how different feature spaces map to cortical representation.
In terms of overall rankings, the Spearman rank order correlation between the two methods is either 0.56 (p = 8.36 × 10⁻¹⁹) or 0.59 (p = 3.17 × 10⁻¹²), depending on whether you include or exclude the randomly initialized architectures. In terms of layer by layer comparisons, we decompose the Spearman correlation across distinct combinations of model and cortical area. The average coefficient between the two methods, along with bootstrapped 95% confidence intervals, is 0.468 [0.447, 0.489] or 0.626 [0.613, 0.639], again depending on the inclusion or exclusion of the random models. This suggests a significant degree of overlap between the kinds of features that optimally predict the representations of both individual neurons and neural populations. Of course, the averages here do obscure some meaningful subtrends and idiosyncrasies. For details, see Figure 4.
3.9 How well are we doing overall in predicting mouse visual cortex?
The overall best model in any cortical area across either of our metrics is unsupervised 2D segmentation in anterolateral visual area (VISal), with an RSA Max score of 0.538. The (Spearman-Brown) splithalf reliability of the RDM for this area (an effective proxy of its explainable variance) is 0.89. This means our most predictive model in any cortical area across any metric is little more than halfway to the noise ceiling.
Of course, it's possible this noise ceiling is a bit too strict. Instead of requiring the model to predict the neural data as well as the neural data predicts itself, another possible target to which we might recalibrate is the relative performance we would expect if (instead of an artificial neural network) we used the responses of another biological network as the model to predict neural activity. Inspired by recent work [88], and to better contextualize the scores of our SRP-Ridge metric, we attempted a version of this here. To compute this reference, we proceeded again neuron by neuron using the exact same neural regression method (dimensionality reduction and hyperparameters) described in Section 2.5.2, but instead of using the responses of a deep net layer as the predictors in our ridge regression, we used the responses of the neurons from the same cortical area in all other mice (conspecifics) across the donor sample. Conceptually, this 'intermouse score' represents how well we might do if our model of a given mouse brain were other mouse brains.
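Schematically, the intermouse score swaps the deep-net predictors for conspecific neurons while keeping the regression identical; the sketch below reuses the loo_ridge_predictions helper from the sketch in Section 2.5.2, and all names are illustrative.

```python
# Sketch of the 'intermouse score': predict each neuron in a target mouse from neurons
# recorded in the same area of all other mice, reusing the loo_ridge_predictions helper
# from the earlier sketch. All names are illustrative.
import numpy as np
from scipy.stats import pearsonr

def intermouse_scores(responses_by_mouse, target_idx, alpha=1.0):
    """responses_by_mouse: list of (n_images, n_neurons_i) arrays, one per mouse."""
    target = responses_by_mouse[target_idx]
    predictors = np.hstack([r for i, r in enumerate(responses_by_mouse) if i != target_idx])
    return np.array([pearsonr(loo_ridge_predictions(predictors, y, alpha), y)[0]
                     for y in target.T])
```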
Averaging across both cortical area and model, the average distance (with 95% bootstrapped confidence intervals) between the best performing deep net feature spaces and the mean of the intermouse scores (expressed in the same units of Pearson's r we've used heretofore) is 0.0985 [0.0940, 0.103]. Compare this to the same distance computed relative to the splithalf reliability: 0.728 [0.726, 0.731]. On average, then, while our artificial models are capturing only a fraction of the total explainable variance relative to the splithalf noise ceiling, they're verging increasingly close to the predictive threshold suggested by the reweighting of biological neurons from the same species. The performance of models relative to the intermouse score may be seen in the lower half of Figure 3.
4 Discussion
Our intent with this work was to provide a preliminary atlas for future ventures into the deep neural network modeling of rodent visual cortex. To this end, we have deliberately invested in introspective analyses of the tools we used, as well as the curation of deep neural networks we hope will provide informative waypoints. Obviously, the atlas is far from complete. Other model classes like recurrent models [89, 90], equivariant models [91], and robotic models (e.g. for visual odometry [92]) are promising candidates for inclusion in future benchmarks, and our neural encoding & representational similarity metrics are just two of many variants.
Nevertheless, the results we have presented here indicate that neural recordings from the visual brains of mice can (with care and caution) be compared to deep neural networks using many of the same tools we've used to better characterize the visual brains of monkeys. Having as reference two animal models that occupy very different ecological niches and are separated by tens of millions of
years of evolution makes it far more likely that insights into vision gleaned across both are actually fundamental to perceptual meaning-making and not just some idiosyncratic quirk specific to any one evolutionary trajectory. Primate and rodent vision do differ rather drastically, even in fairly basic ways: mice lack a fovea, have a retina dominated by rods for vision under low light, and spatial acuity less than 20/1000 [93], making their primary visual system more akin to the primate peripheral system – and making it all the more curious that the same models explain decent amounts of variance in both. The differences between the species, it seems, may not be so irreconcilable at the level of modeling, but only with future work more carefully controlling for distinct aspects of each organism’s unique physiology (see Section A.5) can more concrete conclusions of this kind be made.
Beyond considerations of distinctive physiology is the indispensable point that perceptual systems should always be considered in service of behavior. It’s possible that mice mostly rely on vision as a sort of broad bandpass filter for lower-frequency, dynamic stimuli the animal can then flee, fight, or further investigate with its whiskers — perhaps its most sophisticated sensory organ [94]. Another possibility is that mice use vision to facilitate navigation. The dominance in our Taskonomy results of 2D segmentation, object recognition and semantic segmentation (all tasks that have elsewhere been shown to provide effective, transferable features for the simulation of robotic navigation) provide some evidence for this. Of course, the behavioral roles of rodent vision may very well be manifold. Understanding this plurality in a readily available model species could in the end be key for bridging the gaps that remain between biological and computer vision [95]. The unparalleled access, resolution, and control afforded by rodent neuroimaging have already revolutionized our understanding of the relationship between perceptual representation and behavioral output. Combined with novel methods like the embedding of neural networks in virtual agents [96] in ecologically realistic environments, this kind of data may well provide a testbed for better situating the tasks of computer vision in the broader behavioral context of agentic scene understanding.
In summary, only novel combinations of architecture, task and mapping will help to explain the highly reliable neural variance we’ve yet to explain in our current survey. Already this recombination is under way: Shi et al. [97] have created a custom CNN designed specifically to match (processing stage by processing stage) the anatomy of rodent visual cortex, while Nayebi et al. [88] have combined the power of self-supervised learning with smaller, shallower architectures to more fully account for the ethological realities of rodent behavior and the differences in computational bandwidth that shape and constrain their visual systems. More work of this variety will be necessary to more fully model the rich diversity and fiendish complexity of biological brains at scale – even the very smallest ones.
4.1 Acknowledgements
We thank Martin Schrimpf, Tiago Marques, Jim DiCarlo, as well as many others on the BrainScore team for helpful discussion, feedback, and inspiration. We would also like to thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support.
4.2 Code Availability
More results and code for the replication of our analysis may be found at this GitHub repository: github.com/ColinConwell/DeepMouseTrap (License GPL v2)
4.3 Compute Required
We used a single machine with 8 Nvidia RTX 3090 GPUs, 755 GB of RAM, and 96 CPUs. GPUs were used only for extracting model activations, and could (without major slowdown) be removed from the analytic pipeline. Dimensionality reduction and regression computations were CPU and RAM intensive. Replicating all of our results would take approximately two weeks on a similar machine.
4.4 Ethics Statement
Lest our science forget the life that powers it, we must note that behind the phenomenal dataset provided by the Allen Institute are 256 laboratory mice, each of which was subjected to multiple surgeries, a highly invasive neuroimaging technique and genetic engineering. The moral parameters of this particular praxis of neuroscience are contentious, and not without reason. While we believe centralized, comprehensive and (most importantly) public datasets like those provided by the Allen Institute may actually decrease the total number of laboratory animals required for similar kinds of empirical projects, we acknowledge with solemnity the cost to life required.
4.5 Funding Statement
This work was supported by the Center for Brains, Minds and Machines, NSF STC award 1231216, the MIT CSAIL Systems that Learn Initiative, the CBMM-Siemens Graduate Fellowship, the MIT-IBM Watson AI Lab, the DARPA Artificial Social Intelligence for Successful Teams (ASIST) program, the United States Air Force Research Laboratory and United States Air Force Artificial Intelligence Accelerator under Cooperative Agreement Number FA8750-19-2-1000, and the Office of Naval Research under Award Number N00014-20-1-2589 and Award Number N00014-20-1-2643. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. | 1. What is the main contribution of the paper regarding artificial neural networks and mouse brain recordings?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of understanding the comparison between deep neural network activations and mouse brain activations?
3. How does the study establish a viable benchmark for relating mouse visual neuron recordings to artificial network activations?
4. What new information does the study provide regarding biological and artificial networks, especially regarding equivalences between primate and mouse?
5. How do the non-neural network baselines and neural networks perform on layer 4 primary cortex data, specifically in relation to the Gabor filter bank model?
6. What is the issue with the phrasing of correlational scores as a "geometric" analysis, and how could the authors differentiate their definition from others?
7. How does the overall differentiation among the models relate to the usefulness of the metrics themselves for this type of differentiation?
8. What are some possible alternative conclusions that could be drawn from the general lack of variation in scores reported from figures 1 and 2? | Summary Of The Paper
Review | Summary Of The Paper
The authors present a large-scale empirical study comparing artificial neural networks against mouse brain recordings using correlation-based metrics. They utilize a variety of publicly available neural networks as well as a large brain data corpus provided by the Allen Brain Institute. Their metrics are sampled from the field and rely on linear correlation based comparisons. They do also provide a variant metric that is more performant than the recently proposed BrainScore. They demonstrate that an artificial network’s training task and architecture individually influence metric performance. They also replicate some analyses previously done with primate data to emphasize similarities and differences.
Review
Strengths: The study is timely and comprehensive. It provides a sufficient level of rigor to establish itself as a viable benchmark for relating mouse visual neuron recordings to artificial network activations. Overall it is well written with easy-to-parse plots. The appendix also provides a concise and convincing defense for the methods chosen in the main manuscript.
Weaknesses: The claimed contribution of the work is to better understand what does/doesn't matter when comparing deep neural network activations against mouse brain activations. The abstract offers some argument for why we would want to do such a comparison in the first place, but in general the paper is pretty well fixated on the opening question of how well the networks fare as models of mouse cortex instead of what does comparing network responses to mouse cortex tell us. I realize that this work is building on an ongoing research thread from a couple of labs, who all provide their own motivations. Nonetheless, I still find myself wondering what new information the results gives us regarding biological & artificial networks. Maybe their emphasis on equivalences between primate and mouse is key? My recommendation is for them to try to make this point of why the comparison matters in the first place more salient in the intro or discussion.
The references need to be reviewed carefully. There are many [10, 26-28, 30-34, 36-42, 44-48] that do not include a publication venue. Others [24, 51, 74] are listed as arxiv preprints, but have since been published at peer-reviewed venues. Finally, [53] and [56] appear to be the same citation.
Additional minor comments:
Assuming you have this sort of resolution in your dataset, how well do the non-neural network baselines & neural networks perform on layer 4 primary cortex data? It has been suggested that this region has a dearth of “complex cells” as compared to primate data {1}. So I would imagine that the Gabor bank in particular will perform comparatively well in this area. I am asking because the Gabor filter bank model is still widely accepted as the standard among “interpretable” methods {2}, and given your discussion at the end about the meager overall performance it would be interesting to see if this focused area has a higher ceiling. Maybe this question is answered with the cre line split, but I’m not familiar with the different lines.
While I realize this is far from the authors’ problem to solve, I take issue with the phrasing of correlational scores as a “geometric” analysis. For example, one of a few quotes from the paper: “The resultant coefficient constitutes the score for how well a given model layer predicts the representational geometry of a given cortical area.” I found no mention of geometry in [56]. A quick internet search got me to {3}, which says “The dissimilarities can be interpreted as distances in the multivariate response space. The RDM thus describes the geometry of the arrangement of patterns in this space.” Maybe that’s where they got the term from? I find that it is a stretch to relate correlation values to distances and then further to a geometric description. My recommendation to the authors: use the term if you want, but I would suggest that you at least specify exactly what you mean by it. You should probably also differentiate your definition from others, e.g. that used by {4}, {5}, and {6}.
It appears to me that the overall differentiation among the models is quite small. This could lead to a possible alternative conclusion that none of these differentiations (task, architecture, depth, etc) really matter. Or, alternatively, that the metrics themselves are not useful for this type of differentiation. The noted winners/losers among architectures & tasks are certainly interesting observations, but I think it would help balance the paper if the authors commented on the general lack of variation in scores reported from figures 1 and 2. This would go well next to their already helpful discussion on the lack of total performance at the end of the paper.
{1} https://www.jneurosci.org/content/jneuro/28/30/7520.full.pdf
{2} https://www.annualreviews.org/doi/abs/10.1146/annurev-vision-091718-014731
{3} https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003553
{4} https://www.nature.com/articles/s41586-019-1346-5
{5} https://elifesciences.org/articles/44526
{6} https://www.sciencedirect.com/science/article/pii/S0042698915003600 |
NIPS | Title
Model Fusion via Optimal Transport
Abstract
Combining different models is a widely used paradigm in machine learning applications. While the most common approach is to form an ensemble of models and average their individual predictions, this approach is often rendered infeasible by given resource constraints in terms of memory and computation, which grow linearly with the number of models. We present a layer-wise model fusion algorithm for neural networks that utilizes optimal transport to (soft-) align neurons across the models before averaging their associated parameters. We show that this can successfully yield “one-shot” knowledge transfer (i.e., without requiring any retraining) between neural networks trained on heterogeneous non-i.i.d. data. In both i.i.d. and non-i.i.d. settings, we illustrate that our approach significantly outperforms vanilla averaging, as well as how it can serve as an efficient replacement for the ensemble with moderate fine-tuning, for standard convolutional networks (like VGG11), residual networks (like RESNET18), and multi-layer perceptrons on CIFAR10, CIFAR100, and MNIST. Finally, our approach also provides a principled way to combine the parameters of neural networks with different widths, and we explore its application for model compression. The code is available at the following link, https://github.com/sidak/otfusion.
1 Introduction
If two neural networks had a child, what would be its weights? In this work, we study the fusion of two parent neural networks—which were trained differently but have the same number of layers—into a single child network. We further focus on performing this operation in a one-shot manner, based on the network weights only, so as to minimize the need of any retraining.
This fundamental operation of merging several neural networks into one contrasts other widely used techniques for combining machine learning models:
Ensemble methods have a very long history. They combine the outputs of several different models as a way to improve the prediction performance and robustness. However, this requires maintaining the K trained models and running each of them at test time (say, in order to average their outputs). This approach thus quickly becomes infeasible for many applications with limited computational resources, especially in view of the ever-growing size of modern deep learning models.
The simplest way to fuse several parent networks into a single network of the same size is direct weight averaging, which we refer to as vanilla averaging; here for simplicity, we assume that all network architectures are identical. Unfortunately, neural networks are typically highly redundant in their parameterizations, so that there is no one-to-one correspondence between the weights of two different neural networks, even if they would describe the same function of the input. In practice, vanilla averaging is known to perform very poorly on trained networks whose weights differ non-trivially.
Finally, a third way to combine two models is distillation, where one network is retrained on its training data, while jointly using the output predictions of the other ‘teacher’ network on those samples. Such a scenario is considered infeasible in our setting, as we aim for approaches not requiring the sharing of training data. This requirement is particularly crucial if the training data is to be kept private, like in federated learning applications, or is unavailable due to e.g. legal reasons.

∗Work done while at EPFL.
Contributions. We propose a novel layer-wise approach of aligning the neurons and weights of several differently trained models, for fusing them into a single model of the same architecture. Our method relies on optimal transport (OT) [1, 2], to minimize the transportation cost of neurons present in the layers of individual models, measured by the similarity of activations or incoming weights. The resulting layer-wise averaging scheme can be interpreted as computing the Wasserstein barycenter [3, 4] of the probability measures defined at the corresponding layers of the parent models.
We empirically demonstrate that our method succeeds in the one-shot merging of networks of different weights, and in all scenarios significantly outperforms vanilla averaging. More surprisingly, we also show that our method succeeds in merging two networks that were trained for slightly different tasks (such as using a different set of labels). The method is able to “inherit” abilities unique to one of the parent networks, while outperforming the same parent network on the task associated with the other network. Further, we illustrate how it can serve as a data-free and algorithm independent post-processing tool for structured pruning. Finally, we show that OT fusion, with mild fine-tuning, can act as efficient proxy for the ensemble, whereas vanilla averaging fails for more than two models.
Extensions and Applications. The method serves as a new building block for enabling several use-cases: (1) The adaptation of a global model to personal training data. (2) Fusing the parameters of a bigger model into a smaller sized model and vice versa. (3) Federated or decentralized learning applications, where training data can not be shared due to privacy reasons or simply due to its large size. In general, improved model fusion techniques such as ours have strong potential towards encouraging model exchange as opposed to data exchange, to improve privacy & reduce communication costs.
2 Related Work
Ensembling. Ensemble methods [5–7] have long been in use in deep learning and machine learning in general. However, given our goal is to obtain a single model, it is assumed infeasible to maintain and run several trained models as needed here.
Distillation. Another line of work by Hinton et al. [8], Buciluǎ et al. [9], Schmidhuber [10] proposes distillation techniques. Here the key idea is to employ the knowledge of a pre-trained teacher network (typically larger and expensive to train) and transfer its abilities to a smaller model called the student network. During this transfer process, the goal is to use the relative probabilities of misclassification of the teacher as a more informative training signal.
While distillation also results in a single model, the main drawback is its computational complexity— the distillation process is essentially as expensive as training the student network from scratch, and also involves its own set of hyper-parameter tuning. In addition, distillation still requires sharing the training data with the teacher (as the teacher network can be too large to share), which we avoid here.
In a different line of work, Shen et al. [11] propose an approach where the student network is forced to produce outputs mimicking the teacher networks, by utilizing Generative Adversarial Network [12]. This still does not resolve the problem of high computational costs involved in this kind of knowledge transfer. Further, it does not provide a principled way to aggregate the parameters of different models.
Relation to other network fusion methods. Several studies have investigated a method to merge two trained networks into a single network without the need for retraining [13–15]. Leontev et al. [15] propose Elastic Weight Consolidation, which formulates an assignment problem on top of diagonal approximations to the Hessian matrices of each of the two parent neural networks. Their method however only works when the weights of the parent models are already close, i.e. share a significant part of the training history [13, 14], by relying on SGD with periodic averaging, also called local SGD [16]. Nevertheless, their empirical results [15] do not improve over vanilla averaging.
Alignment-based methods. Alignment of neurons was considered in Li et al. [17] to probe the representations learned by different networks. Recently, Yurochkin et al. [18] independently proposed a Bayesian non-parametric framework that considers matching the neurons of different MLPs in federated learning. In a concurrent work2, Wang et al. [19] extend [18] to more realistic networks
2An early version of our paper also appeared at NeurIPS 2019 workshop on OT, arxiv:1910.05653.
including CNNs, also with a specific focus on federated learning. In contrast, we develop our method from the lens of optimal transport (OT), which lends us a simpler approach by utilizing Wasserstein barycenters. The methods of aligning neurons employed in both lines of work form instances of the choice of ground metric in OT. Overall, we consider model fusion in general, beyond federated learning. For instance, we show applications of fusing different-sized models (e.g., for structured pruning) as well as the compatibility of our method to serve as an initialization for distillation. From a practical side, our approach is # of layers times more efficient and also applies to ResNets.
To conclude, the application of Wasserstein barycenters for averaging the weights of neural networks has—to our knowledge—not been considered in the past.
3 Background on Optimal Transport (OT)
We present a short background on OT in the discrete case, and in this process set up the notation for the rest of the paper. OT gives a way to compare two probability distributions defined over a ground space S, provided an underlying distance or more generally the cost of transporting one point to another in the ground space. Next, we describe the linear program (LP) which lies at the heart of OT.
LP Formulation. First, let us consider two empirical probability measures $\mu$ and $\nu$ denoted by a weighted sum of Diracs, i.e., $\mu = \sum_{i=1}^{n} \alpha_i \,\delta(x^{(i)})$ and $\nu = \sum_{i=1}^{m} \beta_i \,\delta(y^{(i)})$. Here $\delta(x)$ denotes the Dirac (unit mass) distribution at point $x \in S$, and the set of points is $X = (x^{(1)}, \dots, x^{(n)}) \in S^n$. The weight vector $\alpha = (\alpha_1, \dots, \alpha_n)$ lives in the probability simplex (and similarly $\beta$). Further, let $C_{ij}$ denote the ground cost of moving point $x^{(i)}$ to $y^{(j)}$. Then the optimal transport between $\mu$ and $\nu$ can be formulated as solving the following linear program:

$$\mathrm{OT}(\mu, \nu; C) := \min \,\langle T, C\rangle, \quad \text{with } T \in \mathbb{R}_{+}^{n \times m} \text{ such that } T\mathbf{1}_m = \alpha,\; T^{\top}\mathbf{1}_n = \beta.$$

Here, $\langle T, C\rangle := \mathrm{tr}\left(T^{\top}C\right) = \sum_{ij} T_{ij} C_{ij}$ is the Frobenius inner product of matrices. The optimal $T \in \mathbb{R}_{+}^{n \times m}$ is called the transportation matrix or transport map, and $T_{ij}$ represents the optimal amount of mass to be moved from point $x^{(i)}$ to $y^{(j)}$.

Wasserstein Distance. When $S = \mathbb{R}^d$ and the cost is defined with respect to a metric $D_S$ over $S$ (i.e., $C_{ij} = D_S(x^{(i)}, y^{(j)})^p$ for any $i, j$), OT establishes a distance between probability distributions. This is called the $p$-Wasserstein distance and is defined as $\mathcal{W}_p(\mu, \nu) := \mathrm{OT}(\mu, \nu; D_S^p)^{1/p}$.

Wasserstein Barycenters. This represents the notion of averaging in the Wasserstein space. To be precise, the Wasserstein barycenter [3] is a probability measure that minimizes the weighted sum of ($p$-th power) Wasserstein distances to the given $K$ measures $\{\mu_1, \dots, \mu_K\}$, with corresponding weights $\eta = \{\eta_1, \dots, \eta_K\} \in \Sigma_K$. Hence, it can be written as $\mathcal{B}_p(\mu_1, \dots, \mu_K) = \arg\min_{\nu} \sum_{k=1}^{K} \eta_k\, \mathcal{W}_p(\mu_k, \nu)^p$.
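For illustration, the discrete OT problem above can be solved with the POT library (ot.dist and ot.emd are existing POT functions; the toy data here is made up):

```python
# Toy illustration of the discrete OT problem above using the POT library.
import numpy as np
import ot

n, m, d = 5, 7, 3
x, y = np.random.randn(n, d), np.random.randn(m, d)   # supports of mu and nu
alpha, beta = np.ones(n) / n, np.ones(m) / m           # uniform histograms

C = ot.dist(x, y, metric='sqeuclidean')                # ground cost C_ij = ||x_i - y_j||^2
T = ot.emd(alpha, beta, C)                             # exact transport map (solves the LP)
w2 = np.sqrt(np.sum(T * C))                            # 2-Wasserstein distance W_2(mu, nu)
```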
4 Proposed Algorithm
In this section, we discuss our proposed algorithm for model aggregation. First, we consider that we are averaging the parameters of only two neural networks, but later present the extension to the multiple model case. For now, we ignore the bias parameters and we only focus on the weights. This is to make the presentation succinct, and it can be easily extended to take care of these aspects.
Motivation. As alluded to earlier in the introduction, the problem with vanilla averaging of parameters is the lack of one-to-one correspondence between the model parameters. In particular, for a given layer, there is no direct matching between the neurons of the two models. For example, this means that the p-th neuron of model A might behave very differently (in terms of the feature it detects) from the p-th neuron of the other model B, and instead might be quite similar in functionality to the (p+1)-th neuron. Imagine if we knew a perfect matching between the neurons: then we could simply align the neurons of model A with respect to B. Having done this, it would then make more sense to perform vanilla averaging of the neuron parameters. The matching or assignment could be formulated as a permutation matrix, and just multiplying the parameters by this matrix would align the parameters.
But in practice, it is more likely to have soft correspondences between the neurons of the two models for a given layer, especially if their number is not the same across the two models. This is where optimal transport comes in and provides us a soft-alignment matrix in the form of the transport map T . In other words, the alignment problem can be rephrased as optimally transporting the neurons in a given layer of model A to the neurons in the same layer of model B.
General procedure. Let us assume we are at some layer $\ell$ and that neurons in the previous layers have already been aligned. Then, we define probability measures over neurons in this layer for the two models as $\mu^{(\ell)} = \big(\alpha^{(\ell)}, X[\ell]\big)$ and $\nu^{(\ell)} = \big(\beta^{(\ell)}, Y[\ell]\big)$, where $X, Y$ are the measure supports.

Next, we use uniform distributions to initialize the histogram (or probability mass values) for each layer. We note that it is possible to additionally use other measures of neuron importance [20, 21], but we leave this for future work. In particular, if the size of layer $\ell$ of models A and B is denoted by $n^{(\ell)}$ and $m^{(\ell)}$ respectively, we get $\alpha^{(\ell)} \leftarrow \mathbf{1}_{n^{(\ell)}}/n^{(\ell)}$ and $\beta^{(\ell)} \leftarrow \mathbf{1}_{m^{(\ell)}}/m^{(\ell)}$. Now, in terms of the alignment procedure, we first align the incoming edge weights for the current layer $\ell$. This can be done by post-multiplying with the previous layer's transport matrix $T^{(\ell-1)}$, normalized appropriately via the inverse of the corresponding column marginals $\beta^{(\ell-1)}$:

$$\widehat{W}_A^{(\ell,\,\ell-1)} \leftarrow W_A^{(\ell,\,\ell-1)}\, T^{(\ell-1)}\, \mathrm{diag}\big(1/\beta^{(\ell-1)}\big). \qquad (1)$$

This update can be interpreted as follows: the matrix $T^{(\ell-1)}\,\mathrm{diag}\big(1/\beta^{(\ell-1)}\big)$ has $m^{(\ell-1)}$ columns in the simplex $\Sigma_{n^{(\ell-1)}}$, thus post-multiplying $W_A^{(\ell,\,\ell-1)}$ with it will produce a convex combination of the points in $W_A^{(\ell,\,\ell-1)}$ with weights defined by the optimal transport map $T^{(\ell-1)}$.
Once this has been done, we focus on aligning the neurons in this layer $\ell$ of the two models. Let us assume we have a suitable ground metric $D_S$ (which we discuss in the sections ahead). Then we compute the optimal transport map $T^{(\ell)}$ between the measures $\mu^{(\ell)}$ and $\nu^{(\ell)}$ for layer $\ell$, i.e., $T^{(\ell)}, \mathcal{W}_2 \leftarrow \mathrm{OT}(\mu^{(\ell)}, \nu^{(\ell)}, D_S)$, where $\mathcal{W}_2$ denotes the obtained Wasserstein distance. Now, we use this transport map $T^{(\ell)}$ to align the neurons (more precisely, the weights) of the first model (A) with respect to the second (B):

$$\widetilde{W}_A^{(\ell,\,\ell-1)} \leftarrow \mathrm{diag}\big(1/\beta^{(\ell)}\big)\, T^{(\ell)\top}\, \widehat{W}_A^{(\ell,\,\ell-1)}. \qquad (2)$$
We will refer to model A's weights $\widetilde{W}_A^{(\ell,\,\ell-1)}$ as those aligned with respect to model B. Hence, with this alignment in place, we can average the weights of the two layers to obtain the fused weight matrix $W_F^{(\ell,\,\ell-1)}$, as in Eq. (3). We carry out this procedure over all the layers sequentially.

$$W_F^{(\ell,\,\ell-1)} \leftarrow \frac{1}{2}\Big(\widetilde{W}_A^{(\ell,\,\ell-1)} + W_B^{(\ell,\,\ell-1)}\Big). \qquad (3)$$
Note that, since the input layer is ordered identically for both models, we start the alignment from second layer onwards. Additionally, the order of neurons for the very last layer, i.e., in the output layer, again is identical. Thus, the (scaled) transport map at the last layer will be equal to the identity.
Extension to multiple models. The key idea is to begin with an estimate M̂F of the fused model, then align all the given models with respect to it, and finally return the average of these aligned weights as the final weights for the fused model. For the two model case, this is equivalent to the procedure we discussed above when the fused model is initialized to model B, i.e., M̂F ← MB . Because, aligning model B with this estimate of the fused model will yield a (scaled) transport map equal to the identity. And then, Eq. (3) will amount to returning the average of the aligned weights.
Alignment strategies. The above discussion implies that we need to design a ground metric DS between the inter-model neurons. So, we branch out into the following two strategies:
(a) Activation-based alignment (ψ = 'acts'): In this variant, we run inference over a set of $m$ samples, $S = \{x_i\}_{i=1}^{m}$, and store the activations for all neurons in the model. Thus, we consider the neuron activations, concatenated over the samples into a vector, as the support of the measures, and we denote it as $X_k \leftarrow \mathrm{ACTS}\big(M_k(S)\big)$, $Y \leftarrow \mathrm{ACTS}\big(M_F(S)\big)$. Then the neurons across the two models are considered to be similar if they produce similar activation outputs for the given set of samples. We measure this by computing the Euclidean distance between the resulting vector of activations. This serves as the ground metric for OT computations. In practice, we use the pre-activations.
(b) Weight-based alignment (ψ = 'wts'): Here, we consider that the support of each neuron is given by the weights of the incoming edges (stacked in a vector). Thus, a neuron can be thought of as being represented by the row corresponding to it in the weight matrix. So, the support of the measures in such an alignment type is given by $X_k[\ell] \leftarrow \widehat{W}_k^{(\ell,\,\ell-1)}$, $Y[\ell] \leftarrow \widehat{W}_F^{(\ell,\,\ell-1)}$. The reasoning for such a choice of support stems from the neuron activation at a particular layer being calculated as the inner product between this weight vector and the previous layer's output. The ground metric used for OT is the Euclidean distance, like in the previous alignment strategy. Besides this difference of employing the actual weights in the ground metric (LINES 6, 10), the rest of the procedure is identical.
Lastly, the overall procedure is summarized in Algorithm 1 below, where the GETSUPPORT selects between the above strategies based on the value of ψ.
Algorithm 1: Model Fusion (with $\psi \in \{\text{'acts'}, \text{'wts'}\}$-alignment)
1: input: Trained models $\{M_k\}_{k=1}^{K}$ and an initial estimate of the fused model $\widehat{M}_F$
2: output: Fused model $M_F$ with weights $W_F$
3: notation: For model $M_k$, the size of layer $\ell$ is written as $n_k^{(\ell)}$, and the weight matrix between layers $\ell$ and $\ell-1$ is denoted as $W_k^{(\ell,\,\ell-1)}$. Neuron support tensors are given by $X_k, Y$.
4: initialize: The size of the input layer $n_k^{(1)} \leftarrow m^{(1)}$ for all $k \in [K]$; so $\alpha_k^{(1)} = \beta^{(1)} \leftarrow \mathbf{1}_{m^{(1)}}/m^{(1)}$ and the transport map is defined as $T_k^{(1)} \leftarrow \mathrm{diag}(\beta^{(1)})\, I_{m^{(1)} \times m^{(1)}}$.
5: for each layer $\ell = 2, \dots, L$ do
6:   $\beta^{(\ell)},\, Y[\ell] \leftarrow \mathbf{1}_{m^{(\ell)}}/m^{(\ell)},\, \mathrm{GETSUPPORT}(\widehat{M}_F, \psi, \ell)$
7:   $\nu^{(\ell)} \leftarrow \big(\beta^{(\ell)}, Y[\ell]\big)$  ▷ Define probability measure for initial fused model $\widehat{M}_F$
8:   for each model $k = 1, \dots, K$ do
9:     $\widehat{W}_k^{(\ell,\,\ell-1)} \leftarrow W_k^{(\ell,\,\ell-1)}\, T_k^{(\ell-1)}\, \mathrm{diag}\big(1/\beta^{(\ell-1)}\big)$  ▷ Align incoming edges for $M_k$
10:    $\alpha_k^{(\ell)},\, X_k[\ell] \leftarrow \mathbf{1}_{n_k^{(\ell)}}/n_k^{(\ell)},\, \mathrm{GETSUPPORT}(M_k, \psi, \ell)$
11:    $\mu_k^{(\ell)} \leftarrow \big(\alpha_k^{(\ell)}, X_k[\ell]\big)$  ▷ Define probability measure for model $M_k$
12:    $D_S^{(\ell)}[p, q] \leftarrow \|X_k[\ell][p] - Y[\ell][q]\|_2,\ \forall\, p \in [n_k^{(\ell)}],\, q \in [m^{(\ell)}]$  ▷ Form ground metric
13:    $T_k^{(\ell)},\, \mathcal{W}_2^{(\ell)} \leftarrow \mathrm{OT}\big(\mu_k^{(\ell)}, \nu^{(\ell)}, D_S^{(\ell)}\big)$  ▷ Compute OT map and distance
14:    $\widetilde{W}_k^{(\ell,\,\ell-1)} \leftarrow \mathrm{diag}\big(1/\beta^{(\ell)}\big)\, T_k^{(\ell)\top}\, \widehat{W}_k^{(\ell,\,\ell-1)}$  ▷ Align model $M_k$ neurons
15:  end for
16:  $W_F^{(\ell,\,\ell-1)} \leftarrow \frac{1}{K} \sum_{k=1}^{K} \widetilde{W}_k^{(\ell,\,\ell-1)}$  ▷ Average model weights
17: end for
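To make the procedure concrete, the following is a compact sketch of Algorithm 1 for two fully connected networks with weight-based alignment (ψ = 'wts'), using POT for the exact OT maps and model B as the initial estimate of the fused model. It is a simplified illustration (biases ignored, assumptions noted in the comments), not the released implementation.

```python
# Compact sketch of Algorithm 1 for two MLPs with weight-based alignment (psi = 'wts').
# Model B serves as the initial fused-model estimate; biases are ignored and both
# networks are assumed to have the same depth and number of output units.
import numpy as np
import ot

def ot_fuse_mlps(weights_A, weights_B):
    """weights_A, weights_B: lists of per-layer weight matrices of shape (n_out, n_in)."""
    fused, T_prev, beta_prev = [], None, None
    for idx, (wA, wB) in enumerate(zip(weights_A, weights_B)):
        # Eq. (1): align incoming edges of model A via the previous layer's transport map
        wA_hat = wA if T_prev is None else wA @ T_prev @ np.diag(1.0 / beta_prev)
        n, m = wA_hat.shape[0], wB.shape[0]
        alpha, beta = np.ones(n) / n, np.ones(m) / m        # uniform neuron histograms
        if idx < len(weights_A) - 1:
            # ground metric: distances between rows (incoming-weight vectors) of the two layers
            C = ot.dist(wA_hat, wB, metric='euclidean')
            T = ot.emd(alpha, beta, C)                      # exact transport map
        else:
            T = np.diag(beta)                               # output neurons are already aligned
        # Eq. (2): align model A's neurons; Eq. (3): average with model B
        wA_tilde = np.diag(1.0 / beta) @ T.T @ wA_hat
        fused.append(0.5 * (wA_tilde + wB))
        T_prev, beta_prev = T, beta
    return fused
```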
4.1 Discussion
Pros and cons of alignment type. An advantage of the weight-based alignment is that it is independent of the dataset samples, making it useful in privacy-constrained scenarios. On the flip side, the activation-based alignment only needs unlabeled data, and an interesting prospect for a future study would be to utilize synthetic data. But, activation-based alignment may help tailor the fusion to certain desired kinds of classes or domains. Fusion results for both are nevertheless similar.
Combinatorial hardness of the ideal procedure. In principle, we should actually search over the space of permutation matrices, jointly across all the layers. But this would be computationally
intractable for models such as deep neural networks, and thus we fuse in a layer-wise manner and in a way have a greedy procedure.
# of samples used for activation-based alignment. We typically consider a mini-batch of ∼ 100 to 400 samples for these experiments. Table S2 in the Appendix shows the effect of increasing this mini-batch size on the fusion performance, and we find that even as few as 25 samples are enough to outperform vanilla averaging.
Exact OT and runtime efficiency. Our fusion procedure is efficient enough for the deep neural networks considered here (VGG11, RESNET18), so we primarily utilize exact OT solvers. While the runtime of exact OT is roughly cubic in the cardinality of the measure supports, it is not an issue for us as this cardinality (which amounts to the network width) is ≤ 600 for these networks. In general, modern-day neural networks are typically deeper than wide. To give a concrete estimate, the time taken to fuse six VGG11 models is ≈ 15 seconds on 1 Nvidia V100 GPU (c.f. Section S1.4 for more details). It is possible to further improve the runtime by adopting the entropy-regularized OT [22], but this loses slightly in terms of test accuracy compared to exact OT (c.f. Table S4).
5 Experiments
Outline. We first present our results for one-shot fusion when the models are trained on different data distributions. Next, in Section 5.2, we consider (one-shot) fusion in the case when model sizes are different (i.e., unequal layer widths to be precise). In fact, this aspect facilitates a new tool that can be applied in ways not possible with vanilla averaging. Further on, we focus on the use-case of obtaining an efficient replacement for ensembling models in Section 5.3.
Empirical Details. We test our model fusion approach on standard image classification datasets, like CIFAR10 with commonly used convolutional neural networks (CNNs) such as VGG11 [23] and residual networks like ResNet18 [24]; and on MNIST, we use a fully connected network with 3 hidden layers of size 400, 200, 100, which we refer to as MLPNET. As baselines, we mention the performance of ‘prediction’ ensembling and ‘vanilla’ averaging, besides that of individual models. Prediction ensembling refers to keeping all the models and averaging their predictions (output layer scores), and thus reflects in a way the ideal (but unrealistic) performance that we can hope to achieve when fusing into a single model. Vanilla averaging denotes the direct averaging of parameters. All the performance scores are test accuracies. Full experimental details are provided in Appendix S1.1.
5.1 Fusion in the setting of heterogeneous data and tasks
We first consider the setting of merging two models A and B, but assume that model A has some special skill or knowledge (say, recognizing an object) which B does not possess. However, B is overall more powerful across the remaining set of skills in comparison to A. The goal of fusion now is to obtain a single model that can gain from the strength of B on overall skills and also acquire the specialized skill possessed by A. Such a scenario can arise e.g. in reinforcement learning where these models are agents that have had different training episodes so far. Another possible use case lies in federated learning [25], where model A is a client application that has been trained to perform well on certain tasks (like personalized keyword prediction) and model B is the server that typically has a strong skill set for a range of tasks (general language model).
The natural constraints in such scenarios are (a) ensuring privacy and (b) minimizing communication frequency. This implies that the training examples cannot be shared between A and B (to respect privacy), and that a one-shot knowledge transfer is ideally desired, which eliminates e.g. joint training.
At a very abstract level, these scenarios are representative of aggregating models that have been trained on non-i.i.d. data distributions. To simulate a heterogeneous data-split, we consider the MNIST digit classification task with MLPNET models, where the unique skill possessed by model A corresponds to recognizing one particular 'personalized' label (say 4), which is unknown to B. Model B contains 90% of the remaining training set (i.e., excluding the label 4), while A has the other 10%. Both are trained on their portions of the data for 10 epochs, and other training settings are identical.
Figure 2 illustrates the results for fusing models A and B (in different proportions), both when they have different parameter initializations or when they share the same initialization. OT fusion 3 significantly outperforms the vanilla averaging of their parameters in terms of the overall test accuracy
3Only the receiver A’s own examples are used for computing the activations, avoiding the sharing of data.
in both the cases, and also improves over the individual models. E.g., in Figure 2(a), where the individual models obtain 89.78% and 87.35% accuracy respectively on the overall (global) test set, OT avg. achieves the best overall test set accuracy of 93.11%. Thus, confirming the successful skill transfer from both parent models, without the need for any retraining.
Our obtained results are robust to other scenarios when (i) some other label (say 6) serves as the special skill and (ii) the % of remaining data split is different. These results are collected in the Appendix S5, where in addition we also present results without the special label as well.
The case of multiple models. In the above example of two models, one might also consider maintaining an ensemble; however, the associated costs of ensembling become prohibitive as soon as the number of models increases. Take, for instance, four models: A, B, C and D, with the same initialization, and assume that A again possesses the knowledge of a special digit (say, 4). Consider that the rest of the data is divided as 10%, 30%, 50%, 10%. Now, training in a similar setting as before, these models end up getting (global) test accuracies of 87.7%, 86.5%, 87.0%, 83.5% respectively. Ensembling the predictions yields 95.0% while vanilla averaging obtains 80.6%. In contrast, OT averaging results in 93.6% test accuracy (≈ 6% gain over the best individual model), while being 4× more efficient than ensembling. Further details can be found in Appendix S7.
5.2 Fusing different sized models
An advantage of our OT-based fusion is that it allows the layer widths to be different for each input model. Here, our procedure first identifies which weights of the bigger model should be mapped to the smaller model (via the transport map), and then averages the aligned models (now both of the size of the smaller one). We can thus combine the parameters of a bigger network into a smaller one, and vice versa, allowing new use-cases in (a) model compression and (b) federated learning.
(a) Post-processing tool for structured pruning. Structured pruning [26–28] is an approach to model compression that aims to remove entire neurons or channels, resulting in an out-of-the-box reduction in inference costs, while affecting the performance minimally. A widely effective method for CNNs is to remove the filters with smallest `1 norm [26]. Our key idea here is to fuse the original dense network into the pruned network, instead of just throwing it away.
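As a point of reference for the pruning step itself, the sketch below ranks the filters of one convolutional layer by their l1 norm, following the criterion of [26]; it is an illustrative stand-in, not the evaluation code used here.

```python
# Sketch of l1-norm structured pruning for one convolutional layer (PyTorch).
import torch

def l1_structured_prune(conv_weight: torch.Tensor, keep_frac: float):
    """conv_weight: (out_channels, in_channels, kH, kW); returns kept filters and their indices."""
    scores = conv_weight.abs().sum(dim=(1, 2, 3))                 # l1 norm of each output filter
    n_keep = max(1, int(keep_frac * conv_weight.shape[0]))
    keep_idx = torch.argsort(scores, descending=True)[:n_keep]    # keep the largest-norm filters
    return conv_weight[keep_idx], keep_idx
```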
Figure 3 shows the gain in test accuracy on CIFAR10 from carrying out the OT fusion procedure (with weight-based alignment) when different convolutional layers of VGG11 are pruned to increasing amounts. For all the layers, we consistently obtain a significant improvement in performance, and ≈ 10% or more gain in the high sparsity regime. We also observe similar improvements for other layers, as well as when multiple (or all) layers are pruned simultaneously (c.f. Appendix S8).
Further, these gains are also significant when measured with respect to the overall sparsity obtained in the model. E.g., structured pruning of CONV_8 to 90% results in a net sparsity of 23% in the model. Here, after pruning, the accuracy of the model drops from 90.3% to 81.5%, and on applying OT fusion, the performance recovers to 89.4%. As another example, take CONV_7, where after structured pruning to 80%, OT fusion improves the performance of the pruned model from 87.6% to 90.1%, while achieving an overall sparsity of 41% in the network (see S8).
Our goal here is not to propose a method for structured pruning, but rather a post-processing tool that can help regain the drop in performance due to pruning. These results are thus independent of the pruning algorithm used, and e.g., Appendix S8 shows similar gains when the filters are pruned based on `2 norm (Figure S10) or even randomly (Figure S11). Further, Figure S12 in the appendix also shows the results when applied to VGG11 trained on CIFAR100 (instead of CIFAR10). Overall, OT fusion offers a completely data-free approach to improving the performance of the pruned model, which can be handy in the limited data regime or when retraining is prohibitive.
(b) Adapting the size of client and server-side models in federated learning. Given the huge sizes of contemporary neural networks, it is evident that we will not be able to fit the same sized model on a client device as would be possible on the server. However, this might come at the cost of reduced performance. Further, the resource constraints might be fairly varied even amongst the client devices, thus necessitating the flexibility to adapt the model sizes.
We consider a similar formulation, as in the one-shot knowledge transfer setting from Section 5.1, except that now the model B has twice the layer widths as compared to the corresponding layers of model A. Vanilla averaging of parameters, a core component of the widely prevalent FedAvg algorithm [25], gets ruled out in such a setting. Figure 4 shows how OT fusion/average can still lead to a successful knowledge transfer between the given models.
5.3 Fusion for efficient ensembling
In this section, our goal is to obtain a single model which can serve as a proxy for an ensemble of models, even if it comes at a slight decrease in performance relative to the ensemble, for future efficiency. Specifically, here we investigate how much can be gained by fusing multiple models that differ only in their parameter initializations (i.e., seeds). This means that models are trained on the same data, so unlike in Section 5.1 with a heterogeneous data-split, the gain here might be limited.
We study this in the context of deep networks such as VGG11 and RESNET18, which have been trained to convergence on CIFAR10. As a first step, we consider the setting when we are given just two models, the results for which are presented in Table 1. We observe that vanilla averaging absolutely fails in this case, and is 3-5× worse than OT averaging, in the case of RESNET18 and VGG11 respectively. OT average, however, does not yet improve over the individual models. This can be attributed to the combinatorial hardness of
the underlying alignment problem, and the greedy nature of our algorithm as mentioned before. As a simple but effective remedy, we consider finetuning (i.e., retraining) from the fused or averaged models. Retraining helps for both vanilla and OT averaging, but in comparison, the OT averaging
results in a better score in both cases, as shown in Table 1. E.g., for RESNET18, OT avg. + finetuning gets almost as good as prediction ensembling on test accuracy.
The finetuning scores for vanilla and OT averaging correspond to their best obtained results, when retrained with several finetuning learning rate schedules for a total of 100 and 120 epochs in the case of VGG11 and RESNET18 respectively. We also considered finetuning the individual models across these various hyperparameter settings (which of course would be infeasible in practice), but the best accuracy mustered via this attempt for RESNET18 was 93.51, in comparison to 93.78 for OT avg. + finetuning. See Appendix S3 and S4 for detailed results and typical retraining curves.
More than 2 models. Now, we discuss the case of more than two models, where the savings in efficiency relative to the ensemble are even higher. As before, we take the case of VGG11 on CIFAR10 and additionally CIFAR100⁴, but now consider {4, 6, 8} such models that have been trained to convergence, each from a different parameter initialization. Table 2 shows the results for this in the case of CIFAR100 (results for CIFAR10 are similar and can be found in Table S9).
We find that the performance of vanilla averaging degrades to close-to-random performance, and interestingly it even fails to retrain, despite trying numerous settings of optimization hyperparameters (like learning rate and schedules, c.f. Section S3.2). In contrast, OT averaging performs significantly better even without fine-tuning, and results in a mean test accuracy gain of ∼{1.4%, 1.7%, 2%} over the best individual models after fine-tuning, in the case of {4, 6, 8} base models respectively. Overall, Tables 1 and 2 (also S9) show the importance of aligning the networks via OT before averaging. Further finetuning of the OT fused model always results in an improvement over the individual models, while being # models times more efficient than the ensemble.
Fusion and Distillation. For the sake of completeness, we also compare OT fusion, distillation, and their combination, in context of transferring the knowledge of a large pre-trained teacher network into a smaller pre-trained student network. We find that starting the distillation from the OT fused model yields better performance than initializing randomly or with the student model. Further, when averaged across the considered temperature values = {20, 10, 8, 4, 1}, we observe that distillation of the teacher into random or student network based initialization performs worse than simple OT avg. + finetuning (which also doesn’t require doing such a sweep that would be prohibitive for larger models/datasets). These experiments are discussed in detail in Appendix S12. An interesting direction for future work would be to use intermediate OT distances computed during fusion as a means for regularizing or distilling with hidden layers.
6 Conclusion
We show that averaging the weights of models, by first doing a layer-wise (soft) alignment of the neurons via optimal transport, can serve as a versatile tool for fusing models in various settings. This results in (a) successful one-shot transfer of knowledge between models without sharing training data, (b) data free and algorithm independent post-processing tool for structured pruning, (c) and more generally, combining parameters of different sized models. Lastly, the OT average when further finetuned, allows for just keeping one model rather than a complete ensemble of models at inference. Future avenues include application in distributed optimization and continual learning, besides extending our current toolkit to fuse models with different number of layers, as well as, fusing generative models like GANs [12] (where ensembling does not make as much sense). The promising empirical results of the presented algorithm, thus warrant attention for further use-cases.
⁴We simply adapt the VGG11 architecture used for CIFAR10 and train it on CIFAR100 for 300 epochs, since our focus here is not to obtain the best individual models, but rather to investigate the efficacy of fusion.
Broader Impact Model fusion is a fundamental building block in machine learning, as a way of direct knowledge transfer between trained neural networks. Beyond theoretical interest it can serve a wide range of concrete applications. For instance, collaborative learning schemes such as federated learning are of increasing importance for enabling privacy-preserving training of ML models, as well as a better alignment of each individual’s data ownership with the resulting utility from jointly trained machine learning models, especially in applications where data is user-provided and privacy sensitive [29]. Here fusion of several models is a key building block to allow several agents to participate in joint training and knowledge exchange. We propose that a reliable fusion technique can serve as a step towards more broadly enabling privacy-preserving and efficient collaborative learning.
Acknowledgments
We would like to thank Rémi Flamary, Boris Muzellec, Sebastian Stich and other members of MLO, as well as the anonymous reviewers for their comments and feedback. | 1. What is the focus and contribution of the paper on neural network fusion?
2. What are the strengths of the proposed approach, particularly in terms of its efficiency?
3. What are the weaknesses of the paper, and how does the reviewer suggest improving them? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The main contribution of this work is to introduce a layer-wise approach to fusing the neurons and weights of neural networks. The key idea here is to consider the barycenter-based fusion of networks.
Strengths
The main strength of the work is demonstrating improved efficacy for model fusion over typical approaches like averaging.
Weaknesses
See Additional feedback. |
NIPS | Title
Model Fusion via Optimal Transport
Abstract
Combining different models is a widely used paradigm in machine learning applications. While the most common approach is to form an ensemble of models and average their individual predictions, this approach is often rendered infeasible by given resource constraints in terms of memory and computation, which grow linearly with the number of models. We present a layer-wise model fusion algorithm for neural networks that utilizes optimal transport to (soft-) align neurons across the models before averaging their associated parameters. We show that this can successfully yield “one-shot” knowledge transfer (i.e, without requiring any retraining) between neural networks trained on heterogeneous non-i.i.d. data. In both i.i.d. and non-i.i.d. settings, we illustrate that our approach significantly outperforms vanilla averaging, as well as how it can serve as an efficient replacement for the ensemble with moderate fine-tuning, for standard convolutional networks (like VGG11), residual networks (like RESNET18), and multi-layer perceptrons on CIFAR10, CIFAR100, and MNIST. Finally, our approach also provides a principled way to combine the parameters of neural networks with different widths, and we explore its application for model compression. The code is available at the following link, https://github.com/sidak/otfusion.
1 Introduction
If two neural networks had a child, what would be its weights? In this work, we study the fusion of two parent neural networks—which were trained differently but have the same number of layers—into a single child network. We further focus on performing this operation in a one-shot manner, based on the network weights only, so as to minimize the need of any retraining.
This fundamental operation of merging several neural networks into one contrasts other widely used techniques for combining machine learning models:
Ensemble methods have a very long history. They combine the outputs of several different models as a way to improve the prediction performance and robustness. However, this requires maintaining the K trained models and running each of them at test time (say, in order to average their outputs). This approach thus quickly becomes infeasible for many applications with limited computational resources, especially in view of the ever-growing size of modern deep learning models.
The simplest way to fuse several parent networks into a single network of the same size is direct weight averaging, which we refer to as vanilla averaging; here for simplicity, we assume that all network architectures are identical. Unfortunately, neural networks are typically highly redundant in their parameterizations, so that there is no one-to-one correspondence between the weights of two different neural networks, even if they would describe the same function of the input. In practice, vanilla averaging is known to perform very poorly on trained networks whose weights differ non-trivially.
Finally, a third way to combine two models is distillation, where one network is retrained on its training data, while jointly using the output predictions of the other 'teacher' network on those samples. Such a scenario is considered infeasible in our setting, as we aim for approaches not requiring the sharing of training data. This requirement is particularly crucial if the training data is to be kept private, like in federated learning applications, or is unavailable due to e.g. legal reasons.
∗Work done while at EPFL.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Contributions. We propose a novel layer-wise approach of aligning the neurons and weights of several differently trained models, for fusing them into a single model of the same architecture. Our method relies on optimal transport (OT) [1, 2], to minimize the transportation cost of neurons present in the layers of individual models, measured by the similarity of activations or incoming weights. The resulting layer-wise averaging scheme can be interpreted as computing the Wasserstein barycenter [3, 4] of the probability measures defined at the corresponding layers of the parent models.
We empirically demonstrate that our method succeeds in the one-shot merging of networks of different weights, and in all scenarios significantly outperforms vanilla averaging. More surprisingly, we also show that our method succeeds in merging two networks that were trained for slightly different tasks (such as using a different set of labels). The method is able to “inherit” abilities unique to one of the parent networks, while outperforming the same parent network on the task associated with the other network. Further, we illustrate how it can serve as a data-free and algorithm independent post-processing tool for structured pruning. Finally, we show that OT fusion, with mild fine-tuning, can act as efficient proxy for the ensemble, whereas vanilla averaging fails for more than two models.
Extensions and Applications. The method serves as a new building block for enabling several use-cases: (1) The adaptation of a global model to personal training data. (2) Fusing the parameters of a bigger model into a smaller sized model and vice versa. (3) Federated or decentralized learning applications, where training data can not be shared due to privacy reasons or simply due to its large size. In general, improved model fusion techniques such as ours have strong potential towards encouraging model exchange as opposed to data exchange, to improve privacy & reduce communication costs.
2 Related Work
Ensembling. Ensemble methods [5–7] have long been in use in deep learning and machine learning in general. However, given our goal is to obtain a single model, it is assumed infeasible to maintain and run several trained models as needed here.
Distillation. Another line of work by Hinton et al. [8], Buciluǎ et al. [9], Schmidhuber [10] proposes distillation techniques. Here the key idea is to employ the knowledge of a pre-trained teacher network (typically larger and expensive to train) and transfer its abilities to a smaller model called the student network. During this transfer process, the goal is to use the relative probabilities of misclassification of the teacher as a more informative training signal.
While distillation also results in a single model, the main drawback is its computational complexity— the distillation process is essentially as expensive as training the student network from scratch, and also involves its own set of hyper-parameter tuning. In addition, distillation still requires sharing the training data with the teacher (as the teacher network can be too large to share), which we avoid here.
In a different line of work, Shen et al. [11] propose an approach where the student network is forced to produce outputs mimicking the teacher networks, by utilizing Generative Adversarial Network [12]. This still does not resolve the problem of high computational costs involved in this kind of knowledge transfer. Further, it does not provide a principled way to aggregate the parameters of different models.
Relation to other network fusion methods. Several studies have investigated a method to merge two trained networks into a single network without the need for retraining [13–15]. Leontev et al. [15] propose Elastic Weight Consolidation, which formulates an assignment problem on top of diagonal approximations to the Hessian matrices of each of the two parent neural networks. Their method however only works when the weights of the parent models are already close, i.e. share a significant part of the training history [13, 14], by relying on SGD with periodic averaging, also called local SGD [16]. Nevertheless, their empirical results [15] do not improve over vanilla averaging.
Alignment-based methods. Alignment of neurons was considered in Li et al. [17] to probe the representations learned by different networks. Recently, Yurochkin et al. [18] independently proposed a Bayesian non-parametric framework that considers matching the neurons of different MLPs in federated learning. In a concurrent work2, Wang et al. [19] extend [18] to more realistic networks
2An early version of our paper also appeared at NeurIPS 2019 workshop on OT, arxiv:1910.05653.
including CNNs, also with a specific focus on federated learning. In contrast, we develop our method from the lens of optimal transport (OT), which lends us a simpler approach by utilizing Wasserstein barycenters. The methods of aligning neurons employed in both lines of work are instances of particular choices of the ground metric in OT. Overall, we consider model fusion in general, beyond federated learning. For instance, we show applications of fusing different sized models (e.g., for structured pruning) as well as the compatibility of our method to serve as an initialization for distillation. From a practical side, our approach is # of layers times more efficient and also applies to ResNets.
To conclude, the application of Wasserstein barycenters for averaging the weights of neural networks has—to our knowledge—not been considered in the past.
3 Background on Optimal Transport (OT)
We present a short background on OT in the discrete case, and in this process set up the notation for the rest of the paper. OT gives a way to compare two probability distributions defined over a ground space S, provided an underlying distance or more generally the cost of transporting one point to another in the ground space. Next, we describe the linear program (LP) which lies at the heart of OT.
LP Formulation. First, let us consider two empirical probability measures $\mu$ and $\nu$ denoted by a weighted sum of Diracs, i.e., $\mu = \sum_{i=1}^{n} \alpha_i\, \delta(x^{(i)})$ and $\nu = \sum_{j=1}^{m} \beta_j\, \delta(y^{(j)})$. Here $\delta(x)$ denotes the Dirac (unit mass) distribution at point $x \in S$, and the set of points is $X = (x^{(1)}, \dots, x^{(n)}) \in S^n$. The weight vector $\alpha = (\alpha_1, \dots, \alpha_n)$ lives in the probability simplex (and similarly $\beta$). Further, let $C_{ij}$ denote the ground cost of moving point $x^{(i)}$ to $y^{(j)}$. Then the optimal transport between $\mu$ and $\nu$ can be formulated as solving the following linear program:
$$\mathrm{OT}(\mu, \nu; C) := \min_{T}\ \langle T, C\rangle \quad \text{such that} \quad T \in \mathbb{R}^{n \times m}_{+},\ \ T\mathbf{1}_m = \alpha,\ \ T^{\top}\mathbf{1}_n = \beta.$$
Here, $\langle T, C\rangle := \mathrm{tr}\!\left(T^{\top} C\right) = \sum_{ij} T_{ij} C_{ij}$ is the Frobenius inner product of matrices. The optimal $T \in \mathbb{R}^{n \times m}_{+}$ is called the transportation matrix or transport map, and $T_{ij}$ represents the optimal amount of mass to be moved from point $x^{(i)}$ to $y^{(j)}$.
Wasserstein Distance. When $S = \mathbb{R}^d$ and the cost is defined with respect to a metric $D_S$ over $S$ (i.e., $C_{ij} = D_S(x^{(i)}, y^{(j)})^p$ for any $i, j$), OT establishes a distance between probability distributions. This is called the $p$-Wasserstein distance and is defined as $\mathcal{W}_p(\mu, \nu) := \mathrm{OT}(\mu, \nu; D_S^p)^{1/p}$.
Wasserstein Barycenters. This represents the notion of averaging in the Wasserstein space. To be precise, the Wasserstein barycenter [3] is a probability measure that minimizes the weighted sum of ($p$-th power) Wasserstein distances to the given $K$ measures $\{\mu_1, \dots, \mu_K\}$, with corresponding weights $\eta = (\eta_1, \dots, \eta_K) \in \Sigma_K$. Hence, it can be written as $\mathcal{B}_p(\mu_1, \dots, \mu_K) = \arg\min_{\nu} \sum_{k=1}^{K} \eta_k\, \mathcal{W}_p(\mu_k, \nu)^p$.
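To make the linear program above concrete, here is a small toy example using the POT library (assuming it is installed via `pip install pot`); the point clouds and weights are made up for illustration.

```python
# Toy example of the OT linear program and the resulting 2-Wasserstein distance.
import numpy as np
import ot  # POT: Python Optimal Transport

x = np.random.randn(5, 2)               # support of mu (n = 5 points in R^2)
y = np.random.randn(3, 2)               # support of nu (m = 3 points in R^2)
alpha = np.ones(5) / 5                   # uniform histograms on the simplex
beta = np.ones(3) / 3

C = ot.dist(x, y, metric="euclidean")    # ground cost C_ij = ||x_i - y_j||_2
T = ot.emd(alpha, beta, C)               # exact transport map solving the LP
w2 = (T * C**2).sum() ** 0.5             # W_2(mu, nu) = (sum_ij T_ij C_ij^2)^(1/2)
```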
4 Proposed Algorithm
In this section, we discuss our proposed algorithm for model aggregation. First, we consider that we are averaging the parameters of only two neural networks, but later present the extension to the multiple model case. For now, we ignore the bias parameters and we only focus on the weights. This is to make the presentation succinct, and it can be easily extended to take care of these aspects.
Motivation. As alluded to earlier in the introduction, the problem with vanilla averaging of parameters is the lack of one-to-one correspondence between the model parameters. In particular, for a given layer, there is no direct matching between the neurons of the two models. For example, this means that the $p$-th neuron of model A might behave very differently (in terms of the feature it detects) from the $p$-th neuron of the other model B, and instead might be quite similar in functionality to the $(p+1)$-th neuron. Imagine if we knew a perfect matching between the neurons: then we could simply align the neurons of model A with respect to B. Having done this, it would then make more sense to perform vanilla averaging of the neuron parameters. The matching or assignment could be formulated as a permutation matrix, and just multiplying the parameters by this matrix would align the parameters.
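To make the hard-matching intuition concrete, the following toy numpy check (ours, not from the paper) verifies that a consistent permutation of hidden neurons leaves a small two-layer network's function unchanged:

```python
# Permuting the rows of W1 (hidden neurons) together with the matching columns of W2
# produces exactly the same network function, which is what alignment tries to exploit.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
x = rng.standard_normal(3)

perm = np.array([2, 0, 3, 1])
P = np.eye(4)[perm]                                   # permutation matrix

out_original = W2 @ np.maximum(W1 @ x, 0.0)
out_permuted = (W2 @ P.T) @ np.maximum((P @ W1) @ x, 0.0)
assert np.allclose(out_original, out_permuted)
```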
But in practice, it is more likely to have soft correspondences between the neurons of the two models for a given layer, especially if their number is not the same across the two models. This is where optimal transport comes in and provides us a soft-alignment matrix in the form of the transport map T . In other words, the alignment problem can be rephrased as optimally transporting the neurons in a given layer of model A to the neurons in the same layer of model B.
General procedure. Let us assume we are at some layer $\ell$ and that neurons in the previous layers have already been aligned. Then, we define probability measures over neurons in this layer for the two models as $\mu^{(\ell)} = \left(\alpha^{(\ell)}, X[\ell]\right)$ and $\nu^{(\ell)} = \left(\beta^{(\ell)}, Y[\ell]\right)$, where $X, Y$ are the measure supports.
Next, we use uniform distributions to initialize the histogram (or probability mass values) for each layer; we note that it is possible to additionally use other measures of neuron importance [20, 21], but we leave this for future work. In particular, if the sizes of layer $\ell$ of models A and B are denoted by $n^{(\ell)}$ and $m^{(\ell)}$ respectively, we get $\alpha^{(\ell)} \leftarrow \mathbf{1}_{n^{(\ell)}}/n^{(\ell)}$ and $\beta^{(\ell)} \leftarrow \mathbf{1}_{m^{(\ell)}}/m^{(\ell)}$. Now, in terms of the alignment procedure, we first align the incoming edge weights for the current layer $\ell$. This can be done by post-multiplying with the transport matrix $T^{(\ell-1)}$ of the previous layer, normalized appropriately via the inverse of the corresponding column marginals $\beta^{(\ell-1)}$:
$$\widehat{W}^{(\ell,\,\ell-1)}_A \leftarrow W^{(\ell,\,\ell-1)}_A\, T^{(\ell-1)}\, \mathrm{diag}\!\left(1/\beta^{(\ell-1)}\right). \qquad (1)$$
This update can be interpreted as follows: the matrix $T^{(\ell-1)}\, \mathrm{diag}\!\left(1/\beta^{(\ell-1)}\right)$ has $m^{(\ell-1)}$ columns in the simplex $\Sigma_{n^{(\ell-1)}}$, thus post-multiplying $W^{(\ell,\,\ell-1)}_A$ with it will produce a convex combination of the points in $W^{(\ell,\,\ell-1)}_A$ with weights defined by the optimal transport map $T^{(\ell-1)}$.
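In matrix terms, Eq. (1) is just two matrix products; a small numpy sketch (with our own variable names) could look as follows.

```python
# Sketch of Eq. (1): align the incoming edges of model A with the previous layer's transport map.
import numpy as np

def align_incoming(W_A, T_prev, beta_prev):
    """W_A: (n_l, n_{l-1}) weights; T_prev: (n_{l-1}, m_{l-1}) transport map;
    beta_prev: (m_{l-1},) histogram of model B's previous layer."""
    return W_A @ T_prev @ np.diag(1.0 / beta_prev)    # W_hat_A of Eq. (1)
```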
Once this has been done, we focus on aligning the neurons in this layer $\ell$ of the two models. Let us assume we have a suitable ground metric $D_S$ (which we discuss in the sections ahead). Then we compute the optimal transport map $T^{(\ell)}$ between the measures $\mu^{(\ell)}$ and $\nu^{(\ell)}$ for layer $\ell$, i.e., $T^{(\ell)}, \mathcal{W}_2 \leftarrow \mathrm{OT}(\mu^{(\ell)}, \nu^{(\ell)}, D_S)$, where $\mathcal{W}_2$ denotes the obtained Wasserstein distance. Now, we use this transport map $T^{(\ell)}$ to align the neurons (more precisely, the weights) of the first model (A) with respect to the second (B):
$$\widetilde{W}^{(\ell,\,\ell-1)}_A \leftarrow \mathrm{diag}\!\left(1/\beta^{(\ell)}\right) T^{(\ell)\top}\, \widehat{W}^{(\ell,\,\ell-1)}_A. \qquad (2)$$
We will refer to model A's weights $\widetilde{W}^{(\ell,\,\ell-1)}_A$ as those aligned with respect to model B. Hence, with this alignment in place, we can average the weights of the two layers to obtain the fused weight matrix $W^{(\ell,\,\ell-1)}_F$, as in Eq. (3). We carry out this procedure over all the layers sequentially.
$$W^{(\ell,\,\ell-1)}_F \leftarrow \frac{1}{2}\left(\widetilde{W}^{(\ell,\,\ell-1)}_A + W^{(\ell,\,\ell-1)}_B\right). \qquad (3)$$
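Continuing the sketch above, Eqs. (2) and (3) then read (again with illustrative variable names):

```python
# Sketch of Eqs. (2)-(3): align model A's neurons in layer l and average with model B.
import numpy as np

def align_and_fuse(W_hat_A, W_B, T_l, beta_l):
    """W_hat_A: (n_l, m_{l-1}) output of align_incoming; W_B: (m_l, m_{l-1});
    T_l: (n_l, m_l) transport map for layer l; beta_l: (m_l,) histogram of model B's layer l."""
    W_tilde_A = np.diag(1.0 / beta_l) @ T_l.T @ W_hat_A   # Eq. (2): aligned weights of A
    return 0.5 * (W_tilde_A + W_B)                        # Eq. (3): fused layer weights
```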
Note that, since the input layer is ordered identically for both models, we start the alignment from the second layer onwards. Additionally, the order of neurons for the very last layer, i.e., the output layer, is again identical. Thus, the (scaled) transport map at the last layer will be equal to the identity.
Extension to multiple models. The key idea is to begin with an estimate $\widehat{M}_F$ of the fused model, then align all the given models with respect to it, and finally return the average of these aligned weights as the final weights for the fused model. For the two-model case, this is equivalent to the procedure discussed above when the fused model is initialized to model B, i.e., $\widehat{M}_F \leftarrow M_B$: aligning model B with this estimate of the fused model yields a (scaled) transport map equal to the identity, and then Eq. (3) amounts to returning the average of the aligned weights.
Alignment strategies. The above discussion implies that we need to design a ground metric DS between the inter-model neurons. So, we branch out into the following two strategies:
(a) Activation-based alignment (ψ = 'acts'): In this variant, we run inference over a set of $m$ samples, $S = \{x_i\}_{i=1}^{m}$, and store the activations for all neurons in the model. Thus, we consider the neuron activations, concatenated over the samples into a vector, as the support of the measures, and we denote it as $X_k \leftarrow \mathrm{ACTS}\left(M_k(S)\right)$, $Y \leftarrow \mathrm{ACTS}\left(\widehat{M}_F(S)\right)$. Then the neurons across the two models are considered to be similar if they produce similar activation outputs for the given set of samples. We measure this by computing the Euclidean distance between the resulting vectors of activations, which serves as the ground metric for the OT computations. In practice, we use the pre-activations.
(b) Weight-based alignment (ψ = 'wts'): Here, we consider that the support of each neuron is given by the weights of the incoming edges (stacked in a vector). Thus, a neuron can be thought of as being represented by the corresponding row in the weight matrix. So, the support of the measures in this alignment type is given by $X_k[\ell] \leftarrow \widehat{W}^{(\ell,\,\ell-1)}_k$, $Y[\ell] \leftarrow \widehat{W}^{(\ell,\,\ell-1)}_F$. The reasoning for such a choice of support stems from the neuron activation at a particular layer being calculated as the inner product between this weight vector and the previous layer's output. The ground metric used for OT is the Euclidean distance, as in the previous alignment strategy. Besides this difference of employing the actual weights in the ground metric (LINES 6, 10), the rest of the procedure is identical.
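Either choice of support reduces to a pairwise Euclidean cost matrix followed by an OT solve; a minimal sketch (assuming the POT library, with function names of our own choosing) is:

```python
# Sketch: given neuron supports from either strategy, build the ground metric and the OT map.
import numpy as np
import ot
from scipy.spatial.distance import cdist

def layer_transport_map(X_k, Y):
    """X_k: (n_l, d) supports of model k's neurons; Y: (m_l, d) supports of the fused estimate.
    For 'wts', the supports are rows of the (aligned) weight matrices; for 'acts', they are
    vectors of pre-activations collected over a small batch of samples."""
    alpha = np.ones(X_k.shape[0]) / X_k.shape[0]
    beta = np.ones(Y.shape[0]) / Y.shape[0]
    D = cdist(X_k, Y, metric="euclidean")     # D_S[p, q] = ||X_k[p] - Y[q]||_2
    return ot.emd(alpha, beta, D), beta       # exact transport map T^(l) and histogram
```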
Lastly, the overall procedure is summarized in Algorithm 1 below, where the GETSUPPORT selects between the above strategies based on the value of ψ.
Algorithm 1: Model Fusion (with ψ ∈ {'acts', 'wts'} alignment)
1: input: trained models $\{M_k\}_{k=1}^{K}$ and an initial estimate of the fused model $\widehat{M}_F$
2: output: fused model $M_F$ with weights $W_F$
3: notation: for model $M_k$, the size of layer $\ell$ is written as $n_k^{(\ell)}$, and the weight matrix between layers $\ell$ and $\ell-1$ is denoted as $W_k^{(\ell,\,\ell-1)}$. Neuron support tensors are given by $X_k, Y$.
4: initialize: the size of the input layer is $n_k^{(1)} \leftarrow m^{(1)}$ for all $k \in [K]$; so $\alpha_k^{(1)} = \beta^{(1)} \leftarrow \mathbf{1}_{m^{(1)}}/m^{(1)}$, and the transport map is defined as $T_k^{(1)} \leftarrow \mathrm{diag}(\beta^{(1)})\, I_{m^{(1)} \times m^{(1)}}$.
5: for each layer $\ell = 2, \dots, L$ do
6:    $\beta^{(\ell)},\ Y[\ell] \leftarrow \mathbf{1}_{m^{(\ell)}}/m^{(\ell)},\ \mathrm{GETSUPPORT}(\widehat{M}_F, \psi, \ell)$
7:    $\nu^{(\ell)} \leftarrow \left(\beta^{(\ell)}, Y[\ell]\right)$    ▷ define probability measure for the initial fused model $\widehat{M}_F$
8:    for each model $k = 1, \dots, K$ do
9:        $\widehat{W}_k^{(\ell,\,\ell-1)} \leftarrow W_k^{(\ell,\,\ell-1)}\, T_k^{(\ell-1)}\, \mathrm{diag}\!\left(1/\beta^{(\ell-1)}\right)$    ▷ align incoming edges for $M_k$
10:       $\alpha_k^{(\ell)},\ X_k[\ell] \leftarrow \mathbf{1}_{n_k^{(\ell)}}/n_k^{(\ell)},\ \mathrm{GETSUPPORT}(M_k, \psi, \ell)$
11:       $\mu_k^{(\ell)} \leftarrow \left(\alpha_k^{(\ell)}, X_k[\ell]\right)$    ▷ define probability measure for model $M_k$
12:       $D_S^{(\ell)}[p, q] \leftarrow \|X_k[\ell][p] - Y[\ell][q]\|_2,\ \ \forall\, p \in [n_k^{(\ell)}],\ q \in [m^{(\ell)}]$    ▷ form ground metric
13:       $T_k^{(\ell)},\ \mathcal{W}_2^{(\ell)} \leftarrow \mathrm{OT}\left(\mu_k^{(\ell)}, \nu^{(\ell)}, D_S^{(\ell)}\right)$    ▷ compute OT map and distance
14:       $\widetilde{W}_k^{(\ell,\,\ell-1)} \leftarrow \mathrm{diag}\!\left(1/\beta^{(\ell)}\right) T_k^{(\ell)\top}\, \widehat{W}_k^{(\ell,\,\ell-1)}$    ▷ align model $M_k$ neurons
15:   end for
16:   $W_F^{(\ell,\,\ell-1)} \leftarrow \frac{1}{K} \sum_{k=1}^{K} \widetilde{W}_k^{(\ell,\,\ell-1)}$    ▷ average model weights
17: end for
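For intuition, the listing below is a minimal end-to-end sketch of Algorithm 1 for two fully connected models with weight-based alignment; biases, convolutions, the multi-model loop, and activation supports are omitted, and the function is our own illustration rather than the released implementation.

```python
# Minimal sketch of Algorithm 1 (K = 2, weight-based alignment), assuming the POT library.
import numpy as np
import ot
from scipy.spatial.distance import cdist

def ot_fuse_mlp(weights_A, weights_B):
    """weights_A, weights_B: lists of layer matrices W^(l, l-1), ordered input -> output,
    where the two models may differ in hidden widths but share input/output sizes."""
    fused = []
    d_in = weights_A[0].shape[1]
    T_prev, beta_prev = np.eye(d_in), np.ones(d_in)        # so T_prev @ diag(1/beta_prev) = I
    for l, (W_A, W_B) in enumerate(zip(weights_A, weights_B)):
        W_hat_A = W_A @ T_prev @ np.diag(1.0 / beta_prev)      # line 9: align incoming edges
        n, m = W_hat_A.shape[0], W_B.shape[0]
        alpha, beta = np.ones(n) / n, np.ones(m) / m           # lines 6, 10: uniform histograms
        if l == len(weights_A) - 1:
            T = np.eye(n) / n                                   # output neurons share an ordering
        else:
            D = cdist(W_hat_A, W_B, metric="euclidean")         # line 12: ground metric
            T = ot.emd(alpha, beta, D)                          # line 13: exact OT map
        W_tilde_A = np.diag(1.0 / beta) @ T.T @ W_hat_A         # line 14: align neurons of A
        fused.append(0.5 * (W_tilde_A + W_B))                   # line 16: average
        T_prev, beta_prev = T, beta
    return fused
```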
4.1 Discussion
Pros and cons of alignment type. An advantage of the weight-based alignment is that it is independent of the dataset samples, making it useful in privacy-constrained scenarios. The activation-based alignment, on the other hand, only needs unlabeled data (an interesting prospect for a future study would be to utilize synthetic data), and it may help tailor the fusion to certain desired kinds of classes or domains. Fusion results for both are nevertheless similar.
Combinatorial hardness of the ideal procedure. In principle, we should actually search over the space of permutation matrices, jointly across all the layers. But this would be computationally
intractable for models such as deep neural networks, and thus we fuse in a layer-wise manner and in a way have a greedy procedure.
# of samples used for activation-based alignment. We typically consider a mini-batch of ∼ 100 to 400 samples for these experiments. Table S2 in the Appendix shows the effect of increasing this mini-batch size on the fusion performance, and we find that even as few as 25 samples are enough to outperform vanilla averaging.
Exact OT and runtime efficiency. Our fusion procedure is efficient enough for the deep neural networks considered here (VGG11, RESNET18), so we primarily utilize exact OT solvers. While the runtime of exact OT is roughly cubic in the cardinality of the measure supports, this is not an issue for us, as this cardinality (which amounts to the network width) is ≤ 600 for these networks. In general, modern-day neural networks are typically deeper than wide. To give a concrete estimate, the time taken to fuse six VGG11 models is ≈ 15 seconds on 1 Nvidia V100 GPU (c.f. Section S1.4 for more details). It is possible to further improve the runtime by adopting entropy-regularized OT [22], but this loses slightly in terms of test accuracy compared to exact OT (c.f. Table S4).
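For reference, the entropy-regularized variant mentioned above amounts to swapping the exact solver for a Sinkhorn iteration; reusing the names from the earlier sketches, and with an illustrative regularization strength:

```python
# Drop-in entropy-regularized alternative to ot.emd (regularization strength is illustrative).
import ot
T_reg = ot.sinkhorn(alpha, beta, D, reg=1e-2)   # smoothed transport map via Sinkhorn iterations
```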
5 Experiments
Outline. We first present our results for one-shot fusion when the models are trained on different data distributions. Next, in Section 5.2, we consider (one-shot) fusion in the case when model sizes are different (i.e., unequal layer widths to be precise). In fact, this aspect facilitates a new tool that can be applied in ways not possible with vanilla averaging. Further on, we focus on the use-case of obtaining an efficient replacement for ensembling models in Section 5.3.
Empirical Details. We test our model fusion approach on standard image classification datasets, like CIFAR10 with commonly used convolutional neural networks (CNNs) such as VGG11 [23] and residual networks like ResNet18 [24]; and on MNIST, we use a fully connected network with 3 hidden layers of size 400, 200, 100, which we refer to as MLPNET. As baselines, we mention the performance of ‘prediction’ ensembling and ‘vanilla’ averaging, besides that of individual models. Prediction ensembling refers to keeping all the models and averaging their predictions (output layer scores), and thus reflects in a way the ideal (but unrealistic) performance that we can hope to achieve when fusing into a single model. Vanilla averaging denotes the direct averaging of parameters. All the performance scores are test accuracies. Full experimental details are provided in Appendix S1.1.
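For completeness, a plausible PyTorch definition of the MLPNET described above would be as follows (layer sizes taken from the text; the choice of ReLU activations and other details are our assumption):

```python
# Sketch of MLPNET: a 3-hidden-layer fully connected network for MNIST.
import torch.nn as nn

class MLPNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 400), nn.ReLU(),
            nn.Linear(400, 200), nn.ReLU(),
            nn.Linear(200, 100), nn.ReLU(),
            nn.Linear(100, num_classes),
        )

    def forward(self, x):
        return self.net(x)
```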
5.1 Fusion in the setting of heterogeneous data and tasks
We first consider the setting of merging two models A and B, but assume that model A has some special skill or knowledge (say, recognizing an object) which B does not possess. However, B is overall more powerful across the remaining set of skills in comparison to A. The goal of fusion now is to obtain a single model that can gain from the strength of B on overall skills and also acquire the specialized skill possessed by A. Such a scenario can arise e.g. in reinforcement learning where these models are agents that have had different training episodes so far. Another possible use case lies in federated learning [25], where model A is a client application that has been trained to perform well on certain tasks (like personalized keyword prediction) and model B is the server that typically has a strong skill set for a range of tasks (general language model).
The natural constraints in such scenarios are (a) ensuring privacy and (b) minimizing communication frequency. This implies that the training examples cannot be shared between A and B in order to respect privacy, and that a one-shot knowledge transfer is ideally desired, which rules out, e.g., joint training.
At a very abstract level, these scenarios are representative of aggregating models that have been trained on non-i.i.d. data distributions. To simulate a heterogeneous data-split, we consider the MNIST digit classification task with MLPNET models, where the unique skill possessed by model A corresponds to recognizing one particular 'personalized' label (say 4), which is unknown to B. Model B contains 90% of the remaining training set (i.e., excluding the label 4), while A has the other 10%. Both are trained on their portions of the data for 10 epochs, and all other training settings are identical.
Figure 2 illustrates the results for fusing models A and B (in different proportions), both when they have different parameter initializations and when they share the same initialization. OT fusion³ significantly outperforms the vanilla averaging of their parameters in terms of the overall test accuracy in both cases, and also improves over the individual models. E.g., in Figure 2(a), where the individual models obtain 89.78% and 87.35% accuracy respectively on the overall (global) test set, OT avg. achieves the best overall test set accuracy of 93.11%, confirming the successful skill transfer from both parent models without the need for any retraining.
³Only the receiver A's own examples are used for computing the activations, avoiding the sharing of data.
Our obtained results are robust to other scenarios when (i) some other label (say 6) serves as the special skill and (ii) the % of remaining data split is different. These results are collected in the Appendix S5, where in addition we also present results without the special label as well.
The case of multiple models. In the above example of two models, one might also consider maintaining an ensemble; however, the associated costs of ensembling become prohibitive as soon as the number of models increases. Take, for instance, four models A, B, C and D with the same initialization, and assume that A again possesses the knowledge of a special digit (say, 4). Consider that the rest of the data is divided as 10%, 30%, 50%, 10%. Training in a similar setting as before, these models end up with (global) test accuracies of 87.7%, 86.5%, 87.0%, 83.5% respectively. Ensembling the predictions yields 95.0%, while vanilla averaging obtains 80.6%. In contrast, OT averaging results in 93.6% test accuracy (≈ 6% gain over the best individual model), while being 4× more efficient than ensembling. Further details can be found in Appendix S7.
5.2 Fusing different sized models
An advantage of our OT-based fusion is that it allows the layer widths to be different for each input model. Here, our procedure first identifies which weights of the bigger model should be mapped to the smaller model (via the transport map), and then averages the aligned models (now both of the size of the smaller one). We can thus combine the parameters of a bigger network into a smaller one, and vice versa, allowing new use-cases in (a) model compression and (b) federated learning.
(a) Post-processing tool for structured pruning. Structured pruning [26–28] is an approach to model compression that aims to remove entire neurons or channels, resulting in an out-of-the-box reduction in inference costs, while affecting the performance minimally. A widely effective method for CNNs is to remove the filters with smallest `1 norm [26]. Our key idea here is to fuse the original dense network into the pruned network, instead of just throwing it away.
Figure 3 shows the gain in test accuracy on CIFAR10 from carrying out the OT fusion procedure (with weight-based alignment) when different convolutional layers of VGG11 are pruned to increasing amounts. For all the layers, we consistently obtain a significant improvement in performance, and ≈ 10% or more gain in the high sparsity regime. We also observe similar improvements for other layers, as well as when multiple (or all) layers are pruned simultaneously (c.f. Appendix S8).
Further, these gains are also significant when measured with respect to the overall sparsity obtained in the model. E.g., structured pruning of CONV_8 to 90% results in a net sparsity of 23% in the model. Here, after pruning, the accuracy of the model drops from 90.3% to 81.5%, and on applying OT fusion, the performance recovers to 89.4%. As another example, take CONV_7, where after structured pruning to 80%, OT fusion improves the performance of the pruned model from 87.6% to 90.1%, while achieving an overall sparsity of 41% in the network (see S8).
Our goal here is not to propose a method for structured pruning, but rather a post-processing tool that can help regain the drop in performance due to pruning. These results are thus independent of the pruning algorithm used, and e.g., Appendix S8 shows similar gains when the filters are pruned based on `2 norm (Figure S10) or even randomly (Figure S11). Further, Figure S12 in the appendix also shows the results when applied to VGG11 trained on CIFAR100 (instead of CIFAR10). Overall, OT fusion offers a completely data-free approach to improving the performance of the pruned model, which can be handy in the limited data regime or when retraining is prohibitive.
(b) Adapting the size of client and server-side models in federated learning. Given the huge sizes of contemporary neural networks, it is evident that we will not be able to fit the same sized model on a client device as would be possible on the server. However, this might come at the cost of reduced performance. Further, the resource constraints might be fairly varied even amongst the client devices, thus necessitating the flexibility to adapt the model sizes.
We consider a similar formulation, as in the one-shot knowledge transfer setting from Section 5.1, except that now the model B has twice the layer widths as compared to the corresponding layers of model A. Vanilla averaging of parameters, a core component of the widely prevalent FedAvg algorithm [25], gets ruled out in such a setting. Figure 4 shows how OT fusion/average can still lead to a successful knowledge transfer between the given models.
5.3 Fusion for efficient ensembling
In this section, our goal is to obtain a single model which can serve as a proxy for an ensemble of models, even if it comes at a slight decrease in performance relative to the ensemble, for future efficiency. Specifically, here we investigate how much can be gained by fusing multiple models that differ only in their parameter initializations (i.e., seeds). This means that models are trained on the same data, so unlike in Section 5.1 with a heterogeneous data-split, the gain here might be limited.
We study this in the context of deep networks such as VGG11 and RESNET18, which have been trained to convergence on CIFAR10. As a first step, we consider the setting when we are given just two models, the results for which are presented in Table 1. We observe that vanilla averaging absolutely fails in this case, and is 3-5× worse than OT averaging, in the case of RESNET18 and VGG11 respectively. OT average, however, does not yet improve over the individual models. This can be attributed to the combinatorial hardness of
the underlying alignment problem, and the greedy nature of our algorithm as mentioned before. As a simple but effective remedy, we consider finetuning (i.e., retraining) from the fused or averaged models. Retraining helps for both vanilla and OT averaging, but in comparison, the OT averaging
results in a better score in both cases, as shown in Table 1. E.g., for RESNET18, OT avg. + finetuning gets almost as good as prediction ensembling on test accuracy.
The finetuning scores for vanilla and OT averaging correspond to their best obtained results, when retrained with several finetuning learning rate schedules for a total of 100 and 120 epochs in the case of VGG11 and RESNET18 respectively. We also considered finetuning the individual models across these various hyperparameter settings (which of course would be infeasible in practice), but the best accuracy mustered via this attempt for RESNET18 was 93.51, in comparison to 93.78 for OT avg. + finetuning. See Appendix S3 and S4 for detailed results and typical retraining curves.
More than 2 models. Now, we discuss the case of more than two models, where the savings in efficiency relative to the ensemble are even higher. As before, we take the case of VGG11 on CIFAR10 and additionally CIFAR100⁴, but now consider {4, 6, 8} such models that have been trained to convergence, each from a different parameter initialization. Table 2 shows the results for this in the case of CIFAR100 (results for CIFAR10 are similar and can be found in Table S9).
We find that the performance of vanilla averaging degrades to close-to-random performance, and interestingly it even fails to retrain, despite trying numerous settings of optimization hyperparameters (like learning rate and schedules, c.f. Section S3.2). In contrast, OT averaging performs significantly better even without fine-tuning, and results in a mean test accuracy gain of ∼{1.4%, 1.7%, 2%} over the best individual models after fine-tuning, in the case of {4, 6, 8} base models respectively. Overall, Tables 1 and 2 (also S9) show the importance of aligning the networks via OT before averaging. Further finetuning of the OT fused model always results in an improvement over the individual models, while being # models times more efficient than the ensemble.
Fusion and Distillation. For the sake of completeness, we also compare OT fusion, distillation, and their combination, in context of transferring the knowledge of a large pre-trained teacher network into a smaller pre-trained student network. We find that starting the distillation from the OT fused model yields better performance than initializing randomly or with the student model. Further, when averaged across the considered temperature values = {20, 10, 8, 4, 1}, we observe that distillation of the teacher into random or student network based initialization performs worse than simple OT avg. + finetuning (which also doesn’t require doing such a sweep that would be prohibitive for larger models/datasets). These experiments are discussed in detail in Appendix S12. An interesting direction for future work would be to use intermediate OT distances computed during fusion as a means for regularizing or distilling with hidden layers.
6 Conclusion
We show that averaging the weights of models, by first doing a layer-wise (soft) alignment of the neurons via optimal transport, can serve as a versatile tool for fusing models in various settings. This results in (a) successful one-shot transfer of knowledge between models without sharing training data, (b) data free and algorithm independent post-processing tool for structured pruning, (c) and more generally, combining parameters of different sized models. Lastly, the OT average when further finetuned, allows for just keeping one model rather than a complete ensemble of models at inference. Future avenues include application in distributed optimization and continual learning, besides extending our current toolkit to fuse models with different number of layers, as well as, fusing generative models like GANs [12] (where ensembling does not make as much sense). The promising empirical results of the presented algorithm, thus warrant attention for further use-cases.
⁴We simply adapt the VGG11 architecture used for CIFAR10 and train it on CIFAR100 for 300 epochs, since our focus here is not to obtain the best individual models, but rather to investigate the efficacy of fusion.
Broader Impact Model fusion is a fundamental building block in machine learning, as a way of direct knowledge transfer between trained neural networks. Beyond theoretical interest it can serve a wide range of concrete applications. For instance, collaborative learning schemes such as federated learning are of increasing importance for enabling privacy-preserving training of ML models, as well as a better alignment of each individual’s data ownership with the resulting utility from jointly trained machine learning models, especially in applications where data is user-provided and privacy sensitive [29]. Here fusion of several models is a key building block to allow several agents to participate in joint training and knowledge exchange. We propose that a reliable fusion technique can serve as a step towards more broadly enabling privacy-preserving and efficient collaborative learning.
Acknowledgments
We would like to thank Rémi Flamary, Boris Muzellec, Sebastian Stich and other members of MLO, as well as the anonymous reviewers for their comments and feedback. | 1. What is the focus and contribution of the paper on neural network fusion?
2. What are the strengths of the proposed approach, particularly in terms of its applications?
3. What are the weaknesses of the paper, especially regarding its experimental comparisons and dataset usage?
4. Do you have any concerns about the scalability of the method to larger datasets?
5. Are there any other relevant comparison methods that could have been included in the experiments? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes a layer-wise fusion algorithm for neural networks based on optimal transport of the parameters in each layer. For various applications, the proposed algorithm shows superior performance over the vanilla averaging.
Strengths
It is interesting to fuse the parameters of several models into a single model using only the model parameters, and this has multiple applications such as federated learning. For various settings, the proposed fusion algorithm showed superior performance over the vanilla baseline. The potential for federated or decentralized learning seems to be an interesting point.
Weaknesses
The paper is not well-written, e.g., the algorithm part is not clearly organized and addressed. The only comparison methods are 'prediction ensembling' and 'vanilla averaging' across all experiments, which is neither convincing nor sufficient; for example, in the pruning experiments, there are published structured pruning methods [25-27]. Only very small datasets (MNIST, CIFAR10) are used in the experiments, and it is not clear whether the models will overfit on such small-scale datasets.
NIPS | Title
Model Fusion via Optimal Transport
Abstract
Combining different models is a widely used paradigm in machine learning applications. While the most common approach is to form an ensemble of models and average their individual predictions, this approach is often rendered infeasible by given resource constraints in terms of memory and computation, which grow linearly with the number of models. We present a layer-wise model fusion algorithm for neural networks that utilizes optimal transport to (soft-) align neurons across the models before averaging their associated parameters. We show that this can successfully yield “one-shot” knowledge transfer (i.e, without requiring any retraining) between neural networks trained on heterogeneous non-i.i.d. data. In both i.i.d. and non-i.i.d. settings, we illustrate that our approach significantly outperforms vanilla averaging, as well as how it can serve as an efficient replacement for the ensemble with moderate fine-tuning, for standard convolutional networks (like VGG11), residual networks (like RESNET18), and multi-layer perceptrons on CIFAR10, CIFAR100, and MNIST. Finally, our approach also provides a principled way to combine the parameters of neural networks with different widths, and we explore its application for model compression. The code is available at the following link, https://github.com/sidak/otfusion.
1 Introduction
If two neural networks had a child, what would be its weights? In this work, we study the fusion of two parent neural networks—which were trained differently but have the same number of layers—into a single child network. We further focus on performing this operation in a one-shot manner, based on the network weights only, so as to minimize the need of any retraining.
This fundamental operation of merging several neural networks into one contrasts other widely used techniques for combining machine learning models:
Ensemble methods have a very long history. They combine the outputs of several different models as a way to improve the prediction performance and robustness. However, this requires maintaining the K trained models and running each of them at test time (say, in order to average their outputs). This approach thus quickly becomes infeasible for many applications with limited computational resources, especially in view of the ever-growing size of modern deep learning models.
The simplest way to fuse several parent networks into a single network of the same size is direct weight averaging, which we refer to as vanilla averaging; here for simplicity, we assume that all network architectures are identical. Unfortunately, neural networks are typically highly redundant in their parameterizations, so that there is no one-to-one correspondence between the weights of two different neural networks, even if they would describe the same function of the input. In practice, vanilla averaging is known to perform very poorly on trained networks whose weights differ non-trivially.
Finally, a third way to combine two models is distillation, where one network is retrained on its training data, while jointly using the output predictions of the other 'teacher' network on those samples. Such a scenario is considered infeasible in our setting, as we aim for approaches not requiring the sharing of training data. This requirement is particularly crucial if the training data is to be kept private, like in federated learning applications, or is unavailable due to e.g. legal reasons.
∗Work done while at EPFL.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Contributions. We propose a novel layer-wise approach of aligning the neurons and weights of several differently trained models, for fusing them into a single model of the same architecture. Our method relies on optimal transport (OT) [1, 2], to minimize the transportation cost of neurons present in the layers of individual models, measured by the similarity of activations or incoming weights. The resulting layer-wise averaging scheme can be interpreted as computing the Wasserstein barycenter [3, 4] of the probability measures defined at the corresponding layers of the parent models.
We empirically demonstrate that our method succeeds in the one-shot merging of networks of different weights, and in all scenarios significantly outperforms vanilla averaging. More surprisingly, we also show that our method succeeds in merging two networks that were trained for slightly different tasks (such as using a different set of labels). The method is able to “inherit” abilities unique to one of the parent networks, while outperforming the same parent network on the task associated with the other network. Further, we illustrate how it can serve as a data-free and algorithm independent post-processing tool for structured pruning. Finally, we show that OT fusion, with mild fine-tuning, can act as efficient proxy for the ensemble, whereas vanilla averaging fails for more than two models.
Extensions and Applications. The method serves as a new building block for enabling several use-cases: (1) The adaptation of a global model to personal training data. (2) Fusing the parameters of a bigger model into a smaller sized model and vice versa. (3) Federated or decentralized learning applications, where training data can not be shared due to privacy reasons or simply due to its large size. In general, improved model fusion techniques such as ours have strong potential towards encouraging model exchange as opposed to data exchange, to improve privacy & reduce communication costs.
2 Related Work
Ensembling. Ensemble methods [5–7] have long been in use in deep learning and machine learning in general. However, given our goal is to obtain a single model, it is assumed infeasible to maintain and run several trained models as needed here.
Distillation. Another line of work by Hinton et al. [8], Buciluǎ et al. [9], Schmidhuber [10] proposes distillation techniques. Here the key idea is to employ the knowledge of a pre-trained teacher network (typically larger and expensive to train) and transfer its abilities to a smaller model called the student network. During this transfer process, the goal is to use the relative probabilities of misclassification of the teacher as a more informative training signal.
While distillation also results in a single model, the main drawback is its computational complexity— the distillation process is essentially as expensive as training the student network from scratch, and also involves its own set of hyper-parameter tuning. In addition, distillation still requires sharing the training data with the teacher (as the teacher network can be too large to share), which we avoid here.
In a different line of work, Shen et al. [11] propose an approach where the student network is forced to produce outputs mimicking the teacher networks, by utilizing Generative Adversarial Network [12]. This still does not resolve the problem of high computational costs involved in this kind of knowledge transfer. Further, it does not provide a principled way to aggregate the parameters of different models.
Relation to other network fusion methods. Several studies have investigated how to merge two trained networks into a single network without the need for retraining [13–15]. Leontev et al. [15] propose Elastic Weight Consolidation, which formulates an assignment problem on top of diagonal approximations to the Hessian matrices of each of the two parent neural networks. Their method, however, only works when the weights of the parent models are already close, i.e., when they share a significant part of the training history [13, 14], e.g., by relying on SGD with periodic averaging, also called local SGD [16]. Nevertheless, their empirical results [15] do not improve over vanilla averaging.
Alignment-based methods. Alignment of neurons was considered in Li et al. [17] to probe the representations learned by different networks. Recently, Yurochkin et al. [18] independently proposed a Bayesian non-parametric framework that considers matching the neurons of different MLPs in federated learning. In a concurrent work,2 Wang et al. [19] extend [18] to more realistic networks including CNNs, also with a specific focus on federated learning. In contrast, we develop our method from the lens of optimal transport (OT), which lends us a simpler approach by utilizing Wasserstein barycenters. The methods of aligning neurons employed in both lines of work form instances of the choice of ground metric in OT. Overall, we consider model fusion in general, beyond federated learning. For instance, we show applications of fusing different sized models (e.g., for structured pruning) as well as the compatibility of our method to serve as an initialization for distillation. From a practical side, our approach is roughly a factor of the number of layers more efficient and also applies to ResNets.
2An early version of our paper also appeared at the NeurIPS 2019 workshop on OT, arxiv:1910.05653.
To conclude, the application of Wasserstein barycenters for averaging the weights of neural networks has—to our knowledge—not been considered in the past.
3 Background on Optimal Transport (OT)
We present a short background on OT in the discrete case, and in this process set up the notation for the rest of the paper. OT gives a way to compare two probability distributions defined over a ground space S, provided an underlying distance or more generally the cost of transporting one point to another in the ground space. Next, we describe the linear program (LP) which lies at the heart of OT.
LP Formulation. First, let us consider two empirical probability measures $\mu$ and $\nu$ given by weighted sums of Diracs, i.e., $\mu = \sum_{i=1}^{n} \alpha_i\, \delta(x^{(i)})$ and $\nu = \sum_{i=1}^{m} \beta_i\, \delta(y^{(i)})$. Here $\delta(x)$ denotes the Dirac (unit mass) distribution at point $x \in S$, and the set of points is $X = (x^{(1)}, \ldots, x^{(n)}) \in S^n$. The weight vector $\alpha = (\alpha_1, \ldots, \alpha_n)$ lives in the probability simplex $\Sigma_n$ (and similarly $\beta \in \Sigma_m$). Further, let $C_{ij}$ denote the ground cost of moving point $x^{(i)}$ to $y^{(j)}$. Then the optimal transport between $\mu$ and $\nu$ is obtained by solving the following linear program,
$$ \mathrm{OT}(\mu, \nu; C) := \min_{T \in \mathbb{R}_{+}^{n \times m}} \ \langle T, C\rangle \quad \text{such that} \quad T \mathbf{1}_m = \alpha, \ T^{\top} \mathbf{1}_n = \beta, $$
where $\langle T, C\rangle := \mathrm{tr}\!\left(T^{\top} C\right) = \sum_{ij} T_{ij} C_{ij}$ is the Frobenius inner product of matrices. The optimal $T \in \mathbb{R}_{+}^{n \times m}$ is called the transportation matrix or transport map, and $T_{ij}$ represents the optimal amount of mass to be moved from point $x^{(i)}$ to $y^{(j)}$.
Wasserstein Distance. When $S = \mathbb{R}^d$ and the cost is defined with respect to a metric $D_S$ over $S$ (i.e., $C_{ij} = D_S(x^{(i)}, y^{(j)})^p$ for any $i, j$), OT establishes a distance between probability distributions. This is called the $p$-Wasserstein distance and is defined as $\mathcal{W}_p(\mu, \nu) := \mathrm{OT}(\mu, \nu; D_S^p)^{1/p}$.
Wasserstein Barycenters. This represents the notion of averaging in the Wasserstein space. To be precise, the Wasserstein barycenter [3] is a probability measure that minimizes the weighted sum of ($p$-th power) Wasserstein distances to the given $K$ measures $\{\mu_1, \ldots, \mu_K\}$, with corresponding weights $\eta = \{\eta_1, \ldots, \eta_K\} \in \Sigma_K$. Hence, it can be written as $\mathcal{B}_p(\mu_1, \ldots, \mu_K) = \arg\min_{\nu} \sum_{k=1}^{K} \eta_k\, \mathcal{W}_p(\mu_k, \nu)^p$.
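To make the linear program concrete, the transport map and OT cost between two small empirical measures can be computed with an off-the-shelf solver, e.g., the POT library (a minimal sketch; the point clouds and uniform weights below are arbitrary illustrations, not data from the paper):

import numpy as np
import ot  # POT: Python Optimal Transport

# Two empirical measures: n and m support points in R^d with uniform weights.
n, m, d = 4, 3, 5
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(n, d)), rng.normal(size=(m, d))
alpha, beta = np.ones(n) / n, np.ones(m) / m

C = ot.dist(X, Y, metric="euclidean")   # ground-cost matrix C_ij = ||x_i - y_j||
T = ot.emd(alpha, beta, C)              # exact OT: transport map with the prescribed marginals
cost = float(np.sum(T * C))             # <T, C>, the optimal transportation cost

# The marginal constraints of the LP hold for the returned map.
assert np.allclose(T.sum(axis=1), alpha) and np.allclose(T.sum(axis=0), beta)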
4 Proposed Algorithm
In this section, we discuss our proposed algorithm for model aggregation. First, we consider that we are averaging the parameters of only two neural networks, but later present the extension to the multiple model case. For now, we ignore the bias parameters and we only focus on the weights. This is to make the presentation succinct, and it can be easily extended to take care of these aspects.
Motivation. As alluded to earlier in the introduction, the problem with vanilla averaging of parameters is the lack of a one-to-one correspondence between the model parameters. In particular, for a given layer, there is no direct matching between the neurons of the two models. For example, this means that the $p$-th neuron of model A might behave very differently (in terms of the feature it detects) from the $p$-th neuron of the other model B, and instead might be quite similar in functionality to the $(p+1)$-th neuron. Imagine that we knew a perfect matching between the neurons: then we could simply align the neurons of model A with respect to B. Having done this, it would then make more sense to perform vanilla averaging of the neuron parameters. The matching or assignment could be formulated as a permutation matrix, and multiplying the parameters by this matrix would align them.
But in practice, it is more likely to have soft correspondences between the neurons of the two models for a given layer, especially if their number is not the same across the two models. This is where optimal transport comes in and provides us a soft-alignment matrix in the form of the transport map T . In other words, the alignment problem can be rephrased as optimally transporting the neurons in a given layer of model A to the neurons in the same layer of model B.
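The permutation view underlying this motivation can be checked directly: permuting the hidden units of one layer, together with the corresponding rows and columns of the adjacent weight matrices, leaves the network's function unchanged (a minimal sketch with an illustrative two-layer ReLU network):

import numpy as np

rng = np.random.default_rng(1)
d_in, d_hidden, d_out = 6, 8, 3
W1, W2 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=(d_out, d_hidden))
x = rng.normal(size=d_in)

relu = lambda z: np.maximum(z, 0.0)
out = W2 @ relu(W1 @ x)

# Permute the hidden neurons: rows of W1 and the matching columns of W2.
perm = rng.permutation(d_hidden)
out_perm = W2[:, perm] @ relu(W1[perm, :] @ x)

assert np.allclose(out, out_perm)  # same function, different parameter ordering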
General procedure. Let us assume we are at some layer $\ell$ and that the neurons in the previous layers have already been aligned. Then, we define probability measures over the neurons in this layer for the two models as $\mu^{(\ell)} = \big(\alpha^{(\ell)}, X[\ell]\big)$ and $\nu^{(\ell)} = \big(\beta^{(\ell)}, Y[\ell]\big)$, where $X, Y$ are the measure supports.
Next, we use uniform distributions to initialize the histogram (or probability mass values) for each layer. We note that it is possible to additionally use other measures of neuron importance [20, 21], but we leave this for future work. In particular, if the sizes of layer $\ell$ of models A and B are denoted by $n^{(\ell)}$ and $m^{(\ell)}$ respectively, we set $\alpha^{(\ell)} \leftarrow \mathbf{1}_{n^{(\ell)}}/n^{(\ell)}$ and $\beta^{(\ell)} \leftarrow \mathbf{1}_{m^{(\ell)}}/m^{(\ell)}$. Now, in terms of the alignment procedure, we first align the incoming edge weights for the current layer $\ell$. This can be done by post-multiplying with the previous layer's transport matrix $T^{(\ell-1)}$, normalized appropriately via the inverse of the corresponding column marginals $\beta^{(\ell-1)}$:
$$ \hat{W}_A^{(\ell,\, \ell-1)} \leftarrow W_A^{(\ell,\, \ell-1)}\, T^{(\ell-1)} \operatorname{diag}\!\big(1/\beta^{(\ell-1)}\big). \qquad (1) $$
This update can be interpreted as follows: the matrix $T^{(\ell-1)} \operatorname{diag}\!\big(1/\beta^{(\ell-1)}\big)$ has $m^{(\ell-1)}$ columns in the simplex $\Sigma_{n^{(\ell-1)}}$, thus post-multiplying $W_A^{(\ell,\, \ell-1)}$ with it will produce a convex combination of the points in $W_A^{(\ell,\, \ell-1)}$ with weights defined by the optimal transport map $T^{(\ell-1)}$.
Once this has been done, we focus on aligning the neurons in this layer $\ell$ of the two models. Let us assume we have a suitable ground metric $D_S$ (which we discuss in the sections ahead). Then we compute the optimal transport map $T^{(\ell)}$ between the measures $\mu^{(\ell)}, \nu^{(\ell)}$ for layer $\ell$, i.e., $T^{(\ell)}, \mathcal{W}_2 \leftarrow \mathrm{OT}(\mu^{(\ell)}, \nu^{(\ell)}, D_S)$, where $\mathcal{W}_2$ denotes the obtained Wasserstein distance. Now, we use this transport map $T^{(\ell)}$ to align the neurons (more precisely, the weights) of the first model (A) with respect to the second (B),
$$ \widetilde{W}_A^{(\ell,\, \ell-1)} \leftarrow \operatorname{diag}\!\big(1/\beta^{(\ell)}\big)\, T^{(\ell)\top}\, \hat{W}_A^{(\ell,\, \ell-1)}. \qquad (2) $$
We will refer to model A's weights $\widetilde{W}_A^{(\ell,\, \ell-1)}$ as those aligned with respect to model B. Hence, with this alignment in place, we can average the weights of the two layers to obtain the fused weight matrix $W_F^{(\ell,\, \ell-1)}$, as in Eq. (3). We carry out this procedure over all the layers sequentially.
$$ W_F^{(\ell,\, \ell-1)} \leftarrow \tfrac{1}{2}\Big(\widetilde{W}_A^{(\ell,\, \ell-1)} + W_B^{(\ell,\, \ell-1)}\Big). \qquad (3) $$
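Putting Eqs. (1)–(3) together for a single fully connected layer of two models, the update can be sketched in a few lines of NumPy (a minimal sketch; `T_prev` and `T_curr` stand for the transport maps $T^{(\ell-1)}$ and $T^{(\ell)}$, and uniform histograms are assumed as in the text):

import numpy as np

def fuse_layer_pair(W_A, W_B, T_prev, T_curr, beta_prev, beta_curr):
    """Fuse one layer of model A into model B's coordinates, following Eqs. (1)-(3)."""
    # Eq. (1): align A's incoming edges using the previous layer's transport map.
    W_A_hat = W_A @ T_prev @ np.diag(1.0 / beta_prev)
    # Eq. (2): align A's neurons in this layer using the current transport map.
    W_A_tilde = np.diag(1.0 / beta_curr) @ T_curr.T @ W_A_hat
    # Eq. (3): average the aligned weights.
    return 0.5 * (W_A_tilde + W_B)

Here `W_A` has shape (n_curr, n_prev) and the result matches the shape of `W_B`, since both transport maps project model A's neurons onto model B's layer widths.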
Note that, since the input layer is ordered identically for both models, we start the alignment from the second layer onwards. Additionally, the order of neurons for the very last layer, i.e., the output layer, is again identical. Thus, the (scaled) transport map at the last layer will be equal to the identity.
Extension to multiple models. The key idea is to begin with an estimate $\hat{M}_F$ of the fused model, then align all the given models with respect to it, and finally return the average of these aligned weights as the final weights for the fused model. For the two-model case, this is equivalent to the procedure discussed above when the fused model is initialized to model B, i.e., $\hat{M}_F \leftarrow M_B$: aligning model B with this estimate of the fused model yields a (scaled) transport map equal to the identity, and Eq. (3) then amounts to returning the average of the aligned weights.
Alignment strategies. The above discussion implies that we need to design a ground metric DS between the inter-model neurons. So, we branch out into the following two strategies:
(a) Activation-based alignment ($\psi$ = ‘acts’): In this variant, we run inference over a set of $m$ samples, $S = \{x_i\}_{i=1}^{m}$, and store the activations for all neurons in the model. Thus, we consider the neuron activations, concatenated over the samples into a vector, as the support of the measures, and we denote it as $X_k \leftarrow \mathrm{ACTS}\big(M_k(S)\big)$, $Y \leftarrow \mathrm{ACTS}\big(M_F(S)\big)$. Then the neurons across the two models are considered to be similar if they produce similar activation outputs for the given set of samples. We measure this by computing the Euclidean distance between the resulting vectors of activations. This serves as the ground metric for OT computations. In practice, we use the pre-activations.
(b) Weight-based alignment ($\psi$ = ‘wts’): Here, we consider that the support of each neuron is given by the weights of the incoming edges (stacked in a vector). Thus, a neuron can be thought of as being represented by the row corresponding to it in the weight matrix. So, the support of the measures in this alignment type is given by $X_k[\ell] \leftarrow \hat{W}_k^{(\ell,\, \ell-1)}$, $Y[\ell] \leftarrow \hat{W}_F^{(\ell,\, \ell-1)}$. The reasoning behind this choice of support is that the neuron activation at a particular layer is calculated as the inner product between this weight vector and the previous layer's output. The ground metric used for OT is the Euclidean distance, as in the previous alignment strategy. Besides this difference of employing the actual weights in the ground metric (lines 6 and 10 of Algorithm 1), the rest of the procedure is identical.
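The two choices of support can be sketched as a single helper function (a minimal sketch; `cached_preactivations` and `aligned_weight` are illustrative attribute names, not part of the paper's released code):

import numpy as np

def get_support(model, psi, layer_idx):
    """Return one vector per neuron, used as the support of that layer's measure."""
    if psi == "acts":
        # Activation-based: each neuron is represented by its pre-activations
        # over a small batch of (unlabeled) samples; shape (n_neurons, n_samples).
        return model.cached_preactivations[layer_idx]
    elif psi == "wts":
        # Weight-based: each neuron is represented by its (aligned) incoming weights;
        # shape (n_neurons, n_prev).
        return model.aligned_weight(layer_idx)
    raise ValueError(f"unknown alignment type: {psi}")

def ground_metric(X, Y):
    # Euclidean distance between every pair of neuron representations.
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)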
Lastly, the overall procedure is summarized in Algorithm 1 below, where the GETSUPPORT selects between the above strategies based on the value of ψ.
Algorithm 1: Model Fusion (with ψ ∈ {‘acts’, ‘wts’} alignment)
1: input: Trained models {M_k}_{k=1}^K and an initial estimate of the fused model M̂_F
2: output: Fused model M_F with weights W_F
3: notation: For model M_k, the size of layer ℓ is written as n_k^(ℓ), and the weight matrix between layers ℓ and ℓ−1 is denoted W_k^(ℓ, ℓ−1). Neuron support tensors are given by X_k, Y.
4: initialize: The size of the input layer n_k^(1) ← m^(1) for all k ∈ [K]; so α_k^(1) = β^(1) ← 1_{m^(1)}/m^(1) and the transport map is defined as T_k^(1) ← diag(β^(1)) I_{m^(1)×m^(1)}.
5: for each layer ℓ = 2, . . . , L do
6:   β^(ℓ), Y[ℓ] ← 1_{m^(ℓ)}/m^(ℓ), GETSUPPORT(M̂_F, ψ, ℓ)
7:   ν^(ℓ) ← (β^(ℓ), Y[ℓ])   ▷ Define probability measure for the initial fused model M̂_F
8:   for each model k = 1, . . . , K do
9:     Ŵ_k^(ℓ, ℓ−1) ← W_k^(ℓ, ℓ−1) T_k^(ℓ−1) diag(1/β^(ℓ−1))   ▷ Align incoming edges for M_k
10:    α_k^(ℓ), X_k[ℓ] ← 1_{n_k^(ℓ)}/n_k^(ℓ), GETSUPPORT(M_k, ψ, ℓ)
11:    µ_k^(ℓ) ← (α_k^(ℓ), X_k[ℓ])   ▷ Define probability measure for model M_k
12:    D_S^(ℓ)[p, q] ← ‖X_k[ℓ][p] − Y[ℓ][q]‖_2, ∀ p ∈ [n_k^(ℓ)], q ∈ [m^(ℓ)]   ▷ Form ground metric
13:    T_k^(ℓ), W_2^(ℓ) ← OT(µ_k^(ℓ), ν^(ℓ), D_S^(ℓ))   ▷ Compute OT map and distance
14:    W̃_k^(ℓ, ℓ−1) ← diag(1/β^(ℓ)) T_k^(ℓ)⊤ Ŵ_k^(ℓ, ℓ−1)   ▷ Align model M_k neurons
15:  end for
16:  W_F^(ℓ, ℓ−1) ← (1/K) Σ_{k=1}^K W̃_k^(ℓ, ℓ−1)   ▷ Average model weights
17: end for
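A compact NumPy rendering of Algorithm 1 for fully connected networks might look as follows (a minimal sketch, assuming weight-based alignment, uniform histograms, and the POT library for the exact OT step; the data layout, with `models` as lists of per-layer weight matrices and `fused_init` as the initial estimate M̂_F, is an illustrative simplification of the paper's actual implementation):

import numpy as np
import ot

def ot_fuse(models, fused_init):
    """models: K networks, each a list of weight matrices W[l] of shape (n_l, n_{l-1})."""
    K, L = len(models), len(fused_init)
    fused = [None] * L
    # Input neurons are identically ordered: transport maps start as scaled identities.
    T = [np.eye(models[k][0].shape[1]) / models[k][0].shape[1] for k in range(K)]
    beta_prev = np.ones(fused_init[0].shape[1]) / fused_init[0].shape[1]

    for l in range(L):
        m_l = fused_init[l].shape[0]
        beta = np.ones(m_l) / m_l
        aligned = []
        for k in range(K):
            W = models[k][l]
            alpha = np.ones(W.shape[0]) / W.shape[0]
            # Eq. (1): align incoming edges with the previous layer's transport map.
            W_hat = W @ T[k] @ np.diag(1.0 / beta_prev)
            # Weight-based supports (fused_init is typically one of the models, so its
            # own weights serve as the target support) and Euclidean ground metric.
            X, Y = W_hat, fused_init[l]
            D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
            # Exact OT between the two layer-wise measures.
            T[k] = ot.emd(alpha, beta, D)
            # Eq. (2): align this layer's neurons.
            aligned.append(np.diag(1.0 / beta) @ T[k].T @ W_hat)
        # Eq. (3), generalized to K models: average the aligned weights.
        fused[l] = sum(aligned) / K
        beta_prev = beta
    return fused

In the paper's setting the output neurons are identically ordered, so the last layer's transport map reduces to a (scaled) identity; the sketch omits this special-casing for brevity.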
4.1 Discussion
Pros and cons of alignment type. An advantage of the weight-based alignment is that it is independent of the dataset samples, making it useful in privacy-constrained scenarios. On the flip side, the activation-based alignment only needs unlabeled data, and an interesting prospect for a future study would be to utilize synthetic data. But, activation-based alignment may help tailor the fusion to certain desired kinds of classes or domains. Fusion results for both are nevertheless similar.
Combinatorial hardness of the ideal procedure. In principle, we should search over the space of permutation matrices jointly across all the layers. But this would be computationally intractable for models such as deep neural networks, and thus we fuse in a layer-wise manner, which amounts to a greedy procedure.
# of samples used for activation-based alignment. We typically consider a mini-batch of ∼100 to 400 samples for these experiments. Table S2 in the Appendix shows the effect of increasing this mini-batch size on the fusion performance, and we find that even as few as 25 samples are enough to outperform vanilla averaging.
Exact OT and runtime efficiency. Our fusion procedure is efficient enough for the deep neural networks considered here (VGG11, RESNET18), so we primarily utilize exact OT solvers. While the runtime of exact OT is roughly cubic in the cardinality of the measure supports, this is not an issue for us, as this cardinality (which amounts to the network width) is ≤ 600 for these networks. In general, modern-day neural networks are typically deeper than they are wide. To give a concrete estimate, the time taken to fuse six VGG11 models is ≈ 15 seconds on 1 Nvidia V100 GPU (c.f. Section S1.4 for more details). It is possible to further improve the runtime by adopting entropy-regularized OT [22], but this loses slightly in terms of test accuracy compared to exact OT (c.f. Table S4).
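For wider layers, the exact solver used in the sketches above could be swapped for entropy-regularized OT, e.g. (a minimal sketch; the layer widths and regularization strength are illustrative):

import numpy as np
import ot

# Same inputs as the exact solver: histograms and a ground-cost matrix for one layer.
n, m = 128, 128
alpha, beta = np.ones(n) / n, np.ones(m) / m
D = np.abs(np.random.default_rng(0).normal(size=(n, m)))

T_exact = ot.emd(alpha, beta, D)               # exact LP solution
T_reg = ot.sinkhorn(alpha, beta, D, reg=1e-2)  # entropy-regularized: faster, but a softer map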
5 Experiments
Outline. We first present our results for one-shot fusion when the models are trained on different data distributions. Next, in Section 5.2, we consider (one-shot) fusion in the case when model sizes are different (i.e., unequal layer widths to be precise). In fact, this aspect facilitates a new tool that can be applied in ways not possible with vanilla averaging. Further on, we focus on the use-case of obtaining an efficient replacement for ensembling models in Section 5.3.
Empirical Details. We test our model fusion approach on standard image classification datasets, like CIFAR10 with commonly used convolutional neural networks (CNNs) such as VGG11 [23] and residual networks like ResNet18 [24]; and on MNIST, we use a fully connected network with 3 hidden layers of size 400, 200, 100, which we refer to as MLPNET. As baselines, we mention the performance of ‘prediction’ ensembling and ‘vanilla’ averaging, besides that of individual models. Prediction ensembling refers to keeping all the models and averaging their predictions (output layer scores), and thus reflects in a way the ideal (but unrealistic) performance that we can hope to achieve when fusing into a single model. Vanilla averaging denotes the direct averaging of parameters. All the performance scores are test accuracies. Full experimental details are provided in Appendix S1.1.
5.1 Fusion in the setting of heterogeneous data and tasks
We first consider the setting of merging two models A and B, but assume that model A has some special skill or knowledge (say, recognizing an object) which B does not possess. However, B is overall more powerful across the remaining set of skills in comparison to A. The goal of fusion now is to obtain a single model that can gain from the strength of B on overall skills and also acquire the specialized skill possessed by A. Such a scenario can arise e.g. in reinforcement learning where these models are agents that have had different training episodes so far. Another possible use case lies in federated learning [25], where model A is a client application that has been trained to perform well on certain tasks (like personalized keyword prediction) and model B is the server that typically has a strong skill set for a range of tasks (general language model).
The natural constraints in such scenarios are (a) ensuring privacy and (b) minimizing communication frequency. This implies that the training examples cannot be shared between A and B in order to respect privacy, and a one-shot knowledge transfer is ideally desired, which rules out, e.g., joint training.
At a very abstract level, these scenarios are representative of aggregating models that have been trained on non-i.i.d. data distributions. To simulate a heterogeneous data split, we consider the MNIST digit classification task with MLPNET models, where the unique skill possessed by model A corresponds to recognizing one particular ‘personalized’ label (say 4), which is unknown to B. Model B contains 90% of the remaining training set (i.e., excluding the label 4), while A has the other 10%. Both are trained on their portions of the data for 10 epochs, and other training settings are identical.
Figure 2 illustrates the results for fusing models A and B (in different proportions), both when they have different parameter initializations and when they share the same initialization. OT fusion3 significantly outperforms the vanilla averaging of their parameters in terms of the overall test accuracy in both cases, and also improves over the individual models. E.g., in Figure 2(a), where the individual models obtain 89.78% and 87.35% accuracy respectively on the overall (global) test set, OT avg. achieves the best overall test set accuracy of 93.11%. This confirms the successful skill transfer from both parent models, without the need for any retraining.
3Only the receiver A’s own examples are used for computing the activations, avoiding the sharing of data.
Our obtained results are robust to other scenarios where (i) some other label (say 6) serves as the special skill and (ii) the split of the remaining data is different. These results are collected in Appendix S5, where we also present results without the special label.
The case of multiple models. In the above example of two models, one might also consider maintaining an ensemble; however, the associated costs for ensembling become prohibitive as soon as the number of models increases. Take for instance four models A, B, C, and D, with the same initialization, and assume that A again possesses the knowledge of a special digit (say, 4). Consider that the rest of the data is divided as 10%, 30%, 50%, 10%. Training in a similar setting as before, these models end up with (global) test accuracies of 87.7%, 86.5%, 87.0%, and 83.5% respectively. Ensembling the predictions yields 95.0%, while vanilla averaging obtains 80.6%. In contrast, OT averaging results in 93.6% test accuracy (≈ 6% gain over the best individual model), while being 4× more efficient than ensembling. Further details can be found in Appendix S7.
5.2 Fusing different sized models
An advantage of our OT-based fusion is that it allows the layer widths to be different for each input model. Here, our procedure first identifies which weights of the bigger model should be mapped to the smaller model (via the transport map), and then averages the aligned models (now both of the size of the smaller one). We can thus combine the parameters of a bigger network into a smaller one, and vice versa, allowing new use-cases in (a) model compression and (b) federated learning.
(a) Post-processing tool for structured pruning. Structured pruning [26–28] is an approach to model compression that aims to remove entire neurons or channels, resulting in an out-of-the-box reduction in inference costs while affecting the performance minimally. A widely effective method for CNNs is to remove the filters with the smallest ℓ1 norm [26]. Our key idea here is to fuse the original dense network into the pruned network, instead of just throwing it away.
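For reference, the ℓ1 criterion of [26] simply ranks convolutional filters by the ℓ1 norm of their weights and keeps the largest ones; a minimal sketch of selecting the surviving filters is given below (the pruning fraction is an illustrative parameter):

import torch

def l1_filter_keep_indices(conv_weight: torch.Tensor, keep_frac: float = 0.5) -> torch.Tensor:
    """conv_weight: (out_channels, in_channels, kH, kW). Returns indices of kept filters."""
    scores = conv_weight.abs().sum(dim=(1, 2, 3))   # l1 norm per output filter
    n_keep = max(1, int(keep_frac * scores.numel()))
    return torch.topk(scores, n_keep).indices       # filters with the largest l1 norm

The OT fusion step then treats the pruned network as the smaller target model and fuses the original dense network into it, as described in Section 5.2.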
Figure 3 shows the gain in test accuracy on CIFAR10 from carrying out the OT fusion procedure (with weight-based alignment) when different convolutional layers of VGG11 are pruned to increasing amounts. For all the layers, we consistently obtain a significant improvement in performance, with ≈ 10% or more gain in the high sparsity regime. We also observe similar improvements for other layers, as well as when multiple (or all) layers are pruned simultaneously (c.f. Appendix S8).
Further, these gains are also significant when measured with respect to the overall sparsity obtained in the model. E.g., structured pruning of CONV_8 to 90% results in a net sparsity of 23% in the model. Here, after pruning, the accuracy of the model drops from 90.3% to 81.5%, and on applying OT fusion the performance recovers to 89.4%. As another example, take CONV_7: after structured pruning to 80%, OT fusion improves the performance of the pruned model from 87.6% to 90.1% while achieving an overall sparsity of 41% in the network (see S8).
Our goal here is not to propose a method for structured pruning, but rather a post-processing tool that can help regain the drop in performance due to pruning. These results are thus independent of the pruning algorithm used, and e.g., Appendix S8 shows similar gains when the filters are pruned based on `2 norm (Figure S10) or even randomly (Figure S11). Further, Figure S12 in the appendix also shows the results when applied to VGG11 trained on CIFAR100 (instead of CIFAR10). Overall, OT fusion offers a completely data-free approach to improving the performance of the pruned model, which can be handy in the limited data regime or when retraining is prohibitive.
(b) Adapting the size of client and server-side models in federated learning. Given the huge sizes of contemporary neural networks, it is evident that we will not be able to fit the same sized model on a client device as would be possible on the server. However, this might come at the cost of reduced performance. Further, the resource constraints might be fairly varied even amongst the client devices, thus necessitating the flexibility to adapt the model sizes.
We consider a similar formulation, as in the one-shot knowledge transfer setting from Section 5.1, except that now the model B has twice the layer widths as compared to the corresponding layers of model A. Vanilla averaging of parameters, a core component of the widely prevalent FedAvg algorithm [25], gets ruled out in such a setting. Figure 4 shows how OT fusion/average can still lead to a successful knowledge transfer between the given models.
5.3 Fusion for efficient ensembling
In this section, our goal is to obtain a single model which can serve as a proxy for an ensemble of models, for the sake of future efficiency, even if this comes at a slight decrease in performance relative to the ensemble. Specifically, we investigate how much can be gained by fusing multiple models that differ only in their parameter initializations (i.e., seeds). This means that the models are trained on the same data, so unlike in Section 5.1 with a heterogeneous data split, the gain here might be limited.
We study this in the context of deep networks such as VGG11 and RESNET18 which have been trained to convergence on CIFAR10. As a first step, we consider the setting where we are given just two models; the results are presented in Table 1. We observe that vanilla averaging fails completely in this case, and is 3–5× worse than OT averaging, in the case of RESNET18 and VGG11 respectively. OT averaging, however, does not yet improve over the individual models. This can be attributed to the combinatorial hardness of the underlying alignment problem and the greedy nature of our algorithm, as mentioned before. As a simple but effective remedy, we consider finetuning (i.e., retraining) from the fused or averaged models. Retraining helps for both vanilla and OT averaging, but in comparison, OT averaging results in a better score in both cases, as shown in Table 1. E.g., for RESNET18, OT avg. + finetuning gets almost as good as prediction ensembling on test accuracy.
The finetuning scores for vanilla and OT averaging correspond to their best obtained results, when retrained with several finetuning learning rate schedules for a total of 100 and 120 epochs in the case of VGG11 and RESNET18 respectively. We also considered finetuning the individual models across these various hyperparameter settings (which of course would be infeasible in practice), but the best accuracy mustered via this attempt for RESNET18 was 93.51, in comparison to 93.78 for OT avg. + finetuning. See Appendix S3 and S4 for detailed results and typical retraining curves.
More than 2 models. Now, we discuss the case of more than two models, where the savings in efficiency relative to the ensemble are even higher. As before, we take the case of VGG11 on CIFAR10 and additionally CIFAR100,4 but now consider {4, 6, 8} such models that have been trained to convergence, each from a different parameter initialization. Table 2 shows the results for CIFAR100 (results for CIFAR10 are similar and can be found in Table S9).
We find that the performance of vanilla averaging degrades to close-to-random performance, and interestingly it even fails to retrain, despite our trying numerous settings of optimization hyperparameters (like learning rates and schedules, c.f. Section S3.2). In contrast, the OT average performs significantly better even without fine-tuning, and results in a mean test accuracy gain of ∼{1.4%, 1.7%, 2%} over the best individual models after fine-tuning, in the case of {4, 6, 8} base models respectively. Overall, Tables 1 and 2 (also S9) show the importance of aligning the networks via OT before averaging. Further finetuning of the OT fused model always results in an improvement over the individual models, while being (# of models)× more efficient than the ensemble.
Fusion and Distillation. For the sake of completeness, we also compare OT fusion, distillation, and their combination, in the context of transferring the knowledge of a large pre-trained teacher network into a smaller pre-trained student network. We find that starting the distillation from the OT fused model yields better performance than initializing randomly or with the student model. Further, when averaged across the considered temperature values {20, 10, 8, 4, 1}, we observe that distillation of the teacher into a random or student-based initialization performs worse than simple OT avg. + finetuning (which also does not require such a sweep, which would be prohibitive for larger models/datasets). These experiments are discussed in detail in Appendix S12. An interesting direction for future work would be to use the intermediate OT distances computed during fusion as a means for regularizing or distilling with hidden layers.
6 Conclusion
We show that averaging the weights of models, by first doing a layer-wise (soft) alignment of the neurons via optimal transport, can serve as a versatile tool for fusing models in various settings. This results in (a) successful one-shot transfer of knowledge between models without sharing training data, (b) data free and algorithm independent post-processing tool for structured pruning, (c) and more generally, combining parameters of different sized models. Lastly, the OT average when further finetuned, allows for just keeping one model rather than a complete ensemble of models at inference. Future avenues include application in distributed optimization and continual learning, besides extending our current toolkit to fuse models with different number of layers, as well as, fusing generative models like GANs [12] (where ensembling does not make as much sense). The promising empirical results of the presented algorithm, thus warrant attention for further use-cases.
4We simply adapt the VGG11 architecture used for CIFAR10 and train it on CIFAR100 for 300 epochs, since our focus here was not to obtain the best individual models, but rather to investigate the efficacy of fusion.
Broader Impact
Model fusion is a fundamental building block in machine learning, as a way of direct knowledge transfer between trained neural networks. Beyond theoretical interest, it can serve a wide range of concrete applications. For instance, collaborative learning schemes such as federated learning are of increasing importance for enabling privacy-preserving training of ML models, as well as a better alignment of each individual's data ownership with the resulting utility from jointly trained machine learning models, especially in applications where data is user-provided and privacy sensitive [29]. Here, fusion of several models is a key building block that allows several agents to participate in joint training and knowledge exchange. We propose that a reliable fusion technique can serve as a step towards more broadly enabling privacy-preserving and efficient collaborative learning.
Acknowledgments
We would like to thank Rémi Flamary, Boris Muzellec, Sebastian Stich and other members of MLO, as well as the anonymous reviewers for their comments and feedback. | 1. What is the focus and contribution of the paper regarding Optimal Transport's application in deep learning?
2. What are the strengths of the proposed approach, particularly in its ability to outperform vanilla averaging?
3. What are the weaknesses of the paper, especially regarding the artificiality of the special and general models and the choice of baseline?
4. How does the reviewer assess the usefulness and competitiveness of the method under different scenarios, such as ensembling, pruning, and federated learning?
5. Are there any questions or concerns regarding the experiments and results presented in the paper, including the choice of alignment methods and the effect of fine-tuning? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper proposes to use the formulation of Optimal Transport (OT) to align the channels/neurons in two/multiple different models, and then do a weight fusion by averaging. The use cases of such weight fusion are beneficial in settings like special/general multi-tasking, pruning, federated learning, and ensembling. In each case, the experiments show that OT fusion outperforms vanilla averaging of weights.
Strengths
Using the Optimal Transport problem to match the order of channels/neurons is an intuitive application of a traditional algorithm to deep learning, and is shown to outperform vanilla averaging, where order is ignored. The paper lists lots of use cases such as special and general model fusion, federated learning, pruning, and ensembling. The appendix contains lots of detailed experiments and results, which helps interested readers to learn more.
Weaknesses
1. The special and general models A and B which focus on 1 class and 9 classes in MNIST respectively seems a bit artificial to me. The other constraints introduced in the paper, such as no fine-tuning allowed, no joint training allowed (due to data privacy), are also a bit strange, as these approaches are widely used in common scenarios like pruning or multi-task learning. The method’s usefulness seems to only exist under these strict and sometimes artificial assumptions. Under the most straightforward application (ensembling), the method does not bring improvement without fine-tuning, and even with fine-tuning, the improvement over vanilla averaging is very marginal and I would consider them to be within the error bar of CIFAR-10 classification (0.3%). The paper could benefit from running these experiments with multiple seeds and report mean and stds. 2. Also, in my opinion, "Vanilla averaging” of weights does not form a strong baseline. Normally people don’t average the weights of two identical architectures element wisely, which is unlikely to produce meaningful performance, as also shown in the paper. The reason, as the author tried to address using OT, is that all positions in channel dimension in a convolution or linear layer are equivalent, thus averaging channel 1 and 2 from model A and B respectively makes no more sense than swapping them. The main baseline of the work should be vanilla ensembling, which the method only outperforms with fine-tuning, though. Vanilla averaging could be served as an illustration that the method works, but itself is not competitive. Even under the constraint that we want only one model and no training examples are given, there could possibly be more competent baselines. 3. In the case of pruning, I would intuitively imagine the channels get aligned with the large model correspond to the channels that survived the pruning, which are essentially the same channels in the small model. To what degree this is true? If this is largely true, why would a model benefit from fusion with (almost) itself? If not, why? 4. In figure S9(i), the caption says “all”, which I suppose should indicate all layers are pruned together, but the legends says “conv_9”. Which one is the case? In addition, I wonder whether the difference between vanilla and OT fusion still exists when fine-tuning is enabled, as is often the case in pruning. 5. It seems the paper did not specify the reason to use weight-based or activation-based alignment in each of the experiment. =======================Post Rebuttal=========================== The rebuttal address many of my concerns, e.g., about the application cases and the prior practice of averaging, and my opinion changes towards acceptance. |
samples. Such a scenario is considered infeasible in our setting, as we aim for approaches not requiring the sharing of training data.This requirement is particularly crucial if the training data is to be kept private, like in federated learning applications, or is unavailable due to e.g. legal reasons.
Contributions. We propose a novel layer-wise approach of aligning the neurons and weights of several differently trained models, for fusing them into a single model of the same architecture. Our method relies on optimal transport (OT) [1, 2], to minimize the transportation cost of neurons present in the layers of individual models, measured by the similarity of activations or incoming weights. The resulting layer-wise averaging scheme can be interpreted as computing the Wasserstein barycenter [3, 4] of the probability measures defined at the corresponding layers of the parent models.
We empirically demonstrate that our method succeeds in the one-shot merging of networks of different weights, and in all scenarios significantly outperforms vanilla averaging. More surprisingly, we also show that our method succeeds in merging two networks that were trained for slightly different tasks (such as using a different set of labels). The method is able to “inherit” abilities unique to one of the parent networks, while outperforming the same parent network on the task associated with the other network. Further, we illustrate how it can serve as a data-free and algorithm independent post-processing tool for structured pruning. Finally, we show that OT fusion, with mild fine-tuning, can act as efficient proxy for the ensemble, whereas vanilla averaging fails for more than two models.
Extensions and Applications. The method serves as a new building block for enabling several use-cases: (1) The adaptation of a global model to personal training data. (2) Fusing the parameters of a bigger model into a smaller sized model and vice versa. (3) Federated or decentralized learning applications, where training data can not be shared due to privacy reasons or simply due to its large size. In general, improved model fusion techniques such as ours have strong potential towards encouraging model exchange as opposed to data exchange, to improve privacy & reduce communication costs.
2 Related Work
Ensembling. Ensemble methods [5–7] have long been in use in deep learning and machine learning in general. However, given our goal is to obtain a single model, it is assumed infeasible to maintain and run several trained models as needed here.
Distillation. Another line of work by Hinton et al. [8], Buciluǎ et al. [9], Schmidhuber [10] proposes distillation techniques. Here the key idea is to employ the knowledge of a pre-trained teacher network (typically larger and expensive to train) and transfer its abilities to a smaller model called the student network. During this transfer process, the goal is to use the relative probabilities of misclassification of the teacher as a more informative training signal.
While distillation also results in a single model, the main drawback is its computational complexity— the distillation process is essentially as expensive as training the student network from scratch, and also involves its own set of hyper-parameter tuning. In addition, distillation still requires sharing the training data with the teacher (as the teacher network can be too large to share), which we avoid here.
In a different line of work, Shen et al. [11] propose an approach where the student network is forced to produce outputs mimicking the teacher networks, by utilizing Generative Adversarial Network [12]. This still does not resolve the problem of high computational costs involved in this kind of knowledge transfer. Further, it does not provide a principled way to aggregate the parameters of different models.
Relation to other network fusion methods. Several studies have investigated a method to merge two trained networks into a single network without the need for retraining [13–15]. Leontev et al. [15] propose Elastic Weight Consolidation, which formulates an assignment problem on top of diagonal approximations to the Hessian matrices of each of the two parent neural networks. Their method however only works when the weights of the parent models are already close, i.e. share a significant part of the training history [13, 14], by relying on SGD with periodic averaging, also called local SGD [16]. Nevertheless, their empirical results [15] do not improve over vanilla averaging.
Alignment-based methods. Alignment of neurons was considered in Li et al. [17] to probe the representations learned by different networks. Recently, Yurochkin et al. [18] independently proposed a Bayesian non-parametric framework that considers matching the neurons of different MLPs in federated learning. In a concurrent work2, Wang et al. [19] extend [18] to more realistic networks
2An early version of our paper also appeared at NeurIPS 2019 workshop on OT, arxiv:1910.05653.
including CNNs, also with a specific focus on federated learning. In contrast, we develop our method from the lens of optimal transport (OT), which lends us a simpler approach by utilizing Wasserstein barycenters. The method of aligning neurons employed in both lines of work form instances for the choice of ground metric in OT. Overall, we consider model fusion in general, beyond federated learning. For instance, we show applications of fusing different sized models (e.g., for structured pruning) as well as the compatibility of our method to serve as an initialization for distillation. From a practical side, our approach is # of layer times more efficient and also applies to ResNets.
To conclude, the application of Wasserstein barycenters for averaging the weights of neural networks has—to our knowledge—not been considered in the past.
3 Background on Optimal Transport (OT)
We present a short background on OT in the discrete case, and in this process set up the notation for the rest of the paper. OT gives a way to compare two probability distributions defined over a ground space S, provided an underlying distance or more generally the cost of transporting one point to another in the ground space. Next, we describe the linear program (LP) which lies at the heart of OT.
LP Formulation. First, let us consider two empirical probability measures µ and ν denoted by a weighted sum of Diracs, i.e., µ = ∑n i=1 αi δ(x (i)) and ν = ∑m i=1 βi δ(y
(i)). Here δ(x) denotes the Dirac (unit mass) distribution at point x ∈ S and the set of pointsX = (x(1), . . . ,x(n)) ∈ Sn. The weight α = (α1, . . . , αn) lives in the probability simplex (and similarly β). Further, let Cij denote the ground cost of moving point x(i) to y(j). Then the optimal transport between µ and ν can be formulated as solving the following linear program. OT(µ, ν;C) := min 〈T ,C〉, with T ∈ R(n×m)+ such that T1m = α, T>1n = β. Here, 〈T ,C〉 := tr ( T>C ) = ∑ ij TijCij is the Frobenius inner product of matrices. The optimal T ∈ R(n×m)+ is called as the transportation matrix or transport map, and Tij represents the optimal amount of mass to be moved from point x(i) to y(j). Wasserstein Distance. When S = Rd and the cost is defined with respect to a metric DS over S( i.e., Cij = DS(x(i),y(j))p for any i, j ) , OT establishes a distance between probability distributions. This is called the p-Wasserstein distance and is defined asWp(µ, ν) := OT(µ, ν;DpS)1/p. Wasserstein Barycenters. This represents the notion of averaging in the Wasserstein space. To be precise, the Wasserstein barycenter [3] is a probability measure that minimizes the weighted sum of (p-th power) Wasserstein distances to the given K measures {µ1, . . . , µK}, with corresponding weights η = {η1, . . . , ηK} ∈ ΣK . Hence, it can be written as Bp(µ1, . . . , µK) = arg minµ ∑K k=1 ηk Wp(µk, ν)p.
4 Proposed Algorithm
In this section, we discuss our proposed algorithm for model aggregation. First, we consider that we are averaging the parameters of only two neural networks, but later present the extension to the multiple model case. For now, we ignore the bias parameters and we only focus on the weights. This is to make the presentation succinct, and it can be easily extended to take care of these aspects.
Motivation. As alluded to earlier in the introduction, the problem with vanilla averaging of parameters is the lack of one-to-one correspondence between the model parameters. In particular, for a given layer, there is no direct matching between the neurons of the two models. For e.g., this means that the pth neuron of model A might behave very differently (in terms of the feature it detects) from the pth neuron of the other model B, and instead might be quite similar in functionality to the p+ 1th neuron. Imagine, if we knew a perfect matching between the neurons, then we could simply align the neurons of model A with respect to B. Having done this, it would then make more sense to perform vanilla averaging of the neuron parameters. The matching or assignment could be formulated as a permutation matrix, and just multiplying the parameters by this matrix would align the parameters.
But in practice, it is more likely to have soft correspondences between the neurons of the two models for a given layer, especially if their number is not the same across the two models. This is where optimal transport comes in and provides us a soft-alignment matrix in the form of the transport map T . In other words, the alignment problem can be rephrased as optimally transporting the neurons in a given layer of model A to the neurons in the same layer of model B.
General procedure. Let us assume we are at some layer ` and that neurons in the previous layers have already been aligned. Then, we define probability measures over neurons in this layer for the two models as, µ(`) = ( α(`),X[`] ) and ν(`) = ( β(`),Y [`] ) , whereX,Y are the measure supports.
Next, we use uniform distributions to initialize the histogram (or probability mass values) for each layer. Although we note that it is possible to additionally use other measures of neuron importance [20, 21], but we leave it for a future work. In particular, if the size of layer ` of models A and B is denoted by n(`), m(`) respectively, we get α(`) ← 1n(`)/n(`), β(`) ← 1m(`)/m(`). Now, in terms of the alignment procedure, we first align the incoming edge weights for the current layer `. This can be done by post-multiplying with the previous layer transport matrix T (`−1), normalized appropriately via the inverse of the corresponding column marginals β(`−1):
Ŵ (`, `−1) A ←W (`, `−1) A T
(`−1)diag ( 1/β(`−1) ) . (1)
This update can be interpreted as follows: the matrix T (`−1)diag ( β−(`−1) ) has m(`−1) columns in the simplex Σn(`−1) , thus post-multiplyingW (`, `−1) A with it will produce a convex combination of the points inW (`, `−1)A with weights defined by the optimal transport map T (`−1).
Once this has been done, we focus on aligning the neurons in this layer ` of the two models. Let us assume, we have a suitable ground metric DS (which we discuss in the sections ahead). Then we compute the optimal transport map T (`) between the measures µ(`), ν(`) for layer `, i.e., T (`), W2 ← OT(µ(`), ν(`), DS), whereW2 denotes the obtained Wasserstein-distance. Now, we use this transport map T (`) to align the neurons (more precisely the weights) of the first model (A) with respect to the second (B),
W̃ (`, `−1) A ← diag
( 1/β(`) ) T (`) > Ŵ
(`, `−1) A . (2)
We will refer to model A’s weights, W̃ (`, `−1)A , as those aligned with respect to model B. Hence, with this alignment in place, we can average the weights of two layers to obtain the fused weight matrix W
(`, `−1) F , as in Eq. (3). We carry out this procedure over all the layers sequentially.
W (`, `−1) F ← 1 2
( W̃
(`, `−1) A +W (`, `−1) B
) . (3)
Note that, since the input layer is ordered identically for both models, we start the alignment from second layer onwards. Additionally, the order of neurons for the very last layer, i.e., in the output layer, again is identical. Thus, the (scaled) transport map at the last layer will be equal to the identity.
Extension to multiple models. The key idea is to begin with an estimate M̂F of the fused model, then align all the given models with respect to it, and finally return the average of these aligned weights as the final weights for the fused model. For the two model case, this is equivalent to the procedure we discussed above when the fused model is initialized to model B, i.e., M̂F ← MB . Because, aligning model B with this estimate of the fused model will yield a (scaled) transport map equal to the identity. And then, Eq. (3) will amount to returning the average of the aligned weights.
Alignment strategies. The above discussion implies that we need to design a ground metric DS between the inter-model neurons. So, we branch out into the following two strategies:
(a) Activation-based alignment (ψ = ‘acts’): In this variant, we run inference over a set of m samples, S = {x}mi=1 and store the activations for all neurons in the model. Thus, we consider the neuron activations, concatenated over the samples into a vector, as the support of the measures, and we denote it asXk ← ACTS ( Mk(S) ) , Y ← ACTS ( MF (S) ) . Then the neurons across the two models are considered to be similar if they produce similar activation outputs for the given set of samples. We measure this by computing the Euclidean distance between the resulting vector of activations. This serves as the ground metric for OT computations. In practice, we use the pre-activations.
(b) Weight-based alignment (ψ = ‘wts’): Here, we consider that the support of each neuron is given by the weights of the incoming edges (stacked in a vector). Thus, a neuron can be thought as being represented by the row corresponding to it in the weight matrix. So, the support of the measures in such an alignment type is given by,Xk[`]← Ŵ (`, `−1)k , Y [`]← Ŵ (`, `−1) F . The reasoning for such a choice for the support stems from the neuron activation at a particular layer being calculated as the inner product between this weight vector and the previous layer output. The ground metric used for OT is the Euclidean distance, like in the previous alignment strategy. Besides this difference of employing the actual weights in the ground metric (LINE 6, 10), rest of the procedure is identical.
Lastly, the overall procedure is summarized in Algorithm 1 below, where the GETSUPPORT selects between the above strategies based on the value of ψ.
Algorithm 1: Model Fusion (with ψ = {‘acts’, ‘wts’}−alignment)
1: input: Trained models {Mk}Kk=1 and initial estimate of the fused model M̂F 2: output: Fused model MF with weightsWF 3: notation: For model Mk, size of the layer ` is written as n(`)k , and the weight matrix between the layer `
and `− 1 is denoted asW (`, `−1)k . Neuron support tensors are given byXk,Y .
4: initialize: The size of input layer n(1)k ← m (1) for all k ∈ [K]; so α(1)k = β (1) ← 1m(1)/m (1) and
the transport map is defined as T (1)k ← diag(β (1)) Im(1)×m(1) .
5: for each layer ` = 2, . . . , L do
6: β(`), Y [`] ← 1m(`)/m (`), GETSUPPORT(M̂F , ψ, `) 7: ν(`) ← ( β(`), Y [`] ) . Define probability measure for initial fused model M̂F
8: for each model k = 1, . . . ,K do
9: Ŵ (`, `−1)k ←W (`, `−1) k T (`−1) k diag
( 1
β(`−1)
) . Align incoming edges for Mk
10: α(`)k , Xk[`] ← 1n(`) k /n (`) k , GETSUPPORT(Mk, ψ, `)
11: µ(`)k ← ( α (`) k , Xk[`] ) . Define probability measure for model Mk
12: D(`)S [p, q] ← ‖Xk[`][p]− Y [`][q]‖2, ∀ p∈[n(`)k ], q∈[m(`)] . Form ground metric
13: T (`)k , W (`) 2 ← OT ( µ (`) k , ν (`), D (`) S )
. Compute OT map and distance 14: W̃ (`, `−1)k ← diag ( 1
β(`)
) T (`) > Ŵ
(`, `−1) k . Align model Mk neurons
15: end for
16: W (`, `−1)F ← 1 K ∑K k=1 W̃ (`, `−1) k . Average model weights
17: end for
4.1 Discussion
Pros and cons of alignment type. An advantage of the weight-based alignment is that it is independent of the dataset samples, making it useful in privacy-constrained scenarios. On the flip side, the activation-based alignment only needs unlabeled data, and an interesting prospect for a future study would be to utilize synthetic data. But, activation-based alignment may help tailor the fusion to certain desired kinds of classes or domains. Fusion results for both are nevertheless similar.
Combinatorial hardness of the ideal procedure. In principle, we should actually search over the space of permutation matrices, jointly across all the layers. But this would be computationally
intractable for models such as deep neural networks, and thus we fuse in a layer-wise manner and in a way have a greedy procedure.
# of samples used for activation-based alignment. We typically consider a mini-batch of ∼ 100 to 400 samples for these experiments. Table S2 in the Appendix, shows that effect of increasing this mini-batch size on the fusion performance and we find that even as few as 25 samples are enough to outperform vanilla averaging.
Exact OT and runtime efficiency. Our fusion procedure is efficient enough for the deep neural networks considered here (VGG11, RESNET18), so we primarily utilize exact OT solvers. While the runtime of exact OT is roughly cubic in the cardinality of the measure supports, it is not an issue for us as this cardinality (which amounts to the network width) is ≤ 600 for these networks. In general, modern-day neural networks are typically deeper than wide. To give a concrete estimate, the time taken to fuse six VGG11 models is ≈ 15 seconds on 1 Nvidia V100 GPU (c.f. Section S1.4 for more details). It is possible to further improve the runtime by adopting the entropy-regularized OT [22], but this looses slightly in terms of test accuracy compared to exact OT (c.f. Table S4).
5 Experiments
Outline. We first present our results for one-shot fusion when the models are trained on different data distributions. Next, in Section 5.2, we consider (one-shot) fusion in the case when model sizes are different (i.e., unequal layer widths to be precise). In fact, this aspect facilitates a new tool that can be applied in ways not possible with vanilla averaging. Further on, we focus on the use-case of obtaining an efficient replacement for ensembling models in Section 5.3.
Empirical Details. We test our model fusion approach on standard image classification datasets, like CIFAR10 with commonly used convolutional neural networks (CNNs) such as VGG11 [23] and residual networks like ResNet18 [24]; and on MNIST, we use a fully connected network with 3 hidden layers of size 400, 200, 100, which we refer to as MLPNET. As baselines, we mention the performance of ‘prediction’ ensembling and ‘vanilla’ averaging, besides that of individual models. Prediction ensembling refers to keeping all the models and averaging their predictions (output layer scores), and thus reflects in a way the ideal (but unrealistic) performance that we can hope to achieve when fusing into a single model. Vanilla averaging denotes the direct averaging of parameters. All the performance scores are test accuracies. Full experimental details are provided in Appendix S1.1.
5.1 Fusion in the setting of heterogeneous data and tasks
We first consider the setting of merging two models A and B, but assume that model A has some special skill or knowledge (say, recognizing an object) which B does not possess. However, B is overall more powerful across the remaining set of skills in comparison to A. The goal of fusion now is to obtain a single model that can gain from the strength of B on overall skills and also acquire the specialized skill possessed by A. Such a scenario can arise e.g. in reinforcement learning where these models are agents that have had different training episodes so far. Another possible use case lies in federated learning [25], where model A is a client application that has been trained to perform well on certain tasks (like personalized keyword prediction) and model B is the server that typically has a strong skill set for a range of tasks (general language model).
The natural constraints in such scenarios are (a) ensuring privacy and (b) minimizing communication frequency. This implies that the training examples cannot be shared between A and B in order to respect privacy, and that a one-shot knowledge transfer is ideally desired, which rules out, e.g., joint training.
At a very abstract level, these scenarios are representative of aggregating models that have been trained on non-i.i.d. data distributions. To simulate a heterogeneous data-split, we consider the MNIST digit classification task with MLPNET models, where the unique skill possessed by model A corresponds to recognizing one particular ‘personalized’ label (say 4), which is unknown to B. Model B contains 90% of the remaining training set (i.e., excluding the label 4), while A has the other 10%. Both are trained on their portions of the data for 10 epochs, and other training settings are identical.
Figure 2 illustrates the results for fusing models A and B (in different proportions), both when they have different parameter initializations and when they share the same initialization. OT fusion³ significantly outperforms the vanilla averaging of their parameters in terms of the overall test accuracy in both cases, and also improves over the individual models. E.g., in Figure 2(a), where the individual models obtain 89.78% and 87.35% accuracy respectively on the overall (global) test set, OT avg. achieves the best overall test set accuracy of 93.11%, thus confirming the successful skill transfer from both parent models, without the need for any retraining.

³Only the receiver A’s own examples are used for computing the activations, avoiding the sharing of data.
Our obtained results are robust to other scenarios in which (i) some other label (say 6) serves as the special skill or (ii) the % of the remaining data split is different. These results are collected in Appendix S5, where we additionally present results without the special label.
The case of multiple models. In the above example of two models, one might also consider maintaining an ensemble; however, the associated costs for ensembling become prohibitive as soon as the number of models increases. Take for instance four models, A, B, C and D, with the same initialization, and assume that A again possesses the knowledge of a special digit (say, 4). Consider that the rest of the data is divided as 10%, 30%, 50%, 10%. Now, training in a similar setting as before, these models end up getting (global) test accuracies of 87.7%, 86.5%, 87.0%, 83.5% respectively. Ensembling the predictions yields 95.0% while vanilla averaging obtains 80.6%. In contrast, OT averaging results in 93.6% test accuracy (≈ 6% gain over the best individual model), while being 4× more efficient than ensembling. Further details can be found in Appendix S7.
5.2 Fusing different sized models
An advantage of our OT-based fusion is that it allows the layer widths to be different for each input model. Here, our procedure first identifies which weights of the bigger model should be mapped to the smaller model (via the transport map), and then averages the aligned models (now both of the size of the smaller one). We can thus combine the parameters of a bigger network into a smaller one, and vice versa, allowing new use-cases in (a) model compression and (b) federated learning.
(a) Post-processing tool for structured pruning. Structured pruning [26–28] is an approach to model compression that aims to remove entire neurons or channels, resulting in an out-of-the-box reduction in inference costs, while affecting the performance minimally. A widely effective method for CNNs is to remove the filters with the smallest ℓ1 norm [26]. Our key idea here is to fuse the original dense network into the pruned network, instead of just throwing it away.
Figure 3 shows the gain in test accuracy on CIFAR10 from carrying out the OT fusion procedure (with weight-based alignment) when different convolutional layers of VGG11 are pruned to increasing amounts. For all the layers, we consistently obtain a significant improvement in performance, and ≈ 10% or more gain in the high sparsity regime. We also observe similar improvements for other layers, as well as when multiple (or all) layers are pruned simultaneously (c.f. Appendix S8).
Further, these gains are also significant when measured with respect to the overall sparsity obtained in the model. E.g., structured pruning of CONV_8 to 90% results in a net sparsity of 23% in the model. Here, after pruning, the accuracy of the model drops from 90.3% to 81.5%, and on applying OT fusion, the performance recovers to 89.4%. As another example, take CONV_7, where after structured pruning to 80%, OT fusion improves the performance of the pruned model from 87.6% to 90.1% while achieving an overall sparsity of 41% in the network (see S8).
Our goal here is not to propose a method for structured pruning, but rather a post-processing tool that can help regain the drop in performance due to pruning. These results are thus independent of the pruning algorithm used; e.g., Appendix S8 shows similar gains when the filters are pruned based on the ℓ2 norm (Figure S10) or even randomly (Figure S11). Further, Figure S12 in the appendix also shows the results when applied to VGG11 trained on CIFAR100 (instead of CIFAR10). Overall, OT fusion offers a completely data-free approach to improving the performance of the pruned model, which can be handy in the limited data regime or when retraining is prohibitive.
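As a rough illustration of the pruning setting in which OT fusion is applied as post-processing, the following sketch ranks convolutional filters by their ℓ1 norm and keeps the top fraction; the names are illustrative, and OT fusion would subsequently align the original dense filters to the kept ones and average them rather than discarding them.

```python
import torch


def l1_keep_indices(conv_weight, keep_ratio):
    """conv_weight: (out_channels, in_channels, kH, kW); returns kept filter indices."""
    scores = conv_weight.abs().sum(dim=(1, 2, 3))      # l1 norm of each filter
    k = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(scores, k).indices


# pruned_weight = conv.weight.data[l1_keep_indices(conv.weight.data, 0.2)]
# OT fusion then maps the dense model's filters onto these kept filters and
# averages, instead of throwing the removed filters away.
```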
(b) Adapting the size of client and server-side models in federated learning. Given the huge sizes of contemporary neural networks, it is evident that we will not be able to fit the same sized model on a client device as would be possible on the server. However, this might come at the cost of reduced performance. Further, the resource constraints might be fairly varied even amongst the client devices, thus necessitating the flexibility to adapt the model sizes.
We consider a similar formulation, as in the one-shot knowledge transfer setting from Section 5.1, except that now the model B has twice the layer widths as compared to the corresponding layers of model A. Vanilla averaging of parameters, a core component of the widely prevalent FedAvg algorithm [25], gets ruled out in such a setting. Figure 4 shows how OT fusion/average can still lead to a successful knowledge transfer between the given models.
5.3 Fusion for efficient ensembling
In this section, our goal is to obtain a single model which can serve as a proxy for an ensemble of models, even if this comes at a slight decrease in performance relative to the ensemble, in exchange for efficiency at inference. Specifically, here we investigate how much can be gained by fusing multiple models that differ only in their parameter initializations (i.e., seeds). This means that the models are trained on the same data, so unlike in Section 5.1 with a heterogeneous data-split, the gain here might be limited.
We study this in the context of deep networks such as VGG11 and RESNET18 which have been trained to convergence on CIFAR10. As a first step, we consider the setting where we are given just two models, the results for which are presented in Table 1. We observe that vanilla averaging absolutely fails in this case, and is 3-5× worse than OT averaging, in the case of RESNET18 and VGG11 respectively. OT average, however, does not yet improve over the individual models. This can be attributed to the combinatorial hardness of the underlying alignment problem, and the greedy nature of our algorithm as mentioned before. As a simple but effective remedy, we consider finetuning (i.e., retraining) from the fused or averaged models. Retraining helps for both vanilla and OT averaging, but in comparison, the OT averaging results in a better score in both cases, as shown in Table 1. E.g., for RESNET18, OT avg. + finetuning gets almost as good as prediction ensembling on test accuracy.
The finetuning scores for vanilla and OT averaging correspond to their best obtained results, when retrained with several finetuning learning rate schedules for a total of 100 and 120 epochs in the case of VGG11 and RESNET18 respectively. We also considered finetuning the individual models across these various hyperparameter settings (which of course will be infeasible in practice), but the best accuracy mustered via this attempt for RESNET18 was 93.51, in comparison to 93.78 for OT avg. + finetuning. See Appendix S3 and S4 for detailed results and typical retraining curves.
More than 2 models. Now, we discuss the case of more than two models, where the savings in efficiency relative to the ensemble are even higher. As before, we take the case of VGG11 on CIFAR10 and additionally CIFAR100⁴, but now consider {4, 6, 8} such models that have been trained to convergence, each from a different parameter initialization. Table 2 shows the results for this in the case of CIFAR100 (results for CIFAR10 are similar and can be found in Table S9).
We find that the performance of vanilla averaging degrades to close-to-random performance, and interestingly it even fails to retrain, despite trying numerous settings of optimization hyperparameters (like learning rate and schedules, c.f. Section S3.2). In contrast, OT average performs significantly better even without fine-tuning, and results in a mean test accuracy gain of ∼ {1.4%, 1.7%, 2%} over the best individual models after fine-tuning, in the case of {4, 6, 8} base models respectively. Overall, Tables 1 and 2 (also S9) show the importance of aligning the networks via OT before averaging. Further finetuning of the OT fused model always results in an improvement over the individual models, while being #models times more efficient than the ensemble.
Fusion and Distillation. For the sake of completeness, we also compare OT fusion, distillation, and their combination, in the context of transferring the knowledge of a large pre-trained teacher network into a smaller pre-trained student network. We find that starting the distillation from the OT fused model yields better performance than initializing randomly or with the student model. Further, when averaged across the considered temperature values {20, 10, 8, 4, 1}, we observe that distillation of the teacher into a random or student-based initialization performs worse than simple OT avg. + finetuning (which also does not require such a sweep, which would be prohibitive for larger models/datasets). These experiments are discussed in detail in Appendix S12. An interesting direction for future work would be to use intermediate OT distances computed during fusion as a means for regularizing or distilling with hidden layers.
6 Conclusion
We show that averaging the weights of models, by first doing a layer-wise (soft) alignment of the neurons via optimal transport, can serve as a versatile tool for fusing models in various settings. This results in (a) successful one-shot transfer of knowledge between models without sharing training data, (b) a data-free and algorithm-independent post-processing tool for structured pruning, and (c) more generally, a way of combining parameters of different sized models. Lastly, the OT average, when further finetuned, allows for keeping just one model rather than a complete ensemble of models at inference. Future avenues include application in distributed optimization and continual learning, besides extending our current toolkit to fuse models with different numbers of layers, as well as fusing generative models like GANs [12] (where ensembling does not make as much sense). The promising empirical results of the presented algorithm thus warrant attention for further use-cases.
⁴We simply adapt the VGG11 architecture used for CIFAR10 and train it on CIFAR100 for 300 epochs, since our focus here is not to obtain the best individual models, but rather to investigate the efficacy of fusion.
Broader Impact Model fusion is a fundamental building block in machine learning, as a way of direct knowledge transfer between trained neural networks. Beyond theoretical interest it can serve a wide range of concrete applications. For instance, collaborative learning schemes such as federated learning are of increasing importance for enabling privacy-preserving training of ML models, as well as a better alignment of each individual’s data ownership with the resulting utility from jointly trained machine learning models, especially in applications where data is user-provided and privacy sensitive [29]. Here fusion of several models is a key building block to allow several agents to participate in joint training and knowledge exchange. We propose that a reliable fusion technique can serve as a step towards more broadly enabling privacy-preserving and efficient collaborative learning.
Acknowledgments
We would like to thank Rémi Flamary, Boris Muzellec, Sebastian Stich and other members of MLO, as well as the anonymous reviewers for their comments and feedback. | 1. What is the main contribution of the paper in terms of neural network fusion?
2. What are the strengths of the proposed approach compared to traditional methods?
3. What are the weaknesses of the paper regarding comparisons with other works and experimental scope? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
Authors propose an approach to fuse neural net models by aligning weights or activations using optimal transport (using the Wasserstein barycenter) instead of vanilla averaging of corresponding weights. This approach outperforms vanilla averaging by a big margin and can be used as an alternative to model ensembling in constrained settings. It is suitable for federated learning and does not require same-size layers.
Strengths
The proposed approach seems to work very well on the tested datasets and has the potential for great impact among a large set of applications. - The comparison with ensembling methods is very appealing, as well as the application to structured pruning.
Weaknesses
- Authors just compare against vanilla averaging of weights, but there are similar proposed approaches that are important to consider. I would really like to see a performance comparison with [1] for example. - The datasets and models where the idea was tested are good enough to get the point across but quite small compared to modern models and applications. 1. Wang, Hongyi, et al. "Federated learning with matched averaging." ICLR 2020
NIPS | Title
Bayesian Active Learning with Fully Bayesian Gaussian Processes
Abstract
The bias-variance trade-off is a well-known problem in machine learning that only gets more pronounced the less available data there is. In active learning, where labeled data is scarce or difficult to obtain, neglecting this trade-off can cause inefficient and non-optimal querying, leading to unnecessary data labeling. In this paper, we focus on active learning with Gaussian Processes (GPs). For the GP, the bias-variance trade-off is made by optimization of the two hyperparameters: the length scale and noise-term. Considering that the optimal mode of the joint posterior of the hyperparameters is equivalent to the optimal bias-variance trade-off, we approximate this joint posterior and utilize it to design two new acquisition functions. The first is a Bayesian variant of Query-by-Committee (B-QBC), and the second is an extension that explicitly minimizes the predictive variance through a Query by Mixture of Gaussian Processes (QB-MGP) formulation. Across six simulators, we empirically show that B-QBC, on average, achieves the best marginal likelihood, whereas QB-MGP achieves the best predictive performance. We show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling.
1 Introduction
Gaussian Processes (GPs) are well-known for their ability to deal with small to medium-size data sets by balancing model complexity and regularization [Williams and Rasmussen, 2006]. Together with their inherent ability to model uncertainties, this has made GPs the go-to models to use for Bayesian optimization and metamodeling [Snoek et al., 2012, Gramacy, 2020]. For both cases, the data is often scarce, making the modeling task a balance between complexity and regularization, i.e., preventing severe overfitting while maintaining the ability to fit nonlinear functions. Likewise, the ability to quantify the uncertainty often guides the acquisition functions of Bayesian optimization and active learning schemes, which are inevitably required to build metamodels efficiently.
On the other hand, it is not flawless to use GPs in Bayesian optimization and active learning. In both cases, the same GP is used in an iterative process firstly to predict the mean and variance of a new data point and then secondly to use those estimates to guide the data acquisition. Since this is an iterative process, poor predictions will lead to poor data acquisition, and vice versa. The problem is less pronounced for larger data sets as predictions become increasingly accurate as more data is available. However, in Bayesian optimization and active learning, where the data sets tend to be relatively small, wrong predictions can result in misguidance, thus hindering performance and efficiency. In this paper, we mitigate this problem by applying a fully Bayesian approach to the GPs and formulating two new acquisition functions for active learning. Where a single GP trained with a maximum likelihood estimate only represents one model hypothesis, a Fully Bayesian Gaussian Process (FBGP) represents multiple model hypotheses at once. We will utilize this extra information
to create an acquisition function that simultaneously seeks the best model hypothesis and minimizes the prediction error of the GP.
The hyperparameters of a GP are typically fitted through evaluation of the marginal likelihood, which automatically incorporates a trade-off between complexity and regularization [Williams and Rasmussen, 2006], also known as the bias-variance trade-off [Bishop, 2006]. However, when the data is scarce, it is more challenging to choose the appropriate trade-off, and different configurations of the hyperparameters of the GP can give rise to distinct fits. We highlight this issue in Figure 1, where two seemingly reasonable fits describe the data very distinctly, which would result in different acquisitions of new data.
The bias-variance trade-off for Gaussian Processes For a GP with a common stationary covariance function with a length scale and a noise-term, e.g., the radial-basis function or the Matérn class of functions with a Gaussian likelihood, the challenge of choosing the best bias-variance trade-off can be formulated using the hyperparameters of the GP. The two central hyperparameters are the length scale ℓ and the variance of the noise σ²_ε. The length scale describes how much the underlying model f fluctuates, i.e., if the length scale is short or long, the model varies quickly or slowly, respectively. In terms of tuning the hyperparameters, often a short length scale goes well with a small noise-term since the greater flexibility of a short length scale means that the noise level can be reduced. This results in a flexible model with high variance and low bias. Conversely, a long length scale tends to increase the noise level, resulting in a rigid model with low variance and high bias [Bishop, 2006]. In the extremes, the former corresponds to a white-noise process model, and the latter corresponds to a constant with white noise [Williams and Rasmussen, 2006]. Thus, there is a direct relation between the bias-variance trade-off and the values of the two hyperparameters.
How to choose the best bias-variance trade-off? Finding a good bias-variance trade-off is a non-trivial problem. In the case of small to medium-sized data sets, the joint posterior distribution of the two hyperparameters is likely to be characterized by two modes, as illustrated in Figure 2 for the data in Figure 1. The two modes illustrate two different bias-variance trade-offs, which both describe the data well. However, depending on the choice of trade-off, the data acquisition can be very distinct, and thus a wrong choice of mode will imply non-optimal guidance from the acquisition function.
Though the multimodal posterior has been studied for GPs before [Yao et al., 2022], the literature often searches for the single best mode with clever initializations of the hyperparameters [Williams and Rasmussen, 1995] or by favoring small ℓ and σ²_ε by either always initializing hyperparameters in the low noise regime or by applying strong priors [Gramacy, 2020]. However, none of these approaches directly address the core problem: which mode to choose? When working with Bayesian optimization and active learning, this should ideally be answered with prior information about the problem, although typically that is not available, making these approaches less practical [Antunes et al., 2018, Gramacy, 2020, Riis et al., 2021].
Our contribution We follow a general approach to the problem and assume no prior knowledge about the functional form of the data, kernel, nor hyperparameters. If it is known that the data has a periodic trend or high noise, it is advantageous to incorporate that into the kernel and the hyperparameters by using a periodic kernel and a prior on the hyperparameters that favor high noise, respectively. However, with no prior knowledge, we tend to use the general-purpose Radial-
Basis function (RBF) kernel with non-informative priors on the hyperparameters. Further, the fitted hyperparameters are often given by point estimates (e.g. fitted with maximum likelihood estimation or maximum a posteriori), but we consider multiple model hypotheses by replacing the fitting procedure of the marginal likelihood with Markov Chain Monte Carlo (MCMC) sampling to get the joint posterior of the hyperparameters. The result is that we have multiple models (same kernel, but different hyperparameters), which represent different model hypotheses. We utilize the extra information from the hyperparameters’ joint posterior to handle the bias-variance trade-off by incorporating the extra information into two new acquisition functions.
Our main contribution is the proposal of two new acquisition functions for active learning that utilize the extra information from the hyperparameters’ posterior estimated by MCMC to seek the most reasonable mode alongside minimizing the predictive variance. Through empirical results, we show that the two acquisition functions are more accurate and robust than other common functions across multiple benchmark simulators used in the literature.
2 Related work
In this section, we review related work to the proposed acquisition functions. We cover active learning schemes for regression tasks, including Query-by-Committee and GP as Gaussian Mixture Models.
Active Learning The main idea of active learning is to actively choose a new data point to label and add to the current training data set, to iteratively improve the performance of the predictive model [Settles, 2009]. In the context of metamodeling or surrogate modeling of simulators, new data is often added sequentially, i.e., one data point at a time [Gramacy, 2020], but in other applications, it can be beneficial to query batches of data instead [Kirsch et al., 2019].
The acquisition functions can be divided into model-based and model-free functions, where the former utilize information from the model and are often based on uncertainty measures (recently also function values and gradients [Fernandez et al., 2020, Svendsen et al., 2020]), whereas the latter do not use information from the model and are typically based on distance metrics in the input space [O’Neill et al., 2017]. Both types of functions seek to minimize the expected predictive loss of the model. Another distinction between the acquisition functions is decision-based versus information-theory-based [Houlsby et al., 2011]. Decision-based functions seek to minimize the expected predictive loss in the hope of maximizing the performance on the test set. Information-theoretic-based functions instead try to reduce the number of possible models, e.g., through the KL-divergence or Shannon entropy.
It is not straightforward to use information-theoretic acquisition functions. However, if one has access to the posterior of the parameters, Houlsby et al. [2011] have derived the algorithm Bayesian Active Learning by Disagreement (BALD), which can be applied in general. Generally, BALD seeks the data point that maximizes the decrease in the expected posterior entropy of the parameters.
Query-by-Committee The Query-by-Committee (QBC) is a specific acquisition function that was originally proposed for classification tasks [Seung et al., 1992]. It aims to maximize the disagreement among the committee to get the highest information gain and minimize the version space, which is the set of model hypotheses aligned with the training data. The construction of the committee is the core component of QBC since it is the committee’s ability to accurately and diversely represent the version space that gives rise to informative disagreement criteria [Settles, 2009].
Query-by-Committee can also be applied for regression problems. Krogh and Vedelsby [1995] construct the members of the committee by random initializations of the weights in the neural networks. RayChaudhuri and Hamey [1995] apply bagging and train the members on different subsets of the data set. In general, QBC constructed by bagging has been used as a benchmark with mixed results [Cai et al., 2013, Wu, 2018, Wu et al., 2019]. Burbidge et al. [2007] show that the less noise there is in the output, the better QBC is compared to random querying. They also highlight the fact that with a misspecified model, QBC might perform worse than random querying. None of these approaches explore the usage of MCMC samples of the posterior to construct a committee.
Gaussian Process as a Gaussian Mixture Model Mixture models have recently been applied in active learning for classification tasks. Iswanto [2021] proposes to use Gaussian Mixture Models (GMMs) with active learning, where he designs a specific acquisition function that queries the data point that maximizes the expected likelihood of the model. Zhao et al. [2020] use a mixture of GPs in active learning, where each component is fitted to a subset of the training set. The combination of
GMMs and GPs have previously been explored for static data sets. Chen and Ren [2009] investigate regression tasks and apply bagging, where they repeatedly randomly sample data points from the training set to construct new subsets to get GPs fitted to different data.
3 Gaussian Processes
The Gaussian Processes (GPs) are the central models in this work. In this section, we give a brief overview of GPs before covering the Fully Bayesian GPs. For a thorough description of GPs, we refer to Williams and Rasmussen [2006].
A Gaussian Process (GP) is a stochastic function fully defined by a mean function m(·) and a covariance function (often called a kernel) k(·, ·). Given the data D = (X, y) = {x_i, y_i}_{i=1}^N, where y_i is a corrupted observation of some latent function value f_i with Gaussian noise ε, i.e., y_i = f_i + ε_i, ε_i ∼ N(0, σ²_ε), a GP is typically denoted as GP(m_f(x), k_f(x, x′)). It is common practice to set the mean function equal to the zero-value vector and thus, the GP is fully determined by the kernel k_f(x, x′). For short, we will denote the kernel K_θ, which explicitly states that the kernel is parameterized with some hyperparameters θ. The generative model of the GP can be found in Appendix A.1. Given the hyperparameters θ, the predictive posterior for unknown test inputs X_* is given by p(f_* | θ, y, X, X_*) = N(µ_*, Σ_*) with

µ_* = K_θ^* (K_θ + σ²_ε I)^{-1} y    and    Σ_* = K_θ^{**} − K_θ^* (K_θ + σ²_ε I)^{-1} K_θ^{*⊤}    (1)

where K_θ^{**} denotes the covariance matrix between the test inputs, and K_θ^* denotes the covariance matrix between the test inputs and training inputs.

We use the canonical automatic relevance determination (ARD) Radial-Basis Function (RBF) kernel given by k(x, x′) = exp(−‖x − x′‖²/(2ℓ²)), where ℓ is a vector of length scales ℓ_1, ..., ℓ_d, one for each input dimension. Often the kernel is scaled by an output variance, but here we fix it to one and solely focus on the two other hyperparameters: length scale and noise-term. The noise-term σ²_ε is integrated into the kernel with an indicator variable by adding the term σ²_ε 1{x = x′} to the current kernel [Williams and Rasmussen, 2006, Bishop, 2006].
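For reference, a minimal NumPy sketch of Eq. (1) with the ARD-RBF kernel and a noise-term is given below; it is purely illustrative and not the GPyTorch implementation used in the experiments.

```python
import numpy as np


def ard_rbf(X1, X2, lengthscales):
    # k(x, x') = exp(-0.5 * sum_d (x_d - x'_d)^2 / l_d^2)
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return np.exp(-0.5 * np.sum(diff ** 2, axis=-1))


def gp_posterior(X, y, X_star, lengthscales, noise_var):
    K = ard_rbf(X, X, lengthscales) + noise_var * np.eye(len(X))
    K_star = ard_rbf(X_star, X, lengthscales)
    K_star_star = ard_rbf(X_star, X_star, lengthscales)
    L = np.linalg.cholesky(K)                              # stable solve of (K + s^2 I)^{-1}
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = K_star @ alpha                                    # predictive mean, Eq. (1)
    v = np.linalg.solve(L, K_star.T)
    cov = K_star_star - v.T @ v                            # predictive covariance, Eq. (1)
    return mu, cov
```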
Fully Bayesian Gaussian Processes (FBGP) An FBGP extends a GP by putting a prior over the hyperparameters p(θ) and approximating their full posteriors. The joint posterior is then given by

p(f, θ | y, X) ∝ p(y | f) p(f | θ, X) p(θ)    (2)

and the predictive posterior for the test inputs X_* is

p(y_* | y) = ∬ p(y_* | f_*, θ) p(f_* | θ, y) p(θ | y) df_* dθ    (3)

where the conditioning on X and X_* have been omitted for brevity. The inner integral reduces to the predictive posterior given by a normal GP, whereas the outer integral remains intractable and is approximated with MCMC inference with M samples as

p(y_* | y) = ∫ p(y_* | y, θ) p(θ | y) dθ ≈ (1/M) Σ_{j=1}^{M} p(y_* | y, θ_j),    θ_j ∼ p(θ | y)    (4)
Adapting the hyperparameters of an FBGP is computationally expensive compared to the approach with GPs and maximum likelihood estimation. However, in Bayesian optimization and active learning, the computational burden for querying a new data point will often be of magnitudes higher. For example for simulators, the computational cost of querying a new data point is, in general, expensive and can take minutes and hours [Gorissen et al., 2009, Riis et al., 2021, Chabanet et al., 2021].
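A minimal Pyro sketch of this inference step is given below: NUTS draws samples of the length scale and noise-term under Gaussian priors in log space, approximating p(θ | D) as used in Eq. (4). It is illustrative only; the exact prior parameterization here is an assumption, and the experiments rely on GPyTorch models.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS


def rbf(X1, X2, lengthscale):
    d2 = torch.cdist(X1 / lengthscale, X2 / lengthscale) ** 2
    return torch.exp(-0.5 * d2)


def fbgp_model(X, y):
    # Gaussian priors in log space on the two hyperparameters.
    lengthscale = pyro.sample("lengthscale", dist.LogNormal(0.0, 3.0))
    noise_var = pyro.sample("noise_var", dist.LogNormal(0.0, 3.0))
    K = rbf(X, X, lengthscale) + noise_var * torch.eye(X.shape[0])
    # Zero-mean GP marginal likelihood with the latent f integrated out.
    pyro.sample("y", dist.MultivariateNormal(torch.zeros(X.shape[0]),
                                             covariance_matrix=K), obs=y)


# Usage: draw posterior samples of (lengthscale, noise_var).
# mcmc = MCMC(NUTS(fbgp_model), num_samples=500, warmup_steps=200)
# mcmc.run(X_train, y_train)
# theta_samples = mcmc.get_samples()
```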
4 Active Learning
In this section, we lay out the most common acquisition functions and then propose first a Bayesian variant of Query-by-Committee and second an extension motivated by Gaussian Mixture Models, which seek to minimize both the predictive variance and the number of model hypotheses.
Many active learning acquisition functions are based on the model’s uncertainty and entropy and can thus be denoted as Bayesian active learning acquisition functions [Settles, 2009, Gramacy, 2020]. The most common acquisition function is based on the predictive entropy and denoted Active Learning MacKay (ALM) [MacKay, 1992]. All the following objective functions query a new data point by maximizing the argument x. In the following, we write a new test point x_* as x for brevity. All the acquisition functions choose a data point x among the possible data points in the unlabeled pool U.

Entropy (ALM) For a Gaussian distribution, the Shannon entropy H[·] is proportional to the predictive variance σ²(x) (derived in A.2), so ALM is given as

ALM = H[y | x, D] ∝ σ²(x)    (5)

Intuitively, ALM queries the data point where the uncertainty of the prediction is the highest.
If we have access to the posterior of the model’s hyperparameters, we can utilize acquisition functions with an extra Bayesian level. The posterior of the hyperparameters of an FBGP can be estimated with MCMC such that GPs with different kernel parameters can be drawn, e.g., the length scale and the noise-term ℓ, σ²_ε ∼ p(θ | D). The following four acquisition functions all utilize this information and are approximated using the samples from the MCMC, cf. (4).

Entropy (B-ALM) With the information from the posterior p(θ | D), the criterion for the extra Bayesian variant of ALM (B-ALM) is then given as

B-ALM = H[ ∫ p(y | x, θ) p(θ | D) dθ ] = H[ E_{p(θ|D)}[p(y | x, θ)] ] ∝ E_{p(θ|D)}[σ²_θ(x) | θ]    (6)
Bayesian Active Learning by Disagreement (BALD) Another common objective in Bayesian active learning is to maximize the expected decrease in posterior entropy [Guestrin et al., 2005, Houlsby et al., 2012]. Houlsby et al. [2011] rewrite the objective from computing entropies in the parameter space to the output space by observing that it is equivalent to maximizing the conditional mutual information between the model’s parameters θ̂ and the output, I[θ̂, y | x, D]. The acquisition function is denoted Bayesian Active Learning by Disagreement (BALD) and the criterion is given by:

I[θ̂, y | x, D] = H[y | x, D] − E_{p(θ̂|D)}[H[y | x, θ̂]]    (7)

BALD was originally derived for non-parametric discriminative models but has recently been extended to batches and deep learning with great success [Kirsch et al., 2019]. In the context of non-parametric models, such as GPs, the model’s parameters now correspond to the latent function f. For a regular GP, BALD is equivalent to ALM (cf. A.3), although that is not the case for an FBGP. If we let the hyperparameters be the main parameters of interest and set f to be a nuisance parameter, BALD can be written as (cf. A.3.1):

BALD = H[ E_{p(θ|D)}[y | x, D, θ] ] − E_{p(θ|D)}[H[y | x, θ]]    (8)
Bayesian Query-by-Committee (B-QBC) Motivated by finding the optimal bias-variance trade-off, we propose a Bayesian version of the Query-by-Committee, using the MCMC samples of the hyperparameters’ joint posterior. We have previously argued that the optimal bias-variance trade-off is equivalent to the optimal mode of the multimodal posterior of the hyperparameters, which is exactly what we utilize here. We use the joint posterior of the hyperparameters obtained through MCMC to draw multiple models and then query a new data point where the mean predictions µ_θ(x) of these models disagree the most, i.e., querying the data point that maximizes the variance of µ_θ(x). Each mean predictor µ_θ(·) drawn from the posterior is equivalent to a single model, and this criterion can therefore be seen as a Bayesian variant of Query-by-Committee, denoted Bayesian Query-by-Committee (B-QBC). Given that µ̄(x) is the average mean function, B-QBC is given as

B-QBC = V_{p(θ|D)}[µ_θ(x) | θ] = E_{p(θ|D)}[(µ_θ(x) − µ̄(x))² | θ]    (9)

Since the models are drawn from the hyperparameters’ posterior, the collection of models is dominated by models near the posterior modes. High variance in µ_θ(x) thus corresponds to high disagreement between modes. Querying the data point that maximizes this disagreement gives information about which mode is most likely to be the optimal one, and thus this can be seen as a mode-seeking Bayesian Query-by-Committee. To the best of our knowledge, we are the first to propose QBC based on model hypotheses drawn from the hyperparameters’ joint posterior.
Query by Mixture of Gaussian Processes (QB-MGP) Bayesian Query-by-Committee (B-QBC) seeks the optimal mode, but does not take the predictive performance of the model into account. Since the predictive performance, and thereby the predictive uncertainty, is also important, we extend B-QBC to consider the predictive entropy as well. We denote the new acquisition function Query by Mixture of Gaussian Processes (QB-MGP) because of its relation to Gaussian Mixture Models (GMMs). Using the MCMC samples, each prediction of the FBGP can be seen as an MGP, yielding the predictive posterior given as in equation (4): (1/M) Σ_{j=1}^{M} p(y_* | y, θ_j). This hierarchical predictive posterior is a mixture of M Gaussians with mean µ_GMM and variance σ²_GMM defined as (cf. A.4):

µ_GMM(x) = (1/M) Σ_{j=1}^{M} µ_{θ_j}(x)    (10)

σ²_GMM(x) = (1/M) Σ_{j=1}^{M} σ²_{θ_j}(x) + (1/M) Σ_{j=1}^{M} (µ_{θ_j}(x) − µ_GMM(x))²    (11)

Finding the data point that maximizes the variance of the Mixture of Gaussian Processes (MGP) is now equivalent to simultaneously considering the B-ALM and B-QBC, i.e., the sum of the two:

QB-MGP = E_{p(θ|D)}[σ²_θ(x) | θ] + E_{p(θ|D)}[(µ_θ(x) − µ_GMM(x))² | θ]    (12)

Instead of using bagging, we construct the multiple GPs by using the MCMC samples of the hyperparameters’ joint posterior, and then obtain a natural weighting of the GPs: the MGP will consist of more GPs with hyperparameters close to the modes than hyperparameters far away. To the best of our knowledge, we are the first to use an MGP in this manner for active learning.
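Given the per-sample predictive means and variances from the M MCMC draws, the sample-based criteria (6), (9), and (12) reduce to a few lines; the sketch below is illustrative, and the moment-matched Gaussian used for BALD is our own simplification rather than part of the derivation above.

```python
import numpy as np


def acquisition_scores(mu, var):
    """mu, var: (M, n_pool) predictive means/variances from M posterior GP draws."""
    mu_bar = mu.mean(axis=0)                        # mixture mean, Eq. (10)
    b_alm = var.mean(axis=0)                        # Eq. (6): expected predictive variance
    b_qbc = ((mu - mu_bar) ** 2).mean(axis=0)       # Eq. (9): committee disagreement
    qb_mgp = b_alm + b_qbc                          # Eq. (12) = mixture variance, Eq. (11)
    # BALD with the mixture entropy replaced by a moment-matched Gaussian (an assumption).
    bald = 0.5 * np.log(qb_mgp) - 0.5 * np.log(var).mean(axis=0)
    return {"B-ALM": b_alm, "B-QBC": b_qbc, "QB-MGP": qb_mgp, "BALD": bald}


# Querying a new point from the pool:
# x_next = pool[np.argmax(acquisition_scores(mu, var)["QB-MGP"])]
```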
5 Experiments
In this section, we benchmark the performance of the two proposed acquisition functions against the standard acquisition functions based on the entropy, i.e., ALM, B-ALM, and BALD, on various classic simulators used in recent literature on GPs and active learning. They are all listed in Table 1, and those with fewer than three inputs are shown in Figure 3.¹ The multimodal posteriors of the FBGPs fitted to the simulators can be found in Appendix A.5.

¹All of them can be found at https://www.sfu.ca/~ssurjano/ [Surjanovic and Bingham, 2022].
Experimental settings In the experiments, we use a zero-mean GP with an ARD RBF kernel. In each iteration of the active learning loop, the inputs are rescaled to the unit cube [0, 1]^d, and the outputs are standardized to have zero mean and unit variance. Following Lalchand and Rasmussen [2020], we give all the hyperparameters relatively uninformative N(0, 3) priors in log space. The initial data sets consist of three data points chosen by maximin Latin Hypercube Sampling [Pedregosa et al., 2011], and in each iteration, one data point is queried. The unlabeled pool U consists of the input space discretized into 100 equidistant points along each dimension. If U contains more than 10,000 data points, we randomly sample a subset of 10,000 data points in each iteration and use that as the new pool. The inference in FBGP is carried out using NUTS [Hoffman and Gelman, 2014] in Pyro [Bingham et al., 2019] with five chains and 500 samples, including a warm-up period with 200 samples. The remaining 1500 samples are all used for the acquisition functions. For all the predictions, we use the best mode of the hyperparameters’ posterior, since the mean is of limited value when the posterior is multimodal. The best mode is computed by using a kernel density estimation with a Gaussian kernel [Pedregosa et al., 2011]. The models are implemented in GPyTorch [Gardner et al., 2018]. All experiments are repeated ten times with different initial data sets. With seven simulators and five acquisition functions, this gives 350 active learning runs, each with a running time of approximately one hour, using five CPU cores on a Threadripper 3960X. The code for reproducing the experiments is available on GitHub.²
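The overall loop described above can be summarized in the following sketch; the callables for fitting the FBGP and producing per-sample predictions are passed in as placeholders standing in for the Pyro/GPyTorch components, so the names here are purely illustrative.

```python
import numpy as np


def active_learning_loop(X, y, pool, oracle, fit_and_predict, acquisition, n_iter=50):
    """fit_and_predict(X, y, pool) -> (mu, var) of shape (M, n_pool);
    acquisition(mu, var) -> one score per pool point; oracle labels a queried point."""
    X, y = X.copy(), y.copy()
    for _ in range(n_iter):
        mu, var = fit_and_predict(X, y, pool)       # FBGP fit + per-sample predictions
        idx = int(np.argmax(acquisition(mu, var)))  # e.g. QB-MGP from the earlier sketch
        X = np.vstack([X, pool[idx]])               # add the queried input
        y = np.append(y, oracle(pool[idx]))         # label it with the simulator
    return X, y
```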
Evaluation It is common to evaluate the performance of active learning by visually inspecting the learning or loss curves, or by measuring the performance after a specific iteration [Gramacy, 2020, Settles, 2009]. However, both procedures are inadequate for quantifying how much better one acquisition function is than another if we are not interested in the performance at a specific iteration but the performance in general. We are unaware of any metric for quantifying the overall performance within the regression setting, which is comparable across different data sets. Within the classification setting, Yang and Loog [2018] propose to use the area under the learning curve (AUC) based on accuracy. Since the accuracy is bounded between 0 and 1, they can compare this measure across different data sets. Likewise, O’Neill et al. [2017] compute the AUC using the root mean square error (RMSE) as the performance metric. However, since the magnitude of the RMSE is data-specific, they can not compare the performance across data sets. To resolve this problem, we suggest using the relative decrease in AUC.
We compare the relative decrease with respect to the baseline acquisition function ALM since the latter is widely used and regarded as the standard within active learning with GPs. For a metric that is lower bounded by zero, such as RMSE, the AUC will give the overall error of the acquisition function. We can directly calculate the relative decrease in error from the AUCs of the active learning acquisition functions and the baseline. If the metric has no lower bound, such as the negative log marginal likelihood (NLML), the interpretation is less intuitive. Therefore, we make a lower bound for the metric using the lowest NLML obtained across all the acquisition functions, such that the relative decrease in the AUC then can be interpreted in the same way as for the RMSE. We compare all 10 runs of each acquisition function with the 10 runs for the baseline to get a precise estimate of both the mean and standard deviation of the relative decreases. Since the relative decrease is a ratio, we compute the unbiased estimates using the formulas from Van Kempen and Van Vliet [2000]. See Appendix A.6 for the formulas and the pseudo-code.
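A small sketch of this evaluation is shown below: the area under the learning curve is computed with the trapezoidal rule and compared against the ALM baseline. The naive ratio of mean AUCs shown here is a simplification of the unbiased ratio estimates used in the paper.

```python
import numpy as np


def auc(curve, lower_bound=0.0):
    """Area under one learning curve (per-iteration metric values)."""
    return np.trapz(np.asarray(curve) - lower_bound)


def relative_decrease(curves_method, curves_baseline, lower_bound=0.0):
    auc_m = np.mean([auc(c, lower_bound) for c in curves_method])
    auc_b = np.mean([auc(c, lower_bound) for c in curves_baseline])
    return 1.0 - auc_m / auc_b   # fraction by which the method shrinks the baseline AUC
```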
5.1 Experiments with 6 simulators
We benchmark the acquisition functions on the six classic simulators. We divide the simulators into subgroups and describe how the functions work in different complexities and with multiple inputs, effectively encompassing three distinct modeling scenarios. The following paragraphs describe the active learning curves in Figure 4.
Noise or signal The simulator Gramacy1d has previously been used to study the effect of the noise-term in GPs [Gramacy and Lee, 2012]. The simulator has a periodic signal that is hard to reveal if the data points are not queried cleverly. Both B-QBC and QB-MGP reach convergence simultaneously, while the other acquisition functions struggle in distinguishing noise from the signal.
Linear and non-linear output regions The two simulators Higdon and Gramacy2d have been used to illustrate cases where GPs struggle in modeling the data due to the output signal having both linear and non-linear regions [Gramacy and Lee, 2009]. Our experiments on these simulators show the performance of the acquisition functions when the GP is a non-optimal choice of model. Querying data points in the linear and non-linear regions will yield a GP with a longer and shorter length scale, respectively. For Higdon, both B-QBC and QB-MGP balance the sampling since the corresponding NLML and RMSE are low; likewise, for Gramacy2d, QB-MGP achieves both the lowest NLML and RMSE. Overall, these results show that when the GP is inadequate to model the data, both B-QBC and QB-MGP perform better than the other acquisition functions.

²https://github.com/coriis/active-learning-fbgp
Multiple inputs To evaluate the performance on higher dimensions, we consider the smooth 2d Branin simulator, the strongly non-linear 3d Ishigami simulator, and the 6d Hartmann simulator with six local minima. BALD underperforms on Branin, but the other acquisition functions have similar performance, with B-QBC having an overall better NLML. For Ishigami, BALD, B-QBC, and QB-MGP reach the best NLML, but the earlier iterations show that BALD is slightly better than the other two acquisition functions. For Hartmann, B-QBC and QB-MGP are the most stable and best in terms of the NLML, whereas the latter achieves the lowest RMSE.
5.2 The Overall Performance
From the visual inspection of Figure 4, it is hard to measure how good each of the acquisition functions is. In Table 2, we quantify the performance using the method described earlier, based on the relative decrease in the area under the curve (AUC).

First of all, it is clear that no single acquisition function performs the best for all the simulators. B-QBC achieves the largest decrease in AUC for either NLML or RMSE in five cases, and QB-MGP is the next best, having the largest decrease four times. This is reflected in the overall performance, where B-QBC and QB-MGP are the best performing acquisition functions in terms of the marginal likelihood and root mean square error, respectively. Conversely, BALD is the worst performing acquisition function overall, not even better than the baseline, which suggests that this acquisition function might not be suited for Gaussian Processes within the regression setting. Given the overall consistently good performance of B-QBC and QB-MGP (B-QBC only worse than the baseline twice), we say that they are robust to different complexities in the simulators’ outputs.
5.3 Limitations
Gaussian Processes are known to be computationally expensive [Williams and Rasmussen, 2006]. The computational cost scales cubically with the number of data points in the data set, i.e., O(n³), and GPs are thus only suited for small data sets. A fully Bayesian GP is even more computationally expensive because of the MCMC sampling. There exist methods to circumvent this, e.g., variational inference (VI), but often at the cost of the approximation of the joint posterior of the hyperparameters [Lalchand and Rasmussen, 2020]. For future work, we will investigate if the posterior can be approximated by VI, using several initializations, or if it is possible to achieve similar performance by alternating between using a fully Bayesian GP and a regular GP.
This paper is based on empirical results that are dependent on specific simulators. The simulators represent diverse and distinct classic homoscedastic simulators and are representative of the engineering problems occurring in the real world [Gramacy, 2020, Sauer et al., 2022, Cole et al., 2022]. However, an interesting case that is less often investigated in the literature on active learning with simulators, is a simulator with heteroscedastic noise.
A common heteroscedastic case study is the motorcycle data set [Silverman, 1985, Gramacy and Lee, 2008, Gramacy, 2020]. We create a corresponding simulator by fitting a variational GP [Hensman et al., 2015] to the motorcycle accident data. For reproducibility, the mean and standard deviation of the simulator are given in Appendix A.7. The experiments on the Motorcycle simulator explore how the active learning acquisition functions perform when the simulator has heteroscedastic noise, but we model it with a homoscedastic GP. The simulator and the results are seen in Figure 5. Most conspicuous is the poor and good performance of B-QBC and QB-MGP, respectively. In Appendix A.8, we show that B-QBC is misled by the heteroscedastic noise and focuses too much on the disagreement in the middle, and that the B-ALM component of QB-MGP acts as a diversity measure that encourages more exploration since it aims at reducing the overall predictive uncertainty.
Another aspect of the work in this paper is the use of domain and expert knowledge. The incorporation of domain and expert guidance regarding the simulator under study can be a decisive factor in a successful active learning strategy. However, in many practical situations, such a priori domain expertise may not be readily accessible or even translatable into the functional structure of the model as useful modeling information. On these occasions, generic tools that are robust enough to handle a plethora of diverse simulation output behaviors are prudently advisable. If such information regarding the functional complexity were available, e.g., knowing that the signal is periodic, this study does not show which of the acquisition functions would be best. In future work, it would indeed be interesting to see if B-QBC and QB-MGP would perform equally well.
6 Conclusion
In this paper, we propose two active learning acquisition functions: Bayesian Query-by-Committee (B-QBC) and Query by a Mixture of Gaussian Processes (QB-MGP), both of which are suited for fully Bayesian GPs. They are designed to explicitly handle the well-known bias-variance trade-off by optimization of the GP’s two hyperparameters, length scale and noise-term. We empirically show that they query new data points more efficiently than previously used acquisition functions. Across six classic simulators, which cover different complexities and numbers of inputs, we show that B-QBC and QB-MGP are the two functions that achieve the best marginal likelihood and root mean square error, respectively, with the fewest iterations. On average, across the simulators, B-QBC reduced the negative marginal log-likelihood by 41%, and QB-MGP decreased the root mean square error by 12% compared to the baseline. To this end, we believe that the proposed acquisition functions are robust enough to handle a variety of diverse simulation output behaviors, while being entirely independent of any prior understanding of the underlying output distributions of the simulator.
Acknowledgments and Disclosure of Funding
This work was supported by NOSTROMO, framed in the scope of the SESAR 2020 Exploratory Research topic SESAR-ER4-26-2019, funded by SESAR Joint Undertaking through the European Union’s Horizon 2020 research and innovation programme under grant agreement No 892517. | 1. What is the focus and contribution of the paper on active learning for Gaussian processes?
2. What are the strengths of the proposed approach, particularly in its full-Bayesian solution and state-of-the-art discussion?
3. What are the weaknesses of the paper regarding its contribution and novelty compared to other works?
4. Do you have any suggestions or recommendations to enhance the state-of-the-art discussion and improve the paper's overall impact? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors propose a novel active learning scheme for GPs based on a full-Bayesian solution.
Strengths And Weaknesses
The paper is well-written and it is nice to read this work. This is the main strength, in my opinion. The state-of-the-art discussion is the nicest part, in my opinion.
Weakness: the contribution seems a bit incremental.
Questions
I have only a suggestion: to complete the state-of-the-art discussion, including the acquisition functions in GP schemes (for regression or for quadrature) that also consider the gradient information, as suggested in
D. H. Svendsen, et al. Active Emulation of Computer Codes with Gaussian Processes - Application to Remote Sensing, Pattern Recognition Volume 100, 2020,
F. Llorente, et al, "Adaptive quadrature schemes for Bayesian inference via active learning", IEEE Access, Volume 8, 2020.
M. Kanagawa and P. Hennig, “Convergence Guarantees for Adaptive Bayesian Quadrature Methods,” in Advances in Neural Information Processing Systems, 2019, pp. 6234–6245.
Limitations
The contribution seems a bit incremental. |
NIPS | Title
Bayesian Active Learning with Fully Bayesian Gaussian Processes
Abstract
The bias-variance trade-off is a well-known problem in machine learning that only gets more pronounced the less available data there is. In active learning, where labeled data is scarce or difficult to obtain, neglecting this trade-off can cause inefficient and non-optimal querying, leading to unnecessary data labeling. In this paper, we focus on active learning with Gaussian Processes (GPs). For the GP, the bias-variance trade-off is made by optimization of the two hyperparameters: the length scale and noise-term. Considering that the optimal mode of the joint posterior of the hyperparameters is equivalent to the optimal bias-variance trade-off, we approximate this joint posterior and utilize it to design two new acquisition functions. The first is a Bayesian variant of Query-by-Committee (B-QBC), and the second is an extension that explicitly minimizes the predictive variance through a Query by Mixture of Gaussian Processes (QB-MGP) formulation. Across six simulators, we empirically show that B-QBC, on average, achieves the best marginal likelihood, whereas QB-MGP achieves the best predictive performance. We show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling.
1 Introduction
Gaussian Processes (GPs) are well-known for their ability to deal with small to medium-size data sets by balancing model complexity and regularization [Williams and Rasmussen, 2006]. Together with their inherent ability to model uncertainties, this has made GPs the go-to models to use for Bayesian optimization and metamodeling [Snoek et al., 2012, Gramacy, 2020]. For both cases, the data is often scarce, making the modeling task a balance between complexity and regularization, i.e., preventing severe overfitting while maintaining the ability to fit nonlinear functions. Likewise, the ability to quantify the uncertainty often guides the acquisition functions of Bayesian optimization and active learning schemes, which are inevitably required to build metamodels efficiently.
On the other hand, it is not flawless to use GPs in Bayesian optimization and active learning. In both cases, the same GP is used in an iterative process firstly to predict the mean and variance of a new data point and then secondly to use those estimates to guide the data acquisition. Since this is an iterative process, poor predictions will lead to poor data acquisition, and vice versa. The problem is less pronounced for larger data sets as predictions become increasingly accurate as more data is available. However, in Bayesian optimization and active learning, where the data sets tend to be relatively small, wrong predictions can result in misguidance, thus hindering performance and efficiency. In this paper, we mitigate this problem by applying a fully Bayesian approach to the GPs and formulating two new acquisition functions for active learning. Where a single GP trained with a maximum likelihood estimate only represents one model hypothesis, a Fully Bayesian Gaussian Process (FBGP) represents multiple model hypotheses at once. We will utilize this extra information
to create an acquisition function that simultaneously seeks the best model hypothesis and minimizes the prediction error of the GP.
The hyperparameters of a GP are typically fitted through evaluation of the marginal likelihood, which automatically incorporates a trade-off between complexity and regularization [Williams and Rasmussen, 2006], also known as the bias-variance trade-off [Bishop, 2006]. However, when the data is scarce, it is more challenging to choose the appropriate trade-off, and different configurations of the hyperparameters of the GP can give rise to distinct fits. We highlight this issue in Figure 1, where two seemingly reasonable fits describe the data very distinctly, which would result in different acquisitions of new data.
The bias-variance trade-off for Gaussian Processes For a GP with a common stationary covariance function with a length scale and a noise-term, e.g., the radial-basis function or the Matérn class of functions with a Gaussian likelihood, the challenge of choosing the best bias-variance trade-off can be formulated using the hyperparameters of the GP. The two central hyperparameters are the length scale ℓ and the variance of the noise σ²_ε. The length scale describes how much the underlying model f fluctuates, i.e., if the length scale is short or long, the model varies quickly or slowly, respectively. In terms of tuning the hyperparameters, often a short length scale goes well with a small noise-term since the greater flexibility of a short length scale means that the noise level can be reduced. This results in a flexible model with high variance and low bias. Conversely, a long length scale tends to increase the noise level, resulting in a rigid model with low variance and high bias [Bishop, 2006]. In the extremes, the former corresponds to a white-noise process model, and the latter corresponds to a constant with white noise [Williams and Rasmussen, 2006]. Thus, there is a direct relation between the bias-variance trade-off and the values of the two hyperparameters.
How to choose the best bias-variance trade-off? Finding a good bias-variance trade-off is a non-trivial problem. In the case of small to medium-sized data sets, the joint posterior distribution of the two hyperparameters is likely to be characterized by two modes, as illustrated in Figure 2 for the data in Figure 1. The two modes illustrate two different bias-variance trade-offs, which both describe the data well. However, depending on the choice of trade-off, the data acquisition can be very distinct, and thus a wrong choice of mode will imply non-optimal guidance from the acquisition function.
Though the multimodal posterior has been studied for GPs before [Yao et al., 2022], the literature often searches for the single best mode with clever initializations of the hyperparameters [Williams and Rasmussen, 1995] or by favoring small ℓ and σ²_ε by either always initializing hyperparameters in the low noise regime or by applying strong priors [Gramacy, 2020]. However, none of these approaches directly address the core problem: which mode to choose? When working with Bayesian optimization and active learning, this should ideally be answered with prior information about the problem, although typically that is not available, making these approaches less practical [Antunes et al., 2018, Gramacy, 2020, Riis et al., 2021].
Our contribution We follow a general approach to the problem and assume no prior knowledge about the functional form of the data, the kernel, or the hyperparameters. If it is known that the data has a periodic trend or high noise, it is advantageous to incorporate that into the kernel and the hyperparameters by using a periodic kernel and a prior on the hyperparameters that favors high noise, respectively. However, with no prior knowledge, we tend to use the general-purpose Radial-
Basis function (RBF) kernel with non-informative priors on the hyperparameters. Further, the fitted hyperparameters are often given by point estimates (e.g. fitted with maximum likelihood estimation or maximum a posteriori), but we consider multiple model hypotheses by replacing the fitting procedure of the marginal likelihood with Markov Chain Monte Carlo (MCMC) sampling to get the joint posterior of the hyperparameters. The result is that we have multiple models (same kernel, but different hyperparameters), which represent different model hypotheses. We utilize the extra information from the hyperparameters’ joint posterior to handle the bias-variance trade-off by incorporating the extra information into two new acquisition functions.
Our main contribution is the proposal of two new acquisition functions for active learning that utilize the extra information from the hyperparameters’ posterior estimated by MCMC to seek the most reasonable mode alongside minimizing the predictive variance. Through empirical results, we show that the two acquisition functions are more accurate and robust than other common functions across multiple benchmark simulators used in the literature.
2 Related work
In this section, we review work related to the proposed acquisition functions. We cover active learning schemes for regression tasks, including Query-by-Committee and the Gaussian Process as a Gaussian Mixture Model.
Active Learning The main idea of active learning is to actively choose a new data point to label and add to the current training data set, to iteratively improve the performance of the predictive model [Settles, 2009]. In the context of metamodeling or surrogate modeling of simulators, new data is often added sequentially, i.e., one data point at a time [Gramacy, 2020], but in other applications, it can be beneficial to query batches of data instead [Kirsch et al., 2019].
The acquisition functions can be divided into model-based and model-free functions, where the former utilize information from the model and are often based on uncertainty measures (recently also function values and gradients [Fernandez et al., 2020, Svendsen et al., 2020]), whereas the latter do not use information from the model and are typically based on distance metrics in the input space [O’Neill et al., 2017]. Both types of functions seek to minimize the expected predictive loss of the model. Another distinction is between decision-based and information-theoretic acquisition functions [Houlsby et al., 2011]. Decision-based functions seek to minimize the expected predictive loss in the hope of maximizing the performance on the test set. Information-theoretic functions instead try to reduce the number of possible models, e.g., through the KL-divergence or Shannon entropy.
Information-theoretic acquisition functions are not always straightforward to use. However, if one has access to the posterior of the parameters, the Bayesian Active Learning by Disagreement (BALD) algorithm derived by Houlsby et al. [2011] can be applied in general: BALD seeks the data point that maximizes the decrease in the expected posterior entropy of the parameters.
Query-by-Committee The Query-by-Committee (QBC) is a specific acquisition function that was originally proposed for classification tasks [Seung et al., 1992]. It aims to maximize the disagreement among the committee to get the highest information gain and minimize the version space, which is the set of model hypotheses aligned with the training data. The construction of the committee is the core component of QBC since it is the committee’s ability to accurately and diversely represent the version space that gives rise to informative disagreement criteria [Settles, 2009].
Query-by-Committee can also be applied for regression problems. Krogh and Vedelsby [1995] construct the members of the committee by random initializations of the weights in the neural networks. RayChaudhuri and Hamey [1995] apply bagging and train the members on different subsets of the data set. In general, QBC constructed by bagging has been used as a benchmark with mixed results [Cai et al., 2013, Wu, 2018, Wu et al., 2019]. Burbidge et al. [2007] show that the less noise there is in the output, the better QBC is compared to random querying. They also highlight the fact that with a misspecified model, QBC might perform worse than random querying. None of these approaches explore the usage of MCMC samples of the posterior to construct a committee.
Gaussian Process as a Gaussian Mixture Model Mixture models have recently been applied in active learning for classification tasks. Iswanto [2021] proposes to use Gaussian Mixture Models (GMMs) with active learning, where he designs a specific acquisition function that queries the data point that maximizes the expected likelihood of the model. Zhao et al. [2020] use a mixture of GPs in active learning, where each component is fitted to a subset of the training set. The combination of
GMMs and GPs has previously been explored for static data sets. Chen and Ren [2009] investigate regression tasks and apply bagging, where they repeatedly sample data points at random from the training set to construct new subsets, so that the GPs are fitted to different data.
3 Gaussian Processes
Gaussian Processes (GPs) are the central models in this work. In this section, we give a brief overview of GPs before covering the Fully Bayesian GPs. For a thorough description of GPs, we refer to Williams and Rasmussen [2006].
A Gaussian Process (GP) is a stochastic function fully defined by a mean function m(·) and a covariance function (often called a kernel) k(·, ·). Given the data D = (X, y) = {x_i, y_i}_{i=1}^N, where y_i is a noisy observation of some latent function value f_i corrupted by Gaussian noise ε, i.e., y_i = f_i + ε_i with ε_i ∼ N(0, σ²_ε), a GP is typically denoted as GP(m_f(x), k_f(x, x′)). It is common practice to set the mean function equal to the zero vector, and thus the GP is fully determined by the kernel k_f(x, x′). For short, we will denote the kernel K_θ, which makes explicit that the kernel is parameterized by some hyperparameters θ. The generative model of the GP can be found in Appendix A.1. Given the hyperparameters θ, the predictive posterior for unknown test inputs X* is given by p(f* | θ, y, X, X*) = N(μ*, Σ*) with
μ* = K*_θ (K_θ + σ²_ε I)⁻¹ y   and   Σ* = K**_θ − K*_θ (K_θ + σ²_ε I)⁻¹ (K*_θ)ᵀ   (1)
where K**_θ denotes the covariance matrix between the test inputs, and K*_θ denotes the covariance matrix between the test inputs and the training inputs.
We use the canonical automatic relevance determination (ARD) Radial-basis function (RBF) kernel given by k(x, x′) = exp(−‖x − x′‖² / (2ℓ²)), where ℓ is a vector of length scales ℓ₁, ..., ℓ_d, one for each input dimension. Often the kernel is scaled by an output variance, but here we fix it to one and solely focus on the two other hyperparameters: the length scale and the noise term. The noise term σ²_ε is integrated into the kernel with an indicator variable by adding the term σ²_ε 1{x = x′} to the current kernel [Williams and Rasmussen, 2006, Bishop, 2006].
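To make the notation concrete, the following is a minimal NumPy sketch of the ARD RBF kernel with a noise term and of the predictive equations in (1). It is an illustrative sketch under our own naming conventions, not the authors' implementation (which uses GPyTorch).

```python
# Minimal sketch (our own, not the authors' GPyTorch code) of the ARD RBF kernel
# and the predictive equations (1) for a zero-mean GP.
import numpy as np

def ard_rbf(X1, X2, lengthscales):
    # k(x, x') = exp(-sum_d (x_d - x'_d)^2 / (2 * l_d^2)); output variance fixed to one
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

def gp_posterior(X, y, X_star, lengthscales, noise_var):
    K = ard_rbf(X, X, lengthscales) + noise_var * np.eye(len(X))  # K_theta + noise * I
    K_s = ard_rbf(X_star, X, lengthscales)                        # K*_theta
    K_ss = ard_rbf(X_star, X_star, lengthscales)                  # K**_theta
    L = np.linalg.cholesky(K)                                     # stable alternative to an explicit inverse
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = K_s @ alpha                                              # predictive mean, eq. (1)
    V = np.linalg.solve(L, K_s.T)
    cov = K_ss - V.T @ V                                          # predictive covariance, eq. (1)
    return mu, cov
```

For the predictive distribution of a noisy observation y*, the noise variance is additionally added to the diagonal of the returned covariance.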
Fully Bayesian Gaussian Processes (FBGP) An FBGP extends a GP by putting a prior p(θ) over the hyperparameters and approximating their full posterior. The joint posterior is then given by

p(f, θ | y, X) ∝ p(y | f) p(f | θ, X) p(θ)   (2)

and the predictive posterior for the test inputs X* is

p(y* | y) = ∫∫ p(y* | f*, θ) p(f* | θ, y) p(θ | y) df* dθ   (3)

where the conditioning on X and X* has been omitted for brevity. The inner integral reduces to the predictive posterior of a regular GP, whereas the outer integral remains intractable and is approximated with MCMC inference with M samples as
p(y* | y) = ∫ p(y* | y, θ) p(θ | y) dθ ≈ (1/M) Σ_{j=1}^{M} p(y* | y, θ_j),   θ_j ∼ p(θ | y)   (4)
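As a small illustration of (4), the predictive density of an FBGP at a test point is simply the average of the per-sample Gaussian predictive densities; the arrays of per-sample means and variances below are assumed to come from evaluating (1) once per MCMC draw.

```python
# Sketch of the Monte Carlo approximation (4); `means[j]` and `variances[j]` are the
# Gaussian predictive moments of y* under the j-th draw theta_j ~ p(theta | y).
import numpy as np
from scipy.stats import norm

def fbgp_predictive_density(y_star, means, variances):
    # (1/M) * sum_j N(y* | mu_{theta_j}, sigma^2_{theta_j})
    return float(np.mean(norm.pdf(y_star, loc=means, scale=np.sqrt(variances))))

def fbgp_predictive_sample(means, variances, rng):
    # ancestral sampling from the mixture: pick a hyperparameter draw, then sample y*
    j = rng.integers(len(means))
    return rng.normal(means[j], np.sqrt(variances[j]))
```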
Adapting the hyperparameters of an FBGP is computationally expensive compared to the approach with GPs and maximum likelihood estimation. However, in Bayesian optimization and active learning, the computational burden of querying a new data point will often be orders of magnitude higher. For simulators, for example, the computational cost of querying a new data point is in general expensive and can take minutes to hours [Gorissen et al., 2009, Riis et al., 2021, Chabanet et al., 2021].
4 Active Learning
In this section, we lay out the most common acquisition functions and then propose first a Bayesian variant of Query-by-Committee and second an extension motivated by Gaussian Mixture Models, which seek to minimize both the predictive variance and the number of model hypotheses.
Many active learning acquisition functions are based on the model’s uncertainty and entropy and can thus be denoted as Bayesian active learning acquisition functions [Settles, 2009, Gramacy, 2020]. The most common acquisition function is based on the predictive entropy and denoted Active Learning MacKay (ALM) [MacKay, 1992]. All the following acquisition functions query the data point x that maximizes their criterion. In the following, we write a new test point x* as x for brevity. All the acquisition functions choose a data point x among the possible data points in the unlabeled pool U.
Entropy (ALM) For a Gaussian distribution, the Shannon entropy H[·] is proportional to the predictive variance σ²(x) (derived in A.2), so ALM is given as

ALM = H[y | x, D] ∝ σ²(x)   (5)
Intuitively, ALM queries the data point where the uncertainty of the prediction is the highest.
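For instance, given any routine returning the predictive variance of the current GP (the hypothetical `predictive_variance` below is our own stand-in name), ALM reduces to an argmax over the unlabeled pool:

```python
# Sketch of ALM (5): query the pool point with the largest predictive variance.
# `predictive_variance` is a stand-in name for the current GP's variance function.
import numpy as np

def alm_query(pool, predictive_variance):
    scores = np.array([predictive_variance(x) for x in pool])
    return pool[int(np.argmax(scores))]
```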
If we have access to the posterior of the model’s hyperparameters, we can utilize acquisition functions with an extra Bayesian level. The posterior of the hyperparameters of an FBGP can be estimated with MCMC such that GPs with different kernel parameters can be drawn, e.g., the length scale and the noise term ℓ, σ²_ε ∼ p(θ | D). The following four acquisition functions all utilize this information and are approximated using the samples from the MCMC, cf. (4).
Entropy (B-ALM) With the information from the posterior p(θ | D), the criterion for the extra Bayesian variant of ALM (B-ALM) is then given as

B-ALM = H[∫ p(y | x, θ) p(θ | D) dθ] = H[E_{p(θ|D)}[p(y | x, θ)]] ∝ E_{p(θ|D)}[σ²_θ(x)]   (6)
Bayesian Active Learning by Disagreement (BALD) Another common objective in Bayesian active learning is to maximize the expected decrease in posterior entropy [Guestrin et al., 2005, Houlsby et al., 2012]. Houlsby et al. [2011] rewrite the objective from computing entropies in the parameter space to the output space by observing that it is equivalent to maximizing the conditional mutual information I[θ̂, y | x, D] between the model’s parameters θ̂ and the output. The acquisition function is denoted Bayesian Active Learning by Disagreement (BALD) and the criterion is given by:
I[θ̂, y | x, D] = H[y | x, D] − E_{p(θ̂|D)}[H[y | x, θ̂]]   (7)
BALD was originally derived for non-parametric discriminative models but has recently been extended to batches and deep learning with great success [Kirsch et al., 2019]. In the context of non-parametric models, such as GPs, the model’s parameters correspond to the latent function f. For a regular GP, BALD is equivalent to ALM (cf. A.3), although that is not the case for an FBGP. If we let the hyperparameters be the main parameters of interest and treat f as a nuisance parameter, BALD can be written as (cf. A.3.1):
BALD = H[E_{p(θ|D)}[y | x, D, θ]] − E_{p(θ|D)}[H[y | x, θ]]   (8)
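The first term of (8) involves the entropy of a mixture of Gaussians, which has no closed form. One practical way to evaluate the score, sketched below as our own simplifying assumption rather than the paper's stated procedure, is to moment-match the mixture with a single Gaussian:

```python
# Hedged sketch of BALD (8) for an FBGP. `means[j]` and `variances[j]` are the
# per-sample predictive moments of y at a candidate x for theta_j ~ p(theta | D).
# The mixture entropy (first term) is approximated with a moment-matched Gaussian,
# which is our simplification, not a procedure stated in the paper.
import numpy as np

def bald_score(means, variances):
    mix_var = np.mean(variances) + np.var(means)                     # variance of the mixture
    h_mix = 0.5 * np.log(2.0 * np.pi * np.e * mix_var)               # approx. H[y | x, D]
    h_cond = np.mean(0.5 * np.log(2.0 * np.pi * np.e * variances))   # E_theta H[y | x, theta]
    return h_mix - h_cond
```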
Bayesian Query-by-Committee (B-QBC) Motivated by finding the optimal bias-variance trade-off, we propose a Bayesian version of Query-by-Committee, using the MCMC samples of the hyperparameters’ joint posterior. We have previously argued that the optimal bias-variance trade-off is equivalent to the optimal mode of the multimodal posterior of the hyperparameters, which is exactly what we utilize here. We use the joint posterior of the hyperparameters obtained through MCMC to draw multiple models and then query a new data point where the mean predictions μ_θ(x) of these models disagree the most, i.e., the data point that maximizes the variance of μ_θ(x). Each mean predictor μ_θ(·) drawn from the posterior is equivalent to a single model, so this criterion can be seen as a Bayesian variant of Query-by-Committee, and is therefore denoted Bayesian Query-by-Committee (B-QBC). Given that μ̄(x) is the average mean function, B-QBC is given as
B-QBC = V_{p(θ|D)}[μ_θ(x)] = E_{p(θ|D)}[(μ_θ(x) − μ̄(x))²]   (9)
Since the models are drawn from the hyperparameters’ posterior, the collection of models is dominated by models near the posterior modes. High variance in μ_θ(x) thus corresponds to high disagreement between modes. Querying the data point that maximizes this disagreement gives information about which mode is most likely to be the optimal one, so B-QBC can be seen as a mode-seeking Bayesian Query-by-Committee. To the best of our knowledge, we are the first to propose QBC based on model hypotheses drawn from the hyperparameters’ joint posterior.
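Concretely, if the posterior mean predictions of every MCMC sample are evaluated on the pool, B-QBC is the per-point variance of those predictions; the matrix layout below is an assumption for illustration.

```python
# Sketch of B-QBC (9). `mean_matrix[j, i]` is assumed to hold mu_{theta_j}(x_i) for
# MCMC sample j and pool point i.
import numpy as np

def bqbc_query(pool, mean_matrix):
    disagreement = np.var(mean_matrix, axis=0)   # E_theta[(mu_theta(x) - mean(x))^2] per pool point
    return pool[int(np.argmax(disagreement))]
```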
Query by Mixture of Gaussian Processes (QB-MGP) Bayesian Query-by-Committee (B-QBC) seeks the optimal mode but does not take the predictive performance of the model into account. Since the predictive performance, and thereby the predictive uncertainty, is also important, we extend B-QBC to consider the predictive entropy as well. We denote the new acquisition function Query by Mixture of Gaussian Processes (QB-MGP) because of its relation to Gaussian Mixture Models (GMMs). Using the MCMC samples, each prediction of the FBGP can be seen as an MGP, yielding the predictive posterior given as in equation (4): (1/M) Σ_{j=1}^{M} p(y* | y, θ_j). This hierarchical predictive posterior is a mixture of M Gaussians with mean μ_GMM and variance σ²_GMM defined as (cf. A.4):

μ_GMM(x) = (1/M) Σ_{j=1}^{M} μ_{θ_j}(x)   (10)

σ²_GMM(x) = (1/M) Σ_{j=1}^{M} σ²_{θ_j}(x) + (1/M) Σ_{j=1}^{M} (μ_{θ_j}(x) − μ_GMM(x))²   (11)
Finding the data point that maximizes the variance of the Mixture of Gaussian Processes (MGP) is now equivalent to simultaneously considering the B-ALM and B-QBC, i.e., the sum of the two:
QB-MGP = E_{p(θ|D)}[σ²_θ(x)] + E_{p(θ|D)}[(μ_θ(x) − μ_GMM(x))²]   (12)

Instead of using bagging, we construct the multiple GPs by using the MCMC samples of the hyperparameters’ joint posterior, and then obtain a natural weighting of the GPs: the MGP will consist of more GPs with hyperparameters close to the modes than hyperparameters far away. To the best of our knowledge, we are the first to use MGP in this manner for active learning.
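In practice, QB-MGP can be computed from the same per-sample predictive means and variances as B-ALM and B-QBC; the sketch below assumes these are stored as matrices over the MCMC samples and the pool.

```python
# Sketch of QB-MGP (12) = B-ALM + B-QBC, i.e. the mixture variance (11).
# `mean_matrix[j, i]` and `var_matrix[j, i]` are assumed to hold mu_{theta_j}(x_i)
# and sigma^2_{theta_j}(x_i) for MCMC sample j and pool point i.
import numpy as np

def qbmgp_query(pool, mean_matrix, var_matrix):
    b_alm = np.mean(var_matrix, axis=0)   # E_theta[sigma^2_theta(x)]
    b_qbc = np.var(mean_matrix, axis=0)   # E_theta[(mu_theta(x) - mu_GMM(x))^2]
    return pool[int(np.argmax(b_alm + b_qbc))]
```

Since all three Bayesian criteria reuse the same per-sample moments, the acquisition step adds little overhead on top of the MCMC fit.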
5 Experiments
In this section, we benchmark the performance of the two proposed acquisition functions against the standard entropy-based acquisition functions, i.e., ALM, B-ALM, and BALD, on various classic simulators used in recent literature on GPs and active learning. They are all listed in Table 1, and those with fewer than three inputs are shown in Figure 3; all of them can be found at https://www.sfu.ca/~ssurjano/ [Surjanovic and Bingham, 2022]. The multimodal posteriors of the FBGPs fitted to the simulators can be found in Appendix A.5.
Experimental settings In the experiments, we use a zero-mean GP with an ARD RBF kernel. In each iteration of the active learning loop, the inputs are rescaled to the unit cube [0, 1]d, and the outputs are standardized to have zero mean and unit variance. Following Lalchand and Rasmussen [2020], we give all the hyperparameters relatively uninformative N (0, 3) priors in log space. The initial data sets consist of three data points chosen by maximin Latin Hypercube Sampling [Pedregosa et al., 2011], and in each iteration, one data point is queried. The unlabeled pool U consists of the
input space discretized into 100 equidistant points along each dimension. If U contains more than 10,000 data points, we randomly sample a subset of 10,000 data points in each iteration and use that as the new pool. The inference in FBGP is carried out using NUTS [Hoffman and Gelman, 2014] in Pyro [Bingham et al., 2019] with five chains and 500 samples per chain, including a warm-up period of 200 samples. The remaining 1500 samples are all used for the acquisition functions. For all the predictions, we use the best mode of the hyperparameters’ posterior, since the mean is of limited value when the posterior is multimodal. The best mode is computed using a kernel density estimation with a Gaussian kernel [Pedregosa et al., 2011]. The models are implemented in GPyTorch [Gardner et al., 2018]. All experiments are repeated ten times with different initial data sets. With seven simulators and five acquisition functions, this gives 350 active learning runs, each with a running time of approximately one hour, using five CPU cores on a Threadripper 3960X. The code for reproducing the experiments is available on GitHub at https://github.com/coriis/active-learning-fbgp.
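For orientation, the following is a hedged sketch of how such an FBGP fit with NUTS could look in Pyro under the stated priors; it is our own minimal model with illustrative names and a single chain, not the authors' released code.

```python
# Hedged sketch of FBGP inference with NUTS in Pyro (our own minimal model, not the
# authors' code): N(0, 3) priors in log space on the ARD length scales and the noise term.
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def fbgp_model(X, y):
    d = X.shape[1]
    log_ell = pyro.sample("log_ell", dist.Normal(torch.zeros(d), 3.0).to_event(1))
    log_noise = pyro.sample("log_noise", dist.Normal(torch.tensor(0.0), torch.tensor(3.0)))
    ell, noise_var = torch.exp(log_ell), torch.exp(log_noise)
    sq = torch.cdist(X / ell, X / ell) ** 2                        # ARD squared distances
    K = torch.exp(-0.5 * sq) + (noise_var + 1e-6) * torch.eye(len(X))
    pyro.sample("obs", dist.MultivariateNormal(torch.zeros(len(X)), covariance_matrix=K), obs=y)

X, y = torch.rand(10, 2), torch.rand(10)                           # toy data for illustration
mcmc = MCMC(NUTS(fbgp_model), num_samples=300, warmup_steps=200)   # the paper runs five chains
mcmc.run(X, y)
posterior_draws = mcmc.get_samples()                               # dict with "log_ell", "log_noise"
```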
Evaluation It is common to evaluate the performance of active learning by visually inspecting the learning or loss curves, or by measuring the performance at a specific iteration [Gramacy, 2020, Settles, 2009]. However, both procedures are inadequate for quantifying how much better one acquisition function is than another when we are interested in the overall performance rather than the performance at a specific iteration. We are unaware of any metric for quantifying the overall performance within the regression setting that is comparable across different data sets. Within the classification setting, Yang and Loog [2018] propose to use the area under the learning curve (AUC) based on accuracy. Since the accuracy is bounded between 0 and 1, they can compare this measure across different data sets. Likewise, O’Neill et al. [2017] compute the AUC using the root mean square error (RMSE) as the performance metric. However, since the magnitude of the RMSE is data-specific, they cannot compare the performance across data sets. To resolve this problem, we suggest using the relative decrease in AUC.
We compare the relative decrease with respect to the baseline acquisition function ALM since the latter is widely used and regarded as the standard within active learning with GPs. For a metric that is lower bounded by zero, such as RMSE, the AUC will give the overall error of the acquisition function. We can directly calculate the relative decrease in error from the AUCs of the active learning acquisition functions and the baseline. If the metric has no lower bound, such as the negative log marginal likelihood (NLML), the interpretation is less intuitive. Therefore, we make a lower bound for the metric using the lowest NLML obtained across all the acquisition functions, such that the relative decrease in the AUC then can be interpreted in the same way as for the RMSE. We compare all 10 runs of each acquisition function with the 10 runs for the baseline to get a precise estimate of both the mean and standard deviation of the relative decreases. Since the relative decrease is a ratio, we compute the unbiased estimates using the formulas from Van Kempen and Van Vliet [2000]. See Appendix A.6 for the formulas and the pseudo-code.
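A minimal sketch of this evaluation, under our own simplifications (a plain Riemann-sum AUC and the naive ratio rather than the unbiased estimators of Van Kempen and Van Vliet), is:

```python
# Sketch of the relative decrease in AUC with respect to the ALM baseline.
# `curve` holds the per-iteration metric (RMSE, or NLML shifted by the lowest value
# observed across acquisition functions so that it is lower bounded by zero).
import numpy as np

def auc(curve):
    return float(np.sum(curve))           # Riemann-sum area under the learning curve

def relative_decrease(curve_acq, curve_baseline):
    a, b = auc(curve_acq), auc(curve_baseline)
    return (b - a) / b                     # > 0 means the acquisition improves on ALM
```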
5.1 Experiments with 6 simulators
We benchmark the acquisition functions on the six classic simulators. We divide the simulators into subgroups and describe how the acquisition functions behave under different output complexities and with multiple inputs, effectively encompassing three distinct modeling scenarios. The following paragraphs describe the active learning curves in Figure 4.
Noise or signal The simulator Gramacy1d has previously been used to study the effect of the noise-term in GPs [Gramacy and Lee, 2012]. The simulator has a periodic signal that is hard to reveal if the data points are not queried cleverly. Both B-QBC and QB-MGP reach convergence simultaneously, while the other acquisition functions struggle to distinguish the noise from the signal.
Linear and non-linear output regions The two simulators Higdon and Gramacy2d have been used to illustrate cases where GPs struggle to model the data because the output signal has both linear and non-linear regions [Gramacy and Lee, 2009]. Our experiments on these simulators show the performance of the acquisition functions when the GP is a non-optimal choice of model. Querying data points in the linear and non-linear regions will yield a GP with a longer and a shorter length scale, respectively. For Higdon, both B-QBC and QB-MGP balance the sampling since the corresponding NLML and RMSE are low; likewise, for Gramacy2d, QB-MGP achieves both the
lowest NLML and RMSE. Overall, these results show that when the GP is inadequate to model the data, both B-QBC and QB-MGP perform better than the other acquisition functions.
Multiple inputs To evaluate the performance in higher dimensions, we consider the smooth 2d Branin simulator, the strongly non-linear 3d Ishigami simulator, and the 6d Hartmann simulator with six local minima. BALD underperforms on Branin, but the other acquisition functions have similar performance, with B-QBC having an overall better NLML. For Ishigami, BALD, B-QBC, and QB-MGP reach the best NLML, but the earlier iterations show that BALD is slightly better than the other two acquisition functions. For Hartmann, B-QBC and QB-MGP are the most stable and the best in terms of the NLML, and the latter also achieves the lowest RMSE.
5.2 The Overall Performance
From the visual inspection of Figure 4, it is hard to measure how good each of the acquisition functions is. In Table 2, we quantify the performance using the method described earlier, based on the relative decrease in the area under the curve (AUC).
First of all, it is clear that no single acquisition function performs best for all the simulators. B-QBC achieves the largest decrease in AUC for either NLML or RMSE in five cases, and QB-MGP is the next best, having the largest decrease four times. This is reflected in the overall performance, where B-QBC and QB-MGP are the best-performing acquisition functions in terms of the marginal likelihood and the root mean square error, respectively. In contrast, BALD is the worst-performing acquisition function overall, not even better than the baseline, which suggests that this acquisition function might not be suited for Gaussian Processes in the regression setting. Given the consistently good performance of B-QBC and QB-MGP (B-QBC is worse than the baseline only twice), we say that they are robust to different complexities in the simulators’ outputs.
5.3 Limitations
Gaussian Processes are known to be computationally expensive [Williams and Rasmussen, 2006]. The computational cost scales cubically with the number of data points in the data set, i.e., O(n³), and GPs are thus only suited for small data sets. A fully Bayesian GP is even more computationally expensive because of the MCMC sampling. There exist methods to circumvent this, e.g., variational inference (VI), but often at the cost of a poorer approximation of the joint posterior of the hyperparameters [Lalchand and Rasmussen, 2020]. For future work, we will investigate whether the posterior can be approximated by VI using several initializations, or whether it is possible to achieve similar performance by alternating between a fully Bayesian GP and a regular GP.
This paper is based on empirical results that depend on specific simulators. The simulators are diverse and distinct classic homoscedastic simulators and are representative of engineering problems occurring in the real world [Gramacy, 2020, Sauer et al., 2022, Cole et al., 2022]. However, an interesting case that is less often investigated in the literature on active learning with simulators is a simulator with heteroscedastic noise.
A common heteroscedastic case study is the motorcycle data set [Silverman, 1985, Gramacy and Lee, 2008, Gramacy, 2020]. We create a corresponding simulator by fitting a variational GP [Hensman et al., 2015] to the motorcycle accident data. For reproducibility, the mean and standard deviation of the simulator are given in Appendix A.7. The experiments on the Motorcycle simulator explore how the active learning acquisition functions perform when the simulator has heteroscedastic noise but we model it with a homoscedastic GP. The simulator and the results are shown in Figure 5. Most conspicuous are the poor performance of B-QBC and the good performance of QB-MGP. In Appendix A.8, we show that B-QBC is misled by the heteroscedastic noise and focuses too much on the disagreement in the middle, and that the B-ALM component of QB-MGP acts as a diversity measure that encourages more exploration since it aims at reducing the overall predictive uncertainty.
Another aspect of the work in this paper is the use of domain and expert knowledge. Incorporating domain and expert guidance regarding the simulator under study can be a decisive factor in a successful active learning strategy. However, in many practical situations, such a priori domain expertise may not be readily accessible or even translatable into the functional structure of the model as useful modeling information. On these occasions, generic tools that are robust enough to handle a plethora of diverse simulation output behaviors are advisable. If such information about the functional complexity were available, e.g., knowing that the signal is periodic, this study does not show which of the acquisition functions is best. In future work, it would be interesting to see whether B-QBC and QB-MGP would perform equally well in that setting.
6 Conclusion
In this paper, we propose two active learning acquisition functions: Bayesian Query-by-Committee (B-QBC) and Query by a Mixture of Gaussian Processes (QB-MGP), both of which are suited for fully Bayesian GPs. They are designed to explicitly handle the well-known bias-variance trade-off by optimizing the GP’s two hyperparameters, the length scale and the noise term. We empirically show that they query new data points more efficiently than previously used acquisition functions. Across six classic simulators, which cover different complexities and numbers of inputs, we show that B-QBC and QB-MGP are the two functions that achieve the best marginal likelihood and root mean square error, respectively, with the fewest iterations. On average, across the simulators, B-QBC reduced the negative marginal log-likelihood by 41%, and QB-MGP decreased the root mean square error by 12%, compared to the baseline. To this end, we believe that the proposed acquisition functions are robust enough to handle a variety of diverse simulation output behaviors, while being entirely independent of any prior understanding of the underlying output distributions of the simulator.
Acknowledgments and Disclosure of Funding
This work was supported by NOSTROMO, framed in the scope of the SESAR 2020 Exploratory Research topic SESAR-ER4-26-2019, funded by SESAR Joint Undertaking through the European Union’s Horizon 2020 research and innovation programme under grant agreement No 892517. | 1. What are the strengths and weaknesses of the paper regarding its approach to active learning using Gaussian Processes?
2. Can you elaborate on the advantages of the proposed active learning schemes in a small data setting?
3. How do the computational costs of the proposed methods compare to other schemes?
4. Can you clarify whether and how the proposed schemes can be generalized beyond the tasks/settings considered in the current study?
5. How do the proposed schemes compare to other active learning schemes that focus on predictive performance?
6. What are the potential advantages of the proposed schemes compared to other recent developments in Bayesian active learning? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper focuses on active learning using Gaussian Processes, where the authors take a fully Bayesian approach to optimize the bias-variance trade-off, an important problem in various learning problems. Especially, this work focuses on a small-data setting, where such bias-variance trade-off can significantly affect the learning outcomes. The authors propose two acquisition functions based on FBGP for regression tasks, showing the potential advantages of the proposed schemes.
Strengths And Weaknesses
Taking a Bayesian approach to automatically balance the bias-variance trade-off and improve the robustness of the learning outcomes and the predictive performance of the learned models in a small-data setting is expected to be beneficial, and this work demonstrates the potential advantages of using FBGP for active learning in regression tasks.
Table 2 and Figure 4 show that the two proposed acquisition functions - B-QBC and QB-MGP - can lead to better regression results for a number of test cases. However, these results also show that the proposed schemes do not consistently outperform the alternatives, nor do they significantly improve the learning outcomes when they do outperform them. In particular, while the comparison based on AUC (in Table 2) makes the performance improvement attained by the proposed schemes relatively prominent in some cases, the actual improvement shown in the curves (Figure 4) does not appear to be very significant. This is especially so when the data size is relatively small, even though that is the setting that motivates the current work.
The computational burden of taking a fully Bayesian approach is not discussed in detail, and the authors simply mention that they assume that the computational burden of querying new data points would far exceed that of the Bayesian inference. While this may be the case in some applications, it would nevertheless be important and informative to compare the computational cost of different active learning schemes, which is absent in the current study.
The authors do not discuss the general applicability of the proposed scheme beyond the regression tasks considered in the current work. For example, BALD - shown to outperform the proposed scheme in a number of cases - is widely used for active learning for classification, and it would be meaningful to present how the proposed schemes would be applied to classification tasks and how their performance would compare to BALD and other alternatives.
While the authors proposed QB-MGP to consider the "predictive performance" of the learned model, when the prediction performance is of main interest, it would be more sensible to compare the performance with other active learning schemes that focus on this aspect. For example, ELR (expected loss reduction) strategies are widely used to acquire new data points that are expected to optimally reduce the error rather than reducing the uncertainty of the model parameters.
Recently, a number of Bayesian active learning schemes have been proposed based on the ELR strategy, whose performance has been shown to outperform BALD and various other methods with theoretical convergence guarantees. Some recent examples include:
(1) Tan et al., Diversity Enhanced Active Learning with Strictly Proper Scoring Rules, NeurIPS 2021. (2) Zhao et al., Efficient Active Learning for Gaussian Process Classification by Error Reduction, NeurIPS 2021.
Considering that the aforementioned Bayesian active learning methods have been shown to consistently outperform BALD and other existing methods, with the added benefit of a convergence guarantee to the optimal model, the potential benefits of the proposed schemes remain somewhat unclear. Further comparison and elaboration would be necessary to clarify the most significant advantages of the proposed methods in comparison with the latest relevant developments in Bayesian active learning.
Questions
Please elaborate on the major advantages of the proposed active learning schemes in a small-data setting, especially, based on the results shown in Figure 4, since it is unclear based on the current evaluation results.
Please compare the computational cost of the proposed methods, in comparison with other schemes considered in this study.
Please clarify whether and how the proposed schemes can be generalized beyond the tasks/settings considered in the current study.
How would the proposed schemes (especially, QB-MGP) compare to other active learning schemes (such as ELR) that focus on predictive performance?
Please clarify what would be the main potential advantages of the proposed schemes compared to other recent developments in Bayesian active learning.
Limitations
The authors note that they "create a generic active learning acquisition function" and therefore "there is no direct negative societal impacts." |
NIPS | Title
Bayesian Active Learning with Fully Bayesian Gaussian Processes
Abstract
The bias-variance trade-off is a well-known problem in machine learning that only gets more pronounced the less available data there is. In active learning, where labeled data is scarce or difficult to obtain, neglecting this trade-off can cause inefficient and non-optimal querying, leading to unnecessary data labeling. In this paper, we focus on active learning with Gaussian Processes (GPs). For the GP, the bias-variance trade-off is made by optimization of the two hyperparameters: the length scale and noise-term. Considering that the optimal mode of the joint posterior of the hyperparameters is equivalent to the optimal bias-variance trade-off, we approximate this joint posterior and utilize it to design two new acquisition functions. The first is a Bayesian variant of Query-by-Committee (B-QBC), and the second is an extension that explicitly minimizes the predictive variance through a Query by Mixture of Gaussian Processes (QB-MGP) formulation. Across six simulators, we empirically show that B-QBC, on average, achieves the best marginal likelihood, whereas QB-MGP achieves the best predictive performance. We show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling.
1 Introduction
Gaussian Processes (GPs) are well-known for their ability to deal with small to medium-size data sets by balancing model complexity and regularization [Williams and Rasmussen, 2006]. Together with their inherent ability to model uncertainties, this has made GPs the go-to models to use for Bayesian optimization and metamodeling [Snoek et al., 2012, Gramacy, 2020]. For both cases, the data is often scarce, making the modeling task a balance between complexity and regularization, i.e., preventing severe overfitting while maintaining the ability to fit nonlinear functions. Likewise, the ability to quantify the uncertainty often guides the acquisition functions of Bayesian optimization and active learning schemes, which are inevitably required to build metamodels efficiently.
On the other hand, it is not flawless to use GPs in Bayesian optimization and active learning. In both cases, the same GP is used in an iterative process firstly to predict the mean and variance of a new data point and then secondly to use those estimates to guide the data acquisition. Since this is an iterative process, poor predictions will lead to poor data acquisition, and vice versa. The problem is less pronounced for larger data sets as predictions become increasingly accurate as more data is available. However, in Bayesian optimization and active learning, where the data sets tend to be relatively small, wrong predictions can result in misguidance, thus hindering performance and efficiency. In this paper, we mitigate this problem by applying a fully Bayesian approach to the GPs and formulating two new acquisition functions for active learning. Where a single GP trained with a maximum likelihood estimate only represents one model hypothesis, a Fully Bayesian Gaussian Process (FBGP) represents multiple model hypotheses at once. We will utilize this extra information
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
to create an acquisition function that simultaneously seeks the best model hypothesis and minimize the prediction error of the GP.
The hyperparameters of a GP are typically fitted through evaluation of the marginal likelihood, which automatically incorporates a trade-off between complexity and regularization [Williams and Rasmussen, 2006], also known as the bias-variance trade-off [Bishop, 2006]. However, when the data is scarce, it is more challenging to choose the appropriate trade-off, and different configurations of the hyperparameters of the GP can give rise to distinct fits. We highlight this issue in Figure 1, where two seemingly reasonable fits describe the data very distinctly, which would result in different acquisitions of new data.
The bias-variance trade-off for Gaussian Processes For a GP with a common stationary covariance function with a length scale and a noise-term, e.g., the radial-basis function or the Matérn class of functions with a Gaussian likelihood, the challenge of choosing the best trade-off between bias-variance trade-off, can be formulated using the hyperparameters of the GP. The two central hyperparameters are the length scale ` and the variance of the noise 2" . The length scale describes how much the underlying model f fluctuates, i.e., if the length scale is short or long, the model varies quickly or slowly, respectively. In terms of tuning the hyperparameters, often a short length scale goes well with a small noise-term since the greater flexibility of a short length scale means that the noise level can be reduced. This results in a flexible model with high variance and low bias. Conversely, a long length scale tends to increase the noise level, resulting in a rigid model with low variance and high bias [Bishop, 2006]. In the extremes, the former corresponds to a white-noise process model, and the latter corresponds to a constant with white noise [Williams and Rasmussen, 2006]. Thus, there is a direct relation between the bias-variance trade-off and the values of the two hyperparameters.
How to choose the best bias-variance trade-off? Finding a good bias-variance trade-off is a non-trivial problem. In the case of small to medium-sized data sets, the joint posterior distribution of the two hyperparameters is likely to be characterized by two modes, as illustrated in Figure 2 for the data in Figure 1. The two modes illustrate two different bias-variance trade-offs, which both describe the data well. However, depending on the choice of trade-off, the data acquisition can be very distinct, and thus a wrong choice of mode will imply non-optimal guidance from the acquisition function.
Though the multimodal posterior has been studied for GPs before [Yao et al., 2022], the literature often searches for the single best mode with clever initializations of the hyperparameters [Williams and Rasmussen, 1995] or by favoring small ` and 2" by either always initializing hyperparameters in the low noise regime or by applying strong priors [Gramacy, 2020]. However, none of these approaches directly address the core problem: which mode to choose? When working with Bayesian optimization and active learning, this should ideally be answered with prior information about the problem, although typically that is not available, making these approaches less practical [Antunes et al., 2018, Gramacy, 2020, Riis et al., 2021].
Our contribution We follow a general approach to the problem and assume no prior knowledge about the functional form of the data, kernel, nor hyperparameters. If it is known that the data has a periodic trend or high noise, it is advantageous to incorporate that into the kernel and the hyperparameters by using a periodic kernel and a prior on the hyperparameters that favor high noise, respectively. However, with no prior knowledge, we tend to use the general-purpose Radial-
Basis function (RBF) kernel with non-informative priors on the hyperparameters. Further, the fitted hyperparameters are often given by point estimates (e.g. fitted with maximum likelihood estimation or maximum a posteriori), but we consider multiple model hypotheses by replacing the fitting procedure of the marginal likelihood with Markov Chain Monte Carlo (MCMC) sampling to get the joint posterior of the hyperparameters. The result is that we have multiple models (same kernel, but different hyperparameters), which represent different model hypotheses. We utilize the extra information from the hyperparameters’ joint posterior to handle the bias-variance trade-off by incorporating the extra information into two new acquisition functions.
Our main contribution is the proposal of two new acquisition functions for active learning that utilize the extra information from the hyperparamters’ posterior estimated by MCMC to seek the most reasonable mode alongside minimizing the predictive variance. Through empirical results, we show that the two acquisition functions are more accurate and robust than other common functions across multiple benchmark simulators used in the literature.
2 Related work
In this section, we review related work to the proposed acquisition functions. We cover active learning schemes for regression tasks, including Query-by-Committee and GP as Gaussian Mixture Models.
Active Learning The main idea of active learning is to actively choose a new data point to label and add to the current training data set, to iteratively improve the performance of the predictive model [Settles, 2009]. In the context of metamodeling or surrogate modeling of simulators, new data is often added sequentially, i.e., one data point at a time [Gramacy, 2020], but in other applications, it can be beneficial to query batches of data instead [Kirsch et al., 2019].
The acquisition functions can be divided into model-based and model-free functions, where the former utilize information from the model and is often based on uncertainty measures (recently also function values and gradients [Fernandez et al., 2020, Svendsen et al., 2020]), whereas the latter do not use information from the model and is typically based on distance metrics in the input space [O’Neill et al., 2017]. Both types of functions seek to minimize the expected predictive loss of the model. Another distinction between the acquisition functions is decision-based and information theory-based [Houlsby et al., 2011]. Decision-based functions seek to minimize the expected predictive loss in the hope of maximizing the performance on the test set. Information-theoretic-based functions instead try to reduce the number of possible models, e.g., through the KL-divergence or Shannon entropy.
It is not straightforward to use information-theoretic acquisition functions. However, if one has access to the posterior of the parameters, Houlsby et al. [2011] have derived the algorithm Bayesian Active Learning by Disagreement (BALD), which can be applied in general. Generally, BALD seeks the data point that maximizes the decrease in the expected posterior entropy of the parameters.
Query-by-Committee The Query-by-Committee (QBC) is a specific acquisition function that was originally proposed for classification tasks [Seung et al., 1992]. It aims to maximize the disagreement among the committee to get the highest information gain and minimize the version space, which is the set of model hypotheses aligned with the training data. The construction of the committee is the core component of QBC since it is the committee’s ability to accurately and diversely represent the version space that gives rise to informative disagreement criteria [Settles, 2009].
Query-by-Committee can also be applied for regression problems. Krogh and Vedelsby [1995] construct the members of the committee by random initializations of the weights in the neural networks. RayChaudhuri and Hamey [1995] apply bagging and train the members on different subsets of the data set. In general, QBC constructed by bagging has been used as a benchmark with mixed results [Cai et al., 2013, Wu, 2018, Wu et al., 2019]. Burbidge et al. [2007] show that the less noise there is in the output, the better QBC is compared to random querying. They also highlight the fact that with a misspecified model, QBC might perform worse than random querying. None of these approaches explore the usage of MCMC samples of the posterior to construct a committee.
Gaussian Process as a Gaussian Mixture Model Mixture models have recently been applied in active learning for classification tasks. Iswanto [2021] proposes to use Gaussian Mixture Models (GMMs) with active learning, where he designs a specific acquisition function that queries the data point that maximizes the expected likelihood of the model. Zhao et al. [2020] use a mixture of GPs in active learning, where each component is fitted to a subset of the training set. The combination of
GMMs and GPs have previously been explored for static data sets. Chen and Ren [2009] investigate regression tasks and apply bagging, where they repeatedly randomly sample data points from the training set to construct new subsets to get GPs fitted to different data.
3 Gaussian Processes
The Gaussian Processes (GPs) are the central models in this work. In this section, we give a brief overview of GPs before covering the Fully Bayesian GPs. For a thorough description of GPs, we refer to Williams and Rasmussen [2006].
A Gaussian Process (GP) is a stochastic function fully defined by a mean function m(·) and a covariance function (often called a kernel) k(·, ·). Given the data D = (X,y) = {xi, yi}Ni=1, where yi is the corrupted observations of some latent function values f with Gaussian noise ", i.e., yi = fi + "i, "i 2 N (0, 2"), a GP is typically denoted as GP(mf (x), kf (x,x0)). It is common practice to set the mean function equal to the zero-value vector and thus, the GP is fully determined by the kernel kf (x,x0). For short, we will denote the kernel K✓, which explicitly states that the kernel is parameterized with some hyperparameters ✓. The generative model of the GP can be found in Appendix A.1. Given the hyperparameters ✓, the predictive posterior for unknown test inputs X? is given by p(f?|✓,y, X,X?) = N (µ?,⌃?) with
µ? = K?✓ K✓ + 2 "I 1 y and ⌃? = K??✓ K?✓ K✓ + 2 "I 1 K?>✓ (1)
where K??✓ denotes the covariance matrix between the test inputs, and K ? ✓ denotes the covariance matrix between the test inputs and training inputs.
We use the canonical kernel automatic relevance determination (ARD) Radial-basis function (RBF) given by k (x,x0) = exp ||x x0||2/2`2 where ` is a vector of length scales `1, ..., `d, one for each input dimension. Often the kernel is scaled by an output variance but here we fix it to one and solely focus on the two other hyperparameters: length scale and noise-term. The noise-term 2" is integrated into the kernel with an indicator variable by adding the term 2"I{x=x0} to the current kernel [Williams and Rasmussen, 2006, Bishop, 2006].
Fully Bayesian Gaussian Processes (FBGP) An FBGP extends a GP by putting a prior over the hyperparameters p(✓) and approximating their full posteriors. The joint posterior is then given by
p(f ,✓|y, X) / p(y|f)p(f |✓, X)p(✓) (2)
and the predictive posterior for the test inputs X? is
p(y?|y) = ZZ p (y?|f?,✓) p(f?|✓,y)p(✓|y)df?d✓ (3)
where the conditioning on X and X? have been omitted for brevity. The inner integral reduces to the predictive posterior given by a normal GP, whereas the outer integral remains intractable and is approximated with MCMC inference with M samples as
p (y?|y) = Z
p (y?|y,✓) p(✓|y)d✓ ⇡ 1 M
MX
j=1
p (y?|y,✓j) , ✓j ⇠ p(✓|y) (4)
Adapting the hyperparameters of an FBGP is computationally expensive compared to the approach with GPs and maximum likelihood estimation. However, in Bayesian optimization and active learning, the computational burden for querying a new data point will often be of magnitudes higher. For example for simulators, the computational cost of querying a new data point is, in general, expensive and can take minutes and hours [Gorissen et al., 2009, Riis et al., 2021, Chabanet et al., 2021].
4 Active Learning
In this section, we lay out the most common acquisition functions and then propose first a Bayesian variant of Query-by-Committee and second an extension motivated by Gaussian Mixture Models, which seek to minimize both the predictive variance and the number of model hypotheses.
Many active learning acquisition functions are based on the model’s uncertainty and entropy and can thus be denoted as Bayesian active learning acquisition functions [Settles, 2009, Gramacy, 2020]. The most common acquisition function is based on the predictive entropy and denoted Active Learning MacKay (ALM) [MacKay, 1992]. All the following objective functions query a new data point by maximizing the argument x. In the following, we write a new test point x? as x for brevity. All the acquisition functions choose a data point x among the possible data points in the unlabeled pool U .
Entropy (ALM) For a Gaussian distribution, the Shannon entropy H[·] is proportional to the predictive variance 2(x) (derived in A.2), so ALM is given as
ALM = H[y|x,D] / 2(x) (5)
Intuitively, ALM queries the data point, where the uncertainty of the prediction is the highest.
If we have access to the posterior of the model’s hyperparameters, we can utilize acquisition functions with an extra Bayesian level. The posterior of the hyperparameters of an FBGP can be estimated with MCMC such that GPs with different kernel parameters can be drawn, e.g., the length scale and the noise-term `, 2" ⇠ p(✓|D). The following four acquisition functions all utilize this information and are approximated using the samples from the MCMC, cf. (4).
Entropy (B-ALM) With the information from the posterior p(✓|D), the criteria for the extra Bayesian variant of ALM (B-ALM) is then given as
B-ALM = H Z p(y|x,✓)p(✓|D)d✓ = H ⇥ Ep(✓|D)[p(y|x,✓)] ⇤ / Ep(✓|D)[ 2✓(x)|✓] (6)
Bayesian Active learning by Disagreement (BALD) Another common objective in Bayesian active learning, is to maximize the expected decrease in posterior entropy [Guestrin et al., 2005, Houlsby et al., 2012]. Houlsby et al. [2011] rewrite the objective from computing entropies in the parameter space to the output space by observing that it is equivalent to maximizing the conditional mutual information between the model’s parameters ✓̂ and output I[✓̂, y|x,D]. The acquisition function is denoted Bayesian Active Learning by Disagreement (BALD) and the criteria is given by:
I[✓̂, y|x,D] = H[y|x,D] Ep(✓̂|D)[H[y|x, ✓̂]] (7)
BALD was originally derived for non-parametric discriminative models but has recently been extended to batches and deep learning with great success [Kirsch et al., 2019]. In the context of non-parametric models, such as GPs, the models parameters now correspond to the latent function f . For a regular GP, BALD is equivalent to ALM (cf. A.3), although that is not the case for an FBGP. If we let the hyperparameters be the main parameters of interest and set f to be a nuisance parameter, BALD can be written as (cf. A.3.1):
BALD = H ⇥ Ep(✓|D)[y|x,D,✓] ⇤ Ep(✓|D) [H[y|x,✓]] (8)
Bayesian Query-by-Committee (B-QBC) Motivated by finding the optimal bias-variance tradeoff, we propose a Bayesian version of the Query-by-Committee, using the MCMC samples of the hyperparameters’ joint posterior. We have previously argued that the optimal bias-variance trade-off is equivalent to the optimal mode of the multimodal posterior of the hyperparameters, which is exactly what we utilize here. We use the joint posterior of the hyperparameters obtained through MCMC to draw multiple models and then query a new data point where the mean predictions µ✓(x) of these models disagree the most, i.e. querying the data point that maximizes the variance of µ✓(x). Each mean predictor µ✓(·) drawn from the posterior is equivalent to a single model, and thus this criteria can be seen as a Bayesian variant of a Query-by-Committee, and thus denoted as Bayesian Query-by-Committee (B-QBC). Given that µ(x) is the average mean function, B-QBC is given as
B-QBC = Vp(✓|D)[µ✓(x)|✓] = Ep(✓|D)[(µ✓(x) µ(x))2|✓] (9)
Since the models are drawn from the hyperparameters’ posterior, the collection of models is dominated by models near the posterior modes. High variance in µ✓(x), thus corresponds to high disagreement between modes. Querying the data point that maximizes this disagreement, gives information about which mode is most likely to be the optimal one, and thus this can be seen as a mode-seeking Bayesian Query-by-Committee. To the best of our knowledge, we are the first to propose QBC based on model hypotheses drawn from the hyperparameters’ joint posterior.
Query by Mixture of Gaussian Processes (QB-MGP) Bayesian Query-by-Committee (B-QBC) seek the optimal mode, but does not take the predictive performance of the model into account. Since the predictive performance, and thereby the predictive uncertainty, is also important, we extend B-QBC to consider the predictive entropy as well. We denote the new acquisition function Query by Mixture of Gaussian Processes (QB-MGP), because of its relation to Gaussian Mixture Models (GMMs). Using the MCMC samples, each prediction of the FBGP can be seen as an MGP, yielding the predictive posterior given as in equation (4): 1M PM j=1 p (y
?|y,✓j). This hierarchical predictive posterior is a mixture of M Gaussians with mean µGMM and variance 2GMM defined as (cf. A.4):
µGMM (x) = 1
M
MX
j=1
µ✓j (x) (10)
2GMM (x) = 1
M
MX
j=1
2✓j (x) + 1
M
MX
j=1
(µ✓j (x) µGMM (x))2 (11)
Finding the data point that maximizes the variance of the Mixture of Gaussian Processes (MGP) is now equivalent to simultaneously considering the B-ALM and B-QBC, i.e., the sum of the two:
QB-MGP = Ep(✓|D)[ 2✓(x)|✓] + Ep(✓|D)[(µ✓(x) µGMM (x))2|✓] (12) Instead of using bagging, we construct the multiple GPs by using the MCMC samples of the hyperparameters’ joint posterior, and then obtain a natural weighting of the GPs: the MGP will consist of more GPs with hyperparameters close to the modes than hyperparameters far away. To the best of our knowledge, we are the first to use MGP in this manner for active learning.
5 Experiments
In this section, we benchmark the performance of the two proposed acquisition functions against the standard acquisition functions based on the entropy, i.e., ALM, B-ALM, and BALD, on various classic simulators used in recent literature on GPs and active learning. They are all listed in Table 1, and those with less than three inputs are shown in Figure 3.1 The multimodal posteriors of the FBGPs fitted to the simulators can be found in appendix A.5.
Experimental settings In the experiments, we use a zero-mean GP with an ARD RBF kernel. In each iteration of the active learning loop, the inputs are rescaled to the unit cube [0, 1]d, and the outputs are standardized to have zero mean and unit variance. Following Lalchand and Rasmussen [2020], we give all the hyperparameters relatively uninformative N (0, 3) priors in log space. The initial data sets consist of three data points chosen by maximin Latin Hypercube Sampling [Pedregosa et al., 2011], and in each iteration, one data point is queried. The unlabeled pool U consists of the
1All of them can be found at https://www.sfu.ca/~ssurjano/ [Surjanovic and Bingham, 2022].
input space discretized into 100 equidistant points along each dimension. If U contains more than 10, 000 data points, we randomly sample a subset of 10, 000 data points in each iteration and use that as the new pool. The inference in FBGP is carried out using NUTS [Hoffman and Gelman, 2014] in Pyro [Bingham et al., 2019] with five chains and 500 samples, including a warm-up period with 200 samples. The remaining 1500 samples are all used for the acquisition functions. For all the predictions, we use the best mode of the hyperparameters’ posterior, since the mean is of limited value when the posterior is multimodal. The best mode is computed by using a kernel density estimation with a Gaussian kernel [Pedregosa et al., 2011]. The models are implemented in GPyTorch [Gardner et al., 2018]. All experiments are repeated ten times with different initial data sets. With seven simulators and five acquisition functions, this gives 350 active learning runs, each with a running time on approximately one hour, using five CPU cores on a Threadripper 3960X. The code for reproducing the experiments is available on GitHub.2
Evaluation It is common to evaluate the performance of active learning by visually inspecting the learning or loss curves, or by measuring the performance after a specific iteration [Gramacy, 2020, Settles, 2009]. However, both procedures are inadequate for quantifying how much better one acquisition function is than another if we are not interested in the performance at a specific iteration but in the performance in general. We are unaware of any metric for quantifying the overall performance within the regression setting that is comparable across different data sets. Within the classification setting, Yang and Loog [2018] propose to use the area under the learning curve (AUC) based on accuracy. Since the accuracy is bounded between 0 and 1, they can compare this measure across different data sets. Likewise, O'Neill et al. [2017] compute the AUC using the root mean square error (RMSE) as the performance metric. However, since the magnitude of the RMSE is data-specific, they cannot compare the performance across data sets. To resolve this problem, we suggest using the relative decrease in AUC.
We compare the relative decrease with respect to the baseline acquisition function ALM, since the latter is widely used and regarded as the standard within active learning with GPs. For a metric that is lower bounded by zero, such as RMSE, the AUC gives the overall error of the acquisition function. We can directly calculate the relative decrease in error from the AUCs of the active learning acquisition functions and the baseline. If the metric has no lower bound, such as the negative log marginal likelihood (NLML), the interpretation is less intuitive. Therefore, we construct a lower bound for the metric using the lowest NLML obtained across all the acquisition functions, such that the relative decrease in the AUC can then be interpreted in the same way as for the RMSE. We compare all 10 runs of each acquisition function with the 10 runs of the baseline to get a precise estimate of both the mean and standard deviation of the relative decreases. Since the relative decrease is a ratio, we compute the unbiased estimates using the formulas from Van Kempen and Van Vliet [2000]. See Appendix A.6 for the formulas and the pseudo-code.
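A minimal sketch of this metric is given below; it omits the unbiased ratio correction of Van Kempen and Van Vliet [2000] and the averaging over the ten repeated runs, both of which are used for the reported numbers.

```python
import numpy as np

def relative_auc_decrease(curve, baseline_curve, lower_bound=0.0):
    """Relative decrease in the area under a learning curve w.r.t. the ALM baseline.

    `curve` and `baseline_curve` hold the per-iteration error (e.g. RMSE, or NLML
    shifted by a common lower bound across acquisition functions).
    """
    auc = np.trapz(np.asarray(curve, dtype=float) - lower_bound)
    auc_baseline = np.trapz(np.asarray(baseline_curve, dtype=float) - lower_bound)
    return 1.0 - auc / auc_baseline    # positive values mean better than the baseline

# Example: a curve that is 10% below the baseline at every iteration.
print(relative_auc_decrease([0.9, 0.45, 0.18], [1.0, 0.5, 0.2]))   # -> 0.1
```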
5.1 Experiments with 6 simulators
We benchmark the acquisition functions on the six classic simulators. We divide the simulators into subgroups and describe how the acquisition functions behave under different output complexities and with multiple inputs, effectively covering three distinct modeling scenarios. The following paragraphs describe the active learning curves in Figure 4.
Noise or signal The simulator Gramacy1d has previously been used to study the effect of the noise term in GPs [Gramacy and Lee, 2012]. The simulator has a periodic signal that is hard to reveal if the data points are not queried cleverly. Both B-QBC and QB-MGP reach convergence simultaneously, while the other acquisition functions struggle to distinguish the noise from the signal.
Linear and non-linear output regions The two simulators Higdon and Gramacy2d have been used to illustrate cases where GPs struggle to model the data because the output signal has both linear and non-linear regions [Gramacy and Lee, 2009]. Our experiments on these simulators show the performance of the acquisition functions when the GP is a suboptimal choice of model. Querying data points in the linear and non-linear regions will yield a GP with a longer and shorter length scale, respectively. For Higdon, both B-QBC and QB-MGP balance the sampling, since the corresponding NLML and RMSE are low; likewise, for Gramacy2d, QB-MGP achieves both the
2 https://github.com/coriis/active-learning-fbgp
lowest NLML and RMSE. Overall, these results show that when the GP is inadequate to model the data, both B-QBC and QB-MGP perform better than the other acquisition functions.
Multiple inputs To evaluate the performance on higher dimensions, we consider the smooth 2d Branin simulator, the strongly non-linear 3d Ishigami simulator, and the 6d Hartmann simulator with six local minima. BALD underperforms on Branin, but the other acquisition functions have similar performance, with B-QBC having an overall better NLML. For Ishigami, BALD, B-QBC, and QB-MGP reach the best NLML, but the earlier iterations show that BALD is slightly better than the other two acquisition functions. For Hartmann, B-QBC and QB-MGP are the most stable and best in terms of the NLML, whereas the latter achieves the lowest RMSE.
5.2 The Overall Performance
From the visual inspection of Figure 4, it is hard to measure how good each of the acquisition functions is. In Table 2, we quantify the performance using the method described earlier, based on the relative decrease in the area under the curve (AUC).
First of all, it is clear that no single acquisition function performs best for all the simulators. B-QBC achieves the largest decrease in AUC for either NLML or RMSE in five cases, and QB-MGP is the next best, having the largest decrease four times. This is reflected in the overall performance, where B-QBC and QB-MGP are the best performing acquisition functions in terms of the marginal likelihood and the root mean square error, respectively. Conversely, BALD is the worst performing acquisition function overall, not even better than the baseline, which suggests that this acquisition function might not be suited for Gaussian Processes within the regression setting. Given the consistently good performance of B-QBC and QB-MGP (B-QBC is only worse than the baseline twice), we say that they are robust to different complexities in the simulators' outputs.
5.3 Limitations
Gaussian Processes are known to be computationally expensive [Williams and Rasmussen, 2006]. The computational cost scales cubically with the number of data points in the data set, i.e., O(n³), and GPs are thus only suited for small data sets. A fully Bayesian GP is even more computationally expensive because of the MCMC sampling. There exist methods to circumvent this, e.g., variational inference (VI), but often at the cost of the quality of the approximation of the joint posterior of the hyperparameters [Lalchand and Rasmussen, 2020]. For future work, we will investigate whether the posterior can be approximated by VI, using several initializations, or whether it is possible to achieve similar performance by alternating between using a fully Bayesian GP and a regular GP.
This paper is based on empirical results that are dependent on specific simulators. The simulators represent diverse and distinct classic homoscedastic simulators and are representative of the engineering problems occurring in the real world [Gramacy, 2020, Sauer et al., 2022, Cole et al., 2022]. However, an interesting case that is less often investigated in the literature on active learning with simulators is a simulator with heteroscedastic noise.
A common heteroscedastic case study is the motorcycle data set [Silverman, 1985, Gramacy and Lee, 2008, Gramacy, 2020]. We create a corresponding simulator by fitting a variational GP [Hensman et al., 2015] to the motorcycle accident data. For reproducibility, the mean and standard deviation of the simulator are given in appendix A.7. The experiments on the Motorcycle simulator explore how the active learning acquisition functions perform when the simulator has heteroscedastic noise but we model it with a homoscedastic GP. The simulator and the results are shown in Figure 5. Most conspicuous are the poor and good performance of B-QBC and QB-MGP, respectively. In appendix A.8, we show that B-QBC is misled by the heteroscedastic noise and focuses too much on the disagreement in the middle, and that the B-ALM component of QB-MGP acts as a diversity measure that encourages more exploration, since it aims at reducing the overall predictive uncertainty.
Another aspect of the work in this paper is the use of domain and expert knowledge. The incorporation of domain expertise and expert guidance regarding the simulator under study can be a decisive factor in a successful active learning strategy. However, in many practical situations, such a priori domain expertise may not be readily accessible or even translatable into the functional structure of the model as useful modeling information. On these occasions, generic tools that are robust enough to handle a plethora of diverse simulation output behaviors are prudently advisable. This study does not show which of the acquisition functions is best if such information about the functional complexity were available, e.g., knowing that the signal is periodic. In future work, it would indeed be interesting to see if B-QBC and QB-MGP would perform equally well in that setting.
6 Conclusion
In this paper, we propose two active learning acquisition functions: Bayesian Query-by-Committee (B-QBC) and Query by a Mixture of Gaussian Processes (QB-MGP), both of which are suited for fully Bayesian GPs. They are designed to explicitly handle the well-known bias-variance trade-off through the GP's two hyperparameters, the length scale and the noise term. We empirically show that they query new data points more efficiently than previously used acquisition functions. Across six classic simulators, which cover different complexities and numbers of inputs, we show that B-QBC and QB-MGP are the two functions that achieve the best marginal likelihood and root mean square error, respectively, with the fewest iterations. On average, across the simulators, B-QBC reduced the negative log marginal likelihood by 41%, and QB-MGP decreased the root mean square error by 12%, compared to the baseline. To this end, we believe that the proposed acquisition functions are robust enough to handle a variety of diverse simulation output behaviors, while being entirely independent of any prior understanding of the underlying output distributions of the simulator.
Acknowledgments and Disclosure of Funding
This work was supported by NOSTROMO, framed in the scope of the SESAR 2020 Exploratory Research topic SESAR-ER4-26-2019, funded by the SESAR Joint Undertaking through the European Union's Horizon 2020 research and innovation programme under grant agreement No 892517. | 1. What is the focus and contribution of the paper regarding Bayesian GPs in Bayes opt and active learning?
2. What are the strengths of the proposed approach, particularly in its interpretation and presentation?
3. What are the weaknesses of the paper, especially concerning computational costs and scalability?
4. Do you have any concerns or questions about the method's performance and its comparison with other works?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This work proposes the use of fully Bayesian GPs in the context of Bayes opt and active learning. They propose fully Bayesian versions of known acquisition functions that leverage information from the hyperparameter posterior and also suggest using the posterior predictive variance of the GMM (non-Gaussian) process to derive an acquisition function. The claim is that the Bayesian variants yield more effective acquisition functions -- implicitly because they approximate the posterior predictive uncertainty better than ML-II. They demonstrate and analyse the performance across methods on 6 simulator tasks
Strengths And Weaknesses
Strengths:
Interesting interpretation of the multi-modal marginal likelihood surface in terms of the bias-variance tradeoff parlance.
Good presentation, well-written and clear.
Decent experimental evaluation.
Approximating the hyperparameter posterior - a relatively understudied area in literature.
Weaknesses:
A major concern is the computational cost of running FBGP at each iteration. A naive strategy (albeit sufficient for small datasets like the ones studied) would scale as O(MN^3), where M is the number of samples - how exactly can this method scale? Perhaps by interleaving FBGP at intervals with traditional ML-II optimisation - a more sophisticated strategy would be required.
Questions
Isn't the most reasonable mode given by the mode where the marginal likelihood is maximized? and typically, the modes manifest in very starkly different marginal likelihood values.
The underperformance of B-QBC on the heteroscedastic motorcycle data is not sufficiently explained, QB-MGP performs best but in terms of the criteria is not so different than B-QBC.
Both B-QBC and QB-MGP use hyper samples from the posterior - then \mu_GMM is the same as the mean of the models corresponding to thetas drawn from the hyper posterior \bar{\mu}? Their performance should be very similar but they are quite different - is there more insight on this?
Limitations
The authors do discuss limitations - but conspicuously miss out the obvious one which is the compute for FBGPs. |
NIPS | Title
A probabilistic population code based on neural samples
Abstract
Sensory processing is often characterized as implementing probabilistic inference: networks of neurons compute posterior beliefs over unobserved causes given the sensory inputs. How these beliefs are computed and represented by neural responses is much-debated (Fiser et al. 2010, Pouget et al. 2013). A central debate concerns the question of whether neural responses represent samples of latent variables (Hoyer & Hyvarinnen 2003) or parameters of their distributions (Ma et al. 2006) with efforts being made to distinguish between them (Grabska-Barwinska et al. 2013). A separate debate addresses the question of whether neural responses are proportionally related to the encoded probabilities (Barlow 1969), or proportional to the logarithm of those probabilities (Jazayeri & Movshon 2006, Ma et al. 2006, Beck et al. 2012). Here, we show that these alternatives – contrary to common assumptions – are not mutually exclusive and that the very same system can be compatible with all of them. As a central analytical result, we show that modeling neural responses in area V1 as samples from a posterior distribution over latents in a linear Gaussian model of the image implies that those neural responses form a linear Probabilistic Population Code (PPC, Ma et al. 2006). In particular, the posterior distribution over some experimenter-defined variable like “orientation” is part of the exponential family with sufficient statistics that are linear in the neural sampling-based firing rates.
1 Introduction
In order to guide behavior, the brain has to infer behaviorally relevant but unobserved quantities from observed inputs in the senses. Bayesian inference provides a normative framework to do so; however, the computations required to compute posterior beliefs about those variables exactly are typically intractable. As a result, the brain needs to perform these computations in an approximate manner. The nature of this approximation is unclear with two principal classes having emerged as candidate hypotheses: parametric (variational) and sampling-based [8, 20].
In the first class, neural responses are interpreted as the parameters of the probability distributions that the brain computes and represents. The most popular members of this class are Probabilistic Population Codes (PPCs, [13, 4, 3, 2, 21, 19]). Common PPCs are based on the empirical observation that neural variability is well-described by an exponential family with linear sufficient statistics. Applying Bayes’ rule to compute the posterior probability, p(s|r), over some task-relevant scalar quantity, s, from the neural population response, r, one can write [2]:
p(s|r) ∝ g(s) exp[h(s)⊤r]    (1)
where each entry of h(s) represents a stimulus-dependent kernel characterizing the contribution of each neuron’s response to the distribution, and g(s) is some stimulus-dependent function that
∗Equal contribution
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
is independent of r. Importantly, the neural responses, r, are linearly related to the logarithm of the probability rather than the probability itself. This has been argued to be a convenient choice for the brain to implement important probabilistic operations like evidence integration over time and cues using linear operations on firing rates [2]. In addition, PPC-like codes are typically “distributed” since the belief over a single variable is distributed over the activity of many neurons, and different low-dimensional projections of those activities may represent beliefs over multiple variables simultaneously [19]. Furthermore, because s is defined by the experimenter and not explicitly inferred by the brain in our model we call it “implicit.”
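For concreteness, a small sketch of how a posterior of the form in equation (1) can be evaluated on a grid of s values is shown below; the helper names are illustrative and the computation is done in log space for numerical stability.

```python
import numpy as np

def ppc_posterior(r, s_grid, h, g):
    """Evaluate p(s|r) proportional to g(s) exp(h(s)^T r) on a grid of s values.

    `h` and `g` are callables returning the kernel vector h(s) and the scalar g(s);
    both are assumptions of this sketch rather than quantities taken from data.
    """
    log_post = np.array([np.log(g(s)) + h(s) @ r for s in s_grid])
    log_post -= log_post.max()                  # stabilize before exponentiating
    post = np.exp(log_post)
    return post / np.trapz(post, s_grid)        # normalize over the s grid
```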
In the second class of models, instead of representing parameters, neural responses are interpreted as samples from the represented distribution. First proposed by Hoyer & Hyvarinen (2003), this line of research has been elaborated at an abstract level, showing how it might be implemented in neural circuits [7, 18, 5], as well as for concrete generative models designed to explain properties of neurons in early visual cortex [14, 15, 24, 12, 16, 10]. Here, each neuron (or a subset of principal neurons) represents a single latent variable in a probabilistic model of the world. The starting point for these models is typically a specific generative model of the inputs which is assumed to have been learnt by the brain from earlier sensory experience, effectively assuming a separation of time-scales for learning and inference that is empirically justified at least for early visual areas. Rather than being the starting point as for PPCs, neural variability in sampling-based models emerges as a consequence of any uncertainty in the represented posterior. Importantly, samples have the same domain as the latents and do not normally relate to either log probability or probability directly.
This paper will proceed as illustrated in Figure 1: First, we will define a simple linear Gaussian image model as has been used in previous studies. Second, we will show that samples from this model approximate an exponential family with linear sufficient statistics. Third, we will relate the implied PPC, in particular the kernels, h(s), to the projective fields in our image model. Fourth, we will discuss the role of nuisance variables in our model. And finally, we will show that under the assumption of binary latents in the image model, neural firing rates are both proportional to probability (of the presence of a given image element) and to log probability (of implicitly encoded variables like orientation).
2 A neural sampling-based model
We follow previous work in assuming that neurons in primary visual cortex (V1) implement probabilistic inference in a linear Gaussian model of the input image [14, 15, 12, 6, 10]:
P(I|x) = N(I; Ax, σ²_x 1)    (2)
where N (y;µ,Σ) denotes the probability distribution function of a normal random variable (mean µ and covariance Σ) evaluated at y, and 1 is the identity matrix. The observed image, I, is
drawn from a Normal distribution around a linear combination of the projective fields (PFn), A = (PF1, . . . ,PFN), of all the neurons (1, . . . , N) weighted by their activation (responses), x = (x1, . . . , xN)⊤. The projective fields can be thought of as the brain's learned set of basis functions over images. The main empirical justification for this model consists in the fact that under the assumption of a sparse independent prior over x, the model learns projective field parameters that strongly resemble the localized, oriented and bandpass features that characterize V1 neurons when trained on natural images [14, 6]. Hoyer & Hyvarinen (2003) proposed that during inference neural responses can be interpreted as samples in this model. Furthermore, Orban et al. (2016) showed that samples from a closely related generative model (Gaussian Scale Mixture Model, [24]) could explain many response properties of V1 neurons beyond receptive fields. Since our main points are conceptual in nature, we will develop them for the slightly simpler original model described above.
Given an image, I, we assume that neural activities can be thought of as samples from the posterior distribution, x(i) ∼ p(x|I) ∝ p(I|x)pbrain(x) where pbrain(x) is the brain’s prior over x. In this model each population response, x = (x1, . . . , xN )>, represents a sample from the brain’s posterior belief about x|I. Each xn, individually, then represents the brain marginal belief about the intensity of the feature PFn in the image. This interpretation is independent of any task demands, or assumptions by the experimenter. It is up to the experimenter to infer the nature of the variables encoded by some population of neurons from their responses, e.g. by fitting this model to data. In the next section we will show how these samples can also be interpreted as a population code over some experimenter-defined quantity like orientation (Figure 1).
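As an illustration of this sampling interpretation, the sketch below draws samples from p(x|I) for the model in equation (2); for tractability it assumes a standard-normal prior over x, whereas the models cited above use a sparse prior for which sampling requires MCMC.

```python
import numpy as np

def sample_posterior_x(I, A, sigma_x, n_samples=1000, seed=0):
    """Draw samples x ~ p(x|I) for the linear Gaussian model of equation (2),
    under a standard-normal prior on x (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    n_latents = A.shape[1]
    precision = A.T @ A / sigma_x**2 + np.eye(n_latents)   # posterior precision
    cov = np.linalg.inv(precision)
    mean = cov @ (A.T @ I) / sigma_x**2                     # posterior mean
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Each row is one population response x^(i); averaging rows over time gives the
# mean activity that plays the role of firing rates in the next section.
```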
3 Neural samples form a Probabilistic Population Code (PPC)
In many classic neurophysiology experiments [17], the experimenter presents images that only vary along a single experimenter-defined dimension, e.g. orientation. We call this dimension the quantity of interest, or s. The question is then posed, what can be inferred about s given the neural activity in response to a single image representing s, x ∼ p(x|s). An ideal observer would simply apply Bayes’ rule to infer p(s|x) ∝ p(x|s)p(s) using its knowledge of the likelihood, p(x|s), and prior knowledge, p(s). We will now derive this posterior over s as implied by the samples drawn from our model in section (2).
We assume the image as represented by the brain’s sensory periphery (retinal ganglion cells) can be written as
p(I|s) = N(I; T(s), σ²_exp→brain 1)    (3)
where T is the experimenter-defined function that translates the scalar quantity of interest, s, into an actual image, I. T could represent a grating of a particular spatial frequency and contrast, or any other shape that is being varied along s in the course of the experiment. We further allow for Gaussian pixel noise with variance σ²_exp→brain around the template T(s) in order to model both external noise (which is sometimes added by experimentalists to vary the informativeness of the image) and noise internal to the brain (e.g. sensor noise). Let us now consider a single neural sample x(i) drawn from the brain's posterior conditioned on an image I. From the linear Gaussian generative model in equation (2), the likelihood of a single sample is p(I|x(i)) = N(I; Ax(i), σ²_x 1). The probability of drawing t independent samples2 of x is,
p(x(1,2,...,t)|I) = ∏_{i=1}^{t} p(x(i)|I) = ∏_{i=1}^{t} p(I|x(i)) p_brain(x(i)) / p_brain(I) = (1/p_brain(I)^t) ∏_{i=1}^{t} p(I|x(i)) p_brain(x(i))

2Depending on how the samples are being generated, consecutive samples are likely to be correlated to some degree. However, the central result derived in this section, which is valid for infinitely many samples, still holds due to the possibility of thinning in this case. Only for the finite sample case will autocorrelations lead to deviations from the solutions derived here.
Since the experimenter and the brain have different generative models, the priors over the variables depend on the generative model that they are part of (specified by the subscript in their pdf). Substituting in the Gaussian densities and combining all terms that depend on x but not on I into κ(x(1,2,...,t)), we get
p(x(1,2,...,t)|I) = κ(x(1,2,...,t)) (1/p_brain(I)^t) N(I; Ax̄, (σ²_x/t) 1).    (4)

where x̄ = (1/t) ∑_{i=1}^{t} x(i) is the mean activity of the units over time. We next derive the posterior over samples given the experimenter-defined stimulus s:
p(x(1,2,...,t)|s) = ∫ p(x(1,2,...,t)|I) p(I|s) dI

Substituting in our result from equation (4), we obtain p(x(1,2,...,t)|s) = κ(x(1,2,...,t)) ∫ (1/p_brain(I)^t) N(I; Ax̄, (σ²_x/t) 1) p(I|s) dI. Making use of equation (3) we can write

p(x(1,2,...,t)|s) = κ(x(1,2,...,t)) ∫ (1/p_brain(I)^t) N(I; Ax̄, (σ²_x/t) 1) N(I; T(s), σ²_exp→brain 1) dI
= κ(x(1,2,...,t)) N(T(s); Ax̄, (σ²_exp→brain + σ²_x/t) 1) ∫ (1/p_brain(I)^t) N(I; (T(s) σ²_x + Ax̄ t σ²_exp→brain)/(t σ²_exp→brain + σ²_x), (σ²_x σ²_exp→brain)/(t σ²_exp→brain + σ²_x) 1) dI
As the number of samples, t, increases, the variance of the Gaussian inside the integral converges to zero so that for large t we can approximate the integral by the integrand’s value at the mean of the Gaussian. The Gaussian’s mean itself converges to Ax̄ so that we obtain:
p(x(1,2,...,t)|s) ≈ κ(x(1,2,...,t)) N(T(s); Ax̄, σ²_exp→brain 1) (1/p_brain(Ax̄)^t).
Applying Bayes’ rule and absorbing all terms that do not contain s into the proportionality we find that in the limit of infinitely many samples
p(s|x(1,2,...,t)) ∝ N(T(s); Ax̄, σ²_exp→brain 1) p_exp(s).    (5) We can now rewrite this expression in the canonical form for the exponential family
p(s|x(1,2,...,t)) ∝ g(s) exp(h(s)⊤x̄), where    (6)

g(s) = exp(−T(s)⊤T(s) / (2σ²_exp→brain)) p_exp(s) and    (7)

h(s) = T(s)⊤A / σ²_exp→brain.    (8)
If x(i) is represented by neural responses (either spikes or instantaneous rates), x̄ becomes the vector of mean firing rates (r) of the population up to time t. Hence, in the limit of many samples, the neural responses form a linear PPC (equation (1)).
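The sketch below makes this recipe concrete: given a set of samples of x, it computes h(s) and g(s) from equations (7)-(8), assuming a flat prior over s, and evaluates the implied posterior of equation (6) on a grid; all names are illustrative.

```python
import numpy as np

def implied_posterior_over_s(x_samples, A, T, s_grid, sigma_eb):
    """Posterior over the implicit variable s implied by neural samples (eqs. 5-8).

    x_samples: (t, N) array of samples of x; T: callable mapping s to an image
    vector; sigma_eb: the experimenter-to-brain pixel noise. Flat prior over s.
    """
    x_bar = x_samples.mean(axis=0)                   # mean response, i.e. the rates r
    log_post = []
    for s in s_grid:
        t_s = T(s)
        h_s = (t_s @ A) / sigma_eb**2                # kernel h(s), equation (8)
        log_g = -(t_s @ t_s) / (2 * sigma_eb**2)     # log g(s) for a flat prior, equation (7)
        log_post.append(log_g + h_s @ x_bar)         # log of equation (6), up to a constant
    log_post = np.array(log_post)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / np.trapz(post, s_grid)
```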
Finite number of samples
The top row of Figure 2 shows a numerical approximation to the posterior over s for the finite sample case and illustrates its convergence for t → ∞ for the example model described in the previous section. As expected, posteriors for small numbers of samples are both wide and variable, and they get sharper and less variable as the number of samples increases (three runs are shown for each condition). Since the sample mean (x̄) only depends on the marginals over x, we can approximate it using the mean field solution for our image model. The bottom row of Figure 2 shows the corresponding population responses: the spike count of each neuron on the y-axis, sorted by the preferred stimulus of each neuron on the x-axis.
Interpretation of the implied PPC
The relationships that we have derived for g(s) and h(s) (equations (7-8)) provide insights into the nature of the PPC that arises in a linear Gaussian model of the inputs. A classic stimulus to consider when probing and modeling neurons in area V1 is orientation. If the presented images are identical up to orientation, and if the prior distribution over presented orientations is flat, then g(s) will be constant. Equation (7) shows how g(s) changes when either of those conditions does not apply, for instance when considering stimuli like spatial frequency or binocular disparity for which the prior significantly deviates from constant. More interestingly, equation (8) tells us how the kernels that characterize each neuron's contribution to the population code over s depend both on the images used, T(s), and on the projective fields, PFn, contained in A. Intuitively, the more T(s)⊤PFn depends on s, the more informative is that neuron's response for the posterior over s. Interestingly, equation (8) can be seen as a generalization from a classic feedforward model consisting of independent linear-nonlinear-Poisson (LNP) neurons in which the output nonlinearity is exponential, to a non-factorized model in which neural responses are generally correlated. In this case, h(s) is determined by the projective field, rather than the receptive field of a neuron (the receptive field, RF, being the linear image kernel in an LNP model of the neuron's response). It has been proposed that each latent's sample may be represented by a linear combination of neural responses [23], which can be incorporated into our model with h(s) absorbing the linear mapping.
Importantly, the kernels, h(s), and hence the nature of the PPC changes both with changes in the experimenter-defined variable, s (e.g. whether it is orientation, spatial frequency, binocular disparity, etc.), and with the set of images, T(s). The h(s) will be different for gratings of different size and spatial frequency, for plaids, and for rotated images of houses, to name a few examples. This means that a downstream area trying to form a belief about s (e.g. a best estimate), or an area that is combining the information contained in the neural responses x with that contained in another population (e.g. in the context of cue integration) will need to learn the h(s) separately for each task.
Multimodality of the PPC
Useful insights can be gained from the fact that – at least in the case investigated here — the implied PPC is crucially shaped by the distance measure in the space of sensory inputs, I, defined by our generative model (see equation 3). Figure 3 illustrates this dependence in pixel space: the posterior for a given value of s is monotonically related to the distance between the image “reconstructed” by the mean sample, x̄, and the image corresponding to that value of s. If this reconstruction lies close enough to the image manifold defined by T(s), then the implied posterior will have a local maximum at the value for s which corresponds to the T(s) closest to Ax̄. Whether p(s|x(1), . . . ,x(t)) has other local extrema depends on the shape of the T(s)−manifold (compare panels a and b). Importantly, the relative height of the global peak compared to other local maxima will depend on two other factors: (a) the amount of noise in the experimenter-brain channel, represented by σexp→brain, and (b) how well the generative model learnt by the brain can reconstruct the T(s) in the first place. For a complete, or overcomplete model, for instance, Ax̄ will exactly reconstruct the input image in the limit of many samples. As a result, the brain’s likelihood, and hence the implied posterior over s, will have a global maximum at the corresponding s (blue in Figure 3B). However, if the generative model is undercomplete, then Ax̄ may lie far from the T(s) manifold and in fact be approximately equidistant to two or more points on T(s) with the result that the implied posterior over s becomes multimodal with the possibility that multiple peaks have similar height. While V1’s model for monocular images is commonly assumed to be complete or even overcomplete [25], it may be undercomplete for binocular images where large parts of the binocular image space do not contain any natural images. (Note that the multimodality in the posterior over s discussed here is independent of any multimodality in the posterior over x. In fact, it is easy to see that for an exponential prior and Gaussian likelihood, the posterior p(x|I) is always Gaussian and hence unimodal while the posterior over s may still be multimodal.)
Dissociation of neural variability and uncertainty
It is important to appreciate the difference between the brain’s posteriors over x, and over s. The former represents a belief about the intensity or absence/presence of individual image elements in the input. The latter represents implicit knowledge about the stimulus that caused the input given the neural responses. Neural variability, as modeled here, corresponds to variability in the samples x(i) and is directly related to the uncertainty in the posterior over x. The uncertainty over s encoded by the PPC, on the other hand, depends on the samples only through their mean, not their variance. Given sufficiently many samples, the uncertainty over s is only determined by the noise in the channel between experimenter and brain (modeled as external pixel noise plus pixel-wise internal sensor noise added to the template, T(s)). This means that an experimenter increasing uncertainty over s by increasing external noise should not necessarily expect a corresponding increase in neural variability.
Nuisance variables
So far we have ignored the possible presence of nuisance variables beyond individual pixel noise. Such nuisance variables can be internal or external to the brain. Relevant nuisance variables when considering experiments done on V1 neurons include overall luminance, contrast, phases, spatial frequencies, etc (for an illustration of the effect of luminance and contrast see Figure 4). An important question from the perspective of a downstream area in the brain interpreting V1 responses is whether they need to be inferred separately and incorporated in any computations, or whether they leave the PPC invariant and can be ignored.
For any external nuisance variables, we can easily modify the experimenter’s model in equation (3) to include a nuisance variable η that modifies the template, T(s, η), and hence, the brain’s observation, I. This dependency carries through the derivation of the PPC to the end, such that
g(s, η) = exp(−T(s, η)⊤T(s, η) / (2σ²_exp→brain)) p_exp(s) and h(s, η) = T(s, η)⊤A / σ²_exp→brain.    (9)
As long as T(s, η)⊤T(s, η) is separable in s and η, the nuisance parameter's influence on g can be absorbed into the proportionality constant. This is clearly the case for contrast as a nuisance variable, as discussed in Ma et al. (2006), but in general it is under the experimenter's control, through the choice of T, whether the separability condition is met. For the PPC over s to be invariant to η, additionally, h(s) needs to be independent of η. For a linear Gaussian model, this is the case when the projective fields making up A = (PF1, . . . ,PFn) are either invariant to s or to η. For instance, when A is learnt on natural images, this is usually the case for overall luminance (Figure 4a) since one projective field will represent the DC component of any input image, while the other projective fields average to zero. So while T(s, η)⊤PF for the projective field representing the DC component will depend on the image's DC component (overall luminance), it does not depend on other aspects of the image (i.e. s). For projective fields that integrate to zero, however, T(s, η)⊤PF is independent of η, but may be modulated by s (e.g. orientation if the projective fields are orientation-selective).
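The separability and invariance conditions above can also be probed numerically; the sketch below tabulates h(s, η) = T(s, η)⊤A/σ² over a grid of s and η values and reports, per neuron, how much the kernel varies with the nuisance variable (an illustrative check, not part of the original analysis).

```python
import numpy as np

def kernel_variation_with_nuisance(A, T, s_vals, eta_vals, sigma_eb=1.0):
    """How much the PPC kernels h(s, eta) vary with a nuisance variable eta.

    T is a callable T(s, eta) -> image vector (eta could be overall luminance);
    a near-zero value for a neuron means its kernel is invariant to eta.
    """
    H = np.array([[T(s, eta) @ A / sigma_eb**2 for eta in eta_vals] for s in s_vals])
    return H.std(axis=1).max(axis=0)    # worst-case variation across eta, per neuron
```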
The original PPC described by Ma et al. (2006) was shown to be contrast-invariant since both the “tuning curve” of each neuron, relating to T(s, η)>PF in our case, and the response variance (taking the place of σ2exp→brain) were assumed to scale linearly with contrast (in line with empirical measurements). For our model, we assumed that σexp→brain was independent of the input, and hence, the T are not invariant to contrast. However, since the noise characteristics of the brain’s sensory periphery (included as sensor noise in our σexp→brain term) generally depend on the inputs, it remains a question for future research whether more realistic assumptions about the sensory noise imply an approximately invariant PPC over s. 3
Generally speaking, the nature of the PPC will depend on the particular image model that the brain has learnt. For instance, numerical results by Orban et al. (2016) suggest that explicitly including a contrast variable in the image model (Gaussian Scale Mixture, [24]) implies an approximately contrast-invariant PPC over orientation, but how precise and general that finding is remains to be seen analytically.

3In contrast to the interpretation of Ma et al. (2006), where contrast invariance is the result of a combination of mean response scaling and response variance scaling, in our case it would be a combination of the "feedforward" part of the mean response scaling and the scaling of the variability of the inputs.
4 Neurons simultaneously represent both probability & log probabilities
Taking the log of equation 6 makes it explicit that the neural responses, x, are linearly related to the log posterior over s. This interpretation agrees with a long list of prior work suggesting that neural responses are linearly related to the logarithm of the probabilities that they represent. This contrasts with a number of proposals, starting with Barlow (1969) [1], in which neural responses are proportional to the probabilities themselves (both schemes are reviewed in [20]). Both schemes have different advantages and disadvantages in terms of computation (e.g. making multiplication and addition particularly convenient, respectively) and are commonly discussed as mutually exclusive.
While in our model, with respect to the posterior over x, neural responses generally correspond to samples, i.e. neither probabilities nor log probabilities, they do become proportional to probabilities for the special case of binary latents. In that case, on the time scale of a single sample, the response is either 0 or 1, making the firing rate of neuron n proportional to its marginal probability, p(xn|I). Such a binary image model has been shown to be as successful as the original continuous model of Olshausen & Field (1996) in explaining the properties of V1 receptive fields [11, 6], and is supported by studies on the biological implementability of binary sampling [7, 18].
In sum, for the special case of binary latents, responses implied by our neural sampling model are at once proportional to probabilities (over xn), and to log probabilities (over s).
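A toy numerical check of this statement is given below; the marginal probabilities are hypothetical and serve only to show that the average of binary samples recovers p(xn = 1|I).

```python
import numpy as np

# With binary latents each sample of x_n is 0 or 1, so a unit's firing rate (the
# average of its samples) directly estimates the marginal p(x_n = 1 | I).
rng = np.random.default_rng(0)
marginals = np.array([0.1, 0.6, 0.9])            # hypothetical p(x_n = 1 | I)
samples = rng.random((10_000, 3)) < marginals    # binary responses, one row per sample
rates = samples.mean(axis=0)                     # rates approximate probabilities
print(np.round(rates, 2))                        # close to [0.1, 0.6, 0.9]
```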
5 Discussion
We have shown that sampling-based inference in a simple generative model of V1 can be interpreted in multiple ways, some previously discussed as mutually exclusive. In particular, the neural responses can be interpreted both as samples from the probabilistic model that the brain has learnt for its inputs and as parameters of the posterior distribution over any experimenter-defined variables that are only implicitly encoded, like orientation. Furthermore, we describe how both a log probability code as well as a direct probability code can be used to describe the very same system.
The idea of multiple codes present in a single system has been mentioned in earlier work [23, 5] but we make this link explicit by starting with one type of code (sampling) and showing how it can be interpreted as a different type of code (parametric) depending on the variable assumed to be represented by the neurons. Our findings indicate the importance of committing to a model and set of variables for which the probabilities are computed when comparing alternate coding schemes (e.g. as done in [9]).
Our work connects to machine learning in several ways: (1) our distinction between explicit variables (which are sampled) and implicit variables (which can be decoded parametrically) is analogous to the practice of re-using pre-trained models in new tasks, where the “encoding” is given but the “decoding” is re-learned per task. Furthermore, (2) the nature of approximate inference might be different for encoded latents and for other task-relevant decoded variables, given that our model can be interpreted either as performing parametric or sampling-based inference. Finally, (3) this suggests a relaxation of the commonplace distinction between Monte-Carlo and Variational methods for approximate inference [22]. For instance, our model could potentially be interpreted as a mixture of parametric distributions, where the parameters themselves are sampled.
We emphasize that we are not proposing that the model analyzed here is the best, or even a particular good model for neural responses in area V1. Our primary goal was to show that the same model can support multiple interpretations that had previously been thought to be mutually exclusive, and to derive analytical relationships between those interpretations.
The connection between the two codes specifies the dependence of the PPC kernels on how the image manifold defined by the implicit variable interacts with the properties of the explicitly represented variables. It makes explicit how infinitely many posteriors over implicit variables can be “decoded” by taking linear projections of the neural responses, raising questions about the parsimony of a description of the neural code based on implicitly represented variables like orientation.
We also note that the PPC that arises from the image model analyzed here is not contrast invariant like the one proposed by Ma et al. (2006), which was based on the empirically observed response variability of V1 neurons, and the linear contrast scaling of their tuning with respect to orientation. Of course, a linear Gaussian model is insufficient to explain V1 responses, and it would be interesting to derive the PPC implied by more sophisticated models like a Gaussian Scale Mixture Model [24] that is both a better model for natural images, enjoys more empirical support and, based on numerical simulations, may approximate a contrast-invariant linear PPC over orientation [16].
Finally, a more general relationship between the structure of the generative model for the inputs, and the invariance properties of PPCs empirically observed for different cortical areas, may help extend probabilistic generative models to higher cortical areas beyond V1.
Acknowledgments
This work was supported by NEI/NIH awards R01 EY028811 (RMH) and T32 EY007125 (RDL), as well as an NSF/NRT graduate training grant NSF-1449828 (RDL, SS). | 1. What are the main contributions and findings of the paper regarding neural responses and generative models?
2. What are the strengths and weaknesses of the proposed approach in deriving the log posterior over experimenter-defined variables?
3. Do you have any concerns or questions about the mathematical formulations and derivations presented in the paper?
4. How does the paper's content relate to previous works in the field, such as Orban et al. (2016)?
5. Are there any areas where the authors could improve their proofreading and clarity in presenting their arguments? | Review | Review
The authors start by assuming that: (1) neural responses x are samples from p(x|Image), and (2) the brain already has (or has pre-learned) a linear Gaussian generative model of images given responses i.e. p(Image|x) is N(Ax,noiseSD). Using these, they derive that the log posterior over some experimenter defined variable s (that generated the image using an arbitrary function) is a linearly weighted sum of neural responses x; i.e. the neural responses form a probabilisitic population code (PPC) which can be linearly decoded to give the posterior over any experimenter defined variable that generated the images. The authors thus show that the sampling vs PPC hypotheses are not disjoint, but can actually co-exist by properly defining what is being sampled and what is coded in the PPC. This is a very significant result and definitely deserves to be widely disseminated in the community. Thus I recommend this work to be accepted at NIPS, after these corrections. major: l 111: If I follow the math correctly, then in the second equality after line 111, there seems to be an extra factor of a normal distribution function with three dots trailing. While the paper is clear, the authors must put in more effort in proof-reading their texts, so as to not burden reviewers with a huge number of trivial corrections as below! Laxity here also raises doubts on the rigour in the main results ... minor: l 54: "and do not normally related to either log probability or probability directly." l 86: "presents a images" l 71-72: "Furthermore, and Orban et al. (2016)" l 90: p(s|x) not p(s|r) l 98: Equation 3 and the one below. How does the Normal distribution function have 3 arguments here compared to two arguments earlier? What is the symbol at the end that looks like an Identity symbol? pg 3 footnote: "infinitely samples" l 106: "we get Dr" l 106: equation 4: \bar{x} should be defined. l 165: "will have a global maximum for at the corresponding" l 199: "it will be under the experimenterâs control of T whether the separability condition is met" l 200: " to be invariance over " l 230-231: "generally correspond samples" l 232: "a binary latents" l 228-229: "e.g. making addition and multiplication particularly, respectively" |
NIPS | Title
A probabilistic population code based on neural samples
Abstract
Sensory processing is often characterized as implementing probabilistic inference: networks of neurons compute posterior beliefs over unobserved causes given the sensory inputs. How these beliefs are computed and represented by neural responses is much-debated (Fiser et al. 2010, Pouget et al. 2013). A central debate concerns the question of whether neural responses represent samples of latent variables (Hoyer & Hyvarinnen 2003) or parameters of their distributions (Ma et al. 2006) with efforts being made to distinguish between them (Grabska-Barwinska et al. 2013). A separate debate addresses the question of whether neural responses are proportionally related to the encoded probabilities (Barlow 1969), or proportional to the logarithm of those probabilities (Jazayeri & Movshon 2006, Ma et al. 2006, Beck et al. 2012). Here, we show that these alternatives – contrary to common assumptions – are not mutually exclusive and that the very same system can be compatible with all of them. As a central analytical result, we show that modeling neural responses in area V1 as samples from a posterior distribution over latents in a linear Gaussian model of the image implies that those neural responses form a linear Probabilistic Population Code (PPC, Ma et al. 2006). In particular, the posterior distribution over some experimenter-defined variable like “orientation” is part of the exponential family with sufficient statistics that are linear in the neural sampling-based firing rates.
1 Introduction
In order to guide behavior, the brain has to infer behaviorally relevant but unobserved quantities from observed inputs in the senses. Bayesian inference provides a normative framework to do so; however, the computations required to compute posterior beliefs about those variables exactly are typically intractable. As a result, the brain needs to perform these computations in an approximate manner. The nature of this approximation is unclear with two principal classes having emerged as candidate hypotheses: parametric (variational) and sampling-based [8, 20].
In the first class, neural responses are interpreted as the parameters of the probability distributions that the brain computes and represents. The most popular members of this class are Probabilistic Population Codes (PPCs, [13, 4, 3, 2, 21, 19]). Common PPCs are based on the empirical observation that neural variability is well-described by an exponential family with linear sufficient statistics. Applying Bayes’ rule to compute the posterior probability, p(s|r), over some task-relevant scalar quantity, s, from the neural population response, r, one can write [2]:
p(s|r) ∝ g(s) exp [ h(s)>r ] (1)
where each entry of h(s) represents a stimulus-dependent kernel characterizing the contribution of each neuron’s response to the distribution, and g(s) is some stimulus-dependent function that
∗Equal contribution
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
is independent of r. Importantly, the neural responses, r, are linearly related to the logarithm of the probability rather than the probability itself. This has been argued to be a convenient choice for the brain to implement important probabilistic operations like evidence integration over time and cues using linear operations on firing rates [2]. In addition, PPC-like codes are typically “distributed” since the belief over a single variable is distributed over the activity of many neurons, and different low-dimensional projections of those activities may represent beliefs over multiple variables simultaneously [19]. Furthermore, because s is defined by the experimenter and not explicitly inferred by the brain in our model we call it “implicit.”
In the second class of models, instead of representing parameters, neural responses are interpreted as samples from the represented distribution. First proposed by Hoyer & Hyvarinnen (2003), this line of research has been elaborated in the abstract showing how it might be implemented in neural circuits [7, 18, 5] as well as for concrete generative models designed to explain properties of neurons in early visual cortex [14, 15, 24, 12, 16, 10]. Here, each neuron (or a subset of principal neurons), represents a single latent variable in a probabilistic model of the world. The starting point for these models is typically a specific generative model of the inputs which is assumed to have been learnt by the brain from earlier sensory experience, effectively assuming a separation of time-scales for learning and inference that is empirically justified at least for early visual areas. Rather than being the starting point as for PPCs, neural variability in sampling-based models emerges as a consequence of any uncertainty in the represented posterior. Importantly, samples have the same domain as the latents and do not normally relate to either log probability or probability directly.
This paper will proceed as illustrated in Figure 1: First, we will define a simple linear Gaussian image model as has been used in previous studies. Second, we will show that samples from this model approximate an exponential family with linear sufficient statistics. Third, we will relate the implied PPC, in particular the kernels, h(s), to the projective fields in our image model. Fourth, we will discuss the role of nuisance variables in our model. And finally, we will show that under assumption of binary latent in the image model, neural firing rates are both proportional to probability (of presence of a given image element) and log probability (of implicitly encoded variables like orientation).
2 A neural sampling-based model
We follow previous work in assuming that neurons in primary visual cortex (V1) implement probabilistic inference in a linear Gaussian model of the input image [14, 15, 12, 6, 10]:
P (I|x) = N (I;Ax, σ2x1) (2)
where N (y;µ,Σ) denotes the probability distribution function of a normal random variable (mean µ and covariance Σ) evaluated at y, and 1 is the identity matrix. The observed image, I, is
drawn from a Normal distribution around a linear combination of the projective fields (PFn), A = (PF1, . . . ,PFN ) of all the neurons (1, . . . , N) weighted by their activation (responses), x = (x1, . . . , xN )
>. The projective fields can be thought of as the brain’s learned set of basis functions over images. The main empirical justification for this model consists in the fact that under the assumption of a sparse independent prior over x, the model learns projective field parameters that strongly resemble the localized, oriented and bandpass features that characterize V1 neurons when trained on natural images [14, 6]. Hoyer & Hyvarinen (2003) proposed that during inference neural responses can be interpreted as samples in this model. Furthermore, Orban et al. (2016) showed that samples from a closely related generative model (Gaussian Scale Mixture Model, [24]) could explain many response properties of V1 neurons beyond receptive fields. Since our main points are conceptual in nature, we will develop them for the slightly simpler original model described above.
Given an image, I, we assume that neural activities can be thought of as samples from the posterior distribution, x(i) ∼ p(x|I) ∝ p(I|x)pbrain(x) where pbrain(x) is the brain’s prior over x. In this model each population response, x = (x1, . . . , xN )>, represents a sample from the brain’s posterior belief about x|I. Each xn, individually, then represents the brain marginal belief about the intensity of the feature PFn in the image. This interpretation is independent of any task demands, or assumptions by the experimenter. It is up to the experimenter to infer the nature of the variables encoded by some population of neurons from their responses, e.g. by fitting this model to data. In the next section we will show how these samples can also be interpreted as a population code over some experimenter-defined quantity like orientation (Figure 1).
3 Neural samples form a Probabilistic Population Code (PPC)
In many classic neurophysiology experiments [17], the experimenter presents images that only vary along a single experimenter-defined dimension, e.g. orientation. We call this dimension the quantity of interest, or s. The question is then posed, what can be inferred about s given the neural activity in response to a single image representing s, x ∼ p(x|s). An ideal observer would simply apply Bayes’ rule to infer p(s|x) ∝ p(x|s)p(s) using its knowledge of the likelihood, p(x|s), and prior knowledge, p(s). We will now derive this posterior over s as implied by the samples drawn from our model in section (2).
We assume the image as represented by the brain’s sensory periphery (retinal ganglion cells) can be written as
p(I|s) = N (I;T(s), σ2exp→brain1) (3)
where T is the experimenter-defined function that translates the scalar quantity of interest, s, into an actual image, I. T could represent a grating of a particular spatial frequency and contrast, or any other shape that is being varied along s in the course of the experimenter. We further allow for Gaussian pixel noise with variance σ2exp→brain around the template T(s) in order to model both external noise (which is sometimes added by experimentalists to vary the informativeness of the image) and noise internal to the brain (e.g. sensor noise). Let us now consider a single neural sample x(i) drawn from the brain’s posterior conditioned on an image I. From the linear Gaussian generative model in equation (2), the likelihood of a single sample is p(I|x(i)) = N (I;Ax(i), σ2x1). The probability of drawing t independent samples2 of x is,
p(x(1,2,...,t)|I) = t∏
i=1
p(x(i)|I)
= t∏ i=1 p(I|x(i))pbrain(x(i)) pbrain(I)
2Depending on how the samples are being generated, consecutive samples are likely to be correlated to some degree. However, the central result derived in this section which is valid for infinitely many samples still holds due to the possibility of thinning in this case. Only for the finite sample case will autocorrelations lead to deviations from the solutions here
= 1
pbrain(I)t t∏ i=1 p(I|x(i))pbrain(x(i))
Since the experimenter and brain have different generative models, the prior over the variables are dependent on the generative model that they are a part of (specified by the subscript in their pdf). Substituting in the Gaussian densities and combining all terms that depend on x but not on I into κ(x(1,2,...,t)), we get
p(x(1,2,...,t)|I) = κ ( x(1,2,...,t) ) 1 pbrain(I)t N ( I;Ax̄, σ2x t 1 ) . (4)
where x̄ = 1t ∑t 1 x (i) is the mean activity of the units over time. We next derive the posterior over samples given the experimenter-defined stimulus s:
p(x(1,2,...,t)|s) = ∫ p(x(1,2,...,t)|I)p(I|s)dI
Substituting in our result from equation (4), we obtain p(x(1,2,...,t)|s) = κ ( x(1,2,...,t) )∫ 1 pbrain(I)t N ( I;Ax̄, σ2x t 1 ) p(I|s)dI. Making use of equation (3) we can write
p(x(1,2,...,t)|s) = κ ( x(1,2,...,t) )∫ 1 pbrain(I)t N ( I;Ax̄, σ2x t 1 ) N (I;T(s), σ2exp→brain1)dI
= κ ( x(1,2,...,t) ) N [ T(s);Ax̄, ( σ2exp→brain +
σ2x t
) 1 ]
. . .∫ 1
pbrain(I)t N
[ I; T(s)σ2x + Ax̄tσ 2 exp→brain
tσ2exp→brain + σ 2 x
, σ2xσ 2 exp→brain
tσ2exp→brain + σ 2 x
1 ] dI
As the number of samples, t, increases, the variance of the Gaussian inside the integral converges to zero so that for large t we can approximate the integral by the integrand’s value at the mean of the Gaussian. The Gaussian’s mean itself converges to Ax̄ so that we obtain:
p(x(1,2,...,t)|s) ≈ κ ( x(1,2,...,t) ) N [ T(s);Ax̄, σ2exp→brain1 ] 1 pbrain(Ax̄)t .
Applying Bayes’ rule and absorbing all terms that do not contain s into the proportionality we find that in the limit of infinitely many samples
p(s|x(1,2,...,t)) ∝ N (T(s);Ax̄, σ2exp→brain1)pexp(s). (5) We can now rewrite this expression in the canonical form for the exponential family
p(s|x(1,2,...,t)) ∝ g(s) exp(h(s)>x̄) where (6)
g(s) = exp ( −T(s) >T(s)
2σ2exp→brain
) pexp(s) and (7)
h(s) = T(s)>A
σ2exp→brain . (8)
If x(i) is represented by neural responses (either spikes or instantaneous rates), x̄ becomes the vector of mean firing rates (r) of the population up to time t. Hence, in the limit of many samples, the neural responses form a linear PPC (equation (1)).
Finite number of samples
The top row of Figure 2 shows a numerical approximation to the posterior over s for the finite sample case and illustrates its convergence for t → ∞ for the example model described in the previous section. As expected, posteriors for small numbers of samples are both wide and variable, and they get sharper and less variable as the number of samples increases (three runs are shown for each condition). Since the mean samples (x̄) only depends on the marginals over x, we can approximate it using the mean field solution for our image model. The bottom row of Figure 2 shows the corresponding population responses: spike count for each neurons on the y−axis sorted by the preferred stimulus of each neuron on the x−axis.
Interpretation of the implied PPC
The relationships that we have derived for g(s) and h(s) (equations (7-8)) provide insights into the nature of the PPC that arises in a linear Gaussian model of the inputs. A classic stimulus to consider when probing and modeling neurons in area V1 is orientation. If the presented images are identical up to orientation, and if the prior distribution over presented orientations is flat, then g(s) will be constant. Equation (7) shows how g(s) changes when either of those conditions does not hold, for instance when considering stimuli like spatial frequency or binocular disparity for which the prior deviates significantly from constant. More interestingly, equation (8) tells us how the kernels that determine each neuron's contribution to the population code over s depend both on the images used, T(s), and on the projective fields, PFn, contained in A. Intuitively, the more T(s)ᵀPFn depends on s, the more informative that neuron's response is for the posterior over s. Interestingly, equation (8) can be seen as a generalization from a classic feedforward model consisting of independent linear-nonlinear-Poisson (LNP) neurons in which the output nonlinearity is exponential, to a non-factorized model in which neural responses are generally correlated. In this case, h(s) is determined by the projective field, rather than the receptive field, of a neuron (the receptive field, RF, being the linear image kernel in an LNP model of the neuron's response). It has been proposed that each latent's sample may be represented by a linear combination of neural responses [23], which can be incorporated into our model with h(s) absorbing the linear mapping.
Importantly, the kernels, h(s), and hence the nature of the PPC changes both with changes in the experimenter-defined variable, s (e.g. whether it is orientation, spatial frequency, binocular disparity, etc.), and with the set of images, T(s). The h(s) will be different for gratings of different size and spatial frequency, for plaids, and for rotated images of houses, to name a few examples. This means that a downstream area trying to form a belief about s (e.g. a best estimate), or an area that is combining the information contained in the neural responses x with that contained in another population (e.g. in the context of cue integration) will need to learn the h(s) separately for each task.
Multimodality of the PPC
Useful insights can be gained from the fact that – at least in the case investigated here — the implied PPC is crucially shaped by the distance measure in the space of sensory inputs, I, defined by our generative model (see equation 3). Figure 3 illustrates this dependence in pixel space: the posterior for a given value of s is monotonically related to the distance between the image “reconstructed” by the mean sample, x̄, and the image corresponding to that value of s. If this reconstruction lies close enough to the image manifold defined by T(s), then the implied posterior will have a local maximum at the value for s which corresponds to the T(s) closest to Ax̄. Whether p(s|x(1), . . . ,x(t)) has other local extrema depends on the shape of the T(s)−manifold (compare panels a and b). Importantly, the relative height of the global peak compared to other local maxima will depend on two other factors: (a) the amount of noise in the experimenter-brain channel, represented by σexp→brain, and (b) how well the generative model learnt by the brain can reconstruct the T(s) in the first place. For a complete, or overcomplete model, for instance, Ax̄ will exactly reconstruct the input image in the limit of many samples. As a result, the brain’s likelihood, and hence the implied posterior over s, will have a global maximum at the corresponding s (blue in Figure 3B). However, if the generative model is undercomplete, then Ax̄ may lie far from the T(s) manifold and in fact be approximately equidistant to two or more points on T(s) with the result that the implied posterior over s becomes multimodal with the possibility that multiple peaks have similar height. While V1’s model for monocular images is commonly assumed to be complete or even overcomplete [25], it may be undercomplete for binocular images where large parts of the binocular image space do not contain any natural images. (Note that the multimodality in the posterior over s discussed here is independent of any multimodality in the posterior over x. In fact, it is easy to see that for an exponential prior and Gaussian likelihood, the posterior p(x|I) is always Gaussian and hence unimodal while the posterior over s may still be multimodal.)
Dissociation of neural variability and uncertainty
It is important to appreciate the difference between the brain’s posteriors over x, and over s. The former represents a belief about the intensity or absence/presence of individual image elements in the input. The latter represents implicit knowledge about the stimulus that caused the input given the neural responses. Neural variability, as modeled here, corresponds to variability in the samples x(i) and is directly related to the uncertainty in the posterior over x. The uncertainty over s encoded by the PPC, on the other hand, depends on the samples only through their mean, not their variance. Given sufficiently many samples, the uncertainty over s is only determined by the noise in the channel between experimenter and brain (modeled as external pixel noise plus pixel-wise internal sensor noise added to the template, T(s)). This means that an experimenter increasing uncertainty over s by increasing external noise should not necessarily expect a corresponding increase in neural variability.
Nuisance variables
So far we have ignored the possible presence of nuisance variables beyond individual pixel noise. Such nuisance variables can be internal or external to the brain. Relevant nuisance variables when considering experiments done on V1 neurons include overall luminance, contrast, phases, spatial frequencies, etc. (for an illustration of the effect of luminance and contrast see Figure 4). An important question from the perspective of a downstream area in the brain interpreting V1 responses is whether they need to be inferred separately and incorporated in any computations, or whether they leave the PPC invariant and can be ignored.
For any external nuisance variables, we can easily modify the experimenter’s model in equation (3) to include a nuisance variable η that modifies the template, T(s, η), and hence, the brain’s observation, I. This dependency carries through the derivation of the PPC to the end, such that
g(s, \eta) = \exp\!\left(-\frac{T(s, \eta)^\top T(s, \eta)}{2\sigma^2_{\text{exp}\to\text{brain}}}\right) p_{\text{exp}}(s) \quad \text{and} \quad h(s, \eta) = \frac{T(s, \eta)^\top A}{\sigma^2_{\text{exp}\to\text{brain}}}. \qquad (9)
As long as T(s, η)ᵀT(s, η) is separable in s and η, the nuisance parameter's influence on g can be absorbed into the proportionality constant. This is clearly the case for contrast as the nuisance variable, as discussed in Ma et al. (2006), but in general it will be under the experimenter's control of T whether the separability condition is met. For the PPC over s to be invariant to η, additionally, h(s) needs to be independent of η. For a linear Gaussian model, this is the case when the projective fields making up A = (PF1, . . . , PFn) are either invariant to s or to η. For instance, when A is learnt on natural images, this is usually the case for overall luminance (Figure 4a), since one projective field will represent the DC component of any input image, while the other projective fields average to zero. So while T(s, η)ᵀPF for the projective field representing the DC component will depend on the image's DC component (overall luminance), it does not depend on other aspects of the image (i.e. s). For projective fields that integrate to zero, however, T(s, η)ᵀPF is independent of η, but may be modulated by s (e.g. orientation if the projective fields are orientation-selective).
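A small sketch of the luminance argument above, with a made-up template and projective fields (none of the specifics come from the paper): an additive luminance offset η changes T(s, η)ᵀPF only for a projective field carrying a DC component, while zero-mean projective fields are unaffected.

```python
import numpy as np

# Sketch of the luminance-invariance argument. Template and fields are hypothetical.
rng = np.random.default_rng(1)
n_pix = 64

PF_dc = np.ones(n_pix) / n_pix                   # projective field carrying the DC component
PF_zero_mean = rng.normal(size=n_pix)
PF_zero_mean -= PF_zero_mean.mean()              # zero-mean projective field

def T(s, eta):
    # hypothetical template: an oriented-like pattern plus an overall luminance offset eta
    return np.sin(np.arange(n_pix) * s) + eta

for eta in (0.0, 0.5, 1.0):
    print(eta,
          round(float(T(0.3, eta) @ PF_zero_mean), 6),   # unchanged as eta varies
          round(float(T(0.3, eta) @ PF_dc), 6))          # shifts with eta
```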
The original PPC described by Ma et al. (2006) was shown to be contrast-invariant since both the "tuning curve" of each neuron, relating to T(s, η)ᵀPF in our case, and the response variance (taking the place of σ²_exp→brain) were assumed to scale linearly with contrast (in line with empirical measurements). For our model, we assumed that σ_exp→brain was independent of the input, and hence, the T are not invariant to contrast. However, since the noise characteristics of the brain's sensory periphery (included as sensor noise in our σ_exp→brain term) generally depend on the inputs, it remains a question for future research whether more realistic assumptions about the sensory noise imply an approximately invariant PPC over s.³
Generally speaking, the nature of the PPC will depend on the particular image model that the brain has learnt. For instance, numerical results by Orban et al. (2016) suggest that explicitly including a contrast variable in the image model (Gaussian Scale Mixture, [24]) implies an approximately contrast-invariant PPC over orientation, but how precise and general that finding is remains to be seen analytically.

³In contrast to the interpretation of Ma et al. (2006), where contrast invariance is the result of a combination of mean response scaling and response variance scaling, in our case it would be a combination of the "feedforward" part of the mean response scaling and the scaling of the variability of the inputs.
4 Neurons simultaneously represent both probability & log probabilities
Taking the log of equation 6 makes it explicit that the neural responses, x, are linearly related to the log posterior over s. This interpretation agrees with a long list of prior work suggesting that neural responses are linearly related to the logarithm of the probabilities that they represent. This contrasts with a number of proposals, starting with Barlow (1969) [1], in which neural responses are proportional to the probabilities themselves (both schemes are reviewed in [20]). Both schemes have different advantages and disadvantages in terms of computation (e.g. making multiplication and addition particularly convenient, respectively) and are commonly discussed as mutually exclusive.
While in our model, with respect to the posterior over x, neural responses generally correspond to samples, i.e. neither probabilities nor log probabilities, they do become proportional to probabilities for the special case of binary latents. In that case, on the time scale of a single sample, the response is either 0 or 1, making the firing rate of neuron n proportional to its marginal probability, p(x_n|I). Such a binary image model has been shown to be as successful as the original continuous model of Olshausen & Field (1996) in explaining the properties of V1 receptive fields [11, 6], and is supported by studies on the biological implementability of binary sampling [7, 18].
In sum, for the special case of binary latents, responses implied by our neural sampling model are at once proportional to probabilities (over xn), and to log probabilities (over s).
5 Discussion
We have shown that sampling-based inference in a simple generative model of V1 can be interpreted in multiple ways, some previously discussed as mutually exclusive. In particular, the neural responses can be interpreted both as samples from the probabilistic model that the brain has learnt for its inputs and as parameters of the posterior distribution over any experimenter-defined variables that are only implicitly encoded, like orientation. Furthermore, we describe how both a log probability code as well as a direct probability code can be used to describe the very same system.
The idea of multiple codes present in a single system has been mentioned in earlier work [23, 5] but we make this link explicit by starting with one type of code (sampling) and showing how it can be interpreted as a different type of code (parametric) depending on the variable assumed to be represented by the neurons. Our findings indicate the importance of committing to a model and set of variables for which the probabilities are computed when comparing alternate coding schemes (e.g. as done in [9]).
Our work connects to machine learning in several ways: (1) our distinction between explicit variables (which are sampled) and implicit variables (which can be decoded parametrically) is analogous to the practice of re-using pre-trained models in new tasks, where the “encoding” is given but the “decoding” is re-learned per task. Furthermore, (2) the nature of approximate inference might be different for encoded latents and for other task-relevant decoded variables, given that our model can be interpreted either as performing parametric or sampling-based inference. Finally, (3) this suggests a relaxation of the commonplace distinction between Monte-Carlo and Variational methods for approximate inference [22]. For instance, our model could potentially be interpreted as a mixture of parametric distributions, where the parameters themselves are sampled.
We emphasize that we are not proposing that the model analyzed here is the best, or even a particular good model for neural responses in area V1. Our primary goal was to show that the same model can support multiple interpretations that had previously been thought to be mutually exclusive, and to derive analytical relationships between those interpretations.
The connection between the two codes specifies the dependence of the PPC kernels on how the image manifold defined by the implicit variable interacts with the properties of the explicitly represented variables. It makes explicit how infinitely many posteriors over implicit variables can be “decoded” by taking linear projections of the neural responses, raising questions about the parsimony of a description of the neural code based on implicitly represented variables like orientation.
We also note that the PPC that arises from the image model analyzed here is not contrast-invariant like the one proposed by Ma et al. (2006), which was based on the empirically observed response variability of V1 neurons and the linear contrast scaling of their tuning with respect to orientation. Of course, a linear Gaussian model is insufficient to explain V1 responses, and it would be interesting to derive the PPC implied by more sophisticated models like a Gaussian Scale Mixture model [24], which is a better model for natural images, enjoys more empirical support, and, based on numerical simulations, may approximate a contrast-invariant linear PPC over orientation [16].
Finally, a more general relationship between the structure of the generative model for the inputs, and the invariance properties of PPCs empirically observed for different cortical areas, may help extend probabilistic generative models to higher cortical areas beyond V1.
Acknowledgments
This work was supported by NEI/NIH awards R01 EY028811 (RMH) and T32 EY007125 (RDL), as well as an NSF/NRT graduate training grant NSF-1449828 (RDL, SS). | 1. What is the focus of the paper in systems neuroscience?
2. What are the two theories put forward in the past that the authors aim to unify?
3. What is the simple Gaussian image model used by the authors to study probabilistic inference?
4. How do the authors interpret the linear PPC obtained from computing the posterior of the image model?
5. Are there any limitations or suggestions for improvement regarding the model's biological plausibility or illustration of its ingredients? | Review | Review
The authors tackle an important problem in systems neuroscience, namely whether and how neural activity can be interpreted as implementing probabilistic inference. In particular, two theories have been put forward in the past, probabilistic inference and sampling-based approaches. The authors claim to be able to unify the two approaches, thus providing an integrated account of probabilistic inference with neural populations. To this end, the authors study a relatively simple Gaussian image model, where an image is represented in a neural population with independent Gaussian noise. From this, they compute the posterior and show that the mean of samples from the posterior corresponds to a linear PPC. The interpretation of this PPC makes several interesting predictions, e.g. that adding noise to an image does not necessarily imply more variance in the spike counts, as the uncertainty of the representation is related to the mean, not the variance of the samples. While the paper touches on an important topic, and the calculations seem largely correct, the relationship of the model to neural data could have been worked out more, e.g. with more concrete examples. As it is, the model does not seem particularly biologically plausible, as common ingredients such as Poisson noise are missing. Even illustrating a few of the main ingredients and how they work together would have been helpful. The analysis of the model presented in Fig. 3/4 seems to be more illustrative than actual results; here a more in-depth interpretation would be helpful.
106: extra Dr
Fig 2: asymptotive -> asymptotic
125: left column -> bottom row?
NIPS | Title
Detecting Anomalous Event Sequences with Temporal Point Processes
Abstract
Automatically detecting anomalies in event data can provide substantial value in domains such as healthcare, DevOps, and information security. In this paper, we frame the problem of detecting anomalous continuous-time event sequences as out-of-distribution (OoD) detection for temporal point processes (TPPs). First, we show how this problem can be approached using goodness-of-fit (GoF) tests. We then demonstrate the limitations of popular GoF statistics for TPPs and propose a new test that addresses these shortcomings. The proposed method can be combined with various TPP models, such as neural TPPs, and is easy to implement. In our experiments, we show that the proposed statistic excels at both traditional GoF testing, as well as at detecting anomalies in simulated and real-world data.
1 Introduction
Event data is abundant in the real world and is encountered in various important applications. For example, transactions in financial systems, server logs, and user activity traces can all naturally be represented as discrete events in continuous time. Detecting anomalies in such data can provide immense industrial value. For example, abnormal entries in system logs may correspond to unnoticed server failures, atypical user activity in computer networks may correspond to intrusions, and irregular patterns in financial systems may correspond to fraud or shifts in the market structure.
Manual inspection of such event data is usually infeasible due to its sheer volume. At the same time, hand-crafted rules quickly become obsolete due to software updates or changing trends (He et al., 2016). Ideally, we would like to have an adaptive system that can learn the normal behavior from the data, and automatically detect abnormal event sequences. Importantly, such a system should detect anomalies in a completely unsupervised way, as high-quality labels are usually hard to obtain.
Assuming “normal” data is available, we can formulate the problem of detecting anomalous event sequences as an instance of out-of-distribution (OoD) detection. Multiple recent works consider OoD detection for image data based on deep generative models (Ren et al., 2019; Nalisnick et al., 2019; Wang et al., 2020). However, none of these papers consider continuous-time event data. Deep generative models for such variable-length event sequences are known as neural temporal point processes (TPPs) (Du et al., 2016). Still, the literature on neural TPPs mostly focuses on prediction tasks, and the problem of anomaly detection has not been adequately addressed by existing works (Shchur et al., 2021). We aim to fill this gap in our paper.
∗Work done during an internship at Amazon Research. Code and datasets: https://github.com/shchur/tpp-anomaly-detection
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Our main contributions are the following:
1. Approach for anomaly detection for TPPs. We draw connections between OoD detection and GoF testing for TPPs (Section 2). By combining this insight with neural TPPs, we propose an approach for anomaly detection that shows high accuracy on synthetic and real-world event data.
2. A new test statistic for TPPs. We highlight the limitations of popular GoF statistics for TPPs and propose the sum-of-squared-spacings statistic that addresses these shortcomings (Section 4). The proposed statistic can be applied to both unmarked and marked TPPs.
2 Anomaly detection and goodness-of-fit testing
Background. A temporal point process (TPP) (Daley & Vere-Jones, 2003), denoted as P, defines a probability distribution over variable-length event sequences in an interval [0, T]. A TPP realization X consists of strictly increasing arrival times (t1, . . . , tN), where N, the number of events, is itself a random variable. A TPP is characterized by its conditional intensity function λ∗(t) := λ(t|Ht) that is equal to the rate of arrival of new events given the history Ht = {tj : tj < t}. Equivalently, a TPP can be specified with the integrated intensity function (a.k.a. the compensator) Λ∗(t) = ∫₀ᵗ λ∗(u) du.
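As a concrete (hypothetical) example of these two quantities, the sketch below writes out the conditional intensity and compensator of a Hawkes process with a unit-rate exponential kernel; the closed forms follow directly from integrating the intensity and are not specific to this paper.

```python
import numpy as np

# Sketch: conditional intensity and compensator of a Hawkes process with
# lambda*(t) = mu + beta * sum_{t_j < t} exp(-(t - t_j)). Parameters are illustrative.
mu, beta = 1.0, 0.5

def intensity(t, history):
    past = history[history < t]
    return mu + beta * np.sum(np.exp(-(t - past)))

def compensator(t, history):
    # Lambda*(t) = integral of the intensity: mu * t + beta * sum (1 - exp(-(t - t_j)))
    past = history[history < t]
    return mu * t + beta * np.sum(1.0 - np.exp(-(t - past)))

X = np.array([0.4, 0.9, 1.0, 2.3, 4.1])   # an example event sequence on [0, T]
print([round(float(compensator(t, X)), 3) for t in X])
```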
Out-of-distribution (OoD) detection. We formulate the problem of detecting anomalous event sequences as an instance of OoD detection (Liang et al., 2018). Namely, we assume that we are given a large set of training sequencesDtrain = {X1, . . . , XM} that were sampled i.i.d. from some unknown distribution Pdata over a domain X . At test time, we need to determine whether a new sequenceX was also drawn from Pdata (i.e., X is in-distribution or “normal”) or from another distribution Q 6= Pdata (i.e., X is out-of-distribution or anomalous). We can phrase this problem as a null hypothesis test:
H0 : X ∼ Pdata      H1 : X ∼ Q for some Q ≠ Pdata. (1)
To reiterate, here we consider the case where X is a variable-length event sequence and Pdata is some unknown TPP. However, the rest of the discussion in Section 2 also applies to distributions over other data types, such as images.
Goodness-of-fit (GoF) testing. First, we observe that the problem of OoD detection is closely related to the problem of GoF testing (D'Agostino, 1986). We now outline the setup and approaches for GoF testing, and then describe how these can be applied to OoD detection. The goal of a GoF test is to determine whether a random element X follows a known distribution Pmodel²:
H0 : X ∼ Pmodel      H1 : X ∼ Q for some Q ≠ Pmodel. (2)
We can perform such a test by defining a test statistic s(X), where s : X → R (Fisher, 1936). For this, we compute the (two-sided) p-value for an observed realization x of X as³
ps(x) = 2×min{Pr(s(X) ≤ s(x)|H0), 1− Pr(s(X) ≤ s(x)|H0)}. (3)
The factor 2 accounts for the fact that the test is two-sided. We reject the null hypothesis (i.e., conclude that X doesn’t follow Pmodel) if the p-value is below some predefined confidence level α. Note that computing the p-value requires evaluating the cumulative distribution function (CDF) of the sampling distribution, i.e., the distribution test statistic s(X) under the null hypothesis H0.
GoF testing vs. OoD detection. The two hypothesis tests (Equations 1 and 2) appear similar—in both cases the goal is to determine whetherX follows a certain distribution P and no assumptions are made about the alternative Q. This means that we can perform OoD detection using the procedure described above, that is, by defining a test statistic s(X) and computing the respective p-value (Equation 3). However, in case of GoF testing (Equation 2), the distribution Pmodel is known. Therefore, we can analytically compute or approximate the CDF of s(X)|X ∼ Pmodel, and thus the p-value. In contrast, in an OoD detection hypothesis test (Equation 1), we make no assumptions about Pdata and only
²We test a single realization X, as is common in TPP literature (Brown et al., 2002). Note that this differs from works on univariate GoF testing that consider multiple realizations, i.e., H0 : X1, . . . , XM ∼ Pmodel (i.i.d.).
³In the rest of the paper, the difference between the random element X and its realization x is unimportant, so we denote both as X, as is usually done in the literature.
have access to samples Dtrain that were drawn from this distribution. For this reason, we cannot compute the CDF of s(X)|X ∼ Pdata analytically. Instead, we can approximate the p-value using the empirical distribution function (EDF) of the test statistic s(X) on Dtrain. The above procedure can be seen as a generalization of many existing methods for unsupervised OoD detection. These approaches usually define the test statistic based on the log-likelihood (LL) of a generative model fitted to Dtrain (Choi et al., 2018; Ren et al., 2019; Ruff et al., 2021). However, as follows from our discussion above, there is no need to limit ourselves to LL-based statistics. For instance, we can define a test statistic for event sequences based on the rich literature on GoF testing for TPPs. We show in Section 6 that this often leads to more accurate anomaly detection compared to LL. Moreover, the difference between OoD detection and GoF testing is often overlooked. By drawing a clear distinction between the two, we can avoid some of the pitfalls encountered by other works (Nalisnick et al., 2019), as we elaborate in Appendix A.
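A minimal sketch of this EDF-based approximation of the two-sided p-value in Equation (3); the helper name is ours, and `reference_stats` stands for the values of the chosen statistic on either Dtrain (OoD detection) or model samples (GoF testing).

```python
import numpy as np

# Sketch: approximating the two-sided p-value in Equation (3) with the empirical
# distribution function of the statistic on reference sequences.
def empirical_pvalue(stat_value, reference_stats):
    reference_stats = np.asarray(reference_stats, dtype=float)
    cdf = np.mean(reference_stats <= stat_value)      # EDF evaluated at stat_value
    return 2.0 * min(cdf, 1.0 - cdf)

# Example usage: flag a sequence as anomalous at confidence level alpha = 0.05
# if empirical_pvalue(s_of_X, train_stats) < 0.05.
```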
The anomaly detection framework we outlined above can be applied to any type of data—such as images or time series—but in this work we mostly focus on continuous-time event data. This means that our main goal is to find an appropriate test statistic for variable-length continuous-time event sequences. In Section 3, we take a look at existing GoF statistics for TPPs and analyze their limitations. Then in Section 4, we propose a new test statistic that addresses these shortcomings and describe in more detail how it can be used for OoD detection.
3 Review of existing GoF test statistics for TPPs
Here, we consider a GoF test (Equation 2), where the goal is to determine whether an event sequence X = (t1, . . . , tN ) was generated by a known TPP Pmodel with compensator Λ∗. We will return to the problem of OoD detection, where the data-generating distribution Pdata is unknown, in Section 4.2. Many popular GoF tests for TPPs are based on the following result (Ogata, 1988; Brown et al., 2002).
Theorem 1 (Random time change theorem (Brown et al., 2002)). A sequence X = (t1, . . . , tN ) is distributed according to a TPP with compensator Λ∗ on the interval [0, V ] if and only if the sequence Z = (Λ∗(t1), . . . ,Λ ∗(tN )) is distributed according to the standard Poisson process on [0,Λ∗(V )].
Intuitively, Theorem 1 can be viewed as a TPP analogue of how the CDF of an arbitrary random variable over R transforms its realizations into samples from Uniform([0, 1]). Similarly, the compensator Λ∗ converts a random event sequence X into a realization Z of the standard Poisson process (SPP). Therefore, the problem of GoF testing for an arbitrary TPP reduces to testing whether the transformed sequence Z follows the SPP on [0,Λ∗(T )]. In other words, we can define a GoF statistic for a TPP with compensator Λ∗ by (1) applying the compensator to X to obtain Z and (2) computing one of the existing GoF statistics for the SPP on the transformed sequence. This can also be generalized to marked TPPs (where events can belong to one ofK classes) by simply concatentating the transformed sequences Z(k) for each event type k ∈ {1, . . . ,K} (see Appendix D for details). SPP, i.e., the Poisson process with constant intensity λ∗(t) = 1, is the most basic TPP one can conceive. However, as we will shortly see, existing GoF statistics even for this simple model have considerable shortcomings and can only detect a limited class of deviations from the SPP. More importantly, test statistics for general TPPs defined using the above recipe (Theorem 1) inherit the limitations of the SPP statistics.
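A short sketch of this recipe, assuming a `compensator(t, history)` callable is available for the TPP under the null (for instance, the Hawkes example above, or a learned model):

```python
import numpy as np

# Sketch of the recipe based on Theorem 1: apply the compensator to the event
# sequence, then test the transformed sequence against the standard Poisson process.
def time_rescale(X, T, compensator):
    X = np.asarray(X, dtype=float)
    Z = np.array([compensator(t, X) for t in X])   # transformed arrival times
    V = compensator(T, X)                          # length of the transformed interval
    return Z, V

# Any SPP statistic (KS arrival, KS inter-event, 3S) can then be computed on (Z, V).
```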
For brevity, we denote the transformed arrival times as Z = (v1, . . . , vN ) = (Λ∗(t1), . . . ,Λ∗(tN )) and the length of the transformed interval as V = Λ∗(T ). One way to describe the generative process of an SPP is as follows (Pasupathy, 2010)
N | V ∼ Poisson(V),      u_i | N, V ∼ Uniform([0, V]) for i = 1, . . . , N. (4)
An SPP realization Z = (v1, . . . , vN ) is obtained by sorting the ui’s in increasing order. This is equivalent to defining the arrival time vi as the i-th order statistic u(i). We can also represent Z by the inter-event times (w1, . . . , wN+1) where wi = vi − vi−1, assuming v0 = 0 and vN+1 = V . Barnard (1953) proposed a GoF test for the SPP based on the above description (Equation 4) and the Kolmogorov–Smirnov (KS) statistic. The main idea of this approach is to check whether the arrival times v1, . . . , vN are distributed uniformly in the [0, V ] interval. For this, we compare F̂arr, the empirical CDF of the arrival times, with Farr(u) = u/V , the CDF of the Uniform([0, V ]) distribution.
This can be done using the KS statistic on the arrival times (KS arrival), defined as
\kappa_{\text{arr}}(Z) = \sqrt{N} \cdot \sup_{u \in [0, V]} \left|\hat{F}_{\text{arr}}(u) - F_{\text{arr}}(u)\right| \quad \text{where} \quad \hat{F}_{\text{arr}}(u) = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}(v_i \le u). \qquad (5)
Another popular GoF test for the SPP is based on the fact that the inter-event times wi are distributed according to the Exponential(1) distribution (Cox, 1966). The test compares F̂int, the empirical CDF of the inter-event times, and Fint(u) = 1− exp(−u), the CDF of the Exponential(1) distribution. This leads to the KS statistic for the inter-event times (KS inter-event)
\kappa_{\text{int}}(Z) = \sqrt{N} \cdot \sup_{u \in [0, \infty)} \left|\hat{F}_{\text{int}}(u) - F_{\text{int}}(u)\right| \quad \text{where} \quad \hat{F}_{\text{int}}(u) = \frac{1}{N+1} \sum_{i=1}^{N+1} \mathbb{1}(w_i \le u). \qquad (6)
KS arrival and KS inter-event statistics are often presented as the go-to approach for testing the goodness-of-fit of the standard Poisson process (Daley & Vere-Jones, 2003). Combining them with Theorem 1 leads to simple GoF tests for arbitrary TPPs that are widely used to this day (Gerhard et al., 2011; Alizadeh et al., 2013; Kim & Whitt, 2014; Tao et al., 2018; Li et al., 2018).
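For reference, here is one straightforward NumPy sketch of the two KS statistics in Equations (5) and (6); the supremum is evaluated at the jump points of the empirical CDFs.

```python
import numpy as np

# Sketch of the KS statistics (Equations 5 and 6) for a transformed sequence Z on [0, V].
def ks_arrival(Z, V):
    Z = np.sort(np.asarray(Z, dtype=float))
    N = len(Z)
    if N == 0:
        return 0.0
    model_cdf = Z / V                                  # CDF of Uniform([0, V])
    upper = np.arange(1, N + 1) / N - model_cdf        # D+ at the jump points
    lower = model_cdf - np.arange(0, N) / N            # D- at the jump points
    return np.sqrt(N) * max(upper.max(), lower.max())

def ks_interevent(Z, V):
    Z = np.sort(np.asarray(Z, dtype=float))
    w = np.sort(np.diff(np.concatenate(([0.0], Z, [V]))))   # N + 1 inter-event times
    n = len(w)
    model_cdf = 1.0 - np.exp(-w)                       # CDF of Exponential(1)
    upper = np.arange(1, n + 1) / n - model_cdf
    lower = model_cdf - np.arange(0, n) / n
    return np.sqrt(n - 1) * max(upper.max(), lower.max())
```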
Limitations of the KS statistics. The KS statistics κarr(Z) and κint(Z) are only able to differentiate the SPP from a narrow class of alternative processes. For example, KS arrival only checks if the arrival times vi are distributed uniformly, conditioned on the event count N. But what if the observed N is itself extremely unlikely under the SPP (Equation 4)? KS inter-event can be similarly insensitive to the event count: removing all events with V/2 < vi ≤ V from an SPP realization Z will result in just a single atypically large inter-event time wi, which changes the value of κint(Z) by at most 1/(N + 1). We demonstrate these limitations of κarr(Z) and κint(Z) in our experiments in Section 6.1. Other failure modes of the KS statistics were described by Pillow (2009). Note that ad-hoc fixes to the KS statistics do not address these problems. For example, combining multiple tests performed separately for the event count and arrival times using Fisher's method (Fisher, 1948; Cox, 1966) consistently decreases the accuracy, as we show in Appendix G. In the next section, we introduce a different test statistic that aims to address these shortcomings.
4 Sum-of-squared-spacings (3S) statistic for TPPs
4.1 Goodness-of-fit testing with the 3S statistic
A good test statistic should capture multiple properties of the SPP at once: it should detect deviations w.r.t. both the event count N and the distribution of the arrival or inter-event times. Here, we propose to approach GoF testing with a sum-of-squared-spacings (3S) statistic that satisfies these desiderata,
\psi(Z) = \frac{1}{V} \sum_{i=1}^{N+1} w_i^2 = \frac{1}{V} \sum_{i=1}^{N+1} (v_i - v_{i-1})^2. \qquad (7)
This statistic extends the sum-of-squared-spacings statistic proposed as a test of uniformity for fixed-length samples by Greenwood (1946). The important difference between our definition (Equation 7) and prior works (D'Agostino, 1986) is that we, for the first time, consider the TPP setting, where the number of events N is random as well. For this reason, we use the normalizing constant 1/V instead of N/V² (see Appendix B for details). As we will see, this helps capture abnormalities in the event count and results in more favorable asymptotic properties for the case of SPP.
Intuitively, for a fixed N , the statistic ψ is maximized if the spacings are extremely imbalanced, i.e., if one inter-event time wi is close to V and the rest are close to zero. Conversely, ψ attains its minimum when the spacings are all equal, that is wi = VN+1 for all i.
In Figure 2a we visualize the distribution of ψ|N,V for two different values of N . We see that the distribution of ψ depends strongly on N , therefore a GoF test involving ψ will detect if the event count N is atypical for the given SPP. This is in contrast to κarr and κint, the distributions of which, by design, are (asymptotically) invariant under N (Figure 2b). Even if one accounts for this effect, e.g., by removing the correction factor √ N in Equations 5 and 6, their distributions change only slightly compared to the sum of squared spacings (see Figures 2c and 2d). To analyze other properties of the statistic, we consider its moments under the null hypothesis.
Proposition 1. Suppose the sequence Z is distributed according to the standard Poisson process on the interval [0, V ]. Then the first two moments of the statistic ψ := ψ(Z) are
\mathbb{E}[\psi \mid V] = \frac{2}{V}\left(V + e^{-V} - 1\right) \quad \text{and} \quad \mathrm{Var}[\psi \mid V] = \frac{4}{V^2}\left(2V - 7 + e^{-V}\left(2V^2 + 4V + 8 - e^{-V}\right)\right).
The proof of Proposition 1 can be found in Appendix C. From Proposition 1 it follows that
\lim_{V \to \infty} \mathbb{E}[\psi \mid V] = 2, \qquad \lim_{V \to \infty} \mathrm{Var}[\psi \mid V] = 0. \qquad (8)
This leads to a natural notion of typicality in the sense of Nalisnick et al. (2019) and Wang et al. (2020) for the standard Poisson process. We can define the typical set of the SPP as the set of variable-length sequences Z on the interval [0, V] that satisfy |ψ(Z) − 2| ≤ ε for some small ε > 0. It follows from Equation 8 and Chebyshev's inequality that for large enough V, the SPP realizations will fall into the typical set with high probability. Therefore, at least for large V, we should be able to detect sequences that are not distributed according to the SPP based on the statistic ψ.
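Proposition 1 is easy to sanity-check by simulation, using the generative description of the SPP in Equation (4); the numbers below are a quick Monte Carlo estimate, not results from the paper.

```python
import numpy as np

# Monte Carlo sanity check of Proposition 1: for SPP realizations on [0, V], the
# sample mean of psi should be close to (2 / V) * (V + exp(-V) - 1), which approaches 2.
rng = np.random.default_rng(0)
V = 50.0
vals = []
for _ in range(2000):
    N = rng.poisson(V)
    Z = np.sort(rng.uniform(0.0, V, size=N))
    w = np.diff(np.concatenate(([0.0], Z, [V])))
    vals.append(np.sum(w ** 2) / V)
print(np.mean(vals), 2 / V * (V + np.exp(-V) - 1))   # sample mean vs. closed form
```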
Summary. To test the GoF of a TPP with a known compensator Λ∗ for an event sequence X = (t1, . . . , tN ), we first obtain the transformed sequence Z = (Λ∗(t1), . . . ,Λ∗(tN )) and compute the statistic ψ(Z) according to Equation 7. Since the CDF of the statistic under H0 cannot be computed analytically, we approximate it using samples drawn from Pmodel. That is, we draw realizations Dmodel = {X1, . . . , XM} from the TPP (e.g., using the inversion method (Rasmussen, 2018)) and compute the p-value for X (Equation 3) using the EDF of the statistic on Dmodel (North et al., 2002).
4.2 Out-of-distribution detection with the 3S statistic
We now return to the original problem of OoD detection in TPPs, where we have access to a set of in-distribution sequences Dtrain and do not know the data-generating process Pdata. Our idea is to perform the OoD detection hypothesis test (Equation 1) using the sum-of-squared-spacings test statistic that we introduced in the previous section. However, since the data-generating TPP Pdata is unknown, we do not know the corresponding compensator that is necessary to compute the statistic. Instead, we can fit a neural TPP model Pmodel (Du et al., 2016) to the sequences in Dtrain and use the compensator Λ∗ of the learned model to compute the statistic s(X).4 High flexibility of neural TPPs allows these models to more accurately approximate the true compensator. Having defined the statistic, we can approximate its distribution under H0 (i.e., assuming X ∼ Pdata) by the EDF of the statistic on Dtrain. We
use this EDF to compute the p-values for our OoD detection hypothesis test and thus detect anomalous sequences. We provide the pseudocode description of our OoD detection method in Appendix D.
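Since the pseudocode itself lives in Appendix D, the following is only a high-level sketch of the procedure as we read it; `model.fit` and `model.compensator` are placeholders for a neural TPP implementation and do not refer to any specific library.

```python
import numpy as np

# High-level sketch of the OoD detection procedure. `statistic` could be, e.g.,
# the sum_of_squared_spacings function sketched earlier.
def ood_pvalues(model, D_train, D_test, T, statistic):
    model.fit(D_train)                                    # maximum likelihood training
    def s(X):
        Z = np.array([model.compensator(t, X) for t in X])
        V = model.compensator(T, X)
        return statistic(Z, V)
    train_stats = np.array([s(X) for X in D_train])       # EDF of s(X) under H0
    def pvalue(X):
        cdf = np.mean(train_stats <= s(X))
        return 2.0 * min(cdf, 1.0 - cdf)
    return [pvalue(X) for X in D_test]                    # small p-value => anomalous
```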
We highlight that an OoD detection procedure like the one above is not equivalent to a GoF test for the learned generative model Pmodel, as suggested by earlier works (Nalisnick et al., 2019). While we use the compensator of the learned model to define the test statistic s(X), we compute the p-value for the OoD detection test based on s(X)|X ∼ Pdata. This is different from the distribution s(X)|X ∼ Pmodel used in a GoF test, since in general Pmodel ≠ Pdata. Therefore, even if the distribution of a test statistic under the GoF test can be approximated analytically (as, e.g., for the KS statistic (Marsaglia et al., 2003)), we have to use the EDF of the statistic on Dtrain for the OoD detection test. Figure 3 visualizes this difference. Here, we fit a TPP model on the in-distribution sequences from the STEAD dataset (Section 6.3) and plot the empirical distribution of the respective statistic s(X) on Dtrain (corresponds to s(X)|X ∼ Pdata) and on model samples Dmodel (corresponds to s(X)|X ∼ Pmodel).
⁴We can replace the 3S statistic on the transformed sequence Z with any other statistic for the SPP, such as KS arrival. In Sections 6.2 and 6.3, we compare different statistics constructed this way.
5 Related work
Unsupervised OoD detection. OoD detection approaches based on deep generative models (similar to our approach in Section 4.2) have received a lot of attention in the literature. However, there are several important differences between our method and prior works. First, most existing approaches perform OoD detection based on the log-likelihood (LL) of the model or some derived statistic (Choi et al., 2018; Ren et al., 2019; Nalisnick et al., 2019; Morningstar et al., 2021; Ruff et al., 2021). We observe that LL can be replaced by any other test statistic, e.g., taken from the GoF testing literature, which often leads to more accurate anomaly detection (Section 6). Second, unlike prior works, we draw a clear distinction between OoD detection and GoF testing. While this difference may seem obvious in hindsight, it is not acknowledged by the existing works, which may lead to complications (see Appendix A). Also, our formulation of the OoD detection problem in Section 2 provides an intuitive explanation to the phenomenon of "typicality" (Nalisnick et al., 2019; Wang et al., 2020). The (ε, 1)-typical set of a distribution P simply corresponds to the acceptance region of the respective hypothesis test with confidence level ε (Equation 1). Finally, most existing papers study OoD detection for image data and none consider variable-length event sequences, which is the focus of our work.
Our OoD detection procedure is also related to the rarity anomaly score (Ferragut et al., 2012; Janzing et al., 2019). The rarity score can be interpreted as the negative logarithm of a one-sided p-value (Equation 3) of a GoF test that uses the log-likelihood of some known model as the test statistic. In contrast, we consider a broader class of statistics and learn the model from the data.
Anomaly detection for TPPs. OoD detection, as described in Section 2, is not the only way to formalize anomaly detection for TPPs. For example, Ojeda et al. (2019) developed a distance-based approach for Poisson processes. Recently, Zhu et al. (2020) proposed to detect anomalous event sequences with an adversarially-trained model. Unlike these two methods, our approach can be combined with any TPP model without altering the training procedure. Liu & Hauskrecht (2019) studied anomalous event detection with TPPs, while we are concerned with entire event sequences.
GoF tests for TPPs. Existing GoF tests for the SPP usually check if the arrival times are distributed uniformly, using, e.g., the KS (Lewis, 1965) or chi-squared statistic (Cox, 1955). Our 3S statistic favorably compares to these approaches thanks to its dependence on the event countN , as we explain in Section 4 and show experimentally in Section 6.1. Methods combining the random time change theorem with a GoF test for the SPP (usually, the KS test) have been used at least since Ogata (1988), and are especially popular in neuroscience (Brown et al., 2002; Gerhard et al., 2011; Tao et al., 2018). However, these approaches inherit the limitations of the underlying KS statistic. Replacing the KS score with the 3S statistic consistently leads to a better separation between different TPP distributions (Section 6).
Gerhard & Gerstner (2010) discussed several GoF tests for discrete-time TPPs, while we deal with continuous time. Yang et al. (2019) proposed a GoF test for point processes based on Stein’s identity, which is related to a more general class of kernel-based GoF tests (Chwialkowski et al., 2016; Liu et al., 2016). Their approach isn’t suitable for neural TPPs, where the Papangelou intensity cannot be computed analytically. A recent work by Wei et al. (2021) designed a GoF test for self-exciting processes under model misspecification. In contrast to these approaches, our proposed GoF test from Section 4.1 can be applied to any TPP with a known compensator.
Sum-of-squared-spacings statistic. A similar statistic was first used by Greenwood (1946) for testing whether a fixed number of points are distributed uniformly in an interval. Several follow-up works studied the limiting distribution of the statistic (conditioned on N ) as N → ∞ (Hill, 1979; Stephens, 1981; Rao & Kuo, 1984). Our proposed statistic (Equation 7) is not invariant w.r.t. N and, therefore, is better suited for testing TPPs. We discuss other related statistics in Appendix B.
6 Experiments
Our experimental evaluation covers two main topics. In Section 6.1, we compare the proposed 3S statistic with existing GoF statistics for the SPP. Then in Sections 6.2 and 6.3, we evaluate our OoD detection approach on simulated and real-world data, respectively. The experiments were run on a machine with a 1080Ti GPU. Details on the setup and dataset construction are provided in Appendix E & F.
6.1 Standard Poisson process
In Section 3 we mentioned several failure modes of existing GoF statistics for the SPP. Then, in Section 4.1 we introduced the 3S statistic that was supposed to address these limitations. Hence, the goal of this section is to compare the proposed statistic with the existing ones in the task of GoF testing for the SPP. We consider four test statistics: (1) KS statistic on arrival times (Equation 5), (2) KS statistic on inter-event times (Equation 6), (3) chi-squared statistic on the arrival times (Cox, 1955; Tao et al., 2018), and (4) the proposed 3S statistic (Equation 7).
To quantitatively compare the discriminative power of different statistics, we adopt an evaluation strategy similar to Gerhard & Gerstner (2010); Yang et al. (2019). First, we generate a set Dmodel consisting of 1000 SPP realizations. We use Dmodel to compute the empirical distribution function of each statistic s(Z) under H0. Then, we define two test sets: DIDtest (consisting of samples from Pmodel, the SPP) and DOODtest (consisting of samples from Q, another TPP), each with 1000 sequences. Importantly, in this and following experiments, the training and test sets are always disjoint.
We follow the GoF testing procedure described at the end of Section 4.1, which corresponds to the hypothesis test in Equation 2. That is, we compute the p-value (Equation 3) for each sequence in the test sets using the EDF of s(Z) on Dmodel. A good test statistic s(Z) should assign lower p-values to the OoD sequences from DOODtest than to ID sequences from DIDtest, allowing us to discriminate between samples from Q and Pmodel. We quantify how well a given statistic separates the two distributions by computing the area under the ROC curve (ROC AUC). This effectively averages the performance of a statistic for the GoF hypothesis test over different significance levels α.
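The evaluation protocol then reduces to a ROC AUC computation over the p-values; a short sketch (using scikit-learn, which the paper does not necessarily use):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Sketch of the evaluation protocol: a statistic is scored by the ROC AUC of its
# p-values, with OoD sequences as the positive class and lower p-values treated
# as "more anomalous".
def auc_from_pvalues(pvalues_id, pvalues_ood):
    labels = np.concatenate([np.zeros(len(pvalues_id)), np.ones(len(pvalues_ood))])
    scores = -np.concatenate([np.asarray(pvalues_id), np.asarray(pvalues_ood)])
    return roc_auc_score(labels, scores)
```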
Datasets. We consider six choices for the distribution Q:
• RATE, a homogeneous Poisson process with intensity µ < 1;
• STOPPING, where events stop after some time tstop ∈ [0, V];
• RENEWAL, where inter-event times are drawn i.i.d. from the Gamma distribution;
• HAWKES, where events are more clustered compared to the SPP;
• INHOMOGENEOUS, a Poisson process with non-constant intensity λ(t) = β sin(ωt);
• SELFCORRECTING, where events are more evenly spaced compared to the SPP.
For the last 4 cases, the expected number of events is the same as for the SPP.
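As one illustration of how such alternatives can be sampled, the sketch below draws an inhomogeneous Poisson process by thinning. The baseline rate of 1 added to the sinusoid is our assumption (to keep the intensity non-negative); the exact parameterization used in the paper (Appendix E) may differ.

```python
import numpy as np

# Sketch: sampling an inhomogeneous Poisson process via thinning (Lewis & Shedler).
# Intensity 1 + delta * sin(omega * t) is a stand-in that stays non-negative for delta <= 1.
def sample_inhomogeneous(T, delta, omega, rng):
    lam_max = 1.0 + delta
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)       # candidate from the dominating HPP
        if t > T:
            return np.array(events)
        if rng.random() < (1.0 + delta * np.sin(omega * t)) / lam_max:
            events.append(t)

X = sample_inhomogeneous(T=100.0, delta=0.5, omega=1.0, rng=np.random.default_rng(0))
```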
For each choice of Q we define a detectability parameter δ ∈ [0, 1], where higher δ corresponds to TPPs that are increasingly dissimilar to the SPP. That is, setting δ = 0 corresponds to a distribution Q that is exactly equal to the SPP, and δ = 1 corresponds to a distribution that deviates significantly from the SPP. For example, for a Hawkes process with conditional intensity λ∗(t) = µ + β Σ_{tj<t} exp(−(t − tj)), the detectability value of δ = 0 corresponds to µ = 1 and β = 0 (i.e., λ∗(t) = 1), making Q indistinguishable from P. The value of δ = 0.5 corresponds to µ = 0.5 and β = 0.5, which preserves the expected number of events N but makes the arrival times ti "burstier." We describe how the parameters of each distribution Q are defined based on δ in Appendix E. Note that, in general, the ROC AUC scores are not guaranteed to monotonically increase as the detectability δ is increased.
Results. In Figure 4, we present AUC scores for different statistics as δ is varied. As expected, KS arrival accurately identifies sequences that come from Q where the absolute time of events are non-uniform (as in INHOMOGENEOUS). Similarly, KS inter-event is good at detecting deviations in the distribution of inter-event times, as in RENEWAL. The performance of the chi-squared statistic is similar to that of KS arrival. Nevertheless, the above statistics fail when the expected number of events, N , changes substantially—as in KS arrival and chi-squared on RATE, and KS inter-event on STOPPING. These failure modes match our discussion from Section 3.
In contrast, the 3S statistic stands out as the most consistent test (best or close-to-best performance in 5 out of 6 cases) and does not completely fail in any of the scenarios. The relatively weaker performance on SELFCORRECTING implies that the 3S statistic is less sensitive to superuniform spacings (D’Agostino, 1986) than to imbalanced spacings. The results show that the 3S statistic is able to detect deviations w.r.t. both the event count N (RATE and STOPPING), as well as the distributions of the inter-event times wi (RENEWAL) or the arrival times vi (HAWKES and INHOMOGENEOUS)— something that other GoF statistics for the SPP cannot provide.
6.2 Detecting anomalies in simulated data
In this section, we test the OoD detection approach discussed in Section 4.2, i.e., we perform anomaly detection for a TPP with an unknown compensator. This corresponds to the hypothesis test in Equation 1. We use the training set Dtrain to fit an RNN-based neural TPP model (Shchur et al., 2020) via maximum likelihood estimation (see Appendix F for details). Then, we define test statistics for the general TPP as follows. We apply the compensator Λ∗ of the learned model to each event sequence X and compute the four statistics for the SPP from Section 6.1 on the transformed sequence Z = Λ∗(X). We highlight that these methods are not “baselines” in the usual sense—the idea of combining a GoF statistic with a learned TPP model to detect anomalous event sequences is itself novel and hasn’t been explored by earlier works. The rest of the setup is similar to Section 6.1. We use Dtrain to compute the EDF of each statistic under H0, and then compute the ROC AUC scores on the p-values. In addition to the four statistics discussed before, we consider a two-sided test on the log-likelihood log q(X) of the learned generative model, which corresponds to the approach by Nalisnick et al. (2019).
Datasets. Like before, we define a detectability parameter δ for each scenario that determines how dissimilar ID and OoD sequences are. SERVER-STOP, SERVER-OVERLOAD and LATENCY are inspired by applications in DevOps, such as detecting anomalies in server logs.
• SERVER-OVERLOAD and SERVER-STOP contain data generated by a multivariate Hawkes process with 3 marks, e.g., modeling network traffic among 3 hosts. In OoD sequences, we change the influence matrix to simulate scenarios where a host goes offline (SERVER-STOP), and where a host goes down and the traffic is routed to a different host (SERVER-OVERLOAD). Higher δ implies that the change in the influence matrix happens earlier.
• LATENCY contains events of two types, sampled as follows. The first mark, the “trigger,” is sampled from a homogeneous Poisson process with rate µ = 3. The arrival times of the second
Results are shown in Figure 5. The 3S statistic demonstrates excellent performance in all four scenarios, followed by KS arrival and chi-squared. In case of SERVER-STOP and SERVER-OVERLOAD, the 3S statistic allows us to perfectly detect the anomalies even when only 5% of the time interval are affected by the change in the influence structure. KS inter-event and log-likelihood statistics completely fail on SERVER-STOP and SERVER-OVERLOAD, respectively. These two statistics also struggle to discriminate OoD sequences in LATENCY and SPIKETRAINS scenarios. The non-monotone behavior of the ROC AUC scores for some statistics (as the δ increases) indicates that these statistics are poorly suited for the respective scenarios.
6.3 Detecting anomalies in real-world data
Finally, we apply our methods to detect anomalies in two real-world event sequence datasets. We keep the setup (e.g., configuration of the neural TPP model) identical to Section 6.2.
LOGS: We generate server logs using Sock Shop microservices (Weave, 2017) and represent them as marked event sequences. Sock Shop is a standard testbed for research in microservice applications (Aderaldo et al., 2017) and contains a web application that runs on several containerized services. We generate OoD sequences by injecting various failures (e.g., packet corruption, increased latency) among these microservices using a chaos testing tool Pumba (Ledenev et al., 2016). We split one large server log into 30-second subintervals, that are then partitioned into train and test sets.
STEAD (Stanford Earthquake Dataset) (Mousavi et al., 2019) includes detailed seismic measurements on over 1 million earthquakes. We construct four subsets, each containing 72-hour subintervals in a period of five years within a 350km radius of a fixed geographical location. We treat sequences corresponding to the San Mateo, CA region as in-distribution data, and the remaining 3 regions (Anchorage, AK, Aleutian Islands, AK and Helmet, CA) as OoD data.
Results. Table 1 shows the ROC AUC scores for all scenarios. KS arrival and chi-squared achieve surprisingly low scores in 6 out of 8 scenarios, even though these two methods showed strong results on simulated data in Sections 6.1 and 6.2. In contrast, KS inter-event and log-likelihood perform better here than in previous experiments, but still produce poor results on Packet corruption. The 3S statistic is the only method that consistently shows high ROC AUC scores across all scenarios. Moreover, we observe that for marked sequences (LOGS and all datasets in Section 6.2), the 3S statistic leads to more accurate detection compared to the log-likelihood statistic in 9 out of 9 cases.
7 Discussion
Limitations. Our approach assumes that the sequences in Dtrain were drawn i.i.d. from the true data-generating distribution Pdata (Section 2). This assumption can be violated in two ways: some of the training sequences might be anomalous or there might exist dependencies between them. We have
considered the latter case in our experiments on SPIKETRAINS and LOGS datasets, where despite the non-i.i.d. nature of the data our method was able to accurately detect anomalies. However, there might exist scenarios where the violation of the assumptions significantly degrades the performance.
No single test statistic can be "optimal" for either OoD detection or GoF testing, since we make no assumptions about the alternative distribution Q (Section 2). We empirically showed that the proposed 3S statistic compares favorably to other choices over a range of datasets and application domains. Still, for any fixed pair of distributions P and Q, one can always find a statistic that will have equal or higher power at the same false positive rate (Neyman & Pearson, 1933). Hence, it won't be surprising to find cases where our (or any other chosen a priori) statistic is inferior.
Broader impact. Continuous-time variable-length event sequences provide a natural representation for data such as electronic health records (Enguehard et al., 2020), server logs (He et al., 2016) and user activity traces (Zhu et al., 2020). The ability to perform unsupervised anomaly detection in such data can enable practitioners to find at-risk patients, reduce DevOps costs, and automatically detect security breaches—all of which are important tasks in the respective fields. One of the risks when applying an anomaly detection method in practice is that the statistical anomalies found by the method will not be relevant for the use case. For example, when looking for health insurance fraud, the method might instead flag legitimate patients who underwent atypically many procedures as “suspicious” and freeze their accounts. To avoid such situations, automated decisions systems should be deployed with care, especially in sensitive domains like healthcare.
Conclusion. We have presented an approach for OoD detection for temporal point processes based on goodness-of-fit testing. At the core of our approach lies a new GoF test for standard Poisson processes based on the 3S statistic. Our method applies to a wide class of TPPs and is extremely easy to implement. We empirically showed that the proposed approach leads to better OoD detection accuracy compared to both popular GoF statistics for TPPs (Kolmogorov–Smirnov, chi-squared) and approaches commonly used in OoD detection literature (model log-likelihood). While our analysis focuses on TPPs, we believe our discussion on similarities and distinctions between GoF testing and OoD detection offers insights to the broader machine learning community.
Funding transparency statement
The work was funded by Amazon Research. | 1. What is the focus of the paper regarding anomaly detection for event sequences?
2. What are the strengths of the proposed approach, particularly in utilizing goodness-of-fit statistics?
3. What are the weaknesses of the paper, especially regarding the requirement for a large number of observations and the choice of 3S statistics?
4. Do you have any questions about the justification of replacing N with its expectation E[N|V] in Appendix l.562?
5. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper addresses an anomaly detection problem for event sequences modeled as temporal point processes (TPPs), formalizing the problem as out-of-distribution (OoD) detection in a batch scenario. The authors propose using goodness-of-fit (GoF) statistics to determine whether a sequence is OoD or not, which provides far more options for this purpose than an ordinary OoD procedure that just uses the log-likelihood. They make it possible to use GoF statistics designed for the known standard Poisson process by using a neural TPP fitted to the training sequences: a random event sequence distributed according to an arbitrary TPP can be converted into a realization of the standard Poisson process with a compensator derived from the fitted neural TPP. They also propose a sum-of-squared-spacings (3S) statistic that can detect anomalies w.r.t. the event count N, which is difficult to detect with the Kolmogorov–Smirnov (KS) statistic, as well as anomalies w.r.t. the distribution of the arrival or inter-event times. Experimental results demonstrate that the proposed statistic performs better than other statistics as well as log-likelihood-based OoD detection.
Review
Originality:
This work proposes a novel combination of GoF statistics and OoD detection for event sequences, with a learned neural TPP standing in for the unknown data-generating distribution.
Quality:
The authors introduced the motivation of the proposed method and provided the necessary background, which would be informative for many readers.
The requirements of the proposed method would limit its practical usability.
A large T, i.e. a large number of observations, is required for anomaly detection with the proposed method. Also, T and V are assumed to be large enough in their analysis. In that case, it is challenging to perform timely anomaly detection, which is crucial for real applications, such as detecting intrusions in computer networks and detecting fraud or shifts in the market structure on financial systems. It would be better to add analysis w.r.t. different configurations for V.
The motivation and justification for the use of the 3S statistic are unclear.
Is it possible to use other statistics instead of the 3S statistic?
In l.562 in the Appendix, the authors replaced N with its expectation E[N|V]. I cannot find the justification for that. Is it just a heuristic for considering N as a random variable? Are there any other variants?
The compared methods seem naive.
They analyze when the KS statistic is modified by removing N, but they can also replace 1/N in the second equation with 1/V, similar to the proposed method. It would work better.
It would be better to add a comparison with the original 3S statistic without modification.
Clarity:
The discussion in l.195 is confusing since it seems to be mainly about the KS statistic, not the 3S statistic, which does not require a CDF. If so, the paragraph starting at l.195 does not help in understanding the proposed method.
There is no description of how the hyperparameters of the models were chosen, such as how the validation set was split from the training and test sets.
In section 6.3, there is no description of the model they used.
Figure 1 is not mentioned in the main text. Also, in general, it is better to locate figures on top.
Significance:
The practical usability of the proposed method can be limited because of its requirements, as stated above.
Since the compared methods seem naive in the experiments, it is not easy to find the significance of the proposed method.
================ Update: The motivation and novelty of the proposed method have been clarified in the rebuttal. Also, contributions on empirical evaluation should be noted, where the proposed approach worked well even on short intervals. |
NIPS | Title
Detecting Anomalous Event Sequences with Temporal Point Processes
Abstract
Automatically detecting anomalies in event data can provide substantial value in domains such as healthcare, DevOps, and information security. In this paper, we frame the problem of detecting anomalous continuous-time event sequences as out-of-distribution (OoD) detection for temporal point processes (TPPs). First, we show how this problem can be approached using goodness-of-fit (GoF) tests. We then demonstrate the limitations of popular GoF statistics for TPPs and propose a new test that addresses these shortcomings. The proposed method can be combined with various TPP models, such as neural TPPs, and is easy to implement. In our experiments, we show that the proposed statistic excels at both traditional GoF testing, as well as at detecting anomalies in simulated and real-world data.
1 Introduction
Event data is abundant in the real world and is encountered in various important applications. For example, transactions in financial systems, server logs, and user activity traces can all naturally be represented as discrete events in continuous time. Detecting anomalies in such data can provide immense industrial value. For example, abnormal entries in system logs may correspond to unnoticed server failures, atypical user activity in computer networks may correspond to intrusions, and irregular patterns in financial systems may correspond to fraud or shifts in the market structure.
Manual inspection of such event data is usually infeasible due to its sheer volume. At the same time, hand-crafted rules quickly become obsolete due to software updates or changing trends (He et al., 2016). Ideally, we would like to have an adaptive system that can learn the normal behavior from the data, and automatically detect abnormal event sequences. Importantly, such a system should detect anomalies in a completely unsupervised way, as high-quality labels are usually hard to obtain.
Assuming “normal” data is available, we can formulate the problem of detecting anomalous event sequences as an instance of out-of-distribution (OoD) detection. Multiple recent works consider OoD detection for image data based on deep generative models (Ren et al., 2019; Nalisnick et al., 2019; Wang et al., 2020). However, none of these papers consider continuous-time event data. Deep generative models for such variable-length event sequences are known as neural temporal point processes (TPPs) (Du et al., 2016). Still, the literature on neural TPPs mostly focuses on prediction tasks, and the problem of anomaly detection has not been adequately addressed by existing works (Shchur et al., 2021). We aim to fill this gap in our paper.
∗Work done during an internship at Amazon Research. Code and datasets: https://github.com/shchur/tpp-anomaly-detection
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Our main contributions are the following:
1. Approach for anomaly detection for TPPs. We draw connections between OoD detection and GoF testing for TPPs (Section 2). By combining this insight with neural TPPs, we propose an approach for anomaly detection that shows high accuracy on synthetic and real-world event data.
2. A new test statistic for TPPs. We highlight the limitations of popular GoF statistics for TPPs and propose the sum-of-squared-spacings statistic that addresses these shortcomings (Section 4). The proposed statistic can be applied to both unmarked and marked TPPs.
2 Anomaly detection and goodness-of-fit testing
Background. A temporal point process (TPP) (Daley & Vere-Jones, 2003), denoted as P, defines a probability distribution over variable-length event sequences in an interval [0, T]. A TPP realization X consists of strictly increasing arrival times (t1, . . . , tN), where N, the number of events, is itself a random variable. A TPP is characterized by its conditional intensity function λ∗(t) := λ(t|Ht) that is equal to the rate of arrival of new events given the history Ht = {tj : tj < t}. Equivalently, a TPP can be specified with the integrated intensity function (a.k.a. the compensator) Λ∗(t) = ∫_0^t λ∗(u) du.
Out-of-distribution (OoD) detection. We formulate the problem of detecting anomalous event sequences as an instance of OoD detection (Liang et al., 2018). Namely, we assume that we are given a large set of training sequences Dtrain = {X1, . . . , XM} that were sampled i.i.d. from some unknown distribution Pdata over a domain X. At test time, we need to determine whether a new sequence X was also drawn from Pdata (i.e., X is in-distribution or “normal”) or from another distribution Q ≠ Pdata (i.e., X is out-of-distribution or anomalous). We can phrase this problem as a null hypothesis test:
H0 : X ∼ Pdata    H1 : X ∼ Q for some Q ≠ Pdata. (1)
To reiterate, here we consider the case where X is a variable-length event sequence and Pdata is some unknown TPP. However, the rest of the discussion in Section 2 also applies to distributions over other data types, such as images.
Goodness-of-fit (GoF) testing. First, we observe that the problem of OoD detection is closely related to the problem of GoF testing (D’Agostino, 1986). We now outline the setup and approaches for GoF testing, and then describe how these can be applied to OoD detection. The goal of a GoF test is to determine whether a random element X follows a known distribution Pmodel:²
H0 : X ∼ Pmodel    H1 : X ∼ Q for some Q ≠ Pmodel. (2)
We can perform such a test by defining a test statistic s(X), where s : X → R (Fisher, 1936). For this, we compute the (two-sided) p-value for an observed realization x of X as3
ps(x) = 2×min{Pr(s(X) ≤ s(x)|H0), 1− Pr(s(X) ≤ s(x)|H0)}. (3)
The factor 2 accounts for the fact that the test is two-sided. We reject the null hypothesis (i.e., conclude that X doesn’t follow Pmodel) if the p-value is below some predefined confidence level α. Note that computing the p-value requires evaluating the cumulative distribution function (CDF) of the sampling distribution, i.e., the distribution test statistic s(X) under the null hypothesis H0.
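As an illustrative sketch (not part of the paper's released code; the helper names are ours), the two-sided p-value of Equation 3 can be computed from either an analytic CDF or an empirical one built from samples drawn under H0:

```python
import numpy as np

def two_sided_p_value(stat_value, null_cdf):
    """Two-sided p-value of Equation 3, given the CDF of s(X) under H0."""
    left = null_cdf(stat_value)          # Pr(s(X) <= s(x) | H0)
    return 2.0 * min(left, 1.0 - left)

def empirical_cdf(null_samples):
    """EDF of the statistic on samples drawn under H0 (e.g., from P_model or D_train)."""
    null_samples = np.sort(np.asarray(null_samples))
    def cdf(x):
        return np.searchsorted(null_samples, x, side="right") / len(null_samples)
    return cdf

# Example: p-value of an observed statistic against 1000 simulated null values.
rng = np.random.default_rng(0)
null_stats = rng.normal(size=1000)       # placeholder null distribution for illustration
print(two_sided_p_value(2.3, empirical_cdf(null_stats)))
```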
GoF testing vs. OoD detection. The two hypothesis tests (Equations 1 and 2) appear similar—in both cases the goal is to determine whetherX follows a certain distribution P and no assumptions are made about the alternative Q. This means that we can perform OoD detection using the procedure described above, that is, by defining a test statistic s(X) and computing the respective p-value (Equation 3). However, in case of GoF testing (Equation 2), the distribution Pmodel is known. Therefore, we can analytically compute or approximate the CDF of s(X)|X ∼ Pmodel, and thus the p-value. In contrast, in an OoD detection hypothesis test (Equation 1), we make no assumptions about Pdata and only
²We test a single realization X, as is common in TPP literature (Brown et al., 2002). Note that this differs from works on univariate GoF testing that consider multiple realizations, i.e., H0 : X1, . . . , XM ∼ Pmodel i.i.d.
³In the rest of the paper, the difference between the random element X and its realization x is unimportant, so we denote both as X, as is usually done in the literature.
have access to samples Dtrain that were drawn from this distribution. For this reason, we cannot compute the CDF of s(X)|X ∼ Pdata analytically. Instead, we can approximate the p-value using the empirical distribution function (EDF) of the test statistic s(X) on Dtrain. The above procedure can be seen as a generalization of many existing methods for unsupervised OoD detection. These approaches usually define the test statistic based on the log-likelihood (LL) of a generative model fitted to Dtrain (Choi et al., 2018; Ren et al., 2019; Ruff et al., 2021). However, as follows from our discussion above, there is no need to limit ourselves to LL-based statistics. For instance, we can define a test statistic for event sequences based on the rich literature on GoF testing for TPPs. We show in Section 6 that this often leads to more accurate anomaly detection compared to LL. Moreover, the difference between OoD detection and GoF testing is often overlooked. By drawing a clear distinction between the two, we can avoid some of the pitfalls encountered by other works (Nalisnick et al., 2019), as we elaborate in Appendix A.
The anomaly detection framework we outlined above can be applied to any type of data—such as images or time series—but in this work we mostly focus on continuous-time event data. This means that our main goal is to find an appropriate test statistic for variable-length continuous-time event sequences. In Section 3, we take a look at existing GoF statistics for TPPs and analyze their limitations. Then in Section 4, we propose a new test statistic that addresses these shortcomings and describe in more detail how it can be used for OoD detection.
3 Review of existing GoF test statistics for TPPs
Here, we consider a GoF test (Equation 2), where the goal is to determine whether an event sequence X = (t1, . . . , tN ) was generated by a known TPP Pmodel with compensator Λ∗. We will return to the problem of OoD detection, where the data-generating distribution Pdata is unknown, in Section 4.2. Many popular GoF tests for TPPs are based on the following result (Ogata, 1988; Brown et al., 2002).
Theorem 1 (Random time change theorem (Brown et al., 2002)). A sequence X = (t1, . . . , tN) is distributed according to a TPP with compensator Λ∗ on the interval [0, T] if and only if the sequence Z = (Λ∗(t1), . . . , Λ∗(tN)) is distributed according to the standard Poisson process on [0, Λ∗(T)].
Intuitively, Theorem 1 can be viewed as a TPP analogue of how the CDF of an arbitrary random variable over R transforms its realizations into samples from Uniform([0, 1]). Similarly, the compensator Λ∗ converts a random event sequence X into a realization Z of the standard Poisson process (SPP). Therefore, the problem of GoF testing for an arbitrary TPP reduces to testing whether the transformed sequence Z follows the SPP on [0, Λ∗(T)]. In other words, we can define a GoF statistic for a TPP with compensator Λ∗ by (1) applying the compensator to X to obtain Z and (2) computing one of the existing GoF statistics for the SPP on the transformed sequence. This can also be generalized to marked TPPs (where events can belong to one of K classes) by simply concatenating the transformed sequences Z(k) for each event type k ∈ {1, . . . ,K} (see Appendix D for details).
The SPP, i.e., the Poisson process with constant intensity λ∗(t) = 1, is the most basic TPP one can conceive. However, as we will shortly see, existing GoF statistics even for this simple model have considerable shortcomings and can only detect a limited class of deviations from the SPP. More importantly, test statistics for general TPPs defined using the above recipe (Theorem 1) inherit the limitations of the SPP statistics.
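To make the time-rescaling recipe concrete, the following minimal sketch (our own illustration, assuming a Hawkes process with exponential kernel; the function names are not from the paper) applies a known compensator to an event sequence:

```python
import numpy as np

def hawkes_compensator(t, history, mu, beta):
    """Compensator of a Hawkes process with kernel beta * exp(-(t - t_j))."""
    history = np.asarray(history)
    past = history[history < t]
    return mu * t + beta * np.sum(1.0 - np.exp(-(t - past)))

def time_rescale(arrival_times, compensator, **params):
    """Theorem 1: map X = (t_1, ..., t_N) to Z = (Lambda*(t_1), ..., Lambda*(t_N))."""
    return np.array([compensator(t, arrival_times, **params) for t in arrival_times])

# Example: rescale a toy sequence; under the true model, Z should look like an SPP.
X = np.array([0.5, 1.1, 1.3, 2.7, 4.0])
Z = time_rescale(X, hawkes_compensator, mu=1.0, beta=0.5)
V = hawkes_compensator(5.0, X, mu=1.0, beta=0.5)  # transformed interval length Lambda*(T) with T = 5
```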
For brevity, we denote the transformed arrival times as Z = (v1, . . . , vN ) = (Λ∗(t1), . . . ,Λ∗(tN )) and the length of the transformed interval as V = Λ∗(T ). One way to describe the generative process of an SPP is as follows (Pasupathy, 2010)
N |V ∼ Poisson(V ) ui|N,V ∼ Uniform([0, V ]) for i = 1, . . . , N. (4)
An SPP realization Z = (v1, . . . , vN ) is obtained by sorting the ui’s in increasing order. This is equivalent to defining the arrival time vi as the i-th order statistic u(i). We can also represent Z by the inter-event times (w1, . . . , wN+1) where wi = vi − vi−1, assuming v0 = 0 and vN+1 = V . Barnard (1953) proposed a GoF test for the SPP based on the above description (Equation 4) and the Kolmogorov–Smirnov (KS) statistic. The main idea of this approach is to check whether the arrival times v1, . . . , vN are distributed uniformly in the [0, V ] interval. For this, we compare F̂arr, the empirical CDF of the arrival times, with Farr(u) = u/V , the CDF of the Uniform([0, V ]) distribution.
This can be done using the KS statistic on the arrival times (KS arrival), defined as
κarr(Z) = √N · sup_{u∈[0,V]} |F̂arr(u) − Farr(u)|,   where   F̂arr(u) = (1/N) ∑_{i=1}^{N} 1(vi ≤ u). (5)
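A minimal sketch of this test, assuming NumPy and our own helper names: sample an SPP realization via Equation 4 and evaluate κarr on it.

```python
import numpy as np

def sample_spp(V, rng):
    """Draw a standard Poisson process realization on [0, V] (Equation 4)."""
    N = rng.poisson(V)
    return np.sort(rng.uniform(0.0, V, size=N))

def ks_arrival(Z, V):
    """KS statistic on arrival times (Equation 5)."""
    Z = np.asarray(Z)
    N = len(Z)
    if N == 0:
        return 0.0
    ecdf_right = np.arange(1, N + 1) / N   # EDF just after each arrival
    ecdf_left = np.arange(0, N) / N        # EDF just before each arrival
    F = Z / V                              # CDF of Uniform([0, V])
    return np.sqrt(N) * np.max(np.maximum(np.abs(ecdf_right - F), np.abs(ecdf_left - F)))

rng = np.random.default_rng(0)
Z = sample_spp(V=100.0, rng=rng)
print(ks_arrival(Z, V=100.0))
```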
Another popular GoF test for the SPP is based on the fact that the inter-event times wi are distributed according to the Exponential(1) distribution (Cox, 1966). The test compares F̂int, the empirical CDF of the inter-event times, and Fint(u) = 1− exp(−u), the CDF of the Exponential(1) distribution. This leads to the KS statistic for the inter-event times (KS inter-event)
κint(Z) = √N · sup_{u∈[0,∞)} |F̂int(u) − Fint(u)|,   where   F̂int(u) = (1/(N+1)) ∑_{i=1}^{N+1} 1(wi ≤ u). (6)
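An analogous sketch (again with hypothetical helper names) for κint, using the inter-event times with v0 = 0 and vN+1 = V:

```python
import numpy as np

def ks_inter_event(Z, V):
    """KS statistic on inter-event times (Equation 6)."""
    Z = np.asarray(Z)
    w = np.sort(np.diff(np.concatenate(([0.0], Z, [V]))))  # N + 1 inter-event times
    n = len(w)
    F = 1.0 - np.exp(-w)                                    # Exponential(1) CDF
    ecdf_right = np.arange(1, n + 1) / n
    ecdf_left = np.arange(0, n) / n
    N = len(Z)
    return np.sqrt(N) * np.max(np.maximum(np.abs(ecdf_right - F), np.abs(ecdf_left - F)))
```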
KS arrival and KS inter-event statistics are often presented as the go-to approach for testing the goodness-of-fit of the standard Poisson process (Daley & Vere-Jones, 2003). Combining them with Theorem 1 leads to simple GoF tests for arbitrary TPPs that are widely used to this day (Gerhard et al., 2011; Alizadeh et al., 2013; Kim & Whitt, 2014; Tao et al., 2018; Li et al., 2018).
Limitations of the KS statistics. The KS statistics κarr(Z) and κint(Z) are only able to differentiate the SPP from a narrow class of alternative processes. For example, KS arrival only checks if the arrival times vi are distributed uniformly, conditioned on the event count N . But what if the observed N is itself extremely unlikely under the SPP (Equation 4)? KS inter-event can be similarly insensitive to the event count—removing all events V2 < vi ≤ V from an SPP realization Z will only result in just a single atypically large inter-event time wi, which changes the value of κint(Z) at most by 1N+1 . We demonstrate these limitations of κarr(Z) and κint(Z) in our experiments in Section 6.1. Other failure modes of the KS statistics were described by Pillow (2009). Note that ad-hoc fixes to the KS statistics do not address these problems. For example, combining multiple tests performed separately for the event count and arrival times using Fisher’s method (Fisher, 1948; Cox, 1966) consistently decreases the accuracy, as we show in Appendix G. In the next section, we introduce a different test statistic that aims to address these shortcomings.
4 Sum-of-squared-spacings (3S) statistic for TPPs
4.1 Goodness-of-fit testing with the 3S statistic
A good test statistic should capture multiple properties of the SPP at once: it should detect deviations w.r.t. both the event count N and the distribution of the arrival or inter-event times. Here, we propose to approach GoF testing with a sum-of-squared-spacings (3S) statistic that satisfies these desiderata,
ψ(Z) = (1/V) ∑_{i=1}^{N+1} wi² = (1/V) ∑_{i=1}^{N+1} (vi − vi−1)². (7)
This statistic extends the sum-of-squared-spacings statistic proposed as a test of uniformity for fixed-length samples by Greenwood (1946). The important difference between our definition (Equation 7) and prior works (D’Agostino, 1986) is that we, for the first time, consider the TPP setting, where the number of events N is random as well. For this reason, we use the normalizing constant 1/V instead of N/V² (see Appendix B for details). As we will see, this helps capture abnormalities in the event count and results in more favorable asymptotic properties for the case of the SPP.
Intuitively, for a fixed N, the statistic ψ is maximized if the spacings are extremely imbalanced, i.e., if one inter-event time wi is close to V and the rest are close to zero. Conversely, ψ attains its minimum when the spacings are all equal, that is, wi = V/(N+1) for all i.
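A minimal NumPy sketch of Equation 7 (the helper name is ours), together with a quick check of the intuition that equal spacings minimize ψ while one huge spacing maximizes it:

```python
import numpy as np

def three_s_statistic(Z, V):
    """Sum-of-squared-spacings statistic psi(Z) from Equation 7."""
    Z = np.asarray(Z)
    spacings = np.diff(np.concatenate(([0.0], Z, [V])))  # w_1, ..., w_{N+1}
    return np.sum(spacings ** 2) / V

# Equal spacings give the minimum V / (N + 1); bunched events give a value close to V.
V, N = 10.0, 9
equal = np.linspace(V / (N + 1), V - V / (N + 1), N)   # arrival times 1, 2, ..., 9
bunched = np.linspace(1e-3, 1e-2, N)                   # all events near the start
print(three_s_statistic(equal, V), three_s_statistic(bunched, V))
```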
In Figure 2a we visualize the distribution of ψ|N,V for two different values of N . We see that the distribution of ψ depends strongly on N , therefore a GoF test involving ψ will detect if the event count N is atypical for the given SPP. This is in contrast to κarr and κint, the distributions of which, by design, are (asymptotically) invariant under N (Figure 2b). Even if one accounts for this effect, e.g., by removing the correction factor √ N in Equations 5 and 6, their distributions change only slightly compared to the sum of squared spacings (see Figures 2c and 2d). To analyze other properties of the statistic, we consider its moments under the null hypothesis.
Proposition 1. Suppose the sequence Z is distributed according to the standard Poisson process on the interval [0, V ]. Then the first two moments of the statistic ψ := ψ(Z) are
E[ψ|V] = (2/V)(V + e^{−V} − 1)   and   Var[ψ|V] = (4/V²)(2V − 7 + e^{−V}(2V² + 4V + 8 − e^{−V})).
The proof of Proposition 1 can be found in Appendix C. From Proposition 1 it follows that
lim_{V→∞} E[ψ|V] = 2   and   lim_{V→∞} Var[ψ|V] = 0. (8)
This leads to a natural notion of typicality in the sense of Nalisnick et al. (2019) and Wang et al. (2020) for the standard Poisson process. We can define the typical set of the SPP as the set of variable-length sequences Z on the interval [0, V] that satisfy |ψ(Z) − 2| ≤ ε for some small ε > 0. It follows from Equation 8 and Chebyshev’s inequality that for large enough V, the SPP realizations will fall into the typical set with high probability. Therefore, at least for large V, we should be able to detect sequences that are not distributed according to the SPP based on the statistic ψ.
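The asymptotics in Equation 8 are easy to check numerically; a minimal Monte Carlo sketch (ours, not the paper's code) estimates E[ψ|V] and Var[ψ|V] for increasing V and compares the mean to Proposition 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(Z, V):
    w = np.diff(np.concatenate(([0.0], Z, [V])))
    return np.sum(w ** 2) / V

for V in [10.0, 100.0, 1000.0]:
    samples = []
    for _ in range(2000):
        N = rng.poisson(V)                              # Equation 4: event count
        Z = np.sort(rng.uniform(0.0, V, size=N))        # Equation 4: uniform arrivals
        samples.append(psi(Z, V))
    expected_mean = 2.0 / V * (V + np.exp(-V) - 1.0)    # Proposition 1
    print(V, np.mean(samples), expected_mean, np.var(samples))
```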
Summary. To test the GoF of a TPP with a known compensator Λ∗ for an event sequence X = (t1, . . . , tN ), we first obtain the transformed sequence Z = (Λ∗(t1), . . . ,Λ∗(tN )) and compute the statistic ψ(Z) according to Equation 7. Since the CDF of the statistic under H0 cannot be computed analytically, we approximate it using samples drawn from Pmodel. That is, we draw realizations Dmodel = {X1, . . . , XM} from the TPP (e.g., using the inversion method (Rasmussen, 2018)) and compute the p-value for X (Equation 3) using the EDF of the statistic on Dmodel (North et al., 2002).
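Putting these steps together, a hedged sketch of the GoF test could look as follows, assuming a user-supplied compensator and a sampler for Pmodel (both signatures are our own assumptions, not the paper's API):

```python
import numpy as np

def gof_p_value(X, T, compensator, sample_from_model, n_samples=1000, seed=0):
    """GoF test of Section 4.1: two-sided p-value of the 3S statistic under P_model.

    `compensator(seq, t)` returns Lambda*(t) given the event sequence `seq`;
    `sample_from_model(rng)` draws one event sequence on [0, T] from P_model.
    """
    rng = np.random.default_rng(seed)

    def psi(seq):
        Z = np.array([compensator(seq, t) for t in seq])   # transformed arrival times
        V = compensator(seq, T)                            # transformed interval length
        w = np.diff(np.concatenate(([0.0], Z, [V])))
        return np.sum(w ** 2) / V

    null_stats = np.sort([psi(sample_from_model(rng)) for _ in range(n_samples)])
    cdf = np.searchsorted(null_stats, psi(X), side="right") / n_samples
    return 2.0 * min(cdf, 1.0 - cdf)
```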
4.2 Out-of-distribution detection with the 3S statistic
We now return to the original problem of OoD detection in TPPs, where we have access to a set of in-distribution sequences Dtrain and do not know the data-generating process Pdata. Our idea is to perform the OoD detection hypothesis test (Equation 1) using the sum-of-squared-spacings test statistic that we introduced in the previous section. However, since the data-generating TPP Pdata is unknown, we do not know the corresponding compensator that is necessary to compute the statistic. Instead, we can fit a neural TPP model Pmodel (Du et al., 2016) to the sequences in Dtrain and use the compensator Λ∗ of the learned model to compute the statistic s(X).4 High flexibility of neural TPPs allows these models to more accurately approximate the true compensator. Having defined the statistic, we can approximate its distribution under H0 (i.e., assuming X ∼ Pdata) by the EDF of the statistic on Dtrain. We
use this EDF to compute the p-values for our OoD detection hypothesis test and thus detect anomalous sequences. We provide the pseudocode description of our OoD detection method in Appendix D.
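For reference, a compact sketch of the detection step (Appendix D has the authors' pseudocode; the names below, including the model attributes in the comments, are our own assumptions):

```python
import numpy as np

def ood_p_values(train_stats, test_stats):
    """OoD detection of Section 4.2: p-values of test statistics against the EDF on D_train."""
    train_stats = np.sort(np.asarray(train_stats))
    n = len(train_stats)
    cdf = np.searchsorted(train_stats, test_stats, side="right") / n
    return 2.0 * np.minimum(cdf, 1.0 - cdf)

# Usage sketch (hypothetical names): the compensator comes from a neural TPP fitted to
# D_train, and three_s_statistic is the function from Equation 7.
# train_stats = [three_s_statistic(rescale(model, X), rescaled_length(model, X, T)) for X in D_train]
# test_stats  = [three_s_statistic(rescale(model, X), rescaled_length(model, X, T)) for X in D_test]
# flagged = ood_p_values(train_stats, test_stats) < alpha
```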
We highlight that an OoD detection procedure like the one above is not equivalent to a GoF test for the learned generative model Pmodel, as suggested by earlier works (Nalisnick et al., 2019). While we use
4We can replace the 3S statistic on the transformed sequence Z with any other statistic for the SPP, such as KS arrival. In Sections 6.2 and 6.3, we compare different statistics constructed this way.
the compensator of the learned model to define the test statistic s(X), we compute the p-value for the OoD detection test based on s(X)|X ∼ Pdata. This is different from the distribution s(X)|X ∼ Pmodel used in a GoF test, since in general Pmodel ≠ Pdata. Therefore, even if the distribution of a test statistic under the GoF test can be approximated analytically (as, e.g., for the KS statistic (Marsaglia et al., 2003)), we have to use the EDF of the statistic on Dtrain for the OoD detection test. Figure 3 visualizes this difference. Here, we fit a TPP model on the in-distribution sequences from the STEAD dataset (Section 6.3) and plot the empirical distribution of the respective statistic s(X) on Dtrain (corresponds to s(X)|X ∼ Pdata) and on model samples Dmodel (corresponds to s(X)|X ∼ Pmodel).
5 Related work
Unsupervised OoD detection. OoD detection approaches based on deep generative models (similar to our approach in Section 4.2) have received a lot of attention in the literature. However, there are several important differences between our method and prior works. First, most existing approaches perform OoD detection based on the log-likelihood (LL) of the model or some derived statistic (Choi et al., 2018; Ren et al., 2019; Nalisnick et al., 2019; Morningstar et al., 2021; Ruff et al., 2021). We observe that LL can be replaced by any other test statistic, e.g., taken from the GoF testing literature, which often leads to more accurate anomaly detection (Section 6). Second, unlike prior works, we draw a clear distinction between OoD detection and GoF testing. While this difference may seem obvious in hindsight, it is not acknowledged by the existing works, which may lead to complications (see Appendix A). Also, our formulation of the OoD detection problem in Section 2 provides an intuitive explanation of the phenomenon of “typicality” (Nalisnick et al., 2019; Wang et al., 2020). The (ε, 1)-typical set of a distribution P simply corresponds to the acceptance region of the respective hypothesis test with confidence level (Equation 1). Finally, most existing papers study OoD detection for image data and none consider variable-length event sequences, which is the focus of our work.
Our OoD detection procedure is also related to the rarity anomaly score (Ferragut et al., 2012; Janzing et al., 2019). The rarity score can be interpreted as the negative logarithm of a one-sided p-value (Equation 3) of a GoF test that uses the log-likelihood of some known model as the test statistic. In contrast, we consider a broader class of statistics and learn the model from the data.
Anomaly detection for TPPs. OoD detection, as described in Section 2, is not the only way to formalize anomaly detection for TPPs. For example, Ojeda et al. (2019) developed a distance-based approach for Poisson processes. Recently, Zhu et al. (2020) proposed to detect anomalous event sequences with an adversarially-trained model. Unlike these two methods, our approach can be combined with any TPP model without altering the training procedure. Liu & Hauskrecht (2019) studied anomalous event detection with TPPs, while we are concerned with entire event sequences.
GoF tests for TPPs. Existing GoF tests for the SPP usually check if the arrival times are distributed uniformly, using, e.g., the KS (Lewis, 1965) or chi-squared statistic (Cox, 1955). Our 3S statistic favorably compares to these approaches thanks to its dependence on the event countN , as we explain in Section 4 and show experimentally in Section 6.1. Methods combining the random time change theorem with a GoF test for the SPP (usually, the KS test) have been used at least since Ogata (1988), and are especially popular in neuroscience (Brown et al., 2002; Gerhard et al., 2011; Tao et al., 2018). However, these approaches inherit the limitations of the underlying KS statistic. Replacing the KS score with the 3S statistic consistently leads to a better separation between different TPP distributions (Section 6).
Gerhard & Gerstner (2010) discussed several GoF tests for discrete-time TPPs, while we deal with continuous time. Yang et al. (2019) proposed a GoF test for point processes based on Stein’s identity, which is related to a more general class of kernel-based GoF tests (Chwialkowski et al., 2016; Liu et al., 2016). Their approach isn’t suitable for neural TPPs, where the Papangelou intensity cannot be computed analytically. A recent work by Wei et al. (2021) designed a GoF test for self-exciting processes under model misspecification. In contrast to these approaches, our proposed GoF test from Section 4.1 can be applied to any TPP with a known compensator.
Sum-of-squared-spacings statistic. A similar statistic was first used by Greenwood (1946) for testing whether a fixed number of points are distributed uniformly in an interval. Several follow-up works studied the limiting distribution of the statistic (conditioned on N ) as N → ∞ (Hill, 1979; Stephens, 1981; Rao & Kuo, 1984). Our proposed statistic (Equation 7) is not invariant w.r.t. N and, therefore, is better suited for testing TPPs. We discuss other related statistics in Appendix B.
6 Experiments
Our experimental evaluation covers two main topics. In Section 6.1, we compare the proposed 3S statistic with existing GoF statistics for the SPP. Then in Sections 6.2 and 6.3, we evaluate our OoD detection approach on simulated and real-world data, respectively. The experiments were run on a machine with a 1080Ti GPU. Details on the setup and dataset construction are provided in Appendices E and F.
6.1 Standard Poisson process
In Section 3 we mentioned several failure modes of existing GoF statistics for the SPP. Then, in Section 4.1 we introduced the 3S statistic that was supposed to address these limitations. Hence, the goal of this section is to compare the proposed statistic with the existing ones in the task of GoF testing for the SPP. We consider four test statistics: (1) KS statistic on arrival times (Equation 5), (2) KS statistic on inter-event times (Equation 6), (3) chi-squared statistic on the arrival times (Cox, 1955; Tao et al., 2018), and (4) the proposed 3S statistic (Equation 7).
To quantitatively compare the discriminative power of different statistics, we adopt an evaluation strategy similar to Gerhard & Gerstner (2010); Yang et al. (2019). First, we generate a set Dmodel consisting of 1000 SPP realizations. We use Dmodel to compute the empirical distribution function of each statistic s(Z) under H0. Then, we define two test sets: DIDtest (consisting of samples from Pmodel, the SPP) and DOODtest (consisting of samples from Q, another TPP), each with 1000 sequences. Importantly, in this and following experiments, the training and test sets are always disjoint.
We follow the GoF testing procedure described at the end of Section 4.1, which corresponds to the hypothesis test in Equation 2. That is, we compute the p-value (Equation 3) for each sequence in the test sets using the EDF of s(Z) on Dmodel. A good test statistic s(Z) should assign lower p-values to the OoD sequences from DOODtest than to ID sequences from DIDtest, allowing us to discriminate between samples from Q and Pmodel. We quantify how well a given statistic separates the two distributions by computing the area under the ROC curve (ROC AUC). This effectively averages the performance of a statistic for the GoF hypothesis test over different significance levels α.
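A sketch of this evaluation loop, assuming scikit-learn is available and that the per-sequence statistics have already been computed (function and variable names are ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def gof_roc_auc(stats_model, stats_id_test, stats_ood_test):
    """ROC AUC of a GoF statistic: OoD sequences should receive lower p-values."""
    stats_model = np.sort(np.asarray(stats_model))
    n = len(stats_model)

    def p_values(stats):
        cdf = np.searchsorted(stats_model, stats, side="right") / n
        return 2.0 * np.minimum(cdf, 1.0 - cdf)

    labels = np.concatenate([np.zeros(len(stats_id_test)), np.ones(len(stats_ood_test))])
    # Lower p-value means more anomalous, so negate the p-values to use them as scores.
    scores = -np.concatenate([p_values(stats_id_test), p_values(stats_ood_test)])
    return roc_auc_score(labels, scores)
```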
Datasets. We consider six choices for the distribution Q: • RATE, a homogeneous Poisson process with intensity µ < 1; • STOPPING, where events stop after some time tstop ∈ [0, V ]; • RENEWAL, where inter-event times are drawn i.i.d. from the Gamma distribution; • HAWKES, where events are more clustered compared to the SPP; • INHOMOGENEOUS, a Poisson process with non-constant intensity λ(t) = β sin(ωt); • SELFCORRECTING, where events are more evenly spaced compared to the SPP.
For the last 4 cases, the expected number of events is the same as for the SPP.
For each choice of Q we define a detectability parameter δ ∈ [0, 1], where higher δ corresponds to TPPs that are increasingly dissimilar to the SPP. That is, setting δ = 0 corresponds to a distribution Q that is exactly equal to the SPP, and δ = 1 corresponds to a distribution that deviates significantly from the SPP. For example, for a Hawkes process with conditional intensity λ∗(t) = µ + β ∑_{tj<t} exp(−(t − tj)), the detectability value of δ = 0 corresponds to µ = 1 and β = 0 (i.e., λ∗(t) = 1), making Q indistinguishable from P. The value of δ = 0.5 corresponds to µ = 0.5 and β = 0.5, which preserves the expected number of events N but makes the arrival times ti “burstier.” We describe how the parameters of each distribution Q are defined based on δ in Appendix E. Note that, in general, the ROC AUC scores are not guaranteed to monotonically increase as the detectability δ is increased.
Results. In Figure 4, we present AUC scores for different statistics as δ is varied. As expected, KS arrival accurately identifies sequences that come from Q where the absolute time of events are non-uniform (as in INHOMOGENEOUS). Similarly, KS inter-event is good at detecting deviations in the distribution of inter-event times, as in RENEWAL. The performance of the chi-squared statistic is similar to that of KS arrival. Nevertheless, the above statistics fail when the expected number of events, N , changes substantially—as in KS arrival and chi-squared on RATE, and KS inter-event on STOPPING. These failure modes match our discussion from Section 3.
In contrast, the 3S statistic stands out as the most consistent test (best or close-to-best performance in 5 out of 6 cases) and does not completely fail in any of the scenarios. The relatively weaker performance on SELFCORRECTING implies that the 3S statistic is less sensitive to superuniform spacings (D’Agostino, 1986) than to imbalanced spacings. The results show that the 3S statistic is able to detect deviations w.r.t. both the event count N (RATE and STOPPING), as well as the distributions of the inter-event times wi (RENEWAL) or the arrival times vi (HAWKES and INHOMOGENEOUS)— something that other GoF statistics for the SPP cannot provide.
6.2 Detecting anomalies in simulated data
In this section, we test the OoD detection approach discussed in Section 4.2, i.e., we perform anomaly detection for a TPP with an unknown compensator. This corresponds to the hypothesis test in Equation 1. We use the training set Dtrain to fit an RNN-based neural TPP model (Shchur et al., 2020) via maximum likelihood estimation (see Appendix F for details). Then, we define test statistics for the general TPP as follows. We apply the compensator Λ∗ of the learned model to each event sequence X and compute the four statistics for the SPP from Section 6.1 on the transformed sequence Z = Λ∗(X). We highlight that these methods are not “baselines” in the usual sense—the idea of combining a GoF statistic with a learned TPP model to detect anomalous event sequences is itself novel and hasn’t been explored by earlier works. The rest of the setup is similar to Section 6.1. We use Dtrain to compute the EDF of each statistic under H0, and then compute the ROC AUC scores on the p-values. In addition to the four statistics discussed before, we consider a two-sided test on the log-likelihood log q(X) of the learned generative model, which corresponds to the approach by Nalisnick et al. (2019).
Datasets. Like before, we define a detectability parameter δ for each scenario that determines how dissimilar ID and OoD sequences are. SERVER-STOP, SERVER-OVERLOAD and LATENCY are inspired by applications in DevOps, such as detecting anomalies in server logs.
• SERVER-OVERLOAD and SERVER-STOP contain data generated by a multivariate Hawkes process with 3 marks, e.g., modeling network traffic among 3 hosts. In OoD sequences, we change the influence matrix to simulate scenarios where a host goes offline (SERVER-STOP), and where a host goes down and the traffic is routed to a different host (SERVER-OVERLOAD). Higher δ implies that the change in the influence matrix happens earlier.
• LATENCY contains events of two types, sampled as follows. The first mark, the “trigger,” is sampled from a homogeneous Poisson process with rate µ = 3. The arrival times of the second
Results are shown in Figure 5. The 3S statistic demonstrates excellent performance in all four scenarios, followed by KS arrival and chi-squared. In case of SERVER-STOP and SERVER-OVERLOAD, the 3S statistic allows us to perfectly detect the anomalies even when only 5% of the time interval are affected by the change in the influence structure. KS inter-event and log-likelihood statistics completely fail on SERVER-STOP and SERVER-OVERLOAD, respectively. These two statistics also struggle to discriminate OoD sequences in LATENCY and SPIKETRAINS scenarios. The non-monotone behavior of the ROC AUC scores for some statistics (as the δ increases) indicates that these statistics are poorly suited for the respective scenarios.
6.3 Detecting anomalies in real-world data
Finally, we apply our methods to detect anomalies in two real-world event sequence datasets. We keep the setup (e.g., configuration of the neural TPP model) identical to Section 6.2.
LOGS: We generate server logs using Sock Shop microservices (Weave, 2017) and represent them as marked event sequences. Sock Shop is a standard testbed for research in microservice applications (Aderaldo et al., 2017) and contains a web application that runs on several containerized services. We generate OoD sequences by injecting various failures (e.g., packet corruption, increased latency) among these microservices using a chaos testing tool Pumba (Ledenev et al., 2016). We split one large server log into 30-second subintervals, that are then partitioned into train and test sets.
STEAD (Stanford Earthquake Dataset) (Mousavi et al., 2019) includes detailed seismic measurements on over 1 million earthquakes. We construct four subsets, each containing 72-hour subintervals in a period of five years within a 350km radius of a fixed geographical location. We treat sequences corresponding to the San Mateo, CA region as in-distribution data, and the remaining 3 regions (Anchorage, AK, Aleutian Islands, AK and Helmet, CA) as OoD data.
Results. Table 1 shows the ROC AUC scores for all scenarios. KS arrival and chi-squared achieve surprisingly low scores in 6 out of 8 scenarios, even though these two methods showed strong results on simulated data in Sections 6.1 and 6.2. In contrast, KS inter-event and log-likelihood perform better here than in previous experiments, but still produce poor results on Packet corruption. The 3S statistic is the only method that consistently shows high ROC AUC scores across all scenarios. Moreover, we observe that for marked sequences (LOGS and all datasets in Section 6.2), the 3S statistic leads to more accurate detection compared to the log-likelihood statistic in 9 out of 9 cases.
7 Discussion
Limitations. Our approach assumes that the sequences in Dtrain were drawn i.i.d. from the true data-generating distribution Pdata (Section 2). This assumption can be violated in two ways: some of the training sequences might be anomalous or there might exist dependencies between them. We have
considered the latter case in our experiments on SPIKETRAINS and LOGS datasets, where despite the non-i.i.d. nature of the data our method was able to accurately detect anomalies. However, there might exist scenarios where the violation of the assumptions significantly degrades the performance.
No single test statistic can be “optimal” for either OoD detection or GoF testing, since we make no assumptions about the alternative distribution Q (Section 2). We empirically showed that the proposed 3S statistic compares favorably to other choices over a range of datasets and application domains. Still, for any fixed pair of distributions P and Q, one can always find a statistic that will have equal or higher power at the same false positive rate (Neyman & Pearson, 1933). Hence, it won’t be surprising to find cases where our (or any other chosen a priori) statistic is inferior.
Broader impact. Continuous-time variable-length event sequences provide a natural representation for data such as electronic health records (Enguehard et al., 2020), server logs (He et al., 2016) and user activity traces (Zhu et al., 2020). The ability to perform unsupervised anomaly detection in such data can enable practitioners to find at-risk patients, reduce DevOps costs, and automatically detect security breaches—all of which are important tasks in the respective fields. One of the risks when applying an anomaly detection method in practice is that the statistical anomalies found by the method will not be relevant for the use case. For example, when looking for health insurance fraud, the method might instead flag legitimate patients who underwent atypically many procedures as “suspicious” and freeze their accounts. To avoid such situations, automated decisions systems should be deployed with care, especially in sensitive domains like healthcare.
Conclusion. We have presented an approach for OoD detection for temporal point processes based on goodness-of-fit testing. At the core of our approach lies a new GoF test for standard Poisson processes based on the 3S statistic. Our method applies to a wide class of TPPs and is extremely easy to implement. We empirically showed that the proposed approach leads to better OoD detection accuracy compared to both popular GoF statistics for TPPs (Kolmogorov–Smirnov, chi-squared) and approaches commonly used in OoD detection literature (model log-likelihood). While our analysis focuses on TPPs, we believe our discussion on similarities and distinctions between GoF testing and OoD detection offers insights to the broader machine learning community.
Funding transparency statement
The work was funded by Amazon Research. | 1. What is the focus of the paper regarding anomaly detection?
2. What is the novel approach or proposal introduced by the paper?
3. How does the reviewer assess the quality and effectiveness of the proposed method based on simulations and real-world data sets?
4. What are the strengths and weaknesses of the proposed method compared to other existing methods? | Summary Of The Paper
Review | Summary Of The Paper
This paper discusses anomaly detection in continuous-time event sequences as out-of-distribution (OoD) detection for temporal point processes, and statistical tests for the detection. The paper proposes a new statistical test for the detection and demonstrates the usefulness of the proposed test in simulated and real-world data sets.
Review
This is a well-written paper. The connection between anomaly detection in continuous-time event sequences and out-of-distribution detection has been discussed, and such a connection is useful for anomaly detection research. The proposed 3S test is well motivated and sound, and it has been shown to be effective for anomaly detection in continuous-time event sequences in the experiments.
The discussions in the paper are focused, clear, and easy to follow. The authors evaluate the proposed test in different circumstances, and the strengths and weaknesses of the proposed test are presented. The connection between anomaly detection and out-of-distribution detection, together with the existing and proposed statistical tests, is useful for anomaly detection research.
NIPS | Title
Detecting Anomalous Event Sequences with Temporal Point Processes
Abstract
Automatically detecting anomalies in event data can provide substantial value in domains such as healthcare, DevOps, and information security. In this paper, we frame the problem of detecting anomalous continuous-time event sequences as out-of-distribution (OoD) detection for temporal point processes (TPPs). First, we show how this problem can be approached using goodness-of-fit (GoF) tests. We then demonstrate the limitations of popular GoF statistics for TPPs and propose a new test that addresses these shortcomings. The proposed method can be combined with various TPP models, such as neural TPPs, and is easy to implement. In our experiments, we show that the proposed statistic excels at both traditional GoF testing, as well as at detecting anomalies in simulated and real-world data.
1 Introduction
Event data is abundant in the real world and is encountered in various important applications. For example, transactions in financial systems, server logs, and user activity traces can all naturally be represented as discrete events in continuous time. Detecting anomalies in such data can provide immense industrial value. For example, abnormal entries in system logs may correspond to unnoticed server failures, atypical user activity in computer networks may correspond to intrusions, and irregular patterns in financial systems may correspond to fraud or shifts in the market structure.
Manual inspection of such event data is usually infeasible due to its sheer volume. At the same time, hand-crafted rules quickly become obsolete due to software updates or changing trends (He et al., 2016). Ideally, we would like to have an adaptive system that can learn the normal behavior from the data, and automatically detect abnormal event sequences. Importantly, such a system should detect anomalies in a completely unsupervised way, as high-quality labels are usually hard to obtain.
Assuming “normal” data is available, we can formulate the problem of detecting anomalous event sequences as an instance of out-of-distribution (OoD) detection. Multiple recent works consider OoD detection for image data based on deep generative models (Ren et al., 2019; Nalisnick et al., 2019; Wang et al., 2020). However, none of these papers consider continuous-time event data. Deep generative models for such variable-length event sequences are known as neural temporal point processes (TPPs) (Du et al., 2016). Still, the literature on neural TPPs mostly focuses on prediction tasks, and the problem of anomaly detection has not been adequately addressed by existing works (Shchur et al., 2021). We aim to fill this gap in our paper.
∗Work done during an internship at Amazon Research. Code and datasets: https://github.com/shchur/tpp-anomaly-detection
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Our main contributions are the following:
1. Approach for anomaly detection for TPPs. We draw connections between OoD detection and GoF testing for TPPs (Section 2). By combining this insight with neural TPPs, we propose an approach for anomaly detection that shows high accuracy on synthetic and real-world event data.
2. A new test statistic for TPPs. We highlight the limitations of popular GoF statistics for TPPs and propose the sum-of-squared-spacings statistic that addresses these shortcomings (Section 4). The proposed statistic can be applied to both unmarked and marked TPPs.
2 Anomaly detection and goodness-of-fit testing
Background. A temporal point process (TPP) (Daley & Vere-Jones, 2003), denoted as P, defines a probability distribution over variable-length event sequences in an interval [0, T ]. A TPP realization X consists of strictly increasing arrival times (t1, . . . , tN ), where N , the number of events, is itself a random variable. A TPP is characterized by its conditional intensity function λ∗(t) := λ(t|Ht) that is equal to the rate of arrival of new events given the historyHt = {tj : tj < t}. Equivalently, a TPP can be specified with the integrated intensity function (a.k.a. the compensator) Λ∗(t) = ∫ t 0 λ∗(u)du.
Out-of-distribution (OoD) detection. We formulate the problem of detecting anomalous event sequences as an instance of OoD detection (Liang et al., 2018). Namely, we assume that we are given a large set of training sequencesDtrain = {X1, . . . , XM} that were sampled i.i.d. from some unknown distribution Pdata over a domain X . At test time, we need to determine whether a new sequenceX was also drawn from Pdata (i.e., X is in-distribution or “normal”) or from another distribution Q 6= Pdata (i.e., X is out-of-distribution or anomalous). We can phrase this problem as a null hypothesis test:
H0 : X ∼ Pdata H1 : X ∼ Q for some Q 6= Pdata. (1)
To reiterate, here we consider the case where X is a variable-length event sequence and Pdata is some unknown TPP. However, the rest of the discussion in Section 2 also applies to distributions over other data types, such as images.
Goodness-of-fit (GoF) testing. First, we observe that the problem of OoD detection is closely related to the problem of GoF testing (D’Agostino, 1986). We now outline the setup and approaches for GoF testing, and then describe how these can be applied to OoD detection. The goal of a GoF test to determine whether a random elementX follows a known distribution Pmodel2
H0 : X ∼ Pmodel H1 : X ∼ Q for some Q 6= Pmodel. (2)
We can perform such a test by defining a test statistic s(X), where s : X → R (Fisher, 1936). For this, we compute the (two-sided) p-value for an observed realization x of X as3
ps(x) = 2×min{Pr(s(X) ≤ s(x)|H0), 1− Pr(s(X) ≤ s(x)|H0)}. (3)
The factor 2 accounts for the fact that the test is two-sided. We reject the null hypothesis (i.e., conclude that X doesn’t follow Pmodel) if the p-value is below some predefined confidence level α. Note that computing the p-value requires evaluating the cumulative distribution function (CDF) of the sampling distribution, i.e., the distribution test statistic s(X) under the null hypothesis H0.
GoF testing vs. OoD detection. The two hypothesis tests (Equations 1 and 2) appear similar—in both cases the goal is to determine whetherX follows a certain distribution P and no assumptions are made about the alternative Q. This means that we can perform OoD detection using the procedure described above, that is, by defining a test statistic s(X) and computing the respective p-value (Equation 3). However, in case of GoF testing (Equation 2), the distribution Pmodel is known. Therefore, we can analytically compute or approximate the CDF of s(X)|X ∼ Pmodel, and thus the p-value. In contrast, in an OoD detection hypothesis test (Equation 1), we make no assumptions about Pdata and only
2We test a single realization X , as is common in TPP literature (Brown et al., 2002). Note that this differs from works on univariate GoF testing that consider multiple realizations, i.e., H0 : X1, . . . , XM
i.i.d.∼ Pmodel. 3In the rest of the paper, the difference between the random element X and its realization x is unimportant,
so we denote both as X , as is usually done in the literature.
have access to samples Dtrain that were drawn from this distribution. For this reason, we cannot compute the CDF of s(X)|X ∼ Pdata analytically. Instead, we can approximate the p-value using the empirical distribution function (EDF) of the test statistic s(X) on Dtrain. The above procedure can be seen as a generalization of many existing methods for unsupervised OoD detection. These approaches usually define the test statistic based on the log-likelihood (LL) of a generative model fitted to Dtrain (Choi et al., 2018; Ren et al., 2019; Ruff et al., 2021). However, as follows from our discussion above, there is no need to limit ourselves to LL-based statistics. For instance, we can define a test statistic for event sequences based on the rich literature on GoF testing for TPPs. We show in Section 6 that this often leads to more accurate anomaly detection compared to LL. Moreover, the difference between OoD detection and GoF testing is often overlooked. By drawing a clear distinction between the two, we can avoid some of the pitfalls encountered by other works (Nalisnick et al., 2019), as we elaborate in Appendix A.
The anomaly detection framework we outlined above can be applied to any type of data—such as images or time series—but in this work we mostly focus on continuous-time event data. This means that our main goal is to find an appropriate test statistic for variable-length continuous-time event sequences. In Section 3, we take a look at existing GoF statistics for TPPs and analyze their limitations. Then in Section 4, we propose a new test statistic that addresses these shortcomings and describe in more detail how it can be used for OoD detection.
3 Review of existing GoF test statistics for TPPs
Here, we consider a GoF test (Equation 2), where the goal is to determine whether an event sequence X = (t1, . . . , tN ) was generated by a known TPP Pmodel with compensator Λ∗. We will return to the problem of OoD detection, where the data-generating distribution Pdata is unknown, in Section 4.2. Many popular GoF tests for TPPs are based on the following result (Ogata, 1988; Brown et al., 2002).
Theorem 1 (Random time change theorem (Brown et al., 2002)). A sequence X = (t1, . . . , tN ) is distributed according to a TPP with compensator Λ∗ on the interval [0, V ] if and only if the sequence Z = (Λ∗(t1), . . . ,Λ ∗(tN )) is distributed according to the standard Poisson process on [0,Λ∗(V )].
Intuitively, Theorem 1 can be viewed as a TPP analogue of how the CDF of an arbitrary random variable over R transforms its realizations into samples from Uniform([0, 1]). Similarly, the compensator Λ∗ converts a random event sequence X into a realization Z of the standard Poisson process (SPP). Therefore, the problem of GoF testing for an arbitrary TPP reduces to testing whether the transformed sequence Z follows the SPP on [0,Λ∗(T )]. In other words, we can define a GoF statistic for a TPP with compensator Λ∗ by (1) applying the compensator to X to obtain Z and (2) computing one of the existing GoF statistics for the SPP on the transformed sequence. This can also be generalized to marked TPPs (where events can belong to one ofK classes) by simply concatentating the transformed sequences Z(k) for each event type k ∈ {1, . . . ,K} (see Appendix D for details). SPP, i.e., the Poisson process with constant intensity λ∗(t) = 1, is the most basic TPP one can conceive. However, as we will shortly see, existing GoF statistics even for this simple model have considerable shortcomings and can only detect a limited class of deviations from the SPP. More importantly, test statistics for general TPPs defined using the above recipe (Theorem 1) inherit the limitations of the SPP statistics.
For brevity, we denote the transformed arrival times as Z = (v1, . . . , vN ) = (Λ∗(t1), . . . ,Λ∗(tN )) and the length of the transformed interval as V = Λ∗(T ). One way to describe the generative process of an SPP is as follows (Pasupathy, 2010)
N |V ∼ Poisson(V ) ui|N,V ∼ Uniform([0, V ]) for i = 1, . . . , N. (4)
An SPP realization Z = (v1, . . . , vN ) is obtained by sorting the ui’s in increasing order. This is equivalent to defining the arrival time vi as the i-th order statistic u(i). We can also represent Z by the inter-event times (w1, . . . , wN+1) where wi = vi − vi−1, assuming v0 = 0 and vN+1 = V . Barnard (1953) proposed a GoF test for the SPP based on the above description (Equation 4) and the Kolmogorov–Smirnov (KS) statistic. The main idea of this approach is to check whether the arrival times v1, . . . , vN are distributed uniformly in the [0, V ] interval. For this, we compare F̂arr, the empirical CDF of the arrival times, with Farr(u) = u/V , the CDF of the Uniform([0, V ]) distribution.
This can be done using the KS statistic on the arrival times (KS arrival), defined as
κarr(Z) = √ N · sup
u∈[0,V ] |F̂arr(u)− Farr(u)| where F̂arr(u) =
1
N N∑ i=1 1(vi ≤ u). (5)
Another popular GoF test for the SPP is based on the fact that the inter-event times wi are distributed according to the Exponential(1) distribution (Cox, 1966). The test compares F̂int, the empirical CDF of the inter-event times, and Fint(u) = 1− exp(−u), the CDF of the Exponential(1) distribution. This leads to the KS statistic for the inter-event times (KS inter-event)
κint(Z) = √ N · sup
u∈[0,∞) |F̂int(u)− Fint(u)| where F̂int(u) =
1
N + 1 N+1∑ i=1 1(wi ≤ u). (6)
KS arrival and KS inter-event statistics are often presented as the go-to approach for testing the goodness-of-fit of the standard Poisson process (Daley & Vere-Jones, 2003). Combining them with Theorem 1 leads to simple GoF tests for arbitrary TPPs that are widely used to this day (Gerhard et al., 2011; Alizadeh et al., 2013; Kim & Whitt, 2014; Tao et al., 2018; Li et al., 2018).
Limitations of the KS statistics. The KS statistics κarr(Z) and κint(Z) are only able to differentiate the SPP from a narrow class of alternative processes. For example, KS arrival only checks if the arrival times vi are distributed uniformly, conditioned on the event count N . But what if the observed N is itself extremely unlikely under the SPP (Equation 4)? KS inter-event can be similarly insensitive to the event count—removing all events V2 < vi ≤ V from an SPP realization Z will only result in just a single atypically large inter-event time wi, which changes the value of κint(Z) at most by 1N+1 . We demonstrate these limitations of κarr(Z) and κint(Z) in our experiments in Section 6.1. Other failure modes of the KS statistics were described by Pillow (2009). Note that ad-hoc fixes to the KS statistics do not address these problems. For example, combining multiple tests performed separately for the event count and arrival times using Fisher’s method (Fisher, 1948; Cox, 1966) consistently decreases the accuracy, as we show in Appendix G. In the next section, we introduce a different test statistic that aims to address these shortcomings.
4 Sum-of-squared-spacings (3S) statistic for TPPs
4.1 Goodness-of-fit testing with the 3S statistic
A good test statistic should capture multiple properties of the SPP at once: it should detect deviations w.r.t. both the event count N and the distribution of the arrival or inter-event times. Here, we propose to approach GoF testing with a sum-of-squared-spacings (3S) statistic that satisfies these desiderata,
ψ(Z) = 1
V N+1∑ i=1 w2i = 1 V N+1∑ i=1 (vi − vi−1)2. (7)
This statistic extends the sum-of-squared-spacings statistic proposed as a test of uniformity for fixedlength samples by Greenwood (1946). The important difference between our definition (Equation 7) and prior works (D’Agostino, 1986) is that we, for the first time, consider the TPP setting, where the number of events N is random as well. For this reason, we use the normalizing constant 1/V instead of N/V 2 (see Appendix B for details). As we will see, this helps capture abnormalities in the event count and results in more favorable asymptotic properties for the case of SPP.
Intuitively, for a fixed N , the statistic ψ is maximized if the spacings are extremely imbalanced, i.e., if one inter-event time wi is close to V and the rest are close to zero. Conversely, ψ attains its minimum when the spacings are all equal, that is wi = VN+1 for all i.
In Figure 2a we visualize the distribution of ψ|N,V for two different values of N . We see that the distribution of ψ depends strongly on N , therefore a GoF test involving ψ will detect if the event count N is atypical for the given SPP. This is in contrast to κarr and κint, the distributions of which, by design, are (asymptotically) invariant under N (Figure 2b). Even if one accounts for this effect, e.g., by removing the correction factor √ N in Equations 5 and 6, their distributions change only slightly compared to the sum of squared spacings (see Figures 2c and 2d). To analyze other properties of the statistic, we consider its moments under the null hypothesis.
Proposition 1. Suppose the sequence Z is distributed according to the standard Poisson process on the interval [0, V ]. Then the first two moments of the statistic ψ := ψ(Z) are
E[ψ|V ] = 2 V (V + e−V − 1) and Var[ψ|V ] = 4 V 2 (2V − 7 + e−V (2V 2 + 4V + 8− e−V )).
The proof of Proposition 1 can be found in Appendix C. From Proposition 1 it follows that
\lim_{V \to \infty} E[\psi \mid V] = 2 \quad \text{and} \quad \lim_{V \to \infty} \mathrm{Var}[\psi \mid V] = 0. \quad (8)
This leads to a natural notion of typicality in the sense of Nalisnick et al. (2019) and Wang et al. (2020) for the standard Poisson process. We can define the typical set of the SPP as the set of variable-length sequences Z on the interval [0, V ] that satisfy |ψ(Z) − 2| ≤ ε for some small ε > 0. It follows from Equation 8 and Chebyshev’s inequality that for large enough V , the SPP realizations will fall into the typical set with high probability. Therefore, at least for large V , we should be able to detect sequences that are not distributed according to the SPP based on the statistic ψ.
Summary. To test the GoF of a TPP with a known compensator Λ∗ for an event sequence X = (t1, . . . , tN ), we first obtain the transformed sequence Z = (Λ∗(t1), . . . ,Λ∗(tN )) and compute the statistic ψ(Z) according to Equation 7. Since the CDF of the statistic under H0 cannot be computed analytically, we approximate it using samples drawn from Pmodel. That is, we draw realizations Dmodel = {X1, . . . , XM} from the TPP (e.g., using the inversion method (Rasmussen, 2018)) and compute the p-value for X (Equation 3) using the EDF of the statistic on Dmodel (North et al., 2002).
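A compact sketch of this procedure is given below. The exact form of the p-value in Equation 3 is defined earlier in the paper, so the two-sided empirical p-value used here is an assumption, and all names (`compensator`, `statistic`, `model_sampler`) are illustrative rather than the authors' API.

```python
import numpy as np

def empirical_pvalue(s_obs, s_samples):
    """Two-sided empirical p-value of s_obs w.r.t. samples of the statistic
    (assumed form of Equation 3)."""
    s_samples = np.asarray(s_samples, dtype=float)
    left = np.mean(s_samples <= s_obs)
    right = np.mean(s_samples >= s_obs)
    return 2.0 * min(left, right)

def gof_test(X, compensator, statistic, model_sampler, M=1000):
    """GoF test for a TPP with known compensator (summary of Section 4.1).

    compensator(X) is assumed to return the transformed sequence Z and V = Lambda*(T).
    """
    Z, V = compensator(X)
    s_obs = statistic(Z, V)
    # EDF of the statistic under H0, estimated from M sequences drawn from the model.
    s_h0 = []
    for _ in range(M):
        Zm, Vm = compensator(model_sampler())
        s_h0.append(statistic(Zm, Vm))
    return empirical_pvalue(s_obs, s_h0)
```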
4.2 Out-of-distribution detection with the 3S statistic
We now return to the original problem of OoD detection in TPPs, where we have access to a set of in-distribution sequences Dtrain and do not know the data-generating process Pdata. Our idea is to perform the OoD detection hypothesis test (Equation 1) using the sum-of-squared-spacings test statistic that we introduced in the previous section. However, since the data-generating TPP Pdata is unknown, we do not know the corresponding compensator that is necessary to compute the statistic. Instead, we can fit a neural TPP model Pmodel (Du et al., 2016) to the sequences in Dtrain and use the compensator Λ∗ of the learned model to compute the statistic s(X).4 The high flexibility of neural TPPs allows these models to more accurately approximate the true compensator. Having defined the statistic, we can approximate its distribution under H0 (i.e., assuming X ∼ Pdata) by the EDF of the statistic on Dtrain. We
use this EDF to compute the p-values for our OoD detection hypothesis test and thus detect anomalous sequences. We provide the pseudocode description of our OoD detection method in Appendix D.
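In code, the only change relative to the GoF sketch above is which set provides the EDF of the statistic; a minimal outline (reusing `empirical_pvalue` from above, with an assumed neural-TPP fitting routine) could look as follows.

```python
import numpy as np

def ood_detection_pvalues(D_train, D_test, fit_neural_tpp, statistic):
    """OoD detection with a learned compensator (Section 4.2); names are illustrative."""
    model = fit_neural_tpp(D_train)            # learns the compensator Lambda*

    def s(X):
        Z, V = model.compensator(X)            # transformed sequence and V = Lambda*(T)
        return statistic(Z, V)

    # EDF under H0 is taken on the training sequences, not on model samples.
    s_train = np.array([s(X) for X in D_train])
    return [empirical_pvalue(s(X), s_train) for X in D_test]
```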
We highlight that an OoD detection procedure like the one above is not equivalent to a GoF test for the learned generative model Pmodel, as suggested by earlier works (Nalisnick et al., 2019). While we use
4We can replace the 3S statistic on the transformed sequence Z with any other statistic for the SPP, such as KS arrival. In Sections 6.2 and 6.3, we compare different statistics constructed this way.
the compensator of the learned model to define the test statistic s(X), we compute the p-value for the OoD detection test based on s(X)|X ∼ Pdata. This is different from the distribution s(X)|X ∼ Pmodel used in a GoF test, since in general Pmodel 6= Pdata. Therefore, even if the distribution of a test statistic under the GoF test can be approximated analytically (as, e.g., for the KS statistic (Marsaglia et al., 2003)), we have to use the EDF of the statistic onDtrain for the OoD detection test. Figure 3 visualizes this difference. Here, we fit a TPP model on the in-distribution sequences from the STEAD dataset (Section 6.3) and plot the empirical distribution of the respective statistic s(X) on Dtrain (corresponds to s(X)|X ∼ Pdata) and on model samples Dmodel (corresponds to s(X)|X ∼ Pmodel).
5 Related work
Unsupervised OoD detection. OoD detection approaches based on deep generative models (similar to our approach in Section 4.2) have received a lot of attention in the literature. However, there are several important differences between our method and prior works. First, most existing approaches perform OoD detection based on the log-likelihood (LL) of the model or some derived statistic (Choi et al., 2018; Ren et al., 2019; Nalisnick et al., 2019; Morningstar et al., 2021; Ruff et al., 2021). We observe that LL can be replaced by any other test statistic, e.g., taken from the GoF testing literature, which often leads to more accurate anomaly detection (Section 6). Second, unlike prior works, we draw a clear distinction between OoD detection and GoF testing. While this difference may seem obvious in hindsight, it is not acknowledged by the existing works, which may lead to complications (see Appendix A). Also, our formulation of the OoD detection problem in Section 2 provides an intuitive explanation for the phenomenon of “typicality” (Nalisnick et al., 2019; Wang et al., 2020). The (ε, 1)-typical set of a distribution P simply corresponds to the acceptance region of the respective hypothesis test with confidence level α (Equation 1). Finally, most existing papers study OoD detection for image data and none consider variable-length event sequences, which is the focus of our work.
Our OoD detection procedure is also related to the rarity anomaly score (Ferragut et al., 2012; Janzing et al., 2019). The rarity score can be interpreted as the negative logarithm of a one-sided p-value (Equation 3) of a GoF test that uses the log-likelihood of some known model as the test statistic. In contrast, we consider a broader class of statistics and learn the model from the data.
Anomaly detection for TPPs. OoD detection, as described in Section 2, is not the only way to formalize anomaly detection for TPPs. For example, Ojeda et al. (2019) developed a distance-based approach for Poisson processes. Recently, Zhu et al. (2020) proposed to detect anomalous event sequences with an adversarially-trained model. Unlike these two methods, our approach can be combined with any TPP model without altering the training procedure. Liu & Hauskrecht (2019) studied anomalous event detection with TPPs, while we are concerned with entire event sequences.
GoF tests for TPPs. Existing GoF tests for the SPP usually check if the arrival times are distributed uniformly, using, e.g., the KS (Lewis, 1965) or chi-squared statistic (Cox, 1955). Our 3S statistic favorably compares to these approaches thanks to its dependence on the event countN , as we explain in Section 4 and show experimentally in Section 6.1. Methods combining the random time change theorem with a GoF test for the SPP (usually, the KS test) have been used at least since Ogata (1988), and are especially popular in neuroscience (Brown et al., 2002; Gerhard et al., 2011; Tao et al., 2018). However, these approaches inherit the limitations of the underlying KS statistic. Replacing the KS score with the 3S statistic consistently leads to a better separation between different TPP distributions (Section 6).
Gerhard & Gerstner (2010) discussed several GoF tests for discrete-time TPPs, while we deal with continuous time. Yang et al. (2019) proposed a GoF test for point processes based on Stein’s identity, which is related to a more general class of kernel-based GoF tests (Chwialkowski et al., 2016; Liu et al., 2016). Their approach isn’t suitable for neural TPPs, where the Papangelou intensity cannot be computed analytically. A recent work by Wei et al. (2021) designed a GoF test for self-exciting processes under model misspecification. In contrast to these approaches, our proposed GoF test from Section 4.1 can be applied to any TPP with a known compensator.
Sum-of-squared-spacings statistic. A similar statistic was first used by Greenwood (1946) for testing whether a fixed number of points are distributed uniformly in an interval. Several follow-up works studied the limiting distribution of the statistic (conditioned on N ) as N → ∞ (Hill, 1979; Stephens, 1981; Rao & Kuo, 1984). Our proposed statistic (Equation 7) is not invariant w.r.t. N and, therefore, is better suited for testing TPPs. We discuss other related statistics in Appendix B.
6 Experiments
Our experimental evaluation covers two main topics. In Section 6.1, we compare the proposed 3S statistic with existing GoF statistics for the SPP. Then in Sections 6.2 and 6.3, we evaluate our OoD detection approach on simulated and real-world data, respectively. The experiments were run on a machine with a 1080Ti GPU. Details on the setup and datasets construction are provided in Appendix E & F.
6.1 Standard Poisson process
In Section 3 we mentioned several failure modes of existing GoF statistics for the SPP. Then, in Section 4.1 we introduced the 3S statistic designed to address these limitations. Hence, the goal of this section is to compare the proposed statistic with the existing ones in the task of GoF testing for the SPP. We consider four test statistics: (1) the KS statistic on arrival times (Equation 5), (2) the KS statistic on inter-event times (Equation 6), (3) the chi-squared statistic on the arrival times (Cox, 1955; Tao et al., 2018), and (4) the proposed 3S statistic (Equation 7).
To quantitatively compare the discriminative power of different statistics, we adopt an evaluation strategy similar to Gerhard & Gerstner (2010); Yang et al. (2019). First, we generate a set Dmodel consisting of 1000 SPP realizations. We use Dmodel to compute the empirical distribution function of each statistic s(Z) under H0. Then, we define two test sets: DIDtest (consisting of samples from Pmodel, the SPP) and DOODtest (consisting of samples from Q, another TPP), each with 1000 sequences. Importantly, in this and following experiments, the training and test sets are always disjoint.
We follow the GoF testing procedure described at the end of Section 4.1, which corresponds to the hypothesis test in Equation 2. That is, we compute the p-value (Equation 3) for each sequence in the test sets using the EDF of s(Z) on Dmodel. A good test statistic s(Z) should assign lower p-values to the OoD sequences from DOODtest than to ID sequences from DIDtest, allowing us to discriminate between samples from Q and Pmodel. We quantify how well a given statistic separates the two distributions by computing the area under the ROC curve (ROC AUC). This effectively averages the performance of a statistic for the GoF hypothesis test over different significance levels α.
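The ROC AUC computation over the resulting p-values can be done with standard tools; a small sketch assuming scikit-learn (OoD sequences should receive lower p-values, hence the sign flip).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_for_statistic(pvals_id, pvals_ood):
    """ROC AUC of a statistic for separating ID from OoD sequences."""
    y_true = np.concatenate([np.zeros(len(pvals_id)), np.ones(len(pvals_ood))])
    scores = -np.concatenate([pvals_id, pvals_ood])   # lower p-value -> higher anomaly score
    return float(roc_auc_score(y_true, scores))
```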
Datasets. We consider six choices for the distribution Q: • RATE, a homogeneous Poisson process with intensity µ < 1; • STOPPING, where events stop after some time tstop ∈ [0, V ]; • RENEWAL, where inter-event times are drawn i.i.d. from the Gamma distribution; • HAWKES, where events are more clustered compared to the SPP; • INHOMOGENEOUS, a Poisson process with non-constant intensity λ(t) = β sin(ωt); • SELFCORRECTING, where events are more evenly spaced compared to the SPP.
For the last 4 cases, the expected number of events is the same as for the SPP.
For each choice of Q we define a detectability parameter δ ∈ [0, 1], where higher δ corresponds to TPPs that are increasingly dissimilar to the SPP. That is, setting δ = 0 corresponds to a distribution Q that is exactly equal to the SPP, and δ = 1 corresponds to a distribution that deviates significantly from the SPP. For example, for a Hawkes process with conditional intensity λ*(t) = µ + β Σ_{tj<t} exp(−(t − tj)), the detectability value of δ = 0 corresponds to µ = 1 and β = 0 (i.e., λ*(t) = 1), making Q indistinguishable from P. The value of δ = 0.5 corresponds to µ = 0.5 and β = 0.5, which preserves the expected number of events N but makes the arrival times ti “burstier.” We describe how the parameters of each distribution Q are defined based on δ in Appendix E. Note that, in general, the ROC AUC scores are not guaranteed to monotonically increase as the detectability δ is increased.
Results. In Figure 4, we present AUC scores for different statistics as δ is varied. As expected, KS arrival accurately identifies sequences that come from Q where the absolute time of events are non-uniform (as in INHOMOGENEOUS). Similarly, KS inter-event is good at detecting deviations in the distribution of inter-event times, as in RENEWAL. The performance of the chi-squared statistic is similar to that of KS arrival. Nevertheless, the above statistics fail when the expected number of events, N , changes substantially—as in KS arrival and chi-squared on RATE, and KS inter-event on STOPPING. These failure modes match our discussion from Section 3.
In contrast, the 3S statistic stands out as the most consistent test (best or close-to-best performance in 5 out of 6 cases) and does not completely fail in any of the scenarios. The relatively weaker performance on SELFCORRECTING implies that the 3S statistic is less sensitive to superuniform spacings (D’Agostino, 1986) than to imbalanced spacings. The results show that the 3S statistic is able to detect deviations w.r.t. both the event count N (RATE and STOPPING), as well as the distributions of the inter-event times wi (RENEWAL) or the arrival times vi (HAWKES and INHOMOGENEOUS)— something that other GoF statistics for the SPP cannot provide.
6.2 Detecting anomalies in simulated data
In this section, we test the OoD detection approach discussed in Section 4.2, i.e., we perform anomaly detection for a TPP with an unknown compensator. This corresponds to the hypothesis test in Equation 1. We use the training set Dtrain to fit an RNN-based neural TPP model (Shchur et al., 2020) via maximum likelihood estimation (see Appendix F for details). Then, we define test statistics for the general TPP as follows. We apply the compensator Λ∗ of the learned model to each event sequence X and compute the four statistics for the SPP from Section 6.1 on the transformed sequence Z = Λ∗(X). We highlight that these methods are not “baselines” in the usual sense—the idea of combining a GoF statistic with a learned TPP model to detect anomalous event sequences is itself novel and hasn’t been explored by earlier works. The rest of the setup is similar to Section 6.1. We use Dtrain to compute the EDF of each statistic under H0, and then compute the ROC AUC scores on the p-values. In addition to the four statistics discussed before, we consider a two-sided test on the log-likelihood log q(X) of the learned generative model, which corresponds to the approach by Nalisnick et al. (2019).
Datasets. Like before, we define a detectability parameter δ for each scenario that determines how dissimilar ID and OoD sequences are. SERVER-STOP, SERVER-OVERLOAD and LATENCY are inspired by applications in DevOps, such as detecting anomalies in server logs.
• SERVER-OVERLOAD and SERVER-STOP contain data generated by a multivariate Hawkes process with 3 marks, e.g., modeling network traffic among 3 hosts. In OoD sequences, we change the influence matrix to simulate scenarios where a host goes offline (SERVER-STOP), and where a host goes down and the traffic is routed to a different host (SERVER-OVERLOAD). Higher δ implies that the change in the influence matrix happens earlier.
• LATENCY contains events of two types, sampled as follows. The first mark, the “trigger,” is sampled from a homogeneous Poisson process with rate µ = 3. The arrival times of the second
Results are shown in Figure 5. The 3S statistic demonstrates excellent performance in all four scenarios, followed by KS arrival and chi-squared. In the case of SERVER-STOP and SERVER-OVERLOAD, the 3S statistic allows us to perfectly detect the anomalies even when only 5% of the time interval is affected by the change in the influence structure. KS inter-event and log-likelihood statistics completely fail on SERVER-STOP and SERVER-OVERLOAD, respectively. These two statistics also struggle to discriminate OoD sequences in the LATENCY and SPIKETRAINS scenarios. The non-monotone behavior of the ROC AUC scores for some statistics (as δ increases) indicates that these statistics are poorly suited for the respective scenarios.
6.3 Detecting anomalies in real-world data
Finally, we apply our methods to detect anomalies in two real-world event sequence datasets. We keep the setup (e.g., configuration of the neural TPP model) identical to Section 6.2.
LOGS: We generate server logs using Sock Shop microservices (Weave, 2017) and represent them as marked event sequences. Sock Shop is a standard testbed for research in microservice applications (Aderaldo et al., 2017) and contains a web application that runs on several containerized services. We generate OoD sequences by injecting various failures (e.g., packet corruption, increased latency) among these microservices using the chaos testing tool Pumba (Ledenev et al., 2016). We split one large server log into 30-second subintervals, which are then partitioned into train and test sets.
STEAD (Stanford Earthquake Dataset) (Mousavi et al., 2019) includes detailed seismic measurements on over 1 million earthquakes. We construct four subsets, each containing 72-hour subintervals in a period of five years within a 350 km radius of a fixed geographical location. We treat sequences corresponding to the San Mateo, CA region as in-distribution data, and the remaining 3 regions (Anchorage, AK, Aleutian Islands, AK and Helmet, CA) as OoD data.
Results. Table 1 shows the ROC AUC scores for all scenarios. KS arrival and chi-squared achieve surprisingly low scores in 6 out of 8 scenarios, even though these two methods showed strong results on simulated data in Sections 6.1 and 6.2. In contrast, KS inter-event and log-likelihood perform better here than in previous experiments, but still produce poor results on Packet corruption. The 3S statistic is the only method that consistently shows high ROC AUC scores across all scenarios. Moreover, we observe that for marked sequences (LOGS and all datasets in Section 6.2), the 3S statistic leads to more accurate detection compared to the log-likelihood statistic in 9 out of 9 cases.
7 Discussion
Limitations. Our approach assumes that the sequences in Dtrain were drawn i.i.d. from the true data-generating distribution Pdata (Section 2). This assumption can be violated in two ways: some of the training sequences might be anomalous or there might exist dependencies between them. We have
considered the latter case in our experiments on SPIKETRAINS and LOGS datasets, where despite the non-i.i.d. nature of the data our method was able to accurately detect anomalies. However, there might exist scenarios where the violation of the assumptions significantly degrades the performance.
No single test statistic can be “optimal” for either OoD detection or GoF testing, since we make no assumptions about the alternative distribution Q (Section 2). We empirically showed that the proposed 3S statistic compares favorably to other choices over a range of datasets and application domains. Still, for any fixed pair of distributions P and Q, one can always find a statistic that has equal or higher power at the same false positive rate (Neyman & Pearson, 1933). Hence, it would not be surprising to find cases where our statistic (or any other statistic chosen a priori) is inferior.
Broader impact. Continuous-time variable-length event sequences provide a natural representation for data such as electronic health records (Enguehard et al., 2020), server logs (He et al., 2016) and user activity traces (Zhu et al., 2020). The ability to perform unsupervised anomaly detection in such data can enable practitioners to find at-risk patients, reduce DevOps costs, and automatically detect security breaches—all of which are important tasks in the respective fields. One of the risks when applying an anomaly detection method in practice is that the statistical anomalies found by the method will not be relevant for the use case. For example, when looking for health insurance fraud, the method might instead flag legitimate patients who underwent atypically many procedures as “suspicious” and freeze their accounts. To avoid such situations, automated decisions systems should be deployed with care, especially in sensitive domains like healthcare.
Conclusion. We have presented an approach for OoD detection for temporal point processes based on goodness-of-fit testing. At the core of our approach lies a new GoF test for standard Poisson processes based on the 3S statistic. Our method applies to a wide class of TPPs and is extremely easy to implement. We empirically showed that the proposed approach leads to better OoD detection accuracy compared to both popular GoF statistics for TPPs (Kolmogorov–Smirnov, chi-squared) and approaches commonly used in OoD detection literature (model log-likelihood). While our analysis focuses on TPPs, we believe our discussion on similarities and distinctions between GoF testing and OoD detection offers insights to the broader machine learning community.
Funding transparency statement
The work was funded by Amazon Research. | 1. What is the main contribution of the paper in anomaly detection?
2. What is the novel approach proposed by the paper in identifying anomalous event sequences?
3. How effective is the proposed method in detecting anomalies, based on the experimental results?
4. What are the strengths of the paper regarding its clarity, references, and experiments? | Summary Of The Paper
Review | Summary Of The Paper
This paper focuses on the anomaly detection of continuous-time event sequences with temporal point processes and proposes to leverage the idea of goodness-of-fit testing to check whether sequences follow the in-distribution. Consider the limitation that the current statistics are not sensitive to the event number N, this paper proposes a new statistic called sum-of-squared-spacings (3S) to check the fitness of the sequence. The experimental results show the effectiveness of the proposed statistic for anomaly detection.
Review
Detecting anomalous continuous-time event sequences is an important task yet under-exploited. This paper proposes a new statistic to identify the anomalous event sequences, called 3S. Evaluating on two synthetic and one real datasets shows the effectiveness of using 3S for anomaly detection. The overall paper is well-written with sufficient references and experiments. |
NIPS | Title
Learning to solve TV regularised problems with unrolled algorithms
Abstract
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals by constraining the ℓ1-norm of the first order derivative of the estimated signal. The resulting optimization problem is usually solved using iterative algorithms such as proximal gradient descent, primal-dual algorithms or ADMM. However, such methods can require a very large number of iterations to converge to a suitable solution. In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems. While this could be done using the synthesis formulation, we demonstrate that this leads to slower performance. The main difficulty in applying such methods in the analysis formulation lies in proposing a way to compute the derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures. We validate those findings with experiments on synthetic and real data.
1 Introduction
Ill-posed inverse problems appear naturally in signal and image processing and machine learning, requiring extra regularization techniques. Total Variation (TV) is a popular regularization strategy with a long history (Rudin et al., 1992), and has found a large number of applications in neuro-imaging (Fikret et al., 2013), medical imaging reconstruction (Tian et al., 2011), among myriad applications (Rodríguez, 2013; Darbon and Sigelle, 2006). TV promotes piece-wise constant estimates by penalizing the ℓ1-norm of the first order derivative of the estimated signal, and it provides a simple, yet efficient regularization technique.
TV-regularized problems are typically convex, and so a wide variety of algorithms are in principle applicable. Since the ℓ1 norm in the TV term is non-smooth, Proximal Gradient Descent (PGD) is the most popular choice (Rockafellar, 1976). Yet, the computation for the corresponding proximal operator (denoted prox-TV) represents a major difficulty in this case as it does not have a closed-form analytic solution. For 1D problems, it is possible to rely on dynamic programming to compute proxTV, such as the taut string algorithm (Davies and Kovac, 2001; Condat, 2013a). Another alternative consists in computing the proximal operator with iterative first order algorithm (Chambolle, 2004; Beck and Teboulle, 2009; Boyd et al., 2011; Condat, 2013b). Other algorithms to solve TV-regularized
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
problems rely on primal dual algorithms (Chambolle and Pock, 2011; Condat, 2013b) or Alternating Direction Method of Multipliers (ADMM) (Boyd et al., 2011). These algorithms typically use one sequence of estimates for each term in the objective and try to make them as close as possible while minimizing the associated term. While these algorithms are efficient for denoising problems – where one is mainly concerned with good reconstruction – they can result in estimates that are not very well regularized if the two sequences are not close enough.
Under a fixed computational budget, iterative optimization methods can become impractical as they often require many iterations to give a satisfactory estimate. To accelerate the resolution of these problems with a finite (and small) number of iterations, one can resort to unrolled and learned optimization algorithms (see Monga et al. 2019 for a review). In their seminal work, Gregor and Le Cun (2010) proposed the Learned ISTA (LISTA), where the parameters of an unfolded Iterative Shrinkage-Thresholding Algorithm (ISTA) are learned with gradient descent and back-propagation. This allows accelerating the approximate solution of a Lasso problem (Tibshirani, 1996), with a fixed number of iterations, for signals from a certain distribution. The core principle behind the success of this approach is that the network parameters can adaptively leverage the sensing matrix structure (Moreau and Bruna, 2017) as well as the input distribution (Giryes et al., 2018; Ablin et al., 2019). Many extensions of this original idea have been proposed to learn different algorithms (Sprechmann et al., 2012, 2013; Borgerding et al., 2017) or for different classes of problem (Xin et al., 2016; Giryes et al., 2018; Sulam et al., 2019). The motif in most of these adaptations is that all operations in the learned algorithms are either linear or separable, thus resulting in sub-differentials that are easy to compute and implement via back-propagation. Algorithm unrolling is also used in the context of bi-level optimization problems such as hyper-parameter selection. Here, the unrolled architecture provides a way to compute the derivative of the inner optimization problem solution compared to another variable such as the regularisation parameter using back-propagation (Bertrand et al., 2020).
The focus of this paper is to apply algorithm unrolling to TV-regularized problems in the 1D case. While one could indeed apply the LISTA approach directly to the synthesis formulation of these problems, we show in this paper that using such a formulation leads to slower iterative or learned algorithms compared to their analysis counterparts. The extension of learnable algorithms to the analysis formulation is not trivial, as the inner proximal operator does not have an analytical or separable expression. We propose two architectures that can learn TV-solvers in their analysis form directly based on PGD. The first architecture uses an exact algorithm to compute the prox-TV and we derive the formulation of its weak Jacobian in order to learn the network’s parameters. Our second method relies on a nested LISTA network in order to approximate the prox-TV itself in a differentiable way. This latter approach can be linked to inexact proximal gradient methods (Schmidt et al., 2011; Machart et al., 2012). These results are backed with numerical experiments on synthetic and real data. Concurrently to our work, Lecouat et al. (2020) also proposed an approach to differentiate the solution of TV-regularized problems. While their work can be applied in the context of 2D signals, they rely on smoothing the regularization term using Moreau-Yosida regularization, which results in smoother estimates from their learned networks. In contrast, our work allows to compute sharper signals but can only be applied to 1D signals.
The rest of the paper is organized as follows. In Section 2, we describe the different formulations for TV-regularized problems and their complexity. We also recall central ideas of algorithm unfolding. Section 3 introduces our two approaches for learnable network architectures based on PGD. Finally, the two proposed methods are evaluated on real and synthetic data in Section 4.
Notations For a vector x ∈ Rk, we denote ‖x‖q its `q-norm. For a matrix A ∈ Rm×k, we denote ‖A‖2 its `2-norm, which corresponds to its largest singular value and A† denotes its pseudoinverse. For an ordered subset of indices S ⊂ {1, . . . , k}, xS denote the vector in R|S| with element (xS)t = xit for it ∈ S. For a matrix A ∈ Rm×k, A:,S denotes the sub-matrix [A:,i1 , . . . A:,i|S| ] composed with the columns A:,it of index it ∈ S of A. For the rest of the paper, we refer to the operators D ∈ Rk−1×k, D̃ ∈ Rk×k, L ∈ Rk×k and R ∈ Rk×k as:
D = \begin{pmatrix} -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -1 & 1 \end{pmatrix}, \quad
\tilde{D} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ -1 & 1 & \ddots & \vdots \\ & \ddots & \ddots & 0 \\ 0 & & -1 & 1 \end{pmatrix}, \quad
L = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 1 & 1 & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ 1 & \cdots & 1 & 1 \end{pmatrix}, \quad
R = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 1 & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & 1 \end{pmatrix}
2 Solving TV-regularized problems
We begin by detailing the TV-regularized problem that will be the main focus of our work. Consider a latent vector u ∈ Rk, a design matrix A ∈ Rm×k and the corresponding observation x ∈ Rm. The original formulation of the TV-regularized regression problem is referred to as the analysis formulation (Rudin et al., 1992). For a given regularization parameter λ > 0, it reads
\min_{u \in \mathbb{R}^k} \; P(u) = \frac{1}{2} \|x - Au\|_2^2 + \lambda \|u\|_{TV}, \quad (1)
where ‖u‖TV = ‖Du‖1, and D ∈ Rk−1×k stands for the first order finite difference operator, as defined above. The problem in (1) can be seen as a special case of a Generalized Lasso problem (Tibshirani and Taylor, 2011); one in which the analysis operator is D. Note that problem P is convex, but the TV -norm is non-smooth. In these cases, a practical alternative is the PGD, which iterates between a gradient descent step and the prox-TV. This algorithm’s iterates read
u^{(t+1)} = \mathrm{prox}_{\frac{\lambda}{\rho} \|\cdot\|_{TV}} \left( u^{(t)} - \frac{1}{\rho} A^\top (A u^{(t)} - x) \right), \quad (2)
where \rho = \|A\|_2^2 and the prox-TV is defined as
\mathrm{prox}_{\mu\|\cdot\|_{TV}}(y) = \arg\min_{u \in \mathbb{R}^k} \; F_y(u) = \frac{1}{2} \|y - u\|_2^2 + \mu \|u\|_{TV}. \quad (3)
Problem (3) does not have a closed-form solution, and one needs to resort to iterative techniques to compute it. In our case, as the problem is 1D, the prox-TV problem can be addressed with a dynamic programming approach, such as the taut-string algorithm (Condat, 2013a). This scales as O(k) in all practical situations and is thus much more efficient than other optimization based iterative algorithms (Rockafellar, 1976; Chambolle, 2004; Condat, 2013b) for which each iteration is O(k^2) at best.
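A minimal sketch of iteration (2) is given below; it assumes the `tv1_1d` routine from the `prox_tv` Python bindings of the taut-string solver referenced in Section 4, and is an illustration rather than the authors' implementation.

```python
import numpy as np
import prox_tv  # taut-string prox-TV bindings (Barbero and Sra)

def pgd_analysis(A, x, lam, n_iter=100):
    """Proximal gradient descent on the analysis formulation (1)."""
    rho = np.linalg.norm(A, ord=2) ** 2        # rho = ||A||_2^2
    u = np.linalg.pinv(A) @ x                  # u^(0) = A^+ x, as in the experiments
    for _ in range(n_iter):
        grad = A.T @ (A @ u - x)               # gradient of the data-fit term
        u = prox_tv.tv1_1d(u - grad / rho, lam / rho)
    return u
```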
With a generic matrix A ∈ R^{m×k}, the PGD algorithm is known to have a sublinear convergence rate (Combettes and Bauschke, 2011). More precisely, for any initialization u^{(0)} and solution u^*, the iterates satisfy
P(u^{(t)}) - P(u^*) \le \frac{\rho}{2t} \|u^{(0)} - u^*\|_2^2, \quad (4)
where u^* is a solution of the problem in (1). Note that the constant ρ can have a significant effect. Indeed, it is clear from (4) that doubling ρ amounts to doubling the number of iterations needed to reach a given accuracy.
2.1 Synthesis formulation
An alternative formulation for TV-regularized problems relies on removing the analysis operator D from the `1-norm and translating it into a synthesis expression (Elad et al., 2007). Removing D from the non-smooth term simplifies the expression of the proximal operator by making it separable, as in the Lasso. The operator D is not directly invertible but keeping the first value of the vector u allows for perfect reconstruction. This motivates the definition of the operator D̃ ∈ Rk×k, and its inverse L ∈ Rk×k, as defined previously. Naturally, L is the discrete integration operator. Considering the change of variable z = D̃u, and using the operator R ∈ Rk×k, the problem in (1) is equivalent to
\min_{z \in \mathbb{R}^k} \; S(z) = \frac{1}{2} \|x - ALz\|_2^2 + \lambda \|Rz\|_1. \quad (5)
Note that for any z ∈ Rk, S(z) = P (Lz). There is thus an exact equivalence between solutions from the synthesis and the analysis formulation, and the solution for the analysis can be obtained with u∗ = Lz∗. The benefit of this formulation is that the problem above now reduces to a Lasso problem (Tibshirani, 1996). In this case, the PGD algorithm is reduced to the ISTA with a closed-form proximal operator (the soft-thresholding). Note that this simple formulation is only possible in 1D where the first order derivative space is unconstrained. In larger dimensions, the derivative must be constrained to verify Fubini’s formula that enforces the symmetry of integration over dimensions. While it is also possible to derive a synthesis formulation in higher dimensions (Elad et al., 2007), this does not lead to a simple proximal operator.
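For comparison, here is a sketch of ISTA on the synthesis formulation (5); soft-thresholding is applied only to coordinates 2, . . . , k, following the definition of R (the helper name is ours).

```python
import numpy as np

def ista_synthesis(A, x, lam, n_iter=100):
    """ISTA on the synthesis formulation (5); returns u = L z."""
    k = A.shape[1]
    L = np.tril(np.ones((k, k)))
    B = A @ L
    rho = np.linalg.norm(B, ord=2) ** 2        # rho_tilde = ||AL||_2^2
    z = np.zeros(k)
    for _ in range(n_iter):
        z = z - B.T @ (B @ z - x) / rho        # gradient step on the data-fit term
        z[1:] = np.sign(z[1:]) * np.maximum(np.abs(z[1:]) - lam / rho, 0.0)
    return L @ z
```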
For this synthesis formulation, with a generic matrix A ∈ R^{m×k}, the PGD algorithm also has a sublinear convergence rate (Beck and Teboulle, 2009) such that
P(u^{(t)}) - P(u^*) \le \frac{2\tilde{\rho}}{t} \|u^{(0)} - u^*\|_2^2, \quad (6)
with \tilde{\rho} = \|AL\|_2^2 (see Subsection F.1 for full derivation). While the rate of this algorithm is the same as in the analysis formulation – in O(1/t) – the constant \tilde{\rho} related to the operator norm differs. We now present two results that will characterize the value of \tilde{\rho}.
Proposition 2.1 (Lower bound for the expectation of the ratio \|AL\|_2^2 / \|A\|_2^2). Let A be a random matrix in R^{m×k} with i.i.d. normally distributed entries. The expectation of \|AL\|_2^2 / \|A\|_2^2 is asymptotically lower bounded when k tends to ∞ by
E\left[ \frac{\|AL\|_2^2}{\|A\|_2^2} \right] \ge \frac{2k + 1}{4\pi^2} + o(1).
The full proof can be found in Subsection F.3. The lower bound is constructed by using A^\top A \succeq \|A\|_2^2 \, u_1 u_1^\top for a unit vector u_1 and computing explicitly the expectation for rank-one matrices. To assess the tightness of this bound, we evaluated numerically E\big[ \|AL\|_2^2 / \|A\|_2^2 \big] on a set of 1000 matrices sampled with i.i.d. normally distributed entries. The results are displayed w.r.t. the dimension k in Figure 1. It is clear that the lower bound from Proposition 2.1 is not tight. This is expected as we consider only the leading eigenvector of A to derive it in the proof. The following conjecture gives a tighter bound.
Conjecture 2.2 (Expectation of the ratio \|AL\|_2^2 / \|A\|_2^2). Under the same conditions as in Proposition 2.1, the expectation of \|AL\|_2^2 / \|A\|_2^2 is given by
E\left[ \frac{\|AL\|_2^2}{\|A\|_2^2} \right] = \frac{(2k + 1)^2}{16\pi^2} + o(1).
We believe this conjecture can potentially be proven with analogous developments as those in Proposition 2.1, but integrating over all dimensions. However, a main difficulty lies in the fact that integration over all eigenvectors have to be carried out jointly as they are not independent. This is subject of current ongoing work.
Finally, we can expect that \tilde{\rho}/\rho scales as \Theta(k^2). This leads to the observation that \tilde{\rho} \gg \rho in large enough dimension. As a result, the analysis formulation should be much more efficient in terms of iterations than the synthesis formulation – as long as the prox-TV can be dealt with efficiently.
2.2 Unrolled iterative algorithms
As shown by Gregor and Le Cun (2010), ISTA is equivalent to a recurrent neural network (RNN) with a particular structure. This observation can be generalized to PGD algorithms for any penalized least squares problem of the form
u^*(x) = \arg\min_{u} \; L(x, u) = \frac{1}{2} \|x - Bu\|_2^2 + \lambda g(u), \quad (7)
where g is proper and convex, as depicted in Figure 2a. By unrolling this architecture with T layers, we obtain a network \phi_{\Theta^{(T)}}(x) = u^{(T)} – illustrated in Figure 2b – with parameters \Theta^{(T)} = \{W_x^{(t)}, W_u^{(t)}, \mu^{(t)}\}_{t=1}^T, defined by the following recursion
u^{(0)} = B^\dagger x; \quad u^{(t)} = \mathrm{prox}_{\mu^{(t)} g}\big(W_x^{(t)} x + W_u^{(t)} u^{(t-1)}\big). \quad (8)
As underlined by (4), a good estimate u(0) is crucial in order to have a fast convergence toward u∗(x). However, this chosen initialization is mitigated by the first layer of the network which learns to set a good initial guess for u(1). For a network with T layers, one recovers exactly the T -th iteration of PGD if the weights are chosen constant equal to
W_x^{(t)} = \frac{1}{\rho} B^\top, \quad W_u^{(t)} = \mathrm{Id} - \frac{1}{\rho} B^\top B, \quad \mu^{(t)} = \frac{\lambda}{\rho}, \quad \text{with } \rho = \|B\|_2^2. \quad (9)
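A compact PyTorch sketch of the recursion (8) with the initialization (9) is given below; `prox` stands for the proximal operator of g (soft-thresholding for the Lasso, or the prox-TV layers of Section 3). This is a simplified illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class UnrolledPGD(nn.Module):
    """T unrolled iterations u^(t) = prox(W_x x + W_u u^(t-1), mu^(t)), Equation (8)."""

    def __init__(self, B, lam, T, prox):
        super().__init__()
        B = torch.as_tensor(B, dtype=torch.float32)
        rho = torch.linalg.matrix_norm(B, ord=2) ** 2
        self.register_buffer("B_pinv", torch.linalg.pinv(B))
        self.prox = prox
        # Initialization (9): W_x = B^T / rho, W_u = Id - B^T B / rho, mu = lam / rho.
        self.W_x = nn.ParameterList([nn.Parameter(B.T / rho) for _ in range(T)])
        self.W_u = nn.ParameterList(
            [nn.Parameter(torch.eye(B.shape[1]) - B.T @ B / rho) for _ in range(T)])
        self.mu = nn.ParameterList(
            [nn.Parameter(torch.tensor(float(lam)) / rho) for _ in range(T)])

    def forward(self, x):                      # x has shape (batch, m)
        u = x @ self.B_pinv.T                  # u^(0) = B^+ x
        for W_x, W_u, mu in zip(self.W_x, self.W_u, self.mu):
            u = self.prox(x @ W_x.T + u @ W_u.T, mu)
        return u
```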
In practice, this choice of parameters is used as initialization for a posterior training stage. In many practical applications, one is interested in minimizing the loss (7) for a fixed B and a particular distribution P over the space of x. As a result, the goal of this training stage is to find parameters \Theta^{(T)} that minimize the risk, or expected loss, E[L(x, \phi_{\Theta^{(T)}}(x))] over P. Since one does not have access to this distribution, and following an empirical risk minimization approach with a given training set \{x_1, \ldots, x_N\} (assumed sampled i.i.d. from P), the network is trained by minimizing
\min_{\Theta^{(T)}} \; \frac{1}{N} \sum_{i=1}^{N} L\big(x_i, \phi_{\Theta^{(T)}}(x_i)\big). \quad (10)
Note that when T → +∞, the presented initialization in (9) gives a global minimizer of the loss for all x_i, as the network converges to exact PGD. When T is fixed, however, the output of the network is not a minimizer of (7) in general. Minimizing this empirical risk can therefore find a weight configuration that reduces the sub-optimality of the network relative to (7) over the input distribution used to train the network. In such a way, the network learns an algorithm to approximate the solution of (7) for a particular class or distribution of signals. It is important to note here that while this procedure can accelerate the resolution of the problem, the learned algorithm will only be valid for inputs x_i coming from the same input distribution P as the training samples. The algorithm might not converge for samples which are too different from the training set, unlike the iterative algorithm which is guaranteed to converge for any sample.
This network architecture design can be directly applied to TV regularised problems if the synthesis formulation (5) is used. Indeed, in this case PGD reduces to the ISTA algorithm, with B = AL and prox_{\mu g} = ST(·, \mu) simply the soft-thresholding operator (which is only applied on the coordinates \{2, . . . , k\}, following the definition of R). However, as discussed in Proposition 2.1, the conditioning of the synthesis problem makes the estimation of the solution slow, increasing the number of network layers needed to get a good estimate of the solution. In the next section, we will extend these learning-based ideas directly to the analysis formulation by deriving a way to obtain exact and approximate expressions for the sub-differential of the non-separable prox-TV.
3 Back-propagating through TV proximal operator
Our two approaches to define learnable networks based on PGD for TV-regularised problems in the analysis formulation differ on the computation of the prox-TV and its derivatives. Our first approach
consists in directly computing the weak derivatives of the exact proximal operator while the second one uses a differentiable approximation.
3.1 Derivative of prox-TV
While there is no analytic solution to the prox-TV, it can be computed exactly (numerically) for 1D problems using the taut-string algorithm (Condat, 2013a). This operator can thus be applied at each layer of the network, reproducing the architecture described in Figure 2b. We define the LPGD-Taut network \phi_{\Theta^{(T)}}(x) with the following recursion formula
\phi_{\Theta^{(T)}}(x) = \mathrm{prox}_{\mu^{(T)} \|\cdot\|_{TV}}\big( W_x^{(T)} x + W_u^{(T)} \phi_{\Theta^{(T-1)}}(x) \big). \quad (11)
To be able to learn the parameters through gradient descent, one needs to compute the derivatives of (10) w.r.t. the parameters \Theta^{(T)}. Denoting h = W_x^{(t)} x + W_u^{(t)} \phi_{\Theta^{(t-1)}}(x) and u = \mathrm{prox}_{\mu^{(t)} \|\cdot\|_{TV}}(h), the application of the chain rule (as implemented efficiently by automatic differentiation) results in
\frac{\partial L}{\partial h} = J_x(h, \mu^{(t)})^\top \frac{\partial L}{\partial u}, \quad \text{and} \quad \frac{\partial L}{\partial \mu^{(t)}} = J_\mu(h, \mu^{(t)})^\top \frac{\partial L}{\partial u}, \quad (12)
where J_x(h, \mu) \in \mathbb{R}^{k \times k} and J_\mu(h, \mu) \in \mathbb{R}^{k \times 1} denote the weak Jacobians of the output of the proximal operator u with respect to the first and second input, respectively. We now give the analytic formulation of these weak Jacobians in the following proposition.
Proposition 3.1 (Weak Jacobian of prox-TV). Let x \in \mathbb{R}^k and u = \mathrm{prox}_{\mu\|\cdot\|_{TV}}(x), and denote by S the support of z = \tilde{D} u. Then, the weak Jacobians J_x and J_\mu of the prox-TV relative to x and \mu can be computed as
J_x(x, \mu) = L_{:,S} (L_{:,S}^\top L_{:,S})^{-1} L_{:,S}^\top \quad \text{and} \quad J_\mu(x, \mu) = -L_{:,S} (L_{:,S}^\top L_{:,S})^{-1} \mathrm{sign}(Du)_S.
The proof of this proposition can be found in Subsection G.1. Note that the dependency on the inputs is only through S and sign(Du), where u is a short-hand for prox_{\mu\|\cdot\|_{TV}}(x). As a result, computing these weak Jacobians can be done efficiently by simply storing sign(Du) as a mask, as it would be done for a ReLU or the soft-thresholding activations, requiring just 2(k − 1) bits. With these expressions, it is thus possible to compute gradients relative to all parameters in the network, and employ them via back-propagation.
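The proposition translates directly into code; the NumPy sketch below is our illustration (the tolerance used to detect the support and the zeroing of the unpenalized first coefficient in the sign vector are our reading of the statement).

```python
import numpy as np

def prox_tv_weak_jacobians(u):
    """Weak Jacobians of u = prox_{mu ||.||_TV}(x) w.r.t. x and mu (Proposition 3.1).

    Only the support S of z = D_tilde u and the signs of the jumps are needed,
    so storing sign(Du) as a mask is enough during the forward pass.
    """
    k = len(u)
    L = np.tril(np.ones((k, k)))
    z = np.r_[u[0], np.diff(u)]                # z = D_tilde u
    S = np.abs(z) > 1e-12                      # support of z (tolerance is ours)
    S[0] = True                                # first coefficient is unpenalized
    L_S = L[:, S]
    M = L_S @ np.linalg.inv(L_S.T @ L_S)
    J_x = M @ L_S.T                            # L_S (L_S^T L_S)^{-1} L_S^T
    s = np.sign(z[S])
    s[0] = 0.0                                 # no l1 penalty on the first coefficient
    J_mu = -M @ s
    return J_x, J_mu
```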
3.2 Unrolled prox-TV
As an alternative to the previous approach, we propose to use the LISTA network to approximate the prox-TV (3). The prox-TV can be reformulated with a synthesis approach resulting in a Lasso i.e.
z^* = \arg\min_{z} \; \frac{1}{2} \|h - Lz\|_2^2 + \mu \|Rz\|_1. \quad (13)
The proximal operator solution can then be retrieved with \mathrm{prox}_{\mu\|\cdot\|_{TV}}(h) = L z^*. This problem can be solved using ISTA, and approximated efficiently with a LISTA network (Gregor and Le Cun, 2010). For the resulting architecture – dubbed LPGD-LISTA – \mathrm{prox}_{\mu\|\cdot\|_{TV}}(h) is replaced by a nested LISTA network with a fixed number of layers T_{in}, defined recursively with z^{(0)} = Dh and
z^{(\ell+1)} = \mathrm{ST}\left( W_z^{(\ell,t)} z^{(\ell)} + W_h^{(\ell,t)} \Phi_{\Theta^{(t)}}, \; \frac{\mu^{(\ell,t)}}{\rho} \right). \quad (14)
Here, W_z^{(\ell,t)}, W_h^{(\ell,t)}, \mu^{(\ell,t)} are the weights of the nested LISTA network for layer \ell. They are initialized with weights chosen as in (9) to ensure that the initial state approximates the prox-TV. Note that the weights of each of these inner layers are also learned through back-propagation during training.
The choice of this architecture provides a differentiable (approximate) proximal operator. Indeed, the LISTA network is composed only of linear and soft-thresholding layers – standard tools for deep-learning libraries. The gradient of the network’s parameters can thus be computed using classic automatic differentiation. Moreover, if the inner network is not trained, the gradient computed with this method will converge toward the gradient computed using Proposition 3.1 as T_{in} goes to ∞ (see Proposition G.2). Thus, in this untrained setting with infinitely many inner layers, the network is equivalent to LPGD-Taut as the output of the layer also converges toward the exact proximal operator.
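A differentiable (untrained) version of this inner network reduces to a few ISTA steps on problem (13); the sketch below keeps the initialization weights of (9) fixed, while the learned variant of Equation 14 would make them parameters.

```python
import torch

def unrolled_prox_tv(h, mu, T_in=50):
    """Approximate prox_{mu ||.||_TV}(h) with T_in ISTA steps on the Lasso (13).

    h is a 1-D tensor; every operation is differentiable, so gradients
    w.r.t. h and mu flow through the approximation.
    """
    k = h.shape[0]
    L = torch.tril(torch.ones(k, k, dtype=h.dtype))
    rho = torch.linalg.matrix_norm(L, ord=2) ** 2
    z = torch.cat([h[:1], h[1:] - h[:-1]])          # z^(0) = D_tilde h
    for _ in range(T_in):
        z = z - L.T @ (L @ z - h) / rho             # gradient step on 0.5 * ||h - Lz||^2
        z = torch.cat([z[:1],                        # first coefficient is not thresholded
                       torch.sign(z[1:]) * torch.relu(z[1:].abs() - mu / rho)])
    return L @ z                                     # prox estimate L z
```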
Connections to inexact PGD A drawback of approximating the prox-TV via an iterative procedure is, precisely, that it is not exact. This optimization error results from a trade-off between computational cost and convergence rate. Using results from Machart et al. (2012), one can compute the scaling of T and T_{in} required to reach an error level of δ with an untrained network. Proposition G.3 shows that without learning, T should scale as O(1/δ) and T_{in} should be larger than O(ln(1/δ)). This scaling gives potential guidelines to set these parameters, as one can expect that learning the parameters of the network would reduce these requirements.
4 Experiments
All experiments are performed in Python using PyTorch (Paszke et al., 2019). We used the implementation1 of Barbero and Sra (2018) to compute TV proximal operator using taut-string algorithm. The code to reproduce the figures is available online2.
In all experiments, we initialize u^{(0)} = A^\dagger x. Moreover, we employed a normalized regularization parameter: we first compute the value of \lambda_{max} (the minimal value for which z = 0 is a solution of (5)) and we refer to \lambda as the ratio so that \lambda_{reg} = \lambda \lambda_{max}, with \lambda \in [0, 1] (see Appendix D). As the computational complexity of all compared algorithms is the same except for the proximal operator, we compare them in terms of iterations.
4.1 Simulation
We generate n = 2000 time series and use half for training and the other half for testing and comparing the different algorithms. We train all the network’s parameters jointly – those to approximate the gradient for each iteration along with those to define the inner proximal operator. The full training process is described in Appendix A. We set the length of the source signals (u_i)_{i=1}^n \in \mathbb{R}^{n \times k} to k = 8 with a support of |S| = 2 non-zero coefficients (larger dimensions will be showcased in the real data application). We generate A \in \mathbb{R}^{m \times k} as a Gaussian matrix with m = 5, obtaining measurements (x_i)_{i=1}^n \in \mathbb{R}^{n \times m}. Moreover, we add Gaussian noise to the measurements x_i = A u_i with a signal-to-noise ratio (SNR) of 1.0.
We compare our proposed methods, the LPGD-Taut network and the LPGD-LISTA with T_{in} = 50 inner layers, to PGD and Accelerated PGD with the analysis formulation. For completeness, we also add the FISTA algorithm for the synthesis formulation in order to illustrate Proposition 2.1, along with its learned version.
Figure 3 presents the risk (or expected function value, P) of each algorithm as a function of the number of layers or, equivalently, iterations. For the learned algorithms, the value at t displays the performance of a network with t layers trained specifically. We observe that all the synthesis formulation algorithms are slower than their analysis counterparts, empirically validating Proposition 2.1.
1Available at https://github.com/albarji/proxTV 2Available at https://github.com/hcherkaoui/carpet.
Moreover, both of the proposed methods accelerate the resolution of (20) in a low iteration regime. However, when the regularization parameter is high (λ = 0.8), we observe that the performance of LPGD-LISTA tends to plateau. It is possible that such a high level of sparsity requires more than 50 layers for the inner network (which computes the prox-TV). According to Section 3.2, the error associated with this proximity step hinders the global convergence, making the loss function decrease slowly. Increasing the number of inner layers would alleviate this issue, though at the expense of an increased computational burden for both training and runtime. For LPGD-Taut, while the taut-string algorithm ensures that the recovered support is exact for the proximal step, the overall support can be badly estimated in the first iterations. This can lead to un-informative gradients, as they greatly depend on the support of the solution in this case, and explains the reduced performance of the network in the high sparsity setting.
Inexact prox-TV With the same data (x_i)_{i=1}^n \in \mathbb{R}^{n \times m}, we empirically investigate the error of the prox-TV, \epsilon_k^{(t)} = F_{u^{(t)}}(z^{(t)}) - F_{u^{(t)}}(z^*), and evaluate it for different numbers of inner layers (T_{in} \in [20, 50]). We also investigate the case where the parameters of the nested LISTA in LPGD-LISTA are trained, compared to their initialization in the untrained version.
Figure 4 depicts the error \epsilon_k for each layer. We see that learning the parameters of the unrolled prox-TV in LPGD-LISTA barely improves the performance. More interestingly, we observe that in a high sparsity setting the error sharply increases after a certain number of layers. This is likely caused by the high sparsity of the estimates: the small number of iterations of the inner network (between 20 and 50) is insufficient to obtain an accurate solution to the proximal operator. This is in accordance with inexact PGD theory, which predicts that such an algorithm has no exact convergence guarantees (Schmidt et al., 2011).
4.2 fMRI data deconvolution
Functional magnetic resonance imaging (fMRI) is a non-invasive method for recording the brain activity by dynamically measuring the blood oxygenation level-dependent (BOLD) contrast, denoted here x. The latter reflects the local changes in the deoxyhemoglobin concentration in the brain (Ogawa et al., 1992) and thus indirectly measures neural activity through the neurovascular coupling. This coupling is usually modelled as a linear and time-invariant system and characterized by its impulse response, the so-called haemodynamic response function (HRF), denoted here h. Recent developments propose to estimate either the neural activity signal independently (Fikret et al., 2013; Cherkaoui et al., 2019b) or jointly with the HRF (Cherkaoui et al., 2019a; Farouj et al., 2019). Estimating the neural activity signal with a fixed HRF is akin to a deconvolution problem regularized with the TV-norm,
\min_{u \in \mathbb{R}^k} \; P(u) = \frac{1}{2} \|h * u - x\|_2^2 + \lambda \|u\|_{TV}. \quad (15)
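The deconvolution problem (15) fits the same analysis template, with A replaced by convolution with the HRF. A short sketch (assuming a given HRF vector, a zero-padded convolution, and reusing a PGD solver such as the one sketched in Section 2; boundary conventions may differ from the authors' pipeline) builds the corresponding Toeplitz matrix.

```python
import numpy as np
from scipy.linalg import toeplitz

def hrf_convolution_matrix(hrf, k):
    """Toeplitz matrix A such that A @ u equals the zero-padded convolution hrf * u."""
    col = np.r_[hrf, np.zeros(k - 1)]
    row = np.r_[hrf[:1], np.zeros(k - 1)]
    return toeplitz(col, row)

# Usage sketch: with x the BOLD time series (length matching A's rows),
# u_hat = pgd_analysis(hrf_convolution_matrix(hrf, k), x, lam)
```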
To demonstrate the usefulness of our approach with real data, where the training set does not have the exact same distribution as the testing set, we compare LPGD-Taut to Accelerated PGD for the analysis formulation on this deconvolution problem. We choose two subjects from the UK Bio Bank (UKBB) dataset (Sudlow et al., 2015), perform the usual fMRI processing and reduce the dimension of the problem to retain only 8000 time series of 250 time-frames, corresponding to a record of 3 minutes and 3 seconds. The full preprocessing pipeline is described in Appendix B. We train
the LPGD taut-string network solver on the first subject and Figure 5 reports the performance of the two algorithms on the second subject for λ = 0.1. The performance is reported relative to the number of iterations, as the computational complexity of each iteration or layer is equivalent for both methods. It is clear that LPGD-Taut converges faster than the Accelerated PGD even on real data. In particular, the acceleration is higher when the regularization parameter λ is smaller. As mentioned previously, this acceleration is likely to be caused by the better learning capacity of the network in a low sparsity context. The same experiment is repeated for λ = 0.8 in Figure C.1.
5 Conclusion
This paper studies the optimization of TV-regularised problems via learned PGD. We demonstrated, both analytically and numerically, that it is better to address these problems in their original analysis formulation rather than resort to the simpler (alas slower) synthesis version. We then proposed two different algorithms that allow for the efficient computation and derivation of the required prox-TV, exactly or approximately. Our experiments on synthetic and real data demonstrate that our learned networks for prox-TV provide a significant advantage in convergence speed.
Finally, we believe that the principles presented in this paper could be generalized and deployed in other optimization problems, involving not just the TV-norm but more general analysis-type priors. In particular, this paper only applies to 1D TV problems, because the equivalence between the Lasso and TV is not exact in higher dimension. In this case, we believe exploiting a dual formulation (Chambolle, 2004) for the problem could allow us to derive similar learnable algorithms.
Broader Impact
This work attempts to shed some understanding into empirical phenomena in signal processing – in our case, piecewise constant approximations. As such, it is our hope that this work encourages fellow researchers to invest in the study and development of principled machine learning tools. Besides these, we do not foresee any other immediate societal consequences.
Acknowledgement
We gratefully acknowledge discussions with Pierre Ablin, whose suggestions helped us completing some parts of the proofs. H. Cherkaoui is supported by a CEA PhD scholarship. J. Sulam is partially supported by NSF Grant 2007649. | 1. What is the focus of the paper in terms of the problem it addresses?
2. What are the strengths of the proposed approach, particularly in its ability to improve performance?
3. What are the weaknesses of the paper regarding its contributions and potential impact? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper considers the problem of accelerating TV-regularized problems. The paper first shows that assuming random Gaussian design, the analysis formulation might converge faster than the synthesis formulation based on the usual convergence rate for PGD. The paper then proposes two methods to accelerate the analysis formulation, using unrolling as done in LISTA. With regular lasso, the proximal operator is soft thresholding and back propagation is easy. With TV, backpropagation is a bit more difficult, and the paper proposes two alternatives. Numerical experiments are performed both on synthetic and real fMRI data, where the methods are compared.
Strengths
TV regularization is a problem of interest to the community, and the proposed methods are interesting. In particular, using LISTA internally seems to lead to much better performance in low sparsity regimes. Code is available as part of a general toolbox for TV-regularized problems, which makes it easy to use and disseminate.
Weaknesses
The contribution is mostly a performance improvement in terms of number of iterations/run time to solve the TV-regularized inverse problems. I am not familiar with applications enough to assess whether this would have a significant impact. |
NIPS | Title
Learning to solve TV regularised problems with unrolled algorithms
Abstract
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals by constraining the `1-norm of the first order derivative of the estimated signal. The resulting optimization problem is usually solved using iterative algorithms such as proximal gradient descent, primal-dual algorithms or ADMM. However, such methods can require a very large number of iterations to converge to a suitable solution. In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems. While this could be done using the synthesis formulation, we demonstrate that this leads to slower performances. The main difficulty in applying such methods in the analysis formulation lies in proposing a way to compute the derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures. We validate those findings with experiments on synthetic and real data.
1 Introduction
Ill-posed inverse problems appear naturally in signal and image processing and machine learning, requiring extra regularization techniques. Total Variation (TV) is a popular regularization strategy with a long history (Rudin et al., 1992), and has found a large number of applications in neuro-imaging (Fikret et al., 2013), medical imaging reconstruction (Tian et al., 2011), among myriad applications (Rodríguez, 2013; Darbon and Sigelle, 2006). TV promotes piece-wise constant estimates by penalizing the `1-norm of the first order derivative of the estimated signal, and it provides a simple, yet efficient regularization technique.
TV-regularized problems are typically convex, and so a wide variety of algorithms are in principle applicable. Since the `1 norm in the TV term is non-smooth, Proximal Gradient Descent (PGD) is the most popular choice (Rockafellar, 1976). Yet, the computation for the corresponding proximal operator (denoted prox-TV) represents a major difficulty in this case as it does not have a closed-form analytic solution. For 1D problems, it is possible to rely on dynamic programming to compute proxTV, such as the taut string algorithm (Davies and Kovac, 2001; Condat, 2013a). Another alternative consists in computing the proximal operator with iterative first order algorithm (Chambolle, 2004; Beck and Teboulle, 2009; Boyd et al., 2011; Condat, 2013b). Other algorithms to solve TV-regularized
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
problems rely on primal dual algorithms (Chambolle and Pock, 2011; Condat, 2013b) or Alternating Direction Method of Multipliers (ADMM) (Boyd et al., 2011). These algorithms typically use one sequence of estimates for each term in the objective and try to make them as close as possible while minimizing the associated term. While these algorithms are efficient for denoising problems – where one is mainly concerned with good reconstruction – they can result in estimate that are not very well regularized if the two sequences are not close enough.
When working with a fixed computational budget, iterative optimization methods can become impractical as they often require many iterations to give a satisfactory estimate. To accelerate the resolution of these problems with a finite (and small) number of iterations, one can resort to unrolled and learned optimization algorithms (see Monga et al. 2019 for a review). In their seminal work, Gregor and Le Cun (2010) proposed the Learned ISTA (LISTA), where the parameters of an unfolded Iterative Shrinkage-Thresholding Algorithm (ISTA) are learned with gradient descent and back-propagation. This makes it possible to accelerate the approximate solution of a Lasso problem (Tibshirani, 1996), with a fixed number of iterations, for signals from a certain distribution. The core principle behind the success of this approach is that the network parameters can adaptively leverage the sensing matrix structure (Moreau and Bruna, 2017) as well as the input distribution (Giryes et al., 2018; Ablin et al., 2019). Many extensions of this original idea have been proposed to learn different algorithms (Sprechmann et al., 2012, 2013; Borgerding et al., 2017) or to address different classes of problems (Xin et al., 2016; Giryes et al., 2018; Sulam et al., 2019). The common thread in most of these adaptations is that all operations in the learned algorithms are either linear or separable, thus resulting in sub-differentials that are easy to compute and implement via back-propagation. Algorithm unrolling is also used in the context of bi-level optimization problems such as hyper-parameter selection. Here, the unrolled architecture provides a way to compute the derivative of the inner optimization problem's solution with respect to another variable, such as the regularisation parameter, using back-propagation (Bertrand et al., 2020).
The focus of this paper is to apply algorithm unrolling to TV-regularized problems in the 1D case. While one could indeed apply the LISTA approach directly to the synthesis formulation of these problems, we show in this paper that such a formulation leads to slower iterative or learned algorithms compared to their analysis counterparts. The extension of learnable algorithms to the analysis formulation is not trivial, as the inner proximal operator does not have an analytical or separable expression. We propose two architectures that can learn TV-solvers in their analysis form directly based on PGD. The first architecture uses an exact algorithm to compute the prox-TV and we derive the formulation of its weak Jacobian in order to learn the network’s parameters. Our second method relies on a nested LISTA network in order to approximate the prox-TV itself in a differentiable way. This latter approach can be linked to inexact proximal gradient methods (Schmidt et al., 2011; Machart et al., 2012). These results are backed with numerical experiments on synthetic and real data. Concurrently to our work, Lecouat et al. (2020) also proposed an approach to differentiate the solution of TV-regularized problems. While their work can be applied in the context of 2D signals, they rely on smoothing the regularization term using Moreau-Yosida regularization, which results in smoother estimates from their learned networks. In contrast, our work makes it possible to compute sharper signals but can only be applied to 1D signals.
The rest of the paper is organized as follows. In Section 2, we describe the different formulations for TV-regularized problems and their complexity. We also recall central ideas of algorithm unfolding. Section 3 introduces our two approaches for learnable network architectures based on PGD. Finally, the two proposed methods are evaluated on real and synthetic data in Section 4.
Notations For a vector x ∈ R^k, we denote by ‖x‖_q its ℓ_q-norm. For a matrix A ∈ R^{m×k}, we denote by ‖A‖_2 its ℓ_2-norm, which corresponds to its largest singular value, and A^† denotes its pseudoinverse. For an ordered subset of indices S ⊂ {1, . . . , k}, x_S denotes the vector in R^{|S|} with elements (x_S)_t = x_{i_t} for i_t ∈ S. For a matrix A ∈ R^{m×k}, A_{:,S} denotes the sub-matrix [A_{:,i_1}, . . . , A_{:,i_{|S|}}] composed of the columns A_{:,i_t} of A with index i_t ∈ S. For the rest of the paper, we refer to the operators D ∈ R^{(k−1)×k}, D̃ ∈ R^{k×k}, L ∈ R^{k×k} and R ∈ R^{k×k} defined as:
D = \begin{pmatrix} -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -1 & 1 \end{pmatrix}, \quad
\widetilde{D} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ -1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & -1 & 1 \end{pmatrix}, \quad
L = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 1 & \cdots & 1 & 1 \end{pmatrix}, \quad
R = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 1 \end{pmatrix}.
2 Solving TV-regularized problems
We begin by detailing the TV-regularized problem that will be the main focus of our work. Consider a latent vector u ∈ R^k, a design matrix A ∈ R^{m×k} and the corresponding observation x ∈ R^m. The original formulation of the TV-regularized regression problem is referred to as the analysis formulation (Rudin et al., 1992). For a given regularization parameter λ > 0, it reads
\min_{u \in \mathbb{R}^k} P(u) = \frac{1}{2} \|x - Au\|_2^2 + \lambda \|u\|_{TV}, \qquad (1)
where ‖u‖_TV = ‖Du‖_1, and D ∈ R^{(k−1)×k} stands for the first-order finite difference operator, as defined above. The problem in (1) can be seen as a special case of a Generalized Lasso problem (Tibshirani and Taylor, 2011); one in which the analysis operator is D. Note that problem P is convex, but the TV-norm is non-smooth. In these cases, a practical alternative is PGD, which iterates between a gradient descent step and the prox-TV. This algorithm’s iterates read
u^{(t+1)} = \operatorname{prox}_{\frac{\lambda}{\rho}\|\cdot\|_{TV}}\left( u^{(t)} - \frac{1}{\rho} A^\top (A u^{(t)} - x) \right), \qquad (2)

where \rho = \|A\|_2^2 and the prox-TV is defined as

\operatorname{prox}_{\mu\|\cdot\|_{TV}}(y) = \arg\min_{u \in \mathbb{R}^k} F_y(u) = \frac{1}{2} \|y - u\|_2^2 + \mu \|u\|_{TV}. \qquad (3)
Problem (3) does not have a closed-form solution, and one needs to resort to iterative techniques to compute it. In our case, as the problem is 1D, the prox-TV problem can be addressed with a dynamic programming approach, such as the taut-string algorithm (Condat, 2013a). This scales as O(k) in all practical situations and is thus much more efficient than other optimization-based iterative algorithms (Rockafellar, 1976; Chambolle, 2004; Condat, 2013b), for which each iteration is O(k^2) at best.
With a generic matrix A ∈ Rm×k, the PGD algorithm is known to have a sublinear convergence rate (Combettes and Bauschke, 2011). More precisely, for any initialization u(0) and solution u∗, the iterates satisfy
P(u^{(t)}) - P(u^*) \le \frac{\rho}{2t} \|u^{(0)} - u^*\|_2^2, \qquad (4)
where u^* is a solution of the problem in (1). Note that the constant ρ can have a significant effect. Indeed, it is clear from (4) that doubling ρ requires doubling the number of iterations to reach the same accuracy.
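For reference, iteration (2) takes only a few lines of code. The sketch below is a minimal illustration (not the authors' implementation) that treats the prox-TV as a black-box callable, for instance a taut-string solver such as the one provided by the prox_tv package.

```python
import numpy as np

def pgd_analysis(A, x, lmbd, prox_tv_1d, n_iter=100):
    """PGD on P(u) = 0.5 * ||x - A u||_2^2 + lmbd * ||u||_TV (analysis formulation).

    `prox_tv_1d(y, mu)` is assumed to return argmin_u 0.5*||y - u||^2 + mu*||u||_TV,
    e.g. via the taut-string algorithm.
    """
    k = A.shape[1]
    rho = np.linalg.norm(A, ord=2) ** 2              # Lipschitz constant of the data-fit gradient
    u = np.zeros(k)
    for _ in range(n_iter):
        grad = A.T @ (A @ u - x)                      # gradient of the smooth term
        u = prox_tv_1d(u - grad / rho, lmbd / rho)    # proximal step, eq. (2)
    return u
```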
2.1 Synthesis formulation
An alternative formulation for TV-regularized problems relies on removing the analysis operator D from the `1-norm and translating it into a synthesis expression (Elad et al., 2007). Removing D from the non-smooth term simplifies the expression of the proximal operator by making it separable, as in the Lasso. The operator D is not directly invertible but keeping the first value of the vector u allows for perfect reconstruction. This motivates the definition of the operator D̃ ∈ Rk×k, and its inverse L ∈ Rk×k, as defined previously. Naturally, L is the discrete integration operator. Considering the change of variable z = D̃u, and using the operator R ∈ Rk×k, the problem in (1) is equivalent to
\min_{z \in \mathbb{R}^k} S(z) = \frac{1}{2} \|x - ALz\|_2^2 + \lambda \|Rz\|_1. \qquad (5)
Note that for any z ∈ R^k, S(z) = P(Lz). There is thus an exact equivalence between solutions of the synthesis and the analysis formulations, and the solution for the analysis formulation can be obtained as u^* = Lz^*. The benefit of this formulation is that the problem above now reduces to a Lasso problem (Tibshirani, 1996). In this case, the PGD algorithm reduces to ISTA with a closed-form proximal operator (the soft-thresholding). Note that this simple formulation is only possible in 1D, where the first-order derivative space is unconstrained. In larger dimensions, the derivative must be constrained to verify Fubini’s formula, which enforces the symmetry of integration over dimensions. While it is also possible to derive a synthesis formulation in higher dimensions (Elad et al., 2007), this does not lead to a simple proximal operator.
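In this synthesis form the proximal step is a plain soft-thresholding on the penalized coordinates, so ISTA applies directly. A minimal sketch (our own illustration) of ISTA on (5), mapping back to the analysis variable at the end:

```python
import numpy as np

def soft_thresholding(z, mu):
    return np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)

def ista_synthesis(A, x, lmbd, n_iter=100):
    """ISTA on S(z) = 0.5 * ||x - A L z||_2^2 + lmbd * ||R z||_1, eq. (5)."""
    m, k = A.shape
    L = np.tril(np.ones((k, k)))                      # discrete integration operator
    B = A @ L
    rho = np.linalg.norm(B, ord=2) ** 2               # note: this is rho_tilde = ||AL||_2^2
    z = np.zeros(k)
    for _ in range(n_iter):
        z = z - B.T @ (B @ z - x) / rho               # gradient step on the smooth term
        z[1:] = soft_thresholding(z[1:], lmbd / rho)  # prox of lmbd*||Rz||_1; first coordinate unpenalized
    return L @ z                                      # recover u = L z
```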
For this synthesis formulation, with a generic matrix A ∈ R^{m×k}, the PGD algorithm also has a sublinear convergence rate (Beck and Teboulle, 2009), such that

P(u^{(t)}) - P(u^*) \le \frac{2\tilde{\rho}}{t} \|u^{(0)} - u^*\|_2^2, \qquad (6)

with \tilde{\rho} = \|AL\|_2^2 (see Subsection F.1 for the full derivation). While the rate of this algorithm is the same as in the analysis formulation – in O(1/t) – the constant \tilde{\rho} related to the operator norm differs. We now present two results that characterize the value of \tilde{\rho}.
Proposition 2.1 (Lower bound for the expectation of the ratio \|AL\|_2^2 / \|A\|_2^2). Let A be a random matrix in R^{m×k} with i.i.d. normally distributed entries. The expectation of \|AL\|_2^2 / \|A\|_2^2 is asymptotically lower bounded, as k tends to ∞, by

\mathbb{E}\left[ \frac{\|AL\|_2^2}{\|A\|_2^2} \right] \ge \frac{2k+1}{4\pi^2} + o(1).
The full proof can be found in Subsection F.3. The lower bound is constructed by using A^\top A \succeq \|A\|_2^2 u_1 u_1^\top for a unit leading eigenvector u_1 and computing explicitly the expectation for rank-one matrices. To assess the tightness of this bound, we evaluated \mathbb{E}\big[ \|AL\|_2^2 / \|A\|_2^2 \big] numerically on a set of 1000 matrices sampled with i.i.d. normally distributed entries. The results are displayed w.r.t. the dimension k in Figure 1. It is clear that the lower bound from Proposition 2.1 is not tight. This is expected, as we consider only the leading eigenvector of A to derive it in the proof. The following conjecture gives a tighter bound.
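This numerical check is easy to reproduce with a short Monte-Carlo estimate; the sketch below uses arbitrary dimensions and sample counts rather than the exact protocol of Figure 1.

```python
import numpy as np

def ratio_expectation(m, k, n_samples=1000, seed=0):
    """Monte-Carlo estimate of E[||AL||_2^2 / ||A||_2^2] for Gaussian matrices A."""
    rng = np.random.default_rng(seed)
    L = np.tril(np.ones((k, k)))
    ratios = []
    for _ in range(n_samples):
        A = rng.standard_normal((m, k))
        ratios.append(np.linalg.norm(A @ L, 2) ** 2 / np.linalg.norm(A, 2) ** 2)
    return np.mean(ratios)

if __name__ == "__main__":
    k = 100
    estimate = ratio_expectation(m=50, k=k)
    lower_bound = (2 * k + 1) / (4 * np.pi ** 2)      # bound of Proposition 2.1
    print(f"empirical ratio: {estimate:.1f}, lower bound: {lower_bound:.1f}")
```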
Conjecture 2.2 (Expectation of the ratio \|AL\|_2^2 / \|A\|_2^2). Under the same conditions as in Proposition 2.1, the expectation of \|AL\|_2^2 / \|A\|_2^2 is given by

\mathbb{E}\left[ \frac{\|AL\|_2^2}{\|A\|_2^2} \right] = \frac{(2k+1)^2}{16\pi^2} + o(1).
We believe this conjecture can potentially be proven with developments analogous to those in Proposition 2.1, but integrating over all dimensions. However, a main difficulty lies in the fact that the integration over all eigenvectors has to be carried out jointly, as they are not independent. This is the subject of ongoing work.
Finally, we can expect that \tilde{\rho}/\rho scales as Θ(k^2). This leads to the observation that \tilde{\rho} \gg \rho in large enough dimensions. As a result, the analysis formulation should be much more efficient in terms of iterations than the synthesis formulation – as long as the prox-TV can be dealt with efficiently.
2.2 Unrolled iterative algorithms
As shown by Gregor and Le Cun (2010), ISTA is equivalent to a recurrent neural network (RNN) with a particular structure. This observation can be generalized to PGD algorithms for any penalized least squares problem of the form
u^*(x) = \arg\min_u L(x, u) = \frac{1}{2} \|x - Bu\|_2^2 + \lambda g(u), \qquad (7)
where g is proper and convex, as depicted in Figure 2a. By unrolling this architecture with T layers, we obtain a network \phi_{\Theta^{(T)}}(x) = u^{(T)} – illustrated in Figure 2b – with parameters \Theta^{(T)} = \{W_x^{(t)}, W_u^{(t)}, \mu^{(t)}\}_{t=1}^T, defined by the following recursion
u^{(0)} = B^\dagger x~; \qquad u^{(t)} = \operatorname{prox}_{\mu^{(t)} g}\big(W_x^{(t)} x + W_u^{(t)} u^{(t-1)}\big). \qquad (8)
As underlined by (4), a good estimate u^{(0)} is crucial in order to have fast convergence toward u^*(x). However, the importance of this initialization is mitigated by the first layer of the network, which learns to set a good initial guess for u^{(1)}. For a network with T layers, one recovers exactly the T-th iteration of PGD if the weights are chosen constant and equal to
W_x^{(t)} = \frac{1}{\rho} B^\top, \qquad W_u^{(t)} = \mathrm{Id} - \frac{1}{\rho} B^\top B, \qquad \mu^{(t)} = \frac{\lambda}{\rho}, \qquad \text{with } \rho = \|B\|_2^2. \qquad (9)
In practice, this choice of parameters is used as the initialization for a subsequent training stage. In many practical applications, one is interested in minimizing the loss (7) for a fixed B and a particular distribution P over the space of x. As a result, the goal of this training stage is to find parameters \Theta^{(T)} that minimize the risk, or expected loss, E[L(x, \phi_{\Theta^{(T)}}(x))] over P. Since one does not have access to this distribution, and following an empirical risk minimization approach with a given training set \{x_1, \ldots, x_N\} (assumed sampled i.i.d. from P), the network is trained by minimizing
\min_{\Theta^{(T)}} \frac{1}{N} \sum_{i=1}^{N} L\big(x_i, \phi_{\Theta^{(T)}}(x_i)\big). \qquad (10)
Note that when T → +∞, the initialization presented in (9) gives a global minimizer of the loss for all x_i, as the network converges to exact PGD. When T is fixed, however, the output of the network is not a minimizer of (7) in general. Minimizing this empirical risk can therefore find a weight configuration that reduces the sub-optimality of the network relative to (7) over the input distribution used to train the network. In such a way, the network learns an algorithm to approximate the solution of (7) for a particular class or distribution of signals. It is important to note here that while this procedure can accelerate the resolution of the problem, the learned algorithm will only be valid for inputs x_i coming from the same input distribution P as the training samples. The algorithm might not converge for samples which are too different from the training set, unlike the iterative algorithm, which is guaranteed to converge for any sample.
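As an illustration of (8)–(10), a compact PyTorch module for the unrolled network with the PGD initialization (9) could look as follows. This is a sketch for the generic separable case where the prox is a plain soft-thresholding (i.e. the Lasso/LISTA setting), not the authors' released code; training then amounts to minimizing the empirical risk (10) with a standard optimizer over these parameters.

```python
import torch
import torch.nn as nn

class UnrolledPGD(nn.Module):
    """T unrolled ISTA/PGD layers for 0.5*||x - B u||^2 + lmbd*||u||_1, following eqs. (8)-(9)."""

    def __init__(self, B, lmbd, n_layers):
        super().__init__()
        B = torch.as_tensor(B, dtype=torch.float32)
        rho = torch.linalg.matrix_norm(B, ord=2).item() ** 2
        k = B.shape[1]
        self.register_buffer("B_pinv", torch.linalg.pinv(B))
        # One set of learnable parameters per layer, initialized to the PGD values of eq. (9).
        self.Wx = nn.ParameterList([nn.Parameter(B.t() / rho) for _ in range(n_layers)])
        self.Wu = nn.ParameterList([nn.Parameter(torch.eye(k) - B.t() @ B / rho) for _ in range(n_layers)])
        self.mu = nn.ParameterList([nn.Parameter(torch.tensor(lmbd / rho)) for _ in range(n_layers)])

    def forward(self, x):                                  # x: (batch, m)
        u = x @ self.B_pinv.t()                            # u^(0) = B^+ x
        for Wx, Wu, mu in zip(self.Wx, self.Wu, self.mu):
            h = x @ Wx.t() + u @ Wu.t()
            u = torch.sign(h) * torch.relu(h.abs() - mu)   # soft-thresholding prox
        return u
```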
This network architecture can be directly applied to TV-regularised problems if the synthesis formulation (5) is used. Indeed, in this case PGD reduces to the ISTA algorithm, with B = AL, and \operatorname{prox}_{\mu g} = ST(\cdot, \mu) becomes a simple soft-thresholding operator (which is only applied on the coordinates \{2, \ldots, k\}, following the definition of R). However, as discussed in Proposition 2.1, the conditioning of the synthesis problem makes the estimation of the solution slow, increasing the number of network layers needed to get a good estimate of the solution. In the next section, we will extend these learning-based ideas directly to the analysis formulation by deriving a way to obtain exact and approximate expressions for the sub-differential of the non-separable prox-TV.
3 Back-propagating through TV proximal operator
Our two approaches to define learnable networks based on PGD for TV-regularised problems in the analysis formulation differ in how they compute the prox-TV and its derivatives. Our first approach
consists in directly computing the weak derivatives of the exact proximal operator while the second one uses a differentiable approximation.
3.1 Derivative of prox-TV
While there is no analytic solution to the prox-TV, it can be computed exactly (numerically) for 1D problems using the taut-string algorithm (Condat, 2013a). This operator can thus be applied at each layer of the network, reproducing the architecture described in Figure 2b. We define the LPGD-Taut network φΘ(T )(x) with the following recursion formula
\phi_{\Theta^{(T)}}(x) = \operatorname{prox}_{\mu^{(T)} \|\cdot\|_{TV}}\big( W_x^{(T)} x + W_u^{(T)} \phi_{\Theta^{(T-1)}}(x) \big). \qquad (11)
To be able to learn the parameters through gradient descent, one needs to compute the derivatives of (10) w.r.t. the parameters \Theta^{(T)}. Denoting h = W_x^{(t)} x + W_u^{(t)} \phi_{\Theta^{(t-1)}}(x) and u = \operatorname{prox}_{\mu^{(t)} \|\cdot\|_{TV}}(h), the application of the chain rule (as implemented efficiently by automatic differentiation) results in

\frac{\partial L}{\partial h} = J_x(h, \mu^{(t)})^\top \frac{\partial L}{\partial u}, \quad \text{and} \quad \frac{\partial L}{\partial \mu^{(t)}} = J_\mu(h, \mu^{(t)})^\top \frac{\partial L}{\partial u}, \qquad (12)
where J_x(h, \mu) ∈ R^{k×k} and J_\mu(h, \mu) ∈ R^{k×1} denote the weak Jacobians of the output u of the proximal operator with respect to the first and second input, respectively. We now give the analytic formulation of these weak Jacobians in the following proposition.

Proposition 3.1 (Weak Jacobian of prox-TV). Let x ∈ R^k and u = \operatorname{prox}_{\mu\|\cdot\|_{TV}}(x), and denote by S the support of z = \widetilde{D}u. Then, the weak Jacobians J_x and J_\mu of the prox-TV relative to x and \mu can be computed as

J_x(x, \mu) = L_{:,S}(L_{:,S}^\top L_{:,S})^{-1} L_{:,S}^\top \quad \text{and} \quad J_\mu(x, \mu) = -L_{:,S}(L_{:,S}^\top L_{:,S})^{-1} \operatorname{sign}(Du)_S.

The proof of this proposition can be found in Subsection G.1. Note that the dependency on the inputs is only through S and sign(Du), where u is a short-hand for \operatorname{prox}_{\mu\|\cdot\|_{TV}}(x). As a result, these weak Jacobians can be computed efficiently by simply storing sign(Du) as a mask, as would be done for a ReLU or soft-thresholding activation, requiring just 2(k − 1) bits. With these expressions, it is thus possible to compute gradients with respect to all parameters in the network and employ them via back-propagation.
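In PyTorch, Proposition 3.1 can be wired into back-propagation via a custom autograd Function. The sketch below (our own illustration) assumes an external exact 1D prox-TV solver `prox_solver`, e.g. a taut-string routine, and only implements the vector-Jacobian product with J_x; the gradient w.r.t. μ follows analogously from the second formula.

```python
import torch

class ProxTV1D(torch.autograd.Function):
    """Exact 1D prox-TV whose backward pass uses the weak Jacobian J_x of Proposition 3.1."""

    @staticmethod
    def forward(ctx, h, mu, prox_solver):
        # `prox_solver(h, mu)` is assumed to be an exact solver returning a tensor of shape (k,).
        u = prox_solver(h.detach(), float(mu))
        ctx.save_for_backward(u)
        return u

    @staticmethod
    def backward(ctx, grad_u):
        (u,) = ctx.saved_tensors
        k = u.shape[0]
        # z = D_tilde u: the first sample followed by the finite differences of u.
        z = torch.cat([u[:1], u[1:] - u[:-1]])
        S = torch.nonzero(z.abs() > 1e-8).flatten()        # support of D_tilde u
        L = torch.tril(torch.ones(k, k, dtype=u.dtype))    # integration operator
        L_S = L[:, S]
        # J_x = L_S (L_S^T L_S)^{-1} L_S^T is the orthogonal projection onto span(L_S);
        # it is symmetric, so the vector-Jacobian product is simply J_x applied to grad_u.
        J_x = L_S @ torch.linalg.solve(L_S.t() @ L_S, L_S.t())
        return J_x @ grad_u, None, None

# usage: u = ProxTV1D.apply(h, mu, my_taut_string_solver)
```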
3.2 Unrolled prox-TV
As an alternative to the previous approach, we propose to use a LISTA network to approximate the prox-TV (3). The prox-TV can be reformulated with a synthesis approach, resulting in a Lasso problem, i.e.

z^* = \arg\min_{z} \frac{1}{2} \|h - Lz\|_2^2 + \mu \|Rz\|_1. \qquad (13)
The proximal operator solution can then be retrieved with \operatorname{prox}_{\mu\|\cdot\|_{TV}}(h) = Lz^*. This problem can be solved using ISTA, and approximated efficiently with a LISTA network (Gregor and Le Cun, 2010). For the resulting architecture – dubbed LPGD-LISTA – \operatorname{prox}_{\mu\|\cdot\|_{TV}}(h) is replaced by a nested LISTA network with a fixed number of layers T_{in}, defined recursively with z^{(0)} = Dh and

z^{(\ell+1)} = \operatorname{ST}\left( W_z^{(\ell,t)} z^{(\ell)} + W_h^{(\ell,t)} \Phi_{\Theta^{(t)}},\ \frac{\mu^{(\ell,t)}}{\rho} \right). \qquad (14)
Here, W_z^{(\ell,t)}, W_h^{(\ell,t)} and \mu^{(\ell,t)} are the weights of the nested LISTA network for layer \ell. They are initialized with the weights chosen as in (9) to ensure that the initial state approximates the prox-TV. Note that the weights of each of these inner layers are also learned through back-propagation during training.
The choice of this architecture provides a differentiable (approximate) proximal operator. Indeed, the LISTA network is composed only of linear and soft-thresholding layers – standard tools in deep-learning libraries. The gradient of the network’s parameters can thus be computed using classic automatic differentiation. Moreover, if the inner network is not trained, the gradient computed with this method converges toward the gradient computed using Proposition 3.1 as T_{in} goes to ∞ (see Proposition G.2). Thus, in this untrained setting with infinitely many inner layers, the network is equivalent to LPGD-Taut, as the output of the layer also converges toward the exact proximal operator.
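A minimal sketch of such an inner network (ours, with weights shared across the inner layers and fixed to their ISTA values for brevity; wrapping them in trainable parameters per layer recovers the LPGD-LISTA variant) that approximates prox_{μ‖·‖_TV}(h) through (13)–(14):

```python
import torch
import torch.nn as nn

class UnrolledProxTV(nn.Module):
    """Approximates prox_{mu*||.||_TV}(h) with T_in unrolled ISTA steps on the Lasso (13)."""

    def __init__(self, k, n_inner_layers=50):
        super().__init__()
        L = torch.tril(torch.ones(k, k))
        rho = torch.linalg.matrix_norm(L, ord=2).item() ** 2
        self.register_buffer("L", L)
        self.register_buffer("D_tilde", torch.inverse(L))
        # Inner weights at their untrained (ISTA) values; make them nn.Parameter to learn them.
        self.register_buffer("Wz", torch.eye(k) - L.t() @ L / rho)
        self.register_buffer("Wh", L.t() / rho)
        self.rho = rho
        self.n_inner_layers = n_inner_layers

    def forward(self, h, mu):                        # h: (k,), mu: scalar
        z = self.D_tilde @ h                         # initial point z^(0)
        for _ in range(self.n_inner_layers):
            v = self.Wz @ z + self.Wh @ h            # gradient step on 0.5*||h - L z||^2
            z = torch.cat([v[:1],                    # first coordinate unpenalized (R zeroes it)
                           torch.sign(v[1:]) * torch.relu(v[1:].abs() - mu / self.rho)])
        return self.L @ z                            # prox estimate: u = L z
```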
Connections to inexact PGD A drawback of approximating the prox-TV via an iterative procedure is, precisely, that it is not exact. This optimization error results from a trade-off between computational cost and convergence rate. Using results from Machart et al. (2012), one can compute the scaling of T and T_{in} needed to reach an error level of δ with an untrained network. Proposition G.3 shows that, without learning, T should scale as O(1/t) and T_{in} should be larger than O(ln(1/δ)). This scaling gives potential guidelines to set these parameters, as one can expect that learning the parameters of the network would reduce these requirements.
4 Experiments
All experiments are performed in Python using PyTorch (Paszke et al., 2019). We used the implementation1 of Barbero and Sra (2018) to compute the TV proximal operator using the taut-string algorithm. The code to reproduce the figures is available online2.
In all experiments, we initialize u_0 = A^\dagger x. Moreover, we employed a normalized λ_reg as the penalty parameter: we first compute the value of λ_max (which is the minimal value for which z = 0 is a solution of (5)) and we refer to λ as the ratio, so that λ_reg = λ λ_max, with λ ∈ [0, 1] (see Appendix D). As the computational complexity of all compared algorithms is the same except for the proximal operator, we compare them in terms of iterations.
4.1 Simulation
We generate n = 2000 time series and use half for training and the other half for testing and comparing the different algorithms. We train all the network’s parameters jointly – those that approximate the gradient for each iteration along with those that define the inner proximal operator. The full training process is described in Appendix A. We set the length of the source signals (u_i)_{i=1}^n ∈ R^{n×k} to k = 8, with a support of |S| = 2 non-zero coefficients (larger dimensions will be showcased in the real data application). We generate A ∈ R^{m×k} as a Gaussian matrix with m = 5, obtaining measurements (x_i)_{i=1}^n ∈ R^{n×m}. Moreover, we add Gaussian noise to the measurements x_i = Au_i with a signal-to-noise ratio (SNR) of 1.0.
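The synthetic data can be generated along the following lines (a rough sketch; the SNR scaling is our own simple convention and the exact protocol is in Appendix A):

```python
import numpy as np

def make_data(n=2000, k=8, m=5, n_nonzero=2, snr=1.0, seed=0):
    """Piece-wise constant sources u_i (sparse first derivative) and noisy measurements x_i = A u_i + noise."""
    rng = np.random.default_rng(seed)
    L = np.tril(np.ones((k, k)))
    Z = np.zeros((n, k))
    for i in range(n):                                # |S| = n_nonzero non-zero jump coefficients
        support = rng.choice(k, size=n_nonzero, replace=False)
        Z[i, support] = rng.standard_normal(n_nonzero)
    U = Z @ L.T                                       # u_i = L z_i
    A = rng.standard_normal((m, k))
    X = U @ A.T
    noise = rng.standard_normal(X.shape)
    noise *= np.linalg.norm(X) / (np.sqrt(snr) * np.linalg.norm(noise))   # set the SNR
    return U, A, X + noise
```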
We compare our proposed methods, the LPGD-Taut network and the LPGD-LISTA network with T_{in} = 50 inner layers, to PGD and Accelerated PGD with the analysis formulation. For completeness, we also add the FISTA algorithm for the synthesis formulation, in order to illustrate Proposition 2.1, along with its learned version.
Figure 3 presents the risk (or expected function value, P) of each algorithm as a function of the number of layers or, equivalently, iterations. For the learned algorithms, the curve at t displays the performance of a network with t layers trained specifically for that depth. We observe that all the synthesis formulation algorithms are slower than their analysis counterparts, empirically validating Proposition 2.1.
1Available at https://github.com/albarji/proxTV 2Available at https://github.com/hcherkaoui/carpet.
Moreover, both of the proposed methods accelerate the resolution of the problem in a low-iteration regime. However, when the regularization parameter is high (λ = 0.8), we observe that the performance of LPGD-LISTA tends to plateau. It is possible that such a high level of sparsity requires more than 50 layers for the inner network (which computes the prox-TV). According to Section 3.2, the error associated with this proximity step hinders the global convergence, making the loss function decrease slowly. Increasing the number of inner layers would alleviate this issue, though at the expense of an increased computational burden for both training and runtime. For LPGD-Taut, while the taut-string algorithm ensures that the recovered support is exact for the proximal step, the overall support can be badly estimated in the first iterations. This can lead to un-informative gradients, as they greatly depend on the support of the solution in this case, and explains the reduced performance of the network in the high sparsity setting.
Inexact prox-TV With the same data (x_i)_{i=1}^n ∈ R^{n×m}, we empirically investigate the error of the prox-TV, \epsilon_k^{(t)} = F_{u^{(t)}}(z^{(t)}) - F_{u^{(t)}}(z^*), and evaluate it for networks with different numbers of layers (T ∈ [20, 50]). We also investigate the case where the parameters of the nested LISTA in LPGD-LISTA are trained, compared to their initialization in the untrained version.
Figure 4 depicts the error \epsilon_k for each layer. We see that learning the parameters of the unrolled prox-TV in LPGD-LISTA barely improves the performance. More interestingly, we observe that in a high sparsity setting the error sharply increases after a certain number of layers. This is likely caused by the high sparsity of the estimates: the small number of iterations of the inner network (between 20 and 50) is insufficient to obtain an accurate solution of the proximal operator. This is in accordance with inexact PGD theory, which predicts that such algorithms have no exact convergence guarantees (Schmidt et al., 2011).
4.2 fMRI data deconvolution
Functional magnetic resonance imaging (fMRI) is a non-invasive method for recording brain activity by dynamically measuring the blood oxygenation level-dependent (BOLD) contrast, denoted here x. The latter reflects the local changes in the deoxyhemoglobin concentration in the brain (Ogawa et al., 1992) and thus indirectly measures neural activity through the neurovascular coupling. This coupling is usually modelled as a linear and time-invariant system and characterized by its impulse response, the so-called haemodynamic response function (HRF), denoted here h. Recent developments propose to estimate either the neural activity signal independently (Fikret et al., 2013; Cherkaoui et al., 2019b) or jointly with the HRF (Cherkaoui et al., 2019a; Farouj et al., 2019). Estimating the neural activity signal with a fixed HRF is akin to a deconvolution problem regularized with the TV-norm,
\min_{u \in \mathbb{R}^k} P(u) = \frac{1}{2} \|h * u - x\|_2^2 + \lambda \|u\|_{TV}. \qquad (15)
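Problem (15) is of the same form as (1), with A the convolution with h; a minimal illustrative sketch of solving it with the analysis PGD, using a simple truncated causal convolution operator rather than the UKBB pipeline:

```python
import numpy as np

def deconvolve_tv(x, h, lmbd, prox_tv_1d, n_iter=200):
    """PGD on 0.5*||h * u - x||_2^2 + lmbd*||u||_TV, with u and x of the same length."""
    k = x.shape[0]
    # Build the truncated (causal) convolution with h as an explicit k x k matrix for simplicity.
    A = np.zeros((k, k))
    for j in range(k):
        length = min(len(h), k - j)
        A[j:j + length, j] = h[:length]
    rho = np.linalg.norm(A, ord=2) ** 2
    u = np.zeros(k)
    for _ in range(n_iter):
        u = prox_tv_1d(u - A.T @ (A @ u - x) / rho, lmbd / rho)   # eq. (2) with this A
    return u
```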
To demonstrate the usefulness of our approach with real data, where the training set does not have exactly the same distribution as the testing set, we compare LPGD-Taut to Accelerated PGD for the analysis formulation on this deconvolution problem. We choose two subjects from the UK Biobank (UKBB) dataset (Sudlow et al., 2015), perform the usual fMRI processing and reduce the dimension of the problem to retain only 8000 time series of 250 time frames, corresponding to a recording of 3 minutes 03 seconds. The full preprocessing pipeline is described in Appendix B. We train
the LPGD taut-string network solver on the first subject, and Figure 5 reports the performance of the two algorithms on the second subject for λ = 0.1. The performance is reported relative to the number of iterations, as the computational complexity of each iteration or layer is equivalent for both methods. It is clear that LPGD-Taut converges faster than Accelerated PGD even on real data. In particular, the acceleration is higher when the regularization parameter λ is smaller. As mentioned previously, this acceleration is likely caused by the better learning capacity of the network in a low sparsity context. The same experiment is repeated for λ = 0.8 in Figure C.1.
5 Conclusion
This paper studies the optimization of TV-regularised problems via learned PGD. We demonstrated, both analytically and numerically, that it is better to address these problems in their original analysis formulation rather than resort to the simpler (alas slower) synthesis version. We then proposed two different algorithms that allow for the efficient computation and derivation of the required prox-TV, exactly or approximately. Our experiments on synthetic and real data demonstrate that our learned networks for prox-TV provide a significant advantage in convergence speed.
Finally, we believe that the principles presented in this paper could be generalized and deployed in other optimization problems, involving not just the TV-norm but more general analysis-type priors. In particular, this paper only applies to 1D TV problems, because the equivalence between the Lasso and TV is not exact in higher dimensions. In this case, we believe exploiting a dual formulation (Chambolle, 2004) for the problem could allow us to derive similar learnable algorithms.
Broader Impact
This work attempts to shed some understanding into empirical phenomena in signal processing – in our case, piecewise constant approximations. As such, it is our hope that this work encourages fellow researchers to invest in the study and development of principled machine learning tools. Besides these, we do not foresee any other immediate societal consequences.
Acknowledgement
We gratefully acknowledge discussions with Pierre Ablin, whose suggestions helped us complete some parts of the proofs. H. Cherkaoui is supported by a CEA PhD scholarship. J. Sulam is partially supported by NSF Grant 2007649. | 1. What is the focus and contribution of the paper on signal recovery?
2. What are the strengths of the proposed approach, particularly in combining classical methods with neural networks?
3. What are the weaknesses of the paper regarding its theoretical analysis and understanding?
4. How does the reviewer assess the limitations of the proposed algorithms in certain scenarios? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper proposes two unrolled algorithms, based on proximal gradient descent and the LISTA network, for solving the total-variation regularized signal recovery problem. Numerical experiments on 1D synthetic and real data have demonstrated the effectiveness of the proposed methods in terms of convergence speed.
Strengths
The combination of neural networks with classical proximal gradient descent algorithms is the major strength of the work; it is interesting and relevant to other related works.
Weaknesses
The theoretical discussion of the proposed algorithms seems to be still at an early stage of development, without any convergence or stability guarantee. Section 2.1 seems to review the synthesis formulation with the proposed proposition and conjecture, which, however, do not shed light on the later empirical evaluations. Thus the theoretical contributions of this work are not clear. Furthermore, the similarity of the training data and the test data plays a crucial role in neural-network-type deep learning algorithms, while there is no such restriction in traditional gradient descent and its iterative variants; this may limit the performance of the proposed algorithms in some circumstances.
NIPS | Title
Learning to solve TV regularised problems with unrolled algorithms
Abstract
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals by constraining the `1-norm of the first order derivative of the estimated signal. The resulting optimization problem is usually solved using iterative algorithms such as proximal gradient descent, primal-dual algorithms or ADMM. However, such methods can require a very large number of iterations to converge to a suitable solution. In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems. While this could be done using the synthesis formulation, we demonstrate that this leads to slower performances. The main difficulty in applying such methods in the analysis formulation lies in proposing a way to compute the derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures. We validate those findings with experiments on synthetic and real data.
1 Introduction
Ill-posed inverse problems appear naturally in signal and image processing and machine learning, requiring extra regularization techniques. Total Variation (TV) is a popular regularization strategy with a long history (Rudin et al., 1992), and has found a large number of applications in neuro-imaging (Fikret et al., 2013), medical imaging reconstruction (Tian et al., 2011), among myriad applications (Rodríguez, 2013; Darbon and Sigelle, 2006). TV promotes piece-wise constant estimates by penalizing the `1-norm of the first order derivative of the estimated signal, and it provides a simple, yet efficient regularization technique.
TV-regularized problems are typically convex, and so a wide variety of algorithms are in principle applicable. Since the `1 norm in the TV term is non-smooth, Proximal Gradient Descent (PGD) is the most popular choice (Rockafellar, 1976). Yet, the computation for the corresponding proximal operator (denoted prox-TV) represents a major difficulty in this case as it does not have a closed-form analytic solution. For 1D problems, it is possible to rely on dynamic programming to compute proxTV, such as the taut string algorithm (Davies and Kovac, 2001; Condat, 2013a). Another alternative consists in computing the proximal operator with iterative first order algorithm (Chambolle, 2004; Beck and Teboulle, 2009; Boyd et al., 2011; Condat, 2013b). Other algorithms to solve TV-regularized
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
problems rely on primal dual algorithms (Chambolle and Pock, 2011; Condat, 2013b) or Alternating Direction Method of Multipliers (ADMM) (Boyd et al., 2011). These algorithms typically use one sequence of estimates for each term in the objective and try to make them as close as possible while minimizing the associated term. While these algorithms are efficient for denoising problems – where one is mainly concerned with good reconstruction – they can result in estimate that are not very well regularized if the two sequences are not close enough.
When on fixed computational budget, iterative optimization methods can become impractical as they often require many iterations to give a satisfactory estimate. To accelerate the resolution of these problems with a finite (and small) number of iterations, one can resort to unrolled and learned optimization algorithms (see Monga et al. 2019 for a review). In their seminal work, Gregor and Le Cun (2010) proposed the Learned ISTA (LISTA), where the parameters of an unfolded Iterative Shrinkage-Thresholding Algorithm (ISTA) are learned with gradient descent and back-propagation. This allows to accelerate the approximate solution of a Lasso problem (Tibshirani, 1996), with a fixed number of iteration, for signals from a certain distribution. The core principle behind the success of this approach is that the network parameters can adaptively leverage the sensing matrix structure (Moreau and Bruna, 2017) as well as the input distribution (Giryes et al., 2018; Ablin et al., 2019). Many extensions of this original idea have been proposed to learn different algorithms (Sprechmann et al., 2012, 2013; Borgerding et al., 2017) or for different classes of problem (Xin et al., 2016; Giryes et al., 2018; Sulam et al., 2019). The motif in most of these adaptations is that all operations in the learned algorithms are either linear or separable, thus resulting in sub-differentials that are easy to compute and implement via back-propagation. Algorithm unrolling is also used in the context of bi-level optimization problems such as hyper-parameter selection. Here, the unrolled architecture provides a way to compute the derivative of the inner optimization problem solution compared to another variable such as the regularisation parameter using back-propagation (Bertrand et al., 2020).
The focus of this paper is to apply algorithm unrolling to TV-regularized problems in the 1D case. While one could indeed apply the LISTA approach directly to the synthesis formulation of these problems, we show in this paper that using such formulation leads to slower iterative or learned algorithms compared to their analysis counterparts. The extension of learnable algorithms to the analysis formulation is not trivial, as the inner proximal operator does not have an analytical or separable expression. We propose two architectures that can learn TV-solvers in their analysis form directly based on PGD. The first architecture uses an exact algorithm to compute the prox-TV and we derive the formulation of its weak Jacobian in order to learn the network’s parameters. Our second method rely on a nested LISTA network in order to approximate the prox-TV itself in a differentiable way. This latter approach can be linked to inexact proximal gradient methods (Schmidt et al., 2011; Machart et al., 2012). These results are backed with numerical experiments on synthetic and real data. Concurrently to our work, Lecouat et al. (2020) also proposed an approach to differentiate the solution of TV-regularized problems. While their work can be applied in the context of 2D signals, they rely on smoothing the regularization term using Moreau-Yosida regularization, which results in smoother estimates from theirs learned networks. In contrast, our work allows to compute sharper signals but can only be applied to 1D signals.
The rest of the paper is organized as follows. In Section 2, we describe the different formulations for TV-regularized problems and their complexity. We also recall central ideas of algorithm unfolding. Section 3 introduces our two approaches for learnable network architectures based on PGD. Finally, the two proposed methods are evaluated on real and synthetic data in Section 4.
Notations For a vector x ∈ Rk, we denote ‖x‖q its `q-norm. For a matrix A ∈ Rm×k, we denote ‖A‖2 its `2-norm, which corresponds to its largest singular value and A† denotes its pseudoinverse. For an ordered subset of indices S ⊂ {1, . . . , k}, xS denote the vector in R|S| with element (xS)t = xit for it ∈ S. For a matrix A ∈ Rm×k, A:,S denotes the sub-matrix [A:,i1 , . . . A:,i|S| ] composed with the columns A:,it of index it ∈ S of A. For the rest of the paper, we refer to the operators D ∈ Rk−1×k, D̃ ∈ Rk×k, L ∈ Rk×k and R ∈ Rk×k as:
D = −1 1 0 . . . 0 0 −1 1 . . . ... ...
. . . . . . . . . 0 0 . . . 0 −1 1
D̃ = 1 0 . . . 0 −1 1 . . . ...
. . . . . . 0 0 −1 1
L = 1 0 . . . 0 1 1 . . . ... ...
. . . . . . 0 1 . . . 1 1
R = 0 0 . . . 0 0 1 . . . ... ...
. . . . . . 0 0 . . . 0 1
2 Solving TV-regularized problems
We begin by detailing the TV-regularized problem that will be the main focus of our work. Consider a latent vector u ∈ Rk, a design matrix A ∈ Rm×k and the corresponding observation x ∈ Rm. The original formulation of the TV-regularized regression problem is referred to as the analysis formulation (Rudin et al., 1992). For a given regularization parameter λ > 0, it reads
min u∈Rk
P (u) = 1
2 ‖x−Au‖22 + λ‖u‖TV , (1)
where ‖u‖TV = ‖Du‖1, and D ∈ Rk−1×k stands for the first order finite difference operator, as defined above. The problem in (1) can be seen as a special case of a Generalized Lasso problem (Tibshirani and Taylor, 2011); one in which the analysis operator is D. Note that problem P is convex, but the TV -norm is non-smooth. In these cases, a practical alternative is the PGD, which iterates between a gradient descent step and the prox-TV. This algorithm’s iterates read
u(t+1) = proxλ ρ ‖·‖TV
( u(t) − 1
ρ A>(Au(t) − x)
) , (2)
where ρ = ‖A‖22 and the prox-TV is defined as
proxµ‖·‖TV (y) = arg min u∈Rk
Fy(u) = 1
2 ‖y − u‖22 + µ‖u‖TV . (3)
Problem (3) does not have a closed-form solution, and one needs to resort to iterative techniques to compute it. In our case, as the problem is 1D, the prox-TV problem can be addressed with a dynamic programming approach, such as the taut-string algorithm (Condat, 2013a). This scales as O(k) in all practical situations and is thus much more efficient than other optimization based iterative algorithms (Rockafellar, 1976; Chambolle, 2004; Condat, 2013b) for which each iteration is O(k2) at best.
With a generic matrix A ∈ Rm×k, the PGD algorithm is known to have a sublinear convergence rate (Combettes and Bauschke, 2011). More precisely, for any initialization u(0) and solution u∗, the iterates satisfy
P (u(t))− P (u∗) ≤ ρ 2t ‖u(0) − u∗‖22, (4)
where u∗ is a solution of the problem in (1). Note that the constant ρ can have a significant effect. Indeed, it is clear from (4) that doubling ρ leads to consider doubling the number of iterations.
2.1 Synthesis formulation
An alternative formulation for TV-regularized problems relies on removing the analysis operator D from the `1-norm and translating it into a synthesis expression (Elad et al., 2007). Removing D from the non-smooth term simplifies the expression of the proximal operator by making it separable, as in the Lasso. The operator D is not directly invertible but keeping the first value of the vector u allows for perfect reconstruction. This motivates the definition of the operator D̃ ∈ Rk×k, and its inverse L ∈ Rk×k, as defined previously. Naturally, L is the discrete integration operator. Considering the change of variable z = D̃u, and using the operator R ∈ Rk×k, the problem in (1) is equivalent to
min z∈Rk
S(z) = 1
2 ‖x−ALz‖22 + λ‖Rz‖1. (5)
Note that for any z ∈ Rk, S(z) = P (Lz). There is thus an exact equivalence between solutions from the synthesis and the analysis formulation, and the solution for the analysis can be obtained with u∗ = Lz∗. The benefit of this formulation is that the problem above now reduces to a Lasso problem (Tibshirani, 1996). In this case, the PGD algorithm is reduced to the ISTA with a closed-form proximal operator (the soft-thresholding). Note that this simple formulation is only possible in 1D where the first order derivative space is unconstrained. In larger dimensions, the derivative must be constrained to verify the Fubini’s formula that enforces the symmetry of integration over dimensions. While it is also possible to derive synthesis formulation in higher dimension (Elad et al., 2007), this does not lead to simplistic proximal operator.
For this synthesis formulation, with a generic matrix A ∈ Rm×k, the PGD algorithm has also a sublinear convergence rate (Beck and Teboulle, 2009) such that
P (u(t))− P (u∗) ≤ 2ρ̃ t ‖u(0) − u∗‖22, (6)
with ρ̃ = ‖AL‖22 (see Subsection F.1 for full derivation). While the rate of this algorithm is the same as in the analysis formulation – in O( 1t ) – the constant ρ̃ related to the operator norm differs. We now present two results that will characterize the value of ρ̃.
Proposition 2.1. [Lower bound for the ratio ‖AL‖ 2 2
‖A‖22 expectation] Let A be a random matrix in Rm×k
with i.i.d normally distributed entries. The expectation of ‖AL‖22/‖A‖22 is asymptotically lower bounded when k tends to∞ by
E [‖AL‖22 ‖A‖22 ] ≥ 2k + 1 4π2 + o(1)
The full proof can be found in Subsection F.3. The lower bound is constructed by using ATA ‖A‖22u1u>1 for a unit vector u1 and computing explicitely the expectation for rank one matrices. To assess the tightness of this bound, we evaluated numerically E
[ ‖AL‖22 ‖A‖22 ] on a set of 1000
matrices sampled with i.i.d normally distributed entries. The results are displayed w.r.t the dimension k in Figure 1. It is clear that the lower bound from Proposition 2.1 is not tight. This is expected as we consider only the leading eigenvector of A to derive it in the proof. The following conjecture gives a tighter bound.
Conjecture 2.2 (Expectation for the ratio ‖AL‖ 2 2
‖A‖22 ). Under the same conditions as in Proposition 2.1,
the expectation of ‖AL‖22/‖A‖22 is given by
E [‖AL‖22 ‖A‖22 ] = (2k + 1)2 16π2 + o(1) .
We believe this conjecture can potentially be proven with analogous developments as those in Proposition 2.1, but integrating over all dimensions. However, a main difficulty lies in the fact that integration over all eigenvectors have to be carried out jointly as they are not independent. This is subject of current ongoing work.
Finally, we can expect that ρ̃/ρ scales as Θ(k2). This leads to the observation that ρ̃2 ρ in large enough dimension. As a result, the analysis formulation should be much more efficient in terms of iterations than the synthesis formulation – as long as the prox-TVcan be dealt with efficiently.
2.2 Unrolled iterative algorithms
As shown by Gregor and Le Cun (2010), ISTA is equivalent to a recurrent neural network (RNN) with a particular structure. This observation can be generalized to PGD algorithms for any penalized least squares problem of the form
u∗(x) = arg min u L(x, u) = 1 2 ‖x−Bu‖22 + λg(u) , (7)
where g is proper and convex, as depicted in Figure 2a. By unrolling this architecture with T layers, we obtain a network φΘ(T )(x) = u(T ) – illustrated in Figure 2b – with parameters Θ(T ) = {W (t)x ,W (t)u , µ(t)}Tt=1, defined by the following recursion
u(0) = B†x ; u(t) = proxµ(t)g(W (t) x x+W (t) u u (t−1)) . (8)
As underlined by (4), a good estimate u(0) is crucial in order to have a fast convergence toward u∗(x). However, this chosen initialization is mitigated by the first layer of the network which learns to set a good initial guess for u(1). For a network with T layers, one recovers exactly the T -th iteration of PGD if the weights are chosen constant equal to
W (t)x = 1
ρ B>, W (t)u = (Id−
1 ρ B>B) , µ(t) = λ ρ , with ρ = ‖B‖22 . (9)
In practice, this choice of parameters are used as initialization for a posterior training stage. In many practical applications, one is interested in minimizing the loss (7) for a fixed B and a particular distribution over the space of x, P . As a result, the goal of this training stage is to find parameters Θ(T ) that minimize the risk, or expected loss, E[L(x, φΘ(T )(x))] over P . Since one does not have access to this distribution, and following an empirical risk minimization approach with a given training set {x1, . . . xN} (assumed sampled i.i.d from P), the network is trained by minimizing
min Θ(T )
1
N
N∑
i=1
L(xi, φΘ(T )(xi)) . (10)
Note that when T → +∞, the presented initialization in (9) gives a global minimizer of the loss for all xi, as the network converges to exact PGD. When T is fixed, however, the output of the network is not a minimizer of (7) in general. Minimizing this empirical risk can therefore find a weight configuration that reduces the sub-optimality of the network relative to (7) over the input distribution used to train the network. In such a way, the network learns an algorithm to approximate the solution of (7) for a particular class or distributions of signals. It is important to note here that while this procedure can accelerate the resolution the problem, the learned algorithm will only be valid for inputs xi coming from the same input distribution P as the training samples. The algorithm might not converge for samples which are too different from the training set, unlike the iterative algorithm which is guaranteed to converge for any sample.
This network architecture design can be directly applied to TV regularised problems if the synthesis formulation (5) is used. Indeed, in this case PGD reduces to the ISTA algorithm, with B = AL and proxµg = ST(·, µ) becomes simply a soft-thresholding operator (which is only applied on the coordinates {2, . . . k}, following the definition of R). However, as discussed in Proposition 2.1, the conditioning of the synthesis problem makes the estimation of the solution slow, increasing the number of network layers needed to get a good estimate of the solution. In the next section, we will extend these learning-based ideas directly to the analysis formulation by deriving a way to obtain exact and approximate expressions for the sub-differential of the non-separable prox-TV.
3 Back-propagating through TV proximal operator
Our two approaches to define learnable networks based on PGD for TV-regularised problems in the analysis formulation differ on the computation of the prox-TV and its derivatives. Our first approach
consists in directly computing the weak derivatives of the exact proximal operator while the second one uses a differentiable approximation.
3.1 Derivative of prox-TV
While there is no analytic solution to the prox-TV, it can be computed exactly (numerically) for 1D problems using the taut-string algorithm (Condat, 2013a). This operator can thus be applied at each layer of the network, reproducing the architecture described in Figure 2b. We define the LPGD-Taut network φΘ(T )(x) with the following recursion formula
φΘ(T )(x) = proxµ(T )‖·‖TV ( W (T )x x+W (T ) u φΘ(T−1)(x) ) (11)
To be able to learn the parameters through gradient descent, one needs to compute the derivatives of (10) w.r.t the parameters Θ(T ). Denoting h = W (t)x x+W
(t) u φΘ(t−1)(x) and u = proxµ(t)‖·‖TV (h),
the application of the chain rule (as implemented efficiently by automatic differentiation) results in ∂L ∂h = Jx(h, µ (t))> ∂L ∂u , and ∂L ∂µ(t) = Jµ(h, µ (t))> ∂L ∂u , (12)
where Jx(h, µ) ∈ Rk×k and Jµ(h, µ) ∈ Rk×1 denotes the weak Jacobian of the output of the proximal operator u with respect to the first and second input respectively. We now give the analytic formulation of these weak Jacobians in the following proposition. Proposition 3.1. [Weak Jacobian of prox-TV] Let x ∈ Rk and u = proxµ‖·‖TV (x), and denote by S the support of z = D̃u. Then, the weak Jacobian Jx and Jµ of the prox-TV relative to x and µ can be computed as
Jx(x, µ) = L:,S(L > :,SL:,S) −1L>:,S and Jµ(x, µ) = −L:,S(L>:,SL:,S)−1 sign(Du)S The proof of this proposition can be found in Subsection G.1. Note that the dependency in the inputs is only through S and sign(Du), where u is a short-hand for proxµ‖·‖TV (x). As a result, computing these weak Jacobians can be done efficiently by simply storing sign(Du) as a mask, as it would be done for a RELU or the soft-thresholding activations, and requiring just 2(k − 1) bits. With these expressions, it is thus possible to compute gradient relatively to all parameters in the network, and employ them via back-propagation.
3.2 Unrolled prox-TV
As an alternative to the previous approach, we propose to use the LISTA network to approximate the prox-TV (3). The prox-TV can be reformulated with a synthesis approach resulting in a Lasso i.e.
z∗ = arg min z
1 2 ‖h− Lz‖22 + µ‖Rz‖1 (13)
The proximal operator solution can then be retrieved with proxµ‖·‖TV (h) = Lz ∗. This problem can be solved using ISTA, and approximated efficiently with a LISTA network Gregor and Le Cun (2010). For the resulting architecture – dubbed LPGD-LISTA – proxµ‖·‖TV (h) is replaced by a nested LISTA network with a fixed number of layers Tin defined recursively with z(0) = Dh and
z(`+1) = ST ( W (`,t)z z (`) +W (`,t) h ΦΘ(t) , µ(`,t)
ρ
) . (14)
Here, W (`,t)z ,W (`,t) h , µ (`,t) are the weights of the nested LISTA network for layer `. They are initialized with weights chosen as in (9) to ensure that the initial state approximates the prox-TV. Note that the weigths of each of these inner layers are also learned through back-propagation during training.
The choice of this architecture provides a differentiable (approximate) proximal operator. Indeed, the LISTA network is composed only of linear and soft-thresholding layers – standard tools for deep-learning libraries. The gradient of the network’s parameters can thus be computed using classic automatic differentiation. Moreover, if the inner network is not trained, the gradient computed with this method will converge toward the gradient computed using Proposition 3.1 as Tin goes to∞ (see Proposition G.2). Thus, in this untrained setting with infinitely many inner layers, the network is equivalent to LPGD-Taut as the output of the layer also converges toward the exact proximal operator.
Connections to inexact PGD A drawback of approximating the prox-TV via an iterative procedure is, precisely, that it is not exact. This optimization error results from a trade-off between computational cost and convergence rate. Using results from Machart et al. (2012), one can compute the scaling of T and Tin to reach an error level of δ with an untrained network. Proposition G.3 shows that without learning, T should scale as O( 1t ) and Tin should be larger than O(ln( 1 δ )). This scaling gives potential guidelines to set these parameters, as one can expect that learning the parameters of the network would reduce these requirement.
4 Experiments
All experiments are performed in Python using PyTorch (Paszke et al., 2019). We used the implementation1 of Barbero and Sra (2018) to compute TV proximal operator using taut-string algorithm. The code to reproduce the figures is available online2.
In all experiments, we initialize u0 = A†x. Moreover, we employed a normalized λreg as a penalty parameter: we first compute the value of λmax (which is the minimal value for which z = 0 is solution of (5)) and we refer to λ as the ratio so that λreg = λλmax, with λ ∈ [0, 1] (see Appendix D). As the computational complexity of all compared algorithms is the same except for the proximal operator, we compare them in term of iterations.
4.1 Simulation
We generate n = 2000 times series and used half for training and other half for testing and comparing the different algorithms. We train all the network’s parameters jointly – those to approximate the gradient for each iteration along with those to define the inner proximal operator. The full training process is described in Appendix A. We set the length of the source signals (ui)ni=1 ∈ Rn×k to k = 8 with a support of |S| = 2 non-zero coefficients (larger dimensions will be showcased in the real data application). We generate A ∈ Rm×k as a Gaussian matrix with m = 5, obtaining then (ui) n i=1 ∈ Rn×p. Moreover, we add Gaussian noise to measurements xi = Aui with a signal to noise ratio (SNR) of 1.0.
We compare our proposed methods, LPGD-Taut network and the LPGD-LISTA with Tin = 50 inner layers to PGD and Accelerated PGD with the analysis formulation. For completeness, we also add the FISTA algorithm for the synthesis formulation in order to illustrate Proposition 2.1 along with its learned version.
Figure 3 presents the risk (or expected function value, P ) of each algorithm as a function of the number of layers or, equivalently, iterations. For the learned algorithms, the curves in t display the performances of a network with t layer trained specifically. We observe that all the synthesis formulation algorithms are slower than their analysis counterparts, empirically validating Proposition 2.1.
1Available at https://github.com/albarji/proxTV 2Available at https://github.com/hcherkaoui/carpet.
Moreover, both of the proposed methods accelerate the resolution of (20) in a low iteration regime. However, when the regularization parameter is high (λ = 0.8), we observe that the performance of the LPGD-LISTA tends to plateau. It is possible that such a high level of sparsity require more than 50 layers for the inner network (which computes the prox-TV). According to Section 3.2, the error associated with this proximity step hinders the global convergence, making the loss function decrease slowly. Increasing the number of inner layers would alleviate this issue, though at the expense of increased computational burden for both training and runtime. For LPGD-Taut, while the Taut-string algorithm ensures that the recovered support is exact for the proximal step, the overall support can be badly estimated in the first iterations. This can lead to un-informative gradients as they greatly depend on the support of the solution in this case, and explain the reduced performances of the network in the high sparsity setting.
Inexact prox-TV With the same data (xi)ni=1 ∈ Rn×m, we empirically investigate the error of the prox-TV (t)k = Fu(t)(z
(t)) − Fu(t)(z∗) and evaluate it for c with different number of layers (T ∈ [20, 50]). We also investigate the case where the parameter of the nested LISTA in LPGD-LISTA are trained compared to their initialization in untrained version.
Figure 4 depicts the error k for each layer. We see that learning the parameters of the unrolled prox-TV in LPGD-LISTA barely improves the performance. More interestingly, we observe that in a high sparsity setting the error sharply increases after a certain number of layers. This is likely cause by the high sparsity of the estimates, the small numbers of iterations of the inner network (between 20 and 50) are insufficient to obtain an accurate solution to the proximal operator. This is in accordance with inexact PGD theory which predict that such algorithm has no exact convergence guarantees (Schmidt et al., 2011).
4.2 fMRI data deconvolution
Functional magnetic resonance imaging (fMRI) is a non-invasive method for recording the brain activity by dynamically measuring blood oxygenation level-dependent (BOLD) contrast, denoted here x. The latter reflects the local changes in the deoxyhemoglobin concentration in the brain Ogawa et al. (1992) and thus indirectly measures neural activity through the neurovascular coupling. This coupling is usually modelled as a linear and time-invariant system and characterized by its impulse response, the so-called haemodynamic response function (HRF), denoted here h. Recent developments propose to estimate either the neural activity signal independently (Fikret et al., 2013; Cherkaoui et al., 2019b) or jointly with the HRF (Cherkaoui et al., 2019a; Farouj et al., 2019). Estimating the neural activity signal with a fixed HRF is akin to a deconvolution problem regularized with TV-norm,
min u∈Rk
P (u) = 1
2 ‖h ∗ u− x‖22 + λ‖u‖TV (15)
To demonstrate the usefulness of our approach with real data, where the training set has not the exact same distribution than the testing set, we compare the LPGD-Taut to Accelerated PGD for the analysis formulation on this deconvolution problem. We choose two subjects from the UK Bio Bank (UKBB) dataset (Sudlow et al., 2015), perform the usual fMRI processing and reduce the dimension of the problem to retain only 8000 time-series of 250 time-frames, corresponding to a record of 3 minute 03 seconds. The full preprocessing pipeline is described in Appendix B. We train
the LPGD taut-string network solver on the first subject and Figure 5 reports the performance of the two algorithms on the second subject for λ = 0.1. The performance is reported relatively to the number of iteration as the computational complexity of each iteration or layer for both methods is equivalent. It is clear that LPGD-Taut converges faster than the Accelerated PGD even on real data. In particular, acceleration is higher when the regularization parameter λ is smaller. As mentioned previously, this acceleration is likely to be caused by the better learning capacity of the network in a low sparsity context. The same experiment is repeated for λ = 0.8 in Figure C.1.
5 Conclusion
This paper studies the optimization of TV-regularised problems via learned PGD. We demonstrated, both analytically and numerically, that it is better to address these problems in their original analysis formulation rather than resort to the simpler (alas slower) synthesis version. We then proposed two different algorithms that allow for the efficient computation and derivation of the required prox-TV, exactly or approximately. Our experiments on synthetic and real data demonstrate that our learned networks for prox-TV provide a significant advantage in convergence speed.
Finally, we believe that the principles presented in this paper could be generalized and deployed in other optimization problems, involving not just the TV-norm but more general analysis-type priors. In particular, this paper only applies to 1D TV problems, because the equivalence between the Lasso and TV is not exact in higher dimensions. In that case, we believe that exploiting a dual formulation (Chambolle, 2004) of the problem could allow us to derive similar learnable algorithms.
Broader Impact
This work attempts to shed some understanding into empirical phenomena in signal processing – in our case, piecewise constant approximations. As such, it is our hope that this work encourages fellow researchers to invest in the study and development of principled machine learning tools. Besides these, we do not foresee any other immediate societal consequences.
Acknowledgement
We gratefully acknowledge discussions with Pierre Ablin, whose suggestions helped us completing some parts of the proofs. H. Cherkaoui is supported by a CEA PhD scholarship. J. Sulam is partially supported by NSF Grant 2007649. | 1. What is the focus of the paper regarding unrolling optimization algorithms?
2. What are the strengths and weaknesses of the proposed approach in dealing with the proximal term?
3. How does the reviewer assess the significance and novelty of the work, particularly in its restriction to the 1D TV case?
4. Are there any concerns regarding the experimental setup and methodology used in the study?
5. How does the reviewer evaluate the performance and computational gains of the proposed schemes compared to existing methods like PGD? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
In recent years, there has been extensive research into unrolling optimization algorithms (developed for imaging or inverse problems) as neural networks and learning their parameters. However, much of the existing work is in the synthesis regime (where they unroll, for example, PGD); the contribution of this work is to show how to unroll in the analysis regime and, in particular, with a TV regulariser. The contribution is essentially applying LISTA to TV regularized problems, with an approximate prox. The results are largely empirical, and the proposed schemes apply only to the 1D setting. The authors propose to compute the TV-prox using either the taut string scheme of Condat, or by reformulating the prox problem as a lasso problem and applying LISTA. After reading the author response, I am inclined to stay with my initial assessment that this is 'marginally above the acceptance threshold'. This paper's novelty in dealing with the prox is somewhat limited because this work restricts itself to the 1D TV case, where it is well known (due to the work of Condat) that one can compute the prox in a fast and non-iterative manner.
Strengths
This is mainly an empirical study; the authors argue that the approach of reformulating an analysis problem as a synthesis problem (even if this can be done) will lead to suboptimal performance. It is therefore necessary to approximate prox-TV directly, and the authors demonstrate some computational gain in doing so.
Weaknesses
- Their results only apply to 1D TV. Of course, the most natural approach when working beyond 1D is to compute the prox by solving the dual problem, and this can be done using PGD; however, this is not investigated, only mentioned briefly in the conclusion.
- They show and comment in the numerics that there is little gain between simply applying PGD to solve this lasso problem vs LISTA. Hence, they replace the prox term of LISTA with very standard approximate prox techniques.
- Experimental setup and precise methodology is unclear: there are no details on the training procedure. Do you train the weights for the LISTA approximation of the prox term separately?
- If one wants an accurate solution, then it seems from Fig. 3 that accelerated PGD performs just as well as LPGD-Taut in the left plot and better than both learned methods in the right plot. So, despite the high computational effort in training a network in the first place, it seems that the computational gains are actually quite modest? You show error per layer; am I right in assuming that the computational cost for each layer is the same across the different methods?
- They show and comment in the numerics that there is little gain between simply applying PGD to compute prox_TV vs LISTA (Figure 4). So, it seems likely that the runtime may be dominated by the prox calculation and there will be limited computational gain overall.
NIPS | Title
Learning to solve TV regularised problems with unrolled algorithms
Abstract
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals by constraining the `1-norm of the first order derivative of the estimated signal. The resulting optimization problem is usually solved using iterative algorithms such as proximal gradient descent, primal-dual algorithms or ADMM. However, such methods can require a very large number of iterations to converge to a suitable solution. In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems. While this could be done using the synthesis formulation, we demonstrate that this leads to slower performance. The main difficulty in applying such methods in the analysis formulation lies in proposing a way to compute the derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures. We validate those findings with experiments on synthetic and real data.
1 Introduction
Ill-posed inverse problems appear naturally in signal and image processing and machine learning, requiring extra regularization techniques. Total Variation (TV) is a popular regularization strategy with a long history (Rudin et al., 1992), and has found a large number of applications in neuro-imaging (Fikret et al., 2013), medical imaging reconstruction (Tian et al., 2011), among myriad applications (Rodríguez, 2013; Darbon and Sigelle, 2006). TV promotes piece-wise constant estimates by penalizing the `1-norm of the first order derivative of the estimated signal, and it provides a simple, yet efficient regularization technique.
TV-regularized problems are typically convex, and so a wide variety of algorithms are in principle applicable. Since the `1 norm in the TV term is non-smooth, Proximal Gradient Descent (PGD) is the most popular choice (Rockafellar, 1976). Yet, the computation of the corresponding proximal operator (denoted prox-TV) represents a major difficulty in this case, as it does not have a closed-form analytic solution. For 1D problems, it is possible to rely on dynamic programming to compute prox-TV, such as the taut string algorithm (Davies and Kovac, 2001; Condat, 2013a). Another alternative consists in computing the proximal operator with iterative first-order algorithms (Chambolle, 2004; Beck and Teboulle, 2009; Boyd et al., 2011; Condat, 2013b). Other algorithms to solve TV-regularized
problems rely on primal-dual algorithms (Chambolle and Pock, 2011; Condat, 2013b) or the Alternating Direction Method of Multipliers (ADMM) (Boyd et al., 2011). These algorithms typically use one sequence of estimates for each term in the objective and try to make them as close as possible while minimizing the associated term. While these algorithms are efficient for denoising problems – where one is mainly concerned with good reconstruction – they can result in estimates that are not very well regularized if the two sequences are not close enough.
Under a fixed computational budget, iterative optimization methods can become impractical as they often require many iterations to give a satisfactory estimate. To accelerate the resolution of these problems with a finite (and small) number of iterations, one can resort to unrolled and learned optimization algorithms (see Monga et al. 2019 for a review). In their seminal work, Gregor and Le Cun (2010) proposed the Learned ISTA (LISTA), where the parameters of an unfolded Iterative Shrinkage-Thresholding Algorithm (ISTA) are learned with gradient descent and back-propagation. This makes it possible to accelerate the approximate solution of a Lasso problem (Tibshirani, 1996), with a fixed number of iterations, for signals from a certain distribution. The core principle behind the success of this approach is that the network parameters can adaptively leverage the sensing matrix structure (Moreau and Bruna, 2017) as well as the input distribution (Giryes et al., 2018; Ablin et al., 2019). Many extensions of this original idea have been proposed to learn different algorithms (Sprechmann et al., 2012, 2013; Borgerding et al., 2017) or for different classes of problems (Xin et al., 2016; Giryes et al., 2018; Sulam et al., 2019). The motif in most of these adaptations is that all operations in the learned algorithms are either linear or separable, thus resulting in sub-differentials that are easy to compute and implement via back-propagation. Algorithm unrolling is also used in the context of bi-level optimization problems such as hyper-parameter selection. Here, the unrolled architecture provides a way to compute the derivative of the inner optimization problem solution with respect to another variable, such as the regularisation parameter, using back-propagation (Bertrand et al., 2020).
The focus of this paper is to apply algorithm unrolling to TV-regularized problems in the 1D case. While one could indeed apply the LISTA approach directly to the synthesis formulation of these problems, we show in this paper that using such a formulation leads to slower iterative or learned algorithms compared to their analysis counterparts. The extension of learnable algorithms to the analysis formulation is not trivial, as the inner proximal operator does not have an analytical or separable expression. We propose two architectures that can learn TV-solvers in their analysis form directly based on PGD. The first architecture uses an exact algorithm to compute the prox-TV, and we derive the formulation of its weak Jacobian in order to learn the network's parameters. Our second method relies on a nested LISTA network in order to approximate the prox-TV itself in a differentiable way. This latter approach can be linked to inexact proximal gradient methods (Schmidt et al., 2011; Machart et al., 2012). These results are backed with numerical experiments on synthetic and real data. Concurrently to our work, Lecouat et al. (2020) also proposed an approach to differentiate the solution of TV-regularized problems. While their work can be applied in the context of 2D signals, they rely on smoothing the regularization term using Moreau-Yosida regularization, which results in smoother estimates from their learned networks. In contrast, our work allows computing sharper signals but can only be applied to 1D signals.
The rest of the paper is organized as follows. In Section 2, we describe the different formulations for TV-regularized problems and their complexity. We also recall central ideas of algorithm unfolding. Section 3 introduces our two approaches for learnable network architectures based on PGD. Finally, the two proposed methods are evaluated on real and synthetic data in Section 4.
Notations For a vector x ∈ Rk, we denote ‖x‖q its `q-norm. For a matrix A ∈ Rm×k, we denote ‖A‖2 its `2-norm, which corresponds to its largest singular value and A† denotes its pseudoinverse. For an ordered subset of indices S ⊂ {1, . . . , k}, xS denote the vector in R|S| with element (xS)t = xit for it ∈ S. For a matrix A ∈ Rm×k, A:,S denotes the sub-matrix [A:,i1 , . . . A:,i|S| ] composed with the columns A:,it of index it ∈ S of A. For the rest of the paper, we refer to the operators D ∈ Rk−1×k, D̃ ∈ Rk×k, L ∈ Rk×k and R ∈ Rk×k as:
D is the (k−1)×k first-order finite difference operator, with D_{i,i} = −1 and D_{i,i+1} = 1; D̃ is the k×k operator obtained by prepending the row e_1^T = (1, 0, . . . , 0) to D, i.e. D̃_{1,1} = 1 and, for i ≥ 2, D̃_{i,i} = 1 and D̃_{i,i−1} = −1; L = D̃^{−1} is the lower-triangular matrix of ones (discrete integration), with L_{i,j} = 1 for j ≤ i; and R is the identity matrix with its first diagonal entry set to 0, R = diag(0, 1, . . . , 1).
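To make this notation concrete, the following NumPy sketch builds the four operators explicitly; the function name make_operators is ours and is only meant as an illustration.

```python
import numpy as np

def make_operators(k):
    """Build the finite-difference and integration operators D, D~, L and R."""
    D = np.zeros((k - 1, k))
    D[np.arange(k - 1), np.arange(k - 1)] = -1     # D[i, i] = -1
    D[np.arange(k - 1), np.arange(1, k)] = 1       # D[i, i+1] = 1
    D_tilde = np.eye(k)
    D_tilde[np.arange(1, k), np.arange(k - 1)] = -1  # keep u_1, then finite differences
    L = np.tril(np.ones((k, k)))                   # discrete integration, L = D_tilde^{-1}
    R = np.eye(k)
    R[0, 0] = 0                                    # first coefficient is left unpenalized
    return D, D_tilde, L, R

D, D_tilde, L, R = make_operators(5)
assert np.allclose(L @ D_tilde, np.eye(5))         # sanity check: L inverts D_tilde
```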
2 Solving TV-regularized problems
We begin by detailing the TV-regularized problem that will be the main focus of our work. Consider a latent vector u ∈ Rk, a design matrix A ∈ Rm×k and the corresponding observation x ∈ Rm. The original formulation of the TV-regularized regression problem is referred to as the analysis formulation (Rudin et al., 1992). For a given regularization parameter λ > 0, it reads
min_{u∈R^k} P(u) = 1/2 ‖x − Au‖_2^2 + λ‖u‖_TV ,    (1)
where ‖u‖_TV = ‖Du‖_1, and D ∈ R^{(k−1)×k} stands for the first-order finite difference operator, as defined above. The problem in (1) can be seen as a special case of a Generalized Lasso problem (Tibshirani and Taylor, 2011); one in which the analysis operator is D. Note that problem P is convex, but the TV-norm is non-smooth. In these cases, a practical alternative is PGD, which iterates between a gradient descent step and the prox-TV. This algorithm's iterates read
u^{(t+1)} = prox_{(λ/ρ) ‖·‖_TV} ( u^{(t)} − (1/ρ) A^T (A u^{(t)} − x) ) ,    (2)
where ρ = ‖A‖_2^2 and the prox-TV is defined as
prox_{µ‖·‖_TV}(y) = arg min_{u∈R^k} F_y(u) = 1/2 ‖y − u‖_2^2 + µ‖u‖_TV .    (3)
Problem (3) does not have a closed-form solution, and one needs to resort to iterative techniques to compute it. In our case, as the problem is 1D, the prox-TV problem can be addressed with a dynamic programming approach, such as the taut-string algorithm (Condat, 2013a). This scales as O(k) in all practical situations and is thus much more efficient than other optimization based iterative algorithms (Rockafellar, 1976; Chambolle, 2004; Condat, 2013b) for which each iteration is O(k2) at best.
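As a rough illustration of the iterates in eqn. (2) (our sketch, not the authors' code), the analysis-formulation PGD can be written as below; prox_tv1d stands in for any exact 1D prox-TV routine (e.g. a taut-string solver) and is passed in as an assumed helper.

```python
import numpy as np

def pgd_analysis(A, x, lam, prox_tv1d, n_iter=100):
    """PGD on P(u) = 0.5 * ||x - A u||_2^2 + lam * ||u||_TV, following eqn. (2)."""
    rho = np.linalg.norm(A, 2) ** 2               # rho = ||A||_2^2, step size 1/rho
    u = np.linalg.pinv(A) @ x                     # initialization u^(0) = A^+ x
    for _ in range(n_iter):
        grad = A.T @ (A @ u - x)                  # gradient of the data-fit term
        u = prox_tv1d(u - grad / rho, lam / rho)  # exact prox-TV step
    return u
```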
With a generic matrix A ∈ R^{m×k}, the PGD algorithm is known to have a sublinear convergence rate (Combettes and Bauschke, 2011). More precisely, for any initialization u^{(0)} and solution u^*, the iterates satisfy
P(u^{(t)}) − P(u^*) ≤ (ρ/2t) ‖u^{(0)} − u^*‖_2^2 ,    (4)
where u^* is a solution of the problem in (1). Note that the constant ρ can have a significant effect: it is clear from (4) that doubling ρ requires doubling the number of iterations to reach the same accuracy.
2.1 Synthesis formulation
An alternative formulation for TV-regularized problems relies on removing the analysis operator D from the `1-norm and translating it into a synthesis expression (Elad et al., 2007). Removing D from the non-smooth term simplifies the expression of the proximal operator by making it separable, as in the Lasso. The operator D is not directly invertible but keeping the first value of the vector u allows for perfect reconstruction. This motivates the definition of the operator D̃ ∈ Rk×k, and its inverse L ∈ Rk×k, as defined previously. Naturally, L is the discrete integration operator. Considering the change of variable z = D̃u, and using the operator R ∈ Rk×k, the problem in (1) is equivalent to
min_{z∈R^k} S(z) = 1/2 ‖x − ALz‖_2^2 + λ‖Rz‖_1 .    (5)
Note that for any z ∈ R^k, S(z) = P(Lz). There is thus an exact equivalence between solutions of the synthesis and the analysis formulations, and the solution for the analysis formulation can be obtained with u^* = Lz^*. The benefit of this formulation is that the problem above now reduces to a Lasso problem (Tibshirani, 1996). In this case, the PGD algorithm reduces to ISTA with a closed-form proximal operator (the soft-thresholding). Note that this simple formulation is only possible in 1D, where the first-order derivative space is unconstrained. In larger dimensions, the derivative must be constrained to verify Fubini's formula, which enforces the symmetry of integration over dimensions. While it is also possible to derive a synthesis formulation in higher dimensions (Elad et al., 2007), this does not lead to a simple proximal operator.
For this synthesis formulation, with a generic matrix A ∈ R^{m×k}, the PGD algorithm also has a sublinear convergence rate (Beck and Teboulle, 2009) such that
P(u^{(t)}) − P(u^*) ≤ (2ρ̃/t) ‖u^{(0)} − u^*‖_2^2 ,    (6)
with ρ̃ = ‖AL‖_2^2 (see Subsection F.1 for the full derivation). While the rate of this algorithm is the same as in the analysis formulation – in O(1/t) – the constant ρ̃ related to the operator norm differs. We now present two results that characterize the value of ρ̃.
Proposition 2.1 [Lower bound for the expectation of the ratio ‖AL‖_2^2/‖A‖_2^2]. Let A be a random matrix in R^{m×k} with i.i.d. normally distributed entries. The expectation of ‖AL‖_2^2/‖A‖_2^2 is asymptotically lower bounded when k tends to ∞ by
E[ ‖AL‖_2^2 / ‖A‖_2^2 ] ≥ (2k + 1)/(4π^2) + o(1) .
The full proof can be found in Subsection F.3. The lower bound is constructed by using A^T A ⪰ ‖A‖_2^2 u_1 u_1^T for a unit vector u_1 and computing explicitly the expectation for rank-one matrices. To assess the tightness of this bound, we evaluated E[‖AL‖_2^2/‖A‖_2^2] numerically on a set of 1000 matrices sampled with i.i.d. normally distributed entries. The results are displayed w.r.t. the dimension k in Figure 1. It is clear that the lower bound from Proposition 2.1 is not tight. This is expected as we consider only the leading eigenvector of A to derive it in the proof. The following conjecture gives a tighter bound.
Conjecture 2.2 (Expectation of the ratio ‖AL‖_2^2/‖A‖_2^2). Under the same conditions as in Proposition 2.1, the expectation of ‖AL‖_2^2/‖A‖_2^2 is given by
E[ ‖AL‖_2^2 / ‖A‖_2^2 ] = (2k + 1)^2/(16π^2) + o(1) .
We believe this conjecture can potentially be proven with developments analogous to those in Proposition 2.1, but integrating over all dimensions. However, a main difficulty lies in the fact that the integration over all eigenvectors has to be carried out jointly, as they are not independent. This is the subject of ongoing work.
Finally, we can expect that ρ̃/ρ scales as Θ(k^2). This leads to the observation that ρ̃ ≫ ρ in large enough dimensions. As a result, the analysis formulation should be much more efficient in terms of iterations than the synthesis formulation – as long as the prox-TV can be dealt with efficiently.
2.2 Unrolled iterative algorithms
As shown by Gregor and Le Cun (2010), ISTA is equivalent to a recurrent neural network (RNN) with a particular structure. This observation can be generalized to PGD algorithms for any penalized least squares problem of the form
u^*(x) = arg min_u L(x, u) = 1/2 ‖x − Bu‖_2^2 + λ g(u) ,    (7)
where g is proper and convex, as depicted in Figure 2a. By unrolling this architecture with T layers, we obtain a network φ_{Θ^{(T)}}(x) = u^{(T)} – illustrated in Figure 2b – with parameters Θ^{(T)} = {W_x^{(t)}, W_u^{(t)}, µ^{(t)}}_{t=1}^T, defined by the following recursion
u^{(0)} = B^† x ;   u^{(t)} = prox_{µ^{(t)} g}( W_x^{(t)} x + W_u^{(t)} u^{(t−1)} ) .    (8)
As underlined by (4), a good estimate u^{(0)} is crucial in order to have fast convergence toward u^*(x). However, the impact of this chosen initialization is mitigated by the first layer of the network, which learns to set a good initial guess for u^{(1)}. For a network with T layers, one recovers exactly the T-th iteration of PGD if the weights are chosen constant and equal to
W_x^{(t)} = (1/ρ) B^T ,   W_u^{(t)} = Id − (1/ρ) B^T B ,   µ^{(t)} = λ/ρ ,   with ρ = ‖B‖_2^2 .    (9)
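For concreteness, here is a minimal PyTorch sketch (ours, with assumed names) of one unrolled layer initialized as in eqn. (9); for the synthesis formulation one would take B = AL, and for brevity the soft-thresholding is applied to all coordinates, whereas the paper leaves the first one unpenalized through R.

```python
import torch

class UnrolledLayer(torch.nn.Module):
    """One unrolled PGD layer; with the weights below it reproduces a plain PGD step."""
    def __init__(self, B, lam):
        super().__init__()
        rho = torch.linalg.matrix_norm(B, ord=2).item() ** 2        # rho = ||B||_2^2
        self.Wx = torch.nn.Parameter(B.T / rho)                     # W_x = B^T / rho
        self.Wu = torch.nn.Parameter(torch.eye(B.shape[1], dtype=B.dtype) - B.T @ B / rho)
        self.mu = torch.nn.Parameter(torch.tensor(lam / rho))       # threshold mu = lam / rho

    def forward(self, x, u):
        h = x @ self.Wx.T + u @ self.Wu.T                            # affine step (batched inputs)
        return torch.sign(h) * torch.relu(h.abs() - self.mu)         # soft-thresholding prox
```

Stacking T such layers and feeding u^{(0)} = B^† x recovers T iterations of ISTA before any training.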
In practice, this choice of parameters is used as initialization for a posterior training stage. In many practical applications, one is interested in minimizing the loss (7) for a fixed B and a particular distribution P over the space of x. As a result, the goal of this training stage is to find parameters Θ^{(T)} that minimize the risk, or expected loss, E[L(x, φ_{Θ^{(T)}}(x))] over P. Since one does not have access to this distribution, and following an empirical risk minimization approach with a given training set {x_1, . . . , x_N} (assumed sampled i.i.d. from P), the network is trained by minimizing
min_{Θ^{(T)}} (1/N) ∑_{i=1}^N L(x_i, φ_{Θ^{(T)}}(x_i)) .    (10)
Note that when T → +∞, the initialization presented in (9) gives a global minimizer of the loss for all x_i, as the network converges to exact PGD. When T is fixed, however, the output of the network is not a minimizer of (7) in general. Minimizing this empirical risk can therefore find a weight configuration that reduces the sub-optimality of the network relative to (7) over the input distribution used to train the network. In this way, the network learns an algorithm to approximate the solution of (7) for a particular class or distribution of signals. It is important to note here that while this procedure can accelerate the resolution of the problem, the learned algorithm will only be valid for inputs x_i coming from the same input distribution P as the training samples. The algorithm might not converge for samples which are too different from the training set, unlike the iterative algorithm, which is guaranteed to converge for any sample.
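A bare-bones training loop for the empirical risk (10) could then look as follows (our sketch; network denotes any unrolled model such as a stack of the layers above, and loss_fn evaluates L(x, u) from eqn. (7)).

```python
import torch

def train_unrolled(network, loss_fn, x_train, n_epochs=200, lr=1e-3):
    """Minimize (1/N) sum_i L(x_i, phi_Theta(x_i)) over the network parameters, eqn. (10)."""
    opt = torch.optim.Adam(network.parameters(), lr=lr)
    for _ in range(n_epochs):
        opt.zero_grad()
        u_hat = network(x_train)                  # forward pass through the T unrolled layers
        risk = loss_fn(x_train, u_hat).mean()     # empirical risk over the training samples
        risk.backward()                           # back-propagation through all layers
        opt.step()
    return network
```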
This network architecture design can be directly applied to TV-regularised problems if the synthesis formulation (5) is used. Indeed, in this case PGD reduces to the ISTA algorithm, with B = AL, and prox_{µg} = ST(·, µ) simply becomes a soft-thresholding operator (which is only applied on the coordinates {2, . . . , k}, following the definition of R). However, as discussed in Proposition 2.1, the conditioning of the synthesis problem makes the estimation of the solution slow, increasing the number of network layers needed to get a good estimate of the solution. In the next section, we will extend these learning-based ideas directly to the analysis formulation by deriving a way to obtain exact and approximate expressions for the sub-differential of the non-separable prox-TV.
3 Back-propagating through TV proximal operator
Our two approaches to define learnable networks based on PGD for TV-regularised problems in the analysis formulation differ on the computation of the prox-TV and its derivatives. Our first approach
consists in directly computing the weak derivatives of the exact proximal operator while the second one uses a differentiable approximation.
3.1 Derivative of prox-TV
While there is no analytic solution to the prox-TV, it can be computed exactly (numerically) for 1D problems using the taut-string algorithm (Condat, 2013a). This operator can thus be applied at each layer of the network, reproducing the architecture described in Figure 2b. We define the LPGD-Taut network φ_{Θ^{(T)}}(x) with the following recursion formula
φ_{Θ^{(T)}}(x) = prox_{µ^{(T)} ‖·‖_TV} ( W_x^{(T)} x + W_u^{(T)} φ_{Θ^{(T−1)}}(x) )    (11)
To be able to learn the parameters through gradient descent, one needs to compute the derivatives of (10) w.r.t. the parameters Θ^{(T)}. Denoting h = W_x^{(t)} x + W_u^{(t)} φ_{Θ^{(t−1)}}(x) and u = prox_{µ^{(t)} ‖·‖_TV}(h), the application of the chain rule (as implemented efficiently by automatic differentiation) results in
∂L/∂h = J_x(h, µ^{(t)})^T ∂L/∂u ,  and  ∂L/∂µ^{(t)} = J_µ(h, µ^{(t)})^T ∂L/∂u ,    (12)
where J_x(h, µ) ∈ R^{k×k} and J_µ(h, µ) ∈ R^{k×1} denote the weak Jacobians of the output u of the proximal operator with respect to the first and second input, respectively. We now give the analytic formulation of these weak Jacobians in the following proposition.
Proposition 3.1 [Weak Jacobian of prox-TV]. Let x ∈ R^k and u = prox_{µ‖·‖_TV}(x), and denote by S the support of z = D̃u. Then, the weak Jacobians J_x and J_µ of the prox-TV relative to x and µ can be computed as
J_x(x, µ) = L_{:,S} (L_{:,S}^T L_{:,S})^{-1} L_{:,S}^T  and  J_µ(x, µ) = −L_{:,S} (L_{:,S}^T L_{:,S})^{-1} sign(Du)_S .
The proof of this proposition can be found in Subsection G.1. Note that the dependency on the inputs is only through S and sign(Du), where u is a short-hand for prox_{µ‖·‖_TV}(x). As a result, computing these weak Jacobians can be done efficiently by simply storing sign(Du) as a mask, as it would be done for a ReLU or the soft-thresholding activations, requiring just 2(k − 1) bits. With these expressions, it is thus possible to compute gradients relative to all parameters in the network, and employ them via back-propagation.
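In practice, Proposition 3.1 can be plugged into automatic differentiation through a custom autograd Function. The sketch below (ours) assumes an external exact solver prox_tv1d(h, mu), e.g. a taut-string routine, and only implements the Jacobian with respect to the input h; the µ-gradient follows analogously. For clarity the Jacobian is applied through a dense solve, whereas an efficient implementation would only store the sign mask.

```python
import torch

class ProxTV(torch.autograd.Function):
    """Exact 1D prox-TV whose backward pass uses the weak Jacobian J_x of Proposition 3.1."""
    @staticmethod
    def forward(ctx, h, mu, prox_tv1d):
        u = prox_tv1d(h, mu)                      # exact prox computation (e.g. taut-string)
        ctx.save_for_backward(u)
        return u

    @staticmethod
    def backward(ctx, grad_u):
        (u,) = ctx.saved_tensors
        k = u.shape[0]
        L = torch.tril(torch.ones(k, k, dtype=u.dtype))   # integration operator L
        z = torch.cat([u[:1], u[1:] - u[:-1]])            # z = D~ u
        S = z != 0                                         # support of z, stored as a mask
        Ls = L[:, S]
        # J_x = L_S (L_S^T L_S)^{-1} L_S^T; apply it to the incoming gradient.
        grad_h = Ls @ torch.linalg.solve(Ls.T @ Ls, Ls.T @ grad_u)
        return grad_h, None, None                          # no gradient for mu in this sketch
```

Calling u = ProxTV.apply(h, mu, prox_tv1d) can then be used inside an LPGD-Taut layer.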
3.2 Unrolled prox-TV
As an alternative to the previous approach, we propose to use the LISTA network to approximate the prox-TV (3). The prox-TV can be reformulated with a synthesis approach resulting in a Lasso i.e.
z^* = arg min_z 1/2 ‖h − Lz‖_2^2 + µ‖Rz‖_1    (13)
The proximal operator solution can then be retrieved with prox_{µ‖·‖_TV}(h) = Lz^*. This problem can be solved using ISTA, and approximated efficiently with a LISTA network (Gregor and Le Cun, 2010). For the resulting architecture – dubbed LPGD-LISTA – prox_{µ‖·‖_TV}(h) is replaced by a nested LISTA network with a fixed number of layers T_in, defined recursively with z^{(0)} = Dh and
z^{(ℓ+1)} = ST( W_z^{(ℓ,t)} z^{(ℓ)} + W_h^{(ℓ,t)} Φ_{Θ^{(t)}} , µ^{(ℓ,t)}/ρ ) .    (14)
Here, W_z^{(ℓ,t)}, W_h^{(ℓ,t)}, µ^{(ℓ,t)} are the weights of the nested LISTA network for layer ℓ. They are initialized with weights chosen as in (9) to ensure that the initial state approximates the prox-TV. Note that the weights of each of these inner layers are also learned through back-propagation during training.
The choice of this architecture provides a differentiable (approximate) proximal operator. Indeed, the LISTA network is composed only of linear and soft-thresholding layers – standard tools for deep-learning libraries. The gradients of the network's parameters can thus be computed using classic automatic differentiation. Moreover, if the inner network is not trained, the gradient computed with this method will converge toward the gradient computed using Proposition 3.1 as T_in goes to ∞ (see Proposition G.2). Thus, in this untrained setting with infinitely many inner layers, the network is equivalent to LPGD-Taut, as the output of the layer also converges toward the exact proximal operator.
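A stripped-down, differentiable version of this inner network (with untrained weights, i.e. plain ISTA on the synthesis reformulation (13), and initialized here with z^{(0)} = D̃h) could read as follows; all names are ours.

```python
import torch

def unrolled_prox_tv(h, mu, n_layers=50):
    """Approximate prox_{mu ||.||_TV}(h) with T_in ISTA steps on the Lasso (13)."""
    k = h.shape[0]
    L = torch.tril(torch.ones(k, k, dtype=h.dtype))
    rho = torch.linalg.matrix_norm(L, ord=2) ** 2            # step size 1/rho with rho = ||L||_2^2
    z = torch.cat([h[:1], h[1:] - h[:-1]])                   # z^(0) = D~ h
    for _ in range(n_layers):
        g = z - L.T @ (L @ z - h) / rho                      # gradient step on 0.5*||h - L z||^2
        z_pen = torch.sign(g[1:]) * torch.relu(g[1:].abs() - mu / rho)  # soft-threshold coords 2..k
        z = torch.cat([g[:1], z_pen])                        # first coordinate is unpenalized (R)
    return L @ z                                             # prox estimate u = L z
```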
Connections to inexact PGD. A drawback of approximating the prox-TV via an iterative procedure is, precisely, that it is not exact. This optimization error results from a trade-off between computational cost and convergence rate. Using results from Machart et al. (2012), one can compute the scaling of T and T_in needed to reach an error level of δ with an untrained network. Proposition G.3 shows that without learning, T should scale as O(1/δ) and T_in should be larger than O(ln(1/δ)). This scaling gives potential guidelines to set these parameters, as one can expect that learning the parameters of the network would reduce these requirements.
4 Experiments
All experiments are performed in Python using PyTorch (Paszke et al., 2019). We used the implementation1 of Barbero and Sra (2018) to compute TV proximal operator using taut-string algorithm. The code to reproduce the figures is available online2.
In all experiments, we initialize u^{(0)} = A^† x. Moreover, we employed a normalized λ_reg as the penalty parameter: we first compute the value of λ_max (which is the minimal value for which z = 0 is a solution of (5)) and we refer to λ as the ratio such that λ_reg = λ λ_max, with λ ∈ [0, 1] (see Appendix D). As the computational complexity of all compared algorithms is the same except for the proximal operator, we compare them in terms of iterations.
4.1 Simulation
We generate n = 2000 time series and used half for training and the other half for testing and comparing the different algorithms. We train all the network's parameters jointly – those to approximate the gradient for each iteration along with those to define the inner proximal operator. The full training process is described in Appendix A. We set the length of the source signals (u_i)_{i=1}^n ∈ R^{n×k} to k = 8, with a support of |S| = 2 non-zero coefficients (larger dimensions will be showcased in the real data application). We generate A ∈ R^{m×k} as a Gaussian matrix with m = 5, obtaining measurements (x_i)_{i=1}^n ∈ R^{n×m}. Moreover, we add Gaussian noise to the measurements x_i = Au_i with a signal-to-noise ratio (SNR) of 1.0.
We compare our proposed methods, LPGD-Taut network and the LPGD-LISTA with Tin = 50 inner layers to PGD and Accelerated PGD with the analysis formulation. For completeness, we also add the FISTA algorithm for the synthesis formulation in order to illustrate Proposition 2.1 along with its learned version.
Figure 3 presents the risk (or expected function value, P) of each algorithm as a function of the number of layers or, equivalently, iterations. For the learned algorithms, the curve at t displays the performance of a network with t layers trained specifically. We observe that all the synthesis formulation algorithms are slower than their analysis counterparts, empirically validating Proposition 2.1.
1Available at https://github.com/albarji/proxTV 2Available at https://github.com/hcherkaoui/carpet.
Moreover, both of the proposed methods accelerate the resolution of (20) in a low iteration regime. However, when the regularization parameter is high (λ = 0.8), we observe that the performance of LPGD-LISTA tends to plateau. It is possible that such a high level of sparsity requires more than 50 layers for the inner network (which computes the prox-TV). According to Section 3.2, the error associated with this proximity step hinders the global convergence, making the loss function decrease slowly. Increasing the number of inner layers would alleviate this issue, though at the expense of increased computational burden for both training and runtime. For LPGD-Taut, while the taut-string algorithm ensures that the recovered support is exact for the proximal step, the overall support can be badly estimated in the first iterations. This can lead to uninformative gradients, as they greatly depend on the support of the solution in this case, and explains the reduced performance of the network in the high sparsity setting.
Inexact prox-TV. With the same data (x_i)_{i=1}^n ∈ R^{n×m}, we empirically investigate the error of the prox-TV, ε_k^{(t)} = F_{u^{(t)}}(z^{(t)}) − F_{u^{(t)}}(z^*), and evaluate it for networks with different numbers of layers (T ∈ [20, 50]). We also investigate the case where the parameters of the nested LISTA in LPGD-LISTA are trained, compared to their initialization in the untrained version.
Figure 4 depicts the error ε_k for each layer. We see that learning the parameters of the unrolled prox-TV in LPGD-LISTA barely improves the performance. More interestingly, we observe that in a high sparsity setting the error sharply increases after a certain number of layers. This is likely caused by the high sparsity of the estimates: the small number of iterations of the inner network (between 20 and 50) is insufficient to obtain an accurate solution to the proximal operator. This is in accordance with inexact PGD theory, which predicts that such algorithms have no exact convergence guarantees (Schmidt et al., 2011).
4.2 fMRI data deconvolution
Functional magnetic resonance imaging (fMRI) is a non-invasive method for recording brain activity by dynamically measuring the blood oxygenation level-dependent (BOLD) contrast, denoted here x. The latter reflects the local changes in the deoxyhemoglobin concentration in the brain (Ogawa et al., 1992) and thus indirectly measures neural activity through the neurovascular coupling. This coupling is usually modelled as a linear and time-invariant system and characterized by its impulse response, the so-called haemodynamic response function (HRF), denoted here h. Recent developments propose to estimate either the neural activity signal independently (Fikret et al., 2013; Cherkaoui et al., 2019b) or jointly with the HRF (Cherkaoui et al., 2019a; Farouj et al., 2019). Estimating the neural activity signal with a fixed HRF is akin to a deconvolution problem regularized with the TV-norm,
min_{u∈R^k} P(u) = 1/2 ‖h ∗ u − x‖_2^2 + λ‖u‖_TV    (15)
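As a rough sketch of how problem (15) can be attacked with the analysis PGD of Section 2 (our code, not the authors' pipeline), the convolution with the HRF is materialized as an explicit matrix, a full-convolution boundary convention is assumed, and prox_tv1d is again an assumed exact 1D prox-TV helper.

```python
import numpy as np

def deconvolve_tv(x, h, lam, prox_tv1d, n_iter=200):
    """PGD for min_u 0.5 * ||h * u - x||_2^2 + lam * ||u||_TV, i.e. problem (15)."""
    k = len(x) - len(h) + 1                                   # length of the activity signal u
    H = np.array([np.convolve(e, h) for e in np.eye(k)]).T    # convolution matrix, shape (m, k)
    rho = np.linalg.norm(H, 2) ** 2
    u = np.zeros(k)
    for _ in range(n_iter):
        grad = H.T @ (np.convolve(u, h) - x)                  # gradient of the data-fit term
        u = prox_tv1d(u - grad / rho, lam / rho)              # prox-TV step
    return u
```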
To demonstrate the usefulness of our approach on real data, where the training set does not have exactly the same distribution as the testing set, we compare LPGD-Taut to Accelerated PGD for the analysis formulation on this deconvolution problem. We choose two subjects from the UK Bio Bank (UKBB) dataset (Sudlow et al., 2015), perform the usual fMRI processing and reduce the dimension of the problem to retain only 8000 time-series of 250 time-frames, corresponding to a record of 3 minutes 03 seconds. The full preprocessing pipeline is described in Appendix B. We train the LPGD taut-string network solver on the first subject, and Figure 5 reports the performance of the two algorithms on the second subject for λ = 0.1. The performance is reported relative to the number of iterations, as the computational complexity of each iteration or layer is equivalent for both methods. It is clear that LPGD-Taut converges faster than Accelerated PGD even on real data. In particular, the acceleration is higher when the regularization parameter λ is smaller. As mentioned previously, this acceleration is likely caused by the better learning capacity of the network in a low sparsity context. The same experiment is repeated for λ = 0.8 in Figure C.1.
5 Conclusion
This paper studies the optimization of TV-regularised problems via learned PGD. We demonstrated, both analytically and numerically, that it is better to address these problems in their original analysis formulation rather than resort to the simpler (alas slower) synthesis version. We then proposed two different algorithms that allow for the efficient computation and derivation of the required prox-TV, exactly or approximately. Our experiments on synthetic and real data demonstrate that our learned networks for prox-TV provide a significant advantage in convergence speed.
Finally, we believe that the principles presented in this paper could be generalized and deployed in other optimization problems, involving not just the TV-norm but more general analysis-type priors. In particular, this paper only applies to 1D TV problems, because the equivalence between the Lasso and TV is not exact in higher dimensions. In that case, we believe that exploiting a dual formulation (Chambolle, 2004) of the problem could allow us to derive similar learnable algorithms.
Broader Impact
This work attempts to shed some understanding into empirical phenomena in signal processing – in our case, piecewise constant approximations. As such, it is our hope that this work encourages fellow researchers to invest in the study and development of principled machine learning tools. Besides these, we do not foresee any other immediate societal consequences.
Acknowledgement
We gratefully acknowledge discussions with Pierre Ablin, whose suggestions helped us completing some parts of the proofs. H. Cherkaoui is supported by a CEA PhD scholarship. J. Sulam is partially supported by NSF Grant 2007649. | 1. What is the main contribution of the paper regarding TV-regularized problems?
2. What are the strengths of the paper, particularly in its structure, motivation, and experimental demonstrations?
3. What are the weaknesses of the paper, especially regarding its novelty and notational errors?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes two approaches to efficiently compute the derivatives for TV-regularised problems via learned proximal gradient descent using deep neural networks to emulate the algorithm's iterations. It analyzes their benefits and limitations and discusses the regime in which the proposed approaches can improve over the iterative analogues.
Strengths
* The paper is well structured. There is a clear motivation given, well documented related work and experiments demonstrating the claims. The proposed approaches are clear in comparison and context of the related work. * The theoretical claims are well justified, with proofs provided in the Supplementary Material. Code implementing the described approaches is also provided. One can reproduce the shown results. * Experiments demonstrate that the learned networks for prox-TV provide a significant advantage in convergence speed.
Weaknesses
* The novelty of the work is not that significant. Using deep nets to model unrolled algorithms has been proposed already in the methods by Gregor and LeCun (2010), which are described in the related works. The novelty lies in formulating the algorithms in their "analysis" instead of their "synthesis" version. Further, this paper only applies to 1D TV problems. * Many of the equations in the paper mix / equate the minimizer with the objective or the minimum with the objective. It is possible that the notation is such in order to save space, but it is nevertheless incorrect and might confuse the readers. * A few grammatical errors here and there, mostly using singular instead of plural. |
NIPS | Title
Faster Randomized Infeasible Interior Point Methods for Tall/Wide Linear Programs
Abstract
Linear programming (LP) is used in many machine learning applications, such as `1-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are one of the most popular methods to solve LPs both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider infeasible IPMs for the special case where the number of variables is much larger than the number of constraints (i.e., wide), or vice-versa (i.e., tall) by taking the dual. Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with the Conjugate Gradient iterative solver, provably guarantees that infeasible IPM algorithms (suitably modified to account for the error incurred by the approximate solver), converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real and synthetic data.
1 Introduction
Linear programming (LP) is one of the most useful tools available to theoreticians and practitioners throughout science and engineering. In Machine Learning, LP appears in numerous settings, including `1-regularized SVMs [57], basis pursuit (BP) [54], sparse inverse covariance matrix estimation (SICE) [55], the nonnegative matrix factorization (NMF) [45], MAP inference [37], etc. Not surprisingly, designing and analyzing LP algorithms is a topic of paramount importance in computer science and applied mathematics.
One of the most successful paradigms for solving LPs is the family of Interior Point Methods (IPMs), pioneered by Karmarkar in the mid 1980s [25]. Path-following IPMs and, in particular, long-step path following IPMs, are among the most practical approaches for solving linear programs. Consider the standard form of the primal LP problem:
min c^T x ,  subject to  Ax = b , x ≥ 0 ,    (1)
where A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n are the inputs, and x ∈ R^n is the vector of the primal variables. The associated dual problem is
max b^T y ,  subject to  A^T y + s = c , s ≥ 0 ,    (2)
where y ∈ Rm and s ∈ Rn are the vectors of the dual and slack variables respectively. Triplets (x,y, s) that uphold both (1) and (2) are called primal-dual solutions. Path-following IPMs typically converge towards a primal-dual solution by operating as follows: given the current iterate (xk,yk, sk), they compute the Newton search direction (∆x,∆y,∆s) and update the current iterate by following a step towards the search direction. To compute the search direction, one standard approach [41] involves solving the normal equations1:
AD^2 A^T ∆y = p.    (3)
Here, D = X^{1/2} S^{−1/2} is a diagonal matrix, X, S ∈ R^{n×n} are diagonal matrices whose i-th diagonal entries are equal to x_i and s_i, respectively, and p ∈ R^m is a vector whose exact definition is given in eqn. (16)².
The core computational bottleneck in IPMs is the need to solve the linear system of eqn. (3) at each iteration. This leads to two key challenges: first, for high-dimensional matrices A, solving the linear system is computationally prohibitive. Most implementations of IPMs use a direct solver; see Chapter 6 of [41]. However, if AD2AT is large and dense, direct solvers are computationally impractical. If AD2AT is sparse, specialized direct solvers have been developed, but these do not apply to many LP problems arising in machine learning applications due to irregular sparsity patterns. Second, an alternative to direct solvers is the use of iterative solvers, but the situation is further complicated since AD2AT is typically ill-conditioned. Indeed, as IPM algorithms approach the optimal primal-dual solution, the diagonal matrix D is ill-conditioned, which also results in the matrix AD2AT being ill-conditioned. Additionally, using approximate solutions for the linear system of eqn. (3) causes certain invariants, which are crucial for guaranteeing the convergence of IPMs, to be violated; see Section 1.1 for details.
In this paper, we address the aforementioned challenges, for the special case where m ≪ n, i.e., the number of constraints is much smaller than the number of variables; see Appendix A for a generalization. This is a common setting in ML applications of LP solvers, since `1-SVMs and basis pursuit problems often exhibit such structure when the number of available features (n) is larger than the number of objects (m). This setting has been of interest in recent work on LPs [17, 4, 31]. For simplicity of exposition, we also assume that the constraint matrix A has full rank, equal to m. First, we propose and analyze a preconditioned Conjugate Gradient (CG) iterative solver for the normal equations of eqn. (3), using matrix sketching constructions from the Randomized Linear Algebra (RLA) literature. We develop a preconditioner for AD^2 A^T using matrix sketching which allows us to prove strong convergence guarantees for the residual of CG solvers. Second, building upon the work of [39], we propose and analyze a provably accurate long-step infeasible IPM algorithm. The proposed IPM solves the normal equations using iterative solvers. In this paper, for brevity and clarity, we primarily focus our description and analysis on the CG iterative solver. We note that a non-trivial concern is that the use of iterative solvers and matrix sketching tools implies that the normal equations at each iteration will be solved only approximately. In our proposed IPM, we develop a novel way to correct for the error induced by the approximate solution in order to guarantee convergence. Importantly, this correction step is relatively computationally light, unlike a similar step proposed in [39]. Third, we empirically show that our algorithm performs well in practice. We consider solving LPs that arise from `1-regularized SVMs and test them on a variety of synthetic and real datasets. Several extensions of our work are discussed in Appendix A.
1.1 Our contributions
Our point of departure in this work is the introduction of preconditioned, iterative solvers for solving eqn. (3). Preconditioning is used to address the ill-conditioning of the matrix AD2AT. Iterative solvers allow the computation of approximate solutions using only matrix-vector products while avoiding matrix inversion, Cholesky or LU factorizations, etc. A preconditioned formulation of eqn. (3) is:
Q^{−1} AD^2 A^T ∆y = Q^{−1} p,    (4)
where Q ∈ Rm×m is the preconditioning matrix; Q should be easily invertible (see [3, 22] for background). An alternative yet equivalent formulation of eqn. (4), which is more amenable to
1Another widely used approach is to solve the augmented system [41] which is less relevant for this paper. 2The superscript k in eqn. (16) simply indicates iteration count and is omitted here for notational simplicity.
theoretical analysis, is
Q^{−1/2} AD^2 A^T Q^{−1/2} z = Q^{−1/2} p,    (5)
where z ∈ R^m is a vector such that ∆y = Q^{−1/2} z. Note that the matrix on the left-hand side of the above equation is always symmetric, which is not necessarily the case for eqn. (4). We do emphasize that one can use eqn. (4) in the actual implementation of the preconditioned solver; eqn. (5) is much more useful in theoretical analyses.
Recall that we focus on the special case where A ∈ R^{m×n} has m ≪ n, i.e., it is a short-and-fat matrix. Our first contribution starts with the design and analysis of a preconditioner for the Conjugate Gradient solver that satisfies, with high probability,
2/(2 + ζ) ≤ σ_min^2(Q^{−1/2} AD) ≤ σ_max^2(Q^{−1/2} AD) ≤ 2/(2 − ζ) ,    (6)
for some error parameter ζ ∈ [0, 1]. In the above, σmin(·) and σmax(·) correspond to the smallest and largest singular value of the matrix in parentheses. The above condition says that the preconditioner effectively reduces the condition number of AD to a constant. We note that the particular form of the lower and upper bounds in eqn. (6) was chosen to simplify our derivations. RLA matrix-sketching techniques allow us to construct preconditioners for all short-and-fat matrices that satisfy the above inequality and can be inverted efficiently. Such constructions go back to the work of [2]; see Section 2 for details on the construction of Q and its inverse. Importantly, given such a preconditioner, we then prove that the resulting CG iterative solver satisfies
‖Q^{−1/2} AD^2 A^T Q^{−1/2} z̃_t − Q^{−1/2} p‖_2 ≤ ζ^t ‖Q^{−1/2} p‖_2 .    (7)
Here z̃t is the approximate solution returned by the CG iterative solver after t iterations. In words, the above inequality states that the residual achieved after t iterations of the CG iterative solver drops exponentially fast. To the best of our knowledge, this result is not known in the CG literature: indeed, it is actually well-known that the residual of CG may oscillate [21], even in cases where the energy norm of the solution error decreases monotonically. However, we prove that if the preconditioner is sufficiently good, i.e., it satisfies the constraint of eqn. (6), then the residual decreases as well.
Our second contribution is the analysis of a novel variant of a long-step infeasible IPM algorithm proposed by [39]. Recall that such algorithms can, in general, start with an initial point that is not necessarily feasible, but does need to satisfy some, more relaxed, constraints. Following the lines of [56, 39], let S be the set of feasible and optimal solutions of the form (x∗,y∗, s∗) for the primal and dual problems of eqns. (1) and (2) and assume that S is not empty. Then, long-step infeasible IPMs can start with any initial point (x0,y0, s0) that satisfies (x0, s0) > 0 and (x0, s0) ≥ (x∗, s∗), for some feasible and optimal solution (x∗, s∗) ∈ S . In words, the starting primal and slack variables must be strictly positive and larger (element-wise) when compared to some feasible, optimal primaldual solution. See Chapter 6 of [52] for a discussion regarding why such choices of starting points are relevant to computational practice and can be identified more efficiently than feasible points.
The flexibility of infeasible IPMs comes at a cost: long-step feasible IPMs converge in O(n log(1/ε)) iterations, while long-step infeasible IPMs need O(n^2 log(1/ε)) iterations to converge [56, 39]. (Here ε is the accuracy of the approximate LP solution returned by the IPM; see Algorithm 2 for the exact definition.) Let
Ax^0 − b = r_p^0 ,    (8)
A^T y^0 + s^0 − c = r_d^0 ,    (9)
where r_p^0 ∈ R^m and r_d^0 ∈ R^n are the primal and dual residuals, respectively, and characterize how far the initial point is from being feasible. As long-step infeasible IPM algorithms iterate and update the primal and dual solutions, the residuals are updated as well. Let r^k = (r_p^k, r_d^k) ∈ R^{n+m} be the primal and dual residual at the k-th iteration: it is well-known that the convergence analysis of infeasible long-step IPMs critically depends on r^k lying on the line segment between 0 and r^0. Unfortunately, using approximate solvers (such as the CG solver proposed above) for the normal equations violates this invariant. [39] proposed a simple solution to fix this problem by adding a perturbation vector v to the current primal-dual solution that guarantees that the invariant is satisfied. Again, we use RLA matrix sketching principles to propose an efficient construction for v that provably satisfies the invariant. Next, we combine the above two primitives to prove that Algorithm 2 in Section 3 satisfies the following theorem.
Theorem 1 Let 0 ≤ ε ≤ 1 be an accuracy parameter. Consider the long-step infeasible IPM Algorithm 2 (Section 3) that solves eqn. (5) using the CG solver of Algorithm 1 (Section 2). Assume that the CG iterative solver runs with accuracy parameter ζ = 1/2 and iteration count t = O(log n). Then, with probability at least 0.9, the long-step infeasible IPM converges after O(n^2 log(1/ε)) iterations.
We note that the 0.9 success probability above is for simplicity of exposition and can be easily amplified using standard techniques. Also, at each iteration of our infeasible long-step IPM algorithm, the running time is O((nnz(A) +m3) log n), ignoring constant terms. See Section 3 for a detailed discussion of the overall running time.
Our empirical evaluation demonstrates that our algorithm requires an order of magnitude fewer inner CG iterations than a standard IPM using CG, while producing a comparably accurate solution (see Section 4).
1.2 Prior Work
There is a large body of literature on solving LPs using IPMs. We only review literature that is immediately relevant to our work. Recall that we solve the normal equations inexactly at each iteration, and develop a way to correct for the error incurred. We also focus on IPMs that can use a sufficiently positive, infeasible initial point (see Section 1.1). We discuss below two papers that present related ideas.
[39] proposed the use of an approximate iterative solver for eqn. (3), followed by a correction step to "fix" the approximate solution (see our discussion in Section 1.1). We propose efficient, RLA-based approaches to precondition and solve eqn. (3), as well as a novel approach to correct for the approximation error in order to guarantee the convergence of the IPM algorithm. Specifically, [39] propose to solve eqn. (3) using the so-called maximum weight basis preconditioner [46]. However, computing such a preconditioner needs access to a maximal linearly independent set of columns of AD in each iteration, which is costly, taking O(m^2 n) time in the worst case. More importantly, while [38] was able to provide a bound on the condition number of the preconditioned matrix that depends only on properties of A, and is independent of D, this bound might, in general, be very large. In contrast, our bound is a constant and does not depend on properties of A or its dimensions. In addition, [39] assumed a bound on the two-norm of the residual of the preconditioned system, but it is unclear how their preconditioner guarantees such a bound. Similar concerns exist for the construction of the correction vector v proposed by [39], which our work alleviates.
The line of research in the Theoretical Computer Science literature that is closest to our work is [15], who presented an IPM that uses an approximate solver in each iteration. However, their accuracy guarantee is in terms of the final objective value which is different from ours. More importantly, [15] focuses on short-step, feasible IPMs, whereas ours is long-step and does not require a feasible starting point. Finally, the approximate solver proposed by [15] works only for the special case of input matrices that correspond to graph Laplacians, following the lines of [47, 48].
We also note that in the Theoretical Computer Science literature, [26, 27, 28, 29, 30, 7, 12] proposed and analyzed theoretically ground-breaking algorithms for LPs based on novel tools such as the so-called inverse maintenance for accelerating the linear system solvers in IPMs. However, all these endeavors are primarily focused on the theoretically fast but practically inefficient short-step feasible IPMs and, to the best of our knowledge, no implementations of these approaches are available for comparisons to standard long-step IPMs. We highlight that our work is focused on infeasible long-step IPMs, known to work efficiently in practice.
Another relevant line of research is the work of [14], which proposed solving eqn. (3) using preconditioned Krylov subspace methods, including variants of generalized minimum residual (GMRES) or CG methods. Indeed, [14] conducted extensive numerical experiments on LP problems taken from standard benchmark libraries, but did not provide any theoretical guarantees.
From a matrix-sketching perspective, our work was also partially motivated by [8], which presented an iterative, sketching-based algorithm to solve under-constrained ridge regression problems, but did not address how to make use of such approaches in an IPM-based framework, as we do here. In another work, [1] proposed a similar sketching-based preconditioning technique. However, their efforts broadly revolved around speeding up and scaling kernel ridge regression. [43, 53] proposed the so-called Newton sketch to construct an approximate Hessian matrix for more general convex objective functions of which LP is a special case. Nevertheless, these randomized second-order
methods are significantly faster than the conventional approach only when the data matrix is over-constrained, i.e., m ≫ n. It is unclear whether the approach of [43, 53] is faster than IPMs when the optimization problem to be solved is linear. [49] proposed a probabilistic algorithm to solve LPs approximately in a random projection-based reduced feature space. A possible drawback of this approach is that the approximate solution is infeasible with respect to the original feasible region. Finally, we refer the interested reader to the surveys [51, 19, 33, 18, 24, 34] for more background on Randomized Linear Algebra.
1.3 Notation and Background
A,B, . . . denote matrices and a,b, . . . denote vectors. For vector a, ‖a‖2 denotes its Euclidean norm; for a matrix A, ‖A‖2 denotes its spectral norm and ‖A‖F denotes its Frobenius norm. We use 0 to denote a null vector or null matrix, dependent upon context, and 1 to denote the all-ones vector. For any matrix X ∈ Rm×n with m ≤ n of rank m its thin Singular Value Decomposition (SVD) is the product UΣVT , with U ∈ Rm×m (the matrix of the left singular vectors), V ∈ Rn×m( the matrix of the top-m right singular vectors), and Σ ∈ Rm×m a diagonal matrix whose entries are equal to the singular values of X. We use σi(·) to denote the i-th singular value of the matrix in parentheses.
We now briefly discuss a result on matrix sketching [13, 11] that is particularly useful in our theoretical analyses. In our parlance, [13] proved that, for any matrix Z ∈ R^{m×n}, there exists a sketching matrix W ∈ R^{n×w} such that
‖ZWW^T Z^T − ZZ^T‖_2 ≤ (ζ/4) ( ‖Z‖_2^2 + ‖Z‖_F^2 / r )    (10)
holds with probability at least 1 − δ for any r ≥ 1. Here ζ ∈ [0, 1] is a (constant) accuracy parameter. Ignoring constant terms, w = O(r log(r/δ)); W has s = O(log(r/δ)) non-zero entries per row, where the s uniformly random entries are chosen without replacement and set to ±1/√s independently; the product ZW can be computed in time O(log(r/δ) · nnz(Z)).
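One plausible NumPy construction of such a sparse sketching matrix is sketched below; it follows the description above (s nonzeros per row, normalized so that the sketch is an isometry in expectation), but the exact construction of [13] may differ in details.

```python
import numpy as np

def sparse_sketch(n, w, s, rng=None):
    """Sketching matrix W in R^{n x w} with s nonzeros per row, each set to +-1/sqrt(s)."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = np.zeros((n, w))
    for i in range(n):
        cols = rng.choice(w, size=s, replace=False)           # s column positions, no replacement
        W[i, cols] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return W
```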
2 Conjugate Gradient Solver
In this section, we discuss the computation of the preconditioner Q (and its inverse), followed by a discussion on how such a preconditioner can be used to satisfy eqns. (6) and (7).
Algorithm 1 Solving eqn. (5) via CG
Input: AD ∈ R^{m×n}, p ∈ R^m, sketching matrix W ∈ R^{n×w}, iteration count t;
1: Compute ADW and its SVD: let U_Q be the matrix of its left singular vectors and let Σ_Q^{1/2} be the matrix of its singular values;
2: Compute Q^{−1/2} = U_Q Σ_Q^{−1/2} U_Q^T;
3: Initialize z̃_0 ← 0_m and run standard CG on the preconditioned system of eqn. (5) for t iterations;
Output: z̃_t;
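A compact NumPy/SciPy rendering of Algorithm 1 might look as follows (our sketch; the sketching matrix W would come from a construction like the one in Section 1.3, and the routine returns ∆y = Q^{−1/2} z̃_t directly).

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def precond_cg(AD, p, W, t):
    """Algorithm 1: sketch AD, form Q^{-1/2}, and run t CG iterations on eqn. (5)."""
    m = AD.shape[0]
    ADW = AD @ W                                              # sketched matrix, m x w
    U, sig, _ = np.linalg.svd(ADW, full_matrices=False)       # ADW = U diag(sig) V^T
    Q_inv_half = U @ np.diag(1.0 / sig) @ U.T                 # Q^{-1/2} = U_Q Sigma_Q^{-1/2} U_Q^T
    def matvec(z):                                            # z -> Q^{-1/2} A D^2 A^T Q^{-1/2} z
        return Q_inv_half @ (AD @ (AD.T @ (Q_inv_half @ z)))
    op = LinearOperator((m, m), matvec=matvec)
    z_t, _ = cg(op, Q_inv_half @ p, maxiter=t)                # t preconditioned CG iterations
    return Q_inv_half @ z_t                                   # Delta_y = Q^{-1/2} z_t
```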
Algorithm 1 takes as input the sketching matrix W ∈ Rn×w, which we construct as discussed in Section 1.3. Our preconditioner Q is equal to
Q = ADWW^T DA^T.    (11)
Notice that we only need to compute Q^{−1/2} in order to use it to solve eqn. (5). Towards that end, we first compute the sketched matrix ADW ∈ R^{m×w}. Then, we compute the SVD of the matrix ADW: let U_Q be the matrix of its left singular vectors and let Σ_Q^{1/2} be the matrix of its singular values. Notice that the left singular vectors of Q^{−1/2} are equal to U_Q and its singular values are equal to Σ_Q^{−1/2}. Therefore, Q^{−1/2} = U_Q Σ_Q^{−1/2} U_Q^T.
Let AD = UΣV^T be the thin SVD representation of AD. We apply the results of [13] (see Section 1.3) to the matrix Z = V^T ∈ R^{m×n} with r = m to get that, with probability at least 1 − δ,
‖V^T WW^T V − I_m‖_2 ≤ ζ/2 .    (12)
The running time needed to compute the sketch ADW is equal to (ignoring constant factors) O(nnz(A) · log(m/δ)). Note that nnz(AD) = nnz(A). The cost of computing the SVD of ADW (and therefore Q−1/2) is O(m3 log(m/δ)). Overall, computing Q−1/2 can be done in time
O(nnz(A) · log(m/δ) + m^3 log(m/δ)).    (13)
Given these results, we now discuss how to satisfy eqns. (6) and (7) using the sketching matrix W. We start with the following bound, which is relatively straightforward given prior RLA work (see Appendix C.1 for a proof).
Lemma 2 If the sketching matrix W satisfies eqn. (12), then, for all i = 1, . . . , m,
(1 + ζ/2)^{-1} ≤ σ_i^2(Q^{-1/2} AD) ≤ (1 − ζ/2)^{-1}.
This lemma directly implies eqn. (6). We now proceed to show that the above construction for Q^{-1/2}, when combined with the conjugate gradient solver to solve eqn. (5), indeed satisfies eqn. (7) (see Chapter 9 of [32] for a detailed overview of CG). We do note that in prior work most of the convergence guarantees for CG focus on the error of the approximate solution. However, in our work, we are interested in the convergence of the residuals, and it is known that even if the energy norm of the error of the approximate solution decreases monotonically, the norms of the CG residuals may oscillate. Interestingly, we can combine a result on the residuals of CG from [6] with Lemma 2 to prove that in our setting the norms of the CG residuals also decrease monotonically (see Appendix C.2 for details).
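For intuition, a minimal SciPy sketch of step (3) of Algorithm 1 is given below: CG is run on the preconditioned system of eqn. (5) through a LinearOperator, so the m × m matrix on the left-hand side is never formed explicitly. The function name and the way Q^{-1/2} is passed in are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_preconditioned_normal_equations(A, d, Q_inv_half, p, t):
    """Run t CG iterations on eqn. (5) and return Delta y = Q^{-1/2} z_t."""
    m = A.shape[0]
    AD = A * d                                   # A D, with D = diag(d)

    def matvec(z):
        # Q^{-1/2} (A D)(A D)^T Q^{-1/2} z, applied as a chain of matrix-vector products
        return Q_inv_half @ (AD @ (AD.T @ (Q_inv_half @ z)))

    op = LinearOperator((m, m), matvec=matvec)
    rhs = Q_inv_half @ p
    z_t, _ = cg(op, rhs, maxiter=t)
    return Q_inv_half @ z_t
```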
We remark that one can consider using MINRES [42] instead of CG. Our results hinge on bounding the two-norm of the residual. MINRES finds, at each iteration, the optimal vector with respect to the two-norm of the residual inside the same Krylov subspace as CG for the corresponding iteration. Thus, the bound we prove for CG applies to MINRES as well.
3 The Infeasible IPM algorithm
In order to avoid spurious solutions, primal-dual path-following IPMs bias the search direction towards the central path and restrict the iterates to a neighborhood of the central path. This search is controlled by the centering parameter σ ∈ [0, 1]. At each iteration, given the current solution (xk,yk, sk), a standard infeasible IPM obtains the search direction (∆xk,∆yk,∆sk) by solving the following system of linear equations:
A D^2 A^T Δy^k = p^k,   (14a)
Δs^k = −r_d^k − A^T Δy^k,   (14b)
Δx^k = −x^k + σ µ_k S^{-1} 1_n − D^2 Δs^k.   (14c)
Here D and S are computed given the current iterate (xk and sk). After solving the above system, the infeasible IPM Algorithm 2 proceeds by computing a step-size ᾱ to return:
(x^{k+1}, y^{k+1}, s^{k+1}) = (x^k, y^k, s^k) + ᾱ (Δx^k, Δy^k, Δs^k).   (15)
Recall that r^k = (r_p^k, r_d^k) is a vector with r_p^k = A x^k − b and r_d^k = A^T y^k + s^k − c (the primal and dual residuals). We also use the duality measure µ_k = (x^k)^T s^k / n and the vector
p^k = −r_p^k − σ µ_k A S^{-1} 1_n + A x^k − A D^2 r_d^k.   (16)
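The bookkeeping in eqns. (14)-(16) amounts to a handful of matrix-vector products. The short NumPy sketch below transcribes these formulas as stated above; it is illustrative only and assumes x and s are the current, strictly positive iterates.

```python
import numpy as np

def ipm_quantities(A, b, c, x, y, s, sigma):
    """Primal/dual residuals, duality measure, and right-hand side p of eqn. (16)."""
    n = x.size
    r_p = A @ x - b                        # primal residual
    r_d = A.T @ y + s - c                  # dual residual
    mu = x @ s / n                         # duality measure
    d2 = x / s                             # diagonal of D^2 = X S^{-1}
    p = -r_p - sigma * mu * (A @ (1.0 / s)) + A @ x - A @ (d2 * r_d)
    return r_p, r_d, mu, p
```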
Given ∆yk from eqn. (14a), ∆sk and ∆xk are easy to compute from eqns. (14b) and (14c), as they only involve matrix-vector products. However, since we will use Algorithm 1 to solve eqn. (14a) approximately using the sketching-based preconditioned CG solver, the primal and dual residuals do not lie on the line segment between 0 and r0. This invalidates known proofs of convergence for infeasible IPMs.
For notational simplicity, we now drop the dependency of vectors and scalars on the iteration counter k. Let ∆̂y = Q−1/2z̃t be the approximate solution to eqn. (14a). In order to account for the loss of accuracy due to the approximate solver, we compute ∆̂x as follows:
∆̂x = −x + σ µ S^{-1} 1_n − D^2 ∆̂s − S^{-1} v.   (17)
Here v ∈ Rn is a perturbation vector that needs to exactly satisfy the following invariant at each iteration of the infeasible IPM:
A S^{-1} v = A D^2 A^T ∆̂y − p.   (18)
We note that the computation of ∆̂s is still done using eqn. (14b), which does not change. [39] argued that if v satisfies eqn. (18), the primal and dual residuals lie in the correct line segment.
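In code, recovering ∆̂s and the corrected ∆̂x from an approximate ∆̂y is again just a few vector operations. The sketch below follows eqns. (14b) and (17) directly; the perturbation vector v is taken as an input here, and its construction (eqn. (19)) is illustrated further below. This is a sketch under those assumptions, not a reference implementation.

```python
import numpy as np

def corrected_directions(A, x, s, r_d, mu, sigma, dy_hat, v):
    """Delta s (eqn. 14b) and the corrected Delta x (eqn. 17)."""
    ds_hat = -r_d - A.T @ dy_hat
    d2 = x / s                                           # diagonal of D^2
    dx_hat = -x + sigma * mu / s - d2 * ds_hat - v / s
    return dx_hat, ds_hat
```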
Construction of v. There are many choices for v satisfying eqn. (18). A general choice is v = (A S^{-1})^† (A D^2 A^T ∆̂y − p), which requires computing the pseudoinverse (A S^{-1})^†; this is expensive, taking time O(m^2 n). Instead, we propose to construct v using the sketching matrix W of Section 1.3. More precisely, we construct the perturbation vector
v = (X S)^{1/2} W (A D W)^† (A D^2 A^T ∆̂y − p).   (19)
The following lemma proves that the proposed v satisfies eqn. (18); see Appendix C.3 for the proof.
Lemma 3 Let W ∈ Rn×w be the sketching matrix of Section 1.3 and v be the perturbation vector of eqn. (19). Then, with probability at least 1− δ, rank(ADW) = m and v satisfies eqn. (18).
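A direct way to read eqn. (19) is as a small least-squares solve against the sketched matrix ADW, followed by two diagonal scalings. The NumPy sketch below is an illustration under the assumption that W is a dense array; lstsq plays the role of applying the pseudoinverse (ADW)^†, which is well defined since Lemma 3 guarantees rank(ADW) = m with high probability.

```python
import numpy as np

def perturbation_vector(A, x, s, W, dy_hat, p):
    """v = (X S)^{1/2} W (A D W)^dagger (A D^2 A^T dy_hat - p), as in eqn. (19)."""
    d = np.sqrt(x / s)                             # diagonal of D
    ADW = (A * d) @ W                              # m x w sketched matrix
    d2 = x / s
    resid = A @ (d2 * (A.T @ dy_hat)) - p          # A D^2 A^T dy_hat - p
    t = np.linalg.lstsq(ADW, resid, rcond=None)[0] # min-norm solution: (ADW)^dagger resid
    return np.sqrt(x * s) * (W @ t)                # left-multiply by (X S)^{1/2} W
```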
We emphasize here that we will use exactly the same sketching matrix W ∈ R^{n×w} to form the preconditioner used in the CG algorithm of Section 2 as well as the vector v in eqn. (19). This allows us to form the sketching matrix only once, thus saving time in practice. Next, we present a bound for the two-norm of the perturbation vector v of eqn. (19); see Appendix C.4 for the proof.
Lemma 4 With probability at least 1 − δ, our perturbation vector v in Lemma 3 satisfies
‖v‖_2 ≤ √(3 n µ) · ‖f̃^{(t)}‖_2,   (20)
with f̃^{(t)} = Q^{-1/2} A D^2 A^T Q^{-1/2} z̃_t − Q^{-1/2} p.
Intuitively, the bound in eqn. (20) implies that ‖v‖_2 depends on how close the approximate solution ∆̂y is to the exact solution. Lemma 4 is particularly useful in proving the convergence of Algorithm 2, which needs ‖v‖_2 to be a small quantity. More precisely, combining a result from [39] with our preconditioner Q^{-1/2}, we can prove that ‖Q^{-1/2} p‖_2 ≤ O(n) √µ. This bound allows us to prove that if we run Algorithm 1 for O(log n) iterations, then ‖f̃^{(t)}‖_2 ≤ (γσ/(4√n)) √µ and ‖v‖_2 ≤ (γσ/4) µ. The last two inequalities are critical in the convergence analysis of Algorithm 2; see Appendix F.1 and Appendix F.2 for details.
We are now ready to present the infeasible IPM algorithm. We will need the following definition for the neighborhood
N(γ) = {(x^k, y^k, s^k) : (x^k, s^k) > 0, x_i^k s_i^k ≥ (1 − γ) µ_k, and ‖r^k‖_2 / ‖r^0‖_2 ≤ µ_k / µ_0}.
Here γ ∈ (0, 1) and we note that the duality measure µ_k steadily reduces at each iteration.
Algorithm 2 Infeasible IPM
Input: A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, γ ∈ (0, 1), tolerance ε > 0, σ ∈ (0, 4/5);
Initialize: k ← 0; initial point (x^0, y^0, s^0);
1: while µ_k > ε do
2:   Compute sketching matrix W ∈ R^{n×w} (Section 1.3) with ζ = 1/2 and δ = O(n^{-2});
3:   Compute r_p^k = A x^k − b; r_d^k = A^T y^k + s^k − c; and p^k from eqn. (16);
4:   Solve the linear system of eqn. (5) for z using Algorithm 1 with W from step (2) and t = O(log n); compute ∆̂y = Q^{-1/2} z;
5:   Compute v using eqn. (19) with W from step (2); ∆̂s using eqn. (14b); ∆̂x using eqn. (17);
6:   Compute α̃ = argmax{α ∈ [0, 1] : (x^k, y^k, s^k) + α(∆̂x^k, ∆̂y^k, ∆̂s^k) ∈ N(γ)};
7:   Compute ᾱ = argmin_{α ∈ [0, α̃]} (x^k + α ∆̂x^k)^T (s^k + α ∆̂s^k);
8:   Compute (x^{k+1}, y^{k+1}, s^{k+1}) = (x^k, y^k, s^k) + ᾱ(∆̂x^k, ∆̂y^k, ∆̂s^k); set k ← k + 1;
9: end while
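Steps (6)-(7) of Algorithm 2 are one-dimensional searches over the step size. One simple way to realize them, sketched below, is a backtracking search for the largest α that stays inside N(γ), followed by a coarse grid search for the α that minimizes the duality gap. The membership test is supplied by the caller according to the definition of N(γ) above; the routine actually used for the experiments may differ, so this is only an illustration.

```python
import numpy as np

def step_sizes(x, s, dx, ds, in_neighborhood, shrink=0.9, grid_points=50):
    """in_neighborhood(alpha) should implement the N(gamma) test described above."""
    alpha = 1.0
    while alpha > 1e-12 and not in_neighborhood(alpha):
        alpha *= shrink                                   # step 6: backtrack into N(gamma)
    alpha_tilde = alpha
    grid = np.linspace(0.0, alpha_tilde, grid_points)
    gaps = [(x + a * dx) @ (s + a * ds) for a in grid]    # step 7: duality gap along the step
    return alpha_tilde, grid[int(np.argmin(gaps))]
```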
Running time of Algorithm 2. We start by discussing the running time to compute v. As discussed in Section 2, (ADW)^† can be computed in O(nnz(A) · log(m/δ) + m^3 log(m/δ)) time. Now, as W has O(log(m/δ)) non-zero entries per row, pre-multiplying by W takes O(nnz(A) log(m/δ)) time (assuming nnz(A) ≥ n). Since X and S are diagonal matrices, computing v takes O(nnz(A) · log(m/δ) + m^3 log(m/δ)) time, which is asymptotically the same as computing Q^{-1/2} (see eqn. (13)).
We now discuss the overall running time of Algorithm 2. At each iteration, with failure probability δ, the preconditioner Q^{-1/2} and the vector v can be computed in O(nnz(A) · log(m/δ) + m^3 log(m/δ)) time. In addition, for t = O(log n) iterations of Algorithm 1, all the matrix-vector products in the CG solver can be computed in O(nnz(A) · log n) time. Therefore, the computational time for steps (2)-(5) is given by O(nnz(A) · (log n + log(m/δ)) + m^3 log(m/δ)). Finally, taking a union bound over all iterations with δ = O(n^{-2}) (ignoring constant factors), Algorithm 2 converges with probability at least 0.9. The running time at each iteration is given by O((nnz(A) + m^3) log n).
4 Experiments
We demonstrate the empirical performance of our algorithm on a variety of synthetic and real-world datasets from the UCI ML Repository [20], such as ARCENE, DEXTER [23], DrivFace [16], and a gene expression cancer RNA-Sequencing dataset that is part of the PANCAN dataset [50]. See Appendix G, Table 1 for a description of the datasets. We observed that the results for both synthetic (Appendix G.2) and real-world data were qualitatively similar; we highlight results on representative real datasets. The experiments were implemented in Python and run on a server with an Intel E5-2623 v3 @ 3.0GHz CPU (8 cores) and 64GB RAM. As an application, we consider `1-regularized SVMs: all of the datasets are concerned with binary classification with m ≪ n, where n is the number of features. In Appendix G.1, we describe the `1-SVM problem and how it can be formulated as an LP. Here, m is the number of training points, n is the feature dimension, and the size of the constraint matrix in the LP becomes m × (2n + 1).
Experimental Results. We compare our Algorithm 2 with a standard IPM (see Chapter 10, [44]) using CG and a standard IPM using a direct solver. We also use CVXPY as a benchmark to compare the accuracy of the solutions; we define the relative error ‖x̂ − x*‖_2 / ‖x*‖_2, where x̂ is our solution and x* is the solution generated by CVXPY. We also consider the number of outer iterations, namely the number of iterations of the IPM algorithm, as well as the number of inner iterations, namely the number of iterations of the CG solver. We denote the relative stopping tolerance for CG by tol_CG and we denote the outer iteration residual by τ. If not specified: τ = 10^{-9}, tol_CG = 10^{-5}, and σ = 0.5. We evaluated a Gaussian sketching matrix, and the initial triplet (x, y, s) for all IPM algorithms was set to be all ones.
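For completeness, the snippet below shows one way such a CVXPY baseline and relative-error computation could look for a standard-form LP; the `1-SVM-to-LP reduction itself is described in Appendix G.1 and is not reproduced here, and the solver choice is left to CVXPY's defaults. This is a hedged illustration, not the exact benchmarking script used for the experiments.

```python
import numpy as np
import cvxpy as cp

def relative_error_vs_cvxpy(A, b, c, x_hat):
    """Solve min c^T x s.t. Ax = b, x >= 0 with CVXPY and compare against x_hat."""
    x = cp.Variable(A.shape[1], nonneg=True)
    prob = cp.Problem(cp.Minimize(c @ x), [A @ x == b])
    prob.solve()
    x_star = x.value
    return np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star)
```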
Figure 1(a) shows that our Algorithm 2 uses an order of magnitude fewer inner iterations than the un-preconditioned standard solver. This is due to the improved conditioning of the respective matrices in the normal equations, as demonstrated in Figure 1(b). Across various real and synthetic data sets, the results were qualitatively similar to those shown in Figure 1. Results for several real data sets are summarized in Appendix G, Table 1. The number of outer iterations is unaffected by our internal approximation methods and is generally the same for our Algorithm 2, the standard IPM with CG, and the standard IPM with a direct linear solver (denoted IPM w/Dir), as seen in Appendix G, Table 1. Figure 1 also demonstrates the relative insensitivity to the choice of w (the sketching dimension, i.e., the number of columns of the sketching matrix W of Section 1.3). For smaller values of w, our algorithm requires more inner iterations. However, across various choices of w, the number of inner iterations is always an order of magnitude smaller than the number required by the standard solver.
Figures 1(c)-1(d) show the performance of our algorithm for a range of (w, tolCG) pairs. Figure 1(c) demonstrates that the number of the inner iterations is robust to the choice of tolCG and w. The number of inner iterations varies between 15 and 35 for the ARCENE data set, while the standard IPM took on the order of 1, 000 iterations across all parameter settings. Across all settings, the relative error was fixed at 0.04%. In general, our sketched IPM is able to produce an extremely high accuracy solution across parameter settings. Thus we do not report additional numerical results for the relative error, which was consistently 10−3 or less. Figure 1(d) demonstrates a tradeoff of our approach: as both tolCG and w are increased, the condition number κ(Q−1/2AD2ATQ−1/2) decreases, corresponding to better conditioned systems. As a result, fewer inner iterations are required. Additional experiments can be found in Appendix G.4.
5 Conclusions
We proposed and analyzed an infeasible IPM algorithm using a preconditioned conjugate gradient solver for the normal equations and a novel perturbation vector to correct for the error due to the approximate solver. Thus, we speed up each iteration of the IPM algorithm, without increasing the overall number of iterations. We demonstrate empirically that our IPM requires an order of magnitude fewer inner iterations within each linear solve than standard IPMs. Several extensions of our work are discussed in Appendix A.
Broader Impact
Our work is focused on speeding up algorithms for tall/wide LPs. As such, it could have significant broader impacts by allowing users to solve increasingly larger LPs in the numerous settings discussed in our introduction. While applications of our work to real data could result in ethical considerations, this is an indirect (and unpredictable) side-effect of our work. Our experimental work uses publicly available datasets to evaluate the performance of our algorithms; no ethical considerations are raised.
Acknowledgements. We thank the anonymous reviewers for their helpful comments. AC and PD were partially supported by NSF FRG 1760353 and NSF CCF-BSF 1814041. HA was partially supported by BSF grant 2017698. PL was supported by an Amazon Graduate Fellowship in Artificial Intelligence. | 1. What is the main contribution of the paper in the field of infeasible interior point methods (IIPM)?
2. What are the strengths of the proposed approach, particularly in terms of computational efficiency?
3. What are the weaknesses of the paper, especially regarding its numerical study and the choice of sketching method?
4. How does the reviewer assess the value of the theoretical results showing the convergence rate of the proposed IIPM?
5. What suggestions does the reviewer have for improving the numerical study and making the paper more practical and relevant to the community? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
I have read the response and the discussion of reviewers. The reviewer learned that there exists a line of prior theoretical works about sketching-based IPMs which are very relevant to this work and which deserve more detailed discussion; ideally they could be included as comparisons in the experiments, to validate the practical advantages claimed in the "prior work" section. In general the reviewer believes that this is a good piece of work with a new theoretical contribution in a subclass of IPMs with infeasible start, but the current version falls short a bit on the numerical side, which makes the paper borderline, considering the fact that NeurIPS exercises a high standard. =========================================== This paper proposes a novel sketching-based infeasible IPM with preconditioned conjugate gradient for efficiently solving linear programming tasks. A theoretical convergence analysis is provided, showing the same convergence rate as the standard infeasible IPM. Numerical results demonstrate the computational efficiency of this approach, due to sketching, compared to the standard infeasible IPM.
Strengths
The paper provides a new practical approach for designing fast IIPMs, using sketching techniques to perform dimensionality reduction for computational efficiency. The theoretical results, showing that the IIPM with sketching has the same convergence as the standard IIPM, are sound and valuable for the community.
Weaknesses
The numerical study in the current version is limited, since it only compares to a standard IPM, while, as cited (references 41, 51), there are already a number of sketching-based algorithms which are readily applicable to LP. The author(s) only argue that these methods' performance in the high-dimensional regime is unclear -- then why not show it numerically? Another issue is that the author(s) use a Gaussian sketch in the experiments, which the reviewer believes is not a good choice -- the Gaussian sketch, although it admits good theoretical properties, is not recommended in practice since it is computationally expensive. In practice, efficient sketching methods like the count-sketch and the randomized orthogonal system sketch (fast Johnson-Lindenstrauss) are used in sketching-based frameworks. The reviewer would suggest that the author(s) include a set of experiments using these practical sketching schemes.
1. What is the focus and contribution of the paper on linear programming solvers?
2. What are the strengths of the proposed approach, particularly in terms of using matrix sketching?
3. What are the weaknesses of the paper regarding its comparisons with other works and experimental evaluations?
4. How does the reviewer assess the clarity and readability of the paper's content?
5. Are there any questions raised by the reviewer that require further explanation or details from the authors? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper proposes a solver for linear programming (LP). The solver uses Interior Point Methods (IPM) and solves the linear system -arising at each IPM iteration- via a preconditioned conjugate gradient. The novelty lies in the design of a new preconditioner based on matrix sketching. The paper also proposes a correction technique (to compensate for the error introduced when computing approximate solutions to the linear system) that ensures convergence to a feasible and approximately optimal solution of the LP.
Strengths
- linear programming is important and developing faster/more scalable solvers may impact several ML applications.
- the idea of using matrix sketching to compute the preconditioner for eq (3) seems reasonable for wide/tall systems.
- the use of the same sketching matrix in both the preconditioning and the correction vector v is clever and efficient.
- the technical results look correct and the derivation is sound.
- the paper is clear and easy to read.
Weaknesses
- the last paragraph in the “Prior Work” section, focusing on sketching techniques, does not do a great job at pointing out novelty. It talks about sketching being used for different problems, but it is unclear if those related works also focused on solving linear systems (maybe in a different context) and can be directly applied to eq (3).
- related to the previous point: relevant references seem to be missing. For instance: [Faster Kernel Ridge Regression Using Sketching and Preconditioning, Haim Avron, Kenneth L. Clarkson, and David P. Woodruff, SIAM Journal on Matrix Analysis and Applications 2017 38:4, 1116-1138.] [Preconditioning Kaczmarz method by sketching, Alexandr Katrutsa, Ivan Oseledets, https://arxiv.org/pdf/1903.01806.pdf]
- the experimental evaluation only compares the proposed approach against an un-preconditioned conjugate gradient (CG) method. As expected, un-preconditioned CG takes many inner iterations. Why not compare against standard preconditioners and possibly evaluate timing? Currently, the reader cannot figure out if or when the proposed approach is convenient over existing preconditioned CG methods.
NIPS | Title
Faster Randomized Infeasible Interior Point Methods for Tall/Wide Linear Programs
Abstract
Linear programming (LP) is used in many machine learning applications, such as `1-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are one of the most popular methods to solve LPs both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider infeasible IPMs for the special case where the number of variables is much larger than the number of constraints (i.e., wide), or vice-versa (i.e., tall) by taking the dual. Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with the Conjugate Gradient iterative solver, provably guarantees that infeasible IPM algorithms (suitably modified to account for the error incurred by the approximate solver), converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real and synthetic data.
1 Introduction
Linear programming (LP) is one of the most useful tools available to theoreticians and practitioners throughout science and engineering. In Machine Learning, LP appears in numerous settings, including `1-regularized SVMs [57], basis pursuit (BP) [54], sparse inverse covariance matrix estimation (SICE) [55], the nonnegative matrix factorization (NMF) [45], MAP inference [37], etc. Not surprisingly, designing and analyzing LP algorithms is a topic of paramount importance in computer science and applied mathematics.
One of the most successful paradigms for solving LPs is the family of Interior Point Methods (IPMs), pioneered by Karmarkar in the mid 1980s [25]. Path-following IPMs and, in particular, long-step path following IPMs, are among the most practical approaches for solving linear programs. Consider the standard form of the primal LP problem:
min cTx , subject to Ax = b ,x ≥ 0 , (1) where A ∈ Rm×n, b ∈ Rm, and c ∈ Rn are the inputs, and x ∈ Rn is the vector of the primal variables. The associated dual problem is
max bTy , subject to ATy + s = c , s ≥ 0 , (2)
where y ∈ Rm and s ∈ Rn are the vectors of the dual and slack variables respectively. Triplets (x,y, s) that uphold both (1) and (2) are called primal-dual solutions. Path-following IPMs typically converge towards a primal-dual solution by operating as follows: given the current iterate (xk,yk, sk), they compute the Newton search direction (∆x,∆y,∆s) and update the current iterate by following a step towards the search direction. To compute the search direction, one standard approach [41] involves solving the normal equations1:
AD2AT∆y = p. (3)
Here, D = X1/2S−1/2 is a diagonal matrix, X,S ∈ Rn×n are diagonal matrices whose i-th diagonal entries are equal to xi and si, respectively, and p ∈ Rm is a vector whose exact definition is given in eqn. (16)2. Given ∆y, computing ∆s and ∆x only involves matrix-vector products.
The core computational bottleneck in IPMs is the need to solve the linear system of eqn. (3) at each iteration. This leads to two key challenges: first, for high-dimensional matrices A, solving the linear system is computationally prohibitive. Most implementations of IPMs use a direct solver; see Chapter 6 of [41]. However, if AD2AT is large and dense, direct solvers are computationally impractical. If AD2AT is sparse, specialized direct solvers have been developed, but these do not apply to many LP problems arising in machine learning applications due to irregular sparsity patterns. Second, an alternative to direct solvers is the use of iterative solvers, but the situation is further complicated since AD2AT is typically ill-conditioned. Indeed, as IPM algorithms approach the optimal primal-dual solution, the diagonal matrix D is ill-conditioned, which also results in the matrix AD2AT being ill-conditioned. Additionally, using approximate solutions for the linear system of eqn. (3) causes certain invariants, which are crucial for guaranteeing the convergence of IPMs, to be violated; see Section 1.1 for details.
In this paper, we address the aforementioned challenges, for the special case where m n, i.e., the number of constraints is much smaller than the number of variables; see Appendix A for a generalization. This is a common setting in ML applications of LP solvers, since `1-SVMs and basis pursuit problems often exhibit such structure when the number of available features (n) is larger than the number of objects (m). This setting has been of interest in recent work on LPs [17, 4, 31]. For simplicity of exposition, we also assume that the constraint matrix A has full rank, equal to m. First, we propose and analyze a preconditioned Conjugate Gradient (CG) iterative solver for the normal equations of eqn. (3), using matrix sketching constructions from the Randomized Linear Algebra (RLA) literature. We develop a preconditioner for AD2AT using matrix sketching which allows us to prove strong convergence guarantees for the residual of CG solvers. Second, building upon the work of [39], we propose and analyze a provably accurate long-step infeasible IPM algorithm. The proposed IPM solves the normal equations using iterative solvers. In this paper, for brevity and clarity, we primarily focus our description and analysis on the CG iterative solver. We note that a non-trivial concern is that the use of iterative solvers and matrix sketching tools implies that the normal equations at each iteration will be solved only approximately. In our proposed IPM, we develop a novel way to correct for the error induced by the approximate solution in order to guarantee convergence. Importantly, this correction step is relatively computationally light, unlike a similar step proposed in [39]. Third, we empirically show that our algorithm performs well in practice. We consider solving LPs that arise from `1-regularized SVMs and test them on a variety of synthetic and real datasets. Several extensions of our work are discussed in Appendix A.
1.1 Our contributions
Our point of departure in this work is the introduction of preconditioned, iterative solvers for solving eqn. (3). Preconditioning is used to address the ill-conditioning of the matrix AD2AT. Iterative solvers allow the computation of approximate solutions using only matrix-vector products while avoiding matrix inversion, Cholesky or LU factorizations, etc. A preconditioned formulation of eqn. (3) is:
Q−1AD2AT∆y = Q−1p, (4)
where Q ∈ Rm×m is the preconditioning matrix; Q should be easily invertible (see [3, 22] for background). An alternative yet equivalent formulation of eqn. (4), which is more amenable to
1Another widely used approach is to solve the augmented system [41] which is less relevant for this paper. 2The superscript k in eqn. (16) simply indicates iteration count and is omitted here for notational simplicity.
theoretical analysis, is
Q−1/2AD2ATQ−1/2z = Q−1/2p, (5)
where z ∈ Rm is a vector such that ∆y = Q−1/2z. Note that the matrix in the left-hand side of the above equation is always symmetric, which is not necessarily the case for eqn. (4). We do emphasize that one can use eqn. (4) in the actual implementation of the preconditioned solver; eqn. (5) is much more useful in theoretical analyses.
Recall that we focus on the special case where A ∈ Rm×n has m ≪ n, i.e., it is a short-and-fat matrix. Our first contribution starts with the design and analysis of a preconditioner for the Conjugate Gradient solver that satisfies, with high probability,
2/(2 + ζ) ≤ σ2min(Q−1/2AD) ≤ σ2max(Q−1/2AD) ≤ 2/(2− ζ), (6)
for some error parameter ζ ∈ [0, 1]. In the above, σmin(·) and σmax(·) correspond to the smallest and largest singular value of the matrix in parentheses. The above condition says that the preconditioner effectively reduces the condition number of AD to a constant. We note that the particular form of the lower and upper bounds in eqn. (6) was chosen to simplify our derivations. RLA matrix-sketching techniques allow us to construct preconditioners for all short-and-fat matrices that satisfy the above inequality and can be inverted efficiently. Such constructions go back to the work of [2]; see Section 2 for details on the construction of Q and its inverse. Importantly, given such a preconditioner, we then prove that the resulting CG iterative solver satisfies
‖Q−1/2AD2ATQ−1/2z̃t −Q−1/2p‖2 ≤ ζ^t ‖Q−1/2p‖2. (7)
Here z̃t is the approximate solution returned by the CG iterative solver after t iterations. In words, the above inequality states that the residual achieved after t iterations of the CG iterative solver drops exponentially fast. To the best of our knowledge, this result is not known in the CG literature: indeed, it is actually well-known that the residual of CG may oscillate [21], even in cases where the energy norm of the solution error decreases monotonically. However, we prove that if the preconditioner is sufficiently good, i.e., it satisfies the constraint of eqn. (6), then the residual decreases as well.
Our second contribution is the analysis of a novel variant of a long-step infeasible IPM algorithm proposed by [39]. Recall that such algorithms can, in general, start with an initial point that is not necessarily feasible, but does need to satisfy some, more relaxed, constraints. Following the lines of [56, 39], let S be the set of feasible and optimal solutions of the form (x∗,y∗, s∗) for the primal and dual problems of eqns. (1) and (2) and assume that S is not empty. Then, long-step infeasible IPMs can start with any initial point (x0,y0, s0) that satisfies (x0, s0) > 0 and (x0, s0) ≥ (x∗, s∗), for some feasible and optimal solution (x∗, s∗) ∈ S . In words, the starting primal and slack variables must be strictly positive and larger (element-wise) when compared to some feasible, optimal primal-dual solution. See Chapter 6 of [52] for a discussion regarding why such choices of starting points are relevant to computational practice and can be identified more efficiently than feasible points.
The flexibility of infeasible IPMs comes at a cost: long-step feasible IPMs converge in O(n log 1/ε) iterations, while long-step infeasible IPMs need O(n2 log 1/ε) iterations to converge [56, 39] (here ε is the accuracy of the approximate LP solution returned by the IPM; see Algorithm 2 for the exact definition). Let
Ax0 − b = r0p, (8)
ATy0 + s0 − c = r0d, (9)
where r0p ∈ Rm and r0d ∈ Rn are the primal and dual residuals, respectively, and characterize how far the initial point is from being feasible. As long-step infeasible IPM algorithms iterate and update the primal and dual solutions, the residuals are updated as well. Let rk = (rkp, rkd) ∈ Rn+m be the primal and dual residual at the k-th iteration: it is well-known that the convergence analysis of infeasible long-step IPMs critically depends on rk lying on the line segment between 0 and r0. Unfortunately, using approximate solvers (such as the CG solver proposed above) for the normal equations violates this invariant. [39] proposed a simple solution to fix this problem by adding a perturbation vector v to the current primal-dual solution that guarantees that the invariant is satisfied. Again, we use RLA matrix sketching principles to propose an efficient construction for v that provably satisfies the invariant. Next, we combine the above two primitives to prove that Algorithm 2 in Section 3 satisfies the following theorem.
Theorem 1 Let 0 ≤ ε ≤ 1 be an accuracy parameter. Consider the long-step infeasible IPM Algorithm 2 (Section 3) that solves eqn. (5) using the CG solver of Algorithm 1 (Section 2). Assume that the CG iterative solver runs with accuracy parameter ζ = 1/2 and iteration count t = O(log n). Then, with probability at least 0.9, the long-step infeasible IPM converges after O(n2 log 1/ε) iterations.
We note that the 0.9 success probability above is for simplicity of exposition and can be easily amplified using standard techniques. Also, at each iteration of our infeasible long-step IPM algorithm, the running time is O((nnz(A) +m3) log n), ignoring constant terms. See Section 3 for a detailed discussion of the overall running time.
Our empirical evaluation demonstrates that our algorithm requires an order of magnitude fewer inner CG iterations than a standard IPM using CG, while producing a comparably accurate solution (see Section 4).
1.2 Prior Work
There is a large body of literature on solving LPs using IPMs. We only review literature that is immediately relevant to our work. Recall that we solve the normal equations inexactly at each iteration, and develop a way to correct for the error incurred. We also focus on IPMs that can use a sufficiently positive, infeasible initial point (see Section 1.1). We discuss below two papers that present related ideas.
[39] proposed the use of an approximate iterative solver for eqn. (3), followed by a correction step to “fix” the approximate solution (see our discussion in Section 1.1). We propose efficient, RLA-based approaches to precondition and solve eqn. (3), as well as a novel approach to correct for the approximation error in order to guarantee the convergence of the IPM algorithm. Specifically, [39] propose to solve eqn. (3) using the so-called maximum weight basis preconditioner [46]. However, computing such a preconditioner needs access to a maximal linearly independent set of columns of AD in each iteration, which is costly, taking O(m2n) time in the worst-case. More importantly, while [38] was able to provide a bound on the condition number of the preconditioned matrix that depends only on properties of A, and is independent of D, this bound might, in general, be very large. In contrast, our bound is a constant and it does not depend on properties of A or its dimensions. In addition, [39] assumed a bound on the two-norm of the residual of the preconditioned system, but it is unclear how their preconditioner guarantees such a bound. Similar concerns exist for the construction of the correction vector v proposed by [39], which our work alleviates.
The line of research in the Theoretical Computer Science literature that is closest to our work is [15], who presented an IPM that uses an approximate solver in each iteration. However, their accuracy guarantee is in terms of the final objective value which is different from ours. More importantly, [15] focuses on short-step, feasible IPMs, whereas ours is long-step and does not require a feasible starting point. Finally, the approximate solver proposed by [15] works only for the special case of input matrices that correspond to graph Laplacians, following the lines of [47, 48].
We also note that in the Theoretical Computer Science literature, [26, 27, 28, 29, 30, 7, 12] proposed and analyzed theoretically ground-breaking algorithms for LPs based on novel tools such as the so-called inverse maintenance for accelerating the linear system solvers in IPMs. However, all these endeavors are primarily focused on the theoretically fast but practically inefficient short-step feasible IPMs and, to the best of our knowledge, no implementations of these approaches are available for comparisons to standard long-step IPMs. We highlight that our work is focused on infeasible long-step IPMs, known to work efficiently in practice.
Another relevant line of research is the work of [14], which proposed solving eqn. (3) using preconditioned Krylov subspace methods, including variants of generalized minimum residual (GMRES) or CG methods. Indeed, [14] conducted extensive numerical experiments on LP problems taken from standard benchmark libraries, but did not provide any theoretical guarantees.
From a matrix-sketching perspective, our work was also partially motivated by [8], which presented an iterative, sketching-based algorithm to solve under-constrained ridge regression problems, but did not address how to make use of such approaches in an IPM-based framework, as we do here. In another work, [1] proposed a similar sketching-based preconditioning technique. However, their efforts broadly revolved around speeding up and scaling kernel ridge regression. [43, 53] proposed the so-called Newton sketch to construct an approximate Hessian matrix for more general convex objective functions of which LP is a special case. Nevertheless, these randomized second-order
methods are significantly faster than the conventional approach only when the data matrix is over-constrained, i.e., m ≫ n. It is unclear whether the approach of [43, 53] is faster than IPMs when the optimization problem to be solved is linear. [49] proposed a probabilistic algorithm to solve LP approximately in a random projection-based reduced feature-space. A possible drawback of this paper is that the approximate solution is infeasible with respect to the original feasible region. Finally, we refer the interested reader to the surveys [51, 19, 33, 18, 24, 34] for more background on Randomized Linear Algebra.
1.3 Notation and Background
A,B, . . . denote matrices and a,b, . . . denote vectors. For vector a, ‖a‖2 denotes its Euclidean norm; for a matrix A, ‖A‖2 denotes its spectral norm and ‖A‖F denotes its Frobenius norm. We use 0 to denote a null vector or null matrix, dependent upon context, and 1 to denote the all-ones vector. For any matrix X ∈ Rm×n with m ≤ n and rank m, its thin Singular Value Decomposition (SVD) is the product UΣVT , with U ∈ Rm×m (the matrix of the left singular vectors), V ∈ Rn×m (the matrix of the top-m right singular vectors), and Σ ∈ Rm×m a diagonal matrix whose entries are equal to the singular values of X. We use σi(·) to denote the i-th singular value of the matrix in parentheses.
We now briefly discuss a result on matrix sketching [13, 11] that is particularly useful in our theoretical analyses. In our parlance, [13] proved that, for any matrix Z ∈ Rm×n, there exists a sketching matrix W ∈ Rn×w such that
‖ZWWTZT − ZZT‖2 ≤ ζ/4 · (‖Z‖22 + ‖Z‖2F/r) (10)
holds with probability at least 1− δ for any r ≥ 1. Here ζ ∈ [0, 1] is a (constant) accuracy parameter. Ignoring constant terms, w = O(r log(r/δ)); W has s = O(log(r/δ)) non-zero entries per row, with the s uniformly random entries chosen without replacement and set to ±1/√s independently; the product ZW can be computed in time O(log(r/δ) · nnz(Z)).
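For concreteness, a minimal sketch of such a sparse sketching matrix is given below; it follows the description above, but the helper name and the particular values of n, w, and s are our own hypothetical choices.

```python
import numpy as np
from scipy import sparse

def sparse_sketching_matrix(n, w, s, rng):
    """W in R^{n x w}: each row has s non-zeros, placed in uniformly random
    columns chosen without replacement and set to +/- 1/sqrt(s)."""
    rows = np.repeat(np.arange(n), s)
    cols = np.concatenate([rng.choice(w, size=s, replace=False) for _ in range(n)])
    vals = rng.choice([-1.0, 1.0], size=n * s) / np.sqrt(s)
    return sparse.csr_matrix((vals, (rows, cols)), shape=(n, w))

rng = np.random.default_rng(0)
W = sparse_sketching_matrix(n=1000, w=64, s=4, rng=rng)   # hypothetical sizes
print(W.shape, W.nnz)                                      # (1000, 64) 4000
```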
2 Conjugate Gradient Solver
In this section, we discuss the computation of the preconditioner Q (and its inverse), followed by a discussion on how such a preconditioner can be used to satisfy eqns. (6) and (7).
Algorithm 1 Solving eqn. (5) via CG
Input: AD ∈ Rm×n, p ∈ Rm, sketching matrix W ∈ Rn×w, iteration count t;
1: Compute ADW and its SVD: let UQ be the matrix of its left singular vectors and let Σ_Q^{1/2} be the matrix of its singular values;
2: Compute Q−1/2 = UQ Σ_Q^{−1/2} UQ^T;
3: Initialize z̃0 ← 0m and run standard CG on the preconditioned system of eqn. (5) for t iterations;
Output: z̃t;
Algorithm 1 takes as input the sketching matrix W ∈ Rn×w, which we construct as discussed in Section 1.3. Our preconditioner Q is equal to
Q = ADWWTDAT. (11)
Notice that we only need to compute Q−1/2 in order to use it to solve eqn. (5). Towards that end, we first compute the sketched matrix ADW ∈ Rm×w. Then, we compute the SVD of the matrix ADW: let UQ be the matrix of its left singular vectors and let Σ_Q^{1/2} be the matrix of its singular values. Notice that the left singular vectors of Q−1/2 are equal to UQ and its singular values are equal to Σ_Q^{−1/2}. Therefore, Q−1/2 = UQ Σ_Q^{−1/2} UQ^T.
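This computation translates almost directly into code. The following NumPy sketch (ours, not the authors' implementation) builds Q−1/2 from the SVD of the sketch ADW; the dense stand-in for W and the sizes below are hypothetical.

```python
import numpy as np

def preconditioner_inv_sqrt(AD, W):
    """Q^{-1/2} for Q = (A D W)(A D W)^T, computed from the SVD of the
    sketch A D W as in Algorithm 1: Q^{-1/2} = U_Q Sigma_Q^{-1/2} U_Q^T."""
    ADW = AD @ W                                 # m-by-w sketch
    U_Q, sigma, _ = np.linalg.svd(ADW, full_matrices=False)
    return (U_Q / sigma) @ U_Q.T                 # sigma holds diag(Sigma_Q^{1/2})

# Hypothetical sizes; W is kept dense here purely for brevity.
rng = np.random.default_rng(1)
m, n, w = 4, 200, 24
AD = rng.standard_normal((m, n))
W = rng.standard_normal((n, w)) / np.sqrt(w)     # stand-in for the sparse sketch
Q_inv_sqrt = preconditioner_inv_sqrt(AD, W)
print(np.linalg.cond(Q_inv_sqrt @ AD))           # O(1) when w is large enough
```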
Let AD = UΣVT be the thin SVD representation of AD. We apply the results of [13] (see Section 1.3) to the matrix Z = VT ∈ Rm×n with r = m to get that, with probability at least 1− δ,
‖VTWWTV − Im‖2 ≤ ζ/2. (12)
The running time needed to compute the sketch ADW is equal to (ignoring constant factors) O(nnz(A) · log(m/δ)). Note that nnz(AD) = nnz(A). The cost of computing the SVD of ADW (and therefore Q−1/2) is O(m3 log(m/δ)). Overall, computing Q−1/2 can be done in time
O(nnz(A) · log(m/δ) + m3 log(m/δ)). (13)
Given these results, we now discuss how to satisfy eqns. (6) and (7) using the sketching matrix W. We start with the following bound, which is relatively straight-forward given prior RLA work (see Appendix C.1 for a proof).
Lemma 2 If the sketching matrix W satisfies eqn. (12), then, for all i = 1 . . .m,
(1 + ζ/2)−1 ≤ σ2i(Q−1/2AD) ≤ (1− ζ/2)−1.
This lemma directly implies eqn. (6). We now proceed to show that the above construction for Q−1/2, when combined with the conjugate gradient solver to solve eqn. (5), indeed satisfies eqn. (7)3. We do note that in prior work most of the convergence guarantees for CG focus on the error of the approximate solution. However, in our work, we are interested in the convergence of the residuals and it is known that even if the energy norm of the error of the approximate solution decreases monotonically, the norms of the CG residuals may oscillate. Interestingly, we can combine a result on the residuals of CG from [6] with Lemma 2 to prove that in our setting the norms of the CG residuals also decrease monotonically (see Appendix C.2 for details).
We remark that one can consider using MINRES [42] instead of CG. Our result hinges on bounding the two-norm of the residual. MINRES finds, at each iteration, the optimal vector with respect to the two-norm of the residual inside the same Krylov subspace of CG for the corresponding iteration. Thus, the bound we prove for CG applies to MINRES as well.
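To illustrate how Algorithm 1 could be run end-to-end, here is a minimal textbook CG loop on the preconditioned system of eqn. (5). It is a sketch under the assumption that Q−1/2 has already been formed (e.g., as in the snippet above); it is not an optimized implementation.

```python
import numpy as np

def pcg_normal_equations(AD, p, Q_inv_sqrt, t):
    """t iterations of textbook CG on the preconditioned system (5),
    (Q^{-1/2} A D)(Q^{-1/2} A D)^T z = Q^{-1/2} p, using only
    matrix-vector products; returns Delta_y = Q^{-1/2} z_t."""
    B = Q_inv_sqrt @ AD                        # m-by-n
    matvec = lambda v: B @ (B.T @ v)           # applies Q^{-1/2} A D^2 A^T Q^{-1/2}
    b = Q_inv_sqrt @ p
    z = np.zeros_like(b)
    r = b.copy()                               # residual for the initial guess z = 0
    d = r.copy()
    for _ in range(t):
        Bd = matvec(d)
        alpha = (r @ r) / (d @ Bd)
        z = z + alpha * d
        r_new = r - alpha * Bd
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return Q_inv_sqrt @ z                      # Delta_y
```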
3 The Infeasible IPM algorithm
In order to avoid spurious solutions, primal-dual path-following IPMs bias the search direction towards the central path and restrict the iterates to a neighborhood of the central path. This search is controlled by the centering parameter σ ∈ [0, 1]. At each iteration, given the current solution (xk,yk, sk), a standard infeasible IPM obtains the search direction (∆xk,∆yk,∆sk) by solving the following system of linear equations:
AD2AT∆yk = pk , (14a)
∆sk = −rkd − AT∆yk , (14b)
∆xk = −xk + σµkS−11n − D2∆sk. (14c)
Here D and S are computed given the current iterate (xk and sk). After solving the above system, the infeasible IPM Algorithm 2 proceeds by computing a step-size ᾱ to return:
(xk+1,yk+1, sk+1) = (xk,yk, sk) + ᾱ(∆xk,∆yk,∆sk). (15)
Recall that rk = (rkp, rkd) is a vector with rkp = Axk − b and rkd = ATyk + sk − c (the primal and dual residuals). We also use the duality measure µk = (xk)Tsk/n and the vector
pk = −rkp − σµkAS−11n + Axk −AD2rkd. (16)
Given ∆yk from eqn. (14a), ∆sk and ∆xk are easy to compute from eqns. (14b) and (14c), as they only involve matrix-vector products. However, since we will use Algorithm 1 to solve eqn. (14a) approximately using the sketching-based preconditioned CG solver, the primal and dual residuals do not lie on the line segment between 0 and r0. This invalidates known proofs of convergence for infeasible IPMs.
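For intuition, the following sketch assembles one search direction from eqns. (14a)-(14c) and (16). It is our own schematic code, and solve_normal_eq is a placeholder for any (exact or approximate) normal-equations solver, such as the preconditioned CG sketch above.

```python
import numpy as np

def search_direction(A, b, c, x, y, s, sigma, solve_normal_eq):
    """One Newton step of the infeasible IPM: build p (eqn. (16)), obtain
    Delta_y from the normal equations (14a) via the supplied solver, then
    recover Delta_s and Delta_x from eqns. (14b)-(14c)."""
    n = A.shape[1]
    mu = (x @ s) / n
    d2 = x / s                                   # diagonal of D^2
    r_p = A @ x - b                              # primal residual
    r_d = A.T @ y + s - c                        # dual residual
    p = -r_p - sigma * mu * (A @ (1.0 / s)) + A @ x - (A * d2) @ r_d   # eqn. (16)
    AD = A * np.sqrt(d2)                         # the matrix A D
    dy = solve_normal_eq(AD, p)                  # eqn. (14a)
    ds = -r_d - A.T @ dy                         # eqn. (14b)
    dx = -x + sigma * mu / s - d2 * ds           # eqn. (14c)
    return dx, dy, ds
```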
For notational simplicity, we now drop the dependency of vectors and scalars on the iteration counter k. Let ∆̂y = Q−1/2z̃t be the approximate solution to eqn. (14a). In order to account for the loss of accuracy due to the approximate solver, we compute ∆̂x as follows:
∆̂x = −x + σµS−11n − D2∆̂s − S−1v. (17)
3See Chapter 9 of [32] for a detailed overview of CG.
Here v ∈ Rn is a perturbation vector that needs to exactly satisfy the following invariant at each iteration of the infeasible IPM:
AS−1v = AD2AT∆̂y − p . (18)
We note that the computation of ∆̂s is still done using eqn. (14b), which does not change. [39] argued that if v satisfies eqn. (18), the primal and dual residuals lie in the correct line segment.
Construction of v. There are many choices for v satisfying eqn. (18). A general choice is v = (AS−1)†(AD2AT∆̂y − p), which involves the computation of the pseudoinverse (AS−1)†, which is expensive, taking time O(m2n). Instead, we propose to construct v using the sketching matrix W of Section 1.3. More precisely, we construct the perturbation vector
v = (XS) 1/2W(ADW)†(AD2AT∆̂y − p). (19)
The following lemma proves that the proposed v satisfies eqn. (18); see Appendix C.3 for the proof.
Lemma 3 Let W ∈ Rn×w be the sketching matrix of Section 1.3 and v be the perturbation vector of eqn. (19). Then, with probability at least 1− δ, rank(ADW) = m and v satisfies eqn. (18).
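A minimal sketch of the construction in eqn. (19) is shown below; it treats W as a dense array and uses a dense pseudoinverse purely for clarity, which is not how one would implement it at scale.

```python
import numpy as np

def perturbation_vector(A, x, s, W, dy_hat, p):
    """Correction vector of eqn. (19):
    v = (X S)^{1/2} W (A D W)^+ (A D^2 A^T dy_hat - p),
    reusing the same sketching matrix W that built the preconditioner."""
    d2 = x / s
    AD = A * np.sqrt(d2)
    residual = (A * d2) @ (A.T @ dy_hat) - p     # A D^2 A^T dy_hat - p
    return np.sqrt(x * s) * (W @ (np.linalg.pinv(AD @ W) @ residual))
```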
We emphasize here that we will use the same exact sketching matrix W ∈ Rn×w to form the preconditioner used in the CG algorithm of Section 2 as well as the vector v in eqn.(19). This allows us to form the sketching matrix only once, thus saving time in practice. Next, we present a bound for the two-norm of the perturbation vector v of eqn. (19); see Appendix C.4 for the proof.
Lemma 4 With probability at least 1− δ, our perturbation vector v in Lemma 3 satisfies
‖v‖2 ≤ √(3nµ) · ‖f̃(t)‖2, (20)
with f̃ (t) = Q−1/2AD2ATQ−1/2z̃t −Q−1/2p.
Intuitively, the bound in eqn. (20) implies that ‖v‖2 depends on how close the approximate solution ∆̂y is to the exact solution. Lemma 4 is particularly useful in proving the convergence of Algorithm 2, which needs ‖v‖2 to be a small quantity. More precisely, combining a result from [39] with our preconditioner Q−1/2, we can prove that ‖Q−1/2p‖2 ≤ O(n)√µ. This bound allows us to prove that if we run Algorithm 1 for O(log n) iterations, then ‖f̃(t)‖2 ≤ (γσ/(4√n))√µ and ‖v‖2 ≤ (γσ/4)µ. The last two inequalities are critical in the convergence analysis of Algorithm 2; see Appendix F.1 and Appendix F.2 for details.
We are now ready to present the infeasible IPM algorithm. We will need the following definition for the neighborhood N(γ) = {(xk,yk, sk) : (xk, sk) > 0, xki ski ≥ (1− γ)µ and ‖rk‖2/‖r0‖2 ≤ µk/µ0}. Here γ ∈ (0, 1) and we note that the duality measure µk steadily decreases at each iteration.
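For concreteness, the membership test for N(γ) can be transcribed directly; the helper below is our own sketch, with r_norm and r0_norm standing for ‖rk‖2 and ‖r0‖2.

```python
import numpy as np

def in_neighborhood(x, s, r_norm, r0_norm, mu, mu0, gamma):
    """Direct transcription of the membership test for N(gamma)."""
    return (np.all(x > 0) and np.all(s > 0)
            and np.all(x * s >= (1.0 - gamma) * mu)
            and r_norm / r0_norm <= mu / mu0)
```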
Algorithm 2 Infeasible IPM
Input: A ∈ Rm×n, b ∈ Rm, c ∈ Rn, γ ∈ (0, 1), tolerance ε > 0, σ ∈ (0, 4/5);
Initialize: k ← 0; initial point (x0,y0, s0);
1: while µk > ε do
2: Compute sketching matrix W ∈ Rn×w (Section 1.3) with ζ = 1/2 and δ = O(n−2);
3: Compute rkp = Axk − b; rkd = ATyk + sk − c; and pk from eqn. (16);
4: Solve the linear system of eqn. (5) for z using Algorithm 1 with W from step (2) and t = O(log n); compute ∆̂y = Q−1/2z;
5: Compute v using eqn. (19) with W from step (2); ∆̂s using eqn. (14b); ∆̂x using eqn. (17);
6: Compute α̃ = argmax{α ∈ [0, 1] : (xk,yk, sk) + α(∆̂xk, ∆̂yk, ∆̂sk) ∈ N(γ)};
7: Compute ᾱ = argmin{(xk + α∆̂xk)T(sk + α∆̂sk) : α ∈ [0, α̃]};
8: Compute (xk+1,yk+1, sk+1) = (xk,yk, sk) + ᾱ(∆̂xk, ∆̂yk, ∆̂sk); set k ← k + 1;
9: end while
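Steps (6)-(7) involve two one-dimensional searches over the step size α. The sketch below approximates them by a simple grid search; it is a stand-in for the exact line searches, not the authors' implementation.

```python
import numpy as np

def choose_step_sizes(x, y, s, dx, dy, ds, A, b, c, r0_norm, mu0, gamma, grid=200):
    """Grid-search approximation of steps 6-7: alpha_tilde is the largest
    grid point keeping the new iterate in N(gamma); alpha_bar minimizes the
    duality gap over [0, alpha_tilde]."""
    n = x.size
    alphas = np.linspace(0.0, 1.0, grid + 1)

    def in_N(a):
        xn, yn, sn = x + a * dx, y + a * dy, s + a * ds
        mu = (xn @ sn) / n
        r = np.concatenate([A @ xn - b, A.T @ yn + sn - c])
        return (np.all(xn > 0) and np.all(sn > 0)
                and np.all(xn * sn >= (1.0 - gamma) * mu)
                and np.linalg.norm(r) / r0_norm <= mu / mu0)

    feasible = [a for a in alphas if in_N(a)]
    alpha_tilde = max(feasible) if feasible else 0.0
    cands = alphas[alphas <= alpha_tilde]
    gaps = [(x + a * dx) @ (s + a * ds) for a in cands]
    alpha_bar = float(cands[int(np.argmin(gaps))])
    return alpha_tilde, alpha_bar
```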
Running time of Algorithm 2. We start by discussing the running time to compute v. As discussed in Section 2, (ADW)† can be computed in O(nnz(A) · log(m/δ) +m3 log(m/δ)) time. Now, as
W has O(log(m/δ)) non-zero entries per row, pre-multiplying by W takes O(nnz(A) log(m/δ)) time (assuming nnz(A) ≥ n). Since X and S are diagonal matrices, computing v takes O(nnz(A) · log(m/δ)+m3 log(m/δ)) time, which is asymptotically the same as computing Q−1/2 (see eqn. (13)).
We now discuss the overall running time of Algorithm 2. At each iteration, with failure probability δ, the preconditioner Q−1/2 and the vector v can be computed in O(nnz(A) · log(m/δ) + m3 log(m/δ)) time. In addition, for t = O(log n) iterations of Algorithm 1, all the matrix-vector products in the CG solver can be computed in O(nnz(A) · log n) time. Therefore, the computational time for steps (2)-(5) is given by O(nnz(A) · (log n+ log(m/δ)) +m3 log(m/δ)). Finally, taking a union bound over all iterations with δ = O(n−2) (ignoring constant factors), Algorithm 2 converges with probability at least 0.9. The running time at each iteration is given by O((nnz(A) +m3) log n).
4 Experiments
We demonstrate the empirical performance of our algorithm on a variety of synthetic and real-world datasets from the UCI ML Repository [20], such as ARCENE, DEXTER [23], DrivFace [16], and a gene expression cancer RNA-Sequencing dataset that is part of the PANCAN dataset [50]. See Appendix G, Table 1 for a description of the datasets. We observed that the results for both synthetic (Appendix G.2) and real-world data were qualitatively similar; we highlight results on representative real datasets. The experiments were implemented in Python and run on a server with an Intel E5-2623 v3 @ 3.0GHz (8 cores) and 64GB RAM. As an application, we consider `1-regularized SVMs: all of the datasets are concerned with binary classification with m ≪ n, where n is the number of features. In Appendix G.1, we describe the `1-SVM problem and how it can be formulated as an LP. Here, m is the number of training points, n is the feature dimension, and the size of the constraint matrix in the LP becomes m× (2n+ 1).
Experimental Results. We compare our Algorithm 2 with a standard IPM (see Chapter 10, [44]) using CG and a standard IPM using a direct solver. We also use CVXPY as a benchmark to compare the accuracy of the solutions; we define the relative error ‖x̂−x?‖2/‖x?‖2, where x̂ is our solution and x? is the solution generated by CVXPY. We also consider the number of outer iterations, namely the number of iterations of the IPM algorithm, as well as the number of inner iterations, namely the number of iterations of the CG solver. We denote the relative stopping tolerance for CG by tolCG and we denote the outer iteration residual by τ . If not specified: τ = 10−9, tolCG = 10−5, and σ = 0.5. We evaluated a Gaussian sketching matrix and the initial triplet (x,y, s) for all IPM algorithms was set to be all ones.
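The relative-error benchmark can be set up in a few lines; the snippet below is a minimal sketch assuming CVXPY's default solver, not the exact experimental harness used for the reported numbers.

```python
import cvxpy as cp
import numpy as np

def cvxpy_reference_solution(A, b, c):
    """Reference LP solution x* from CVXPY, used only to measure the
    relative error ||x_hat - x*||_2 / ||x*||_2 of a candidate solution."""
    x = cp.Variable(A.shape[1], nonneg=True)
    cp.Problem(cp.Minimize(c @ x), [A @ x == b]).solve()
    return x.value

def relative_error(x_hat, x_star):
    return np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star)
```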
Figure 1(a) shows that our Algorithm 2 uses an order of magnitude fewer inner iterations than the un-preconditioned standard solver. This is due to the improved conditioning of the respective matrices in the normal equations, as demonstrated in Figure 1(b). Across various real and synthetic data sets, the results were qualitatively similar to those shown in Figure 1. Results for several real data sets are summarized in Appendix G, Table 1. The number of outer iterations is unaffected by our internal approximation methods and is generally the same for our Algorithm 2, the standard IPM with CG, and the standard IPM with a direct linear solver (denoted IPM w/Dir), as seen in Appendix G, Table 1. Figure 1 also demonstrates the relative insensitivity to the choice of w (the sketching dimension, i.e., the number of columns of the sketching matrix W of Section 1.3). For smaller values of w, our algorithm requires more inner iterations. However, across various choices of w, the number of inner iterations is always an order of magnitude smaller than the number required by the standard solver.
Figures 1(c)-1(d) show the performance of our algorithm for a range of (w, tolCG) pairs. Figure 1(c) demonstrates that the number of the inner iterations is robust to the choice of tolCG and w. The number of inner iterations varies between 15 and 35 for the ARCENE data set, while the standard IPM took on the order of 1, 000 iterations across all parameter settings. Across all settings, the relative error was fixed at 0.04%. In general, our sketched IPM is able to produce an extremely high accuracy solution across parameter settings. Thus we do not report additional numerical results for the relative error, which was consistently 10−3 or less. Figure 1(d) demonstrates a tradeoff of our approach: as both tolCG and w are increased, the condition number κ(Q−1/2AD2ATQ−1/2) decreases, corresponding to better conditioned systems. As a result, fewer inner iterations are required. Additional experiments can be found in Appendix G.4.
5 Conclusions
We proposed and analyzed an infeasible IPM algorithm using a preconditioned conjugate gradient solver for the normal equations and a novel perturbation vector to correct for the error due to the approximate solver. Thus, we speed up each iteration of the IPM algorithm, without increasing the overall number of iterations. We demonstrate empirically that our IPM requires an order of magnitude fewer inner iterations within each linear solve than standard IPMs. Several extensions of our work are discussed in Appendix A.
Broader Impact
Our work is focused on speeding up algorithms for tall/wide LPs. As such, it could have significant broader impacts by allowing users to solve increasingly larger LPs in the numerous settings discussed in our introduction. While applications of our work to real data could result into ethical considerations, this is an indirect (and unpredictable) side-effect of our work. Our experimental work uses publicly available datasets to evaluate the performance of our algorithms; no ethical considerations are raised.
Acknowledgements. We thank the anonymous reviewers for their helpful comments. AC and PD were partially supported by NSF FRG 1760353 and NSF CCF-BSF 1814041. HA was partially supported by BSF grant 2017698. PL was supported by an Amazon Graduate Fellowship in Artificial Intelligence. | 1. What is the main contribution of the paper regarding linear programming?
2. How does the proposed approach differ from previous methods in terms of efficiency and accuracy?
3. How does the paper handle errors in approximate linear system solves?
4. How does the method proposed in the paper compare to other approaches in terms of practicality and runtime efficiency?
5. Are there any limitations or potential drawbacks to the method proposed in the paper? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper considers the problem of solving linear programs with infeasible-start long-step interior point methods (IPMs), a prominent class of linear programming methods. The paper provides a method for approximately implementing the steps of such an IPM, i.e. approximately solving a linear system, in nearly linear time when the constraint matrix is sufficiently tall (/ wide) (depending on whether the primal or dual is considered). Further, the paper shows how to efficiently and simply modify the steps of the infeasible start IPM to account for the error induced by the approximate solver so that the overall number of iterations required by the interior point method is not impaired. The linear system solver used is a combination of previous subspace embedding techniques from randomized linear algebra and the conjugate gradient (CG) method and along the way the paper proves convergence results of CG that may be of further interest. Finally, the paper provides experiments which corroborate the theoretical claims of the paper. UPDATE AFTER AUTHOR FEEDBACK AND DISCUSSION: Thank you to the authors for their thoughtful response. After reading the response and further discussion, my core view of the paper is similar to what I wrote in the review. I think this is a nice result on the theory of infeasible-start primal-dual interior point methods that would be great to publish in some venue. However, for the reasons raised the ultimate novelty of the techniques used and how surprising the result is, is less clear. Further, that this paper does not appear to be improving the best theoretical complexity or best practical runtimes of linear programming, makes it difficult to raise the score.
Strengths
This paper cleanly advances the theory for infeasible-start long-step IPMs, a prominent class of IPMs, for solving tall linear programs (a prominent class of linear programs that can arise in certain ML-applications when there is an abundance of data). The paper provides a natural efficient preconditioning technique for solving the linear systems that arise when implementing the method, a simple way for handling the errors the system induces, and analysis of the performance of the resulting methods. Consequently, this paper improves upon the theory of infeasible start IPM by providing new techniques for leveraging approximate linear system solvers (previous methods were more computationally expensive) and efficiently implement them. The paper analyzes a natural linear system solving technique, combining subspace embeddings (a known powerful matrix sketching result) with conjugate gradient (a known powerful linear system solver), to achieve this result. It is also possibly of further interest how the paper combines the preconditioning technique with its technique for modifying the steps to handle error in infeasible start IPMs. Ultimately, this paper could promote further research on infeasible-start long-step IPMs and lead to faster algorithms for solving certain large scale problems.
Weaknesses
This paper provides a nice advance in the theory of infeasible-start long-step IPMs, however the novelty of the approach taken and the relation of the work in the paper to prior work could use further clarity. First, solving regression problems in an A in nearly linear time, when A has many more rows than columns has been the subject of a line of research, e.g. [8], [38], and “Iterative Row Sampling.” These results, including ones based on the subspace embedding result used in this paper, readily extend to solving linear systems in A^T A and this has been used by the Theoretical Computer Science papers mentioned for implementing short step IPMs. Consequently, I think it would have been beneficial to state earlier that the paper is using the known linear system solving machinery of subspace embeddings to build preconditioners (rather than just saying that “Randomized Linear Algebra” is used) and put this in the context of prior work. There may be novelty in the particular way in which the paper is using conjugate gradient and subspace embeddings, however the paper would be strengthened if it articulated how this is different than this previous literature; as the appendix points out, conjugate gradient can be replaced with other iterative methods which possibly puts the approach considered closer to the ones from the literature. In light of the previous paragraph, I think more of the novelty in the paper may lie in exactly how they handle the error from approximate linear system solves in a way sensitive to the design of the preconditioner. However, here I think it should be noted that prior IPM results in related spaces, e.g. short-step methods or dual methods, have studied the effect of such error. Further, from this literature that the guarantees of the long-step IPM can be preserved if systems are solved to inverse polynomial accuracy in the right norm seems reasonable. Consequently, while it is nice that the paper handles the error in such a clean way, the novelty of this approach in light of the previous work should be commented on. (The paper does touch upon why error analysis for infeasible-start long-step primal-dual methods may be more difficult than in other cases, but explaining the non-applicability of other approaches would be beneficial). Further, the paper motivates its study of infeasible-start long-step IPMs (over perhaps feasible start IPMs or short-step methods) due to its practicality and mentions how certain theoretical results on implementing steps efficiently (i.e. “inverse maintenance”) are not used in practice. Consequently, the paper would strengthened if it could argue that the method proposed achieved faster end-to-end runtimes for solving linear programs. However, while the empirical section does provide iteration bounds which corroborates its theoretical findings, it doesn’t give full runtime bounds. This is perhaps problematic as the method proposed requires computing an SVD of a matrix which could cause large runtime issues. Also, the paper mentions that [13] which provided substantial empirical experiments on using preconditioned Krylov methods, but doesn’t compare the details in a way to see if this paper is making an improvement. Consequently, ultimately the paper doesn’t seem to justify that it improves either the theory or practice for linear programming, though it does improve the theory for a prominent class of practical methods. 
Lastly (and perhaps more minor) there are theoretical details not considered in this work that are in others in the area and it would be beneficial to comment on. In particular, it is known that computing SVDs and applying conjugate gradient can cause numerical stability issues. Therefore, while the paper is improving the theoretical analysis of infeasible start IPMs in some ways, without discussing this the methods performance when precision is taken into account it, in theory under certain computational assumptions it may be worse. The paper would be strengthened by discussing this a little. |
NIPS | Title
Faster Randomized Infeasible Interior Point Methods for Tall/Wide Linear Programs
Abstract
Linear programming (LP) is used in many machine learning applications, such as `1-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are one of the most popular methods to solve LPs both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider infeasible IPMs for the special case where the number of variables is much larger than the number of constraints (i.e., wide), or vice-versa (i.e., tall) by taking the dual. Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with the Conjugate Gradient iterative solver, provably guarantees that infeasible IPM algorithms (suitably modified to account for the error incurred by the approximate solver), converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real and synthetic data.
1 Introduction
Linear programming (LP) is one of the most useful tools available to theoreticians and practitioners throughout science and engineering. In Machine Learning, LP appears in numerous settings, including `1-regularized SVMs [57], basis pursuit (BP) [54], sparse inverse covariance matrix estimation (SICE) [55], the nonnegative matrix factorization (NMF) [45], MAP inference [37], etc. Not surprisingly, designing and analyzing LP algorithms is a topic of paramount importance in computer science and applied mathematics.
One of the most successful paradigms for solving LPs is the family of Interior Point Methods (IPMs), pioneered by Karmarkar in the mid 1980s [25]. Path-following IPMs and, in particular, long-step path following IPMs, are among the most practical approaches for solving linear programs. Consider the standard form of the primal LP problem:
min cTx , subject to Ax = b , x ≥ 0 , (1)
where A ∈ Rm×n, b ∈ Rm, and c ∈ Rn are the inputs, and x ∈ Rn is the vector of the primal variables. The associated dual problem is
max bTy , subject to ATy + s = c , s ≥ 0 , (2)
where y ∈ Rm and s ∈ Rn are the vectors of the dual and slack variables respectively. Triplets (x,y, s) that uphold both (1) and (2) are called primal-dual solutions. Path-following IPMs typically converge towards a primal-dual solution by operating as follows: given the current iterate (xk,yk, sk), they compute the Newton search direction (∆x,∆y,∆s) and update the current iterate by following a step towards the search direction. To compute the search direction, one standard approach [41] involves solving the normal equations1:
AD2AT∆y = p. (3)
Here, D = X1/2S−1/2 is a diagonal matrix, X,S ∈ Rn×n are diagonal matrices whose i-th diagonal entries are equal to xi and si, respectively, and p ∈ Rm is a vector whose exact definition is given in eqn. (16)2. Given ∆y, computing ∆s and ∆x only involves matrix-vector products.
The core computational bottleneck in IPMs is the need to solve the linear system of eqn. (3) at each iteration. This leads to two key challenges: first, for high-dimensional matrices A, solving the linear system is computationally prohibitive. Most implementations of IPMs use a direct solver; see Chapter 6 of [41]. However, if AD2AT is large and dense, direct solvers are computationally impractical. If AD2AT is sparse, specialized direct solvers have been developed, but these do not apply to many LP problems arising in machine learning applications due to irregular sparsity patterns. Second, an alternative to direct solvers is the use of iterative solvers, but the situation is further complicated since AD2AT is typically ill-conditioned. Indeed, as IPM algorithms approach the optimal primal-dual solution, the diagonal matrix D is ill-conditioned, which also results in the matrix AD2AT being ill-conditioned. Additionally, using approximate solutions for the linear system of eqn. (3) causes certain invariants, which are crucial for guaranteeing the convergence of IPMs, to be violated; see Section 1.1 for details.
In this paper, we address the aforementioned challenges, for the special case where m ≪ n, i.e., the number of constraints is much smaller than the number of variables; see Appendix A for a generalization. This is a common setting in ML applications of LP solvers, since `1-SVMs and basis pursuit problems often exhibit such structure when the number of available features (n) is larger than the number of objects (m). This setting has been of interest in recent work on LPs [17, 4, 31]. For simplicity of exposition, we also assume that the constraint matrix A has full rank, equal to m. First, we propose and analyze a preconditioned Conjugate Gradient (CG) iterative solver for the normal equations of eqn. (3), using matrix sketching constructions from the Randomized Linear Algebra (RLA) literature. We develop a preconditioner for AD2AT using matrix sketching which allows us to prove strong convergence guarantees for the residual of CG solvers. Second, building upon the work of [39], we propose and analyze a provably accurate long-step infeasible IPM algorithm. The proposed IPM solves the normal equations using iterative solvers. In this paper, for brevity and clarity, we primarily focus our description and analysis on the CG iterative solver. We note that a non-trivial concern is that the use of iterative solvers and matrix sketching tools implies that the normal equations at each iteration will be solved only approximately. In our proposed IPM, we develop a novel way to correct for the error induced by the approximate solution in order to guarantee convergence. Importantly, this correction step is relatively computationally light, unlike a similar step proposed in [39]. Third, we empirically show that our algorithm performs well in practice. We consider solving LPs that arise from `1-regularized SVMs and test them on a variety of synthetic and real datasets. Several extensions of our work are discussed in Appendix A.
1.1 Our contributions
Our point of departure in this work is the introduction of preconditioned, iterative solvers for solving eqn. (3). Preconditioning is used to address the ill-conditioning of the matrix AD2AT. Iterative solvers allow the computation of approximate solutions using only matrix-vector products while avoiding matrix inversion, Cholesky or LU factorizations, etc. A preconditioned formulation of eqn. (3) is:
Q−1AD2AT∆y = Q−1p, (4)
where Q ∈ Rm×m is the preconditioning matrix; Q should be easily invertible (see [3, 22] for background). An alternative yet equivalent formulation of eqn. (4), which is more amenable to
1Another widely used approach is to solve the augmented system [41] which is less relevant for this paper. 2The superscript k in eqn. (16) simply indicates iteration count and is omitted here for notational simplicity.
theoretical analysis, is
Q−1/2AD2ATQ−1/2z = Q−1/2p, (5)
where z ∈ Rm is a vector such that ∆y = Q−1/2z. Note that the matrix in the left-hand side of the above equation is always symmetric, which is not necessarily the case for eqn. (4). We do emphasize that one can use eqn. (4) in the actual implementation of the preconditioned solver; eqn. (5) is much more useful in theoretical analyses.
Recall that we focus on the special case where A ∈ Rm×n has m ≪ n, i.e., it is a short-and-fat matrix. Our first contribution starts with the design and analysis of a preconditioner for the Conjugate Gradient solver that satisfies, with high probability,
2/(2 + ζ) ≤ σ2min(Q−1/2AD) ≤ σ2max(Q−1/2AD) ≤ 2/(2− ζ), (6)
for some error parameter ζ ∈ [0, 1]. In the above, σmin(·) and σmax(·) correspond to the smallest and largest singular value of the matrix in parentheses. The above condition says that the preconditioner effectively reduces the condition number of AD to a constant. We note that the particular form of the lower and upper bounds in eqn. (6) was chosen to simplify our derivations. RLA matrix-sketching techniques allow us to construct preconditioners for all short-and-fat matrices that satisfy the above inequality and can be inverted efficiently. Such constructions go back to the work of [2]; see Section 2 for details on the construction of Q and its inverse. Importantly, given such a preconditioner, we then prove that the resulting CG iterative solver satisfies
‖Q−1/2AD2ATQ−1/2z̃t −Q−1/2p‖2 ≤ ζ^t ‖Q−1/2p‖2. (7)
Here z̃t is the approximate solution returned by the CG iterative solver after t iterations. In words, the above inequality states that the residual achieved after t iterations of the CG iterative solver drops exponentially fast. To the best of our knowledge, this result is not known in the CG literature: indeed, it is actually well-known that the residual of CG may oscillate [21], even in cases where the energy norm of the solution error decreases monotonically. However, we prove that if the preconditioner is sufficiently good, i.e., it satisfies the constraint of eqn. (6), then the residual decreases as well.
Our second contribution is the analysis of a novel variant of a long-step infeasible IPM algorithm proposed by [39]. Recall that such algorithms can, in general, start with an initial point that is not necessarily feasible, but does need to satisfy some, more relaxed, constraints. Following the lines of [56, 39], let S be the set of feasible and optimal solutions of the form (x∗,y∗, s∗) for the primal and dual problems of eqns. (1) and (2) and assume that S is not empty. Then, long-step infeasible IPMs can start with any initial point (x0,y0, s0) that satisfies (x0, s0) > 0 and (x0, s0) ≥ (x∗, s∗), for some feasible and optimal solution (x∗, s∗) ∈ S . In words, the starting primal and slack variables must be strictly positive and larger (element-wise) when compared to some feasible, optimal primal-dual solution. See Chapter 6 of [52] for a discussion regarding why such choices of starting points are relevant to computational practice and can be identified more efficiently than feasible points.
The flexibility of infeasible IPMs comes at a cost: long-step feasible IPMs converge in O(n log 1/ε) iterations, while long-step infeasible IPMs need O(n2 log 1/ε) iterations to converge [56, 39] (here ε is the accuracy of the approximate LP solution returned by the IPM; see Algorithm 2 for the exact definition). Let
Ax0 − b = r0p, (8)
ATy0 + s0 − c = r0d, (9)
where r0p ∈ Rm and r0d ∈ Rn are the primal and dual residuals, respectively, and characterize how far the initial point is from being feasible. As long-step infeasible IPM algorithms iterate and update the primal and dual solutions, the residuals are updated as well. Let rk = (rkp, rkd) ∈ Rn+m be the primal and dual residual at the k-th iteration: it is well-known that the convergence analysis of infeasible long-step IPMs critically depends on rk lying on the line segment between 0 and r0. Unfortunately, using approximate solvers (such as the CG solver proposed above) for the normal equations violates this invariant. [39] proposed a simple solution to fix this problem by adding a perturbation vector v to the current primal-dual solution that guarantees that the invariant is satisfied. Again, we use RLA matrix sketching principles to propose an efficient construction for v that provably satisfies the invariant. Next, we combine the above two primitives to prove that Algorithm 2 in Section 3 satisfies the following theorem.
Theorem 1 Let 0 ≤ ε ≤ 1 be an accuracy parameter. Consider the long-step infeasible IPM Algorithm 2 (Section 3) that solves eqn. (5) using the CG solver of Algorithm 1 (Section 2). Assume that the CG iterative solver runs with accuracy parameter ζ = 1/2 and iteration count t = O(log n). Then, with probability at least 0.9, the long-step infeasible IPM converges after O(n2 log 1/ε) iterations.
We note that the 0.9 success probability above is for simplicity of exposition and can be easily amplified using standard techniques. Also, at each iteration of our infeasible long-step IPM algorithm, the running time is O((nnz(A) +m3) log n), ignoring constant terms. See Section 3 for a detailed discussion of the overall running time.
Our empirical evaluation demonstrates that our algorithm requires an order of magnitude fewer inner CG iterations than a standard IPM using CG, while producing a comparably accurate solution (see Section 4).
1.2 Prior Work
There is a large body of literature on solving LPs using IPMs. We only review literature that is immediately relevant to our work. Recall that we solve the normal equations inexactly at each iteration, and develop a way to correct for the error incurred. We also focus on IPMs that can use a sufficiently positive, infeasible initial point (see Section 1.1). We discuss below two papers that present related ideas.
[39] proposed the use of an approximate iterative solver for eqn. (3), followed by a correction step to “fix” the approximate solution (see our discussion in Section 1.1). We propose efficient, RLA-based approaches to precondition and solve eqn. (3), as well as a novel approach to correct for the approximation error in order to guarantee the convergence of the IPM algorithm. Specifically, [39] propose to solve eqn. (3) using the so-called maximum weight basis preconditioner [46]. However, computing such a preconditioner needs access to a maximal linearly independent set of columns of AD in each iteration, which is costly, taking O(m2n) time in the worst-case. More importantly, while [38] was able to provide a bound on the condition number of the preconditioned matrix that depends only on properties of A, and is independent of D, this bound might, in general, be very large. In contrast, our bound is a constant and it does not depend on properties of A or its dimensions. In addition, [39] assumed a bound on the two-norm of the residual of the preconditioned system, but it is unclear how their preconditioner guarantees such a bound. Similar concerns exist for the construction of the correction vector v proposed by [39], which our work alleviates.
The line of research in the Theoretical Computer Science literature that is closest to our work is [15], who presented an IPM that uses an approximate solver in each iteration. However, their accuracy guarantee is in terms of the final objective value which is different from ours. More importantly, [15] focuses on short-step, feasible IPMs, whereas ours is long-step and does not require a feasible starting point. Finally, the approximate solver proposed by [15] works only for the special case of input matrices that correspond to graph Laplacians, following the lines of [47, 48].
We also note that in the Theoretical Computer Science literature, [26, 27, 28, 29, 30, 7, 12] proposed and analyzed theoretically ground-breaking algorithms for LPs based on novel tools such as the so-called inverse maintenance for accelerating the linear system solvers in IPMs. However, all these endeavors are primarily focused on the theoretically fast but practically inefficient short-step feasible IPMs and, to the best of our knowledge, no implementations of these approaches are available for comparisons to standard long-step IPMs. We highlight that our work is focused on infeasible long-step IPMs, known to work efficiently in practice.
Another relevant line of research is the work of [14], which proposed solving eqn. (3) using preconditioned Krylov subspace methods, including variants of generalized minimum residual (GMRES) or CG methods. Indeed, [14] conducted extensive numerical experiments on LP problems taken from standard benchmark libraries, but did not provide any theoretical guarantees.
From a matrix-sketching perspective, our work was also partially motivated by [8], which presented an iterative, sketching-based algorithm to solve under-constrained ridge regression problems, but did not address how to make use of such approaches in an IPM-based framework, as we do here. In another work, [1] proposed a similar sketching-based preconditioning technique. However, their efforts broadly revolved around speeding up and scaling kernel ridge regression. [43, 53] proposed the so-called Newton sketch to construct an approximate Hessian matrix for more general convex objective functions of which LP is a special case. Nevertheless, these randomized second-order
methods are significantly faster than the conventional approach only when the data matrix is over-constrained, i.e., m ≫ n. It is unclear whether the approach of [43, 53] is faster than IPMs when the optimization problem to be solved is linear. [49] proposed a probabilistic algorithm to solve LP approximately in a random projection-based reduced feature-space. A possible drawback of this paper is that the approximate solution is infeasible with respect to the original feasible region. Finally, we refer the interested reader to the surveys [51, 19, 33, 18, 24, 34] for more background on Randomized Linear Algebra.
1.3 Notation and Background
A,B, . . . denote matrices and a,b, . . . denote vectors. For vector a, ‖a‖2 denotes its Euclidean norm; for a matrix A, ‖A‖2 denotes its spectral norm and ‖A‖F denotes its Frobenius norm. We use 0 to denote a null vector or null matrix, dependent upon context, and 1 to denote the all-ones vector. For any matrix X ∈ Rm×n with m ≤ n and rank m, its thin Singular Value Decomposition (SVD) is the product UΣVT , with U ∈ Rm×m (the matrix of the left singular vectors), V ∈ Rn×m (the matrix of the top-m right singular vectors), and Σ ∈ Rm×m a diagonal matrix whose entries are equal to the singular values of X. We use σi(·) to denote the i-th singular value of the matrix in parentheses.
We now briefly discuss a result on matrix sketching [13, 11] that is particularly useful in our theoretical analyses. In our parlance, [13] proved that, for any matrix Z ∈ Rm×n, there exists a sketching matrix W ∈ Rn×w such that
‖ZWWTZT − ZZT‖2 ≤ ζ/4 · (‖Z‖22 + ‖Z‖2F/r) (10)
holds with probability at least 1− δ for any r ≥ 1. Here ζ ∈ [0, 1] is a (constant) accuracy parameter. Ignoring constant terms, w = O(r log(r/δ)); W has s = O(log(r/δ)) non-zero entries per row, with the s uniformly random entries chosen without replacement and set to ±1/√s independently; the product ZW can be computed in time O(log(r/δ) · nnz(Z)).
2 Conjugate Gradient Solver
In this section, we discuss the computation of the preconditioner Q (and its inverse), followed by a discussion on how such a preconditioner can be used to satisfy eqns. (6) and (7).
Algorithm 1 Solving eqn. (5) via CG
Input: AD ∈ Rm×n, p ∈ Rm, sketching matrix W ∈ Rn×w, iteration count t;
1: Compute ADW and its SVD: let UQ be the matrix of its left singular vectors and let Σ_Q^{1/2} be the matrix of its singular values;
2: Compute Q−1/2 = UQ Σ_Q^{−1/2} UQ^T;
3: Initialize z̃0 ← 0m and run standard CG on the preconditioned system of eqn. (5) for t iterations;
Output: z̃t;
Algorithm 1 takes as input the sketching matrix W ∈ Rn×w, which we construct as discussed in Section 1.3. Our preconditioner Q is equal to
Q = ADWWTDAT. (11)
Notice that we only need to compute Q−1/2 in order to use it to solve eqn. (5). Towards that end, we first compute the sketched matrix ADW ∈ Rm×w. Then, we compute the SVD of the matrix ADW: let UQ be the matrix of its left singular vectors and let Σ_Q^{1/2} be the matrix of its singular values. Notice that the left singular vectors of Q−1/2 are equal to UQ and its singular values are equal to Σ_Q^{−1/2}. Therefore, Q−1/2 = UQ Σ_Q^{−1/2} UQ^T.
Let AD = UΣVT be the thin SVD representation of AD. We apply the results of [13] (see Section 1.3) to the matrix Z = VT ∈ Rm×n with r = m to get that, with probability at least 1− δ,
‖VTWWTV − Im‖2 ≤ ζ/2. (12)
The running time needed to compute the sketch ADW is equal to (ignoring constant factors) O(nnz(A) · log(m/δ)). Note that nnz(AD) = nnz(A). The cost of computing the SVD of ADW (and therefore Q−1/2) is O(m3 log(m/δ)). Overall, computing Q−1/2 can be done in time
O(nnz(A) · log(m/δ) + m3 log(m/δ)). (13)
Given these results, we now discuss how to satisfy eqns. (6) and (7) using the sketching matrix W. We start with the following bound, which is relatively straight-forward given prior RLA work (see Appendix C.1 for a proof).
Lemma 2 If the sketching matrix W satisfies eqn. (12), then, for all i = 1 . . .m,
(1 + ζ/2)−1 ≤ σ2i(Q−1/2AD) ≤ (1− ζ/2)−1.
This lemma directly implies eqn. (6). We now proceed to show that the above construction for Q−1/2, when combined with the conjugate gradient solver to solve eqn. (5), indeed satisfies eqn. (7)3. We do note that in prior work most of the convergence guarantees for CG focus on the error of the approximate solution. However, in our work, we are interested in the convergence of the residuals and it is known that even if the energy norm of the error of the approximate solution decreases monotonically, the norms of the CG residuals may oscillate. Interestingly, we can combine a result on the residuals of CG from [6] with Lemma 2 to prove that in our setting the norms of the CG residuals also decrease monotonically (see Appendix C.2 for details).
We remark that one can consider using MINRES [42] instead of CG. Our result hinges on bounding the two-norm of the residual. MINRES finds, at each iteration, the optimal vector with respect to the two-norm of the residual inside the same Krylov subspace of CG for the corresponding iteration. Thus, the bound we prove for CG applies to MINRES as well.
3 The Infeasible IPM algorithm
In order to avoid spurious solutions, primal-dual path-following IPMs bias the search direction towards the central path and restrict the iterates to a neighborhood of the central path. This search is controlled by the centering parameter σ ∈ [0, 1]. At each iteration, given the current solution (xk,yk, sk), a standard infeasible IPM obtains the search direction (∆xk,∆yk,∆sk) by solving the following system of linear equations:
AD2AT∆yk = pk , (14a)
∆sk = −rkd − AT∆yk , (14b)
∆xk = −xk + σµkS−11n − D2∆sk. (14c)
Here D and S are computed given the current iterate (xk and sk). After solving the above system, the infeasible IPM Algorithm 2 proceeds by computing a step-size ᾱ to return:
(xk+1,yk+1, sk+1) = (xk,yk, sk) + ᾱ(∆xk,∆yk,∆sk). (15)
Recall that r^k = (r_p^k, r_d^k) is a vector with r_p^k = Ax^k − b and r_d^k = A^T y^k + s^k − c (the primal and dual residuals). We also use the duality measure µ_k = (x^k)^T s^k / n and the vector
p^k = −r_p^k − σ µ_k A S^{-1} 1_n + A x^k − A D^2 r_d^k.   (16)
Given ∆y^k from eqn. (14a), ∆s^k and ∆x^k are easy to compute from eqns. (14b) and (14c), as they only involve matrix-vector products. However, since we will use Algorithm 1 to solve eqn. (14a) approximately using the sketching-based preconditioned CG solver, the primal and dual residuals do not lie on the line segment between 0 and r^0. This invalidates known proofs of convergence for infeasible IPMs.
For notational simplicity, we now drop the dependency of vectors and scalars on the iteration counter k. Let ∆̂y = Q^{-1/2} z̃^t be the approximate solution to eqn. (14a). In order to account for the loss of accuracy due to the approximate solver, we compute ∆̂x as follows:
∆̂x = −x + σ µ S^{-1} 1_n − D^2 ∆̂s − S^{-1} v.   (17)
³See Chapter 9 of [32] for a detailed overview of CG.
Here v ∈ R^n is a perturbation vector that needs to exactly satisfy the following invariant at each iteration of the infeasible IPM:
A S^{-1} v = A D^2 A^T ∆̂y − p.   (18)
We note that the computation of ∆̂s is still done using eqn. (14b), which does not change. [39] argued that if v satisfies eqn. (18), then the primal and dual residuals lie on the correct line segment.
Construction of v. There are many choices for v satisfying eqn. (18). A general choice is v = (AS^{-1})^†(AD^2 A^T ∆̂y − p), which involves the computation of the pseudoinverse (AS^{-1})^†; this is expensive, taking time O(m^2 n). Instead, we propose to construct v using the sketching matrix W of Section 1.3. More precisely, we construct the perturbation vector
v = (XS)^{1/2} W (ADW)^† (AD^2 A^T ∆̂y − p).   (19)
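The following is a hedged NumPy sketch of eqn. (19); the function name is ours, and forming the pseudoinverse densely is only for illustration (the running-time discussion below relies on reusing the SVD of ADW instead).

```python
import numpy as np

def perturbation_vector(A, x, s, W, dy_hat, p):
    """Sketch of v = (XS)^{1/2} W (ADW)^+ (A D^2 A^T dy_hat - p), eqn. (19).
    x, s: current primal/slack iterates; W: the same sketching matrix used for the
    preconditioner; dy_hat: approximate solution of eqn. (14a)."""
    d2 = x / s                                   # diagonal of D^2 = X S^{-1}
    residual = (A * d2) @ (A.T @ dy_hat) - p     # A D^2 A^T dy_hat - p
    ADW = (A * np.sqrt(d2)) @ W                  # A D W
    return np.sqrt(x * s) * (W @ (np.linalg.pinv(ADW) @ residual))
```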
The following lemma proves that the proposed v satisfies eqn. (18); see Appendix C.3 for the proof.
Lemma 3 Let W ∈ R^{n×w} be the sketching matrix of Section 1.3 and v be the perturbation vector of eqn. (19). Then, with probability at least 1 − δ, rank(ADW) = m and v satisfies eqn. (18).
We emphasize here that we will use the exact same sketching matrix W ∈ R^{n×w} to form the preconditioner used in the CG algorithm of Section 2 as well as the vector v in eqn. (19). This allows us to form the sketching matrix only once, thus saving time in practice. Next, we present a bound for the two-norm of the perturbation vector v of eqn. (19); see Appendix C.4 for the proof.
Lemma 4 With probability at least 1 − δ, our perturbation vector v in Lemma 3 satisfies
‖v‖_2 ≤ √(3nµ) · ‖f̃^{(t)}‖_2,   (20)
with f̃^{(t)} = Q^{-1/2} A D^2 A^T Q^{-1/2} z̃^t − Q^{-1/2} p.
Intuitively, the bound in eqn. (20) implies that ‖v‖_2 depends on how close the approximate solution ∆̂y is to the exact solution. Lemma 4 is particularly useful in proving the convergence of Algorithm 2, which needs ‖v‖_2 to be a small quantity. More precisely, combining a result from [39] with our preconditioner Q^{-1/2}, we can prove that ‖Q^{-1/2} p‖_2 ≤ O(n)√µ. This bound allows us to prove that if we run Algorithm 1 for O(log n) iterations, then ‖f̃^{(t)}‖_2 ≤ (γσ/(4√n))·√µ and ‖v‖_2 ≤ (γσ/4)·µ. The last two inequalities are critical in the convergence analysis of Algorithm 2; see Appendix F.1 and Appendix F.2 for details.
We are now ready to present the infeasible IPM algorithm. We will need the following definition for the neighborhood
N(γ) = {(x^k, y^k, s^k) : (x^k, s^k) > 0,  x_i^k s_i^k ≥ (1 − γ) µ_k,  and ‖r^k‖_2 / ‖r^0‖_2 ≤ µ_k / µ_0}.
Here γ ∈ (0, 1) and we note that the duality measure µ_k steadily reduces at each iteration.
Algorithm 2 Infeasible IPM
Input: A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, γ ∈ (0, 1), tolerance ε > 0, σ ∈ (0, 4/5);
Initialize: k ← 0; initial point (x^0, y^0, s^0);
1: while µ_k > ε do
2:    Compute sketching matrix W ∈ R^{n×w} (Section 1.3) with ζ = 1/2 and δ = O(n^{-2});
3:    Compute r_p^k = Ax^k − b; r_d^k = A^T y^k + s^k − c; and p^k from eqn. (16);
4:    Solve the linear system of eqn. (5) for z using Algorithm 1 with W from step (2) and t = O(log n); compute ∆̂y = Q^{-1/2} z;
5:    Compute v using eqn. (19) with W from step (2); ∆̂s using eqn. (14b); ∆̂x using eqn. (17);
6:    Compute α̃ = argmax{α ∈ [0, 1] : (x^k, y^k, s^k) + α(∆̂x^k, ∆̂y^k, ∆̂s^k) ∈ N(γ)};
7:    Compute ᾱ = argmin{α ∈ [0, α̃] : (x^k + α∆̂x^k)^T (s^k + α∆̂s^k)};
8:    Compute (x^{k+1}, y^{k+1}, s^{k+1}) = (x^k, y^k, s^k) + ᾱ(∆̂x^k, ∆̂y^k, ∆̂s^k); set k ← k + 1;
9: end while
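The sketch below strings together steps (3)-(5) and (8) of one outer iteration in Python, reusing the hypothetical helpers `inverse_sqrt_preconditioner`, `preconditioned_cg`, and `perturbation_vector` from the earlier sketches. The neighborhood and step-size searches of steps (6)-(7) are replaced here by a crude backtracking loop that only keeps (x, s) positive, so this is a schematic illustration rather than the algorithm as analyzed.

```python
import numpy as np

def ipm_iteration(A, b, c, x, y, s, W, sigma=0.5, t=50):
    """Schematic sketch of one iteration of Algorithm 2 (simplified step-size rule)."""
    n = x.size
    mu = x @ s / n
    d = np.sqrt(x / s)                                   # diagonal of D
    r_p = A @ x - b
    r_d = A.T @ y + s - c
    p = -r_p - sigma * mu * (A @ (1.0 / s)) + A @ x - (A * d**2) @ r_d
    Q_inv_sqrt = inverse_sqrt_preconditioner(A, d, W)    # step (2)/(4) preconditioner
    z, _ = preconditioned_cg(A, d, Q_inv_sqrt, p, t)     # approximate CG solve
    dy = Q_inv_sqrt @ z
    ds = -r_d - A.T @ dy                                 # eqn. (14b)
    v = perturbation_vector(A, x, s, W, dy, p)           # eqn. (19)
    dx = -x + sigma * mu / s - d**2 * ds - v / s         # eqn. (17)
    alpha = 1.0
    while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
        alpha *= 0.5                                     # crude surrogate for steps (6)-(7)
    return x + alpha * dx, y + alpha * dy, s + alpha * ds
```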
Running time of Algorithm 2. We start by discussing the running time to compute v. As discussed in Section 2, (ADW)^† can be computed in O(nnz(A) · log(m/δ) + m^3 log(m/δ)) time. Now, as W has O(log(m/δ)) non-zero entries per row, pre-multiplying by W takes O(nnz(A) log(m/δ)) time (assuming nnz(A) ≥ n). Since X and S are diagonal matrices, computing v takes O(nnz(A) · log(m/δ) + m^3 log(m/δ)) time, which is asymptotically the same as computing Q^{-1/2} (see eqn. (13)).
We now discuss the overall running time of Algorithm 2. At each iteration, with failure probability δ, the preconditioner Q^{-1/2} and the vector v can be computed in O(nnz(A) · log(m/δ) + m^3 log(m/δ)) time. In addition, for t = O(log n) iterations of Algorithm 1, all the matrix-vector products in the CG solver can be computed in O(nnz(A) · log n) time. Therefore, the computational time for steps (2)-(5) is given by O(nnz(A) · (log n + log(m/δ)) + m^3 log(m/δ)). Finally, taking a union bound over all iterations with δ = O(n^{-2}) (ignoring constant factors), Algorithm 2 converges with probability at least 0.9. The running time at each iteration is given by O((nnz(A) + m^3) log n).
4 Experiments
We demonstrate the empirical performance of our algorithm on a variety of synthetic and real-world datasets from the UCI ML Repository [20], such as ARCENE, DEXTER [23], DrivFace [16], and a gene expression cancer RNA-Sequencing dataset that is part of the PANCAN dataset [50]. See Appendix G, Table 1 for a description of the datasets. We observed that the results for both synthetic (Appendix G.2) and real-world data were qualitatively similar; we highlight results on representative real datasets. The experiments were implemented in Python and run on a server with an Intel E5-2623 V3 @ 3.0GHz (8 cores) and 64GB RAM. As an application, we consider ℓ1-regularized SVMs: all of the datasets are concerned with binary classification with m ≪ n, where n is the number of features. In Appendix G.1, we describe the ℓ1-SVM problem and how it can be formulated as an LP. Here, m is the number of training points, n is the feature dimension, and the size of the constraint matrix in the LP becomes m × (2n + 1).
Experimental Results. We compare our Algorithm 2 with a standard IPM (see Chapter 10, [44]) using CG and a standard IPM using a direct solver. We also use CVXPY as a benchmark to compare the accuracy of the solutions; we define the relative error ‖x̂ − x*‖_2 / ‖x*‖_2, where x̂ is our solution and x* is the solution generated by CVXPY. We also consider the number of outer iterations, namely the number of iterations of the IPM algorithm, as well as the number of inner iterations, namely the number of iterations of the CG solver. We denote the relative stopping tolerance for CG by tol_CG and we denote the outer iteration residual by τ. If not specified: τ = 10^{-9}, tol_CG = 10^{-5}, and σ = 0.5. We evaluated a Gaussian sketching matrix, and the initial triplet (x, y, s) for all IPM algorithms was set to all ones.
Figure 1(a) shows that our Algorithm 2 uses an order of magnitude fewer inner iterations than the un-preconditioned standard solver. This is due to the improved conditioning of the respective matrices in the normal equations, as demonstrated in Figure 1(b). Across various real and synthetic data sets, the results were qualitatively similar to those shown in Figure 1. Results for several real data sets are summarized in Appendix G, Table 1. The number of outer iterations is unaffected by our internal approximation methods and is generally the same for our Algorithm 2, the standard IPM with CG, and the standard IPM with a direct linear solver (denoted IPM w/Dir), as seen in Appendix G, Table 1. Figure 1 also demonstrates the relative insensitivity to the choice of w (the sketching dimension, i.e., the number of columns of the sketching matrix W of Section 1.3). For smaller values of w, our algorithm requires more inner iterations. However, across various choices of w, the number of inner iterations is always an order of magnitude smaller than the number required by the standard solver.
Figures 1(c)-1(d) show the performance of our algorithm for a range of (w, tolCG) pairs. Figure 1(c) demonstrates that the number of the inner iterations is robust to the choice of tolCG and w. The number of inner iterations varies between 15 and 35 for the ARCENE data set, while the standard IPM took on the order of 1, 000 iterations across all parameter settings. Across all settings, the relative error was fixed at 0.04%. In general, our sketched IPM is able to produce an extremely high accuracy solution across parameter settings. Thus we do not report additional numerical results for the relative error, which was consistently 10−3 or less. Figure 1(d) demonstrates a tradeoff of our approach: as both tolCG and w are increased, the condition number κ(Q−1/2AD2ATQ−1/2) decreases, corresponding to better conditioned systems. As a result, fewer inner iterations are required. Additional experiments can be found in Appendix G.4.
5 Conclusions
We proposed and analyzed an infeasible IPM algorithm using a preconditioned conjugate gradient solver for the normal equations and a novel perturbation vector to correct for the error due to the approximate solver. Thus, we speed up each iteration of the IPM algorithm, without increasing the overall number of iterations. We demonstrate empirically that our IPM requires an order of magnitude fewer inner iterations within each linear solve than standard IPMs. Several extensions of our work are discussed in Appendix A.
Broader Impact
Our work is focused on speeding up algorithms for tall/wide LPs. As such, it could have significant broader impacts by allowing users to solve increasingly larger LPs in the numerous settings discussed in our introduction. While applications of our work to real data could raise ethical considerations, this is an indirect (and unpredictable) side-effect of our work. Our experimental work uses publicly available datasets to evaluate the performance of our algorithms; no ethical considerations are raised.
Acknowledgements. We thank the anonymous reviewers for their helpful comments. AC and PD were partially supported by NSF FRG 1760353 and NSF CCF-BSF 1814041. HA was partially supported by BSF grant 2017698. PL was supported by an Amazon Graduate Fellowship in Artificial Intelligence.
1. What is the focus and contribution of the paper on linear programs?
2. What are the strengths of the proposed approach, particularly in combining randomized numerical linear algebra and interior point methods?
3. What are the weaknesses of the paper regarding its claims and reliance on prior works?
4. How does the reviewer assess the technical soundness of the complexity analysis?
5. Are there any questions regarding the applicability of the proposed method in machine learning problems?
Summary and Contributions
The manuscript considered infeasible interior-point methods for linear programs which have way more variables than constraints (or vice-versa by the dual). The key ingredients of the algorithm are i) sketching the wide system matrix, ii) designing a preconditioner based on the sketch, and iii) applying conjugate gradient to solve the modified Newton step. The authors analyzed the iteration complexity of the proposed algorithm.
Strengths
The combination of randomized numerical linear algebra and interior point methods is interesting. The assumption of tall/wide linear programs is well motivated - many machine learning problems enjoy this property. The complexity analysis appears to be technically solid.
Weaknesses
- In several places (line 93-97, line 217-223), the authors claim that their convergence result for the CG in equation (7) is novel. However, it is a simple application of Theorem 8 in [5] and the well-conditionedness of their preconditioned system matrix.
- The heavy lifting parts of the analysis are respectively done by the randomized numerical linear algebra literature [12], convergence analysis of conjugate gradient [5], and analysis of long-step infeasible IPMs [37]. The combination is novel though.
NIPS
Title
Understanding Why Generalized Reweighting Does Not Improve Over ERM
Abstract
Empirical risk minimization (ERM) is known to be non-robust in practice to distributional shift where the training and the test distributions are different. A suite of approaches, such as importance weighting, and variants of distributionally robust optimization (DRO), have been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms, as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to that obtained by ERM. We also show that adding small regularization which does not greatly affect the empirical training accuracy does not help. Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches.
1 Introduction
It has now been well established that empirical risk minimization (ERM) can empirically achieve high test performance on a variety of tasks, particularly with modern overparameterized models where the number of parameters is much larger than the number of training samples. This strong performance of ERM however has been shown to degrade under distributional shift, where the training and test distributions are different [HS15, BGO16, Tat17]. There are two broad categories of distribution shift: domain generalization, where the test distribution contains new environments not in the training distribution, as in domain adaptation; and subpopulation shift, where the two distributions have the same set of subpopulations but their mixture weights differ, as in algorithmic fairness applications.
People have proposed various approaches to learn models that are robust to distributional shift. The most classical approach is importance weighting (IW) [Shi00], which reweights training samples; in the context of subpopulation shift these weights are typically set so that each subpopulation/group has the same overall weight in the training objective. The approach most widely used today is Distributionally Robust Optimization (DRO) [DN18, HSNL18], in which we assume that the test distribution belongs to a certain set of distributions that are close to the training distribution (called the uncertainty set), and train the model on the worst distribution in that set. Many variants of DRO have been proposed and are used in practice [HNSS18, SKHL20, XDKR20, ZDKR21, ZDS+21].
While these approaches have been developed for the express purpose of improving ERM for distribution shift, a line of recent work has empirically shown the negative result that when used to train overparameterized models, these methods do not improve over ERM. For IW, [BL19] observed that its effect under stochastic gradient descent (SGD) diminishes over training epochs, and finally does not improve over ERM. For variants of DRO, [SKHL20] found that these methods overfit very easily, i.e. their test performances will drop to the same low level as ERM after sufficiently many epochs if no regularization is applied. [GLP21, KSM+21] compared these methods with ERM on a number of real-world applications, and found that in most cases none of these methods improves over ERM.
This line of empirical results has also been bolstered by some recent theoretical results. [SRKL20] constructed a synthetic dataset where a linear model trained with IW is provably not robust to subpopulation shift. [XYR21] further proved that under gradient descent (GD) with a sufficiently small learning rate, a linear classifier trained with either IW or ERM converges to the same max-margin classifier, and thus upon convergence, they are no different. These previous theoretical results are limited to linear models and specific approaches such as IW where sample weights are fixed during training. They are not applicable to more complex models, and more general approaches where the sample weights could iteratively change, including most DRO variants.
Towards placing the empirical results on a stronger theoretical footing, we define the class of generalized reweighting (GRW), which dynamically assigns weights to the training samples, and iteratively minimizes the weighted average of the sample losses. By allowing the weights to vary with iterations, we cover not just static importance weighting, but also the DRO approaches outlined earlier; though of course, the GRW class is much broader than just these instances.
In this work, we prove the comprehensive result that in both regression and classification, and for both overparameterized linear models and wide neural networks, the models learnt via any GRW approach and ERM are similar, in the sense that their implicit biases are (almost) equivalent. We note that extending the analysis from linear models to wide neural networks is non-trivial since it requires the result that wide neural networks can be approximated by their linearized counterparts to hold uniformly throughout the iterative process of GRW algorithms. Our results extend the analysis in [LXS+19], but as we show, the proof in the original paper had some flaws, due to which we have to fix the proof by changing the network initialization (Eqn. (9), see Appendix E).
Overall, the important takeaway is that distributionally robust generalization cannot be directly achieved by the broad class of GRW algorithms (which includes popular approaches such as importance weighting and most DRO variants). Progress towards this important goal thus requires either going beyond GRW algorithms, or devising novel loss functions that are adapted to GRW approaches. In Section 6 we will discuss some promising future directions as well as the limitations of this work.
2 Preliminaries
Let the input space be X ⊆ R^d and the output space be Y ⊆ R.¹ We assume that X is a subset of the unit L2 ball of R^d, so that any x ∈ X satisfies ‖x‖_2 ≤ 1. We have a training set {z_i = (x_i, y_i)}_{i=1}^n i.i.d. sampled from an underlying distribution P over X × Y. Denote X = (x_1, · · · , x_n) ∈ R^{d×n}, and Y = (y_1, · · · , y_n) ∈ R^n. For any function g : X ↦ R^m, we overload notation and use g(X) = (g(x_1), · · · , g(x_n)) ∈ R^{m×n} (except when m = 1, where g(X) is defined as a column vector). Let the loss function be ℓ : Y × Y → [0, 1]. ERM trains a model by minimizing its expected risk R(f; P) = E_{z∼P}[ℓ(f(x), y)] via minimizing the empirical risk R̂(f) = (1/n) Σ_{i=1}^n ℓ(f(x_i), y_i).
In distributional shift, the model is evaluated not on the training distribution P, but on a different test distribution P_test, so that we care about the expected risk R(f; P_test). A large family of methods designed for such distributional shift is distributionally robust optimization (DRO), which minimizes the expected risk over the worst-case distribution Q ≪ P² in a ball w.r.t. divergence D around the training distribution P. Specifically, DRO minimizes the expected DRO risk defined as:
R_{D,ρ}(f; P) = sup_{Q ≪ P} { E_Q[ℓ(f(x), y)] : D(Q ‖ P) ≤ ρ }   (1)
for ρ > 0. Examples include CVaR, χ²-DRO [HSNL18], and DORO [ZDKR21], among others.
¹Our results can be easily extended to the multi-class scenario (see Appendix B).
²For distributions P and Q, Q is absolutely continuous with respect to P, or Q ≪ P, means that for any event A, P(A) = 0 implies Q(A) = 0.
A common category of distribution shift is known as subpopulation shift. Let the data domain contain K groups D_1, · · · , D_K. The training distribution P is the distribution over all groups, and the test distribution P_test is the distribution over one of the groups. Let P_k(z) = P(z | z ∈ D_k) be the conditional distribution over group k; then P_test can be any one of P_1, · · · , P_K. The goal is to train a model f that performs well over every group. There are two common ways to achieve this goal: one is minimizing the balanced empirical risk, which is an unweighted average of the empirical risk over each group, and the other is minimizing the worst-group risk defined as
R_max(f; P) = max_{k=1,···,K} R(f; P_k) = max_{k=1,···,K} E_{z∼P}[ℓ(f(x), y) | z ∈ D_k]   (2)
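As a small hedged sketch (our own helper, not part of the paper), the empirical counterpart of eqn. (2) can be computed as follows, given per-sample losses and group labels:

```python
import numpy as np

def worst_group_risk(losses, group_ids):
    """Empirical worst-group risk: the maximum over groups of the mean per-group loss.
    losses: array of l(f(x_i), y_i); group_ids: group index of each sample."""
    groups = np.unique(group_ids)
    group_risks = [losses[group_ids == k].mean() for k in groups]
    return max(group_risks)
```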
3 Generalized Reweighting (GRW)
Various methods have been proposed towards learning models that are robust to distributional shift. In contrast to analyzing each of these individually, we instead consider a large class of what we call Generalized Reweighting (GRW) algorithms that includes the ones mentioned earlier, but potentially many more. Loosely, GRW algorithms iteratively assign each sample a weight during training (that could vary with the iteration) and iteratively minimize the weighted average risk. Specifically, at iteration t, GRW assigns a weight q_i^{(t)} to sample z_i, and minimizes the weighted empirical risk:
R̂_{q^{(t)}}(f) = Σ_{i=1}^n q_i^{(t)} ℓ(f(x_i), y_i)   (3)
where q^{(t)} = (q_1^{(t)}, · · · , q_n^{(t)}) and q_1^{(t)} + · · · + q_n^{(t)} = 1.
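For concreteness, a minimal PyTorch sketch of one GRW update on the weighted risk of eqn. (3) is shown below. The helper name and training-loop details are ours; it assumes a per-sample loss function (i.e., `reduction='none'`) and a weight vector q summing to one.

```python
import torch

def grw_step(model, optimizer, loss_fn, x, y, q):
    """One gradient step on the q-weighted empirical risk of eqn. (3)."""
    optimizer.zero_grad()
    per_sample_loss = loss_fn(model(x), y)      # shape (n,), no reduction
    weighted_risk = (q * per_sample_loss).sum()
    weighted_risk.backward()
    optimizer.step()
    return weighted_risk.item()
```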
Static GRW assigns to each z_i = (x_i, y_i) a fixed weight q_i that does not change during training, i.e. q_i^{(t)} ≡ q_i. A classical method is importance weighting [Shi00], where if z_i ∈ D_k and the size of D_k is n_k, then q_i = (K n_k)^{-1}. Under importance weighting, (3) becomes the balanced empirical risk in which each group has the same weight. Note that ERM is also a special case of static GRW.
On the other hand, in dynamic GRW, q^{(t)} changes with t. For instance, any approach that iteratively upweights samples with high losses in order to help the model learn "hard" samples, such as DRO, is an instance of GRW. When estimating the population DRO risk R_{D,ρ}(f; P) in Eqn. (1), if P is set to the empirical distribution over the training samples, then Q ≪ P implies that Q is also a distribution over the training samples. Thus, DRO methods belong to the broad class of GRW algorithms. There are two common ways to implement DRO. One uses Danskin's theorem and chooses Q as the maximizer of E_Q[ℓ(f(x), y)] in each epoch. The other one formulates DRO as a bi-level optimization problem, where the lower level updates the model to minimize the expected risk over Q, and the upper level updates Q to maximize it. Both can be seen as instances of GRW. As one popular instance of the latter, Group DRO was proposed by [SKHL20] to minimize (2). Denote the empirical risk over group k by R̂_k(f), and the model at time t by f^{(t)}. Group DRO iteratively sets q_i^{(t)} = g_k^{(t)}/n_k for all z_i ∈ D_k, where g_k^{(t)} is the group weight that is updated as
g_k^{(t)} ∝ g_k^{(t−1)} exp(ν R̂_k(f^{(t−1)}))   (∀k = 1, · · · , K)   (4)
for some ν > 0, and then normalized so that q_1^{(t)} + · · · + q_n^{(t)} = 1. [SKHL20] then showed (in their Proposition 2) that for convex settings, the Group DRO risk of the iterates converges to the global minimum at the rate O(t^{-1/2}) if ν is sufficiently small.
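A hedged NumPy sketch of the Group DRO reweighting of eqn. (4) is given below (the helper name and the default value of ν are ours): exponentiate the previous group weights by the current per-group empirical risks, renormalize, and spread each group weight uniformly over its samples.

```python
import numpy as np

def group_dro_weights(g_prev, group_risks, group_sizes, nu=0.01):
    """Group DRO update of eqn. (4): g_k <- g_k * exp(nu * R_k), then q_i = g_k / n_k."""
    g = g_prev * np.exp(nu * group_risks)
    g = g / g.sum()                     # sum_k g_k = 1, so the induced q_i sum to 1
    q_per_group = g / group_sizes       # per-sample weight assigned within group k
    return g, q_per_group
```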
4 Theoretical Results for Regression
In this section, we will study GRW for regression tasks that use the squared loss
ℓ(ŷ, y) = (1/2)(ŷ − y)².   (5)
We will prove that for both linear models and sufficiently wide fully-connected neural networks, the implicit bias of GRW is equivalent to that of ERM, so that starting from the same initial point, GRW and ERM will converge to the same point when trained for an infinitely long time, which explains why GRW does not improve over ERM without regularization and early stopping. We will further show that while regularization can affect this implicit bias, it must be large enough to significantly lower the training performance, or the final model will still be similar to the unregularized ERM model.
4.1 Linear Models
We first demonstrate our result on simple linear models to provide our readers with a key intuition; later, we will apply this same intuition to neural networks. This key intuition draws from results of [GLSS18]. Let the linear model be denoted by f(x) = ⟨θ, x⟩, where θ ∈ R^d. We consider the overparameterized setting where d > n. The weight update rule of GRW under GD is the following:
θ^{(t+1)} = θ^{(t)} − η Σ_{i=1}^n q_i^{(t)} ∇_θ ℓ(f^{(t)}(x_i), y_i)   (6)
where η > 0 is the learning rate. For a linear model with the squared loss, the update rule is
θ^{(t+1)} = θ^{(t)} − η Σ_{i=1}^n q_i^{(t)} x_i (f^{(t)}(x_i) − y_i)   (7)
For this training scheme, we can prove that if the training error converges to zero, then the model converges to an interpolator θ* (s.t. ∀i, ⟨θ*, x_i⟩ = y_i) independent of q_i^{(t)} (proofs in Appendix D):
Theorem 1. If x_1, · · · , x_n are linearly independent, then under the squared loss, for any GRW such that the empirical training risk R̂(f^{(t)}) → 0 as t → ∞, it holds that θ^{(t)} converges to an interpolator θ* that only depends on θ^{(0)} and x_1, · · · , x_n, but does not depend on q_i^{(t)}.
The proof is based on the following key intuition regarding the update rule (7): θ^{(t+1)} − θ^{(t)} is a linear combination of x_1, · · · , x_n for all t, so θ^{(t)} − θ^{(0)} always lies in the linear subspace span{x_1, · · · , x_n}, which is an n-dimensional linear subspace if x_1, · · · , x_n are linearly independent. By Cramer's rule, there is exactly one θ̃ in this subspace such that we get interpolation of all the data: ⟨θ̃ + θ^{(0)}, x_i⟩ = y_i for all i ∈ {1, . . . , n}. In other words, the parameter θ* = θ̃ + θ^{(0)} in this subspace that interpolates all the data is unique. Thus the proof would follow if we were to show that θ^{(t)} − θ^{(0)}, which lies in the subspace, also converges to interpolating the data.
We have essentially proved the following sobering result: the implicit bias of any GRW that achieves zero training error is equivalent to ERM, so GRW does not improve over ERM. While the various distributional shift methods discussed in the introduction have been shown to satisfy the precondition of convergence to zero training error with overparameterized models and linearly independent inputs [SKHL20], we provide the following theorem that shows this for the broad class of GRW methods. Specifically, we show this result for any GRW satisfying the following assumption with a sufficiently small learning rate:
Assumption 1. There are constants q_1, · · · , q_n s.t. ∀i, q_i^{(t)} → q_i as t → ∞, and min_i q_i = q* > 0.
Theorem 2. If x_1, · · · , x_n are linearly independent, then there exists η_0 > 0 such that for any GRW satisfying Assumption 1 with the squared loss, and any η ≤ η_0, the empirical training risk R̂(f^{(t)}) → 0 as t → ∞.
Finally, we use a simple experiment to demonstrate the correctness of this result. The experiment is conducted on a training set of six MNIST images, five of which are digit 0 and one is digit 1. We use a 784-dimensional linear model and run ERM, importance weighting and Group DRO. The results are presented in Figure 1, and they show that the training loss of each method converges to 0, and the gap between the model weights of importance weighting, Group DRO and ERM converges to 0, meaning that all three model weights converge to the same point, whose L2 norm is about 0.63. Figure 1d also shows that the group weights in Group DRO empirically satisfy Assumption 1.
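A toy sketch in the spirit of this experiment is given below (random Gaussian data instead of MNIST; all constants, the dynamic weighting rule, and the helper name are ours). It illustrates Theorem 1: two very different weighting schemes, run under the update rule (7) from the same initialization, converge to essentially the same interpolator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta, T = 6, 784, 0.1, 50_000
X = rng.normal(size=(n, d)) / np.sqrt(d)       # rows linearly independent w.h.p.
y = rng.normal(size=n)

def run_grw(weight_fn):
    """Run update rule (7) with per-iteration weights produced by weight_fn(residual)."""
    theta = np.zeros(d)
    for _ in range(T):
        residual = X @ theta - y
        q = weight_fn(residual)
        theta -= eta * X.T @ (q * residual)
    return theta

erm = run_grw(lambda r: np.full(n, 1.0 / n))                       # static uniform weights
dro_like = run_grw(lambda r: np.exp(r**2) / np.exp(r**2).sum())    # upweights high-loss samples
print(np.linalg.norm(erm - dro_like))   # should be tiny: both reach the same interpolator
```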
4.2 Wide Neural Networks (Wide NNs)
Now we study sufficiently wide fully-connected neural networks. We extend the analysis in [LXS+19] in the neural tangent kernel (NTK) regime [JGH18]. In particular we study the following network:
h^{l+1} = (W^l / √d_l) x^l + β b^l  and  x^{l+1} = σ(h^{l+1})   (l = 0, · · · , L)   (8)
where σ is a non-linear activation function, W^l ∈ R^{d_{l+1}×d_l} and W^L ∈ R^{1×d_L}. Here d_0 = d. The parameter vector θ consists of W^0, · · · , W^L and b^0, · · · , b^L (θ is the concatenation of all flattened weights and biases). The final output is f(x) = h^{L+1}. And let the neural network be initialized as
W^{l(0)}_{i,j} ∼ N(0, 1),  b^{l(0)}_j ∼ N(0, 1)   (l = 0, · · · , L − 1),   and   W^{L(0)}_{i,j} = 0,  b^{L(0)}_j ∼ N(0, 1).   (9)
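The following is a hedged NumPy sketch of eqns. (8)-(9): an NTK-parameterized fully-connected network with standard-normal weights and biases in every layer except the output layer, whose weights are zero-initialized as in the paper's modification. The choice of activation (tanh) and the value of β are our own illustrative assumptions, since the text only requires σ to satisfy Assumption 2 and treats β as a fixed constant.

```python
import numpy as np

def init_ntk_params(d, widths, rng):
    """Initialization (9): Gaussian weights/biases, except zero weights in the output layer."""
    dims = [d] + list(widths) + [1]
    params = []
    for l in range(len(dims) - 1):
        W = rng.normal(size=(dims[l + 1], dims[l]))
        b = rng.normal(size=dims[l + 1])
        if l == len(dims) - 2:            # output layer: W^{L(0)} = 0, Gaussian bias
            W = np.zeros_like(W)
        params.append((W, b))
    return params

def forward(params, x, beta=0.1):
    """Forward pass of eqn. (8): h^{l+1} = W^l x^l / sqrt(d_l) + beta * b^l, x^{l+1} = sigma(h^{l+1})."""
    h = x
    for l, (W, b) in enumerate(params):
        z = W @ h / np.sqrt(h.shape[0]) + beta * b
        h = np.tanh(z) if l < len(params) - 1 else z   # no activation on the scalar output
    return h
```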
We also need the following assumption on the wide neural network:
Assumption 2. σ is differentiable everywhere. Both σ and its first-order derivative σ̇ are Lipschitz.³
Difference from [JGH18]. Our initialization (9) differs from the original one in [JGH18] in the last (output) layer, where we use the zero initialization W^{L(0)}_{i,j} = 0 instead of the Gaussian initialization W^{L(0)}_{i,j} ∼ N(0, 1). This modification permits us to accurately approximate the neural network with its linearized counterpart (11), as we notice that the proofs in [LXS+19] (particularly the proofs of their Theorem 2.1 and their Lemma 1 in Appendix G) are flawed. In Appendix E we will explain what goes wrong in their proofs and how we manage to fix the proofs with our modification.
Denote the neural network at time t by f^{(t)}(x) = f(x; θ^{(t)}), which is parameterized by θ^{(t)} ∈ R^p where p is the number of parameters. We use the shorthand ∇_θ f^{(0)}(x) := ∇_θ f(x; θ)|_{θ=θ^{(0)}}. The neural tangent kernel (NTK) of this model is Θ^{(0)}(x, x′) = ∇_θ f^{(0)}(x)^⊤ ∇_θ f^{(0)}(x′), and the Gram matrix is Θ^{(0)} = Θ^{(0)}(X, X) ∈ R^{n×n}. For this wide NN, we still have the following NTK theorem:
Lemma 3. If σ is Lipschitz and d_l → ∞ for l = 1, · · · , L sequentially, then Θ^{(0)}(x, x′) converges in probability to a non-degenerate⁴ deterministic limiting kernel Θ(x, x′).
The kernel Gram matrix Θ = Θ(X, X) ∈ R^{n×n} is a positive semi-definite symmetric matrix. Denote its largest and smallest eigenvalues by λ_max and λ_min. Note that Θ is non-degenerate, so we can assume that λ_min > 0 (which is almost surely true when d_L ≫ n). Then we have:
Theorem 4. Let f^{(t)} be a wide fully-connected neural network that satisfies Assumption 2 and is trained by any GRW satisfying Assumption 1 with the squared loss. Let f^{(t)}_ERM be the same model trained by ERM from the same initial point. If d_1 = · · · = d_L = d̃, ∇_θ f^{(0)}(x_1), · · · , ∇_θ f^{(0)}(x_n) are linearly independent, and λ_min > 0, then there exists a constant η_1 > 0 such that: if η ≤ η_1,⁵ then for any δ > 0, there exists D̃ > 0 such that as long as d̃ ≥ D̃, with probability at least (1 − δ) over random initialization we have: for any test point x ∈ R^d such that ‖x‖_2 ≤ 1, as d̃ → ∞,
lim sup_{t→∞} |f^{(t)}(x) − f^{(t)}_ERM(x)| = O(d̃^{−1/4}) → 0   (10)
Note that for simplicity, in the theorem we only consider the case where d_1 = · · · = d_L = d̃ → ∞, but in fact the result can be very easily extended to the case where d_l/d_1 → α_l for l = 2, · · · , L for some constants α_2, · · · , α_L, and d_1 → ∞. Here we provide a proof sketch for this theorem. The key is to consider the linearized neural network of f^{(t)}(x):
f^{(t)}_lin(x) = f^{(0)}(x) + ⟨θ^{(t)} − θ^{(0)}, ∇_θ f^{(0)}(x)⟩   (11)
which is a linear model with features ∇_θ f^{(0)}(x). Thus if ∇_θ f^{(0)}(x_1), · · · , ∇_θ f^{(0)}(x_n) are linearly independent, then the linearized NN converges to the unique interpolator. Then we show that the
³f is Lipschitz if there exists a constant L > 0 such that for any x_1, x_2, |f(x_1) − f(x_2)| ≤ L‖x_1 − x_2‖_2.
⁴Non-degenerate means that Θ(x, x′) depends on x and x′ and is not a constant.
⁵For ease of understanding, later we will write this condition as “with a sufficiently small learning rate”.
wide neural network can be approximated by its linearized counterpart uniformly throughout training, which is considerably more subtle in our case due to the GRW dynamics. Here we prove that the gap is bounded by O(d̃^{−1/4}), but in fact we can prove that it is bounded by O(d̃^{−1/2+ε}) for any ε > 0:
Lemma 5 (Approximation Theorem). For a wide fully-connected neural network f^{(t)} that satisfies Assumption 2 and is trained by any GRW satisfying Assumption 1 with the squared loss, let f^{(t)}_lin be its linearized neural network trained by the same GRW (i.e. the q_i^{(t)} are the same for both networks for any i and t). Under the conditions of Theorem 4, with a sufficiently small learning rate, for any δ > 0, there exist constants D̃ > 0 and C > 0 such that as long as d̃ ≥ D̃, with probability at least (1 − δ) over random initialization we have: for any test point x ∈ R^d such that ‖x‖_2 ≤ 1,
sup_{t≥0} |f^{(t)}_lin(x) − f^{(t)}(x)| ≤ C d̃^{−1/4}   (12)
Theorem 4 shows that at any test point x within the unit ball, the gap between the outputs of wide NNs trained by GRW and ERM from the same initial point is arbitrarily close to 0. So we have shown that for regression, with both linear and wide NNs, GRW does not improve over ERM.
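To illustrate the proof sketch, the following hedged NumPy snippet computes the prediction of the linearized network of eqn. (11) when trained to zero squared loss: with features ∇_θ f^{(0)}(x), the limiting interpolator is the minimum-norm one, f^{(0)}(x) + Θ^{(0)}(x, X) (Θ^{(0)})^{-1}(Y − f^{(0)}(X)), independent of the GRW weights. The function name is ours and the Jacobians are assumed to be supplied by the user (e.g. via autodiff).

```python
import numpy as np

def ntk_interpolator(jac0_train, f0_train, y_train, jac0_test, f0_test):
    """Limiting prediction of the linearized network (11) trained to interpolation.
    jac0_*: Jacobians grad_theta f^{(0)}(x) at initialization, shape (num_points, p)."""
    Theta = jac0_train @ jac0_train.T                      # NTK Gram matrix Theta^{(0)}
    alpha = np.linalg.solve(Theta, y_train - f0_train)     # kernel coefficients
    return f0_test + jac0_test @ (jac0_train.T @ alpha)
```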
4.3 Wide Neural Networks, with L2 Regularization
Previous work such as [SKHL20] proposed to improve DRO algorithms by adding an L2 penalty to the objective function. In this section, we thus study adding L2 regularization to GRW algorithms:
R̂^µ_{q^{(t)}}(f) = Σ_{i=1}^n q_i^{(t)} ℓ(f(x_i), y_i) + (µ/2) ‖θ − θ^{(0)}‖_2^2   (13)
From the outset, it is easy to see that under L2 regularization, GRW methods have different implicit biases than ERM. For example, when f is a linear model and ℓ is convex and smooth, R̂^µ_{q^{(t)}}(f) with static GRW is a convex smooth objective function, so under GD with a sufficiently small learning rate, the model will converge to the global minimizer (see Appendix D.1). Moreover, the global optimum θ* satisfies ∇_θ R̂^µ_{q^{(t)}}(f(x; θ*)) = 0, solving which yields θ* = θ^{(0)} + (XQX^⊤ + µI)^{−1} XQ(Y − f^{(0)}(X)), which depends on Q = diag(q_1, · · · , q_n); so adding L2 regularization at least seems to yield different results from ERM (whether it improves over ERM might depend on q_1, · · · , q_n).
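A hedged NumPy sketch of this closed-form minimizer for a linear model under static GRW is given below (the function name is ours; X is stored with columns x_i as in the text).

```python
import numpy as np

def static_grw_ridge_solution(X, Y, q, mu, theta0=None):
    """Closed-form minimizer of eqn. (13) for a linear model and static weights q:
    theta* = theta0 + (X Q X^T + mu I)^{-1} X Q (Y - X^T theta0), with Q = diag(q)."""
    d, n = X.shape
    if theta0 is None:
        theta0 = np.zeros(d)
    Q = np.diag(q)
    residual = Y - X.T @ theta0                 # Y - f^{(0)}(X) for f(x) = <theta0, x>
    return theta0 + np.linalg.solve(X @ Q @ X.T + mu * np.eye(d), X @ Q @ residual)
```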
Theorem 6. Suppose there exists M0 > 0 s.t. ∥∥∇θf (0)(x)∥∥2 ≤M0 for all ‖x‖2 ≤ 1. If λmin > 0223 and µ > 0, then for a wide NN satisfying Assumption 2, and any GRW minimizing the squared loss224 with a sufficiently small learning rate η, if d1 = d2 = · · · = dL = d̃, ∇θf (0)(x1), · · · ,∇θf (0)(xn)225 are linearly independent, and the empirical training risk of f (t)reg satisfies226
lim sup t→∞
R̂(f (t)reg ) < (14)
for some > 0, then with a sufficiently small learning rate, as d̃→∞, with probability close to 1227 over random initialization, for any x such that ‖x‖2 ≤ 1 we have228
lim sup t→∞ ∣∣∣f (t)reg (x)− f (t)ERM(x)∣∣∣ = O(d̃−1/4 +√ )→ O(√ ) (15) where f (t)reg is trained by regularized GRW and f (t) ERM by unregularized ERM from same initial points.229
The proof again starts from analyzing linearized neural networks, and showing that regularization230 does not help there (Appendix D.4.2). Then, we need to prove a new approximation theorem for L2231 regularized GRW connecting wide NNs to their linearized counterparts uniformly through the GRW232 training process (Appendix D.4.1). Note that with regularization, we no longer need Assumption233 1 to prove the new approximation theorem, because previously Assumption 1 is used to prove the234 convergence of GRW, but with regularization GRW naturally converges.235
Theorem 6 shows that if the training error can go below , then the gap between the outputs of the236 two models on any test point x within the unit ball will be at most O( √ ). Thus, if is very small,237 regularized GRW yields a very similar model to unregularized ERM, and thus makes improvement.238
To empirically demonstrate this result, we run the same experiment as in Section 4.1 but with L2239 regularization. The results are presented in Figure 2. We can see that when the regularization is small,240 the training losses still converge to 0, and the three model weights still converge to the same point.241 On the contrary, with a large regularization, the training loss does not converge to 0, and the three242 model weights no longer converge to the same point. This shows that the regularization must be large243 enough to lower the training performance in order to make a significant difference to the implicit bias.244
5 Theoretical Results for Classification245
Now we consider classification where Y = {+1,−1}. The big difference is that classification losses246 don’t have finite minimizers. A classification loss converging to zero means that the model weight247 “explodes” to infinity instead of converging to a finite point. We focus on the canonical logistic loss:248
`(ŷ, y) = log(1 + exp(−ŷy)) (16)
5.1 Linear Models249
We first consider training the linear model f(x) = 〈θ,x〉 with GRW under gradient descent with the250 logistic loss. As noted earlier, in this setting, [BL19] made the empirical observation that importance251 weighting does not improve over ERM. Then, [XYR21] proved that for importance weighting252 algorithms, as t→∞, ‖θ(t)‖2 →∞ and θ(t)/‖θ(t)‖2 converges to a unit vector that does not depend253 on the sample weights, so it does not improve over ERM. To extend this theoretical result to the broad254 class of GRW algorithms, we will prove two results. First, in Theorem 7 we will show that under the255 logistic loss, any GRW algorithm satisfying the following weaker assumption:256
Assumption 3. For all i, lim inft→∞ q (t) i > 0,257
if the training error converges to 0, and the direction of the model weight converges to a fixed unit258 vector, then this unit vector must be the max-margin classifier defined as259
θ̂MM = arg max θ:‖θ‖2=1
{ min
i=1,··· ,n yi · 〈θ,xi〉
} (17)
Second, Theorem 8 shows that for any GRW satisfying Assumption 1, the training error converges to260 0 and the direction of the model weight converges, so it does not improve over ERM.261 Theorem 7. If x1, · · · ,xn are linearly independent, then for the logistic loss, we have: for any262 GRW satisfying Assumption 3, if as t→∞ the empirical training risk R̂(f (t)) converges to 0 and263 θ(t)/‖θ(t)‖2 → u for some unit vector u, then u = θ̂MM.264
This result is an extension of [SHN+18]. Note that θ̂MM does not depend on q (t) i , so this result shows265 that the sample weights have no effect on the implicit bias. Thus, for any GRW method that only266 satisfies the weak Assumption 3, as long as the training error converges to 0 and the model weight267 direction converges, GRW does not improve over ERM. We next show that any GRW satisfying268 Assumption 1 does have its model weight direction converge, and its training error converge to 0.269 Theorem 8. For any loss ` that is convex, L-smooth in ŷ and strictly monotonically decreasing to270 zero as yŷ → +∞, and GRW satisfying Assumption 1, denote F (θ) = ∑n i=1 qi`(〈θ,xi〉, yi). If271 x1, · · · ,xn are linearly independent, then with a sufficiently small learning rate η, we have:272
F (θ(t))→ 0 as t→∞.(i) ∥∥θ(t)∥∥
2 →∞ as t→∞.(ii)273
Let θR = arg minθ{F (θ) : ‖θ‖2 ≤ R}. θR is unique for any R such that min‖θ‖2≤R F (θ) < mini qi`(0, yi). And if limR→∞ θRR exists, then limt→∞ θ(t)
‖θ(t)‖ 2
also exists and they are equal.
(iii)274
This result is an extension of Theorem 1 of [JDST20]. For the logistic loss, it is easy to show that275 it satisfies the conditions of the above theorem and limR→∞ θRR = θ̂MM. Thus, Theorems 8 and 7276 together imply that all GRW satisfying Assumption 1 (including ERM) have the same implicit bias277 (see Appendix D.5.3). We also have empirical verification for these results (see Appendix C).278
Remark. It is impossible to extend these results to wide NNs like Theorem 4 because for a neural279 network, if ‖θ(t)‖2 goes to infinity, then ‖∇θf‖2 will also go to infinity. However, for a linear model,280 the gradient is a constant. Consequently, the gap between the neural networks and its linearized281 counterpart will “explode” under gradient descent, so there can be no approximation theorem like282 Lemma 5 that can connect wide NNs to their linearized counterparts. Thus, we consider regularized283 GRW, for which θ(t) converges to a finite point and there is an approximation theorem.284
5.2 Wide Neural Networks, with L2 Regularization285
Consider minimizing the regularized weighted empirical risk (13) with ` being the logistic loss. As in286 the regression case, with L2 regularization, GRW methods have different implicit biases than ERM287 for the same reasons as in Section 4.3. And similarly, we can show that in order for GRW methods to288 be sufficiently different from ERM, the regularization needs to be large enough to significantly lower289 the training performance. Specifically, in the following theorem we show that if the regularization290 is too small to lower the training performance, then a wide neural network trained with regularized291 GRW and the logistic loss will still be very close to the max-margin linearized neural network:292
fMM(x) = 〈θ̂MM,∇θf (0)(x)〉 where θ̂MM = arg max ‖θ‖2=1
{ min
i=1,··· ,n yi · 〈θ,∇θf (0)(xi)〉
} (18)
Note that fMM does not depend on q (t) i . Moreover, using the result in the previous section we can293 show that a linearized neural network trained with unregularized ERM will converge to fMM:294
Theorem 9. Suppose there exists M0 > 0 such that ∥∥∇θf (0)(x)∥∥2 ≤M0 for all test point x. For a295 wide NN satisfying Assumption 2, and for any GRW satisfying Assumption 1 with the logistic loss,296 if d1 = d2 = · · · = dL = d̃ and ∇θf (0)(x1), · · · ,∇θf (0)(xn) are linearly independent and the297 learning rate is sufficiently small, then for any δ > 0 there exists a constant C > 0 such that: with298 probability at least (1 − δ) over random initialization, as d̃ → ∞ we have: for any ∈ (0, 14 ), if299 the empirical training error satisfies lim supt→∞ R̂(f (t) reg ) < , then for any test point x such that300 |fMM(x)| > C · (− log 2 )−1/2, f (t)reg (x) has the same sign as fMM(x) when t is sufficiently large.301
This result says that at any test point x on which the max-margin linear classifier classifies with a302 margin of Ω((− log 2 )−1/2), the neural network has the same prediction. And as decreases, the303 confidence threshold also becomes lower. Similar to Theorem 6, this theorem provides the scaling of304 the gap between the regularized GRW model and the unregularized ERM model w.r.t. .305
This result justifies the empirical observation in [SKHL20] that with large regularization, some GRW306 algorithms can maintain a high worst-group test performance, with the cost of suffering a significant307 drop in training accuracy. On the other hand, if the regularization is small and the model can achieve308 nearly perfect training accuracy, then its worst-group test performance will still significantly drop.309
6 Discussion310
6.1 Distributionally Robust Generalization and Future Directions311
A large body of prior work focused on distributionally robust optimization, but we show that these312 methods have (almost) equivalent implicit biases as ERM. In other words, distributionally robust313 optimization (DRO) does not necessarily have better distributionally robust generalization (DRG).314
Therefore, we argue that it is necessary to design principled ways to improve DRG, which is what315 people really want in the first place. Here we discuss three promising approaches to improving DRG.316
The first approach is data augmentation and pretraining on large datasets. Our theoretical findings317 suggest that the implicit bias of GRW is determined by the training samples and the initial point, but318 not the sample weights. Thus, to improve DRG, we can either obtain more training samples, or start319 from a better initial point, as demonstrated in two recent papers [WGS+22, SKL+22].320
The second approach (for classification) is to go beyond the class of (iterative) sample reweighting321 based GRW algorithms, for instance via logit adjustment [MJR+21], which makes a classifier have322 larger margins on smaller groups to improve its generalization on smaller groups. An early approach323 by [CWG+19] proposed to add an O(n−1/4k ) additive adjustment term to the logits output by the324 classifier. Following this spirit, [MJR+21] proposed the LA-loss which also adds an additive adjust-325 ment term to the logits. [YCZC20] proposed the CDT-loss which adds a multiplicative adjustment326 term to the logits by dividing the logits of different classes with different temperatures. [KPOT21]327 proposed the VS-loss which includes both additive and multiplicative adjustment terms, and they328 showed that only the multiplicative adjustment term affects the implicit bias, while the additive term329 only affects optimization, a fact that can be easily derived from our Theorem 8. Finally, [LZT+21]330 proposed AutoBalance which optimizes the adjustment terms with a bi-level optimization framework.331
The third approach is to stay within the class of GRW algorithms, but to change the classifica-332 tion/regression loss function to be suited to GRW. A recent paper [WCHH22] showed that for linear333 classifiers, one can make the implicit bias of GRW dependent on the sample weights by replacing the334 exponentially-tailed logistic loss with the following polynomially-tailed loss:335
`α,β(ŷ, y) = `left(ŷy) , if ŷy < β 1
[ŷy − (β − 1)]α , if ŷy ≥ β
(19)
And this result can be extended to GRW satisfying Assumption 1 using our Theorem 8. The reason336 why loss (19) works is that it changes limR→∞ θRR , and the new limit depends on the sample weights.337
6.2 Limitations338
Like most theory papers, our work makes some strong assumptions. The two main assumptions are:339
(i) The model is a linear model or a sufficiently wide fully-connected neural network.340 (ii) The model is trained for sufficiently long time, i.e. without early stopping.341
Regarding (i), [COB19] argued that NTK neural networks fall in the “lazy training” regime and342 results might not be transferable to general neural networks. However, this class of neural networks343 has been widely studied in recent years and has provided considerable insights into the behavior344 of general neural networks, which is hard to analyze otherwise. Regarding (ii), in some easy tasks,345 when early stopping is applied, existing algorithms for distributional shift can do better than ERM346 [SKHL20]. However, as demonstrated in [GLP21, KSM+21], in real applications these methods still347 cannot significantly improve over ERM even with early stopping, so early stopping is not the ultimate348 universal solution. Thus, though inevitably our results rely on some strong assumptions, we believe349 that they provide important insights into the problems of existing methods and directions for future350 work, which are significant contributions to the study of distributional shift problems.351
7 Conclusion352
In this work, we posit a broad class of what we call Generalized Reweighting (GRW) algorithms that353 include popular approaches such as importance weighting, and Distributionally Robust Optimization354 (DRO) variants, that were designed towards the task of learning models that are robust to distributional355 shift. We show that when used to train overparameterized linear models or wide NN models, even this356 very broad class of GRW algorithms does not improve over ERM, because they have the same implicit357 biases. We also showed that regularization does not help if it is not large enough to significantly358 lower the average training performance. Our results thus suggest to make progress towards learning359 models that are robust to distributional shift, we have to either go beyond this broad class of GRW360 algorithms, or design new losses specifically targeted to this class.361
References362 [BGO16] Su Lin Blodgett, Lisa Green, and Brendan O’Connor. Demographic dialectal variation363 in social media: A case study of African-American English. In Proceedings of the 2016364 Conference on Empirical Methods in Natural Language Processing, pages 1119–1130,365 Austin, Texas, November 2016. Association for Computational Linguistics.366
[BL19] Jonathon Byrd and Zachary Lipton. What is the effect of importance weighting in deep367 learning? In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of368 the 36th International Conference on Machine Learning, volume 97 of Proceedings of369 Machine Learning Research, pages 872–881. PMLR, 09–15 Jun 2019.370
[COB19] Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable371 programming. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox,372 and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32.373 Curran Associates, Inc., 2019.374
[CWG+19] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning375 imbalanced datasets with label-distribution-aware margin loss. Advances in Neural376 Information Processing Systems, 32:1567–1578, 2019.377
[DN18] John Duchi and Hongseok Namkoong. Learning models with uniform performance via378 distributionally robust optimization. arXiv preprint arXiv:1810.08750, 2018.379
[GLP21] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In380 International Conference on Learning Representations, 2021.381
[GLSS18] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit382 bias in terms of optimization geometry. In Jennifer Dy and Andreas Krause, editors,383 Proceedings of the 35th International Conference on Machine Learning, volume 80 of384 Proceedings of Machine Learning Research, pages 1832–1841. PMLR, 10–15 Jul 2018.385
[HNSS18] Weihua Hu, Gang Niu, Issei Sato, and Masashi Sugiyama. Does distributionally robust386 supervised learning give robust classifiers? In International Conference on Machine387 Learning, pages 2029–2037. PMLR, 2018.388
[HS15] Dirk Hovy and Anders Søgaard. Tagging performance correlates with author age. In389 Proceedings of the 53rd annual meeting of the Association for Computational Linguistics390 and the 7th international joint conference on natural language processing (volume 2:391 Short papers), pages 483–488, 2015.392
[HSNL18] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fair-393 ness without demographics in repeated loss minimization. In Jennifer Dy and Andreas394 Krause, editors, International Conference on Machine Learning, volume 80 of Proceed-395 ings of Machine Learning Research, pages 1929–1938, Stockholmsmässan, Stockholm396 Sweden, 10–15 Jul 2018. PMLR.397
[JDST20] Ziwei Ji, Miroslav Dudík, Robert E. Schapire, and Matus Telgarsky. Gradient descent398 follows the regularization path for general losses. In Jacob Abernethy and Shivani399 Agarwal, editors, Proceedings of Thirty Third Conference on Learning Theory, volume400 125 of Proceedings of Machine Learning Research, pages 2109–2136. PMLR, 09–12401 Jul 2020.402
[JGH18] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Conver-403 gence and generalization in neural networks. In S. Bengio, H. Wallach, H. Larochelle,404 K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information405 Processing Systems, volume 31. Curran Associates, Inc., 2018.406
[KPOT21] Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, and Christos Thram-407 poulidis. Label-imbalanced and group-sensitive classification under overparameteriza-408 tion. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021.409
[KSM+21] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang,410 Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena411 Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque,412 Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea413 Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts. In Marina414 Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on415 Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages416 5637–5664. PMLR, 18–24 Jul 2021.417
[LXS+19] Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha418 Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve419 as linear models under gradient descent. Advances in neural information processing420 systems, 32:8572–8583, 2019.421
[LZT+21] Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, and Samet Oymak.422 Autobalance: Optimized loss functions for imbalanced data. In Thirty-Fifth Conference423 on Neural Information Processing Systems, 2021.424
[MJR+21] Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, An-425 dreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. In International426 Conference on Learning Representations, 2021.427
[Shi00] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting428 the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244,429 2000.430
[SHN+18] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro.431 The implicit bias of gradient descent on separable data. The Journal of Machine Learning432 Research, 19(1):2822–2878, 2018.433
[SKHL20] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distribution-434 ally robust neural networks for group shifts: On the importance of regularization for435 worst-case generalization. In International Conference on Learning Representations,436 2020.437
[SKL+22] Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen,438 Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne439 David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto,440 Sergey Levine, Chelsea Finn, and Percy Liang. Extending the WILDS benchmark for441 unsupervised adaptation. In International Conference on Learning Representations,442 2022.443
[SRKL20] Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of444 why overparameterization exacerbates spurious correlations. In Hal Daumé III and Aarti445 Singh, editors, Proceedings of the 37th International Conference on Machine Learning,446 volume 119 of Proceedings of Machine Learning Research, pages 8346–8356. PMLR,447 13–18 Jul 2020.448
[Tat17] Rachael Tatman. Gender and dialect bias in youtube’s automatic captions. In Proceed-449 ings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59,450 2017.451
[WCHH22] Ke Alexander Wang, Niladri Shekhar Chatterji, Saminul Haque, and Tatsunori452 Hashimoto. Is importance weighting incompatible with interpolating classifiers? In453 International Conference on Learning Representations, 2022.454
[WGS+22] Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre-Alvise Rebuffi, Ira Ktena, Krish-455 namurthy Dj Dvijotham, and Ali Taylan Cemgil. A fine-grained analysis on distribution456 shift. In International Conference on Learning Representations, 2022.457
[XDKR20] Ziyu Xu, Chen Dan, Justin Khim, and Pradeep Ravikumar. Class-weighted classification: Trade-offs and robust approaches. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 10544–10554. PMLR, 13–18 Jul 2020.
[XYR21] Da Xu, Yuting Ye, and Chuanwei Ruan. Understanding the role of importance weighting463 for deep learning. In International Conference on Learning Representations, 2021.464
[YCZC20] Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, and Wei-Lun Chao. Identifying and465 compensating for feature deviation in imbalanced deep learning. arXiv preprint466 arXiv:2001.01385, 2020.467
[ZDKR21] Runtian Zhai, Chen Dan, Zico Kolter, and Pradeep Ravikumar. Doro: Distributional468 and outlier robust optimization. In Marina Meila and Tong Zhang, editors, Proceedings469 of the 38th International Conference on Machine Learning, volume 139 of Proceedings470 of Machine Learning Research, pages 12345–12355. PMLR, 18–24 Jul 2021.471
[ZDS+21] Runtian Zhai, Chen Dan, Arun Suggala, J Zico Kolter, and Pradeep Kumar Raviku-472 mar. Boosted CVar classification. In Thirty-Fifth Conference on Neural Information473 Processing Systems, 2021.474
Checklist
1. For all authors...487 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s488 contributions and scope? [Yes]489 (b) Did you describe the limitations of your work? [Yes] See Section 6.2.490 (c) Did you discuss any potential negative societal impacts of your work? [No] Not491 relevant.492 (d) Have you read the ethics review guidelines and ensured that your paper conforms to493 them? [Yes]494 2. If you are including theoretical results...495
(a) Did you state the full set of assumptions of all theoretical results? [Yes]496 (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix D.497
3. If you ran experiments...498 (a) Did you include the code, data, and instructions needed to reproduce the main experi-499 mental results (either in the supplemental material or as a URL)? [Yes]500 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they501 were chosen)? [Yes]502 (c) Did you report error bars (e.g., with respect to the random seed after running experi-503 ments multiple times)? [No] The experiments are only for demonstration.504 (d) Did you include the total amount of compute and the type of resources used (e.g., type505 of GPUs, internal cluster, or cloud provider)? [Yes] See the code.506 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...507
(a) If your work uses existing assets, did you cite the creators? [N/A]508 (b) Did you mention the license of the assets? [N/A]509 (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]510
(d) Did you discuss whether and how consent was obtained from people whose data you’re512 using/curating? [N/A]513
(e) Did you discuss whether the data you are using/curating contains personally identifiable514 information or offensive content? [N/A]515
5. If you used crowdsourcing or conducted research with human subjects...516 (a) Did you include the full text of instructions given to participants and screenshots, if517 applicable? [N/A]518 (b) Did you describe any potential participant risks, with links to Institutional Review519 Board (IRB) approvals, if applicable? [N/A]520 (c) Did you include the estimated hourly wage paid to participants and the total amount521 spent on participant compensation? [N/A]522 | 1. What is the focus of the paper regarding distributional drift and Empirical risk minimization?
2. What are the strengths of the proposed approach, particularly in its ability to theoretically prove the shortcomings of popular methods?
3. What are the weaknesses of the paper, especially regarding the limitations of the theoretical analysis?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the applicability of the proposed method in real-world scenarios? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This manuscript demonstrates that popular approaches such as importance weighting and Distributionally Robust Optimization (DRO) variants cannot really improve over Empirical Risk Minimization (ERM) under distributional shift. To this end, the manuscript introduces Generalized Reweighting (GRW), of which the above popular algorithms are special cases. When overparameterized linear models or wide NN models are trained with GRW, the resulting model is very close to the ERM model. Moreover, it also demonstrates that a small regularization, one that does not greatly affect the empirical training accuracy, does not help.
Strengths And Weaknesses
Observing that prior works found that popular approaches aimed at addressing distributional shift do not significantly improve over ERM, the authors try to theoretically explain this empirical phenomenon. Moreover, based on the theoretical conclusions, the authors also suggest some potential approaches that could alleviate the distributional shift problem, which might fundamentally change the direction of research on distributionally robust methods.
Questions
The theoretical analysis is built on linear models. To extend it to neural networks, the manuscript uses existing neural tangent kernel (NTK) theory to treat networks as approximately linear models. It is known that there is a gap between NTK networks and practical networks: the width of NTK networks should be (near) infinite, NTK analyses typically cover simple fully-connected networks, and the optimization algorithm should be plain (stochastic) gradient descent. These conditions are rarely satisfied in real-world tasks. Moreover, Assumption 2 requires the first-order derivative of the activation function to be Lipschitz continuous; ReLU, the most common activation in modern networks, does not satisfy this assumption. In summary, these idealized assumptions limit the validity of the theoretical analysis.
Limitations
There is no potential negative societal impact for this manuscript. |
NIPS | Title
Understanding Why Generalized Reweighting Does Not Improve Over ERM
Abstract
Empirical risk minimization (ERM) is known to be non-robust in practice to distributional shift where the training and the test distributions are different. A suite of approaches, such as importance weighting, and variants of distributionally robust optimization (DRO), have been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms, as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to that obtained by ERM. We also show that adding small regularization which does not greatly affect the empirical training accuracy does not help. Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches.
1 Introduction
It has now been well established that empirical risk minimization (ERM) can empirically achieve high20 test performance on a variety of tasks, particularly with modern overparameterized models where the21 number of parameters is much larger than the number of training samples. This strong performance22 of ERM however has been shown to degrade under distributional shift, where the training and test23 distributions are different [HS15, BGO16, Tat17]. There are two broad categories of distribution24 shift: domain generalization where the test distribution contains new environments not in the training25 distribution like in domain adaptation, and subpopulation shift where the two distributions have the26 same set of subpopulations but their mixture weights differ like in algorithmic fairness applications.27
People have proposed various approaches to learn models that are robust to distributional shift. The28 most classical approach is importance weighting (IW) [Shi00], which reweights training samples; in29 the context of subpopulation shift these weights are typically set so that each subpopulation/group30 has the same overall weight in the training objective. The approach most widely used today is31 Distributional Robust Optimization (DRO) [DN18, HSNL18], in which we assume that the test32 distribution belongs to a certain set of distributions that are close to the training distribution (called33 the uncertainty set), and train the model on the worst distribution in that set. Many variants of DRO34 have been proposed and are used in practice [HNSS18, SKHL20, XDKR20, ZDKR21, ZDS+21].35
While these approaches have been developed for the express purpose of improving ERM for distri-36 bution shift, a line of recent work has empirically shown the negative result that when used to train37 overparameterized models, these methods do not improve over ERM. For IW, [BL19] observed that38 its effect under stochastic gradient descent (SGD) diminishes over training epochs, and finally does39 not improve over ERM. For variants of DRO, [SKHL20] found that these methods overfit very easily,40 i.e. their test performances will drop to the same low level as ERM after sufficiently many epochs if41 no regularization is applied. [GLP21, KSM+21] compared these methods with ERM on a number of42 real-world applications, and found that in most cases none of these methods improves over ERM.43
This line of empirical results has also been bolstered by some recent theoretical results. [SRKL20]44 constructed a synthetic dataset where a linear model trained with IW is provably not robust to45 subpopulation shift. [XYR21] further proved that under gradient descent (GD) with a sufficiently46 small learning rate, a linear classifier trained with either IW or ERM converges to the same max-47 margin classifier, and thus upon convergence, are no different. These previous theoretical results are48 limited to linear models and specific approaches such as IW where sample weights are fixed during49 training. They are not applicable to more complex models, and more general approaches where the50 sample weights could iteratively change, including most DRO variants.51
Towards placing the empirical results on a stronger theoretical footing, we define the class of52 generalized reweighting (GRW), which dynamically assigns weights to the training samples, and53 iteratively minimizes the weighted average of the sample losses. By allowing the weights to vary54 with iterations, we cover not just static importance weighting, but also DRO approaches outlined55 earlier; though of course, the GRW class is much broader than just these instances.56
In this work, we prove the comprehensive result that in both regression and classification, and for57 both overparameterized linear models and wide neural networks, the models learnt via any GRW58 approach and ERM are similar, in the sense that their implicit biases are (almost) equivalent. We note59 that extending the analysis from linear models to wide neural networks is non-trivial since it requires60 the result that wide neural networks can be approximated by their linearized counterparts to hold61 uniformly throughout the iterative process of GRW algorithms. Our results extend the analysis in62 [LXS+19], but as we show, the proof in the original paper had some flaws, and due to which we have63 to fix the proof by changing the network initialization (Eqn. (9), see Appendix E).64
Overall, the important takeaway is that distributionally robust generalization cannot be directly65 achieved by the broad class of GRW algorithms (which includes popular approaches such as impor-66 tance weighting and most DRO variants). Progress towards this important goal thus requires either67 going beyond GRW algorithms, or devising novel loss functions that are adapted to GRW approaches.68 In Section 6 we will discuss some promising future directions as well as the limitations of this work.69
2 Preliminaries
Let the input space be X ⊆ Rd and the output space be Y ⊆ R.1 We assume that X is a subset of the71 unit L2 ball of Rd, so that any x ∈ X satisfies ‖x‖2 ≤ 1. We have a training set {zi = (xi, yi)}ni=172 i.i.d. sampled from an underlying distribution P over X × Y . Denote X = (x1, · · · ,xn) ∈ Rd×n,73 and Y = (y1, · · · , yn) ∈ Rn. For any function g : X 7→ Rm, we overload notation and use74 g(X) = (g(x1), · · · , g(xn)) ∈ Rm×n (except when m = 1, g(X) is defined as a column vector).75 Let the loss function be ` : Y × Y → [0, 1]. ERM trains a model by minimizing its expected risk76 R(f ;P ) = Ez∼P [`(f(x), y)] via minimizing the empirical risk R̂(f) = 1n ∑n i=1 `(f(xi), yi).77
In distributional shift, the model is evaluated not on the training distribution P , but a different test78 distribution Ptest, so that we care about the expected risk R(f ;Ptest). A large family of methods79 designed for such distributional shift is distributionally robust optimization (DRO), which minimizes80 the expected risk over the worst-case distribution Q P 2 in a ball w.r.t. divergence D around the81 training distribution P . Specifically, DRO minimizes the expected DRO risk defined as:82
R_{D,\rho}(f;P) = \sup_{Q \ll P} \big\{ \mathbb{E}_{Q}[\ell(f(x), y)] : D(Q \,\|\, P) \le \rho \big\} \qquad (1)
for ρ > 0. Examples include CVaR, χ²-DRO [HSNL18], and DORO [ZDKR21], among others. [Footnote 1: Our results can be easily extended to the multi-class scenario (see Appendix B). Footnote 2: For distributions P and Q, "Q is absolutely continuous with respect to P", written Q ≪ P, means that for any event A, P(A) = 0 implies Q(A) = 0.]
A common category of distribution shift is known as subpopulation shift. Let the data domain contain84 K groups D1, · · · ,DK . The training distribution P is the distribution over all groups, and the test85 distribution Ptest is the distribution over one of the groups. Let Pk(z) = P (z | z ∈ Dk) be the86 conditional distribution over group k, then Ptest can be any one of P1, · · · , Pk. The goal is to train a87 model f that performs well over every group. There are two common ways to achieve this goal: one88 is minimizing the balanced empirical risk which is an unweighted average of the empirical risk over89 each group, and the other is minimizing the worst-group risk defined as90
R_{\max}(f;P) = \max_{k=1,\cdots,K} R(f;P_k) = \max_{k=1,\cdots,K} \mathbb{E}_{z\sim P}\big[\ell(f(x), y) \mid z \in \mathcal{D}_k\big] \qquad (2)
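To make these subpopulation-shift objectives concrete, the following short sketch (ours, not the authors' code; the function names and toy data are illustrative) computes the balanced risk and the worst-group risk of Eqn. (2) from per-sample losses and group labels.

```python
import numpy as np

def group_risks(losses, groups):
    """Average loss per group; `losses` and `groups` are length-n arrays."""
    return np.array([losses[groups == k].mean() for k in np.unique(groups)])

def balanced_risk(losses, groups):
    # Unweighted average of per-group risks (the importance-weighted objective).
    return group_risks(losses, groups).mean()

def worst_group_risk(losses, groups):
    # R_max in Eqn. (2): the largest per-group average loss.
    return group_risks(losses, groups).max()

# Toy usage with two imbalanced groups.
losses = np.array([0.10, 0.20, 0.15, 0.90])
groups = np.array([0, 0, 0, 1])
print(balanced_risk(losses, groups), worst_group_risk(losses, groups))
```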
3 Generalized Reweighting (GRW)
Various methods have been proposed towards learning models that are robust to distributional shift.92 In contrast to analyzing each of these individually, we instead consider a large class of what we call93 Generalized Reweighting (GRW) algorithms that includes the ones mentioned earlier, but potentially94 many others more. Loosely, GRW algorithms iteratively assign each sample a weight during training95 (that could vary with the iteration) and iteratively minimize the weighted average risk. Specifically, at96 iteration t, GRW assigns a weight q(t)i to sample zi, and minimizes the weighted empirical risk:97
\hat{R}_{q^{(t)}}(f) = \sum_{i=1}^{n} q_i^{(t)} \, \ell(f(x_i), y_i) \qquad (3)
where q^{(t)} = (q_1^{(t)}, \cdots, q_n^{(t)}) and q_1^{(t)} + \cdots + q_n^{(t)} = 1.
Static GRW assigns to each zi = (xi, yi) a fixed weight qi that does not change during training, i.e.99 q (t) i ≡ qi. A classical method is importance weighting [Shi00], where if zi ∈ Dk and the size of Dk100 is nk, then qi = (Knk)−1. Under importance weighting, (3) becomes the balanced empirical risk in101 which each group has the same weight. Note that ERM is also a special case of static GRW.102
On the other hand, in dynamic GRW, q(t) changes with t. For instance, any approach that iteratively103 upweights samples with high losses in order to help the model learn “hard” samples, such as DRO,104 is an instance of GRW. When estimating the population DRO risk RD,ρ(f ;P ) in Eqn. (1), if P105 is set to the empirical distribution over the training samples, then Q P implies that Q is also106 a distribution over the training samples. Thus, DRO methods belong to the broad class of GRW107 algorithms. There are two common ways to implement DRO. One uses Danskin’s theorem and108 chooses Q as the maximizer of EQ[`(f(x), y)] in each epoch. The other one formulates DRO as a109 bi-level optimization problem, where the lower level updates the model to minimize the expected risk110 over Q, and the upper level updates Q to maximize it. Both can be seen as instances of GRW. As one111 popular instance of the latter, Group DRO was proposed by [SKHL20] to minimize (2). Denote the112 empirical risk over group k by R̂k(f), and the model at time t by f (t). Group DRO iteratively sets113 q (t) i = g (t) k /nk for all zi ∈ Dk where g (t) k is the group weight that is updated as114
g_k^{(t)} \propto g_k^{(t-1)} \exp\big( \nu \hat{R}_k(f^{(t-1)}) \big) \qquad (\forall k = 1, \cdots, K) \qquad (4)
for some ν > 0, and then normalized so that q(t)1 + · · · + q (t) n = 1. [SKHL20] then showed (in115 their Proposition 2) that for convex settings, the Group DRO risk of iterates converges to the global116 minimum with the rate O(t−1/2) if ν is sufficiently small.117
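To illustrate how a dynamic GRW method fits the template of Eqns. (3)–(4), here is a minimal sketch of gradient descent on a weighted squared loss with Group DRO-style group weights. This is not the authors' implementation; the hyper-parameters, the squared loss, and the assumption that group labels are integers 0, …, K−1 are illustrative choices.

```python
import numpy as np

def group_dro_grw(X, y, groups, steps=1000, eta=0.01, nu=0.01):
    """GRW with Group DRO weights on a linear model with the squared loss.
    X: (n, d) inputs, y: (n,) targets, groups: (n,) integer labels in 0..K-1."""
    n, d = X.shape
    ks = np.unique(groups)
    theta = np.zeros(d)
    g = np.ones(len(ks)) / len(ks)                       # group weights g_k
    for _ in range(steps):
        residual = X @ theta - y
        group_risk = np.array([0.5 * np.mean(residual[groups == k] ** 2) for k in ks])
        g = g * np.exp(nu * group_risk)                  # exponentiated update, Eqn. (4)
        g = g / g.sum()
        q = np.array([g[k] / np.sum(groups == k) for k in groups])
        q = q / q.sum()                                  # per-sample weights q_i = g_k / n_k
        theta = theta - eta * X.T @ (q * residual)       # GD step on the weighted risk, Eqn. (3)
    return theta
```

Replacing the weight update with a fixed vector q recovers static GRW (importance weighting, or ERM with uniform q).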
4 Theoretical Results for Regression
In this section, we will study GRW for regression tasks that use the squared loss119
\ell(\hat{y}, y) = \tfrac{1}{2}(\hat{y} - y)^2. \qquad (5)
We will prove that for both linear models and sufficiently wide fully-connected neural networks, the120 implicit bias of GRW is equivalent to ERM, so that starting from the same initial point, GRW and121 ERM will converge to the same point when trained for an infinitely long time, which explains why122 GRW does not improve over ERM without regularization and early stopping. We will further show123 that while regularization can affect this implicit bias, it must be large enough to significantly lower124 the training performance, or the final model will still be similar to the unregularized ERM model.125
4.1 Linear Models
We first demonstrate our result on simple linear models to provide our readers with a key intuition;127 later, we will apply this same intuition to neural networks. This key intuition draws from results128 of [GLSS18]. Let the linear model be denoted by f(x) = 〈θ,x〉, where θ ∈ Rd. We consider the129 overparameterized setting where d > n. The weight update rule of GRW under GD is the following:130
\theta^{(t+1)} = \theta^{(t)} - \eta \sum_{i=1}^{n} q_i^{(t)} \nabla_\theta \ell(f^{(t)}(x_i), y_i) \qquad (6)
where η > 0 is the learning rate. For a linear model with the squared loss, the update rule is131
\theta^{(t+1)} = \theta^{(t)} - \eta \sum_{i=1}^{n} q_i^{(t)} x_i \big(f^{(t)}(x_i) - y_i\big) \qquad (7)
For this training scheme, we can prove that if the training error converges to zero, then the model132 converges to an interpolator θ∗ (s.t. ∀i, 〈θ∗,xi〉 = yi) independent of q(t)i (proofs in Appendix D):133 Theorem 1. If x1, · · · ,xn are linearly independent, then under the squared loss, for any GRW such134 that the empirical training risk R̂(f (t))→ 0 as t→∞, it holds that θ(t) converges to an interpolator135 θ∗ that only depends on θ(0) and x1, · · · ,xn, but does not depend on q(t)i .136
The proof is based on the following key intuition regarding the update rule (7): θ(t+1) − θ(t) is137 a linear combination of x1, · · · ,xn for all t, so θ(t) − θ(0) always lies in the linear subspace138 span{x1, · · · ,xn}, which is an n-dimensional linear subspace if x1, · · · ,xn are linearly independent.139 By Cramer’s rule, there is exactly one θ̃ in this subspace such that we get interpolation of all the140 data 〈θ̃ + θ(0),xi〉 = yi for all i ∈ {1, . . . , n}. In other words, the parameter θ∗ = θ̃ + θ(0) in this141 subspace that interpolates all the data is unique. Thus the proof would follow if we were to show that142 θ(t) − θ(0), which lies in the subspace, also converges to interpolating the data.143 We have essentially proved the following sobering result: the implicit bias of any GRW that achieves144 zero training error is equivalent to ERM, so GRW does not improve over ERM. While the various145 distributional shift methods discussed in the introduction have been shown to satisfy the precondition146 of convergence to zero training error with overparameterized models and linearly independent147 inputs [SKHL20], we provide the following theorem that shows this for the broad class of GRW148 methods. Specifically, we show this result for any GRW satisfying the following assumption with a149 sufficiently small learning rate:150
Assumption 1. There are constants q1, · · · , qn s.t. ∀i, q(t)i → qi as t→∞. And mini qi = q∗ > 0.151 Theorem 2. If x1, · · · ,xn are linearly independent, then there exists η0 > 0 such that for any152 GRW satisfying Assumption 1 with the squared loss, and any η ≤ η0, the empirical training risk153 R̂(f (t))→ 0 as t→∞.154
Finally, we use a simple experiment to demonstrate the correctness of this result. The experiment is155 conducted on a training set of six MNIST images, five of which are digit 0 and one is digit 1. We use156 a 784-dimensional linear model and run ERM, importance weighting and group DRO. The results are157 presented in Figure 1, and they show that the training loss of each method converges to 0, and the gap158 between the model weights of importance weighting, Group DRO and ERM converges to 0, meaning159 that all three model weights converge to the same point, whose L2 norm is about 0.63. Figure 1d also160 shows that the group weights in Group DRO empirically satisfy Assumption 1.161
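A self-contained analogue of this experiment (with synthetic Gaussian inputs instead of MNIST images; all names, dimensions, and hyper-parameters below are our own illustrative choices, not the authors' setup) compares GRW iterates against the closed-form interpolator that Theorem 1 predicts, starting from θ(0) = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 50                                   # overparameterized: d > n, inputs linearly independent
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

def train_grw(q, steps=50000, eta=0.02):
    """Gradient descent on sum_i q_i * 0.5 * (x_i^T theta - y_i)^2, Eqn. (7)."""
    theta = np.zeros(d)
    for _ in range(steps):
        r = X @ theta - y
        theta -= eta * X.T @ (q * r)
    return theta

q_erm = np.full(n, 1.0 / n)
q_iw = np.array([1 / (2 * 5)] * 5 + [1 / 2])   # importance weights: groups of size 5 and 1
q_rand = rng.uniform(0.5, 1.5, size=n); q_rand /= q_rand.sum()

# Unique interpolator in span{x_1, ..., x_n} for theta^(0) = 0 (Theorem 1).
theta_star = X.T @ np.linalg.solve(X @ X.T, y)
for q in (q_erm, q_iw, q_rand):
    print(np.linalg.norm(train_grw(q) - theta_star))   # all gaps are close to 0
```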
4.2 Wide Neural Networks (Wide NNs)
Now we study sufficiently wide fully-connected neural networks. We extend the analysis in [LXS+19]163 in the neural tangent kernel (NTK) regime [JGH18]. In particular we study the following network:164
h^{l+1} = \frac{W^l}{\sqrt{d_l}} x^l + \beta b^l \quad\text{and}\quad x^{l+1} = \sigma(h^{l+1}) \qquad (l = 0, \cdots, L) \qquad (8)
where σ is a non-linear activation function, W^l \in \mathbb{R}^{d_{l+1}\times d_l} and W^L \in \mathbb{R}^{1\times d_L}. Here d_0 = d. The parameter vector θ consists of W^0, \cdots, W^L and b^0, \cdots, b^L (θ is the concatenation of all flattened weights and biases). The final output is f(x) = h^{L+1}. And let the neural network be initialized as
\begin{cases} W^{l(0)}_{i,j} \sim \mathcal{N}(0,1) \\ b^{l(0)}_{j} \sim \mathcal{N}(0,1) \end{cases} (l = 0, \cdots, L-1) \quad\text{and}\quad \begin{cases} W^{L(0)}_{i,j} = 0 \\ b^{L(0)}_{j} \sim \mathcal{N}(0,1) \end{cases} \qquad (9)
We also need the following assumption on the wide neural network:168 Assumption 2. σ is differentiable everywhere. Both σ and its first-order derivative σ̇ are Lipschitz.3169
Difference from [JGH18]. Our initialization (9) differs from the original one in [JGH18] in the last170 (output) layer, where we use the zero initialization WL(0)i,j = 0 instead of the Gaussian initialization171
W L(0) i,j ∼ N (0, 1). This modification permits us to accurately approximate the neural network with172 its linearized counterpart (11), as we notice that the proofs in [LXS+19] (particularly the proofs of173 their Theorem 2.1 and their Lemma 1 in Appendix G) are flawed. In Appendix E we will explain174 what goes wrong in their proofs and how we manage to fix the proofs with our modification.175
Denote the neural network at time t by f^{(t)}(x) = f(x; \theta^{(t)}), which is parameterized by \theta^{(t)} \in \mathbb{R}^p where p is the number of parameters. We use the shorthand \nabla_\theta f^{(0)}(x) := \nabla_\theta f(x;\theta)\big|_{\theta=\theta^{(0)}}. The neural tangent kernel (NTK) of this model is \Theta^{(0)}(x,x') = \nabla_\theta f^{(0)}(x)^\top \nabla_\theta f^{(0)}(x'), and the Gram matrix is \Theta^{(0)} = \Theta^{(0)}(X,X) \in \mathbb{R}^{n\times n}. For this wide NN, we still have the following NTK theorem: Lemma 3. If σ is Lipschitz and d_l \to \infty for l = 1, \cdots, L sequentially, then \Theta^{(0)}(x,x') converges in probability to a non-degenerate deterministic limiting kernel \Theta(x,x').
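As a small numerical illustration of these objects (ours, not the authors' code), the sketch below forms the per-sample parameter gradients of a finite-width two-layer network and the resulting empirical Gram matrix Θ(0). Biases are omitted for brevity, the width and dimensions are arbitrary, and the zero output-layer initialization mirrors Eqn. (9).

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 5, 2048, 8                        # input dim, hidden width, number of samples
X = rng.normal(size=(n, d)) / np.sqrt(d)    # inputs inside the unit ball
W = rng.normal(size=(m, d))                 # hidden weights ~ N(0, 1)
v = np.zeros(m)                             # output layer initialized to zero, as in Eqn. (9)

def grad_f(x):
    """Gradient of f(x) = v^T tanh(W x) / sqrt(m) w.r.t. (W, v), flattened."""
    h = np.tanh(W @ x)
    dW = np.outer(v * (1 - h ** 2), x) / np.sqrt(m)
    dv = h / np.sqrt(m)
    return np.concatenate([dW.ravel(), dv])

G = np.stack([grad_f(x) for x in X])        # n x p matrix of per-sample gradients
Theta0 = G @ G.T                            # empirical NTK Gram matrix Theta^(0)(X, X)
print(np.linalg.eigvalsh(Theta0).min())     # lambda_min > 0 when the gradients are independent
```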
The kernel Gram matrix Θ = Θ(X,X) ∈ Rn×n is a positive semi-definite symmetric matrix.182 Denote its largest and smallest eigenvalues by λmax and λmin. Note that Θ is non-degenerate, so we183 can assume that λmin > 0 (which is almost surely true when dL n). Then we have:184 Theorem 4. Let f (t) be a wide fully-connected neural network that satisfies Assumption 2 and is185 trained by any GRW satisfying Assumption 1 with the squared loss. Let f (t)ERM be the same model186 trained by ERM from the same initial point. If d1 = · · · = dL = d̃, ∇θf (0)(x1), · · · ,∇θf (0)(xn)187 are linearly independent, and λmin > 0, then there exists a constant η1 > 0 such that: if η ≤ η15,188 then for any δ > 0, there exists D̃ > 0 such that as long as d̃ ≥ D̃, with probability at least (1− δ)189 over random initialization we have: for any test point x ∈ Rd such that ‖x‖2 ≤ 1, as d̃→∞,190
\limsup_{t\to\infty} \big| f^{(t)}(x) - f^{(t)}_{\mathrm{ERM}}(x) \big| = O(\tilde{d}^{-1/4}) \to 0 \qquad (10)
Note that for simplicity, in the theorem we only consider the case where d_1 = \cdots = d_L = \tilde{d} \to \infty, but in fact the result can be very easily extended to the case where d_l/d_1 \to \alpha_l for l = 2, \cdots, L for some constants \alpha_2, \cdots, \alpha_L, and d_1 \to \infty. Here we provide a proof sketch for this theorem. The key is to consider the linearized neural network of f^{(t)}(x):
f^{(t)}_{\mathrm{lin}}(x) = f^{(0)}(x) + \big\langle \theta^{(t)} - \theta^{(0)}, \nabla_\theta f^{(0)}(x) \big\rangle \qquad (11)
which is a linear model with features ∇θf (0)(x). Thus if ∇θf (0)(x1), · · · ,∇θf (0)(xn) are linearly195 independent, then the linearized NN converges to the unique interpolator. Then we show that the196
3f is Lipschitz if there exists a constant L > 0 such that for any x1,x2, |f(x1)− f(x2)| ≤ L ‖x1 − x2‖2. 4Non-degenerate means that Θ(x,x′) depends on x and x′ and is not a constant. 5For ease of understanding, later we will write this condition as “with a sufficiently small learning rate”.
wide neural network can be approximated by its linearized counterpart uniformly throughout training,197 which is considerably more subtle in our case due to the GRW dynamics. Here we prove that the gap198 is bounded by O(d̃−1/4), but in fact we can prove that it is bounded by O(d̃−1/2+ ) for any > 0:199
Lemma 5 (Approximation Theorem). For a wide fully-connected neural network f (t) satisfying200 Assumption 2 and is trained by any GRW satisfying Assumption 1 with the squared loss, let f (t)lin be its201 linearized neural network trained by the same GRW (i.e. q(t)i are the same for both networks for any202 i and t). Under the conditions of Theorem 4, with a sufficiently small learning rate, for any δ > 0,203 there exist constants D̃ > 0 and C > 0 such that as long as d̃ ≥ D̃, with probability at least (1− δ)204 over random initialization we have: for any test point x ∈ Rd such that ‖x‖2 ≤ 1,205
\sup_{t \ge 0} \big| f^{(t)}_{\mathrm{lin}}(x) - f^{(t)}(x) \big| \le C \tilde{d}^{-1/4} \qquad (12)
Theorem 4 shows that at any test point x within the unit ball, the gap between the outputs of wide NNs trained by GRW and ERM from the same initial point is arbitrarily close to 0. So we have shown that for regression, with both linear and wide NNs, GRW does not improve over ERM.
4.3 Wide Neural Networks, with L2 Regularization
Previous work such as [SKHL20] proposed to improve DRO algorithms by adding L2 penalty to the210 objective function. In this section, we thus study adding L2 regularization to GRW algorithms:211
\hat{R}^{\mu}_{q^{(t)}}(f) = \sum_{i=1}^{n} q_i^{(t)} \, \ell(f(x_i), y_i) + \frac{\mu}{2} \big\| \theta - \theta^{(0)} \big\|_2^2 \qquad (13)
From the outset, it is easy to see that under L2 regularization, GRW methods have different implicit biases than ERM. For example, when f is a linear model and \ell is convex and smooth, \hat{R}^{\mu}_{q^{(t)}}(f) with static GRW is a convex smooth objective function, so under GD with a sufficiently small learning rate the model will converge to the global minimizer (see Appendix D.1). Moreover, the global optimum \theta^* satisfies \nabla_\theta \hat{R}^{\mu}_{q^{(t)}}(f(x;\theta^*)) = 0, solving which yields \theta^* = \theta^{(0)} + (XQX^\top + \mu I)^{-1} X Q (Y - f^{(0)}(X)), which depends on Q = \mathrm{diag}(q_1, \cdots, q_n); so adding L2 regularization at least seems to yield different results from ERM (whether it improves over ERM might depend on q_1, \cdots, q_n).
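For the linear case this weight-dependence is easy to verify numerically. The sketch below (our own illustrative code, with θ(0) = 0 so that f(0)(X) = 0) evaluates the closed-form minimizer of Eqn. (13) for two different weight vectors and measures how far apart the solutions are.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, mu = 30, 6, 0.1
X = rng.normal(size=(d, n))          # columns are samples, matching the paper's notation
Y = rng.normal(size=n)
theta0 = np.zeros(d)                 # theta^(0) = 0, so f^(0)(X) = 0 for the linear model

def regularized_grw_optimum(q):
    """theta* = theta0 + (X Q X^T + mu I)^{-1} X Q (Y - X^T theta0)."""
    Q = np.diag(q)
    A = X @ Q @ X.T + mu * np.eye(d)
    return theta0 + np.linalg.solve(A, X @ Q @ (Y - X.T @ theta0))

q_uniform = np.full(n, 1 / n)
q_skewed = np.array([0.5, 0.1, 0.1, 0.1, 0.1, 0.1])
print(np.linalg.norm(regularized_grw_optimum(q_uniform) - regularized_grw_optimum(q_skewed)))
# Nonzero for mu = 0.1; the gap shrinks toward 0 as mu -> 0, i.e. as the training risk vanishes.
```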
Theorem 6. Suppose there exists M0 > 0 s.t. ∥∥∇θf (0)(x)∥∥2 ≤M0 for all ‖x‖2 ≤ 1. If λmin > 0223 and µ > 0, then for a wide NN satisfying Assumption 2, and any GRW minimizing the squared loss224 with a sufficiently small learning rate η, if d1 = d2 = · · · = dL = d̃, ∇θf (0)(x1), · · · ,∇θf (0)(xn)225 are linearly independent, and the empirical training risk of f (t)reg satisfies226
lim sup t→∞
R̂(f (t)reg ) < (14)
for some > 0, then with a sufficiently small learning rate, as d̃→∞, with probability close to 1227 over random initialization, for any x such that ‖x‖2 ≤ 1 we have228
lim sup t→∞ ∣∣∣f (t)reg (x)− f (t)ERM(x)∣∣∣ = O(d̃−1/4 +√ )→ O(√ ) (15) where f (t)reg is trained by regularized GRW and f (t) ERM by unregularized ERM from same initial points.229
The proof again starts from analyzing linearized neural networks, and showing that regularization230 does not help there (Appendix D.4.2). Then, we need to prove a new approximation theorem for L2231 regularized GRW connecting wide NNs to their linearized counterparts uniformly through the GRW232 training process (Appendix D.4.1). Note that with regularization, we no longer need Assumption233 1 to prove the new approximation theorem, because previously Assumption 1 is used to prove the234 convergence of GRW, but with regularization GRW naturally converges.235
Theorem 6 shows that if the training error can go below , then the gap between the outputs of the236 two models on any test point x within the unit ball will be at most O( √ ). Thus, if is very small,237 regularized GRW yields a very similar model to unregularized ERM, and thus makes improvement.238
To empirically demonstrate this result, we run the same experiment as in Section 4.1 but with L2239 regularization. The results are presented in Figure 2. We can see that when the regularization is small,240 the training losses still converge to 0, and the three model weights still converge to the same point.241 On the contrary, with a large regularization, the training loss does not converge to 0, and the three242 model weights no longer converge to the same point. This shows that the regularization must be large243 enough to lower the training performance in order to make a significant difference to the implicit bias.244
5 Theoretical Results for Classification245
Now we consider classification where Y = {+1,−1}. The big difference is that classification losses246 don’t have finite minimizers. A classification loss converging to zero means that the model weight247 “explodes” to infinity instead of converging to a finite point. We focus on the canonical logistic loss:248
\ell(\hat{y}, y) = \log\big(1 + \exp(-\hat{y} y)\big) \qquad (16)
5.1 Linear Models249
We first consider training the linear model f(x) = 〈θ,x〉 with GRW under gradient descent with the250 logistic loss. As noted earlier, in this setting, [BL19] made the empirical observation that importance251 weighting does not improve over ERM. Then, [XYR21] proved that for importance weighting252 algorithms, as t→∞, ‖θ(t)‖2 →∞ and θ(t)/‖θ(t)‖2 converges to a unit vector that does not depend253 on the sample weights, so it does not improve over ERM. To extend this theoretical result to the broad254 class of GRW algorithms, we will prove two results. First, in Theorem 7 we will show that under the255 logistic loss, any GRW algorithm satisfying the following weaker assumption:256
Assumption 3. For all i, lim inft→∞ q (t) i > 0,257
if the training error converges to 0, and the direction of the model weight converges to a fixed unit258 vector, then this unit vector must be the max-margin classifier defined as259
θ̂MM = arg max θ:‖θ‖2=1
{ min
i=1,··· ,n yi · 〈θ,xi〉
} (17)
Second, Theorem 8 shows that for any GRW satisfying Assumption 1, the training error converges to260 0 and the direction of the model weight converges, so it does not improve over ERM.261 Theorem 7. If x1, · · · ,xn are linearly independent, then for the logistic loss, we have: for any262 GRW satisfying Assumption 3, if as t→∞ the empirical training risk R̂(f (t)) converges to 0 and263 θ(t)/‖θ(t)‖2 → u for some unit vector u, then u = θ̂MM.264
This result is an extension of [SHN+18]. Note that θ̂MM does not depend on q (t) i , so this result shows265 that the sample weights have no effect on the implicit bias. Thus, for any GRW method that only266 satisfies the weak Assumption 3, as long as the training error converges to 0 and the model weight267 direction converges, GRW does not improve over ERM. We next show that any GRW satisfying268 Assumption 1 does have its model weight direction converge, and its training error converge to 0.269 Theorem 8. For any loss ` that is convex, L-smooth in ŷ and strictly monotonically decreasing to270 zero as yŷ → +∞, and GRW satisfying Assumption 1, denote F (θ) = ∑n i=1 qi`(〈θ,xi〉, yi). If271 x1, · · · ,xn are linearly independent, then with a sufficiently small learning rate η, we have:272
F (θ(t))→ 0 as t→∞.(i) ∥∥θ(t)∥∥
2 →∞ as t→∞.(ii)273
Let θR = arg minθ{F (θ) : ‖θ‖2 ≤ R}. θR is unique for any R such that min‖θ‖2≤R F (θ) < mini qi`(0, yi). And if limR→∞ θRR exists, then limt→∞ θ(t)
‖θ(t)‖ 2
also exists and they are equal.
(iii)274
This result is an extension of Theorem 1 of [JDST20]. For the logistic loss, it is easy to show that275 it satisfies the conditions of the above theorem and limR→∞ θRR = θ̂MM. Thus, Theorems 8 and 7276 together imply that all GRW satisfying Assumption 1 (including ERM) have the same implicit bias277 (see Appendix D.5.3). We also have empirical verification for these results (see Appendix C).278
Remark. It is impossible to extend these results to wide NNs like Theorem 4 because for a neural279 network, if ‖θ(t)‖2 goes to infinity, then ‖∇θf‖2 will also go to infinity. However, for a linear model,280 the gradient is a constant. Consequently, the gap between the neural networks and its linearized281 counterpart will “explode” under gradient descent, so there can be no approximation theorem like282 Lemma 5 that can connect wide NNs to their linearized counterparts. Thus, we consider regularized283 GRW, for which θ(t) converges to a finite point and there is an approximation theorem.284
5.2 Wide Neural Networks, with L2 Regularization
Consider minimizing the regularized weighted empirical risk (13) with ` being the logistic loss. As in286 the regression case, with L2 regularization, GRW methods have different implicit biases than ERM287 for the same reasons as in Section 4.3. And similarly, we can show that in order for GRW methods to288 be sufficiently different from ERM, the regularization needs to be large enough to significantly lower289 the training performance. Specifically, in the following theorem we show that if the regularization290 is too small to lower the training performance, then a wide neural network trained with regularized291 GRW and the logistic loss will still be very close to the max-margin linearized neural network:292
f_{\mathrm{MM}}(x) = \big\langle \hat{\theta}_{\mathrm{MM}}, \nabla_\theta f^{(0)}(x) \big\rangle \quad\text{where}\quad \hat{\theta}_{\mathrm{MM}} = \arg\max_{\|\theta\|_2 = 1} \Big\{ \min_{i=1,\cdots,n} y_i \cdot \big\langle \theta, \nabla_\theta f^{(0)}(x_i) \big\rangle \Big\} \qquad (18)
Note that fMM does not depend on q (t) i . Moreover, using the result in the previous section we can293 show that a linearized neural network trained with unregularized ERM will converge to fMM:294
Theorem 9. Suppose there exists M0 > 0 such that ∥∥∇θf (0)(x)∥∥2 ≤M0 for all test point x. For a295 wide NN satisfying Assumption 2, and for any GRW satisfying Assumption 1 with the logistic loss,296 if d1 = d2 = · · · = dL = d̃ and ∇θf (0)(x1), · · · ,∇θf (0)(xn) are linearly independent and the297 learning rate is sufficiently small, then for any δ > 0 there exists a constant C > 0 such that: with298 probability at least (1 − δ) over random initialization, as d̃ → ∞ we have: for any ∈ (0, 14 ), if299 the empirical training error satisfies lim supt→∞ R̂(f (t) reg ) < , then for any test point x such that300 |fMM(x)| > C · (− log 2 )−1/2, f (t)reg (x) has the same sign as fMM(x) when t is sufficiently large.301
This result says that at any test point x on which the max-margin linear classifier classifies with a302 margin of Ω((− log 2 )−1/2), the neural network has the same prediction. And as decreases, the303 confidence threshold also becomes lower. Similar to Theorem 6, this theorem provides the scaling of304 the gap between the regularized GRW model and the unregularized ERM model w.r.t. .305
This result justifies the empirical observation in [SKHL20] that with large regularization, some GRW306 algorithms can maintain a high worst-group test performance, with the cost of suffering a significant307 drop in training accuracy. On the other hand, if the regularization is small and the model can achieve308 nearly perfect training accuracy, then its worst-group test performance will still significantly drop.309
6 Discussion
6.1 Distributionally Robust Generalization and Future Directions
A large body of prior work focused on distributionally robust optimization, but we show that these312 methods have (almost) equivalent implicit biases as ERM. In other words, distributionally robust313 optimization (DRO) does not necessarily have better distributionally robust generalization (DRG).314
Therefore, we argue that it is necessary to design principled ways to improve DRG, which is what315 people really want in the first place. Here we discuss three promising approaches to improving DRG.316
The first approach is data augmentation and pretraining on large datasets. Our theoretical findings317 suggest that the implicit bias of GRW is determined by the training samples and the initial point, but318 not the sample weights. Thus, to improve DRG, we can either obtain more training samples, or start319 from a better initial point, as demonstrated in two recent papers [WGS+22, SKL+22].320
The second approach (for classification) is to go beyond the class of (iterative) sample reweighting321 based GRW algorithms, for instance via logit adjustment [MJR+21], which makes a classifier have322 larger margins on smaller groups to improve its generalization on smaller groups. An early approach323 by [CWG+19] proposed to add an O(n−1/4k ) additive adjustment term to the logits output by the324 classifier. Following this spirit, [MJR+21] proposed the LA-loss which also adds an additive adjust-325 ment term to the logits. [YCZC20] proposed the CDT-loss which adds a multiplicative adjustment326 term to the logits by dividing the logits of different classes with different temperatures. [KPOT21]327 proposed the VS-loss which includes both additive and multiplicative adjustment terms, and they328 showed that only the multiplicative adjustment term affects the implicit bias, while the additive term329 only affects optimization, a fact that can be easily derived from our Theorem 8. Finally, [LZT+21]330 proposed AutoBalance which optimizes the adjustment terms with a bi-level optimization framework.331
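As a rough illustration of the additive and multiplicative adjustments discussed above, the sketch below combines both kinds of terms in one generic adjusted cross-entropy. It is a schematic of the general idea, not the exact losses of the cited papers; the function names and default values are our own.

```python
import numpy as np

def adjusted_logits(logits, class_priors, tau_add=1.0, temps=None):
    """Additive adjustment shifts logits by tau * log(prior); a multiplicative
    adjustment divides logits by per-class temperatures."""
    z = logits + tau_add * np.log(class_priors)   # additive term (LA-style)
    if temps is not None:
        z = z / temps                             # multiplicative term (CDT/VS-style)
    return z

def adjusted_cross_entropy(logits, label, class_priors, **kw):
    z = adjusted_logits(logits, class_priors, **kw)
    return -z[label] + np.log(np.exp(z).sum())

# Usage: a minority-class example (prior 0.1) is penalized more strongly.
print(adjusted_cross_entropy(np.array([2.0, -1.0]), 1, np.array([0.9, 0.1])))
```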
The third approach is to stay within the class of GRW algorithms, but to change the classifica-332 tion/regression loss function to be suited to GRW. A recent paper [WCHH22] showed that for linear333 classifiers, one can make the implicit bias of GRW dependent on the sample weights by replacing the334 exponentially-tailed logistic loss with the following polynomially-tailed loss:335
\ell_{\alpha,\beta}(\hat{y}, y) = \begin{cases} \ell_{\mathrm{left}}(\hat{y} y) & \text{if } \hat{y} y < \beta \\ \dfrac{1}{[\hat{y} y - (\beta - 1)]^{\alpha}} & \text{if } \hat{y} y \ge \beta \end{cases} \qquad (19)
And this result can be extended to GRW satisfying Assumption 1 using our Theorem 8. The reason336 why loss (19) works is that it changes limR→∞ θRR , and the new limit depends on the sample weights.337
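A direct transcription of Eqn. (19) is sketched below. The specific left part used here (a shifted logistic tail, matched to the right branch at the margin β) is our own assumption made only for concreteness; [WCHH22] leave the left part unspecified beyond requiring a suitable decreasing continuation.

```python
import numpy as np

def poly_tailed_loss(y_hat, y, alpha=1.0, beta=1.0):
    """Polynomially-tailed loss of Eqn. (19); margin m = y_hat * y."""
    m = np.asarray(y_hat * y, dtype=float)
    # Right branch: polynomial tail 1 / (m - (beta - 1))^alpha for m >= beta.
    right = 1.0 / np.clip(m - (beta - 1.0), 1e-12, None) ** alpha
    # Left branch: one smooth decreasing choice that equals the right branch at m = beta.
    left = 1.0 - np.log(2.0) + np.log1p(np.exp(-(m - beta)))
    return np.where(m >= beta, right, left)

print(poly_tailed_loss(np.array([3.0, -0.5]), np.array([1.0, 1.0])))
```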
6.2 Limitations
Like most theory papers, our work makes some strong assumptions. The two main assumptions are:339
(i) The model is a linear model or a sufficiently wide fully-connected neural network.340 (ii) The model is trained for sufficiently long time, i.e. without early stopping.341
Regarding (i), [COB19] argued that NTK neural networks fall in the “lazy training” regime and342 results might not be transferable to general neural networks. However, this class of neural networks343 has been widely studied in recent years and has provided considerable insights into the behavior344 of general neural networks, which is hard to analyze otherwise. Regarding (ii), in some easy tasks,345 when early stopping is applied, existing algorithms for distributional shift can do better than ERM346 [SKHL20]. However, as demonstrated in [GLP21, KSM+21], in real applications these methods still347 cannot significantly improve over ERM even with early stopping, so early stopping is not the ultimate348 universal solution. Thus, though inevitably our results rely on some strong assumptions, we believe349 that they provide important insights into the problems of existing methods and directions for future350 work, which are significant contributions to the study of distributional shift problems.351
7 Conclusion
In this work, we posit a broad class of what we call Generalized Reweighting (GRW) algorithms that353 include popular approaches such as importance weighting, and Distributionally Robust Optimization354 (DRO) variants, that were designed towards the task of learning models that are robust to distributional355 shift. We show that when used to train overparameterized linear models or wide NN models, even this356 very broad class of GRW algorithms does not improve over ERM, because they have the same implicit357 biases. We also showed that regularization does not help if it is not large enough to significantly358 lower the average training performance. Our results thus suggest to make progress towards learning359 models that are robust to distributional shift, we have to either go beyond this broad class of GRW360 algorithms, or design new losses specifically targeted to this class.361
References
[BGO16] Su Lin Blodgett, Lisa Green, and Brendan O'Connor. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1119–1130, Austin, Texas, November 2016. Association for Computational Linguistics.
[BL19] Jonathon Byrd and Zachary Lipton. What is the effect of importance weighting in deep367 learning? In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of368 the 36th International Conference on Machine Learning, volume 97 of Proceedings of369 Machine Learning Research, pages 872–881. PMLR, 09–15 Jun 2019.370
[COB19] Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable371 programming. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox,372 and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32.373 Curran Associates, Inc., 2019.374
[CWG+19] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning375 imbalanced datasets with label-distribution-aware margin loss. Advances in Neural376 Information Processing Systems, 32:1567–1578, 2019.377
[DN18] John Duchi and Hongseok Namkoong. Learning models with uniform performance via378 distributionally robust optimization. arXiv preprint arXiv:1810.08750, 2018.379
[GLP21] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In380 International Conference on Learning Representations, 2021.381
[GLSS18] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit382 bias in terms of optimization geometry. In Jennifer Dy and Andreas Krause, editors,383 Proceedings of the 35th International Conference on Machine Learning, volume 80 of384 Proceedings of Machine Learning Research, pages 1832–1841. PMLR, 10–15 Jul 2018.385
[HNSS18] Weihua Hu, Gang Niu, Issei Sato, and Masashi Sugiyama. Does distributionally robust386 supervised learning give robust classifiers? In International Conference on Machine387 Learning, pages 2029–2037. PMLR, 2018.388
[HS15] Dirk Hovy and Anders Søgaard. Tagging performance correlates with author age. In389 Proceedings of the 53rd annual meeting of the Association for Computational Linguistics390 and the 7th international joint conference on natural language processing (volume 2:391 Short papers), pages 483–488, 2015.392
[HSNL18] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fair-393 ness without demographics in repeated loss minimization. In Jennifer Dy and Andreas394 Krause, editors, International Conference on Machine Learning, volume 80 of Proceed-395 ings of Machine Learning Research, pages 1929–1938, Stockholmsmässan, Stockholm396 Sweden, 10–15 Jul 2018. PMLR.397
[JDST20] Ziwei Ji, Miroslav Dudík, Robert E. Schapire, and Matus Telgarsky. Gradient descent398 follows the regularization path for general losses. In Jacob Abernethy and Shivani399 Agarwal, editors, Proceedings of Thirty Third Conference on Learning Theory, volume400 125 of Proceedings of Machine Learning Research, pages 2109–2136. PMLR, 09–12401 Jul 2020.402
[JGH18] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Conver-403 gence and generalization in neural networks. In S. Bengio, H. Wallach, H. Larochelle,404 K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information405 Processing Systems, volume 31. Curran Associates, Inc., 2018.406
[KPOT21] Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, and Christos Thram-407 poulidis. Label-imbalanced and group-sensitive classification under overparameteriza-408 tion. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021.409
[KSM+21] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang,410 Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena411 Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque,412 Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea413 Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts. In Marina414 Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on415 Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages416 5637–5664. PMLR, 18–24 Jul 2021.417
[LXS+19] Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha418 Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve419 as linear models under gradient descent. Advances in neural information processing420 systems, 32:8572–8583, 2019.421
[LZT+21] Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, and Samet Oymak.422 Autobalance: Optimized loss functions for imbalanced data. In Thirty-Fifth Conference423 on Neural Information Processing Systems, 2021.424
[MJR+21] Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, An-425 dreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. In International426 Conference on Learning Representations, 2021.427
[Shi00] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting428 the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244,429 2000.430
[SHN+18] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro.431 The implicit bias of gradient descent on separable data. The Journal of Machine Learning432 Research, 19(1):2822–2878, 2018.433
[SKHL20] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distribution-434 ally robust neural networks for group shifts: On the importance of regularization for435 worst-case generalization. In International Conference on Learning Representations,436 2020.437
[SKL+22] Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen,438 Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne439 David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto,440 Sergey Levine, Chelsea Finn, and Percy Liang. Extending the WILDS benchmark for441 unsupervised adaptation. In International Conference on Learning Representations,442 2022.443
[SRKL20] Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of444 why overparameterization exacerbates spurious correlations. In Hal Daumé III and Aarti445 Singh, editors, Proceedings of the 37th International Conference on Machine Learning,446 volume 119 of Proceedings of Machine Learning Research, pages 8346–8356. PMLR,447 13–18 Jul 2020.448
[Tat17] Rachael Tatman. Gender and dialect bias in youtube’s automatic captions. In Proceed-449 ings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59,450 2017.451
[WCHH22] Ke Alexander Wang, Niladri Shekhar Chatterji, Saminul Haque, and Tatsunori452 Hashimoto. Is importance weighting incompatible with interpolating classifiers? In453 International Conference on Learning Representations, 2022.454
[WGS+22] Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre-Alvise Rebuffi, Ira Ktena, Krish-455 namurthy Dj Dvijotham, and Ali Taylan Cemgil. A fine-grained analysis on distribution456 shift. In International Conference on Learning Representations, 2022.457
[XDKR20] Ziyu Xu, Chen Dan, Justin Khim, and Pradeep Ravikumar. Class-weighted classification: Trade-offs and robust approaches. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 10544–10554. PMLR, 13–18 Jul 2020.
[XYR21] Da Xu, Yuting Ye, and Chuanwei Ruan. Understanding the role of importance weighting463 for deep learning. In International Conference on Learning Representations, 2021.464
[YCZC20] Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, and Wei-Lun Chao. Identifying and465 compensating for feature deviation in imbalanced deep learning. arXiv preprint466 arXiv:2001.01385, 2020.467
[ZDKR21] Runtian Zhai, Chen Dan, Zico Kolter, and Pradeep Ravikumar. Doro: Distributional468 and outlier robust optimization. In Marina Meila and Tong Zhang, editors, Proceedings469 of the 38th International Conference on Machine Learning, volume 139 of Proceedings470 of Machine Learning Research, pages 12345–12355. PMLR, 18–24 Jul 2021.471
[ZDS+21] Runtian Zhai, Chen Dan, Arun Suggala, J Zico Kolter, and Pradeep Kumar Raviku-472 mar. Boosted CVar classification. In Thirty-Fifth Conference on Neural Information473 Processing Systems, 2021.474
Checklist
1. For all authors...
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
   (b) Did you describe the limitations of your work? [Yes] See Section 6.2.
   (c) Did you discuss any potential negative societal impacts of your work? [No] Not relevant.
   (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
   (a) Did you state the full set of assumptions of all theoretical results? [Yes]
   (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix D.
3. If you ran experiments...
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] The experiments are only for demonstration.
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See the code.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
   (a) If your work uses existing assets, did you cite the creators? [N/A]
   (b) Did you mention the license of the assets? [N/A]
   (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

1. What is the main contribution of the paper regarding understanding the performance of generalized reweighting and ERM methods?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical investigation and limitations?
3. Do you have any concerns about the results being overclaimed or neglecting essential factors like regularization and early stopping?
4. Can you explain the uniqueness of the interpolator and its properties?
5. How does the paper differentiate itself from previous works considering the linear model case, and how does it bridge the gap between theory and practice?
6. Are there any plans to conduct generalization analyses or additional experiments to support the theoretical findings?

Summary Of The Paper
As a theoretical work, this paper aims to understand why generalized reweighting does not improve over ERM in the setting of distribution shift. Specifically, it considers overparameterized linear models and wide neural networks under gradient descent, and proves that generalized reweighting and ERM methods converge to the same solution via the implicit bias of the gradient descent algorithm. It then argues that these two method families should therefore share the same performance empirically. Technically, it mainly utilizes neural tangent kernel theory for the wide neural networks.
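To make this claim concrete, the following is a minimal numerical sketch of my own (synthetic data, an arbitrary static reweighting, and a standard NumPy setup are assumed; it is not an experiment from the paper): reweighted and unweighted gradient descent on an overparameterized linear regression end at the same interpolating solution.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 6, 50                          # overparameterized: d > n
    X = rng.normal(size=(n, d))           # rows are the training inputs x_i
    y = rng.normal(size=n)
    theta0 = rng.normal(size=d)           # shared initialization

    def run_gd(q, steps=50000, lr=0.02):
        # gradient descent on the weighted squared loss  sum_i q_i (x_i . theta - y_i)^2 / 2
        theta = theta0.copy()
        for _ in range(steps):
            resid = X @ theta - y
            theta -= lr * X.T @ (q * resid)
        return theta

    q_erm = np.full(n, 1.0 / n)                 # ERM: uniform weights
    q_grw = rng.uniform(0.5, 2.0, size=n)       # an arbitrary static reweighting
    q_grw /= q_grw.sum()

    theta_erm, theta_grw = run_gd(q_erm), run_gd(q_grw)
    print(np.abs(X @ theta_erm - y).max(), np.abs(X @ theta_grw - y).max())  # both ~0: interpolation
    print(np.linalg.norm(theta_erm - theta_grw))                             # ~0: same solution

Both runs drive the training loss to zero and return (numerically) the same parameter vector, which is the behaviour the paper formalizes for linear models and, via linearization, for wide networks.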
Strengths And Weaknesses
Strengths:
This work is motivated by the empirical observation that reweighting methods often show no advantage over ERM under distribution shift. In my view, its theoretical investigation is valuable.
The claim is partially supported by formal theoretical results.
Weaknesses:
The considered setting is rather limited, as previous work has already studied the linear model case. Besides, although this paper covers wide neural networks via the connection between kernel methods and neural networks (i.e., neural tangent kernel theory), there is still a large gap between this regime and practice.
The results are not surprising or interesting, as mentioned in point 1. The key reason why generalized reweighting and ERM converge to the same solution is the implicit bias of the optimization algorithm (i.e., gradient descent in this paper) for overparameterized models. However, this paper does not highlight the key effects of the overparameterized model case. Thus, I am concerned that the authors may be overclaiming the results.
In practice, regularization and early stopping are usually used to improve the generalization performance. However, this paper neglects them.
Questions
The authors mentioned that the reweighting and ERM algorithms converge to the unique interpolator. But which one? Does the interpolator have some special property?
Although this paper mainly considers the distribution shift problem, I do not see its key difference from the classical setting. Please explain in more detail why ERM is not suitable for this case.
When theoretically comparing the empirical performance of two or more learning algorithms, a generalization analysis is preferred. While this paper focuses on optimization properties via the implicit bias of gradient descent, implicit bias can also offer insight into generalization. Thus, what is the generalization performance of the learning algorithms considered in this paper?
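Regarding the interpolator question above, one way to make it concrete (a sketch in the paper's notation, with X = (x1, · · · , xn) ∈ Rd×n having linearly independent columns and Y ∈ Rn; this closed form is my own unpacking of the Theorem 1 argument, not a statement quoted from the paper or the review): every GRW or ERM iterate stays in θ(0) + span{x1, · · · , xn}, so the common limit is the unique interpolator in that affine subspace,

    θ∗ = θ(0) + X(X⊤X)−1(Y − X⊤θ(0)) = argmin{ ‖θ − θ(0)‖2 : X⊤θ = Y },

i.e. the interpolator closest to the initialization in Euclidean distance (for θ(0) = 0, the familiar minimum-norm interpolator), which would be the special property in question.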
Limitations
The authors have mentioned some limitations in this paper, but I still have the following suggestions.
A generalization analysis could provide more insight into the empirical performance.
More experiments in real settings could support the theoretical results.
1. What is the focus of the paper regarding Generalized Reweighting (GRW) and its impact on distribution shift?
2. What are the strengths of the paper, particularly in terms of its theoretical evidence and explanations?
3. What are the weaknesses of the paper, such as the lack of concrete algorithms and the asymptotic behavior of wide neural networks?
4. Do you have any questions regarding the paper's results with regularization and its interpretation?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper shows that Generalized Reweighting (GRW), which dynamically weights each loss term of the empirical risk when training a model to be robust to distribution shift, has almost no impact on the result of the optimization compared with ordinary Empirical Risk Minimization (ERM), for regression and classification with linear models or wide neural networks. This implies that despite being a popular approach, GRW fails to improve over ERM in the cases that the authors consider. The authors also prove that even with regularization that compromises the training loss by ε, GRW only brings a change of O(ε) to the minimizer compared to ERM. Their results suggest that learning robustly to distribution shift may need approaches other than GRW.
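As a quick numerical check of the regularization statement (again a synthetic sketch of my own, assuming NumPy; the data, sample weights, and ridge levels below are arbitrary, and the closed form is the linear-model minimizer of the paper's weighted L2-regularized objective):

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 6, 50
    X = rng.normal(size=(d, n))            # columns are the training inputs x_i
    Y = rng.normal(size=n)
    theta0 = np.zeros(d)

    q = rng.uniform(0.5, 2.0, size=n)
    q /= q.sum()
    Q = np.diag(q)

    # Unregularized ERM limit: the interpolator closest to theta0.
    theta_erm = theta0 + X @ np.linalg.solve(X.T @ X, Y - X.T @ theta0)

    for mu in [1.0, 1e-2, 1e-4]:
        # closed-form minimizer of  sum_i q_i (x_i . theta - y_i)^2 / 2 + (mu/2) ||theta - theta0||^2
        theta_reg = theta0 + np.linalg.solve(X @ Q @ X.T + mu * np.eye(d),
                                             X @ Q @ (Y - X.T @ theta0))
        train_rmse = np.sqrt(np.mean((X.T @ theta_reg - Y) ** 2))
        gap = np.linalg.norm(theta_reg - theta_erm)
        print(f"mu={mu:g}: train RMSE={train_rmse:.4f}, distance to ERM solution={gap:.4f}")

As mu shrinks, the training error of the regularized, reweighted model approaches zero and so does its distance to the unregularized ERM interpolator, matching the reviewed claim that regularization must be large enough to hurt training performance before it meaningfully changes the learned model.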
Strengths And Weaknesses
Strengths
The paper is well written and provides intuitive explanations even for the technical parts.
The paper provides theoretical evidence supporting previously reported empirical observations for failures in training distributionally robust models with the weighting approach.
While existing theoretical work only handles linear models, the paper considers wide neural networks.
As a technical contribution, the authors' analysis corrects and extends that of Lee et al. (2019).
Although the main results are negative ones for the GRW approach, the paper discusses implications about other possible approaches.
Weaknesses
The paper does not provide concrete algorithms that can avoid its negative results.
The paper analyzes the asymptotic behavior of wide neural networks when the widths go to infinity, which may not reflect the behavior of neural networks commonly used in reality.
The results with regularization are hard to interpret, because the paper does not analyze how much change we need for truly robust training, and we do not know whether the O(ε) impact is insufficient or not.
Questions
[HNSS18] also provides some theoretical negative results similar to those in this paper. Could the authors discuss the difference?
Do the authors have any thoughts about how much difference we should expect in the result of optimization, f(t), from that of ERM in order to be distributionally robust? If we only need a small change from ERM to achieve robustness, the result for regularization in the paper may not be a negative one.
Limitations
The paper clearly discusses limitations of the work about the theoretical assumptions. |
NIPS | Title
Understanding Why Generalized Reweighting Does Not Improve Over ERM
Abstract
Empirical risk minimization (ERM) is known to be non-robust in practice to distributional shift where the training and the test distributions are different. A suite of approaches, such as importance weighting, and variants of distributionally robust optimization (DRO), have been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms, as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to that obtained by ERM. We also show that adding small regularization which does not greatly affect the empirical training accuracy does not help. Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches.
1 Introduction
It has now been well established that empirical risk minimization (ERM) can empirically achieve high test performance on a variety of tasks, particularly with modern overparameterized models where the number of parameters is much larger than the number of training samples. This strong performance of ERM however has been shown to degrade under distributional shift, where the training and test distributions are different [HS15, BGO16, Tat17]. There are two broad categories of distribution shift: domain generalization, where the test distribution contains new environments not in the training distribution, like in domain adaptation; and subpopulation shift, where the two distributions have the same set of subpopulations but their mixture weights differ, like in algorithmic fairness applications.

People have proposed various approaches to learn models that are robust to distributional shift. The most classical approach is importance weighting (IW) [Shi00], which reweights training samples; in the context of subpopulation shift these weights are typically set so that each subpopulation/group has the same overall weight in the training objective. The approach most widely used today is Distributional Robust Optimization (DRO) [DN18, HSNL18], in which we assume that the test distribution belongs to a certain set of distributions that are close to the training distribution (called the uncertainty set), and train the model on the worst distribution in that set. Many variants of DRO have been proposed and are used in practice [HNSS18, SKHL20, XDKR20, ZDKR21, ZDS+21].
While these approaches have been developed for the express purpose of improving ERM for distribution shift, a line of recent work has empirically shown the negative result that when used to train overparameterized models, these methods do not improve over ERM. For IW, [BL19] observed that its effect under stochastic gradient descent (SGD) diminishes over training epochs, and finally does not improve over ERM. For variants of DRO, [SKHL20] found that these methods overfit very easily, i.e. their test performances will drop to the same low level as ERM after sufficiently many epochs if no regularization is applied. [GLP21, KSM+21] compared these methods with ERM on a number of real-world applications, and found that in most cases none of these methods improves over ERM.

This line of empirical results has also been bolstered by some recent theoretical results. [SRKL20] constructed a synthetic dataset where a linear model trained with IW is provably not robust to subpopulation shift. [XYR21] further proved that under gradient descent (GD) with a sufficiently small learning rate, a linear classifier trained with either IW or ERM converges to the same max-margin classifier, and thus upon convergence, they are no different. These previous theoretical results are limited to linear models and specific approaches such as IW where sample weights are fixed during training. They are not applicable to more complex models, and more general approaches where the sample weights could iteratively change, including most DRO variants.

Towards placing the empirical results on a stronger theoretical footing, we define the class of generalized reweighting (GRW), which dynamically assigns weights to the training samples, and iteratively minimizes the weighted average of the sample losses. By allowing the weights to vary with iterations, we cover not just static importance weighting, but also the DRO approaches outlined earlier; though of course, the GRW class is much broader than just these instances.
In this work, we prove the comprehensive result that in both regression and classification, and for both overparameterized linear models and wide neural networks, the models learnt via any GRW approach and ERM are similar, in the sense that their implicit biases are (almost) equivalent. We note that extending the analysis from linear models to wide neural networks is non-trivial since it requires the result that wide neural networks can be approximated by their linearized counterparts to hold uniformly throughout the iterative process of GRW algorithms. Our results extend the analysis in [LXS+19], but as we show, the proof in the original paper had some flaws, due to which we have to fix the proof by changing the network initialization (Eqn. (9), see Appendix E).

Overall, the important takeaway is that distributionally robust generalization cannot be directly achieved by the broad class of GRW algorithms (which includes popular approaches such as importance weighting and most DRO variants). Progress towards this important goal thus requires either going beyond GRW algorithms, or devising novel loss functions that are adapted to GRW approaches. In Section 6 we will discuss some promising future directions as well as the limitations of this work.
2 Preliminaries
Let the input space be X ⊆ R^d and the output space be Y ⊆ R.¹ We assume that X is a subset of the unit L2 ball of R^d, so that any x ∈ X satisfies ‖x‖₂ ≤ 1. We have a training set {z_i = (x_i, y_i)}_{i=1}^n sampled i.i.d. from an underlying distribution P over X × Y. Denote X = (x_1, · · · , x_n) ∈ R^{d×n} and Y = (y_1, · · · , y_n) ∈ R^n. For any function g : X → R^m, we overload notation and use g(X) = (g(x_1), · · · , g(x_n)) ∈ R^{m×n} (except when m = 1, where g(X) is defined as a column vector). Let the loss function be ℓ : Y × Y → [0, 1]. ERM trains a model by minimizing its expected risk R(f; P) = E_{z∼P}[ℓ(f(x), y)] via minimizing the empirical risk R̂(f) = (1/n) Σ_{i=1}^n ℓ(f(x_i), y_i).

In distributional shift, the model is evaluated not on the training distribution P, but on a different test distribution P_test, so that we care about the expected risk R(f; P_test). A large family of methods designed for such distributional shift is distributionally robust optimization (DRO), which minimizes the expected risk over the worst-case distribution Q ≪ P² in a ball w.r.t. a divergence D around the training distribution P. Specifically, DRO minimizes the expected DRO risk defined as:

R_{D,ρ}(f; P) = sup_{Q≪P} { E_Q[ℓ(f(x), y)] : D(Q ‖ P) ≤ ρ }     (1)

for ρ > 0. Examples include CVaR, χ²-DRO [HSNL18], and DORO [ZDKR21], among others.

¹ Our results can be easily extended to the multi-class scenario (see Appendix B).
² For distributions P and Q, Q is absolutely continuous with respect to P, or Q ≪ P, means that for any event A, P(A) = 0 implies Q(A) = 0.
A common category of distribution shift is known as subpopulation shift. Let the data domain contain K groups D_1, · · · , D_K. The training distribution P is the distribution over all groups, and the test distribution P_test is the distribution over one of the groups. Let P_k(z) = P(z | z ∈ D_k) be the conditional distribution over group k; then P_test can be any one of P_1, · · · , P_K. The goal is to train a model f that performs well over every group. There are two common ways to achieve this goal: one is minimizing the balanced empirical risk, which is an unweighted average of the empirical risk over each group, and the other is minimizing the worst-group risk defined as

R_max(f; P) = max_{k=1,··· ,K} R(f; P_k) = max_{k=1,··· ,K} E_{z∼P}[ℓ(f(x), y) | z ∈ D_k]     (2)
3 Generalized Reweighting (GRW)
Various methods have been proposed towards learning models that are robust to distributional shift. In contrast to analyzing each of these individually, we instead consider a large class of what we call Generalized Reweighting (GRW) algorithms that includes the ones mentioned earlier, but potentially many more. Loosely, GRW algorithms iteratively assign each sample a weight during training (that could vary with the iteration) and iteratively minimize the weighted average risk. Specifically, at iteration t, GRW assigns a weight q_i^(t) to sample z_i, and minimizes the weighted empirical risk:

R̂_{q^(t)}(f) = Σ_{i=1}^n q_i^(t) ℓ(f(x_i), y_i)     (3)

where q^(t) = (q_1^(t), · · · , q_n^(t)) and q_1^(t) + · · · + q_n^(t) = 1.

Static GRW assigns to each z_i = (x_i, y_i) a fixed weight q_i that does not change during training, i.e. q_i^(t) ≡ q_i. A classical method is importance weighting [Shi00], where if z_i ∈ D_k and the size of D_k is n_k, then q_i = (K n_k)^{-1}. Under importance weighting, (3) becomes the balanced empirical risk in which each group has the same weight. Note that ERM is also a special case of static GRW.

On the other hand, in dynamic GRW, q^(t) changes with t. For instance, any approach that iteratively upweights samples with high losses in order to help the model learn “hard” samples, such as DRO, is an instance of GRW. When estimating the population DRO risk R_{D,ρ}(f; P) in Eqn. (1), if P is set to the empirical distribution over the training samples, then Q ≪ P implies that Q is also a distribution over the training samples. Thus, DRO methods belong to the broad class of GRW algorithms. There are two common ways to implement DRO. One uses Danskin's theorem and chooses Q as the maximizer of E_Q[ℓ(f(x), y)] in each epoch. The other formulates DRO as a bi-level optimization problem, where the lower level updates the model to minimize the expected risk over Q, and the upper level updates Q to maximize it. Both can be seen as instances of GRW. As one popular instance of the latter, Group DRO was proposed by [SKHL20] to minimize (2). Denote the empirical risk over group k by R̂_k(f), and the model at time t by f^(t). Group DRO iteratively sets q_i^(t) = g_k^(t)/n_k for all z_i ∈ D_k, where g_k^(t) is the group weight that is updated as

g_k^(t) ∝ g_k^(t−1) exp(ν R̂_k(f^(t−1)))   (∀k = 1, · · · , K)     (4)

for some ν > 0, and then normalized so that q_1^(t) + · · · + q_n^(t) = 1. [SKHL20] then showed (in their Proposition 2) that for convex settings, the Group DRO risk of the iterates converges to the global minimum with rate O(t^{-1/2}) if ν is sufficiently small.
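As a rough illustration of this bi-level scheme, the sketch below implements one Group DRO reweighting step, Eqn. (4), followed by the per-sample weights q_i^(t) = g_k^(t)/n_k and the weighted risk (3); variable names and structure are ours, not the authors' implementation.

```python
import numpy as np

def group_dro_step(g, group_risks, groups, nk, losses, nu):
    """One upper-level Group DRO update (Eqn. (4)) plus the weighted risk (Eqn. (3)).

    g: (K,) current group weights; group_risks: (K,) empirical risk per group;
    groups: (n,) integer group label of each sample; nk: (K,) group sizes;
    losses: (n,) per-sample losses; nu: step size of the group-weight update.
    """
    g = g * np.exp(nu * group_risks)   # upweight groups with currently high risk
    g = g / g.sum()                    # normalize the K group weights
    q = g[groups] / nk[groups]         # per-sample weights q_i = g_k / n_k (they sum to 1)
    weighted_risk = float(np.dot(q, losses))
    return g, q, weighted_risk
```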
4 Theoretical Results for Regression
In this section, we will study GRW for regression tasks that use the squared loss

ℓ(ŷ, y) = (1/2)(ŷ − y)²     (5)

We will prove that for both linear models and sufficiently wide fully-connected neural networks, the implicit bias of GRW is equivalent to that of ERM, so that starting from the same initial point, GRW and ERM will converge to the same point when trained for an infinitely long time, which explains why GRW does not improve over ERM without regularization and early stopping. We will further show that while regularization can affect this implicit bias, it must be large enough to significantly lower the training performance, or the final model will still be similar to the unregularized ERM model.
4.1 Linear Models
We first demonstrate our result on simple linear models to provide our readers with a key intuition; later, we will apply this same intuition to neural networks. This key intuition draws from results of [GLSS18]. Let the linear model be denoted by f(x) = ⟨θ, x⟩, where θ ∈ R^d. We consider the overparameterized setting where d > n. The weight update rule of GRW under GD is the following:

θ^(t+1) = θ^(t) − η Σ_{i=1}^n q_i^(t) ∇_θ ℓ(f^(t)(x_i), y_i)     (6)

where η > 0 is the learning rate. For a linear model with the squared loss, the update rule is

θ^(t+1) = θ^(t) − η Σ_{i=1}^n q_i^(t) x_i (f^(t)(x_i) − y_i)     (7)
For this training scheme, we can prove that if the training error converges to zero, then the model converges to an interpolator θ* (s.t. ∀i, ⟨θ*, x_i⟩ = y_i) independent of q_i^(t) (proofs in Appendix D):

Theorem 1. If x_1, · · · , x_n are linearly independent, then under the squared loss, for any GRW such that the empirical training risk R̂(f^(t)) → 0 as t → ∞, it holds that θ^(t) converges to an interpolator θ* that only depends on θ^(0) and x_1, · · · , x_n, but does not depend on q_i^(t).

The proof is based on the following key intuition regarding the update rule (7): θ^(t+1) − θ^(t) is a linear combination of x_1, · · · , x_n for all t, so θ^(t) − θ^(0) always lies in the linear subspace span{x_1, · · · , x_n}, which is an n-dimensional linear subspace if x_1, · · · , x_n are linearly independent. By Cramer's rule, there is exactly one θ̃ in this subspace such that we get interpolation of all the data, ⟨θ̃ + θ^(0), x_i⟩ = y_i for all i ∈ {1, . . . , n}. In other words, the parameter θ* = θ̃ + θ^(0) in this subspace that interpolates all the data is unique. Thus the proof would follow if we were to show that θ^(t) − θ^(0), which lies in the subspace, also converges to interpolating the data.

We have essentially proved the following sobering result: the implicit bias of any GRW that achieves zero training error is equivalent to that of ERM, so GRW does not improve over ERM. While the various distributional shift methods discussed in the introduction have been shown to satisfy the precondition of convergence to zero training error with overparameterized models and linearly independent inputs [SKHL20], we provide the following theorem that shows this for the broad class of GRW methods. Specifically, we show this result for any GRW satisfying the following assumption with a sufficiently small learning rate:
Assumption 1. There are constants q_1, · · · , q_n s.t. ∀i, q_i^(t) → q_i as t → ∞, and min_i q_i = q* > 0.

Theorem 2. If x_1, · · · , x_n are linearly independent, then there exists η_0 > 0 such that for any GRW satisfying Assumption 1 with the squared loss, and any η ≤ η_0, the empirical training risk R̂(f^(t)) → 0 as t → ∞.

Finally, we use a simple experiment to demonstrate the correctness of this result. The experiment is conducted on a training set of six MNIST images, five of which are digit 0 and one is digit 1. We use a 784-dimensional linear model and run ERM, importance weighting and Group DRO. The results are presented in Figure 1, and they show that the training loss of each method converges to 0, and the gap between the model weights of importance weighting, Group DRO and ERM converges to 0, meaning that all three model weights converge to the same point, whose L2 norm is about 0.63. Figure 1d also shows that the group weights in Group DRO empirically satisfy Assumption 1.
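The following self-contained sketch (ours, not the paper's experiment code) mimics this setup with synthetic data in place of MNIST: it runs the update (7) once with uniform ERM weights and once with random time-varying weights bounded away from zero, and checks that both runs reach the same interpolator, as Theorems 1 and 2 predict.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta = 6, 784, 0.1
X = rng.normal(size=(n, d)) / np.sqrt(d)     # n linearly independent inputs (w.h.p.)
y = rng.normal(size=n)

def run_grw(weight_fn, steps=100_000):
    theta = np.zeros(d)                       # same initial point theta^(0) for every run
    for t in range(steps):
        q = weight_fn(t)                      # weights q^(t), summing to one
        resid = X @ theta - y                 # f^(t)(x_i) - y_i
        theta -= eta * X.T @ (q * resid)      # update rule, Eqn. (7)
        if np.abs(resid).max() < 1e-10:       # training risk has numerically reached zero
            break
    return theta

theta_erm = run_grw(lambda t: np.full(n, 1.0 / n))
theta_grw = run_grw(lambda t: 0.5 * rng.dirichlet(np.ones(n)) + 0.5 / n)  # dynamic, min_i q_i >= 0.5/n
print(np.linalg.norm(theta_erm - theta_grw))  # close to 0: both runs reach the same interpolator
```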
4.2 Wide Neural Networks (Wide NNs)
Now we study sufficiently wide fully-connected neural networks. We extend the analysis in [LXS+19] in the neural tangent kernel (NTK) regime [JGH18]. In particular we study the following network:

h^{l+1} = (W^l / √d_l) x^l + β b^l   and   x^{l+1} = σ(h^{l+1})   (l = 0, · · · , L)     (8)

where σ is a non-linear activation function, W^l ∈ R^{d_{l+1}×d_l} and W^L ∈ R^{1×d_L}. Here d_0 = d. The parameter vector θ consists of W^0, · · · , W^L and b^0, · · · , b^L (θ is the concatenation of all flattened weights and biases). The final output is f(x) = h^{L+1}. And let the neural network be initialized as

W^{l}(0)_{i,j} ∼ N(0, 1),  b^{l}(0)_j ∼ N(0, 1)   (l = 0, · · · , L−1)   and   W^{L}(0)_{i,j} = 0,  b^{L}(0)_j ∼ N(0, 1)     (9)
We also need the following assumption on the wide neural network:

Assumption 2. σ is differentiable everywhere. Both σ and its first-order derivative σ̇ are Lipschitz.³

Difference from [JGH18]. Our initialization (9) differs from the original one in [JGH18] in the last (output) layer, where we use the zero initialization W^{L}(0)_{i,j} = 0 instead of the Gaussian initialization W^{L}(0)_{i,j} ∼ N(0, 1). This modification permits us to accurately approximate the neural network with its linearized counterpart (11), as we notice that the proofs in [LXS+19] (particularly the proofs of their Theorem 2.1 and their Lemma 1 in Appendix G) are flawed. In Appendix E we will explain what goes wrong in their proofs and how we manage to fix the proofs with our modification.
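A minimal PyTorch sketch of the architecture (8) under the modified initialization (9) is given below; it uses tanh so that Assumption 2 holds, and all class and variable names are our own rather than the authors' code.

```python
import math
import torch
import torch.nn as nn

class WideNTKNet(nn.Module):
    """NTK-parameterized fully-connected net: weights scaled by 1/sqrt(d_l) in the forward
    pass, standard-normal hidden-layer initialization, and a zero-initialized output layer."""
    def __init__(self, d_in, width, n_hidden, beta=0.1):
        super().__init__()
        dims = [d_in] + [width] * n_hidden
        self.Ws = nn.ParameterList(
            [nn.Parameter(torch.randn(dims[i + 1], dims[i])) for i in range(n_hidden)])
        self.bs = nn.ParameterList(
            [nn.Parameter(torch.randn(dims[i + 1])) for i in range(n_hidden)])
        self.W_out = nn.Parameter(torch.zeros(1, width))   # W^L(0) = 0, as in Eqn. (9)
        self.b_out = nn.Parameter(torch.randn(1))
        self.beta = beta

    def forward(self, x):                                   # x: (batch, d_in)
        for W, b in zip(self.Ws, self.bs):
            x = torch.tanh(x @ W.t() / math.sqrt(W.shape[1]) + self.beta * b)
        return x @ self.W_out.t() / math.sqrt(self.W_out.shape[1]) + self.beta * self.b_out
```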
Denote the neural network at time t by f^(t)(x) = f(x; θ^(t)), which is parameterized by θ^(t) ∈ R^p where p is the number of parameters. We use the shorthand ∇_θ f^(0)(x) := ∇_θ f(x; θ)|_{θ=θ_0}. The neural tangent kernel (NTK) of this model is Θ^(0)(x, x′) = ∇_θ f^(0)(x)^⊤ ∇_θ f^(0)(x′), and the Gram matrix is Θ^(0) = Θ^(0)(X, X) ∈ R^{n×n}. For this wide NN, we still have the following NTK theorem:

Lemma 3. If σ is Lipschitz and d_l → ∞ for l = 1, · · · , L sequentially, then Θ^(0)(x, x′) converges in probability to a non-degenerate⁴ deterministic limiting kernel Θ(x, x′).

The kernel Gram matrix Θ = Θ(X, X) ∈ R^{n×n} is a positive semi-definite symmetric matrix. Denote its largest and smallest eigenvalues by λ_max and λ_min. Note that Θ is non-degenerate, so we can assume that λ_min > 0 (which is almost surely true when d_L ≫ n). Then we have:

Theorem 4. Let f^(t) be a wide fully-connected neural network that satisfies Assumption 2 and is trained by any GRW satisfying Assumption 1 with the squared loss. Let f^(t)_ERM be the same model trained by ERM from the same initial point. If d_1 = · · · = d_L = d̃, ∇_θ f^(0)(x_1), · · · , ∇_θ f^(0)(x_n) are linearly independent, and λ_min > 0, then there exists a constant η_1 > 0 such that: if η ≤ η_1⁵, then for any δ > 0, there exists D̃ > 0 such that as long as d̃ ≥ D̃, with probability at least (1 − δ) over random initialization we have: for any test point x ∈ R^d such that ‖x‖₂ ≤ 1, as d̃ → ∞,
lim sup_{t→∞} |f^(t)(x) − f^(t)_ERM(x)| = O(d̃^{−1/4}) → 0     (10)

Note that for simplicity, in the theorem we only consider the case where d_1 = · · · = d_L = d̃ → ∞, but in fact the result can be very easily extended to the case where d_l/d_1 → α_l for l = 2, · · · , L for some constants α_2, · · · , α_L, and d_1 → ∞. Here we provide a proof sketch for this theorem. The key is to consider the linearized neural network of f^(t)(x):

f^(t)_lin(x) = f^(0)(x) + ⟨θ^(t) − θ^(0), ∇_θ f^(0)(x)⟩     (11)

which is a linear model with features ∇_θ f^(0)(x). Thus if ∇_θ f^(0)(x_1), · · · , ∇_θ f^(0)(x_n) are linearly independent, then the linearized NN converges to the unique interpolator. Then we show that the wide neural network can be approximated by its linearized counterpart uniformly throughout training, which is considerably more subtle in our case due to the GRW dynamics. Here we prove that the gap is bounded by O(d̃^{−1/4}), but in fact we can prove that it is bounded by O(d̃^{−1/2+ε}) for any ε > 0:

Lemma 5 (Approximation Theorem). For a wide fully-connected neural network f^(t) that satisfies Assumption 2 and is trained by any GRW satisfying Assumption 1 with the squared loss, let f^(t)_lin be its linearized neural network trained by the same GRW (i.e. the q_i^(t) are the same for both networks for any i and t). Under the conditions of Theorem 4, with a sufficiently small learning rate, for any δ > 0, there exist constants D̃ > 0 and C > 0 such that as long as d̃ ≥ D̃, with probability at least (1 − δ) over random initialization we have: for any test point x ∈ R^d such that ‖x‖₂ ≤ 1,

sup_{t≥0} |f^(t)_lin(x) − f^(t)(x)| ≤ C d̃^{−1/4}     (12)

Theorem 4 shows that at any test point x within the unit ball, the gap between the outputs of wide NNs trained by GRW and ERM from the same initial point is arbitrarily close to 0. So we have shown that for regression, with both linear and wide NNs, GRW does not improve over ERM.

³ f is Lipschitz if there exists a constant L > 0 such that for any x_1, x_2, |f(x_1) − f(x_2)| ≤ L‖x_1 − x_2‖₂.
⁴ Non-degenerate means that Θ(x, x′) depends on x and x′ and is not a constant.
⁵ For ease of understanding, later we will write this condition as “with a sufficiently small learning rate”.
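For intuition, the linearized prediction (11) can be computed directly with automatic differentiation; the helper below is our own sketch (assuming a scalar-output model currently holding the initial parameters θ^(0)), not part of the paper.

```python
import torch

def f_lin(model, x, theta_flat, theta0_flat):
    """Eqn. (11): f_lin(x) = f^(0)(x) + <theta - theta0, grad_theta f^(0)(x)>.

    model must currently hold the initial parameters theta^(0); theta_flat and
    theta0_flat are the flattened current and initial parameter vectors.
    """
    out = model(x).squeeze()                                   # scalar output f^(0)(x)
    grads = torch.autograd.grad(out, list(model.parameters()))
    g_flat = torch.cat([g.reshape(-1) for g in grads])         # grad_theta f^(0)(x)
    return out.detach() + g_flat @ (theta_flat - theta0_flat)
```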
4.3 Wide Neural Networks, with L2 Regularization
Previous work such as [SKHL20] proposed to improve DRO algorithms by adding an L2 penalty to the objective function. In this section, we thus study adding L2 regularization to GRW algorithms:

R̂^μ_{q^(t)}(f) = Σ_{i=1}^n q_i^(t) ℓ(f(x_i), y_i) + (μ/2) ‖θ − θ^(0)‖₂²     (13)

From the outset, it is easy to see that under L2 regularization, GRW methods have different implicit biases than ERM. For example, when f is a linear model and ℓ is convex and smooth, then R̂^μ_{q^(t)}(f) with static GRW is a convex smooth objective function, so under GD with a sufficiently small learning rate, the model will converge to the global minimizer (see Appendix D.1). Moreover, the global optimum θ* satisfies ∇_θ R̂^μ_{q^(t)}(f(x; θ*)) = 0, solving which yields θ* = θ^(0) + (XQX^⊤ + μI)^{−1} XQ(Y − f^(0)(X)), which depends on Q = diag(q_1, · · · , q_n), so adding L2 regularization at least seems to yield different results from ERM (whether it improves over ERM might depend on q_1, · · · , q_n).

However, the following result shows that this regularization must be large enough to significantly lower the training performance, or the resulting model would still be close to the unregularized ERM model. We still denote the largest and smallest eigenvalues of the kernel Gram matrix Θ by λ_max and λ_min. We use the subscript “reg” to refer to a regularized model (trained by minimizing (13)).
Theorem 6. Suppose there exists M_0 > 0 s.t. ‖∇_θ f^(0)(x)‖₂ ≤ M_0 for all ‖x‖₂ ≤ 1. If λ_min > 0 and μ > 0, then for a wide NN satisfying Assumption 2, and any GRW minimizing the squared loss with a sufficiently small learning rate η, if d_1 = d_2 = · · · = d_L = d̃, ∇_θ f^(0)(x_1), · · · , ∇_θ f^(0)(x_n) are linearly independent, and the empirical training risk of f^(t)_reg satisfies

lim sup_{t→∞} R̂(f^(t)_reg) < ε     (14)

for some ε > 0, then with a sufficiently small learning rate, as d̃ → ∞, with probability close to 1 over random initialization, for any x such that ‖x‖₂ ≤ 1 we have

lim sup_{t→∞} |f^(t)_reg(x) − f^(t)_ERM(x)| = O(d̃^{−1/4} + √ε) → O(√ε)     (15)

where f^(t)_reg is trained by regularized GRW and f^(t)_ERM by unregularized ERM from the same initial points.
The proof again starts from analyzing linearized neural networks, and showing that regularization does not help there (Appendix D.4.2). Then, we need to prove a new approximation theorem for L2-regularized GRW connecting wide NNs to their linearized counterparts uniformly through the GRW training process (Appendix D.4.1). Note that with regularization, we no longer need Assumption 1 to prove the new approximation theorem, because previously Assumption 1 is used to prove the convergence of GRW, but with regularization GRW naturally converges.

Theorem 6 shows that if the training error can go below ε, then the gap between the outputs of the two models on any test point x within the unit ball will be at most O(√ε). Thus, if ε is very small, regularized GRW yields a very similar model to unregularized ERM, and thus makes no improvement.

To empirically demonstrate this result, we run the same experiment as in Section 4.1 but with L2 regularization. The results are presented in Figure 2. We can see that when the regularization is small, the training losses still converge to 0, and the three model weights still converge to the same point. On the contrary, with a large regularization, the training loss does not converge to 0, and the three model weights no longer converge to the same point. This shows that the regularization must be large enough to lower the training performance in order to make a significant difference to the implicit bias.
5 Theoretical Results for Classification
Now we consider classification where Y = {+1, −1}. The big difference is that classification losses do not have finite minimizers. A classification loss converging to zero means that the model weight “explodes” to infinity instead of converging to a finite point. We focus on the canonical logistic loss:

ℓ(ŷ, y) = log(1 + exp(−ŷ y))     (16)
5.1 Linear Models
We first consider training the linear model f(x) = ⟨θ, x⟩ with GRW under gradient descent with the logistic loss. As noted earlier, in this setting, [BL19] made the empirical observation that importance weighting does not improve over ERM. Then, [XYR21] proved that for importance weighting algorithms, as t → ∞, ‖θ^(t)‖₂ → ∞ and θ^(t)/‖θ^(t)‖₂ converges to a unit vector that does not depend on the sample weights, so importance weighting does not improve over ERM. To extend this theoretical result to the broad class of GRW algorithms, we will prove two results. First, in Theorem 7 we will show that under the logistic loss, for any GRW algorithm satisfying the following weaker assumption:

Assumption 3. For all i, lim inf_{t→∞} q_i^(t) > 0,

if the training error converges to 0, and the direction of the model weight converges to a fixed unit vector, then this unit vector must be the max-margin classifier defined as
θ̂_MM = argmax_{θ:‖θ‖₂=1} { min_{i=1,··· ,n} y_i · ⟨θ, x_i⟩ }     (17)

Second, Theorem 8 shows that for any GRW satisfying Assumption 1, the training error converges to 0 and the direction of the model weight converges, so it does not improve over ERM.

Theorem 7. If x_1, · · · , x_n are linearly independent, then for the logistic loss, we have: for any GRW satisfying Assumption 3, if as t → ∞ the empirical training risk R̂(f^(t)) converges to 0 and θ^(t)/‖θ^(t)‖₂ → u for some unit vector u, then u = θ̂_MM.
This result is an extension of [SHN+18]. Note that θ̂_MM does not depend on q_i^(t), so this result shows that the sample weights have no effect on the implicit bias. Thus, for any GRW method that only satisfies the weak Assumption 3, as long as the training error converges to 0 and the model weight direction converges, GRW does not improve over ERM. We next show that any GRW satisfying Assumption 1 does have its model weight direction converge, and its training error converge to 0.

Theorem 8. For any loss ℓ that is convex, L-smooth in ŷ and strictly monotonically decreasing to zero as yŷ → +∞, and any GRW satisfying Assumption 1, denote F(θ) = Σ_{i=1}^n q_i ℓ(⟨θ, x_i⟩, y_i). If x_1, · · · , x_n are linearly independent, then with a sufficiently small learning rate η, we have:

(i) F(θ^(t)) → 0 as t → ∞.
(ii) ‖θ^(t)‖₂ → ∞ as t → ∞.
(iii) Let θ_R = argmin_θ {F(θ) : ‖θ‖₂ ≤ R}. θ_R is unique for any R such that min_{‖θ‖₂≤R} F(θ) < min_i q_i ℓ(0, y_i). And if lim_{R→∞} θ_R/R exists, then lim_{t→∞} θ^(t)/‖θ^(t)‖₂ also exists and they are equal.

This result is an extension of Theorem 1 of [JDST20]. For the logistic loss, it is easy to show that it satisfies the conditions of the above theorem and lim_{R→∞} θ_R/R = θ̂_MM. Thus, Theorems 8 and 7 together imply that all GRW satisfying Assumption 1 (including ERM) have the same implicit bias (see Appendix D.5.3). We also have empirical verification for these results (see Appendix C).
Remark. It is impossible to extend these results to wide NNs like Theorem 4, because for a neural network, if ‖θ^(t)‖₂ goes to infinity, then ‖∇_θ f‖₂ will also go to infinity. However, for a linear model, the gradient is a constant. Consequently, the gap between the neural network and its linearized counterpart will “explode” under gradient descent, so there can be no approximation theorem like Lemma 5 that connects wide NNs to their linearized counterparts. Thus, we consider regularized GRW, for which θ^(t) converges to a finite point and there is an approximation theorem.
5.2 Wide Neural Networks, with L2 Regularization
Consider minimizing the regularized weighted empirical risk (13) with ℓ being the logistic loss. As in the regression case, with L2 regularization, GRW methods have different implicit biases than ERM, for the same reasons as in Section 4.3. And similarly, we can show that in order for GRW methods to be sufficiently different from ERM, the regularization needs to be large enough to significantly lower the training performance. Specifically, in the following theorem we show that if the regularization is too small to lower the training performance, then a wide neural network trained with regularized GRW and the logistic loss will still be very close to the max-margin linearized neural network:

f_MM(x) = ⟨θ̂_MM, ∇_θ f^(0)(x)⟩   where   θ̂_MM = argmax_{‖θ‖₂=1} { min_{i=1,··· ,n} y_i · ⟨θ, ∇_θ f^(0)(x_i)⟩ }     (18)
Note that f_MM does not depend on q_i^(t). Moreover, using the result in the previous section we can show that a linearized neural network trained with unregularized ERM will converge to f_MM:

Theorem 9. Suppose there exists M_0 > 0 such that ‖∇_θ f^(0)(x)‖₂ ≤ M_0 for all test points x. For a wide NN satisfying Assumption 2, and for any GRW satisfying Assumption 1 with the logistic loss, if d_1 = d_2 = · · · = d_L = d̃ and ∇_θ f^(0)(x_1), · · · , ∇_θ f^(0)(x_n) are linearly independent and the learning rate is sufficiently small, then for any δ > 0 there exists a constant C > 0 such that: with probability at least (1 − δ) over random initialization, as d̃ → ∞ we have: for any ε ∈ (0, 1/4), if the empirical training error satisfies lim sup_{t→∞} R̂(f^(t)_reg) < ε, then for any test point x such that |f_MM(x)| > C · (−log 2ε)^{−1/2}, f^(t)_reg(x) has the same sign as f_MM(x) when t is sufficiently large.
This result says that at any test point x on which the max-margin linear classifier classifies with a margin of Ω((−log 2ε)^{−1/2}), the neural network has the same prediction. And as ε decreases, the confidence threshold also becomes lower. Similar to Theorem 6, this theorem provides the scaling of the gap between the regularized GRW model and the unregularized ERM model w.r.t. ε.

This result justifies the empirical observation in [SKHL20] that with large regularization, some GRW algorithms can maintain a high worst-group test performance, with the cost of suffering a significant drop in training accuracy. On the other hand, if the regularization is small and the model can achieve nearly perfect training accuracy, then its worst-group test performance will still significantly drop.
6 Discussion
6.1 Distributionally Robust Generalization and Future Directions
A large body of prior work focused on distributionally robust optimization, but we show that these methods have (almost) equivalent implicit biases to ERM. In other words, distributionally robust optimization (DRO) does not necessarily give better distributionally robust generalization (DRG). Therefore, we argue that it is necessary to design principled ways to improve DRG, which is what people really want in the first place. Here we discuss three promising approaches to improving DRG.

The first approach is data augmentation and pretraining on large datasets. Our theoretical findings suggest that the implicit bias of GRW is determined by the training samples and the initial point, but not the sample weights. Thus, to improve DRG, we can either obtain more training samples, or start from a better initial point, as demonstrated in two recent papers [WGS+22, SKL+22].

The second approach (for classification) is to go beyond the class of (iterative) sample-reweighting-based GRW algorithms, for instance via logit adjustment [MJR+21], which makes a classifier have larger margins on smaller groups to improve its generalization on smaller groups. An early approach by [CWG+19] proposed to add an O(n_k^{−1/4}) additive adjustment term to the logits output by the classifier. Following this spirit, [MJR+21] proposed the LA-loss which also adds an additive adjustment term to the logits. [YCZC20] proposed the CDT-loss which adds a multiplicative adjustment term to the logits by dividing the logits of different classes with different temperatures. [KPOT21] proposed the VS-loss which includes both additive and multiplicative adjustment terms, and they showed that only the multiplicative adjustment term affects the implicit bias, while the additive term only affects optimization, a fact that can be easily derived from our Theorem 8. Finally, [LZT+21] proposed AutoBalance, which optimizes the adjustment terms with a bi-level optimization framework.

The third approach is to stay within the class of GRW algorithms, but to change the classification/regression loss function to be suited to GRW. A recent paper [WCHH22] showed that for linear classifiers, one can make the implicit bias of GRW depend on the sample weights by replacing the exponentially-tailed logistic loss with the following polynomially-tailed loss:
ℓ_{α,β}(ŷ, y) = ℓ_left(ŷ y)   if ŷ y < β;   ℓ_{α,β}(ŷ, y) = 1 / [ŷ y − (β − 1)]^α   if ŷ y ≥ β     (19)

And this result can be extended to GRW satisfying Assumption 1 using our Theorem 8. The reason why the loss (19) works is that it changes lim_{R→∞} θ_R/R, and the new limit depends on the sample weights.
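For illustration, the polynomially-tailed loss (19) can be written as follows; the paper leaves ℓ_left unspecified, so the logistic-style left branch below is purely our own placeholder choice.

```python
import numpy as np

def poly_tailed_loss(y_hat, y, alpha=1.0, beta=1.0, l_left=None):
    """Polynomially-tailed classification loss, Eqn. (19)."""
    m = y_hat * y                                            # the margin y_hat * y
    if l_left is None:
        l_left = lambda t: np.log1p(np.exp(-t))              # illustrative left branch (our choice)
    # clip only to avoid warnings in the branch that np.where discards for m < beta
    tail = np.maximum(m - (beta - 1.0), 1e-12) ** (-alpha)   # 1 / [m - (beta - 1)]^alpha
    return np.where(m < beta, l_left(m), tail)
```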
6.2 Limitations
Like most theory papers, our work makes some strong assumptions. The two main assumptions are:

(i) The model is a linear model or a sufficiently wide fully-connected neural network.
(ii) The model is trained for a sufficiently long time, i.e. without early stopping.

Regarding (i), [COB19] argued that NTK neural networks fall in the “lazy training” regime and results might not be transferable to general neural networks. However, this class of neural networks has been widely studied in recent years and has provided considerable insights into the behavior of general neural networks, which is hard to analyze otherwise. Regarding (ii), in some easy tasks, when early stopping is applied, existing algorithms for distributional shift can do better than ERM [SKHL20]. However, as demonstrated in [GLP21, KSM+21], in real applications these methods still cannot significantly improve over ERM even with early stopping, so early stopping is not the ultimate universal solution. Thus, though inevitably our results rely on some strong assumptions, we believe that they provide important insights into the problems of existing methods and directions for future work, which are significant contributions to the study of distributional shift problems.
7 Conclusion
In this work, we posit a broad class of what we call Generalized Reweighting (GRW) algorithms that includes popular approaches such as importance weighting and Distributionally Robust Optimization (DRO) variants, which were designed towards the task of learning models that are robust to distributional shift. We show that when used to train overparameterized linear models or wide NN models, even this very broad class of GRW algorithms does not improve over ERM, because they have the same implicit biases. We also showed that regularization does not help if it is not large enough to significantly lower the average training performance. Our results thus suggest that to make progress towards learning models that are robust to distributional shift, we have to either go beyond this broad class of GRW algorithms, or design new losses specifically targeted to this class.
References362 [BGO16] Su Lin Blodgett, Lisa Green, and Brendan O’Connor. Demographic dialectal variation363 in social media: A case study of African-American English. In Proceedings of the 2016364 Conference on Empirical Methods in Natural Language Processing, pages 1119–1130,365 Austin, Texas, November 2016. Association for Computational Linguistics.366
[BL19] Jonathon Byrd and Zachary Lipton. What is the effect of importance weighting in deep367 learning? In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of368 the 36th International Conference on Machine Learning, volume 97 of Proceedings of369 Machine Learning Research, pages 872–881. PMLR, 09–15 Jun 2019.370
[COB19] Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable371 programming. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox,372 and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32.373 Curran Associates, Inc., 2019.374
[CWG+19] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning375 imbalanced datasets with label-distribution-aware margin loss. Advances in Neural376 Information Processing Systems, 32:1567–1578, 2019.377
[DN18] John Duchi and Hongseok Namkoong. Learning models with uniform performance via378 distributionally robust optimization. arXiv preprint arXiv:1810.08750, 2018.379
[GLP21] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In380 International Conference on Learning Representations, 2021.381
[GLSS18] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit382 bias in terms of optimization geometry. In Jennifer Dy and Andreas Krause, editors,383 Proceedings of the 35th International Conference on Machine Learning, volume 80 of384 Proceedings of Machine Learning Research, pages 1832–1841. PMLR, 10–15 Jul 2018.385
[HNSS18] Weihua Hu, Gang Niu, Issei Sato, and Masashi Sugiyama. Does distributionally robust386 supervised learning give robust classifiers? In International Conference on Machine387 Learning, pages 2029–2037. PMLR, 2018.388
[HS15] Dirk Hovy and Anders Søgaard. Tagging performance correlates with author age. In389 Proceedings of the 53rd annual meeting of the Association for Computational Linguistics390 and the 7th international joint conference on natural language processing (volume 2:391 Short papers), pages 483–488, 2015.392
[HSNL18] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fair-393 ness without demographics in repeated loss minimization. In Jennifer Dy and Andreas394 Krause, editors, International Conference on Machine Learning, volume 80 of Proceed-395 ings of Machine Learning Research, pages 1929–1938, Stockholmsmässan, Stockholm396 Sweden, 10–15 Jul 2018. PMLR.397
[JDST20] Ziwei Ji, Miroslav Dudík, Robert E. Schapire, and Matus Telgarsky. Gradient descent398 follows the regularization path for general losses. In Jacob Abernethy and Shivani399 Agarwal, editors, Proceedings of Thirty Third Conference on Learning Theory, volume400 125 of Proceedings of Machine Learning Research, pages 2109–2136. PMLR, 09–12401 Jul 2020.402
[JGH18] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Conver-403 gence and generalization in neural networks. In S. Bengio, H. Wallach, H. Larochelle,404 K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information405 Processing Systems, volume 31. Curran Associates, Inc., 2018.406
[KPOT21] Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, and Christos Thram-407 poulidis. Label-imbalanced and group-sensitive classification under overparameteriza-408 tion. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021.409
[KSM+21] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang,410 Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena411 Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque,412 Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea413 Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts. In Marina414 Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on415 Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages416 5637–5664. PMLR, 18–24 Jul 2021.417
[LXS+19] Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha418 Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve419 as linear models under gradient descent. Advances in neural information processing420 systems, 32:8572–8583, 2019.421
[LZT+21] Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, and Samet Oymak.422 Autobalance: Optimized loss functions for imbalanced data. In Thirty-Fifth Conference423 on Neural Information Processing Systems, 2021.424
[MJR+21] Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, An-425 dreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. In International426 Conference on Learning Representations, 2021.427
[Shi00] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting428 the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244,429 2000.430
[SHN+18] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro.431 The implicit bias of gradient descent on separable data. The Journal of Machine Learning432 Research, 19(1):2822–2878, 2018.433
[SKHL20] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distribution-434 ally robust neural networks for group shifts: On the importance of regularization for435 worst-case generalization. In International Conference on Learning Representations,436 2020.437
[SKL+22] Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen,438 Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne439 David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto,440 Sergey Levine, Chelsea Finn, and Percy Liang. Extending the WILDS benchmark for441 unsupervised adaptation. In International Conference on Learning Representations,442 2022.443
[SRKL20] Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of444 why overparameterization exacerbates spurious correlations. In Hal Daumé III and Aarti445 Singh, editors, Proceedings of the 37th International Conference on Machine Learning,446 volume 119 of Proceedings of Machine Learning Research, pages 8346–8356. PMLR,447 13–18 Jul 2020.448
[Tat17] Rachael Tatman. Gender and dialect bias in youtube’s automatic captions. In Proceed-449 ings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59,450 2017.451
[WCHH22] Ke Alexander Wang, Niladri Shekhar Chatterji, Saminul Haque, and Tatsunori452 Hashimoto. Is importance weighting incompatible with interpolating classifiers? In453 International Conference on Learning Representations, 2022.454
[WGS+22] Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre-Alvise Rebuffi, Ira Ktena, Krish-455 namurthy Dj Dvijotham, and Ali Taylan Cemgil. A fine-grained analysis on distribution456 shift. In International Conference on Learning Representations, 2022.457
[XDKR20] Ziyu Xu, Chen Dan, Justin Khim, and Pradeep Ravikumar. Class-weighted classifi-458 cation: Trade-offs and robust approaches. In Hal Daumé III and Aarti Singh, editors,459 Proceedings of the 37th International Conference on Machine Learning, volume 119460
of Proceedings of Machine Learning Research, pages 10544–10554. PMLR, 13–18 Jul461 2020.462
[XYR21] Da Xu, Yuting Ye, and Chuanwei Ruan. Understanding the role of importance weighting463 for deep learning. In International Conference on Learning Representations, 2021.464
[YCZC20] Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, and Wei-Lun Chao. Identifying and465 compensating for feature deviation in imbalanced deep learning. arXiv preprint466 arXiv:2001.01385, 2020.467
[ZDKR21] Runtian Zhai, Chen Dan, Zico Kolter, and Pradeep Ravikumar. Doro: Distributional468 and outlier robust optimization. In Marina Meila and Tong Zhang, editors, Proceedings469 of the 38th International Conference on Machine Learning, volume 139 of Proceedings470 of Machine Learning Research, pages 12345–12355. PMLR, 18–24 Jul 2021.471
[ZDS+21] Runtian Zhai, Chen Dan, Arun Suggala, J Zico Kolter, and Pradeep Kumar Raviku-472 mar. Boosted CVar classification. In Thirty-Fifth Conference on Neural Information473 Processing Systems, 2021.474
Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] See Section 6.2.
(c) Did you discuss any potential negative societal impacts of your work? [No] Not relevant.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes] See Appendix D.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] The experiments are only for demonstration.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See the code.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] | 1. What is the focus of the paper regarding improving the robustness of ERM?
2. What are the strengths and weaknesses of the proposed approach?
3. What are some questions or concerns that the reviewer has regarding the paper's contributions and novelty?
4. How can the paper be improved, and what additional research directions could enhance its impact? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
Focused on understanding the effect of the reweighting strategy on improving the robustness of ERM to distributional shift, this paper demonstrates notably that iteratively updating the sample weights does not improve over ERM in an overparametrized setting. With similar results already derived for the method of fixed weights, the main contribution of this work is to confirm, in a more general setting, the fact that the solution obtained by gradient descent in the overparametrized regime converges to an interpolator independent of the sample weights.
Strengths And Weaknesses
Strengths: Theoretical understanding of the reweighting strategy under a more general framework that allows iterative updating of sample weights.
Weaknesses: As previous work already explained the ineffectiveness of sample reweighting in the overparametrized regime via the convergence of the gradient descent solution to an interpolator independent of the sample weights, the generalized results given in this paper do not seem to provide further insight into the effect of the reweighting approach.
Questions
As the main contribution of this work concerns the generalization to the iterative reweighting setting, it would help clarify the originality of this work to discuss the technical challenges brought by the iteratively updated sample weights in obtaining the theoretical results.
Furthermore, the interest of this study would be significantly increased by adding deeper results in the regularized case to shed light on the impact of the iteratively reweighted samples on the improvement of robustness.
Limitations
The limitations of this work are well discussed. There does not seem to be any negative societal impact.
The contribution of this paper would be greatly enhanced if the authors could go further in exploring one of the directions mentioned in the article where the robustness is likely to improve. |
NIPS | Title
Efficient Equivariant Network
Abstract
Convolutional neural networks (CNNs) have dominated the field of Computer Vision and achieved great success due to their built-in translation equivariance. Group equivariant CNNs (G-CNNs) that incorporate more equivariance can significantly improve the performance of conventional CNNs. However, G-CNNs are faced with two major challenges: spatial-agnostic problem and expensive computational cost. In this work, we propose a general framework of previous equivariant models, which includes G-CNNs and equivariant self-attention layers as special cases. Under this framework, we explicitly decompose the feature aggregation operation into a kernel generator and an encoder, and decouple the spatial and extra geometric dimensions in the computation. Therefore, our filters are essentially dynamic rather than being spatial-agnostic. We further show that our Equivariant model is parameter Efficient and computational Efficient by complexity analysis, and also data Efficient by experiments, so we call our model E-Net. Extensive experiments verify that our model can significantly improve previous works with smaller model size. Especially, under the setting of training on 1/5 data of CIFAR10, our model improves G-CNNs by 5%+ accuracy, while using only 56% parameters and 68% FLOPs.
1 Introduction
In the past few years, convolutional neural networks (CNNs) have been widely used and achieved superior results on multiple vision tasks, such as image classification [31, 55, 51, 22], semantic segmentation [3], and object detection [44]. A compelling explanation of the good performance of CNNs is that their built-in parameter sharing scheme brings in translation equivariance: shifting an image and then feeding it through a CNN layer is the same as feeding the original image and then shifting the resulted feature maps. In other words, the translation symmetry is preserved by each layer. Motivated by this, Cohen and Welling [9] proposed Group Equivariant CNNs (G-CNNs), showing how convolutional networks can be generalized to exploit larger groups of symmetries. Following G-CNNs, researchers have designed new neural networks that are equivariant to other transformations like rotations [9, 61, 24, 49] and scales [65, 53]. However, G-CNNs still have two main drawbacks: 1) In the implementation, G-CNNs would introduce extra dimensions to encode new transformations, such as rotations and scales, thus have a very high computational cost. 2) Although G-CNNs achieve group equivariance by sharing kernels, like vanilla CNNs, they lack the ability to adapt kernels to diverse feature patterns with respect to different spatial positions, namely, the spatial-agnostic problem [68, 39, 70, 71, 54, 67, 36].
∗Corresponding author.
Some previous works focus on solving these two problems. Cheng et al. [4] proposed to decompose the convolutional filters over joint steerable bases to reduce model size. However, it is essentially G-CNNs which still have the inherent spatial-agnostic problem. To incorporate dynamic filters, one solution is introducing attention mechanism into each convolution layer in G-CNNs without disturbing inherent equivariance [48, 45]. The cost is that they introduce extra parameters and increase the complexity of space and time. Another solution is to replace group convolution layers with standalone self-attention layers by designing a specific position embedding to ensure equivariance [47, 26]. However, the self-attention mechanism suffers from quadratic memory and time complexity, because it has to compute the attention score at each pair of inputs.
Actually, Cohen et al. [7], Kondor et al. [29] and Bekkers [1] revealed that an equivariant linear layer is essentially a convolution-like operation. Inspired by this, we further discover that a general feature-extraction layer, either linear or non-linear, being equivariant is equivalent to that the feature aggregation mechanism between each pair of inputs only depends on the relative positions of these two inputs. Based on this observation, we propose a generalized framework of previous equivariant models, which includes G-CNNs and equivariant attention networks as special cases. Under this generalized framework, we design a new equivariant layer to conquer the aforementioned difficulties. Firstly, to avoid quadratic computational complexity, the feature aggregation operator is explicitly decomposed into a kernel generator and an encoder which takes one single feature as the input. Since our kernels are calculated based on input features, they are essentially dynamic rather than being spatial-agnostic. In addition, we decouple the feature aggregation mechanism across spatial and extra geometric dimensions to reduce the inter-channel redundancy in convolution filters [4] and further accelerate computation. Extensive experiments show that our method can process data very efficiently and perform significantly better than previous works using lower computational cost. As our method is parameter Efficient, computational Efficient, data Efficient and Equivariant, we name our new layer as E4-layer.
We summarize our main contributions as follows:
• We propose a generalized framework of previous equivariant models, which includes G-CNNs and attention-based equivariant models as special cases.
• Under the generalized framework, we explicitly decompose the feature aggregation operator into a kernel generator and an encoder, and further decouple the spatial and extra geometric dimensions to reduce computation.
• Extensive experiments verify that our method is also data efficient and performs competitively with lower computational cost.
2 Related Work
Vanilla CNNs [34] are naturally translation equivariant. Further symmetries have been exploited in networks for different tasks, such as rotations in the plane [9, 66, 35, 12, 61, 59, 49, 37, 52, 4, 41, 2, 10, 58, 21, 24], rotations in 3D space [62, 16, 57, 64, 14, 60, 15, 50, 28, 6], scaling [65, 40, 53, 46], symmetries on manifolds [8, 11], and other general symmetry groups [17, 56, 18]. These works accomplish equivariance by constraining the linear mappings in layers, followed by pointwise non-linearities to enhance their expressive power. In general, researchers [29, 7, 1] pointed out that an equivariant linear mapping can always be written as a convolution-like integral, i.e., a G-CNN in practice. However, this theory is still limited to linear cases.
As several works [68, 39, 70, 71, 54, 67, 36] point out the spatial-agnostic problem of CNNs and attention mechanisms [25, 63, 43, 13, 20] achieve impressive results on various vision tasks, researchers have started to consider non-linear equivariant mappings. Romero et al. [48, 45] directly reweighted the convolution kernels with attention weights generated from features and obtained non-linear equivariant models. However, compared with G-CNNs, these methods introduce extra parameters and operations, resulting in an even heavier computational burden. Other works [47, 26, 23] proposed group equivariant self-attention [43, 13]. Fuchs et al. [19] incorporated self-attention into 3D equivariant networks and proposed SE(3)-Transformers. However, since their filters are essentially calculated based on a pair of inputs, the computational complexity is quadratic.
In this work, we further extend the linear equivariant theory to a more general setting that includes non-linear cases. Under this framework, we design a new equivariant layer that solves both the spatial-agnostic problem of convolution-based equivariant models and the heavy computational cost of most equivariant models.
3 A Unified Framework of Previous Group Equivariant Models
In this section, we first briefly review two representative group equivariant models: the linear model G-CNNs [9], and the non-linear model equivariant self-attention [47, 26]. Then, we propose a general framework of previous equivariant models based on the inner relationship among these specific models.
3.1 Equivariance
Equivariance indicates that the outputs of a mapping transform in a predictable way with the transformation of the inputs. Formally, a group equivariant map Ψ satisfies that
$$\forall u \in G, \quad \Psi[T_u[f]] = T'_u[\Psi[f]], \qquad (1)$$
where $G$ is a transformation group, $f$ is an input feature map, and $T_u$ and $T'_u$ are group actions indicating how the transformation $u$ acts on the input and output features, respectively. Besides, since we expect that applying two transformations $u, v \in G$ to the feature maps successively is equivalent to applying the composition $uv \in G$ directly, we require that $T_u T_v = T_{uv}$, where $uv$ is the group product of $u$ and $v$; the same holds for $T'_u$. Now we examine the specific form of the transformation group $G$. In this work, we focus on 2D images defined on $\mathbb{R}^2$. Consequently, we are most interested in groups of the form $G = \mathbb{R}^2 \rtimes A$, resulting from the semi-direct product ($\rtimes$) between the translation group $\mathbb{R}^2$ and a group $A$ acting on $\mathbb{R}^2$, e.g., rotations, scalings and mirrorings. This family of groups is referred to as affine groups and their group product rule is:
$$uv = (x_u, a_u)(x_v, a_v) = (x_u + a_u x_v,\; a_u a_v), \qquad (2)$$
where $u = (x_u, a_u)$ and $v = (x_v, a_v)$, in which $x_u, x_v \in \mathbb{R}^2$ and $a_u, a_v \in A$. For ease of implementation, following [9], we take $A$ as the cyclic group $C_4$ or the dihedral group $D_4$, so that $G$ becomes $p4$ or $p4m$. As for the group action, we employ the most common regular group action in this work, i.e.,
$$T_u[f](v) = f(u^{-1}v). \qquad (3)$$
Here, we only care about the group action over the feature maps defined on $G$, because we always use a lifting operation to lift the input images defined on $\mathbb{R}^2$ to feature maps on $G$, where equivariance can be preserved properly, as will be shown in Section 3.2.
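As a concrete illustration, the following minimal Python sketch spells out the $p4$ group product of Eqn. (2), the corresponding inverse, and the regular action of Eqn. (3); the specific element values are arbitrary and only serve as a sanity check.

```python
import numpy as np

# Minimal sketch of the p4 group R^2 ⋊ C4: an element u = (x_u, a_u) is a 2D
# translation x_u together with a rotation a_u ∈ {0,1,2,3} (multiples of 90°).
def rot(a):
    # 2x2 rotation matrix for a quarter-turn count a
    c, s = int(round(np.cos(a * np.pi / 2))), int(round(np.sin(a * np.pi / 2)))
    return np.array([[c, -s], [s, c]])

def product(u, v):
    # Eqn. (2): uv = (x_u + a_u x_v, a_u a_v)
    (xu, au), (xv, av) = u, v
    return (xu + rot(au) @ xv, (au + av) % 4)

def inverse(u):
    # u^{-1} = (-a_u^{-1} x_u, a_u^{-1})
    xu, au = u
    return (-(rot(-au) @ xu), (-au) % 4)

def act(u, f):
    # Eqn. (3): (T_u f)(v) = f(u^{-1} v), for f a function on the group
    return lambda v: f(product(inverse(u), v))

u = (np.array([1, 0]), 1)      # translate by (1, 0), then rotate by 90°
x, a = product(u, inverse(u))  # should give the identity element
assert np.allclose(x, 0) and a == 0
```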
3.2 G-CNNs
Let $f^{(l)}: X \to \mathbb{R}^{C_l}$ and $W: G \to \mathbb{R}^{C_{l+1} \times C_l}$ be the input feature and the convolutional filter of the $l$-th layer, respectively, where $C_l$ denotes the number of channels of the $l$-th layer. $X$ is taken as $\mathbb{R}^2$ for the first layer, and as $G$ for the following layers. Then for any $g \in G$, the group convolution [29, 7, 1] of $f^{(l)}$ and $W$ on $G$ at $g$ is given by
$$f^{(l+1)}(g) = \Psi[f^{(l)}](g) = \int_X W(g^{-1}\tilde{g})\, f^{(l)}(\tilde{g})\, d\mu(\tilde{g}), \qquad (4)$$
where $\mu(\cdot)$ is the Haar measure. When $X$ is discrete, Eqn. (4) can be rewritten as
$$f^{(l+1)}(g) = \sum_{\tilde{g} \in X} W(g^{-1}\tilde{g})\, f^{(l)}(\tilde{g}). \qquad (5)$$
G-CNNs essentially generalize the translation equivariance of conventional convolution to a more general group $G$. In fact, the first layer maps the 2D images to a function defined on $G$, while the following layers map one feature map on $G$ to another. The computational complexity of the first layer and of the following layers is of the order $O(k^2|A|)$ and $O(k^2|A|^2)$, respectively, where $k$ is the kernel size in the spatial domain. As a result, G-CNNs have a much larger computational cost when $A$ is large, especially for the intermediate layers. In this work, we employ the first layer of G-CNNs as a lifting operation, and focus on reducing the computation of the later layers.
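To make Eqn. (5) and its equivariance concrete, here is a small numeric check on a toy finite group, the cyclic group $\mathbb{Z}_n$ (translations on a ring), where the group product is addition modulo $n$; all sizes and random weights below are illustrative only.

```python
import numpy as np

# Toy check of the discrete group convolution in Eqn. (5) on the cyclic group
# Z_n: elements are 0..n-1, the product is addition mod n, the inverse is
# negation mod n.  Sizes and weights are arbitrary illustrative choices.
n, C_in, C_out = 8, 3, 5
rng = np.random.default_rng(0)
f = rng.normal(size=(n, C_in))           # f^(l): G -> R^{C_l}
W = rng.normal(size=(n, C_out, C_in))    # W: G -> R^{C_{l+1} x C_l}

def group_conv(f, W):
    # f^(l+1)(g) = sum_{g~} W(g^{-1} g~) f^(l)(g~), with g^{-1} g~ = (g~ - g) mod n
    out = np.zeros((n, C_out))
    for g in range(n):
        for gt in range(n):
            out[g] += W[(gt - g) % n] @ f[gt]
    return out

# Equivariance check: acting on the input with T_u (a cyclic shift) and then
# convolving equals convolving first and then shifting the output.
u = 3
shifted_in = np.roll(f, u, axis=0)       # (T_u f)(g) = f(g - u)
assert np.allclose(group_conv(shifted_in, W), np.roll(group_conv(f, W), u, axis=0))
```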
3.3 Equivariant Attention Networks
Group Equivariant Self-Attention (G-SA) [47, 26] is a representative method of equivariant attention networks, whose form can be simplified as follows:
$$f^{(l+1)}(g) = \sum_{\tilde{g} \in G} \mathrm{Softmax}_{\tilde{g}}\Big[h_Q\big(f^{(l)}(g)\big)^{\top}\big(h_K(f^{(l)}(\tilde{g})) + P_{g^{-1}\tilde{g}}\big)\Big]\, h_V\big(f^{(l)}(\tilde{g})\big), \qquad (6)$$
where $h_V: \mathbb{R}^{C_l} \to \mathbb{R}^{C_{l+1}}$, and $h_Q, h_K: \mathbb{R}^{C_l} \to \mathbb{R}^d$ are the embedding functions of values, queries and keys, respectively, which are neural networks in the most general case; $d$ is the dimension of the low-dimensional embeddings, and $P_{g^{-1}\tilde{g}} \in \mathbb{R}^d$ encodes the relative position of the query $f^{(l)}(g)$ and the key $f^{(l)}(\tilde{g})$.
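The same toy group can be used to check the form of Eqn. (6). The sketch below implements the simplified group self-attention with linear query/key/value maps and a relative-position table; all shapes and weights are illustrative and are not taken from [47, 26].

```python
import numpy as np

# Simplified numeric sketch of Eqn. (6) on the cyclic group Z_n, with linear
# embeddings h_Q, h_K, h_V and a learned relative-position table P.
n, C, d = 8, 3, 4
rng = np.random.default_rng(1)
f = rng.normal(size=(n, C))
WQ, WK = rng.normal(size=(d, C)), rng.normal(size=(d, C))
WV = rng.normal(size=(C, C))
P = rng.normal(size=(n, d))              # P[g^{-1} g~]: one embedding per relative position

def g_self_attention(f):
    out = np.zeros_like(f)
    for g in range(n):
        scores = np.array([(WQ @ f[g]) @ (WK @ f[gt] + P[(gt - g) % n]) for gt in range(n)])
        att = np.exp(scores - scores.max()); att /= att.sum()
        out[g] = sum(att[gt] * (WV @ f[gt]) for gt in range(n))
    return out

# Because P depends only on g^{-1} g~, the layer is equivariant to cyclic shifts.
u = 2
assert np.allclose(g_self_attention(np.roll(f, u, axis=0)),
                   np.roll(g_self_attention(f), u, axis=0))
```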
3.4 Generalized Equivariant Framework
As more and more group equivariant structures emerge, researchers have started to look for the most general equivariant structures. To this end, Cohen et al. [7], Kondor et al. [29] and Bekkers [1] proposed a general theory of linear group equivariant structures, which indicates that G-CNNs are the most general equivariant linear layers. Besides, many non-linear equivariant structures have appeared recently, such as equivariant self-attention layers [47, 26]. This motivates us to investigate a more general framework.
In general, with only slight modification, most layers in a neural network can be viewed as an aggregation of pair-wise feature interactions as follows:
$$f^{(l+1)}(g) = \sum_{\tilde{g} \in G} H_{g,\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big), \qquad (7)$$
where the feature aggregation operator $H_{g,\tilde{g}}(\cdot,\cdot): \mathbb{R}^{C_l} \times \mathbb{R}^{C_l} \to \mathbb{R}^{C_{l+1}}$ is a mapping indexed by a pair of locations $g$ and $\tilde{g}$, which describes how to aggregate the input feature pair $f(g)$ and $f(\tilde{g})$. In general, the above layer is not equivariant. However, we can find a general constraint on $H_{g,\tilde{g}}(f^{(l)}(g), f^{(l)}(\tilde{g}))$ that makes this layer equivariant over $G$.

Theorem 1 The layer formulated as Eqn. (7) is group equivariant if and only if there is a mapping $\tilde{H}_{\hat{g}}: \mathbb{R}^{C_l} \times \mathbb{R}^{C_l} \to \mathbb{R}^{C_{l+1}}$ indexed by a single group element $\hat{g}$, such that, $\forall f^{(l)}$ and $\forall g \in G$, the layer satisfies
$$\sum_{\tilde{g}} H_{g,\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big) = \sum_{\tilde{g}} \tilde{H}_{g^{-1}\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big). \qquad (8)$$
Proof. ($\Rightarrow$) Firstly, $\forall u, g \in G$,
$$T_u f^{(l+1)}(g) = f^{(l+1)}(u^{-1}g) = \sum_{\tilde{g} \in G} H_{u^{-1}g,\tilde{g}}\big(f^{(l)}(u^{-1}g), f^{(l)}(\tilde{g})\big).$$
On the other hand,
$$\sum_{\tilde{g} \in G} H_{g,\tilde{g}}\big(T_u f^{(l)}(g), T_u f^{(l)}(\tilde{g})\big) = \sum_{\tilde{g} \in G} H_{g,\tilde{g}}\big(f^{(l)}(u^{-1}g), f^{(l)}(u^{-1}\tilde{g})\big) = \sum_{\tilde{g} \in G} H_{g,u\tilde{g}}\big(f^{(l)}(u^{-1}g), f^{(l)}(\tilde{g})\big).$$
Since equivariance requires $T_u f^{(l+1)}(g) = \sum_{\tilde{g} \in G} H_{g,\tilde{g}}\big(T_u f^{(l)}(g), T_u f^{(l)}(\tilde{g})\big)$, we get
$$\forall f^{(l)}, g, u, \quad \sum_{\tilde{g} \in G} H_{g,u\tilde{g}}\big(f^{(l)}(u^{-1}g), f^{(l)}(\tilde{g})\big) = \sum_{\tilde{g} \in G} H_{u^{-1}g,\tilde{g}}\big(f^{(l)}(u^{-1}g), f^{(l)}(\tilde{g})\big).$$
Substituting $g \to ug$, we get
$$\forall f^{(l)}, g, u, \quad \sum_{\tilde{g} \in G} H_{ug,u\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big) = \sum_{\tilde{g} \in G} H_{g,\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big).$$
Then, letting $u = g^{-1}$,
$$\forall f^{(l)}, g, \quad \sum_{\tilde{g} \in G} H_{e,g^{-1}\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big) = \sum_{\tilde{g} \in G} H_{g,\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big).$$
Denoting $H_{e,g^{-1}\tilde{g}}(\cdot,\cdot)$ by $\tilde{H}_{g^{-1}\tilde{g}}(\cdot,\cdot)$, we obtain exactly Eqn. (8). ($\Leftarrow$) This direction is obvious. Q.E.D.
From the theorem, we can get a group equivariant layer:
$$f^{(l+1)}(g) = \sum_{\tilde{g} \in G} \tilde{H}_{g^{-1}\tilde{g}}\big(f^{(l)}(g), f^{(l)}(\tilde{g})\big), \qquad (9)$$
which is also the only equivariant form of Eqn. (7). Actually, the above theorem also reveals the essence of equivariance in previous works: if the relative positions of $(g_1, \tilde{g}_1)$ and $(g_2, \tilde{g}_2)$ are the same, i.e., $g_1^{-1}\tilde{g}_1 = g_2^{-1}\tilde{g}_2 = \hat{g}$, the feature pairs located at the two tuples should be processed identically. In other words, we should apply the same function $\tilde{H}_{\hat{g}}$ to these two input feature pairs.
From this perspective, we can readily see that both the kernel sharing used in G-CNNs, Eqn. (4), and the relative position encoding adopted in G-SA, Eqn. (6), follow the above rule. According to Theorem 1, designing a group equivariant layer becomes much easier and more flexible than before, as we only need to design a new function $\tilde{H}_{\hat{g}}$. In addition, the new formulation provides a more general perspective on the group equivariant layer, i.e., sharing the parameters of the function $\tilde{H}_{\hat{g}}$, which generalizes the kernel sharing scheme of G-CNNs. Based on the above understanding, we can see that if we replace the feature vectors on the right-hand side of Eqn. (9) with the local patches at group elements $g$ and $\tilde{g}$, respectively, the layer is still equivariant.

Proposition 1 The following layer is equivariant:
$$f^{(l+1)}(g) = \sum_{\tilde{g} \in G} \tilde{H}_{g^{-1}\tilde{g}}\big(F_{N_1(g)}, F_{N_2(\tilde{g})}\big), \qquad (10)$$
where for $i = 1, 2$, $F_{N_i(g)}$ denotes the local patch of $g$, in which $N_i(g)$ represents $g$'s neighborhood $\{gg' \mid g' \in N_i(e)\}$ and $N_i(e)$ is the predefined neighborhood of the identity element $e \in G$.
One remarkable advantage of introducing local patches is that they contain more semantic information than a single feature vector. Note that we acquire the local patches by concatenating features in the neighborhoods of $g$ and $\tilde{g}$ in a predefined order on $N_1(e)$ and $N_2(e)$ respectively, i.e., $f(g')$ is concatenated at the same place in $F_{N_1(g)}$ as $f(g^{-1}g')$ in $F_{N_1(e)}$. We denote the concatenation operator by $\bigcup$, and will discuss the above in detail in Section 4.1, which shows that concatenating features not only makes our framework more flexible, but also helps to reduce the computational burden of our newly proposed equivariant layer.
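Before moving on, here is a quick numeric sanity check of Theorem 1 on the same toy cyclic group: any aggregation operator that is indexed only by the relative position $g^{-1}\tilde{g}$ yields an equivariant layer of the form of Eqn. (9), even when it is non-linear. The particular choice of $\tilde{H}$ below (a tanh gating) is arbitrary and purely illustrative.

```python
import numpy as np

# Sanity check of Eqn. (9) on Z_n: a non-linear H~ indexed by ghat = g^{-1} g~
# still gives an equivariant layer.  The tanh gating is an arbitrary example.
n, C = 8, 3
rng = np.random.default_rng(2)
f = rng.normal(size=(n, C))
A = rng.normal(size=(n, C, C))           # one parameter matrix per relative position ghat

def layer(f):
    out = np.zeros_like(f)
    for g in range(n):
        for gt in range(n):
            ghat = (gt - g) % n
            out[g] += np.tanh(A[ghat] @ f[g]) * f[gt]   # H~_ghat(f(g), f(g~))
    return out

u = 5
assert np.allclose(layer(np.roll(f, u, axis=0)), np.roll(layer(f), u, axis=0))
```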
4 Efficient Equivariant Layer
A straightforward case of Eqn. (10) is to adopt $\tilde{H}_{\hat{g}}$, $\forall \hat{g} \in G$, as a multi-layer perceptron (MLP), where the subscript $\hat{g}$ is used to identify different MLPs. However, in Eqn. (10), we have to compute a mapping from two high-dimensional vectors to another high-dimensional one for each input pair $g$ and $\tilde{g}$, which is very expensive. A similar issue exists in computing the attention scores in self-attention. To deal with this problem, we decompose $\tilde{H}$ into the following form to reduce the computation:
$$\forall \hat{g} \in G, \quad \tilde{H}_{\hat{g}}(x, y) = K_{\hat{g}}(x) \odot V(y), \qquad (11)$$
where $\odot$ denotes the element-wise product, $K_{\hat{g}}: \mathbb{R}^{C_l|N_1(e)|} \to \mathbb{R}^{C_{l+1}}$ is a kernel generator and $V: \mathbb{R}^{C_l|N_2(e)|} \to \mathbb{R}^{C_{l+1}}$ is an encoder. We use $|\cdot|$ to denote the number of elements in a set. Hence, we can compute $K_{\hat{g}}(x)$ and $V(y)$ separately. In addition, to further save computation, we split the kernel into several slices along the channels, such that $K_{\hat{g}}$ is shared across these slices, i.e., $\forall\, 1 \le i, j \le C_{l+1}$, $K^i_{\hat{g}} = K^j_{\hat{g}}$ if $i \equiv j \pmod{s}$, where $s$ is the number of slices, and $i$ and $j$ are channel indices. $K_{\hat{g}}$ is essentially a dynamic filter that adapts to the features around $g$, avoiding the spatial-agnostic problem of G-CNNs. Unlike conventional dynamic filters, which are matrices, the output of $K_{\hat{g}}$ is a vector, which can be viewed as a depthwise kernel [5]. This decouples the channel dimension from the spatial dimension during feature aggregation to reduce the computational cost. Position information is implicitly encoded in the organized output form of our kernel generator, rather than via the explicit positional embedding used in the group self-attention layer [26, 47].
In practice, we can view the whole kernel family $\{K_{\hat{g}}\}_{\hat{g} \in G}$ as the output of a single mapping, i.e., $\tilde{K}: \mathbb{R}^{C_l|N_1(e)|} \to \mathbb{R}^{|G| C_{l+1}}$. Then, we resize the output of $\tilde{K}$ into a $|G| \times C_{l+1}$ matrix, with different rows representing different $K_{\hat{g}}$. Namely, if we adopt $\tilde{K}$ as an MLP, the computations and parameters of the hidden layer are shared across $K_{\hat{g}}$ for different $\hat{g}$, which is another merit of Eqn. (11). However, there is still a large search space for $\tilde{H}_{\hat{g}}$, as Eqn. (11) is only a special structure of $\tilde{H}_{\hat{g}}$; we leave a more complete study of $\tilde{H}_{\hat{g}}$ to future work.
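The decomposition of Eqn. (11) and the slice-sharing scheme can be written down in a few lines. The sketch below is a shape-level illustration only: in the full layer the input of $\tilde{K}$ would be the concatenated patch of size $C_l|N_1(e)|$, and all sizes here are made up for the example.

```python
import numpy as np

# Shape-level sketch of Eqn. (11): a shared two-layer MLP K~ produces all kernels
# {K_ghat} at once; each kernel is an s-dimensional slice broadcast over the
# output channels (K^i_ghat = K^j_ghat whenever i ≡ j mod s).  V is a linear map.
C_in, C_out, G_size, s, r = 16, 16, 9, 4, 2     # illustrative sizes only
rng = np.random.default_rng(3)
W1 = rng.normal(size=(C_in // r, C_in))
W2 = rng.normal(size=(G_size * s, C_in // r))
W3 = rng.normal(size=(C_out, C_in))             # encoder V(y) = W3 y

def kernel_generator(x):
    k = W2 @ np.maximum(W1 @ x, 0)              # K~(x) = W2 ReLU(W1 x)
    k = k.reshape(G_size, s)                    # one slice kernel per relative position ghat
    return k[:, np.arange(C_out) % s]           # broadcast slices -> (G_size, C_out)

x, y = rng.normal(size=C_in), rng.normal(size=C_in)
K = kernel_generator(x)                         # dynamic depthwise kernels
h = K[0] * (W3 @ y)                             # H~_ghat(x, y) = K_ghat(x) ⊙ V(y), here ghat = 0
```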
4.1 Implementation on Affine Group
In this section, we design a very efficient equivariant layer based on Eqn. (11) for the affine group $\mathbb{R}^2 \rtimes A$. The computation of the operator is:
$$f^{(l+1)}(g) = \sum_{\tilde{g} \in \mathcal{N}(g)} K_{g^{-1}\tilde{g}}\Big(\bigcup_{g' \in N_1(g)} f^{(l)}(g')\Big) \odot V\Big(\bigcup_{\tilde{g}' \in N_2(\tilde{g})} f^{(l)}(\tilde{g}')\Big). \qquad (12)$$
Following the standard practice in computer vision, aggregation is done only on the local neighborhood of $g$, $\mathcal{N}(g)$. To save computation, we choose $\mathcal{N}(g)$ to be a spatial-only neighborhood, i.e.,
$\mathcal{N}(g) = \{g(v, e_A) \mid v \in \Omega\}$, where $\Omega \subset \mathbb{R}^2$ and $e_A$ is the identity element of the group $A$. However, aggregating information over the spatial neighborhood only discards the information interaction along $A$, which could lead to a drop in performance [32]. We alleviate this issue by concatenating the feature map along $A$, i.e., we choose $N_1(g)$ and $N_2(g)$ to be $\{g(0, a) \mid a \in A\}$. The order of concatenation is predefined on $A$. As will be shown in the later experiments, this concatenation does not introduce much computation but can significantly improve performance. Compared to group convolution, such a design enables us to decouple the feature aggregation across the spatial dimension and the $A$ dimension to further reduce computational cost. In practice, we adopt $\tilde{K}$ as a two-layer MLP: $\tilde{K}(x) = W_2\,\mathrm{ReLU}(W_1 x)$, where $W_1 \in \mathbb{R}^{C_l/r \times C_l|A|}$, $W_2 \in \mathbb{R}^{|\Omega|s \times C_l/r}$, $r$ is the reduction ratio, which saves both parameters and computation, and $s$ is the number of slices defined before. For 2D images, $\Omega$ is usually a $k \times k$ square mesh grid with $|\Omega| = k^2$, where $k$ is the kernel size. We simply adopt the encoder $V$ as a linear transform: $V(y) = W_3 y$, where $W_3 \in \mathbb{R}^{C_{l+1} \times C_l|A|}$. For better illustration, we visualize a concrete layer of Eqn. (12) with $G$ chosen as $p4$ in Figure 1.
4.2 Computational Complexity Analysis
In practice, the feature map is defined on a discrete mesh grid. We use $h$ and $w$ to denote the height and width of the grid. As the numbers of input and output channels are usually the same, we assume $C_l = C_{l+1} = c$.
Parameter Analysis The number of learnable parameters of the E4-layer (12) is $c^2|A|(1 + 1/r) + csk^2/r$. As $s \ll c$, the parameter count is dominated by the first term when $k$ is not too large, so increasing the kernel size does not significantly increase the parameter count, as shown in later experiments. The parameter count of a group convolution layer is $c^2k^2|A|$. Since $(1 + 1/r) \ll k^2$ and $s/r \ll c|A|$, the parameter count of our E4-layer is significantly smaller than that of a group convolution layer.
Time Complexity Analysis The FLOPs of the E4-layer and of a group convolution layer are $(1 + 1/r)c^2|A|^2hw + (1 + s/r)k^2c|A|hw$ and $k^2c^2|A|^2hw$, respectively. Similarly, as $(1 + 1/r) \ll k^2$ and $(1 + s/r) \ll c|A|$, the FLOPs of the E4-layer are significantly lower than those of the group convolution layer.
It can be observed that both the parameter count and the FLOPs of our E4-layer are composed of two terms, one depending on $k^2$ and the other independent of $k$, which results from disentangling the spatial dimension from both the channel and $A$ dimensions during feature aggregation.
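Plugging the formulas above into code makes the comparison concrete; the configuration values below are illustrative examples, not the exact settings used in the experiments.

```python
# Complexity formulas of Section 4.2 for an illustrative configuration.
c, k, A, r, s, h, w = 64, 3, 4, 2, 4, 32, 32    # example values only

e4_params    = c * c * A * (1 + 1 / r) + c * s * k * k / r
gconv_params = c * c * k * k * A

e4_flops     = (1 + 1 / r) * c * c * A * A * h * w + (1 + s / r) * k * k * c * A * h * w
gconv_flops  = k * k * c * c * A * A * h * w

print(f"params: E4-layer {e4_params:.0f} vs group conv {gconv_params:.0f}")
print(f"FLOPs : E4-layer {e4_flops:.3g} vs group conv {gconv_flops:.3g}")
```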
5 Experiments
In this section, we conduct extensive experiments to study and demonstrate the performance of our model. The experimental results show that our model has a greater capacity than the group-convolution-based ones in terms of parameter efficiency, computational efficiency, data efficiency and accuracy. On the MNIST-rot dataset, we study in detail the effect of hyperparameters on the number of parameters, the computational FLOPs and the performance of our model. All experiments are run on a GeForce RTX 3090 GPU.
5.1 Rotated MNIST
The MNIST-rot dataset [33] is the most widely used benchmark for testing equivariant models. It contains 62k randomly rotated 28×28 gray-scale handwritten digits. Images in the dataset are split into 10k for training, 2k for validation and 50k for testing. The random rotation of the digits and the use of only 20 percent of the training data of the standard MNIST dataset increase the difficulty of classification.
For a fair comparison, we keep both the training settings and the architecture of our model as close as possible to previous works [9, 47]. In addition, we adopt the p4 group to construct all our models in this section. In our first experiment, we adopt our E4-Net given in the supplementary material to make a comparison to previous works. This is a very lightweight model which contains only 18.8K learnable parameters. It is composed of one group convolutional layer which lifts the image to the p4 group, six E4-layers and one fully connected layer. Two 2×2 max-pooling layers are inserted after the first and the third E4-layer to downsample feature maps. The last E4-layer is followed by a global max group pooling layer [9], which takes the maximum response over the entire group, to ensure that the predictions are invariant to rotations.
Our model is trained using the Adam optimizer [27] for 200 epochs with a batch size of 128. The learning rate is initialized to 0.02 and is reduced by a factor of 10 at the 60th, 120th and 160th epochs. The weight decay is set to 0.0001 and no data augmentation is used during training. The results are listed in Table 1. Our models significantly outperform G-CNNs [9] using only about 25% of the parameters and 40% of the FLOPs. G-SA [47], a group equivariant stand-alone self-attention model, even performs worse than G-CNNs at a much higher computational cost. The α-p4-CNN model [45] further introduces the attention mechanism into group convolution along both spatial and channel dimensions to enhance the expressiveness of G-CNNs, while our E4-Net still significantly outperforms it with less computational cost. We also experiment with a larger model to further demonstrate the capacity of our approach, which is listed in the last line of Table 1.
Ablation Study of Concatenation: In the E4-layer (12), we introduce the concatenation operation to enable the disentanglement between the rotation and spatial information interactions. To study the importance of concatenation, we carry out experiments for the case where neither $K_{\hat{g}}$ nor $V$ in Eqn. (12) uses concatenation, i.e., $N_1(g) = \{g\}$, $N_2(\tilde{g}) = \{\tilde{g}\}$. As shown in the first line of Table 2, this leads to a significant drop in performance. This is because, if aggregation in Eqn. (12) is done merely over the spatial neighborhoods without concatenation, there is no information interaction along the rotation dimension. We also experiment with the cases using concatenation only in $K_{\hat{g}}$ or only in $V$; the performance of both is better than the case without concatenation but still inferior to the case with concatenation in both $K_{\hat{g}}$ and $V$. This further illustrates the importance of concatenation along $A$.
Hyperparameters Analysis: We investigate the effect of the various hyperparameters used in the E4-layer. The reduction ratio $r$, the slice number $s$ in $K_{\hat{g}}$ and the kernel size $k$ control the computation and parameters of the layer. Based on the baseline model, we vary the three hyperparameters separately. As shown in Table 3, improvements are observed when decreasing the reduction ratio and increasing the slice number, at the cost of an increased computational burden. In particular, the improvement of $s = 2$ over $s = 4$ and of $r = 1$ over $r = 2$ is marginal, which we attribute to redundancy in the kernel [4]. In conclusion, appropriately increasing the reduction ratio $r$ and decreasing the slice number $s$ can help to reduce computational cost while preserving performance. Keeping other hyperparameters fixed, we also study the effect of the kernel size on our model. In Table 3, the performance peaks when the kernel size equals 7. In general, a larger kernel size leads to improved performance due to a larger receptive field. In addition, as explained in Section 4.2, increasing the kernel size does not dramatically increase parameters and FLOPs as it does for standard convolution.
5.2 Natural Image Classification
In this section, we evaluate the performance of our model on two common natural image datasets, CIFAR10 and CIFAR100 [30]. The CIFAR10 and CIFAR100 datasets consist of 32×32 images belonging to 10 and 100 classes, respectively. Both datasets contain 50k training images and 10k test images. Before training, images are normalized according to the channel means and standard deviations.
In this experiment, we adopt ResNet-18 [22] as the baseline model (abbreviated as R18), which is composed of an initial convolution layer, followed by 4 stages of Res-Blocks and one final classification layer. Following the standard practice in [9], we replace all the conventional layers in R18 with p4 (p4m) convolutions and increase the width of each layer by $\sqrt{4}$ ($\sqrt{8}$) to keep the number of learnable parameters approximately the same. We denote the resulting models as p4-R18 (p4m-R18). We replace the second group convolution layer in each Res-Block of p4-R18 (p4m-R18) with our E4-layer, resulting in p4-E4R18 (p4m-E4R18). For a fair comparison, all the above models are trained under the same training settings. We use stochastic gradient descent with an initial learning rate of 0.1, a Nesterov momentum of 0.9 and a weight decay of 0.0005. The learning rate is reduced by a factor of 5 at the 60th, 120th, and 160th epochs. Models are trained for 200 epochs with a batch size of 128. No data augmentation is used during training, in order to illustrate the data efficiency of our model.
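For reference, this training recipe can be written down in a few lines. The sketch below uses a stand-in model and a dummy batch so that it is self-contained; in the actual experiments the model would be p4-E4R18 / p4m-E4R18 and the loader would iterate over CIFAR batches.

```python
import torch

# Sketch of the CIFAR training setup described above.  The model here is a
# stand-in (a single linear layer on flattened images), and `train_loader`
# is a single dummy batch; both are placeholders, not the paper's models/data.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
train_loader = [(torch.randn(128, 3, 32, 32), torch.randint(0, 10, (128,)))]

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            nesterov=True, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 120, 160], gamma=0.2)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(200):
    for images, labels in train_loader:      # batch size 128, no data augmentation
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                         # divide the learning rate by 5 at the milestones
```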
The classification accuracy, parameter count and FLOPs of all models on CIFAR10 and CIFAR100 are reported in Table 4. We can see that models incorporating more symmetry achieve better performance, i.e., R18 ≤ p4-R18 ≤ p4m-R18. Our p4 and p4m models significantly outperform their counterparts on both CIFAR10 and CIFAR100. Furthermore, our model decreases the parameter count and FLOPs by 45% and 32%, respectively. Notice that the model size reduction is caused purely by the introduction of our E4-layers, as the topological connections and the width of each layer of the E4 models and their counterparts are the same.
Data Efficiency: To further study the performance of our model, we train all the models listed in Table 4 on CIFAR10 with different sizes of training data. To be specific, we consider 5 settings, where 1k, 2k, 3k, 4k and 5k training images per class are randomly sampled from the CIFAR10 training set. Testing is still performed on the original test set of CIFAR10. Other training settings are identical to the above. We visualize the results in Figure 2.
It is observed that the performance gap between the p4, p4m and R2 models tends to increase as we reduce the training data. This is mainly because the prior that the label is invariant to rotations is more important when training data are fewer. The same trend is observed in the gap between our models and their counterparts. For instance, the gap between p4m-E4R18 and p4m-R18 is 0.87% when the training data per class is 5k, while it is enlarged to 5.22% when the training data per class is reduced to 1k.
Notably, we observe that the curve of p4-E4R18 intersects that of p4m-R18, which further indicates that our model is much more data efficient than G-CNNs. As indicated above, the symmetry prior is more important when training data are fewer, and the data efficiency of our model implies that p4-E4R18 and p4m-E4R18 can better exploit the symmetry of the data.
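The per-class subsampling used for this study is straightforward; a minimal sketch is given below, where the label array and the function name are illustrative (in practice the labels would come from the CIFAR10 training targets).

```python
import numpy as np

# Sketch of per-class subsampling for the data-efficiency study: keep
# `n_per_class` training images of each class; names here are illustrative.
def subsample_per_class(labels, n_per_class, seed=0):
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        keep.append(rng.choice(idx, size=n_per_class, replace=False))
    return np.concatenate(keep)

# Example with dummy labels standing in for the CIFAR10 training targets.
labels = np.repeat(np.arange(10), 5000)
indices = subsample_per_class(labels, n_per_class=1000)
assert len(indices) == 10 * 1000
```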
6 Limitation and Future Work
From the theoretical perspective, although we extend the general equivariant framework from linear cases to common non-linear cases, there are two limitations to the generalization: 1) we only focus on layers with the pair-wise interactions proposed in Eqn. (7), and higher-order interaction cases are not included; 2) we only consider the regular group action in this framework, which is a special case of general group actions. We leave extending this equivariant framework to these cases as future work.
From the practical perspective, we only give one particular implementation of Eqn. (10) based on intuitive insight, and further exploration of the space of equivariant maps is needed. An alternative is to exploit searching algorithms from neural architecture search [42, 38, 69] to find a more powerful and efficient model. Besides this, our E4-layer is slower than G-CNNs despite having fewer FLOPs, because convolutions are optimized by many speedup libraries. Our layer is implemented only in a naive way, that is, using an unfold operation followed by a summation for the aggregation step. In the future, we will try to implement a customized CUDA kernel for GPU acceleration to reduce the training and inference time of our model.
7 Conclusions
In this work, we propose a general framework of group equivariant models which delivers a unified understanding of previous group equivariant models. Based on this new understanding, we propose a novel, efficient and powerful group equivariant layer which can serve as a drop-in replacement for convolutional layers. Extensive experiments demonstrate that the E4-layer is more powerful, parameter efficient and computationally efficient than group convolution layers and their variants. Through a side-by-side comparison with G-CNNs, we demonstrate that our E4-layer can significantly improve the data efficiency of equivariant models, which shows great potential for reducing the cost of collecting data.
Acknowledgment
Zhouchen Lin was supported by the NSF China (No.s 61625301 and 61731018), NSFC Tianyuan Fund for Mathematics (No. 12026606) and Project 2020BD006 supported by PKU-Baidu Fund. Yisen Wang is partially supported by the National Natural Science Foundation of China under Grant 62006153, and Project 2020BD006 supported by PKU-Baidu Fund. | 1. What is the focus and contribution of the paper on group-convolutional networks?
2. What are the strengths and weaknesses of the proposed functional abstraction and its application to second-order group-convolutional networks?
3. Do you have any concerns or questions regarding the experimental setup and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper or exploring further research directions? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a functional abstraction for group-convolutional networks employing first- and second-order features. Based on this functional abstraction, a new functional form for second-order group-convolutional networks is devised, which can be made more parameter-efficient than existing approaches. Experiments are performed on rotated MNIST and CIFAR, evaluating error, number of parameters and certain ablations.
Review
While I enjoyed reading about the generalization idea, I feel that there is a certain amount of imprecision hampering it.
Some general comments and questions:
The statement that the authors "propose a generalized framework of equivariant models" could be construed as having found the most general formulation. This is certainly not the case, given higher-order interactions that cannot be framed this way.
The "spatial-agnostic problem" is not a problem. It is just an architecture choice.
Most CNNs, and incidentally the network proposed in this paper, are not actually equivariant to pixel shifts due to the pooling
the formulation of the kernels with neighborhoods \mathcal N(g) is a bit vague. It should be made clear that the precise indexing of the elements of the neighborhood is important if one does not want to resort to permutation invariant functions such as averaging. It is mentioned briefly when incorporating group \mathcal A in the example. A further question would be: It would seem that these neighborhoods need to exhibit some form of transformability as well. Can this be simply characterized? At some points, MLPs are proposed to handle whole neighborhoods. How would these be made to transform correctly?
The experiments on CIFAR are a bit contrived -- please compare to fully data-augmented CNNs. If this approach is to be shown to be useful, it should be able to perform at the level of data augmentation using the symmetries encoded in the network. Data augmentation may also help in the proposed architecture because of the lack of shift-invariance below aggregated stride size.
A demonstration of object recognition on ImageNet would truly show this approach to be efficient. This benchmark is also a form of water-shed - many architectures work on CIFAR that do not work on ImageNet
the symmetry groups chosen are not very interesting. It would have been nice to see some more non-trivial rotation at least.
style suggestion: s/Besides, //g |
NIPS | Title
Efficient Equivariant Network
Abstract
Convolutional neural networks (CNNs) have dominated the field of Computer Vision and achieved great success due to their built-in translation equivariance. Group equivariant CNNs (G-CNNs) that incorporate more equivariance can significantly improve the performance of conventional CNNs. However, G-CNNs are faced with two major challenges: the spatial-agnostic problem and expensive computational cost. In this work, we propose a general framework of previous equivariant models, which includes G-CNNs and equivariant self-attention layers as special cases. Under this framework, we explicitly decompose the feature aggregation operation into a kernel generator and an encoder, and decouple the spatial and extra geometric dimensions in the computation. Therefore, our filters are essentially dynamic rather than spatial-agnostic. We further show that our Equivariant model is parameter Efficient and computationally Efficient by complexity analysis, and also data Efficient by experiments, so we call our model E4-Net. Extensive experiments verify that our model can significantly improve previous works with smaller model size. Especially, under the setting of training on 1/5 of the CIFAR10 training data, our model improves G-CNNs by 5%+ accuracy, while using only 56% of the parameters and 68% of the FLOPs.
1 Introduction
In the past few years, convolutional neural networks (CNNs) have been widely used and achieved superior results on multiple vision tasks, such as image classification [31, 55, 51, 22], semantic segmentation [3], and object detection [44]. A compelling explanation of the good performance of CNNs is that their built-in parameter sharing scheme brings in translation equivariance: shifting an image and then feeding it through a CNN layer is the same as feeding the original image and then shifting the resulted feature maps. In other words, the translation symmetry is preserved by each layer. Motivated by this, Cohen and Welling [9] proposed Group Equivariant CNNs (G-CNNs), showing how convolutional networks can be generalized to exploit larger groups of symmetries. Following G-CNNs, researchers have designed new neural networks that are equivariant to other transformations like rotations [9, 61, 24, 49] and scales [65, 53]. However, G-CNNs still have two main drawbacks: 1) In the implementation, G-CNNs would introduce extra dimensions to encode new transformations, such as rotations and scales, thus have a very high computational cost. 2) Although G-CNNs achieve group equivariance by sharing kernels, like vanilla CNNs, they lack the ability to adapt kernels to diverse feature patterns with respect to different spatial positions, namely, the spatial-agnostic problem [68, 39, 70, 71, 54, 67, 36].
∗Corresponding author.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Some previous works focus on solving these two problems. Cheng et al. [4] proposed to decompose the convolutional filters over joint steerable bases to reduce model size. However, it is essentially G-CNNs which still have the inherent spatial-agnostic problem. To incorporate dynamic filters, one solution is introducing attention mechanism into each convolution layer in G-CNNs without disturbing inherent equivariance [48, 45]. The cost is that they introduce extra parameters and increase the complexity of space and time. Another solution is to replace group convolution layers with standalone self-attention layers by designing a specific position embedding to ensure equivariance [47, 26]. However, the self-attention mechanism suffers from quadratic memory and time complexity, because it has to compute the attention score at each pair of inputs.
Actually, Cohen et al. [7], Kondor et al. [29] and Bekkers [1] revealed that an equivariant linear layer is essentially a convolution-like operation. Inspired by this, we further discover that a general feature-extraction layer, either linear or non-linear, being equivariant is equivalent to that the feature aggregation mechanism between each pair of inputs only depends on the relative positions of these two inputs. Based on this observation, we propose a generalized framework of previous equivariant models, which includes G-CNNs and equivariant attention networks as special cases. Under this generalized framework, we design a new equivariant layer to conquer the aforementioned difficulties. Firstly, to avoid quadratic computational complexity, the feature aggregation operator is explicitly decomposed into a kernel generator and an encoder which takes one single feature as the input. Since our kernels are calculated based on input features, they are essentially dynamic rather than being spatial-agnostic. In addition, we decouple the feature aggregation mechanism across spatial and extra geometric dimensions to reduce the inter-channel redundancy in convolution filters [4] and further accelerate computation. Extensive experiments show that our method can process data very efficiently and perform significantly better than previous works using lower computational cost. As our method is parameter Efficient, computational Efficient, data Efficient and Equivariant, we name our new layer as E4-layer.
We summarize our main contributions as follows:
• We propose a generalized framework of previous equivariant models, which includes GCNNs and attention-based equivariant models as special cases.
• Under the generalized framework, we explicitly decompose the feature aggregation operator into a kernel generator and an encoder, and further decouple the spatial and extra geometric dimensions to reduce computation.
• Extensive experiments verify that our method is also data efficient and performs competitively with lower computational cost.
2 Related Work
Vanilla CNNs [34] are naturally translation equivariant. More symmetries are considered to be exploited into the network for different tasks, such as rotations over plane [9, 66, 35, 12, 61, 59, 49, 37, 52, 4, 41, 2, 10, 58, 21, 24], rotations over 3D space [62, 16, 57, 64, 14, 60, 15, 50, 28, 6], scaling [65, 40, 53, 46], symmetries on manifold [8, 11], and other general symmetry groups [17, 56, 18]. These works accomplish equivariance by constraining the linear mappings in layers, followed by pointwise non-linearities to enhance their expressive power. In general, researchers [29, 7, 1] pointed out that an equivariant linear mapping can always be written as a convolution-like integral, i.e., G-CNNs in practice. However, their theory is still limited to linear cases.
As works [68, 39, 70, 71, 54, 67, 36] point out the spatial-agnostic problem of CNNs and attention mechanisms [25, 63, 43, 13, 20] achieve impressive results on various vision tasks, researchers start to consider non-linear equivariant mapping. Romero et al. [48, 45] directly reweighted the convolution kernels with attention weights generated by features and obtained non-linear equivariant models. However, compared with G-CNNs, these methods introduce extra parameters and operations, resulting in an even heavier computational burden. Also, some works [47, 26, 23] proposed group equivariant self-attention [43, 13]. Fuchs et al. [19] incorporated self-attention into 3D equivariant networks and proposed SE(3)-Transformers. However, since their filters are essentially calculated based on a pair of inputs, the computational complexity is quadratic.
In this work, we further extend the linear equivariant theory to a more general situation, including non-linear cases. Under the framework, we design a new equivariant layer to solve both the spatial-
agnostic problem in convolution-based equivariant models and heavy computation cost problem in most equivariant models.
3 A Unified Framework of Previous Group Equivariant Models
In this section, we first briefly review two representative group equivariant models: the linear model G-CNNs [9], and the non-linear model equivariant self-attention [47, 26]. Then, we propose a general framework of previous equivariant models based on the inner relationship among these specific models.
3.1 Equivariance
Equivariance indicates that the outputs of a mapping transform in a predictable way with the transformation of the inputs. Formally, a group equivariant map Ψ satisfies that
∀u ∈ G, Ψ [Tu[f ]] = T ′u[Ψ[f ]], (1) where G is a transformation group, f is an input feature map, and Tu and T ′u are group actions, indicating how the transformation u acts on the input and output features, respectively. Besides, since we hope that two transformations u, v ∈ G acting on the feature maps successively is equivalent to the composition of transformations uv ∈ G acting on the feature maps directly, we require that TuTv = Tuv , where uv is the group product of u and v. The same is the case with T ′u. Now we examine the specific form of the transformation group G. In this work, we focus on the analysis of 2D images defined on R2. Consequently, we are most interested in the groups of the form G = R2 o A, resulting from the semi-product (o) between the translation group R2 and a group A acts on R2, e.g., rotations, scalings and mirrorings. This family of groups is referred to as affine groups and their group product rule is:
uv = (xu, au)(xv, av) = (xu + auxv, auav), (2)
where u = (xu, au) and v = (xv, av), in which xu, xv ∈ R2 and au, av ∈ A. For ease of implementation, following [9], we take A as the cyclic group C4 or the dihedral group D4, then G becomes p4 or p4m. As for the group action, we employ the most common regular group action in this work, i.e., Tu[f ](v) = f(u−1v). (3) Here, we only care about the group action over the feature maps defined on G, because we always use a lifting operation to lift the input images defined on R2 to the feature maps on G, where the equivariance can be preserved properly, as will be shown in Section 3.2.
3.2 G-CNNs
Let f (l) : X → RCl and W : G → RCl+1×Cl be the input feature and the convolutional filter in the l-th layer, respectively, where Cl denotes the channel number of the l-th layer. X is taken as R2 for the first layer, and taken as G for the following layers. Then for any g ∈ G, the group convolution [29, 7, 1] of f (l) and W on G at g is given by
f (l+1)(g) = Ψ[f (l)](g) = ∫ X W (g−1g̃)f (l)(g̃)dµ(g̃), (4)
where µ(·) is the Haar measure. When X is discrete, Eqn. (4) can be rewritten as f (l+1)(g) = ∑ g̃∈X W (g−1g̃)f (l)(g̃). (5)
G-CNNs essentially generalize the translation equivariance of conventional convolution to a more general group G. In fact, the first layer maps the 2D images to a function defined on G, while the following layers map one feature map on G to another. As a result, the computational complexity of the first layer and the following layers are of the order O(k2|A|) and O(k2|A|2), respectively, where k is the kernel size in the spatial space. As a result, G-CNNs have a much larger computational cost when A is large, especially for the intermediate layers. In this work, we employ the first layer of G-CNNs as a lifting operation, and focus on reducing the computation of the latter layers.
3.3 Equivariant Attention Networks
Group Equivariant Self-Attention (G-SA) [47, 26] is a representative method of equivariant attention networks, whose form can be simplified as follows:
f (l+1)(g) = ∑ g̃∈G Softmaxg̃[h T Q(f (l)(g))(hK(f (l)(g̃)) + Pg−1g̃)]hV (f (l)(g̃)), (6)
where hV : RCl → RCl+1 , and hQ, hK : RCl → Rd are the embedding functions of values, querys and keys, respectively, which are neural networks in the most general case. d is the dimension of the low dimensional embeddings, and Pg−1g̃ ∈ Rd encodes the relative positions of the query f (l)(g) and the key f (l)(g̃).
3.4 Generalized Equivariant Framework
As more and more group equivariant structures emerge, researchers start to deduce the most general equivariant structures. To this end, Cohen et al. [7], Kondor et al. [29] and Bekkers [1] proposed a general theory of linear group equivariant structures, which indicates that G-CNNs are the most general equivariant linear layers. Besides, a lot of non-linear equivariant structures appear recently, such as equivariant self-attention layers [47, 26]. This motivates us to investigate a more general framework.
In all, with only slight modification, most of layers in a neural network can be viewed as a kind of aggregation of pair-wise feature interaction as follows:
f (l+1)(g) = ∑ g̃∈G Hg,g̃(f (l)(g), f (l)(g̃)), (7)
where the feature aggregation operator Hg,g̃(·, ·) : RCl × RCl → RCl+1 is a mapping indexed by a pair of location g and g̃, which describes how to aggregate the input feature pair f(g) and f(g̃). In general, the above layer is not equivariant. However, we can find a general constraint for Hg,g̃(f
(l)(g), f (l)(g̃)) to make this layer equivariant over G. Theorem 1 The layer formulated as Eqn.(7) is group equivariant if and only if there is a mapping H̃ĝ : RCl × RCl → RCl+1 which is indexed by a single group element ĝ, such that, ∀f (l) and ∀g, g̃ ∈ G, the layer satisfies:∑
g̃
Hg,g̃(f (l)(g), f (l)(g̃)) = ∑ g̃ H̃g−1g̃(f (l)(g), f (l)(g̃)) (8)
Proof ⇒ Firstly, ∀u, g and g̃ ∈ G, Tuf (l+1)(g) = f (l+1)(u−1g) = ∑ g̃∈G Hu−1g,g̃(f (l)(u−1g), f (l)(g̃)).
On the other hand,∑ g̃∈G Hg,g̃(Tuf (l)(g), Tuf (l)(g̃)) = ∑ g̃∈G Hg,g̃(f (l)(u−1g), f (l)(u−1g̃)) = ∑ g̃∈G Hg,ug̃(f (l)(u−1g), f (l)(g̃)).
As Tuf (l+1)(g) = ∑ g̃∈G Hg,g̃(Tuf (l)(g), Tuf (l)(g̃)),
⇒ ∀f (l), g, u, ∑ g̃∈G Hg,ug̃(f (l)(u−1g), f (l)(g̃)) = ∑ g̃∈G Hu−1g,g̃(f (l)(u−1g), f (l)(g̃)).
Let g → ug, we get: ∀f (l), g, u, ∑ g̃∈G Hug,ug̃(f (l)(g), f (l)(g̃)) = ∑ g̃∈G Hg,g̃(f (l)(g), f (l)(g̃)).
then, we let u to be g−1, ∀f (l), g, ∑ g̃∈G He,g−1g̃(f (l)(g), f (l)(g̃)) = ∑ g̃∈G Hg,g̃(f (l)(g), f (l)(g̃)).
We denote H̃g−1g̃(·, ·) as He,g−1g̃(·, ·), we can get exactly the Eqn.(8) ⇐ This is obvious. Q.E.D
From the theorem, we can get a group equivariant layer: f (l+1)(g) = ∑ g̃∈G H̃g−1g̃(f (l)(g), f (l)(g̃)), (9)
which is also the only equivariant form of Eqn. (7). Actually, the above theorem also reveals the essence of equivariance in previous works, i.e., if the relative positions of (g1,g̃1) and (g2,g̃2) are the same, i.e., g1−1g̃1 = g−12 g̃2 = ĝ, the feature pairs located at the two tuples should be processed equally. In other words, we should employ the same function H̃ĝ to act on these two input feature pairs.
From this perspective, we can readily see that both the kernel sharing used in G-CNNs, Eqn. (4), and the relative position encoding adopted in the G-SA, Eqn. (6), utilizes the above rule. According to Theorem 1, designing a group equivariant layer becomes much more easily and flexibly than ever, as we only need to design a new function H̃ĝ. In addition, the new formulation provides a more general perspective on the group equivariant layer, i.e., sharing the parameters of function H̃ĝ , which generalizes the kernel sharing schemes in G-CNNs. Based on the above understanding, we can see that if we replace the feature vector in the right hand side of Eqn. (9) with the local patch at group element g and g̃, respectively, it is still equivariant. Proposition 1 The following layer is equivariant,
f (l+1)(g) = ∑ g̃∈G H̃g−1g̃(FN1(g),FN2(g̃)) (10)
where for i = 1, 2, the FNi(g) denote the local patches of g, in which Ni(g) represent g’s neighborhood {gg′|g′ ∈ Ni(e)} and Ni(e) is the predefined neighborhood of the identity element e ∈ G.
One remarkable advantages of introducing local patch is that it contains more semantic information than feature vector. Notice, we acquire the local patches by concatenating features in the neighborhoods of g and g̃ in a predefined order onN1(e) andN2(e) respectively, i.e., f(g′) is concatenated at the same place in FN1(g) as f(g−1g′) in FN1(e). We denote the concatenation operator as ⋃ , and will discuss the above in detail in Section 4.1, which shows that concatenating features can not only make our framework more flexible, but also help to reduce the computational burden of our newly proposed equivariant layer.
4 Efficient Equivariant Layer
A straight-forward and easy case of Eqn. (10) is to adopt H̃ĝ, ∀ĝ ∈ G, as a multi-layer perceptron (MLP), where the subscript ĝ is used to identify different MLPs. However, in Eqn. (10), we have to compute a mapping from two high dimensional vectors to another high dimensional one for each input pair of g and g̃, which is very expensive. A similar issue exists in computing the attention score in self-attention. To deal with this problem, we decompose H̃ into the following form to reduce the computation, i.e.,
∀ĝ ∈ G, H̃ĝ(x, y) = Kĝ(x) V (y) (11)
where means element-wise product, and Kĝ : RCl|N1(e)| → RCl+1 is a kernel generator and V : RCl|N2(e)| → RCl+1 is an encoder. We use | · | to denote the numbers of elements in a set. Hence, we can compute Kĝ(x) and V (y) separately. In addition, to further save computation, we split the kernel into several slices along the channels, such that Kĝ is shared across these slices,
i.e., ∀ 1 ≤ i, j ≤ Cl+1, Kiĝ = K j ĝ if i ≡ j (mod s), where s is the number of slices, and i and j are channel indexes. The Kĝ is essentially a dynamic filters which is adaptive to features around g, avoiding the spatial-agnostic problem in G-CNNs. Unlike conventional dynamic filters, which are matrices, the output of Kĝ is a vector, which can be viewed as a depth wise kernel [5]. This can decouple channel dimension with spatial dimension during feature aggregation to reduce the computational cost. Position information is implicitly encoded in the organized output form of our kernel generator, rather than using explicit positional embedding in the group self-attention layer [26, 47].
In practice, we can view the whole kernel family {Kĝ}ĝ∈G as the output of a single mapping, i.e., K̃: RCl|N1(e)| → R|G|Cl+1 . Then, we resize the output of K̃ to be a |G| × Cl+1 matrix, with different rows represent different Kĝ. Namely, if we adopt K̃ as an MLP, the computations and parameters used for hidden layer are shared across Kĝ for different ĝ, which is another merit of the Eqn. (11). However, there is still a large search space for H̃ĝ, as Eqn (11) is only a special structue of H̃ĝ, we leave a more complete study of H̃ĝ in the future work.
4.1 Implementation on Affine Group
In this section, we design a very efficient equivariant layer based on Eqn. (11) for affine group R2 oA. The computation of the operator is:
f (l+1)(g) = ∑
g̃∈N (g)
Kg−1g̃ ⋃ g′∈N1(g) f (l)(g′) V ⋃ g̃′∈N2(g̃) f (l)(g̃′) . (12) Following the standard practice in computer vision, aggregation is done only on the local neighborhood of g, N (g). To save computation, we choose N (g) to be only spatial-wise neighborhood, i.e.,
N (g) = {g(v, eA) | v ∈ Ω}, where Ω ∈ R2 and eA is the identity element of group A. However, aggregating information along spatial neighborhood only discards the information interaction along A, which could lead to a drop in performance [32]. We alleviate the issue by concatenating the feature map along A, i.e., we choose N1(g) and N2(g) to be {g(0, a)|a ∈ A}. The order of concatenation is predefined on A. As will be shown in the later experiments, this concatenation does not introduce much computation but can significantly improve performance. Compared to group convolution, such a design enables us to decouple the feature aggregation across the spatial dimension and the A dimension to further reduce computational cost. In practice, we adopt the K̃ as a two layer MLP: K̃(x) = W2Relu(W1x), where W1 ∈ RCl/r×Cl|A|,W2 ∈ R|Ω|s×Cl/r, and r is the reduction ratio which saves both parameters and computation, s is the number of slices defined before. For 2D images, Ω is usually adopted as a k × k square mesh grids and |Ω| = k2, where k is the kernel size. We simply adopt the encoder V as a linear transform: V (y) = W3y, where W3 ∈ RCl+1×Cl|A|. For better illustration, we visualize a concrete layer of Eqn. (12) by choosing G as p4 in Figure 1.
4.2 Computational Complexity Analysis
In practice, the feature map is defined on discrete mesh grids. We use h and w to denote the height and the width of mesh grids. As the numbers of the input and output channels are usually the same, we assume Cl = Cl+1 = c.
Parameter Analysis The number of learnable parameters of E4-layer (12) is c2|A|(1 + 1/r) + csk2/r. As s c, parameter counts are dominated by the first term when k is not too large, and increasing kernel size will not significantly increase parameter counts, which is shown in later experiments. The parameters count of group convolution layer is c2k2|A|. Notice that (1+1/r) k2 and s/r c|A|, parameters count of our E4-layer is significantly less than that of group convolution layer.
Time Complexity Analysis The FLOPs of E4-layer and group convolution layer are (1 + 1/r)c2|A|2hw + (1 + s/r)k2c|A|hw and k2c2|A|2hw, respectively. Similarly, as (1 + 1/r) k2 and (1 + s/r) c|A|, the FLOPs of E4-layer is significantly lower than that of group convolutional layer.
It can be observed that both the parameter count and FLOPs of our E4-layer are composed of two terms, one depending on k2 and the other not relying on k, which is a result of disentangling across spatial dimension with both channels and A during feature aggregation.
5 Experiments
In this section, we conduct extensive experiments to study and demonstrate the performance of our model. The experimental results show that our model has a greater capacity than the groupconvolution-based one in terms of parameter efficiency, computational efficiency, data efficiency and accuracy. On the MNIST-rot dataset, we detailedly study the effect of hyperparameters on the number of parameters, computation FLOPs and performance of our model. All the experiments are done on the GeForce RTX 3090 GPU.
5.1 Rotated MNIST
The MNIST-rot dataset [33] is the most widely used benchmark to test the equivariant models. It contains 62k 28×28 randomly rotated gray-scale handwritten digits. Images in the dataset are split into 10k for training, 2k for validation and 50k for testing. Random rotation of digits and only 20 percent of training data of the
standard MNIST dataset increases the difficulty of classification.
For a fair comparison, we keep both training settings and architectures of our model as close as possible to previous works [9, 47]. In addition, we adopt the p4 group to construct all our models in this section. In our first experiment, we adopt our E4-Net given in the supplementary material to make a comparison to previous works. This is a very lightweight model which contains only 18.8K learnable parameters. It is composed of one group convolutional layer which lifts the image to the p4 group, six E4-layers and one fully connected layer. Two 2× 2 max-pooling layers are inserted after the first and the third E4-layer to downsample feature maps. The last E4-layer is followed by a global max group pooling layer [9], which takes the maximum response over the entire group, to ensure the predictions invariant to rotations.
Our model is trained using the Adam optimizer [27] for 200 epochs with a batch size of 128. The learning rate is initialized as 0.02 and is reduced by 10 at the 60th, 120th and 160th epochs. The weight decay is set as 0.0001 and no data augmentation is used during training. The results are listed in Table 1. Our models significantly outperform G-CNNs [9] using only about 25% parameters and 40% FLOPs. For G-SA [47], which is a group equivariant stand-alone self-attention model, even performs inferiorly to G-CNNs with much more computational cost. The α-p4-CNN model [45] further introduces the attention mechanism to group convolution along both spatial and channel dimensions to enhance the expressiveness of G-CNNs, while our E4-Net still significantly outperforms it with less computational cost. We also experiment with a larger model to further demonstrate the capacity of our model, which is listed in the last line of Table 1.
Ablation Study of Concatenation: In the E4layer (12), we introduce the concatenation operation to enable the disentanglement across the rotation and the spatial information interaction. To study the importance of concatenation, we carry out experiments on the case that neither Kĝ nor V in Eqn. (12) use concatenation, i.e.,
N1(g) = g, N2(g̃) = g̃. As shown in the first line of Table 2, this leads to a significant drop in performance. This is because if aggregation in Eqn.(12) is done merely in the spatial neighborhoods without concatenation, there is no information interaction along the rotation dimensions. We also experiment the cases using concatenation only in Kĝ or V , and the performance of both is better than the case without concatenation but is still inferior to the case with concatenation in both Kĝ and V . This further illustrates the importance of concatenation along A.
Hyperparameters Analysis: We investigate the effect of various hyperparameters used in the E4-layer. The reduction ratio r and the slice number s in the Kĝ and kernel size k control the computations and parameters of the layer. Based on the baseline model, we vary the three hyperparameters respectively. As shown in the Table 3, improvement is observed when decreasing the reduction ratio and increasing the slice number, with the cost of computational burden increasing. Especially, the improvement of s = 2 over s = 4 and r = 1 over r = 2 is marginal, which is attributes to redundancy
in the kernel [4]. In conclusion, appropriately increasing the reduction ratio r and decreasing the slice number s can help to reduce computational cost while preserving performance. Keeping other hyperparameters fixed, we study the effect of kernel size on our model. In Table 3, the performance peaks when kernel size equals 7. In general, a larger kernel size leads to improved performance due to a larger receptive field. In addition, as explained in Section 4.2, increasing kernel size does not dramatically increase parameters and FLOPs as standard convolution.
5.2 Natural Image Classification
In this section, we evaluate the performance of our model on the two common natural image datasets, CIFAR10 and CIFAR100 [30]. The CIFAR-10 and the CIFAR100 datasets consist of 32× 32 images
belonging to 10 and 100 classes, respectively. Both of the datasets contain 50k training data and 10k testing data. Before training, images are normalized according to the channel means and standard deviations.
In this experiment, we adopt ResNet-18 [22] as the baseline model(short as R18), which is composed of an initial convolution layer, followed by 4 stage Res-Blocks and one final classification layer. Following the standard practice in [9], we replace all the conventional layers with p4 (p4m) convolutions in R18 and increase the width of each layer by √ 4 ( √
8) to keep the learnable parameters approximately the same. We denote the resulting models as p4-R18 (p4m-R18). We replace the second group convolution layer in each Res-Block of p4-R18 (p4m-R18) with our E4-layer, resulting in the p4-E4R18 (p4m-E4R18). For a fair comparison, all the above models are trained under the same training settings. We use the stochastic gradient descent with an initial learning rate of 0.1, a Nesterov momentum of 0.9 and a weight decay of 0.0005. The learning rate is reduced by 5 at 60th, 120th, and 160th epochs. Models are trained for 200 epochs using 128 batch size. No data augmentation is used during training to illustrate data efficiency of our model.
The classification accuracy, parameters count and FLOPs of all models on CIFAR10 and CIFAR100 are reported in Table 4. We can see that models incorporating more symmetry achieve better performance, i.e., R18 ≤ p4-R18 ≤ p4m-R18. Our p4 and p4m models significantly outperform their counterparts on both CIFAR10 and CIFAR100. Furthermore, our model decreases the parameter count and FLOPs by 45% and 32%, respectively. Notice that the model size reduction is purely caused by the introduction of our E4-layers, as topological connections and width of each layer of E4 model and its counterparts are the same.
Data Efficiency: To further study the performance of our model, we train all the models listed in Table 4 on CIFAR10 with different sizes of training data. To be specific, we consider 5 settings, where 1k, 2k, 3k, 4k and 5k training data of each class are randomly sampled from the CIFAR10 training set. Testing is still performed on the original test set of CIFAR10. Other training settings are identical to the above. We visualize the results in Figure 2.
It is observed that the performance gap between the p4, p4m and R2 models tends to increase as we reduce the training data. This is mainly because the prior that the label is invariant to rotations becomes more important when training data are fewer. The same trend is observed in the gap between our models and their counterparts. For instance, the gap between p4m-E4R18 and p4m-R18 is 0.87% when the training data of each class is 5k, while it is enlarged to 5.22% when the training data of each class is reduced to 1k.
Especially, we observe the line of p4-E4R18 intersects with the one of p4m-R18, which further indicates that our model is much more data efficient than G-CNNs. As indicated above, symmetry prior is more important when training data are fewer, and the data efficiency of our model implies that p4-E4R18 and p4m-E4R18 can better exploit the symmetry of data.
6 Limitation and Future Work
From the theory perspective, although we extend the general equivariant framework from linear cases to common non-linear cases, there are two limitations to the generalization: 1) we only focus on layers with the pair-wise interactions proposed in Eqn. (7), and higher-order interaction cases are not included; 2) we only consider the regular group action in this framework, which is a special case of general group actions. We leave extending this equivariant framework to these cases as future work.
From the practice perspective, we only give a particular implementation of Eqn. (10) based on an intuitive insight, and further exploration of the space of equivariant maps is needed. An alternative is to exploit searching algorithms from neural architecture search [42, 38, 69] to find a more powerful and efficient model. Besides this, our E4-layer is slower than G-CNN despite having fewer FLOPs, because convolutions are optimized by many speedup libraries. Our layer is implemented only in a naive way, that is, using the unfold operation followed by a summation operation for the aggregation step. In the future, we will try to implement a customized CUDA kernel for GPU acceleration to reduce the training and inference time of our model.
7 Conclusions
In this work, we propose a general framework of group equivariant models which delivers a unified understanding of previous group equivariant models. Based on the new understanding, we propose a novel, efficient and powerful group equivariant layer which can serve as a drop-in replacement for convolutional layers. Extensive experiments demonstrate that the E4-layer is more powerful, parameter efficient and computationally efficient than group convolution layers and their variants. Through a side-by-side comparison with G-CNNs, we demonstrate that our E4-layer can significantly improve the data efficiency of equivariant models, which shows great potential for reducing the cost of collecting data.
Acknowledgment
Zhouchen Lin was supported by the NSF China (No.s 61625301 and 61731018), NSFC Tianyuan Fund for Mathematics (No. 12026606) and Project 2020BD006 supported by PKU-Baidu Fund. Yisen Wang is partially supported by the National Natural Science Foundation of China under Grant 62006153, and Project 2020BD006 supported by PKU-Baidu Fund. | 1. What is the focus of the paper regarding group equivariant models?
2. What are the main contributions of the proposed framework, particularly in comparison to previous G-CNNs?
3. How does the suggested layer ψ generalize G-CNNs and equivariant self-attention layers?
4. Can you provide more information about the function H used in constructing E4-Net?
5. Why did you choose V to be a linear map instead of an MLP?
6. Can you explain the relationship between E4-layer and G-Conv layer and G-SA layer?
7. Why didn't you include comparisons with G-SA neural networks in your experiments?
8. Any minor comments or suggestions for improving the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a general framework for group equivariant models which generalizes group equivariant CNNs (G-CNNs) and equivariant self-attention layers. For practical use, the paper proposes a particular instantiation, called E4-Net, which can significantly improve previous G-CNNs with a smaller model size.
Review
Two main contributions of the paper are as follows:
The authors propose a general framework of group equivariant models which generalizes G-CNNs and equivariant self-attention layers. The authors suggest a layer ψ : f^l ↦ f^{l+1}, where f^{l+1}(g) is defined to be a sum over g̃ ∈ G of the values of a function, say H, at g, g̃, f^l(g) and f^l(g̃). It is proved in Theorem 1 of the paper that ψ is equivariant if and only if H can be viewed as a function of g^{-1}g̃, f^l(g) and f^l(g̃).
A special case of the function H is used to construct E4-Net, a new group equivariant layer. In E4-Net, the function H is taken to be the point-wise product of a kernel generator K_{g^{-1}g̃}(f^l(g)) and an encoder V(f^l(g̃)). Experiments on Rotated MNIST and CIFAR show that E4-Net outperforms classical G-CNNs with a smaller number of parameters and lower time complexity.
The results are novel and interesting. The paper is well-written. Besides, I have some comments as follows:
The construction of the kernel generator K is described in detail on page 5, but I do not see any clear construction of the encoder V. It seems to me from Figure 1 that the encoder V is chosen to be a linear map. If so, is there any reason for choosing V to be a linear map rather than an MLP?
An explanation for the relation of E4-layer with G-Conv layer and G-SA layer is needed.
In experiments, comparisons of the accuracy of E4-Net with G-CNNs are reasonable. Why do not you compare with G-SA neural nets?
Minor comments:
page 4, line 132: "querys" --> "queries" |
NIPS | Title
Efficient Equivariant Network
Abstract
Convolutional neural networks (CNNs) have dominated the field of Computer Vision and achieved great success due to their built-in translation equivariance. Group equivariant CNNs (G-CNNs) that incorporate more equivariance can significantly improve the performance of conventional CNNs. However, G-CNNs are faced with two major challenges: spatial-agnostic problem and expensive computational cost. In this work, we propose a general framework of previous equivariant models, which includes G-CNNs and equivariant self-attention layers as special cases. Under this framework, we explicitly decompose the feature aggregation operation into a kernel generator and an encoder, and decouple the spatial and extra geometric dimensions in the computation. Therefore, our filters are essentially dynamic rather than being spatial-agnostic. We further show that our Equivariant model is parameter Efficient and computational Efficient by complexity analysis, and also data Efficient by experiments, so we call our model E-Net. Extensive experiments verify that our model can significantly improve previous works with smaller model size. Especially, under the setting of training on 1/5 data of CIFAR10, our model improves G-CNNs by 5%+ accuracy, while using only 56% parameters and 68% FLOPs.
1 Introduction
In the past few years, convolutional neural networks (CNNs) have been widely used and achieved superior results on multiple vision tasks, such as image classification [31, 55, 51, 22], semantic segmentation [3], and object detection [44]. A compelling explanation of the good performance of CNNs is that their built-in parameter sharing scheme brings in translation equivariance: shifting an image and then feeding it through a CNN layer is the same as feeding the original image and then shifting the resulted feature maps. In other words, the translation symmetry is preserved by each layer. Motivated by this, Cohen and Welling [9] proposed Group Equivariant CNNs (G-CNNs), showing how convolutional networks can be generalized to exploit larger groups of symmetries. Following G-CNNs, researchers have designed new neural networks that are equivariant to other transformations like rotations [9, 61, 24, 49] and scales [65, 53]. However, G-CNNs still have two main drawbacks: 1) In the implementation, G-CNNs would introduce extra dimensions to encode new transformations, such as rotations and scales, thus have a very high computational cost. 2) Although G-CNNs achieve group equivariance by sharing kernels, like vanilla CNNs, they lack the ability to adapt kernels to diverse feature patterns with respect to different spatial positions, namely, the spatial-agnostic problem [68, 39, 70, 71, 54, 67, 36].
Some previous works focus on solving these two problems. Cheng et al. [4] proposed to decompose the convolutional filters over joint steerable bases to reduce model size. However, it is essentially G-CNNs which still have the inherent spatial-agnostic problem. To incorporate dynamic filters, one solution is introducing attention mechanism into each convolution layer in G-CNNs without disturbing inherent equivariance [48, 45]. The cost is that they introduce extra parameters and increase the complexity of space and time. Another solution is to replace group convolution layers with standalone self-attention layers by designing a specific position embedding to ensure equivariance [47, 26]. However, the self-attention mechanism suffers from quadratic memory and time complexity, because it has to compute the attention score at each pair of inputs.
Actually, Cohen et al. [7], Kondor et al. [29] and Bekkers [1] revealed that an equivariant linear layer is essentially a convolution-like operation. Inspired by this, we further discover that a general feature-extraction layer, either linear or non-linear, being equivariant is equivalent to that the feature aggregation mechanism between each pair of inputs only depends on the relative positions of these two inputs. Based on this observation, we propose a generalized framework of previous equivariant models, which includes G-CNNs and equivariant attention networks as special cases. Under this generalized framework, we design a new equivariant layer to conquer the aforementioned difficulties. Firstly, to avoid quadratic computational complexity, the feature aggregation operator is explicitly decomposed into a kernel generator and an encoder which takes one single feature as the input. Since our kernels are calculated based on input features, they are essentially dynamic rather than being spatial-agnostic. In addition, we decouple the feature aggregation mechanism across spatial and extra geometric dimensions to reduce the inter-channel redundancy in convolution filters [4] and further accelerate computation. Extensive experiments show that our method can process data very efficiently and perform significantly better than previous works using lower computational cost. As our method is parameter Efficient, computational Efficient, data Efficient and Equivariant, we name our new layer as E4-layer.
We summarize our main contributions as follows:
• We propose a generalized framework of previous equivariant models, which includes GCNNs and attention-based equivariant models as special cases.
• Under the generalized framework, we explicitly decompose the feature aggregation operator into a kernel generator and an encoder, and further decouple the spatial and extra geometric dimensions to reduce computation.
• Extensive experiments verify that our method is also data efficient and performs competitively with lower computational cost.
2 Related Work
Vanilla CNNs [34] are naturally translation equivariant. More symmetries are considered to be exploited into the network for different tasks, such as rotations over plane [9, 66, 35, 12, 61, 59, 49, 37, 52, 4, 41, 2, 10, 58, 21, 24], rotations over 3D space [62, 16, 57, 64, 14, 60, 15, 50, 28, 6], scaling [65, 40, 53, 46], symmetries on manifold [8, 11], and other general symmetry groups [17, 56, 18]. These works accomplish equivariance by constraining the linear mappings in layers, followed by pointwise non-linearities to enhance their expressive power. In general, researchers [29, 7, 1] pointed out that an equivariant linear mapping can always be written as a convolution-like integral, i.e., G-CNNs in practice. However, their theory is still limited to linear cases.
As works [68, 39, 70, 71, 54, 67, 36] point out the spatial-agnostic problem of CNNs and attention mechanisms [25, 63, 43, 13, 20] achieve impressive results on various vision tasks, researchers start to consider non-linear equivariant mapping. Romero et al. [48, 45] directly reweighted the convolution kernels with attention weights generated by features and obtained non-linear equivariant models. However, compared with G-CNNs, these methods introduce extra parameters and operations, resulting in an even heavier computational burden. Also, some works [47, 26, 23] proposed group equivariant self-attention [43, 13]. Fuchs et al. [19] incorporated self-attention into 3D equivariant networks and proposed SE(3)-Transformers. However, since their filters are essentially calculated based on a pair of inputs, the computational complexity is quadratic.
In this work, we further extend the linear equivariant theory to a more general situation, including non-linear cases. Under the framework, we design a new equivariant layer to solve both the spatial-
agnostic problem in convolution-based equivariant models and heavy computation cost problem in most equivariant models.
3 A Unified Framework of Previous Group Equivariant Models
In this section, we first briefly review two representative group equivariant models: the linear model G-CNNs [9], and the non-linear model equivariant self-attention [47, 26]. Then, we propose a general framework of previous equivariant models based on the inner relationship among these specific models.
3.1 Equivariance
Equivariance indicates that the outputs of a mapping transform in a predictable way with the transformation of the inputs. Formally, a group equivariant map Ψ satisfies that
∀u ∈ G, Ψ [Tu[f ]] = T ′u[Ψ[f ]], (1) where G is a transformation group, f is an input feature map, and Tu and T ′u are group actions, indicating how the transformation u acts on the input and output features, respectively. Besides, since we hope that two transformations u, v ∈ G acting on the feature maps successively is equivalent to the composition of transformations uv ∈ G acting on the feature maps directly, we require that TuTv = Tuv , where uv is the group product of u and v. The same is the case with T ′u. Now we examine the specific form of the transformation group G. In this work, we focus on the analysis of 2D images defined on R2. Consequently, we are most interested in the groups of the form G = R2 o A, resulting from the semi-product (o) between the translation group R2 and a group A acts on R2, e.g., rotations, scalings and mirrorings. This family of groups is referred to as affine groups and their group product rule is:
uv = (xu, au)(xv, av) = (xu + auxv, auav), (2)
where u = (xu, au) and v = (xv, av), in which xu, xv ∈ R2 and au, av ∈ A. For ease of implementation, following [9], we take A as the cyclic group C4 or the dihedral group D4, then G becomes p4 or p4m. As for the group action, we employ the most common regular group action in this work, i.e., Tu[f ](v) = f(u−1v). (3) Here, we only care about the group action over the feature maps defined on G, because we always use a lifting operation to lift the input images defined on R2 to the feature maps on G, where the equivariance can be preserved properly, as will be shown in Section 3.2.
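As a concrete illustration of the product rule (2) for p4 (A = C4), the small Python sketch below composes and inverts group elements represented as a pair (integer translation, rotation index). The representation and function names are ours, for illustration only.

import numpy as np

def rot(a):                                   # 90-degree rotation matrix R^a
    c, s = int(np.cos(a * np.pi / 2)), int(np.sin(a * np.pi / 2))
    return np.array([[c, -s], [s, c]])

def prod(u, v):                               # Eqn. (2): uv = (x_u + a_u x_v, a_u a_v)
    (xu, au), (xv, av) = u, v
    return (xu + rot(au) @ xv, (au + av) % 4)

def inv(u):                                   # u^{-1} = (-R^{-a_u} x_u, -a_u)
    xu, au = u
    return (-(rot(-au) @ xu), (-au) % 4)

u = (np.array([1, 0]), 1)
v = (np.array([0, 2]), 3)
print(prod(u, v), prod(u, inv(u)))            # the latter is the identity element (0, 0)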
3.2 G-CNNs
Let f (l) : X → RCl and W : G → RCl+1×Cl be the input feature and the convolutional filter in the l-th layer, respectively, where Cl denotes the channel number of the l-th layer. X is taken as R2 for the first layer, and taken as G for the following layers. Then for any g ∈ G, the group convolution [29, 7, 1] of f (l) and W on G at g is given by
f (l+1)(g) = Ψ[f (l)](g) = ∫ X W (g−1g̃)f (l)(g̃)dµ(g̃), (4)
where µ(·) is the Haar measure. When X is discrete, Eqn. (4) can be rewritten as f (l+1)(g) = ∑ g̃∈X W (g−1g̃)f (l)(g̃). (5)
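To make Eqn. (5) concrete for the first (lifting) layer, where X = Z² and G = p4, the following NumPy/SciPy sketch correlates a single-channel image with the four rotated copies of one filter and numerically checks the resulting equivariance. This is our own illustration, not the authors' implementation.

import numpy as np
from scipy.signal import correlate2d

def lift_p4(image, W):
    # Output plane a is the cross-correlation of the image with the filter rotated by a*90 degrees.
    return np.stack([correlate2d(image, np.rot90(W, a), mode='same') for a in range(4)])

rng = np.random.default_rng(0)
img, W = rng.standard_normal((8, 8)), rng.standard_normal((3, 3))
out = lift_p4(img, W)
out_rot = lift_p4(np.rot90(img), W)
# Equivariance: rotating the input rotates each output plane and cyclically shifts the rotation axis.
print(np.allclose(out_rot, np.roll(np.stack([np.rot90(o) for o in out]), 1, axis=0)))  # True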
G-CNNs essentially generalize the translation equivariance of conventional convolution to a more general group G. In fact, the first layer maps the 2D images to a function defined on G, while the following layers map one feature map on G to another. As a result, the computational complexity of the first layer and the following layers are of the order O(k2|A|) and O(k2|A|2), respectively, where k is the kernel size in the spatial space. As a result, G-CNNs have a much larger computational cost when A is large, especially for the intermediate layers. In this work, we employ the first layer of G-CNNs as a lifting operation, and focus on reducing the computation of the latter layers.
3.3 Equivariant Attention Networks
Group Equivariant Self-Attention (G-SA) [47, 26] is a representative method of equivariant attention networks, whose form can be simplified as follows:
f (l+1)(g) = ∑ g̃∈G Softmaxg̃[h T Q(f (l)(g))(hK(f (l)(g̃)) + Pg−1g̃)]hV (f (l)(g̃)), (6)
where h_V : R^{C_l} → R^{C_{l+1}}, and h_Q, h_K : R^{C_l} → R^d are the embedding functions of values, queries and keys, respectively, which are neural networks in the most general case. d is the dimension of the low-dimensional embeddings, and P_{g^{-1}g̃} ∈ R^d encodes the relative position of the query f^{(l)}(g) and the key f^{(l)}(g̃).
3.4 Generalized Equivariant Framework
As more and more group equivariant structures emerge, researchers start to deduce the most general equivariant structures. To this end, Cohen et al. [7], Kondor et al. [29] and Bekkers [1] proposed a general theory of linear group equivariant structures, which indicates that G-CNNs are the most general equivariant linear layers. Besides, a lot of non-linear equivariant structures appear recently, such as equivariant self-attention layers [47, 26]. This motivates us to investigate a more general framework.
In all, with only slight modification, most layers in a neural network can be viewed as a kind of aggregation of pair-wise feature interactions, as follows:
f (l+1)(g) = ∑ g̃∈G Hg,g̃(f (l)(g), f (l)(g̃)), (7)
where the feature aggregation operator H_{g,g̃}(·, ·) : R^{C_l} × R^{C_l} → R^{C_{l+1}} is a mapping indexed by a pair of locations g and g̃, which describes how to aggregate the input feature pair f(g) and f(g̃). In general, the above layer is not equivariant. However, we can find a general constraint on H_{g,g̃}(f^{(l)}(g), f^{(l)}(g̃)) that makes this layer equivariant over G. Theorem 1 The layer formulated as Eqn. (7) is group equivariant if and only if there is a mapping H̃_ĝ : R^{C_l} × R^{C_l} → R^{C_{l+1}}, indexed by a single group element ĝ, such that, ∀f^{(l)} and ∀g, g̃ ∈ G, the layer satisfies
∑_{g̃} H_{g,g̃}(f^{(l)}(g), f^{(l)}(g̃)) = ∑_{g̃} H̃_{g^{-1}g̃}(f^{(l)}(g), f^{(l)}(g̃)). (8)
Proof ⇒ Firstly, ∀u, g and g̃ ∈ G, Tuf (l+1)(g) = f (l+1)(u−1g) = ∑ g̃∈G Hu−1g,g̃(f (l)(u−1g), f (l)(g̃)).
On the other hand,∑ g̃∈G Hg,g̃(Tuf (l)(g), Tuf (l)(g̃)) = ∑ g̃∈G Hg,g̃(f (l)(u−1g), f (l)(u−1g̃)) = ∑ g̃∈G Hg,ug̃(f (l)(u−1g), f (l)(g̃)).
As Tuf (l+1)(g) = ∑ g̃∈G Hg,g̃(Tuf (l)(g), Tuf (l)(g̃)),
⇒ ∀f (l), g, u, ∑ g̃∈G Hg,ug̃(f (l)(u−1g), f (l)(g̃)) = ∑ g̃∈G Hu−1g,g̃(f (l)(u−1g), f (l)(g̃)).
Let g → ug, we get: ∀f (l), g, u, ∑ g̃∈G Hug,ug̃(f (l)(g), f (l)(g̃)) = ∑ g̃∈G Hg,g̃(f (l)(g), f (l)(g̃)).
Then, letting u = g^{-1}: ∀f^{(l)}, g, ∑_{g̃∈G} H_{e,g^{-1}g̃}(f^{(l)}(g), f^{(l)}(g̃)) = ∑_{g̃∈G} H_{g,g̃}(f^{(l)}(g), f^{(l)}(g̃)).
Denoting H̃_{g^{-1}g̃}(·, ·) = H_{e,g^{-1}g̃}(·, ·), we obtain exactly Eqn. (8). ⇐ This direction is obvious. Q.E.D.
From the theorem, we can get a group equivariant layer: f (l+1)(g) = ∑ g̃∈G H̃g−1g̃(f (l)(g), f (l)(g̃)), (9)
which is also the only equivariant form of Eqn. (7). Actually, the above theorem also reveals the essence of equivariance in previous works, i.e., if the relative positions of (g1,g̃1) and (g2,g̃2) are the same, i.e., g1−1g̃1 = g−12 g̃2 = ĝ, the feature pairs located at the two tuples should be processed equally. In other words, we should employ the same function H̃ĝ to act on these two input feature pairs.
From this perspective, we can readily see that both the kernel sharing used in G-CNNs, Eqn. (4), and the relative position encoding adopted in G-SA, Eqn. (6), utilize the above rule. According to Theorem 1, designing a group equivariant layer becomes much easier and more flexible than ever, as we only need to design a new function H̃_ĝ. In addition, the new formulation provides a more general perspective on the group equivariant layer, i.e., sharing the parameters of the function H̃_ĝ, which generalizes the kernel sharing schemes in G-CNNs. Based on the above understanding, we can see that if we replace the feature vectors on the right-hand side of Eqn. (9) with the local patches at group elements g and g̃, respectively, the layer is still equivariant. Proposition 1 The following layer is equivariant,
f (l+1)(g) = ∑ g̃∈G H̃g−1g̃(FN1(g),FN2(g̃)) (10)
where for i = 1, 2, the FNi(g) denote the local patches of g, in which Ni(g) represent g’s neighborhood {gg′|g′ ∈ Ni(e)} and Ni(e) is the predefined neighborhood of the identity element e ∈ G.
One remarkable advantage of introducing local patches is that they contain more semantic information than a single feature vector. Notice that we acquire the local patches by concatenating features in the neighborhoods of g and g̃ in a predefined order on N_1(e) and N_2(e), respectively, i.e., f(g′) is concatenated at the same place in F_{N_1(g)} as f(g^{-1}g′) in F_{N_1(e)}. We denote the concatenation operator as ⋃, and will discuss the above in detail in Section 4.1, which shows that concatenating features can not only make our framework more flexible, but also help to reduce the computational burden of our newly proposed equivariant layer.
4 Efficient Equivariant Layer
A straight-forward and easy case of Eqn. (10) is to adopt H̃ĝ, ∀ĝ ∈ G, as a multi-layer perceptron (MLP), where the subscript ĝ is used to identify different MLPs. However, in Eqn. (10), we have to compute a mapping from two high dimensional vectors to another high dimensional one for each input pair of g and g̃, which is very expensive. A similar issue exists in computing the attention score in self-attention. To deal with this problem, we decompose H̃ into the following form to reduce the computation, i.e.,
∀ĝ ∈ G, H̃_ĝ(x, y) = K_ĝ(x) ⊙ V(y), (11)
where ⊙ denotes the element-wise product, and K_ĝ : R^{C_l|N_1(e)|} → R^{C_{l+1}} is a kernel generator and V : R^{C_l|N_2(e)|} → R^{C_{l+1}} is an encoder. We use |·| to denote the number of elements in a set. Hence, we can compute K_ĝ(x) and V(y) separately. In addition, to further save computation, we split the kernel into several slices along the channels, such that K_ĝ is shared across these slices,
i.e., K^i_ĝ = K^j_ĝ for all 1 ≤ i, j ≤ C_{l+1} with i ≡ j (mod s), where s is the number of slices, and i and j are channel indexes. K_ĝ is essentially a dynamic filter that adapts to the features around g, avoiding the spatial-agnostic problem in G-CNNs. Unlike conventional dynamic filters, which are matrices, the output of K_ĝ is a vector, which can be viewed as a depthwise kernel [5]. This decouples the channel dimension from the spatial dimension during feature aggregation to reduce the computational cost. Position information is implicitly encoded in the organized output form of our kernel generator, rather than through the explicit positional embedding used in the group self-attention layer [26, 47].
In practice, we can view the whole kernel family {K_ĝ}_{ĝ∈G} as the output of a single mapping K̃ : R^{C_l|N_1(e)|} → R^{|G|C_{l+1}}. Then, we resize the output of K̃ to be a |G| × C_{l+1} matrix, with different rows representing different K_ĝ. Namely, if we adopt K̃ as an MLP, the computations and parameters used for the hidden layer are shared across K_ĝ for different ĝ, which is another merit of Eqn. (11). However, there is still a large search space for H̃_ĝ, since Eqn. (11) is only a special structure for H̃_ĝ; we leave a more complete study of H̃_ĝ to future work.
4.1 Implementation on Affine Group
In this section, we design a very efficient equivariant layer based on Eqn. (11) for affine group R2 oA. The computation of the operator is:
f^{(l+1)}(g) = ∑_{g̃∈N(g)} K_{g^{-1}g̃}( ⋃_{g′∈N_1(g)} f^{(l)}(g′) ) ⊙ V( ⋃_{g̃′∈N_2(g̃)} f^{(l)}(g̃′) ). (12)
Following the standard practice in computer vision, aggregation is done only on the local neighborhood of g, N(g). To save computation, we choose N(g) to be only a spatial neighborhood, i.e.,
N (g) = {g(v, eA) | v ∈ Ω}, where Ω ∈ R2 and eA is the identity element of group A. However, aggregating information along spatial neighborhood only discards the information interaction along A, which could lead to a drop in performance [32]. We alleviate the issue by concatenating the feature map along A, i.e., we choose N1(g) and N2(g) to be {g(0, a)|a ∈ A}. The order of concatenation is predefined on A. As will be shown in the later experiments, this concatenation does not introduce much computation but can significantly improve performance. Compared to group convolution, such a design enables us to decouple the feature aggregation across the spatial dimension and the A dimension to further reduce computational cost. In practice, we adopt the K̃ as a two layer MLP: K̃(x) = W2Relu(W1x), where W1 ∈ RCl/r×Cl|A|,W2 ∈ R|Ω|s×Cl/r, and r is the reduction ratio which saves both parameters and computation, s is the number of slices defined before. For 2D images, Ω is usually adopted as a k × k square mesh grids and |Ω| = k2, where k is the kernel size. We simply adopt the encoder V as a linear transform: V (y) = W3y, where W3 ∈ RCl+1×Cl|A|. For better illustration, we visualize a concrete layer of Eqn. (12) by choosing G as p4 in Figure 1.
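The PyTorch sketch below illustrates the aggregation pattern of Eqn. (12) in the simplest setting where A is trivial (pure translations), so the concatenation along A and the re-indexing of spatial offsets by the output rotation are omitted. The class and argument names are ours; the real E4-layer additionally handles the rotation axis as described above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class E4LayerTranslationOnly(nn.Module):
    """Content-adaptive depthwise aggregation in the spirit of Eqn. (12), trivial A.

    K~ is a two-layer 1x1-conv MLP producing k*k*s weights per position, V is a
    linear (1x1) map, and aggregation is a weighted sum over each k x k patch,
    implemented naively with unfold (cf. the limitations section).
    """
    def __init__(self, c_in, c_out, k=3, r=2, s=4):
        super().__init__()
        assert c_out % s == 0
        self.k, self.s, self.c_out = k, s, c_out
        self.kernel_gen = nn.Sequential(                       # K~: c_in -> k*k*s
            nn.Conv2d(c_in, c_in // r, 1), nn.ReLU(),
            nn.Conv2d(c_in // r, k * k * s, 1))
        self.encoder = nn.Conv2d(c_in, c_out, 1, bias=False)   # V

    def forward(self, x):
        b, _, h, w = x.shape
        k, s = self.k, self.s
        weights = self.kernel_gen(x).view(b, 1, s, k * k, h, w)      # dynamic kernel per position
        values = F.unfold(self.encoder(x), k, padding=k // 2)        # gather k x k neighborhoods
        values = values.view(b, self.c_out // s, s, k * k, h, w)     # channels i, j share a slice if i = j (mod s)
        out = (weights * values).sum(dim=3)                          # elementwise product, then sum over the patch
        return out.view(b, self.c_out, h, w)

# Usage: layer = E4LayerTranslationOnly(64, 64); y = layer(torch.randn(2, 64, 32, 32))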
4.2 Computational Complexity Analysis
In practice, the feature map is defined on discrete mesh grids. We use h and w to denote the height and the width of mesh grids. As the numbers of the input and output channels are usually the same, we assume Cl = Cl+1 = c.
Parameter Analysis The number of learnable parameters of the E4-layer (12) is c²|A|(1 + 1/r) + csk²/r. As s ≪ c, the parameter count is dominated by the first term when k is not too large, and increasing the kernel size will not significantly increase the parameter count, as shown in later experiments. The parameter count of a group convolution layer is c²k²|A|. Noticing that (1 + 1/r) ≪ k² and s/r ≪ c|A|, the parameter count of our E4-layer is significantly less than that of the group convolution layer.
Time Complexity Analysis The FLOPs of the E4-layer and the group convolution layer are (1 + 1/r)c²|A|²hw + (1 + s/r)k²c|A|hw and k²c²|A|²hw, respectively. Similarly, since (1 + 1/r) ≪ k² and (1 + s/r) ≪ c|A|, the FLOPs of the E4-layer are significantly lower than those of the group convolution layer.
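To give a feel for these formulas, plugging in one illustrative configuration (the specific numbers below are ours, not taken from the paper) gives roughly a 5-6x reduction in both parameters and FLOPs:

# Illustrative values: c = 64 channels, |A| = 4 (p4), k = 3, r = 2, s = 4, 32 x 32 feature map.
c, A, k, r, s, h, w = 64, 4, 3, 2, 4, 32, 32

e4_params    = c * c * A * (1 + 1 / r) + c * s * k * k / r      # 25728
gconv_params = c * c * k * k * A                                # 147456
e4_flops     = (1 + 1 / r) * c * c * A * A * h * w + (1 + s / r) * k * k * c * A * h * w
gconv_flops  = k * k * c * c * A * A * h * w

print(e4_params / gconv_params)   # ~0.17
print(e4_flops / gconv_flops)     # ~0.18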
It can be observed that both the parameter count and FLOPs of our E4-layer are composed of two terms, one depending on k2 and the other not relying on k, which is a result of disentangling across spatial dimension with both channels and A during feature aggregation.
5 Experiments
In this section, we conduct extensive experiments to study and demonstrate the performance of our model. The experimental results show that our model has a greater capacity than the groupconvolution-based one in terms of parameter efficiency, computational efficiency, data efficiency and accuracy. On the MNIST-rot dataset, we detailedly study the effect of hyperparameters on the number of parameters, computation FLOPs and performance of our model. All the experiments are done on the GeForce RTX 3090 GPU.
5.1 Rotated MNIST
The MNIST-rot dataset [33] is the most widely used benchmark to test the equivariant models. It contains 62k 28×28 randomly rotated gray-scale handwritten digits. Images in the dataset are split into 10k for training, 2k for validation and 50k for testing. Random rotation of digits and only 20 percent of training data of the
standard MNIST dataset increases the difficulty of classification.
For a fair comparison, we keep both training settings and architectures of our model as close as possible to previous works [9, 47]. In addition, we adopt the p4 group to construct all our models in this section. In our first experiment, we adopt our E4-Net given in the supplementary material to make a comparison to previous works. This is a very lightweight model which contains only 18.8K learnable parameters. It is composed of one group convolutional layer which lifts the image to the p4 group, six E4-layers and one fully connected layer. Two 2× 2 max-pooling layers are inserted after the first and the third E4-layer to downsample feature maps. The last E4-layer is followed by a global max group pooling layer [9], which takes the maximum response over the entire group, to ensure the predictions invariant to rotations.
Our model is trained using the Adam optimizer [27] for 200 epochs with a batch size of 128. The learning rate is initialized as 0.02 and is reduced by a factor of 10 at the 60th, 120th and 160th epochs. The weight decay is set as 0.0001 and no data augmentation is used during training. The results are listed in Table 1. Our models significantly outperform G-CNNs [9] using only about 25% of the parameters and 40% of the FLOPs. G-SA [47], a group equivariant stand-alone self-attention model, even performs inferiorly to G-CNNs with much more computational cost. The α-p4-CNN model [45] further introduces the attention mechanism to group convolution along both spatial and channel dimensions to enhance the expressiveness of G-CNNs, while our E4-Net still significantly outperforms it with less computational cost. We also experiment with a larger model to further demonstrate the capacity of our model, which is listed in the last line of Table 1.
Ablation Study of Concatenation: In the E4layer (12), we introduce the concatenation operation to enable the disentanglement across the rotation and the spatial information interaction. To study the importance of concatenation, we carry out experiments on the case that neither Kĝ nor V in Eqn. (12) use concatenation, i.e.,
N_1(g) = {g}, N_2(g̃) = {g̃}. As shown in the first line of Table 2, this leads to a significant drop in performance. This is because, if the aggregation in Eqn. (12) is done merely in the spatial neighborhoods without concatenation, there is no information interaction along the rotation dimensions. We also experiment with the cases using concatenation only in K_ĝ or only in V; both perform better than the case without concatenation but remain inferior to the case with concatenation in both K_ĝ and V. This further illustrates the importance of concatenation along A.
Hyperparameters Analysis: We investigate the effect of various hyperparameters used in the E4-layer. The reduction ratio r, the slice number s in K_ĝ, and the kernel size k control the computations and parameters of the layer. Based on the baseline model, we vary the three hyperparameters respectively. As shown in Table 3, improvement is observed when decreasing the reduction ratio and increasing the slice number, at the cost of an increased computational burden. In particular, the improvement of s = 2 over s = 4 and of r = 1 over r = 2 is marginal, which is attributed to redundancy in the kernel [4]. In conclusion, appropriately increasing the reduction ratio r and decreasing the slice number s can help to reduce computational cost while preserving performance. Keeping other hyperparameters fixed, we study the effect of kernel size on our model. In Table 3, the performance peaks when the kernel size equals 7. In general, a larger kernel size leads to improved performance due to a larger receptive field. In addition, as explained in Section 4.2, increasing the kernel size does not dramatically increase parameters and FLOPs as it does for standard convolution.
5.2 Natural Image Classification
In this section, we evaluate the performance of our model on the two common natural image datasets, CIFAR10 and CIFAR100 [30]. The CIFAR-10 and the CIFAR100 datasets consist of 32× 32 images
belonging to 10 and 100 classes, respectively. Both of the datasets contain 50k training data and 10k testing data. Before training, images are normalized according to the channel means and standard deviations.
In this experiment, we adopt ResNet-18 [22] as the baseline model (abbreviated as R18), which is composed of an initial convolution layer, followed by four stages of Res-Blocks and one final classification layer. Following the standard practice in [9], we replace all the conventional layers with p4 (p4m) convolutions in R18 and increase the width of each layer by √4 (√8) to keep the learnable parameters approximately the same. We denote the resulting models as p4-R18 (p4m-R18). We replace the second group convolution layer in each Res-Block of p4-R18 (p4m-R18) with our E4-layer, resulting in p4-E4R18 (p4m-E4R18). For a fair comparison, all the above models are trained under the same training settings. We use stochastic gradient descent with an initial learning rate of 0.1, a Nesterov momentum of 0.9 and a weight decay of 0.0005. The learning rate is reduced by a factor of 5 at the 60th, 120th, and 160th epochs. Models are trained for 200 epochs with a batch size of 128. No data augmentation is used during training, to illustrate the data efficiency of our model.
The classification accuracy, parameters count and FLOPs of all models on CIFAR10 and CIFAR100 are reported in Table 4. We can see that models incorporating more symmetry achieve better performance, i.e., R18 ≤ p4-R18 ≤ p4m-R18. Our p4 and p4m models significantly outperform their counterparts on both CIFAR10 and CIFAR100. Furthermore, our model decreases the parameter count and FLOPs by 45% and 32%, respectively. Notice that the model size reduction is purely caused by the introduction of our E4-layers, as topological connections and width of each layer of E4 model and its counterparts are the same.
Data Efficiency: To further study the performance of our model, we train all the models listed in Table 4 on CIFAR10 with different sizes of training data. To be specific, we consider 5 settings, where 1k, 2k, 3k, 4k and 5k training data of each class are randomly sampled from the CIFAR10 training set. Testing is still performed on the original test set of CIFAR10. Other training settings are identical to the above. We visualize the results in Figure 2.
It is observed that the performance gap between the p4, p4m and R2 models tends to increase as we reduce the training data. This is mainly because the prior that the label is invariant to rotations becomes more important when training data are fewer. The same trend is observed in the gap between our models and their counterparts. For instance, the gap between p4m-E4R18 and p4m-R18 is 0.87% when the training data of each class is 5k, while it is enlarged to 5.22% when the training data of each class is reduced to 1k.
Especially, we observe the line of p4-E4R18 intersects with the one of p4m-R18, which further indicates that our model is much more data efficient than G-CNNs. As indicated above, symmetry prior is more important when training data are fewer, and the data efficiency of our model implies that p4-E4R18 and p4m-E4R18 can better exploit the symmetry of data.
6 Limitation and Future Work
From the theory perspective, although we extend the general equivariant framework from linear cases to common non-linear cases, there are two limitations to the generalization: 1) we only focus on layers with the pair-wise interactions proposed in Eqn. (7), and higher-order interaction cases are not included; 2) we only consider the regular group action in this framework, which is a special case of general group actions. We leave extending this equivariant framework to these cases as future work.
From the practice perspective, we only give a particular implementation of Eqn. (10) based on an intuitive insight, and further exploration of the space of equivariant maps is needed. An alternative is to exploit searching algorithms from neural architecture search [42, 38, 69] to find a more powerful and efficient model. Besides this, our E4-layer is slower than G-CNN despite having fewer FLOPs, because convolutions are optimized by many speedup libraries. Our layer is implemented only in a naive way, that is, using the unfold operation followed by a summation operation for the aggregation step. In the future, we will try to implement a customized CUDA kernel for GPU acceleration to reduce the training and inference time of our model.
7 Conclusions
In this work, we propose a general framework of group equivariant models which delivers a unified understanding of previous group equivariant models. Based on the new understanding, we propose a novel, efficient and powerful group equivariant layer which can serve as a drop-in replacement for convolutional layers. Extensive experiments demonstrate that the E4-layer is more powerful, parameter efficient and computationally efficient than group convolution layers and their variants. Through a side-by-side comparison with G-CNNs, we demonstrate that our E4-layer can significantly improve the data efficiency of equivariant models, which shows great potential for reducing the cost of collecting data.
Acknowledgment
Zhouchen Lin was supported by the NSF China (No.s 61625301 and 61731018), NSFC Tianyuan Fund for Mathematics (No. 12026606) and Project 2020BD006 supported by PKU-Baidu Fund. Yisen Wang is partially supported by the National Natural Science Foundation of China under Grant 62006153, and Project 2020BD006 supported by PKU-Baidu Fund. | 1. What is the main contribution of the paper regarding generalizing group equivariant neural networks?
2. How does the proposed method improve upon previous works in terms of efficiency and non-linearity?
3. What are the concerns regarding the input dependency on g and \tilde{g} in the proof of Theorem 1?
4. How does the paper deliver a unified understanding of previous group equivariant models, and how could this be improved?
5. What is the intuition behind Equation 12, and why is this layer more powerful than standard convolutions?
6. How does the paper solve the spatial agnostic problem, and how does this differ from traditional equivariant methods?
7. Practical question: How is the aggregation function/convolution kernel parametrized, and how does this relate to continuous kernel convolutions such as PointConvs and LieConvs?
8. Minor comments: What motivates the importance of Equation 10 for efficient implementations, and how could this be better introduced?
9. Implementation details: How do we ensure that the output of V represents transformed feature values of some neighborhood, and how does one practically code this?
10. Why replace only the second group conv layer in a res-block, and what are the practical limitations of applying this method to large datasets?
11. How does the FLOP analysis relate to computation times, and what overhead may still exist despite reduced FLOPs? | Summary Of The Paper
Review | Summary Of The Paper
The paper describes a generalization of group equivariant neural networks that typically rely on linear operators such as the group convolution, or pseudo linear operators such as attentive group convolution. The authors take a look at the aggregation function of the convolution operator (which is usually linear via a kernel times feature values followed by sum aggregation) and replaces it with a non-linear function that generally could depend not just on relative positions, but also on the central and neighbor feature values. The result is a more non-linear operator than group convolutions, which are a special case of this framework.
The paper is accurate and has an appropriate experimental section with decent ablation studies. The proposed work compares favorably compared to the baselines both in terms of accuracy, parameters and nr of operations (FLOPs).
Review
I think the paper describes a great idea of modifying the main workhorse of G-CNNs (the group convolution) to something non-linear. Also, the paper focuses on efficiency, which is a relevant aspect of G-CNNs. The paper performs a decent set of ablation studies, though these are done on MNIST, so it may not generalize to more challenging datasets, but the method is further validated on CIFAR10 and CIFAR100. Since the paper focused on efficiency, it was a pity not to see the method being applied to problems that actually require efficient implementations (such as e.g. imagenet). I think this is one of the main limitations of the paper, but up to some other concerns (see below), I think it is a great paper.
I think the paper could mainly improve by presenting additional intuition behind the proposed method and discussing the main mechanism that motivates why it should work better (see second upcoming concern). This could possibly be achieved by explicitly framing conventional methods as special cases of this framework, which enables one to precisely point to differences that could explain why the current method should work better. I find this important particularly because the method is presented as a generalization of other works, and it would be great to explicitly see related works as special cases. I think this would help interpretation.
All in all I have the following comments:
[Thm 1 + proof | concern about input dependency on g and \tilde{g}] I think the construction is not very transparent with respect to the dependency of of H to it’s inputs. Namely in equation 8, the inputs of H_{g,\tilde{g}} are denoted as arbitrary open “slots” (with \cdot) for arbitrary inputs. However, these inputs in practice do depend again on g and \tilde{g}, as given in equation 7. With this in mind I don't think the proof is formally correct and is in my opinion even misleading. Though when explicitly taking the dependency into account the theorem and proof is valid . After 154 I suggest not writing \cdot for the inputs but f(g) and f(\tilde{g}) respectively. (Or please correct me if my concern is ill founded).
Related to this concern: In equation 11 x and y still depend on g or \tilde{g}. This dependency seems important but is currently obscured, is this a problem?
[related work and interpretation of the generalization] Then, the proof of thm. 1 follows the exact same structure as thm.1 of [Bekkers] and reference [23] for the attentive version. It would be great if these works and other related works could be discussed in more detail when it comes to claims regarding this work generalizing many group equivariant works. Just to be clear, I do not dispute this claim, it is very well founded, it is just that the paper contains sentence such as “this work… delivers a unified understanding on the previous group equivariant models” (line 357). In my opinion, the understanding of this unified view could be better delivered as the equation is quite abstract and I think can be better understood with some specific (mathematically precises) special cases such as g-conv or attentive g-conv.
For example, an additional intuitive explanation of equation 12 would be very helpful. Why is this layer more powerful than standard convs?
ref: [Bekkers] Erik J Bekkers. B-spline cnns on lie groups. In International Conference on Learning Representations, 2019.
[solution to the spatial agnostic problem?] Then I have doubts regarding the claim of solving “the spatial-agnostic problem”. I do not understand how this problem is solved. By construction equivariant methods have to be spatial agnostic otherwise they can’t be equivariant and thus operations only depend on relative positions/transformations. The only difference now is that the convolution kernels are conditioned on the feature values at the central or neighbor locations. Maybe it is mentioned somewhere in other words, but do I understand correctly that merely the dependency of the conv kernel (as in deformable convs), or aggregation function on local features solves the spatial agnostic problem? My apologies for possibly misinterpreting parts of the paper, but perhaps the notions of equivariance vs “conditioning” can be decoupled and explicitly discussed in terms of the spatial agnostic problem. For me this part was slightly confusing.
[Practical question] How is the aggregation function/convolution kernel parametrized. In continuous kernel convolutions such as e.g. PointConvs [Wu et al.] or LieConvs [Finzi et al.], the kernel is parametrized by an MLP (though it does not depend on neighborhood feature values). Then for every possible g^{-1}\tilde{g} the kernel is defined. Is this also the case in this paper, or is the kernel (e.g. in eq 12) indexed for a finite set of possible values for g^{-1}\tilde{g}? (made possible because only a discrete group is considered)
refs [Wu et al.] Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9621–9630, 2019. [Finzi et al.] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In International Conference on Machine Learning, pages 3165–3176. PMLR, 2020.
[Minor comments. ] When equation 10 is presented I didn’t understand why this property was important. But then later on this is used for efficient implementations where the sub-group part is handled in one step by aggregating neighborhood information. Perhaps the section about eq 10 could be better introduced with such a motivation.
Line 217. What is "s"? I think this symbol is not defined. Line 292: same here, I don't understand what a "slice number s" is.
Regarding the implementation of eq 12. The output of V represents transformed feature values of some neighborhood, and it should follow the same ordering as the kernel values outputted by K. Is this automatically obtained or does one have to practically code this?
Line 320, why is only the second group conv layer in a res-block replaced and not all?
Section 5.2. Since the focus is on efficiency I was a bit disappointed not to see any application to large datasets that actually require efficiency. Were there still practical limitations?
The FLOP analysis is great, still I would like to get a sense of computation times. For example, I can imagine that all the stacking of features and what not leads to quite some overhead which, despite less FLOPs, doesn’t lead to a reduction in computation time. |
NIPS | Title
The Landscape of Non-convex Empirical Risk with Degenerate Population Risk
Abstract
The landscape of empirical risk has been widely studied in a series of machine learning problems, including low-rank matrix factorization, matrix sensing, matrix completion, and phase retrieval. In this work, we focus on the situation where the corresponding population risk is a degenerate non-convex loss function, namely, the Hessian of the population risk can have zero eigenvalues. Instead of analyzing the non-convex empirical risk directly, we first study the landscape of the corresponding population risk, which is usually easier to characterize, and then build a connection between the landscape of the empirical risk and its population risk. In particular, we establish a correspondence between the critical points of the empirical risk and its population risk without the strongly Morse assumption, which is required in existing literature but not satisfied in degenerate scenarios. We also apply the theory to matrix sensing and phase retrieval to demonstrate how to infer the landscape of empirical risk from that of the corresponding population risk.
1 Introduction
Understanding the connection between empirical risk and population risk can yield valuable insight into an optimization problem [1, 2]. Mathematically, the empirical risk f(x) with respect to a parameter vector x is defined as
f(x) ≜ (1/M) ∑_{m=1}^{M} L(x, y_m).
Here, L(·) is a loss function and we are interested in losses that are non-convex in x in this work. y = [y1, · · · ,yM ]> is a vector containing the random training samples, and M is the total number of samples contained in the training set. The population risk, denoted as g(x), is the expectation of the empirical risk with respect to the random measure used to generate the samples y, i.e., g(x) = Ef(x). Recently, the landscapes of empirical and population risk have been extensively studied in many fields of science and engineering, including machine learning and signal processing. In particular, the local or global geometry has been characterized in a wide variety of convex and non-convex problems, such as matrix sensing [3, 4], matrix completion [5, 6, 7], low-rank matrix factorization [8, 9, 10], phase retrieval [11, 12], blind deconvolution [13, 14], tensor decomposition [15, 16, 17], and so on. In this work, we focus on analyzing global geometry, which requires understanding not only regions near critical points but also the landscape away from these points.
It follows from empirical process theory that the empirical risk can uniformly converge to the corresponding population risk as M →∞ [18]. A recent work [1] exploits the uniform convergence of the empirical risk to the corresponding population risk and establishes a correspondence of their
critical points when provided with enough samples. The authors build their theoretical guarantees based on the assumption that the population risk is strongly Morse, namely, the Hessian of the population risk cannot have zero eigenvalues at or near the critical points1. However, many problems of practical interest do have Hessians with zero eigenvalues at some critical points. We refer to such problems as degenerate. To illustrate this, we present the very simple rank-1 matrix sensing and phase retrieval examples below.
Example 1.1. (Rank-1 matrix sensing). Given measurements y_m = ⟨A_m, x*x*ᵀ⟩, 1 ≤ m ≤ M, where x* ∈ R^N and A_m ∈ R^{N×N} denote the true signal and the m-th Gaussian sensing matrix with entries following N(0, 1), respectively. The following empirical risk is commonly used in practice
f(x) = (1/(4M)) ∑_{m=1}^{M} ( ⟨A_m, xxᵀ⟩ − y_m )².
The corresponding population risk is then
g(x) = E f(x) = (1/4) ‖xxᵀ − x*x*ᵀ‖_F².
Elementary calculations give the gradient and Hessian of the above population risk as
∇g(x) = (xxᵀ − x*x*ᵀ)x, and ∇²g(x) = 2xxᵀ − x*x*ᵀ + ‖x‖₂² I_N. We see that g(x) has three critical points x = 0, ±x*. Observe that the Hessian at x = 0 is ∇²g(0) = −x*x*ᵀ, which does have zero eigenvalues and thus g(x) does not satisfy the strongly Morse condition required in [1]. The conclusion extends to general low-rank matrix sensing.
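These closed-form expressions are easy to sanity-check numerically. The NumPy sketch below (our own illustration) verifies the gradient formula by central finite differences at a random point and shows that ∇²g(0) = −x*x*ᵀ has N − 1 zero eigenvalues.

import numpy as np

rng = np.random.default_rng(0)
N = 5
x_star = rng.standard_normal(N)
g    = lambda x: 0.25 * np.linalg.norm(np.outer(x, x) - np.outer(x_star, x_star), 'fro') ** 2
grad = lambda x: (np.outer(x, x) - np.outer(x_star, x_star)) @ x
hess = lambda x: 2 * np.outer(x, x) - np.outer(x_star, x_star) + np.dot(x, x) * np.eye(N)

x, eps = rng.standard_normal(N), 1e-6
num_grad = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps) for e in np.eye(N)])
print(np.max(np.abs(num_grad - grad(x))))              # close to zero (finite-difference error)
print(np.round(np.linalg.eigvalsh(hess(np.zeros(N))), 6))
# -> one negative eigenvalue -||x*||^2 and N - 1 zero eigenvalues, i.e., a degenerate critical point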
Example 1.2. (Phase retrieval). Given measurements y_m = |⟨a_m, x*⟩|², 1 ≤ m ≤ M, where x* ∈ R^N and a_m ∈ R^N denote the true signal and the m-th Gaussian random vector with entries following N(0, 1), respectively. The following empirical risk is commonly used in practice
f(x) = (1/(2M)) ∑_{m=1}^{M} ( |⟨a_m, x⟩|² − y_m )². (1.1)
The corresponding population risk is then
g(x) = E f(x) = ‖xxᵀ − x*x*ᵀ‖_F² + (1/2)(‖x‖₂² − ‖x*‖₂²)². (1.2)
Elementary calculations give the gradient and Hessian of the above population risk as
∇g(x) = 6‖x‖₂² x − 2‖x*‖₂² x − 4(x*ᵀx) x*, ∇²g(x) = 12xxᵀ − 4x*x*ᵀ + 6‖x‖₂² I_N − 2‖x*‖₂² I_N.
We see that the population loss has critical points x = 0, ±x*, and (1/√3)‖x*‖₂ w with wᵀx* = 0 and ‖w‖₂ = 1. Observe that the Hessian at x = (1/√3)‖x*‖₂ w is ∇²g((1/√3)‖x*‖₂ w) = 4‖x*‖₂² wwᵀ − 4x*x*ᵀ, which also has zero eigenvalues and thus g(x) does not satisfy the strongly Morse condition required in [1].
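Again, the degeneracy can be checked numerically. The following sketch (our illustration, assuming N ≥ 3 so that the orthogonal complement of {w, x*} is non-trivial) confirms that the Hessian at the saddle x = (1/√3)‖x*‖₂ w has zero eigenvalues.

import numpy as np

rng = np.random.default_rng(1)
N = 5
x_star = rng.standard_normal(N)
hess = lambda x: (12 * np.outer(x, x) - 4 * np.outer(x_star, x_star)
                  + 6 * x @ x * np.eye(N) - 2 * x_star @ x_star * np.eye(N))

# Build w orthogonal to x_star with unit norm, then evaluate the Hessian at the saddle.
w = rng.standard_normal(N)
w -= (w @ x_star) / (x_star @ x_star) * x_star
w /= np.linalg.norm(w)
x_saddle = np.linalg.norm(x_star) / np.sqrt(3) * w
print(np.round(np.linalg.eigvalsh(hess(x_saddle)), 6))
# -> one negative and one positive eigenvalue, and N - 2 (numerically) zero eigenvalues,
#    so g is degenerate and not strongly Morse at this saddle point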
In this work, we aim to fill this gap and establish the correspondence between the critical points of empirical risk and its population risk without the strongly Morse assumption. In particular, we work on the situation where the population risk may be a degenerate non-convex function, i.e., the Hessian of the population risk can have zero eigenvalues. Given the correspondence between the critical points of the empirical risk and its population risk, we are able to build a connection between the landscape of the empirical risk and its population counterpart. To illustrate the effectiveness of this theory, we also apply it to applications such as matrix sensing (with general rank) and phase retrieval to show how to characterize the landscape of the empirical risk via its corresponding population risk.
1A twice differentiable function f(x) is Morse if all of its critical points are non-degenerate, i.e., its Hessian has no zero eigenvalues at all critical points. Mathematically, ∇f(x) = 0 implies all λ_i(∇²f(x)) ≠ 0, with λ_i(·) being the i-th eigenvalue of the Hessian. A twice differentiable function f(x) is (ε, η)-strongly Morse if ‖∇f(x)‖₂ ≤ ε implies min_i |λ_i(∇²f(x))| ≥ η. One can refer to [1] for more information.
The remainder of this work is organized as follows. In Section 2, we present our main results on the correspondence between the critical points of the empirical risk and its population risk. In Section 3, we apply our theory to the two applications, matrix sensing and phase retrieval. In Section 4, we conduct experiments to further support our analysis. Finally, we conclude our work in Section 5.
Notation: For a twice differentiable function f(·): ∇f, ∇²f, grad f, and hess f denote the gradient and Hessian of f in the Euclidean space and with respect to a Riemannian manifold M, respectively. Note that the Riemannian gradient/Hessian (grad/hess) reduces to the Euclidean gradient/Hessian (∇/∇²) when the domain of f is the Euclidean space. For a scalar function with a matrix variable, e.g., f(U), we represent its Euclidean Hessian with a bilinear form defined as ∇²f(U)[D, D] = ∑_{i,j,p,q} (∂²f(U)/(∂U(i,j) ∂U(p,q))) D(i,j) D(p,q) for any D having the same size as U. Denote B(l) as a compact and connected subset of a Riemannian manifold M with l being a problem-specific parameter.²
2 Main Results
In this section, we present our main results on the correspondence between the critical points of the empirical risk and its population risk. Let M be a Riemannian manifold. For notational simplicity, we use x ∈ M to denote the parameter vector when we introduce our theory.³ We begin by introducing the assumptions needed to build our theory. Denote f(x) and g(x) as the empirical risk and the corresponding population risk defined for x ∈ M, respectively. Let ε and η be two positive constants. Assumption 2.1. The population risk g(x) satisfies
|λ_min(hess g(x))| ≥ η (2.1) in the set D ≜ {x ∈ B(l) : ‖grad g(x)‖₂ ≤ ε}. Here, λ_min(·) denotes the minimal eigenvalue (not the eigenvalue of smallest magnitude).
Assumption 2.1 is closely related to the robust strict saddle property [19] – it requires that any point with a small gradient has either a positive definite Hessian (λ_min(hess g(x)) ≥ η) or a Hessian with a negative curvature (λ_min(hess g(x)) ≤ −η). It is weaker than the (ε, η)-strongly Morse condition, as it allows the Hessian hess g(x) to have zero eigenvalues in D, provided it also has at least one sufficiently negative eigenvalue.
[Figure 1: Phase retrieval with N = 1. Three panels plot the risks, gradients, and Hessians of the population and empirical risk over x ∈ [−1.5, 1.5].]
Assumption 2.2. (Gradient proximity). The gradients of the empirical risk and population risk satisfy
sup_{x∈B(l)} ‖grad f(x) − grad g(x)‖₂ ≤ ε/2. (2.2)
Assumption 2.3. (Hessian proximity). The Hessians of the empirical risk and population risk satisfy
sup_{x∈B(l)} ‖hess f(x) − hess g(x)‖₂ ≤ η/2. (2.3)
To illustrate the above three assumptions, we use the phase retrieval Example 1.2 with N = 1, x* = 1, and M = 30. We present the population risk g(x) = (3/2)(x² − 1)² and the empirical risk f(x) = (1/(2M)) ∑_{m=1}^{M} a_m⁴ (x² − 1)² together with their gradients and Hessians in Figure 1. It can be seen that in the small gradient region (the three parts between the light blue vertical dashed lines), the absolute value of the population Hessian's minimal eigenvalue (which equals the absolute value of the Hessian here since N = 1) is bounded away from zero. In addition, with enough measurements, e.g., M = 30, we do see that the gradients and Hessians of the empirical and population risk are close to each other.
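For N = 1 the quantities in Figure 1 have simple closed forms, so Assumptions 2.1–2.3 can be probed directly. The sketch below is our own illustration; the gradient threshold ε = 0.3 is an arbitrary choice for demonstration.

import numpy as np

rng = np.random.default_rng(2)
M = 30
a = rng.standard_normal(M)
c = np.mean(a ** 4)                       # approximately 3 for large M

dg  = lambda x: 6 * x * (x ** 2 - 1)      # population gradient g'(x)
df  = lambda x: 2 * c * x * (x ** 2 - 1)  # empirical gradient f'(x)
d2g = lambda x: 18 * x ** 2 - 6           # population Hessian g''(x)
d2f = lambda x: c * (6 * x ** 2 - 2)      # empirical Hessian f''(x)

xs = np.linspace(-1.5, 1.5, 301)
print(np.max(np.abs(df(xs) - dg(xs))))    # gradient gap (Assumption 2.2); shrinks as M grows
print(np.max(np.abs(d2f(xs) - d2g(xs))))  # Hessian gap (Assumption 2.3)
small_grad = np.abs(dg(xs)) <= 0.3        # the small-gradient region D with threshold 0.3
print(np.min(np.abs(d2g(xs[small_grad]))))  # |Hessian| bounded away from zero there (Assumption 2.1)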
We are now in a position to state our main theorem. Theorem 2.1. Denote f and g as the non-convex empirical risk and the corresponding population risk, respectively. Let D̄ be any maximal connected and compact subset of D with a C2 boundary ∂D̄. Under Assumptions 2.1-2.3 stated above, the following statements hold:
2The subset B(l) can vary in different applications. For example, we define B(l) , {U ∈ RN×k∗ : ‖UU>‖F ≤ l} in matrix sensing and B(l) , {x ∈ RN : ‖x‖2 ≤ l} in phase retrieval.
3For problems with matrix variables, such as matrix sensing introduced in Section 3, x is the vectorized representation of the matrix.
(a) D̄ contains at most one local minimum of g. If g has K (K = 0, 1) local minima in D̄, then f also has K local minima in D̄.
(b) If g has strict saddles in D̄, then if f has any critical points in D̄, they must be strict saddle points.
The proof of Theorem 2.1 is given in Appendix A (see supplementary material). In particular, we prove Theorem 2.1 by extending the proof of Theorem 2 in [1] without requiring the strongly Morse assumption on the population risk. We first present two key lemmas, in which we show that there exists a correspondence between the critical points of the empirical risk and those of the population risk in a connected and compact set under certain assumptions, and the small gradient area can be partitioned into many maximal connected and compact components with each component either containing one local minimum or no local minimum. Finally, we finish the proof of Theorem 2.1 by using these two key lemmas.
Part (a) in Theorem 2.1 indicates a one-to-one correspondence between the local minima of the empirical risk and its population risk. We can further bound the distance between the local minima of the empirical risk and its population risk. We summarize this result in the following corollary, which is proved in Appendix C (see supplementary material). Corollary 2.1. Let {x̂k}_{k=1}^{K} and {xk}_{k=1}^{K} denote the local minima of the empirical risk and its population risk, and Dk be the maximal connected and compact subset of D containing xk and x̂k. Let ρ be the injectivity radius of the manifold M. Suppose the pre-image of Dk under the exponential mapping Expxk(·) is contained in the ball at the origin of the tangent space TxkM with radius ρ. Assume the differential of the exponential mapping DExpxk(v) has an operator norm bounded by σ for all v ∈ TxkM with norm less than ρ. Suppose the pullback of the population risk onto the tangent space TxkM has Lipschitz Hessian with constant LH at the origin. Then as long as ε ≤ η^2/(2σLH), the Riemannian distance between x̂k and xk satisfies
dist(x̂k, xk) ≤ 2σε/η, 1 ≤ k ≤ K.
In general, the two parameters ε and η used in Assumptions 2.1-2.3 can be obtained by lower bounding |λmin(hess g(x))| in a small gradient region. In this way, one can adjust the size of the small gradient region to get an upper bound on ε, and use the lower bound for |λmin(hess g(x))| as η. In the case when it is not easy to directly bound |λmin(hess g(x))| in a small gradient region, one can also first choose a region for which it is easy to find the lower bound, and then show that the gradient has a large norm outside of this region, as we do in Section 3. For phase retrieval, note that |λmin(∇2g(x))| and ‖∇g(x)‖2 roughly scale with ‖x?‖2^2 and ‖x?‖2^3 in the regions near critical points, which implies that η and the upper bound on ε should also scale with ‖x?‖2^2 and ‖x?‖2^3, respectively. For matrix sensing, in a similar way, |λmin(hess g(U))| and ‖grad g(U)‖F roughly scale with λk and λk^{1.5} in the regions near critical points, which implies that η and the upper bound on ε should also scale with λk and λk^{1.5}, respectively. Note, however, with more samples (larger M ), ε can be set to smaller values, while η typically remains unchanged. One can refer to Section 3 for more details on the notation as well as how to choose η and upper bounds on ε in the two applications.
Note that we have shown the correspondence between the critical points of the empirical risk and its population risk without the strongly Morse assumption in the above theorem. In particular, we relax the strongly Morse assumption to our Assumption 2.1, which implies that we are able to handle the scenario where the Hessian of the population risk has zero eigenvalues at some critical points or even everywhere in the set D. With this correspondence, we can then establish a connection between the landscape of the empirical risk and the population risk, and thus for problems where the population risk has a favorable geometry, we are able to carry this favorable geometry over to the corresponding empirical risk. To illustrate this in detail, we highlight two applications, matrix sensing and phase retrieval, in the next section.
3 Applications
In this section, we illustrate how to completely characterize the landscape of an empirical risk from its population risk using Theorem 2.1. In particular, we apply Theorem 2.1 to two applications, matrix sensing and phase retrieval. In order to use Theorem 2.1, all we need is to verify that the empirical risk and population risk in these two applications satisfy the three assumptions stated in Section 2.
3.1 Matrix Sensing
Let X ∈ RN×N be a symmetric, positive semi-definite matrix with rank r. We measure X with a symmetric Gaussian linear operator A : RN×N → RM . The m-th entry of the observation y = A(X) is given as ym = 〈X,Am〉, where Am = (1/2)(Bm + Bm>) with Bm being a Gaussian random matrix with entries following N (0, 1/M). The adjoint operator A∗ : RM → RN×N is defined as A∗(y) = ∑_{m=1}^{M} ymAm. It can be shown that E(A∗A) is the identity operator, i.e., E(A∗A(X)) = X. To find a low-rank approximation of X when given the measurements y = A(X), one can solve the following optimization problem:
min_{X̃∈RN×N} (1/4)‖A(X̃ − X)‖22  s. t.  rank(X̃) ≤ k, X̃ ⪰ 0. (3.1)
Here, we assume that r/2 ≤ k ≤ r ≪ N . By using the Burer-Monteiro type factorization [20, 21], i.e., letting X̃ = UU> with U ∈ RN×k, we can transform the above optimization problem into the following unconstrained one:
min_{U∈RN×k} f(U) ≜ (1/4)‖A(UU> − X)‖22. (3.2)
Observe that this empirical risk f(U) is a non-convex function due to the quadratic term UU>. With some elementary calculation, we obtain the gradient and Hessian of f(U), which are given as
∇f(U) = A∗A(UU> − X)U,
∇2f(U)[D,D] = (1/2)‖A(UD> + DU>)‖22 + 〈A∗A(UU> − X), DD>〉.
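These expressions are straightforward to evaluate numerically. The sketch below (illustrative dimensions and sample size, not taken from the paper) constructs the symmetric Gaussian operator A, implements f(U), ∇f(U), and the bilinear form ∇2f(U)[D,D], and checks them against finite differences:

import numpy as np

rng = np.random.default_rng(0)
N, k, r, M = 8, 2, 3, 200

# Ground-truth X and symmetric Gaussian measurement matrices A_m = (B_m + B_m^T)/2
Ustar = rng.standard_normal((N, r))
X = Ustar @ Ustar.T
B = rng.standard_normal((M, N, N)) / np.sqrt(M)
A = 0.5 * (B + np.transpose(B, (0, 2, 1)))

def A_op(Z):                                   # [A(Z)]_m = <Z, A_m>
    return np.einsum('mij,ij->m', A, Z)

def f(U):
    return 0.25 * np.sum(A_op(U @ U.T - X) ** 2)

def grad_f(U):                                 # A*A(UU^T - X) U
    y = A_op(U @ U.T - X)
    AstarA = np.einsum('m,mij->ij', y, A)
    return AstarA @ U

def hess_f_bilinear(U, D):                     # (1/2)||A(UD^T + DU^T)||^2 + <A*A(UU^T - X), DD^T>
    y = A_op(U @ U.T - X)
    AstarA = np.einsum('m,mij->ij', y, A)
    return 0.5 * np.sum(A_op(U @ D.T + D @ U.T) ** 2) + np.sum(AstarA * (D @ D.T))

U = rng.standard_normal((N, k))
D = rng.standard_normal((N, k))
t = 1e-4
fd_grad = (f(U + t * D) - f(U - t * D)) / (2 * t)
fd_hess = (f(U + t * D) - 2 * f(U) + f(U - t * D)) / t**2
print("directional grad :", np.sum(grad_f(U) * D), "vs finite difference:", fd_grad)
print("hess_f[D,D]      :", hess_f_bilinear(U, D), "vs finite difference:", fd_hess)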
Computing the expectation of f(U), we get the population risk
g(U) = Ef(U) = (1/4)‖UU> − X‖2F , (3.3)
whose gradient and Hessian are given as
∇g(U) = (UU> − X)U and ∇2g(U)[D,D] = (1/2)‖UD> + DU>‖22 + 〈UU> − X, DD>〉.
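The closeness of the empirical and population gradients required by Assumption 2.2 can also be probed empirically. The following sketch (illustrative sizes; a handful of random test points rather than a true supremum) measures the largest observed deviation ‖∇f(U) − ∇g(U)‖F as M grows:

import numpy as np

rng = np.random.default_rng(1)
N, k, r = 8, 2, 3
Ustar = rng.standard_normal((N, r))
X = Ustar @ Ustar.T

def grad_gap(M, n_trials=20):
    B = rng.standard_normal((M, N, N)) / np.sqrt(M)
    A = 0.5 * (B + np.transpose(B, (0, 2, 1)))
    gap = 0.0
    for _ in range(n_trials):
        U = rng.standard_normal((N, k))
        R = U @ U.T - X
        y = np.einsum('mij,ij->m', A, R)
        g_emp = np.einsum('m,mij->ij', y, A) @ U   # A*A(UU^T - X) U
        g_pop = R @ U                              # (UU^T - X) U
        gap = max(gap, np.linalg.norm(g_emp - g_pop))
    return gap

for M in (50, 200, 800, 3200):
    print(M, grad_gap(M))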
The landscape of the above population risk has been studied in the general RN×k space with k = r in [8]. The landscape of its variants, such as the asymmetric version with or without a balanced term, has also been studied in [4, 22]. It is well known that there exists an ambiguity in the solution of (3.2) due to the fact that UU> = UQQ>U> holds for any orthogonal matrix Q ∈ Rk×k . This implies that the Euclidean Hessian∇2g(U) always has zero eigenvalues for k > 1 at critical points, even at local minima, violating not only the strongly Morse condition but also Assumption 2.1. To overcome this difficulty, we propose to formulate an equivalent problem on a proper quotient manifold (rather than the general RN×k space as in [8]) to remove this ambiguity and make sure Assumption 2.1 is satisfied.
3.1.1 Background on the quotient manifold
To keep our work self-contained, we provide a brief introduction to quotient manifolds in this section before we verify our three assumptions. One can refer to [23, 24] for more information. We make the assumption that the matrix variable U is always full-rank. This is required in order to define a proper quotient manifold, since otherwise the equivalence classes defined below will have different dimensions, violating Proposition 3.4.4 in [23]. Thus, we focus on the case that U belongs to the manifold RN×k∗ , i.e., the set of all N × k real matrices with full column rank. To remove the parameterization ambiguity caused by the factorization X̃ = UU>, we define an equivalence class for any U ∈ RN×k∗ as [U] , {V ∈ RN×k∗ : VV> = UU>} = {UQ : Q ∈ Rk×k,Q>Q = Ik}. We will abuse notation and use U to denote also its equivalence class [U] in the following. Let M denote the set of all equivalence classes of the above form, which admits a (unique) differential structure that makes it a (Riemannian) quotient manifold, denoted asM = RN×k∗ /Ok. Here Ok is the orthogonal group {Q ∈ Rk×k : QQ> = Q>Q = Ik}. Since the objective function g(U) in
(3.3) (and f(U) in (3.2)) is invariant under the equivalence relation, it induces a unique function on the quotient manifold RN×k∗ /Ok, also denoted as g(U). Note that the tangent space TURN×k∗ of the manifold RN×k∗ at any point U ∈ RN×k∗ is still RN×k∗ . We define the vertical space VUM as the tangent space to the equivalence classes (which are themselves manifolds): VUM ≜ {UΩ : Ω ∈ Rk×k, Ω> = −Ω}. We also define the horizontal space HUM as the orthogonal complement of the vertical space VUM in the tangent space TURN×k∗ = RN×k∗ : HUM ≜ {D ∈ RN×k∗ : D>U = U>D}. For any matrix Z ∈ RN×k∗ , its projection onto the horizontal space HUM is given as PU(Z) = Z − UΩ, where Ω is a skew-symmetric matrix that solves the following Sylvester equation: ΩU>U + U>UΩ = U>Z − Z>U. Then, we can define the Riemannian gradient (grad ·) and Hessian (hess ·) of the empirical risk and population risk on the quotient manifold M, which are given in the supplementary material.
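The horizontal projection is simple to compute in practice. The sketch below (illustrative dimensions) solves the Sylvester equation with SciPy and verifies that the resulting Ω is skew-symmetric and that the projected direction satisfies D>U = U>D:

import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
N, k = 8, 2
U = rng.standard_normal((N, k))        # assumed to have full column rank
Z = rng.standard_normal((N, k))

# Solve  Omega U^T U + U^T U Omega = U^T Z - Z^T U  for Omega
G = U.T @ U
Omega = solve_sylvester(G, G, U.T @ Z - Z.T @ U)
D = Z - U @ Omega                      # P_U(Z), the horizontal component of Z

print("Omega skew-symmetric:", np.allclose(Omega, -Omega.T))
print("D horizontal (D^T U = U^T D):", np.allclose(D.T @ U, U.T @ D))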
3.1.2 Verifying Assumptions 2.1, 2.2, and 2.3
Assume that X = WΛW> with W ∈ RN×r and Λ = diag([λ1, · · · , λr]) ∈ Rr×r is an eigendecomposition of X. Without loss of generality, we assume that the eigenvalues of X are in descending order. Let Λu ∈ Rk×k be a diagonal matrix that contains any k non-zero eigenvalues of X and Wu ∈ RN×k contain the k eigenvectors of X associated with the eigenvalues in Λu. Let Λk = diag([λ1, · · · , λk]) be the diagonal matrix that contains the largest k eigenvalues of X and Wk ∈ RN×k contain the k eigenvectors of X associated with the eigenvalues in Λk. Q ∈ Ok is any orthogonal matrix. The following lemma provides the global geometry of the population risk in (3.3), which also determines the values of and η in Assumption 2.1.
Lemma 3.1. Define U ≜ {U = WuΛu^{1/2}Q>}, U? ≜ {U? = WkΛk^{1/2}Q>} ⊆ U , and U?s ≜ U\U?. Denote κ ≜ √(λ1/λk) ≥ 1 as the condition number of any U? ∈ U?. Define the following regions:
R1 ≜ {U ∈ RN×k∗ : min_{P∈Ok} ‖U − U?P‖F ≤ 0.2κ^{-1}√λk, ∀U? ∈ U?},
R′2 ≜ {U ∈ RN×k∗ : σk(U) ≤ (1/2)√λk, ‖UU>‖F ≤ (8/7)‖U?U?>‖F , ‖grad g(U)‖F ≤ (1/80)λk^{3/2}},
R′′2 ≜ {U ∈ RN×k∗ : σk(U) ≤ (1/2)√λk, ‖UU>‖F ≤ (8/7)‖U?U?>‖F , ‖grad g(U)‖F > (1/80)λk^{3/2}},
R′3 ≜ {U ∈ RN×k∗ : σk(U) > (1/2)√λk, min_{P∈Ok} ‖U − U?P‖F > 0.2κ^{-1}√λk, ‖UU>‖F ≤ (8/7)‖U?U?>‖F},
R′′3 ≜ {U ∈ RN×k∗ : ‖UU>‖F > (8/7)‖U?U?>‖F},
where σk(U) denotes the k-th singular value of a matrix U ∈ RN×k∗ , i.e., the smallest singular value of U. These regions also induce regions in the quotient manifold M in an apparent way. We additionally assume that λk+1 ≤ (1/12)λk and k ≤ r ≪ N . Then, the following properties hold:
(1) For any U ∈ U , U is a critical point of the population risk g(U) in (3.3).
(2) For any U? ∈ U?, U? is a global minimum of g(U) with λmin(hess g(U?)) ≥ 1.91λk. Moreover, for any U ∈ R1, we have λmin(hess g(U)) ≥ 0.19λk.
(3) For any U?s ∈ U?s , U?s is a strict saddle point of g(U) with λmin(hess g(U?s)) ≤ −0.91λk. Moreover, for any U ∈ R′2, we have λmin(hess g(U)) ≤ −0.06λk.
(4) For any U ∈ R′′2 ∪ R′3 ∪ R′′3 , we have a large gradient. In particular, ‖grad g(U)‖F > (1/80)λk^{3/2} if U ∈ R′′2 , (1/60)κ^{-1}λk^{3/2} if U ∈ R′3, and (5/84)k^{1/4}λk^{3/2} if U ∈ R′′3 .
The proof of Lemma 3.1 is inspired by the proofs of [8, Theorem 4], [3, Lemma 13] and [4, Theorem 5], and is given in Appendix D (see supplementary material). Therefore, we can set ε ≤ min{1/80, (1/60)κ^{-1}}λk^{3/2} and η = 0.06λk. Then, the population risk given in (3.3) satisfies Assumption 2.1. It can be seen that each critical point of the population risk g(U) in (3.3) is either a global minimum or a strict saddle, which inspires us to carry this favorable geometry over to the corresponding empirical risk.
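On a small instance these claims can be sanity-checked numerically. The sketch below (illustrative sizes and spectrum, chosen so that λk+1 ≤ λk/12) verifies that points of the form WuΛu^{1/2} are critical and reports the smallest curvature of the Euclidean Hessian restricted to horizontal directions, which at a critical point reflects the curvature seen by the quotient-manifold Hessian:

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
N, k, r = 8, 2, 3
lam = np.array([3.0, 2.0, 0.1])                 # lambda_{k+1} <= lambda_k / 12
W, _ = np.linalg.qr(rng.standard_normal((N, r)))
X = W @ np.diag(lam) @ W.T

def grad_g(U):
    return (U @ U.T - X) @ U

def hess_matrix(U):
    # (Nk) x (Nk) matrix of the Euclidean Hessian map D -> (UD^T + DU^T)U + (UU^T - X)D
    H = np.zeros((N * k, N * k))
    for j in range(N * k):
        D = np.zeros(N * k); D[j] = 1.0; D = D.reshape(N, k)
        HD = (U @ D.T + D @ U.T) @ U + (U @ U.T - X) @ D
        H[:, j] = HD.reshape(-1)
    return H

def min_horizontal_eig(U):
    # vertical space: vec(U @ Omega) for skew-symmetric Omega; take its orthogonal complement
    V = []
    for i in range(k):
        for j in range(i + 1, k):
            Om = np.zeros((k, k)); Om[i, j], Om[j, i] = 1.0, -1.0
            V.append((U @ Om).reshape(-1))
    Bh = null_space(np.array(V))                # orthonormal basis of the horizontal space
    return np.linalg.eigvalsh(Bh.T @ hess_matrix(U) @ Bh).min()

Uglob = W[:, :k] @ np.diag(np.sqrt(lam[:k]))            # global minimizer W_k Lambda_k^{1/2}
Usad  = W[:, [0, 2]] @ np.diag(np.sqrt(lam[[0, 2]]))    # strict saddle built from lambda_1, lambda_3
for name, U in (("global min", Uglob), ("strict saddle", Usad)):
    print(name, "|grad| =", np.linalg.norm(grad_g(U)),
          " min horizontal curvature =", min_horizontal_eig(U))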
To illustrate the partition of the manifold RN×k∗ used in the above Lemma 3.1, we use the purple (①), yellow (②), and green (③) regions in Figure 2 to denote the regions that satisfy min_{P∈Ok} ‖U − U?P‖F ≤ 0.2κ^{-1}√λk, σk(U) ≤ (1/2)√λk, and ‖UU>‖F ≤ (8/7)‖U?U?>‖F , respectively. It can be seen that R1 is exactly the purple region, which contains the areas near the global minima ([U?]). R2 = R′2 ∪ R′′2 is the intersection of the yellow and green regions. R′3 is the part of the green region that does not intersect with the purple or yellow regions. Finally, R′′3 is the space outside of the green region. Therefore, the union of R1, R2, and R3 = R′3 ∪ R′′3 covers the entire manifold RN×k∗ .
We define a norm ball as B(l) ≜ {U ∈ RN×k∗ : ‖UU>‖F ≤ l} with l = (8/7)‖U?U?>‖F . The following lemma verifies Assumptions 2.2 and 2.3 under the restricted isometry property (RIP).
Lemma 3.2. Assume r/2 ≤ k ≤ r ≪ N . Suppose that a linear operator B with [B(Z)]m = 〈Z,Bm〉 satisfies the following RIP
(1 − δr+k)‖Z‖2F ≤ ‖B(Z)‖22 ≤ (1 + δr+k)‖Z‖2F (3.4)
for any matrix Z ∈ RN×N with rank at most r + k. We construct the linear operator A by setting Am = (1/2)(Bm + Bm>). If the restricted isometry constant δr+k satisfies
δr+k ≤ min{ ε / [2√(8/7) k^{1/4} ((8/7)‖U?U?>‖F + ‖X‖F)‖U?U?>‖F^{1/2}], 1/36, η / [2((16/7)√k ‖U?U?>‖F + (8/7)‖U?U?>‖F + ‖X‖F)] },
then, we have
sup_{U∈B(l)} ‖grad f(U) − grad g(U)‖F ≤ ε/2, and sup_{U∈B(l)} ‖hess f(U) − hess g(U)‖2 ≤ η/2.
The proof of Lemma 3.2 is given in Appendix E (see supplementary material). As is shown in existing literature [25, 26, 27], a Gaussian linear operator B : RN×N → RM satisfies the RIP condition (3.4) with high probability if M ≥ C(r + k)N/δ_{r+k}^2 for some numerical constant C. Therefore, we can conclude that the three statements in Theorem 2.1 hold for the empirical risk (3.2) and population risk (3.3) as long as M is large enough. Some similar bounds for the sample complexity M under different settings can also be found in papers [8, 4]. Note that the particular choice of l can guarantee that ‖grad f(U)‖F is large outside of B(l), which is also proved in Appendix E. Together with Theorem 2.1, we prove a globally benign landscape for the empirical risk.
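The concentration behind the RIP condition is also easy to probe by Monte Carlo. The sketch below (illustrative sizes; random low-rank test matrices rather than the worst case over all rank-(r + k) matrices) tracks how far ‖B(Z)‖22/‖Z‖2F strays from 1 as M grows:

import numpy as np

rng = np.random.default_rng(0)
N, r, k = 8, 3, 2

def worst_deviation(M, n_trials=200):
    B = rng.standard_normal((M, N, N)) / np.sqrt(M)      # [B(Z)]_m = <Z, B_m>
    worst = 0.0
    for _ in range(n_trials):
        L = rng.standard_normal((N, r + k))
        R = rng.standard_normal((N, r + k))
        Z = L @ R.T                                       # random matrix of rank <= r + k
        ratio = np.sum(np.einsum('mij,ij->m', B, Z) ** 2) / np.linalg.norm(Z) ** 2
        worst = max(worst, abs(ratio - 1.0))
    return worst

for M in (100, 400, 1600, 6400):
    print(M, worst_deviation(M))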
3.2 Phase Retrieval
We continue to elaborate on Example 1.2. The following lemma provides the global geometry of the population risk in (1.2), which also determines the values of ε and η in Assumption 2.1.
Lemma 3.3. Define the following four regions:
R1 ≜ {x ∈ RN : ‖x‖2 ≤ (1/2)‖x?‖2},
R2 ≜ {x ∈ RN : min_{γ∈{1,−1}} ‖x − γx?‖2 ≤ (1/10)‖x?‖2},
R3 ≜ {x ∈ RN : min_{γ∈{1,−1}} ‖x − γ(1/√3)‖x?‖2w‖2 ≤ (1/5)‖x?‖2, w>x? = 0, ‖w‖2 = 1},
R4 ≜ {x ∈ RN : ‖x‖2 > (1/2)‖x?‖2, min_{γ∈{1,−1}} ‖x − γx?‖2 > (1/10)‖x?‖2, min_{γ∈{1,−1}} ‖x − γ(1/√3)‖x?‖2w‖2 > (1/5)‖x?‖2, w>x? = 0, ‖w‖2 = 1}.
Then, the following properties hold:
(1) x = 0 is a strict saddle point with ∇2g(0) = −4x?x?> − 2‖x?‖22IN and λmin(∇2g(0)) = −6‖x?‖22. Moreover, for any x ∈ R1, the neighborhood of strict saddle point 0, we have λmin(∇2g(x)) ≤ −(3/2)‖x?‖22.
(2) x = ±x? are global minima with ∇2g(±x?) = 8x?x?> + 4‖x?‖22IN and λmin(∇2g(±x?)) = 4‖x?‖22. Moreover, for any x ∈ R2, the neighborhood of global minima ±x?, we have λmin(∇2g(x)) ≥ 0.22‖x?‖22.
(3) x = ±(1/√3)‖x?‖2w, with w>x? = 0 and ‖w‖2 = 1, are strict saddle points with ∇2g(±(1/√3)‖x?‖2w) = 4‖x?‖22ww> − 4x?x?> and λmin(∇2g(±(1/√3)‖x?‖2w)) = −4‖x?‖22. Moreover, for any x ∈ R3, the neighborhood of strict saddle points ±(1/√3)‖x?‖2w, we have λmin(∇2g(x)) ≤ −0.78‖x?‖22.
(4) For any x ∈ R4, the complement region of R1, R2, and R3, we have ‖∇g(x)‖2 > 0.3963‖x?‖2^3.
The proof of Lemma 3.3 is inspired by the proof of [8, Theorem 3] and is given in Appendix F (see supplementary material). Letting ε ≤ 0.3963‖x?‖2^3 and η = 0.22‖x?‖22, the population risk (1.2) then satisfies Assumption 2.1. As in Lemma 3.1, we also note that each critical point of the population risk in (1.2) is either a global minimum or a strict saddle. This inspires us to carry this favorable geometry over to the corresponding empirical risk.
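These eigenvalue statements can be reproduced with a few lines of NumPy (illustrative dimension and a randomly drawn x?):

import numpy as np

rng = np.random.default_rng(0)
N = 5
xstar = rng.standard_normal(N)

def grad_g(x):
    return 6 * np.dot(x, x) * x - 2 * np.dot(xstar, xstar) * x - 4 * np.dot(xstar, x) * xstar

def hess_g(x):
    I = np.eye(N)
    return (12 * np.outer(x, x) - 4 * np.outer(xstar, xstar)
            + 6 * np.dot(x, x) * I - 2 * np.dot(xstar, xstar) * I)

# an arbitrary unit vector orthogonal to xstar
w = rng.standard_normal(N)
w -= np.dot(w, xstar) / np.dot(xstar, xstar) * xstar
w /= np.linalg.norm(w)
saddle = np.linalg.norm(xstar) / np.sqrt(3) * w

for name, x in (("x = 0", np.zeros(N)), ("x = x*", xstar), ("x = |x*| w / sqrt(3)", saddle)):
    print(name, "|grad| =", np.linalg.norm(grad_g(x)),
          " lambda_min(hess) =", np.linalg.eigvalsh(hess_g(x)).min())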
The partition of regions used in Lemma 3.3 is illustrated in Figure 3. We use the purple, green, and blue balls to denote the three regions R1, R2, and R3, respectively. R4 is then represented with the light gray region. Therefore, the union of the four regions covers the entire RN space.
Define a norm ball as B(l) , {x ∈ RN : ‖x‖2 ≤ l} with radius l = 1.1‖x?‖2. This particular choice of l guarantees that ‖ grad f(x)‖2 is large outside of B(l), which is proved in Appendix G. Together with Theorem 2.1, we prove a globally benign landscape for the empirical risk. We also
define h(N,M) ≜ Õ(N^2/M + √(N/M)) with Õ denoting an asymptotic notation that hides polylog factors. The following lemma verifies Assumptions 2.2 and 2.3 for this phase retrieval problem. Lemma 3.4. Suppose that am ∈ RN is a Gaussian random vector with entries following N (0, 1). If h(N,M) ≤ 0.0118, we then have
sup_{x∈B(l)} ‖∇f(x) − ∇g(x)‖2 ≤ ε/2, and sup_{x∈B(l)} ‖∇2f(x) − ∇2g(x)‖2 ≤ η/2
hold with probability at least 1 − e^{−CN log(M)}. The proof of Lemma 3.4 is given in Appendix G (see supplementary material). The assumption h(N,M) ≤ 0.0118 implies that we need a sample complexity that scales like N^2, which is not optimal since x has only N degrees of freedom. This is a technical artifact that can be traced back to Assumptions 2.2 and 2.3–which require two-sided closeness between the gradients and Hessians–and the heavy-tail property of the fourth powers of Gaussian random processes [12]. To arrive at the conclusions of Theorem 2.1, however, these two assumptions are sufficient but not necessary (while Assumption 2.1 is more critical), leaving room for tightening the sampling complexity bound. We leave this to future work.
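The qualitative behaviour behind Lemma 3.4 can be observed numerically. The sketch below (illustrative sizes; a sample of random points in B(l) rather than a true supremum) tracks the largest observed gradient and Hessian deviations as M grows:

import numpy as np

rng = np.random.default_rng(0)
N = 5
xstar = rng.standard_normal(N)
l = 1.1 * np.linalg.norm(xstar)

def pop_grad(x):
    return 6 * np.dot(x, x) * x - 2 * np.dot(xstar, xstar) * x - 4 * np.dot(xstar, x) * xstar

def pop_hess(x):
    I = np.eye(N)
    return (12 * np.outer(x, x) - 4 * np.outer(xstar, xstar)
            + 6 * np.dot(x, x) * I - 2 * np.dot(xstar, xstar) * I)

def deviations(M, n_points=200):
    a = rng.standard_normal((M, N))
    y = (a @ xstar) ** 2
    gdev = hdev = 0.0
    for _ in range(n_points):
        x = rng.standard_normal(N)
        x *= rng.uniform(0, l) / np.linalg.norm(x)           # random point inside B(l)
        res = (a @ x) ** 2 - y
        g_emp = 2 * a.T @ (res * (a @ x)) / M                # gradient of (1/2M) sum ((a^T x)^2 - y)^2
        H_emp = (a * (3 * (a @ x) ** 2 - y)[:, None]).T @ a * 2 / M
        gdev = max(gdev, np.linalg.norm(g_emp - pop_grad(x)))
        hdev = max(hdev, np.linalg.norm(H_emp - pop_hess(x), 2))
    return gdev, hdev

for M in (100, 400, 1600):
    print(M, deviations(M))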
4 Numerical Simulations
We first conduct numerical experiments on the two examples introduced in Section 1, i.e., the rank-1 matrix sensing and phase retrieval problems. In both problems, we fix N = 2 and set x? = [1 − 1]>.
Then, we generate the population risk and empirical risk based on the formulation introduced in these two examples. The contour plots of the population risk and a realization of empirical risk with M = 3 and M = 10 are given in Figure 4 for rank-1 matrix sensing and Figure 5 for phase retrieval. We see that when we have fewer samples (e.g., M = 3), there could exist some spurious local minima as is shown in plots (b). However, as we increase the number of samples (e.g., M = 10), we see a direct correspondence between the local
minima of empirical risk and population risk in both examples with a much higher probability. We also notice that extra saddle points can emerge as shown in Figure 4 (c), which shows that statement (b) in Theorem 2.1 cannot be improved to a one-to-one correspondence between saddle points in degenerate scenarios. We still observe this phenomenon even when M = 1000, which is not shown here. Note that for the rank-1 case, Theorem 2.1 can be applied directly without restricting to full-rank representations. Next, we conduct another experiment on general-rank matrix sensing with k = 2, r = 3, N = 8, and a variety of M . We set U? as the first r columns of an N ×N identity matrix and create X = U?U?>. The population and empirical risks are then generated according to the model introduced in Section 3.1. As shown in Figure 6, the distance (averaged over 100 trials) between the local minima of the population and empirical risk decreases as we increase M .
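A minimal version of this general-rank experiment can be sketched as follows (illustrative dimensions, spectrum, step size, and iteration count, rather than the exact setup above; the empirical minimizer is located by gradient descent from a small random initialization and compared with the population minimizer up to rotation via an orthogonal Procrustes alignment):

import numpy as np

rng = np.random.default_rng(0)
N, k, r = 8, 2, 3
lam = np.array([3.0, 2.0, 0.1])                    # well-separated spectrum (illustrative)
W, _ = np.linalg.qr(rng.standard_normal((N, r)))
X = W @ np.diag(lam) @ W.T
Uopt = W[:, :k] @ np.diag(np.sqrt(lam[:k]))        # population global minimizer W_k Lambda_k^{1/2}

def procrustes_dist(U, V):
    # min over orthogonal P of ||U - V P||_F
    Q1, _, Q2t = np.linalg.svd(V.T @ U)
    return np.linalg.norm(U - V @ Q1 @ Q2t)

def empirical_minimum(M, steps=10000, lr=0.02):
    B = rng.standard_normal((M, N, N)) / np.sqrt(M)
    A = 0.5 * (B + np.transpose(B, (0, 2, 1)))
    U = 0.1 * rng.standard_normal((N, k))          # small random initialization
    for _ in range(steps):
        y = np.einsum('mij,ij->m', A, U @ U.T - X)
        U -= lr * np.einsum('m,mij->ij', y, A) @ U # gradient step on f(U)
    return U

for M in (40, 160, 640):
    print(M, procrustes_dist(empirical_minimum(M), Uopt))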
5 Conclusions
In this work, we study the problem of establishing a correspondence between the critical points of the empirical risk and its population counterpart without the strongly Morse assumption required in some existing literature. With this correspondence, we are able to analyze the landscape of an empirical risk from the landscape of its population risk. Our theory builds on a weaker condition than the strongly Morse assumption. This enables us to work on the very popular matrix sensing and phase retrieval problems, whose Hessian does have zero eigenvalues at some critical points, i.e., they are degenerate and do not satisfy the strongly Morse assumption. As mentioned, there is still room to improve the sample complexity of the phase retrieval problem that we will pursue in future work.
Acknowledgments
SL would like to thank Qiuwei Li at Colorado School of Mines for many helpful discussions on the analysis of matrix sensing and phase retrieval. The authors would also like to thank the anonymous reviewers for their constructive comments and suggestions which greatly improved the quality of this paper. This work was supported by NSF grant CCF-1704204, and the DARPA Lagrange Program under ONR/SPAWAR contract N660011824020. | 1. What is the focus of the paper, and what are its contributions?
2. How does the paper extend previous work by relaxing assumptions?
3. What are the strengths and weaknesses of the paper's organization and content?
4. Are there any concerns regarding the level of detail in certain sections of the paper?
5. How could the paper improve its experimental results and visual aids? | Review | Review
I have carefully read other reviewers' comments and the author feedback, and I understood that it's important to have these details in the applications section. I really appreciate the additional work done by the authors to improve the writing and contents of this paper, and these work addressed my concerns. Thus, I have raised my score. -------------------------------------------------------------------- Summary: This paper shows an interesting result which connects the landscape of the empirical risk and population risk. This result is an extension of (Song, Mei et al, 2016). The previous result establishes important connections between critical points, especially local minima, of empirical and population risk. These connections are helpful in understanding the landscape of the empirical risk and the behaviors of optimization algorithms like gradient descent. Previous work requires three assumptions: The strongly Morse property of the population risk, and the proximity for the gradient and Hessian of the population risk. The authors relaxed the first assumption and only requires the minimum eigenvalue of the Hessian is away from zero. Thus, this paper further improves the understanding of the landscape in a more general setting. They have found some examples for this new result to be applied to and done experiments to verify the theoretical results. Detailed Comments: 1. The authors spend half of the paper explaining the applications of the main result, but only use less than one page to show the main result without providing any sketch for the proof. From my point of view, this is not a very well-organized paper. The main result section is the core part of this paper, so I would expect more explanations of the main result, e.g., a proof sketch with some intuition about the significance of this result. 2. The contents in the Applications section contains too much detail. I don't think we need to look at these applications that carefully because they are just the applications of the main result. In this section, there are tons of constants like 8/7, 1.91, 0.06 and so on, which I don't think people will be interested in, and you can hide those constants using asymptotic notations or some letters. There are also details that are too technical, e.g., the division of regions, and formulas like line 196. I would suggest the authors hide this detail and make more space for the main result section. 3. The experiments only cover the case of the examples, i.e., the rank-1 matrix sensing and phase retrieval. It even doesn't cover your applications because you have proved the case for a more general matrix sensing. I think it would be better to do experiments on settings that are more general, e.g., the setting you use for the first application. 4. It would be better to have more figures in this paper to make people understand better. One figure is provided for the partition of regions in matrix sensing, so it's better to have another one for phase retrieval. Also, for the main result section, the authors can also use a figure to illustrate the assumptions or results better. |
NIPS | Title
The Landscape of Non-convex Empirical Risk with Degenerate Population Risk
Abstract
The landscape of empirical risk has been widely studied in a series of machine learning problems, including low-rank matrix factorization, matrix sensing, matrix completion, and phase retrieval. In this work, we focus on the situation where the corresponding population risk is a degenerate non-convex loss function, namely, the Hessian of the population risk can have zero eigenvalues. Instead of analyzing the non-convex empirical risk directly, we first study the landscape of the corresponding population risk, which is usually easier to characterize, and then build a connection between the landscape of the empirical risk and its population risk. In particular, we establish a correspondence between the critical points of the empirical risk and its population risk without the strongly Morse assumption, which is required in existing literature but not satisfied in degenerate scenarios. We also apply the theory to matrix sensing and phase retrieval to demonstrate how to infer the landscape of empirical risk from that of the corresponding population risk.
1 Introduction
Understanding the connection between empirical risk and population risk can yield valuable insight into an optimization problem [1, 2]. Mathematically, the empirical risk f(x) with respect to a parameter vector x is defined as
f(x) ≜ (1/M) ∑_{m=1}^{M} L(x, ym).
Here, L(·) is a loss function and we are interested in losses that are non-convex in x in this work. y = [y1, · · · ,yM ]> is a vector containing the random training samples, and M is the total number of samples contained in the training set. The population risk, denoted as g(x), is the expectation of the empirical risk with respect to the random measure used to generate the samples y, i.e., g(x) = Ef(x). Recently, the landscapes of empirical and population risk have been extensively studied in many fields of science and engineering, including machine learning and signal processing. In particular, the local or global geometry has been characterized in a wide variety of convex and non-convex problems, such as matrix sensing [3, 4], matrix completion [5, 6, 7], low-rank matrix factorization [8, 9, 10], phase retrieval [11, 12], blind deconvolution [13, 14], tensor decomposition [15, 16, 17], and so on. In this work, we focus on analyzing global geometry, which requires understanding not only regions near critical points but also the landscape away from these points.
It follows from empirical process theory that the empirical risk can uniformly converge to the corresponding population risk as M →∞ [18]. A recent work [1] exploits the uniform convergence of the empirical risk to the corresponding population risk and establishes a correspondence of their
critical points when provided with enough samples. The authors build their theoretical guarantees based on the assumption that the population risk is strongly Morse, namely, the Hessian of the population risk cannot have zero eigenvalues at or near the critical points1. However, many problems of practical interest do have Hessians with zero eigenvalues at some critical points. We refer to such problems as degenerate. To illustrate this, we present the very simple rank-1 matrix sensing and phase retrieval examples below.
Example 1.1. (Rank-1 matrix sensing). Given measurements ym = 〈Am, x?x?>〉, 1 ≤ m ≤ M , where x? ∈ RN and Am ∈ RN×N denote the true signal and the m-th Gaussian sensing matrix with entries following N (0, 1), respectively. The following empirical risk is commonly used in practice
f(x) = (1/(4M)) ∑_{m=1}^{M} (〈Am, xx>〉 − ym)^2 .
The corresponding population risk is then
g(x) = Ef(x) = (1/4)‖xx> − x?x?>‖2F .
Elementary calculations give the gradient and Hessian of the above population risk as
∇g(x) = (xx> − x?x?>)x, and ∇2g(x) = 2xx> − x?x?> + ‖x‖22IN . We see that g(x) has three critical points x = 0, ± x?. Observe that the Hessian at x = 0 is ∇2g(0) = −x?x?>, which does have zero eigenvalues and thus g(x) does not satisfy the strongly Morse condition required in [1]. The conclusion extends to the general low-rank matrix sensing.
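The degeneracy at the origin is immediate to verify numerically (a short NumPy check with an arbitrary x?):

import numpy as np

rng = np.random.default_rng(0)
N = 5
xstar = rng.standard_normal(N)

def hess_g(x):
    return 2 * np.outer(x, x) - np.outer(xstar, xstar) + np.dot(x, x) * np.eye(N)

print(np.round(np.linalg.eigvalsh(hess_g(np.zeros(N))), 6))   # N-1 zero eigenvalues, one negative
print(np.round(np.linalg.eigvalsh(hess_g(xstar)), 6))         # positive definite at the minimizer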
Example 1.2. (Phase retrieval). Given measurements ym = |〈am,x?〉|2, 1 ≤ m ≤ M , where x? ∈ RN and am ∈ RN denote the true signal and the m-th Gaussian random vector with entries following N (0, 1), respectively. The following empirical risk is commonly used in practice
f(x) = (1/(2M)) ∑_{m=1}^{M} (|〈am, x〉|^2 − ym)^2 . (1.1)
The corresponding population risk is then
g(x) = Ef(x) = ‖xx> − x?x?>‖2F + (1/2)(‖x‖22 − ‖x?‖22)^2. (1.2)
Elementary calculations give the gradient and Hessian of the above population risk as
∇g(x) = 6‖x‖22x− 2‖x?‖22x− 4(x?>x)x?, ∇2g(x) = 12xx> − 4x?x?> + 6‖x‖22IN − 2‖x?‖22IN .
We see that the population loss has critical points x = 0, ±x?, (1/√3)‖x?‖2w with w>x? = 0 and ‖w‖2 = 1. Observe that the Hessian at x = (1/√3)‖x?‖2w is ∇2g((1/√3)‖x?‖2w) = 4‖x?‖22ww> − 4x?x?>, which also has zero eigenvalues and thus g(x) does not satisfy the strongly Morse condition required in [1].
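Again, the zero eigenvalues at these saddle points are easy to confirm numerically (arbitrary x? and any unit vector w orthogonal to it):

import numpy as np

rng = np.random.default_rng(0)
N = 5
xstar = rng.standard_normal(N)
w = rng.standard_normal(N)
w -= np.dot(w, xstar) / np.dot(xstar, xstar) * xstar
w /= np.linalg.norm(w)

H_saddle = 4 * np.dot(xstar, xstar) * np.outer(w, w) - 4 * np.outer(xstar, xstar)
print(np.round(np.linalg.eigvalsh(H_saddle), 6))   # one negative, one positive, N-2 zeros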
In this work, we aim to fill this gap and establish the correspondence between the critical points of empirical risk and its population risk without the strongly Morse assumption. In particular, we work on the situation where the population risk may be a degenerate non-convex function, i.e., the Hessian of the population risk can have zero eigenvalues. Given the correspondence between the critical points of the empirical risk and its population risk, we are able to build a connection between the landscape of the empirical risk and its population counterpart. To illustrate the effectiveness of this theory, we also apply it to applications such as matrix sensing (with general rank) and phase retrieval to show how to characterize the landscape of the empirical risk via its corresponding population risk.
1. What is the main contribution of the paper regarding empirical risk?
2. What are the strengths of the paper, particularly in relaxing the strongly Morse assumption?
3. What are the weaknesses of the paper, especially regarding the lack of discussion on estimation error bounds?
4. How does the reviewer assess the clarity and ease of following the paper's content? | Review | Review
This work is an extension of a work by S. Mei et al. that looks at the landscape of empirical risk. In that work, the authors established uniform convergence of empirical risk to population risk and one to one correspondence between critical points of the two function under strongly Morse assumption. In this work, the authors relax that assumption and allow the Hessian to be be even degenerate, provided that it has a negative eigenvalue. This allows the authors to apply the theory to cases were strongly Morse assumption is violated. The authors show that the assumptions are satisfied in two examples, and in one, they show how one can use quotient manifolds to circumvent the fact that at the global min, the Hessian is degenerate, and still apply the same theorem. One minor issue with the result of the paper: can authors briefly describe how theorem 2.1 can be used to get estimation error bounds? This theorem is very strong in the sense that it provides a result for every connected compact subset of B(l). Therefore it seems reasonable to assume that getting estimation error bounds using this theorem should be doable, but the details should depend on the specific examples and underlying distributions. Establishing one to one correspondence between critical points of the two functions are definitely useful, but the ultimate goal, at least in the examples provided is to get estimation error bounds, and unfortunately it is not discussed at all. The paper is very well written and easy to follow. I did not check the details of the proofs. |
NIPS | Title
The Landscape of Non-convex Empirical Risk with Degenerate Population Risk
Abstract
The landscape of empirical risk has been widely studied in a series of machine learning problems, including low-rank matrix factorization, matrix sensing, matrix completion, and phase retrieval. In this work, we focus on the situation where the corresponding population risk is a degenerate non-convex loss function, namely, the Hessian of the population risk can have zero eigenvalues. Instead of analyzing the non-convex empirical risk directly, we first study the landscape of the corresponding population risk, which is usually easier to characterize, and then build a connection between the landscape of the empirical risk and its population risk. In particular, we establish a correspondence between the critical points of the empirical risk and its population risk without the strongly Morse assumption, which is required in existing literature but not satisfied in degenerate scenarios. We also apply the theory to matrix sensing and phase retrieval to demonstrate how to infer the landscape of empirical risk from that of the corresponding population risk.
1 Introduction
Understanding the connection between empirical risk and population risk can yield valuable insight into an optimization problem [1, 2]. Mathematically, the empirical risk f(x) with respect to a parameter vector x is defined as
f(x) ≜ (1/M) ∑_{m=1}^{M} L(x, y_m).
Here, L(·) is a loss function and we are interested in losses that are non-convex in x in this work. y = [y_1, · · · , y_M]⊤ is a vector containing the random training samples, and M is the total number of samples contained in the training set. The population risk, denoted as g(x), is the expectation of the empirical risk with respect to the random measure used to generate the samples y, i.e., g(x) = Ef(x). Recently, the landscapes of empirical and population risk have been extensively studied in many fields of science and engineering, including machine learning and signal processing. In particular, the local or global geometry has been characterized in a wide variety of convex and non-convex problems, such as matrix sensing [3, 4], matrix completion [5, 6, 7], low-rank matrix factorization [8, 9, 10], phase retrieval [11, 12], blind deconvolution [13, 14], tensor decomposition [15, 16, 17], and so on. In this work, we focus on analyzing global geometry, which requires understanding not only regions near critical points but also the landscape away from these points.
It follows from empirical process theory that the empirical risk can uniformly converge to the corresponding population risk as M →∞ [18]. A recent work [1] exploits the uniform convergence of the empirical risk to the corresponding population risk and establishes a correspondence of their
critical points when provided with enough samples. The authors build their theoretical guarantees based on the assumption that the population risk is strongly Morse, namely, the Hessian of the population risk cannot have zero eigenvalues at or near the critical points.¹ However, many problems of practical interest do have Hessians with zero eigenvalues at some critical points. We refer to such problems as degenerate. To illustrate this, we present the very simple rank-1 matrix sensing and phase retrieval examples below.
Example 1.1. (Rank-1 matrix sensing). Given measurements y_m = ⟨A_m, x⋆x⋆⊤⟩, 1 ≤ m ≤ M, where x⋆ ∈ R^N and A_m ∈ R^{N×N} denote the true signal and the m-th Gaussian sensing matrix with entries following N(0, 1), respectively. The following empirical risk is commonly used in practice

f(x) = (1/(4M)) ∑_{m=1}^{M} (⟨A_m, xx⊤⟩ − y_m)².

The corresponding population risk is then

g(x) = E f(x) = (1/4) ‖xx⊤ − x⋆x⋆⊤‖²_F.

Elementary calculations give the gradient and Hessian of the above population risk as

∇g(x) = (xx⊤ − x⋆x⋆⊤)x, and ∇²g(x) = 2xx⊤ − x⋆x⋆⊤ + ‖x‖₂² I_N.

We see that g(x) has three critical points x = 0, ±x⋆. Observe that the Hessian at x = 0 is ∇²g(0) = −x⋆x⋆⊤, which does have zero eigenvalues and thus g(x) does not satisfy the strongly Morse condition required in [1]. The conclusion extends to the general low-rank matrix sensing.
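To see the degeneracy concretely, the short NumPy sketch below (an illustration added here, not code from the paper; the dimension N = 4 and the random x⋆ are arbitrary choices) evaluates the population Hessian ∇²g(x) = 2xx⊤ − x⋆x⋆⊤ + ‖x‖₂² I_N at the critical points x = 0 and x = x⋆ and prints its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
x_star = rng.standard_normal(N)
x_star /= np.linalg.norm(x_star)          # unit-norm ground truth for readability

def pop_hessian(x):
    # Population Hessian of Example 1.1: 2*x*x^T - x_star*x_star^T + ||x||^2 * I_N
    return 2 * np.outer(x, x) - np.outer(x_star, x_star) + (x @ x) * np.eye(N)

for label, x in [("x = 0     ", np.zeros(N)), ("x = x_star", x_star)]:
    print(label, np.round(np.linalg.eigvalsh(pop_hessian(x)), 4))
# x = 0      -> eigenvalues (-1, 0, 0, 0): degenerate, hence not strongly Morse.
# x = x_star -> eigenvalues ( 1, 1, 1, 2): a non-degenerate global minimum.
```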
Example 1.2. (Phase retrieval). Given measurements y_m = |⟨a_m, x⋆⟩|², 1 ≤ m ≤ M, where x⋆ ∈ R^N and a_m ∈ R^N denote the true signal and the m-th Gaussian random vector with entries following N(0, 1), respectively. The following empirical risk is commonly used in practice

f(x) = (1/(2M)) ∑_{m=1}^{M} (|⟨a_m, x⟩|² − y_m)². (1.1)

The corresponding population risk is then

g(x) = E f(x) = ‖xx⊤ − x⋆x⋆⊤‖²_F + (1/2)(‖x‖₂² − ‖x⋆‖₂²)². (1.2)

Elementary calculations give the gradient and Hessian of the above population risk as

∇g(x) = 6‖x‖₂² x − 2‖x⋆‖₂² x − 4(x⋆⊤x) x⋆, ∇²g(x) = 12xx⊤ − 4x⋆x⋆⊤ + 6‖x‖₂² I_N − 2‖x⋆‖₂² I_N.

We see that the population loss has critical points x = 0, ±x⋆, and (1/√3)‖x⋆‖₂ w with w⊤x⋆ = 0 and ‖w‖₂ = 1. Observe that the Hessian at x = (1/√3)‖x⋆‖₂ w is ∇²g((1/√3)‖x⋆‖₂ w) = 4‖x⋆‖₂² ww⊤ − 4x⋆x⋆⊤, which also has zero eigenvalues and thus g(x) does not satisfy the strongly Morse condition required in [1].
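A similar check for the phase retrieval saddles, again a small NumPy illustration of our own (N = 5 and the random vectors are arbitrary): plugging x = (1/√3)‖x⋆‖₂ w into the Hessian formula above yields a spectrum containing zeros together with a strictly negative eigenvalue, exactly the kind of degenerate-but-not-flat critical point this paper handles.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
x_star = rng.standard_normal(N)
ns = x_star @ x_star                       # ||x_star||_2^2

# Unit vector w orthogonal to x_star, and the saddle point x = ||x_star||/sqrt(3) * w.
w = rng.standard_normal(N)
w -= (w @ x_star) / ns * x_star
w /= np.linalg.norm(w)
x = np.sqrt(ns / 3) * w

# Population Hessian of Example 1.2: 12 xx^T - 4 x_star x_star^T + (6||x||^2 - 2||x_star||^2) I_N
H = 12 * np.outer(x, x) - 4 * np.outer(x_star, x_star) + (6 * (x @ x) - 2 * ns) * np.eye(N)
print(np.round(np.linalg.eigvalsh(H) / ns, 6))
# Spectrum / ||x_star||^2: (-4, 0, 0, 0, 4) -- zero eigenvalues (degenerate),
# yet one clearly negative eigenvalue, which the weaker condition used here exploits.
```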
In this work, we aim to fill this gap and establish the correspondence between the critical points of empirical risk and its population risk without the strongly Morse assumption. In particular, we work on the situation where the population risk may be a degenerate non-convex function, i.e., the Hessian of the population risk can have zero eigenvalues. Given the correspondence between the critical points of the empirical risk and its population risk, we are able to build a connection between the landscape of the empirical risk and its population counterpart. To illustrate the effectiveness of this theory, we also apply it to applications such as matrix sensing (with general rank) and phase retrieval to show how to characterize the landscape of the empirical risk via its corresponding population risk.
¹A twice differentiable function f(x) is Morse if all of its critical points are non-degenerate, i.e., its Hessian has no zero eigenvalues at all critical points. Mathematically, ∇f(x) = 0 implies all λ_i(∇²f(x)) ≠ 0, with λ_i(·) being the i-th eigenvalue of the Hessian. A twice differentiable function f(x) is (ε, η)-strongly Morse if ‖∇f(x)‖₂ ≤ ε implies min_i |λ_i(∇²f(x))| ≥ η. One can refer to [1] for more information.
The remainder of this work is organized as follows. In Section 2, we present our main results on the correspondence between the critical points of the empirical risk and its population risk. In Section 3, we apply our theory to the two applications, matrix sensing and phase retrieval. In Section 4, we conduct experiments to further support our analysis. Finally, we conclude our work in Section 5.
Notation: For a twice differentiable function f(·): ∇f, ∇²f, grad f, and hess f denote the gradient and Hessian of f in the Euclidean space and with respect to a Riemannian manifold M, respectively. Note that the Riemannian gradient/Hessian (grad/hess) reduces to the Euclidean gradient/Hessian (∇/∇²) when the domain of f is the Euclidean space. For a scalar function with a matrix variable, e.g., f(U), we represent its Euclidean Hessian with a bilinear form defined as ∇²f(U)[D, D] = ∑_{i,j,p,q} (∂²f(U)/(∂U(i,j)∂U(p,q))) D(i,j) D(p,q) for any D having the same size as U. Denote B(l) as a compact and connected subset of a Riemannian manifold M, with l being a problem-specific parameter.²
2 Main Results
In this section, we present our main results on the correspondence between the critical points of the empirical risk and its population risk. Let M be a Riemannian manifold. For notational simplicity, we use x ∈ M to denote the parameter vector when we introduce our theory.³ We begin by introducing the assumptions needed to build our theory. Denote f(x) and g(x) as the empirical risk and the corresponding population risk defined for x ∈ M, respectively. Let ε and η be two positive constants. Assumption 2.1. The population risk g(x) satisfies
|λ_min(hess g(x))| ≥ η (2.1)

in the set D ≜ {x ∈ B(l) : ‖grad g(x)‖₂ ≤ ε}. Here, λ_min(·) denotes the minimal eigenvalue (not the eigenvalue of smallest magnitude).
Assumption 2.1 is closely related to the robust strict saddle property [19] – it requires that any point with a small gradient has either a positive definite Hessian (λ_min(hess g(x)) ≥ η) or a Hessian with a negative curvature (λ_min(hess g(x)) ≤ −η). It is weaker than the (ε, η)-strongly Morse condition as it allows the Hessian hess g(x) to have zero eigenvalues in D, provided it also has at least one sufficiently negative eigenvalue.

Figure 1: Phase retrieval with N = 1. The panels show the population and empirical risks, their gradients, and their Hessians.

Assumption 2.2. (Gradient proximity). The gradients of the empirical risk and population risk satisfy

sup_{x∈B(l)} ‖grad f(x) − grad g(x)‖₂ ≤ ε/2. (2.2)

Assumption 2.3. (Hessian proximity). The Hessians of the empirical risk and population risk satisfy

sup_{x∈B(l)} ‖hess f(x) − hess g(x)‖₂ ≤ η/2. (2.3)
To illustrate the above three assumptions, we use the phase retrieval Example 1.2 with N = 1, x⋆ = 1, and M = 30. We present the population risk g(x) = (3/2)(x² − 1)² and the empirical risk f(x) = (1/(2M)) ∑_{m=1}^{M} a_m⁴ (x² − 1)² together with their gradients and Hessians in Figure 1. It can be seen that in the small gradient region (the three parts between the light blue vertical dashed lines), the absolute value of the population Hessian's minimal eigenvalue (which equals the absolute value of the Hessian here since N = 1) is bounded away from zero. In addition, with enough measurements, e.g., M = 30, we do see the gradients and Hessians of the empirical and population risk are close to each other.
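The sketch below (our own numerical illustration using the same N = 1, x⋆ = 1, M = 30 setting) reproduces this comparison: it draws the Gaussian scalars a_m, forms the empirical risk f(x) = (1/(2M))∑ a_m⁴(x² − 1)² and the population risk g(x) = (3/2)(x² − 1)², and reports how far their gradients and Hessians are from each other over a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 30
a = rng.standard_normal(M)
c = np.mean(a ** 4)                       # empirical fourth moment; E[a^4] = 3

xs = np.linspace(-1.5, 1.5, 301)
f,   g   = 0.5 * c * (xs**2 - 1)**2,      1.5 * (xs**2 - 1)**2        # risks
df,  dg  = 2 * c * xs * (xs**2 - 1),      6 * xs * (xs**2 - 1)        # gradients
d2f, d2g = 2 * c * (3 * xs**2 - 1),       6 * (3 * xs**2 - 1)         # Hessians

print("max |f - g|    :", np.max(np.abs(f - g)))
print("max |f' - g'|  :", np.max(np.abs(df - dg)))
print("max |f'' - g''|:", np.max(np.abs(d2f - d2g)))
# With M = 30 all three gaps are already small, consistent with Figure 1 and
# with Assumptions 2.2-2.3 holding for a moderate number of measurements.
```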
We are now in a position to state our main theorem. Theorem 2.1. Denote f and g as the non-convex empirical risk and the corresponding population risk, respectively. Let D be any maximal connected and compact subset of D with a C² boundary ∂D. Under Assumptions 2.1-2.3 stated above, the following statements hold:
²The subset B(l) can vary in different applications. For example, we define B(l) ≜ {U ∈ R^{N×k}_* : ‖UU⊤‖_F ≤ l} in matrix sensing and B(l) ≜ {x ∈ R^N : ‖x‖₂ ≤ l} in phase retrieval.
3For problems with matrix variables, such as matrix sensing introduced in Section 3, x is the vectorized representation of the matrix.
(a) D contains at most one local minimum of g. If g has K (K = 0, 1) local minima in D, then f also has K local minima in D.
(b) If g has strict saddles in D, then if f has any critical points in D, they must be strict saddle points.
The proof of Theorem 2.1 is given in Appendix A (see supplementary material). In particular, we prove Theorem 2.1 by extending the proof of Theorem 2 in [1] without requiring the strongly Morse assumption on the population risk. We first present two key lemmas, in which we show that there exists a correspondence between the critical points of the empirical risk and those of the population risk in a connected and compact set under certain assumptions, and the small gradient area can be partitioned into many maximal connected and compact components with each component either containing one local minimum or no local minimum. Finally, we finish the proof of Theorem 2.1 by using these two key lemmas.
Part (a) in Theorem 2.1 indicates a one-to-one correspondence between the local minima of the empirical risk and its population risk. We can further bound the distance between the local minima of the empirical risk and its population risk. We summarize this result in the following corollary, which is proved in Appendix C (see supplementary material). Corollary 2.1. Let {x̂_k}_{k=1}^{K} and {x_k}_{k=1}^{K} denote the local minima of the empirical risk and its population risk, and D_k be the maximal connected and compact subset of D containing x_k and x̂_k. Let ρ be the injectivity radius of the manifold M. Suppose the pre-image of D_k under the exponential mapping Exp_{x_k}(·) is contained in the ball at the origin of the tangent space T_{x_k}M with radius ρ. Assume the differential of the exponential mapping DExp_{x_k}(v) has an operator norm bounded by σ for all v ∈ T_{x_k}M with norm less than ρ. Suppose the pullback of the population risk onto the tangent space T_{x_k}M has Lipschitz Hessian with constant L_H at the origin. Then as long as ε ≤ η²/(2σL_H), the Riemannian distance between x̂_k and x_k satisfies

dist(x̂_k, x_k) ≤ 2σε/η, 1 ≤ k ≤ K.
In general, the two parameters ε and η used in Assumptions 2.1-2.3 can be obtained by lower bounding |λ_min(hess g(x))| in a small gradient region. In this way, one can adjust the size of the small gradient region to get an upper bound on ε, and use the lower bound for |λ_min(hess g(x))| as η. In the case when it is not easy to directly bound |λ_min(hess g(x))| in a small gradient region, one can also first choose a region for which it is easy to find the lower bound, and then show that the gradient has a large norm outside of this region, as we do in Section 3. For phase retrieval, note that |λ_min(∇²g(x))| and ‖∇g(x)‖₂ roughly scale with ‖x⋆‖₂² and ‖x⋆‖₂³ in the regions near critical points, which implies that η and the upper bound on ε should also scale with ‖x⋆‖₂² and ‖x⋆‖₂³, respectively. For matrix sensing, in a similar way, |λ_min(hess g(U))| and ‖grad g(U)‖_F roughly scale with λ_k and λ_k^{1.5} in the regions near critical points, which implies that η and the upper bound on ε should also scale with λ_k and λ_k^{1.5}, respectively. Note, however, with more samples (larger M), ε can be set to smaller values, while η typically remains unchanged. One can refer to Section 3 for more details on the notation as well as how to choose η and upper bounds on ε in the two applications.
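As a concrete, purely illustrative instantiation (the numbers below are our own; σ and L_H are the quantities from Corollary 2.1, and the phase retrieval constants come from Lemma 3.3 in the next section): with ‖x⋆‖₂ = 1 one may take η = 0.22, and once M is large enough to permit ε = 0.01 (assuming this ε also satisfies ε ≤ η²/(2σL_H)), Corollary 2.1 gives

```latex
% Illustrative arithmetic only.
\mathrm{dist}(\hat{x}_k, x_k) \;\le\; \frac{2\sigma\epsilon}{\eta}
   \;=\; \frac{2\sigma \cdot 0.01}{0.22} \;\approx\; 0.09\,\sigma .
```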
Note that we have shown the correspondence between the critical points of the empirical risk and its population risk without the strongly Morse assumption in the above theorem. In particular, we relax the strongly Morse assumption to our Assumption 2.1, which implies that we are able to handle the scenario where the Hessian of the population risk has zero eigenvalues at some critical points or even everywhere in the set D. With this correspondence, we can then establish a connection between the landscape of the empirical risk and the population risk, and thus for problems where the population risk has a favorable geometry, we are able to carry this favorable geometry over to the corresponding empirical risk. To illustrate this in detail, we highlight two applications, matrix sensing and phase retrieval, in the next section.
3 Applications
In this section, we illustrate how to completely characterize the landscape of an empirical risk from its population risk using Theorem 2.1. In particular, we apply Theorem 2.1 to two applications, matrix sensing and phase retrieval. In order to use Theorem 2.1, all we need is to verify that the empirical risk and population risk in these two applications satisfy the three assumptions stated in Section 2.
3.1 Matrix Sensing
Let X ∈ R^{N×N} be a symmetric, positive semi-definite matrix with rank r. We measure X with a symmetric Gaussian linear operator A : R^{N×N} → R^M. The m-th entry of the observation y = A(X) is given as y_m = ⟨X, A_m⟩, where A_m = (1/2)(B_m + B_m⊤) with B_m being a Gaussian random matrix with entries following N(0, 1/M). The adjoint operator A* : R^M → R^{N×N} is defined as A*(y) = ∑_{m=1}^{M} y_m A_m. It can be shown that E(A*A) is the identity operator, i.e., E(A*A(X)) = X. To find a low-rank approximation of X when given the measurements y = A(X), one can solve the following optimization problem:

min_{X̃ ∈ R^{N×N}} (1/4)‖A(X̃ − X)‖₂²  s.t.  rank(X̃) ≤ k, X̃ ⪰ 0. (3.1)

Here, we assume that r/2 ≤ k ≤ r ≪ N. By using the Burer-Monteiro type factorization [20, 21], i.e., letting X̃ = UU⊤ with U ∈ R^{N×k}, we can transform the above optimization problem into the following unconstrained one:

min_{U ∈ R^{N×k}} f(U) ≜ (1/4)‖A(UU⊤ − X)‖₂². (3.2)
Observe that this empirical risk f(U) is a non-convex function due to the quadratic term UU⊤. With some elementary calculation, we obtain the gradient and Hessian of f(U), which are given as

∇f(U) = A*A(UU⊤ − X)U,
∇²f(U)[D, D] = (1/2)‖A(UD⊤ + DU⊤)‖₂² + ⟨A*A(UU⊤ − X), DD⊤⟩.

Computing the expectation of f(U), we get the population risk

g(U) = E f(U) = (1/4)‖UU⊤ − X‖²_F, (3.3)

whose gradient and Hessian are given as

∇g(U) = (UU⊤ − X)U and ∇²g(U)[D, D] = (1/2)‖UD⊤ + DU⊤‖₂² + ⟨UU⊤ − X, DD⊤⟩.
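To make the sensing model and the two risks concrete, the following NumPy sketch (our own illustration; the sizes N = 8, k = 2, r = 3, M = 2000 are arbitrary) builds the symmetrized Gaussian operator A, evaluates the empirical risk f(U) and its gradient from the closed-form expressions above, and compares them with the population quantities; the gaps shrink as M grows, which is what Assumptions 2.2 and 2.3 formalize.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, r, M = 8, 2, 3, 2000

U_star = rng.standard_normal((N, r))
X = U_star @ U_star.T                                   # rank-r ground truth

# A_m = (B_m + B_m^T)/2 with B_m entries ~ N(0, 1/M), so E[A*A] is the identity map.
B = rng.standard_normal((M, N, N)) / np.sqrt(M)
A = 0.5 * (B + np.transpose(B, (0, 2, 1)))

def empirical(U):
    R = U @ U.T - X
    y = np.einsum('mij,ij->m', A, R)                    # A(UU^T - X)
    AstarA_R = np.einsum('m,mij->ij', y, A)             # A*A(UU^T - X)
    return 0.25 * np.sum(y ** 2), AstarA_R @ U          # f(U), grad f(U)

def population(U):
    R = U @ U.T - X
    return 0.25 * np.linalg.norm(R, 'fro') ** 2, R @ U  # g(U), grad g(U)

U = rng.standard_normal((N, k))
(f_val, f_grad), (g_val, g_grad) = empirical(U), population(U)
print("risk gap     :", abs(f_val - g_val))
print("gradient gap :", np.linalg.norm(f_grad - g_grad, 'fro'))
```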
The landscape of the above population risk has been studied in the general R^{N×k} space with k = r in [8]. The landscape of its variants, such as the asymmetric version with or without a balanced term, has also been studied in [4, 22]. It is well known that there exists an ambiguity in the solution of (3.2) due to the fact that UU⊤ = UQQ⊤U⊤ holds for any orthogonal matrix Q ∈ R^{k×k}. This implies that the Euclidean Hessian ∇²g(U) always has zero eigenvalues for k > 1 at critical points, even at local minima, violating not only the strongly Morse condition but also Assumption 2.1. To overcome this difficulty, we propose to formulate an equivalent problem on a proper quotient manifold (rather than the general R^{N×k} space as in [8]) to remove this ambiguity and make sure Assumption 2.1 is satisfied.
3.1.1 Background on the quotient manifold
To keep our work self-contained, we provide a brief introduction to quotient manifolds in this section before we verify our three assumptions. One can refer to [23, 24] for more information. We make the assumption that the matrix variable U is always full-rank. This is required in order to define a proper quotient manifold, since otherwise the equivalence classes defined below will have different dimensions, violating Proposition 3.4.4 in [23]. Thus, we focus on the case that U belongs to the manifold R^{N×k}_*, i.e., the set of all N × k real matrices with full column rank. To remove the parameterization ambiguity caused by the factorization X̃ = UU⊤, we define an equivalence class for any U ∈ R^{N×k}_* as [U] ≜ {V ∈ R^{N×k}_* : VV⊤ = UU⊤} = {UQ : Q ∈ R^{k×k}, Q⊤Q = I_k}. We will abuse notation and use U to denote also its equivalence class [U] in the following. Let M denote the set of all equivalence classes of the above form, which admits a (unique) differential structure that makes it a (Riemannian) quotient manifold, denoted as M = R^{N×k}_* / O_k. Here O_k is the orthogonal group {Q ∈ R^{k×k} : QQ⊤ = Q⊤Q = I_k}. Since the objective function g(U) in (3.3) (and f(U) in (3.2)) is invariant under the equivalence relation, it induces a unique function on the quotient manifold R^{N×k}_* / O_k, also denoted as g(U). Note that the tangent space T_U R^{N×k}_* of the manifold R^{N×k}_* at any point U ∈ R^{N×k}_* is still R^{N×k}_*. We define the vertical space V_U M as the tangent space to the equivalence classes (which are themselves manifolds): V_U M ≜ {UΩ : Ω ∈ R^{k×k}, Ω⊤ = −Ω}. We also define the horizontal space H_U M as the orthogonal complement of the vertical space V_U M in the tangent space T_U R^{N×k}_* = R^{N×k}_*: H_U M ≜ {D ∈ R^{N×k}_* : D⊤U = U⊤D}. For any matrix Z ∈ R^{N×k}_*, its projection onto the horizontal space H_U M is given as P_U(Z) = Z − UΩ, where Ω is a skew-symmetric matrix that solves the following Sylvester equation: ΩU⊤U + U⊤UΩ = U⊤Z − Z⊤U. Then, we can define the Riemannian gradient (grad ·) and Hessian (hess ·) of the empirical risk and population risk on the quotient manifold M, which are given in the supplementary material.
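The horizontal projection above is straightforward to compute; the sketch below (our own illustration using SciPy's Sylvester solver; the sizes are arbitrary) solves ΩU⊤U + U⊤UΩ = U⊤Z − Z⊤U for the skew-symmetric Ω and verifies that the projected direction D = Z − UΩ indeed satisfies the horizontal-space condition D⊤U = U⊤D.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
N, k = 6, 2
U = rng.standard_normal((N, k))            # a full-column-rank point (almost surely)
Z = rng.standard_normal((N, k))            # an arbitrary ambient direction

G = U.T @ U
Omega = solve_sylvester(G, G, U.T @ Z - Z.T @ U)   # solves G*Omega + Omega*G = rhs
D = Z - U @ Omega                                  # projection onto H_U M

print("Omega is skew-symmetric :", np.allclose(Omega, -Omega.T, atol=1e-10))
print("D^T U == U^T D          :", np.allclose(D.T @ U, U.T @ D, atol=1e-10))
```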
3.1.2 Verifying Assumptions 2.1, 2.2, and 2.3
Assume that X = WΛW⊤ with W ∈ R^{N×r} and Λ = diag([λ_1, · · · , λ_r]) ∈ R^{r×r} is an eigendecomposition of X. Without loss of generality, we assume that the eigenvalues of X are in descending order. Let Λ_u ∈ R^{k×k} be a diagonal matrix that contains any k non-zero eigenvalues of X and W_u ∈ R^{N×k} contain the k eigenvectors of X associated with the eigenvalues in Λ_u. Let Λ_k = diag([λ_1, · · · , λ_k]) be the diagonal matrix that contains the largest k eigenvalues of X and W_k ∈ R^{N×k} contain the k eigenvectors of X associated with the eigenvalues in Λ_k. Q ∈ O_k is any orthogonal matrix. The following lemma provides the global geometry of the population risk in (3.3), which also determines the values of ε and η in Assumption 2.1.
Lemma 3.1. Define U ≜ {U = W_u Λ_u^{1/2} Q⊤}, U⋆ ≜ {U⋆ = W_k Λ_k^{1/2} Q⊤} ⊆ U, and U⋆_s ≜ U \ U⋆. Denote κ ≜ √(λ_1/λ_k) ≥ 1 as the condition number of any U⋆ ∈ U⋆. Define the following regions:

R_1 ≜ {U ∈ R^{N×k}_* : min_{P∈O_k} ‖U − U⋆P‖_F ≤ 0.2 κ^{-1} √λ_k, ∀ U⋆ ∈ U⋆},
R′_2 ≜ {U ∈ R^{N×k}_* : σ_k(U) ≤ (1/2)√λ_k, ‖UU⊤‖_F ≤ (8/7)‖U⋆U⋆⊤‖_F, ‖grad g(U)‖_F ≤ (1/80) λ_k^{3/2}},
R″_2 ≜ {U ∈ R^{N×k}_* : σ_k(U) ≤ (1/2)√λ_k, ‖UU⊤‖_F ≤ (8/7)‖U⋆U⋆⊤‖_F, ‖grad g(U)‖_F > (1/80) λ_k^{3/2}},
R′_3 ≜ {U ∈ R^{N×k}_* : σ_k(U) > (1/2)√λ_k, min_{P∈O_k} ‖U − U⋆P‖_F > 0.2 κ^{-1} √λ_k, ‖UU⊤‖_F ≤ (8/7)‖U⋆U⋆⊤‖_F},
R″_3 ≜ {U ∈ R^{N×k}_* : ‖UU⊤‖_F > (8/7)‖U⋆U⋆⊤‖_F},

where σ_k(U) denotes the k-th singular value of a matrix U ∈ R^{N×k}_*, i.e., the smallest singular value of U. These regions also induce regions in the quotient manifold M in an apparent way. We additionally assume that λ_{k+1} ≤ (1/12)λ_k and k ≤ r ≪ N. Then, the following properties hold:
(1) For any U ∈ U, U is a critical point of the population risk g(U) in (3.3).

(2) For any U⋆ ∈ U⋆, U⋆ is a global minimum of g(U) with λ_min(hess g(U⋆)) ≥ 1.91λ_k. Moreover, for any U ∈ R_1, we have λ_min(hess g(U)) ≥ 0.19λ_k.

(3) For any U⋆_s ∈ U⋆_s, U⋆_s is a strict saddle point of g(U) with λ_min(hess g(U⋆_s)) ≤ −0.91λ_k. Moreover, for any U ∈ R′_2, we have λ_min(hess g(U)) ≤ −0.06λ_k.

(4) For any U ∈ R″_2 ∪ R′_3 ∪ R″_3, we have a large gradient. In particular,

‖grad g(U)‖_F > (1/80) λ_k^{3/2} if U ∈ R″_2,  (1/60) κ^{-1} λ_k^{3/2} if U ∈ R′_3,  (5/84) k^{1/4} λ_k^{3/2} if U ∈ R″_3.
The proof of Lemma 3.1 is inspired by the proofs of [8, Theorem 4], [3, Lemma 13] and [4, Theorem 5], and is given in Appendix D (see supplementary material). Therefore, we can set ε ≤ min{1/80, (1/60)κ^{-1}} λ_k^{3/2} and η = 0.06λ_k. Then, the population risk given in (3.3) satisfies Assumption 2.1. It can be seen that each critical point of the population risk g(U) in (3.3) is either a global minimum or a strict saddle, which inspires us to carry this favorable geometry over to the corresponding empirical risk.
To illustrate the partition of the manifold R^{N×k}_* used in the above Lemma 3.1, we use the purple (1), yellow (2), and green (3) regions in Figure 2 to denote the regions that satisfy min_{P∈O_k} ‖U − U⋆P‖_F ≤ 0.2κ^{-1}√λ_k, σ_k(U) ≤ (1/2)√λ_k, and ‖UU⊤‖_F ≤ (8/7)‖U⋆U⋆⊤‖_F, respectively. It can be seen that R_1 is exactly the purple region, which contains the areas near the global minima ([U⋆]). R_2 = R′_2 ∪ R″_2 is the intersection of the yellow and green regions. R′_3 is the part of the green region that does not intersect with the purple or yellow regions. Finally, R″_3 is the space outside of the green region. Therefore, the union of R_1, R_2, and R_3 = R′_3 ∪ R″_3 covers the entire manifold R^{N×k}_*.
We define a norm ball as B(l) ≜ {U ∈ R^{N×k}_* : ‖UU⊤‖_F ≤ l} with l = (8/7)‖U⋆U⋆⊤‖_F. The following lemma verifies Assumptions 2.2 and 2.3 under the restricted isometry property (RIP). Lemma 3.2. Assume r/2 ≤ k ≤ r ≪ N. Suppose that a linear operator B with [B(Z)]_m = ⟨Z, B_m⟩ satisfies the following RIP

(1 − δ_{r+k})‖Z‖²_F ≤ ‖B(Z)‖₂² ≤ (1 + δ_{r+k})‖Z‖²_F (3.4)

for any matrix Z ∈ R^{N×N} with rank at most r + k. We construct the linear operator A by setting A_m = (1/2)(B_m + B_m⊤). If the restricted isometry constant δ_{r+k} satisfies
δ_{r+k} ≤ min{ ε / (2√(8/7) k^{1/4} ((8/7)‖U⋆U⋆⊤‖_F + ‖X‖_F) ‖U⋆U⋆⊤‖_F^{1/2}), 1/36, η / (2((16/7)√k ‖U⋆U⋆⊤‖_F + (8/7)‖U⋆U⋆⊤‖_F + ‖X‖_F)) },

then, we have
sup_{U∈B(l)} ‖grad f(U) − grad g(U)‖_F ≤ ε/2, and sup_{U∈B(l)} ‖hess f(U) − hess g(U)‖₂ ≤ η/2.
The proof of Lemma 3.2 is given in Appendix E (see supplementary material). As is shown in existing literature [25, 26, 27], a Gaussian linear operator B : R^{N×N} → R^M satisfies the RIP condition (3.4) with high probability if M ≥ C(r + k)N/δ²_{r+k} for some numerical constant C. Therefore, we can conclude that the three statements in Theorem 2.1 hold for the empirical risk (3.2) and population risk (3.3) as long as M is large enough. Some similar bounds for the sample complexity M under different settings can also be found in papers [8, 4]. Note that the particular choice of l can guarantee that ‖grad f(U)‖_F is large outside of B(l), which is also proved in Appendix E. Together with Theorem 2.1, we prove a globally benign landscape for the empirical risk.
3.2 Phase Retrieval
We continue to elaborate on Example 1.2. The following lemma provides the global geometry of the population risk in (1.2), which also determines the values of and η in Assumption 2.1. Lemma 3.3. Define the following four regions:
R_1 ≜ {x ∈ R^N : ‖x‖₂ ≤ (1/2)‖x⋆‖₂},
R_2 ≜ {x ∈ R^N : min_{γ∈{1,−1}} ‖x − γx⋆‖₂ ≤ (1/10)‖x⋆‖₂},
R_3 ≜ {x ∈ R^N : min_{γ∈{1,−1}} ‖x − γ(1/√3)‖x⋆‖₂ w‖₂ ≤ (1/5)‖x⋆‖₂, w⊤x⋆ = 0, ‖w‖₂ = 1},
R_4 ≜ {x ∈ R^N : ‖x‖₂ > (1/2)‖x⋆‖₂, min_{γ∈{1,−1}} ‖x − γx⋆‖₂ > (1/10)‖x⋆‖₂, min_{γ∈{1,−1}} ‖x − γ(1/√3)‖x⋆‖₂ w‖₂ > (1/5)‖x⋆‖₂, w⊤x⋆ = 0, ‖w‖₂ = 1}.
Then, the following properties hold:
(1) x = 0 is a strict saddle point with ∇²g(0) = −4x⋆x⋆⊤ − 2‖x⋆‖₂² I_N and λ_min(∇²g(0)) = −6‖x⋆‖₂². Moreover, for any x ∈ R_1, the neighborhood of the strict saddle point 0, we have λ_min(∇²g(x)) ≤ −(3/2)‖x⋆‖₂².

(2) x = ±x⋆ are global minima with ∇²g(±x⋆) = 8x⋆x⋆⊤ + 4‖x⋆‖₂² I_N and λ_min(∇²g(±x⋆)) = 4‖x⋆‖₂². Moreover, for any x ∈ R_2, the neighborhood of the global minima ±x⋆, we have λ_min(∇²g(x)) ≥ 0.22‖x⋆‖₂².

(3) x = ±(1/√3)‖x⋆‖₂ w, with w⊤x⋆ = 0 and ‖w‖₂ = 1, are strict saddle points with ∇²g(±(1/√3)‖x⋆‖₂ w) = 4‖x⋆‖₂² ww⊤ − 4x⋆x⋆⊤ and λ_min(∇²g(±(1/√3)‖x⋆‖₂ w)) = −4‖x⋆‖₂². Moreover, for any x ∈ R_3, the neighborhood of the strict saddle points ±(1/√3)‖x⋆‖₂ w, we have λ_min(∇²g(x)) ≤ −0.78‖x⋆‖₂².

(4) For any x ∈ R_4, the complement region of R_1, R_2, and R_3, we have ‖∇g(x)‖₂ > 0.3963‖x⋆‖₂³.
The proof of Lemma 3.3 is inspired by the proof of [8, Theorem 3] and is given in Appendix F (see supplementary material). Letting ε ≤ 0.3963‖x⋆‖₂³ and η = 0.22‖x⋆‖₂², the population risk (1.2) then satisfies Assumption 2.1. As in Lemma 3.1, we also note that each critical point of the population risk in (1.2) is either a global minimum or a strict saddle. This inspires us to carry this favorable geometry over to the corresponding empirical risk.
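The eigenvalue statements of Lemma 3.3 are easy to probe numerically; the NumPy sketch below (our own illustration with an arbitrary N and random directions) evaluates λ_min(∇²g(x)) at one point from each of R_1, R_2, and R_3 and compares the sign and magnitude against the bounds above.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
x_star = rng.standard_normal(N)
ns = x_star @ x_star                                    # ||x_star||_2^2

def lam_min(x):
    H = (12 * np.outer(x, x) - 4 * np.outer(x_star, x_star)
         + (6 * (x @ x) - 2 * ns) * np.eye(N))          # population Hessian of (1.2)
    return np.linalg.eigvalsh(H)[0]

u = rng.standard_normal(N); u /= np.linalg.norm(u)       # arbitrary unit direction
w = rng.standard_normal(N); w -= (w @ x_star) / ns * x_star; w /= np.linalg.norm(w)

x_R1 = 0.4 * np.sqrt(ns) * u                             # ||x|| <= 0.5 ||x_star||
x_R2 = x_star + 0.05 * np.sqrt(ns) * u                   # within 0.1 ||x_star|| of x_star
x_R3 = np.sqrt(ns / 3) * w + 0.1 * np.sqrt(ns) * u       # within 0.2 ||x_star|| of a saddle

for name, x, bound in [("R1", x_R1, "<= -1.5"), ("R2", x_R2, ">= 0.22"), ("R3", x_R3, "<= -0.78")]:
    print(f"{name}: lambda_min / ||x_star||^2 = {lam_min(x) / ns:+.3f}   (Lemma 3.3: {bound})")
```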
The partition of regions used in Lemma 3.3 is illustrated in Figure 3. We use the purple, green, and blue balls to denote the three regions R_1, R_2, and R_3, respectively. R_4 is then represented with the light gray region. Therefore, the union of the four regions covers the entire R^N space.
Define a norm ball as B(l) ≜ {x ∈ R^N : ‖x‖₂ ≤ l} with radius l = 1.1‖x⋆‖₂. This particular choice of l guarantees that ‖grad f(x)‖₂ is large outside of B(l), which is proved in Appendix G. Together with Theorem 2.1, we prove a globally benign landscape for the empirical risk. We also define h(N, M) ≜ Õ(N²/M + √(N/M)), with Õ denoting an asymptotic notation that hides polylog factors. The following lemma verifies Assumptions 2.2 and 2.3 for this phase retrieval problem. Lemma 3.4. Suppose that a_m ∈ R^N is a Gaussian random vector with entries following N(0, 1). If h(N, M) ≤ 0.0118, we then have
sup_{x∈B(l)} ‖∇f(x) − ∇g(x)‖₂ ≤ ε/2, and sup_{x∈B(l)} ‖∇²f(x) − ∇²g(x)‖₂ ≤ η/2

hold with probability at least 1 − e^{−CN log(M)}. The proof of Lemma 3.4 is given in Appendix G (see supplementary material). The assumption h(N, M) ≤ 0.0118 implies that we need a sample complexity that scales like N², which is not optimal since x has only N degrees of freedom. This is a technical artifact that can be traced back to Assumptions 2.2 and 2.3–which require two-sided closeness between the gradients and Hessians–and the heavy-tail property of the fourth powers of Gaussian random process [12]. To arrive at the conclusions of Theorem 2.1, however, these two assumptions are sufficient but not necessary (while Assumption 2.1 is more critical), leaving room for tightening the sampling complexity bound. We leave this to future work.
4 Numerical Simulations
We first conduct numerical experiments on the two examples introduced in Section 1, i.e., the rank-1 matrix sensing and phase retrieval problems. In both problems, we fix N = 2 and set x⋆ = [1, −1]⊤.
Then, we generate the population risk and empirical risk based on the formulation introduced in these two examples. The contour plots of the population risk and a realization of empirical risk with M = 3 and M = 10 are given in Figure 4 for rank-1 matrix sensing and Figure 5 for phase retrieval. We see that when we have fewer samples (e.g., M = 3), there could exist some spurious local minima as is shown in plots (b). However, as we increase the number of samples (e.g., M = 10), we see a direct correspondence between the local
minima of empirical risk and population risk in both examples with a much higher probability. We also notice that extra saddle points can emerge as shown in Figure 4 (c), which shows that statement (b) in Theorem 2.1 cannot be improved to a one-to-one correspondence between saddle points in degenerate scenarios. We still observe this phenomenon even when M = 1000, which is not shown here. Note that for the rank-1 case, Theorem 2.1 can be applied directly without restricting to full-rank representations. Next, we conduct another experiment on general-rank matrix sensing with k = 2, r = 3, N = 8, and a variety of M. We set U⋆ as the first r columns of an N × N identity matrix and create X = U⋆U⋆⊤. The population and empirical risks are then generated according to the model introduced in Section 3.1. As shown in Figure 6, the distance (averaged over 100 trials) between the local minima of the population and empirical risk decreases as we increase M.
5 Conclusions
In this work, we study the problem of establishing a correspondence between the critical points of the empirical risk and its population counterpart without the strongly Morse assumption required in some existing literature. With this correspondence, we are able to analyze the landscape of an empirical risk from the landscape of its population risk. Our theory builds on a weaker condition than the strongly Morse assumption. This enables us to work on the very popular matrix sensing and phase retrieval problems, whose Hessian does have zero eigenvalues at some critical points, i.e., they are degenerate and do not satisfy the strongly Morse assumption. As mentioned, there is still room to improve the sample complexity of the phase retrieval problem that we will pursue in future work.
Acknowledgments
SL would like to thank Qiuwei Li at Colorado School of Mines for many helpful discussions on the analysis of matrix sensing and phase retrieval. The authors would also like to thank the anonymous reviewers for their constructive comments and suggestions which greatly improved the quality of this paper. This work was supported by NSF grant CCF-1704204, and the DARPA Lagrange Program under ONR/SPAWAR contract N660011824020. | 1. What is the focus of the paper regarding empirical risk and population risk?
2. What are the strengths of the paper, particularly in its improvements over previous works?
3. What are the weaknesses or limitations of the paper, such as the exclusion of degenerate minima?
4. Are there any questions or concerns regarding the presentation and technical details of the paper, especially in the "main result" section?
5. How do the parameters eps and eta impact the results, and are there any standard methods for computing them? | Review | Review
The paper considers the problem of characterizing the landscape of empirical risk using the population risk. Particular attention is devoted to the case of a degenerate population risk. Their main theorem proves a close connection between stationary points and local minimizers. The paper improves on the state of the art in the characterization of the above-mentioned landscape. Previous work could not explain the situation where the Hessian is degenerate (has zero eigenvalues) near a stationary point. The main assumption in the present paper is that the absolute value of the Hessian's minimal eigenvalue must be bounded away from zero. This includes (some) degenerate saddle points and strict local minimizers. It does not include degenerate minima, which is surprising (a drawback). The presentation of the paper with a separate, clearly written section on the "main result" is good, since the details are rather technical. In the section "main results", comments on the parameters eps and eta are missing. Do the conditions need to hold for any positive eps and eta? I think you require only the existence of eps and eta such that the assumptions are satisfied. In that case, the reader would like to see an intuitive description of the meaning of these parameters in the presented examples of matrix sensing and phase retrieval and an explanation of the proof strategy for obtaining the parameters. Is there a standard way to compute the parameters eps and eta?
NIPS | Title
TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification
Abstract
Transformers have recently gained increasing attention in computer vision. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions, and the generalizability of Transformers is unknown. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images. We find that the Vision Transformer (ViT) and the vanilla Transformer with decoders are not adequate for image matching due to their lack of image-to-image attention. Thus, we further design two naive solutions, i.e. query-gallery concatenation in ViT, and query-gallery cross-attention in the vanilla Transformer. The latter improves the performance, but it is still limited. This implies that the attention mechanism in Transformers is primarily designed for global feature aggregation, which is not naturally suitable for image matching. Accordingly, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, global max pooling and a multilayer perceptron (MLP) head are applied to decode the matching result. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching. The proposed method, called TransMatcher, achieves state-of-the-art performance in generalizable person re-identification, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively, on several popular datasets. Code is available at https://github.com/ShengcaiLiao/QAConv.
1 Introduction
The Transformer [24] is a neural network based on attention mechanisms. It has shown great success in the field of natural language processing. Recently, it has also shown promising performance for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], and image segmentation [14, 26], thus gaining increasing attention in this field. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification or dense predictions, and the generalizability of Transformers is unknown. At a glance, query-key similarities are computed by dot products in the attention mechanisms of Transformers. Therefore, these models could potentially be useful for image matching. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images, with applications in generalizable person re-identification.
Attention mechanisms are used to gather global information from different locations according to query-key similarities. The vanilla Transformer [24] is composed of an encoder that employs
∗Shengcai Liao is the corresponding author.
self-attention, and a decoder that further incorporates a cross-attention module. The difference is that the query and key are the same in the self-attention, while they are different in the cross-attention. The Vision Transformer (ViT) [7] applies a pure Transformer encoder for feature learning and image classification. While the Transformer encoder facilitates feature interaction among different locations of the same image, it cannot address the image matching problem being studied in this paper, because it does not enable interaction between different images. In the decoder, however, the cross-attention module does have the ability for cross interaction between query and the encoded memory. For example, in the decoder of the detection Transformer (DETR) [2], learnable query embeddings are designed to decode useful information in the encoded image memory for object localization. However, the query embeddings are independent from the image inputs, and so there is still no interaction between pairs of input images. Motivated by this, how about using actual image queries instead of learnable query embeddings as input to decoders?
Person re-identification is a typical image matching and metric learning problem. In a recent study called QAConv [10], it was shown that explicitly performing image matching between pairs of deep feature maps helps the generalization of the learned model. This inspires us to investigate the capability and generalizability of Transformers for image matching and metric learning between pairs of images. Since training through classification is also a popular strategy for metric learning, we start from a direct application of ViT and the vanilla Transformer with a powerful ResNet [3] backbone for person re-identification. However, this results in poor generalization to different datasets. Then, we consider formulating explicit interactions between query2 and gallery images in Transformers. Two naive solutions are thus designed. The first one uses a pure Transformer encoder, as in ViT, but concatenates the query and gallery features together as inputs, so as to enable the self-attention module to read both query and gallery content and apply the attention between them. The second design employs the vanilla Transformer, but replaces the learnable query embedding in the decoder by the ready-to-use query feature maps. This way, the query input acts as a real query from the actual retrieval inputs, rather than a learnable query which is more like a prior or a template. Accordingly, the cross-attention module in the decoder is able to gather information across query-key pairs, where the key comes from the encoded memory of gallery images.
While the first solution does not lead to improvement, the second one is successful with notable performance gain. However, compared to the state of the art in generalizable person re-identification,
2Query/gallery in person re-identification and query/key or target/memory in Transformers are very similar concepts originating from information retrieval. We use the same word query here in different contexts.
the performance of the second variant is still not satisfactory. We further consider that the attention mechanism in Transformers might be primarily for global feature aggregation, which is not naturally suitable for image matching, though the two naive solutions already enable feature interactions between query and gallery images. Therefore, to improve the effectiveness of image matching, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, inspired from QAConv [10], global max pooling (GMP) is applied, which acts as a hard attention to gather similarity values, instead of a soft attention to weight feature values. This is because, in image matching, we are more interested in matching scores than feature values. Finally, a multilayer perceptron (MLP) head maps the matching result to a similarity score for each query-gallery pair. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching.
We call the above design TransMatcher (see Fig. 1), which targets at efficient image matching and metric learning in particular. The contributions of this paper are summarized as follows.
• We investigate the possibility and generalizability of applying Transformers for image matching and metric learning, including direct applications of ViT and the vanilla Transformer, and two solutions adapted specifically for matching images through attention. This furthers our understanding of the capability and limitation of Transformers for image matching.
• According to the above, a new simplified decoder is proposed for efficient image matching, with a focus on similarity computation and mapping.
• With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively.
2 Related Work
Given pairs of input images, deep feature matching has been shown to be effective for person re-identification. Li et al. [8] proposed a novel filter pairing neural network (FPNN) to handle misalignment and occlusions in person re-identification. Ahmed et al. [1] proposed a local neighborhood matching layer to match deep feature maps of query and gallery images. Suh et al. [21] proposed a deep neural network to learn part-aligned bilinear representations for person re-identification. Shen et al. [18] proposed a Kronecker-product matching (KPM) module for matching person images in a softly aligned way. Liao and Shao [10] proposed the query adaptive convolution (QAConv) for explicit deep feature matching, which is proved to be effective for generalizable person re-identification. They further proposed a graph sampler (GS) for efficient deep metric learning [11].
Generalizable person re-identification has gained increasing attention in recent years. Zhou et al. [36] proposed the OSNet, and showed that this new backbone network has advantages in generalization. Jia et al. [5] applied IBN-Net-b [15] together with a feature normalization to alleviate both style and content variance across datasets to improve generalizability. Song et al. [20] proposed a domaininvariant mapping network (DIMN) and further introduced a meta-learning pipeline for effective training and generalization. Qian et al. [17] proposed a deep architecture with leader-based multiscale attention (MuDeep), with improved generalization of the learned models. Yuan et al. [31] proposed an adversarial domain-invariant feature learning network (ADIN) to separate identity-related features from challenging variations. Jin et al.[6] proposed a style normalization and restitution module, which shows good generalizability for person re-identification. Zhuang et al. [38] proposed a camera-based batch normalization (CBN) method for domain-invariant representation learning, which utilizes unlabeled target data to adapt the BN layer in a quick and unsupervised way. Wang et al. [27] created a large-scale synthetic person dataset called RandPerson, and showed that models learned from synthesized data generalize well to real-world datasets. However, current methods are still far from satisfactory in generalization for practical person re-identification.
There are a number of attentional networks [12, 16, 13, 30, 19, 9, 29, 32, 4] proposed for person re-identification, but focusing on representation learning. More recently, Zhao et al. [33] proposed a cross-attention network for person re-identificaiton. However, it is still applied for feature refinement, instead of explicit image matching between gallery and probe images studied in this paper.
Transformers have recently received increasing attention for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], image segmentation [14, 26], and so on. For
example, ViT was proposed in [7], showing that a pure Transformer-based architecture is capable of effective image classification. DETR was proposed in [2], providing a successful end-to-end Transformer solution for object detection. Later, several studies, such as the Deformable DETR [37], Swin [14], and PVT [26], improved the computation of Visual Transformers and further boosted their performance. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions. There lacks a comprehensive study on whether Transformers are effective for image matching and metric learning and how its capability is in generalizing to unknown domains.
3 Transformers
For the vanilla Transformer [24], the core module is the multi-head attention (MHA). First, a scaled dot-product attention is defined as follows:
Attention(Q, K, V) = softmax(QK⊤/√d_k) V, (1)

where Q ∈ R^{T×d_k} is the query (or target) matrix, K ∈ R^{M×d_k} is the key (or memory) matrix, V ∈ R^{M×d_v} is the value matrix, T and M are the sequence lengths of the query and key, respectively, d_k is the feature dimension of the query and key, and d_v is the feature dimension of V. In visual tasks, Q and K are usually reshaped query and key feature maps, with T = M = hw, where h and w are the height and width of the query and key feature maps, respectively. Then, the MHA is defined as:

head_i = Attention(QW_i^Q, KW_i^K, VW_i^V), (2)
MultiHead(Q, K, V) = Concat(head_1, . . . , head_H) W^O, (3)

where W_i^Q ∈ R^{d×d_k}, W_i^K ∈ R^{d×d_k}, W_i^V ∈ R^{d×d_v}, and W^O ∈ R^{Hd_v×d} are parameter matrices, and H is the number of heads. Then, Q = K = V in the multi-head self-attention (MHSA) in the encoders, while they are defined separately in the multi-head cross-attention (MHCA) in the decoders.
The structure of the Transformer encoder without positional encoding is shown on the left of Fig. 1. Beyond MHSA, it further appends a feed-forward layer to first increase the feature dimension from d to D, and then recover it back from D to d. Besides, the encoder can be self-stacked N times, where N is the total number of encoder layers. In ViT [7], only Transformer encoders are used, and positional encoding is further applied. In the vanilla Transformer [24], decoders with MHCA are further applied, with the query being learnable query embeddings initially, and the output of the previous decoder layer later on, and the key and value being the output of the encoder layer. The decoder can also be self-stacked N times.
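As a reference point for the matching-oriented variants discussed later, a minimal PyTorch sketch of the scaled dot-product attention in Eq. (1) is given below (our own illustration with toy sizes; a production model would use torch.nn.MultiheadAttention). Note that the query-key similarities only serve as softmax weights for aggregating V.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Eq. (1): softmax(Q K^T / sqrt(d_k)) V, with Q: (T, d_k), K: (M, d_k), V: (M, d_v)."""
    scores = Q @ K.transpose(-2, -1) / Q.size(-1) ** 0.5   # (T, M) similarities
    return F.softmax(scores, dim=-1) @ V                   # (T, d_v): weighted sum of V

hw, d = 24 * 8, 512                      # a flattened 24x8 feature map with d channels
Q, K, V = torch.randn(hw, d), torch.randn(hw, d), torch.randn(hw, d)
print(scaled_dot_product_attention(Q, K, V).shape)          # torch.Size([192, 512])
```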
4 Image Matching with Transformers: Naive Solutions
While the above ViT and vanilla Transformer are able to perform image matching through black-box feature extraction and distance learning, they are not optimal for this task because they lack image-toimage interaction in their designs. Though cross-attention is employed in the Transformer decoders, in its original form the query either comes from learnable query embeddings, or from the output of the previous decoder layer.
Therefore, we adapt Transformers with two naive solutions for image matching and metric learning. Building upon a powerful ResNet [3] backbone, the first solution appends ViT, but not simply for feature extraction. Instead, a pair of query and gallery feature maps are concatenated to double the sequence length, forming a single sample for the input of ViT. Thus, both the query and key for the self-attention layer contain query image information in one half and gallery image information in the other half. Therefore, the attention computation in Eq. (1) is able to interact query and gallery inputs for image matching. This variant is denoted as Transformer-Cat.
The second solution appends the vanilla Transformer, but instead of learnable query embeddings, ResNet query features are directly input into the first decoder. This way, the cross-attention layer in the decoders is able to interact the query and gallery samples being matched. This variant is denoted as Transformer-Cross.
The structure of these two variants can be found in the Appendix. Note that these two solutions have high computational and memory costs, especially for large d, D, and N (c.f. Section 6.4).
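The input wiring of the two variants can be sketched in a few lines of PyTorch (a hedged illustration using the built-in torch.nn Transformer modules with reduced sizes; the actual models additionally include a ResNet backbone and metric-learning heads).

```python
import torch
import torch.nn as nn

d, hw = 128, 192                                        # reduced sizes, as in Section 6.4
enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=1, dim_feedforward=512, batch_first=True)
dec_layer = nn.TransformerDecoderLayer(d_model=d, nhead=1, dim_feedforward=512, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
decoder = nn.TransformerDecoder(dec_layer, num_layers=2)

query_feat = torch.randn(1, hw, d)                      # flattened query feature map
gallery_feat = torch.randn(1, hw, d)                    # flattened gallery feature map

# Transformer-Cat: concatenate query and gallery along the sequence dimension so that
# self-attention can attend across the two images.
cat_out = encoder(torch.cat([query_feat, gallery_feat], dim=1))       # (1, 2*hw, d)

# Transformer-Cross: encode the gallery as memory and feed the actual query feature map
# (instead of learnable query embeddings) to the decoder, so cross-attention matches the pair.
cross_out = decoder(tgt=query_feat, memory=encoder(gallery_feat))     # (1, hw, d)
print(cat_out.shape, cross_out.shape)
```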
5 The Proposed TransMatcher
Though the above two solutions enable query-gallery interaction in the attention mechanism for image matching, they are not adequate for distance metric learning. This is because, taking a deeper look at Eq. (1) for the attention, it can be observed that, though similarity values between Q and K are computed, they are only used for softmax-based weighting to aggregate features from V . Therefore, the output of the attention is always a weighted version of V (orK), and thus cross-matching between a pair of inputs is not directly formulated.
To address this, we propose a simplified decoder, which is explicitly formulated towards similarity computation. The structure of this decoder is shown in the middle of Fig. 1. First, both gallery and query images are independently encoded by N sequential Transformer encoders after a backbone network, as shown on the left of Fig. 1. This encoding helps aggregating global information from similar body parts for the subsequent matching step. The resulting feature encodings are denoted by Qn ∈ Rhw×d and Kn ∈ Rhw×d, n = 1, . . . , N , for the query and gallery, respectively. Then, as in Eq. (2), both the gallery and query encodings are transformed by a fully connected (FC) layer FC1:
Q′_n = Q_n W_n, K′_n = K_n W_n, (4)
where Wn ∈ Rd×d is the parameter matrix for encoder-decoder layer n. Different from Eq. (2), we use shared FC parameters for both query and gallery, because they are exchangeable in the image matching task, and the similarity metric needs to be symmetrically defined. Then, the dot product is computed between the transformed features, as in Eq. (1):
S_n = Q′_n K′_n⊤, (5)
where Sn ∈ Rhw×hw are the similarity scores. In addition, a learnable prior score embedding R ∈ Rhw×hw is designed, which defines prior matching scores between different locations of query and gallery images. Then, it is used to weight the similarity values: S′n = Sn ∗ σ(R), (6) where ∗ denotes element-wise multiplication, and σ is the sigmoid function to map the prior score embedding into weights in [0, 1].
After that, a GMP layer is applied along the last dimension of hw elements: S′′n = max(S′n, dim=-1). (7) This way, the optimal local matching over all key locations is obtained, as in QAConv [10]. Compared to Eq. (1), the GMP here can be considered as a hard attention, but it is used for similarity matching rather than softmax-based feature weighting like in the soft attention. Note that multi-head design in MHA is not considered here (c.f. Section 6.6).
Then, after a batch normalization layer BN1, an MLP head is further appended, similar to the feedforward layer of Transformers. It is composed of MLPHead1=(FC2, BN2, ReLU) to map the hw similarity values to dimension D, and MLPHead2=(FC3, BN3) to map dimension D to 1 as a single output score S′′′n.
Finally, decoder n outputs a similarity score by fusing the output of the previous decoder: S′′′′_n = S′′′_n + S′′′′_{n−1}, (8) where S′′′′_0 is defined as 0. With N stacked encoder-decoder blocks, as shown in Fig. 1, this can be considered as residual similarity learning. Note that the stack of encoder-decoder blocks in TransMatcher is different from that in the vanilla Transformer. In TransMatcher, the encoder and decoder are connected before being stacked, while in the vanilla Transformer they are stacked independently before connection. This way, the decoder of TransMatcher is able to perform cross matching with different levels of encoded features for residual similarity learning.
However, the GMP operation in Eq. (7) is not symmetric. To make TransMatcher symmetric for the query and gallery, the GMP operation in Eq. (7) can also be applied along dim=0; that is, conduct an inverse search of best matches over all query locations. Keeping other operations the same, this will result in another set of similarity scores, which are summed with the original ones after the FC3 layer. Further details can be found in the Appendix. Note that this is not reflected in Fig. 1 for simplicity of illustration.
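Putting Eqs. (4)–(8) together, one decoder block can be sketched as follows (our own PyTorch reimplementation for illustration; module names, the exact placement of the symmetric-GMP fusion, and the single-pair interface are our assumptions rather than the authors' released code, which handles whole batches of pairs).

```python
import torch
import torch.nn as nn

class SimplifiedDecoder(nn.Module):
    """One TransMatcher decoder block: shared FC -> dot product -> prior score
    weighting -> global max pooling -> MLP head, applied in both match directions."""
    def __init__(self, hw=192, d=512, D=2048):
        super().__init__()
        self.fc1 = nn.Linear(d, d, bias=False)           # shared W_n, Eq. (4)
        self.prior = nn.Parameter(torch.zeros(hw, hw))    # prior score embedding R
        self.bn1 = nn.BatchNorm1d(hw)
        self.head1 = nn.Sequential(nn.Linear(hw, D), nn.BatchNorm1d(D), nn.ReLU())
        self.head2 = nn.Sequential(nn.Linear(D, 1), nn.BatchNorm1d(1))

    def forward(self, q_feat, k_feat, prev_score=0.0):
        # q_feat, k_feat: (hw, d) encoded feature maps of one query / gallery image.
        q, k = self.fc1(q_feat), self.fc1(k_feat)               # Eq. (4)
        s = (q @ k.t()) * torch.sigmoid(self.prior)             # Eqs. (5)-(6), (hw, hw)

        def decode(scores):                                     # Eq. (7) + MLP head
            s_max = scores.max(dim=-1).values.unsqueeze(0)      # GMP over key locations
            return self.head2(self.head1(self.bn1(s_max)))
        score = decode(s) + decode(s.t())                       # symmetric matching
        return score.squeeze() + prev_score                     # Eq. (8): residual fusion

block = SimplifiedDecoder().eval()      # eval() so BatchNorm runs on a single pair
print(block(torch.randn(192, 512), torch.randn(192, 512)).item())
```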
Finally, the outputs of TransMatcher scores for all query-gallery pairs in a batch are collected for pairwise metric learning following the same pipeline in QAConv-GS [11], and the same binary cross entropy loss is used as in the QAConv-GS.
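For completeness, a minimal sketch of a pairwise binary cross-entropy objective over a batch of matching scores is shown below (a simplified illustration of one plausible formulation; the actual QAConv-GS pipeline, with its graph sampler and hard-pair handling, differs in details).

```python
import torch
import torch.nn.functional as F

def pairwise_bce_loss(scores, labels):
    """scores: (B, B) matching scores for all query-gallery pairs in a batch;
    labels: (B,) person identities. Same-identity pairs are treated as positives."""
    targets = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)          # ignore self-matches
    return F.binary_cross_entropy_with_logits(scores[off_diag], targets[off_diag])

scores = torch.randn(8, 8)                                        # toy batch of 8 images
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])                   # two images per identity
print(pairwise_bce_loss(scores, labels).item())
```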
6 Experiments
6.1 Datasets
Four large-scale person re-identification datasets, CUHK03 [8], Market-1501 [34], MSMT17 [28], and RandPerson [27], which are publicly available for research purposes, are used in our experiments. The CUHK03 dataset includes 1,360 persons and 13,164 images, with 767 and 700 subjects used for training and testing, respectively, as in the CUHK03-NP protocol [35]. Besides, the “detected” subset is used, which is more challenging than the “labeled” subset. The Market-1501 dataset contains 32,668 images of 1,501 identities captured from six cameras, with 12,936 images from 751 identities for training, and 19,732 images from 750 identities for testing. MSMT17 includes 4,101 identities and 126,441 images captured from 15 cameras, with 32,621 images from 1,041 identities for training, and the remaining images from 3,010 identities for testing. RandPerson is a recently released synthetic person re-identification dataset for large-scale training towards generalization testing. It contains 8,000 persons and 1,801,816 images. A subset with 132,145 images of the 8,000 IDs is used for training.
Cross-dataset evaluation is performed on these datasets by training on the training subset of one dataset, and evaluating on the test subsets of other datasets. Except that for MSMT17 we further use an additional setting with all images for training, regardless of the subset splits. This is denoted by MSMT17all. All evaluations follow the single-query evaluation protocol. The Rank-1 (Top1) accuracy and mean average precision (mAP) are used as the performance evaluation metrics.
6.2 Implementation Details
The implementation of TransMatcher is built upon the official PyTorch project of QAConv-GS 3 [11], as the graph sampler (GS) proposed in this project is efficient for metric learning and quite suitable for the learning of TransMatcher. We keep most of the settings the same as QAConv-GS. Specifically, ResNet-50 [3] is used as the backbone network, with three instance normalization (IN) [23] layers further appended as in IBN-Net-b [15], following several recent studies [5, 36, 6, 38, 11]. The backbone network is pre-trained on ImageNet, with the states of the BN layers being fixed. The layer3 feature map is used, with a 3×3 neck convolution appended to produce the final feature map. The input image is resized to 384× 128. The batch size is set to 64, with K=4 for the GS sampler. The network is trained with the SGD optimizer, with a learning rate of 0.0005 for the backbone network, and 0.005 for newly added layers. They are decayed by 0.1 after 10 epochs, and 15 epochs are trained in total. Except that for RandPerson [27] the total number of epochs is 4, and the learning rate step size is 2, according to the experiences in [27, 11]. Gradient clipping is applied with T = 4 [11]. Several commonly used data augmentation methods are applied, including random flipping, cropping, occlusion, and color jittering. All experiments are run on a single NVIDIA V100 GPU.
For the proposed TransMatcher, unless otherwise indicated, d=512 and D=2048 by default as in the original Transformer [24], and H=1 and N=3 for higher efficiency. Please refer to Section 6.6 for further parameter analysis. Besides, in practice, we find that when N decoders are used, using only N − 1 encoders and pairing the ResNet feature map directly with the first decoder slightly improves the results while being more efficient, so this is preferred in the implementation (cf. Appendix).
6.3 Comparison to the State of the Art
A comparison to the state of the art (SOTA) in generalizable person re-identification is shown in Table 1. Several methods published very recently for generalizable person re-identification are compared, including OSNet [36], MuDeep [17], ADIN [31], SNR [6], CBN [38], QAConv [10], and QAConv-GS [11]. From Table 1 it can be observed that TransMatcher significantly improves the previous SOTA. For example, with Market-1501 for training, the Rank-1 and mAP are improved by 5.8% and 5.7% on CUHK03-NP, respectively, and they are improved by 6.1% and 3.4% on MSMT17, respectively. With MSMT17→Market-1501, the improvements are 5.0% for Rank-1 and 5.3% for mAP. With the synthetic dataset RandPerson for training, the improvements on Market-1501 are 3.3% for Rank-1 and 5.3% for mAP, and the gains on MSMT17 are 5.9% for Rank-1 and 3.3% for mAP.
Compared with the second-best method, QAConv-GS, which shares the same code base and training settings as the proposed TransMatcher, the results indicate that TransMatcher is a superior image matching and metric learning method for generalizable person re-identification, thanks to the effective cross-matching design in the new decoders.
3QAConv-GS project under MIT License: https://github.com/ShengcaiLiao/QAConv.
6.4 Comparison of Transformers
A comparison of different Transformers trained on MSMT17 for direct cross-dataset evaluation is shown in Table 2. For a fair comparison, they are all trained with the same settings as described in Section 6.2. Besides, H=1 for all models. ViT, the vanilla Transformer, and TransMatcher all have the same parameter settings. Though we use an NVIDIA V100 GPU with 32GB of memory, Transformer-Cat and Transformer-Cross still encounter the memory overflow problem under the same parameter settings as TransMatcher. Therefore, we have to set d=128, D=512, and N=2 for them to run, and accordingly, a smaller version of TransMatcher with the same set of parameters is also provided for comparison.
From the results shown in Table 2, it can be observed that ViT and the vanilla Transformer perform poorly when generalizing to other datasets. In contrast, the proposed TransMatcher significantly improves the performance. This confirms that simply applying Transformers to the image matching task is not effective, because they lack cross-image interaction in their designs.
Besides, we find that Transformer-Cat does not lead to improvement compared to ViT and the vanilla Transformer. It is a smaller model, though. However, Transformer-Cross does lead to notable improvements, indicating that the cross-matching of gallery and query images in Transformer decoders is potentially more effective. However, it is still not as good as the smaller version of TransMatcher. For example, on Market-1501, TransMatcher improves the Rank-1 by 11.2% and the mAP by 9.2% over the Transformer-Cross. Therefore, the cross-attention design in the original Transformers is not efficient enough for image matching, due to its focus on feature aggregation but not similarity matching. More variants and experiments of Transformers can be found in Appendix.
As for the running speed, the training times of these methods are also listed in Table 2. As can be seen, without cross-matching, ViT is the most efficient, followed by the vanilla Transformer. TransMatcher is not as efficient as ViT due to the explicit cross-matching between query and gallery images. However, it is still acceptable, thanks to the new simplified decoder. In contrast, even with a small set of parameters, Transformer-Cat and Transformer-Cross are still quite heavy to compute.
6.5 Ablation Study
The structure of the proposed TransMatcher shown in Fig. 1 is carefully ablated, with results listed in Table 3. The training is performed on MSMT17. For ease and reliable comparison, we report the average of all Rank-1 and mAP results on all test sets over four random runs. This is denoted by mAcc. We start with Dot Product + GMP + MLPHead2 (the input dimension to FC3 needs to be adapted to hw accordingly), which is the simplest and most necessary configuration. Then, by adding MLPHead1, the performance is improved by 1.38%, indicating that increasing the dimension to D, as in Transformers, is useful. Then, by including FC1 / BN1 independently, the performance gain is 0.84% / 0.88%, and by including them together, the performance can be further improved. Finally, when the prior score embedding is appended, the best performance is achieved. Interestingly, when we include a learnable positional embedding in the encoders, as in ViT, either independently or together with the prior score embedding, the performance is degraded. This indicates that mixing the position information with visual features for image matching is not useful in our design. In contrast, learning spatial-aware prior matching scores separately for score weighting is more effective. More ablation study and analysis can be found in the Appendix.
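For clarity, the mAcc summary used in this ablation is simply the average of all Rank-1 and mAP values over all test sets and the four random runs; a small sketch under an assumed data layout:

```python
import numpy as np

# results[run][dataset] = (rank1, mAP), collected over four random training runs
def mean_accuracy(results):
    values = [v for run in results for (r1, ap) in run.values() for v in (r1, ap)]
    return float(np.mean(values))
```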
6.6 Parameter Analysis
To understand the parameter selection of the proposed TransMatcher, we train it on MSMT17 with different parameter configurations to the defaults, with the mAcc results as well as the training time shown in Fig. 2. First, the performance is gradually improved by increasing the model dimension d. However, the training time is also increased quadratically. Therefore, to provide a balance between accuracy and running speed, d=512 is selected, which is the same as in the vanilla Transformer [24].
For the feed-forward dimension D, the performance is also gradually improved when increasing the value. However, the training time is less affected, because the feed-forward operation is only applied after the dot product and GMP, where the dimension d and one spatial dimension hw are already contracted. Nevertheless, a large D will increase the memory usage. Therefore, D=2048 is selected, which is also the same as in the vanilla Transformer [24].
As for the number of layers N , the performance is also gradually improved with increasing N . However, after N=3 the performance tends to saturate, and the training time grows linearly with the increasing number of layers. Therefore, N=3 is a reasonable balance for our choice. In addition, with N = 1 there is no encoder used (for details please see Appendix), and from Fig. 2 it is clear that this is inferior, indicating that including an encoder is important. On the other hand, from the poor performance of ViT where there are only encoders, it is clear that the decoder is also important.
Finally, for the number of heads H in the encoders, it appears that larger H does not lead to improved results. Since the training time is also not affected, we simply select H=1 in the encoders, and do not implement the multi-head mechanism in the decoders.
6.7 Qualitative Analysis
With the help of the GMP layer, inspired by QAConv [10], the proposed TransMatcher is able to find the best local correspondence matches in each decoder layer. Some qualitative matching results are shown in Fig. 3 for a better understanding of TransMatcher. More examples can be found in the Appendix. The model used here is trained on the MSMT17 dataset [28], and the evaluations are done on the query subset of the Market-1501 dataset [34]. Results of both positive pairs and hard negative pairs are shown. For a clear illustration, only reliable correspondences with matching scores over a certain threshold are shown, where the threshold is determined by a false acceptance rate of 1‰ over all matches of negative pairs. Note that the local positions are coarse due to the 24×8 size of the feature map.
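The visualization threshold mentioned above can be computed as a simple quantile of the negative-pair matching scores; a minimal sketch (assuming the scores are collected in a flat array):

```python
import numpy as np

def far_threshold(neg_scores, far=0.001):
    # score above which only a fraction `far` (here 1 permille) of negative-pair matches fall
    return float(np.quantile(neg_scores, 1.0 - far))
```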
As can be observed from Fig. 3, the proposed method is able to find correct local correspondences for positive pairs of images, even when there are notable misalignments in scale and position, pose, viewpoint, and illumination variations, occlusions, and low-resolution blur. Besides, for hard negative pairs, the matching of TransMatcher still appears to be mostly reasonable, by linking visually similar parts or even the same person who might be incorrectly labeled.
This indicates that the proposed TransMatcher is effective in local correspondence matching, and note that it learns to do this with identity information as the only supervision. Besides, the matching capability generalizes to other datasets beyond the training set. From the illustration it can also be seen that, generally, the matching results of the first decoder layer are not as successful as those of the next two layers, and the matching with the last decoder layer appears to be the best. This indicates that both the Transformer encoders and decoders help the model match better by aggregating global similarity information.
7 Conclusion
With the study conducted in this paper, we conclude that: (1) direct applications of ViT and the vanilla Transformer are not effective for image matching and metric learning, because they lack cross-image interaction in their designs; (2) designing query-gallery concatenation in ViT does not help, while introducing query-gallery cross-attention in the vanilla Transformer leads to notable but not adequate improvements, probably because the attention mechanism in Transformers might be primarily designed for global feature aggregation, which is not naturally suitable for image matching; and (3) a new simplified decoder thus developed, which employs hard attention to cross-matching similarity scores, is more efficient and effective for image matching and metric learning. With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets with large improvements. Therefore, this study proves that Transformers can be effectively adapted for the image matching and metric learning tasks, and so other potentially useful variants will be of future interest.
Acknowledgements
The authors would like to thank Yanan Wang, who helped produce Fig. 1 in this paper, Anna Hennig, who helped proofread the paper, and all the anonymous reviewers for their valuable feedback in improving the paper.
1. What is the main contribution of the paper regarding the application of Transformer in image matching and metric learning?
2. What are the strengths and weaknesses of the proposed approach compared to existing works in person re-identification?
3. Do you have any questions or concerns about the baseline architecture, loss function, and improvements applied to the naive Transformer application?
4. How does the reviewer assess the clarity, quality, originality, significance, and impact of the paper's content?
Summary Of The Paper
This paper applies Transformers to the image matching and metric learning problem. It examines direct applications of ViT and the vanilla Transformer, as well as two solutions adapted specifically for matching images through attention. A new simplified decoder is proposed for efficient image matching, focusing on similarity computation and mapping.
Review
Originality: Middle
(+) This paper compares and extends the prevalent ViT [6] and DETR [2], which contain both self-attention and cross (gallery-probe) attention, for person re-id.
(-) There are many works that use self-attention for person re-id. Such existing works are not mentioned at all. For example,
[A] Z. Zhang et al., Relation-Aware Global Attention for Person Re-identification, CVPR 2020. [B] R. Hou et al., Interaction-and-Aggregation Network for Person Re-identification, CVPR 2019.
(-) There are several cross-attention works on person re-id, i.e., approaches that maximize the similarity of local features of gallery-probe pairs. Such existing works are not explained. For example,
[C] S.Zhao et al., Do Not Disturb Me: Person Re-identification Under the Interference of Other Pedestrians, ECCV2020
Quality: Middle
(+) Direct application of Transformer and naïve application of Transformer to image matching problem are compared.
(+) The accuracies are better, and the training time is shorter than the compared naïve application of Transformer.
(+) The proposed method shows better rank-1 rates and mAP than the results reported on state-of-the-art domain generalized person re-id works.
(-) Details of baseline architecture are not clear.
ViT is an image classification model. Is the same binary cross-entropy loss in GS sampler applied for all methods? Is the same MLP head applied as the proposed model?
For Transformer-Cross, what is the value V in the decoder in Eq. (1)? Why are the ResNet query features directly used (line 155) without encoders, unlike the proposed model?
(-) The performance of Transformer-Cat in Table 2 is lower than vanilla Transformer. How about appending the results of the Transformer in the lower block?
(-) The proposed method applies many improvements to the Transformer. For example,
The proposed method uses the shared FC parameter to query and key (line-174-175).
Multi-scale similarity scores are summed in the proposed model as in Eq.(8).
However, Table 2 compares only the final model. Thus, we cannot see how much each component improves the naive application of the Transformer. We can see the ablation study, but the improvement over the naïve application of the Transformer would be more understandable by adding the new components to Transformer-Cross.
(+) This paper shows that replacing the first encoder directly with the ResNet feature map improves the results while being more efficient (line 241-242 and A.2).
(-) Why is only the first encoder removed? Does the replaced place and the number of replaced encoders affect performance?
Clarity: Middle
(+) The paper is easy to understand.
(-) What loss function is used should be made clear. GS sampler seems to be the negative sampling method and not loss function.
(-) It would be better to clarify MLPHead_1 and MLPHead_2 in Figure1.
Line 289-290, “We start with Dot Product + GMP + MLPHead_1”. MLPHead_1 seems to be MLPHead_2.
When MLPHead_2 is not used, the output dimension seems to be D. How can we output the score?
When MLPHead_1 is not used, the input dimension to FC3 seems to be changed to hw.
(-) For the ablation study, the author used their own measure mACC, which is averages of rank-1 rates and mAP on different datasets and runs. It is not clear if the average of different measures is possible. Also, we cannot compare the results of Table2 and Table3 because it is not used in Table2.
Significance: Middle
(+) The Transformer is a timely topic. ViT[6] and DeTR[2] are prevalent in computer vision.
(+) This paper clearly outperforms the state-of-the-art performance on domain generalized person re-id.
(-) The theoretical contribution is limited. This paper has made only engineering to improve the rank-1 rates and mAP and computational time. |
NIPS
Title
TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification
Abstract
Transformers have recently gained increasing attention in computer vision. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions, and the generalizability of Transformers is unknown. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images. We find that the Vision Transformer (ViT) and the vanilla Transformer with decoders are not adequate for image matching due to their lack of image-to-image attention. Thus, we further design two naive solutions, i.e. query-gallery concatenation in ViT, and query-gallery cross-attention in the vanilla Transformer. The latter improves the performance, but it is still limited. This implies that the attention mechanism in Transformers is primarily designed for global feature aggregation, which is not naturally suitable for image matching. Accordingly, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, global max pooling and a multilayer perceptron (MLP) head are applied to decode the matching result. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching. The proposed method, called TransMatcher, achieves state-of-the-art performance in generalizable person re-identification, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively, on several popular datasets. Code is available at https://github.com/ShengcaiLiao/QAConv.
1 Introduction
The Transformer [24] is a neural network based on attention mechanisms. It has shown great success in the field of natural language processing. Recently, it has also shown promising performance for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], and image segmentation [14, 26], thus gaining increasing attention in this field. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification or dense predictions, and the generalizability of Transformers is unknown. At a glance, query-key similarities are computed by dot products in the attention mechanisms of Transformers. Therefore, these models could potentially be useful for image matching. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images, with applications in generalizable person re-identification.
Attention mechanisms are used to gather global information from different locations according to query-key similarities. The vanilla Transformer [24] is composed of an encoder that employs
∗Shengcai Liao is the corresponding author.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
self-attention, and a decoder that further incorporates a cross-attention module. The difference is that the query and key are the same in the self-attention, while they are different in the cross-attention. The Vision Transformer (ViT) [7] applies a pure Transformer encoder for feature learning and image classification. While the Transformer encoder facilitates feature interaction among different locations of the same image, it cannot address the image matching problem being studied in this paper, because it does not enable interaction between different images. In the decoder, however, the cross-attention module does have the ability for cross interaction between query and the encoded memory. For example, in the decoder of the detection Transformer (DETR) [2], learnable query embeddings are designed to decode useful information in the encoded image memory for object localization. However, the query embeddings are independent from the image inputs, and so there is still no interaction between pairs of input images. Motivated by this, how about using actual image queries instead of learnable query embeddings as input to decoders?
Person re-identification is a typical image matching and metric learning problem. In a recent study called QAConv [10], it was shown that explicitly performing image matching between pairs of deep feature maps helps the generalization of the learned model. This inspires us to investigate the capability and generalizability of Transformers for image matching and metric learning between pairs of images. Since training through classification is also a popular strategy for metric learning, we start from a direct application of ViT and the vanilla Transformer with a powerful ResNet [3] backbone for person re-identification. However, this results in poor generalization to different datasets. Then, we consider formulating explicit interactions between query2 and gallery images in Transformers. Two naive solutions are thus designed. The first one uses a pure Transformer encoder, as in ViT, but concatenates the query and gallery features together as inputs, so as to enable the self-attention module to read both query and gallery content and apply the attention between them. The second design employs the vanilla Transformer, but replaces the learnable query embedding in the decoder by the ready-to-use query feature maps. This way, the query input acts as a real query from the actual retrieval inputs, rather than a learnable query which is more like a prior or a template. Accordingly, the cross-attention module in the decoder is able to gather information across query-key pairs, where the key comes from the encoded memory of gallery images.
While the first solution does not lead to improvement, the second one is successful with notable performance gain. However, compared to the state of the art in generalizable person re-identification,
2Query/gallery in person re-identification and query/key or target/memory in Transformers have very similar concepts originated from information retrieval. We use the same word query here in different contexts.
the performance of the second variant is still not satisfactory. We further consider that the attention mechanism in Transformers might be primarily for global feature aggregation, which is not naturally suitable for image matching, though the two naive solutions already enable feature interactions between query and gallery images. Therefore, to improve the effectiveness of image matching, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, inspired from QAConv [10], global max pooling (GMP) is applied, which acts as a hard attention to gather similarity values, instead of a soft attention to weight feature values. This is because, in image matching, we are more interested in matching scores than feature values. Finally, a multilayer perceptron (MLP) head maps the matching result to a similarity score for each query-gallery pair. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching.
We call the above design TransMatcher (see Fig. 1), which targets at efficient image matching and metric learning in particular. The contributions of this paper are summarized as follows.
• We investigate the possibility and generalizability of applying Transformers for image matching and metric learning, including direct applications of ViT and the vanilla Transformer, and two solutions adapted specifically for matching images through attention. This furthers our understanding of the capability and limitation of Transformers for image matching.
• According to the above, a new simplified decoder is proposed for efficient image matching, with a focus on similarity computation and mapping.
• With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively.
2 Related Work
Given pairs of input images, deep feature matching has been shown to be effective for person re-identification. Li et al. [8] proposed a novel filter pairing neural network (FPNN) to handle misalignment and occlusions in person re-identification. Ahmed et al. [1] proposed a local neighborhood matching layer to match deep feature maps of query and gallery images. Suh et al. [21] proposed a deep neural network to learn part-aligned bilinear representations for person re-identification. Shen et al. [18] proposed a Kronecker-product matching (KPM) module for matching person images in a softly aligned way. Liao and Shao [10] proposed the query adaptive convolution (QAConv) for explicit deep feature matching, which is proved to be effective for generalizable person re-identification. They further proposed a graph sampler (GS) for efficient deep metric learning [11].
Generalizable person re-identification has gained increasing attention in recent years. Zhou et al. [36] proposed the OSNet, and showed that this new backbone network has advantages in generalization. Jia et al. [5] applied IBN-Net-b [15] together with a feature normalization to alleviate both style and content variance across datasets to improve generalizability. Song et al. [20] proposed a domaininvariant mapping network (DIMN) and further introduced a meta-learning pipeline for effective training and generalization. Qian et al. [17] proposed a deep architecture with leader-based multiscale attention (MuDeep), with improved generalization of the learned models. Yuan et al. [31] proposed an adversarial domain-invariant feature learning network (ADIN) to separate identity-related features from challenging variations. Jin et al.[6] proposed a style normalization and restitution module, which shows good generalizability for person re-identification. Zhuang et al. [38] proposed a camera-based batch normalization (CBN) method for domain-invariant representation learning, which utilizes unlabeled target data to adapt the BN layer in a quick and unsupervised way. Wang et al. [27] created a large-scale synthetic person dataset called RandPerson, and showed that models learned from synthesized data generalize well to real-world datasets. However, current methods are still far from satisfactory in generalization for practical person re-identification.
There are a number of attentional networks [12, 16, 13, 30, 19, 9, 29, 32, 4] proposed for person re-identification, but they focus on representation learning. More recently, Zhao et al. [33] proposed a cross-attention network for person re-identification. However, it is still applied for feature refinement, rather than the explicit image matching between gallery and probe images studied in this paper.
Transformers have recently received increasing attention for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], image segmentation [14, 26], and so on. For
example, ViT was proposed in [7], showing that a pure Transformer-based architecture is capable of effective image classification. DETR was proposed in [2], providing a successful end-to-end Transformer solution for object detection. Later, several studies, such as the Deformable DETR [37], Swin [14], and PVT [26], improved the computation of Visual Transformers and further boosted their performance. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions. There lacks a comprehensive study on whether Transformers are effective for image matching and metric learning and how its capability is in generalizing to unknown domains.
3 Transformers
For the vanilla Transformer [24], the core module is the multi-head attention (MHA). First, a scaled dot-product attention is defined as follows:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V, \qquad (1)$$
where $Q \in \mathbb{R}^{T \times d_k}$ is the query (or target) matrix, $K \in \mathbb{R}^{M \times d_k}$ is the key (or memory) matrix, $V \in \mathbb{R}^{M \times d_v}$ is the value matrix, $T$ and $M$ are the sequence lengths of the query and key, respectively, $d_k$ is the feature dimension of the query and key, and $d_v$ is the feature dimension of $V$. In visual tasks, $Q$ and $K$ are usually reshaped query and key feature maps, with $T = M = hw$, where $h$ and $w$ are the height and width of the query and key feature maps, respectively. Then, the MHA is defined as:
$$\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V), \qquad (2)$$
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_H)\,W^O, \qquad (3)$$
where $W_i^Q \in \mathbb{R}^{d \times d_k}$, $W_i^K \in \mathbb{R}^{d \times d_k}$, $W_i^V \in \mathbb{R}^{d \times d_v}$, and $W^O \in \mathbb{R}^{Hd_v \times d}$ are parameter matrices, and $H$ is the number of heads. Then, $Q = K = V$ in the multi-head self-attention (MHSA) in the encoders, while they are defined separately in the multi-head cross-attention (MHCA) in the decoders.
The structure of the Transformer encoder without positional encoding is shown on the left of Fig. 1. Beyond MHSA, it further appends a feed-forward layer to first increase the feature dimension from d to D, and then recover it back from D to d. Besides, the encoder can be self-stacked N times, where N is the total number of encoder layers. In ViT [7], only Transformer encoders are used, and positional encoding is further applied. In the vanilla Transformer [24], decoders with MHCA are further applied, with the query being learnable query embeddings initially, and the output of the previous decoder layer later on, and the key and value being the output of the encoder layer. The decoder can also be self-stacked N times.
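For reference, a minimal PyTorch sketch of Eqs. (1)-(3) (not the authors' implementation) is given below, covering the scaled dot-product attention and its multi-head combination:

```python
import math
import torch

def attention(Q, K, V):
    # Q: (T, d_k), K: (M, d_k), V: (M, d_v)
    scores = Q @ K.t() / math.sqrt(Q.shape[-1])   # (T, M) scaled query-key similarities, Eq. (1)
    return torch.softmax(scores, dim=-1) @ V      # softmax weighting aggregates V

def multi_head(Q, K, V, Wq, Wk, Wv, Wo):
    # Wq, Wk, Wv: lists of H per-head projection matrices; Wo: (H * d_v, d), Eqs. (2)-(3)
    heads = [attention(Q @ Wq[i], K @ Wk[i], V @ Wv[i]) for i in range(len(Wq))]
    return torch.cat(heads, dim=-1) @ Wo
```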
4 Image Matching with Transformers: Naive Solutions
While the above ViT and vanilla Transformer are able to perform image matching through black-box feature extraction and distance learning, they are not optimal for this task because they lack image-toimage interaction in their designs. Though cross-attention is employed in the Transformer decoders, in its original form the query either comes from learnable query embeddings, or from the output of the previous decoder layer.
Therefore, we adapt Transformers with two naive solutions for image matching and metric learning. Building upon a powerful ResNet [3] backbone, the first solution appends ViT, but not simply for feature extraction. Instead, a pair of query and gallery feature maps are concatenated to double the sequence length, forming a single sample for the input of ViT. Thus, both the query and key for the self-attention layer contain query image information in one half and gallery image information in the other half. Therefore, the attention computation in Eq. (1) is able to interact query and gallery inputs for image matching. This variant is denoted as Transformer-Cat.
The second solution appends the vanilla Transformer, but instead of learnable query embeddings, ResNet query features are directly input into the first decoder. This way, the cross-attention layer in the decoders is able to interact the query and gallery samples being matched. This variant is denoted as Transformer-Cross.
The structure of these two variants can be found in the Appendix. Note that these two solutions have high computational and memory costs, especially for large d, D, and N (c.f. Section 6.4).
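The input handling of the two naive variants can be sketched with standard PyTorch Transformer modules as follows; this is only a schematic illustration, assuming the backbone feature maps are already flattened to sequences of length hw with feature dimension d:

```python
import torch
import torch.nn as nn

d = 128
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=d, nhead=1), num_layers=2)
decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=d, nhead=1), num_layers=2)

q_feat = torch.randn(24 * 8, 1, d)   # query feature map, sequence-first layout
g_feat = torch.randn(24 * 8, 1, d)   # gallery feature map

# Transformer-Cat: concatenate query and gallery tokens so self-attention sees both images.
cat_out = encoder(torch.cat([q_feat, g_feat], dim=0))          # (2*hw, 1, d)

# Transformer-Cross: the gallery goes through the encoder; the query feature map itself
# (rather than learnable query embeddings) is the target of the decoder's cross-attention.
cross_out = decoder(tgt=q_feat, memory=encoder(g_feat))        # (hw, 1, d)
```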
5 The Proposed TransMatcher
Though the above two solutions enable query-gallery interaction in the attention mechanism for image matching, they are not adequate for distance metric learning. This is because, taking a deeper look at Eq. (1) for the attention, it can be observed that, though similarity values between Q and K are computed, they are only used for softmax-based weighting to aggregate features from V . Therefore, the output of the attention is always a weighted version of V (orK), and thus cross-matching between a pair of inputs is not directly formulated.
To address this, we propose a simplified decoder, which is explicitly formulated towards similarity computation. The structure of this decoder is shown in the middle of Fig. 1. First, both gallery and query images are independently encoded by N sequential Transformer encoders after a backbone network, as shown on the left of Fig. 1. This encoding helps aggregating global information from similar body parts for the subsequent matching step. The resulting feature encodings are denoted by Qn ∈ Rhw×d and Kn ∈ Rhw×d, n = 1, . . . , N , for the query and gallery, respectively. Then, as in Eq. (2), both the gallery and query encodings are transformed by a fully connected (FC) layer FC1:
$$Q'_n = Q_n W_n, \quad K'_n = K_n W_n, \qquad (4)$$
where $W_n \in \mathbb{R}^{d \times d}$ is the parameter matrix for encoder-decoder layer $n$. Different from Eq. (2), we use shared FC parameters for both query and gallery, because they are exchangeable in the image matching task, and the similarity metric needs to be symmetrically defined. Then, the dot product is computed between the transformed features, as in Eq. (1):
$$S_n = Q'_n K'^T_n, \qquad (5)$$
where $S_n \in \mathbb{R}^{hw \times hw}$ are the similarity scores. In addition, a learnable prior score embedding $R \in \mathbb{R}^{hw \times hw}$ is designed, which defines prior matching scores between different locations of query and gallery images. It is then used to weight the similarity values:
$$S'_n = S_n * \sigma(R), \qquad (6)$$
where $*$ denotes element-wise multiplication, and $\sigma$ is the sigmoid function that maps the prior score embedding into weights in $[0, 1]$.
After that, a GMP layer is applied along the last dimension of $hw$ elements:
$$S''_n = \max(S'_n, \text{dim}{=}{-1}). \qquad (7)$$
This way, the optimal local matching over all key locations is obtained, as in QAConv [10]. Compared to Eq. (1), the GMP here can be considered as a hard attention, but it is used for similarity matching rather than softmax-based feature weighting as in the soft attention. Note that the multi-head design in MHA is not considered here (c.f. Section 6.6).
Then, after a batch normalization layer BN1, an MLP head is further appended, similar to the feed-forward layer of Transformers. It is composed of MLPHead1=(FC2, BN2, ReLU) to map the $hw$ similarity values to dimension $D$, and MLPHead2=(FC3, BN3) to map dimension $D$ to 1 as a single output score $S'''_n$.
Finally, decoder $n$ outputs a similarity score by fusing the output of the previous decoder:
$$S''''_n = S'''_n + S''''_{n-1}, \qquad (8)$$
where $S''''_0$ is defined as 0. With $N$ stacked encoder-decoder blocks, as shown in Fig. 1, this can be considered as residual similarity learning. Note that the stack of encoder-decoder blocks in TransMatcher is different from that in the vanilla Transformer. In TransMatcher, the encoder and decoder are connected before being stacked, while in the vanilla Transformer they are stacked independently before connection. This way, the decoder of TransMatcher is able to perform cross-matching with different levels of encoded features for residual similarity learning.
However, the GMP operation in Eq. (7) is not symmetric. To make TransMatcher symmetric for the query and gallery, the GMP operation in Eq. (7) can also be applied along dim=0; that is, conduct an inverse search of best matches over all query locations. Keeping other operations the same, this will result in another set of similarity scores, which are summed with the original ones after the FC3 layer. Further details can be found in the Appendix. Note that this is not reflected in Fig. 1 for simplicity of illustration.
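A minimal, hedged sketch of one TransMatcher decoder layer following Eqs. (4)-(8) is given below. It is not the released implementation; in particular, the symmetric inverse-direction GMP described above is omitted for brevity, the prior-embedding initialization and the bias-free FC1 are assumptions, and module names are illustrative only.

```python
import torch
import torch.nn as nn

class SimplifiedDecoder(nn.Module):
    def __init__(self, hw, d=512, D=2048):
        super().__init__()
        self.fc1 = nn.Linear(d, d, bias=False)          # shared projection W_n of Eq. (4); bias omitted
        self.prior = nn.Parameter(torch.zeros(hw, hw))  # learnable prior score embedding R (assumed init)
        self.bn1 = nn.BatchNorm1d(hw)
        self.head1 = nn.Sequential(nn.Linear(hw, D), nn.BatchNorm1d(D), nn.ReLU())  # MLPHead1
        self.head2 = nn.Sequential(nn.Linear(D, 1), nn.BatchNorm1d(1))              # MLPHead2

    def forward(self, q_enc, k_enc, prev_score=0.0):
        # q_enc, k_enc: (B, hw, d) encoder outputs for B query-gallery pairs
        q, k = self.fc1(q_enc), self.fc1(k_enc)               # Eq. (4)
        s = torch.bmm(q, k.transpose(1, 2))                   # Eq. (5): (B, hw, hw) similarity scores
        s = s * torch.sigmoid(self.prior)                     # Eq. (6): prior score weighting
        s, _ = s.max(dim=-1)                                  # Eq. (7): GMP over all key locations
        s = self.head2(self.head1(self.bn1(s))).squeeze(-1)   # MLP head -> one score per pair
        return s + prev_score                                 # Eq. (8): residual score fusion
```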
Finally, the outputs of TransMatcher scores for all query-gallery pairs in a batch are collected for pairwise metric learning following the same pipeline in QAConv-GS [11], and the same binary cross entropy loss is used as in the QAConv-GS.
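A hedged sketch of this pairwise loss: TransMatcher scores for all query-gallery pairs in a batch are compared against binary same-identity labels with binary cross-entropy (the QAConv-GS pair construction and sampling details are not reproduced here).

```python
import torch
import torch.nn.functional as F

def pairwise_bce_loss(scores, q_ids, g_ids):
    # scores: (num_query, num_gallery) raw similarity scores from the decoder
    targets = (q_ids.unsqueeze(1) == g_ids.unsqueeze(0)).float()  # 1 for same identity, 0 otherwise
    return F.binary_cross_entropy_with_logits(scores, targets)
```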
6 Experiments
6.1 Datasets
Four large-scale person re-identification datasets, CUHK03 [8], Market-1501 [34], MSMT17 [28], and RandPerson [27], which are publicly available for research purposes, are used in our experiments. The CUHK03 dataset includes 1,360 persons and 13,164 images, with 767 and 700 subjects used for training and testing, respectively, as in the CUHK03-NP protocol [35]. Besides, the “detected” subset is used, which is more challenging than the “labeled” subset. The Market-1501 dataset contains 32,668 images of 1,501 identities captured from six cameras, with 12,936 images from 751 identities for training, and 19,732 images from 750 identities for testing. MSMT17 includes 4,101 identities and 126,441 images captured from 15 cameras, with 32,621 images from 1,041 identities for training, and the remaining images from 3,010 identities for testing. RandPerson is a recently released synthetic person re-identification dataset for large-scale training towards generalization testing. It contains 8,000 identities and 1,801,816 images; a subset with 132,145 images of the 8,000 IDs is used for training.
Cross-dataset evaluation is performed on these datasets by training on the training subset of one dataset and evaluating on the test subsets of the other datasets. The exception is MSMT17, for which we further use an additional setting with all images for training, regardless of the subset splits; this is denoted by MSMT17all. All evaluations follow the single-query evaluation protocol. The Rank-1 (Top1) accuracy and mean average precision (mAP) are used as the performance evaluation metrics.
6.2 Implementation Details
The implementation of TransMatcher is built upon the official PyTorch project of QAConv-GS 3 [11], as the graph sampler (GS) proposed in this project is efficient for metric learning and quite suitable for the learning of TransMatcher. We keep most of the settings the same as QAConv-GS. Specifically, ResNet-50 [3] is used as the backbone network, with three instance normalization (IN) [23] layers further appended as in IBN-Net-b [15], following several recent studies [5, 36, 6, 38, 11]. The backbone network is pre-trained on ImageNet, with the states of the BN layers being fixed. The layer3 feature map is used, with a 3×3 neck convolution appended to produce the final feature map. The input image is resized to 384×128. The batch size is set to 64, with K=4 for the GS sampler. The network is trained with the SGD optimizer, with a learning rate of 0.0005 for the backbone network and 0.005 for the newly added layers. Both are decayed by 0.1 after 10 epochs, and 15 epochs are trained in total. The exception is RandPerson [27], for which the total number of epochs is 4 and the learning rate step size is 2, following the experience in [27, 11]. Gradient clipping is applied with T = 4 [11]. Several commonly used data augmentation methods are applied, including random flipping, cropping, occlusion, and color jittering. All experiments are run on a single NVIDIA V100 GPU.
For the proposed TransMatcher, unless otherwise indicated, d=512 and D=2048 by default, as in the original Transformer [24], and H=1 and N=3 for higher efficiency. Please refer to Section 6.6 for further parameter analysis. Besides, in practice, we find that when N decoders are used, using N − 1 encoders and feeding the ResNet feature map directly to the first decoder slightly improves the results while being more efficient, so this configuration is preferred in the implementation (c.f. Appendix).
6.3 Comparison to the State of the Art
A comparison to the state of the art (SOTA) in generalizable person re-identification is shown in Table 1. Several methods published very recently for generalizable person re-identification are compared, including OSNet [36], MuDeep [17], ADIN [31], SNR [6], CBN [38], QAConv [10], and QAConv-GS [11]. From Table 1 it can be observed that TransMatcher significantly improves the previous SOTA. For example, with Market-1501 for training, the Rank-1 and mAP are improved by 5.8% and 5.7% on CUHK03-NP, respectively, and they are improved by 6.1% and 3.4% on MSMT17, respectively. With MSMT17→Market-1501, the improvements are 5.0% for Rank-1 and 5.3% for mAP. With the synthetic dataset RandPerson for training, the improvements on Market-1501 are 3.3% for Rank-1 and 5.3% for mAP, and the gains on MSMT17 are 5.9% for Rank-1 and 3.3% for mAP.
Compared with the second-best method, QAConv-GS, which shares the same code base and training settings as the proposed TransMatcher, the results indicate that TransMatcher is a superior image matching and metric learning method for generalizable person re-identification, thanks to the effective cross-matching design in the new decoders.
3QAConv-GS project under MIT License: https://github.com/ShengcaiLiao/QAConv.
6.4 Comparison of Transformers
A comparison of different Transformers trained on MSMT17 for direct cross-dataset evaluation is shown in Table 2. For a fair comparison, they are all trained with the same settings as described in Section 6.2. Besides, H=1 for all models. ViT, the vanilla Transformer, and TransMatcher all have the same parameter settings. Though we use an NVIDIA V100 GPU with 32GB of memory, Transformer-Cat and Transformer-Cross still encounter the memory overflow problem under the same parameter settings as TransMatcher. Therefore, we have to set d=128, D=512, and N=2 for them to run, and accordingly, a smaller version of TransMatcher with the same set of parameters is also provided for comparison.
From the results shown in Table 2, it can be observed that ViT and the vanilla Transformer perform poorly when generalizing to other datasets. In contrast, the proposed TransMatcher significantly improves the performance. This confirms that simply applying Transformers to the image matching task is not effective, because they lack cross-image interaction in their designs.
Besides, we find that Transformer-Cat does not lead to improvement compared to ViT and the vanilla Transformer. It is a smaller model, though. However, Transformer-Cross does lead to notable improvements, indicating that the cross-matching of gallery and query images in Transformer decoders is potentially more effective. However, it is still not as good as the smaller version of TransMatcher. For example, on Market-1501, TransMatcher improves the Rank-1 by 11.2% and the mAP by 9.2% over the Transformer-Cross. Therefore, the cross-attention design in the original Transformers is not efficient enough for image matching, due to its focus on feature aggregation but not similarity matching. More variants and experiments of Transformers can be found in Appendix.
As for the running speed, the training times of these methods are also listed in Table 2. As can be seen, without cross-matching, ViT is the most efficient, followed by the vanilla Transformer. TransMatcher is not as efficient as ViT due to the explicit cross-matching between query and gallery images. However, it is still acceptable, thanks to the new simplified decoder. In contrast, even with a small set of parameters, Transformer-Cat and Transformer-Cross are still quite heavy to compute.
6.5 Ablation Study
The structure of the proposed TransMatcher shown in Fig. 1 is carefully ablated, with results listed in Table 3. The training is performed on MSMT17. For ease and reliable comparison, we report the average of all Rank-1 and mAP results on all test sets over four random runs. This is denoted by mAcc. We start with Dot Product + GMP + MLPHead2 (the input dimension to FC3 needs to be adapted to hw accordingly), which is the simplest and most necessary configuration. Then, by adding MLPHead1, the performance is improved by 1.38%, indicating that increasing the dimension to D, as in Transformers, is useful. Then, by including FC1 / BN1 independently, the performance gain is 0.84% / 0.88%, and by including them together, the performance can be further improved. Finally, when the prior score embedding is appended, the best performance is achieved. Interestingly, when we include a learnable positional embedding in the encoders, as in ViT, either independently or together with the prior score embedding, the performance is degraded. This indicates that mixing the position information with visual features for image matching is not useful in our design. In contrast, learning spatial-aware prior matching scores separately for score weighting is more effective. More ablation study and analysis can be found in the Appendix.
6.6 Parameter Analysis
To understand the parameter selection of the proposed TransMatcher, we train it on MSMT17 with different parameter configurations to the defaults, with the mAcc results as well as the training time shown in Fig. 2. First, the performance is gradually improved by increasing the model dimension d. However, the training time is also increased quadratically. Therefore, to provide a balance between accuracy and running speed, d=512 is selected, which is the same as in the vanilla Transformer [24].
For the feed-forward dimension D, the performance is also gradually improved when increasing the value. However, the training time is less affected, because the feed-forward operation is only applied after the dot product and GMP, where the dimension d and one spatial dimension hw are already contracted. Nevertheless, a large D will increase the memory usage. Therefore, D=2048 is selected, which is also the same as in the vanilla Transformer [24].
As for the number of layers N , the performance is also gradually improved with increasing N . However, after N=3 the performance tends to saturate, and the training time grows linearly with the increasing number of layers. Therefore, N=3 is a reasonable balance for our choice. In addition, with N = 1 there is no encoder used (for details please see Appendix), and from Fig. 2 it is clear that this is inferior, indicating that including an encoder is important. On the other hand, from the poor performance of ViT where there are only encoders, it is clear that the decoder is also important.
Finally, for the number of heads H in the encoders, it appears that larger H does not lead to improved results. Since the training time is also not affected, we simply select H=1 in the encoders, and do not implement the multi-head mechanism in the decoders.
6.7 Qualitative Analysis
With the help of the GMP layer, inspired by QAConv [10], the proposed TransMatcher is able to find the best local correspondence matches in each decoder layer. Some qualitative matching results are shown in Fig. 3 for a better understanding of TransMatcher. More examples can be found in the Appendix. The model used here is trained on the MSMT17 dataset [28], and the evaluations are done on the query subset of the Market-1501 dataset [34]. Results of both positive pairs and hard negative pairs are shown. For a clear illustration, only reliable correspondences with matching scores over a certain threshold are shown, where the threshold is determined by a false acceptance rate of 1‰ over all matches of negative pairs. Note that the local positions are coarse due to the 24×8 size of the feature map.
As can be observed from Fig. 3, the proposed method is able to find correct local correspondences for positive pairs of images, even when there are notable misalignments in scale and position, pose, viewpoint, and illumination variations, occlusions, and low-resolution blur. Besides, for hard negative pairs, the matching of TransMatcher still appears to be mostly reasonable, by linking visually similar parts or even the same person who might be incorrectly labeled.
This indicates that the proposed TransMatcher is effective in local correspondence matching, and note that it learns to do this with identity information as the only supervision. Besides, the matching capability generalizes to other datasets beyond the training set. From the illustration it can also be seen that, generally, the matching results of the first decoder layer are not as successful as those of the next two layers, and the matching with the last decoder layer appears to be the best. This indicates that both the Transformer encoders and decoders help the model match better by aggregating global similarity information.
7 Conclusion
With the study conducted in this paper, we conclude that: (1) direct applications of ViT and the vanilla Transformer are not effective for image matching and metric learning, because they lack cross-image interaction in their designs; (2) designing query-gallery concatenation in ViT does not help, while introducing query-gallery cross-attention in the vanilla Transformer leads to notable but not adequate improvements, probably because the attention mechanism in Transformers might be primarily designed for global feature aggregation, which is not naturally suitable for image matching; and (3) a new simplified decoder thus developed, which employs hard attention to cross-matching similarity scores, is more efficient and effective for image matching and metric learning. With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets with large improvements. Therefore, this study proves that Transformers can be effectively adapted for the image matching and metric learning tasks, and so other potentially useful variants will be of future interest.
Acknowledgements
The authors would like to thank Yanan Wang, who helped produce Fig. 1 in this paper, Anna Hennig, who helped proofread the paper, and all the anonymous reviewers for their valuable feedback in improving the paper.
1. What is the focus of the paper regarding person re-identification?
2. What are the strengths of the proposed TransMatcher architecture, particularly in its design and improvements?
3. Do you have any concerns or suggestions regarding the experimental setup and comparisons with other works?
4. How does the reviewer assess the clarity and reproducibility of the paper's content?
5. What are the limitations of the paper, such as the absence of comparison with DeTR and unclear aspects of the global max pooling and prior score embeddings?
Summary Of The Paper
This paper studies person re-identification using visual Transformers with query-gallery interactions. The paper first presents simple query-gallery image concatenation and a cross-attention layer for query-gallery interaction, and later presents the proposed TransMatcher architecture, which replaces the decoder self-attention by keeping only query-key matching, followed by global max pooling and a multi-layer perceptron head composed of fully connected and batch norm layers.
Review
The analysis on the standard transformer for detection is interesting. I further appreciated that it presents the naive extensions that do not lead to improvement along with the proposed model that leads to improvement.
Visual transformers are an active area of research, and most of the very recent (a few months old) papers are also cited in the related work. The experimental setup contains comparisons between the proposed approach and several earlier studies on the re-identification task. However, it would be further interesting to see how the proposed model performs on the re-identification task compared to DeTR, which is known for object detection.
It is not very clear how the global max pooling is symmetrized. Could you please provide the equations for this part and for the concatenation afterwards? What is the dimension of the output?
How are the prior score embeddings obtained? It is not clear why it is used, what is the rationale behind it?
Sharing the code will increase the reproducibility of results. Could you please share your implementation?
Line 170, 171, 173, 177: $\mathbb{R}$ instead of $R$ for the dimensions.
NIPS
Title
TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification
Abstract
Transformers have recently gained increasing attention in computer vision. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions, and the generalizability of Transformers is unknown. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images. We find that the Vision Transformer (ViT) and the vanilla Transformer with decoders are not adequate for image matching due to their lack of image-to-image attention. Thus, we further design two naive solutions, i.e. query-gallery concatenation in ViT, and query-gallery cross-attention in the vanilla Transformer. The latter improves the performance, but it is still limited. This implies that the attention mechanism in Transformers is primarily designed for global feature aggregation, which is not naturally suitable for image matching. Accordingly, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, global max pooling and a multilayer perceptron (MLP) head are applied to decode the matching result. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching. The proposed method, called TransMatcher, achieves state-of-the-art performance in generalizable person re-identification, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively, on several popular datasets. Code is available at https://github.com/ShengcaiLiao/QAConv.
1 Introduction
The Transformer [24] is a neural network based on attention mechanisms. It has shown great success in the field of natural language processing. Recently, it has also shown promising performance for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], and image segmentation [14, 26], thus gaining increasing attention in this field. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification or dense predictions, and the generalizability of Transformers is unknown. At a glance, query-key similarities are computed by dot products in the attention mechanisms of Transformers. Therefore, these models could potentially be useful for image matching. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images, with applications in generalizable person re-identification.
Attention mechanisms are used to gather global information from different locations according to query-key similarities. The vanilla Transformer [24] is composed of an encoder that employs
∗Shengcai Liao is the corresponding author.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
self-attention, and a decoder that further incorporates a cross-attention module. The difference is that the query and key are the same in the self-attention, while they are different in the cross-attention. The Vision Transformer (ViT) [7] applies a pure Transformer encoder for feature learning and image classification. While the Transformer encoder facilitates feature interaction among different locations of the same image, it cannot address the image matching problem being studied in this paper, because it does not enable interaction between different images. In the decoder, however, the cross-attention module does have the ability for cross interaction between query and the encoded memory. For example, in the decoder of the detection Transformer (DETR) [2], learnable query embeddings are designed to decode useful information in the encoded image memory for object localization. However, the query embeddings are independent from the image inputs, and so there is still no interaction between pairs of input images. Motivated by this, how about using actual image queries instead of learnable query embeddings as input to decoders?
Person re-identification is a typical image matching and metric learning problem. In a recent study called QAConv [10], it was shown that explicitly performing image matching between pairs of deep feature maps helps the generalization of the learned model. This inspires us to investigate the capability and generalizability of Transformers for image matching and metric learning between pairs of images. Since training through classification is also a popular strategy for metric learning, we start from a direct application of ViT and the vanilla Transformer with a powerful ResNet [3] backbone for person re-identification. However, this results in poor generalization to different datasets. Then, we consider formulating explicit interactions between query2 and gallery images in Transformers. Two naive solutions are thus designed. The first one uses a pure Transformer encoder, as in ViT, but concatenates the query and gallery features together as inputs, so as to enable the self-attention module to read both query and gallery content and apply the attention between them. The second design employs the vanilla Transformer, but replaces the learnable query embedding in the decoder by the ready-to-use query feature maps. This way, the query input acts as a real query from the actual retrieval inputs, rather than a learnable query which is more like a prior or a template. Accordingly, the cross-attention module in the decoder is able to gather information across query-key pairs, where the key comes from the encoded memory of gallery images.
While the first solution does not lead to improvement, the second one is successful with notable performance gain. However, compared to the state of the art in generalizable person re-identification,
2Query/gallery in person re-identification and query/key or target/memory in Transformers have very similar concepts originated from information retrieval. We use the same word query here in different contexts.
the performance of the second variant is still not satisfactory. We further consider that the attention mechanism in Transformers might be primarily for global feature aggregation, which is not naturally suitable for image matching, though the two naive solutions already enable feature interactions between query and gallery images. Therefore, to improve the effectiveness of image matching, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, inspired from QAConv [10], global max pooling (GMP) is applied, which acts as a hard attention to gather similarity values, instead of a soft attention to weight feature values. This is because, in image matching, we are more interested in matching scores than feature values. Finally, a multilayer perceptron (MLP) head maps the matching result to a similarity score for each query-gallery pair. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching.
We call the above design TransMatcher (see Fig. 1), which targets at efficient image matching and metric learning in particular. The contributions of this paper are summarized as follows.
• We investigate the possibility and generalizability of applying Transformers for image matching and metric learning, including direct applications of ViT and the vanilla Transformer, and two solutions adapted specifically for matching images through attention. This furthers our understanding of the capability and limitation of Transformers for image matching.
• According to the above, a new simplified decoder is proposed for efficient image matching, with a focus on similarity computation and mapping.
• With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively.
2 Related Work
Given pairs of input images, deep feature matching has been shown to be effective for person re-identification. Li et al. [8] proposed a novel filter pairing neural network (FPNN) to handle misalignment and occlusions in person re-identification. Ahmed et al. [1] proposed a local neighborhood matching layer to match deep feature maps of query and gallery images. Suh et al. [21] proposed a deep neural network to learn part-aligned bilinear representations for person re-identification. Shen et al. [18] proposed a Kronecker-product matching (KPM) module for matching person images in a softly aligned way. Liao and Shao [10] proposed the query adaptive convolution (QAConv) for explicit deep feature matching, which has proven effective for generalizable person re-identification. They further proposed a graph sampler (GS) for efficient deep metric learning [11].
Generalizable person re-identification has gained increasing attention in recent years. Zhou et al. [36] proposed the OSNet, and showed that this new backbone network has advantages in generalization. Jia et al. [5] applied IBN-Net-b [15] together with a feature normalization to alleviate both style and content variance across datasets to improve generalizability. Song et al. [20] proposed a domain-invariant mapping network (DIMN) and further introduced a meta-learning pipeline for effective training and generalization. Qian et al. [17] proposed a deep architecture with leader-based multi-scale attention (MuDeep), with improved generalization of the learned models. Yuan et al. [31] proposed an adversarial domain-invariant feature learning network (ADIN) to separate identity-related features from challenging variations. Jin et al. [6] proposed a style normalization and restitution module, which shows good generalizability for person re-identification. Zhuang et al. [38] proposed a camera-based batch normalization (CBN) method for domain-invariant representation learning, which utilizes unlabeled target data to adapt the BN layer in a quick and unsupervised way. Wang et al. [27] created a large-scale synthetic person dataset called RandPerson, and showed that models learned from synthesized data generalize well to real-world datasets. However, current methods are still far from satisfactory in generalization for practical person re-identification.
There are a number of attentional networks [12, 16, 13, 30, 19, 9, 29, 32, 4] proposed for person re-identification, but they focus on representation learning. More recently, Zhao et al. [33] proposed a cross-attention network for person re-identification. However, it is still applied for feature refinement, instead of the explicit image matching between gallery and probe images studied in this paper.
Transformers have recently received increasing attention for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], image segmentation [14, 26], and so on. For
example, ViT was proposed in [7], showing that a pure Transformer-based architecture is capable of effective image classification. DETR was proposed in [2], providing a successful end-to-end Transformer solution for object detection. Later, several studies, such as the Deformable DETR [37], Swin [14], and PVT [26], improved the computation of Visual Transformers and further boosted their performance. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions. A comprehensive study is still lacking on whether Transformers are effective for image matching and metric learning, and on how well their capability generalizes to unknown domains.
3 Transformers
For the vanilla Transformer [24], the core module is the multi-head attention (MHA). First, a scaled dot-product attention is defined as follows:
\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V, \tag{1}
\]
where $Q \in \mathbb{R}^{T \times d_k}$ is the query (or target) matrix, $K \in \mathbb{R}^{M \times d_k}$ is the key (or memory) matrix, $V \in \mathbb{R}^{M \times d_v}$ is the value matrix, $T$ and $M$ are the sequence lengths of the query and key, respectively, $d_k$ is the feature dimension of the query and key, and $d_v$ is the feature dimension of $V$. In visual tasks, $Q$ and $K$ are usually reshaped query and key feature maps, with $T = M = hw$, where $h$ and $w$ are the height and width of the query and key feature maps, respectively. Then, the MHA is defined as:
\[
\mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V), \tag{2}
\]
\[
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_H) W^O, \tag{3}
\]
where $W_i^Q \in \mathbb{R}^{d \times d_k}$, $W_i^K \in \mathbb{R}^{d \times d_k}$, $W_i^V \in \mathbb{R}^{d \times d_v}$, and $W^O \in \mathbb{R}^{H d_v \times d}$ are parameter matrices, and $H$ is the number of heads. Then, $Q = K = V$ in the multi-head self-attention (MHSA) in the encoders, while they are defined separately in the multi-head cross-attention (MHCA) in the decoders.
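To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the scaled dot-product attention; the function name and tensor shapes are illustrative rather than taken from any released implementation.

# Minimal sketch of Eq. (1): scaled dot-product attention.
# Q: (T, d_k), K: (M, d_k), V: (M, d_v), following the notation above.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # (T, M) query-key similarities
    weights = F.softmax(scores, dim=-1)            # soft attention over key locations
    return weights @ V                             # (T, d_v) aggregated values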
The structure of the Transformer encoder without positional encoding is shown on the left of Fig. 1. Beyond MHSA, it further appends a feed-forward layer to first increase the feature dimension from d to D, and then recover it back from D to d. Besides, the encoder can be self-stacked N times, where N is the total number of encoder layers. In ViT [7], only Transformer encoders are used, and positional encoding is further applied. In the vanilla Transformer [24], decoders with MHCA are further applied, with the query being learnable query embeddings initially, and the output of the previous decoder layer later on, and the key and value being the output of the encoder layer. The decoder can also be self-stacked N times.
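As a rough illustration, such an encoder stack without positional encoding can be assembled from PyTorch built-ins; the dimensions shown are the defaults reported later in Section 6.2, and this is only a sketch, not the authors' implementation.

# Sketch of N stacked Transformer encoder layers with feed-forward dimension D.
import torch.nn as nn

d, D, H, N = 512, 2048, 1, 3  # defaults reported in Section 6.2
encoder_layer = nn.TransformerEncoderLayer(d_model=d, nhead=H, dim_feedforward=D,
                                           batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=N)  # input: (B, hw, d)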
4 Image Matching with Transformers: Naive Solutions
While the above ViT and vanilla Transformer are able to perform image matching through black-box feature extraction and distance learning, they are not optimal for this task because they lack image-to-image interaction in their designs. Though cross-attention is employed in the Transformer decoders, in its original form the query either comes from learnable query embeddings, or from the output of the previous decoder layer.
Therefore, we adapt Transformers with two naive solutions for image matching and metric learning. Building upon a powerful ResNet [3] backbone, the first solution appends ViT, but not simply for feature extraction. Instead, a pair of query and gallery feature maps are concatenated to double the sequence length, forming a single sample for the input of ViT. Thus, both the query and key for the self-attention layer contain query image information in one half and gallery image information in the other half. Therefore, the attention computation in Eq. (1) is able to interact query and gallery inputs for image matching. This variant is denoted as Transformer-Cat.
The second solution appends the vanilla Transformer, but instead of learnable query embeddings, ResNet query features are directly input into the first decoder. This way, the cross-attention layer in the decoders is able to interact the query and gallery samples being matched. This variant is denoted as Transformer-Cross.
The structure of these two variants can be found in the Appendix. Note that these two solutions have high computational and memory costs, especially for large d, D, and N (c.f. Section 6.4).
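For illustration, the following sketch shows how a pair of flattened feature maps could be arranged as inputs for the two variants; the shapes and module names are assumptions made for the sake of the example.

# Hypothetical input arrangement for Transformer-Cat and Transformer-Cross.
import torch

hw, d = 24 * 8, 512
q_feat = torch.randn(1, hw, d)   # flattened query feature map, (B, hw, d)
g_feat = torch.randn(1, hw, d)   # flattened gallery feature map, (B, hw, d)

# Transformer-Cat: concatenate along the sequence dimension to double its length,
# so self-attention can attend across query and gallery tokens in one sample.
cat_input = torch.cat([q_feat, g_feat], dim=1)   # (B, 2*hw, d), fed to a ViT-style encoder

# Transformer-Cross: the gallery features are encoded as memory, and the query
# feature map replaces the learnable query embeddings at the first decoder, e.g.
#   memory = encoder(g_feat); out = decoder(tgt=q_feat, memory=memory)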
5 The Proposed TransMatcher
Though the above two solutions enable query-gallery interaction in the attention mechanism for image matching, they are not adequate for distance metric learning. This is because, taking a deeper look at Eq. (1) for the attention, it can be observed that, though similarity values between Q and K are computed, they are only used for softmax-based weighting to aggregate features from V. Therefore, the output of the attention is always a weighted version of V (or K), and thus cross-matching between a pair of inputs is not directly formulated.
To address this, we propose a simplified decoder, which is explicitly formulated towards similarity computation. The structure of this decoder is shown in the middle of Fig. 1. First, both gallery and query images are independently encoded by N sequential Transformer encoders after a backbone network, as shown on the left of Fig. 1. This encoding helps aggregate global information from similar body parts for the subsequent matching step. The resulting feature encodings are denoted by $Q_n \in \mathbb{R}^{hw \times d}$ and $K_n \in \mathbb{R}^{hw \times d}$, $n = 1, \ldots, N$, for the query and gallery, respectively. Then, as in Eq. (2), both the gallery and query encodings are transformed by a fully connected (FC) layer FC1:
\[
Q'_n = Q_n W_n, \quad K'_n = K_n W_n, \tag{4}
\]
where $W_n \in \mathbb{R}^{d \times d}$ is the parameter matrix for encoder-decoder layer $n$. Different from Eq. (2), we use shared FC parameters for both query and gallery, because they are exchangeable in the image matching task, and the similarity metric needs to be symmetrically defined. Then, the dot product is computed between the transformed features, as in Eq. (1):
\[
S_n = Q'_n {K'_n}^{\top}, \tag{5}
\]
where $S_n \in \mathbb{R}^{hw \times hw}$ are the similarity scores. In addition, a learnable prior score embedding $R \in \mathbb{R}^{hw \times hw}$ is designed, which defines prior matching scores between different locations of query and gallery images. Then, it is used to weight the similarity values:
\[
S'_n = S_n * \sigma(R), \tag{6}
\]
where $*$ denotes element-wise multiplication, and $\sigma$ is the sigmoid function to map the prior score embedding into weights in $[0, 1]$.
After that, a GMP layer is applied along the last dimension of $hw$ elements:
\[
S''_n = \max(S'_n, \ \text{dim}=-1). \tag{7}
\]
This way, the optimal local matching over all key locations is obtained, as in QAConv [10]. Compared to Eq. (1), the GMP here can be considered as a hard attention, but it is used for similarity matching rather than softmax-based feature weighting like in the soft attention. Note that multi-head design in MHA is not considered here (c.f. Section 6.6).
Then, after a batch normalization layer BN1, an MLP head is further appended, similar to the feed-forward layer of Transformers. It is composed of MLPHead1=(FC2, BN2, ReLU) to map the $hw$ similarity values to dimension $D$, and MLPHead2=(FC3, BN3) to map dimension $D$ to 1 as a single output score $S'''_n$.
Finally, decoder $n$ outputs a similarity score by fusing the output of the previous decoder:
\[
S''''_n = S'''_n + S''''_{n-1}, \tag{8}
\]
where $S''''_0$ is defined as 0. With $N$ stacked encoder-decoder blocks, as shown in Fig. 1, this can be considered as residual similarity learning. Note that the stack of encoder-decoder blocks in TransMatcher is different from that in the vanilla Transformer. In TransMatcher, the encoder and decoder are connected before being stacked, while in the vanilla Transformer they are stacked independently before connection. This way, the decoder of TransMatcher is able to perform cross matching with different levels of encoded features for residual similarity learning.
However, the GMP operation in Eq. (7) is not symmetric. To make TransMatcher symmetric for the query and gallery, the GMP operation in Eq. (7) can also be applied along dim=0; that is, conduct an inverse search of best matches over all query locations. Keeping other operations the same, this will result in another set of similarity scores, which are summed with the original ones after the FC3 layer. Further details can be found in the Appendix. Note that this is not reflected in Fig. 1 for simplicity of illustration.
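To make the data flow of Eqs. (4)-(8) concrete, below is a minimal PyTorch sketch of one simplified decoder layer operating on paired query and gallery encodings of shape (B, hw, d). All module and variable names are ours, and details such as the exact point where the two GMP directions are fused and the batch-wise pairing of queries and galleries follow the Appendix, so this should be read as an illustration rather than the official implementation.

# Sketch of one TransMatcher decoder layer (Eqs. (4)-(8)), with symmetric GMP.
import torch
import torch.nn as nn

class SimplifiedDecoderLayer(nn.Module):
    def __init__(self, hw, d=512, D=2048):
        super().__init__()
        self.fc1 = nn.Linear(d, d, bias=False)          # shared W_n of Eq. (4)
        self.prior = nn.Parameter(torch.zeros(hw, hw))  # prior score embedding R
        self.bn1 = nn.BatchNorm1d(hw)                   # BN1 after the GMP
        self.mlp_head1 = nn.Sequential(nn.Linear(hw, D), nn.BatchNorm1d(D), nn.ReLU())
        self.mlp_head2 = nn.Sequential(nn.Linear(D, 1), nn.BatchNorm1d(1))

    def head(self, s):
        return self.mlp_head2(self.mlp_head1(self.bn1(s)))

    def forward(self, q, k, prev_score):
        q, k = self.fc1(q), self.fc1(k)                 # Eq. (4), shared parameters
        s = q @ k.transpose(1, 2)                       # Eq. (5): (B, hw, hw) similarities
        s = s * torch.sigmoid(self.prior)               # Eq. (6): prior score weighting
        s_qk = s.max(dim=-1).values                     # Eq. (7): GMP over key locations
        s_kq = s.max(dim=-2).values                     # symmetric GMP over query locations
        score = self.head(s_qk) + self.head(s_kq)       # fuse the two matching directions
        return prev_score + score.squeeze(-1)           # Eq. (8): residual similarity

A full TransMatcher would stack N encoder-decoder blocks of this kind and feed prev_score = 0 into the first one, matching the residual similarity learning of Eq. (8).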
Finally, the TransMatcher scores for all query-gallery pairs in a batch are collected for pairwise metric learning, following the same pipeline as in QAConv-GS [11], and the same binary cross entropy loss is used as in QAConv-GS.
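As a rough sketch of this training objective, the pairwise scores of a batch can be compared against same-identity labels with a binary cross entropy loss; the hard-pair handling of QAConv-GS is omitted here, and all names and numbers are illustrative.

# Illustrative pairwise binary cross entropy loss over a batch of scores.
import torch
import torch.nn.functional as F

ids = torch.randint(0, 16, (64,))                 # stand-in identity labels of a batch
labels = (ids[:, None] == ids[None, :]).float()   # (64, 64) same-identity targets
scores = torch.randn(64, 64, requires_grad=True)  # stand-in TransMatcher pair scores
loss = F.binary_cross_entropy_with_logits(scores, labels)
loss.backward()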
6 Experiments
6.1 Datasets
Four large-scale person re-identification datasets, CUHK03 [8], Market-1501 [34], MSMT17 [28], and RandPerson [27], which are publicly available for research purposes, are used in our experiments. The CUHK03 dataset includes 1,360 persons and 13,164 images, with 767 and 700 subjects used for training and testing, respectively, as in the CUHK03-NP protocol [35]. Besides, the “detected” subset is used, which is more challenging than the “labeled” subset. The Market-1501 dataset contains 32,668 images of 1,501 identities captured from six cameras, with 12,936 images from 751 identities for training, and 19,732 images from 750 identities for testing. MSMT17 includes 4,101 identities and 126,441 images captured from 15 cameras, with 32,621 images from 1,041 identities for training, and the remaining images from 3,010 identities for testing. RandPerson is a recently released synthetic person re-identification dataset for large-scale training towards generalization testing. It contains 8,000 persons and 1,801,816 images. A subset with 132,145 images of the 8,000 IDs is used for training.
Cross-dataset evaluation is performed on these datasets by training on the training subset of one dataset, and evaluating on the test subsets of the other datasets. The exception is MSMT17, for which we further use an additional setting with all images for training, regardless of the subset splits; this setting is denoted MSMT17all. All evaluations follow the single-query evaluation protocol. The Rank-1 (Top1) accuracy and mean average precision (mAP) are used as the performance evaluation metrics.
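For reference, the following is a rough sketch of how the single-query Rank-1 accuracy and the average precision of one query could be computed; the arrays are illustrative stand-ins, and mAP is the mean of such per-query average precisions over all queries.

# Sketch of Rank-1 and average precision for a single query.
import numpy as np

sims = np.random.rand(100)                          # stand-in similarities to 100 gallery items
matches = np.zeros(100, dtype=bool)
matches[[3, 40]] = True                             # stand-in ground-truth matches
order = np.argsort(-sims)                           # gallery sorted by decreasing similarity
hits = matches[order]
rank1 = float(hits[0])                              # 1.0 if the top match is correct
ranks = np.nonzero(hits)[0] + 1                     # 1-based ranks of the correct matches
ap = float((np.cumsum(hits)[hits] / ranks).mean())  # average precision of this query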
6.2 Implementation Details
The implementation of TransMatcher is built upon the official PyTorch project of QAConv-GS 3 [11], as the graph sampler (GS) proposed in this project is efficient for metric learning and quite suitable for the learning of TransMatcher. We keep most of the settings the same as QAConv-GS. Specifically, ResNet-50 [3] is used as the backbone network, with three instance normalization (IN) [23] layers further appended as in IBN-Net-b [15], following several recent studies [5, 36, 6, 38, 11]. The backbone network is pre-trained on ImageNet, with the states of the BN layers being fixed. The layer3 feature map is used, with a 3×3 neck convolution appended to produce the final feature map. The input image is resized to 384×128. The batch size is set to 64, with K=4 for the GS sampler. The network is trained with the SGD optimizer, with a learning rate of 0.0005 for the backbone network, and 0.005 for newly added layers. They are decayed by 0.1 after 10 epochs, and 15 epochs are trained in total. The exception is RandPerson [27], for which the total number of epochs is 4 and the learning rate step size is 2, following the experience in [27, 11]. Gradient clipping is applied with T = 4 [11]. Several commonly used data augmentation methods are applied, including random flipping, cropping, occlusion, and color jittering. All experiments are run on a single NVIDIA V100 GPU.
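A minimal sketch of the two-group optimizer setup described above is given below; the module layout is a hypothetical stand-in for the actual model, and momentum is not specified in the text, so it is omitted.

# Sketch of the SGD setup with separate learning rates for the backbone and new layers.
import torch
import torch.nn as nn

model = nn.ModuleDict({
    "backbone": nn.Linear(8, 8),    # placeholder for the pre-trained ResNet-50 backbone
    "new_layers": nn.Linear(8, 8),  # placeholder for the neck conv, encoders, and decoders
})
param_groups = [
    {"params": model["backbone"].parameters(), "lr": 0.0005},
    {"params": model["new_layers"].parameters(), "lr": 0.005},
]
optimizer = torch.optim.SGD(param_groups, lr=0.0005)
# Both rates decay by 0.1 after 10 epochs; 15 epochs are trained in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)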
For the proposed TransMatcher, unless otherwise indicated, d=512 and D=2048 by default, as in the original Transformer [24], and H=1 and N=3 for higher efficiency. Please refer to Section 6.6 for further parameter analysis. Besides, in practice, we find that when N decoders are used, using N − 1 encoders, with the ResNet feature map directly paired with the first decoder, slightly improves the results while being more efficient, and is therefore preferred in the implementation (c.f. Appendix).
6.3 Comparison to the State of the Art
A comparison to the state of the art (SOTA) in generalizable person re-identification is shown in Table 1. Several methods published very recently for generalizable person re-identification are compared, including OSNet [36], MuDeep [17], ADIN [31], SNR [6], CBN [38], QAConv [10], and QAConv-GS [11]. From Table 1 it can be observed that TransMatcher significantly improves the previous SOTA. For example, with Market-1501 for training, the Rank-1 and mAP are improved by 5.8% and 5.7% on CUHK03-NP, respectively, and they are improved by 6.1% and 3.4% on MSMT17, respectively. With MSMT17→Market-1501, the improvements are 5.0% for Rank-1 and 5.3% for mAP. With the synthetic dataset RandPerson for training, the improvements on Market-1501 are 3.3% for Rank-1 and 5.3% for mAP, and the gains on MSMT17 are 5.9% for Rank-1 and 3.3% for mAP.
Compared to the second best method QAConv-GS, since it shares the same code base and training setting with the proposed TransMatcher, it indicates that TransMatcher is a superior image matching
3QAConv-GS project under MIT License: https://github.com/ShengcaiLiao/QAConv.
and metric learning method for generalizable person re-identification, thanks to the effective cross-matching design in the new decoders.
6.4 Comparison of Transformers
A comparison of different Transformers trained on MSMT17 for direct cross-dataset evaluation is shown in Table 2. For a fair comparison, they are all trained with the same settings as described in Section 6.2. Besides, H=1 for all models. ViT, the vanilla Transformer, and TransMatcher all have the same parameter settings. Though we use an NVIDIA V100 GPU with 32GB of memory, Transformer-Cat and Transformer-Cross still encounter the memory overflow problem under the same parameter settings as TransMatcher. Therefore, we have to set d=128, D=512, and N=2 for them to run, and accordingly, a smaller version of TransMatcher with the same set of parameters is also provided for comparison.
From the results shown in Table 2, it can be observed that ViT and the vanilla Transformer perform poorly in generalizing to other datasets. In contrast, the proposed TransMatcher significantly improves the performance. This confirms that simply applying Transformers for the image matching task is not effective, because they lack cross-image interaction in their designs.
Besides, we find that Transformer-Cat does not lead to improvement compared to ViT and the vanilla Transformer. It is a smaller model, though. However, Transformer-Cross does lead to notable improvements, indicating that the cross-matching of gallery and query images in Transformer decoders is potentially more effective. However, it is still not as good as the smaller version of TransMatcher. For example, on Market-1501, TransMatcher improves the Rank-1 by 11.2% and the mAP by 9.2% over Transformer-Cross. Therefore, the cross-attention design in the original Transformers is not efficient enough for image matching, due to its focus on feature aggregation rather than similarity matching. More variants and experiments of Transformers can be found in the Appendix.
As for the running speed, the training times of these methods are also listed in Table 2. As can be seen, without cross-matching, ViT is the most efficient, followed by the vanilla Transformer. TransMatcher is not as efficient as ViT due to the explicit cross-matching between query and gallery images. However, it is still acceptable, thanks to the new simplified decoder. In contrast, even with a small set of parameters, Transformer-Cat and Transformer-Cross are still quite heavy to compute.
6.5 Ablation Study
The structure of the proposed TransMatcher shown in Fig. 1 is carefully ablated, with results listed in Table 3. The training is performed on MSMT17. For easy and reliable comparison, we report the average of all Rank-1 and mAP results on all test sets over four random runs. This is denoted by mAcc. We start with Dot Product + GMP + MLPHead2 (the input dimension to FC3 needs to be adapted to hw accordingly), which is the simplest and most necessary configuration. Then, by adding MLPHead1, the performance is improved by 1.38%, indicating that increasing the dimension to D, as in Transformers, is useful. Then, by including FC1 / BN1 independently, the performance gain is 0.84% / 0.88%, and by including them together, the performance can be further improved. Finally, when the prior score embedding is appended, the best performance is achieved. Interestingly, when we include a learnable positional embedding in the encoders, as in ViT, either independently or together with the prior score embedding, the performance is degraded. This indicates that mixing the position information with visual features for image matching is not useful in our design. In contrast, learning spatial-aware prior matching scores separately for score weighting is more effective. More ablation study and analysis can be found in the Appendix.
6.6 Parameter Analysis
To understand the parameter selection of the proposed TransMatcher, we train it on MSMT17 with different parameter configurations from the defaults, with the mAcc results as well as the training time shown in Fig. 2. First, the performance is gradually improved by increasing the model dimension d. However, the training time also increases quadratically. Therefore, to provide a balance between accuracy and running speed, d=512 is selected, which is the same as in the vanilla Transformer [24].
For the feed-forward dimension D, the performance is also gradually improved when increasing the value. However, the training time is less affected, because the feed-forward operation is only applied after the dot product and GMP, where the dimension d and one spatial dimension hw are already contracted. Nevertheless, a large D will increase the memory usage. Therefore, D=2048 is selected, which is also the same as in the vanilla Transformer [24].
As for the number of layers N , the performance is also gradually improved with increasing N . However, after N=3 the performance tends to saturate, and the training time grows linearly with the increasing number of layers. Therefore, N=3 is a reasonable balance for our choice. In addition, with N = 1 there is no encoder used (for details please see Appendix), and from Fig. 2 it is clear that this is inferior, indicating that including an encoder is important. On the other hand, from the poor performance of ViT where there are only encoders, it is clear that the decoder is also important.
Finally, for the number of heads H in the encoders, it appears that larger H does not lead to improved results. Since the training time is also not affected, we simply select H=1 in the encoders, and do not implement the multi-head mechanism in the decoders.
6.7 Qualitative Analysis
With the help of the GMP layer, inspired from QAConv [10], the proposed TransMatcher is able to find the best local correspondence matches in each decoder layer. Some qualitative matching results are shown in Fig. 3 for a better understanding of TransMatcher. More examples can be found in the Appendix. The model used here is trained on the MSMT17 dataset [28], and the evaluations are done on the query subset of the Market-1501 dataset [34]. Results of both positive pairs and hard negative pairs are shown. For a clear illustration, only reliable correspondences with matching scores over a certain threshold are shown, where the threshold is determined by a false acceptance rate of 1‰ over all matches of negative pairs. Note that the local positions are coarse due to the 24× 8 size of the feature map.
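The visualization threshold mentioned above could be obtained, for example, as the score quantile below which 99.9% of all negative-pair matches fall; the following sketch uses random numbers as a stand-in for those scores.

# Sketch of deriving a threshold at a 1 permille false acceptance rate.
import numpy as np

neg_scores = np.random.randn(100000)            # stand-in for all negative-pair match scores
far = 1e-3
threshold = np.quantile(neg_scores, 1.0 - far)  # only 0.1% of negative matches exceed it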
As can be observed from Fig. 3, the proposed method is able to find correct local correspondences for positive pairs of images, even if there are notable misalignments in both scales and positions, pose, viewpoint, and illumination variations, occlusions, and low resolution blur. Besides, for hard negative pairs, the matching of TransMatcher still appears to be mostly reasonable, by linking visually similar parts or even the same person who might be incorrectly labeled.
This indicates that the proposed TransMatcher is effective in local correspondence matching, and note that it learns to do this with only the supervision of identity information. Besides, the matching capability is generalizable to other datasets beyond the training set. From the illustration it can also be seen that, generally, matching results of the first decoder layer are not as successful as those of the next two layers, and the matching with the last decoder layer appears to be the best. This indicates that both Transformer encoders and decoders help the model to match better by aggregating global similarity information.
7 Conclusion
With the study conducted in this paper, we conclude that: (1) direct applications of ViT and the vanilla Transformer are not effective for image matching and metric learning, because they lack cross-image interaction in their designs; (2) designing query-gallery concatenation in ViT does not help, while introducing query-gallery cross-attention in the vanilla Transformer leads to notable but not adequate improvements, probably because the attention mechanism in Transformers might be primarily designed for global feature aggregation, which is not naturally suitable for image matching; and (3) a new simplified decoder thus developed, which employs hard attention to cross-matching similarity scores, is more efficient and effective for image matching and metric learning. With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets with large improvements. Therefore, this study proves that Transformers can be effectively adapted for the image matching and metric learning tasks, and so other potentially useful variants will be of future interest.
Acknowledgements
The authors would like to thank Yanan Wang, who helped produce Fig. 1 in this paper, Anna Hennig, who helped proofread the paper, and all the anonymous reviewers for their valuable feedback in improving the paper.

1. What is the focus of the paper on person Re-ID?
2. What are the strengths of the proposed approach, particularly in its performance?
3. What are the weaknesses of the paper regarding the pipeline design and experimental comparisons?
4. Do you have any questions about the transformer architecture application in person Re-ID?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content after the rebuttal?
Summary Of The Paper
This paper investigates how to apply the transformer architecture to solve the image matching problem of generalised person Re-ID. The features of query/gallery images are first extracted with a ResNet model, then a transformer encoder without positional embedding is applied to the query/gallery features, respectively. A customised decoder is designed to obtain the final match score for the following pairwise metric learning. The experiments are conducted on several common Re-ID datasets and the proposed method has shown better performance than the baseline.
Review
STRENGTH
The performance on CUHK03/Market-1501/MSMT17 is superior to the baseline methods.
The idea of applying the transformer model to person Re-ID seems to be new.
The paper writing is easy to follow.
WEAKNESS
There are other attention modules, e.g. non-local or channel-wise attention, so why particularly use the transformer if no positional embedding is required?
The proposed pipeline is still a naive solution. The transformer encoder is simply used to refine the ResNet features, while the query/gallery features are combined together through a simple dot product in the decoder. I don't think the cross attention between query/gallery can be learned in this way - it is more likely the performance improvement is gained by the transformer encoder instead of the proposed decoder.
The comparison with ViT and the transformer in Table 2 is not quite fair, since the authors use a smaller transformer model (line 263-267) due to the memory overflow problem. The original settings may actually work better.
Since both person Re-ID and self-attention have the same concept "query" but with quite different meanings, the authors should make clear which query is referred to in the paper to avoid confusion.
Intuitive visualizations can be helpful to reveal the reason why the transformer can work better for person-ReID problems.
Overall, although this paper has reported higher performance than the baseline, the motivation for using the transformer is not very clear, the pipeline design is naive and lacks novelty, and some experimental results are still questionable. Considering the superior performance, I vote for a marginally below rating.
------After rebuttal------ The authors addressed part of my concerns and therefore I increase the rating to marginally above (6).
NIPS | Title
TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification
Abstract
Transformers have recently gained increasing attention in computer vision. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions, and the generalizability of Transformers is unknown. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images. We find that the Vision Transformer (ViT) and the vanilla Transformer with decoders are not adequate for image matching due to their lack of image-to-image attention. Thus, we further design two naive solutions, i.e. query-gallery concatenation in ViT, and query-gallery cross-attention in the vanilla Transformer. The latter improves the performance, but it is still limited. This implies that the attention mechanism in Transformers is primarily designed for global feature aggregation, which is not naturally suitable for image matching. Accordingly, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, global max pooling and a multilayer perceptron (MLP) head are applied to decode the matching result. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching. The proposed method, called TransMatcher, achieves state-of-the-art performance in generalizable person re-identification, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively, on several popular datasets. Code is available at https://github.com/ShengcaiLiao/QAConv.
1 Introduction
The Transformer [24] is a neural network based on attention mechanisms. It has shown great success in the field of natural language processing. Recently, it has also shown promising performance for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], and image segmentation [14, 26], thus gaining increasing attention in this field. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification or dense predictions, and the generalizability of Transformers is unknown. At a glance, query-key similarities are computed by dot products in the attention mechanisms of Transformers. Therefore, these models could potentially be useful for image matching. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images, with applications in generalizable person re-identification.
Attention mechanisms are used to gather global information from different locations according to query-key similarities. The vanilla Transformer [24] is composed of an encoder that employs
∗Shengcai Liao is the corresponding author.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
self-attention, and a decoder that further incorporates a cross-attention module. The difference is that the query and key are the same in the self-attention, while they are different in the cross-attention. The Vision Transformer (ViT) [7] applies a pure Transformer encoder for feature learning and image classification. While the Transformer encoder facilitates feature interaction among different locations of the same image, it cannot address the image matching problem being studied in this paper, because it does not enable interaction between different images. In the decoder, however, the cross-attention module does have the ability for cross interaction between query and the encoded memory. For example, in the decoder of the detection Transformer (DETR) [2], learnable query embeddings are designed to decode useful information in the encoded image memory for object localization. However, the query embeddings are independent from the image inputs, and so there is still no interaction between pairs of input images. Motivated by this, how about using actual image queries instead of learnable query embeddings as input to decoders?
Person re-identification is a typical image matching and metric learning problem. In a recent study called QAConv [10], it was shown that explicitly performing image matching between pairs of deep feature maps helps the generalization of the learned model. This inspires us to investigate the capability and generalizability of Transformers for image matching and metric learning between pairs of images. Since training through classification is also a popular strategy for metric learning, we start from a direct application of ViT and the vanilla Transformer with a powerful ResNet [3] backbone for person re-identification. However, this results in poor generalization to different datasets. Then, we consider formulating explicit interactions between query2 and gallery images in Transformers. Two naive solutions are thus designed. The first one uses a pure Transformer encoder, as in ViT, but concatenates the query and gallery features together as inputs, so as to enable the self-attention module to read both query and gallery content and apply the attention between them. The second design employs the vanilla Transformer, but replaces the learnable query embedding in the decoder by the ready-to-use query feature maps. This way, the query input acts as a real query from the actual retrieval inputs, rather than a learnable query which is more like a prior or a template. Accordingly, the cross-attention module in the decoder is able to gather information across query-key pairs, where the key comes from the encoded memory of gallery images.
While the first solution does not lead to improvement, the second one is successful with notable performance gain. However, compared to the state of the art in generalizable person re-identification,
2Query/gallery in person re-identification and query/key or target/memory in Transformers have very similar concepts originated from information retrieval. We use the same word query here in different contexts.
the performance of the second variant is still not satisfactory. We further consider that the attention mechanism in Transformers might be primarily for global feature aggregation, which is not naturally suitable for image matching, though the two naive solutions already enable feature interactions between query and gallery images. Therefore, to improve the effectiveness of image matching, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, inspired from QAConv [10], global max pooling (GMP) is applied, which acts as a hard attention to gather similarity values, instead of a soft attention to weight feature values. This is because, in image matching, we are more interested in matching scores than feature values. Finally, a multilayer perceptron (MLP) head maps the matching result to a similarity score for each query-gallery pair. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching.
We call the above design TransMatcher (see Fig. 1), which targets at efficient image matching and metric learning in particular. The contributions of this paper are summarized as follows.
• We investigate the possibility and generalizability of applying Transformers for image matching and metric learning, including direct applications of ViT and the vanilla Transformer, and two solutions adapted specifically for matching images through attention. This furthers our understanding of the capability and limitation of Transformers for image matching.
• According to the above, a new simplified decoder is proposed for efficient image matching, with a focus on similarity computation and mapping.
• With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively.
2 Related Work
Given pairs of input images, deep feature matching has been shown to be effective for person re-identification. Li et al. [8] proposed a novel filter pairing neural network (FPNN) to handle misalignment and occlusions in person re-identification. Ahmed et al. [1] proposed a local neighborhood matching layer to match deep feature maps of query and gallery images. Suh et al. [21] proposed a deep neural network to learn part-aligned bilinear representations for person re-identification. Shen et al. [18] proposed a Kronecker-product matching (KPM) module for matching person images in a softly aligned way. Liao and Shao [10] proposed the query adaptive convolution (QAConv) for explicit deep feature matching, which is proved to be effective for generalizable person re-identification. They further proposed a graph sampler (GS) for efficient deep metric learning [11].
Generalizable person re-identification has gained increasing attention in recent years. Zhou et al. [36] proposed the OSNet, and showed that this new backbone network has advantages in generalization. Jia et al. [5] applied IBN-Net-b [15] together with a feature normalization to alleviate both style and content variance across datasets to improve generalizability. Song et al. [20] proposed a domaininvariant mapping network (DIMN) and further introduced a meta-learning pipeline for effective training and generalization. Qian et al. [17] proposed a deep architecture with leader-based multiscale attention (MuDeep), with improved generalization of the learned models. Yuan et al. [31] proposed an adversarial domain-invariant feature learning network (ADIN) to separate identity-related features from challenging variations. Jin et al.[6] proposed a style normalization and restitution module, which shows good generalizability for person re-identification. Zhuang et al. [38] proposed a camera-based batch normalization (CBN) method for domain-invariant representation learning, which utilizes unlabeled target data to adapt the BN layer in a quick and unsupervised way. Wang et al. [27] created a large-scale synthetic person dataset called RandPerson, and showed that models learned from synthesized data generalize well to real-world datasets. However, current methods are still far from satisfactory in generalization for practical person re-identification.
There are a number of attentional networks [12, 16, 13, 30, 19, 9, 29, 32, 4] proposed for person re-identification, but focusing on representation learning. More recently, Zhao et al. [33] proposed a cross-attention network for person re-identificaiton. However, it is still applied for feature refinement, instead of explicit image matching between gallery and probe images studied in this paper.
Transformers have recently received increasing attention for computer vision tasks, including image classification [7, 14], object detection [2, 37, 14, 26], image segmentation [14, 26], and so on. For
example, ViT was proposed in [7], showing that a pure Transformer-based architecture is capable of effective image classification. DETR was proposed in [2], providing a successful end-to-end Transformer solution for object detection. Later, several studies, such as the Deformable DETR [37], Swin [14], and PVT [26], improved the computation of Visual Transformers and further boosted their performance. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions. There lacks a comprehensive study on whether Transformers are effective for image matching and metric learning and how its capability is in generalizing to unknown domains.
3 Transformers
For the vanilla Transformer [24], the core module is the multi-head attention (MHA). First, a scaled dot-product attention is defined as follows:
Attention(Q,K, V ) = softmax( QKT√ dk )V, (1)
where Q ∈ RT×dk is the query (or target) matrix, K ∈ RM×dk is the key (or memory) matrix, V ∈ RM×dv is the value matrix, T and M are the sequence lengths of the query and key, respectively, dk is the feature dimension of the query and key, and dv is the feature dimension of V . In visual tasks, Q and K are usually reshaped query and key feature maps, with T =M = hw, where h and w are the height and width of the query and key feature maps, respectively. Then, the MHA is defined as:
headi = Attention(QW Q i ,KW K i , V W V i ), (2)
MultiHead(Q,K, V ) = Concat(head1, . . . , headH)WO, (3) where WQi ∈ Rd×dk , WKi ∈ Rd×dk , WVi ∈ Rd×dv , and WO ∈ Rhdv×d are parameter matrices, and H is the number of heads. Then, Q = K = V in the multi-head self-attention (MHSA) in the encoders, while they are defined separately in the multi-head cross-attention (MHCA) in the decoders.
The structure of the Transformer encoder without positional encoding is shown on the left of Fig. 1. Beyond MHSA, it further appends a feed-forward layer to first increase the feature dimension from d to D, and then recover it back from D to d. Besides, the encoder can be self-stacked N times, where N is the total number of encoder layers. In ViT [7], only Transformer encoders are used, and positional encoding is further applied. In the vanilla Transformer [24], decoders with MHCA are further applied, with the query being learnable query embeddings initially, and the output of the previous decoder layer later on, and the key and value being the output of the encoder layer. The decoder can also be self-stacked N times.
4 Image Matching with Transformers: Naive Solutions
While the above ViT and vanilla Transformer are able to perform image matching through black-box feature extraction and distance learning, they are not optimal for this task because they lack image-toimage interaction in their designs. Though cross-attention is employed in the Transformer decoders, in its original form the query either comes from learnable query embeddings, or from the output of the previous decoder layer.
Therefore, we adapt Transformers with two naive solutions for image matching and metric learning. Building upon a powerful ResNet [3] backbone, the first solution appends ViT, but not simply for feature extraction. Instead, a pair of query and gallery feature maps are concatenated to double the sequence length, forming a single sample for the input of ViT. Thus, both the query and key for the self-attention layer contain query image information in one half and gallery image information in the other half. Therefore, the attention computation in Eq. (1) is able to interact query and gallery inputs for image matching. This variant is denoted as Transformer-Cat.
The second solution appends the vanilla Transformer, but instead of learnable query embeddings, ResNet query features are directly input into the first decoder. This way, the cross-attention layer in the decoders is able to interact the query and gallery samples being matched. This variant is denoted as Transformer-Cross.
The structure of these two variants can be found in the Appendix. Note that these two solutions have high computational and memory costs, especially for large d, D, and N (c.f. Section 6.4).
5 The Proposed TransMatcher
Though the above two solutions enable query-gallery interaction in the attention mechanism for image matching, they are not adequate for distance metric learning. This is because, taking a deeper look at Eq. (1) for the attention, it can be observed that, though similarity values between Q and K are computed, they are only used for softmax-based weighting to aggregate features from V . Therefore, the output of the attention is always a weighted version of V (orK), and thus cross-matching between a pair of inputs is not directly formulated.
To address this, we propose a simplified decoder, which is explicitly formulated towards similarity computation. The structure of this decoder is shown in the middle of Fig. 1. First, both gallery and query images are independently encoded by N sequential Transformer encoders after a backbone network, as shown on the left of Fig. 1. This encoding helps aggregating global information from similar body parts for the subsequent matching step. The resulting feature encodings are denoted by Qn ∈ Rhw×d and Kn ∈ Rhw×d, n = 1, . . . , N , for the query and gallery, respectively. Then, as in Eq. (2), both the gallery and query encodings are transformed by a fully connected (FC) layer FC1:
Q′n = QnWn,K ′ n = KnWn, (4)
where Wn ∈ Rd×d is the parameter matrix for encoder-decoder layer n. Different from Eq. (2), we use shared FC parameters for both query and gallery, because they are exchangeable in the image matching task, and the similarity metric needs to be symmetrically defined. Then, the dot product is computed between the transformed features, as in Eq. (1):
Sn = Q ′ nK ′ n T , (5)
where Sn ∈ Rhw×hw are the similarity scores. In addition, a learnable prior score embedding R ∈ Rhw×hw is designed, which defines prior matching scores between different locations of query and gallery images. Then, it is used to weight the similarity values: S′n = Sn ∗ σ(R), (6) where ∗ denotes element-wise multiplication, and σ is the sigmoid function to map the prior score embedding into weights in [0, 1].
After that, a GMP layer is applied along the last dimension of hw elements: S′′n = max(S′n, dim=-1). (7) This way, the optimal local matching over all key locations is obtained, as in QAConv [10]. Compared to Eq. (1), the GMP here can be considered as a hard attention, but it is used for similarity matching rather than softmax-based feature weighting like in the soft attention. Note that multi-head design in MHA is not considered here (c.f. Section 6.6).
Then, after a batch normalization layer BN1, an MLP head is further appended, similar to the feedforward layer of Transformers. It is composed of MLPHead1=(FC2, BN2, ReLU) to map the hw similarity values to dimension D, and MLPHead2=(FC3, BN3) to map dimension D to 1 as a single output score S′′′n.
Finally, decoder n outputs a similarity score by fusing the output of the previous decoder: S′′′′n = S ′′′ n + S ′′′′ n−1, (8) where S′′′′0 is defined as 0. With N stacked encoder-decoder blocks, as shown in Fig. 1, this can be considered as residual similarity learning. Note that the stack of encoder-decoder blocks in TransMatcher is different from that in the vanilla Transformer. In TransMatcher, the encoder and decoder are connected before being stacked, while in the vanilla Transformer they are stacked independently before connection. This way, the decoder of TransMatcher is able to perform cross matching with different levels of encoded features for residual similarity learning.
However, the GMP operation in Eq. (7) is not symmetric. To make TransMatcher symmetric for the query and gallery, the GMP operation in Eq. (7) can also be applied along dim=0; that is, conduct an inverse search of best matches over all query locations. Keeping other operations the same, this will result in another set of similarity scores, which are summed with the original ones after the FC3 layer. Further details can be found in the Appendix. Note that this is not reflected in Fig. 1 for simplicity of illustration.
Finally, the outputs of TransMatcher scores for all query-gallery pairs in a batch are collected for pairwise metric learning following the same pipeline in QAConv-GS [11], and the same binary cross entropy loss is used as in the QAConv-GS.
6 Experiments
6.1 Datasets
Four large-scale person re-identification datasets, CUHK03 [8], Market-1501 [34], MSMT17 [28], and RandPerson [27], which are publicly available for research purpose, are used in our experiments. The CUHK03 dataset includes 1,360 persons and 13,164 images,with 767 and 700 subjects used for training and testing, respectively, as in the CUHK03-NP protocol [35]. Besides, the “detected” subset is used, which is more challenging than the “labeled” subset. The Market-1501 dataset contains 32,668 images of 1,501 identities captured from six cameras, with 12,936 images from 751 identities for training, and 19,732 images from 750 identities for testing.MSMT17 includes 4,101 identities and 126,441 images captured from 15 cameras, with 32,621 images from 1,041 identities for training, and the remaining images from 3,010 identities for testing. RandPerson is a recently released synthetic person re-identification dataset for large-scale training towards generalization testing. It is with 8,000 persons and 1,801,816 images. A subset with 132,145 images of the 8,000 IDs is used for training.
Cross-dataset evaluation is performed on these datasets by training on the training subset of one dataset, and evaluating on the test subsets of other datasets. Except that for MSMT17 we further use an additional setting with all images for training, regardless of the subset splits. This is denoted by MSMT17all. All evaluations follow the single-query evaluation protocol. The Rank-1 (Top1) accuracy and mean average precision (mAP) are used as the performance evaluation metrics.
6.2 Implementation Details
The implementation of TransMatcher is built upon the official PyTorch project of QAConv-GS 3 [11], as the graph sampler (GS) proposed in this project is efficient for metric learning and quite suitable for the learning of TransMatcher. We keep most of the settings the same as QAConv-GS. Specifically, ResNet-50 [3] is used as the backbone network, with three instance normalization (IN) [23] layers further appended as in IBN-Net-b [15], following several recent studies [5, 36, 6, 38, 11]. The backbone network is pre-trained on ImageNet, with the states of the BN layers being fixed. The layer3 feature map is used, with a 3×3 neck convolution appended to produce the final feature map. The input image is resized to 384× 128. The batch size is set to 64, with K=4 for the GS sampler. The network is trained with the SGD optimizer, with a learning rate of 0.0005 for the backbone network, and 0.005 for newly added layers. They are decayed by 0.1 after 10 epochs, and 15 epochs are trained in total. Except that for RandPerson [27] the total number of epochs is 4, and the learning rate step size is 2, according to the experiences in [27, 11]. Gradient clipping is applied with T = 4 [11]. Several commonly used data augmentation methods are applied, including random flipping, cropping, occlusion, and color jittering. All experiments are run on a single NVIDIA V100 GPU.
For the proposed TransMatcher, unless otherwise indicated, d=512 and D=2048 by default as in the original Transformer [24], and H=1 and N=3 for higher efficiency. Please refer to Section 6.6 for further parameter analysis. Besides, in practice, we find that when N decoders are used, using N − 1 encoders together with the ResNet feature map directly pairing the first decoder slightly improves the results while being more efficient, which is preferred in the implementation (c.f. Appendix).
6.3 Comparison to the State of the Art
A comparison to the state of the art (SOTA) in generalizable person re-identification is shown in Table 1. Several methods published very recently for generalizable person re-identification are compared, including OSNet [36], MuDeep [17], ADIN [31], SNR [6], CBN [38], QAConv [10], and QAConv-GS [11]. From Table 1 it can be observed that TransMatcher significantly improves the previous SOTA. For example, with Market-1501 for training, the Rank-1 and mAP are improved by 5.8% and 5.7% on CUHK03-NP, respectively, and they are improved by 6.1% and 3.4% on MSMT17, respectively. With MSMT17→Market-1501, the improvements are 5.0% for Rank-1 and 5.3% for mAP. With the synthetic dataset RandPerson for training, the improvements on Market-1501 are 3.3% for Rank-1 and 5.3% for mAP, and the gains on MSMT17 are 5.9% for Rank-1 and 3.3% for mAP.
Compared to the second best method QAConv-GS, since it shares the same code base and training setting with the proposed TransMatcher, it indicates that TransMatcher is a superior image matching
3QAConv-GS project under MIT License: https://github.com/ShengcaiLiao/QAConv.
and metric learning method for generalizable person re-identification, thanks to the effective crossmatching design in the new decoders.
6.4 Comparison of Transformers
A comparison of different Transformers trained on MSMT17 for direct cross-dataset evaluation is shown in Table 2. For a fair comparison, they are all trained with the same settings as described in Section 6.2. Besides, H=1 for all models. ViT, the vanilla Transformer, and TransMatcher all have the same parameter settings. Though we use an NVIDIA V100 GPU with 32GB of memory, Transformer-Cat and Transformer-Cross still encounter the memory overflow problem under the same parameter settings as TransMatcher. Therefore, we have to set d=128, D=512, and N=2 for them to run, and accordingly, a smaller version of TransMatcher with the same set of parameters is also provided for comparison.
From the results shown in Table 2, it can be observed that ViT and the vanilla Transformer perform poor in generalizing to other datasets. In contrast, the proposed TransMatcher significantly improves the performance. This confirms that simply applying Transformers for the image matching task is not effective, because they lack cross-image interaction in their designs.
Besides, we find that Transformer-Cat does not lead to improvement compared to ViT and the vanilla Transformer. It is a smaller model, though. However, Transformer-Cross does lead to notable improvements, indicating that the cross-matching of gallery and query images in Transformer decoders is potentially more effective. However, it is still not as good as the smaller version of TransMatcher. For example, on Market-1501, TransMatcher improves the Rank-1 by 11.2% and the mAP by 9.2% over the Transformer-Cross. Therefore, the cross-attention design in the original Transformers is not efficient enough for image matching, due to its focus on feature aggregation but not similarity matching. More variants and experiments of Transformers can be found in Appendix.
As for the running speed, the training times of these methods are also listed in Table 2. As can be seen, without cross-matching, ViT is the most efficient, followed by the vanilla Transformer. TransMatcher is not as efficient as ViT due to the explicit cross-matching between query and gallery images. However, it is still acceptable, thanks to the new simplified decoder. In contrast, even with a small set of parameters, Transformer-Cat and Transformer-Cross are still quite heavy to compute.
6.5 Ablation Study
The structure of the proposed TransMatcher shown in Fig. 1 is carefully ablated, with results listed in Table 3. The training is performed on MSMT17. For ease and reliable comparison, we report the average of all Rank-1 and mAP results on all test sets over four random runs. This is denoted by mAcc. We start with Dot Product + GMP + MLPHead2 (the input dimension to FC3 needs to be adapted to hw accordingly), which is the simplest and most necessary configuration. Then, by adding MLPHead1, the performance is improved by 1.38%, indicating that increasing the dimension to D, as in Transformers, is useful. Then, by including FC1 / BN1 independently, the performance gain is 0.84% / 0.88%, and by including them together, the performance can be further improved. Finally, when the prior score embedding is appended, the best performance is achieved. Interestingly, when we include a learnable positional embedding in the encoders, as in ViT, either independently or together with the prior score embedding, the performance is degraded. This indicates that mixing the position information with visual features for image matching is not useful in our design. In contrast, learning spatial-aware prior matching scores separately for score weighting is more effective. More ablation study and analysis can be found in the Appendix.
6.6 Parameter Analysis
To understand the parameter selection of the proposed TransMatcher, we train it on MSMT17 with different parameter configurations to the defaults, with the mAcc results as well as the training time shown in Fig. 2. First, the performance is gradually improved by increasing the model dimension d. However, the training time is also increased quadratically. Therefore, to provide a balance between accuracy and running speed, d=512 is selected, which is the same as in the vanilla Transformer [24].
For the feed-forward dimension D, the performance also gradually improves with larger values. However, the training time is less affected, because the feed-forward operation is only applied after the dot product and GMP, where the dimension d and one spatial dimension hw have already been contracted. Nevertheless, a large D increases the memory usage. Therefore, D=2048 is selected, which is also the same as in the vanilla Transformer [24].
As for the number of layers N , the performance is also gradually improved with increasing N . However, after N=3 the performance tends to saturate, and the training time grows linearly with the increasing number of layers. Therefore, N=3 is a reasonable balance for our choice. In addition, with N = 1 there is no encoder used (for details please see Appendix), and from Fig. 2 it is clear that this is inferior, indicating that including an encoder is important. On the other hand, from the poor performance of ViT where there are only encoders, it is clear that the decoder is also important.
Finally, for the number of heads H in the encoders, it appears that larger H does not lead to improved results. Since the training time is also not affected, we simply select H=1 in the encoders, and do not implement the multi-head mechanism in the decoders.
6.7 Qualitative Analysis
With the help of the GMP layer, inspired by QAConv [10], the proposed TransMatcher is able to find the best local correspondence matches in each decoder layer. Some qualitative matching results are shown in Fig. 3 for a better understanding of TransMatcher. More examples can be found in the Appendix. The model used here is trained on the MSMT17 dataset [28], and the evaluations are done on the query subset of the Market-1501 dataset [34]. Results of both positive pairs and hard negative pairs are shown. For a clear illustration, only reliable correspondences with matching scores over a certain threshold are shown, where the threshold is determined by a false acceptance rate of 1‰ over all matches of negative pairs. Note that the local positions are coarse due to the 24×8 size of the feature map.
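As a small illustration of how such a threshold could be obtained, the following NumPy sketch computes the score cut-off corresponding to a given false acceptance rate over negative-pair matches; the variable names are illustrative and not from the released code.

```python
import numpy as np

def far_threshold(negative_scores: np.ndarray, far: float = 1e-3) -> float:
    """Score threshold at which the false acceptance rate over negative-pair
    matches equals `far` (e.g., 1e-3 for 1 per mille)."""
    # Accepting any score above the (1 - far)-quantile of negative scores
    # lets through exactly a fraction `far` of the negative matches.
    return float(np.quantile(negative_scores, 1.0 - far))

# Example usage (hypothetical arrays):
# thr = far_threshold(all_negative_pair_scores, far=1e-3)
# reliable = correspondence_scores > thr
```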
As can be observed from Fig. 3, the proposed method is able to find correct local correspondences for positive pairs of images, even with notable misalignments in scale and position, pose, viewpoint, and illumination variations, occlusions, and low-resolution blur. Besides, for hard negative pairs, the matching of TransMatcher still appears mostly reasonable, as it links visually similar parts or even the same person who might be incorrectly labeled.
This indicates that the proposed TransMatcher is effective in local correspondence matching, and note that it learns to do this with only identity labels as supervision. Besides, the matching capability generalizes to other datasets beyond the training set. From the illustration it can also be seen that, generally, the matching results of the first decoder layer are not as successful as those of the next two layers, and the matching in the last decoder layer appears to be the best. This indicates that both Transformer encoders and decoders help the model match better by aggregating global similarity information.
7 Conclusion
With the study conducted in this paper, we conclude that: (1) direct applications of ViT and the vanilla Transformer are not effective for image matching and metric learning, because they lack cross-image interaction in their designs; (2) designing query-gallery concatenation in ViT does not help, while introducing query-gallery cross-attention in the vanilla Transformer leads to notable but not adequate improvements, probably because the attention mechanism in Transformers is primarily designed for global feature aggregation, which is not naturally suitable for image matching; and (3) a new simplified decoder developed in this work, which employs hard attention to cross-matching similarity scores, is more efficient and effective for image matching and metric learning. With generalizable person re-identification experiments, the proposed TransMatcher is shown to achieve state-of-the-art performance on several popular datasets with large improvements. Therefore, this study proves that Transformers can be effectively adapted for the image matching and metric learning tasks, and other potentially useful variants will be of future interest.
Acknowledgements
The authors would like to thank Yanan Wang, who helped produce Fig. 1 in this paper, Anna Hennig, who helped proofread the paper, and all the anonymous reviewers for their valuable feedback in improving the paper. | 1. What is the focus of the paper on person re-identification?
2. What are the strengths of the proposed Transformer-based image matching approach?
3. What are the weaknesses of the paper regarding its contributions and limitations?
4. How does the reviewer assess the idea and the assumption behind the proposed method?
5. What are the convincing aspects of the ablation studies and experiments presented in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a Transformer-Based Deep Image Matching approach for generalizable person re-identification. It adopts two simple solutions based on the transformer to achieve pair image matching. The idea is simple and easy to follow.
Review
This paper proposes a Transformer-based image matching to achieve generalizable person re-identification.
This idea is based on the assumption that local feature matching helps to improve the generalization ability of models. I agree with this conclusion. The cross-image attention proposed in this paper indeed makes sense for image-to-image matching. The ablation studies and experiments are convincing and demonstrate the improvement of the proposed TransMatcher.
However, the authors only use the transformer to achieve the local matching, therefore, the contribution is limited. |
NIPS | Title
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks
Abstract
The lottery ticket hypothesis (LTH) [20] states that learning on a properly pruned network (the winning ticket) improves test accuracy over the original unpruned network. Although LTH has been justified empirically in a broad range of deep neural network (DNN) involved applications like computer vision and natural language processing, the theoretical validation of the improved generalization of a winning ticket remains elusive. To the best of our knowledge, our work, for the first time, characterizes the performance of training a pruned neural network by analyzing the geometric structure of the objective function and the sample complexity to achieve zero generalization error. We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket. Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer. With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved generalization of the winning ticket. Our theoretical results are acquired from learning a pruned neural network of one hidden layer, while experimental results are further provided to justify the implications in pruning multi-layer neural networks.
1 Introduction
Neural network pruning can reduce the computational cost of model training and inference significantly and potentially lessen the chance of overfitting [33, 26, 15, 25, 28, 51, 58, 41]. The recent Lottery Ticket Hypothesis (LTH) [20] claims that a randomly initialized dense neural network always contains a so-called “winning ticket,” which is a sub-network bundled with the corresponding initialization, such that when trained in isolation, this winning ticket can achieve at least the same testing accuracy as that of the original network by running at most the same amount of training time. This so-called “improved generalization of winning tickets” is verified empirically in [20]. LTH has attracted a significant amount of recent research interest [45, 70, 39]. Despite the empirical success [19, 63, 55, 11], the theoretical justification of winning tickets remains elusive except for a few recent works. [39] provides the first theoretical evidence that within a randomly initialized neural network, there exists a good sub-network that can achieve the same test performance as the original network. Meanwhile, recent work [42] trains the neural network with an ℓ1 regularization term to obtain a relatively sparse neural network, which performs better numerically.
However, the theoretical foundation of network pruning is limited. The existing theoretical works usually focus on finding a sub-network that achieves a tolerable loss in either expressive power or training accuracy, compared with the original dense network [2, 71, 61, 43, 4, 3, 35, 5, 59]. To the best of our knowledge, there exists no theoretical support for the improved generalization achieved by winning tickets, i.e., pruned networks with faster convergence and better test accuracy.
Contributions: This paper provides the first systematic analysis of learning pruned neural networks with a finite number of training samples in the oracle-learner setup, where the training data are generated by an unknown neural network, the oracle, and another network, the learner, is trained on the dataset. Our analytical results also provide a justification of the LTH from the perspective of the sample complexity. In particular, we provide the first theoretical justification of the improved generalization of winning tickets. Specific contributions include:
1. Pruned neural network learning via accelerated gradient descent (AGD): We propose an AGD algorithm with tensor initialization to learn the pruned model from training samples. Our algorithm converges to the oracle model linearly, which has guaranteed generalization.
2. First sample complexity analysis for pruned networks: We characterize the required number of samples for successful convergence, termed as the sample complexity. Our sample complexity bound depends linearly on the number of the non-pruned weights and is a significant reduction from directly applying conventional complexity bounds in [69, 66, 67].
3. Characterization of the benign optimization landscape of pruned networks: We show analytically that the empirical risk function has an enlarged convex region for a pruned network, justifying the importance of a good sub-network (i.e., the winning ticket).
4. Characterization of the improved generalization of winning tickets: We show that gradientdescent methods converge faster to the oracle model when the neural network is properly pruned, or equivalently, learning on a pruned network returns a model closer to the oracle model with the same number of iterations, indicating the improved generalization of winning tickets.
Notations. Vectors are bold lowercase; matrices and tensors are bold uppercase. Scalars are in normal font, and sets are in calligraphy and blackboard bold font. I denotes the identity matrix. N and R denote the sets of natural numbers and real numbers, respectively. ‖z‖ denotes the ℓ2-norm of a vector z, and ‖Z‖2, ‖Z‖F and ‖Z‖∞ denote the spectral norm, Frobenius norm and the maximum value of a matrix Z, respectively. [Z] stands for the set {1, 2, · · · , Z} for any number Z ∈ N. In addition, f(r) = O(g(r)) (or f(r) = Ω(g(r))) if f ≤ C · g (or f ≥ C · g) for some constant C > 0 when r is large enough. f(r) = Θ(g(r)) if both f(r) = O(g(r)) and f(r) = Ω(g(r)) hold, i.e., c · g ≤ f ≤ C · g for some constants 0 ≤ c ≤ C when r is large enough.
1.1 Related Work
Network pruning. Network pruning methods seek a compressed model while maintaining the expressive power. Numerical experiments have shown that over 90% of the parameters can be pruned without a significant performance loss [10]. Examples of pruning methods include irregular weight pruning [25], structured weight pruning [57], neuron-based pruning [28], and projecting the weights to a low-rank subspace [13].
Winning tickets. [20] employs an Iterative Magnitude Pruning (IMP) algorithm to obtain the proper sub-network and initialization. IMP and its variations [22, 46] succeed in deeper networks like Residual Networks (Resnet)-50 and Bidirectional Encoder Representations from Transformers (BERT) network [11]. [21] shows that IMP succeeds in finding the “winning ticket” if the ticket is
stable to stochastic gradient descent noise. In parallel, [36] shows numerically that the “winning ticket” initialization does not improve over a random initialization once the correct sub-networks are found, suggesting that the benefit of “winning ticket” mainly comes from the sub-network structures. [18] analyzes the sample complexity of IMP from the perspective of recovering a sparse vector in a linear model rather than learning neural networks.
Feature sparsity. High-dimensional data often contains redundant features, and only a subset of the features is used in training [6, 14, 27, 60, 68]. Conventional approaches like wrapper and filter methods score the importance of each feature in a certain way and select the ones with highest scores [24]. Optimization-based methods add variants of the `0 norm as a regularization to promote feature sparsity [68]. Different from network pruning where the feature dimension still remains high during training, the feature dimension is significantly reduced in training when promoting feature sparsity.
Over-parameterized model. When the number of weights in a neural network is much larger than the number of training samples, the landscape of the objective function of the learning problem has no spurious local minima, and first-order algorithms converge to one of the global optima [37, 44, 64, 50, 9, 49, 38]. However, the global optima is not guaranteed to generalize well on testing data [62, 64].
Generalization analyses. The existing generalization analyses mostly fall within three categories. One line of research employs the Mean Field approach to model the training process by a differential equation assuming infinite network width and infinitesimal training step size [12, 40, 56]. Another approach is the neural tangent kernel (NTK) [30], which requires strong and probably unpractical over-parameterization such that the nonlinear neural network model behaves as its linearization around the initialization [1, 17, 72, 73]. The third line of works follow the oracle-learner setup, where the data are generated by an unknown oracle model, and the learning objective is to estimate the oracle model, which has a generalization guarantee on testing data. However, the objective function has intractably many spurious local minima even for one-hidden-layer neural networks [48, 47, 64]. Assuming an infinite number of training samples, [8, 16, 52] develop learning methods to estimate the oracle model. [23, 69, 66, 67] extend to the practical case of a finite number of samples and characterize the sample complexity for recovering the oracle model. Because the analysis complexity explodes when the number of hidden layers increases, all the analytical results about estimating the oracle model are limited to one-hidden-layer neural networks, and the input distribution is often assumed to be the standard Gaussian distribution.
2 Problem Formulation
In an oracle-learner model, given any input x ∈ R^d, the corresponding output y is generated by a pruned one-hidden-layer neural network, called the oracle, as shown in Figure 1. The oracle network is equipped with K neurons, where the j-th neuron is connected to an arbitrary set of r*_j (r*_j ≤ d) input features. Let W* = [w*_1, · · · , w*_K] ∈ R^{d×K} denote all the weights (pruned ones are represented by zero). The number of non-zero entries in w*_j is at most r*_j. The oracle network is not unique because permuting neurons together with the corresponding weights does not change the output. Therefore, the output label y obtained by the oracle network satisfies¹
y = \frac{1}{K} \sum_{j=1}^{K} \phi(w_j^{*\top} x) + \xi := g(x; W^*) + \xi = g(x; W^*P) + \xi, \qquad (1)
where ξ is arbitrary unknown additive noise bounded by some constant |ξ|, φ is the rectified linear unit (ReLU) activation function with φ(z) = max{z, 0}, and P ∈ {0, 1}^{K×K} is any permutation matrix. M* is a mask matrix for the oracle network, such that M*_{j,i} equals 1 if the weight w*_{j,i} is not pruned, and 0 otherwise. Then, M* is an indicator matrix for the non-zero entries of W* with M* ⊙ W* = W*, where ⊙ is entry-wise multiplication. Based on N pairs of training samples D = {x_n, y_n}_{n=1}^N generated by the oracle, we train a learner network equipped with the same number of neurons as the oracle network. However, the j-th neuron in the learner network is connected to r_j input features rather than r*_j. Let r_min, r_max, and r_ave denote the minimum, maximum, and average value of {r_j}_{j=1}^K, respectively. Let M denote the
1It is extendable to binary classification, and the output is generated by Prob ( yn = 1|xn ) = g(xn;W ∗).
mask matrix with respect to the learner network, and wj is the j-th column ofW . The empirical risk function is defined as
\hat{f}_D(W) = \frac{1}{2N} \sum_{n=1}^{N} \Big( \frac{1}{K} \sum_{j=1}^{K} \phi(w_j^\top x_n) - y_n \Big)^2. \qquad (2)
When the maskM is given, the learning objective is to estimate a proper weight matrixW for the learner network from the training samples D via solving
\min_{W \in \mathbb{R}^{d \times K}} \hat{f}_D(W) \quad \text{s.t.} \quad M \odot W = W. \qquad (3)
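For concreteness, a minimal NumPy sketch of the learner's forward model and the empirical risk in (2), with the constraint in (3) enforced by keeping W entry-wise masked, is given below; it is an illustration of the definitions rather than the training procedure analyzed later.

```python
import numpy as np

def g(X, W):
    """Learner output g(x_n; W) = (1/K) * sum_j phi(w_j^T x_n), for all n at once."""
    return np.maximum(X @ W, 0.0).mean(axis=1)     # X: (N, d), W: (d, K) -> (N,)

def empirical_risk(W, X, y):
    """Empirical risk (2): (1/(2N)) * sum_n (g(x_n; W) - y_n)^2."""
    res = g(X, W) - y
    return 0.5 * np.mean(res ** 2)

# The constraint M ⊙ W = W in (3) can be maintained by re-masking after every update:
# W = M * W
```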
M is called an accurate mask if the support of M covers the support of a permutation of M*, i.e., there exists a permutation matrix P such that (M*P) ⊙ M = M*. When M is accurate, and ξ = 0, there exists a permutation matrix P such that W*P is a global optimizer to (3). Hence, if W*P can be estimated by solving (3), one can learn the oracle network accurately, which has guaranteed generalization performance on the testing data.
We assume the x_n's are independent and identically distributed from the standard Gaussian distribution N(0, I_{d×d}). The Gaussian assumption is motivated by the data whitening [34] and batch normalization techniques [29] that are commonly used in practice to improve learning performance. Moreover, training a one-hidden-layer neural network with multiple neurons has intractably many fake minima [47] without any input distribution assumption. In addition, the theoretical results in Section 3 assume an accurate mask, while an inaccurate mask is evaluated empirically in Section 4.
The questions that this paper addresses include: 1. what algorithm to solve (3)? 2. what is the sample complexity for the accurate estimate of the weights in the oracle network? 3. what is the impact of the network pruning on the difficulty of the learning problem and the performance of the learned model?
3 Algorithm and Theoretical Results
Section 3.1 studies the geometric structure of (3), and the main results are in Section 3.2. Section 3.3 briefly introduces the proof sketch and technical novelty, and the limitations are in Section 3.4.
3.1 Local Geometric Structure
Theorem 1 characterizes the local convexity of f̂D in (3). It has two important implications.
1. Strictly locally convex near ground truth: f̂D is strictly convex nearW ∗P for some permutation matrix P , and the radius of the convex ball is negatively correlated with √ r̃, where r̃ is in the order of rave. Thus, the convex ball enlarges as any rj decreases.
2. Importance of the winning ticket architecture: Compared with training on the dense network directly, training on a properly pruned sub-network has a larger local convex region near W ∗P , which may lead to easier estimation of W ∗P . To some extent, this result can be viewed as a theoretical validation of the importance of the winning architecture (a good sub-network) in [20]. Formally, we have
Theorem 1 (Local Convexity). Assume the mask M of the learner network is accurate. Suppose constants ε0, ε1 ∈ (0, 1) and the number of samples satisfies
N = \Omega\big( \varepsilon_1^{-2} K^4 \tilde{r} \log q \big), \qquad (4)
for some large constant q > 0, where
\tilde{r} = \frac{1}{8K^4} \Big( \sum_{k=1}^{K} \sum_{j=1}^{K} (1 + \delta_{j,k}) (r_j + r_k)^{1/2} \Big)^2, \qquad (5)
δj,k is 1 if the indices of non-pruned weights in the j-th and k-th neurons overlap and 0 otherwise. Then, there exists a permutation matrix P such that for anyW that satisfies
\|W - W^*P\|_F = O\Big( \frac{\varepsilon_0}{K^2} \Big), \quad \text{and} \quad M \odot W = W, \qquad (6)
its Hessian of f̂D, with probability at least 1−K · q−rmin , is bounded as:
\Theta\Big( \frac{1 - \varepsilon_0 - \varepsilon_1}{K^2} \Big) I \preceq \nabla^2 \hat{f}_D(W) \preceq \Theta\Big( \frac{1}{K} \Big) I. \qquad (7)
Remark 1.1 (Parameter r̃): Clearly, r̃ is a monotonically increasing function of any r_j from (5). Moreover, one can check that (1/8) r_ave ≤ r̃ ≤ r_ave. Hence, r̃ is on the order of r_ave.
Remark 1.2 (Local landscape): Theorem 1 shows that with enough samples as in (4), in a local region of W*P as in (6), all the eigenvalues of the Hessian matrix of the empirical risk function are lower and upper bounded by two positive constants. This property is useful in designing efficient algorithms to recover W*P, as shown in Section 3.2.
Remark 1.3 (Size of the convex region): When the number of samples N is fixed and r changes, ε_1 can be Θ(√(r̃/N)) while (4) is still met. ε_0 in (7) can be arbitrarily close to, but smaller than, 1 − ε_1 so that the Hessian matrix is still positive definite. Then from (6), the radius of the convex ball is Θ(1) − Θ(√(r̃/N)), indicating an enlarged region when r̃ decreases. The enlarged convex region serves as an important component in proving the faster convergence rate summarized in Theorem 2. Besides this, as shown in Figure 1 of [20], the authors claim that the learning is stable if the linear interpolations of the models learned under different SGD noise remain similar in performance, which is summarized as the concept of a “linearly connected region.” Intuitively, we conjecture that the winning ticket shows a better performance in the stability analysis because it has a larger convex region. In other words, a larger convex region indicates that the learning is more likely to be stable in the linearly connected region.
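Since r̃ plays a central role in the bounds, the following small NumPy helper computes it directly from a 0/1 mask according to (5); it is only an illustration of the definition.

```python
import numpy as np

def r_tilde(M: np.ndarray) -> float:
    """Compute r_tilde in (5) from a 0/1 mask M of shape (d, K).

    r_j is the number of unpruned weights of neuron j; delta_{j,k} is 1 if the
    supports of neurons j and k overlap, and 0 otherwise.
    """
    d, K = M.shape
    r = M.sum(axis=0)                      # r_j for each neuron, length K
    overlap = (M.T @ M) > 0                # delta_{j,k}: support overlap indicator
    s = 0.0
    for j in range(K):
        for k in range(K):
            s += (1 + overlap[j, k]) * np.sqrt(r[j] + r[k])
    return s ** 2 / (8 * K ** 4)
```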
3.2 Convergence Analysis with Accelerated Gradient Descent
We propose to solve the non-convex problem (3) via the accelerated gradient descent (AGD) algorithm, summarized in Algorithm 1. Compared with the vanilla gradient descent (GD) algorithm, AGD has an additional momentum term, denoted by β(W (t) −W (t−1)), in each iteration. AGD enjoys a faster convergence rate than vanilla GD in solving optimization problems, including learning neural networks [65]. Vanilla GD can be viewed as a special case of AGD by letting β = 0. The initial point W (0) can be obtained through a tensor method, and the details are provided in Appendix B.
Algorithm 1 Accelerated Gradient Descent (AGD) Algorithm
1: Input: training data D = {(x_n, y_n)}_{n=1}^N, gradient step size η, momentum parameter β, and an initialization W^(0) by the tensor initialization method;
2: Partition D into T = log(1/ε) disjoint subsets, denoted as {D_t}_{t=1}^T;
3: for t = 1, 2, · · · , T do
4:     W^(t+1) = W^(t) − η · M ⊙ ∇f̂_{D_t}(W^(t)) + β(W^(t) − W^(t−1))
5: end for
6: Return: W^(T)
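A minimal NumPy sketch of Algorithm 1 is given below for illustration. It assumes W^(0) is provided (the tensor initialization is not reproduced here), computes the gradient of (2) for the ReLU learner in closed form, and, unlike the analysis, reuses the full training set in every iteration instead of the disjoint partitions {D_t}.

```python
import numpy as np

def agd(X, y, M, W0, eta, beta, T):
    """Accelerated gradient descent (Algorithm 1) on the masked empirical risk (2).

    X: (N, d) inputs; y: (N,) labels; M: (d, K) 0/1 mask; W0: (d, K) initial weights.
    """
    N, K = X.shape[0], W0.shape[1]
    W_prev, W = W0.copy(), W0.copy()
    for _ in range(T):
        Z = X @ W                                        # pre-activations, shape (N, K)
        res = np.maximum(Z, 0.0).mean(axis=1) - y        # residuals g(x_n; W) - y_n
        grad = X.T @ (res[:, None] * (Z > 0)) / (N * K)  # gradient of (2), shape (d, K)
        W_new = W - eta * (M * grad) + beta * (W - W_prev)  # masked gradient step + momentum
        W_prev, W = W, W_new
    return W
```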
The theoretical analyses of our algorithm are summarized in Theorem 2 (convergence) and Lemma 1 (Initialization). The significance of these results can be interpreted from the following aspects.
1. Linear convergence to the oracle model: Theorem 2 implies that if initialized in the local convex region, the iterates generated by AGD converge linearly toW ∗P for some P when noiseless. When there is noise, they converge to a pointW (T ). The distance betweenW (T ) andW ∗P is proportional to the noise level and scales in terms of O( √ r̃/N). Moreover, when N is fixed, the convergence rate
of AGD is Θ( √ r̃/K). Recall that Algorithm 1 reduces to the vanilla GD by setting β = 0. The rate for the vanilla GD algorithm here is Θ( √ r̃/K) by setting β = 0 by Theorem 2, indicating a slower convergence than AGD. Lemma 1 shows the tensor initialization method indeed returns an initial point in the convex region.
2. Sample complexity for accurate estimation: We show that the required number of samples for successful estimation of the oracle model is Θ ( r̃ log q log(1/ε) ) for some large constant q and estimation accuracy ε. Our sample complexity is much less than the conventional bound of Θ(d log q log(1/ε)) for one-hidden-layer networks [69, 66, 67]. This is the first theoretical characterization of learning a pruned network from the perspective of sample complexity.
3. Improved generalization of winning tickets: We prove that with a fixed number of training samples, training on a properly pruned sub-network converges faster toW ∗P than training on the original dense network. Our theoretical analysis justifies that training on the winning ticket can meet or exceed the same test accuracy within the same number of iterations. To the best of our knowledge, our result here provides the first theoretical justification for this intriguing empirical finding of “improved generalization of winning tickets” by [20]. Theorem 2 (Convergence). Assume the maskM of the learner network is accurate. SupposeW (0) satisfies (6) and the number of samples satisfies
N = \Omega\big( \varepsilon_0^{-2} K^6 \tilde{r} \log q \log(1/\varepsilon) \big) \qquad (8)
for some ε_0 ∈ (0, 1/2). Let η = K/14 in Algorithm 1. Then, the iterates {W^(t)}_{t=1}^T returned by Algorithm 1 converge linearly to W* up to the noise level, with probability at least 1 − K^2 T · q^{−r_min}:
\|W^{(t)} - W^*P\|_F \le \nu(\beta)^t \|W^{(0)} - W^*P\|_F + O\Big( \sum_j \sqrt{\frac{r_j \log q}{N}} \Big) \cdot |\xi|, \qquad (9)
and
\|W^{(T)} - W^*P\|_F \le \varepsilon \|W^*\|_F + O\Big( \sum_j \sqrt{\frac{r_j \log q}{N}} \Big) \cdot |\xi|, \qquad (10)
for a fixed permutation matrix P, where ν(β) is the rate of convergence that depends on β, with ν(β*) = 1 − Θ((1 − ε_0)/√K) for some non-zero β* and ν(0) = 1 − Θ((1 − ε_0)/K).
Lemma 1 (Initialization). Assume the noise |ξ| ≤ ‖W*‖_2 and the number of samples N = Ω(ε_0^{-2} K^5 r_max log q) for ε_0 > 0 and a large constant q. Then the tensor initialization method outputs W^(0) such that (6) holds, i.e., ‖W^(0) − W*‖_F = O(ε_0 σ_K / K^2), with probability at least 1 − q^{−r_max}.
Remark 2.1 (Faster convergence on pruned network): With a fixed number of samples, when r̃ decreases, ε_0 can increase as Θ(√r̃) while (8) is still met. Then ν(0) = Θ(√r̃/K) and ν(β*) = Θ(√r̃/K). Therefore, when r̃ decreases, both the stochastic and accelerated gradient descent converge faster. Note that as long as W^(0) is initialized in the local convex region, not necessarily by the tensor method, Theorem 2 guarantees the accurate recovery. [66, 67] analyze AGD on convolutional neural networks, while this paper focuses on network pruning.
Remark 2.2 (Sample complexity of initialization): From Lemma 1, the required number of samples for a proper initialization is Ω ( ε−20 K 5rmax log q ) . Because rmax ≤ Krave and r̃ = Ω(rave), this number is no greater than the sample complexity in (8). Thus, provided that (8) is met, Algorithm 1 can estimate the oracle network model accurately.
Remark 2.3 (Inaccurate mask): The above analyses are based on the assumption that the mask of the learner network is accurate. In practice, a mask can be obtained by an iterative pruning method such as [20] or a one-shot pruning method such as [55]. In Appendix E, we prove that the magnitude pruning method can obtain an accurate mask with enough training samples. Moreover, empirical experiments in Section 4.2 and 4.3 suggest that even if the mask is not accurate, the three properties (linear convergence, sample complexity with respect to the network size, and improved generalization of winning tickets) can still hold. Therefore, our theoretical results provide some insight into the empirical success of network pruning.
3.3 The Sketch of Proofs and Technical Novelty
Our proof outline is inspired by [69] on fully connected neural networks, however, major technical changes are made in this paper to generalize the analysis to an arbitrarily pruned network. To characterize the local convex region of f̂D (Theorem 1), the idea is to bound the Hessian matrix of the population risk function, which is the expectation of the empirical risk function, locally and then characterize the distance between the empirical and population risk functions through the concentration bounds. Then, the convergence of AGD (Theorem 2) is established based on the desired local curvature, which in turn determines the sample complexity. Finally, to initialize in the local convex region (Lemma 1), we construct tensors that contain the weights information and apply a decomposition method to estimate the weights.
Our technical novelties are as follows. First, a direct application of the results in [69] leads to a sample complexity bound that is linear in the feature dimension d. We develop new techniques to tighten the sample complexity bound to be linear in r̃, which can be significantly smaller than d for a sufficiently pruned network. Specifically, we develop new concentration bounds (Lemmas 4 and 5 in the Appendix) to bound the distance between the population and empirical risk functions rather than using the bound in [69]. Second, instead of restricting the activation to be smooth for the convergence analysis, we study the ReLU function, which is non-smooth. Third, new tensors are constructed for pruned networks (see (21)-(23) in the Appendix) in computing the initialization, and our new concentration bounds are employed to reduce the required number of samples for a proper initialization. Last, Algorithm 1 employs AGD and is proved to converge faster than the GD algorithm in [69].
3.4 Limitations
Like most theoretical works based on the oracle-learner setup, limitations of this work include (1) one hidden layer only; and (2) the input follows the Gaussian distribution. Extension to multi-layer might be possible if the following technical challenges are addressed. First, when characterizing the local convex region, one needs to show that the Hessian matrix is positive definite. In multi-layer networks, the Hessian matrix is more complicated to compute. Second, new concentration bounds need to be developed because the input feature distributions to the second and third layers depend on the weights in previous layers. Third, the initialization approach needs to be revised. The team is also investigating the other input distributions such as Gaussian mixture models.
4 Numerical Experiments
The theoretical results are first verified on synthetic data, and we then analyze the pruning performance on both synthetic and real datasets. In Section 4.1, Algorithm 1 is implemented with minor modification, such that, the initial point is randomly selected as ‖W (0) −W ∗‖F /‖W ∗‖F < λ for some λ > 0 to reduce the computation. Algorithm 1 terminates when ‖W (t+1)−W (t)‖F /‖W (t)‖F is smaller than 10−8 or reaching 10000 iterations. In Sections 4.2 and 4.3, the Gradient Signal Preservation (GraSP) algorithm [55] and IMP algorithm [10, 20]2 are implemented to prune the neural networks. As many works like [11, 10, 20] have already verified the faster convergence and better generalization accuracy of the winning tickets empirically, we only include the results of some representative experiments, such as training MNIST and CIFAR-10 on Lenet-5 [32] and Resnet-50 [27] networks, to verify our theoretical findings.
The synthetic data are generated using an oracle model as in Figure 1. The inputs x_n are randomly generated from the Gaussian distribution N(0, I_{d×d}) independently, and the indices of the non-pruned weights of the j-th neuron are obtained by randomly selecting r_j numbers without replacement from [d]. For the convenience of generating a specific r̃, the indices of non-pruned weights are almost fully overlapped (∑_j ∑_k δ_{j,k} > 0.95K^2), except for Figure 5. In Figures 2 and 4, r_j is selected uniformly from [0.9r̃, 1.1r̃] for a given r̃, and the r_j are the same in value for all j in the other figures. Each non-zero entry of W* is randomly selected from [−0.5, 0.5] independently. The noise ξ_n's are i.i.d. from N(0, σ^2), and the noise level is measured by σ/E_y, where E_y is the root mean square of the noiseless outputs.
2The source codes used are downloaded from https://github.com/VITA-Group/CV_LTH_Pre-training.
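The following NumPy sketch illustrates this synthetic setup: Gaussian inputs, a randomly pruned oracle with r_j non-zero weights per neuron drawn from [−0.5, 0.5], and additive Gaussian noise. The values of d, K, r_j, and N follow settings reported in this section where stated; the noise standard deviation passed below is illustrative.

```python
import numpy as np

def make_oracle(d, K, r, rng):
    """Random pruned oracle: neuron j keeps r[j] randomly chosen input weights."""
    W = np.zeros((d, K))
    for j in range(K):
        idx = rng.choice(d, size=r[j], replace=False)     # support of neuron j
        W[idx, j] = rng.uniform(-0.5, 0.5, size=r[j])     # non-zero weights
    return W

def sample_data(W_star, N, sigma, rng):
    """Draw x_n ~ N(0, I) and y_n = g(x_n; W*) + xi_n with xi_n ~ N(0, sigma^2)."""
    d = W_star.shape[0]
    X = rng.standard_normal((N, d))
    y = np.maximum(X @ W_star, 0.0).mean(axis=1) + sigma * rng.standard_normal(N)
    return X, y

rng = np.random.default_rng(0)
W_star = make_oracle(d=100, K=5, r=[20] * 5, rng=rng)
X, y = sample_data(W_star, N=1000, sigma=1e-3, rng=rng)
```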
4.1 Evaluation of theoretical findings on synthetic data
Local convexity near W*. We set the number of neurons K = 10, the dimension of the data d = 500, and the sample size N = 5000. Figure 2 illustrates the success rate of Algorithm 1 when r̃ changes. The y-axis is the relative distance of the initialization W^(0) to the ground truth. For each pair of r̃ and initial distance, 100 trials are constructed, with the network weights, training data, and the initialization W^(0) all generated independently in each trial. A trial is called successful if the relative error of the solution W returned by Algorithm 1, measured by ‖W − W*‖_F / ‖W*‖_F, is less than 10^{-4}. A black block means Algorithm 1 fails in estimating W* in all trials, while a white block indicates all successes. As Algorithm 1 succeeds if W^(0) is in the local convex region near W*, we can see that the radius of the convex region is indeed linear in −r̃^{1/2}, as predicted by Theorem 1.
Convergence rate. Figure 3 shows the convergence rate of Algorithm 1 when r̃ changes. N = 5000, d = 300, K = 10, η = 0.5, and β = 0.2. Figure 3(a) shows that the relative error decreases exponentially as the number of iterations increases, indicating the linear convergence of Algorithm 1. As shown in Figure 3(b), the results are averaged over 20 trials with different initial points, and the areas in low transparency represent the standard deviation errors. We can see that the convergence rate is almost linear in √r̃, as predicted by Theorem 2. We also compare with GD by setting β to 0. One can see that AGD has a smaller convergence rate than GD, indicating faster convergence.
Figure 2: The radius of the local convex region against r̃^{1/2}.
Sample complexity. Figures 4 and 5 show the success rate of Algorithm 1 when varying N and r̃. d is fixed as 100. In Figure 4, we construct 100 independent trials for each pair of N and r̃, where the ground-truth model and training data are generated independently in each trial. One can see that the required number of samples for successful estimation is linear in r̃, as predicted by (8). In Figure 5, r_j is fixed as 20 for all neurons, but different network architectures after pruning are considered. One can see that although the number of remaining weights is the same, r̃ can differ across architectures, and the sample complexity increases as r̃ increases, as predicted by (8).
Performance in the noisy case. Figure 6 shows the relative error of the model learned by Algorithm 1 from noisy measurements when r̃ changes. N = 1000, K = 10, and d = 300. The results are averaged over 100 independent trials, and the standard deviation is around 2% to 8% of the corresponding relative errors. The relative error is linear in r̃^{1/2}, as predicted by (9). Moreover, the relative error is proportional to the noise level |ξ|.
4.2 Performance with inaccurate mask on synthetic data
The performance of Algorithm 1 is evaluated when the maskM of the learner network is inaccurate. The number of neurons K is 5. The dimension of inputs d is 100. r∗j of the oracle model is 20 for
all j ∈ [K]. The GraSP algorithm [55] is employed to find masks based only on early-trained weights in 20 iterations of AGD. The mask accuracy is measured by ‖M* ⊙ M‖_0 / ‖M*‖_0, where M* is the mask of the oracle model. The pruning ratio is defined as (1 − r_ave/d) × 100%. The number of training samples N is 200. The model returned by Algorithm 1 is evaluated on N_test = 10^5 samples, and the test error is measured by √(∑_n |y_n − ŷ_n|^2 / N_test), where ŷ_n is the output of the learned model with the input x_n, and (x_n, y_n) is the n-th testing sample generated by the oracle network.
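The two quantities used throughout this subsection can be computed directly from the masks, as in the following NumPy sketch (an illustration of the definitions above).

```python
import numpy as np

def mask_accuracy(M_star: np.ndarray, M: np.ndarray) -> float:
    """Mask accuracy ||M* ⊙ M||_0 / ||M*||_0: fraction of oracle weights kept by M."""
    return np.count_nonzero(M_star * M) / np.count_nonzero(M_star)

def pruning_ratio(M: np.ndarray) -> float:
    """Pruning ratio (1 - r_ave / d) * 100%, with r_ave the average number of
    unpruned weights per neuron and d the input dimension."""
    d = M.shape[0]
    r_ave = M.sum(axis=0).mean()
    return (1.0 - r_ave / d) * 100.0
```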
Improved generalization by GraSP. Figure 7 shows the test error with different pruning ratios. For each pruning ratio, we randomly generate 1000 independent trials. Because the mask of the learner network in each trial is generated independently, we compute the average test error of the learned models over all trials with the same mask accuracy. If there are fewer than 10 trials for a certain mask accuracy, the result for that mask accuracy is not reported as it is statistically meaningless. The test error decreases as the mask accuracy increases. More importantly, at a fixed mask accuracy, the test error decreases as the pruning ratio increases. That means the generalization performance improves when r̃ decreases, even if the mask is not accurate.
Linear convergence. Figure 8 shows the convergence rate of Algorithm 1 with different pruning ratios. We show the smallest number of iterations required to achieve a certain test error of the learned model, and the results are averaged over the independent trials with mask accuracy between 0.85 and 0.90. Even with inaccurate mask, the test error converges linearly. Moreover, as the pruning ratio increases, Algorithm 1 converges faster.
Sample complexity with respect to the pruning ratio. Figure 9 shows the test error when the number of training samples N changes. All the other parameters except N remain the same. The results are averaged over the trials with mask accuracy between 0.85 and 0.90. We can see the test error decreases when N increases. More importantly, as the pruning ratio increases, the required number of samples to achieve the same test error (no less than 10−3) decreases dramatically. That means the sample complexity decreases as r̃ decreases even if the mask is inaccurate.
4.3 Performance of IMP on synthetic, MNIST and CIFAR-10 datasets
We implement the IMP algorithm to obtain pruned networks on synthetic, MNIST and CIFAR-10 datasets. Figure 10 shows the test performance of a pruned network on synthetic data with different sample sizes. Here in the oracle network model, K = 5, d = 100, and r∗j = 20 for all j ∈ [K]. The noise level σ/Ey = 10−3. One observation is that for a fixed sample size N greater than 100, the test error decreases as the pruning ratio increases. This verifies that the IMP algorithm indeed prunes the network properly. It also shows that the learned model improves as the pruning progresses, verifying our theoretical result in Theorem 2 that the difference of the learned model from the oracle model decreases as rj decreases. The second observation is that the test error decreases as N increases for any fixed pruning ratio. This verifies our result in Theorem 2 that the difference of the learned model from the oracle model decreases as the number of training samples increases. When the pruning ratio is too large (greater than 80%), the pruned network cannot explain the data properly, and thus the test error is large for all N . When the number of samples is too small, like N = 100, the test error is always large, because it does not meet the sample complexity requirement for estimating the oracle model even though the network is properly pruned.
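For reference, the sketch below illustrates one round of iterative magnitude pruning in PyTorch: train, prune the smallest-magnitude weights globally, and rewind the surviving weights. The rewinding target and per-layer versus global pruning vary across the cited works, so this is an assumption-laden illustration rather than the exact procedure used in the experiments.

```python
import copy
import torch

def imp_round(model, init_state, train_fn, prune_fraction=0.2):
    """One round of iterative magnitude pruning: train, prune the smallest weights
    globally, then rewind surviving weights to their stored initial values.
    Biases are included in the pruning for simplicity."""
    train_fn(model)                                             # train to convergence
    all_weights = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_weights, prune_fraction)     # global magnitude cut-off
    masks = {name: (p.detach().abs() > threshold).float()
             for name, p in model.named_parameters()}
    model.load_state_dict(copy.deepcopy(init_state))            # rewind to initialization
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.mul_(masks[name])                                 # apply the new mask
    return model, masks
```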
Figures 11 and 12 show the test performance of learned models by implementing the IMP algorithm on MNIST and CIFAR-10 using Lenet-5 [32] and Resnet-50 [27] architecture, respectively. The
experiments follow the standard setup in [10] except for the size of the training sets. To demonstrate the effect of sample complexity, we randomly selected N samples from the original training set without replacement. As we can see, a properly pruned network (i.e., winning ticket) helps reduce the sample complexity required to reach the test accuracy of the original dense model. For example, training on a pruned network returns a model (e.g., P1 and P3 in Figures 11 and 12) that has better testing performance than a dense model (e.g., P2 and P4 in Figures 11 and 12) trained on a larger data set. Given the number of samples, we consistently find the characteristic behavior of winning tickets: That is, the test accuracy could increase when the pruning ratio increases, indicating the effectiveness of pruning. The test accuracy then drops when the network is overly pruned. The results show that our theoretical characterization of sample complexity is well aligned with the empirical performance of pruned neural networks and explains the improved generalization observed in LTH.
5 Conclusions
This paper provides the first theoretical analysis of learning one-hidden-layer pruned neural networks, which offers formal justification of the improved generalization of winning ticket observed from empirical findings in LTH. We characterize analytically the impact of the number of remaining weights in a pruned network on the required number of samples for training, the convergence rate of the learning algorithm, and the accuracy of the learned model. We also provide extensive numerical validations of our theoretical findings.
Broader impacts
We see no ethical or immediate societal consequence of our work. This paper contributes to the theoretical foundations of both network pruning and generalization guarantees. The former encourages the development of learning methods that reduce the computational cost. The latter increases public trust in incorporating AI technology in critical domains.
Acknowledgement
This work was supported by AFOSR FA9550-20-1-0122, ARO W911NF-21-1-0255, NSF 1932196 and the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons). We thank Tianlong Chen at University of Texas at Austin, Haolin Xiong at Rensselaer Polytechnic Institute and Yihua Zhang at Michigan State University for the help in formulating numerical experiments. We thank all anonymous reviewers for their constructive comments. | 1. What is the focus of the paper regarding theoretical explanations for improved generalization error?
2. What are the strengths of the proposed approach, particularly in terms of adapting previous works to pruned networks settings?
3. Do you have any concerns or limitations regarding the technical novelty of the paper?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. What are the implications of the paper's findings on accelerated gradient descent method and the importance of winning tickets? | Summary Of The Paper
Review | Summary Of The Paper
The paper gives theoretical explanation for improved generalization error of winning tickets for which only empirical results are known. Their results are based on teacher-student setting in which training samples are assumed to be generated from an unknown teacher network and student network is supposed to learn only from those samples. They give an accelerated gradient descent method for learning the pruned network and convergence and sample complexity analysis for this algorithm. The empirical risk function is shown to have an enlarged convex region for a pruned network, which justifies the importance of the winning ticket. Learning on pruned network with the AGD algorithm gives a model closer to the teacher network with the same number of iterations, which implies better generalization of the trained pruned network. These findings are validated with experiments on synthetic and real datasets (MNIST, CIFAR-10).
Review
There is significant empirical evidence in support of lottery tickets; however, the theoretical understanding/justification is not very clear in the literature. This work tries to offer a theoretical explanation for the winning tickets in the teacher-student setup. Most of the results of the paper are inspired by [63], which has similar results for fully-connected one-layer neural networks. Straightforward application of bounds from this prior work yields suboptimal bounds for pruned networks, and hence they adapted it to the pruned-network setting. In addition to some key differences in analysis, like the application of novel concentration bounds and non-smooth activation functions, they also construct new tensors for pruned networks. Overall, the paper is well written and looks technically sound. Though the proofs are inspired from prior works, the adaptation to the pruned-network setting is also not trivial. In sum, despite limited technical novelty, I think it's a good contribution towards theoretical understanding of winning tickets. |
NIPS | Title
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks
Abstract
The lottery ticket hypothesis (LTH) [20] states that learning on a properly pruned network (the winning ticket) improves test accuracy over the original unpruned network. Although LTH has been justified empirically in a broad range of deep neural network (DNN) involved applications like computer vision and natural language processing, the theoretical validation of the improved generalization of a winning ticket remains elusive. To the best of our knowledge, our work, for the first time, characterizes the performance of training a pruned neural network by analyzing the geometric structure of the objective function and the sample complexity to achieve zero generalization error. We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket. Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer. With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved generalization of the winning ticket. Our theoretical results are acquired from learning a pruned neural network of one hidden layer, while experimental results are further provided to justify the implications in pruning multi-layer neural networks.
1 Introduction
Neural network pruning can reduce the computational cost of model training and inference significantly and potentially lessen the chance of overfitting [33, 26, 15, 25, 28, 51, 58, 41]. The recent Lottery Ticket Hypothesis (LTH) [20] claims that a randomly initialized dense neural network always contains a so-called “winning ticket,” which is a sub-network bundled with the corresponding initialization, such that when trained in isolation, this winning ticket can achieve at least the same testing accuracy as that of the original network by running at most the same amount of training time. This so-called “improved generalization of winning tickets” is verified empirically in [20]. LTH has attracted a significant amount of recent research interest [45, 70, 39]. Despite the empirical success [19, 63, 55, 11], the theoretical justification of winning tickets remains elusive except for a few recent works. [39] provides the first theoretical evidence that within a randomly initialized neural network, there exists a good sub-network that can achieve the same test performance as the original network. Meanwhile, recent work [42] trains the neural network with an ℓ1 regularization term to obtain a relatively sparse neural network, which performs better numerically.
However, the theoretical foundation of network pruning is limited. The existing theoretical works usually focus on finding a sub-network that achieves a tolerable loss in either expressive power or training accuracy, compared with the original dense network [2, 71, 61, 43, 4, 3, 35, 5, 59]. To the best of our knowledge, there exists no theoretical support for the improved generalization achieved by winning tickets, i.e., pruned networks with faster convergence and better test accuracy.
Contributions: This paper provides the first systematic analysis of learning pruned neural networks with a finite number of training samples in the oracle-learner setup, where the training data are generated by a unknown neural network, the oracle, and another network, the learner, is trained on the dataset. Our analytical results also provide a justification of the LTH from the perspective of the sample complexity. In particular, we provide the first theoretical justification of the improved generalization of winning tickets. Specific contributions include:
1. Pruned neural network learning via accelerated gradient descent (AGD): We propose an AGD algorithm with tensor initialization to learn the pruned model from training samples. Our algorithm converges to the oracle model linearly, which has guaranteed generalization.
2. First sample complexity analysis for pruned networks: We characterize the required number of samples for successful convergence, termed as the sample complexity. Our sample complexity bound depends linearly on the number of the non-pruned weights and is a significant reduction from directly applying conventional complexity bounds in [69, 66, 67].
3. Characterization of the benign optimization landscape of pruned networks: We show analytically that the empirical risk function has an enlarged convex region for a pruned network, justifying the importance of a good sub-network (i.e., the winning ticket).
4. Characterization of the improved generalization of winning tickets: We show that gradientdescent methods converge faster to the oracle model when the neural network is properly pruned, or equivalently, learning on a pruned network returns a model closer to the oracle model with the same number of iterations, indicating the improved generalization of winning tickets.
Notations. Vectors are bold lowercase, matrices and tensors are bold uppercase. Scalars are in normal font, and sets are in calligraphy and blackboard bold font. I denote the identity matrix. N and R denote the sets of nature number and real number, respectively. ‖z‖ denotes the `2-norm of a vector z, and ‖Z‖2, ‖Z‖F and ‖Z‖∞ denote the spectral norm, Frobenius norm and the maximum value of matrix Z, respectively. [Z] stands for the set of {1, 2, · · · , Z} for any number Z ∈ N. In addition, f(r) = O(g(r)) ( or f(r) = Ω(g(r)) ) if f ≤ C · g ( or f ≥ C · g ) for some constant C > 0 when r is large enough. f(r) = Θ(g(r)) if both f(r) = O(g(r)) and f(r) = Ω(g(r)) holds, where c · g ≤ f ≤ C · g for some constant 0 ≤ c ≤ C when r is large enough.
1.1 Related Work
Network pruning. Network pruning methods seek a compressed model while maintaining the expressive power. Numerical experiments have shown that over 90% of the parameters can be pruned without a significant performance loss [10]. Examples of pruning methods include irregular weight pruning [25], structured weight pruning [57], neuron-based pruning [28], and projecting the weights to a low-rank subspace [13].
Winning tickets. [20] employs an Iterative Magnitude Pruning (IMP) algorithm to obtain the proper sub-network and initialization. IMP and its variations [22, 46] succeed in deeper networks like Residual Networks (Resnet)-50 and Bidirectional Encoder Representations from Transformers (BERT) network [11]. [21] shows that IMP succeeds in finding the “winning ticket” if the ticket is
stable to stochastic gradient descent noise. In parallel, [36] shows numerically that the “winning ticket” initialization does not improve over a random initialization once the correct sub-networks are found, suggesting that the benefit of “winning ticket” mainly comes from the sub-network structures. [18] analyzes the sample complexity of IMP from the perspective of recovering a sparse vector in a linear model rather than learning neural networks.
Feature sparsity. High-dimensional data often contains redundant features, and only a subset of the features is used in training [6, 14, 27, 60, 68]. Conventional approaches like wrapper and filter methods score the importance of each feature in a certain way and select the ones with highest scores [24]. Optimization-based methods add variants of the `0 norm as a regularization to promote feature sparsity [68]. Different from network pruning where the feature dimension still remains high during training, the feature dimension is significantly reduced in training when promoting feature sparsity.
Over-parameterized model. When the number of weights in a neural network is much larger than the number of training samples, the landscape of the objective function of the learning problem has no spurious local minima, and first-order algorithms converge to one of the global optima [37, 44, 64, 50, 9, 49, 38]. However, the global optima is not guaranteed to generalize well on testing data [62, 64].
Generalization analyses. The existing generalization analyses mostly fall within three categories. One line of research employs the Mean Field approach to model the training process by a differential equation assuming infinite network width and infinitesimal training step size [12, 40, 56]. Another approach is the neural tangent kernel (NTK) [30], which requires strong and probably unpractical over-parameterization such that the nonlinear neural network model behaves as its linearization around the initialization [1, 17, 72, 73]. The third line of works follow the oracle-learner setup, where the data are generated by an unknown oracle model, and the learning objective is to estimate the oracle model, which has a generalization guarantee on testing data. However, the objective function has intractably many spurious local minima even for one-hidden-layer neural networks [48, 47, 64]. Assuming an infinite number of training samples, [8, 16, 52] develop learning methods to estimate the oracle model. [23, 69, 66, 67] extend to the practical case of a finite number of samples and characterize the sample complexity for recovering the oracle model. Because the analysis complexity explodes when the number of hidden layers increases, all the analytical results about estimating the oracle model are limited to one-hidden-layer neural networks, and the input distribution is often assumed to be the standard Gaussian distribution.
2 Problem Formulation
In an oracle-learner model, given any input x ∈ Rd, the corresponding output y is generated by a pruned one-hidden-layer neural network, called oracle, as shown in Figure 1. The oracle network is equipped with K neurons where the j-th neuron is connected to any arbitrary r∗j (r ∗ j ≤ d) input features. LetW ∗ = [w∗1, · · · ,w∗K ] ∈ Rd×K denotes all the weights (pruned ones are represented by zero). The number of non-zero entries in w∗j is at most r ∗ j . The oracle network is not unique because permuting neurons together with the corresponding weights does not change the output. Therefore, the output label y obtained by the oracle network satisfies 1
y = \frac{1}{K} \sum_{j=1}^{K} \phi(w_j^{*\top} x) + \xi := g(x; W^*) + \xi = g(x; W^*P) + \xi, \qquad (1)
where ξ is arbitrary unknown additive noise bounded by some constant |ξ|, φ is the rectified linear unit (ReLU) activation function with φ(z) = max{z, 0}, and P ∈ {0, 1}K×K is any permutation matrix. M∗ is a mask matrix for the oracle network, such that M∗j,i equals to 1 if the weight w∗j,i is not pruned, and 0 otherwise. Then,M∗ is an indicator matrix for the non-zero entries ofW ∗ with M∗ W ∗ = W ∗, where is entry-wise multiplication. Based on N pairs of training samples D = {xn, yn}Nn=1 generated by the oracle, we train on a learner network equipped with the same number of neurons in the oracle network. However, the j-th neuron in the learner network is connected to rj input features rather than r∗j . Let rmin, rmax, and rave denote the minimum, maximum, and average value of {rj}Kj=1, respectively. LetM denote the
1It is extendable to binary classification, and the output is generated by Prob ( yn = 1|xn ) = g(xn;W ∗).
mask matrix with respect to the learner network, and wj is the j-th column ofW . The empirical risk function is defined as
\hat{f}_D(W) = \frac{1}{2N} \sum_{n=1}^{N} \Big( \frac{1}{K} \sum_{j=1}^{K} \phi(w_j^\top x_n) - y_n \Big)^2. \qquad (2)
When the maskM is given, the learning objective is to estimate a proper weight matrixW for the learner network from the training samples D via solving
\min_{W \in \mathbb{R}^{d \times K}} \hat{f}_D(W) \quad \text{s.t.} \quad M \odot W = W. \qquad (3)
M is called an accurate mask if the support ofM covers the support of a permutation ofM∗, i.e., there exists a permutation matrix P such that (M∗P ) M = M∗. When M is accurate, and ξ = 0, there exists a permutation matrix P such that W ∗P is a global optimizer to (3). Hence, if W ∗P can be estimated by solving (3), one can learn the oracle network accurately, which has guaranteed generalization performance on the testing data.
We assume the xn are independent and identically distributed, drawn from the standard Gaussian distribution N(0, Id×d). The Gaussian assumption is motivated by the data whitening [34] and batch normalization [29] techniques that are commonly used in practice to improve learning performance. Moreover, training a one-hidden-layer neural network with multiple neurons has intractably many spurious local minima [47] without any assumption on the input distribution. In addition, the theoretical results in Section 3 assume an accurate mask, and inaccurate masks are evaluated empirically in Section 4.
The questions that this paper addresses include: 1. which algorithm should be used to solve (3)? 2. what is the sample complexity for an accurate estimate of the weights in the oracle network? 3. what is the impact of network pruning on the difficulty of the learning problem and the performance of the learned model?
3 Algorithm and Theoretical Results
Section 3.1 studies the geometric structure of (3), and the main results are in Section 3.2. Section 3.3 briefly introduces the proof sketch and technical novelty, and the limitations are in Section 3.4.
3.1 Local Geometric Structure
Theorem 1 characterizes the local convexity of f̂D in (3). It has two important implications.
1. Strictly locally convex near ground truth: f̂D is strictly convex nearW ∗P for some permutation matrix P , and the radius of the convex ball is negatively correlated with √ r̃, where r̃ is in the order of rave. Thus, the convex ball enlarges as any rj decreases.
2. Importance of the winning ticket architecture: Compared with training on the dense network directly, training on a properly pruned sub-network has a larger local convex region near W ∗P , which may lead to easier estimation of W ∗P . To some extent, this result can be viewed as a theoretical validation of the importance of the winning architecture (a good sub-network) in [20]. Formally, we have
Theorem 1 (Local Convexity). Assume the mask M of the learner network is accurate. Suppose constants ε0, ε1 ∈ (0, 1) and the number of samples satisfies
N = Ω( ε1^{−2} K^4 r̃ log q ),  (4)
for some large constant q > 0, where
r̃ = (1/(8K^4)) ( ∑_{k=1}^{K} ∑_{j=1}^{K} (1 + δj,k) (rj + rk)^{1/2} )^2,  (5)
δj,k is 1 if the indices of non-pruned weights in the j-th and k-th neurons overlap and 0 otherwise. Then, there exists a permutation matrix P such that for any W that satisfies
‖W − W∗P‖_F = O( ε0 / K^2 ),  and  M ⊙ W = W,  (6)
the Hessian of f̂D at W, with probability at least 1 − K · q^{−rmin}, is bounded as:
Θ( (1 − ε0 − ε1) / K^2 ) · I ⪯ ∇^2 f̂D(W) ⪯ Θ( 1/K ) · I.  (7)
Remark 1.1 (Parameter r̃): Clearly r̃ is a monotonically increasing function of any rj from (5). Moreover, one can check that (1/8) rave ≤ r̃ ≤ rave. Hence, r̃ is in the order of rave.
Remark 1.2 (Local landscape): Theorem 1 shows that with enough samples as shown in (4), in a local region of W∗P as shown in (6), all the eigenvalues of the Hessian matrix of the empirical risk function are lower and upper bounded by two positive constants. This property is useful in designing efficient algorithms to recover W∗P, as shown in Section 3.2.
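Since r̃ in (5) depends only on the mask, it can be computed directly from the per-neuron support sizes rj and the overlap indicators δj,k. A small sketch (our own helper, written straight from (5)):

```python
import numpy as np

def r_tilde(M):
    # M: (d, K) binary mask of the learner network; r_j = non-pruned weights of neuron j
    d, K = M.shape
    r = M.sum(axis=0)                         # r_j, shape (K,)
    delta = ((M.T @ M) > 0).astype(float)     # delta_{j,k} = 1 if supports of j and k overlap
    s = np.sum((1.0 + delta) * np.sqrt(r[:, None] + r[None, :]))
    return s ** 2 / (8.0 * K ** 4)
```

One can verify numerically that rave/8 ≤ r_tilde(M) ≤ rave, matching Remark 1.1.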
Remark 1.3 (Size of the convex region): When the number of samples N is fixed and r̃ changes, ε1 can be Θ(√(r̃/N)) while (4) is still met. ε0 in (7) can be arbitrarily close to but smaller than 1 − ε1 so that the Hessian matrix is still positive definite. Then from (6), the radius of the convex ball is Θ(1) − Θ(√(r̃/N)), indicating an enlarged region when r̃ decreases. The enlarged convex region serves as an important component in proving the faster convergence rate, summarized in Theorem 2. Besides this, as shown in Figure 1 of [20], the authors claim that the learning is stable if the linear interpolations of the learned models with SGD noise remain similar in performance, which is summarized as the concept of a “linearly connected region.” Intuitively, we conjecture that the winning ticket shows a better performance in the stability analysis because it has a larger convex region. In other words, a larger convex region indicates that the learning is more likely to be stable in the linearly connected region.
3.2 Convergence Analysis with Accelerated Gradient Descent
We propose to solve the non-convex problem (3) via the accelerated gradient descent (AGD) algorithm, summarized in Algorithm 1. Compared with the vanilla gradient descent (GD) algorithm, AGD has an additional momentum term, denoted by β(W (t) −W (t−1)), in each iteration. AGD enjoys a faster convergence rate than vanilla GD in solving optimization problems, including learning neural networks [65]. Vanilla GD can be viewed as a special case of AGD by letting β = 0. The initial point W (0) can be obtained through a tensor method, and the details are provided in Appendix B.
Algorithm 1 Accelerated Gradient Descent (AGD) Algorithm
1: Input: training data D = {(xn, yn)}Nn=1, gradient step size η, momentum parameter β, and an initialization W(0) by the tensor initialization method;
2: Partition D into T = log(1/ε) disjoint subsets, denoted as {Dt}Tt=1;
3: for t = 1, 2, · · · , T do
4:   W(t+1) = W(t) − η · M ⊙ ∇f̂Dt(W(t)) + β(W(t) − W(t−1))
5: end for
6: Return: W(T)
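A minimal NumPy sketch of the masked AGD update in line 4 of Algorithm 1 (illustrative only; the gradient is written directly from (2), the tensor initialization is replaced by a user-supplied W0, and all names are ours):

```python
import numpy as np

def masked_agd(W0, M, subsets, eta, beta):
    # subsets: list of disjoint batches (X_t, y_t) playing the role of D_1, ..., D_T
    W_prev, W = W0.copy(), W0.copy()
    for X, y in subsets:
        N, K = X.shape[0], W.shape[1]
        pre = X @ W                                            # (N, K) pre-activations
        residual = np.maximum(pre, 0.0).mean(axis=1) - y       # (N,)
        # gradient of (2): (1/(N*K)) * X^T [residual_n * 1{pre_{n,j} > 0}]
        grad = X.T @ (residual[:, None] * (pre > 0)) / (N * K)
        W_next = W - eta * (M * grad) + beta * (W - W_prev)    # line 4 of Algorithm 1
        W_prev, W = W, W_next
    return W
```

Setting beta = 0 recovers the vanilla GD baseline used for comparison in Section 4.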
The theoretical analyses of our algorithm are summarized in Theorem 2 (convergence) and Lemma 1 (Initialization). The significance of these results can be interpreted from the following aspects.
1. Linear convergence to the oracle model: Theorem 2 implies that if initialized in the local convex region, the iterates generated by AGD converge linearly to W∗P for some P in the noiseless case. When there is noise, they converge to a point W(T). The distance between W(T) and W∗P is proportional to the noise level and scales as O(√(r̃/N)). Moreover, when N is fixed, the convergence rate of AGD is Θ(√r̃/K). Recall that Algorithm 1 reduces to vanilla GD by setting β = 0; the corresponding rate ν(0) given by Theorem 2 indicates a slower convergence than AGD. Lemma 1 shows that the tensor initialization method indeed returns an initial point in the convex region.
2. Sample complexity for accurate estimation: We show that the required number of samples for successful estimation of the oracle model is Θ ( r̃ log q log(1/ε) ) for some large constant q and estimation accuracy ε. Our sample complexity is much less than the conventional bound of Θ(d log q log(1/ε)) for one-hidden-layer networks [69, 66, 67]. This is the first theoretical characterization of learning a pruned network from the perspective of sample complexity.
3. Improved generalization of winning tickets: We prove that with a fixed number of training samples, training on a properly pruned sub-network converges faster to W∗P than training on the original dense network. Our theoretical analysis justifies that training on the winning ticket can meet or exceed the same test accuracy within the same number of iterations. To the best of our knowledge, our result here provides the first theoretical justification for this intriguing empirical finding of “improved generalization of winning tickets” by [20].
Theorem 2 (Convergence). Assume the mask M of the learner network is accurate. Suppose W(0) satisfies (6) and the number of samples satisfies
N = Ω( ε0^{−2} K^6 r̃ log q log(1/ε) )  (8)
for some ε0 ∈ (0, 1/2). Let η = K/14 in Algorithm 1. Then, the iterates {W(t)}Tt=1 returned by Algorithm 1 converge linearly to W∗ up to the noise level, with probability at least 1 − K^2 T · q^{−rmin}:
‖W(t) − W∗P‖_F ≤ ν(β)^t ‖W(0) − W∗P‖_F + O( ∑_j √(rj log q / N) ) · |ξ|,  (9)
and
‖W(T) − W∗P‖_F ≤ ε ‖W∗‖_F + O( ∑_j √(rj log q / N) ) · |ξ|,  (10)
for a fixed permutation matrix P, where ν(β) is the rate of convergence that depends on β, with ν(β∗) = 1 − Θ( (1 − ε0)/√K ) for some non-zero β∗ and ν(0) = 1 − Θ( (1 − ε0)/K ).
Lemma 1 (Initialization). Assume the noise satisfies |ξ| ≤ ‖W∗‖_2 and the number of samples satisfies N = Ω( ε0^{−2} K^5 rmax log q ) for ε0 > 0 and a large constant q. Then the tensor initialization method outputs W(0) such that (6) holds, i.e., ‖W(0) − W∗‖_F = O( ε0 σK / K^2 ), with probability at least 1 − q^{−rmax}.
Remark 2.1 (Faster convergence on pruned networks): With a fixed number of samples, when r̃ decreases, ε0 can be as small as Θ(√r̃) while (8) is still met. Then both ν(0) and ν(β∗) in Theorem 2 decrease, and therefore, when r̃ decreases, both the vanilla and the accelerated gradient descent converge faster. Note that as long as W(0) is initialized in the local convex region, not necessarily by the tensor method, Theorem 2 guarantees accurate recovery. [66, 67] analyze AGD on convolutional neural networks, while this paper focuses on network pruning.
Remark 2.2 (Sample complexity of initialization): From Lemma 1, the required number of samples for a proper initialization is Ω( ε0^{−2} K^5 rmax log q ). Because rmax ≤ K rave and r̃ = Ω(rave), this number is no greater than the sample complexity in (8). Thus, provided that (8) is met, Algorithm 1 can estimate the oracle network model accurately.
Remark 2.3 (Inaccurate mask): The above analyses are based on the assumption that the mask of the learner network is accurate. In practice, a mask can be obtained by an iterative pruning method such as [20] or a one-shot pruning method such as [55]. In Appendix E, we prove that the magnitude pruning method can obtain an accurate mask with enough training samples. Moreover, empirical experiments in Sections 4.2 and 4.3 suggest that even if the mask is not accurate, the three properties (linear convergence, sample complexity with respect to the network size, and improved generalization of winning tickets) can still hold. Therefore, our theoretical results provide some insight into the empirical success of network pruning.
3.3 The Sketch of Proofs and Technical Novelty
Our proof outline is inspired by [69] on fully connected neural networks; however, major technical changes are made in this paper to generalize the analysis to an arbitrarily pruned network. To characterize the local convex region of f̂D (Theorem 1), the idea is to bound the Hessian matrix of the population risk function, which is the expectation of the empirical risk function, locally, and then characterize the distance between the empirical and population risk functions through concentration bounds. Then, the convergence of AGD (Theorem 2) is established based on the desired local curvature, which in turn determines the sample complexity. Finally, to initialize in the local convex region (Lemma 1), we construct tensors that contain the weights information and apply a decomposition method to estimate the weights.
Our technical novelties are as follows. First, a direct application of the results in [69] leads to a sample complexity bound that is linear in the feature dimension d. We develop new techniques to tighten the sample complexity bound to be linear in r̃, which can be significantly smaller than d for a sufficiently pruned network. Specifically, we develop new concentration bounds (Lemmas 4 and 5 in the Appendix) to bound the distance between the population and empirical risk functions rather than using the bound in [69]. Second, instead of restricting the activation to be smooth for the convergence analysis, we study the case of the ReLU function, which is non-smooth. Third, new tensors are constructed for pruned networks (see (21)-(23) in the Appendix) in computing the initialization, and our new concentration bounds are employed to reduce the required number of samples for a proper initialization. Last, Algorithm 1 employs AGD and is proved to converge faster than the GD algorithm in [69].
3.4 Limitations
Like most theoretical works based on the oracle-learner setup, the limitations of this work include (1) one hidden layer only; and (2) the input follows the Gaussian distribution. Extension to multi-layer networks might be possible if the following technical challenges are addressed. First, when characterizing the local convex region, one needs to show that the Hessian matrix is positive definite. In multi-layer networks, the Hessian matrix is more complicated to compute. Second, new concentration bounds need to be developed because the input feature distributions to the second and third layers depend on the weights in previous layers. Third, the initialization approach needs to be revised. We are also investigating other input distributions such as Gaussian mixture models.
4 Numerical Experiments
The theoretical results are first verified on synthetic data, and we then analyze the pruning performance on both synthetic and real datasets. In Section 4.1, Algorithm 1 is implemented with a minor modification: the initial point is randomly selected such that ‖W(0) − W∗‖_F / ‖W∗‖_F < λ for some λ > 0 to reduce the computation. Algorithm 1 terminates when ‖W(t+1) − W(t)‖_F / ‖W(t)‖_F is smaller than 10^{−8} or when reaching 10000 iterations. In Sections 4.2 and 4.3, the Gradient Signal Preservation (GraSP) algorithm [55] and the IMP algorithm [10, 20]2 are implemented to prune the neural networks. As many works like [11, 10, 20] have already verified the faster convergence and better generalization accuracy of the winning tickets empirically, we only include the results of some representative experiments, such as training MNIST and CIFAR-10 on Lenet-5 [32] and Resnet-50 [27] networks, to verify our theoretical findings.
The synthetic data are generated using an oracle model as in Figure 1. The inputs xn are randomly generated from the Gaussian distribution N(0, Id×d) independently, and the indices of non-pruned weights of the j-th neuron are obtained by randomly selecting rj numbers without replacement from [d]. For the convenience of generating a specific r̃, the indices of non-pruned weights are almost completely overlapping (∑_j ∑_k δj,k > 0.95K^2) except for Figure 5. In Figures 2 and 4, rj is selected uniformly from [0.9r̃, 1.1r̃] for a given r̃, while rj takes the same value for all j in the other figures. Each non-zero entry of W∗ is randomly selected from [−0.5, 0.5] independently. The noise terms ξn are i.i.d. from N(0, σ^2), and the noise level is measured by σ/Ey, where Ey is the root mean square of the noiseless outputs.
2The source codes used are downloaded from https://github.com/VITA-Group/CV_LTH_Pre-training.
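A sketch of the synthetic data generation described above (our own illustrative code; the helper name, the rng handling, and the way σ is matched to a target noise level σ/Ey are our choices, not taken from the released code):

```python
import numpy as np

def make_synthetic(N, d, K, r, noise_level, rng):
    # r: list of per-neuron support sizes r_j; noise_level: target sigma / E_y
    W_star = np.zeros((d, K))
    M_star = np.zeros((d, K))
    for j in range(K):
        support = rng.choice(d, size=r[j], replace=False)        # non-pruned indices
        M_star[support, j] = 1.0
        W_star[support, j] = rng.uniform(-0.5, 0.5, size=r[j])    # non-zero entries
    X = rng.standard_normal((N, d))                               # x_n ~ N(0, I)
    y_clean = np.maximum(X @ W_star, 0.0).mean(axis=1)
    sigma = noise_level * np.sqrt(np.mean(y_clean ** 2))          # sigma = noise_level * E_y
    y = y_clean + sigma * rng.standard_normal(N)                  # xi_n ~ N(0, sigma^2)
    return X, y, W_star, M_star
```

For instance, with rng = np.random.default_rng(0) and r = [20] * K, this matches the r∗j = 20 setting used in Sections 4.2 and 4.3.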
4.1 Evaluation of theoretical findings on synthetic data
Local convexity near W∗. We set the number of neurons K = 10, the dimension of the data d = 500, and the sample size N = 5000. Figure 2 illustrates the success rate of Algorithm 1 when r̃ changes. The y-axis is the relative distance of the initialization W(0) to the ground truth. For each pair of r̃ and the initial distance, 100 trials are constructed, with the network weights, training data, and the initialization W(0) all generated independently in each trial. A trial is called successful if the relative error of the solution W returned by Algorithm 1, measured by ‖W − W∗‖_F / ‖W∗‖_F, is less than 10^{−4}. A black block means Algorithm 1 fails in estimating W∗ in all trials, while a white block indicates success in all trials. As Algorithm 1 succeeds if W(0) is in the local convex region near W∗, we can see that the radius of the convex region is indeed linear in −r̃^{1/2}, as predicted by Theorem 1.
Convergence rate. Figure 3 shows the convergence rate of Algorithm 1 when r̃ changes. N = 5000, d = 300, K = 10, η = 0.5, and β = 0.2. Figure 3(a) shows that the relative error decreases exponentially as the number of iterations increases, indicating the linear convergence of Algorithm 1. As shown in Figure 3(b), the results are averaged over 20 trials with different initial points, and the areas in low transparency represent the standard deviation errors. We can see that the convergence rate is almost linear in √r̃, as predicted by Theorem 2. We also compare with GD by setting β to 0. One can see that AGD has a smaller convergence rate than GD, indicating faster convergence.
Figure 2: The radius of the local convex region against r̃^{1/2}.
Sample complexity. Figures 4 and 5 show the success rate of Algorithm 1 when varying N and r̃. d is fixed as 100. In Figure 4, we construct 100 independent trials for each pair of N and r̃, where the ground-truth model and training data are generated independently in each trial. One can see that the required number of samples for successful estimation is linear in r̃, as predicted by (8). In Figure 5, rj is fixed as 20 for all neurons, but different network architectures after pruning are considered. One can see that although the number of remaining weights is the same, r̃ can differ across architectures, and the sample complexity increases as r̃ increases, as predicted by (8).
Figure 6: Relative error against r̃^{1/2} at different noise levels.
Performance in the noisy case. Figure 6 shows the relative error of the learned model by Algorithm 1 from noisy measurements when r̃ changes. N = 1000, K = 10, and d = 300. The results are averaged over 100 independent trials, and the standard deviation is around 2% to 8% of the corresponding relative errors. The relative error is linear in r̃^{1/2}, as predicted by (9). Moreover, the relative error is proportional to the noise level |ξ|.
4.2 Performance with inaccurate mask on synthetic data
The performance of Algorithm 1 is evaluated when the mask M of the learner network is inaccurate. The number of neurons K is 5. The dimension of the inputs d is 100. r∗j of the oracle model is 20 for all j ∈ [K]. The GraSP algorithm [55] is employed to find masks based only on early-trained weights in 20 iterations of AGD. The mask accuracy is measured by ‖M∗ ⊙ M‖_0 / ‖M∗‖_0, where M∗ is the mask of the oracle model. The pruning ratio is defined as (1 − rave/d) × 100%. The number of training samples N is 200. The model returned by Algorithm 1 is evaluated on Ntest = 10^5 samples, and the test error is measured by √(∑_n |yn − ŷn|^2 / Ntest), where ŷn is the output of the learned model with the input xn, and (xn, yn) is the n-th testing sample generated by the oracle network.
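These evaluation metrics amount to a few lines of code; a sketch with our own helper names (masks are binary d × K arrays, and y_pred comes from the learned model):

```python
import numpy as np

def mask_accuracy(M_star, M):
    # ||M* ⊙ M||_0 / ||M*||_0: fraction of oracle weights kept by the learner mask
    return np.count_nonzero(M_star * M) / np.count_nonzero(M_star)

def pruning_ratio(M, d):
    # (1 - r_ave / d) * 100%, with r_ave the average per-neuron support size
    return (1.0 - M.sum(axis=0).mean() / d) * 100.0

def test_error(y_true, y_pred):
    # sqrt( sum_n |y_n - yhat_n|^2 / N_test )
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```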
Improved generalization by GraSP. Figure 7 shows the test error with different pruning ratios. For each pruning ratio, we randomly generate 1000 independent trials. Because the mask of the learner network in each trial is generated independently, we compute the average test error of the learned models over all the trials with the same mask accuracy. If there are fewer than 10 trials for a certain mask accuracy, the result for that mask accuracy is not reported as it is statistically meaningless. The test error decreases as the mask accuracy increases. More importantly, at a fixed mask accuracy, the test error decreases as the pruning ratio increases. That means the generalization performance improves when r̃ decreases, even if the mask is not accurate.
Linear convergence. Figure 8 shows the convergence rate of Algorithm 1 with different pruning ratios. We show the smallest number of iterations required to achieve a certain test error of the learned model, and the results are averaged over the independent trials with mask accuracy between 0.85 and 0.90. Even with an inaccurate mask, the test error converges linearly. Moreover, as the pruning ratio increases, Algorithm 1 converges faster.
Sample complexity with respect to the pruning ratio. Figure 9 shows the test error when the number of training samples N changes. All the other parameters except N remain the same. The results are averaged over the trials with mask accuracy between 0.85 and 0.90. We can see that the test error decreases when N increases. More importantly, as the pruning ratio increases, the required number of samples to achieve the same test error (no less than 10^{−3}) decreases dramatically. That means the sample complexity decreases as r̃ decreases, even if the mask is inaccurate.
4.3 Performance of IMP on synthetic, MNIST and CIFAR-10 datasets
We implement the IMP algorithm to obtain pruned networks on the synthetic, MNIST, and CIFAR-10 datasets. Figure 10 shows the test performance of a pruned network on synthetic data with different sample sizes. Here, in the oracle network model, K = 5, d = 100, and r∗j = 20 for all j ∈ [K]. The noise level is σ/Ey = 10^{−3}. One observation is that for a fixed sample size N greater than 100, the test error decreases as the pruning ratio increases. This verifies that the IMP algorithm indeed prunes the network properly. It also shows that the learned model improves as the pruning progresses, verifying our theoretical result in Theorem 2 that the difference of the learned model from the oracle model decreases as rj decreases. The second observation is that the test error decreases as N increases for any fixed pruning ratio. This verifies our result in Theorem 2 that the difference of the learned model from the oracle model decreases as the number of training samples increases. When the pruning ratio is too large (greater than 80%), the pruned network cannot explain the data properly, and thus the test error is large for all N. When the number of samples is too small, like N = 100, the test error is always large, because the sample complexity requirement for estimating the oracle model is not met even though the network is properly pruned.
Figures 11 and 12 show the test performance of models learned by implementing the IMP algorithm on MNIST and CIFAR-10 using the Lenet-5 [32] and Resnet-50 [27] architectures, respectively. The experiments follow the standard setup in [10] except for the size of the training sets. To demonstrate the effect of sample complexity, we randomly selected N samples from the original training set without replacement. As we can see, a properly pruned network (i.e., a winning ticket) helps reduce the sample complexity required to reach the test accuracy of the original dense model. For example, training on a pruned network returns a model (e.g., P1 and P3 in Figures 11 and 12) that has better testing performance than a dense model (e.g., P2 and P4 in Figures 11 and 12) trained on a larger dataset. Given the number of samples, we consistently find the characteristic behavior of winning tickets: the test accuracy can increase when the pruning ratio increases, indicating the effectiveness of pruning, and then drops when the network is overly pruned. The results show that our theoretical characterization of sample complexity is well aligned with the empirical performance of pruned neural networks and explains the improved generalization observed in LTH.
5 Conclusions
This paper provides the first theoretical analysis of learning one-hidden-layer pruned neural networks, which offers a formal justification of the improved generalization of winning tickets observed empirically in LTH. We characterize analytically the impact of the number of remaining weights in a pruned network on the required number of samples for training, the convergence rate of the learning algorithm, and the accuracy of the learned model. We also provide extensive numerical validation of our theoretical findings.
Broader impacts
We see no ethical or immediate societal consequences of our work. This paper contributes to the theoretical foundation of both network pruning and generalization guarantees. The former encourages the development of learning methods that reduce computational cost. The latter increases public trust in incorporating AI technology in critical domains.
Acknowledgement
This work was supported by AFOSR FA9550-20-1-0122, ARO W911NF-21-1-0255, NSF 1932196 and the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons). We thank Tianlong Chen at University of Texas at Austin, Haolin Xiong at Rensselaer Polytechnic Institute and Yihua Zhang at Michigan State University for the help in formulating numerical experiments. We thank all anonymous reviewers for their constructive comments. | 1. What is the focus of the paper regarding neural network pruning?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its assumptions and efficiency?
4. Do you have any concerns about the applicability of the method on small-size neural networks?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper analyzes the possibility of recovering the weights of a teacher network using a student network which is as dense as the original one. The authors prove that with enough samples drawn from the teacher network, the recovery can be done on one-hidden-layer neural networks with linear convergence speed. The authors also discuss the correlation between the number of needed samples and the sparsity of the teacher network.
To sum up, the well-designed structure makes the workflow clear and easy to follow, but further analysis and discussion are expected to clarify some contributions in the techniques as well as in the evaluation section.
Review
Strength
The paper analyzes the influence of the number of remaining parameters in pruned subnetworks and derives the required number of samples for successful convergence.
The authors conduct the theoretical analysis and justification for improved generalization error of winning ticket in the LTH.
The extensive evaluation on both synthetic and real datasets demonstrates the effectiveness of the proposed approach for neural network pruning.
Weakness
The strong assumptions that the neural networks have one hidden layer and that the input data must follow the Gaussian distribution seriously limit the applicability of the proposed method.
The authors claim to provide the theoretical analysis and justification of the LTH. The paper learns pruned neural networks by using the teacher-student setup. However, this is inconsistent with the Lottery Ticket Hypothesis setting that the pruned subnetwork should be "trained in isolation".
In the main theorems (Theorems 1 and 2), the number of samples drawn from the teacher model is required to be proportional to \Omega(K^4) or \Omega(K^6), where K is the number of neurons in the neural network. This may result in a serious efficiency issue, such that the method only works on small-size neural networks. In fact, the paper conducts the experiments over neural networks with merely 5 or 10 neurons.
It would be nice to conduct the experiments with the SOTA LTH and neural network pruning algorithms.
Post-rebuttal update:
Thanks to the authors for their efforts in addressing the raised concerns. I have read the author response and the other reviews and keep my score for the following two reasons.
In the paper, the derived theoretical justification or explanation of LTH or network pruning is based on a strong assumption of one-hidden-layer neural networks and input data with a Gaussian distribution. Typically, LTH or network pruning methods do not have such a strong assumption. If the authors claim that the theoretical justification of LTH or network pruning is the main contribution of this paper, then the authors need to extend the proof to general neural networks without this restriction, although this extension is non-trivial. Otherwise, it is hard to say that the paper theoretically justifies the LTH or network pruning. In fact, real-world neural networks often do not satisfy this assumption, such as the Lenet-5 and Resnet-50 used in the experiments in the paper.
Local convexity (the radius of the convex region) and convergence are the two main theoretical results in the paper. They are validated on synthetic data with small networks (5 or 10 neurons), but not on real data with large ones (Lenet-5 and Resnet-50).
NIPS | Title
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks
Abstract
The lottery ticket hypothesis (LTH) [20] states that learning on a properly pruned network (the winning ticket) improves test accuracy over the original unpruned network. Although LTH has been justified empirically in a broad range of applications involving deep neural networks (DNNs), such as computer vision and natural language processing, the theoretical validation of the improved generalization of a winning ticket remains elusive. To the best of our knowledge, our work, for the first time, characterizes the performance of training a pruned neural network by analyzing the geometric structure of the objective function and the sample complexity to achieve zero generalization error. We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket. Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer. With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved generalization of the winning ticket. Our theoretical results are obtained from learning a pruned neural network of one hidden layer, while experimental results are further provided to justify the implications in pruning multi-layer neural networks.
1 Introduction
Neural network pruning can reduce the computational cost of model training and inference significantly and potentially lessen the chance of overfitting [33, 26, 15, 25, 28, 51, 58, 41]. The recent Lottery Ticket Hypothesis (LTH) [20] claims that a randomly initialized dense neural network
always contains a so-called “winning ticket,” which is a sub-network bundled with the corresponding initialization, such that when trained in isolation, this winning ticket can achieve at least the same testing accuracy as that of the original network within at most the same amount of training time. This so-called “improved generalization of winning tickets” is verified empirically in [20]. LTH has attracted a significant amount of recent research interest [45, 70, 39]. Despite the empirical success [19, 63, 55, 11], the theoretical justification of winning tickets remains elusive except for a few recent works. [39] provides the first theoretical evidence that within a randomly initialized neural network, there exists a good sub-network that can achieve the same test performance as the original network. Meanwhile, recent work [42] trains neural networks by adding an ℓ1 regularization term to obtain a relatively sparse neural network, which has better performance numerically.
However, the theoretical foundation of network pruning is limited. The existing theoretical works usually focus on finding a sub-network that achieves a tolerable loss in either expressive power or training accuracy, compared with the original dense network [2, 71, 61, 43, 4, 3, 35, 5, 59]. To the best of our knowledge, there exists no theoretical support for the improved generalization achieved by winning tickets, i.e., pruned networks with faster convergence and better test accuracy.
Contributions: This paper provides the first systematic analysis of learning pruned neural networks with a finite number of training samples in the oracle-learner setup, where the training data are generated by an unknown neural network, the oracle, and another network, the learner, is trained on the dataset. Our analytical results also provide a justification of the LTH from the perspective of the sample complexity. In particular, we provide the first theoretical justification of the improved generalization of winning tickets. Specific contributions include:
1. Pruned neural network learning via accelerated gradient descent (AGD): We propose an AGD algorithm with tensor initialization to learn the pruned model from training samples. Our algorithm converges to the oracle model linearly, which has guaranteed generalization.
2. First sample complexity analysis for pruned networks: We characterize the required number of samples for successful convergence, termed the sample complexity. Our sample complexity bound depends linearly on the number of the non-pruned weights and is a significant reduction from directly applying conventional complexity bounds in [69, 66, 67].
3. Characterization of the benign optimization landscape of pruned networks: We show analytically that the empirical risk function has an enlarged convex region for a pruned network, justifying the importance of a good sub-network (i.e., the winning ticket).
4. Characterization of the improved generalization of winning tickets: We show that gradientdescent methods converge faster to the oracle model when the neural network is properly pruned, or equivalently, learning on a pruned network returns a model closer to the oracle model with the same number of iterations, indicating the improved generalization of winning tickets.
Notations. Vectors are bold lowercase, matrices and tensors are bold uppercase. Scalars are in normal font, and sets are in calligraphy and blackboard bold font. I denotes the identity matrix. N and R denote the sets of natural numbers and real numbers, respectively. ‖z‖ denotes the ℓ2-norm of a vector z, and ‖Z‖_2, ‖Z‖_F and ‖Z‖_∞ denote the spectral norm, Frobenius norm, and the maximum value of a matrix Z, respectively. [Z] stands for the set {1, 2, · · · , Z} for any number Z ∈ N. In addition, f(r) = O(g(r)) (or f(r) = Ω(g(r))) if f ≤ C · g (or f ≥ C · g) for some constant C > 0 when r is large enough. f(r) = Θ(g(r)) if both f(r) = O(g(r)) and f(r) = Ω(g(r)) hold, i.e., c · g ≤ f ≤ C · g for some constants 0 < c ≤ C when r is large enough.
1.1 Related Work
Network pruning. Network pruning methods seek a compressed model while maintaining the expressive power. Numerical experiments have shown that over 90% of the parameters can be pruned without a significant performance loss [10]. Examples of pruning methods include irregular weight pruning [25], structured weight pruning [57], neuron-based pruning [28], and projecting the weights to a low-rank subspace [13].
Winning tickets. [20] employs an Iterative Magnitude Pruning (IMP) algorithm to obtain the proper sub-network and initialization. IMP and its variations [22, 46] succeed in deeper networks like Residual Networks (Resnet)-50 and Bidirectional Encoder Representations from Transformers (BERT) network [11]. [21] shows that IMP succeeds in finding the “winning ticket” if the ticket is
stable to stochastic gradient descent noise. In parallel, [36] shows numerically that the “winning ticket” initialization does not improve over a random initialization once the correct sub-networks are found, suggesting that the benefit of “winning ticket” mainly comes from the sub-network structures. [18] analyzes the sample complexity of IMP from the perspective of recovering a sparse vector in a linear model rather than learning neural networks.
Feature sparsity. High-dimensional data often contain redundant features, and only a subset of the features is used in training [6, 14, 27, 60, 68]. Conventional approaches like wrapper and filter methods score the importance of each feature in a certain way and select the ones with the highest scores [24]. Optimization-based methods add variants of the ℓ0 norm as a regularization to promote feature sparsity [68]. Different from network pruning, where the feature dimension still remains high during training, the feature dimension is significantly reduced in training when promoting feature sparsity.
Over-parameterized model. When the number of weights in a neural network is much larger than the number of training samples, the landscape of the objective function of the learning problem has no spurious local minima, and first-order algorithms converge to one of the global optima [37, 44, 64, 50, 9, 49, 38]. However, the global optima are not guaranteed to generalize well on testing data [62, 64].
Generalization analyses. The existing generalization analyses mostly fall within three categories. One line of research employs the Mean Field approach to model the training process by a differential equation assuming infinite network width and infinitesimal training step size [12, 40, 56]. Another approach is the neural tangent kernel (NTK) [30], which requires strong and probably unpractical over-parameterization such that the nonlinear neural network model behaves as its linearization around the initialization [1, 17, 72, 73]. The third line of works follow the oracle-learner setup, where the data are generated by an unknown oracle model, and the learning objective is to estimate the oracle model, which has a generalization guarantee on testing data. However, the objective function has intractably many spurious local minima even for one-hidden-layer neural networks [48, 47, 64]. Assuming an infinite number of training samples, [8, 16, 52] develop learning methods to estimate the oracle model. [23, 69, 66, 67] extend to the practical case of a finite number of samples and characterize the sample complexity for recovering the oracle model. Because the analysis complexity explodes when the number of hidden layers increases, all the analytical results about estimating the oracle model are limited to one-hidden-layer neural networks, and the input distribution is often assumed to be the standard Gaussian distribution.
2 Problem Formulation
In an oracle-learner model, given any input x ∈ Rd, the corresponding output y is generated by a pruned one-hidden-layer neural network, called oracle, as shown in Figure 1. The oracle network is equipped with K neurons where the j-th neuron is connected to any arbitrary r∗j (r ∗ j ≤ d) input features. LetW ∗ = [w∗1, · · · ,w∗K ] ∈ Rd×K denotes all the weights (pruned ones are represented by zero). The number of non-zero entries in w∗j is at most r ∗ j . The oracle network is not unique because permuting neurons together with the corresponding weights does not change the output. Therefore, the output label y obtained by the oracle network satisfies 1
y = 1
K K∑ j=1 φ(w∗Tj x) + ξ := g(x;W ∗) + ξ = g(x;W ∗P ) + ξ, (1)
where ξ is arbitrary unknown additive noise bounded by some constant |ξ|, φ is the rectified linear unit (ReLU) activation function with φ(z) = max{z, 0}, and P ∈ {0, 1}K×K is any permutation matrix. M∗ is a mask matrix for the oracle network, such that M∗j,i equals to 1 if the weight w∗j,i is not pruned, and 0 otherwise. Then,M∗ is an indicator matrix for the non-zero entries ofW ∗ with M∗ W ∗ = W ∗, where is entry-wise multiplication. Based on N pairs of training samples D = {xn, yn}Nn=1 generated by the oracle, we train on a learner network equipped with the same number of neurons in the oracle network. However, the j-th neuron in the learner network is connected to rj input features rather than r∗j . Let rmin, rmax, and rave denote the minimum, maximum, and average value of {rj}Kj=1, respectively. LetM denote the
1It is extendable to binary classification, and the output is generated by Prob ( yn = 1|xn ) = g(xn;W ∗).
mask matrix with respect to the learner network, and wj is the j-th column ofW . The empirical risk function is defined as
f̂D(W ) = 1
2N N∑ n=1 ( 1 K K∑ j=1 φ(wTj xn)− yn )2 . (2)
When the maskM is given, the learning objective is to estimate a proper weight matrixW for the learner network from the training samples D via solving
minW∈Rd×K f̂D(W ) s.t. M W = W . (3)
M is called an accurate mask if the support ofM covers the support of a permutation ofM∗, i.e., there exists a permutation matrix P such that (M∗P ) M = M∗. When M is accurate, and ξ = 0, there exists a permutation matrix P such that W ∗P is a global optimizer to (3). Hence, if W ∗P can be estimated by solving (3), one can learn the oracle network accurately, which has guaranteed generalization performance on the testing data.
We assume xn is independent and identically distributed from the standard Gaussian distribution N (0, Id×d). The Gaussian assumption is motivated by the data whitening [34] and batch normalization techniques [29] that are commonly used in practice to improve learning performance. Moreover, training one-hidden-layer neural network with multiple neurons has intractable many fake minima [47] without any input distribution assumption. In addition, the theoretical results in Section 3 assume an accurate mask, and inaccurate mask is evaluated empirically in Section 4.
The questions that this paper addresses include: 1. what algorithm to solve (3)? 2. what is the sample complexity for the accurate estimate of the weights in the oracle network? 3. what is the impact of the network pruning on the difficulty of the learning problem and the performance of the learned model?
3 Algorithm and Theoretical Results
Section 3.1 studies the geometric structure of (3), and the main results are in Section 3.2. Section 3.3 briefly introduces the proof sketch and technical novelty, and the limitations are in Section 3.4.
3.1 Local Geometric Structure
Theorem 1 characterizes the local convexity of f̂D in (3). It has two important implications.
1. Strictly locally convex near ground truth: f̂D is strictly convex nearW ∗P for some permutation matrix P , and the radius of the convex ball is negatively correlated with √ r̃, where r̃ is in the order of rave. Thus, the convex ball enlarges as any rj decreases.
2. Importance of the winning ticket architecture: Compared with training on the dense network directly, training on a properly pruned sub-network has a larger local convex region near W ∗P , which may lead to easier estimation of W ∗P . To some extent, this result can be viewed as a theoretical validation of the importance of the winning architecture (a good sub-network) in [20]. Formally, we have
Theorem 1 (Local Convexity). Assume the mask M of the learner network is accurate. Suppose constants ε0, ε1 ∈ (0, 1) and the number of samples satisfies
N = Ω ( ε−21 K 4r̃ log q ) , (4)
for some large constant q > 0, where
r̃ = 1
8K4
(∑K k=1 ∑K j=1(1 + δj,k)(rj + rk) 1 2 )2 , (5)
δj,k is 1 if the indices of non-pruned weights in the j-th and k-th neurons overlap and 0 otherwise. Then, there exists a permutation matrix P such that for anyW that satisfies
‖W −W ∗P ‖F = O ( ε0 K2 ) , andM W = W , (6)
its Hessian of f̂D, with probability at least 1−K · q−rmin , is bounded as:
Θ (1− ε0 − ε1
K2
) I ∇2f̂D(W ) Θ ( 1 K ) I. (7)
Remark 1.1 (Parameter r̃): Clearly r̃ is a monotonically increasing function of any rj from (5). Moreover, one can check that 18rave ≤ r̃ ≤ rave. Hence, r̃ is in the order of rave. Remark 1.2 (Local landscape): Theorem 1 shows that with enough samples as shown in (4), in a local region ofW ∗P as shown in (6), all the eigenvalues of the Hessian matrix of the empirical risk function are lower and upper bounded by two positive constants. This property is useful in designing efficient algorithms to recoverW ∗P , as shown in Section 3.2.
Remark 1.3 (Size of the convex region): When the number of samples N is fixed and r changes, ε1 can be Θ( √ r̃/N) while (4) is still met. ε0 in (7) can be arbitrarily close to but smaller than 1− ε1 so that the Hessian matrix is still positive definite. Then from (6), the radius of the convex ball is Θ(1) − Θ( √ r̃/N), indicating an enlarged region when r̃ decreases. The enlarged convex region serves as an important component in proving the faster convergence rate, summarized in Theorem 2. Besides this, as Figure 1 shown in [20], the authors claim that the learning is stable if the linear interpolation of the learned models with SGD noises still remain similar in performance, which is summarized as the concept “linearly connected region.” Intuitively, we conjecture that the winning ticket shows a better performance in the stability analysis because it has a larger convex region. In the other words, a larger convex region indicates that the learning is more likely to be stable in the linearly connected region.
3.2 Convergence Analysis with Accelerated Gradient Descent
We propose to solve the non-convex problem (3) via the accelerated gradient descent (AGD) algorithm, summarized in Algorithm 1. Compared with the vanilla gradient descent (GD) algorithm, AGD has an additional momentum term, denoted by β(W (t) −W (t−1)), in each iteration. AGD enjoys a faster convergence rate than vanilla GD in solving optimization problems, including learning neural networks [65]. Vanilla GD can be viewed as a special case of AGD by letting β = 0. The initial point W (0) can be obtained through a tensor method, and the details are provided in Appendix B.
Algorithm 1 Accelerated Gradient Descent (AGD) Algorithm 1: Input: training data D = {(xn, yn)}Nn=1, gradient step size η, momentum parameter β, and an
initializationW (0) by the tensor initialization method; 2: Partition D into T = log(1/ε) disjoint subsets, denoted as {Dt}Tt=1; 3: for t = 1, 2, · · · , T do 4: W (t+1) = W (t) − η ·M ∇f̂Dt(W
(t)) + β(W (t) −W (t−1)) 5: end for 6: Return: W (T )
The theoretical analyses of our algorithm are summarized in Theorem 2 (convergence) and Lemma 1 (Initialization). The significance of these results can be interpreted from the following aspects.
1. Linear convergence to the oracle model: Theorem 2 implies that if initialized in the local convex region, the iterates generated by AGD converge linearly toW ∗P for some P when noiseless. When there is noise, they converge to a pointW (T ). The distance betweenW (T ) andW ∗P is proportional to the noise level and scales in terms of O( √ r̃/N). Moreover, when N is fixed, the convergence rate
of AGD is Θ( √ r̃/K). Recall that Algorithm 1 reduces to the vanilla GD by setting β = 0. The rate for the vanilla GD algorithm here is Θ( √ r̃/K) by setting β = 0 by Theorem 2, indicating a slower convergence than AGD. Lemma 1 shows the tensor initialization method indeed returns an initial point in the convex region.
2. Sample complexity for accurate estimation: We show that the required number of samples for successful estimation of the oracle model is Θ ( r̃ log q log(1/ε) ) for some large constant q and estimation accuracy ε. Our sample complexity is much less than the conventional bound of Θ(d log q log(1/ε)) for one-hidden-layer networks [69, 66, 67]. This is the first theoretical characterization of learning a pruned network from the perspective of sample complexity.
3. Improved generalization of winning tickets: We prove that with a fixed number of training samples, training on a properly pruned sub-network converges faster toW ∗P than training on the original dense network. Our theoretical analysis justifies that training on the winning ticket can meet or exceed the same test accuracy within the same number of iterations. To the best of our knowledge, our result here provides the first theoretical justification for this intriguing empirical finding of “improved generalization of winning tickets” by [20]. Theorem 2 (Convergence). Assume the maskM of the learner network is accurate. SupposeW (0) satisfies (6) and the number of samples satisfies
N = Ω ( ε−20 K 6r̃ log q log(1/ε) )
(8)
for some ε0 ∈ (0, 1/2). Let η = K/14 in Algorithm 1. Then, the iterates {W (t)}Tt=1 returned by Algorithm 1 converges linearly toW ∗ up to the noise level with probability at least 1−K2T · q−rmin
‖W (t) −W ∗P ‖F ≤ν(β)t‖W (0) −W ∗P ‖F +O (∑
j
√ rj log q
N
) · |ξ|, (9)
and ‖W (T ) −W ∗P ‖F ≤ε‖W ∗‖F +O (∑
j
√ rj log q
N
) · |ξ|, (10)
for a fixed permutation matrix P , where ν(β) is the rate of convergence that depends on β with ν(β∗) = 1−Θ ( 1−ε0√ K ) for some non-zero β∗ and ν(0) = 1−Θ ( 1−ε0 K ) . Lemma 1 (Initialization). Assume the noise |ξ| ≤ ‖W ∗‖2 and the number of samples N = Ω ( ε−20 K 5rmax log q )
for ε0 > 0 and large constant q, the tensor initialization method outputs W (0) such that (6) holds, i.e., ‖W (0) −W ∗‖F = O ( ε0σK K2 ) with probability at least 1− q−rmax .
Remark 2.1 (Faster convergence on pruned network): With a fixed number of samples, when r̃ decreases, ε0 can increase as Θ( √ r̃) while (8) is still met. Then ν(0) = Θ( √ r̃/K) and ν(β∗) =
Θ( √ r̃/K). Therefore, when r̃ decreases, both the stochastic and accelerated gradient descent
converge faster. Note that as long asW (0) is initialized in the local convex region, not necessarily by the tensor method, Theorem 2 guarantees the accurate recovery. [66, 67] analyze AGD on convolutional neural networks, while this paper focuses on network pruning.
Remark 2.2 (Sample complexity of initialization): From Lemma 1, the required number of samples for a proper initialization is Ω ( ε−20 K 5rmax log q ) . Because rmax ≤ Krave and r̃ = Ω(rave), this number is no greater than the sample complexity in (8). Thus, provided that (8) is met, Algorithm 1 can estimate the oracle network model accurately.
Remark 2.3 (Inaccurate mask): The above analyses are based on the assumption that the mask of the learner network is accurate. In practice, a mask can be obtained by an iterative pruning method such as [20] or a one-shot pruning method such as [55]. In Appendix E, we prove that the magnitude pruning method can obtain an accurate mask with enough training samples. Moreover, empirical experiments in Section 4.2 and 4.3 suggest that even if the mask is not accurate, the three properties (linear convergence, sample complexity with respect to the network size, and improved generalization of winning tickets) can still hold. Therefore, our theoretical results provide some insight into the empirical success of network pruning.
3.3 The Sketch of Proofs and Technical Novelty
Our proof outline is inspired by [69] on fully connected neural networks, however, major technical changes are made in this paper to generalize the analysis to an arbitrarily pruned network. To characterize the local convex region of f̂D (Theorem 1), the idea is to bound the Hessian matrix of the population risk function, which is the expectation of the empirical risk function, locally and then characterize the distance between the empirical and population risk functions through the concentration bounds. Then, the convergence of AGD (Theorem 2) is established based on the desired local curvature, which in turn determines the sample complexity. Finally, to initialize in the local convex region (Lemma 1), we construct tensors that contain the weights information and apply a decomposition method to estimate the weights.
Our technical novelties are as follows. First, a direct application of the results in [69] leads to a sample complexity bound that is linear in the feature dimension d. We develop new techniques to tighten the sample complexity bound to be linear in r̃, which can be significantly smaller than d for a sufficiently pruned network. Specifically, we develop new concentration bounds (Lemmas 4 and 5 in Appendix) to bound the distance between the population and empirical risk functions rather than using the bound in [69]. Second, instead of restricting the acitivation to be smooth for convergence analysis, we study the case of ReLU function which is non-smooth. Third, new tensors are constructed for pruned networks (see (21)-(23) in Appendix) in computing the initialization, and our new concentration bounds are employed to reduce the required number of samples for a proper initialization. Last, Algorithm 1 employs AGD and is proved to converge faster than the GD algorithm in [69].
3.4 Limitations
Like most theoretical works based on the oracle-learner setup, limitations of this work include (1) one hidden layer only; and (2) the input follows the Gaussian distribution. Extension to multi-layer might be possible if the following technical challenges are addressed. First, when characterizing the local convex region, one needs to show that the Hessian matrix is positive definite. In multi-layer networks, the Hessian matrix is more complicated to compute. Second, new concentration bounds need to be developed because the input feature distributions to the second and third layers depend on the weights in previous layers. Third, the initialization approach needs to be revised. The team is also investigating the other input distributions such as Gaussian mixture models.
4 Numerical Experiments
The theoretical results are first verified on synthetic data, and we then analyze the pruning performance on both synthetic and real datasets. In Section 4.1, Algorithm 1 is implemented with minor modification, such that, the initial point is randomly selected as ‖W (0) −W ∗‖F /‖W ∗‖F < λ for some λ > 0 to reduce the computation. Algorithm 1 terminates when ‖W (t+1)−W (t)‖F /‖W (t)‖F is smaller than 10−8 or reaching 10000 iterations. In Sections 4.2 and 4.3, the Gradient Signal Preservation (GraSP) algorithm [55] and IMP algorithm [10, 20]2 are implemented to prune the neural networks. As many works like [11, 10, 20] have already verified the faster convergence and better generalization accuracy of the winning tickets empirically, we only include the results of some representative experiments, such as training MNIST and CIFAR-10 on Lenet-5 [32] and Resnet-50 [27] networks, to verify our theoretical findings.
The synthetic data are generated using a oracle model in Figure 1. The input xn’s are randomly generated from Gaussian distribution N (0, Id×d) independently, and indices of non-pruned weights of the j-th neuron are obtained by randomly selecting rj numbers without replacement from [d]. For the convenience of generating specific r̃, the indices of non-pruned weights are almost overlapped ( ∑ j ∑ k δjδk > 0.95K
2) except for Figure 5. In Figures 2 and 4, rj is selected uniformly from [0.9r̃, 1.1r̃] for a given r̃, and rj are the same in value for all j in other figures. Each non-zero entry ofW ∗ is randomly selected from [−0.5, 0.5] independently. The noise ξn’s are i.i.d. from N (0, σ2), and the noise level is measured by σ/Ey , where Ey is the root mean square of the noiseless outputs.
2The source codes used are downloaded from https://github.com/VITA-Group/CV_LTH_Pre-training.
4.1 Evaluation of theoretical findings on synthetic data
Local convexity near W ∗. We set the number of neurons K = 10, the dimension of the data d = 500 and the sample size N = 5000. Figure 2 illustrates the success rate of Algorithm 1 when r̃ changes. The y-axis is the relative distance of the initializationW (0) to the ground-truth. For each pair of r̃ and the initial distance, 100 trails are constructed with the network weights, training data and the initializationW (0) are all generated independently in each trail. Each trail is called successful if the relative error of the solutionW returned by Algorithm 1, measured by ‖W −W ∗‖F /‖W ∗‖F , is less than 10−4. A black block means Algorithm 1 fails in estimatingW ∗ in all trails while a white block indicates all success. As Algorithm 1 succeeds ifW (0) is in the local convex region nearW ∗, we can see that the radius of convex region is indeed linear in −r̃ 12 , as predicted by Theorem 1. Convergence rate. Figure 3 shows the convergence rate of Algorithm 1 when r̃ changes. N = 5000, d = 300, K = 10, η = 0.5, and β = 0.2. Figure 3(a) shows that the relative error decreases exponentially as the number of iterations increases, indicating the linear convergence of Algorithm 1. As shown in Figure 3(b), the results are averaged over 20 trials with different initial points, and the areas in low transparency represent the standard deviation errors. We can see that the convergence rate is almost linear in √ r̃, as predicted by Theorem 2. We also compare with GD by setting β as 0. One can see that AGD has a smaller convergence rate than GD, indicating faster convergence.
10 12 14 16 18 20 2 6
10 14 18 22 26 30 34 38
Figure 2: The radius of the local convex region against r̃ 1 2
Sample complexity. Figures 4 and 5 show the success rate of Algorithm 1 when varying N and r̃. d is fixed as 100. In Figure 4, we construct 100 independent trails for each pair of N and r̃, where the ground-truth model and training data are generated independently in each trail. One can see that the required number of samples for successful estimation is linear in r̃, as predicted by (8). In Figure 5, rj is fixed as 20 for all neurons, but different network architectures after pruning are considered. One can see that although the number of remaining weights is the same, r̃ can be different in different architectures, and the sample complexity increases as r̃ increases, as predicted by (8).
r̃
r̃ 1 2 at different noise level
Performance in noisy case. Figure 6 shows the relative error of the learned model by Algorithm 1 from noisy measurements when r̃ changes. N = 1000, K = 10, and d = 300. The results are averaged over 100 independent trials, and standard deviation is around 2% to 8% of the corresponding relative errors. The relative error is linear in r̃ 1 2 , as predicted by (9). Moreover, the relative error is proportional to the noise level |ξ|.
4.2 Performance with inaccurate mask on synthetic data
The performance of Algorithm 1 is evaluated when the mask M of the learner network is inaccurate. The number of neurons K is 5. The dimension of inputs d is 100. r∗j of the oracle model is 20 for all j ∈ [K]. The GraSP algorithm [55] is employed to find masks based only on early-trained weights in 20 iterations of AGD. The mask accuracy is measured by ‖M∗ ⊙ M‖_0/‖M∗‖_0, where M∗ is the mask of the oracle model. The pruning ratio is defined as (1 − rave/d) × 100%. The number of training samples N is 200. The model returned by Algorithm 1 is evaluated on Ntest = 10^5 samples, and the test error is measured by √(∑_n |yn − ŷn|²/Ntest), where ŷn is the output of the learned model with the input xn, and (xn, yn) is the n-th testing sample generated by the oracle network.
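For reference, the three evaluation quantities used here (mask accuracy, pruning ratio, and test error) translate directly into code (our own sketch; masks are assumed to be boolean arrays of shape (d, K)):

```python
import numpy as np

def mask_accuracy(M_star, M):
    """||M* (entry-wise) M||_0 / ||M*||_0: fraction of oracle non-pruned weights kept by the learner mask."""
    return np.count_nonzero(M_star & M) / np.count_nonzero(M_star)

def pruning_ratio(M):
    """(1 - r_ave / d) * 100%, with r_ave the average number of kept weights per neuron."""
    d, K = M.shape
    r_ave = np.count_nonzero(M) / K
    return (1.0 - r_ave / d) * 100.0

def test_rmse(W, X_test, y_test):
    """Root-mean-square test error of the learned one-hidden-layer ReLU model."""
    y_hat = np.maximum(X_test @ W, 0.0).mean(axis=1)
    return np.sqrt(np.mean((y_test - y_hat) ** 2))
```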
Improved generalization by GraSP. Figure 7 shows the test error with different pruning ratios. For each pruning ratio, we randomly generate 1000 independent trials. Because the mask of the learner network in each trial is generated independently, we compute the average test error of the learned models over all trials with the same mask accuracy. If there are fewer than 10 trials for a certain mask accuracy, the result for that mask accuracy is not reported as it is not statistically meaningful. The test error decreases as the mask accuracy increases. More importantly, at a fixed mask accuracy, the test error decreases as the pruning ratio increases. That means the generalization performance improves when r̃ decreases, even if the mask is not accurate.
Linear convergence. Figure 8 shows the convergence rate of Algorithm 1 with different pruning ratios. We show the smallest number of iterations required to achieve a certain test error of the learned model, and the results are averaged over the independent trials with mask accuracy between 0.85 and 0.90. Even with an inaccurate mask, the test error converges linearly. Moreover, as the pruning ratio increases, Algorithm 1 converges faster.
Sample complexity with respect to the pruning ratio. Figure 9 shows the test error when the number of training samples N changes. All the other parameters except N remain the same. The results are averaged over the trials with mask accuracy between 0.85 and 0.90. We can see that the test error decreases as N increases. More importantly, as the pruning ratio increases, the required number of samples to achieve the same test error (no less than 10^{-3}) decreases dramatically. That means the sample complexity decreases as r̃ decreases, even if the mask is inaccurate.
4.3 Performance of IMP on synthetic, MNIST and CIFAR-10 datasets
We implement the IMP algorithm to obtain pruned networks on synthetic, MNIST and CIFAR-10 datasets. Figure 10 shows the test performance of a pruned network on synthetic data with different sample sizes. Here in the oracle network model, K = 5, d = 100, and r∗j = 20 for all j ∈ [K]. The noise level σ/Ey = 10−3. One observation is that for a fixed sample size N greater than 100, the test error decreases as the pruning ratio increases. This verifies that the IMP algorithm indeed prunes the network properly. It also shows that the learned model improves as the pruning progresses, verifying our theoretical result in Theorem 2 that the difference of the learned model from the oracle model decreases as rj decreases. The second observation is that the test error decreases as N increases for any fixed pruning ratio. This verifies our result in Theorem 2 that the difference of the learned model from the oracle model decreases as the number of training samples increases. When the pruning ratio is too large (greater than 80%), the pruned network cannot explain the data properly, and thus the test error is large for all N . When the number of samples is too small, like N = 100, the test error is always large, because it does not meet the sample complexity requirement for estimating the oracle model even though the network is properly pruned.
Figures 11 and 12 show the test performance of learned models by implementing the IMP algorithm on MNIST and CIFAR-10 using the Lenet-5 [32] and Resnet-50 [27] architectures, respectively. The
experiments follow the standard setup in [10] except for the size of the training sets. To demonstrate the effect of sample complexity, we randomly selected N samples from the original training set without replacement. As we can see, a properly pruned network (i.e., winning ticket) helps reduce the sample complexity required to reach the test accuracy of the original dense model. For example, training on a pruned network returns a model (e.g., P1 and P3 in Figures 11 and 12) that has better testing performance than a dense model (e.g., P2 and P4 in Figures 11 and 12) trained on a larger data set. Given the number of samples, we consistently find the characteristic behavior of winning tickets: That is, the test accuracy could increase when the pruning ratio increases, indicating the effectiveness of pruning. The test accuracy then drops when the network is overly pruned. The results show that our theoretical characterization of sample complexity is well aligned with the empirical performance of pruned neural networks and explains the improved generalization observed in LTH.
5 Conclusions
This paper provides the first theoretical analysis of learning one-hidden-layer pruned neural networks, which offers a formal justification of the improved generalization of the winning ticket observed in empirical findings on LTH. We characterize analytically the impact of the number of remaining weights in a pruned network on the required number of samples for training, the convergence rate of the learning algorithm, and the accuracy of the learned model. We also provide extensive numerical validations of our theoretical findings.
Broader impacts
We see no ethical or immediate societal consequence of our work. This paper contributes to the theoretical foundation of both network pruning and generalization guarantees. The former encourages the development of learning methods to reduce the computational cost. The latter increases public trust in incorporating AI technology in critical domains.
Acknowledgement
This work was supported by AFOSR FA9550-20-1-0122, ARO W911NF-21-1-0255, NSF 1932196 and the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons). We thank Tianlong Chen at University of Texas at Austin, Haolin Xiong at Rensselaer Polytechnic Institute and Yihua Zhang at Michigan State University for the help in formulating numerical experiments. We thank all anonymous reviewers for their constructive comments. | 1. What is the main contribution and novelty of the paper regarding LTH?
2. What are the strengths and weaknesses of the theoretical analysis provided in the paper?
3. Do you have any concerns or questions regarding the setup and assumptions made in the paper?
4. How does the reviewer assess the significance and usefulness of the paper's findings and contributions?
5. Are there any limitations or potential biases in the experimental design or analysis that should be considered? | Summary Of The Paper
Review | Summary Of The Paper
This work takes a theoretical view of LTH, leveraging the geometric structure of the objective function to analyze the generalization error of a pruned network trained in a teacher-student fashion. In particular, they prove that, as a model is pruned, the desirable (convex) region with high-guaranteed generalization performance enlarges, providing explanation for improved performance of winning tickets. Then, the sample complexity of training the pruned network to achieve zero generalization error is analyzed, finding that the number of samples required is proportional to the number of un-pruned weights. Interestingly, the work finds that pruned models enjoy faster convergence to high performance, providing another possible explanation of why winning tickets outperform dense networks.
Review
I begin my review by emphasizing that I am completely open to author feedback upon my review. My final score will be mostly based upon discussion with authors in regard to my points below.
General opinion: The theoretical contributions of this work are very extensive/impressive. All theoretical results are extensively studied in simulated experiments, and experiments on real datasets are also provided. Although the problem setup within the work is somewhat peculiar (I have not seen such a student-teacher setup used within theoretical LTH analysis), I find the work to be well-written, interesting, and useful.
Pros:
The analysis of the sample complexity required to train the pruned model to convergence seems novel and interesting. It is especially interesting that this sample complexity is found to be superior to the dense model from which the pruned model is derived.
The incorporation of momentum into the gradient descent algorithm for pruning is interesting. The fact that acceleration is also achieved in theory is well-done.
Analysis is also performed for the generalization performance of pruned networks, which is not as common in theoretical work for LTH. (The only others I know of are https://arxiv.org/abs/1802.05296 or https://arxiv.org/abs/1804.05345, which are both referenced).
Theoretical analysis within this work, although based on arguments in [63], seems to be novel and requires numerous technical developments in comparison to previous work.
Experiments on real datasets are included.
Cons:
Analysis is limited to gaussian input data. I believe this is the main/most unrealistic limitation of the theoretical analysis.
Theoretical analysis is only done one one-hidden-layer networks (though this is not truly a con, as this is true of much theoretical work for LTH and neural networks in general).
Questions:
I do not fully understand why both teacher and student networks are pruned within the setup. Additionally, this teacher-student setup is not fully reflective of standard methodologies for LTH, so I am a bit curious about how this setup was derived (I assume this setup was adopted from [63], http://proceedings.mlr.press/v70/zhong17a.html?).
The paper claims that the convex regions for pruned networks are enlarged. This means that pruned networks have a larger region in which linear convergence is achieved (if I am not mistaken). Although this may mean the pruned network converges faster due to the larger region of convexity, this does not say anything about the quality of the solution, correct? It seems like this analysis cannot be used as evidence for winning tickets having superior performance (though superiority is shown in other areas with sample complexity/generalization bounds).
Is there any previous work that studies whether analysis with gaussian input data is reflective of practical behavior? I am unsure whether such an assumption is very limiting (possibly this determination can be made by discussing differences between synthetic/real experiments within the experimental section of this work).
Minor Comments:
There are a lot of other papers that analyze over-parameterized neural networks that could be added to the related work (though maybe they are not as relevant to this work, I am not sure). For example, https://arxiv.org/abs/1902.04674
I would recommend that the numerical experiments devote more space/attention to experiments on real datasets. Right now, it seems to be dominated by synthetic datasets, though I understand this is in an attempt to numerically verify aspects of the theory.
NIPS | Title
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks
Abstract
The lottery ticket hypothesis (LTH) [20] states that learning on a properly pruned network (the winning ticket) improves test accuracy over the original unpruned network. Although LTH has been justified empirically in a broad range of deep neural network (DNN) involved applications like computer vision and natural language processing, the theoretical validation of the improved generalization of a winning ticket remains elusive. To the best of our knowledge, our work, for the first time, characterizes the performance of training a pruned neural network by analyzing the geometric structure of the objective function and the sample complexity to achieve zero generalization error. We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket. Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer. With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved generalization of the winning ticket. Our theoretical results are acquired from learning a pruned neural network of one hidden layer, while experimental results are further provided to justify the implications in pruning multi-layer neural networks.
1 Introduction
Neural network pruning can reduce the computational cost of model training and inference significantly and potentially lessen the chance of overfitting [33, 26, 15, 25, 28, 51, 58, 41]. The recent Lottery Ticket Hypothesis (LTH) [20] claims that a randomly initialized dense neural network always contains a so-called “winning ticket,” which is a sub-network bundled with the corresponding initialization, such that when trained in isolation, this winning ticket can achieve at least the same testing accuracy as that of the original network by running at most the same amount of training time. This so-called “improved generalization of winning tickets” is verified empirically in [20]. LTH has attracted a significant amount of recent research interest [45, 70, 39]. Despite the empirical success [19, 63, 55, 11], the theoretical justification of winning tickets remains elusive except for a few recent works. [39] provides the first theoretical evidence that within a randomly initialized neural network, there exists a good sub-network that can achieve the same test performance as the original network. Meanwhile, recent work [42] trains neural networks by adding an ℓ1 regularization term to obtain a relatively sparse neural network, which has better performance numerically.
However, the theoretical foundation of network pruning is limited. The existing theoretical works usually focus on finding a sub-network that achieves a tolerable loss in either expressive power or training accuracy, compared with the original dense network [2, 71, 61, 43, 4, 3, 35, 5, 59]. To the best of our knowledge, there exists no theoretical support for the improved generalization achieved by winning tickets, i.e., pruned networks with faster convergence and better test accuracy.
Contributions: This paper provides the first systematic analysis of learning pruned neural networks with a finite number of training samples in the oracle-learner setup, where the training data are generated by an unknown neural network, the oracle, and another network, the learner, is trained on the dataset. Our analytical results also provide a justification of the LTH from the perspective of sample complexity. In particular, we provide the first theoretical justification of the improved generalization of winning tickets. Specific contributions include:
1. Pruned neural network learning via accelerated gradient descent (AGD): We propose an AGD algorithm with tensor initialization to learn the pruned model from training samples. Our algorithm converges to the oracle model linearly, which has guaranteed generalization.
2. First sample complexity analysis for pruned networks: We characterize the required number of samples for successful convergence, termed as the sample complexity. Our sample complexity bound depends linearly on the number of the non-pruned weights and is a significant reduction from directly applying conventional complexity bounds in [69, 66, 67].
3. Characterization of the benign optimization landscape of pruned networks: We show analytically that the empirical risk function has an enlarged convex region for a pruned network, justifying the importance of a good sub-network (i.e., the winning ticket).
4. Characterization of the improved generalization of winning tickets: We show that gradientdescent methods converge faster to the oracle model when the neural network is properly pruned, or equivalently, learning on a pruned network returns a model closer to the oracle model with the same number of iterations, indicating the improved generalization of winning tickets.
Notations. Vectors are bold lowercase; matrices and tensors are bold uppercase. Scalars are in normal font, and sets are in calligraphy and blackboard bold font. I denotes the identity matrix. N and R denote the sets of natural numbers and real numbers, respectively. ‖z‖ denotes the ℓ2-norm of a vector z, and ‖Z‖_2, ‖Z‖_F and ‖Z‖_∞ denote the spectral norm, Frobenius norm and the maximum value of matrix Z, respectively. [Z] stands for the set {1, 2, · · · , Z} for any number Z ∈ N. In addition, f(r) = O(g(r)) (or f(r) = Ω(g(r))) if f ≤ C · g (or f ≥ C · g) for some constant C > 0 when r is large enough. f(r) = Θ(g(r)) if both f(r) = O(g(r)) and f(r) = Ω(g(r)) hold, i.e., c · g ≤ f ≤ C · g for some constants 0 ≤ c ≤ C when r is large enough.
1.1 Related Work
Network pruning. Network pruning methods seek a compressed model while maintaining the expressive power. Numerical experiments have shown that over 90% of the parameters can be pruned without a significant performance loss [10]. Examples of pruning methods include irregular weight pruning [25], structured weight pruning [57], neuron-based pruning [28], and projecting the weights to a low-rank subspace [13].
Winning tickets. [20] employs an Iterative Magnitude Pruning (IMP) algorithm to obtain the proper sub-network and initialization. IMP and its variations [22, 46] succeed in deeper networks like Residual Networks (Resnet)-50 and Bidirectional Encoder Representations from Transformers (BERT) network [11]. [21] shows that IMP succeeds in finding the “winning ticket” if the ticket is
stable to stochastic gradient descent noise. In parallel, [36] shows numerically that the “winning ticket” initialization does not improve over a random initialization once the correct sub-networks are found, suggesting that the benefit of “winning ticket” mainly comes from the sub-network structures. [18] analyzes the sample complexity of IMP from the perspective of recovering a sparse vector in a linear model rather than learning neural networks.
Feature sparsity. High-dimensional data often contains redundant features, and only a subset of the features is used in training [6, 14, 27, 60, 68]. Conventional approaches like wrapper and filter methods score the importance of each feature in a certain way and select the ones with highest scores [24]. Optimization-based methods add variants of the `0 norm as a regularization to promote feature sparsity [68]. Different from network pruning where the feature dimension still remains high during training, the feature dimension is significantly reduced in training when promoting feature sparsity.
Over-parameterized model. When the number of weights in a neural network is much larger than the number of training samples, the landscape of the objective function of the learning problem has no spurious local minima, and first-order algorithms converge to one of the global optima [37, 44, 64, 50, 9, 49, 38]. However, the global optima are not guaranteed to generalize well on testing data [62, 64].
Generalization analyses. The existing generalization analyses mostly fall within three categories. One line of research employs the Mean Field approach to model the training process by a differential equation assuming infinite network width and infinitesimal training step size [12, 40, 56]. Another approach is the neural tangent kernel (NTK) [30], which requires strong and probably impractical over-parameterization such that the nonlinear neural network model behaves as its linearization around the initialization [1, 17, 72, 73]. The third line of works follows the oracle-learner setup, where the data are generated by an unknown oracle model, and the learning objective is to estimate the oracle model, which has a generalization guarantee on testing data. However, the objective function has intractably many spurious local minima even for one-hidden-layer neural networks [48, 47, 64]. Assuming an infinite number of training samples, [8, 16, 52] develop learning methods to estimate the oracle model. [23, 69, 66, 67] extend to the practical case of a finite number of samples and characterize the sample complexity for recovering the oracle model. Because the analysis complexity explodes when the number of hidden layers increases, all the analytical results about estimating the oracle model are limited to one-hidden-layer neural networks, and the input distribution is often assumed to be the standard Gaussian distribution.
2 Problem Formulation
In an oracle-learner model, given any input x ∈ R^d, the corresponding output y is generated by a pruned one-hidden-layer neural network, called the oracle, as shown in Figure 1. The oracle network is equipped with K neurons, where the j-th neuron is connected to an arbitrary set of r∗j (r∗j ≤ d) input features. Let W∗ = [w∗1, · · · , w∗K] ∈ R^{d×K} denote all the weights (pruned ones are represented by zero). The number of non-zero entries in w∗j is at most r∗j. The oracle network is not unique because permuting neurons together with the corresponding weights does not change the output. Therefore, the output label y obtained by the oracle network satisfies¹
y = (1/K) ∑_{j=1}^K φ(w_j^{∗⊤} x) + ξ := g(x; W∗) + ξ = g(x; W∗P) + ξ,  (1)
where ξ is arbitrary unknown additive noise bounded by some constant |ξ|, φ is the rectified linear unit (ReLU) activation function with φ(z) = max{z, 0}, and P ∈ {0, 1}^{K×K} is any permutation matrix. M∗ is a mask matrix for the oracle network, such that M∗_{j,i} equals 1 if the weight w∗_{j,i} is not pruned, and 0 otherwise. Then, M∗ is an indicator matrix for the non-zero entries of W∗ with M∗ ⊙ W∗ = W∗, where ⊙ is entry-wise multiplication. Based on N pairs of training samples D = {xn, yn}_{n=1}^N generated by the oracle, we train a learner network equipped with the same number of neurons as the oracle network. However, the j-th neuron in the learner network is connected to rj input features rather than r∗j. Let rmin, rmax, and rave denote the minimum, maximum, and average value of {rj}_{j=1}^K, respectively. Let M denote the
1It is extendable to binary classification, and the output is generated by Prob ( yn = 1|xn ) = g(xn;W ∗).
mask matrix with respect to the learner network, and wj is the j-th column ofW . The empirical risk function is defined as
f̂_D(W) = (1/(2N)) ∑_{n=1}^N ( (1/K) ∑_{j=1}^K φ(w_j^⊤ x_n) − y_n )².  (2)
When the mask M is given, the learning objective is to estimate a proper weight matrix W for the learner network from the training samples D via solving
min_{W ∈ R^{d×K}} f̂_D(W)   s.t.   M ⊙ W = W.  (3)
M is called an accurate mask if the support of M covers the support of a permutation of M∗, i.e., there exists a permutation matrix P such that (M∗P) ⊙ M = M∗. When M is accurate and ξ = 0, there exists a permutation matrix P such that W∗P is a global optimizer of (3). Hence, if W∗P can be estimated by solving (3), one can learn the oracle network accurately, which has guaranteed generalization performance on the testing data.
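A direct NumPy translation of the empirical risk (2), with the constraint of (3) enforced by explicitly masking the weights, might look as follows (our own sketch; names are not from the paper):

```python
import numpy as np

def empirical_risk(W, M, X, y):
    """Empirical risk (2), with the constraint M (entry-wise) W = W of (3) enforced by masking W."""
    W_masked = np.where(M, W, 0.0)                       # keep only non-pruned weights
    preds = np.maximum(X @ W_masked, 0.0).mean(axis=1)   # (1/K) sum_j ReLU(w_j^T x_n)
    residual = preds - y
    return 0.5 * np.mean(residual ** 2)                  # equals (1/(2N)) sum_n (.)^2
```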
We assume the xn's are independently and identically distributed following the standard Gaussian distribution N(0, I_{d×d}). The Gaussian assumption is motivated by the data whitening [34] and batch normalization techniques [29] that are commonly used in practice to improve learning performance. Moreover, training a one-hidden-layer neural network with multiple neurons has intractably many spurious local minima [47] without any input distribution assumption. In addition, the theoretical results in Section 3 assume an accurate mask, and inaccurate masks are evaluated empirically in Section 4.
The questions that this paper addresses include: 1. what algorithm to solve (3)? 2. what is the sample complexity for the accurate estimate of the weights in the oracle network? 3. what is the impact of the network pruning on the difficulty of the learning problem and the performance of the learned model?
3 Algorithm and Theoretical Results
Section 3.1 studies the geometric structure of (3), and the main results are in Section 3.2. Section 3.3 briefly introduces the proof sketch and technical novelty, and the limitations are in Section 3.4.
3.1 Local Geometric Structure
Theorem 1 characterizes the local convexity of f̂D in (3). It has two important implications.
1. Strictly locally convex near ground truth: f̂D is strictly convex nearW ∗P for some permutation matrix P , and the radius of the convex ball is negatively correlated with √ r̃, where r̃ is in the order of rave. Thus, the convex ball enlarges as any rj decreases.
2. Importance of the winning ticket architecture: Compared with training on the dense network directly, training on a properly pruned sub-network has a larger local convex region near W ∗P , which may lead to easier estimation of W ∗P . To some extent, this result can be viewed as a theoretical validation of the importance of the winning architecture (a good sub-network) in [20]. Formally, we have
Theorem 1 (Local Convexity). Assume the mask M of the learner network is accurate. Suppose constants ε0, ε1 ∈ (0, 1) and the number of samples satisfies
N = Ω( ε_1^{-2} K^4 r̃ log q ),  (4)
for some large constant q > 0, where
r̃ = (1/(8K^4)) ( ∑_{k=1}^K ∑_{j=1}^K (1 + δ_{j,k}) (r_j + r_k)^{1/2} )²,  (5)
δ_{j,k} is 1 if the indices of non-pruned weights in the j-th and k-th neurons overlap and 0 otherwise. Then, there exists a permutation matrix P such that for any W that satisfies
‖W − W∗P‖_F = O( ε_0/K² ),   and   M ⊙ W = W,  (6)
the Hessian of f̂_D at W, with probability at least 1 − K · q^{−r_min}, is bounded as:
Θ( (1 − ε_0 − ε_1)/K² ) · I ⪯ ∇²f̂_D(W) ⪯ Θ( 1/K ) · I.  (7)
Remark 1.1 (Parameter r̃): Clearly r̃ is a monotonically increasing function of any rj from (5). Moreover, one can check that (1/8) rave ≤ r̃ ≤ rave. Hence, r̃ is on the order of rave. Remark 1.2 (Local landscape): Theorem 1 shows that with enough samples as shown in (4), in a local region of W∗P as shown in (6), all the eigenvalues of the Hessian matrix of the empirical risk function are lower and upper bounded by two positive constants. This property is useful in designing efficient algorithms to recover W∗P, as shown in Section 3.2.
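For intuition on (5), consider a simple special case (our own worked example, not from the paper): every neuron keeps the same number of weights, rj = r, and all supports overlap, so that δ_{j,k} = 1 for all j, k. Then

r̃ = (1/(8K^4)) ( ∑_{k=1}^K ∑_{j=1}^K 2 (2r)^{1/2} )² = (1/(8K^4)) ( 2K² √(2r) )² = 8K^4 r / (8K^4) = r,

so r̃ attains the upper bound rave = r in Remark 1.1; with less overlap the double sum is smaller and r̃ decreases accordingly.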
Remark 1.3 (Size of the convex region): When the number of samples N is fixed and r changes, ε_1 can be Θ(√(r̃/N)) while (4) is still met. ε_0 in (7) can be arbitrarily close to but smaller than 1 − ε_1 so that the Hessian matrix is still positive definite. Then from (6), the radius of the convex ball is Θ(1) − Θ(√(r̃/N)), indicating an enlarged region when r̃ decreases. The enlarged convex region serves as an important component in proving the faster convergence rate, summarized in Theorem 2. Besides this, as shown in Figure 1 of [20], the authors claim that the learning is stable if the linear interpolations of the learned models with SGD noise still remain similar in performance, which is summarized as the concept of a “linearly connected region.” Intuitively, we conjecture that the winning ticket shows better performance in the stability analysis because it has a larger convex region. In other words, a larger convex region indicates that the learning is more likely to be stable in the linearly connected region.
3.2 Convergence Analysis with Accelerated Gradient Descent
We propose to solve the non-convex problem (3) via the accelerated gradient descent (AGD) algorithm, summarized in Algorithm 1. Compared with the vanilla gradient descent (GD) algorithm, AGD has an additional momentum term, denoted by β(W (t) −W (t−1)), in each iteration. AGD enjoys a faster convergence rate than vanilla GD in solving optimization problems, including learning neural networks [65]. Vanilla GD can be viewed as a special case of AGD by letting β = 0. The initial point W (0) can be obtained through a tensor method, and the details are provided in Appendix B.
Algorithm 1 Accelerated Gradient Descent (AGD) Algorithm
1: Input: training data D = {(xn, yn)}_{n=1}^N, gradient step size η, momentum parameter β, and an initialization W^(0) by the tensor initialization method;
2: Partition D into T = log(1/ε) disjoint subsets, denoted as {D_t}_{t=1}^T;
3: for t = 1, 2, · · · , T do
4:     W^(t+1) = W^(t) − η · M ⊙ ∇f̂_{D_t}(W^(t)) + β (W^(t) − W^(t−1))
5: end for
6: Return: W^(T)
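For concreteness, a minimal NumPy sketch of the masked AGD update in line 4, using the gradient of the empirical risk (2), is given below (our own illustration; it assumes the data have already been partitioned into the subsets D_t and that W0 already satisfies the mask constraint):

```python
import numpy as np

def masked_agd(X_parts, y_parts, M, W0, eta, beta):
    """Sketch of Algorithm 1: W(t+1) = W(t) - eta * (M masked gradient) + beta * (W(t) - W(t-1)).

    X_parts, y_parts : lists holding the T disjoint data subsets D_t
    M                : boolean mask of shape (d, K); W0 is assumed to satisfy the mask constraint
    eta, beta        : step size and momentum parameter
    """
    W_prev, W = W0.copy(), W0.copy()
    K = W.shape[1]
    for X, y in zip(X_parts, y_parts):
        N = X.shape[0]
        Z = X @ W                                        # pre-activations, shape (N, K)
        residual = np.maximum(Z, 0.0).mean(axis=1) - y   # (1/K) sum_j ReLU(w_j^T x_n) - y_n
        # Gradient of (2): d/dw_j = (1/(N*K)) sum_n residual_n * 1{z_{n,j} > 0} * x_n
        grad = X.T @ (residual[:, None] * (Z > 0)) / (N * K)
        W_next = W - eta * np.where(M, grad, 0.0) + beta * (W - W_prev)
        W_prev, W = W, W_next
    return W
```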
The theoretical analyses of our algorithm are summarized in Theorem 2 (convergence) and Lemma 1 (Initialization). The significance of these results can be interpreted from the following aspects.
1. Linear convergence to the oracle model: Theorem 2 implies that if initialized in the local convex region, the iterates generated by AGD converge linearly toW ∗P for some P when noiseless. When there is noise, they converge to a pointW (T ). The distance betweenW (T ) andW ∗P is proportional to the noise level and scales in terms of O( √ r̃/N). Moreover, when N is fixed, the convergence rate
of AGD is Θ( √ r̃/K). Recall that Algorithm 1 reduces to the vanilla GD by setting β = 0. The rate for the vanilla GD algorithm here is Θ( √ r̃/K) by setting β = 0 by Theorem 2, indicating a slower convergence than AGD. Lemma 1 shows the tensor initialization method indeed returns an initial point in the convex region.
2. Sample complexity for accurate estimation: We show that the required number of samples for successful estimation of the oracle model is Θ ( r̃ log q log(1/ε) ) for some large constant q and estimation accuracy ε. Our sample complexity is much less than the conventional bound of Θ(d log q log(1/ε)) for one-hidden-layer networks [69, 66, 67]. This is the first theoretical characterization of learning a pruned network from the perspective of sample complexity.
3. Improved generalization of winning tickets: We prove that with a fixed number of training samples, training on a properly pruned sub-network converges faster toW ∗P than training on the original dense network. Our theoretical analysis justifies that training on the winning ticket can meet or exceed the same test accuracy within the same number of iterations. To the best of our knowledge, our result here provides the first theoretical justification for this intriguing empirical finding of “improved generalization of winning tickets” by [20]. Theorem 2 (Convergence). Assume the maskM of the learner network is accurate. SupposeW (0) satisfies (6) and the number of samples satisfies
N = Ω( ε_0^{-2} K^6 r̃ log q log(1/ε) )  (8)
for some ε_0 ∈ (0, 1/2). Let η = K/14 in Algorithm 1. Then, the iterates {W^(t)}_{t=1}^T returned by Algorithm 1 converge linearly to W∗ up to the noise level, with probability at least 1 − K²T · q^{−r_min}:
‖W^(t) − W∗P‖_F ≤ ν(β)^t ‖W^(0) − W∗P‖_F + O( ∑_j √(r_j log q / N) ) · |ξ|,  (9)
and
‖W^(T) − W∗P‖_F ≤ ε ‖W∗‖_F + O( ∑_j √(r_j log q / N) ) · |ξ|,  (10)
for a fixed permutation matrix P, where ν(β) is the rate of convergence that depends on β, with ν(β∗) = 1 − Θ( (1 − ε_0)/√K ) for some non-zero β∗ and ν(0) = 1 − Θ( (1 − ε_0)/K ).
Lemma 1 (Initialization). Assume the noise |ξ| ≤ ‖W∗‖_2 and the number of samples N = Ω( ε_0^{-2} K^5 r_max log q ) for ε_0 > 0 and a large constant q. Then the tensor initialization method outputs W^(0) such that (6) holds, i.e., ‖W^(0) − W∗‖_F = O( ε_0 σ_K / K² ), with probability at least 1 − q^{−r_max}.
Remark 2.1 (Faster convergence on pruned network): With a fixed number of samples, when r̃ decreases, ε0 can increase as Θ( √ r̃) while (8) is still met. Then ν(0) = Θ( √ r̃/K) and ν(β∗) =
Θ( √ r̃/K). Therefore, when r̃ decreases, both the stochastic and accelerated gradient descent
converge faster. Note that as long asW (0) is initialized in the local convex region, not necessarily by the tensor method, Theorem 2 guarantees the accurate recovery. [66, 67] analyze AGD on convolutional neural networks, while this paper focuses on network pruning.
Remark 2.2 (Sample complexity of initialization): From Lemma 1, the required number of samples for a proper initialization is Ω( ε_0^{-2} K^5 r_max log q ). Because r_max ≤ K r_ave and r̃ = Ω(r_ave), this number is no greater than the sample complexity in (8). Thus, provided that (8) is met, Algorithm 1 can estimate the oracle network model accurately.
Remark 2.3 (Inaccurate mask): The above analyses are based on the assumption that the mask of the learner network is accurate. In practice, a mask can be obtained by an iterative pruning method such as [20] or a one-shot pruning method such as [55]. In Appendix E, we prove that the magnitude pruning method can obtain an accurate mask with enough training samples. Moreover, empirical experiments in Section 4.2 and 4.3 suggest that even if the mask is not accurate, the three properties (linear convergence, sample complexity with respect to the network size, and improved generalization of winning tickets) can still hold. Therefore, our theoretical results provide some insight into the empirical success of network pruning.
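As a rough illustration of the magnitude pruning mentioned here (our own sketch, not the exact procedure analyzed in Appendix E), a per-neuron mask that keeps the r largest-magnitude weights can be formed as follows:

```python
import numpy as np

def magnitude_prune_mask(W, r):
    """Keep the r largest-magnitude weights of each neuron (column of W); prune the rest."""
    d, K = W.shape
    M = np.zeros((d, K), dtype=bool)
    for j in range(K):
        keep = np.argsort(np.abs(W[:, j]))[-r:]   # indices of the r largest |w_ij|
        M[keep, j] = True
    return M
```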
3.3 The Sketch of Proofs and Technical Novelty
Our proof outline is inspired by [69] on fully connected neural networks; however, major technical changes are made in this paper to generalize the analysis to an arbitrarily pruned network. To characterize the local convex region of f̂_D (Theorem 1), the idea is to bound the Hessian matrix of the population risk function, which is the expectation of the empirical risk function, locally and then characterize the distance between the empirical and population risk functions through concentration bounds. Then, the convergence of AGD (Theorem 2) is established based on the desired local curvature, which in turn determines the sample complexity. Finally, to initialize in the local convex region (Lemma 1), we construct tensors that contain the weights information and apply a decomposition method to estimate the weights.
Our technical novelties are as follows. First, a direct application of the results in [69] leads to a sample complexity bound that is linear in the feature dimension d. We develop new techniques to tighten the sample complexity bound to be linear in r̃, which can be significantly smaller than d for a sufficiently pruned network. Specifically, we develop new concentration bounds (Lemmas 4 and 5 in the Appendix) to bound the distance between the population and empirical risk functions rather than using the bound in [69]. Second, instead of restricting the activation to be smooth for convergence analysis, we study the case of the ReLU function, which is non-smooth. Third, new tensors are constructed for pruned networks (see (21)-(23) in the Appendix) in computing the initialization, and our new concentration bounds are employed to reduce the required number of samples for a proper initialization. Last, Algorithm 1 employs AGD and is proved to converge faster than the GD algorithm in [69].
3.4 Limitations
Like most theoretical works based on the oracle-learner setup, limitations of this work include (1) one hidden layer only; and (2) the input follows the Gaussian distribution. Extension to multi-layer might be possible if the following technical challenges are addressed. First, when characterizing the local convex region, one needs to show that the Hessian matrix is positive definite. In multi-layer networks, the Hessian matrix is more complicated to compute. Second, new concentration bounds need to be developed because the input feature distributions to the second and third layers depend on the weights in previous layers. Third, the initialization approach needs to be revised. The team is also investigating the other input distributions such as Gaussian mixture models.
4 Numerical Experiments
The theoretical results are first verified on synthetic data, and we then analyze the pruning performance on both synthetic and real datasets. In Section 4.1, Algorithm 1 is implemented with a minor modification such that the initial point is randomly selected with ‖W^(0) − W∗‖_F/‖W∗‖_F < λ for some λ > 0 to reduce the computation. Algorithm 1 terminates when ‖W^(t+1) − W^(t)‖_F/‖W^(t)‖_F is smaller than 10^{-8} or when reaching 10000 iterations. In Sections 4.2 and 4.3, the Gradient Signal Preservation (GraSP) algorithm [55] and the IMP algorithm [10, 20]² are implemented to prune the neural networks. As many works like [11, 10, 20] have already verified the faster convergence and better generalization accuracy of the winning tickets empirically, we only include the results of some representative experiments, such as training MNIST and CIFAR-10 on the Lenet-5 [32] and Resnet-50 [27] networks, to verify our theoretical findings.
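For readers unfamiliar with IMP, a high-level sketch of the train-prune-rewind loop of [20] is given below (our own pseudocode-style illustration; `train` is a placeholder for a full training run and is not a real API, and the per-round pruning fraction is a hyperparameter):

```python
import numpy as np

def iterative_magnitude_pruning(train, initial_weights, rounds, frac_per_round=0.2):
    """High-level sketch of IMP: train, prune the smallest-magnitude surviving weights,
    rewind to the stored initialization, and repeat. `train(weights, mask)` is assumed
    to return trained weights of the same shape (placeholder, not a real API)."""
    mask = np.ones_like(initial_weights, dtype=bool)
    for _ in range(rounds):
        trained = train(initial_weights * mask, mask)
        surviving = np.abs(trained[mask])                      # magnitudes of kept weights
        threshold = np.quantile(surviving, frac_per_round)     # prune the smallest fraction
        mask &= np.abs(trained) > threshold
    return mask, initial_weights * mask                        # the winning ticket: mask + init
```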
The synthetic data are generated using an oracle model as in Figure 1. The inputs xn are randomly generated from the Gaussian distribution N(0, I_{d×d}) independently, and the indices of non-pruned weights of the j-th neuron are obtained by randomly selecting rj numbers without replacement from [d]. For the convenience of generating a specific r̃, the indices of non-pruned weights are almost overlapped (∑_j ∑_k δ_j δ_k > 0.95K²) except for Figure 5. In Figures 2 and 4, rj is selected uniformly from [0.9r̃, 1.1r̃] for a given r̃, and rj are the same in value for all j in the other figures. Each non-zero entry of W∗ is randomly selected from [−0.5, 0.5] independently. The noise ξn's are i.i.d. from N(0, σ²), and the noise level is measured by σ/Ey, where Ey is the root mean square of the noiseless outputs.
2The source codes used are downloaded from https://github.com/VITA-Group/CV_LTH_Pre-training.
4.1 Evaluation of theoretical findings on synthetic data
Local convexity near W∗. We set the number of neurons K = 10, the dimension of the data d = 500, and the sample size N = 5000. Figure 2 illustrates the success rate of Algorithm 1 when r̃ changes. The y-axis is the relative distance of the initialization W^(0) to the ground truth. For each pair of r̃ and the initial distance, 100 trials are constructed, with the network weights, training data, and the initialization W^(0) all generated independently in each trial. A trial is called successful if the relative error of the solution W returned by Algorithm 1, measured by ‖W − W∗‖_F/‖W∗‖_F, is less than 10^{-4}. A black block means Algorithm 1 fails in estimating W∗ in all trials, while a white block indicates all successes. As Algorithm 1 succeeds if W^(0) is in the local convex region near W∗, we can see that the radius of the convex region is indeed linear in −r̃^{1/2}, as predicted by Theorem 1.
Convergence rate. Figure 3 shows the convergence rate of Algorithm 1 when r̃ changes. N = 5000, d = 300, K = 10, η = 0.5, and β = 0.2. Figure 3(a) shows that the relative error decreases exponentially as the number of iterations increases, indicating the linear convergence of Algorithm 1. In Figure 3(b), the results are averaged over 20 trials with different initial points, and the areas in low transparency represent the standard deviation errors. We can see that the convergence rate is almost linear in √r̃, as predicted by Theorem 2. We also compare with GD by setting β to 0. One can see that AGD has a smaller convergence rate than GD, indicating faster convergence.
Figure 2: The radius of the local convex region against r̃^{1/2}.
Sample complexity. Figures 4 and 5 show the success rate of Algorithm 1 when varying N and r̃. d is fixed at 100. In Figure 4, we construct 100 independent trials for each pair of N and r̃, where the ground-truth model and training data are generated independently in each trial. One can see that the required number of samples for successful estimation is linear in r̃, as predicted by (8). In Figure 5, rj is fixed at 20 for all neurons, but different network architectures after pruning are considered. One can see that although the number of remaining weights is the same, r̃ can be different for different architectures, and the sample complexity increases as r̃ increases, as predicted by (8).
Performance in noisy case. Figure 6 shows the relative error of the learned model by Algorithm 1 from noisy measurements when r̃ changes. N = 1000, K = 10, and d = 300. The results are averaged over 100 independent trials, and the standard deviation is around 2% to 8% of the corresponding relative errors. The relative error is linear in r̃^{1/2}, as predicted by (9). Moreover, the relative error is proportional to the noise level |ξ|.
4.2 Performance with inaccurate mask on synthetic data
The performance of Algorithm 1 is evaluated when the mask M of the learner network is inaccurate. The number of neurons K is 5. The dimension of inputs d is 100. r∗j of the oracle model is 20 for all j ∈ [K]. The GraSP algorithm [55] is employed to find masks based only on early-trained weights in 20 iterations of AGD. The mask accuracy is measured by ‖M∗ ⊙ M‖_0/‖M∗‖_0, where M∗ is the mask of the oracle model. The pruning ratio is defined as (1 − rave/d) × 100%. The number of training samples N is 200. The model returned by Algorithm 1 is evaluated on Ntest = 10^5 samples, and the test error is measured by √(∑_n |yn − ŷn|²/Ntest), where ŷn is the output of the learned model with the input xn, and (xn, yn) is the n-th testing sample generated by the oracle network.
Improved generalization by GraSP. Figure 7 shows the test error with different pruning ratios. For each pruning ratio, we randomly generate 1000 independent trials. Because the mask of the learner network in each trial is generated independently, we compute the average test error of the learned models over all trials with the same mask accuracy. If there are fewer than 10 trials for a certain mask accuracy, the result for that mask accuracy is not reported as it is not statistically meaningful. The test error decreases as the mask accuracy increases. More importantly, at a fixed mask accuracy, the test error decreases as the pruning ratio increases. That means the generalization performance improves when r̃ decreases, even if the mask is not accurate.
Linear convergence. Figure 8 shows the convergence rate of Algorithm 1 with different pruning ratios. We show the smallest number of iterations required to achieve a certain test error of the learned model, and the results are averaged over the independent trials with mask accuracy between 0.85 and 0.90. Even with an inaccurate mask, the test error converges linearly. Moreover, as the pruning ratio increases, Algorithm 1 converges faster.
Sample complexity with respect to the pruning ratio. Figure 9 shows the test error when the number of training samples N changes. All the other parameters except N remain the same. The results are averaged over the trials with mask accuracy between 0.85 and 0.90. We can see that the test error decreases as N increases. More importantly, as the pruning ratio increases, the required number of samples to achieve the same test error (no less than 10^{-3}) decreases dramatically. That means the sample complexity decreases as r̃ decreases, even if the mask is inaccurate.
4.3 Performance of IMP on synthetic, MNIST and CIFAR-10 datasets
We implement the IMP algorithm to obtain pruned networks on synthetic, MNIST and CIFAR-10 datasets. Figure 10 shows the test performance of a pruned network on synthetic data with different sample sizes. Here in the oracle network model, K = 5, d = 100, and r∗j = 20 for all j ∈ [K]. The noise level σ/Ey = 10−3. One observation is that for a fixed sample size N greater than 100, the test error decreases as the pruning ratio increases. This verifies that the IMP algorithm indeed prunes the network properly. It also shows that the learned model improves as the pruning progresses, verifying our theoretical result in Theorem 2 that the difference of the learned model from the oracle model decreases as rj decreases. The second observation is that the test error decreases as N increases for any fixed pruning ratio. This verifies our result in Theorem 2 that the difference of the learned model from the oracle model decreases as the number of training samples increases. When the pruning ratio is too large (greater than 80%), the pruned network cannot explain the data properly, and thus the test error is large for all N . When the number of samples is too small, like N = 100, the test error is always large, because it does not meet the sample complexity requirement for estimating the oracle model even though the network is properly pruned.
Figures 11 and 12 show the test performance of learned models by implementing the IMP algorithm on MNIST and CIFAR-10 using the Lenet-5 [32] and Resnet-50 [27] architectures, respectively. The
experiments follow the standard setup in [10] except for the size of the training sets. To demonstrate the effect of sample complexity, we randomly selected N samples from the original training set without replacement. As we can see, a properly pruned network (i.e., winning ticket) helps reduce the sample complexity required to reach the test accuracy of the original dense model. For example, training on a pruned network returns a model (e.g., P1 and P3 in Figures 11 and 12) that has better testing performance than a dense model (e.g., P2 and P4 in Figures 11 and 12) trained on a larger data set. Given the number of samples, we consistently find the characteristic behavior of winning tickets: That is, the test accuracy could increase when the pruning ratio increases, indicating the effectiveness of pruning. The test accuracy then drops when the network is overly pruned. The results show that our theoretical characterization of sample complexity is well aligned with the empirical performance of pruned neural networks and explains the improved generalization observed in LTH.
5 Conclusions
This paper provides the first theoretical analysis of learning one-hidden-layer pruned neural networks, which offers a formal justification of the improved generalization of the winning ticket observed in empirical findings on LTH. We characterize analytically the impact of the number of remaining weights in a pruned network on the required number of samples for training, the convergence rate of the learning algorithm, and the accuracy of the learned model. We also provide extensive numerical validations of our theoretical findings.
Broader impacts
We see no ethical or immediate societal consequence of our work. This paper contributes to the theoretical foundation of both network pruning and generalization guarantees. The former encourages the development of learning methods to reduce the computational cost. The latter increases public trust in incorporating AI technology in critical domains.
Acknowledgement
This work was supported by AFOSR FA9550-20-1-0122, ARO W911NF-21-1-0255, NSF 1932196 and the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons). We thank Tianlong Chen at University of Texas at Austin, Haolin Xiong at Rensselaer Polytechnic Institute and Yihua Zhang at Michigan State University for the help in formulating numerical experiments. We thank all anonymous reviewers for their constructive comments. | 1. What is the focus of the paper regarding pruned neural networks?
2. What are the strengths of the proposed theoretical analysis?
3. What are the weaknesses of the paper, particularly in its assumptions and experimental design?
4. Do you have any concerns about the applicability of the theoretical results in real-world scenarios?
5. How could the paper be improved regarding its readability and content? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a theoretical analysis of the advantages of learning pruned neural networks. This analysis considers a teacher-student setup with a finite number of training samples. The theoretical results presented in the paper show that pruned networks can have multiple advantages, such as faster training convergence, a lower number of samples required for successful convergence, and an enlarged convex region.
Review
Pros:
This paper tries to give some theoretical insights that explain the performance of pruned neural networks. I agree with the authors that this problem has been studied mostly empirically and a deeper theoretical analysis can provide a better understanding of the behaviour of pruned neural networks. For this reason, I believe that this paper can be a valid contribution in this direction.
Cons:
I think that the analysis presented in the paper has some limitations: i) as highlighted by the authors in Sec. 3.4, this analysis considers only one hidden layer and assumes that the input follows the Gaussian distribution; ii) the authors assume that the whole training data is used at each iteration (instead of using minibatches). For these reasons, it is not clear to me if the theoretical results presented in the paper can be useful when we consider a real setting. Are there any useful insights that can be used in real applications to, e.g., improve the training of pruned neural networks or find better winning tickets? The experiments on real datasets presented in the paper are quite limited and show only the effect of sample complexity, which is not very informative and does not reflect what happens in real applications since usually training is performed using minibatches and does not use the whole training set at each iteration.
Minor comments:
Many details of the theoretical analysis are discussed only in the appendix. Some of them, such as the tensor initialization method, are fundamental in order to understand the theoretical results presented in the paper. I suggest to try to add them to the main paper. This would increases the paper readability. |
NIPS | Title
FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
Abstract
We develop two new algorithms, called, FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination between a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous manner, making them more practical. These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity. In fact, our new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods over existing algorithms on synthetic and real datasets.
1 Introduction
Training machine learning models in a centralized fashion becomes more challenging and marginally inaccessible for a large number of users, especially when the size of datasets and models is growing substantially larger. Consequently, training algorithms using decentralized and distributed approaches comes in as a natural replacement. Among several approaches, federated learning (FL) has received tremendous attention in the past few years since it was first introduced in [18, 30]. In this setting, a central server coordinates between many local users (also called agents or devices) to perform their local updates, then the global model will get updated, e.g., by averaging or aggregating local models.
Challenges. FL provides a promising solution for many machine learning applications such as learning over smartphones or across organizations, and internet of things, where privacy protection is one of the most critical requirements. However, this training mechanism faces a number of fundamental challenges, see, e.g., [31]. First, when the number of users gets substantially large, it creates communication bottleneck during model exchange process between server and users. Second, the local data stored in each local user may be different in terms of sizes and distribution which poses a challenge: data or statistical heterogeneity. Third, the variety of users with different local storage, computational power, and network connectivity participating into the system also creates a major challenge, known as system heterogeneity. This challenge also causes unstable connection
between server and users, where some users may be disconnected from the server or simply dropped out during training. In practice, we can expect only a subset of users to participate in each round of communication. Another challenge in FL is privacy concern. Accessing and sharing local raw data is not permitted in FL. In addition, distributed methods exchange the objective gradient of local users, and private data can be exposed from the shared model such as the objective gradients [51]. Therefore, FL methods normally send the global model to each user at the start of each communication round, each user will perform its local update and send back only the necessary update for aggregation.
Our goal and approach. Our goal in this paper is to further and simultaneously address these fundamental challenges by proposing two new algorithms to train the underlying common optimization model in FL. Our approach relies on a novel combination between randomized block-coordinate strategy, nonconvex Douglas-Rachford (DR) splitting, and asynchronous implementation. While each individual technique or partial combinations is not new, our combination of three as in this paper appears to be the first in the literature. To the best of our knowledge, this is the first work developing randomized block-coordinate DR splitting methods for nonconvex composite FL optimization models, and they are fundamentally different from some works in the convex setting, e.g., [7, 8].
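To make the splitting idea concrete, the classical (convex, two-operator) Douglas-Rachford iteration for min_x f(x) + g(x) is sketched below; this is only the textbook scheme that FedDR builds on, not FedDR itself, and the proximal operators are assumed to be available (the least-squares/ℓ1 example at the end is ours):

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma, iters):
    """Classical DR splitting for min_x f(x) + g(x).

    prox_f(v, gamma) and prox_g(v, gamma) evaluate prox_{gamma*f}(v) and
    prox_{gamma*g}(v) (assumed available); z is the auxiliary variable and the
    candidate solution is read off as prox_{gamma*f}(z)."""
    z = z0.copy()
    for _ in range(iters):
        x = prox_f(z, gamma)              # prox step on the (smooth) loss term
        y = prox_g(2.0 * x - z, gamma)    # reflected prox step on the regularizer
        z = z + y - x                     # DR update of the auxiliary variable
    return prox_f(z, gamma)

# Illustrative operators: f(x) = 0.5*||A x - b||^2 (least squares), g(x) = lam*||x||_1.
def make_prox_ls(A, b):
    AtA, Atb = A.T @ A, A.T @ b
    return lambda v, gamma: np.linalg.solve(np.eye(A.shape[1]) + gamma * AtA, v + gamma * Atb)

def make_prox_l1(lam):
    return lambda v, gamma: np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)
```

FedDR, as described above, distributes the f-side proximal evaluations across users (possibly inexactly and only for a randomly selected subset per round) and keeps the proximal step on the regularizer g at the server.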
Contribution. Our contribution can be summarized as follows.
(a) We develop a new FL algorithm, called FedDR (Federated Douglas-Rachford), by combining the well-known DR splitting technique and randomized block-coordinate strategy for the common nonconvex composite optimization problem in FL. Our algorithm can handle nonsmooth convex regularizers and allows inexact evaluation of the underlying proximal operators as in FedProx or FedPD. It also achieves the best known O ( ε−2 )
communication complexity for finding a stationary point under standard assumptions (Assumptions 2.1- 2.2), where ε is a given accuracy. More importantly, unlike FedSplit [33] and FedPD [49], which require full user participation to achieve convergence, our analysis does allow partial participation by selecting a subset of users to perform update at each communication round. (b) Next, we propose an asynchronous algorithm, asyncFedDR, where each user can asynchronously perform local update and periodically send the update to the server for proximal aggregation. We show that asyncFedDR achieves the same communication complexity O ( ε−2 )
as FedDR (up to a constant factor) under the same standard assumptions. This algorithm is expected to simultaneously address all challenges discussed above.
Let us emphasize some key points of our contribution. First, the best known O ( ε−2 )
communication complexity of our methods matches the lower bound complexity up to a constant factor as shown in [49], even with inexact evaluation of the objective proximal operators. Second, our methods rely on a DR splitting technique for nonconvex optimization and can handle possibly nonsmooth convex regularizers, which allows us to deal with a larger class of applications and with constraints [47]. Furthermore, it can also handle both statistical and system heterogeneity as discussed in FedSplit [33] and FedPD [49]. However, FedSplit only considers the convex case, and both FedSplit and FedPD require all users to update at each communication round, making them less practical and applicable in FL. Our methods only require a subset of users or even one user to participate in each communication round as in FedAvg or FedProx. In addition, our aggregation step on the server is different from most existing works due to a proximal step on the regularizer. It is also different from [47]. Third, as FedProx [23], we allow inexact evaluation of users’ proximal operators with any local solver (e.g., local SGD or variance reduced methods) and with adaptive accuracies. Finally, requiring synchronous aggregation at the end of each communication round may lead to slow-down in training due to the heterogeneity in computing power and communication capability of local users. It is natural to have asynchronous update from local users as in, e.g., [34, 35, 39]. Our asynchronous variant, asyncFedDR, can fairly address this challenge. Moreover, it uses a general probabilistic model recently introduced in [5], which allows us to capture the variety of asynchronous environments and architectures compared to existing methods, e.g., [39, 44].
Related work and comparison. Federated Averaging (FedAvg) is perhaps the earliest method used in FL. In FedAvg, users perform stochastic gradient descent (SGD) updates for a number of epochs and then send the updated models to the server for aggregation. FedAvg's practical performance has been shown in many early works, e.g., [18, 29, 48], and it has become one of the most popular methods for solving FL applications. [26] show that local SGD, where users perform a number of local updates before global communication takes place as in FedAvg, may offer benefits over minibatch SGD. A similar comparison between minibatch SGD and local SGD has been done in [42, 43]. Analyzing the convergence of FedAvg was very challenging in its early days due to the complexity of its update as well as data heterogeneity. One of the early attempts to show the convergence of FedAvg is [39], for convex problems under the iid data setting and a set of assumptions. [45] also considers local SGD in the nonconvex setting. Without using an additional bounded-gradient assumption as in [39, 45], [41] improves the complexity for the general nonconvex setting, while [11] uses a Polyak-Łojasiewicz (PL) condition to improve FedAvg's convergence results. In heterogeneous data settings, [17] analyzes local GD, where users perform gradient descent (GD) updates instead of SGD. The analysis of FedAvg for non-iid data is given in [24]. The analysis of local GD/SGD for nonconvex problems has been studied in [13]. However, FedAvg might not converge with non-iid data, as shown in [33, 49, 50].
FedProx [23] is an extension of FedAvg which deals with heterogeneity in federated networks by introducing a proximal term to the objective in local updates to improve stability. FedProx has been shown to achieve better performance than FedAvg in heterogeneous settings. Another method to deal with data heterogeneity is SCAFFOLD [16], which uses a control variate to correct the "client-drift" in the local update of FedAvg. MIME [15] is another framework that uses control variates to improve FedAvg for heterogeneous settings. However, SCAFFOLD and MIME require communicating extra information apart from local models. Compared to the aforementioned works, our methods deal with nonconvex problems under standard assumptions and with composite settings.
FedSplit [33] instead employs a Peaceman-Rachford splitting scheme to solve a constrained reformulation of the original problem. In fact, FedSplit can be viewed as a variant of Tseng’s splitting scheme [1] applied to FL. [33] show that FedSplit can find a solution of the FL problem under only convexity without imposing any additional assumptions on system or data homogeneity. [49] proposes FedPD, which is essentially a variant of the standard augmented Lagrangian method in nonlinear optimization. Other algorithms for FL can be found, e.g., in [6, 10, 12, 14, 25, 46].
Our approach in this paper relies on a nonconvex DR splitting method, which can handle the heterogeneity discussed in [33]. While the DR method is classical, its nonconvex variants have been studied only recently, e.g., in [9, 21, 40]. However, the combination of DR and randomized block-coordinate strategies remains limited [7, 8], even in the convex setting. Alternatively, asynchronous algorithms have been extensively studied in the literature, also for FL; see, e.g., [2, 34, 35]. For instance, a recent work [44] analyzes an asynchronous variant of FedAvg under a bounded delay assumption and a constraint on the number of local updates. [39] proposes an asynchronous local SGD to solve convex problems under iid data. However, to the best of our knowledge, there exists no asynchronous method using DR splitting techniques with a convergence guarantee for FL. In addition, most existing algorithms only focus on non-composite settings. Hence, our work here appears to be the first.
Content. The rest of this paper is organized as follows. Section 2 states our FL optimization model and our assumptions. Section 3 develops FedDR and analyzes its convergence. Section 4 considers an asynchronous variant, asyncFedDR. Section 5 is devoted to numerical experiments. Due to space limits, all technical details and proofs can be found in the Supplementary Document (Supp. Doc.).
2 Nonconvex Optimization Models in Federated Learning
The underlying optimization model of many FL applications can be written into the following form:
    min_{x ∈ R^p} { F(x) := f(x) + g(x) = (1/n) ∑_{i=1}^{n} f_i(x) + g(x) },   (1)
where n is the number of users, and each fi is a local loss of the i-th user, which is assumed to be nonconvex and L-smooth (see Assumptions 2.1 and 2.2 below), and g is a proper, closed, and convex regularizer. Apart from these assumptions, we will not make any additional assumption on (1). We emphasize that the use of regularizers g has been motivated in several works, including [47].
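As a concrete, purely hypothetical instance of (1), the sketch below builds a composite objective with quadratic local losses f_i and an ℓ1 regularizer g; the dimensions, data, and regularization weight are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 4, 3, 0.1                      # users, dimension, l1 weight (illustrative)
A = [rng.standard_normal((10, p)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]

def f_i(i, x):
    # Smooth local loss of user i (quadratic here for simplicity).
    return 0.5 * np.sum((A[i] @ x - b[i]) ** 2)

def F(x):
    # Composite objective F(x) = (1/n) * sum_i f_i(x) + g(x) with g = lam * ||x||_1.
    return sum(f_i(i, x) for i in range(n)) / n + lam * np.sum(np.abs(x))

print(F(np.zeros(p)))
```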
Let dom(F ) := {x ∈ Rp : F (x) < +∞} be the domain of F and ∂g be the subdifferential of g [1]. Since (1) is nonconvex, we only expect to find a stationary point, which is characterized by the following optimality condition. Definition 2.1. If 0 ∈ ∇f(x∗) + ∂g(x∗), then x∗ is called a [first-order] stationary point of (1).
The algorithms for solving (1) developed in this paper will rely on the following assumptions. Assumption 2.1 (Boundedness from below). dom(F ) 6= ∅ and F ? := infx∈Rp F (x) > −∞.
Assumption 2.2 (L-smoothness). All functions fi(·) for i ∈ [n] := {1, · · · , n} are L-smooth, i.e., fi is continuously differentiable and there exists L ∈ (0,+∞) such that
‖∇fi(x)−∇fi(y)‖ ≤ L‖x− y‖, ∀x, y ∈ dom(fi). (2)
Assumptions 2.1 and 2.2 are very standard in nonconvex optimization. Assumption 2.1 guarantees the well-definedness of (1) and is independent of algorithms. Assuming the same Lipschitz constant L for all fi is not restrictive since, if fi is Li-smooth, then by scaling the variables of its constrained formulation (see (11) in Supp. Doc.), we can get the same Lipschitz constant L for all fi.
Proximal operators and evaluation. Our methods make use of the proximal operators of both fi and g. Although fi is L-smooth and nonconvex, we still define its proximal operator as
    prox_{ηf_i}(x) := argmin_y { f_i(y) + (1/(2η)) ‖y − x‖² },   (3)

where η > 0. Even though f_i is nonconvex, under Assumption 2.2, if we choose 0 < η < 1/L, then prox_{ηf_i} is well-defined and single-valued. Evaluating prox_{ηf_i} requires solving a strongly convex program. If prox_{ηf_i} can only be computed approximately, up to an accuracy ε_i ≥ 0, to obtain x⁺, we write x⁺ :≈ prox_{ηf_i}(x) whenever ‖x⁺ − prox_{ηf_i}(x)‖ ≤ ε_i. Note that, instead of an absolute error, one can also use a relative error ‖x⁺ − prox_{ηf_i}(x)‖ ≤ ε_i‖x⁺ − x‖ as in [37]. For the convex function g, its proximal operator prox_{ηg} is defined in the same way as (3). Evaluating prox_{ηf_i} can be done by various existing methods, including local SGD and accelerated GD-type algorithms. However, this is not our focus in this paper, and therefore we do not specify the subsolver for evaluating prox_{ηf_i}.
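To make the inexact evaluation x⁺ :≈ prox_{ηf_i}(x) concrete, the following minimal sketch approximates the proximal subproblem in (3) with a few gradient steps; the quadratic toy loss, the step size, and the iteration count are assumptions for illustration, standing in for the unspecified local subsolver (e.g., local SGD).

```python
import numpy as np

def inexact_prox(grad_fi, x, eta, num_steps=20, lr=0.02):
    """Approximately solve min_y f_i(y) + (1/(2*eta)) * ||y - x||^2
    by running a few gradient-descent steps on the subproblem."""
    y = x.copy()
    for _ in range(num_steps):
        # Gradient of the subproblem: grad f_i(y) + (y - x) / eta.
        y -= lr * (grad_fi(y) + (y - x) / eta)
    return y

# Toy example: f_i(y) = 0.5 * ||A y - b||^2 (smooth; convex here for simplicity).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((10, 5)), rng.standard_normal(10)
grad_fi = lambda y: A.T @ (A @ y - b)
x_plus = inexact_prox(grad_fi, x=np.zeros(5), eta=0.1)
```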
Gradient mapping. As usual, let us define the following gradient mapping of F in (1):

    G_η(x) := (1/η) ( x − prox_{ηg}( x − η∇f(x) ) ),  η > 0.   (4)

Then, the optimality condition 0 ∈ ∇f(x∗) + ∂g(x∗) of (1) is equivalent to G_η(x∗) = 0. However, in practice, we often wish to find an ε-approximate stationary point of (1), defined as follows. Definition 2.2. If x̃ ∈ dom(F) satisfies E[ ‖G_η(x̃)‖² ] ≤ ε², then x̃ is called an ε-stationary point of (1), where the expectation is taken over all the randomness generated by the underlying algorithm.
Note that, for Gη(x̃) to be well-defined, we require x̃ ∈ dom(F ). In our algorithms below, this requirement is fulfilled if x̃ ∈ dom(f), which is often satisfied in practice as dom(f) = Rp.
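As an illustration, the snippet below evaluates the gradient mapping (4) for the special case g(x) = λ‖x‖₁, whose proximal operator is the soft-thresholding map; this choice of g and the toy gradient are assumptions made only for this sketch.

```python
import numpy as np

def prox_l1(x, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def gradient_mapping(x, grad_f, eta, lam):
    """G_eta(x) = (x - prox_{eta*g}(x - eta*grad_f(x))) / eta with g = lam * ||.||_1."""
    return (x - prox_l1(x - eta * grad_f(x), eta * lam)) / eta

# Quick check: for f(x) = 0.5*||x||^2 and lam = 0, the mapping vanishes at x = 0.
grad_f = lambda x: x
print(gradient_mapping(np.zeros(3), grad_f, eta=0.1, lam=0.0))  # -> zeros
```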
3 FedDR Algorithm and Its Convergence Guarantee
Prior to our work, FedSplit [33] exploits similar update steps to ours by adopting the Peaceman-Rachford splitting method to solve the convex, non-composite instances of (1). FedSplit can overcome some of the key challenges discussed earlier. Following this idea, we take advantage of the DR splitting method to first derive a new variant that handles the nonconvex composite problem (1). This new algorithm is synchronous and we call it FedDR. The central idea is as follows: First, we reformulate (1) into (12) by duplicating variables. Next, we apply a DR splitting scheme to the resulting problem. Finally, we combine such a scheme with a randomized block-coordinate strategy.
The complete algorithm is presented in Algorithm 1, where its full derivation is in Supp. Doc. A.1.
Let us make the following remarks. Firstly, FedDR mainly updates three sequences: {x̄^k}, {x_i^k}, and {y_i^k}. While x̄^k is an averaged model that approximately minimizes the global objective function F, the x_i^k act as local models trying to optimize a regularized local loss function w.r.t. their local data distributions, and the y_i^k keep track of the residuals from the local models to the global one. Secondly, we allow x_i^k to be an approximation of prox_{ηf_i}(y_i^k) up to an accuracy ε_{i,k} ≥ 0 as defined in (3), i.e., ‖x_i^k − prox_{ηf_i}(y_i^k)‖ ≤ ε_{i,k} for all i ∈ [n] if k = 0 and for all i ∈ S_{k−1} if k > 0. If ε_{i,k} = 0, then we get the exact evaluation x_i^k := prox_{ηf_i}(y_i^k). Approximately evaluating prox_{ηf_i} can be done, e.g., by local SGD as in FedAvg. Thirdly, Algorithm 1 is different from existing randomized proximal gradient-based methods since we rely on a DR splitting scheme and can handle composite settings. Here, the three iterates y_i^k, x_i^k, and x̂_i^k at Step 5 are updated sequentially, making it challenging to analyze convergence. Lastly, the subset of active users S_k is sampled from a random set-valued mapping Ŝ. As specified in Assumption 3.1, this sampling mechanism covers a wide range of sampling strategies. Clearly, if S_k = [n] and g = 0, then Algorithm 1 reduces to FedSplit, but for the nonconvex case; hence, our convergence guarantee below remains applicable and, in this case, holds surely (there is no sampling randomness). Note that both our model (1) and Algorithm 1 are completely different from [47].
Algorithm 1 (FL with Randomized DR (FedDR))
1: Initialization: Take x^0 ∈ dom(F). Choose η > 0 and α > 0, and accuracies ε_{i,0} ≥ 0 (i ∈ [n]).
   Initialize the server with x̄^0 := x^0 and x̃^0 := x^0. Initialize each user i ∈ [n] with y_i^0 := x^0, x_i^0 :≈ prox_{ηf_i}(y_i^0), and x̂_i^0 := 2x_i^0 − y_i^0.
2: For k := 0, · · · , K do
3:   [Active users] Generate a proper realization S_k ⊆ [n] of Ŝ (see Assumption 3.1).
4:   [Communication] Each user i ∈ S_k receives x̄^k from the server.
5:   [Local update] For each user i ∈ S_k do: Choose ε_{i,k+1} ≥ 0 and update
        y_i^{k+1} := y_i^k + α(x̄^k − x_i^k),  x_i^{k+1} :≈ prox_{ηf_i}(y_i^{k+1}),  and  x̂_i^{k+1} := 2x_i^{k+1} − y_i^{k+1}.
6:   [Communication] Each user i ∈ S_k sends Δx̂_i^k := x̂_i^{k+1} − x̂_i^k back to the server.
7:   [Server aggregation] The server aggregates x̃^{k+1} := x̃^k + (1/n) ∑_{i∈S_k} Δx̂_i^k.
8:   [Server update] Then, the server updates x̄^{k+1} := prox_{ηg}(x̃^{k+1}).
9: End For
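For concreteness, the following is a minimal, single-process sketch of Algorithm 1; the quadratic local losses, the uniform sampling of half the users per round, and the exact (closed-form) proximal solves are illustrative assumptions, not the setting used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, eta, alpha, lam = 8, 5, 0.1, 1.0, 0.01

# Toy local losses f_i(x) = 0.5*||A_i x - b_i||^2 and regularizer g = lam*||.||_1.
A = [rng.standard_normal((20, p)) for _ in range(n)]
b = [rng.standard_normal(20) for _ in range(n)]

def prox_fi(i, y):
    # Exact minimizer of f_i(z) + (1/(2*eta))*||z - y||^2 (closed form for quadratics).
    return np.linalg.solve(A[i].T @ A[i] + np.eye(p) / eta, A[i].T @ b[i] + y / eta)

def prox_g(v):
    # prox of eta*g with g = lam*||.||_1, i.e., soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

x_bar = np.zeros(p)                                   # x^0 = 0 for simplicity
y = [x_bar.copy() for _ in range(n)]
x = [prox_fi(i, y[i]) for i in range(n)]
x_hat = [2 * x[i] - y[i] for i in range(n)]
x_tilde = x_bar.copy()

for k in range(100):
    S_k = rng.choice(n, size=n // 2, replace=False)   # active users this round
    for i in S_k:                                     # Step 5: local updates
        y[i] = y[i] + alpha * (x_bar - x[i])
        x_new = prox_fi(i, y[i])
        delta = (2 * x_new - y[i]) - x_hat[i]         # Step 6: increment sent to server
        x[i], x_hat[i] = x_new, 2 * x_new - y[i]
        x_tilde = x_tilde + delta / n                 # Step 7: server aggregation
    x_bar = prox_g(x_tilde)                           # Step 8: server update
```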
3.1 Convergence of Algorithm 1
Let us consider a proper sampling scheme Ŝ of [n], which is a random set-valued mapping with values in 2[n], the collection of all subsets of [n]. Let Sk be an iid realization of Ŝ and Fk := σ(S0, · · · ,Sk) be the σ-algebra generated by S0, · · · ,Sk. We first impose the following assumption about the distribution of our sampling scheme Ŝ. Assumption 3.1. There exist p1, · · · ,pn > 0 such that P ( i ∈ Ŝ ) = pi > 0 for all i ∈ [n].
This assumption covers a large class of sampling schemes, as discussed in [36], including non-overlapping uniform and doubly uniform sampling. It guarantees that every user has a non-negligible probability of being updated. Note that p_i = ∑_{S: i∈S} P(S) due to Assumption 3.1. For the sake of notation, we also denote p̂ := min{p_i : i ∈ [n]} > 0. The following theorem characterizes the convergence of Algorithm 1 with inexact evaluation of prox_{ηf_i}. Due to space limits, we refer the reader to Lemma A.6 in Supp. Doc. for more details about the choice of stepsizes and related constants. The proof of this theorem is deferred to Supp. Doc. A.5. Theorem 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(x_i^k, y_i^k, x̂_i^k, x̄^k)} be generated by Algorithm 1 using stepsizes α and η defined in (33). Then, the following holds:
    (1/(K+1)) ∑_{k=0}^{K} E[ ‖G_η(x̄^k)‖² ] ≤ C₁[F(x⁰) − F^⋆]/(K+1) + (1/(n(K+1))) ∑_{k=0}^{K} ∑_{i=1}^{n} ( C₂ ε²_{i,k} + C₃ ε²_{i,k+1} ),   (5)

where β, ρ₁, and ρ₂ are explicitly defined by (35), and

    C₁ := 2(1+ηL)²(1+γ²)/(η²β),  C₂ := ρ₁C₁,  and  C₃ := ρ₂C₁ + (1+ηL)²(1+γ²)/(η²γ²).
Let x̃^K be selected uniformly at random from {x̄⁰, · · · , x̄^K} as the output of Algorithm 1. Let the accuracies ε_{i,k} for all i ∈ [n] and k ≥ 0 at Step 5 be chosen such that (1/n) ∑_{i=1}^{n} ∑_{k=0}^{K+1} ε²_{i,k} ≤ M for a given constant M > 0 and all K ≥ 0. Then, if we run Algorithm 1 for at most

    K := ⌊ ( C₁[F(x⁰) − F^⋆] + (C₂ + C₃)M ) / ε² ⌋ ≡ O(ε⁻²)

iterations, then x̃^K is an ε-stationary point of (1) in the sense of Definition 2.2.
Remark 3.1 (Choice of accuracies ε_{i,k}). To guarantee (1/n) ∑_{i=1}^{n} ∑_{k=0}^{K+1} ε²_{i,k} ≤ M in Theorem 3.1 for a given constant M > 0 and for all K ≥ 0, one can choose, e.g., ε²_{i,k} := M/(2(k+1)²) for all i ∈ [n] and k ≥ 0. In this case, we can easily show that (1/n) ∑_{i=1}^{n} ∑_{k=0}^{K+1} ε²_{i,k} = (M/2) ∑_{k=0}^{K+1} 1/(k+1)² ≤ M. Note that, instead of using absolute accuracies, one can also use relative accuracies as ε²_{i,k} ≤ θ‖x_i^{k+1} − x_i^k‖² for a given constant θ > 0, which is more practical, while still achieving a similar convergence guarantee. Such an idea has been widely used in the literature, including [28] (see Supp. Doc. A.7).
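As a quick numerical sanity check of the schedule suggested in Remark 3.1 (with M chosen arbitrarily here), the accumulated inexactness indeed stays below M for any horizon K:

```python
M, K = 1.0, 10_000
eps_sq = [M / (2 * (k + 1) ** 2) for k in range(K + 2)]   # eps_{i,k}^2, same for every user
print(sum(eps_sq) <= M)   # True: (M/2) * sum 1/(k+1)^2 <= (M/2) * pi^2/6 < M
```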
Remark 3.2 (Comparison). Since (1) is nonconvex, our O(ε⁻²) communication complexity is the state-of-the-art, matching the lower-bound complexity (up to a constant factor) [49]. However, different from the convergence analysis of FedSplit and FedPD [49], our flexible sampling scheme allows us to update a subset of users at each round and still obtain convergence. This can potentially further resolve the communication bottleneck [22]. We note that FedSplit is a variant of the Peaceman-Rachford splitting method, i.e., α = 2, and only considers the convex non-composite case, while we use a relaxation parameter α < 2 for the more general nonconvex composite problem (1).
The following corollary specifies the convergence of Algorithm 1 with a specific choice of stepsizes and exact evaluation of prox_{ηf_i}, whose proof is in Supp. Doc. A.6.

Corollary 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(x_i^k, y_i^k, x̂_i^k, x̄^k)} be generated by Algorithm 1 using stepsizes α = 1, η = 1/(3L), and p_i = 1/n. Under exact evaluation of prox_{ηf_i}, i.e., ε_{i,k} = 0 for all i ∈ [n] and k ≥ 0, the following bound holds:

    (1/(K+1)) ∑_{k=0}^{K} E[ ‖G_η(x̄^k)‖² ] ≤ ( 160Ln / (3(K+1)) ) [F(x⁰) − F^⋆].   (6)

Let x̃^K be selected uniformly at random from {x̄⁰, · · · , x̄^K} as the output of Algorithm 1. Then, after at most

    K := ⌊ 160Ln[F(x⁰) − F^⋆] / (3ε²) ⌋ ≡ O(ε⁻²)

communication rounds, x̃^K becomes an ε-stationary point of (1) (defined by Definition 2.2).
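To give a feel for the constant in Corollary 3.1, the snippet below evaluates the worst-case number of communication rounds for some illustrative, made-up values of L, n, the initial gap, and the target accuracy (none of these numbers come from the paper):

```python
import math

L, n, gap, eps = 1.0, 50, 10.0, 1e-2       # illustrative values only
K = math.floor(160 * L * n * gap / (3 * eps ** 2))
print(K)                                    # worst-case rounds from Corollary 3.1
```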
4 AsyncFedDR and Its Convergence Guarantee
Motivation. Although FedDR has been shown to converge, it is more practical to account for the system heterogeneity of local users. Requiring synchronous aggregation at the end of each communication round may lead to slow-down in training. It is natural to have asynchronous updates from local users, as seen, e.g., in [35, 39]. However, asynchronous implementation remains limited in FL. Here, we propose asyncFedDR, an asynchronous variant of FedDR, and analyze its convergence guarantee. For the sake of our analysis, we only consider S_k := {i_k}, the exact evaluation of prox_{ηf_i}, and bounded delays, but extensions to general S_k and inexact prox_{ηf_i} are similar to Algorithm 1.
4.1 Derivation of asyncFedDR
Let us first explain the main idea of asyncFedDR. At each iteration k, the active user i_k receives a delayed copy x̄^{k−d^k_{i_k}} of x̄^k from the server, with delay d^k_{i_k}. The active user i_k then updates its own local model (y_i^k, x_i^k, x̂_i^k) in an asynchronous mode without waiting for others to complete. Once it completes its update, user i_k just sends an increment Δx̂^k_{i_k} to the server to update the global model, while other users may be reading. Overall, the complete asyncFedDR is presented in Algorithm 2.

In our analysis below, a transition from iteration k to k + 1 is triggered whenever a user completes its update. Moreover, at Step 3, the active user i_k is chosen from a realization (i_k, d^k) of a joint random vector (î_k, d̂^k) at the k-th iteration. Here, we do not assume i_k to be uniformly random or independent of the delay d^k. This allows Algorithm 2 to capture the variety of asynchronous implementations and architectures. Note that x̄^{k−d^k_{i_k}} at Step 4 is a delayed version of x̄^k, which only exists on the server when user i_k is reading; right after, x̄^k may be updated by another user.
Illustrative example. To better understand the update of asyncFedDR, Figure 1 depicts a simple scenario where 4 users (C1–C4) asynchronously perform updates and g(·) = 0. At iteration k = 4, user C4 finishes its update so that the server performs updates. During this process, user C1 starts its update by receiving a global model x̄^{4−d^4_{i_4}} from the server, which is the average of (x̂_1^4, x̂_2^4, x̂_3^4, x̂_4^4). At iteration k = 7, C1 finishes its update. Although x̂_1 and x̂_4 do not change during this time, i.e., x̂_1^6 = x̂_1^4 and x̂_4^6 = x̂_4^4, x̂_2 and x̂_3 have been updated at k = 5, 6 by users C2 and C3, respectively. Therefore, the global model x̄^k used to perform the update at k = 7 is actually aggregated from (x̂_1^6, x̂_2^4, x̂_3^5, x̂_4^6), not (x̂_1^6, x̂_2^6, x̂_3^6, x̂_4^6). In other words, each user receives a delayed estimate x̄^{k−d^k}, where d^k = (d_1^k, · · · , d_n^k) is a delay vector and d_i^k = max{t ∈ [k] : i_t = i}, i.e., the last time x̂_i gets updated up to iteration k.
Algorithm 2 (Asynchronous FedDR (asyncFedDR))
1: Initialization: Take x^0 ∈ dom(F) and choose η > 0 and α > 0.
   Initialize the server with x̄^0 := x^0 and x̃^0 := 0. Initialize each user i ∈ [n] with y_i^0 := x^0, x_i^0 := prox_{ηf_i}(y_i^0), and x̂_i^0 := 2x_i^0 − y_i^0.
2: For k := 0, · · · , K do
3:   Select i_k such that (i_k, d^k) is a realization of (î_k, d̂^k).
4:   [Communication] User i_k receives x̄^{k−d^k_{i_k}}, a delayed version of x̄^k with the delay d^k_{i_k}.
5:   [Local update] User i_k updates
        y_{i_k}^{k+1} := y_{i_k}^k + α(x̄^{k−d^k_{i_k}} − x_{i_k}^k),  x_{i_k}^{k+1} := prox_{ηf_{i_k}}(y_{i_k}^{k+1}),  and  x̂_{i_k}^{k+1} := 2x_{i_k}^{k+1} − y_{i_k}^{k+1}.
     Other users maintain y_i^{k+1} := y_i^k, x_i^{k+1} := x_i^k, and x̂_i^{k+1} := x̂_i^k for i ≠ i_k.
6:   [Communication] User i_k sends Δ_{i_k}^k := x̂_{i_k}^{k+1} − x̂_{i_k}^k back to the server.
7:   [Server aggregation] The server aggregates x̃^{k+1} := x̃^k + (1/n) Δ_{i_k}^k.
8:   [Server update] Then, the server updates x̄^{k+1} := prox_{ηg}(x̃^{k+1}).
9: End For
Note that when d_i^k = 0 for all i, Algorithm 2 reduces to its synchronous variant, i.e., a special variant of Algorithm 1 with S_k = {i_k}.
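A minimal serial simulation of Algorithm 2 is sketched below; the way delays are drawn, the quadratic local losses, and the uniform choice of the active user are assumptions for illustration only — a real deployment would use genuinely concurrent workers reading possibly stale copies of the server model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, eta, alpha, lam, tau = 6, 4, 0.1, 0.5, 0.01, 3

A = [rng.standard_normal((15, p)) for _ in range(n)]
b = [rng.standard_normal(15) for _ in range(n)]

def prox_fi(i, y):
    # Closed-form prox of the quadratic local loss f_i.
    return np.linalg.solve(A[i].T @ A[i] + np.eye(p) / eta, A[i].T @ b[i] + y / eta)

def prox_g(v):
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

x0 = np.zeros(p)
y = [x0.copy() for _ in range(n)]
x = [prox_fi(i, y[i]) for i in range(n)]
x_hat = [2 * x[i] - y[i] for i in range(n)]
x_tilde = np.zeros(p)                      # x~^0 := 0 as in Algorithm 2
x_bar_hist = [x0.copy()]                   # history of server models, to mimic delayed reads

for k in range(200):
    i_k = rng.integers(n)                                   # active user at iteration k
    d = rng.integers(0, min(tau, len(x_bar_hist) - 1) + 1)  # bounded delay d^k_{i_k} <= tau
    x_bar_delayed = x_bar_hist[-1 - d]                      # delayed copy of the server model
    y[i_k] = y[i_k] + alpha * (x_bar_delayed - x[i_k])
    x_new = prox_fi(i_k, y[i_k])
    delta = (2 * x_new - y[i_k]) - x_hat[i_k]
    x[i_k], x_hat[i_k] = x_new, 2 * x_new - y[i_k]
    x_tilde = x_tilde + delta / n                           # server aggregation
    x_bar_hist.append(prox_g(x_tilde))                      # server update
```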
4.2 Convergence analysis
Since we treat the active user ik and the delay vector dk jointly at each iteration k as a realization of a joint random vector (̂ik, d̂k), we adopt the probabilistic model from [5] to analyze Algorithm 2. This new model allows us to cope with a more general class of asynchronous variants of our method.
Probabilistic model. Let ξ_k := (i_k, d^k) be a realization of a random vector ξ̂_k := (î_k, d̂^k) containing the user index î_k ∈ [n] and the delay vector d̂^k = (d̂_1^k, · · · , d̂_n^k) ∈ D := {0, 1, · · · , τ}^n present at the k-th iteration, respectively. We consider k + 1 random variables that form a random vector ξ̂_{0:k} := (ξ̂_0, · · · , ξ̂_k). We also use ξ_{0:k} = (ξ_0, ξ_1, · · · , ξ_k) for k + 1 possible values of the random vector ξ̂_{0:k}. Let Ω be the sample space of all sequences ω := {(i_k, d^k)}_{k≥0}. We define a cylinder C_k(ξ_{0:k}) := {ω ∈ Ω : (ω_0, · · · , ω_k) = ξ_{0:k}}, and C_k is the set of all possible C_k(ξ_{0:k}) when ξ_t, t = 0, · · · , k, take all possible values, where ω_l is the l-th element of ω. Let F_k := σ(C_k) be the σ-algebra generated by C_k and F := σ(∪_{k=0}^{∞} C_k). We equip each C_k(ξ_{0:k}) with a probability p(ξ_{0:k}) := P(C_k(ξ_{0:k})). Then, (Ω, F, P) forms a probability space. Assume that p(ξ_{0:k}) := P(ξ̂_{0:k} = ξ_{0:k}) > 0. Our conditional probability is defined as p((i, d) | ξ_{0:k}) := P(C_{k+1}(ξ_{0:k+1}))/P(C_k(ξ_{0:k})), where p((i, d) | ξ_{0:k}) := 0 if p(ξ_{0:k}) = 0. We refer to Supp. Doc. B.2 for more details of our probabilistic model.
To analyze Algorithm 2, we impose Assumption 4.1 on the implementation below.
Assumption 4.1. For all i ∈ [n] and ω ∈ Ω, there exists at least one t ∈ {0, 1, · · · , T} with T > 0, such that

    ∑_{d∈D} p((i, d) | ξ_{0:k+t−1}) ≥ p̂  if  p(ξ_{0:k}) > 0,   (7)

for a given p̂ > 0 and any k ≥ 0. Assume also that d_i^k ≤ τ and d_{i_k}^k = 0 for all k ≥ 0 and i, i_k ∈ [n].
Assumption 4.1 implies that, during any interval of T iterations, every user has a non-negligible positive probability of being updated. Note that if user i_k is active, then it uses a recent value with no delay, i.e., d_{i_k}^k = 0, as in Assumption 4.1. Moreover, the bounded delay assumption d_i^k ≤ τ is standard for analyzing the convergence of asynchronous algorithms; see, e.g., [5, 32, 34, 35, 44].
Suppose that we choose 0 < α < ᾱ and 0 < η < η̄ in Algorithm 2, where c := (2τ² − n)/n² is given, and ᾱ > 0 and η̄ > 0 are respectively computed by

    ᾱ := 1 if 2τ² ≤ n, and ᾱ := 2/(2 + c) otherwise;
    η̄ := [√(16 − 8α − 7α²) − α] / [2L(2 + α)] if 2τ² ≤ n, and
    η̄ := [√(16 − 8α − (7 + 4c + 4c²)α²) − α] / [2L(2 + (1 + c)α)] otherwise.   (8)

Next, we introduce the following two constants:

    ρ := [2(1 − α) − (2 + α)L²η² − Lαη] / (αηn) if 2τ² ≤ n, and
    ρ := { n²[2(1 − α) − (2 + α)L²η² − Lαη] − α(1 + η²L²)(2τ² − n) } / (αηn³) otherwise;

    D := [ 8α²(1 + L²η²)(τ² + 2Tnp̂) + 8n²(1 + L²η² + Tα²p̂) ] / (p̂α²n²).   (9)
Then, both ρ and D are positive. We emphasize that, though these formulas look complicated, they are computed explicitly without any tuning. Theorem 4.1 proves the convergence of Algorithm 2, whose analysis is in Supp. Doc. B.

Theorem 4.1. Suppose that Assumptions 2.1, 2.2, and 4.1 hold for (1). Let ᾱ, η̄, ρ, and D be given by (8) and (9), respectively. Let {(x_i^k, y_i^k, x̄^k)} be generated by Algorithm 2 with stepsizes α ∈ (0, ᾱ) and η ∈ (0, η̄). Then, the following bound holds:

    (1/(K+1)) ∑_{k=0}^{K} E[ ‖G_η(x̄^k)‖² ] ≤ Ĉ [F(x⁰) − F^⋆] / (K+1),   (10)

where Ĉ := 2(1+ηL)²D/(nη²ρ) > 0 depends on n, L, η, α, τ, T, and p̂.
Let x̃^K be selected uniformly at random from {x̄⁰, · · · , x̄^K} as the output of Algorithm 2. Then, after at most K := O(ε⁻²) iterations, x̃^K is an ε-stationary point of (1) as in Definition 2.2.

Remark 4.1. From Theorem 4.1, we can see that asyncFedDR achieves the same worst-case communication complexity O(ε⁻²) (up to a constant factor) as FedDR, but with smaller α and η.
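To illustrate how the safeguards in (8) shrink as the delay bound grows (echoing Remark 4.1), the following snippet evaluates ᾱ and η̄ for a few illustrative values of n, τ, L, and α; the numbers themselves are assumptions, not taken from the paper.

```python
import math

def stepsize_bounds(n, tau, L, alpha):
    """Evaluate the upper bounds (8) on alpha and eta for asyncFedDR."""
    if 2 * tau ** 2 <= n:
        alpha_bar = 1.0
        eta_bar = (math.sqrt(16 - 8 * alpha - 7 * alpha ** 2) - alpha) / (2 * L * (2 + alpha))
    else:
        c = (2 * tau ** 2 - n) / n ** 2
        alpha_bar = 2.0 / (2.0 + c)
        eta_bar = (math.sqrt(16 - 8 * alpha - (7 + 4 * c + 4 * c ** 2) * alpha ** 2) - alpha) \
                  / (2 * L * (2 + (1 + c) * alpha))
    return alpha_bar, eta_bar

for tau in (1, 20, 60):   # larger delay bounds tau shrink the admissible stepsizes
    print(tau, stepsize_bounds(n=100, tau=tau, L=1.0, alpha=0.5))
```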
5 Numerical Experiments
To evaluate the performance of FedDR and asyncFedDR, we conduct multiple experiments using both synthetic and real datasets. Since most existing methods are developed for non-composite problems, we also implement three other methods, FedAvg, FedProx, and FedPD, for comparison in this setting. We use training loss, training accuracy, and test accuracy as our performance metrics.
Implementation. To compare the synchronous algorithms, we reuse the implementation of FedAvg and FedProx in [23] and implement FedDR and FedPD on top of it. To conduct the asynchronous experiments, we implement our algorithms based on the asynchronous framework in [3]. All experiments are run on a Linux-based server with multiple nodes and the following configuration: 24-core 2.50GHz Intel processors, 30M cache, and 256GB RAM.
Models and hyper-parameter selection. Our models are neural networks, and their details are given in Supp. Doc. C. As in [23], we use the same local solver (SGD) for all algorithms and run the local updates for 20 epochs. Parameters for each algorithm, such as µ for FedProx, η for FedPD, and α and η for FedDR, are tuned from a wide range of values. For each dataset, we pick the parameters that work best for each algorithm and plot its performance with the chosen parameters.
Results on synthetic datasets. We compare these algorithms using synthetic datasets in both iid and non-iid settings. We follow the data generation procedures described in [23, 38] to generate one iid dataset, synthetic-iid, and three non-iid datasets, synthetic-(r,s) for (r, s) ∈ {(0, 0), (0.5, 0.5), (1, 1)}. We first compare these algorithms without using the user sampling scheme, i.e., all users perform updates at each communication round, and for the non-composite model of (1).
We report the performance of these algorithms on one non-iid dataset in Figure 2; more results can be found in Supp. Doc. C. FedDR and FedPD are comparable on these datasets, and they both outperform FedProx and FedAvg. FedProx works better than FedAvg, which aligns with the results in [23]. Moreover, when comparing on more datasets, our algorithm overall performs better than the others.
Now we compare these algorithms when we sample 10 users out of 30 to perform updates at each communication round for FedAvg, FedProx, and FedDR, while we use all users for FedPD since FedPD only has a convergence guarantee for this setting. In this test, the evaluation metrics are plotted in terms of the number of bytes communicated between users and server at each communication round. Note that using the user sampling scheme in this case can save one-third of the communication cost each round. Figure 3 depicts the performance of the 4 algorithms on one dataset; see also Supp. Doc. C.

From Figure 3, FedDR performs well compared to the others. FedProx using the user sampling scheme performs better and is slightly behind FedPD, while FedDR, FedPD, and FedProx outperform FedAvg.
Results on FEMNIST datasets. FEMNIST [4] is an extended version of the MNIST dataset [19], where the data is partitioned by the writer of the digit/character. It has a total of 62 classes (10 digits, 26 upper-case and 26 lower-case letters) with over 800,000 samples. In this example, there are a total of 200 users, and we sample 50 users to perform updates at each round of communication for FedAvg, FedProx, and FedDR, while we use all users to perform updates for FedPD. Fig. 4 depicts the performance of the 4 algorithms in terms of communication cost. From Fig. 4, FedDR can achieve a lower loss value and higher training accuracy than the other algorithms, while FedPD can reach the same test accuracy as ours at the end. Overall, FedDR seems to work better than the other algorithms in this test.
Results with the ℓ1-norm regularizer. We now consider the composite setting with g(x) := 0.01‖x‖₁ to verify Algorithm 1 under different inexactness levels ε_{i,k}, by varying the learning rate (lr) and the number of local SGD epochs used to approximately evaluate prox_{ηf_i}(y_i^k). We run Algorithm 1 on the FEMNIST dataset, and the results are shown in Figure 5.

We observe that Algorithm 1 works best when the local learning rate is 0.003, which aligns with [23] for the non-composite case. It also performs better when we decrease ε_{i,k} by increasing the number of epochs for evaluating prox_{ηf_i}. This performance confirms our theoretical results in Supp. Doc. A.5.
Results using asynchronous updates. To illustrate the advantage of asyncFedDR over FedDR, we conduct another experiment training on the MNIST dataset using 20 users. Since we run these experiments on computing nodes with identical configurations, we simulate computing-power discrepancy between users by adding a variable delay to each user's update process, such that the fastest user may be up to twice as fast as the slowest one.
The results of the two variants are presented in Figure 6; see Supp. Doc. C for more examples. We can see that asyncFedDR can achieve better performance than FedDR in terms of training time, which illustrates the advantage of asynchronous updates under heterogeneous computing power.
Acknowledgments and Disclosure of Funding
The work of Quoc Tran-Dinh is partially supported by the Office of Naval Research (ONR), grant No. N00014-20-1-2088. The authors would also like to thank all the anonymous reviewers and the ACs for their constructive comments to improve the paper.

1. What is the focus of the paper regarding federated learning?
2. What are the strengths and weaknesses of the proposed algorithm compared to prior works like FedSplit and FedPD?
3. Are there any questions or concerns regarding the claims made by the authors about the limitations of other algorithms?
4. How does the reviewer assess the technical accuracy of the content, particularly regarding the distinction between Douglas-Rachford and Peaceman-Rachford?
5. What are the requests or suggestions for additional information or analysis related to the convergence properties of FedDR under different complexity assumptions?

Summary Of The Paper
The authors present an application of Douglas-Rachford for consensus optimization in federated learning. Their algorithm, unlike prior work, explicitly handles subsampled clients and bounded delays, and also provides guarantees for hypoconvex problems (smooth but not necessarily convex).
Review
The authors present a version of Douglas-Rachford that supports bounded delays and client subsampling for federated optimization. The techniques (stochastic approximation of the DR operator via client subsampling, and dealing with bounded delays) are more or less standard, and the results are not surprising.
Questions/comments:
The authors claim that FedSplit and FedPD do not allow for sub-sampling among clients (e.g. ll. 53-54, ll 66-67, of the submission). However, it is unclear to me that FedSplit and FedPD are non-convergent under client subsampling; are the authors claiming that if these algorithms are modified to allow random sub-selection of devices/agents/users/clients on each round, that these algorithms will not converge?
ll.103-104 is technically incorrect, FedSplit is an application of Peaceman-Rachford, not Douglas-Rachford; although similar, the latter is an "averaged" version of Peaceman-Rachford.
Can the authors comment on the convergence properties of FedDR with random subsampling of the clients/users under more standard complexity assumptions (e.g. F is convex (but not necessarily strongly convex) and smooth / F is strongly convex and smooth; in the latter case, FedDR should be able to attain linear convergence in expectation)?
It would be useful to state a corollary of Theorem 3.1 in the case alpha = 1, eta = c/L, and p_i = 1/n (uniform sampling), which corresponds to standard Douglas-Rachford with uniformly sampling and the optimal selection of the step-size. |
NIPS | Title
FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
Abstract
We develop two new algorithms, called FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination of a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, possibly in an asynchronous manner, making them more practical. These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity. In fact, our new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods over existing algorithms on synthetic and real datasets.
1 Introduction
Training machine learning models in a centralized fashion becomes more challenging and increasingly inaccessible for a large number of users, especially when the size of datasets and models grows substantially. Consequently, training algorithms based on decentralized and distributed approaches come in as a natural replacement. Among several approaches, federated learning (FL) has received tremendous attention in the past few years since it was first introduced in [18, 30]. In this setting, a central server coordinates between many local users (also called agents or devices) to perform their local updates; the global model is then updated, e.g., by averaging or aggregating local models.
Challenges. FL provides a promising solution for many machine learning applications, such as learning over smartphones or across organizations, and the internet of things, where privacy protection is one of the most critical requirements. However, this training mechanism faces a number of fundamental challenges; see, e.g., [31]. First, when the number of users gets substantially large, it creates a communication bottleneck during the model exchange process between server and users. Second, the local data stored on each local user may be different in terms of size and distribution, which poses a challenge: data or statistical heterogeneity. Third, the variety of users with different local storage, computational power, and network connectivity participating in the system also creates a major challenge, known as system heterogeneity. This challenge also causes unstable connections
between server and users, where some users may be disconnected from the server or simply drop out during training. In practice, we can expect only a subset of users to participate in each round of communication. Another challenge in FL is privacy: accessing and sharing local raw data is not permitted. In addition, distributed methods exchange the local users' objective gradients, and private data can be exposed from such shared information [51]. Therefore, FL methods normally send the global model to each user at the start of each communication round; each user then performs its local update and sends back only the necessary update for aggregation.
Our goal and approach. Our goal in this paper is to further and simultaneously address these fundamental challenges by proposing two new algorithms to train the underlying common optimization model in FL. Our approach relies on a novel combination between randomized block-coordinate strategy, nonconvex Douglas-Rachford (DR) splitting, and asynchronous implementation. While each individual technique or partial combinations is not new, our combination of three as in this paper appears to be the first in the literature. To the best of our knowledge, this is the first work developing randomized block-coordinate DR splitting methods for nonconvex composite FL optimization models, and they are fundamentally different from some works in the convex setting, e.g., [7, 8].
Contribution. Our contribution can be summarized as follows.
(a) We develop a new FL algorithm, called FedDR (Federated Douglas-Rachford), by combining the well-known DR splitting technique and randomized block-coordinate strategy for the common nonconvex composite optimization problem in FL. Our algorithm can handle nonsmooth convex regularizers and allows inexact evaluation of the underlying proximal operators as in FedProx or FedPD. It also achieves the best known O ( ε−2 )
communication complexity for finding a stationary point under standard assumptions (Assumptions 2.1- 2.2), where ε is a given accuracy. More importantly, unlike FedSplit [33] and FedPD [49], which require full user participation to achieve convergence, our analysis does allow partial participation by selecting a subset of users to perform update at each communication round. (b) Next, we propose an asynchronous algorithm, asyncFedDR, where each user can asynchronously perform local update and periodically send the update to the server for proximal aggregation. We show that asyncFedDR achieves the same communication complexity O ( ε−2 )
as FedDR (up to a constant factor) under the same standard assumptions. This algorithm is expected to simultaneously address all challenges discussed above.
Let us emphasize some key points of our contribution. First, the best known O ( ε−2 )
communication complexity of our methods matches the lower bound complexity up to a constant factor as shown in [49], even with inexact evaluation of the objective proximal operators. Second, our methods rely on a DR splitting technique for nonconvex optimization and can handle possibly nonsmooth convex regularizers, which allows us to deal with a larger class of applications and with constraints [47]. Furthermore, it can also handle both statistical and system heterogeneity as discussed in FedSplit [33] and FedPD [49]. However, FedSplit only considers the convex case, and both FedSplit and FedPD require all users to update at each communication round, making them less practical and applicable in FL. Our methods only require a subset of users or even one user to participate in each communication round as in FedAvg or FedProx. In addition, our aggregation step on the server is different from most existing works due to a proximal step on the regularizer. It is also different from [47]. Third, as FedProx [23], we allow inexact evaluation of users’ proximal operators with any local solver (e.g., local SGD or variance reduced methods) and with adaptive accuracies. Finally, requiring synchronous aggregation at the end of each communication round may lead to slow-down in training due to the heterogeneity in computing power and communication capability of local users. It is natural to have asynchronous update from local users as in, e.g., [34, 35, 39]. Our asynchronous variant, asyncFedDR, can fairly address this challenge. Moreover, it uses a general probabilistic model recently introduced in [5], which allows us to capture the variety of asynchronous environments and architectures compared to existing methods, e.g., [39, 44].
Related work and comparison. Federated Averaging (FedAvg) is perhaps the earliest method used in FL. In FedAvg, users perform stochastic gradient descent (SGD) updates for a number of epochs then send updated models to server for aggregation. FedAvg’s practical performance has been shown in many early works, e.g., [18, 29, 48] and tends to become the most popular method for solving FL applications. [26] show that local SGD where users perform a number of local updates before global communication takes place as in FedAvg may offer benefit over minibatch SGD. Similar comparison between minibatch SGD and local SGD has been done in [42, 43]. Analyzing convergence of FedAvg
was very challenging at its early time due to the complexity in its update as well as data heterogeneity. One of the early attempt to show the convergence of FedAvg is in [39] for convex problems under the iid data setting and a set of assumptions. [45] also considers local SGD in the nonconvex setting. Without using an additional bounded gradient assumption as in [39, 45], [41] improves the complexity for the general nonconvex setting while [11] uses a Polyak-Łojasiewicz (PL) condition to improve FedAvg’s convergence results. In heterogeneous data settings, [17] analyzes local GD, where users performs gradient descent (GD) updates instead of SGD. The analysis of FedAvg for non-iid data is given in [24]. The analysis of local GD/SGD for nonconvex problems has been studied in [13]. However, FedAvg might not converge with non-iid data as shown in [33, 49, 50].
FedProx [23] is an extension of FedAvg, which deals with heterogeneity in federated networks by introducing a proximal term to the objective in local updates to improve stability. FedProx has been shown to achieve better performance than FedAvg in heterogeneous setting. Another method to deal with data heterogeneity is SCAFFOLD [16] which uses a control variate to correct the “client-drift" in local update of FedAvg. MIME [15] is another framework that uses control variate to improve FedAvg for heterogeneous settings. However, SCAFFOLD and MIME require to communicate extra information apart from local models. Compared to aforementioned works, our methods deal with nonconvex problems under standard assumptions and with composite settings.
FedSplit [33] instead employs a Peaceman-Rachford splitting scheme to solve a constrained reformulation of the original problem. In fact, FedSplit can be viewed as a variant of Tseng’s splitting scheme [1] applied to FL. [33] show that FedSplit can find a solution of the FL problem under only convexity without imposing any additional assumptions on system or data homogeneity. [49] proposes FedPD, which is essentially a variant of the standard augmented Lagrangian method in nonlinear optimization. Other algorithms for FL can be found, e.g., in [6, 10, 12, 14, 25, 46].
Our approach in this paper relies on nonconvex DR splitting method, which can handle the heterogeneity as discussed in [33]. While the DR method is classical, its nonconvex variants have been recently studied e.g., in [9, 21, 40]. However, the combination of DR and randomized block-coordinate strategy remains limited [7, 8] even in the convex settings. Alternatively, asynchronous algorithms have been extensively studied in the literature, also for FL, see, e.g., [2, 34, 35]. For instance, a recent work [44] analyzes an asynchronous variant of FedAvg under bounded delay assumption and constraint on the number of local updates. [39] proposes an asynchronous local SGD to solve convex problems under iid data. However, to our best knowledge, there exists no asynchronous method using DR splitting techniques with convergence guarantee for FL. In addition, most existing algorithms only focus on non-composite settings. Hence, our work here appears to be the first.
Content. The rest of this paper is organized as follows. Section 2 states our FL optimization model and our assumptions. Section 3 develops FedDR and analyzes its convergence. Section 4 considers an asynchronous variant, asyncFedDR. Section 5 is devoted for numerical experiments. Due to space limit, all technical details and proofs can be found in Supplementary Document (Supp. Doc.).
2 Nonconvex Optimization Models in Federated Learning
The underlying optimization model of many FL applications can be written into the following form:
min x∈Rp
{ F (x) := f(x) + g(x) = 1
n n∑ i=1 fi(x) + g(x) } , (1)
where n is the number of users, and each fi is a local loss of the i-th user, which is assumed to be nonconvex and L-smooth (see Assumptions 2.1 and 2.2 below), and g is a proper, closed, and convex regularizer. Apart from these assumptions, we will not make any additional assumption on (1). We emphasize that the use of regularizers g has been motivated in several works, including [47].
Let dom(F ) := {x ∈ Rp : F (x) < +∞} be the domain of F and ∂g be the subdifferential of g [1]. Since (1) is nonconvex, we only expect to find a stationary point, which is characterized by the following optimality condition. Definition 2.1. If 0 ∈ ∇f(x∗) + ∂g(x∗), then x∗ is called a [first-order] stationary point of (1).
The algorithms for solving (1) developed in this paper will rely on the following assumptions. Assumption 2.1 (Boundedness from below). dom(F ) 6= ∅ and F ? := infx∈Rp F (x) > −∞.
Assumption 2.2 (L-smoothness). All functions fi(·) for i ∈ [n] := {1, · · · , n} are L-smooth, i.e., fi is continuously differentiable and there exists L ∈ (0,+∞) such that
‖∇fi(x)−∇fi(y)‖ ≤ L‖x− y‖, ∀x, y ∈ dom(fi). (2)
Assumptions 2.1 and 2.2 are very standard in nonconvex optimization. Assumption 2.1 guarantees the well-definedness of (1) and is independent of algorithms. Assuming the same Lipschitz constant L for all fi is not restrictive since if fi is Li-smooth, then by scaling variables of its constrained formulation (see (11) in Supp. Doc.), we can get the same Lipschitz constant L of all fi.
Proximal operators and evaluation. Our methods make use of the proximal operators of both fi and g. Although fi is L-smooth and nonconvex, we still define its proximal operator as
proxηfi(x) := argminy
{ fi(y) + 1 2η‖y − x‖ 2 } , (3)
where η > 0. Even fi is nonconvex, under Assumption 2.2, if we choose 0 < η < 1L , then proxηfi is well-defined and single-valued. Evaluating proxηfi requires to solve a strongly convex program. If proxηfi can only be computed approximately up to an accuracy ≥ 0 to obtain z, denoted by x+ :≈ proxηfi(x), if ‖x+ − proxηfi(x)‖ ≤ i. Note that instead of absolute error, one can also use a relative error as ‖x+ − proxηfi(x)‖ ≤ i‖x+ − x‖ as in [37]. For the convex function g, its proximal operator proxηg is defined in the same way as (3). Evaluating proxηfi can be done by various existing methods, including local SGD and accelerated GD-type algorithms. However, this is not our focus in this paper, and therefore we do not specify the subsolver for evaluating proxηfi .
Gradient mapping. As usual, let us define the following gradient mapping of F in (1). Gη(x) := 1η ( x− proxηg(x− η∇f(x)) ) , η > 0. (4) Then, the optimality condition 0 ∈ ∇f(x∗) + ∂g(x∗) of (1) is equivalent to Gη(x∗) = 0. However, in practice, we often wish to find an ε-approximate stationary point to (1) defined as follows. Definition 2.2. If x̃ ∈ dom(F ) satisfies E [ ‖Gη(x̃)‖2 ] ≤ ε2, then x̃ is called an ε-stationary point of (1), where the expectation is taken overall the randomness generated by the underlying algorithm.
Note that, for Gη(x̃) to be well-defined, we require x̃ ∈ dom(F ). In our algorithms below, this requirement is fulfilled if x̃ ∈ dom(f), which is often satisfied in practice as dom(f) = Rp.
3 FedDR Algorithm and Its Convergence Guarantee
Prior to our work, FedSplit [33] exploits similar update steps as ours by adopting the PeacemanRachford splitting method to solve the convex and non-composite instances of (1). FedSplit can overcome some of the key challenges as discussed earlier. Following this idea, we take the advantages of the DR splitting method to first derive a new variant to handle the nonconvex composite problem (1). This new algorithm is synchronous and we call it FedDR. The central idea is as follows: First, we reformulate (1) into (12) by duplicating variables. Next, we apply a DR splitting scheme to the resulting problem. Finally, we combine such a scheme with a randomized block-coordinate strategy.
The complete algorithm is presented in Algorithm 1, where its full derivation is in Supp. Doc. A.1.
Let us make the following remarks. Firstly, FedDR mainly updates of three sequences {x̄k}, {xki } and {yki }. While x̄k is an averaged model to approximately minimize the global objective function F , xki act as local models trying to optimize a regularized local loss function w.r.t. its local data distribution, and yki keeps track of the residuals from the local models to the global one. Secondly, we allow xki to be an approximation of proxηfi(y k i ) up to an accuracy i,k ≥ 0 as defined in (3), i.e., ‖xki − proxηfi(y k i )‖ ≤ i,k for all i ∈ [n] if k = 0 and for all i ∈ Sk−1 if k > 0. If i,k = 0, then we get the exact evaluation xki := proxηfi(y k i ). Approximately evaluating proxηfi can be done, e.g., by local SGD as in FedAvg. Thirdly, Algorithm 1 is different from existing randomized proximal gradient-based methods since we rely on a DR splitting scheme and can handle composite settings. Here, three iterates yki , x k i , and x̂ k i at Step 5 are updated sequentially, making it challenging to analyze convergence. Lastly, the subset of active users Sk is sampled from a random set-valued mapping Ŝ. As specified in Assumption 3.1, this sampling mechanism covers a wide range of sampling strategies. Clearly, if Sk = [n] and g = 0, then Algorithm 1 reduces to FedSplit, but for the nonconvex case. Hence, our convergence guarantee below remains applicable, and the guarantee is sure. Note that both our model (1) and Algorithm 1 are completely different from [47].
Algorithm 1 (FL with Randomized DR (FedDR)) 1: Initialization: Take x0 ∈ dom(F ). Choose η > 0 and α > 0, and accuracies i,0 ≥ 0 (i ∈ [n]).
Initialize the server with x̄0 := x0 and x̃0 := x0. Initialize each user i ∈ [n] with y0i := x0, x0i :≈ proxηfi(y 0 i ), and x̂ 0 i := 2x 0 i − y0i .
2: For k := 0, · · · ,K do 3: [Active users] Generate a proper realization Sk ⊆ [n] of Ŝ (see Assumption 3.1). 4: [Communication] Each user i ∈ Sk receives x̄k from the server. 5: [Local update] For each user i ∈ Sk do: Choose i,k+1 ≥ 0 and update
yk+1i := y k i + α(x̄ k − xki ), xk+1i :≈ proxηfi(y k+1 i ), and x̂ k+1 i := 2x k+1 i − y k+1 i .
6: [Communication] Each user i ∈ Sk sends ∆x̂ki := x̂ k+1 i − x̂ki back to the server. 7: [Sever aggregation] The server aggregates x̃k+1 := x̃k + 1n ∑ i∈Sk ∆x̂ k i .
8: [Sever update] Then, the sever updates x̄k+1 := proxηg ( x̃k+1 ) . 9: End For
3.1 Convergence of Algorithm 1
Let us consider a proper sampling scheme Ŝ of [n], which is a random set-valued mapping with values in 2[n], the collection of all subsets of [n]. Let Sk be an iid realization of Ŝ and Fk := σ(S0, · · · ,Sk) be the σ-algebra generated by S0, · · · ,Sk. We first impose the following assumption about the distribution of our sampling scheme Ŝ. Assumption 3.1. There exist p1, · · · ,pn > 0 such that P ( i ∈ Ŝ ) = pi > 0 for all i ∈ [n].
This assumption covers a large class of sampling schemes as discussed in [36], including nonoverlapping uniform and doubly uniform. This assumption guarantees that every user has a nonnegligible probability to be updated. Note that pi = ∑ S:i∈S P(S) due to Assumption 3.1. For the sake of notation, we also denote p̂ := min{pi : i ∈ [n]} > 0. The following theorem characterizes convergence of Algorithm 1 with inexact evaluation of proxηfi . Due to space limit, we refer the reader to Lemma A.6 in Sup. Doc. for more details about the choice of stepsizes and related constants. The proof of this theorem is defered to Sup. Doc. A.5. Theorem 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(xki , yki , x̂ki , x̄k)} be generated by Algorithm 1 using stepsizes α and η defined in (33). Then, the following holds
1
K + 1 K∑ k=0 E [ ‖Gη(x̄k)‖2 ] ≤ C1[F (x 0)− F ?] K + 1 + 1 n(K + 1) K∑ k=0 n∑ i=1 ( C2 2 i,k + C3 2 i,k+1 ) , (5)
where β, ρ1, and ρ2 are explicitly defined by (35), and
C1 := 2(1+ηL)2(1+γ2) η2β , C2 := ρ1C1, and C3 := ρ2C1 + (1+ηL)2(1+γ2) η2γ2 .
Let x̃K be selected uniformly at random from {x̄0, · · · , x̄K} as the output of Algorithm 1. Let the accuracies i,k for all i ∈ [n] and k ≥ 0 at Step 5 be chosen such that 1n ∑n i=1 ∑K+1 k=0 2 i,k ≤M for a given constant M > 0 and all K ≥ 0. Then, if we run Algorithm 1 for at most
K :=
⌊ C1[F (x
0)− F ?] + (C2 + C3)M ε2
⌋ ≡ O ( ε−2 )
iterations, then x̃K is an ε-stationary point of (1) in the sense of Definition 2.2.
Remark 3.1. [Choice of accuracies ki ] To guarantee 1 n ∑n i=1 ∑K+1 k=0 2 i,k ≤M in Theorem 3.1 for a given constant M > 0 and for all K ≥ 0, one can choose, e.g., 2i,k := M2(k+1)2 for all i ∈ [n] and k ≥ 0. In this case, we can easily show that 1n ∑n i=1 ∑K+1 k=0 2 i,k = M 2 ∑K+1 k=0 1 (k+1)2 ≤M . Note that, instead of using absolute accuracies, one can also use relative accuracies as ‖ i,k‖2 ≤ θ‖xk+1i −xki ‖2 for a given constant θ > 0, which is more practical, while still achieving a similar convergence guarantee. Such an idea has been widely used in the literature, including [28] (see Supp. Doc. A.7).
Remark 3.2 (Comparison). Since (1) is nonconvex, our O ( ε−2 )
communication complexity is the state-of-the-art, matching the lower bound complexity (up to a constant factor) [49]. However, different from the convergence analysis of FedSplit and FedPD [49], our flexible sampling scheme allows us to update a subset of users at each round and still obtains convergence. This can potentially further resolve the communication bottleneck [22]. We note that FedSplit is a variant of the PeacemanRachford splitting method, i.e. α = 2 and only considers convex non-composite case while we use a relaxation parameter α < 2 and for a more general nonconvex composite problem (1).
The following corollary specifies the convergence of Algorithm 1 with a specific choice of stepsizes and exact evaluation of proxηfi , whose proof is in Sup. Doc. A.6.
Corollary 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(xki , yki , x̂ki , x̄k)} be generated by Algorithm 1 using stepsizes α = 1, η = 13L , and pi = 1 n . Under exact evaluation of proxηfi , i.e.
i,k = 0 for all i ∈ [n] and k ≥ 0, the following bound holds
1
K + 1 K∑ k=0 E [ ‖Gη(x̄k)‖2 ] ≤ 160Ln 3(K + 1) [F (x0)− F ?]. (6)
Let x̃K be selected uniformly at random from {x̄0, · · · , x̄K} as the output of Algorithm 1. Then after at most
K :=
⌊ 160Ln[F (x0)− F ?]
3ε2
⌋ ≡ O ( ε−2 ) ,
communication rounds, x̃K becomes an ε-stationary point of (1) (defined by Definition 2.2).
4 AsyncFedDR and Its Convergence Guarantee
Motivation. Although FedDR has been shown to converge, it is more practical to account for the system heterogeneity of local users. Requiring synchronous aggregation at the end of each communication round may lead to slow down in training. It is natural to have asynchronous update from local users as seen, e.g., in [35, 39]. However, asynchronous implementation remains limited in FL. Here, we propose asyncFedDR, an asynchronous variant of FedDR, and analyze its convergence guarantee. For the sake of our analysis, we only consider Sk := {ik}, the exact evaluation of proxηfi , and bounded delay, but extensions to general Sk and inexact proxηfi are similar to Algorithm 1.
4.1 Derivation of asyncFedDR
Let us first explain the main idea of asyncFedDR. At each iteration k, each user receives a delay copy x̄k−d
k ik of x̄k from the server with a delay dkik . The active user ik will update its own local model
(yki , x k i , x̂ k i ) in an asynchronous mode without waiting for others to complete. Once completing its update, user ik just sends an increment ∆x̂kik to the server to update the global model, while others may be reading. Overall, the complete asyncFedDR is presented in Algorithm 2.
In our analysis below, a transition of iteration from k to k + 1 is triggered whenever a user completes its update. Moreover, at Step 3, active user ik is chosen from a realization (ik, dk) of a joint random vector (̂ik, d̂k) at the k-th iteration. Here, we do not assume ik to be uniformly random or independent of the delay dk. This allows Algorithm 2 to capture the variety of asynchronous implementations and architectures. Note that x̄k−d k ik at Step 4 is a delayed version of x̄k, which only exists on the server when user ik is reading. However, right after, x̄k may be updated by another user.
Illustrative example. To better understand the update of asyncFedDR, Figure 1 depicts a simple scenario where there are 4 users (C1 - C4) asynchronously perform updates and with g(·) = 0. At iteration k = 4, user C4 finishes its update so that the server performs updates. During this process, user C1 starts its update by receiving a global model x̄4−d 4 i4 from server which is the average of (x̂41, x̂ 4 2, x̂ 4 3, x̂ 4 4). At iteration t = 7, C1 finishes its update. Although x̂1 and x̂4 do not change during this time, i.e. x̂61 = x̂ 4 1 and x̂ 6 4 = x̂ 4 4, x̂2 and x̂3 have been updated at k = 5, 6 from user C2 and C3, respectively. Therefore, the global model x̄k used to perform the update at k = 7 is actually aggregated from (x̂61, x̂ 4 2, x̂ 5 3, x̂ 6 4) not (x̂ 6 1, x̂ 6 2, x̂ 6 3, x̂ 6 4). In other words, each user receives a delay estimate x̄k−d k
where dk = (dk1 , · · · , dkn) is a delay vector and dki = max{t ∈ [k] : it = i}, i.e. the
Algorithm 2 (Asynchronous FedDR (asyncFedDR)) 1: Initialization: Take x0∈dom(F ) and choose η > 0 and α > 0.
Initialize the server with x̄0 := x0 and x̃0 := 0. Initialize each user i ∈ [n] with y0i := x0, x0i := proxηfi(y 0 i ), and x̂ 0 i := 2x 0 i − y0i .
2: For k := 0, · · · ,K do 3: Select ik such that (ik, dk) is a realization of (̂ik, d̂k). 4: [Communication] User ik receives x̄
k−dkik , a delayed version of x̄k with the delay dkik . 5: [Local update] User ik updates
yk+1ik := y k ik + α(x̄k−d k ik − xkik), x k+1 ik := proxηfik (yk+1ik ), and x̂ k+1 ik := 2xk+1ik − y k+1 ik .
Other users maintain yk+1i := y k i , x k+1 i := x k i , and x̂ k+1 i := x̂ k i for i 6= ik.
6: [Communication] User ik sends ∆kik := x̂ k+1 ik − x̂kik back to the server. 7: [Sever aggregation] The server aggregates x̃k+1 := x̃k + 1n∆ k ik
. 8: [Sever update] Then, the sever updates x̄k+1 := proxηg ( x̃k+1 ) . 9: End For
last time x̂i gets updated up to iteration k. Note that when dki = 0 for all i, Algorithm 2 reduces to its synchronous variant, i.e. a special variant of Algorithm 1 with Sk = {ik}.
4.2 Convergence analysis
Since we treat the active user ik and the delay vector dk jointly at each iteration k as a realization of a joint random vector (̂ik, d̂k), we adopt the probabilistic model from [5] to analyze Algorithm 2. This new model allows us to cope with a more general class of asynchronous variants of our method.
Probabilistic model. Let ξ^k := (i_k, d^k) be a realization of a random vector ξ̂^k := (î_k, d̂^k) containing the user index î_k ∈ [n] and the delay vector d̂^k = (d̂^k_1, · · · , d̂^k_n) ∈ D := {0, 1, · · · , τ}^n at the k-th iteration, respectively. We consider k + 1 random variables that form a random vector ξ̂^{0:k} := (ξ̂^0, · · · , ξ̂^k). We also use ξ^{0:k} = (ξ^0, ξ^1, · · · , ξ^k) for k + 1 possible values of the random vector ξ̂^{0:k}. Let Ω be the sample space of all sequences ω := {(i_k, d^k)}_{k≥0}. We define a cylinder C_k(ξ^{0:k}) := {ω ∈ Ω : (ω_0, · · · , ω_k) = ξ^{0:k}}, where ω_l is the l-th element of ω, and let C_k be the set of all possible C_k(ξ^{0:k}) when ξ^t, t = 0, · · · , k, take all possible values. Let F_k := σ(C_k) be the σ-algebra generated by C_k and F := σ(∪_{k=0}^∞ C_k). We equip each C_k(ξ^{0:k}) with a probability p(ξ^{0:k}) := P(C_k(ξ^{0:k})). Then, (Ω, F, P) forms a probability space. Assume that p(ξ^{0:k}) := P(ξ̂^{0:k} = ξ^{0:k}) > 0. Our conditional probability is defined as p((i, d) | ξ^{0:k}) := P(C_{k+1}(ξ^{0:k+1}))/P(C_k(ξ^{0:k})), where p((i, d) | ξ^{0:k}) := 0 if p(ξ^{0:k}) = 0. We refer to Supp. Doc. B.2 for more details of our probabilistic model.
To analyze Algorithm 2, we impose Assumption 4.1 on the implementation below.
Assumption 4.1. For all i ∈ [n] and ω ∈ Ω, there exists at least one t ∈ {0, 1, · · · , T} with T > 0, such that
\[
\sum_{d \in D} p\big((i, d) \mid \xi^{0:k+t-1}\big) \ \ge\ \hat{p} \quad \text{if } p(\xi^{0:k}) > 0, \tag{7}
\]
for a given p̂ > 0 and any k ≥ 0. Assume also that d^k_i ≤ τ and d^k_{i_k} = 0 for all k ≥ 0 and i, i_k ∈ [n].
Assumption 4.1 implies that during an interval of T iterations, every user has a non-negligible positive probability of being updated. Note that if user i_k is active, then it uses the most recent value with no delay, i.e., d^k_{i_k} = 0 as in Assumption 4.1. Moreover, the bounded delay assumption d^k_i ≤ τ is standard in the analysis of asynchronous algorithms; see, e.g., [5, 32, 34, 35, 44].
Suppose that we choose 0 < α < ᾱ and 0 < η < η̄ in Algorithm 2, where c := (2τ^2 − n)/n^2 is given, and ᾱ > 0 and η̄ > 0 are respectively computed by
\[
\bar{\alpha} := \begin{cases} 1 & \text{if } 2\tau^2 \le n, \\[2pt] \frac{2}{2+c} & \text{otherwise,} \end{cases}
\qquad\text{and}\qquad
\bar{\eta} := \begin{cases} \frac{\sqrt{16-8\alpha-7\alpha^2}-\alpha}{2L(2+\alpha)} & \text{if } 2\tau^2 \le n, \\[4pt] \frac{\sqrt{16-8\alpha-(7+4c+4c^2)\alpha^2}-\alpha}{2L[2+(1+c)\alpha]} & \text{otherwise.} \end{cases} \tag{8}
\]
Next, we introduce the following two constants:
\[
\rho := \begin{cases} \frac{2(1-\alpha)-(2+\alpha)L^2\eta^2-L\alpha\eta}{\alpha\eta n} & \text{if } 2\tau^2 \le n, \\[4pt] \frac{n^2[2(1-\alpha)-(2+\alpha)L^2\eta^2-L\alpha\eta]-\alpha(1+\eta^2L^2)(2\tau^2-n)}{\alpha\eta n^3} & \text{otherwise,} \end{cases}
\qquad
D := \frac{8\alpha^2(1+L^2\eta^2)(\tau^2+2Tn\hat{p}) + 8n^2(1+L^2\eta^2+T\alpha^2\hat{p})}{\hat{p}\,\alpha^2 n^2}. \tag{9}
\]
Then, both ρ and D are positive. We emphasize that, though these formulas look complicated, they are computed explicitly without any tuning. Theorem 4.1 establishes the convergence of Algorithm 2; its analysis is in Supp. Doc. B.
Theorem 4.1. Suppose that Assumptions 2.1, 2.2, and 4.1 hold for (1). Let ᾱ, η̄, ρ, and D be given by (8) and (9), respectively. Let {(x^k_i, y^k_i, x̄^k)} be generated by Algorithm 2 with stepsizes α ∈ (0, ᾱ) and η ∈ (0, η̄). Then, the following bound holds:
\[
\frac{1}{K+1}\sum_{k=0}^{K}\mathbb{E}\big[\|G_\eta(\bar{x}^k)\|^2\big] \ \le\ \frac{\hat{C}\,[F(x^0)-F^\star]}{K+1}, \tag{10}
\]
where \(\hat{C} := \frac{2(1+\eta L)^2 D}{n\eta^2\rho} > 0\) depends on n, L, η, α, τ, T, and p̂.
Let x̃^K be selected uniformly at random from {x̄^0, · · · , x̄^K} as the output of Algorithm 2. Then, after at most K := O(ε^{-2}) iterations, x̃^K is an ε-stationary point of (1) as in Definition 2.2.
Remark 4.1. From Theorem 4.1, we can see that asyncFedDR achieves the same worst-case communication complexity O(ε^{-2}) (up to a constant factor) as FedDR, but with smaller α and η.
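As a sanity check on (8), a small helper that transcribes the bounds ᾱ and η̄ might look as follows; it is a direct transcription under the assumption that the chosen α already satisfies α < ᾱ (so the square roots are real), and the function name is ours.

```python
import math

def async_feddr_stepsize_bounds(alpha, L, n, tau):
    # Transcription of (8): upper bounds alpha_bar and eta_bar(alpha) for asyncFedDR.
    c = (2 * tau**2 - n) / n**2
    if 2 * tau**2 <= n:
        alpha_bar = 1.0
        eta_bar = (math.sqrt(16 - 8 * alpha - 7 * alpha**2) - alpha) / (2 * L * (2 + alpha))
    else:
        alpha_bar = 2.0 / (2 + c)
        eta_bar = (math.sqrt(16 - 8 * alpha - (7 + 4 * c + 4 * c**2) * alpha**2) - alpha) \
                  / (2 * L * (2 + (1 + c) * alpha))
    return alpha_bar, eta_bar
```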
5 Numerical Experiments
To evaluate the performance of FedDR and asyncFedDR, we conduct multiple experiments using both synthetic and real datasets. Since most existing methods are developed for non-composite problems, we also implement three other methods: FedAvg, FedProx, and FedPD to compare for this setting. We use training loss, training accuracy, and test accuracy as our performance metrics.
Implementation. To compare synchronous algorithms, we reuse the implementation of FedAvg and FedProx in [23] and implement FedDR and FedPD on top of it. To conduct the asynchronous examples, we implement our algorithms based on the asynchronous framework in [3]. All experiments are run on a Linux-based server with multiple nodes and configuration: 24-core 2.50GHz Intel processors, 30M cache, and 256GB RAM.
Models and hyper-parameters selection. Our models are neural networks, and their detail is given in Supp. Doc. C. As in [23], we use the same local solver (SGD) for all algorithms and run the local updates for 20 epochs. Parameters for each algorithm such as µ for FedProx, η for FedPD, and α and η for FedDR are tuned from a wide range of values. For each dataset, we pick the parameters that work best for each algorithm and plot their performance on the chosen parameters.
Results on synthetic datasets. We compare these algorithms using synthetic dataset in both iid and non-iid settings. We follow the data generation procedures described in [23, 38] to generate one iid dataset synthetic-iid and three non-iid datasets: synthetic-(r,s) for (r, s) =
{(0, 0), (0.5, 0.5), (1, 1)}. We first compare these algorithms without using the user sampling scheme, i.e. all users perform update at each communication round, and for non-composite model of (1).
We report the performance of these algorithms on one non-iid dataset in Figure 2, but more results can be found in Sup. Doc. C. FedDR and FedPD are comparable in these datasets and they both outperform FedProx and FedAvg. FedProx works better than FedAvg which aligns with the results in [23]. However, when comparing on more datasets, our algorithm overall performs better than others.
Now we compare these algorithms where we sample 10 users out of 30 to perform update at each communication round for FedAvg, FedProx, and FedDR while we use all users for FedPD since FedPD only has convergence guarantee for this setting. In this test, the evaluation metric is plotted in terms of the number of bytes communicated between users and server at each communication round. Note that using user sampling scheme in this case can save one-third of communication cost each round. Figure 3 depicts the performance of 4 algorithms on one dataset, see also Sup. Doc. C.
From Figure 3, FedDR performs well compared to others. FedProx using user sampling scheme performs better and is slightly behind FedPD while FedDR, FedPD, and FedProx outperform FedAvg.
Results on FEMNIST datasets. FEMNIST [4] is an extended version of the MNIST dataset [19] where the data is partitioned by the writer of the digit/character. It has a total of 62 classes (10 digits, 26 upper-case and 26 lower-case letters) with over 800,000 samples. In this example, there are a total of 200 users, and we sample 50 users to perform updates at each round of communication for FedAvg, FedProx, and FedDR, while we use all users to perform updates for FedPD. Fig. 4 depicts the performance of the 4 algorithms in terms of communication cost. From Fig. 4, FedDR can achieve a lower loss value and higher training accuracy than the other algorithms, while FedPD can reach the same test accuracy as ours at the end. Overall, FedDR seems to work better than the other algorithms in this test.
Results with the ℓ1-norm regularizer. We now consider the composite setting with g(x) := 0.01‖x‖1 to verify Algorithm 1 under different inexactness levels ε_{i,k}, obtained by varying the learning rate (lr) and the number of local SGD epochs used to approximately evaluate prox_{ηf_i}(y^k_i). We run Algorithm 1 on the FEMNIST dataset, and the results are shown in Figure 5.
We observe that Algorithm 1 works best when the local learning rate is 0.003, which aligns with [23] for the non-composite case. It also performs better when we decrease ε_{i,k} by increasing the number of epochs used to evaluate prox_{ηf_i}. This performance confirms our theoretical results in Supp. Doc. A.5.
[Figure 5 panels: FEMNIST and FEMNIST with g = ‖·‖1.]
Results using asynchronous updates. To illustrate the advantage of asyncFedDR over FedDR, we conduct another experiment, training on the MNIST dataset with 20 users. Since we run these experiments on computing nodes with identical configurations, we simulate computing-power discrepancy between users by adding a variable delay to each user's update process, such that the fastest user may be up to twice as fast as the slowest one.
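The paper does not spell out the exact delay-injection mechanism, but one simple way to emulate such a 2x computing-power gap is sketched below (the names and the uniform speed factors are our assumptions).

```python
import random
import time

NUM_USERS = 20
# Each user gets a fixed slowdown factor in [1.0, 2.0]: the slowest user's local
# update takes up to twice as long as the fastest user's.
speed_factor = {i: random.uniform(1.0, 2.0) for i in range(NUM_USERS)}

def delayed_local_update(user_id, base_seconds, do_update):
    time.sleep(base_seconds * speed_factor[user_id])  # simulated heterogeneous compute time
    return do_update(user_id)
```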
[Figure 6 panel: MNIST.]
The results of the two variants are presented in Figure 6; see Supp. Doc. C for more examples. We can see that asyncFedDR achieves better performance than FedDR in terms of training time, which illustrates the advantage of asynchronous updates under heterogeneous computing power.
Acknowledgments and Disclosure of Funding
The work of Quoc Tran-Dinh is partially supported by the Office of Naval Research (ONR), grant No. N00014-20-1-2088. The authors would also like to thank all the anonymous reviewers and the ACs for their constructive comments to improve the paper. | 1. What is the focus and contribution of the paper on federated learning?
2. What are the strengths of the proposed method, particularly in its ability to handle nonconvex objectives?
3. Do you have any concerns or questions regarding the method's reliance on stochastic approximations?
4. How does the reviewer assess the clarity and readability of the paper's content, particularly in terms of the intuition behind certain quantities and variables?
5. Are there any concerns regarding the deferral of the full statement of convergence results to the appendix, and the lack of discussion on the cost of inexact proximal evaluations? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a new method for federated learning with a nonconvex but smooth objective function. The proposed method is based on Douglas Rachford Splitting to decompose the objective and randomized coordinate methodology to allow a random selection of users to update at each iteration. An extension to asynchronous settings is presented, allowing modeling of communication delays and similar practical considerations. The convergence of these methods (potential using inexact proximal calculations) is bounded by the typical O(\epsilon^-2) rate. Reasonable numerics are given showing the potential effectiveness of such a scheme on synthetic and real datasets.
Review
- The use of $G_\beta(\tilde{x})$ in Definition 2.2 as a stochastic quantity is confusing. In (4), $G_\beta(x)$ is defined deterministically. Is this definition meaning that a stochastic approximation of $G_\beta(x)$ generated by some algorithm is small? Or is $\tilde{x}$ not a point in the domain of $F$ but rather a random variable that tends to have small $G_\beta(\tilde{x})$?
- The intuition for each $x_i$ and $y_i$ used in FedDR relies on the series of reformulations presented in Appendix A. Including some interpretation of these quantities in the main text would help readability.
- It is interesting that the different components do not need to be sampled uniformly in Assumption 3.1. Some intuition on how $p_i \neq p_j$ is handled without leading to bias would be helpful.
- Deferring the full statement of convergence results to the appendix is disappointing. Guarantees under inexact proximal evaluations are one of the work's claimed contributions but do not appear anywhere in the main text. In particular, a full description of the cost of an inexact method would include the cost of the SGD subroutine employed to approximate each strongly convex subproblem. This cost seems particularly important to include since the $\epsilon_{i,k}$ choice recommended in the appendix is quite small (namely, $\epsilon/(k+1)$). Including such costs would likely lead to a runtime worse than the claimed $O(\epsilon^{-2})$ rate.
NIPS | Title
FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
Abstract
We develop two new algorithms, called, FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination between a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous manner, making them more practical. These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity. In fact, our new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods over existing algorithms on synthetic and real datasets.
1 Introduction
Training machine learning models in a centralized fashion becomes more challenging and marginally inaccessible for a large number of users, especially when the size of datasets and models is growing substantially larger. Consequently, training algorithms using decentralized and distributed approaches comes in as a natural replacement. Among several approaches, federated learning (FL) has received tremendous attention in the past few years since it was first introduced in [18, 30]. In this setting, a central server coordinates between many local users (also called agents or devices) to perform their local updates, then the global model will get updated, e.g., by averaging or aggregating local models.
Challenges. FL provides a promising solution for many machine learning applications such as learning over smartphones or across organizations, and internet of things, where privacy protection is one of the most critical requirements. However, this training mechanism faces a number of fundamental challenges, see, e.g., [31]. First, when the number of users gets substantially large, it creates communication bottleneck during model exchange process between server and users. Second, the local data stored in each local user may be different in terms of sizes and distribution which poses a challenge: data or statistical heterogeneity. Third, the variety of users with different local storage, computational power, and network connectivity participating into the system also creates a major challenge, known as system heterogeneity. This challenge also causes unstable connection
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
between server and users, where some users may be disconnected from the server or simply dropped out during training. In practice, we can expect only a subset of users to participate in each round of communication. Another challenge in FL is privacy concern. Accessing and sharing local raw data is not permitted in FL. In addition, distributed methods exchange the objective gradient of local users, and private data can be exposed from the shared model such as the objective gradients [51]. Therefore, FL methods normally send the global model to each user at the start of each communication round, each user will perform its local update and send back only the necessary update for aggregation.
Our goal and approach. Our goal in this paper is to further and simultaneously address these fundamental challenges by proposing two new algorithms to train the underlying common optimization model in FL. Our approach relies on a novel combination between randomized block-coordinate strategy, nonconvex Douglas-Rachford (DR) splitting, and asynchronous implementation. While each individual technique or partial combinations is not new, our combination of three as in this paper appears to be the first in the literature. To the best of our knowledge, this is the first work developing randomized block-coordinate DR splitting methods for nonconvex composite FL optimization models, and they are fundamentally different from some works in the convex setting, e.g., [7, 8].
Contribution. Our contribution can be summarized as follows.
(a) We develop a new FL algorithm, called FedDR (Federated Douglas-Rachford), by combining the well-known DR splitting technique and randomized block-coordinate strategy for the common nonconvex composite optimization problem in FL. Our algorithm can handle nonsmooth convex regularizers and allows inexact evaluation of the underlying proximal operators as in FedProx or FedPD. It also achieves the best known O ( ε−2 )
communication complexity for finding a stationary point under standard assumptions (Assumptions 2.1- 2.2), where ε is a given accuracy. More importantly, unlike FedSplit [33] and FedPD [49], which require full user participation to achieve convergence, our analysis does allow partial participation by selecting a subset of users to perform update at each communication round. (b) Next, we propose an asynchronous algorithm, asyncFedDR, where each user can asynchronously perform local update and periodically send the update to the server for proximal aggregation. We show that asyncFedDR achieves the same communication complexity O ( ε−2 )
as FedDR (up to a constant factor) under the same standard assumptions. This algorithm is expected to simultaneously address all challenges discussed above.
Let us emphasize some key points of our contribution. First, the best known O ( ε−2 )
communication complexity of our methods matches the lower bound complexity up to a constant factor as shown in [49], even with inexact evaluation of the objective proximal operators. Second, our methods rely on a DR splitting technique for nonconvex optimization and can handle possibly nonsmooth convex regularizers, which allows us to deal with a larger class of applications and with constraints [47]. Furthermore, it can also handle both statistical and system heterogeneity as discussed in FedSplit [33] and FedPD [49]. However, FedSplit only considers the convex case, and both FedSplit and FedPD require all users to update at each communication round, making them less practical and applicable in FL. Our methods only require a subset of users or even one user to participate in each communication round as in FedAvg or FedProx. In addition, our aggregation step on the server is different from most existing works due to a proximal step on the regularizer. It is also different from [47]. Third, as FedProx [23], we allow inexact evaluation of users’ proximal operators with any local solver (e.g., local SGD or variance reduced methods) and with adaptive accuracies. Finally, requiring synchronous aggregation at the end of each communication round may lead to slow-down in training due to the heterogeneity in computing power and communication capability of local users. It is natural to have asynchronous update from local users as in, e.g., [34, 35, 39]. Our asynchronous variant, asyncFedDR, can fairly address this challenge. Moreover, it uses a general probabilistic model recently introduced in [5], which allows us to capture the variety of asynchronous environments and architectures compared to existing methods, e.g., [39, 44].
Related work and comparison. Federated Averaging (FedAvg) is perhaps the earliest method used in FL. In FedAvg, users perform stochastic gradient descent (SGD) updates for a number of epochs then send updated models to server for aggregation. FedAvg’s practical performance has been shown in many early works, e.g., [18, 29, 48] and tends to become the most popular method for solving FL applications. [26] show that local SGD where users perform a number of local updates before global communication takes place as in FedAvg may offer benefit over minibatch SGD. Similar comparison between minibatch SGD and local SGD has been done in [42, 43]. Analyzing convergence of FedAvg
was very challenging at its early time due to the complexity in its update as well as data heterogeneity. One of the early attempt to show the convergence of FedAvg is in [39] for convex problems under the iid data setting and a set of assumptions. [45] also considers local SGD in the nonconvex setting. Without using an additional bounded gradient assumption as in [39, 45], [41] improves the complexity for the general nonconvex setting while [11] uses a Polyak-Łojasiewicz (PL) condition to improve FedAvg’s convergence results. In heterogeneous data settings, [17] analyzes local GD, where users performs gradient descent (GD) updates instead of SGD. The analysis of FedAvg for non-iid data is given in [24]. The analysis of local GD/SGD for nonconvex problems has been studied in [13]. However, FedAvg might not converge with non-iid data as shown in [33, 49, 50].
FedProx [23] is an extension of FedAvg, which deals with heterogeneity in federated networks by introducing a proximal term to the objective in local updates to improve stability. FedProx has been shown to achieve better performance than FedAvg in heterogeneous setting. Another method to deal with data heterogeneity is SCAFFOLD [16] which uses a control variate to correct the “client-drift" in local update of FedAvg. MIME [15] is another framework that uses control variate to improve FedAvg for heterogeneous settings. However, SCAFFOLD and MIME require to communicate extra information apart from local models. Compared to aforementioned works, our methods deal with nonconvex problems under standard assumptions and with composite settings.
FedSplit [33] instead employs a Peaceman-Rachford splitting scheme to solve a constrained reformulation of the original problem. In fact, FedSplit can be viewed as a variant of Tseng’s splitting scheme [1] applied to FL. [33] show that FedSplit can find a solution of the FL problem under only convexity without imposing any additional assumptions on system or data homogeneity. [49] proposes FedPD, which is essentially a variant of the standard augmented Lagrangian method in nonlinear optimization. Other algorithms for FL can be found, e.g., in [6, 10, 12, 14, 25, 46].
Our approach in this paper relies on nonconvex DR splitting method, which can handle the heterogeneity as discussed in [33]. While the DR method is classical, its nonconvex variants have been recently studied e.g., in [9, 21, 40]. However, the combination of DR and randomized block-coordinate strategy remains limited [7, 8] even in the convex settings. Alternatively, asynchronous algorithms have been extensively studied in the literature, also for FL, see, e.g., [2, 34, 35]. For instance, a recent work [44] analyzes an asynchronous variant of FedAvg under bounded delay assumption and constraint on the number of local updates. [39] proposes an asynchronous local SGD to solve convex problems under iid data. However, to our best knowledge, there exists no asynchronous method using DR splitting techniques with convergence guarantee for FL. In addition, most existing algorithms only focus on non-composite settings. Hence, our work here appears to be the first.
Content. The rest of this paper is organized as follows. Section 2 states our FL optimization model and our assumptions. Section 3 develops FedDR and analyzes its convergence. Section 4 considers an asynchronous variant, asyncFedDR. Section 5 is devoted for numerical experiments. Due to space limit, all technical details and proofs can be found in Supplementary Document (Supp. Doc.).
2 Nonconvex Optimization Models in Federated Learning
The underlying optimization model of many FL applications can be written into the following form:
\[
\min_{x\in\mathbb{R}^p} \Big\{ F(x) := f(x) + g(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x) + g(x) \Big\}, \tag{1}
\]
where n is the number of users, and each fi is a local loss of the i-th user, which is assumed to be nonconvex and L-smooth (see Assumptions 2.1 and 2.2 below), and g is a proper, closed, and convex regularizer. Apart from these assumptions, we will not make any additional assumption on (1). We emphasize that the use of regularizers g has been motivated in several works, including [47].
Let dom(F ) := {x ∈ Rp : F (x) < +∞} be the domain of F and ∂g be the subdifferential of g [1]. Since (1) is nonconvex, we only expect to find a stationary point, which is characterized by the following optimality condition. Definition 2.1. If 0 ∈ ∇f(x∗) + ∂g(x∗), then x∗ is called a [first-order] stationary point of (1).
The algorithms for solving (1) developed in this paper will rely on the following assumptions. Assumption 2.1 (Boundedness from below). dom(F ) 6= ∅ and F ? := infx∈Rp F (x) > −∞.
Assumption 2.2 (L-smoothness). All functions fi(·) for i ∈ [n] := {1, · · · , n} are L-smooth, i.e., fi is continuously differentiable and there exists L ∈ (0,+∞) such that
‖∇fi(x)−∇fi(y)‖ ≤ L‖x− y‖, ∀x, y ∈ dom(fi). (2)
Assumptions 2.1 and 2.2 are very standard in nonconvex optimization. Assumption 2.1 guarantees the well-definedness of (1) and is independent of algorithms. Assuming the same Lipschitz constant L for all fi is not restrictive since if fi is Li-smooth, then by scaling variables of its constrained formulation (see (11) in Supp. Doc.), we can get the same Lipschitz constant L of all fi.
Proximal operators and evaluation. Our methods make use of the proximal operators of both fi and g. Although fi is L-smooth and nonconvex, we still define its proximal operator as
\[
\mathrm{prox}_{\eta f_i}(x) := \arg\min_{y} \Big\{ f_i(y) + \tfrac{1}{2\eta}\|y - x\|^2 \Big\}, \tag{3}
\]
where η > 0. Even though f_i is nonconvex, under Assumption 2.2, if we choose 0 < η < 1/L, then prox_{ηf_i} is well-defined and single-valued. Evaluating prox_{ηf_i} requires solving a strongly convex program. If prox_{ηf_i} can only be computed approximately, up to an accuracy ε_i ≥ 0, to obtain x^+, we write x^+ :≈ prox_{ηf_i}(x), meaning that ‖x^+ − prox_{ηf_i}(x)‖ ≤ ε_i. Note that, instead of an absolute error, one can also use a relative error as ‖x^+ − prox_{ηf_i}(x)‖ ≤ ε_i‖x^+ − x‖ as in [37]. For the convex function g, its proximal operator prox_{ηg} is defined in the same way as (3). Evaluating prox_{ηf_i} can be done by various existing methods, including local SGD and accelerated GD-type algorithms. However, this is not our focus in this paper, and therefore we do not specify the subsolver for evaluating prox_{ηf_i}.
Gradient mapping. As usual, let us define the following gradient mapping of F in (1):
\[
G_\eta(x) := \tfrac{1}{\eta}\big( x - \mathrm{prox}_{\eta g}(x - \eta\nabla f(x)) \big), \quad \eta > 0. \tag{4}
\]
Then, the optimality condition 0 ∈ ∇f(x^*) + ∂g(x^*) of (1) is equivalent to G_η(x^*) = 0. However, in practice, we often wish to find an ε-approximate stationary point of (1), defined as follows. Definition 2.2. If x̃ ∈ dom(F) satisfies E[‖G_η(x̃)‖^2] ≤ ε^2, then x̃ is called an ε-stationary point of (1), where the expectation is taken over all the randomness generated by the underlying algorithm.
Note that, for Gη(x̃) to be well-defined, we require x̃ ∈ dom(F ). In our algorithms below, this requirement is fulfilled if x̃ ∈ dom(f), which is often satisfied in practice as dom(f) = Rp.
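For instance, with the ℓ1-regularizer g(x) = λ‖x‖1 used in our experiments, Gη(x) in (4) can be evaluated via soft-thresholding; a minimal sketch (with illustrative names) is:

```python
import numpy as np

def prox_l1(v, t):
    # prox_{t*||.||_1}(v): soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gradient_mapping(x, grad_f, eta, lam):
    # G_eta(x) = (x - prox_{eta*g}(x - eta*grad_f(x))) / eta  with  g = lam*||.||_1, cf. (4).
    return (x - prox_l1(x - eta * grad_f(x), eta * lam)) / eta
```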
3 FedDR Algorithm and Its Convergence Guarantee
Prior to our work, FedSplit [33] exploits similar update steps as ours by adopting the PeacemanRachford splitting method to solve the convex and non-composite instances of (1). FedSplit can overcome some of the key challenges as discussed earlier. Following this idea, we take the advantages of the DR splitting method to first derive a new variant to handle the nonconvex composite problem (1). This new algorithm is synchronous and we call it FedDR. The central idea is as follows: First, we reformulate (1) into (12) by duplicating variables. Next, we apply a DR splitting scheme to the resulting problem. Finally, we combine such a scheme with a randomized block-coordinate strategy.
The complete algorithm is presented in Algorithm 1, where its full derivation is in Supp. Doc. A.1.
Let us make the following remarks. Firstly, FedDR mainly updates three sequences {x̄^k}, {x^k_i}, and {y^k_i}. While x̄^k is an averaged model that approximately minimizes the global objective function F, the x^k_i act as local models trying to optimize a regularized local loss function w.r.t. their local data distributions, and y^k_i keeps track of the residuals from the local models to the global one. Secondly, we allow x^k_i to be an approximation of prox_{ηf_i}(y^k_i) up to an accuracy ε_{i,k} ≥ 0 as defined in (3), i.e., ‖x^k_i − prox_{ηf_i}(y^k_i)‖ ≤ ε_{i,k} for all i ∈ [n] if k = 0 and for all i ∈ S_{k−1} if k > 0. If ε_{i,k} = 0, then we get the exact evaluation x^k_i := prox_{ηf_i}(y^k_i). Approximately evaluating prox_{ηf_i} can be done, e.g., by local SGD as in FedAvg. Thirdly, Algorithm 1 is different from existing randomized proximal gradient-based methods since we rely on a DR splitting scheme and can handle composite settings. Here, the three iterates y^k_i, x^k_i, and x̂^k_i at Step 5 are updated sequentially, making it challenging to analyze convergence. Lastly, the subset of active users S_k is sampled from a random set-valued mapping Ŝ. As specified in Assumption 3.1, this sampling mechanism covers a wide range of sampling strategies. Clearly, if S_k = [n] and g = 0, then Algorithm 1 reduces to FedSplit, but for the nonconvex case. Hence, our convergence guarantee below remains applicable, and in this case the guarantee holds deterministically. Note that both our model (1) and Algorithm 1 are completely different from [47].
Algorithm 1 (FL with Randomized DR (FedDR))
1: Initialization: Take x^0 ∈ dom(F). Choose η > 0 and α > 0, and accuracies ε_{i,0} ≥ 0 (i ∈ [n]).
   Initialize the server with x̄^0 := x^0 and x̃^0 := x^0. Initialize each user i ∈ [n] with y^0_i := x^0, x^0_i :≈ prox_{ηf_i}(y^0_i), and x̂^0_i := 2x^0_i − y^0_i.
2: For k := 0, · · · , K do
3:   [Active users] Generate a proper realization S_k ⊆ [n] of Ŝ (see Assumption 3.1).
4:   [Communication] Each user i ∈ S_k receives x̄^k from the server.
5:   [Local update] For each user i ∈ S_k: choose ε_{i,k+1} ≥ 0 and update
       y^{k+1}_i := y^k_i + α(x̄^k − x^k_i),  x^{k+1}_i :≈ prox_{ηf_i}(y^{k+1}_i),  and  x̂^{k+1}_i := 2x^{k+1}_i − y^{k+1}_i.
6:   [Communication] Each user i ∈ S_k sends ∆x̂^k_i := x̂^{k+1}_i − x̂^k_i back to the server.
7:   [Server aggregation] The server aggregates x̃^{k+1} := x̃^k + (1/n)∑_{i∈S_k} ∆x̂^k_i.
8:   [Server update] Then, the server updates x̄^{k+1} := prox_{ηg}(x̃^{k+1}).
9: End For
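A minimal Python sketch of one communication round of Algorithm 1 is given below; prox_f[i] and prox_g stand for (possibly inexact) proximal oracles supplied by the users and the server, and all names are illustrative.

```python
def feddr_round(S_k, xbar, state, prox_f, prox_g, alpha, eta, n):
    # One round of Algorithm 1 over the sampled subset S_k of users.
    y, x, xhat = state["y"], state["x"], state["xhat"]
    for i in S_k:
        y[i] = y[i] + alpha * (xbar - x[i])       # Step 5: y-update
        x[i] = prox_f[i](y[i], eta)               # Step 5: (inexact) local proximal step
        new_xhat = 2.0 * x[i] - y[i]              # Step 5: reflection
        state["xtilde"] = state["xtilde"] + (new_xhat - xhat[i]) / n  # Steps 6-7: send/aggregate increment
        xhat[i] = new_xhat
    return prox_g(state["xtilde"], eta)           # Step 8: new global model xbar^{k+1}
```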
3.1 Convergence of Algorithm 1
Let us consider a proper sampling scheme Ŝ of [n], which is a random set-valued mapping with values in 2[n], the collection of all subsets of [n]. Let Sk be an iid realization of Ŝ and Fk := σ(S0, · · · ,Sk) be the σ-algebra generated by S0, · · · ,Sk. We first impose the following assumption about the distribution of our sampling scheme Ŝ. Assumption 3.1. There exist p1, · · · ,pn > 0 such that P ( i ∈ Ŝ ) = pi > 0 for all i ∈ [n].
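Assumption 3.1 is satisfied, for example, by independent (Bernoulli) user sampling, where user i joins the round with its own probability p_i; a small sketch, with names of our choosing, is:

```python
import numpy as np

def sample_users(p):
    # Independent sampling: user i is included with probability p_i > 0, so P(i in S_hat) = p_i.
    # The sampled set may occasionally be empty, in which case the round is a no-op.
    p = np.asarray(p)
    mask = np.random.rand(p.size) < p
    return np.nonzero(mask)[0].tolist()
```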
This assumption covers a large class of sampling schemes as discussed in [36], including non-overlapping uniform and doubly uniform. This assumption guarantees that every user has a non-negligible probability to be updated. Note that p_i = ∑_{S : i∈S} P(S) due to Assumption 3.1. For the sake of notation, we also denote p̂ := min{p_i : i ∈ [n]} > 0. The following theorem characterizes the convergence of Algorithm 1 with inexact evaluation of prox_{ηf_i}. Due to the space limit, we refer the reader to Lemma A.6 in Sup. Doc. for more details about the choice of stepsizes and related constants. The proof of this theorem is deferred to Sup. Doc. A.5.
Theorem 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(x^k_i, y^k_i, x̂^k_i, x̄^k)} be generated by Algorithm 1 using stepsizes α and η defined in (33). Then, the following holds:
\[
\frac{1}{K+1}\sum_{k=0}^{K}\mathbb{E}\big[\|G_\eta(\bar{x}^k)\|^2\big] \ \le\ \frac{C_1[F(x^0)-F^\star]}{K+1} + \frac{1}{n(K+1)}\sum_{k=0}^{K}\sum_{i=1}^{n}\big( C_2\,\epsilon_{i,k}^2 + C_3\,\epsilon_{i,k+1}^2 \big), \tag{5}
\]
where β, ρ_1, and ρ_2 are explicitly defined by (35), and
\[
C_1 := \frac{2(1+\eta L)^2(1+\gamma^2)}{\eta^2\beta}, \qquad C_2 := \rho_1 C_1, \qquad\text{and}\qquad C_3 := \rho_2 C_1 + \frac{(1+\eta L)^2(1+\gamma^2)}{\eta^2\gamma^2}.
\]
Let x̃^K be selected uniformly at random from {x̄^0, · · · , x̄^K} as the output of Algorithm 1. Let the accuracies ε_{i,k} for all i ∈ [n] and k ≥ 0 at Step 5 be chosen such that (1/n)∑_{i=1}^{n}∑_{k=0}^{K+1} ε_{i,k}^2 ≤ M for a given constant M > 0 and all K ≥ 0. Then, if we run Algorithm 1 for at most
\[
K := \Big\lfloor \frac{C_1[F(x^0)-F^\star] + (C_2+C_3)M}{\varepsilon^2} \Big\rfloor \ \equiv\ O\big(\varepsilon^{-2}\big)
\]
iterations, then x̃^K is an ε-stationary point of (1) in the sense of Definition 2.2.
Remark 3.1 (Choice of accuracies ε_{i,k}). To guarantee (1/n)∑_{i=1}^{n}∑_{k=0}^{K+1} ε_{i,k}^2 ≤ M in Theorem 3.1 for a given constant M > 0 and for all K ≥ 0, one can choose, e.g., ε_{i,k}^2 := M/(2(k+1)^2) for all i ∈ [n] and k ≥ 0. In this case, we can easily show that (1/n)∑_{i=1}^{n}∑_{k=0}^{K+1} ε_{i,k}^2 = (M/2)∑_{k=0}^{K+1} 1/(k+1)^2 ≤ M. Note that, instead of using absolute accuracies, one can also use relative accuracies as ε_{i,k}^2 ≤ θ‖x_i^{k+1} − x_i^k‖^2 for a given constant θ > 0, which is more practical, while still achieving a similar convergence guarantee. Such an idea has been widely used in the literature, including [28] (see Supp. Doc. A.7).
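As a concrete instance of Remark 3.1, the schedule below keeps the accumulated inexactness bounded by M (the function name is ours):

```python
def eps_squared(k, M):
    # eps_{i,k}^2 = M / (2*(k+1)^2); summing over k gives (M/2) * sum 1/(k+1)^2 <= M.
    return M / (2.0 * (k + 1) ** 2)
```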
Remark 3.2 (Comparison). Since (1) is nonconvex, our O(ε^{-2}) communication complexity is the state of the art, matching the lower-bound complexity (up to a constant factor) [49]. However, different from the convergence analysis of FedSplit and FedPD [49], our flexible sampling scheme allows us to update only a subset of users at each round and still obtain convergence. This can potentially further alleviate the communication bottleneck [22]. We note that FedSplit is a variant of the Peaceman-Rachford splitting method, i.e., α = 2, and only considers the convex non-composite case, while we use a relaxation parameter α < 2 and target the more general nonconvex composite problem (1).
The following corollary specifies the convergence of Algorithm 1 with a specific choice of stepsizes and exact evaluation of proxηfi , whose proof is in Sup. Doc. A.6.
Corollary 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(x^k_i, y^k_i, x̂^k_i, x̄^k)} be generated by Algorithm 1 using stepsizes α = 1, η = 1/(3L), and p_i = 1/n. Under exact evaluation of prox_{ηf_i}, i.e., ε_{i,k} = 0 for all i ∈ [n] and k ≥ 0, the following bound holds:
\[
\frac{1}{K+1}\sum_{k=0}^{K}\mathbb{E}\big[\|G_\eta(\bar{x}^k)\|^2\big] \ \le\ \frac{160Ln}{3(K+1)}\,[F(x^0)-F^\star]. \tag{6}
\]
Let x̃^K be selected uniformly at random from {x̄^0, · · · , x̄^K} as the output of Algorithm 1. Then, after at most
\[
K := \Big\lfloor \frac{160Ln[F(x^0)-F^\star]}{3\varepsilon^2} \Big\rfloor \ \equiv\ O\big(\varepsilon^{-2}\big)
\]
communication rounds, x̃^K becomes an ε-stationary point of (1) (as defined in Definition 2.2).
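For a quick back-of-the-envelope estimate of this bound, one can evaluate it directly; here, gap stands for an upper bound on F(x^0) − F^*, which is typically unknown in practice and must be estimated:

```python
import math

def feddr_rounds(L, n, gap, eps):
    # Communication rounds from Corollary 3.1 (exact prox, alpha = 1, eta = 1/(3L)).
    return math.floor(160.0 * L * n * gap / (3.0 * eps ** 2))
```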
4 AsyncFedDR and Its Convergence Guarantee
Motivation. Although FedDR has been shown to converge, it is more practical to account for the system heterogeneity of local users. Requiring synchronous aggregation at the end of each communication round may lead to slow down in training. It is natural to have asynchronous update from local users as seen, e.g., in [35, 39]. However, asynchronous implementation remains limited in FL. Here, we propose asyncFedDR, an asynchronous variant of FedDR, and analyze its convergence guarantee. For the sake of our analysis, we only consider Sk := {ik}, the exact evaluation of proxηfi , and bounded delay, but extensions to general Sk and inexact proxηfi are similar to Algorithm 1.
4.1 Derivation of asyncFedDR
Let us first explain the main idea of asyncFedDR. At each iteration k, each user receives a delayed copy x̄^{k−d^k_{i_k}} of x̄^k from the server, with delay d^k_{i_k}. The active user i_k will update its own local model (y^k_i, x^k_i, x̂^k_i) in an asynchronous mode without waiting for others to complete. Once it completes its update, user i_k just sends an increment ∆x̂^k_{i_k} to the server to update the global model, while others may be reading. Overall, the complete asyncFedDR method is presented in Algorithm 2.
In our analysis below, a transition of iteration from k to k + 1 is triggered whenever a user completes its update. Moreover, at Step 3, active user ik is chosen from a realization (ik, dk) of a joint random vector (̂ik, d̂k) at the k-th iteration. Here, we do not assume ik to be uniformly random or independent of the delay dk. This allows Algorithm 2 to capture the variety of asynchronous implementations and architectures. Note that x̄k−d k ik at Step 4 is a delayed version of x̄k, which only exists on the server when user ik is reading. However, right after, x̄k may be updated by another user.
Illustrative example. To better understand the update of asyncFedDR, Figure 1 depicts a simple scenario in which 4 users (C1 - C4) asynchronously perform updates and g(·) = 0. At iteration k = 4, user C4 finishes its update, so the server performs its update. During this process, user C1 starts its update by receiving a global model x̄^{4−d^4_{i_4}} from the server, which is the average of (x̂^4_1, x̂^4_2, x̂^4_3, x̂^4_4). At iteration k = 7, C1 finishes its update. Although x̂_1 and x̂_4 do not change during this time, i.e., x̂^6_1 = x̂^4_1 and x̂^6_4 = x̂^4_4, x̂_2 and x̂_3 have been updated at k = 5, 6 by users C2 and C3, respectively. Therefore, the global model x̄^k used to perform the update at k = 7 is actually aggregated from (x̂^6_1, x̂^4_2, x̂^5_3, x̂^6_4), not (x̂^6_1, x̂^6_2, x̂^6_3, x̂^6_4). In other words, each user receives a delayed estimate x̄^{k−d^k},
where d^k = (d^k_1, · · · , d^k_n) is a delay vector and d^k_i = max{t ∈ [k] : i_t = i}, i.e., the
Algorithm 2 (Asynchronous FedDR (asyncFedDR))
1: Initialization: Take x^0 ∈ dom(F) and choose η > 0 and α > 0.
   Initialize the server with x̄^0 := x^0 and x̃^0 := 0. Initialize each user i ∈ [n] with y^0_i := x^0, x^0_i := prox_{ηf_i}(y^0_i), and x̂^0_i := 2x^0_i − y^0_i.
2: For k := 0, · · · , K do
3:   Select i_k such that (i_k, d^k) is a realization of (î_k, d̂^k).
4:   [Communication] User i_k receives x̄^{k−d^k_{i_k}}, a delayed version of x̄^k with delay d^k_{i_k}.
5:   [Local update] User i_k updates
       y^{k+1}_{i_k} := y^k_{i_k} + α(x̄^{k−d^k_{i_k}} − x^k_{i_k}),  x^{k+1}_{i_k} := prox_{ηf_{i_k}}(y^{k+1}_{i_k}),  and  x̂^{k+1}_{i_k} := 2x^{k+1}_{i_k} − y^{k+1}_{i_k}.
     Other users maintain y^{k+1}_i := y^k_i, x^{k+1}_i := x^k_i, and x̂^{k+1}_i := x̂^k_i for i ≠ i_k.
6:   [Communication] User i_k sends ∆^k_{i_k} := x̂^{k+1}_{i_k} − x̂^k_{i_k} back to the server.
7:   [Server aggregation] The server aggregates x̃^{k+1} := x̃^k + (1/n)∆^k_{i_k}.
8:   [Server update] Then, the server updates x̄^{k+1} := prox_{ηg}(x̃^{k+1}).
9: End For
last time x̂i gets updated up to iteration k. Note that when dki = 0 for all i, Algorithm 2 reduces to its synchronous variant, i.e. a special variant of Algorithm 1 with Sk = {ik}.
4.2 Convergence analysis
Since we treat the active user ik and the delay vector dk jointly at each iteration k as a realization of a joint random vector (̂ik, d̂k), we adopt the probabilistic model from [5] to analyze Algorithm 2. This new model allows us to cope with a more general class of asynchronous variants of our method.
Probabilistic model. Let ξ^k := (i_k, d^k) be a realization of a random vector ξ̂^k := (î_k, d̂^k) containing the user index î_k ∈ [n] and the delay vector d̂^k = (d̂^k_1, · · · , d̂^k_n) ∈ D := {0, 1, · · · , τ}^n at the k-th iteration, respectively. We consider k + 1 random variables that form a random vector ξ̂^{0:k} := (ξ̂^0, · · · , ξ̂^k). We also use ξ^{0:k} = (ξ^0, ξ^1, · · · , ξ^k) for k + 1 possible values of the random vector ξ̂^{0:k}. Let Ω be the sample space of all sequences ω := {(i_k, d^k)}_{k≥0}. We define a cylinder C_k(ξ^{0:k}) := {ω ∈ Ω : (ω_0, · · · , ω_k) = ξ^{0:k}}, where ω_l is the l-th element of ω, and let C_k be the set of all possible C_k(ξ^{0:k}) when ξ^t, t = 0, · · · , k, take all possible values. Let F_k := σ(C_k) be the σ-algebra generated by C_k and F := σ(∪_{k=0}^∞ C_k). We equip each C_k(ξ^{0:k}) with a probability p(ξ^{0:k}) := P(C_k(ξ^{0:k})). Then, (Ω, F, P) forms a probability space. Assume that p(ξ^{0:k}) := P(ξ̂^{0:k} = ξ^{0:k}) > 0. Our conditional probability is defined as p((i, d) | ξ^{0:k}) := P(C_{k+1}(ξ^{0:k+1}))/P(C_k(ξ^{0:k})), where p((i, d) | ξ^{0:k}) := 0 if p(ξ^{0:k}) = 0. We refer to Supp. Doc. B.2 for more details of our probabilistic model.
To analyze Algorithm 2, we impose Assumption 4.1 on the implementation below.
Assumption 4.1. For all i ∈ [n] and ω ∈ Ω, there exists at least one t ∈ {0, 1, · · · , T} with T > 0, such that
\[
\sum_{d \in D} p\big((i, d) \mid \xi^{0:k+t-1}\big) \ \ge\ \hat{p} \quad \text{if } p(\xi^{0:k}) > 0, \tag{7}
\]
for a given p̂ > 0 and any k ≥ 0. Assume also that d^k_i ≤ τ and d^k_{i_k} = 0 for all k ≥ 0 and i, i_k ∈ [n].
Assumption 4.1 implies that during an interval of T iterations, every user has a non-negligible positive probability of being updated. Note that if user i_k is active, then it uses the most recent value with no delay, i.e., d^k_{i_k} = 0 as in Assumption 4.1. Moreover, the bounded delay assumption d^k_i ≤ τ is standard in the analysis of asynchronous algorithms; see, e.g., [5, 32, 34, 35, 44].
Suppose that we choose 0 < α < ᾱ and 0 < η < η̄ in Algorithm 2, where c := (2τ^2 − n)/n^2 is given, and ᾱ > 0 and η̄ > 0 are respectively computed by
\[
\bar{\alpha} := \begin{cases} 1 & \text{if } 2\tau^2 \le n, \\[2pt] \frac{2}{2+c} & \text{otherwise,} \end{cases}
\qquad\text{and}\qquad
\bar{\eta} := \begin{cases} \frac{\sqrt{16-8\alpha-7\alpha^2}-\alpha}{2L(2+\alpha)} & \text{if } 2\tau^2 \le n, \\[4pt] \frac{\sqrt{16-8\alpha-(7+4c+4c^2)\alpha^2}-\alpha}{2L[2+(1+c)\alpha]} & \text{otherwise.} \end{cases} \tag{8}
\]
Next, we introduce the following two constants:
\[
\rho := \begin{cases} \frac{2(1-\alpha)-(2+\alpha)L^2\eta^2-L\alpha\eta}{\alpha\eta n} & \text{if } 2\tau^2 \le n, \\[4pt] \frac{n^2[2(1-\alpha)-(2+\alpha)L^2\eta^2-L\alpha\eta]-\alpha(1+\eta^2L^2)(2\tau^2-n)}{\alpha\eta n^3} & \text{otherwise,} \end{cases}
\qquad
D := \frac{8\alpha^2(1+L^2\eta^2)(\tau^2+2Tn\hat{p}) + 8n^2(1+L^2\eta^2+T\alpha^2\hat{p})}{\hat{p}\,\alpha^2 n^2}. \tag{9}
\]
Then, both ρ and D are positive. We emphasize that, though these formulas look complicated, they are computed explicitly without any tuning. Theorem 4.1 establishes the convergence of Algorithm 2; its analysis is in Supp. Doc. B.
Theorem 4.1. Suppose that Assumptions 2.1, 2.2, and 4.1 hold for (1). Let ᾱ, η̄, ρ, and D be given by (8) and (9), respectively. Let {(x^k_i, y^k_i, x̄^k)} be generated by Algorithm 2 with stepsizes α ∈ (0, ᾱ) and η ∈ (0, η̄). Then, the following bound holds:
\[
\frac{1}{K+1}\sum_{k=0}^{K}\mathbb{E}\big[\|G_\eta(\bar{x}^k)\|^2\big] \ \le\ \frac{\hat{C}\,[F(x^0)-F^\star]}{K+1}, \tag{10}
\]
where \(\hat{C} := \frac{2(1+\eta L)^2 D}{n\eta^2\rho} > 0\) depends on n, L, η, α, τ, T, and p̂.
Let x̃^K be selected uniformly at random from {x̄^0, · · · , x̄^K} as the output of Algorithm 2. Then, after at most K := O(ε^{-2}) iterations, x̃^K is an ε-stationary point of (1) as in Definition 2.2.
Remark 4.1. From Theorem 4.1, we can see that asyncFedDR achieves the same worst-case communication complexity O(ε^{-2}) (up to a constant factor) as FedDR, but with smaller α and η.
5 Numerical Experiments
To evaluate the performance of FedDR and asyncFedDR, we conduct multiple experiments using both synthetic and real datasets. Since most existing methods are developed for non-composite problems, we also implement three other methods: FedAvg, FedProx, and FedPD to compare for this setting. We use training loss, training accuracy, and test accuracy as our performance metrics.
Implementation. To compare synchronous algorithms, we reuse the implementation of FedAvg and FedProx in [23] and implement FedDR and FedPD on top of it. To conduct the asynchronous examples, we implement our algorithms based on the asynchronous framework in [3]. All experiments are run on a Linux-based server with multiple nodes and configuration: 24-core 2.50GHz Intel processors, 30M cache, and 256GB RAM.
Models and hyper-parameters selection. Our models are neural networks, and their detail is given in Supp. Doc. C. As in [23], we use the same local solver (SGD) for all algorithms and run the local updates for 20 epochs. Parameters for each algorithm such as µ for FedProx, η for FedPD, and α and η for FedDR are tuned from a wide range of values. For each dataset, we pick the parameters that work best for each algorithm and plot their performance on the chosen parameters.
Results on synthetic datasets. We compare these algorithms using synthetic dataset in both iid and non-iid settings. We follow the data generation procedures described in [23, 38] to generate one iid dataset synthetic-iid and three non-iid datasets: synthetic-(r,s) for (r, s) =
{(0, 0), (0.5, 0.5), (1, 1)}. We first compare these algorithms without using the user sampling scheme, i.e. all users perform update at each communication round, and for non-composite model of (1).
We report the performance of these algorithms on one non-iid dataset in Figure 2, but more results can be found in Sup. Doc. C. FedDR and FedPD are comparable in these datasets and they both outperform FedProx and FedAvg. FedProx works better than FedAvg which aligns with the results in [23]. However, when comparing on more datasets, our algorithm overall performs better than others.
Now we compare these algorithms where we sample 10 users out of 30 to perform update at each communication round for FedAvg, FedProx, and FedDR while we use all users for FedPD since FedPD only has convergence guarantee for this setting. In this test, the evaluation metric is plotted in terms of the number of bytes communicated between users and server at each communication round. Note that using user sampling scheme in this case can save one-third of communication cost each round. Figure 3 depicts the performance of 4 algorithms on one dataset, see also Sup. Doc. C.
From Figure 3, FedDR performs well compared to others. FedProx using user sampling scheme performs better and is slightly behind FedPD while FedDR, FedPD, and FedProx outperform FedAvg.
Results on FEMNIST datasets. FEMNIST [4] is an extended version of the MNIST dataset [19] where the data is partitioned by the writer of the digit/character. It has a total of 62 classes (10 digits, 26 upper-case and 26 lower-case letters) with over 800,000 samples. In this example, there are a total of 200 users, and we sample 50 users to perform updates at each round of communication for FedAvg, FedProx, and FedDR, while we use all users to perform updates for FedPD. Fig. 4 depicts the performance of the 4 algorithms in terms of communication cost. From Fig. 4, FedDR can achieve a lower loss value and higher training accuracy than the other algorithms, while FedPD can reach the same test accuracy as ours at the end. Overall, FedDR seems to work better than the other algorithms in this test.
Results with the ℓ1-norm regularizer. We now consider the composite setting with g(x) := 0.01‖x‖1 to verify Algorithm 1 under different inexactness levels ε_{i,k}, obtained by varying the learning rate (lr) and the number of local SGD epochs used to approximately evaluate prox_{ηf_i}(y^k_i). We run Algorithm 1 on the FEMNIST dataset, and the results are shown in Figure 5.
We observe that Algorithm 1 works best when the local learning rate is 0.003, which aligns with [23] for the non-composite case. It also performs better when we decrease ε_{i,k} by increasing the number of epochs used to evaluate prox_{ηf_i}. This performance confirms our theoretical results in Supp. Doc. A.5.
[Figure 5 panels: FEMNIST and FEMNIST with g = ‖·‖1.]
Results using asynchronous updates. To illustrate the advantage of asyncFedDR over FedDR, we conduct another experiment, training on the MNIST dataset with 20 users. Since we run these experiments on computing nodes with identical configurations, we simulate computing-power discrepancy between users by adding a variable delay to each user's update process, such that the fastest user may be up to twice as fast as the slowest one.
[Figure 6 panel: MNIST.]
The results of the two variants are presented in Figure 6; see Supp. Doc. C for more examples. We can see that asyncFedDR achieves better performance than FedDR in terms of training time, which illustrates the advantage of asynchronous updates under heterogeneous computing power.
Acknowledgments and Disclosure of Funding
The work of Quoc Tran-Dinh is partially supported by the Office of Naval Research (ONR), grant No. N00014-20-1-2088. The authors would also like to thank all the anonymous reviewers and the ACs for their constructive comments to improve the paper. | 1. What is the focus and contribution of the paper on federated learning?
2. What are the strengths of the proposed approach, particularly in its convergence guarantees?
3. Do you have any concerns regarding the inexactness evaluations of \prox_{\eta f_i}?
4. How does the reviewer assess the total stochastic gradient complexity of the proposed method?
5. What is the significance of warm-start on each subproblem, and how does it affect the overall convergence rate?
6. Can you provide a comparison between the analysis in this work and previous studies, such as ARock?
7. Are there any minor concerns or typos in the paper that need to be addressed? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes FedDR and asyncFedDR for FL, which combines DRS and the randomized BCD strategy. Detailed convergence guarantees are also provided, which seems clear and correct to me. The authors also provide numerical results to support their theoretical claims.
Review
Major concerns:
My first concern is regarding the inexactness evaluations of \prox_{\eta f_i}.
(1) First, in theorems 3.1 and 4.1, it would be much better if that fact that the error tolerances \eps_{i, k} should be square summable (which is stated in the proof in the appendix).
(2) Secondly, in order to have square summable error tolerances, more and more inner loops (e.g., SGD or other solvers) are needed to solve these subproblems. This would make the total stochastic gradient complexity to be quite high. Note that what's presented in theorems 3.1 and 4.1 is the outer iteration complexity.
(3) A possible remedy is to apply warm-start on each subproblem and only solve it for a fixed number of iterations. Then, there will be a "bounded relative error", which will not hurt the overall 1/K convergence rate.
This idea has been applied in the following analysis of inexact preconditioned PDHG (or equivalently, ADMM). More specifically, please refer to Theorem 4 of this paper:
Liu, Yanli, Yunbei Xu, and Wotao Yin. "Acceleration of Primal–Dual Methods by Preconditioning and Simple Subproblem Procedures." Journal of Scientific Computing 86.2 (2021): 1-34.
Since the analysis in the above paper also depends on a Lyapunov function, which looks quite similar to this work (Lemma A.5 and Lemma B.1). I think that this may help improve the overall stochastic gradient complexity of this work.
(4) In the experiments, it should also be mentioned how many inner loops of SGD are applied in FedDR, asyncFedDR, as well as other algorithms. And a comparison of stochastic gradient complexity should be provided, in addition to the communication complexity presented.
Minor concerns:
The analysis also looks similar to that of ARock ([33] in this paper), where a bounded delay is also assumed and a Lyapunov-type analysis is done. Would you please provide some comparisons?
On line 63, [nonsmooth] seems to be a typo.
On line 79, early should be earliest. |
NIPS | Title
FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
Abstract
We develop two new algorithms, called, FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination between a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous manner, making them more practical. These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity. In fact, our new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods over existing algorithms on synthetic and real datasets.
1 Introduction
Training machine learning models in a centralized fashion becomes more challenging and marginally inaccessible for a large number of users, especially when the size of datasets and models is growing substantially larger. Consequently, training algorithms using decentralized and distributed approaches comes in as a natural replacement. Among several approaches, federated learning (FL) has received tremendous attention in the past few years since it was first introduced in [18, 30]. In this setting, a central server coordinates between many local users (also called agents or devices) to perform their local updates, then the global model will get updated, e.g., by averaging or aggregating local models.
Challenges. FL provides a promising solution for many machine learning applications such as learning over smartphones or across organizations, and internet of things, where privacy protection is one of the most critical requirements. However, this training mechanism faces a number of fundamental challenges, see, e.g., [31]. First, when the number of users gets substantially large, it creates communication bottleneck during model exchange process between server and users. Second, the local data stored in each local user may be different in terms of sizes and distribution which poses a challenge: data or statistical heterogeneity. Third, the variety of users with different local storage, computational power, and network connectivity participating into the system also creates a major challenge, known as system heterogeneity. This challenge also causes unstable connection
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
between server and users, where some users may be disconnected from the server or simply dropped out during training. In practice, we can expect only a subset of users to participate in each round of communication. Another challenge in FL is privacy concern. Accessing and sharing local raw data is not permitted in FL. In addition, distributed methods exchange the objective gradient of local users, and private data can be exposed from the shared model such as the objective gradients [51]. Therefore, FL methods normally send the global model to each user at the start of each communication round, each user will perform its local update and send back only the necessary update for aggregation.
Our goal and approach. Our goal in this paper is to further and simultaneously address these fundamental challenges by proposing two new algorithms to train the underlying common optimization model in FL. Our approach relies on a novel combination between randomized block-coordinate strategy, nonconvex Douglas-Rachford (DR) splitting, and asynchronous implementation. While each individual technique or partial combinations is not new, our combination of three as in this paper appears to be the first in the literature. To the best of our knowledge, this is the first work developing randomized block-coordinate DR splitting methods for nonconvex composite FL optimization models, and they are fundamentally different from some works in the convex setting, e.g., [7, 8].
Contribution. Our contribution can be summarized as follows.
(a) We develop a new FL algorithm, called FedDR (Federated Douglas-Rachford), by combining the well-known DR splitting technique and randomized block-coordinate strategy for the common nonconvex composite optimization problem in FL. Our algorithm can handle nonsmooth convex regularizers and allows inexact evaluation of the underlying proximal operators as in FedProx or FedPD. It also achieves the best known O ( ε−2 )
communication complexity for finding a stationary point under standard assumptions (Assumptions 2.1- 2.2), where ε is a given accuracy. More importantly, unlike FedSplit [33] and FedPD [49], which require full user participation to achieve convergence, our analysis does allow partial participation by selecting a subset of users to perform update at each communication round. (b) Next, we propose an asynchronous algorithm, asyncFedDR, where each user can asynchronously perform local update and periodically send the update to the server for proximal aggregation. We show that asyncFedDR achieves the same communication complexity O ( ε−2 )
as FedDR (up to a constant factor) under the same standard assumptions. This algorithm is expected to simultaneously address all challenges discussed above.
Let us emphasize some key points of our contribution. First, the best known O ( ε−2 )
communication complexity of our methods matches the lower bound complexity up to a constant factor as shown in [49], even with inexact evaluation of the objective proximal operators. Second, our methods rely on a DR splitting technique for nonconvex optimization and can handle possibly nonsmooth convex regularizers, which allows us to deal with a larger class of applications and with constraints [47]. Furthermore, it can also handle both statistical and system heterogeneity as discussed in FedSplit [33] and FedPD [49]. However, FedSplit only considers the convex case, and both FedSplit and FedPD require all users to update at each communication round, making them less practical and applicable in FL. Our methods only require a subset of users or even one user to participate in each communication round as in FedAvg or FedProx. In addition, our aggregation step on the server is different from most existing works due to a proximal step on the regularizer. It is also different from [47]. Third, as FedProx [23], we allow inexact evaluation of users’ proximal operators with any local solver (e.g., local SGD or variance reduced methods) and with adaptive accuracies. Finally, requiring synchronous aggregation at the end of each communication round may lead to slow-down in training due to the heterogeneity in computing power and communication capability of local users. It is natural to have asynchronous update from local users as in, e.g., [34, 35, 39]. Our asynchronous variant, asyncFedDR, can fairly address this challenge. Moreover, it uses a general probabilistic model recently introduced in [5], which allows us to capture the variety of asynchronous environments and architectures compared to existing methods, e.g., [39, 44].
Related work and comparison. Federated Averaging (FedAvg) is perhaps the earliest method used in FL. In FedAvg, users perform stochastic gradient descent (SGD) updates for a number of epochs and then send the updated models to the server for aggregation. FedAvg's practical performance has been shown in many early works, e.g., [18, 29, 48], and it has become the most popular method for solving FL applications. [26] show that local SGD, where users perform a number of local updates before global communication takes place as in FedAvg, may offer benefits over minibatch SGD. A similar comparison between minibatch SGD and local SGD has been done in [42, 43]. Analyzing the convergence of FedAvg was very challenging in its early days due to the complexity of its update as well as data heterogeneity. One of the early attempts to show the convergence of FedAvg is [39], for convex problems under the iid data setting and a set of assumptions. [45] also considers local SGD in the nonconvex setting. Without using an additional bounded-gradient assumption as in [39, 45], [41] improves the complexity for the general nonconvex setting, while [11] uses a Polyak-Łojasiewicz (PL) condition to improve FedAvg's convergence results. In heterogeneous data settings, [17] analyzes local GD, where users perform gradient descent (GD) updates instead of SGD. The analysis of FedAvg for non-iid data is given in [24]. The analysis of local GD/SGD for nonconvex problems has been studied in [13]. However, FedAvg might not converge with non-iid data as shown in [33, 49, 50].
FedProx [23] is an extension of FedAvg that deals with heterogeneity in federated networks by introducing a proximal term to the objective in local updates to improve stability. FedProx has been shown to achieve better performance than FedAvg in heterogeneous settings. Another method to deal with data heterogeneity is SCAFFOLD [16], which uses a control variate to correct the "client-drift" in the local updates of FedAvg. MIME [15] is another framework that uses control variates to improve FedAvg in heterogeneous settings. However, SCAFFOLD and MIME require communicating extra information apart from local models. Compared to the aforementioned works, our methods deal with nonconvex problems under standard assumptions and with composite settings.
FedSplit [33] instead employs a Peaceman-Rachford splitting scheme to solve a constrained reformulation of the original problem. In fact, FedSplit can be viewed as a variant of Tseng’s splitting scheme [1] applied to FL. [33] show that FedSplit can find a solution of the FL problem under only convexity without imposing any additional assumptions on system or data homogeneity. [49] proposes FedPD, which is essentially a variant of the standard augmented Lagrangian method in nonlinear optimization. Other algorithms for FL can be found, e.g., in [6, 10, 12, 14, 25, 46].
Our approach in this paper relies on a nonconvex DR splitting method, which can handle the heterogeneity as discussed in [33]. While the DR method is classical, its nonconvex variants have been studied only recently, e.g., in [9, 21, 40]. However, the combination of DR and a randomized block-coordinate strategy remains limited [7, 8] even in the convex setting. Alternatively, asynchronous algorithms have been extensively studied in the literature, also for FL; see, e.g., [2, 34, 35]. For instance, a recent work [44] analyzes an asynchronous variant of FedAvg under a bounded delay assumption and a constraint on the number of local updates. [39] proposes an asynchronous local SGD to solve convex problems under iid data. However, to the best of our knowledge, there exists no asynchronous method using DR splitting techniques with a convergence guarantee for FL. In addition, most existing algorithms only focus on non-composite settings. Hence, our work here appears to be the first.
Content. The rest of this paper is organized as follows. Section 2 states our FL optimization model and our assumptions. Section 3 develops FedDR and analyzes its convergence. Section 4 considers an asynchronous variant, asyncFedDR. Section 5 is devoted to numerical experiments. Due to the space limit, all technical details and proofs can be found in the Supplementary Document (Supp. Doc.).
2 Nonconvex Optimization Models in Federated Learning
The underlying optimization model of many FL applications can be written into the following form:
min_{x ∈ R^p} { F(x) := f(x) + g(x) = (1/n) ∑_{i=1}^{n} f_i(x) + g(x) },   (1)
where n is the number of users, and each fi is a local loss of the i-th user, which is assumed to be nonconvex and L-smooth (see Assumptions 2.1 and 2.2 below), and g is a proper, closed, and convex regularizer. Apart from these assumptions, we will not make any additional assumption on (1). We emphasize that the use of regularizers g has been motivated in several works, including [47].
Let dom(F ) := {x ∈ Rp : F (x) < +∞} be the domain of F and ∂g be the subdifferential of g [1]. Since (1) is nonconvex, we only expect to find a stationary point, which is characterized by the following optimality condition. Definition 2.1. If 0 ∈ ∇f(x∗) + ∂g(x∗), then x∗ is called a [first-order] stationary point of (1).
The algorithms for solving (1) developed in this paper will rely on the following assumptions. Assumption 2.1 (Boundedness from below). dom(F) ≠ ∅ and F* := inf_{x∈R^p} F(x) > −∞.
Assumption 2.2 (L-smoothness). All functions fi(·) for i ∈ [n] := {1, · · · , n} are L-smooth, i.e., fi is continuously differentiable and there exists L ∈ (0,+∞) such that
‖∇fi(x)−∇fi(y)‖ ≤ L‖x− y‖, ∀x, y ∈ dom(fi). (2)
Assumptions 2.1 and 2.2 are very standard in nonconvex optimization. Assumption 2.1 guarantees the well-definedness of (1) and is independent of algorithms. Assuming the same Lipschitz constant L for all f_i is not restrictive since, if f_i is L_i-smooth, then by scaling the variables of its constrained formulation (see (11) in Supp. Doc.), we can get the same Lipschitz constant L for all f_i.
Proximal operators and evaluation. Our methods make use of the proximal operators of both f_i and g. Although f_i is L-smooth and nonconvex, we still define its proximal operator as
prox_{ηf_i}(x) := argmin_y { f_i(y) + (1/(2η)) ‖y − x‖^2 },   (3)
where η > 0. Even if f_i is nonconvex, under Assumption 2.2, if we choose 0 < η < 1/L, then prox_{ηf_i} is well-defined and single-valued. Evaluating prox_{ηf_i} requires solving a strongly convex program. We say that prox_{ηf_i} is computed approximately up to an accuracy ε_i ≥ 0, denoted by x^+ :≈ prox_{ηf_i}(x), if ‖x^+ − prox_{ηf_i}(x)‖ ≤ ε_i. Note that instead of an absolute error, one can also use a relative error ‖x^+ − prox_{ηf_i}(x)‖ ≤ ε_i ‖x^+ − x‖ as in [37]. For the convex function g, its proximal operator prox_{ηg} is defined in the same way as (3). Evaluating prox_{ηf_i} can be done by various existing methods, including local SGD and accelerated GD-type algorithms. However, this is not our focus in this paper, and therefore we do not specify the subsolver for evaluating prox_{ηf_i}.
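As an illustration only (the paper does not prescribe a subsolver), the following Python sketch evaluates prox_{ηf_i} approximately by running a few gradient-descent steps on the strongly convex subproblem in (3); the function and parameter names, as well as the step-size heuristic, are assumptions of this sketch.

import numpy as np

def inexact_prox(grad_fi, x, eta, num_steps=20, lr=None):
    """Approximately evaluate prox_{eta * f_i}(x) by gradient descent on
    y -> f_i(y) + (1/(2*eta)) * ||y - x||^2, which is strongly convex when eta < 1/L.
    grad_fi: callable returning the gradient of f_i at a point (assumption)."""
    y = x.copy()
    if lr is None:
        lr = eta / 2.0          # conservative step size; a heuristic, not from the paper
    for _ in range(num_steps):  # more steps -> smaller accuracy eps_{i,k}
        g = grad_fi(y) + (y - x) / eta
        y = y - lr * g
    return y

In practice, the number of inner steps (or epochs of local SGD) controls the accuracy ε_{i,k} discussed below.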
Gradient mapping. As usual, let us define the following gradient mapping of F in (1):
G_η(x) := (1/η) ( x − prox_{ηg}(x − η∇f(x)) ),   η > 0.   (4)
Then, the optimality condition 0 ∈ ∇f(x*) + ∂g(x*) of (1) is equivalent to G_η(x*) = 0. However, in practice, we often wish to find an ε-approximate stationary point of (1), defined as follows. Definition 2.2. If x̃ ∈ dom(F) satisfies E[ ‖G_η(x̃)‖^2 ] ≤ ε^2, then x̃ is called an ε-stationary point of (1), where the expectation is taken over all the randomness generated by the underlying algorithm.
Note that, for Gη(x̃) to be well-defined, we require x̃ ∈ dom(F ). In our algorithms below, this requirement is fulfilled if x̃ ∈ dom(f), which is often satisfied in practice as dom(f) = Rp.
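For intuition, a minimal Python sketch of evaluating the gradient mapping (4) is given below; grad_f and prox_g are user-supplied oracles whose names are illustrative, not part of the paper.

import numpy as np

def gradient_mapping(x, grad_f, prox_g, eta):
    # G_eta(x) = (1/eta) * (x - prox_{eta g}(x - eta * grad_f(x)))  -- Eq. (4)
    # grad_f : callable returning the gradient of f at x
    # prox_g : callable (point, step) -> prox_{step * g}(point)
    return (x - prox_g(x - eta * grad_f(x), eta)) / eta

A small value of ‖G_η(x)‖ certifies that x is close to a stationary point of (1).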
3 FedDR Algorithm and Its Convergence Guarantee
Prior to our work, FedSplit [33] exploited similar update steps to ours by adopting the Peaceman-Rachford splitting method to solve the convex and non-composite instances of (1). FedSplit can overcome some of the key challenges discussed earlier. Following this idea, we take advantage of the DR splitting method to first derive a new variant to handle the nonconvex composite problem (1). This new algorithm is synchronous and we call it FedDR. The central idea is as follows: First, we reformulate (1) into (12) by duplicating variables. Next, we apply a DR splitting scheme to the resulting problem. Finally, we combine such a scheme with a randomized block-coordinate strategy.
The complete algorithm is presented in Algorithm 1, where its full derivation is in Supp. Doc. A.1.
Let us make the following remarks. Firstly, FedDR mainly updates three sequences {x̄^k}, {x_i^k}, and {y_i^k}. While x̄^k is an averaged model that approximately minimizes the global objective function F, the x_i^k act as local models trying to optimize a regularized local loss function w.r.t. its local data distribution, and y_i^k keeps track of the residuals from the local models to the global one. Secondly, we allow x_i^k to be an approximation of prox_{ηf_i}(y_i^k) up to an accuracy ε_{i,k} ≥ 0 as defined in (3), i.e., ‖x_i^k − prox_{ηf_i}(y_i^k)‖ ≤ ε_{i,k} for all i ∈ [n] if k = 0 and for all i ∈ S_{k−1} if k > 0. If ε_{i,k} = 0, then we get the exact evaluation x_i^k := prox_{ηf_i}(y_i^k). Approximately evaluating prox_{ηf_i} can be done, e.g., by local SGD as in FedAvg. Thirdly, Algorithm 1 is different from existing randomized proximal gradient-based methods since we rely on a DR splitting scheme and can handle composite settings. Here, the three iterates y_i^k, x_i^k, and x̂_i^k at Step 5 are updated sequentially, making it challenging to analyze convergence. Lastly, the subset of active users S_k is sampled from a random set-valued mapping Ŝ. As specified in Assumption 3.1, this sampling mechanism covers a wide range of sampling strategies. Clearly, if S_k = [n] and g = 0, then Algorithm 1 reduces to FedSplit, but for the nonconvex case; hence, our convergence guarantee below remains applicable, and in that case it holds deterministically. Note that both our model (1) and Algorithm 1 are completely different from [47].
Algorithm 1 (FL with Randomized DR (FedDR))
1: Initialization: Take x^0 ∈ dom(F). Choose η > 0 and α > 0, and accuracies ε_{i,0} ≥ 0 (i ∈ [n]). Initialize the server with x̄^0 := x^0 and x̃^0 := x^0. Initialize each user i ∈ [n] with y_i^0 := x^0, x_i^0 :≈ prox_{ηf_i}(y_i^0), and x̂_i^0 := 2x_i^0 − y_i^0.
2: For k := 0, · · · , K do
3:   [Active users] Generate a proper realization S_k ⊆ [n] of Ŝ (see Assumption 3.1).
4:   [Communication] Each user i ∈ S_k receives x̄^k from the server.
5:   [Local update] For each user i ∈ S_k: choose ε_{i,k+1} ≥ 0 and update
       y_i^{k+1} := y_i^k + α(x̄^k − x_i^k),  x_i^{k+1} :≈ prox_{ηf_i}(y_i^{k+1}),  and  x̂_i^{k+1} := 2x_i^{k+1} − y_i^{k+1}.
6:   [Communication] Each user i ∈ S_k sends Δx̂_i^k := x̂_i^{k+1} − x̂_i^k back to the server.
7:   [Server aggregation] The server aggregates x̃^{k+1} := x̃^k + (1/n) ∑_{i∈S_k} Δx̂_i^k.
8:   [Server update] Then, the server updates x̄^{k+1} := prox_{ηg}(x̃^{k+1}).
9: End For
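For illustration, a compact NumPy sketch of one communication round of Algorithm 1 is given below (exact proximal evaluation for brevity); prox_eta_fi and prox_eta_g stand for the user and server proximal oracles, and all names are assumptions of this sketch rather than the authors' implementation.

import numpy as np

def feddr_round(xbar, xtilde, y, x, xhat, S_k, alpha, prox_eta_fi, prox_eta_g):
    """One round of FedDR (Algorithm 1, Steps 3-8).
    y, x, xhat : lists of per-user iterates y_i^k, x_i^k, xhat_i^k (NumPy arrays).
    S_k        : indices of the active users in this round.
    prox_eta_fi(i, v) ~ prox_{eta f_i}(v);  prox_eta_g(v) ~ prox_{eta g}(v)."""
    n = len(y)
    for i in S_k:
        y[i] = y[i] + alpha * (xbar - x[i])         # Step 5: y_i^{k+1}
        x[i] = prox_eta_fi(i, y[i])                 # Step 5: x_i^{k+1} (possibly inexact)
        new_xhat = 2.0 * x[i] - y[i]                # Step 5: xhat_i^{k+1}
        xtilde = xtilde + (new_xhat - xhat[i]) / n  # Steps 6-7: server aggregation of Delta xhat_i^k
        xhat[i] = new_xhat
    xbar = prox_eta_g(xtilde)                       # Step 8: server update
    return xbar, xtilde, y, x, xhat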
3.1 Convergence of Algorithm 1
Let us consider a proper sampling scheme Ŝ of [n], which is a random set-valued mapping with values in 2^[n], the collection of all subsets of [n]. Let S_k be an iid realization of Ŝ and F_k := σ(S_0, · · · , S_k) be the σ-algebra generated by S_0, · · · , S_k. We first impose the following assumption about the distribution of our sampling scheme Ŝ. Assumption 3.1. There exist p_1, · · · , p_n > 0 such that P(i ∈ Ŝ) = p_i > 0 for all i ∈ [n].
This assumption covers a large class of sampling schemes as discussed in [36], including non-overlapping uniform and doubly uniform sampling. This assumption guarantees that every user has a non-negligible probability of being updated. Note that p_i = ∑_{S: i∈S} P(S) due to Assumption 3.1. For the sake of notation, we also denote p̂ := min{p_i : i ∈ [n]} > 0. The following theorem characterizes the convergence of Algorithm 1 with inexact evaluation of prox_{ηf_i}. Due to the space limit, we refer the reader to Lemma A.6 in Supp. Doc. for more details about the choice of stepsizes and related constants. The proof of this theorem is deferred to Supp. Doc. A.5. Theorem 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(x_i^k, y_i^k, x̂_i^k, x̄^k)} be generated by Algorithm 1 using stepsizes α and η defined in (33). Then, the following holds:
(1/(K+1)) ∑_{k=0}^{K} E[ ‖G_η(x̄^k)‖^2 ] ≤ C_1 [F(x^0) − F*] / (K+1) + (1/(n(K+1))) ∑_{k=0}^{K} ∑_{i=1}^{n} ( C_2 ε_{i,k}^2 + C_3 ε_{i,k+1}^2 ),   (5)
where β, ρ_1, and ρ_2 are explicitly defined by (35), and
C_1 := 2(1+ηL)^2 (1+γ^2) / (η^2 β),   C_2 := ρ_1 C_1,   and   C_3 := ρ_2 C_1 + (1+ηL)^2 (1+γ^2) / (η^2 γ^2).
Let x̃_K be selected uniformly at random from {x̄^0, · · · , x̄^K} as the output of Algorithm 1. Let the accuracies ε_{i,k} for all i ∈ [n] and k ≥ 0 at Step 5 be chosen such that (1/n) ∑_{i=1}^{n} ∑_{k=0}^{K+1} ε_{i,k}^2 ≤ M for a given constant M > 0 and all K ≥ 0. Then, if we run Algorithm 1 for at most
K := ⌊ ( C_1 [F(x^0) − F*] + (C_2 + C_3) M ) / ε^2 ⌋ ≡ O(ε^{-2})
iterations, then x̃_K is an ε-stationary point of (1) in the sense of Definition 2.2.
Remark 3.1 (Choice of accuracies ε_{i,k}). To guarantee (1/n) ∑_{i=1}^{n} ∑_{k=0}^{K+1} ε_{i,k}^2 ≤ M in Theorem 3.1 for a given constant M > 0 and for all K ≥ 0, one can choose, e.g., ε_{i,k}^2 := M / (2(k+1)^2) for all i ∈ [n] and k ≥ 0. In this case, we can easily show that (1/n) ∑_{i=1}^{n} ∑_{k=0}^{K+1} ε_{i,k}^2 = (M/2) ∑_{k=0}^{K+1} 1/(k+1)^2 ≤ M. Note that, instead of using absolute accuracies, one can also use relative accuracies ε_{i,k}^2 ≤ θ ‖x_i^{k+1} − x_i^k‖^2 for a given constant θ > 0, which is more practical, while still achieving a similar convergence guarantee. Such an idea has been widely used in the literature, including [28] (see Supp. Doc. A.7).
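A quick numerical sanity check of this schedule (illustrative Python, not part of the analysis):

# Verify that eps_{i,k}^2 := M / (2*(k+1)^2) keeps the accumulated error below M.
M, K = 1.0, 10**5
total = sum(M / (2.0 * (k + 1) ** 2) for k in range(K + 2))
assert total <= M   # sum_k 1/(k+1)^2 <= pi^2/6 < 2, so total <= M
print(total)        # approximately 0.822 * M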
Remark 3.2 (Comparison). Since (1) is nonconvex, our O(ε^{-2}) communication complexity is the state of the art, matching the lower bound complexity (up to a constant factor) [49]. However, different from the convergence analysis of FedSplit and FedPD [49], our flexible sampling scheme allows us to update a subset of users at each round and still obtain convergence. This can potentially further resolve the communication bottleneck [22]. We note that FedSplit is a variant of the Peaceman-Rachford splitting method, i.e., α = 2, and only considers the convex non-composite case, while we use a relaxation parameter α < 2 and consider the more general nonconvex composite problem (1).
The following corollary specifies the convergence of Algorithm 1 with a specific choice of stepsizes and exact evaluation of prox_{ηf_i}, whose proof is in Supp. Doc. A.6.
Corollary 3.1. Suppose that Assumptions 2.1, 2.2, and 3.1 hold. Let {(x_i^k, y_i^k, x̂_i^k, x̄^k)} be generated by Algorithm 1 using stepsizes α = 1, η = 1/(3L), and p_i = 1/n. Under exact evaluation of prox_{ηf_i}, i.e., ε_{i,k} = 0 for all i ∈ [n] and k ≥ 0, the following bound holds:
(1/(K+1)) ∑_{k=0}^{K} E[ ‖G_η(x̄^k)‖^2 ] ≤ ( 160Ln / (3(K+1)) ) [F(x^0) − F*].   (6)
Let x̃K be selected uniformly at random from {x̄0, · · · , x̄K} as the output of Algorithm 1. Then after at most
K := ⌊ 160Ln [F(x^0) − F*] / (3ε^2) ⌋ ≡ O(ε^{-2}),
communication rounds, x̃K becomes an ε-stationary point of (1) (defined by Definition 2.2).
4 AsyncFedDR and Its Convergence Guarantee
Motivation. Although FedDR has been shown to converge, it is more practical to account for the system heterogeneity of local users. Requiring synchronous aggregation at the end of each communication round may lead to slow-down in training. It is natural to have asynchronous updates from local users as seen, e.g., in [35, 39]. However, asynchronous implementation remains limited in FL. Here, we propose asyncFedDR, an asynchronous variant of FedDR, and analyze its convergence guarantee. For the sake of our analysis, we only consider S_k := {i_k}, the exact evaluation of prox_{ηf_i}, and bounded delay, but extensions to general S_k and inexact prox_{ηf_i} are similar to Algorithm 1.
4.1 Derivation of asyncFedDR
Let us first explain the main idea of asyncFedDR. At each iteration k, each user receives a delayed copy x̄^{k−d_{i_k}^k} of x̄^k from the server with a delay d_{i_k}^k. The active user i_k will update its own local model (y_i^k, x_i^k, x̂_i^k) in an asynchronous mode without waiting for others to complete. Once it completes its update, user i_k just sends an increment Δx̂_{i_k}^k to the server to update the global model, while others may be reading. Overall, the complete asyncFedDR is presented in Algorithm 2.
In our analysis below, a transition from iteration k to k + 1 is triggered whenever a user completes its update. Moreover, at Step 3, the active user i_k is chosen from a realization (i_k, d^k) of a joint random vector (î_k, d̂^k) at the k-th iteration. Here, we do not assume i_k to be uniformly random or independent of the delay d^k. This allows Algorithm 2 to capture a variety of asynchronous implementations and architectures. Note that x̄^{k−d_{i_k}^k} at Step 4 is a delayed version of x̄^k, which only exists on the server when user i_k is reading. However, right after, x̄^k may be updated by another user.
Illustrative example. To better understand the update of asyncFedDR, Figure 1 depicts a simple scenario where 4 users (C1 - C4) asynchronously perform updates and g(·) = 0. At iteration k = 4, user C4 finishes its update so that the server performs updates. During this process, user C1 starts its update by receiving a global model x̄^{4−d_{i_4}^4} from the server, which is the average of (x̂_1^4, x̂_2^4, x̂_3^4, x̂_4^4). At iteration k = 7, C1 finishes its update. Although x̂_1 and x̂_4 do not change during this time, i.e., x̂_1^6 = x̂_1^4 and x̂_4^6 = x̂_4^4, x̂_2 and x̂_3 have been updated at k = 5, 6 by users C2 and C3, respectively. Therefore, the global model x̄^k used to perform the update at k = 7 is actually aggregated from (x̂_1^6, x̂_2^4, x̂_3^5, x̂_4^6), not (x̂_1^6, x̂_2^6, x̂_3^6, x̂_4^6). In other words, each user receives a delayed estimate x̄^{k−d^k}, where d^k = (d_1^k, · · · , d_n^k) is a delay vector and d_i^k = max{t ∈ [k] : i_t = i}, i.e., the
Algorithm 2 (Asynchronous FedDR (asyncFedDR))
1: Initialization: Take x^0 ∈ dom(F) and choose η > 0 and α > 0. Initialize the server with x̄^0 := x^0 and x̃^0 := 0. Initialize each user i ∈ [n] with y_i^0 := x^0, x_i^0 := prox_{ηf_i}(y_i^0), and x̂_i^0 := 2x_i^0 − y_i^0.
2: For k := 0, · · · , K do
3:   Select i_k such that (i_k, d^k) is a realization of (î_k, d̂^k).
4:   [Communication] User i_k receives x̄^{k−d_{i_k}^k}, a delayed version of x̄^k with the delay d_{i_k}^k.
5:   [Local update] User i_k updates
       y_{i_k}^{k+1} := y_{i_k}^k + α(x̄^{k−d_{i_k}^k} − x_{i_k}^k),  x_{i_k}^{k+1} := prox_{ηf_{i_k}}(y_{i_k}^{k+1}),  and  x̂_{i_k}^{k+1} := 2x_{i_k}^{k+1} − y_{i_k}^{k+1}.
     Other users maintain y_i^{k+1} := y_i^k, x_i^{k+1} := x_i^k, and x̂_i^{k+1} := x̂_i^k for i ≠ i_k.
6:   [Communication] User i_k sends Δ_{i_k}^k := x̂_{i_k}^{k+1} − x̂_{i_k}^k back to the server.
7:   [Server aggregation] The server aggregates x̃^{k+1} := x̃^k + (1/n) Δ_{i_k}^k.
8:   [Server update] Then, the server updates x̄^{k+1} := prox_{ηg}(x̃^{k+1}).
9: End For
last time x̂_i was updated up to iteration k. Note that when d_i^k = 0 for all i, Algorithm 2 reduces to its synchronous variant, i.e., a special variant of Algorithm 1 with S_k = {i_k}.
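For illustration, the local computation performed by the active user i_k in Steps 5-6 of Algorithm 2 can be sketched as follows (names are illustrative; xbar_delayed is whatever copy of x̄ the user last read from the server):

def async_local_update(i, xbar_delayed, y, x, xhat, alpha, prox_eta_fi):
    """User i's update in asyncFedDR (Step 5 of Algorithm 2), performed on the
    delayed copy xbar_delayed = xbar^{k - d^k_i} received from the server."""
    y[i] = y[i] + alpha * (xbar_delayed - x[i])
    x[i] = prox_eta_fi(i, y[i])
    new_xhat = 2.0 * x[i] - y[i]
    delta = new_xhat - xhat[i]   # increment sent back to the server (Step 6)
    xhat[i] = new_xhat
    return delta                 # server then sets xtilde += delta / n and xbar = prox_{eta g}(xtilde)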
4.2 Convergence analysis
Since we treat the active user ik and the delay vector dk jointly at each iteration k as a realization of a joint random vector (̂ik, d̂k), we adopt the probabilistic model from [5] to analyze Algorithm 2. This new model allows us to cope with a more general class of asynchronous variants of our method.
Probabilistic model. Let ξ_k := (i_k, d^k) be a realization of a random vector ξ̂_k := (î_k, d̂^k) containing the user index î_k ∈ [n] and the delay vector d̂^k = (d̂_1^k, · · · , d̂_n^k) ∈ D := {0, 1, · · · , τ}^n present at the k-th iteration. We consider k + 1 random variables that form a random vector ξ̂_{0:k} := (ξ̂_0, · · · , ξ̂_k). We also use ξ_{0:k} = (ξ_0, ξ_1, · · · , ξ_k) for k + 1 possible values of the random vector ξ̂_{0:k}. Let Ω be the sample space of all sequences ω := {(i_k, d^k)}_{k≥0}. We define a cylinder C_k(ξ_{0:k}) := {ω ∈ Ω : (ω_0, · · · , ω_k) = ξ_{0:k}}, and C_k is the set of all possible C_k(ξ_{0:k}) when ξ_t, t = 0, · · · , k, take all possible values, where ω_l is the l-th element of ω. Let F_k := σ(C_k) be the σ-algebra generated by C_k and F := σ(∪_{k=0}^{∞} C_k). For each C_k(ξ_{0:k}) we also equip a probability p(ξ_{0:k}) := P(C_k(ξ_{0:k})). Then, (Ω, F, P) forms a probability space. Assume that p(ξ_{0:k}) := P(ξ̂_{0:k} = ξ_{0:k}) > 0. Our conditional probability is defined as p((i, d) | ξ_{0:k}) := P(C_{k+1}(ξ_{0:k+1})) / P(C_k(ξ_{0:k})), where p((i, d) | ξ_{0:k}) := 0 if p(ξ_{0:k}) = 0. We refer to Supp. Doc. B.2 for more details of our probabilistic model.
To analyze Algorithm 2, we impose Assumption 4.1 on the implementation below.
Assumption 4.1. For all i ∈ [n] and ω ∈ Ω, there exists at least one t ∈ {0, 1, · · · , T} with T > 0 such that
∑_{d∈D} p((i, d) | ξ_{0:k+t−1}) ≥ p̂   if p(ξ_{0:k}) > 0,   (7)
for a given p̂ > 0 and any k ≥ 0. Assume also that d_i^k ≤ τ and d_{i_k}^k = 0 for all k ≥ 0 and i, i_k ∈ [n].
Assumption 4.1 implies that during an interval of T iterations, every user has a non-negligible positive probability of being updated. Note that if the user i_k is active, then it uses the most recent value with no delay, i.e., d_{i_k}^k = 0 as in Assumption 4.1. Moreover, the bounded delay assumption d_i^k ≤ τ is standard for analyzing the convergence of asynchronous algorithms; see, e.g., [5, 32, 34, 35, 44].
Suppose that we choose 0 < α < ᾱ and 0 < η < η̄ in Algorithm 2, where c := (2τ^2 − n)/n^2 is given, and ᾱ > 0 and η̄ > 0 are respectively computed by
ᾱ := 1 if 2τ^2 ≤ n, and ᾱ := 2/(2 + c) otherwise;
η̄ := ( √(16 − 8α − 7α^2) − α ) / ( 2L(2 + α) ) if 2τ^2 ≤ n, and η̄ := ( √(16 − 8α − (7 + 4c + 4c^2)α^2) − α ) / ( 2L[2 + (1 + c)α] ) otherwise.   (8)
Next, we introduce the following two constants:
ρ := ( 2(1−α) − (2+α)L^2η^2 − Lαη ) / (αηn) if 2τ^2 ≤ n, and ρ := ( n^2 [2(1−α) − (2+α)L^2η^2 − Lαη] − α(1+η^2L^2)(2τ^2−n) ) / (αηn^3) otherwise;
D := ( 8α^2 (1+L^2η^2)(τ^2 + 2Tn p̂) + 8n^2 (1+L^2η^2 + Tα^2 p̂) ) / ( p̂ α^2 n^2 ).   (9)
Then, both ρ and D are positive. We emphasize that though these formulas look complicated, they are computed explicitly without any tuning. Theorem 4.1 proves the convergence of Algorithm 2, whose analysis is in Supp. Doc. B. Theorem 4.1. Suppose that Assumptions 2.1, 2.2, and 4.1 hold for (1). Let ᾱ, η̄, ρ, and D be given by (8) and (9), respectively. Let {(x_i^k, y_i^k, x̄^k)} be generated by Algorithm 2 with stepsizes α ∈ (0, ᾱ) and η ∈ (0, η̄). Then, the following bound holds:
(1/(K+1)) ∑_{k=0}^{K} E[ ‖G_η(x̄^k)‖^2 ] ≤ Ĉ [F(x^0) − F*] / (K+1),   (10)
where Ĉ := 2(1+ηL)^2 D / (nη^2 ρ) > 0 depends on n, L, η, α, τ, T, and p̂.
Let x̃_K be selected uniformly at random from {x̄^0, · · · , x̄^K} as the output of Algorithm 2. Then, after at most K := O(ε^{-2}) iterations, x̃_K is an ε-stationary point of (1) as in Definition 2.2.
Remark 4.1. From Theorem 4.1, we can see that asyncFedDR achieves the same worst-case communication complexity O(ε^{-2}) (up to a constant factor) as FedDR, but with smaller α and η.
5 Numerical Experiments
To evaluate the performance of FedDR and asyncFedDR, we conduct multiple experiments using both synthetic and real datasets. Since most existing methods are developed for non-composite problems, we also implement three other methods, FedAvg, FedProx, and FedPD, for comparison in this setting. We use training loss, training accuracy, and test accuracy as our performance metrics.
Implementation. To compare synchronous algorithms, we reuse the implementation of FedAvg and FedProx in [23] and implement FedDR and FedPD on top of it. To conduct the asynchronous examples, we implement our algorithms based on the asynchronous framework in [3]. All experiments are run on a Linux-based server with multiple nodes and configuration: 24-core 2.50GHz Intel processors, 30M cache, and 256GB RAM.
Models and hyper-parameter selection. Our models are neural networks, and their details are given in Supp. Doc. C. As in [23], we use the same local solver (SGD) for all algorithms and run the local updates for 20 epochs. Parameters for each algorithm, such as µ for FedProx, η for FedPD, and α and η for FedDR, are tuned from a wide range of values. For each dataset, we pick the parameters that work best for each algorithm and plot their performance under the chosen parameters.
Results on synthetic datasets. We compare these algorithms using synthetic datasets in both iid and non-iid settings. We follow the data generation procedures described in [23, 38] to generate one iid dataset, synthetic-iid, and three non-iid datasets: synthetic-(r,s) for (r, s) ∈ {(0, 0), (0.5, 0.5), (1, 1)}. We first compare these algorithms without using the user sampling scheme, i.e., all users perform updates at each communication round, and for the non-composite model of (1).
We report the performance of these algorithms on one non-iid dataset in Figure 2; more results can be found in Supp. Doc. C. FedDR and FedPD are comparable on these datasets and both outperform FedProx and FedAvg. FedProx works better than FedAvg, which aligns with the results in [23]. Moreover, when comparing across more datasets, our algorithm overall performs better than the others.
Now we compare these algorithms where we sample 10 users out of 30 to perform updates at each communication round for FedAvg, FedProx, and FedDR, while we use all users for FedPD since FedPD only has a convergence guarantee for this setting. In this test, the evaluation metric is plotted in terms of the number of bytes communicated between users and the server at each communication round. Note that using the user sampling scheme in this case can save one-third of the communication cost in each round. Figure 3 depicts the performance of the 4 algorithms on one dataset; see also Supp. Doc. C.
From Figure 3, FedDR performs well compared to others. FedProx using user sampling scheme performs better and is slightly behind FedPD while FedDR, FedPD, and FedProx outperform FedAvg.
Results on FEMNIST datasets. FEMNIST [4] is an extended version of the MNIST dataset [19] where the data is partitioned by the writer of the digit/character. It has a total of 62 classes (10 digits, 26 upper-case and 26 lower-case letters) with over 800,000 samples. In this example, there are a total of 200 users and we sample 50 users to perform updates at each round of communication for FedAvg, FedProx, and FedDR, while we use all users to perform updates for FedPD. Figure 4 depicts the performance of the 4 algorithms in terms of communication cost. From Figure 4, FedDR achieves lower loss values and higher training accuracy than the other algorithms, while FedPD can reach the same test accuracy as ours at the end. Overall, FedDR seems to work better than the other algorithms in this test.
Results with the ℓ1-norm regularizer. We now consider the composite setting with g(x) := 0.01 ‖x‖_1 to verify Algorithm 1 under different inexactness levels ε_{i,k} by varying the learning rate (lr) and the number of local SGD epochs used to approximately evaluate prox_{ηf_i}(y_i^k). We run Algorithm 1 on the FEMNIST dataset, and the results are shown in Figure 5.
We observe that Algorithm 1 works best when the local learning rate is 0.003, which aligns with [23] for the non-composite case. It also performs better when we decrease ε_{i,k} by increasing the number of epochs used in evaluating prox_{ηf_i}. This performance confirms our theoretical results in Supp. Doc. A.5.
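For this composite setting, the server update x̄^{k+1} := prox_{ηg}(x̃^{k+1}) of Algorithm 1 has a closed form, since the proximal operator of the ℓ1-norm is the soft-thresholding map; a minimal Python sketch (illustrative names):

import numpy as np

def prox_l1(x, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding), used as prox_{eta g}
    # on the server when g(x) = 0.01 * ||x||_1, with t = 0.01 * eta.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)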
Results using asynchronous updates. To illustrate the advantage of asyncFedDR over FedDR, we conduct another example to train on the MNIST dataset using 20 users. Since we run these experiments on computing nodes with identical configurations, we simulate the case with computing power discrepancy between users by adding a variable delay to each user's update process, such that the fastest user may be up to twice as fast as the slowest one.
The results of the two variants are presented in Figure 6; see Supp. Doc. C for more examples. We can see that asyncFedDR achieves better performance than FedDR in terms of training time, which illustrates the advantage of asynchronous updates under heterogeneous computing power.
Acknowledgments and Disclosure of Funding
The work of Quoc Tran-Dinh is partially supported by the Office of Naval Research (ONR), grant No. N00014-20-1-2088. The authors would also like to thank all the anonymous reviewers and the ACs for their constructive comments to improve the paper.
1. What is the focus and contribution of the paper on federated learning?
2. What are the strengths of the proposed approach, particularly in terms of extending FedSplit and allowing non-convex objectives?
3. What are the weaknesses of the paper, especially regarding its comparison with other works and the complexity of the algorithm?
4. Do you have any concerns about the theoretical analysis, particularly regarding the dependence on hyperparameters?
5. Are there any limitations in the paper that should be addressed in future work?
Summary Of The Paper
This paper proposes FedDR, which extends FedSplit by allowing non-convex objective, partial participation and convex composite regularizers. This work also studies the asynchronous variant of FedDR. Detailed proofs and experiments are provided.
Review
I would like to express my appreciation for your submission to NeurIPS 2021. This paper is overall well-written and I enjoyed reading it. This paper proposes FedDR which differs from existing works FedSplit, FedPD by allowing non-convex objective, partial participation and convex composite regularizers. My main concerns are as follows:
The algorithm FedDR requires an inner solver to compute the proximal operator of f. Although the authors kindly provide an inexact variant that allows for inexact approximation of prox_f, the additional inner computation is still not factored into the overall complexity. Therefore, it is hard to calibrate the complexities of FedDR against other first-order canonical algorithms such as FedAvg. (I acknowledge the same complaint could apply to other prior works along this line such as FedPD and FedSplit.)
This paper is mostly compared with the FedSplit and FedPD literature. However, as noted on line 128, the composite federated optimization has been explored by [45, Federated Composite Optimization] for the convex case. Though I agree with the author’s claim that FedDR is different from the algorithms in [45], this paper misses necessary comparisons with [45] either theoretically or empirically. For example, how does FedDR compare with [45] (theoretically and empirically) when the objective f is convex?
Theorem 3.1 is somewhat concerning as the constant C still depends on the hyperparameters α and η. Is it possible to state a cleaner bound with specific values of α and η so that the dependency can be eliminated?
DeepTOP: Deep Threshold-Optimal Policy for MDPs and RMABs
Abstract
We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our policy significantly outperforms other reinforcement learning algorithms due to its ability to exploit the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means.
1 Introduction
This paper considers a class of control policies, called threshold policies, that naturally arise in many practical problems. For example, a smart home server may only turn on the air conditioner when the room temperature exceeds a certain threshold, and a central bank may only raise the interest rate when inflation exceeds a certain threshold. For such problems, finding the optimal control policies can be reduced to finding the appropriate thresholds given other factors of the system, such as the number of people in the room in the smart home server scenario or the unemployment rate and the current interest rate in the central bank scenario.
An important feature of threshold policies is that their actions are monotone. For example, if a smart home server would turn on the air conditioner at a certain temperature, then, all other factors being equal, the server would also turn on the air conditioner when the temperature is even higher. By leveraging this monotone property, an algorithm aiming to learn the optimal threshold can potentially be much more efficient than generic reinforcement learning algorithms seeking to learn the optimal action at different points of temperature separately. In order to design an efficient algorithm for learning the optimal threshold policy, we first formally define a class of Markov decision processes (MDPs) that admit threshold policies and its objective function. The optimal threshold policy is then the one that maximizes the objective function. However, the objective function involves an integral over a continuous range, which makes it infeasible to directly apply standard tools, such as backward-propagation in neural networks, to perform gradient updates.
Surprisingly, we show that, by leveraging the monotone property of threshold policies, the gradient of the objective function has a very simple expression. Built upon this expression, we propose Deep Threshold-Optimal Policy (DeepTOP), a model-free actor-critic deep reinforcement learning algorithm that finds the optimal threshold policies. We evaluate the performance of DeepTOP by considering three practical problems, an electric vehicle (EV) charging problem that determines whether to charge an EV in the face of unknown fluctuations of electricity price, an inventory management problem that determines whether to order for goods in the face of unknown seasonal demands, and a make-to-stock problem for servicing jobs with different sizes. For all problems, DeepTOP significantly outperforms other state-of-the-art deep reinforcement learning algorithms due to its ability to exploit the monotone property.
We also study the notoriously hard restless multi-armed bandit (RMAB) problem. We show that the Whittle index policy, a powerful tool for RMABs, can be viewed as an optimal threshold policy for an alternative problem. Based on this observation, we define an objective function for the alternative problem, of which the Whittle index is the maximizer. We again show that the gradient of the objective function has a simple expression. This simple expression allows us to extend DeepTOP for the learning of the Whittle index. We compare this DeepTOP extension to three recently proposed algorithms that seek to learn the optimal index policies through other indirect properties. Simulation results show that the DeepTOP extension learns much faster because it directly finds the optimal threshold policy.
The rest of the paper is organized as follows. Section 2 defines the MDP setting and threshold policies. We present the DeepTOP algorithm for MDP in Section 3. We then discuss how the Whittle index policy for RMABs can be viewed as a threshold policy in Section 4 and develop a DeepTOP extension for learning it in Section 5. We show DeepTOP’s performance results for MDPs and RMABs in Section 6, and give related works in Section 7 before concluding.
2 Threshold Policies for MDPs
Consider an agent controlling a stochastic environment E described as an MDP E = (S, A, R, P, γ), with state space S, binary action space A := {0, 1}, reward function R : S × A → Ω, transition dynamics P : S × A × S → ℝ, and discount factor γ ∈ [0, 1), where ℝ is the set of real numbers and Ω is the set of random variables. At each timestep t, the agent picks an action a_t ∈ A for the current state s_t. The state s_t ∈ S = ℝ × V has two components: a scalar state λ_t ∈ ℝ and a vector state v_t ∈ V, where V is a discrete set of vectors. We assume the environment state is fully observable. Given the state-action pair (s_t, a_t), the MDP generates a reward r_t following the unknown random variable R(s_t, a_t), and a random next state s_{t+1} = (λ_{t+1}, v_{t+1}) following the unknown distribution P. We use r̄(λ, v, a) := E[R((λ, v), a)] to denote the unknown expected one-step reward that can be obtained for the state-action pair (λ, v, a).
A threshold policy is one that defines a threshold function µ : V → mapping each vector state to a real number. The policy then deterministically picks at = 1(µ(vt) > λt), where 1(·) is the indicator function. There are many applications where it is natural to consider threshold policies and we discuss some of them below. Example 1. Consider the problem of charging electric vehicles (EV). When an EV arrives at a charging station, it specifies its demands for charge and a deadline upon which it will leave the station. The electricity price changes over time following some random process. The goal of the operator is to fulfill the EV’s requirement with minimum cost. In this problem, we can model the system by letting the scalar state λt be the current electricity price and the vector state vt be the remaining charge and time to deadline of the EV. For this problem, it is natural to consider a threshold policy that defines a threshold µ(vt) as the highest price the operator is willing to pay to charge the EV under vector state vt. The operator only charges the vehicle, i.e., chooses at = 1, if λt < µ(vt). Example 2. Consider the problem of warehouse management. A warehouse stores goods waiting to be sold. When the number of stored goods exceeds the demand, then there is a holding cost for each unsold good. On the other hand, if the number of stored goods is insufficient to fulfill the demand, then there is a cost of lost sales. The goal of the manager is to decide when to place orders so as to minimize the total cost. In this problem we can let the scalar state λt be the current inventory and let the vector state vt be the vector of all factors, such as upcoming holidays, that can influence future demands. It is natural to consider a threshold policy where the manager only places a new order if the current inventory λt falls below a threshold µ(vt) based on the current vector state vt.
Example 3. Consider a smart home server that controls the air conditioner. Let λt be −(current temperature) and vt be the time of the day and the number of people in the house. The server should turn on the air conditioner only if the temperature exceeds some threshold determined by vt, or, equivalently, λt < µ(vt).
Given a threshold policy with threshold function µ(·), we can define the corresponding action-value function by Qµ(λ, v, a). Let ρµ(λ′, v′, λ, v) be the discounted state distribution when the initial state is (λ, v) under the threshold policy to a visited state (λ′, v′). When the initial state is (λ, v), the expected discounted reward under the policy is
Q^µ(λ, v, 1(µ(v) > λ)) = ∑_{v′∈V} ∫_{λ′=−M}^{λ′=+M} ρ^µ(λ′, v′, λ, v) r̄(λ′, v′, 1(µ(v′) > λ′)) dλ′.   (1)
Let M be a sufficiently large constant such that λt ∈ [−M,+M] for all t. Our goal is to learn the optimal threshold function µφ(v) parametrized by a vector φ that maximizes the objective function
K(µ_φ) := ∫_{λ=−M}^{λ=+M} ∑_{v∈V} Q^{µ_φ}(λ, v, 1(µ_φ(v) > λ)) dλ.   (2)
3 Deep Threshold Optimal Policy for MDPs
In this section, we present a deep threshold optimal policy (DeepTOP) for MDPs that finds the optimal φ for maximizing K(µφ).
3.1 Threshold Policy Gradient Theorem for MDPs
In order to design DeepTOP, we first study the gradient ∇φK(µφ). At first glance, computing ∇φK(µφ) looks intractable since it involves an integral over λ ∈ [−M,+M]. However, we establish the following threshold policy gradient theorem that shows the surprising result that ∇φK(µφ) has a simple expression.
Theorem 1. Given the parameter vector φ, let ρ̄(λ, v) be the discounted state distribution when the initial state is chosen uniformly at random under the threshold policy. If all vector states v ∈ V have distinct values of µφ(v), then,
∇_φ K(µ_φ) = 2M|V| ∑_{v∈V} ρ̄(µ_φ(v), v) ( Q^{µ_φ}(µ_φ(v), v, 1) − Q^{µ_φ}(µ_φ(v), v, 0) ) ∇_φ µ_φ(v).   (3)
Proof. Let ρ̄_t(λ, v) be the distribution that the state at time t is (λ, v) when the initial state is chosen uniformly at random. Clearly, we have ρ̄(λ, v) = ∑_{t=1}^{∞} γ^{t−1} ρ̄_t(λ, v). Given φ, we number all states in V such that µ_φ(v_1) > µ_φ(v_2) > . . . . Let M_0 = +M, M_n = µ_φ(v_n) for all 1 ≤ n ≤ |V|, and M_{|V|+1} = −M. Also, let V_n be the subset of states {v | µ_φ(v) > M_n} = {v_1, v_2, . . . , v_{n−1}}. Now, consider the interval (M_{n+1}, M_n) for some n. Notice that, for all λ ∈ (M_{n+1}, M_n), 1(µ_φ(v) > λ) = 1 if and only if v ∈ V_{n+1}. In other words, for any vector state v, the threshold policy takes the same action under all λ ∈ (M_{n+1}, M_n), and we use π_{n+1}(v) to denote this action. We then have
∇φK(µφ) = ∇φ ∫ λ=+M λ=−M ∑ v∈V Qµφ (λ, v,1(µφ(v) > λ))dλ = ∑ v∈V ∇φ ∫ λ=+M λ=−M Qµφ (λ, v,1(µφ(v) > λ))dλ
= ∑ v∈V |V|∑ n=0 ∇φ ∫ λ=Mn λ=Mn+1 Qµφ (λ, v, πn+1(v))dλ
= ∑ v∈V |V|∑ n=0 ( Qµφ ( Mn, v, πn+1(v) )∇φMn − Qµφ(Mn+1, v, πn+1(v))∇φMn+1 + ∫ λ=Mn λ=Mn+1 ∇φQµφ (λ, v, πn+1(v))dλ ) ,
(4)
where the summation-integration swap in the first equation follows the Fubini-Tonelli theorem and the last step follows the Leibniz integral rule. We simplify the first two terms in the last step by∑
v∈V |V|∑ n=0 ( Qµφ ( Mn, v, πn+1(v) )∇φMn − Qµφ(Mn+1, v, πn+1(v))∇φMn+1)
= ∑ v∈V |V|∑ n=1 ( Qµφ ( µφ(vn), v,1(v ∈ Vn+1)) − Qµφ(µφ(vn), v,1(v ∈ Vn)))∇φµφ(vn)
=2M|V| ∑ v∈V ρ̄1(µφ(v), v) ( Qµφ ( µφ(v), v, 1 ) − Qµφ(µφ(v), v, 0))∇φµφ(v). (5) Next, we expand the last term in (4). Note that Qµφ(λ, v, a) = r̄(λ, v, a) +
γ ∫ λ′=+M λ′=−M ∑ v′ p(λ′, v′|λ, v, a)Qµφ(λ′, v′,1(µφ(v′) > λ′))dλ′, where p(·|·) is the transition probability.
Hence, ∇φQµφ(λ, v, a) = γ∇φ ∫ λ′=+M λ′=−M ∑ v′ p(λ′, v′|λ, v, a)Qµφ(λ′, v′,1(µφ(v′) > λ′))dλ′. Using the same techniques in (4) and (5), we have∑ v∈V |V|∑ n=0 ∫ λ=Mn λ=Mn+1 ∇φQµφ (λ, v, πn+1(v))dλ = ∑ v∈V ∫ λ=+M λ=−M ∇φQµφ (λ, v,1(µφ(v) > λ))dλ
= γ ∑ v∈V ∫ λ=+M λ=−M ( ∇φ ∫ λ′=+M λ′=−M ∑ v′∈V p(λ′, v′|λ, v,1(µφ(v) > λ))Qµφ (λ′, v′,1(µφ(v′) > λ′))dλ′ ) dλ
= 2M|V| ∑ v∈V γρ̄2(µφ(v), v) ( Qµφ ( µφ(v), v, 1 ) − Qµφ(µφ(v), v, 0))∇φµφ(v) + γ
∑ v∈V ∫ λ=+M λ=−M ( ∑ v′∈V ∫ λ′=+M λ′=−M ∇φ ( p(λ′, v′|λ, v,1(µφ(v) > λ))Qµφ (λ, v,1(µφ(v′) > λ′)) ) dλ′ ) dλ.
In the above equation, expanding the last term in time establishes (3).
3.2 DeepTOP Algorithm Design for MDPs
Motivated by Theorem 1, we now present DeepTOP-MDP, a model-free, actor-critic Deep RL algorithm. DeepTOP-MDP maintains an actor network with parameters φ that learns a threshold function µφ(v), and a critic network with parameters θ that learns an action-value function Qθ(λ, v, a). DeepTOP-MDP also maintains a target critic network with parameters θ′ that is updated slower than the critic parameters θ. The purpose of the target critic network is to improve the learning stability as demonstrated in [8, 19]. The objective of the critic network is to find θ that minimizes the loss function
L(θ) := E_{s_t, a_t, r_t, s_{t+1}} [ ( Q^θ(λ_t, v_t, a_t) − r_t − γ max_{a′∈A} Q^{θ′}(λ_{t+1}, v_{t+1}, a′) )^2 ],   (6)
where (st, at, rt, st+1) is sampled under some policy with st = (λt, vt). The objective of the actor network is to find φ that maximizes ∫ λ=+M λ=−M ∑ v∈V Qθµφ ( λ, v,1(µφ(v) > λ) ) dλ. In each timestep t, the environment E provides a state st to the agent. We set an exploration parameter t ∈ [0, 1) that takes a random action with probability t. Otherwise, DeepTOP-MDP calculates µφ(vt) based on vt, and chooses at = 1(µφ(vt) > λt). E generates a reward rt and a next state st+1. A replay memory denoted byM then stores the transition {st, at, rt, st+1}. After filling the memory with at least B transitions, DeepTOP-MDP updates the parameters φ, θ, θ′ in every timestep using a sampled minibatch of size B of transitions {stk , atk , rtk , stk+1}, for 1 ≤ k ≤ B. The critic network uses the sampled transitions to calculate the estimated gradient of L(θ):
∇̂_θ L(θ) := (2/B) ∑_{k=1}^{B} ( Q^θ(λ_{t_k}, v_{t_k}, a_{t_k}) − r_{t_k} − γ max_{a′∈A} Q^{θ′}(λ_{t_k+1}, v_{t_k+1}, a′) ) ∇_θ Q^θ(λ_{t_k}, v_{t_k}, a_{t_k}).   (7)
Similarly, the actor network uses the sampled transitions and Equation (3) to calculate the estimated gradient:
∇̂_φ K(µ_φ) := (1/B) ∑_{k=1}^{B} ( Q^θ(µ_φ(v_{t_k}), v_{t_k}, 1) − Q^θ(µ_φ(v_{t_k}), v_{t_k}, 0) ) ∇_φ µ_φ(v_{t_k}).   (8)
Algorithm 1 Deep Threshold Optimal Policy Training for MDPs (DeepTOP-MDP)
Randomly select initial actor network parameters φ and critic network parameters θ. Set target critic network parameters θ′ ← θ, and initialize replay memory M.
for timestep t = 1, 2, 3, . . . do
  Receive state s_t = (λ_t, v_t) from environment E.
  Select action a_t = 1(µ_φ(v_t) > λ_t) with probability 1 − ε_t. Otherwise, select action a_t randomly.
  Execute action a_t, and observe reward r_t and next state s_{t+1} from E.
  Store transition {s_t, a_t, r_t, s_{t+1}} into M.
  Sample a minibatch of B transitions {s_{t_k}, a_{t_k}, r_{t_k}, s_{t_k+1}}, for 1 ≤ k ≤ B, from M.
  Update critic network parameters θ using the estimated gradient from Equation (7).
  Update actor network parameters φ using the estimated gradient from Equation (8).
  Soft update target critic parameters θ′: θ′ ← τθ + (1 − τ)θ′.
end for
Both the critic network and the actor network then take a gradient update step. Finally, we soft update the target critic’s parameters θ′ using θ′ ← τθ + (1 − τ)θ′, with τ < 1. The complete pseudocode is given in Algorithm 1.
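For concreteness, a minimal PyTorch-style sketch of one DeepTOP-MDP update implementing Equations (7)-(8) is given below. The network interfaces (a critic that returns Q(λ, v, a) for both actions at once, an actor that returns µ_φ(v)) and all names are assumptions of this sketch, not the authors' released implementation.

import torch

def deeptop_mdp_update(actor, critic, target_critic, batch, gamma, actor_opt, critic_opt):
    """One DeepTOP-MDP update on a sampled minibatch (Eqs. (7)-(8)).
    batch = (lam, v, a, r, lam_next, v_next); actor(v) returns mu_phi(v) of shape [B],
    critic(lam, v) returns the two action values [Q(lam,v,0), Q(lam,v,1)] of shape [B, 2]."""
    lam, v, a, r, lam_next, v_next = batch

    # Critic step: minimize the TD loss (6) using the target critic for the bootstrap.
    with torch.no_grad():
        target = r + gamma * target_critic(lam_next, v_next).max(dim=1).values
    q = critic(lam, v).gather(1, a.long().unsqueeze(1)).squeeze(1)
    critic_loss = ((q - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor step: ascend Q(mu_phi(v), v, 1) - Q(mu_phi(v), v, 0), following Theorem 1 / Eq. (8);
    # the gradient flows through the critic's scalar-state input.
    mu = actor(v)
    q_at_mu = critic(mu, v)
    actor_loss = -(q_at_mu[:, 1] - q_at_mu[:, 0]).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()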
4 Whittle Index Policy for RMABs
In this section, we demonstrate how the Whittle index policy [32], a powerful tool for solving the notoriously intractable Restless Multi-Armed Bandit (RMAB) problem, can be represented with a set of threshold functions. We first describe the RMAB control problem, and then define the Whittle index function.
An RMAB problem consists of N arms. The environment of an arm i, denoted as Ei, is an MDP with a discrete state space si,t ∈ Si, and a binary action space ai,t ∈ A := {0, 1}, where ai,t = 1 means that arm i is activated, and ai,t = 0 means that arm i is left passive at time t. Given the state-action pair (si,t, ai,t), Ei generates a random reward ri,t and a random next state si,t+1 following some unknown probability distributions based on (si,t, ai,t). Here we also use r̄i(si, ai) to denote the unknown expected one-step reward that can be obtained for the state-action pair (si, ai).
A control policy over all arms takes the states (s1,t, s2,t, . . . , sN,t) as input, and activates V out of N arms in every timestep. Solving for the optimal control policy for RMABs was proven to be intractable [21], since the agent must optimize over an input state space exponential in N. To circumvent the dimensionality challenge, the Whittle index policy assigns real values to an arm’s states using a Whittle index function for each arm Wi : Si → . Based on the assigned Whittle indices ( W1(s1,t),W2(s2,t), . . . ,WN(sN,t) ) , the Whittle index policy activates the V highest-valued arms out of N arms in timestep t, and picks the passive action for the remaining arms.
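A short sketch of this selection rule (illustrative Python):

import numpy as np

def whittle_actions(indices, V):
    # indices: array of Whittle indices W_i(s_{i,t}) for the N arms.
    # Activate the V arms with the largest indices; keep the rest passive.
    a = np.zeros(len(indices), dtype=int)
    a[np.argsort(indices)[-V:]] = 1
    return a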
4.1 The Whittle Index Function as The Optimal Threshold Function
To define the Whittle index and relate it to threshold functions, let us first consider an alternative control problem of a single arm i as environment Ei with activation cost λ. In this problem, the agent follows a control policy that determines whether the arm is activated or not based on its current state si,t. If the policy activates the arm, then the agent must pay an activation cost of λ. Hence, the agent’s net reward at timestep t is defined as ri,t − λai,t. We now consider applying threshold policies for this alternative control problem. A threshold policy defines a threshold function µi : Si → that maps each state to a real value. It then activates the arm if and only if µi(si,t) > λ, i.e., ai,t = 1(µi(si) > λ). The value of µi(si,t) can therefore be viewed as the largest activation cost that the agent is willing to pay to activate the arm under state si,t. To characterize the performance of a threshold policy with a threshold function µi(·), we let ρµi,λ(s′i , si) be the discounted state distribution, which is the average discounted number of visits of state s′i when the initial state is si under the threshold policy and λ. When the initial state is si, the expected discounted net reward under the threshold policy is
Q_{i,λ}(s_i, 1(µ_i(s_i) > λ)) = ∑_{s_i′∈S_i} ρ_{µ_i,λ}(s_i′, s_i) ( r̄_i(s_i′, 1(µ_i(s_i′) > λ)) − λ 1(µ_i(s_i′) > λ) ).   (9)
The performance of the threshold policy under a given λ is defined as Ji,λ(µi) :=∑ si∈Si Qi,λ ( si,1(µi(si) > λ) ) . The Whittle index of this arm is defined as the function µi(·) whose corresponding threshold policy maximizes Ji,λ(µi) for all λ: Definition 1. (Whittle Index) If there exists a function µi : Si → such that choosing 1(µi(si) > λ) maximizes Ji,λ(µi) for all λ ∈ (−∞,+∞), then we say that µi(si) is the Whittle index Wi(si) 1.
We note that, for some arms, there does not exist any function µi(si) that satisfies the condition in Definition 1. For such arms, the Whittle index does not exist. We say that an arm is indexable if it has a well-defined Whittle index function. Definition 1 shows that finding the Whittle index is equivalent to finding the optimal µi(·) that maximizes Ji,λ(µi) for all λ ∈ (−∞,+∞). Parameterizing a threshold function µφii (·) by parameters φi and letting M be a sufficiently large number such that µ φi i (si) ∈ (−M,+M) for all si and φi, we aim to find the optimal φi for maximizing the objective function
K_i(µ_i^{φ_i}) := ∫_{λ=−M}^{λ=+M} ∑_{s_i∈S_i} Q_{i,λ}(s_i, 1(µ_i^{φ_i}(s_i) > λ)) dλ.   (10)
5 Deep Threshold Optimal Policy for RMABs
To design a DeepTOP variant for RMABs, we first give the gradient of the objective function. Theorem 2. Given the parameter vector φi, let ρ̄λ(si) be the discounted state distribution when the initial state is chosen uniformly at random and the activation cost is λ. If all states si ∈ Si have distinct values of µφii (si), then,
∇_{φ_i} K_i(µ_i^{φ_i}) = |S_i| ∑_{s_i∈S_i} ρ̄_{µ_i^{φ_i}(s_i)}(s_i) ( Q_{i,µ_i^{φ_i}(s_i)}(s_i, 1) − Q_{i,µ_i^{φ_i}(s_i)}(s_i, 0) ) ∇_{φ_i} µ_i^{φ_i}(s_i).   (11)
Proof. The proof is similar to that of Theorem 1. For completeness, we provide it in Appendix A.
We note that Theorem 2 does not require the arm to be indexable. Whether an arm is indexable or not, using Theorem 2 along with a gradient ascent algorithm will find a locally-optimal φi that maximizes Ki(µ φi i ). When the arm is indexable, the resulting threshold function µ φi i is the Whittle index function. Using the gradient result from Equation (11), we present the algorithm DeepTOP-RMAB for finding the optimal parametrized threshold functions µφii for arms i = 1, 2, . . . ,N. The training method is similar to the MDP version, except for two important differences. First, the training of each arm is done independently from others. Second, the value of λ is an artificial value that only exists in the alternative problem but not in the original RMAB problem. Similar to DeepTOP-MDP, we maintain three network parameters for each arm i: actor φi, critic θi, and target-critic θ′i . The critic network parametrizes the action-value function, and is optimized by minimizing the loss function
L_i(θ_i) := ∫_{λ=−M}^{λ=+M} E_{s_{i,t}, a_{i,t}, r_{i,t}, s_{i,t+1}} [ ( Q^{θ_i}_{i,λ}(s_{i,t}, a_{i,t}) − r_{i,t} − γ max_{a′∈A} Q^{θ_i′}_{i,λ}(s_{i,t+1}, a′) )^2 ] dλ,   (12)
with (si,t, ai,t, ri,t, si,t+1) sampled under some policy. In each timestep t, each arm environment Ei provides its current state si,t to the agent. For each arm i = 1, 2, . . . ,N, DeepTOP-RMAB calculates the state value µφii (si,t) with the arm’s respective actor network parameters φi. Given an exploration parameter t ∈ [0, 1), DeepTOP-RMAB activates the V arms with the largest µφii (si,t) with probability 1− t, and activates V randomly selected arms with probability t. Based on the executed actions, each arm provides a reward ri,t and the next state si,t+1. An arm’s transition {si,t, ai,t, ri,t, si,t+1} is then stored in the arm’s memory denoted byMi. After filling each arm’s memory with at least B transitions, DeepTOP-RMAB updates φi, θi, and θ′i in every timestep. For each arm i, DeepTOP-RMAB first samples a minibatch of size B of transitions {si,tk , ai,tk , ri,tk , si,tk+1}, for 1 ≤ k ≤ B from the memoryMi. It then randomly samples B values [λi,1, λi,2, . . . , λi,B] from the range [−M,+M]. Using the sampled transitions and λ values, it estimates the gradient of Li(θi) as
∇̂_{θ_i} L_i(θ_i) := (2/B) ∑_{k=1}^{B} ( Q^{θ_i}_{i,λ_k}(s_{i,t_k}, a_{i,t_k}) − r_{i,t_k} − γ max_{a′∈A} Q^{θ_i′}_{i,λ_k}(s_{i,t_k+1}, a′) ) ∇_{θ_i} Q^{θ_i}_{i,λ_k}(s_{i,t_k}, a_{i,t_k}).   (13)
1To simplify notations, we use a necessary and sufficient condition for the Whittle index as its definition. We refer interested readers to [9] for more thorough discussions on the Whittle index.
Using the sampled transitions and Equation (11), it estimates the gradient of Ki(µ φi i ) as
∇̂_{φ_i} K_i(µ_i^{φ_i}) := (1/B) ∑_{k=1}^{B} ( Q^{θ_i}_{i,µ_i^{φ_i}(s_{i,t_k})}(s_{i,t_k}, 1) − Q^{θ_i}_{i,µ_i^{φ_i}(s_{i,t_k})}(s_{i,t_k}, 0) ) ∇_{φ_i} µ_i^{φ_i}(s_{i,t_k}).   (14)
A gradient update step is taken after calculating the actor and critic networks’ gradients. Finally, DeepTOP-RMAB soft updates the target critic parameters θ′i using θ ′ i ← τθi + (1 − τ)θ′i , with τ < 1. The complete DeepTOP-RMAB pseudocode is given in Appendix B.
6 Simulations
We have implemented and tested both DeepTOP-MDP and DeepTOP-RMAB in a variety of settings. The training procedure of the two DeepTOP algorithms are similar to that of the DDPG [19] algorithm except for the expression of gradients. We implemented the DeepTOP algorithms by modifying an open-source implementation of DDPG [12]. All source code can be found in the repository https://github.com/khalednakhleh/deeptop.
6.1 Simulations for MDPs
We evaluate three MDPs, namely, the electric vehicle charging problem, the inventory management problem, and the make-to-stock problem.
EV charging problem. This problem is based on Yu, Xu, and Tong [34]. It considers a charging station serving EVs. When an EV arrives at the station, it specifies the amount of charges it needs and a deadline upon which it will leave the station. The electricity price changes over time and we model it by an Ornstein-Uhlenbeck process [30]. In each timestep, the station decides whether to charge the EV or not. If it decides to charge the EV, then it provides one unit charge to the EV. The station then obtains a unit reward and pays the current electricity price. If the station fails to fully charge the EV by the deadline of the EV, then the station suffers from a penalty that is a convex function of the remaining needed charge. A new EV arrives at the station when the previous EV leaves. We model this problem by letting the scalar state be the current electricity price and the vector state be the remaining needed charge and time-to-deadline of the current EV. A threshold policy is one that calculates a threshold based on the EV’s remaining needed charge and time-to-deadline, and then decides to charge the EV if and only if the current electricity price is below the threshold.
Inventory management problem. We construct an inventory management problem by jointly incorporating a variety of practical challenges, including seasonal fluctuations in demands and lead times in orders, in the literature [28, 15, 10, 27]. We consider a warehouse holding goods. In each timestep, there is a random amount of demand whose mean depends on the time of the year. The warehouse can fulfill the demand as long as it has sufficient inventory, and it makes a profit for each unit of sold goods. At the end of the timestep, the warehouse incurs a unit holding cost for each unit of unsold goods. The warehouse manager needs to decide whether to order more goods. When it places an order for goods, there is a lead time of one time step, that is, the goods ordered at timestep t are only available for sale at timestep t + 1. We model this problem by letting the scalar state be the current inventory and the vector state be the time of the year. A threshold policy calculates a threshold based on the time of the year and decides to place an order for goods if the current inventory is below the threshold.
Make-to-stock production problem. This problem is considered in [26]. It studies a system that produces m items with W demand classes and buffer size S . Accepting a class v order leads to a reward Rv, as long as there is still room in the buffer for the order. The classes of demands are ordered such that R1 > R2 > . . . . In this problem, the scalar state is the number of accepted but unfinished orders and the vector state is the class of the next arriving order. More details about the three MDPs can be found in Appendix C.
Evaluated policies. We compare DeepTOP-MDP against DDPG [19] and TD3 [8], two state-of-the-art off-policy and model-free deep RL algorithms. We use open-source implementations of these two algorithms from [12, 7]. We use the same hyper-parameters, including the neural network architecture, learning rates, etc., for all three algorithms. We also evaluate the Structure-Aware Learning for Multiple Thresholds algorithm (SALMUT) [26], a reinforcement learning algorithm
that finds the optimal threshold policy. SALMUT requires the vector states to be pre-sorted by their threshold values. Hence, SALMUT can only be applied to the make-to-stock production problem. Details about the training parameters can be found in Appendix D. For the EV charging problem, Yu, Xu, and Tong [34] have derived the optimal threshold policy. We call this optimal threshold policy the Deadline Index policy and compare DeepTOP-MDP against it.
Simulation results. Simulation results of the three MDPs are shown in Figure 1. The results are the average of 20 independent runs. Before starting a run, we fill an agent’s memory with 1000 transitions by randomly selecting actions. We plot the average reward obtained over the previous 100 timesteps, averaged over the 20 runs. In addition, we show the standard-deviation bands around the average reward.
It can be observed that DeepTOP significantly outperforms DDPG, TD3, and SALMUT. Although the training procedure of DeepTOP is similar to that of DDPG, DeepTOP achieves much faster learning by leveraging the monotone property. Without leveraging the monotone property, DDPG and TD3 need to learn the optimal policy for each scalar state independently, and therefore perform much worse. DeepTOP also performs better than SALMUT, likely because DeepTOP directly employs the threshold policy gradient, whereas SALMUT approximates threshold policies through randomized policies since it can only handle continuous and differentiable functions. We also note that DeepTOP performs virtually the same as the Deadline Index policy for the EV charging problem after about 2000 timesteps, suggesting that DeepTOP indeed finds the optimal threshold policy quickly. We also evaluate DeepTOP for different neural network architectures in Appendix E, and show that DeepTOP performs the best in all settings.
6.2 Simulations for RMABs
We evaluate two RMABs, namely, the one-dimensional bandits from [17] and the recovering bandits from [20].
One-dimensional bandits. We consider an extension of the RMAB problem evaluated in Killian et al. [17]. Killian et al. [17] considers the case when each arm is a two-state Markov process. We extend it so that each arm is a Markov process with 100 states, numbered as 0, 1, . . . , 99, as shown in Figure 2 where state 99 is the optimal state.
The reward of an arm depends on the distance between its current state and state 99. Suppose the current state of arm i is si,t, then it generates a reward ri,t = 1 − ((si,t − 99)/99)^2. If the arm is activated, then it changes to state si,t+1 = min{si,t + 1, 99} with probability pi. If the arm is not activated, then it changes to state si,t+1 = max{si,t − 1, 0} with probability qi. In the simulations, we pick the probabilities pi to be evenly spaced over the interval [0.2, 0.8], depending on the number of arms N. We set the
probabilities qi = pi. We consider that there are N arms and that the agent needs to activate V arms in each timestep. We evaluate three settings of (N,V) = (10, 3), (20, 5), and (30, 6).
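A minimal Python sketch of one arm's dynamics, as described above, is given below; the paper does not state what happens when the probabilistic transition does not fire, so the sketch assumes the arm simply stays in its current state in that case.

import numpy as np

def arm_step(state, action, p, q, n_states=100):
    # Reward peaks at state 99 and decays quadratically with the distance to it.
    reward = 1.0 - ((state - 99) / 99.0) ** 2
    if action == 1:
        # Activated: move one state up with probability p (otherwise, by assumption, stay put).
        next_state = min(state + 1, n_states - 1) if np.random.rand() < p else state
    else:
        # Passive: move one state down with probability q (otherwise, by assumption, stay put).
        next_state = max(state - 1, 0) if np.random.rand() < q else state
    return next_state, reward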
Recovering bandits. First introduced in [25], recovering bandits model the varying behavior of consumers over time. A consumer’s interest in a particular product falls if the consumer clicks on its advertisement link. However, their interest in the product recovers with time. The recovering bandit is modelled as an RMAB with each arm being an advertisement link. The reward of playing an arm is given by a function f(min(z, zmax)), with z being the time since the arm was last played.
In our experiments, we consider arms with different reward functions, with the arm’s state being the value min{z, zmax} and zmax = 100. We also evaluate recovering bandits on three settings of (N,V) = (10, 3), (20, 5), and (30, 6). More details can be found in Appendix F.
Evaluated policies. We compare DeepTOP-RMAB against three recent studies that aim to learn index policies for RMABs, namely, Lagrange policy Q learning (LPQL) [17], Whittle index based Q learning (WIBQL) [1], and neural Whittle index network (NeurWIN) [20]. LPQL consists of three steps: First, it learns a Q function for each arm independently. Second, it uses the Q functions of all arms to determine a common Lagrangian. Third, it uses the Lagrangian to calculate the index of each arm. WIBQL is a two-timescale algorithm that learns the Whittle indices of indexable arms by updating Q values on the fast timescale, and index values on the slower timescale. NeurWIN is an off-line training algorithm based on REINFORCE that requires a simulator to learn the Whittle index. Both LPQL and WIBQL are tabular learning methods which may perform poorly compared to deep RL algorithms when the size of the state space is large. Hence, we also design deep RL equivalent algorithms that approximate their Q functions using neural networks. We refer to the Deep RL extensions as neural LPQL and neural WIBQL. In all experiments, neural LPQL, neural WIBQL, and NeurWIN use the same hyper-parameters as DeepTOP-RMAB. For the one-dimensional bandits, it can be shown that the Whittle index is in the range of [−1, 1], and hence we set M = 1. For the recovering bandits, we set M = 10.
Simulation results. Simulation results are shown in Figures 3 and 4. It can be observed that DeepTOP achieves the optimal average rewards in all cases. The reason that neural LPQL performs worse than DeepTOP may lie in its reliance on a common Lagrangian. Since the common Lagrangian is calculated based on the Q functions of all arms, an inaccuracy in one arm’s Q function can result in an inaccurate Lagrangian, which, in turn, leads to inaccuracy in the index values of all arms. Prior work [17] has already shown that WIBQL performs worse than LPQL. Hence, it is not surprising that neural WIBQL performs worse than both neural LPQL and DeepTOP. NeurWIN performs worse than DeepTOP because it is based on REINFORCE and therefore can only apply updates at the end of each minibatch of episodes. We also evaluate DeepTOP for different neural network architectures and the results are shown in Appendix G for the one-dimensional bandits and Appendix H for the recovering bandits.
7 Related Work
Threshold policies have been analysed for many decision-making problems formulated as MDPs. [11] examined the problem of residential energy storage under price fluctuations, and proved the existence
of optimal threshold policies for minimizing the cost. [5] proved that MDPs with convex and piecewise linear cost functions admit an optimal threshold policy. [24] showed the existence of an optimal threshold policy for energy arbitrage given degrading battery capacity, with [2] using the REINFORCE algorithm [33] to learn a trading policy with price thresholds for intraday electricity markets. [14] considered mean field games in a multi-agent MDP setting, and characterized the individual agent strategy with a threshold policy when the mean-field game admits a threshold policy.
More recently, [31] studied job-assignment threshold policies for data centers with heterogeneous servers and job classes, and gave conditions for the existence of optimal threshold policies. [35] proposed a distributed threshold-based control policy for graph traversal by assigning a state threshold that determines if the agent stays in or leaves a state. For minimizing the age of information in energy-harvesting sensors, [4] used the finite-difference policy gradient [23] to learn a possibly sub-optimal threshold policy in the average-cost setting. [13] proposed an RL-based threshold policy for semi-MDPs in controlling the micro-climate of buildings, with simulations demonstrating efficacy on a single-zone building. [29] used the Deep Q-network RL algorithm for selecting alert thresholds in anti-fraud systems, with simulations showing performance improvements over static threshold policies. [26] described the SALMUT RL algorithm for exploiting the ordered multi-threshold structure of the optimal policy, with SALMUT implemented in [16] for a computing node’s overload protection. In contrast to these works, DeepTOP-MDP is applicable to any MDP that admits threshold policies.
For learning the Whittle index policy for RMABs, [6] proposed a Q-learning heuristic called the Q Whittle Index Controller (QWIC), which may not find the Whittle indices even when the training converges. [20] describes a Deep RL algorithm called NeurWIN for learning the Whittle index of a restless arm independently of other arms. However, NeurWIN requires a simulator to train the neural networks. Some recent studies, such as [1, 3, 17], proposed various online learning algorithms that can find the Whittle index when the algorithms converge. These algorithms rely on some indirect property of the Whittle index, which explains why they converge more slowly than DeepTOP.
8 Conclusion and Future Work
In this paper, we presented DeepTOP: a Deep RL actor-critic algorithm that learns the optimal threshold function for MDPs that admit a threshold policy and for RMAB problems. We first developed the threshold policy gradient theorem, where we proved that a threshold function has a simple-to-compute gradient. Based on the gradient expressions, we designed the DeepTOP-MDP and DeepTOP-RMAB algorithm variants and compared them against state-of-the-art learning algorithms. In both the MDP and RMAB settings, experiment results showed that DeepTOP exceeds the performance of the baselines in all considered problems. A promising future direction is to extend DeepTOP to threshold policies with multiple actions. For example, the Federal Reserve needs to decide not only whether to raise the interest rate, but also the amount of the rate hike.
Acknowledgments and Disclosure of Funding This material is based upon work supported in part by NSF under Award Number ECCS-2127721, in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Grant Number W911NF-22-1-0151, and in part by Office of Naval Research under Contract N00014-21-1-2385. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. | 1. What is the main contribution of the paper regarding the optimization of Markov reward processes?
2. What are the strengths and weaknesses of the proposed algorithms, particularly in comparison with other works in the literature?
3. Do you have any concerns or questions regarding the assumptions made in the paper, such as the finite state space assumption?
4. Can you provide more explanations or clarifications regarding certain aspects of the proof of Theorem 1 or the deep threshold optimal policy computation in Section 5?
5. How does the paper's contribution differ from prior works, especially in terms of the extension towards Restless Multi-armed Bandit problems?
6. Are there any limitations or areas for improvement in the paper's approach or presentation, such as the need to better highlight the contribution towards RMAB or establish a clearer connection between the policy gradient theorem in the paper and the one in Marbach and Tsitsiklis (2001)? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
In this paper, the problem of learning the optimal threshold policy for Markov Decision Processes (MDPs) is considered. Using the monotonicity property of threshold policies, the authors establish a simple policy gradient formula for the class of threshold policies. Using that, an off-policy actor-critic algorithm (DeepTOP) is proposed to learn the optimal policy in a situation where the optimal policy is known to possess a threshold structure. Moreover, the equivalence between obtaining the whittle index in a Restless Multi-armed Bandit (RMAB) problem and optimal threshold policy in an MDP is established. Following that, the DeepTOP algorithm is extended to the RMAB setting. Extensive simulation results are presented to demonstrate that the proposed algorithms outperform other algorithms in the literature.
Strengths And Weaknesses
The paper is well-written, and the claims appear to be correct. Extensive simulations have been performed to demonstrate the efficacy of the proposed approaches.
Questions
However, there are several concerns as stated below. 1. The idea of viewing the Whittle index policy for RMABs as an optimal threshold policy is already developed in [1] as stated in the paper. Another important work in this direction is [a] Robledo, F., Borkar, V., Ayesta, U., & Avrachenkov, K. (2022). QWI: Q-learning with Whittle Index. ACM SIGMETRICS Performance Evaluation Review, 49(2), 47-50. See Equation (8) in the paper above. How is the proposed algorithm in this paper different from the schemes described in these papers? 2. It is assumed that
λt ∈ [−M, M] for all t
and the states can be numbered. This essentially translates into a finite state space. In the following paper, can’t Theorem 1 be derived as a corollary of the policy gradient theorem in [b] Marbach, P., & Tsitsiklis, J. N. (2001). Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control, 46(2), 191-209. 3. In the proof of Theorem 1, in the first step, the rationale behind swapping integration and summation is not clear. It needs to be explicitly stated in the paper. 4. In Algorithm 1, why don’t the authors consider a decreasing
ϵ? Does a constant ϵ
guarantee convergence to the optimal solution? 5. The deep threshold optimal policy computation of RMAB in Section 5 appears to be a straightforward extension of the policy gradient theorem in Theorem 1 because of the equivalence between obtaining the whittle index in a Restless Multi-armed Bandit (RMAB) problem and optimal threshold policy in an MDP. However, since this idea was already introduced in [1,a], the contribution in Section 5 is limited. 6. The authors have performed extensive simulations on various problems such as electric vehicle charging problem, inventory management, and make-to-stock problem. By leveraging the monotone property, DeepTOP performs better than DDPG and TD3. However, the explanation regarding how it outperforms SALMUT is not clear. Although DeepTOP employs the threshold policy gradient directly, if you take the policy gradient algorithm in [b] and encode the threshold policy information in the gradient of the transition probability matrix, is that not the same as the threshold policy gradient theorem (Theorem 1 in the paper)? The authors are requested to explain this point.
Limitations
Overall, although the authors’ effort in exploiting the information regarding the existence of the threshold-based optimal policy in the learning framework is appreciable, the contribution regarding extension towards RMAB needs to be better highlighted. Moreover, how the policy gradient theorem (Theorem 1) presented in the paper is a non-trivial extension of the policy gradient theorem in [b] within the context of threshold policies, needs to be established clearly. |
NIPS | Title
DeepTOP: Deep Threshold-Optimal Policy for MDPs and RMABs
Abstract
We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our policy significantly outperforms other reinforcement learning algorithms due to its ability to exploit the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means.
1 Introduction
This paper considers a class of control policies, called threshold policies, that naturally arise in many practical problems. For example, a smart home server may only turn on the air conditioner when the room temperature exceeds a certain threshold, and a central bank may only raise the interest rate when inflation exceeds a certain threshold. For such problems, finding the optimal control policies can be reduced to finding the appropriate thresholds given other factors of the system, such as the number of people in the room in the smart home server scenario or the unemployment rate and the current interest rate in the central bank scenario.
An important feature of threshold policies is that their actions are monotone. For example, if a smart home server would turn on the air conditioner at a certain temperature, then, all other factors being equal, the server would also turn on the air conditioner when the temperature is even higher. By leveraging this monotone property, an algorithm aiming to learn the optimal threshold can potentially be much more efficient than generic reinforcement learning algorithms seeking to learn the optimal action at different points of temperature separately. In order to design an efficient algorithm for learning the optimal threshold policy, we first formally define a class of Markov decision processes (MDPs) that admit threshold policies and its objective function. The optimal threshold policy is then the one that maximizes the objective function. However, the objective function involves an integral over a continuous range, which makes it infeasible to directly apply standard tools, such as backward-propagation in neural networks, to perform gradient updates.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Surprisingly, we show that, by leveraging the monotone property of threshold policies, the gradient of the objective function has a very simple expression. Built upon this expression, we propose Deep Threshold-Optimal Policy (DeepTOP), a model-free actor-critic deep reinforcement learning algorithm that finds the optimal threshold policies. We evaluate the performance of DeepTOP by considering three practical problems, an electric vehicle (EV) charging problem that determines whether to charge an EV in the face of unknown fluctuations of electricity price, an inventory management problem that determines whether to order for goods in the face of unknown seasonal demands, and a make-to-stock problem for servicing jobs with different sizes. For all problems, DeepTOP significantly outperforms other state-of-the-art deep reinforcement learning algorithms due to its ability to exploit the monotone property.
We also study the notoriously hard restless multi-armed bandit (RMAB) problem. We show that the Whittle index policy, a powerful tool for RMABs, can be viewed as an optimal threshold policy for an alternative problem. Based on this observation, we define an objective function for the alternative problem, of which the Whittle index is the maximizer. We again show that the gradient of the objective function has a simple expression. This simple expression allows us to extend DeepTOP for the learning of the Whittle index. We compare this DeepTOP extension to three recently proposed algorithms that seek to learn the optimal index policies through other indirect properties. Simulation results show that the DeepTOP extension learns much faster because it directly finds the optimal threshold policy.
The rest of the paper is organized as follows. Section 2 defines the MDP setting and threshold policies. We present the DeepTOP algorithm for MDP in Section 3. We then discuss how the Whittle index policy for RMABs can be viewed as a threshold policy in Section 4 and develop a DeepTOP extension for learning it in Section 5. We show DeepTOP’s performance results for MDPs and RMABs in Section 6, and give related works in Section 7 before concluding.
2 Threshold Policies for MDPs
Consider an agent controlling a stochastic environment E described as an MDP E = (S, A, R, P, γ), with state space S, binary action space A := {0, 1}, reward function R : S × A → Ω, transition dynamics P : S × A × S → ℝ, and discount factor γ ∈ [0, 1), where ℝ is the set of real numbers and Ω is the set of random variables. At each timestep t, the agent picks an action at ∈ A for the current state st. The state st ∈ S = ℝ × V has two components: a scalar state λt ∈ ℝ, and a vector state vt ∈ V, where V is a discrete set of vectors. We assume the environment state is fully observable. Given the state-action pair (st, at), the MDP generates a reward rt following the unknown random variable R(st, at), and a random next state st+1 = (λt+1, vt+1) following the unknown distribution P. We use r̄(λ, v, a) := E[R((λ, v), a)] to denote the unknown expected one-step reward that can be obtained for the state-action pair (λ, v, a).
A threshold policy is one that defines a threshold function µ : V → mapping each vector state to a real number. The policy then deterministically picks at = 1(µ(vt) > λt), where 1(·) is the indicator function. There are many applications where it is natural to consider threshold policies and we discuss some of them below. Example 1. Consider the problem of charging electric vehicles (EV). When an EV arrives at a charging station, it specifies its demands for charge and a deadline upon which it will leave the station. The electricity price changes over time following some random process. The goal of the operator is to fulfill the EV’s requirement with minimum cost. In this problem, we can model the system by letting the scalar state λt be the current electricity price and the vector state vt be the remaining charge and time to deadline of the EV. For this problem, it is natural to consider a threshold policy that defines a threshold µ(vt) as the highest price the operator is willing to pay to charge the EV under vector state vt. The operator only charges the vehicle, i.e., chooses at = 1, if λt < µ(vt). Example 2. Consider the problem of warehouse management. A warehouse stores goods waiting to be sold. When the number of stored goods exceeds the demand, then there is a holding cost for each unsold good. On the other hand, if the number of stored goods is insufficient to fulfill the demand, then there is a cost of lost sales. The goal of the manager is to decide when to place orders so as to minimize the total cost. In this problem we can let the scalar state λt be the current inventory and let the vector state vt be the vector of all factors, such as upcoming holidays, that can influence future demands. It is natural to consider a threshold policy where the manager only places a new order if the current inventory λt falls below a threshold µ(vt) based on the current vector state vt.
Example 3. Consider a smart home server that controls the air conditioner. Let λt be −(current temperature) and vt be the time of the day and the number of people in the house. The server should turn on the air conditioner only if the temperature exceeds some threshold determined by vt, or, equivalently, λt < µ(vt).
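To make the action rule concrete, the following is a minimal PyTorch-style sketch of a parametrized threshold function and the induced policy at = 1(µ(vt) > λt); the network architecture and function names are illustrative assumptions, not the configuration used later in the experiments.

import torch
import torch.nn as nn

class ThresholdActor(nn.Module):
    # Maps a vector state v to a scalar threshold mu(v); the architecture is illustrative only.
    def __init__(self, v_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(v_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, v):
        return self.net(v).squeeze(-1)

def select_action(actor, lam, v):
    # a_t = 1(mu(v_t) > lambda_t): act if and only if the threshold exceeds the scalar state.
    with torch.no_grad():
        return int(actor(v).item() > lam)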
Given a threshold policy with threshold function µ(·), we can define the corresponding action-value function by Qµ(λ, v, a). Let ρµ(λ′, v′, λ, v) be the discounted state distribution when the initial state is (λ, v) under the threshold policy to a visited state (λ′, v′). When the initial state is (λ, v), the expected discounted reward under the policy is
Q^{µ}(λ, v, 1(µ(v) > λ)) = ∑_{v′∈V} ∫_{λ′=−M}^{λ′=+M} ρ^{µ}(λ′, v′, λ, v) r̄(λ′, v′, 1(µ(v′) > λ′)) dλ′.   (1)
Let M be a sufficiently large constant such that λt ∈ [−M,+M] for all t. Our goal is to learn the optimal threshold function µφ(v) parametrized by a vector φ that maximizes the objective function
K(µφ) := ∫_{λ=−M}^{λ=+M} ∑_{v∈V} Q^{µφ}(λ, v, 1(µφ(v) > λ)) dλ.   (2)
3 Deep Threshold Optimal Policy for MDPs
In this section, we present a deep threshold optimal policy (DeepTOP) for MDPs that finds the optimal φ for maximizing K(µφ).
3.1 Threshold Policy Gradient Theorem for MDPs
In order to design DeepTOP, we first study the gradient ∇φK(µφ). At first glance, computing ∇φK(µφ) looks intractable since it involves an integral over λ ∈ [−M,+M]. However, we establish the following threshold policy gradient theorem that shows the surprising result that ∇φK(µφ) has a simple expression.
Theorem 1. Given the parameter vector φ, let ρ̄(λ, v) be the discounted state distribution when the initial state is chosen uniformly at random under the threshold policy. If all vector states v ∈ V have distinct values of µφ(v), then,
∇φK(µφ) = 2M|V| ∑_{v∈V} ρ̄(µφ(v), v) ( Q^{µφ}(µφ(v), v, 1) − Q^{µφ}(µφ(v), v, 0) ) ∇φµφ(v).   (3)

Proof. Let ρ̄_t(λ, v) be the distribution that the state at time t is (λ, v) when the initial state is chosen uniformly at random. Clearly, we have ρ̄(λ, v) = ∑_{t=1}^{∞} γ^{t−1} ρ̄_t(λ, v). Given φ, we number all states in V such that µφ(v_1) > µφ(v_2) > . . . . Let M_0 = +M, M_n = µφ(v_n) for all 1 ≤ n ≤ |V|, and M_{|V|+1} = −M. Also, let V_n be the subset of states {v | µφ(v) > M_n} = {v_1, v_2, . . . , v_{n−1}}. Now, consider the interval (M_{n+1}, M_n) for some n. Notice that, for all λ ∈ (M_{n+1}, M_n), 1(µφ(v) > λ) = 1 if and only if v ∈ V_{n+1}. In other words, for any vector state v, the threshold policy would take the same action under all λ ∈ (M_{n+1}, M_n), and we use π_{n+1}(v) to denote this action. We then have

∇φK(µφ) = ∇φ ∫_{λ=−M}^{λ=+M} ∑_{v∈V} Q^{µφ}(λ, v, 1(µφ(v) > λ)) dλ = ∑_{v∈V} ∇φ ∫_{λ=−M}^{λ=+M} Q^{µφ}(λ, v, 1(µφ(v) > λ)) dλ

= ∑_{v∈V} ∑_{n=0}^{|V|} ∇φ ∫_{λ=M_{n+1}}^{λ=M_n} Q^{µφ}(λ, v, π_{n+1}(v)) dλ

= ∑_{v∈V} ∑_{n=0}^{|V|} ( Q^{µφ}(M_n, v, π_{n+1}(v)) ∇φM_n − Q^{µφ}(M_{n+1}, v, π_{n+1}(v)) ∇φM_{n+1} + ∫_{λ=M_{n+1}}^{λ=M_n} ∇φQ^{µφ}(λ, v, π_{n+1}(v)) dλ ),   (4)

where the summation-integration swap in the first equation follows the Fubini-Tonelli theorem and the last step follows the Leibniz integral rule. We simplify the first two terms in the last step by

∑_{v∈V} ∑_{n=0}^{|V|} ( Q^{µφ}(M_n, v, π_{n+1}(v)) ∇φM_n − Q^{µφ}(M_{n+1}, v, π_{n+1}(v)) ∇φM_{n+1} )

= ∑_{v∈V} ∑_{n=1}^{|V|} ( Q^{µφ}(µφ(v_n), v, 1(v ∈ V_{n+1})) − Q^{µφ}(µφ(v_n), v, 1(v ∈ V_n)) ) ∇φµφ(v_n)

= 2M|V| ∑_{v∈V} ρ̄_1(µφ(v), v) ( Q^{µφ}(µφ(v), v, 1) − Q^{µφ}(µφ(v), v, 0) ) ∇φµφ(v).   (5)

Next, we expand the last term in (4). Note that Q^{µφ}(λ, v, a) = r̄(λ, v, a) + γ ∫_{λ′=−M}^{λ′=+M} ∑_{v′} p(λ′, v′ | λ, v, a) Q^{µφ}(λ′, v′, 1(µφ(v′) > λ′)) dλ′, where p(·|·) is the transition probability. Hence, ∇φQ^{µφ}(λ, v, a) = γ ∇φ ∫_{λ′=−M}^{λ′=+M} ∑_{v′} p(λ′, v′ | λ, v, a) Q^{µφ}(λ′, v′, 1(µφ(v′) > λ′)) dλ′. Using the same techniques as in (4) and (5), we have

∑_{v∈V} ∑_{n=0}^{|V|} ∫_{λ=M_{n+1}}^{λ=M_n} ∇φQ^{µφ}(λ, v, π_{n+1}(v)) dλ = ∑_{v∈V} ∫_{λ=−M}^{λ=+M} ∇φQ^{µφ}(λ, v, 1(µφ(v) > λ)) dλ

= γ ∑_{v∈V} ∫_{λ=−M}^{λ=+M} ( ∇φ ∫_{λ′=−M}^{λ′=+M} ∑_{v′∈V} p(λ′, v′ | λ, v, 1(µφ(v) > λ)) Q^{µφ}(λ′, v′, 1(µφ(v′) > λ′)) dλ′ ) dλ

= 2M|V| ∑_{v∈V} γρ̄_2(µφ(v), v) ( Q^{µφ}(µφ(v), v, 1) − Q^{µφ}(µφ(v), v, 0) ) ∇φµφ(v)
  + γ ∑_{v∈V} ∫_{λ=−M}^{λ=+M} ( ∑_{v′∈V} ∫_{λ′=−M}^{λ′=+M} ∇φ( p(λ′, v′ | λ, v, 1(µφ(v) > λ)) Q^{µφ}(λ′, v′, 1(µφ(v′) > λ′)) ) dλ′ ) dλ.

In the above equation, expanding the last term in time establishes (3).
3.2 DeepTOP Algorithm Design for MDPs
Motivated by Theorem 1, we now present DeepTOP-MDP, a model-free, actor-critic Deep RL algorithm. DeepTOP-MDP maintains an actor network with parameters φ that learns a threshold function µφ(v), and a critic network with parameters θ that learns an action-value function Qθ(λ, v, a). DeepTOP-MDP also maintains a target critic network with parameters θ′ that is updated slower than the critic parameters θ. The purpose of the target critic network is to improve the learning stability as demonstrated in [8, 19]. The objective of the critic network is to find θ that minimizes the loss function
L(θ) := E_{st, at, rt, st+1} [ ( Q^θ(λt, vt, at) − rt − γ max_{a′∈A} Q^{θ′}(λt+1, vt+1, a′) )^2 ],   (6)

where (st, at, rt, st+1) is sampled under some policy with st = (λt, vt). The objective of the actor network is to find φ that maximizes ∫_{λ=−M}^{λ=+M} ∑_{v∈V} Q^θ(λ, v, 1(µφ(v) > λ)) dλ. In each timestep t, the environment E provides a state st to the agent. We set an exploration parameter ϵt ∈ [0, 1) and take a random action with probability ϵt. Otherwise, DeepTOP-MDP calculates µφ(vt) based on vt, and chooses at = 1(µφ(vt) > λt). E generates a reward rt and a next state st+1. A replay memory denoted by M then stores the transition {st, at, rt, st+1}. After filling the memory with at least B transitions, DeepTOP-MDP updates the parameters φ, θ, θ′ in every timestep using a sampled minibatch of B transitions {s_{tk}, a_{tk}, r_{tk}, s_{tk+1}}, 1 ≤ k ≤ B. The critic network uses the sampled transitions to calculate the estimated gradient of L(θ):
∇̂θL(θ) := (2/B) ∑_{k=1}^{B} ( Q^θ(λ_{tk}, v_{tk}, a_{tk}) − r_{tk} − γ max_{a′∈A} Q^{θ′}(λ_{tk+1}, v_{tk+1}, a′) ) ∇θQ^θ(λ_{tk}, v_{tk}, a_{tk}).   (7)

Similarly, the actor network uses the sampled transitions and Equation (3) to calculate the estimated gradient:

∇̂φK(µφ) := (1/B) ∑_{k=1}^{B} ( Q^θ(µφ(v_{tk}), v_{tk}, 1) − Q^θ(µφ(v_{tk}), v_{tk}, 0) ) ∇φµφ(v_{tk}).   (8)
Algorithm 1 Deep Threshold Optimal Policy Training for MDPs (DeepTOP-MDP)
  Randomly select initial actor network parameters φ and critic network parameters θ.
  Set target critic network parameters θ′ ← θ, and initialize replay memory M.
  for timestep t = 1, 2, 3, . . . do
    Receive state st = (λt, vt) from environment E.
    Select action at = 1(µφ(vt) > λt) with probability 1 − ϵt. Otherwise, select action at randomly.
    Execute action at, and observe reward rt and next state st+1 from E.
    Store transition {st, at, rt, st+1} into M.
    Sample a minibatch of B transitions {s_{tk}, a_{tk}, r_{tk}, s_{tk+1}}, 1 ≤ k ≤ B, from M.
    Update critic network parameters θ using the estimated gradient from Equation (7).
    Update actor network parameters φ using the estimated gradient from Equation (8).
    Soft update target critic parameters θ′: θ′ ← τθ + (1 − τ)θ′.
  end for
Both the critic network and the actor network then take a gradient update step. Finally, we soft update the target critic’s parameters θ′ using θ′ ← τθ + (1 − τ)θ′, with τ < 1. The complete pseudocode is given in Algorithm 1.
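To make the update step concrete, the following is a minimal PyTorch-style sketch of one DeepTOP-MDP training iteration on a sampled minibatch. The actor/critic call signatures, optimizer objects, and hyper-parameter values are assumptions for illustration and do not reproduce the authors' released implementation. Note that the gradient in Equation (8) can be realized with automatic differentiation by treating the critic difference as a constant weight on µφ(v).

import torch
import torch.nn.functional as F

def deeptop_mdp_update(actor, critic, target_critic, actor_opt, critic_opt,
                       lam, v, a, r, lam_next, v_next, gamma=0.99, tau=0.005):
    zeros, ones = torch.zeros_like(r), torch.ones_like(r)

    # Critic step (Equations (6)-(7)): regress Q(lam, v, a) toward r + gamma * max_a' Q'(lam', v', a').
    with torch.no_grad():
        q_next = torch.maximum(target_critic(lam_next, v_next, zeros),
                               target_critic(lam_next, v_next, ones))
        target = r + gamma * q_next
    critic_loss = F.mse_loss(critic(lam, v, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor step (Equation (8)): weight the gradient of mu(v) by Q(mu(v), v, 1) - Q(mu(v), v, 0),
    # with the critic difference treated as a constant multiplier of mu(v).
    mu = actor(v)
    with torch.no_grad():
        weight = critic(mu.detach(), v, ones) - critic(mu.detach(), v, zeros)
    actor_loss = -(weight * mu).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft update of the target critic.
    with torch.no_grad():
        for p, tp in zip(critic.parameters(), target_critic.parameters()):
            tp.mul_(1.0 - tau).add_(tau * p)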
4 Whittle Index Policy for RMABs
In this section, we demonstrate how the Whittle index policy [32], a powerful tool for solving the notoriously intractable Restless Multi-Armed Bandit (RMAB) problem, can be represented with a set of threshold functions. We first describe the RMAB control problem, and then define the Whittle index function.
An RMAB problem consists of N arms. The environment of an arm i, denoted as Ei, is an MDP with a discrete state space si,t ∈ Si, and a binary action space ai,t ∈ A := {0, 1}, where ai,t = 1 means that arm i is activated, and ai,t = 0 means that arm i is left passive at time t. Given the state-action pair (si,t, ai,t), Ei generates a random reward ri,t and a random next state si,t+1 following some unknown probability distributions based on (si,t, ai,t). Here we also use r̄i(si, ai) to denote the unknown expected one-step reward that can be obtained for the state-action pair (si, ai).
A control policy over all arms takes the states (s1,t, s2,t, . . . , sN,t) as input, and activates V out of N arms in every timestep. Solving for the optimal control policy for RMABs was proven to be intractable [21], since the agent must optimize over an input state space exponential in N. To circumvent the dimensionality challenge, the Whittle index policy assigns real values to an arm’s states using a Whittle index function for each arm Wi : Si → ℝ. Based on the assigned Whittle indices ( W1(s1,t), W2(s2,t), . . . , WN(sN,t) ), the Whittle index policy activates the V highest-valued arms out of N arms in timestep t, and picks the passive action for the remaining arms.
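For illustration, the following minimal Python sketch shows how the Whittle index policy turns the N index values into per-arm actions in a timestep; the function name and tie-breaking behavior are hypothetical.

import numpy as np

def whittle_action(indices, V):
    # Activate the V arms with the largest index values; ties are broken arbitrarily.
    indices = np.asarray(indices)
    top = np.argpartition(-indices, V - 1)[:V]
    actions = np.zeros(len(indices), dtype=int)
    actions[top] = 1
    return actions

# Example: whittle_action([0.3, -0.1, 0.9, 0.2], V=2) activates arms 2 and 0.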
4.1 The Whittle Index Function as The Optimal Threshold Function
To define the Whittle index and relate it to threshold functions, let us first consider an alternative control problem of a single arm i as environment Ei with activation cost λ. In this problem, the agent follows a control policy that determines whether the arm is activated or not based on its current state si,t. If the policy activates the arm, then the agent must pay an activation cost of λ. Hence, the agent’s net reward at timestep t is defined as ri,t − λai,t. We now consider applying threshold policies for this alternative control problem. A threshold policy defines a threshold function µi : Si → that maps each state to a real value. It then activates the arm if and only if µi(si,t) > λ, i.e., ai,t = 1(µi(si) > λ). The value of µi(si,t) can therefore be viewed as the largest activation cost that the agent is willing to pay to activate the arm under state si,t. To characterize the performance of a threshold policy with a threshold function µi(·), we let ρµi,λ(s′i , si) be the discounted state distribution, which is the average discounted number of visits of state s′i when the initial state is si under the threshold policy and λ. When the initial state is si, the expected discounted net reward under the threshold policy is
Q_{i,λ}(si, 1(µi(si) > λ)) = ∑_{s′i∈Si} ρ_{µi,λ}(s′i, si) ( r̄i(s′i, 1(µi(s′i) > λ)) − λ·1(µi(s′i) > λ) ).   (9)
The performance of the threshold policy under a given λ is defined as J_{i,λ}(µi) := ∑_{si∈Si} Q_{i,λ}(si, 1(µi(si) > λ)). The Whittle index of this arm is defined as the function µi(·) whose corresponding threshold policy maximizes J_{i,λ}(µi) for all λ:
Definition 1. (Whittle Index) If there exists a function µi : Si → ℝ such that choosing 1(µi(si) > λ) maximizes J_{i,λ}(µi) for all λ ∈ (−∞,+∞), then we say that µi(si) is the Whittle index Wi(si)¹.
We note that, for some arms, there does not exist any function µi(si) that satisfies the condition in Definition 1. For such arms, the Whittle index does not exist. We say that an arm is indexable if it has a well-defined Whittle index function. Definition 1 shows that finding the Whittle index is equivalent to finding the optimal µi(·) that maximizes J_{i,λ}(µi) for all λ ∈ (−∞,+∞). Parameterizing a threshold function µ_i^{φi}(·) by parameters φi and letting M be a sufficiently large number such that µ_i^{φi}(si) ∈ (−M,+M) for all si and φi, we aim to find the optimal φi for maximizing the objective function
K_i(µ_i^{φi}) := ∫_{λ=−M}^{λ=+M} ∑_{si∈Si} Q_{i,λ}(si, 1(µ_i^{φi}(si) > λ)) dλ.   (10)
5 Deep Threshold Optimal Policy for RMABs
To design a DeepTOP variant for RMABs, we first give the gradient of the objective function.
Theorem 2. Given the parameter vector φi, let ρ̄_λ(si) be the discounted state distribution when the initial state is chosen uniformly at random and the activation cost is λ. If all states si ∈ Si have distinct values of µ_i^{φi}(si), then,

∇φi K_i(µ_i^{φi}) = |Si| ∑_{si∈Si} ρ̄_{µ_i^{φi}(si)}(si) ( Q_{i,µ_i^{φi}(si)}(si, 1) − Q_{i,µ_i^{φi}(si)}(si, 0) ) ∇φi µ_i^{φi}(si).   (11)

Proof. The proof is similar to that of Theorem 1. For completeness, we provide it in Appendix A.
We note that Theorem 2 does not require the arm to be indexable. Whether an arm is indexable or not, using Theorem 2 along with a gradient ascent algorithm will find a locally-optimal φi that maximizes K_i(µ_i^{φi}). When the arm is indexable, the resulting threshold function µ_i^{φi} is the Whittle index function. Using the gradient result from Equation (11), we present the algorithm DeepTOP-RMAB for finding the optimal parametrized threshold functions µ_i^{φi} for arms i = 1, 2, . . . , N. The training method is similar to the MDP version, except for two important differences. First, the training of each arm is done independently from the others. Second, the value of λ is an artificial value that only exists in the alternative problem but not in the original RMAB problem. Similar to DeepTOP-MDP, we maintain three network parameters for each arm i: actor φi, critic θi, and target-critic θ′i. The critic network parametrizes the action-value function, and is optimized by minimizing the loss function

L_i(θi) := ∫_{λ=−M}^{λ=+M} E_{s_{i,t}, a_{i,t}, r_{i,t}, s_{i,t+1}} [ ( Q^{θi}_{i,λ}(s_{i,t}, a_{i,t}) − r_{i,t} − γ max_{a′∈A} Q^{θ′i}_{i,λ}(s_{i,t+1}, a′) )^2 ] dλ,   (12)

with (s_{i,t}, a_{i,t}, r_{i,t}, s_{i,t+1}) sampled under some policy. In each timestep t, each arm environment Ei provides its current state s_{i,t} to the agent. For each arm i = 1, 2, . . . , N, DeepTOP-RMAB calculates the state value µ_i^{φi}(s_{i,t}) with the arm’s respective actor network parameters φi. Given an exploration parameter ϵt ∈ [0, 1), DeepTOP-RMAB activates the V arms with the largest µ_i^{φi}(s_{i,t}) with probability 1 − ϵt, and activates V randomly selected arms with probability ϵt. Based on the executed actions, each arm provides a reward r_{i,t} and the next state s_{i,t+1}. An arm’s transition {s_{i,t}, a_{i,t}, r_{i,t}, s_{i,t+1}} is then stored in the arm’s memory denoted by Mi. After filling each arm’s memory with at least B transitions, DeepTOP-RMAB updates φi, θi, and θ′i in every timestep. For each arm i, DeepTOP-RMAB first samples a minibatch of B transitions {s_{i,tk}, a_{i,tk}, r_{i,tk}, s_{i,tk+1}}, 1 ≤ k ≤ B, from the memory Mi. It then randomly samples B values [λ_{i,1}, λ_{i,2}, . . . , λ_{i,B}] from the range [−M,+M]. Using the sampled transitions and λ values, it estimates the gradient of L_i(θi) as
∇̂θi L_i(θi) := (2/B) ∑_{k=1}^{B} ( Q^{θi}_{i,λk}(s_{i,tk}, a_{i,tk}) − r_{i,tk} − γ max_{a′∈A} Q^{θ′i}_{i,λk}(s_{i,tk+1}, a′) ) ∇θi Q^{θi}_{i,λk}(s_{i,tk}, a_{i,tk}).   (13)
¹ To simplify notations, we use a necessary and sufficient condition for the Whittle index as its definition. We refer interested readers to [9] for more thorough discussions on the Whittle index.
Using the sampled transitions and Equation (11), it estimates the gradient of K_i(µ_i^{φi}) as

∇̂φi K_i(µ_i^{φi}) := (1/B) ∑_{k=1}^{B} ( Q^{θi}_{i,µ_i^{φi}(s_{i,tk})}(s_{i,tk}, 1) − Q^{θi}_{i,µ_i^{φi}(s_{i,tk})}(s_{i,tk}, 0) ) ∇φi µ_i^{φi}(s_{i,tk}).   (14)
A gradient update step is taken after calculating the actor and critic networks’ gradients. Finally, DeepTOP-RMAB soft updates the target critic parameters θ′i using θ ′ i ← τθi + (1 − τ)θ′i , with τ < 1. The complete DeepTOP-RMAB pseudocode is given in Appendix B.
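For concreteness, the following is a minimal PyTorch-style sketch of one DeepTOP-RMAB training iteration for a single arm, combining the uniform sampling of activation costs with Equations (13) and (14). The network call signatures and hyper-parameter values are assumptions for illustration, the temporal-difference target follows Equation (13) as written, and the soft target-critic update is omitted for brevity.

import torch
import torch.nn.functional as F

def deeptop_rmab_update(actor, critic, target_critic, actor_opt, critic_opt,
                        s, a, r, s_next, M=1.0, gamma=0.99):
    B = r.shape[0]
    lam = (2.0 * torch.rand(B) - 1.0) * M          # activation costs sampled uniformly from [-M, +M]
    zeros, ones = torch.zeros(B), torch.ones(B)

    # Critic step (Equation (13)): TD regression of Q_lambda(s, a) for the sampled activation costs.
    with torch.no_grad():
        q_next = torch.maximum(target_critic(lam, s_next, zeros),
                               target_critic(lam, s_next, ones))
        target = r + gamma * q_next
    critic_loss = F.mse_loss(critic(lam, s, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor step (Equation (14)): weight the gradient of mu(s) by Q(s, 1) - Q(s, 0) at lam = mu(s).
    mu = actor(s)
    with torch.no_grad():
        weight = critic(mu.detach(), s, ones) - critic(mu.detach(), s, zeros)
    actor_loss = -(weight * mu).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()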
6 Simulations
We have implemented and tested both DeepTOP-MDP and DeepTOP-RMAB in a variety of settings. The training procedure of the two DeepTOP algorithms are similar to that of the DDPG [19] algorithm except for the expression of gradients. We implemented the DeepTOP algorithms by modifying an open-source implementation of DDPG [12]. All source code can be found in the repository https://github.com/khalednakhleh/deeptop.
6.1 Simulations for MDPs
We evaluate three MDPs, namely, the electric vehicle charging problem, the inventory management problem, and the make-to-stock problem.
EV charging problem. This problem is based on Yu, Xu, and Tong [34]. It considers a charging station serving EVs. When an EV arrives at the station, it specifies the amount of charges it needs and a deadline upon which it will leave the station. The electricity price changes over time and we model it by an Ornstein-Uhlenbeck process [30]. In each timestep, the station decides whether to charge the EV or not. If it decides to charge the EV, then it provides one unit charge to the EV. The station then obtains a unit reward and pays the current electricity price. If the station fails to fully charge the EV by the deadline of the EV, then the station suffers from a penalty that is a convex function of the remaining needed charge. A new EV arrives at the station when the previous EV leaves. We model this problem by letting the scalar state be the current electricity price and the vector state be the remaining needed charge and time-to-deadline of the current EV. A threshold policy is one that calculates a threshold based on the EV’s remaining needed charge and time-to-deadline, and then decides to charge the EV if and only if the current electricity price is below the threshold.
Inventory management problem. We construct an inventory management problem by jointly incorporating a variety of practical challenges, including seasonal fluctuations in demands and lead times in orders, in the literature [28, 15, 10, 27]. We consider a warehouse holding goods. In each timestep, there is a random amount of demand whose mean depends on the time of the year. The warehouse can fulfill the demand as long as it has sufficient inventory, and it makes a profit for each unit of sold goods. At the end of the timestep, the warehouse incurs a unit holding cost for each unit of unsold goods. The warehouse manager needs to decide whether to order more goods. When it places an order for goods, there is a lead time of one time step, that is, the goods ordered at timestep t are only available for sale at timestep t + 1. We model this problem by letting the scalar state be the current inventory and the vector state be the time of the year. A threshold policy calculates a threshold based on the time of the year and decides to place an order for goods if the current inventory is below the threshold.
Make-to-stock production problem. This problem is considered in [26]. It studies a system that produces m items with W demand classes and buffer size S . Accepting a class v order leads to a reward Rv, as long as there is still room in the buffer for the order. The classes of demands are ordered such that R1 > R2 > . . . . In this problem, the scalar state is the number of accepted but unfinished orders and the vector state is the class of the next arriving order. More details about the three MDPs can be found in Appendix C.
Evaluated policies. We compare DeepTOP-MDP against DDPG [19] and TD3 [8], two state-ofthe-art off-policy and model free deep RL algorithms. We use open-source implementations of these two algorithms for [12, 7]. We use the same hyper-parameters, including the neural network architecture, learning rates, etc., for all three algorithms. We also evaluate the Structure-Aware Learning for Multiple Thresholds algorithm (SALMUT) [26], a reinforcement learning algorithm
that finds the optimal threshold policy. SALMUT requires the vector states to be pre-sorted by their threshold values. Hence, SALMUT can only be applied to the make-to-stock production problem. Details about the training parameters can be found in Appendix D. For the EV charging problem, Yu, Xu, and Tong [34] has found the optimal threshold policy. We call the optimal threshold policy the Deadline Index policy and compare DeepTOP-MDP against it.
Simulations results. Simulation results of the three MDPs are shown in Figure 1. The results are the average of 20 independent runs. Before starting a run, we fill an agent’s memory with 1000 transitions by randomly selecting actions. We plot the average reward obtained from the previous 100 timesteps, and average them over 20 runs. In addition, we provide the standard deviation bounds from the average reward.
It can be observed that DeepTOP significantly outperforms DDPG, TD3, and SALMUT. Although the training procedure of DeepTOP is similar to that of DDPG, DeepTOP is able to achieve much faster learning by leveraging the monotone property. Without leveraging the monotone property, DDPG and TD3 need to learn the optimal policy for each scalar state independently, and therefore have much worse performance. DeepTOP performs better than SALMUT because DeepTOP directly employs the threshold policy gradient. SALMUT in contrast approximates threshold policies through randomized policies since it can only handle continuous and differentiable functions. We believe this might be the reason why DeepTOP outperforms SALMUT. We also note that DeepTOP performs virtually the same as the Deadline Index policy for the EV charging problem in about 2000 timesteps, suggesting that DeepTOP indeed finds the optimal threshold policy quickly. We also evaluate DeepTOP for different neural network architectures in Appendix E, and show that DeepTOP performs the best in all settings.
6.2 Simulations for RMABs
We evaluate two RMABs, namely, the one-dimensional bandits from [17] and the recovering bandits from [20].
One-dimensional bandits. We consider an extension of the RMAB problem evaluated in Killian et al. [17]. Killian et al. [17] considers the case when each arm is a two-state Markov process. We extend it so that each arm is a Markov process with 100 states, numbered as 0, 1, . . . , 99, as shown in Figure 2 where state 99 is the optimal state.
The reward of an arm depends on the distance between its current state and state 99. Suppose the current state of arm i is si,t, then it generates a reward ri,t = 1 − ((si,t − 99)/99)^2. If the arm is activated, then it changes to state si,t+1 = min{si,t + 1, 99} with probability pi. If the arm is not activated, then it changes to state si,t+1 = max{si,t − 1, 0} with probability qi. In the simulations, we pick the probabilities pi to be evenly spaced over the interval [0.2, 0.8], depending on the number of arms N. We set the
probabilities qi = pi. We consider that there are N arms and that the agent needs to activate V arms in each timestep. We evaluate three settings of (N,V) = (10, 3), (20, 5), and (30, 6).
Recovering bandits. First introduced in [25], we consider the case that studies the varying behavior of consumers over time. A consumer’s interest in a particular product falls if the consumer clicks on its advertisement link. However their interest in the product would recover with time. The recovering bandit is modelled as an RMAB with each arm being the advertisement link. The reward of playing an arm is given by a function f ( min(z, zmax) ) , with z being the time since the arm was last played.
In our experiments, we consider arms with different reward functions, with the arm’s state being the value min{z, zmax} and zmax = 100. We also evaluate recovering bandits on three settings of (N,V) = (10, 3), (20, 5), and (30, 6). More details can be found in Appendix F.
Evaluated policies. We compare DeepTOP-RMAB against three recent studies that aim to learn index policies for RMABs, namely, Lagrange policy Q learning (LPQL) [17], Whittle index based Q learning (WIBQL) [1], and neural Whittle index network (NeurWIN) [20]. LPQL consists of three steps: First, it learns a Q function for each arm independently. Second, it uses the Q functions of all arms to determine a common Lagrangian. Third, it uses the Lagrangian to calculate the index of each arm. WIBQL is a two-timescale algorithm that learns the Whittle indices of indexable arms by updating Q values on the fast timescale, and index values on the slower timescale. NeurWIN is an off-line training algorithm based on REINFORCE that requires a simulator to learn the Whittle index. Both LPQL and WIBQL are tabular learning methods which may perform poorly compared to deep RL algorithms when the size of the state space is large. Hence, we also design deep RL equivalent algorithms that approximate their Q functions using neural networks. We refer to the Deep RL extensions as neural LPQL and neural WIBQL. In all experiments, neural LPQL, neural WIBQL, and NeurWIN use the same hyper-parameters as DeepTOP-RMAB. For the one-dimensional bandits, it can be shown that the Whittle index is in the range of [−1, 1], and hence we set M = 1. For the recovering bandits, we set M = 10.
Simulation results. Simulation results are shown in Figures 3 and 4. It can be observed that DeepTOP achieves the optimal average rewards in all cases. The reason that neural LPQL performs worse than DeepTOP may lie in its reliance on a common Lagrangian. Since the common Lagrangian is calculated based on the Q functions of all arms, an inaccuracy in one arm’s Q function can result in an inaccurate Lagrangian, which, in turn, leads to inaccuracy in the index values of all arms. Prior work [17] has already shown that WIBQL performs worse than LPQL. Hence, it is not surprising that neural WIBQL performs worse than both neural LPQL and DeepTOP. NeurWIN performs worse than DeepTOP because it is based on REINFORCE and therefore can only apply updates at the end of each minibatch of episodes. We also evaluate DeepTOP for different neural network architectures and the results are shown in Appendix G for the one-dimensional bandits and Appendix H for the recovering bandits.
7 Related Work
Threshold policies have been analysed for many decision-making problems formed as MDPs. [11] examined the residential energy storage under price fluctuations problem, and proved the existence
of optimal threshold policies for minimizing the cost. [5] proved that MDPs with a convex and piecewise linear cost functions admit an optimal threshold policy. [24] shows the existence of an optimal threshold policy for energy arbitrage given degrading battery capacity, with [2] using the REINFORCE algorithm [33] to learn a trading policy with price thresholds for intraday electricity markets. [14] considered mean field games in a multi-agent MDP setting, and characterized individual agent strategy with a threshold policy when the mean game admits a threshold policy.
More recently, [31] studies finding a job assigning threshold policy for data centers with heterogeneous servers and job classes, and gave conditions for the existence of optimal threshold policies. [35] proposed a distributed threshold-based control policy for graph traversal by assigning a state threshold that determines if the agent stays in or leaves a state. For minimizing the age of information in energy-harvesting sensors, [4] used the finite-difference policy gradient [23] to learn a possibly sub-optimal threshold policy in the average cost setting. [13] proposed an RL-based threshold policy for semi-MDPs in controlling micro-climate for buildings with simulations proving efficacy on a single-zone building. [29] used the Deep Q-network RL algorithm for selecting alert thresholds in anti-fraud systems with simulations showing performance improvements over static threshold policies. [26] described the SALMUT RL algorithm for exploiting the ordered multi-threshold structure of the optimal policy with SALMUT implementations in [16] for computing node’s overload protection. In contrast to these works, DeepTOP-MDP is applicable to any MDP that admits threshold policies.
In learning the Whittle index policy for RMABs, [6] proposed a Q-learning heuristic called the Q Whittle Index Controller (QWIC) which may not find the Whittle indices even when the training converges. [20] describes a Deep RL algorithm called NeurWIN for learning the Whittle index of a restless arm independently of other arms. However, NeurWIN requires a simulator to train the neural networks. Some recent studies, such as [1, 3, 17], proposed various online learning algorithms that can find Whittle index when the algorithms converge. These algorithms rely on some indirect property of the Whittle index which explains why they converge slower than DeepTOP.
8 Conclusion and Future Work
In this paper, we presented DeepTOP: a Deep RL actor-critic algorithm that learns the optimal threshold function for MDPs that admit a threshold policy and for RMAB problems. We first developed the threshold policy gradient theorem, where we proved that a threshold function has a simple to compute gradient. Based on the gradient expressions, we design the DeepTOP-MDP and DeepTOP-RMAB algorithm variants and compare them against state-of-the-art learning algorithms. In both the MDP and RMAB settings, experiment results showed that DeepTOP exceeds the performance of baselines in all considered problems. A promising future direction is to extend DeepTOP to threshold policies with multiple actions. For example, the Federal Reserve needs to decide not only whether to raise interest rate, but also the amount of rate hike.
Acknowledgments and Disclosure of Funding This material is based upon work supported in part by NSF under Award Number ECCS-2127721, in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Grant Number W911NF-22-1-0151, and in part by Office of Naval Research under Contract N00014-21-1-2385. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. | 1. What is the focus and contribution of the paper on dynamic problems?
2. What are the strengths of the proposed approach, particularly in terms of utilizing neural networks?
3. What are the weaknesses of the paper, especially regarding the experiment section?
4. Do you have any concerns about the suitability of the environments used in the experiments?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
6. Are there any suggestions for future directions or improvements to the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper consider a subset of dynamic problems, in which the optimal policy is a threshold-policy. The authors use this attribute to formulate tailored off-policy actor-critic algorithms, for both MDPs and RMABs which are gradient-based, so can utilize neural networks. They empirically compare their method to SOTA methods in three MDP domains and three RMAB parametrizations, the results show that their method, DeepTOP, performs better than the compared methods in all the experiments.
Strengths And Weaknesses
Overall, I think it is a good paper, which contributes to the community. But I do have concerns regarding the empirical experiments: I think that the environments are rather toy problems, and since DeepTOP incorporates neural networks, its main advantage over tailored analytical methods is in complex environments.
Strengths:
The performance of DeepTOP compared to the other methods is impressive. I think that while being limited, threshold policies are indeed interesting. The theorems are important contributions as well.
Weaknesses:
The main contribution of the paper is an empirical method, and the experiments are conducted in simple domains. I think that more challenging domains should be considered.
While important, the theorems are relatively minor contributions, hence they do not compensate for the lack of experiments.
Minor Comments:
The last part of Section 6 seems like it is addressed to the reviewers (lines 220-221). Rephrase.
Questions
Would you be able to run experiments in more complex domains?
Is it possible to give a similar analysis for any policy which is a fixed, deterministic function of some scalar λt and an output of a neural network? This might be a future direction.
Limitations
The authors addressed the limitations of their work. |
NIPS | Title
DeepTOP: Deep Threshold-Optimal Policy for MDPs and RMABs
Abstract
We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our policy significantly outperforms other reinforcement learning algorithms due to its ability to exploit the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means.
1 Introduction
This paper considers a class of control policies, called threshold policies, that naturally arise in many practical problems. For example, a smart home server may only turn on the air conditioner when the room temperature exceeds a certain threshold, and a central bank may only raise the interest rate when inflation exceeds a certain threshold. For such problems, finding the optimal control policies can be reduced to finding the appropriate thresholds given other factors of the system, such as the number of people in the room in the smart home server scenario or the unemployment rate and the current interest rate in the central bank scenario.
An important feature of threshold policies is that their actions are monotone. For example, if a smart home server would turn on the air conditioner at a certain temperature, then, all other factors being equal, the server would also turn on the air conditioner when the temperature is even higher. By leveraging this monotone property, an algorithm aiming to learn the optimal threshold can potentially be much more efficient than generic reinforcement learning algorithms seeking to learn the optimal action at different points of temperature separately. In order to design an efficient algorithm for learning the optimal threshold policy, we first formally define a class of Markov decision processes (MDPs) that admit threshold policies and its objective function. The optimal threshold policy is then the one that maximizes the objective function. However, the objective function involves an integral over a continuous range, which makes it infeasible to directly apply standard tools, such as backward-propagation in neural networks, to perform gradient updates.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Surprisingly, we show that, by leveraging the monotone property of threshold policies, the gradient of the objective function has a very simple expression. Built upon this expression, we propose Deep Threshold-Optimal Policy (DeepTOP), a model-free actor-critic deep reinforcement learning algorithm that finds the optimal threshold policies. We evaluate the performance of DeepTOP by considering three practical problems, an electric vehicle (EV) charging problem that determines whether to charge an EV in the face of unknown fluctuations of electricity price, an inventory management problem that determines whether to order for goods in the face of unknown seasonal demands, and a make-to-stock problem for servicing jobs with different sizes. For all problems, DeepTOP significantly outperforms other state-of-the-art deep reinforcement learning algorithms due to its ability to exploit the monotone property.
We also study the notoriously hard restless multi-armed bandit (RMAB) problem. We show that the Whittle index policy, a powerful tool for RMABs, can be viewed as an optimal threshold policy for an alternative problem. Based on this observation, we define an objective function for the alternative problem, of which the Whittle index is the maximizer. We again show that the gradient of the objective function has a simple expression. This simple expression allows us to extend DeepTOP for the learning of the Whittle index. We compare this DeepTOP extension to three recently proposed algorithms that seek to learn the optimal index policies through other indirect properties. Simulation results show that the DeepTOP extension learns much faster because it directly finds the optimal threshold policy.
The rest of the paper is organized as follows. Section 2 defines the MDP setting and threshold policies. We present the DeepTOP algorithm for MDP in Section 3. We then discuss how the Whittle index policy for RMABs can be viewed as a threshold policy in Section 4 and develop a DeepTOP extension for learning it in Section 5. We show DeepTOP’s performance results for MDPs and RMABs in Section 6, and give related works in Section 7 before concluding.
2 Threshold Policies for MDPs
Consider an agent controlling a stochastic environment E described as an MDP E = (S, A, R, P, γ), with state space S, binary action space A := {0, 1}, reward function R : S × A → Ω, transition dynamics P : S × A × S → ℝ, and discount factor γ ∈ [0, 1), where ℝ is the set of real numbers and Ω is the set of random variables. At each timestep t, the agent picks an action at ∈ A for the current state st. The state st ∈ S = ℝ × V has two components: a scalar state λt ∈ ℝ, and a vector state vt ∈ V, where V is a discrete set of vectors. We assume the environment state is fully observable. Given the state-action pair (st, at), the MDP generates a reward rt following the unknown random variable R(st, at), and a random next state st+1 = (λt+1, vt+1) following the unknown distribution P. We use r̄(λ, v, a) := E[R((λ, v), a)] to denote the unknown expected one-step reward that can be obtained for the state-action pair (λ, v, a).
A threshold policy is one that defines a threshold function µ : V → mapping each vector state to a real number. The policy then deterministically picks at = 1(µ(vt) > λt), where 1(·) is the indicator function. There are many applications where it is natural to consider threshold policies and we discuss some of them below. Example 1. Consider the problem of charging electric vehicles (EV). When an EV arrives at a charging station, it specifies its demands for charge and a deadline upon which it will leave the station. The electricity price changes over time following some random process. The goal of the operator is to fulfill the EV’s requirement with minimum cost. In this problem, we can model the system by letting the scalar state λt be the current electricity price and the vector state vt be the remaining charge and time to deadline of the EV. For this problem, it is natural to consider a threshold policy that defines a threshold µ(vt) as the highest price the operator is willing to pay to charge the EV under vector state vt. The operator only charges the vehicle, i.e., chooses at = 1, if λt < µ(vt). Example 2. Consider the problem of warehouse management. A warehouse stores goods waiting to be sold. When the number of stored goods exceeds the demand, then there is a holding cost for each unsold good. On the other hand, if the number of stored goods is insufficient to fulfill the demand, then there is a cost of lost sales. The goal of the manager is to decide when to place orders so as to minimize the total cost. In this problem we can let the scalar state λt be the current inventory and let the vector state vt be the vector of all factors, such as upcoming holidays, that can influence future demands. It is natural to consider a threshold policy where the manager only places a new order if the current inventory λt falls below a threshold µ(vt) based on the current vector state vt.
Example 3. Consider a smart home server that controls the air conditioner. Let λt be −(current temperature) and vt be the time of the day and the number of people in the house. The server should turn on the air conditioner only if the temperature exceeds some threshold determined by vt, or, equivalently, λt < µ(vt).
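To make the action rule at = 1(µ(vt) > λt) concrete, here is a minimal Python sketch of a threshold policy; the `threshold_fn` callable is a hypothetical stand-in for whatever mapping from vector state to threshold one chooses (for instance, the parametrized function introduced below).

```python
import numpy as np

def threshold_policy_action(threshold_fn, scalar_state, vector_state):
    """Pick the binary action a_t = 1(mu(v_t) > lambda_t) of a threshold policy.

    threshold_fn: any callable mapping a vector state to a real threshold
                  (e.g., a small neural network); hypothetical placeholder.
    scalar_state: the scalar component lambda_t (price, inventory, ...).
    vector_state: the remaining components v_t of the system state.
    """
    mu = float(threshold_fn(vector_state))
    return int(mu > scalar_state)

# Toy usage: charge an EV only while the price is below a fixed threshold of 0.5.
price = 0.42
ev_state = np.array([3.0, 5.0])   # e.g., remaining charge and time to deadline
action = threshold_policy_action(lambda v: 0.5, price, ev_state)
print(action)                      # 1: the price 0.42 is below the threshold 0.5
```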
Given a threshold policy with threshold function µ(·), we can define the corresponding action-value function by Qµ(λ, v, a). Let ρµ(λ′, v′, λ, v) be the discounted state distribution when the initial state is (λ, v) under the threshold policy to a visited state (λ′, v′). When the initial state is (λ, v), the expected discounted reward under the policy is
$$Q^{\mu}\big(\lambda, v, \mathbf{1}(\mu(v) > \lambda)\big) = \sum_{v' \in \mathcal{V}} \int_{\lambda'=-M}^{\lambda'=+M} \rho^{\mu}(\lambda', v', \lambda, v)\, \bar{r}\big(\lambda', v', \mathbf{1}(\mu(v') > \lambda')\big)\, d\lambda'. \tag{1}$$
Let M be a sufficiently large constant such that λt ∈ [−M,+M] for all t. Our goal is to learn the optimal threshold function µφ(v) parametrized by a vector φ that maximizes the objective function
$$K(\mu_\phi) := \int_{\lambda=-M}^{\lambda=+M} \sum_{v \in \mathcal{V}} Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda. \tag{2}$$
3 Deep Threshold Optimal Policy for MDPs
In this section, we present a deep threshold optimal policy (DeepTOP) for MDPs that finds the optimal φ for maximizing K(µφ).
3.1 Threshold Policy Gradient Theorem for MDPs
In order to design DeepTOP, we first study the gradient ∇φK(µφ). At first glance, computing ∇φK(µφ) looks intractable since it involves an integral over λ ∈ [−M,+M]. However, we establish the following threshold policy gradient theorem that shows the surprising result that ∇φK(µφ) has a simple expression.
Theorem 1. Given the parameter vector φ, let ρ̄(λ, v) be the discounted state distribution when the initial state is chosen uniformly at random under the threshold policy. If all vector states v ∈ V have distinct values of µφ(v), then,
$$\nabla_\phi K(\mu_\phi) = 2M|\mathcal{V}| \sum_{v \in \mathcal{V}} \bar{\rho}(\mu_\phi(v), v)\Big( Q^{\mu_\phi}\big(\mu_\phi(v), v, 1\big) - Q^{\mu_\phi}\big(\mu_\phi(v), v, 0\big) \Big)\nabla_\phi \mu_\phi(v). \tag{3}$$

Proof. Let ρ̄t(λ, v) be the distribution that the state at time t is (λ, v) when the initial state is chosen uniformly at random. Clearly, we have $\bar{\rho}(\lambda, v) = \sum_{t=1}^{\infty} \gamma^{t-1} \bar{\rho}_t(\lambda, v)$. Given φ, we number all states in $\mathcal{V}$ such that $\mu_\phi(v_1) > \mu_\phi(v_2) > \dots$. Let $M_0 = +M$, $M_n = \mu_\phi(v_n)$ for all $1 \le n \le |\mathcal{V}|$, and $M_{|\mathcal{V}|+1} = -M$. Also, let $\mathcal{V}_n$ be the subset of states $\{v \mid \mu_\phi(v) > M_n\} = \{v_1, v_2, \dots, v_{n-1}\}$. Now, consider the interval $(M_{n+1}, M_n)$ for some n. Notice that, for all $\lambda \in (M_{n+1}, M_n)$, $\mathbf{1}(\mu_\phi(v) > \lambda) = 1$ if and only if $v \in \mathcal{V}_{n+1}$. In other words, for any vector state v, the threshold policy would take the same action under all $\lambda \in (M_{n+1}, M_n)$, and we use $\pi_{n+1}(v)$ to denote this action. We then have
$$\nabla_\phi K(\mu_\phi) = \nabla_\phi \int_{\lambda=-M}^{\lambda=+M} \sum_{v \in \mathcal{V}} Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda = \sum_{v \in \mathcal{V}} \nabla_\phi \int_{\lambda=-M}^{\lambda=+M} Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda$$

$$= \sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \nabla_\phi \int_{\lambda=M_{n+1}}^{\lambda=M_n} Q^{\mu_\phi}\big(\lambda, v, \pi_{n+1}(v)\big)\, d\lambda$$

$$= \sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \Big( Q^{\mu_\phi}\big(M_n, v, \pi_{n+1}(v)\big)\nabla_\phi M_n - Q^{\mu_\phi}\big(M_{n+1}, v, \pi_{n+1}(v)\big)\nabla_\phi M_{n+1} + \int_{\lambda=M_{n+1}}^{\lambda=M_n} \nabla_\phi Q^{\mu_\phi}\big(\lambda, v, \pi_{n+1}(v)\big)\, d\lambda \Big), \tag{4}$$

where the summation-integration swap in the first equation follows the Fubini-Tonelli theorem and the last step follows the Leibniz integral rule. We simplify the first two terms in the last step by

$$\sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \Big( Q^{\mu_\phi}\big(M_n, v, \pi_{n+1}(v)\big)\nabla_\phi M_n - Q^{\mu_\phi}\big(M_{n+1}, v, \pi_{n+1}(v)\big)\nabla_\phi M_{n+1} \Big)$$

$$= \sum_{v \in \mathcal{V}} \sum_{n=1}^{|\mathcal{V}|} \Big( Q^{\mu_\phi}\big(\mu_\phi(v_n), v, \mathbf{1}(v \in \mathcal{V}_{n+1})\big) - Q^{\mu_\phi}\big(\mu_\phi(v_n), v, \mathbf{1}(v \in \mathcal{V}_n)\big) \Big)\nabla_\phi \mu_\phi(v_n)$$

$$= 2M|\mathcal{V}| \sum_{v \in \mathcal{V}} \bar{\rho}_1(\mu_\phi(v), v)\Big( Q^{\mu_\phi}\big(\mu_\phi(v), v, 1\big) - Q^{\mu_\phi}\big(\mu_\phi(v), v, 0\big) \Big)\nabla_\phi \mu_\phi(v). \tag{5}$$

Next, we expand the last term in (4). Note that $Q^{\mu_\phi}(\lambda, v, a) = \bar{r}(\lambda, v, a) + \gamma \int_{\lambda'=-M}^{\lambda'=+M} \sum_{v'} p(\lambda', v' \mid \lambda, v, a)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big)\, d\lambda'$, where $p(\cdot \mid \cdot)$ is the transition probability. Hence, $\nabla_\phi Q^{\mu_\phi}(\lambda, v, a) = \gamma \nabla_\phi \int_{\lambda'=-M}^{\lambda'=+M} \sum_{v'} p(\lambda', v' \mid \lambda, v, a)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big)\, d\lambda'$. Using the same techniques in (4) and (5), we have

$$\sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \int_{\lambda=M_{n+1}}^{\lambda=M_n} \nabla_\phi Q^{\mu_\phi}\big(\lambda, v, \pi_{n+1}(v)\big)\, d\lambda = \sum_{v \in \mathcal{V}} \int_{\lambda=-M}^{\lambda=+M} \nabla_\phi Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda$$

$$= \gamma \sum_{v \in \mathcal{V}} \int_{\lambda=-M}^{\lambda=+M} \Big( \nabla_\phi \int_{\lambda'=-M}^{\lambda'=+M} \sum_{v' \in \mathcal{V}} p\big(\lambda', v' \mid \lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big)\, d\lambda' \Big)\, d\lambda$$

$$= 2M|\mathcal{V}| \sum_{v \in \mathcal{V}} \gamma \bar{\rho}_2(\mu_\phi(v), v)\Big( Q^{\mu_\phi}\big(\mu_\phi(v), v, 1\big) - Q^{\mu_\phi}\big(\mu_\phi(v), v, 0\big) \Big)\nabla_\phi \mu_\phi(v) + \gamma \sum_{v \in \mathcal{V}} \int_{\lambda=-M}^{\lambda=+M} \Big( \sum_{v' \in \mathcal{V}} \int_{\lambda'=-M}^{\lambda'=+M} \nabla_\phi \Big( p\big(\lambda', v' \mid \lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big) \Big)\, d\lambda' \Big)\, d\lambda.$$

In the above equation, expanding the last term in time establishes (3).
3.2 DeepTOP Algorithm Design for MDPs
Motivated by Theorem 1, we now present DeepTOP-MDP, a model-free, actor-critic Deep RL algorithm. DeepTOP-MDP maintains an actor network with parameters φ that learns a threshold function µφ(v), and a critic network with parameters θ that learns an action-value function Qθ(λ, v, a). DeepTOP-MDP also maintains a target critic network with parameters θ′ that is updated slower than the critic parameters θ. The purpose of the target critic network is to improve the learning stability as demonstrated in [8, 19]. The objective of the critic network is to find θ that minimizes the loss function
$$L(\theta) := \mathbb{E}_{s_t, a_t, r_t, s_{t+1}}\Big[\Big( Q^\theta(\lambda_t, v_t, a_t) - r_t - \gamma \max_{a' \in \mathcal{A}} Q^{\theta'}\big(\lambda_{t+1}, v_{t+1}, a'\big) \Big)^2\Big], \tag{6}$$
where (st, at, rt, st+1) is sampled under some policy with st = (λt, vt). The objective of the actor network is to find φ that maximizes $\int_{\lambda=-M}^{\lambda=+M} \sum_{v \in \mathcal{V}} Q^{\theta}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda$. In each timestep t, the environment E provides a state st to the agent. We set an exploration parameter εt ∈ [0, 1) and take a random action with probability εt. Otherwise, DeepTOP-MDP calculates µφ(vt) based on vt, and chooses at = 1(µφ(vt) > λt). E generates a reward rt and a next state st+1. A replay memory, denoted by M, then stores the transition {st, at, rt, st+1}. After filling the memory with at least B transitions, DeepTOP-MDP updates the parameters φ, θ, θ′ in every timestep using a sampled minibatch of size B of transitions {stk, atk, rtk, stk+1}, for 1 ≤ k ≤ B. The critic network uses the sampled transitions to calculate the estimated gradient of L(θ):

$$\hat{\nabla}_\theta L(\theta) := \frac{2}{B} \sum_{k=1}^{B} \Big( Q^\theta(\lambda_{t_k}, v_{t_k}, a_{t_k}) - r_{t_k} - \gamma \max_{a' \in \mathcal{A}} Q^{\theta'}(\lambda_{t_k+1}, v_{t_k+1}, a') \Big)\nabla_\theta Q^\theta(\lambda_{t_k}, v_{t_k}, a_{t_k}). \tag{7}$$

Similarly, the actor network uses the sampled transitions and Equation (3) to calculate the estimated gradient:

$$\hat{\nabla}_\phi K(\mu_\phi) := \frac{1}{B} \sum_{k=1}^{B} \Big( Q^\theta\big(\mu_\phi(v_{t_k}), v_{t_k}, 1\big) - Q^\theta\big(\mu_\phi(v_{t_k}), v_{t_k}, 0\big) \Big)\nabla_\phi \mu_\phi(v_{t_k}). \tag{8}$$
Algorithm 1 Deep Threshold Optimal Policy Training for MDPs (DeepTOP-MDP)
Randomly select initial actor network parameters φ and critic network parameters θ.
Set target critic network parameters θ′ ← θ, and initialize replay memory M.
for timestep t = 1, 2, 3, . . . do
    Receive state st = (λt, vt) from environment E.
    Select action at = 1(µφ(vt) > λt) with probability 1 − εt. Otherwise, select action at randomly.
    Execute action at, and observe reward rt and next state st+1 from E.
    Store transition {st, at, rt, st+1} into M.
    Sample a minibatch of B transitions {stk, atk, rtk, stk+1}, for 1 ≤ k ≤ B, from M.
    Update critic network parameters θ using the estimated gradient from Equation (7).
    Update actor network parameters φ using the estimated gradient from Equation (8).
    Soft update target critic parameters θ′: θ′ ← τθ + (1 − τ)θ′.
end for
Both the critic network and the actor network then take a gradient update step. Finally, we soft update the target critic’s parameters θ′ using θ′ ← τθ + (1 − τ)θ′, with τ < 1. The complete pseudocode is given in Algorithm 1.
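To illustrate how Equations (7) and (8) and the soft update of Algorithm 1 might look in code, the following is a rough PyTorch sketch of a single DeepTOP-MDP update step. The network architectures, tensor shapes, and hyper-parameter values here are illustrative assumptions, not the authors' released implementation; note that the advantage in the actor step is detached so that the resulting gradient is exactly the coefficient-times-∇φµφ(v) form of Equation (8).

```python
import torch
import torch.nn as nn

class Actor(nn.Module):            # v -> mu_phi(v), a scalar threshold (toy architecture)
    def __init__(self, v_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(v_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, v):
        return self.net(v).squeeze(-1)

class Critic(nn.Module):           # (lambda, v, a) -> Q_theta(lambda, v, a) (toy architecture)
    def __init__(self, v_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(v_dim + 2, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, lam, v, a):
        x = torch.cat([lam.unsqueeze(-1), v, a.unsqueeze(-1)], dim=-1)
        return self.net(x).squeeze(-1)

def deeptop_mdp_update(actor, critic, target_critic, actor_opt, critic_opt,
                       batch, gamma=0.99, tau=0.005):
    lam, v, a, r, lam_next, v_next = batch   # minibatch tensors, each of shape (B, ...)

    # Critic step: TD error of Equation (6); its minibatch gradient matches Equation (7).
    with torch.no_grad():
        q_next = torch.stack([target_critic(lam_next, v_next, torch.full_like(a, act))
                              for act in (0.0, 1.0)], dim=-1).max(dim=-1).values
        target = r + gamma * q_next
    critic_loss = ((critic(lam, v, a) - target) ** 2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor step: ascend Equation (8). The advantage is detached, so the gradient of
    # actor_loss w.r.t. phi is -(1/B) * sum_k adv_k * grad mu_phi(v_k).
    mu = actor(v)
    with torch.no_grad():
        adv = critic(mu, v, torch.ones_like(a)) - critic(mu, v, torch.zeros_like(a))
    actor_loss = -(adv * mu).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft update of the target critic.
    with torch.no_grad():
        for p, p_targ in zip(critic.parameters(), target_critic.parameters()):
            p_targ.mul_(1 - tau).add_(tau * p)
```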
4 Whittle Index Policy for RMABs
In this section, we demonstrate how the Whittle index policy [32], a powerful tool for solving the notoriously intractable Restless Multi-Armed Bandit (RMAB) problem, can be represented with a set of threshold functions. We first describe the RMAB control problem, and then define the Whittle index function.
An RMAB problem consists of N arms. The environment of an arm i, denoted as Ei, is an MDP with a discrete state space si,t ∈ Si, and a binary action space ai,t ∈ A := {0, 1}, where ai,t = 1 means that arm i is activated, and ai,t = 0 means that arm i is left passive at time t. Given the state-action pair (si,t, ai,t), Ei generates a random reward ri,t and a random next state si,t+1 following some unknown probability distributions based on (si,t, ai,t). Here we also use r̄i(si, ai) to denote the unknown expected one-step reward that can be obtained for the state-action pair (si, ai).
A control policy over all arms takes the states (s1,t, s2,t, . . . , sN,t) as input, and activates V out of N arms in every timestep. Solving for the optimal control policy for RMABs was proven to be intractable [21], since the agent must optimize over an input state space exponential in N. To circumvent the dimensionality challenge, the Whittle index policy assigns real values to an arm’s states using a Whittle index function for each arm Wi : Si → . Based on the assigned Whittle indices ( W1(s1,t),W2(s2,t), . . . ,WN(sN,t) ) , the Whittle index policy activates the V highest-valued arms out of N arms in timestep t, and picks the passive action for the remaining arms.
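Acting with a given set of index functions is then just a top-V selection; a minimal sketch (purely illustrative) is:

```python
import numpy as np

def whittle_index_action(index_values, V):
    """Activate the V arms with the largest Whittle indices.

    index_values: array of W_i(s_{i,t}) for i = 1..N at the current timestep.
    Returns a 0/1 action vector with exactly V ones.
    """
    actions = np.zeros(len(index_values), dtype=int)
    actions[np.argsort(index_values)[-V:]] = 1
    return actions

print(whittle_index_action(np.array([0.3, -0.1, 0.9, 0.5]), V=2))  # [0 0 1 1]
```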
4.1 The Whittle Index Function as The Optimal Threshold Function
To define the Whittle index and relate it to threshold functions, let us first consider an alternative control problem of a single arm i as environment Ei with activation cost λ. In this problem, the agent follows a control policy that determines whether the arm is activated or not based on its current state si,t. If the policy activates the arm, then the agent must pay an activation cost of λ. Hence, the agent’s net reward at timestep t is defined as ri,t − λai,t. We now consider applying threshold policies for this alternative control problem. A threshold policy defines a threshold function µi : Si → that maps each state to a real value. It then activates the arm if and only if µi(si,t) > λ, i.e., ai,t = 1(µi(si) > λ). The value of µi(si,t) can therefore be viewed as the largest activation cost that the agent is willing to pay to activate the arm under state si,t. To characterize the performance of a threshold policy with a threshold function µi(·), we let ρµi,λ(s′i , si) be the discounted state distribution, which is the average discounted number of visits of state s′i when the initial state is si under the threshold policy and λ. When the initial state is si, the expected discounted net reward under the threshold policy is
$$Q_{i,\lambda}\big(s_i, \mathbf{1}(\mu_i(s_i) > \lambda)\big) = \sum_{s_i' \in \mathcal{S}_i} \rho_{\mu_i,\lambda}(s_i', s_i)\Big( \bar{r}_i\big(s_i', \mathbf{1}(\mu_i(s_i') > \lambda)\big) - \lambda\, \mathbf{1}(\mu_i(s_i') > \lambda) \Big). \tag{9}$$
The performance of the threshold policy under a given λ is defined as Ji,λ(µi) :=∑ si∈Si Qi,λ ( si,1(µi(si) > λ) ) . The Whittle index of this arm is defined as the function µi(·) whose corresponding threshold policy maximizes Ji,λ(µi) for all λ: Definition 1. (Whittle Index) If there exists a function µi : Si → such that choosing 1(µi(si) > λ) maximizes Ji,λ(µi) for all λ ∈ (−∞,+∞), then we say that µi(si) is the Whittle index Wi(si) 1.
We note that, for some arms, there does not exist any function µi(si) that satisfies the condition in Definition 1. For such arms, the Whittle index does not exist. We say that an arm is indexable if it has a well-defined Whittle index function. Definition 1 shows that finding the Whittle index is equivalent to finding the optimal µi(·) that maximizes Ji,λ(µi) for all λ ∈ (−∞,+∞). Parameterizing a threshold function µφii (·) by parameters φi and letting M be a sufficiently large number such that µ φi i (si) ∈ (−M,+M) for all si and φi, we aim to find the optimal φi for maximizing the objective function
$$K_i(\mu^{\phi_i}_i) := \int_{\lambda=-M}^{\lambda=+M} \sum_{s_i \in \mathcal{S}_i} Q_{i,\lambda}\big(s_i, \mathbf{1}(\mu^{\phi_i}_i(s_i) > \lambda)\big)\, d\lambda. \tag{10}$$
5 Deep Threshold Optimal Policy for RMABs
To design a DeepTOP variant for RMABs, we first give the gradient of the objective function. Theorem 2. Given the parameter vector φi, let ρ̄λ(si) be the discounted state distribution when the initial state is chosen uniformly at random and the activation cost is λ. If all states si ∈ Si have distinct values of µφii (si), then,
$$\nabla_{\phi_i} K_i(\mu^{\phi_i}_i) = |\mathcal{S}_i| \sum_{s_i \in \mathcal{S}_i} \bar{\rho}_{\mu^{\phi_i}_i(s_i)}(s_i)\Big( Q_{i,\mu^{\phi_i}_i(s_i)}\big(s_i, 1\big) - Q_{i,\mu^{\phi_i}_i(s_i)}\big(s_i, 0\big) \Big)\nabla_{\phi_i} \mu^{\phi_i}_i(s_i). \tag{11}$$

Proof. The proof is similar to that of Theorem 1. For completeness, we provide it in Appendix A.
We note that Theorem 2 does not require the arm to be indexable. Whether an arm is indexable or not, using Theorem 2 along with a gradient ascent algorithm will find a locally-optimal φi that maximizes Ki(µ φi i ). When the arm is indexable, the resulting threshold function µ φi i is the Whittle index function. Using the gradient result from Equation (11), we present the algorithm DeepTOP-RMAB for finding the optimal parametrized threshold functions µφii for arms i = 1, 2, . . . ,N. The training method is similar to the MDP version, except for two important differences. First, the training of each arm is done independently from others. Second, the value of λ is an artificial value that only exists in the alternative problem but not in the original RMAB problem. Similar to DeepTOP-MDP, we maintain three network parameters for each arm i: actor φi, critic θi, and target-critic θ′i . The critic network parametrizes the action-value function, and is optimized by minimizing the loss function
$$L_i(\theta_i) := \int_{\lambda=-M}^{\lambda=+M} \mathbb{E}_{s_{i,t}, a_{i,t}, r_{i,t}, s_{i,t+1}}\Big[\Big( Q^{\theta_i}_{i,\lambda}(s_{i,t}, a_{i,t}) - r_{i,t} - \gamma \max_{a' \in \mathcal{A}} Q^{\theta_i'}_{i,\lambda}(s_{i,t+1}, a') \Big)^2\Big]\, d\lambda, \tag{12}$$
with (si,t, ai,t, ri,t, si,t+1) sampled under some policy. In each timestep t, each arm environment Ei provides its current state si,t to the agent. For each arm i = 1, 2, . . . ,N, DeepTOP-RMAB calculates the state value µφii (si,t) with the arm’s respective actor network parameters φi. Given an exploration parameter t ∈ [0, 1), DeepTOP-RMAB activates the V arms with the largest µφii (si,t) with probability 1− t, and activates V randomly selected arms with probability t. Based on the executed actions, each arm provides a reward ri,t and the next state si,t+1. An arm’s transition {si,t, ai,t, ri,t, si,t+1} is then stored in the arm’s memory denoted byMi. After filling each arm’s memory with at least B transitions, DeepTOP-RMAB updates φi, θi, and θ′i in every timestep. For each arm i, DeepTOP-RMAB first samples a minibatch of size B of transitions {si,tk , ai,tk , ri,tk , si,tk+1}, for 1 ≤ k ≤ B from the memoryMi. It then randomly samples B values [λi,1, λi,2, . . . , λi,B] from the range [−M,+M]. Using the sampled transitions and λ values, it estimates the gradient of Li(θi) as
$$\hat{\nabla}_{\theta_i} L_i(\theta_i) := \frac{2}{B} \sum_{k=1}^{B} \Big( Q^{\theta_i}_{i,\lambda_k}(s_{i,t_k}, a_{i,t_k}) - r_{i,t_k} - \gamma \max_{a' \in \mathcal{A}} Q^{\theta_i'}_{i,\lambda_k}(s_{i,t_k+1}, a') \Big)\nabla_{\theta_i} Q^{\theta_i}_{i,\lambda_k}(s_{i,t_k}, a_{i,t_k}). \tag{13}$$
1To simplify notations, we use a necessary and sufficient condition for the Whittle index as its definition. We refer interested readers to [9] for more thorough discussions on the Whittle index.
Using the sampled transitions and Equation (11), it estimates the gradient of $K_i(\mu^{\phi_i}_i)$ as

$$\hat{\nabla}_{\phi_i} K_i(\mu^{\phi_i}_i) := \frac{1}{B} \sum_{k=1}^{B} \Big( Q^{\theta_i}_{i,\mu^{\phi_i}_i(s_{i,t_k})}\big(s_{i,t_k}, 1\big) - Q^{\theta_i}_{i,\mu^{\phi_i}_i(s_{i,t_k})}\big(s_{i,t_k}, 0\big) \Big)\nabla_{\phi_i} \mu^{\phi_i}_i(s_{i,t_k}). \tag{14}$$
A gradient update step is taken after calculating the actor and critic networks' gradients. Finally, DeepTOP-RMAB soft updates the target critic parameters θ′i using θ′i ← τθi + (1 − τ)θ′i, with τ < 1. The complete DeepTOP-RMAB pseudocode is given in Appendix B.
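For concreteness, the following is a rough sketch of one DeepTOP-RMAB gradient step for a single arm, mirroring Equations (13) and (14). The critic here is assumed to take the sampled activation cost λ as an extra input, and all function signatures and shapes are illustrative assumptions rather than the exact implementation from Appendix B.

```python
import torch

def deeptop_rmab_update(actor, critic, target_critic, actor_opt, critic_opt,
                        batch, M, gamma=0.99):
    """One gradient step for a single arm. `actor(s)` returns mu_i(s) and
    `critic(s, lam, a)` returns Q_{i,lam}(s, a); these signatures are assumptions."""
    s, a, r, s_next = batch                  # minibatch tensors, each of shape (B, ...)
    B = r.shape[0]

    # Sample B artificial activation costs lambda uniformly from [-M, +M].
    lam = (2 * torch.rand(B) - 1) * M

    # Critic step: TD target per Eq. (13); the sampled cost lam enters through the
    # critic's lambda input.
    with torch.no_grad():
        q_next = torch.stack([target_critic(s_next, lam, torch.full_like(a, act))
                              for act in (0.0, 1.0)], dim=-1).max(dim=-1).values
        target = r + gamma * q_next
    critic_loss = ((critic(s, lam, a) - target) ** 2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor step: Eq. (14) -- the advantage at lam = mu_i(s) is a fixed coefficient.
    mu = actor(s)
    with torch.no_grad():
        adv = critic(s, mu, torch.ones_like(a)) - critic(s, mu, torch.zeros_like(a))
    actor_loss = -(adv * mu).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```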
6 Simulations
We have implemented and tested both DeepTOP-MDP and DeepTOP-RMAB in a variety of settings. The training procedure of the two DeepTOP algorithms are similar to that of the DDPG [19] algorithm except for the expression of gradients. We implemented the DeepTOP algorithms by modifying an open-source implementation of DDPG [12]. All source code can be found in the repository https://github.com/khalednakhleh/deeptop.
6.1 Simulations for MDPs
We evaluate three MDPs, namely, the electric vehicle charging problem, the inventory management problem, and the make-to-stock problem.
EV charging problem. This problem is based on Yu, Xu, and Tong [34]. It considers a charging station serving EVs. When an EV arrives at the station, it specifies the amount of charges it needs and a deadline upon which it will leave the station. The electricity price changes over time and we model it by an Ornstein-Uhlenbeck process [30]. In each timestep, the station decides whether to charge the EV or not. If it decides to charge the EV, then it provides one unit charge to the EV. The station then obtains a unit reward and pays the current electricity price. If the station fails to fully charge the EV by the deadline of the EV, then the station suffers from a penalty that is a convex function of the remaining needed charge. A new EV arrives at the station when the previous EV leaves. We model this problem by letting the scalar state be the current electricity price and the vector state be the remaining needed charge and time-to-deadline of the current EV. A threshold policy is one that calculates a threshold based on the EV’s remaining needed charge and time-to-deadline, and then decides to charge the EV if and only if the current electricity price is below the threshold.
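As a concrete illustration of this setup, below is a toy environment sketch of the EV charging problem; the Ornstein-Uhlenbeck parameters, the penalty coefficient, and the arrival distribution are placeholder assumptions and not the values used in the paper's experiments.

```python
import numpy as np

class ToyEVChargingEnv:
    """Minimal EV charging environment: scalar state = price, vector state = (charge, deadline)."""
    def __init__(self, max_charge=8, max_deadline=12, seed=0):
        self.rng = np.random.default_rng(seed)
        self.max_charge, self.max_deadline = max_charge, max_deadline
        self.price = 0.5
        self._new_ev()

    def _new_ev(self):
        self.charge = int(self.rng.integers(1, self.max_charge + 1))
        self.deadline = int(self.rng.integers(self.charge, self.max_deadline + 1))

    def step(self, action):
        reward = 0.0
        if action == 1 and self.charge > 0:    # provide one unit of charge
            reward += 1.0 - self.price          # unit reward minus the current price
            self.charge -= 1
        self.deadline -= 1
        if self.deadline == 0:                   # EV leaves; convex penalty on unmet charge
            reward -= 0.2 * self.charge ** 2
            self._new_ev()
        # Ornstein-Uhlenbeck price fluctuation (placeholder parameters).
        self.price += 0.3 * (0.5 - self.price) + 0.1 * self.rng.normal()
        return (self.price, np.array([self.charge, self.deadline])), reward
```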
Inventory management problem. We construct an inventory management problem by jointly incorporating a variety of practical challenges, including seasonal fluctuations in demands and lead times in orders, in the literature [28, 15, 10, 27]. We consider a warehouse holding goods. In each timestep, there is a random amount of demand whose mean depends on the time of the year. The warehouse can fulfill the demand as long as it has sufficient inventory, and it makes a profit for each unit of sold goods. At the end of the timestep, the warehouse incurs a unit holding cost for each unit of unsold goods. The warehouse manager needs to decide whether to order more goods. When it places an order for goods, there is a lead time of one time step, that is, the goods ordered at timestep t are only available for sale at timestep t + 1. We model this problem by letting the scalar state be the current inventory and the vector state be the time of the year. A threshold policy calculates a threshold based on the time of the year and decides to place an order for goods if the current inventory is below the threshold.
Make-to-stock production problem. This problem is considered in [26]. It studies a system that produces m items with W demand classes and buffer size S . Accepting a class v order leads to a reward Rv, as long as there is still room in the buffer for the order. The classes of demands are ordered such that R1 > R2 > . . . . In this problem, the scalar state is the number of accepted but unfinished orders and the vector state is the class of the next arriving order. More details about the three MDPs can be found in Appendix C.
Evaluated policies. We compare DeepTOP-MDP against DDPG [19] and TD3 [8], two state-of-the-art off-policy and model-free deep RL algorithms. We use open-source implementations of these two algorithms from [12, 7]. We use the same hyper-parameters, including the neural network architecture, learning rates, etc., for all three algorithms. We also evaluate the Structure-Aware Learning for Multiple Thresholds algorithm (SALMUT) [26], a reinforcement learning algorithm
that finds the optimal threshold policy. SALMUT requires the vector states to be pre-sorted by their threshold values. Hence, SALMUT can only be applied to the make-to-stock production problem. Details about the training parameters can be found in Appendix D. For the EV charging problem, Yu, Xu, and Tong [34] has found the optimal threshold policy. We call the optimal threshold policy the Deadline Index policy and compare DeepTOP-MDP against it.
Simulations results. Simulation results of the three MDPs are shown in Figure 1. The results are the average of 20 independent runs. Before starting a run, we fill an agent’s memory with 1000 transitions by randomly selecting actions. We plot the average reward obtained from the previous 100 timesteps, and average them over 20 runs. In addition, we provide the standard deviation bounds from the average reward.
It can be observed that DeepTOP significantly outperforms DDPG, TD3, and SALMUT. Although the training procedure of DeepTOP is similar to that of DDPG, DeepTOP is able to achieve much faster learning by leveraging the monotone property. Without leveraging the monotone property, DDPG and TD3 need to learn the optimal policy for each scalar state independently, and therefore have much worse performance. DeepTOP performs better than SALMUT because DeepTOP directly employs the threshold policy gradient. SALMUT in contrast approximates threshold policies through randomized policies since it can only handle continuous and differentiable functions. We believe this might be the reason why DeepTOP outperforms SALMUT. We also note that DeepTOP performs virtually the same as the Deadline Index policy for the EV charging problem in about 2000 timesteps, suggesting that DeepTOP indeed finds the optimal threshold policy quickly. We also evaluate DeepTOP for different neural network architectures in Appendix E, and show that DeepTOP performs the best in all settings.
6.2 Simulations for RMABs
We evaluate two RMABs, namely, the one-dimensional bandits from [17] and the recovering bandits from [20].
One-dimensional bandits. We consider an extension of the RMAB problem evaluated in Killian et al. [17]. Killian et al. [17] considers the case when each arm is a two-state Markov process. We extend it so that each arm is a Markov process with 100 states, numbered as 0, 1, . . . , 99, as shown in Figure 2 where state 99 is the optimal state.
The reward of an arm depends on the distance between its current state and state 99. Suppose the current state of arm i is $s_{i,t}$; then it generates a reward $r_{i,t} = 1 - \left(\frac{s_{i,t}-99}{99}\right)^2$. If the arm is activated, then it changes to state $s_{i,t+1} = \min\{s_{i,t}+1, 99\}$ with probability $p_i$. If the arm is not activated, then it changes to state $s_{i,t+1} = \max\{s_{i,t}-1, 0\}$ with probability $q_i$. In the simulations, we pick the probabilities $p_i$ to be evenly spaced, depending on the number of arms N, from the interval [0.2, 0.8]. We set the probabilities $q_i = p_i$. We consider that there are N arms and that the agent needs to activate V arms in each timestep. We evaluate three settings of (N, V) = (10, 3), (20, 5), and (30, 6).
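The arm dynamics above are simple enough to transcribe directly; the following toy snippet (illustrative only) implements the reward and transition rule of a single one-dimensional arm:

```python
import numpy as np

def step_one_dimensional_arm(state, activate, p, q, rng):
    """One transition of a single 100-state arm: reward 1 - ((s - 99)/99)^2,
    move up w.p. p if activated, otherwise move down w.p. q."""
    reward = 1.0 - ((state - 99) / 99) ** 2
    if activate:
        next_state = min(state + 1, 99) if rng.random() < p else state
    else:
        next_state = max(state - 1, 0) if rng.random() < q else state
    return reward, next_state

rng = np.random.default_rng(0)
print(step_one_dimensional_arm(state=50, activate=1, p=0.6, q=0.6, rng=rng))
```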
Recovering bandits. First introduced in [25], we consider the case that studies the varying behavior of consumers over time. A consumer’s interest in a particular product falls if the consumer clicks on its advertisement link. However their interest in the product would recover with time. The recovering bandit is modelled as an RMAB with each arm being the advertisement link. The reward of playing an arm is given by a function f ( min(z, zmax) ) , with z being the time since the arm was last played.
In our experiments, we consider arms with different reward functions, with the arm’s state being the value min{z, zmax} and zmax = 100. We also evaluate recovering bandits on three settings of (N,V) = (10, 3), (20, 5), and (30, 6). More details can be found in Appendix F.
Evaluated policies. We compare DeepTOP-RMAB against three recent studies that aim to learn index policies for RMABs, namely, Lagrange policy Q learning (LPQL) [17], Whittle index based Q learning (WIBQL) [1], and neural Whittle index network (NeurWIN) [20]. LPQL consists of three steps: First, it learns a Q function for each arm independently. Second, it uses the Q functions of all arms to determine a common Lagrangian. Third, it uses the Lagrangian to calculate the index of each arm. WIBQL is a two-timescale algorithm that learns the Whittle indices of indexable arms by updating Q values on the fast timescale, and index values on the slower timescale. NeurWIN is an off-line training algorithm based on REINFORCE that requires a simulator to learn the Whittle index. Both LPQL and WIBQL are tabular learning methods which may perform poorly compared to deep RL algorithms when the size of the state space is large. Hence, we also design deep RL equivalent algorithms that approximate their Q functions using neural networks. We refer to the Deep RL extensions as neural LPQL and neural WIBQL. In all experiments, neural LPQL, neural WIBQL, and NeurWIN use the same hyper-parameters as DeepTOP-RMAB. For the one-dimensional bandits, it can be shown that the Whittle index is in the range of [−1, 1], and hence we set M = 1. For the recovering bandits, we set M = 10.
Simulation results. Simulation results are shown in Figures 3 and 4. It can be observed that DeepTOP achieves the optimal average rewards in all cases. The reason that neural LPQL performs worse than DeepTOP may lie in its reliance on a common Lagrangian. Since the common Lagrangian is calculated based on the Q functions of all arms, an inaccuracy in one arm’s Q function can result in an inaccurate Lagrangian, which, in turn, leads to inaccuracy in the index values of all arms. Prior work [17] has already shown that WIBQL performs worse than LPQL. Hence, it is not surprising that neural WIBQL performs worse than both neural LPQL and DeepTOP. NeurWIN performs worse than DeepTOP because it is based on REINFORCE and therefore can only apply updates at the end of each minibatch of episodes. We also evaluate DeepTOP for different neural network architectures and the results are shown in Appendix G for the one-dimensional bandits and Appendix H for the recovering bandits.
7 Related Work
Threshold policies have been analysed for many decision-making problems formed as MDPs. [11] examined the residential energy storage under price fluctuations problem, and proved the existence
of optimal threshold policies for minimizing the cost. [5] proved that MDPs with a convex and piecewise linear cost functions admit an optimal threshold policy. [24] shows the existence of an optimal threshold policy for energy arbitrage given degrading battery capacity, with [2] using the REINFORCE algorithm [33] to learn a trading policy with price thresholds for intraday electricity markets. [14] considered mean field games in a multi-agent MDP setting, and characterized individual agent strategy with a threshold policy when the mean game admits a threshold policy.
More recently, [31] studies finding a job assigning threshold policy for data centers with heterogeneous servers and job classes, and gave conditions for the existence of optimal threshold policies. [35] proposed a distributed threshold-based control policy for graph traversal by assigning a state threshold that determines if the agent stays in or leaves a state. For minimizing the age of information in energy-harvesting sensors, [4] used the finite-difference policy gradient [23] to learn a possibly sub-optimal threshold policy in the average cost setting. [13] proposed an RL-based threshold policy for semi-MDPs in controlling micro-climate for buildings with simulations proving efficacy on a single-zone building. [29] used the Deep Q-network RL algorithm for selecting alert thresholds in anti-fraud systems with simulations showing performance improvements over static threshold policies. [26] described the SALMUT RL algorithm for exploiting the ordered multi-threshold structure of the optimal policy with SALMUT implementations in [16] for computing node’s overload protection. In contrast to these works, DeepTOP-MDP is applicable to any MDP that admits threshold policies.
In learning the Whittle index policy for RMABs, [6] proposed a Q-learning heuristic called the Q Whittle Index Controller (QWIC) which may not find the Whittle indices even when the training converges. [20] describes a Deep RL algorithm called NeurWIN for learning the Whittle index of a restless arm independently of other arms. However, NeurWIN requires a simulator to train the neural networks. Some recent studies, such as [1, 3, 17], proposed various online learning algorithms that can find Whittle index when the algorithms converge. These algorithms rely on some indirect property of the Whittle index which explains why they converge slower than DeepTOP.
8 Conclusion and Future Work
In this paper, we presented DeepTOP: a Deep RL actor-critic algorithm that learns the optimal threshold function for MDPs that admit a threshold policy and for RMAB problems. We first developed the threshold policy gradient theorem, where we proved that a threshold function has a simple-to-compute gradient. Based on the gradient expressions, we designed the DeepTOP-MDP and DeepTOP-RMAB algorithm variants and compared them against state-of-the-art learning algorithms. In both the MDP and RMAB settings, experimental results showed that DeepTOP exceeds the performance of baselines in all considered problems. A promising future direction is to extend DeepTOP to threshold policies with multiple actions. For example, the Federal Reserve needs to decide not only whether to raise the interest rate, but also the amount of the rate hike.
Acknowledgments and Disclosure of Funding This material is based upon work supported in part by NSF under Award Number ECCS-2127721, in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Grant Number W911NF-22-1-0151, and in part by Office of Naval Research under Contract N00014-21-1-2385. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. | 1. What is the main contribution of the paper regarding the computation of an optimal threshold policy in MDPs and RMABs?
2. What are the strengths of the proposed approach, particularly in its simplicity and efficiency compared to other RL-based algorithms?
3. What are the weaknesses of the paper, especially regarding its novelty and limitations in applicability?
4. How does the reviewer assess the clarity, quality, and presentation of the paper?
5. Are there any minor comments or questions regarding the paper's content, such as a missing integral or the choice of reward function structure?
6. Can the Whittle index be directly computed using similar LP methods without relying on RL or neural networks? If not, what are the major difficulties in doing so? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper presents an algorithm to compute an optimal threshold policy in MDPs and RMABs with state information composed of a scalar state and a vector state. The authors propose to learn a mapping from the vector state to a scalar number to compare with the scalar state. This function is used to construct the threshold policy where the action (0 or 1) only depends on the comparison between the scalar state and the produced scalar value. In order to learn the mapping used for threshold policy, the authors use an actor-critic algorithm, where the scalar mapping and the associated threshold policy are used as the actor function, and a neural network is used as the action-value function (Q-function) as the critic function. The losses of the actor and critic functions are defined as the standard actor-critic work using the expected performance and the Bellman error. In this paper, the authors compute the derivative of the expected performance and identify a simple expression of the actor's derivative. This is used to perform actor-critic gradient updates more efficiently.
In the RMAB domain, the same algorithm and derivative simplification can be applied. RMABs are a special case of multiple MDPs with scalar and vector states. Specifically, the objective is defined as the integral over all activation costs λ, assuming the Whittle index exists and thus there exists a threshold policy that is optimal for every activation cost. This makes finding the optimal threshold policy equivalent to finding the Whittle index (if it exists).
The proposed method is evaluated on three domains and compared with other RL-based baselines, including general RL algorithms (DDPG, TD3, SALMUT) in the MDP setting, and Q-learning based (LPQL, WIBQL) and Whittle index based (NeurWIN) in the RMABs setting. My interpretation of why the proposed algorithm can outperform the general RL-based algorithms is that the proposed method simplifies the search space to threshold policy, while in contrast the general RL algorithms may use more complex models (e.g., neural networks) to represent the actor function. This advantage makes the proposed algorithm find the optimal policy more efficient but also restricted more to threshold policy. It may not work when threshold policy is not optimal. Specifically, in the context of MDPs considered in this paper, it is possible that threshold policy is not optimal. In those cases, general RL algorithms may still be needed.
In the RMAB context, the proposed algorithm outperforms other Q-learning-based algorithms that do not use an actor-critic architecture. It is known that actor-critic methods can improve performance on RL challenges. I believe this is the main advantage of the proposed algorithm compared to other baselines.
Strengths And Weaknesses
Strengths
The paper is well-presented and easy to follow. I appreciate the clarity of the presentation and idea.
The simplified expression of the actor derivative (expected reward derivative) is new.
Thorough evaluations and experiments
Weaknesses
The novelty is incremental. The main contribution is based on the use of threshold policy and simplification of the policy gradient.
The MDPs and RMABs domains are similar with no major differences.
The threshold policy considered in the paper can only handle one single scalar, which limits the applicability of the threshold policy.
[Minor] The proposed policy only works when a threshold policy is good enough. Otherwise, a more expressive policy parameterization is still needed in order to achieve better performance.
Questions
Comments
I think there is a missing integral over λ′ in the definition of the Q function in Equation (1).
Questions
I understand that finding the optimal policy in MDPs and RMABs are both challenging due to the PSPACE hardness. But finding the Whittle index in RMABs may be polynomial time solvable when there are only finitely many states [34, 35], where given indexability, [34] uses the definition of Whittle index and the Bellman equation to form a LP to solve in polynomial time, and [35] leverages the threshold policy to construct a faster algorithm for a specific type of RMABs problems. Is it possible to directly compute the Whittle index using similar LP method without using RL or neural networks? If not, what is the major difficulty of computing the Whittle index directly in your case?
Why did you choose to use a specific quadratic form of the reward function in the RMAB simulation in Section 6.3? Does the reward function structure affect the convergence of the actor-critic gradient descent update?
References: [34] Qian, Yundi, Chao Zhang, Bhaskar Krishnamachari, and Milind Tambe. "Restless poachers: Handling exploration-exploitation tradeoffs in security domains." In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 123-131. 2016. [35] Mate, Aditya, Jackson Killian, Haifeng Xu, Andrew Perrault, and Milind Tambe. "Collapsing Bandits and Their Application to Public Health Intervention." Advances in Neural Information Processing Systems 33 (2020): 15639-15650.
Limitations
Limitations
[Stated by the authors] The algorithm is only applicable to MDPs that admit a threshold policy.
The proposed algorithm only works with threshold policy with a single scalar value.
Negative societal impact
N/A |
NIPS | Title
DeepTOP: Deep Threshold-Optimal Policy for MDPs and RMABs
Abstract
We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our policy significantly outperforms other reinforcement learning algorithms due to its ability to exploit the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means.
1 Introduction
This paper considers a class of control policies, called threshold policies, that naturally arise in many practical problems. For example, a smart home server may only turn on the air conditioner when the room temperature exceeds a certain threshold, and a central bank may only raise the interest rate when inflation exceeds a certain threshold. For such problems, finding the optimal control policies can be reduced to finding the appropriate thresholds given other factors of the system, such as the number of people in the room in the smart home server scenario or the unemployment rate and the current interest rate in the central bank scenario.
An important feature of threshold policies is that their actions are monotone. For example, if a smart home server would turn on the air conditioner at a certain temperature, then, all other factors being equal, the server would also turn on the air conditioner when the temperature is even higher. By leveraging this monotone property, an algorithm aiming to learn the optimal threshold can potentially be much more efficient than generic reinforcement learning algorithms seeking to learn the optimal action at different points of temperature separately. In order to design an efficient algorithm for learning the optimal threshold policy, we first formally define a class of Markov decision processes (MDPs) that admit threshold policies and its objective function. The optimal threshold policy is then the one that maximizes the objective function. However, the objective function involves an integral over a continuous range, which makes it infeasible to directly apply standard tools, such as backward-propagation in neural networks, to perform gradient updates.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Surprisingly, we show that, by leveraging the monotone property of threshold policies, the gradient of the objective function has a very simple expression. Built upon this expression, we propose Deep Threshold-Optimal Policy (DeepTOP), a model-free actor-critic deep reinforcement learning algorithm that finds the optimal threshold policies. We evaluate the performance of DeepTOP by considering three practical problems, an electric vehicle (EV) charging problem that determines whether to charge an EV in the face of unknown fluctuations of electricity price, an inventory management problem that determines whether to order for goods in the face of unknown seasonal demands, and a make-to-stock problem for servicing jobs with different sizes. For all problems, DeepTOP significantly outperforms other state-of-the-art deep reinforcement learning algorithms due to its ability to exploit the monotone property.
We also study the notoriously hard restless multi-armed bandit (RMAB) problem. We show that the Whittle index policy, a powerful tool for RMABs, can be viewed as an optimal threshold policy for an alternative problem. Based on this observation, we define an objective function for the alternative problem, of which the Whittle index is the maximizer. We again show that the gradient of the objective function has a simple expression. This simple expression allows us to extend DeepTOP for the learning of the Whittle index. We compare this DeepTOP extension to three recently proposed algorithms that seek to learn the optimal index policies through other indirect properties. Simulation results show that the DeepTOP extension learns much faster because it directly finds the optimal threshold policy.
The rest of the paper is organized as follows. Section 2 defines the MDP setting and threshold policies. We present the DeepTOP algorithm for MDP in Section 3. We then discuss how the Whittle index policy for RMABs can be viewed as a threshold policy in Section 4 and develop a DeepTOP extension for learning it in Section 5. We show DeepTOP’s performance results for MDPs and RMABs in Section 6, and give related works in Section 7 before concluding.
2 Threshold Policies for MDPs
Consider an agent controlling a stochastic environment E described as an MDP E = (S,A,R,P, γ), with state space S, binary action space A := {0, 1}, reward function R : S × A → Ω, transition dynamics P : S ×A × S → , and discount factor γ ∈ [0, 1), where is the set of real numbers and Ω is the set of random variables. At each timestep t, the agent picks an action at ∈ A for the current state st. The state st ∈ S = × V has two components: a scalar state λt ∈ , and a vector state vt ∈ V, whereV is a discrete set of vectors. We assume the environment state is fully observable. Given the state-action pair (st, at), the MDP generates a reward rt following the unknown random variable R(st, at), and a random next state st+1 = (λt+1, vt+1) following the unknown distribution P. We use r̄(λ, v, a) := E[R((λ, v), a)] to denote the unknown expected one-step reward that can be obtained for the state-action pair (λ, v, a).
A threshold policy is one that defines a threshold function µ : V → mapping each vector state to a real number. The policy then deterministically picks at = 1(µ(vt) > λt), where 1(·) is the indicator function. There are many applications where it is natural to consider threshold policies and we discuss some of them below. Example 1. Consider the problem of charging electric vehicles (EV). When an EV arrives at a charging station, it specifies its demands for charge and a deadline upon which it will leave the station. The electricity price changes over time following some random process. The goal of the operator is to fulfill the EV’s requirement with minimum cost. In this problem, we can model the system by letting the scalar state λt be the current electricity price and the vector state vt be the remaining charge and time to deadline of the EV. For this problem, it is natural to consider a threshold policy that defines a threshold µ(vt) as the highest price the operator is willing to pay to charge the EV under vector state vt. The operator only charges the vehicle, i.e., chooses at = 1, if λt < µ(vt). Example 2. Consider the problem of warehouse management. A warehouse stores goods waiting to be sold. When the number of stored goods exceeds the demand, then there is a holding cost for each unsold good. On the other hand, if the number of stored goods is insufficient to fulfill the demand, then there is a cost of lost sales. The goal of the manager is to decide when to place orders so as to minimize the total cost. In this problem we can let the scalar state λt be the current inventory and let the vector state vt be the vector of all factors, such as upcoming holidays, that can influence future demands. It is natural to consider a threshold policy where the manager only places a new order if the current inventory λt falls below a threshold µ(vt) based on the current vector state vt.
Example 3. Consider a smart home server that controls the air conditioner. Let λt be −(current temperature) and vt be the time of the day and the number of people in the house. The server should turn on the air conditioner only if the temperature exceeds some threshold determined by vt, or, equivalently, λt < µ(vt).
Given a threshold policy with threshold function µ(·), we can define the corresponding action-value function by Qµ(λ, v, a). Let ρµ(λ′, v′, λ, v) be the discounted state distribution when the initial state is (λ, v) under the threshold policy to a visited state (λ′, v′). When the initial state is (λ, v), the expected discounted reward under the policy is
$$Q^{\mu}\big(\lambda, v, \mathbf{1}(\mu(v) > \lambda)\big) = \sum_{v' \in \mathcal{V}} \int_{\lambda'=-M}^{\lambda'=+M} \rho^{\mu}(\lambda', v', \lambda, v)\, \bar{r}\big(\lambda', v', \mathbf{1}(\mu(v') > \lambda')\big)\, d\lambda'. \tag{1}$$
Let M be a sufficiently large constant such that λt ∈ [−M,+M] for all t. Our goal is to learn the optimal threshold function µφ(v) parametrized by a vector φ that maximizes the objective function
$$K(\mu_\phi) := \int_{\lambda=-M}^{\lambda=+M} \sum_{v \in \mathcal{V}} Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda. \tag{2}$$
3 Deep Threshold Optimal Policy for MDPs
In this section, we present a deep threshold optimal policy (DeepTOP) for MDPs that finds the optimal φ for maximizing K(µφ).
3.1 Threshold Policy Gradient Theorem for MDPs
In order to design DeepTOP, we first study the gradient ∇φK(µφ). At first glance, computing ∇φK(µφ) looks intractable since it involves an integral over λ ∈ [−M,+M]. However, we establish the following threshold policy gradient theorem that shows the surprising result that ∇φK(µφ) has a simple expression.
Theorem 1. Given the parameter vector φ, let ρ̄(λ, v) be the discounted state distribution when the initial state is chosen uniformly at random under the threshold policy. If all vector states v ∈ V have distinct values of µφ(v), then,
$$\nabla_\phi K(\mu_\phi) = 2M|\mathcal{V}| \sum_{v \in \mathcal{V}} \bar{\rho}(\mu_\phi(v), v)\Big( Q^{\mu_\phi}\big(\mu_\phi(v), v, 1\big) - Q^{\mu_\phi}\big(\mu_\phi(v), v, 0\big) \Big)\nabla_\phi \mu_\phi(v). \tag{3}$$

Proof. Let ρ̄t(λ, v) be the distribution that the state at time t is (λ, v) when the initial state is chosen uniformly at random. Clearly, we have $\bar{\rho}(\lambda, v) = \sum_{t=1}^{\infty} \gamma^{t-1} \bar{\rho}_t(\lambda, v)$. Given φ, we number all states in $\mathcal{V}$ such that $\mu_\phi(v_1) > \mu_\phi(v_2) > \dots$. Let $M_0 = +M$, $M_n = \mu_\phi(v_n)$ for all $1 \le n \le |\mathcal{V}|$, and $M_{|\mathcal{V}|+1} = -M$. Also, let $\mathcal{V}_n$ be the subset of states $\{v \mid \mu_\phi(v) > M_n\} = \{v_1, v_2, \dots, v_{n-1}\}$. Now, consider the interval $(M_{n+1}, M_n)$ for some n. Notice that, for all $\lambda \in (M_{n+1}, M_n)$, $\mathbf{1}(\mu_\phi(v) > \lambda) = 1$ if and only if $v \in \mathcal{V}_{n+1}$. In other words, for any vector state v, the threshold policy would take the same action under all $\lambda \in (M_{n+1}, M_n)$, and we use $\pi_{n+1}(v)$ to denote this action. We then have
$$\nabla_\phi K(\mu_\phi) = \nabla_\phi \int_{\lambda=-M}^{\lambda=+M} \sum_{v \in \mathcal{V}} Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda = \sum_{v \in \mathcal{V}} \nabla_\phi \int_{\lambda=-M}^{\lambda=+M} Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda$$

$$= \sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \nabla_\phi \int_{\lambda=M_{n+1}}^{\lambda=M_n} Q^{\mu_\phi}\big(\lambda, v, \pi_{n+1}(v)\big)\, d\lambda$$

$$= \sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \Big( Q^{\mu_\phi}\big(M_n, v, \pi_{n+1}(v)\big)\nabla_\phi M_n - Q^{\mu_\phi}\big(M_{n+1}, v, \pi_{n+1}(v)\big)\nabla_\phi M_{n+1} + \int_{\lambda=M_{n+1}}^{\lambda=M_n} \nabla_\phi Q^{\mu_\phi}\big(\lambda, v, \pi_{n+1}(v)\big)\, d\lambda \Big), \tag{4}$$

where the summation-integration swap in the first equation follows the Fubini-Tonelli theorem and the last step follows the Leibniz integral rule. We simplify the first two terms in the last step by

$$\sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \Big( Q^{\mu_\phi}\big(M_n, v, \pi_{n+1}(v)\big)\nabla_\phi M_n - Q^{\mu_\phi}\big(M_{n+1}, v, \pi_{n+1}(v)\big)\nabla_\phi M_{n+1} \Big)$$

$$= \sum_{v \in \mathcal{V}} \sum_{n=1}^{|\mathcal{V}|} \Big( Q^{\mu_\phi}\big(\mu_\phi(v_n), v, \mathbf{1}(v \in \mathcal{V}_{n+1})\big) - Q^{\mu_\phi}\big(\mu_\phi(v_n), v, \mathbf{1}(v \in \mathcal{V}_n)\big) \Big)\nabla_\phi \mu_\phi(v_n)$$

$$= 2M|\mathcal{V}| \sum_{v \in \mathcal{V}} \bar{\rho}_1(\mu_\phi(v), v)\Big( Q^{\mu_\phi}\big(\mu_\phi(v), v, 1\big) - Q^{\mu_\phi}\big(\mu_\phi(v), v, 0\big) \Big)\nabla_\phi \mu_\phi(v). \tag{5}$$

Next, we expand the last term in (4). Note that $Q^{\mu_\phi}(\lambda, v, a) = \bar{r}(\lambda, v, a) + \gamma \int_{\lambda'=-M}^{\lambda'=+M} \sum_{v'} p(\lambda', v' \mid \lambda, v, a)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big)\, d\lambda'$, where $p(\cdot \mid \cdot)$ is the transition probability. Hence, $\nabla_\phi Q^{\mu_\phi}(\lambda, v, a) = \gamma \nabla_\phi \int_{\lambda'=-M}^{\lambda'=+M} \sum_{v'} p(\lambda', v' \mid \lambda, v, a)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big)\, d\lambda'$. Using the same techniques in (4) and (5), we have

$$\sum_{v \in \mathcal{V}} \sum_{n=0}^{|\mathcal{V}|} \int_{\lambda=M_{n+1}}^{\lambda=M_n} \nabla_\phi Q^{\mu_\phi}\big(\lambda, v, \pi_{n+1}(v)\big)\, d\lambda = \sum_{v \in \mathcal{V}} \int_{\lambda=-M}^{\lambda=+M} \nabla_\phi Q^{\mu_\phi}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda$$

$$= \gamma \sum_{v \in \mathcal{V}} \int_{\lambda=-M}^{\lambda=+M} \Big( \nabla_\phi \int_{\lambda'=-M}^{\lambda'=+M} \sum_{v' \in \mathcal{V}} p\big(\lambda', v' \mid \lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big)\, d\lambda' \Big)\, d\lambda$$

$$= 2M|\mathcal{V}| \sum_{v \in \mathcal{V}} \gamma \bar{\rho}_2(\mu_\phi(v), v)\Big( Q^{\mu_\phi}\big(\mu_\phi(v), v, 1\big) - Q^{\mu_\phi}\big(\mu_\phi(v), v, 0\big) \Big)\nabla_\phi \mu_\phi(v) + \gamma \sum_{v \in \mathcal{V}} \int_{\lambda=-M}^{\lambda=+M} \Big( \sum_{v' \in \mathcal{V}} \int_{\lambda'=-M}^{\lambda'=+M} \nabla_\phi \Big( p\big(\lambda', v' \mid \lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, Q^{\mu_\phi}\big(\lambda', v', \mathbf{1}(\mu_\phi(v') > \lambda')\big) \Big)\, d\lambda' \Big)\, d\lambda.$$

In the above equation, expanding the last term in time establishes (3).
3.2 DeepTOP Algorithm Design for MDPs
Motivated by Theorem 1, we now present DeepTOP-MDP, a model-free, actor-critic Deep RL algorithm. DeepTOP-MDP maintains an actor network with parameters φ that learns a threshold function µφ(v), and a critic network with parameters θ that learns an action-value function Qθ(λ, v, a). DeepTOP-MDP also maintains a target critic network with parameters θ′ that is updated slower than the critic parameters θ. The purpose of the target critic network is to improve the learning stability as demonstrated in [8, 19]. The objective of the critic network is to find θ that minimizes the loss function
$$L(\theta) := \mathbb{E}_{s_t, a_t, r_t, s_{t+1}}\Big[\Big( Q^\theta(\lambda_t, v_t, a_t) - r_t - \gamma \max_{a' \in \mathcal{A}} Q^{\theta'}\big(\lambda_{t+1}, v_{t+1}, a'\big) \Big)^2\Big], \tag{6}$$
where (st, at, rt, st+1) is sampled under some policy with st = (λt, vt). The objective of the actor network is to find φ that maximizes $\int_{\lambda=-M}^{\lambda=+M} \sum_{v \in \mathcal{V}} Q^{\theta}\big(\lambda, v, \mathbf{1}(\mu_\phi(v) > \lambda)\big)\, d\lambda$. In each timestep t, the environment E provides a state st to the agent. We set an exploration parameter εt ∈ [0, 1) and take a random action with probability εt. Otherwise, DeepTOP-MDP calculates µφ(vt) based on vt, and chooses at = 1(µφ(vt) > λt). E generates a reward rt and a next state st+1. A replay memory, denoted by M, then stores the transition {st, at, rt, st+1}. After filling the memory with at least B transitions, DeepTOP-MDP updates the parameters φ, θ, θ′ in every timestep using a sampled minibatch of size B of transitions {stk, atk, rtk, stk+1}, for 1 ≤ k ≤ B. The critic network uses the sampled transitions to calculate the estimated gradient of L(θ):

$$\hat{\nabla}_\theta L(\theta) := \frac{2}{B} \sum_{k=1}^{B} \Big( Q^\theta(\lambda_{t_k}, v_{t_k}, a_{t_k}) - r_{t_k} - \gamma \max_{a' \in \mathcal{A}} Q^{\theta'}(\lambda_{t_k+1}, v_{t_k+1}, a') \Big)\nabla_\theta Q^\theta(\lambda_{t_k}, v_{t_k}, a_{t_k}). \tag{7}$$

Similarly, the actor network uses the sampled transitions and Equation (3) to calculate the estimated gradient:

$$\hat{\nabla}_\phi K(\mu_\phi) := \frac{1}{B} \sum_{k=1}^{B} \Big( Q^\theta\big(\mu_\phi(v_{t_k}), v_{t_k}, 1\big) - Q^\theta\big(\mu_\phi(v_{t_k}), v_{t_k}, 0\big) \Big)\nabla_\phi \mu_\phi(v_{t_k}). \tag{8}$$
Algorithm 1 Deep Threshold Optimal Policy Training for MDPs (DeepTOP-MDP)
Randomly select initial actor network parameters φ and critic network parameters θ.
Set target critic network parameters θ′ ← θ, and initialize replay memory M.
for timestep t = 1, 2, 3, . . . do
    Receive state st = (λt, vt) from environment E.
    Select action at = 1(µφ(vt) > λt) with probability 1 − εt. Otherwise, select action at randomly.
    Execute action at, and observe reward rt and next state st+1 from E.
    Store transition {st, at, rt, st+1} into M.
    Sample a minibatch of B transitions {stk, atk, rtk, stk+1}, for 1 ≤ k ≤ B, from M.
    Update critic network parameters θ using the estimated gradient from Equation (7).
    Update actor network parameters φ using the estimated gradient from Equation (8).
    Soft update target critic parameters θ′: θ′ ← τθ + (1 − τ)θ′.
end for
Both the critic network and the actor network then take a gradient update step. Finally, we soft update the target critic’s parameters θ′ using θ′ ← τθ + (1 − τ)θ′, with τ < 1. The complete pseudocode is given in Algorithm 1.
4 Whittle Index Policy for RMABs
In this section, we demonstrate how the Whittle index policy [32], a powerful tool for solving the notoriously intractable Restless Multi-Armed Bandit (RMAB) problem, can be represented with a set of threshold functions. We first describe the RMAB control problem, and then define the Whittle index function.
An RMAB problem consists of N arms. The environment of an arm i, denoted as Ei, is an MDP with a discrete state space si,t ∈ Si, and a binary action space ai,t ∈ A := {0, 1}, where ai,t = 1 means that arm i is activated, and ai,t = 0 means that arm i is left passive at time t. Given the state-action pair (si,t, ai,t), Ei generates a random reward ri,t and a random next state si,t+1 following some unknown probability distributions based on (si,t, ai,t). Here we also use r̄i(si, ai) to denote the unknown expected one-step reward that can be obtained for the state-action pair (si, ai).
A control policy over all arms takes the states (s1,t, s2,t, . . . , sN,t) as input, and activates V out of N arms in every timestep. Solving for the optimal control policy for RMABs was proven to be intractable [21], since the agent must optimize over an input state space exponential in N. To circumvent the dimensionality challenge, the Whittle index policy assigns real values to an arm’s states using a Whittle index function for each arm Wi : Si → . Based on the assigned Whittle indices ( W1(s1,t),W2(s2,t), . . . ,WN(sN,t) ) , the Whittle index policy activates the V highest-valued arms out of N arms in timestep t, and picks the passive action for the remaining arms.
4.1 The Whittle Index Function as The Optimal Threshold Function
To define the Whittle index and relate it to threshold functions, let us first consider an alternative control problem of a single arm i as environment Ei with activation cost λ. In this problem, the agent follows a control policy that determines whether the arm is activated or not based on its current state si,t. If the policy activates the arm, then the agent must pay an activation cost of λ. Hence, the agent's net reward at timestep t is defined as ri,t − λai,t. We now consider applying threshold policies for this alternative control problem. A threshold policy defines a threshold function µi : Si → ℝ that maps each state to a real value. It then activates the arm if and only if µi(si,t) > λ, i.e., ai,t = 1(µi(si,t) > λ). The value of µi(si,t) can therefore be viewed as the largest activation cost that the agent is willing to pay to activate the arm under state si,t. To characterize the performance of a threshold policy with a threshold function µi(·), we let ρµi,λ(s′i, si) be the discounted state distribution, which is the average discounted number of visits of state s′i when the initial state is si under the threshold policy and λ. When the initial state is si, the expected discounted net reward under the threshold policy is
\[
Q_{i,\lambda}\big(s_i, \mathbb{1}(\mu_i(s_i) > \lambda)\big) = \sum_{s'_i \in \mathcal{S}_i} \rho_{\mu_i,\lambda}(s'_i, s_i) \Big( \bar{r}_i\big(s'_i, \mathbb{1}(\mu_i(s'_i) > \lambda)\big) - \lambda\, \mathbb{1}(\mu_i(s'_i) > \lambda) \Big). \tag{9}
\]
The performance of the threshold policy under a given λ is defined as Ji,λ(µi) := Σ_{si∈Si} Qi,λ(si, 1(µi(si) > λ)). The Whittle index of this arm is defined as the function µi(·) whose corresponding threshold policy maximizes Ji,λ(µi) for all λ:
Definition 1. (Whittle Index) If there exists a function µi : Si → ℝ such that choosing 1(µi(si) > λ) maximizes Ji,λ(µi) for all λ ∈ (−∞,+∞), then we say that µi(si) is the Whittle index Wi(si).¹
We note that, for some arms, there does not exist any function µi(si) that satisfies the condition in Definition 1. For such arms, the Whittle index does not exist. We say that an arm is indexable if it has a well-defined Whittle index function. Definition 1 shows that finding the Whittle index is equivalent to finding the optimal µi(·) that maximizes Ji,λ(µi) for all λ ∈ (−∞,+∞). Parameterizing a threshold function µ_i^{φ_i}(·) by parameters φi and letting M be a sufficiently large number such that µ_i^{φ_i}(si) ∈ (−M,+M) for all si and φi, we aim to find the optimal φi for maximizing the objective function
\[
K_i(\mu_i^{\phi_i}) := \int_{\lambda=-M}^{\lambda=+M} \sum_{s_i \in \mathcal{S}_i} Q_{i,\lambda}\big(s_i, \mathbb{1}(\mu_i^{\phi_i}(s_i) > \lambda)\big)\, d\lambda. \tag{10}
\]
5 Deep Threshold Optimal Policy for RMABs
To design a DeepTOP variant for RMABs, we first give the gradient of the objective function.
Theorem 2. Given the parameter vector φi, let ρ̄λ(si) be the discounted state distribution when the initial state is chosen uniformly at random and the activation cost is λ. If all states si ∈ Si have distinct values of µ_i^{φ_i}(si), then,
\[
\nabla_{\phi_i} K_i(\mu_i^{\phi_i}) = |\mathcal{S}_i| \sum_{s_i \in \mathcal{S}_i} \bar{\rho}_{\mu_i^{\phi_i}(s_i)}(s_i) \Big( Q_{i,\mu_i^{\phi_i}(s_i)}(s_i, 1) - Q_{i,\mu_i^{\phi_i}(s_i)}(s_i, 0) \Big) \nabla_{\phi_i} \mu_i^{\phi_i}(s_i). \tag{11}
\]
Proof. The proof is similar to that of Theorem 1. For completeness, we provide it in Appendix A.
We note that Theorem 2 does not require the arm to be indexable. Whether an arm is indexable or not, using Theorem 2 along with a gradient ascent algorithm will find a locally-optimal φi that maximizes Ki(µ_i^{φ_i}). When the arm is indexable, the resulting threshold function µ_i^{φ_i} is the Whittle index function. Using the gradient result from Equation (11), we present the algorithm DeepTOP-RMAB for finding the optimal parametrized threshold functions µ_i^{φ_i} for arms i = 1, 2, . . . , N. The training method is similar to the MDP version, except for two important differences. First, the training of each arm is done independently from the others. Second, the value of λ is an artificial value that only exists in the alternative problem but not in the original RMAB problem. Similar to DeepTOP-MDP, we maintain three sets of network parameters for each arm i: actor φi, critic θi, and target-critic θ′i. The critic network parametrizes the action-value function, and is optimized by minimizing the loss function
\[
L_i(\theta_i) := \int_{\lambda=-M}^{\lambda=+M} \mathbb{E}_{s_{i,t}, a_{i,t}, r_{i,t}, s_{i,t+1}} \Big[ \big( Q^{\theta_i}_{i,\lambda}(s_{i,t}, a_{i,t}) - r_{i,t} - \gamma \max_{a' \in \mathcal{A}} Q^{\theta'_i}_{i,\lambda}(s_{i,t+1}, a') \big)^2 \Big]\, d\lambda, \tag{12}
\]
with (si,t, ai,t, ri,t, si,t+1) sampled under some policy. In each timestep t, each arm environment Ei provides its current state si,t to the agent. For each arm i = 1, 2, . . . , N, DeepTOP-RMAB calculates the state value µ_i^{φ_i}(si,t) with the arm's respective actor network parameters φi. Given an exploration parameter εt ∈ [0, 1), DeepTOP-RMAB activates the V arms with the largest µ_i^{φ_i}(si,t) with probability 1 − εt, and activates V randomly selected arms with probability εt. Based on the executed actions, each arm provides a reward ri,t and the next state si,t+1. An arm's transition {si,t, ai,t, ri,t, si,t+1} is then stored in the arm's memory, denoted by Mi. After filling each arm's memory with at least B transitions, DeepTOP-RMAB updates φi, θi, and θ′i in every timestep. For each arm i, DeepTOP-RMAB first samples a minibatch of size B of transitions {s_{i,t_k}, a_{i,t_k}, r_{i,t_k}, s_{i,t_k+1}}, for 1 ≤ k ≤ B, from the memory Mi. It then randomly samples B values [λi,1, λi,2, . . . , λi,B] from the range [−M,+M]. Using the sampled transitions and λ values, it estimates the gradient of Li(θi) as
\[
\widehat{\nabla}_{\theta_i} L_i(\theta_i) := \frac{2}{B} \sum_{k=1}^{B} \Big( Q^{\theta_i}_{i,\lambda_{i,k}}(s_{i,t_k}, a_{i,t_k}) - r_{i,t_k} - \gamma \max_{a' \in \mathcal{A}} Q^{\theta'_i}_{i,\lambda_{i,k}}(s_{i,t_k+1}, a') \Big) \nabla_{\theta_i} Q^{\theta_i}_{i,\lambda_{i,k}}(s_{i,t_k}, a_{i,t_k}). \tag{13}
\]
¹To simplify notation, we use a necessary and sufficient condition for the Whittle index as its definition. We refer interested readers to [9] for more thorough discussions on the Whittle index.
Using the sampled transitions and Equation (11), it estimates the gradient of Ki(µ_i^{φ_i}) as
\[
\widehat{\nabla}_{\phi_i} K_i(\mu_i^{\phi_i}) := \frac{1}{B} \sum_{k=1}^{B} \Big( Q^{\theta_i}_{i,\mu_i^{\phi_i}(s_{i,t_k})}(s_{i,t_k}, 1) - Q^{\theta_i}_{i,\mu_i^{\phi_i}(s_{i,t_k})}(s_{i,t_k}, 0) \Big) \nabla_{\phi_i} \mu_i^{\phi_i}(s_{i,t_k}). \tag{14}
\]
A gradient update step is taken after calculating the actor and critic networks' gradients. Finally, DeepTOP-RMAB soft updates the target critic parameters θ′i using θ′i ← τθi + (1 − τ)θ′i, with τ < 1. The complete DeepTOP-RMAB pseudocode is given in Appendix B.
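As an illustration of the two RMAB-specific ingredients, the sketch below (hypothetical helper names, building on the MDP sketch above) shows top-V arm selection from the per-arm threshold outputs and the uniform sampling of activation costs λ used in the critic gradient of Equation (13).

```python
# Hypothetical helpers for the RMAB variant, reusing the Actor/Critic sketch above.
import torch

def select_arms(actors, arm_states, V, epsilon=0.0):
    """Activate the V arms with the largest threshold values mu_i(s_i),
    or a random subset of V arms with probability epsilon."""
    n = len(actors)
    if torch.rand(()) < epsilon:
        return torch.randperm(n)[:V]
    thresholds = torch.stack([actors[i](arm_states[i]).squeeze() for i in range(n)])
    return torch.topk(thresholds, V).indices

def sample_costs(batch_size, M):
    """Sample activation costs lambda ~ Uniform[-M, +M], one per sampled transition."""
    return (torch.rand(batch_size, 1) * 2.0 - 1.0) * M
```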
6 Simulations
We have implemented and tested both DeepTOP-MDP and DeepTOP-RMAB in a variety of settings. The training procedure of the two DeepTOP algorithms is similar to that of the DDPG [19] algorithm except for the expression of the gradients. We implemented the DeepTOP algorithms by modifying an open-source implementation of DDPG [12]. All source code can be found in the repository https://github.com/khalednakhleh/deeptop.
6.1 Simulations for MDPs
We evaluate three MDPs, namely, the electric vehicle charging problem, the inventory management problem, and the make-to-stock problem.
EV charging problem. This problem is based on Yu, Xu, and Tong [34]. It considers a charging station serving EVs. When an EV arrives at the station, it specifies the amount of charge it needs and a deadline by which it will leave the station. The electricity price changes over time, and we model it by an Ornstein-Uhlenbeck process [30]. In each timestep, the station decides whether or not to charge the EV. If it decides to charge the EV, then it provides one unit of charge to the EV. The station then obtains a unit reward and pays the current electricity price. If the station fails to fully charge the EV by the EV's deadline, then the station suffers a penalty that is a convex function of the remaining needed charge. A new EV arrives at the station when the previous EV leaves. We model this problem by letting the scalar state be the current electricity price and the vector state be the remaining needed charge and time-to-deadline of the current EV. A threshold policy is one that calculates a threshold based on the EV's remaining needed charge and time-to-deadline, and then decides to charge the EV if and only if the current electricity price is below the threshold.
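For concreteness, a minimal sketch of the scalar-state dynamics and the threshold rule is given below; the Ornstein-Uhlenbeck parameters (theta, price_mean, sigma) are illustrative values of our own choosing, not those used in the paper's experiments.

```python
# Illustrative sketch of the EV-charging scalar state and the threshold decision rule.
import numpy as np

def ou_price_step(price, theta=0.15, price_mean=5.0, sigma=0.5, rng=np.random):
    """One Euler step of a mean-reverting Ornstein-Uhlenbeck electricity price."""
    return price + theta * (price_mean - price) + sigma * rng.standard_normal()

def charge_decision(price, charge_left, time_left, threshold_fn):
    """Threshold policy: charge iff the current price is below mu(charge_left, time_left)."""
    return int(threshold_fn(charge_left, time_left) > price)
```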
Inventory management problem. We construct an inventory management problem by jointly incorporating a variety of practical challenges considered in the literature [28, 15, 10, 27], including seasonal fluctuations in demand and lead times in orders. We consider a warehouse holding goods. In each timestep, there is a random amount of demand whose mean depends on the time of the year. The warehouse can fulfill the demand as long as it has sufficient inventory, and it makes a profit for each unit of sold goods. At the end of the timestep, the warehouse incurs a unit holding cost for each unit of unsold goods. The warehouse manager needs to decide whether to order more goods. When it places an order for goods, there is a lead time of one timestep, that is, the goods ordered at timestep t are only available for sale at timestep t + 1. We model this problem by letting the scalar state be the current inventory and the vector state be the time of the year. A threshold policy calculates a threshold based on the time of the year and decides to place an order for goods if the current inventory is below the threshold.
Make-to-stock production problem. This problem is considered in [26]. It studies a system that produces m items with W demand classes and buffer size S. Accepting a class v order leads to a reward Rv, as long as there is still room in the buffer for the order. The classes of demands are ordered such that R1 > R2 > · · ·. In this problem, the scalar state is the number of accepted but unfinished orders and the vector state is the class of the next arriving order. More details about the three MDPs can be found in Appendix C.
Evaluated policies. We compare DeepTOP-MDP against DDPG [19] and TD3 [8], two state-of-the-art off-policy and model-free deep RL algorithms. We use open-source implementations of these two algorithms from [12, 7]. We use the same hyper-parameters, including the neural network architecture, learning rates, etc., for all three algorithms. We also evaluate the Structure-Aware Learning for Multiple Thresholds algorithm (SALMUT) [26], a reinforcement learning algorithm
that finds the optimal threshold policy. SALMUT requires the vector states to be pre-sorted by their threshold values. Hence, SALMUT can only be applied to the make-to-stock production problem. Details about the training parameters can be found in Appendix D. For the EV charging problem, Yu, Xu, and Tong [34] has found the optimal threshold policy. We call the optimal threshold policy the Deadline Index policy and compare DeepTOP-MDP against it.
Simulation results. Simulation results of the three MDPs are shown in Figure 1. The results are the average of 20 independent runs. Before starting a run, we fill an agent's memory with 1000 transitions by randomly selecting actions. We plot the average reward obtained over the previous 100 timesteps, averaged over the 20 runs. In addition, we provide the standard deviation bounds around the average reward.
It can be observed that DeepTOP significantly outperforms DDPG, TD3, and SALMUT. Although the training procedure of DeepTOP is similar to that of DDPG, DeepTOP is able to achieve much faster learning by leveraging the monotone property. Without leveraging the monotone property, DDPG and TD3 need to learn the optimal policy for each scalar state independently, and therefore have much worse performance. DeepTOP performs better than SALMUT because DeepTOP directly employs the threshold policy gradient. SALMUT in contrast approximates threshold policies through randomized policies since it can only handle continuous and differentiable functions. We believe this might be the reason why DeepTOP outperforms SALMUT. We also note that DeepTOP performs virtually the same as the Deadline Index policy for the EV charging problem in about 2000 timesteps, suggesting that DeepTOP indeed finds the optimal threshold policy quickly. We also evaluate DeepTOP for different neural network architectures in Appendix E, and show that DeepTOP performs the best in all settings.
6.2 Simulations for RMABs
We evaluate two RMABs, namely, the one-dimensional bandits from [17] and the recovering bandits from [20].
One-dimensional bandits. We consider an extension of the RMAB problem evaluated in Killian et al. [17], which considers the case when each arm is a two-state Markov process. We extend it so that each arm is a Markov process with 100 states, numbered as 0, 1, . . . , 99, as shown in Figure 2, where state 99 is the optimal state.
The reward of an arm depends on the distance between its current state and state 99. Suppose the current state of arm i is si,t; then it generates a reward ri,t = 1 − ((si,t − 99)/99)². If the arm is activated, then it changes to state si,t+1 = min{si,t + 1, 99} with probability pi. If the arm is not activated, then it changes to state si,t+1 = max{si,t − 1, 0} with probability qi. In the simulations, we pick the probabilities pi to be evenly spaced over the interval [0.2, 0.8], depending on the number of arms N. We set the
probabilities qi = pi. We consider that there are N arms and that the agent needs to activate V arms in each timestep. We evaluate three settings of (N,V) = (10, 3), (20, 5), and (30, 6).
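The following is a small sketch of a single arm's dynamics as described above; we assume the state stays unchanged when the probabilistic up/down move does not fire, which the text leaves implicit.

```python
# Sketch of one "one-dimensional" arm with 100 states; reward shrinks quadratically
# with the distance to the optimal state 99.
import numpy as np

def step_arm(s, activate, p_i, q_i, rng=np.random):
    reward = 1.0 - ((s - 99) / 99.0) ** 2
    if activate:
        s_next = min(s + 1, 99) if rng.random() < p_i else s
    else:
        s_next = max(s - 1, 0) if rng.random() < q_i else s
    return reward, s_next
```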
Recovering bandits. First introduced in [25], recovering bandits model the varying behavior of consumers over time. A consumer's interest in a particular product falls after the consumer clicks on its advertisement link. However, their interest in the product recovers with time. The problem is modelled as an RMAB with each arm corresponding to an advertisement link. The reward of playing an arm is given by a function f(min(z, zmax)), with z being the time since the arm was last played.
In our experiments, we consider arms with different reward functions, with the arm’s state being the value min{z, zmax} and zmax = 100. We also evaluate recovering bandits on three settings of (N,V) = (10, 3), (20, 5), and (30, 6). More details can be found in Appendix F.
Evaluated policies. We compare DeepTOP-RMAB against three recent studies that aim to learn index policies for RMABs, namely, Lagrange policy Q learning (LPQL) [17], Whittle index based Q learning (WIBQL) [1], and neural Whittle index network (NeurWIN) [20]. LPQL consists of three steps: First, it learns a Q function for each arm independently. Second, it uses the Q functions of all arms to determine a common Lagrangian. Third, it uses the Lagrangian to calculate the index of each arm. WIBQL is a two-timescale algorithm that learns the Whittle indices of indexable arms by updating Q values on the fast timescale, and index values on the slower timescale. NeurWIN is an off-line training algorithm based on REINFORCE that requires a simulator to learn the Whittle index. Both LPQL and WIBQL are tabular learning methods which may perform poorly compared to deep RL algorithms when the size of the state space is large. Hence, we also design deep RL equivalent algorithms that approximate their Q functions using neural networks. We refer to the Deep RL extensions as neural LPQL and neural WIBQL. In all experiments, neural LPQL, neural WIBQL, and NeurWIN use the same hyper-parameters as DeepTOP-RMAB. For the one-dimensional bandits, it can be shown that the Whittle index is in the range of [−1, 1], and hence we set M = 1. For the recovering bandits, we set M = 10.
Simulation results. Simulation results are shown in Figures 3 and 4. It can be observed that DeepTOP achieves the optimal average rewards in all cases. The reason that neural LPQL performs worse than DeepTOP may lie in its reliance on a common Lagrangian. Since the common Lagrangian is calculated based on the Q functions of all arms, an inaccuracy in one arm’s Q function can result in an inaccurate Lagrangian, which, in turn, leads to inaccuracy in the index values of all arms. Prior work [17] has already shown that WIBQL performs worse than LPQL. Hence, it is not surprising that neural WIBQL performs worse than both neural LPQL and DeepTOP. NeurWIN performs worse than DeepTOP because it is based on REINFORCE and therefore can only apply updates at the end of each minibatch of episodes. We also evaluate DeepTOP for different neural network architectures and the results are shown in Appendix G for the one-dimensional bandits and Appendix H for the recovering bandits.
7 Related Work
Threshold policies have been analysed for many decision-making problems formulated as MDPs. [11] examined the problem of residential energy storage under price fluctuations, and proved the existence of optimal threshold policies for minimizing the cost. [5] proved that MDPs with convex and piecewise linear cost functions admit an optimal threshold policy. [24] showed the existence of an optimal threshold policy for energy arbitrage given degrading battery capacity, with [2] using the REINFORCE algorithm [33] to learn a trading policy with price thresholds for intraday electricity markets. [14] considered mean field games in a multi-agent MDP setting, and characterized the individual agent strategy with a threshold policy when the mean game admits a threshold policy.
More recently, [31] studied job-assignment threshold policies for data centers with heterogeneous servers and job classes, and gave conditions for the existence of optimal threshold policies. [35] proposed a distributed threshold-based control policy for graph traversal by assigning a state threshold that determines if the agent stays in or leaves a state. For minimizing the age of information in energy-harvesting sensors, [4] used the finite-difference policy gradient [23] to learn a possibly sub-optimal threshold policy in the average cost setting. [13] proposed an RL-based threshold policy for semi-MDPs for controlling the micro-climate of buildings, with simulations demonstrating efficacy on a single-zone building. [29] used the Deep Q-network RL algorithm for selecting alert thresholds in anti-fraud systems, with simulations showing performance improvements over static threshold policies. [26] described the SALMUT RL algorithm for exploiting the ordered multi-threshold structure of the optimal policy, with SALMUT implemented in [16] for overload protection of computing nodes. In contrast to these works, DeepTOP-MDP is applicable to any MDP that admits threshold policies.
In learning the Whittle index policy for RMABs, [6] proposed a Q-learning heuristic called the Q Whittle Index Controller (QWIC), which may not find the Whittle indices even when the training converges. [20] described a Deep RL algorithm called NeurWIN for learning the Whittle index of a restless arm independently of other arms. However, NeurWIN requires a simulator to train the neural networks. Some recent studies, such as [1, 3, 17], proposed various online learning algorithms that can find the Whittle index when the algorithms converge. These algorithms rely on some indirect property of the Whittle index, which explains why they converge more slowly than DeepTOP.
8 Conclusion and Future Work
In this paper, we presented DeepTOP: a Deep RL actor-critic algorithm that learns the optimal threshold function for MDPs that admit a threshold policy and for RMAB problems. We first developed the threshold policy gradient theorem, where we proved that a threshold function has a simple-to-compute gradient. Based on the gradient expressions, we designed the DeepTOP-MDP and DeepTOP-RMAB algorithm variants and compared them against state-of-the-art learning algorithms. In both the MDP and RMAB settings, experiment results showed that DeepTOP exceeds the performance of the baselines in all considered problems. A promising future direction is to extend DeepTOP to threshold policies with multiple actions. For example, the Federal Reserve needs to decide not only whether to raise the interest rate, but also the size of the rate hike.
Acknowledgments and Disclosure of Funding
This material is based upon work supported in part by NSF under Award Number ECCS-2127721, in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Grant Number W911NF-22-1-0151, and in part by Office of Naval Research under Contract N00014-21-1-2385. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing.
1. What is the focus of the paper regarding threshold policies?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the limitations of the paper regarding its applicability and significance?
5. Does the reviewer have any questions or suggestions for improving the paper?
6. Are there any potential negative societal impacts associated with the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper considers the threshold-policy problem. The authors show that the gradient for these problems has a simple expression. The authors also propose a rephrasing of Whittle index policies for restless multi-armed bandits in the form of a threshold policy and match their algorithm to this scenario. The authors support their results with simulations.
Strengths And Weaknesses
Originality To the best of my knowledge the results are novel.
Quality The theoretical results do not seem very surprising, but I did find them interesting and useful.
Clarity The paper is written very clearly. The restrictions on V seem to be quite drastic (discrete set at line 66, distinct threshold values in Theorem 1), so I think a short explanation is in order (is this just a technical restriction or a real pain? why does the restriction exist?)
Significance Threshold policies seem to match a rather small range of problems. The main limitations are two-action policies and the policy structure. Consequently, the significance of the paper is limited by tackling only this small range. In its niche, I think the paper gives a very useful insight, even if it's not very sophisticated. In addition, with some thought the core idea might extend to more general scenarios, for example other cases where the gradient can be calculated easily or other problems that can be reduced to threshold policies.
Questions
The paper is pretty straight-forward and clear. I have no questions or suggestions.
Limitations
The paper has no potential negative societal impact. |
NIPS | Title
Synthesizing Tasks for Block-based Programming
Abstract
Block-based visual programming environments play a critical role in introducing computing concepts to K-12 students. One of the key pedagogical challenges in these environments is in designing new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. In this paper, we formalize the problem of synthesizing visual programming tasks. In particular, given a reference visual task Tin and its solution code Cin, we propose a novel methodology to automatically generate a set {(Tout, Cout)} of new tasks along with solution codes such that tasks Tin and Tout are conceptually similar but visually dissimilar. Our methodology is based on the realization that the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, directly mutating reference task Tin to generate new tasks is futile. Our task synthesis algorithm operates by first mutating code Cin to obtain a set of codes {Cout}. Then, the algorithm performs symbolic execution over a code Cout to obtain a visual task Tout; this step uses the Monte Carlo Tree Search (MCTS) procedure to guide the search in the symbolic tree. We demonstrate the effectiveness of our algorithm through an extensive empirical evaluation and user study on reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com.
1 Introduction
Block-based visual programming environments are increasingly used nowadays to introduce computing concepts to novice programmers including children and K-12 students. Led by the success of environments like Scratch [29], initiatives like Hour of Code by Code.org [24] (HOC) and online platforms like CodeHS.com [21], block-based programming has become an integral part of introductory computer science education. Considering HOC alone, over one billion hours of block-based programming activity has been performed so far by over 50 million unique students worldwide [24, 35].
The societal need for enhancing K-12 computing education has led to a surge of interest in developing AI-driven systems for pedagogy of block-based programming [33, 26, 27, 34, 16]. Existing works have studied various aspects of intelligent support, including providing real-time next-step hints when a student is stuck solving a task [20, 36, 18, 17, 9], giving data-driven feedback about a student’s misconceptions [31, 19, 28, 30, 35], and demonstrating a worked-out solution for a task when a student lacks the required programming concepts [37]. An underlying assumption when providing such intelligent support is that afterwards the student can practice new similar tasks to finally learn the missing concepts. However, this assumption is far from reality in existing systems—the programming tasks are typically hand-curated by experts/tutors, and the available set of tasks is limited. Consider HOC’s Classic Maze challenge [23], which provides a progression of 20 tasks: Millions of students have attempted these tasks, yet when students fail to solve a task and receive assistance, they cannot practice similar tasks, hindering their ability to master the desired concepts. We seek to tackle this pedagogical challenge by developing techniques for synthesizing new programming tasks.
∗Authors listed alphabetically; Correspondence to: Ahana Ghosh <gahana@mpi-sws.org>.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
We formalize the problem of synthesizing visual programming tasks of the kind found in popular learning platforms like Code.org (see Fig. 1) and CodeHS.com (see Fig. 2). As input, we are given a reference task Tin, specified as a visual puzzle, and its solution code Cin. Our goal is to synthesize a set {(Tout, Cout)} of new tasks along with their solution codes that are conceptually similar but visually dissimilar to the input. This is motivated by the need for practice tasks that on one hand exercise the same concepts, while looking fresh in order to maintain student engagement.
When tackling the problem of synthesizing new tasks with the above desirable properties, three key challenges emerge. First, we are generating problems in a conceptual domain with no well-defined procedure that students follow to solve a task—consequently, existing work on educational problem generation in procedural domains does not apply in our setting [3, 11]. Second, the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, template-based problem generation techniques [32, 25] that rely on directly mutating the input to generate new tasks is ineffective (see Section 5 where we use this approach as a baseline). Furthermore, such a direct task-mutation approach would require access to an automated solution synthesizer; however, state-of-the-art program synthesis techniques are not yet on par with experts and their minimal solutions [5, 8, 6]. Third, the space of possible tasks and their solutions is potentially unbounded, and thus, any problem generation technique that relies on exhaustive enumeration is intractable [32, 1, 2].
To overcome these challenges, we propose a novel methodology that operates by first mutating the solution code Cin to obtain a set of codes {Cout}, and then performing symbolic execution over a code Cout to obtain a visual puzzle Tout. Mutation is efficient by creating an abstract representation of Cin along with appropriate constraints and querying an SMT solver [4]; any solution to this query is a mutated code Cout. During symbolic execution, we use Monte Carlo Tree Search (MCTS) to guide the search over the (unbounded) symbolic execution tree. We demonstrate the effectiveness of our methodology by performing an extensive empirical evaluation and user study on a set of reference tasks from the Hour of code challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. In summary, our main contributions are:
• We formalize the problem of synthesizing block-based visual programming tasks (Section 2).
• We present a novel approach for generating new visual tasks along with solution codes such that they are conceptually similar but visually dissimilar to a given reference task (Section 3).
• We demonstrate the effectiveness of our approach through an extensive empirical evaluation and user study on reference tasks from real-world programming platforms (Section 4 and Section 5).
2 Problem Formulation
The space of tasks. We define a task as a tuple T := (Tvis, Tstore, Tsize), where Tvis denotes the visual puzzle, Tstore the available block types, and Tsize the maximum number of blocks allowed in the
solution code. For instance, considering the task T := Tin in Fig. 1a, Tvis is illustrated in Fig. 1a, Tstore = {move, turnL, turnR, RepeatUntil, If}, and Tsize = 4. The space of codes. The programming environment has a domain-specific language (DSL), which defines the set of valid codes C and is shown in Fig. 4a. A code C ∈ C is characterized by several properties, such as the set Cblocks of block types in C, the number of blocks Csize, the depth Cdepth of the corresponding Abstract Syntax Tree (AST), and the nesting structure Cstruct representing programming concepts exercised by C. For instance, considering the code C := Cin in Fig. 1b, Cblocks = {move, turnL, RepeatUntil, If}, Csize = 4, Cdepth = 3, and Cstruct = {Run{RepeatUntil{If}}}. Below, we introduce two useful definitions relating the task and code space. Definition 1 (Solution code). C is a solution code for T if the following holds: C successfully solves the visual puzzle Tvis, Cblocks ⊆ Tstore, and Csize ≤ Tsize. CT denotes the set of all solution codes for T. Definition 2 (Minimality of a task). Given a solvable task T with |CT| ≥ 1 and a threshold δ ∈ N, the task is minimal if @C ∈ CT such that Csize < Tsize − δ.
Next, we introduce two definitions formalizing the notion of conceptual similarity. Definition 3 formalizes conceptual similarity of a task T along with one solution code C. Since a task can have multiple solution codes, Definition 4 provides a stricter notion of conceptual similarity of a task T for all its solution codes. These definitions are used in our objective of task synthesis in conditions (I) and (V) below. Definition 3 (Conceptual similarity of (T, C)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T along with a solution code C is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and Cstruct = Cinstruct. Definition 4 (Conceptual similarity of (T, ·)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and ∀C ∈ CT, Cstruct = Cinstruct.
Environment domain knowledge. We now formalize our domain knowledge about the block-based environment to measure visual dissimilarity of two tasks, and capture some notion of interestingness and quality of a task. Given tasks T and T′, we measure their visual dissimilarity by an environmentspecific function Fdiss(Tvis, T′vis) ∈ [0, 1]. Moreover, we measure generic quality of a task with function Fqual(Tvis, C) ∈ [0, 1]. We provide specific instantiations of Fdiss and Fqual in our evaluation.
Objective of task synthesis. Given a reference task Tin and a solution code Cin ∈ CTin as input, we seek to generate a set {(Tout, Cout)} of new tasks along with solution codes that are conceptually similar but visually dissimilar to the input. Formally, given parameters (δsize, δdiss, δqual), our objective is to synthesize new tasks meeting the following conditions:
(I) (Tout, Cout) is conceptually similar to (Tin, Cin) with threshold δsize in Definition 3. (II) Tout is visually dissimilar to Tin with margin δdiss, i.e., Fdiss(Tinvis, Toutvis ) ≥ δdiss.
(III) Tout has a quality score above threshold δqual, i.e., Fqual(Toutvis , Cout) ≥ δqual.
In addition, depending on the use case, it is desirable that the new tasks satisfy the following criteria: (IV) Cout is different from the input solution code, i.e., Cout 6= Cin. (V) Tout is conceptually similar to (Tin, Cin) with threshold δsize in Definition 4.
(VI) Tout is minimal as per Definition 2 for a desired value of δmini (e.g., δmini = 0 or δmini = 1).
3 Our Task Synthesis Algorithm
We now present the pipeline of our algorithm (see Fig. 3), which takes as input a reference task Tin and its solution code Cin, and generates a set {(Tout, Cout)} of new tasks with their solution codes. The goal is for this set to be conceptually similar to (Tin, Cin), but for new tasks {Tout} to
be visually dissimilar to Tin. This is achieved by two main stages: (1) mutation of Cin to obtain a set {Cout}, and (2) symbolic execution of each Cout to create a task Tout. The first stage, presented in Section 3.1, converts Cin into an abstract representation restricted by a set of constraints (Fig. 3(a)), which must be satisfied by any generated Cout (Fig. 3(b)). The second stage, described in Section 3.2, applies symbolic execution on each code Cout to create a corresponding visual task Tout (Fig. 3(c)) while using Monte Carlo Tree Search (MCTS) to guide the search in the symbolic execution tree.
3.1 Code Mutation
This stage in our pipeline mutates code Cin of task Tin such that its conceptual elements are preserved. Our mutation procedure consists of three main steps. First, we generate an abstract representation of Cin, called sketch. Second, we restrict the sketch with constraints that describe the space of its concrete instantiations. Although this formulation is inspired from work on generating algebra problems [32], we use it in the entirely different context of generating conceptually similar mutations of Cin. This is achieved in the last step, where we use the sketch and its constraints to query an SMT solver [4]; the query solutions are mutated codes {Cout} such that Coutstruct = Cinstruct (see Definition 3). Step 1: Sketch. The sketch of code C, denoted by Q, is an abstraction of C capturing its skeleton and generalizing C to the space of conceptually similar codes. Q, expressed in the language of Fig. 4b, is generated from C with mapping Ω. In particular, the map exploits the AST structure of the code: the AST is traversed in a depth-first manner, and all values are replaced with their corresponding sketch variables, i.e., action a, bool b, and iter x are replaced with A, B, and X, respectively. In the following, we also use mapping ω(·| C), which takes a sketch variable in Q and returns its value in C. In addition to the above, we may extend a variable A to an action sequence A, since any A is allowed to be empty (φ). We may also add an action sequence of length δsize at the beginning and end of the obtained sketch. As an example, consider the code in Fig. 4d and the resulting sketch in Fig. 4e. Notice that, while we add an action sequence at the beginning of the sketch (A1), no action sequence is appended at the end because construct RepeatUntil renders any succeeding code unreachable.
Step 2: Sketch constraints. Sketch constraints restrict the possible concrete instantiations of a sketch by encoding the required semantics of the mutated codes. All constraint types are in Fig. 4c.
In particular, ∆0 restricts the size of the mutated code within δsize. ∆1 specifies the allowed mutations to an action sequence based on its value in the code, given by ω(A | C). For instance, this constraint could result in converting all turnLeft actions of a sequence to turnRight. ∆2 restricts the possible values of the Repeat counter within threshold δiter. ∆3 ensures that the Repeat counter is optimal, i.e., action subsequences before and after this construct are not nested in it. ∆4 specifies the possible values of the If condition based on its value in the code, given by ω(B | C). ∆5 refers to constraints imposed on action sequences nested within conditionals. As an example, consider
∆5 in Fig. 4f, which states that if B1 = pathLeft, then the nested action sequence must have at least one turnLeft action, and the first occurrence of this action must not be preceded by a move or turnRight, thus preventing invalid actions within the conditional. ∆6 ensures minimality of an action sequence, i.e., optimality of the constituent actions to obtain the desired output. This constraint would, for instance, eliminate redundant sequences such as turnLeft, turnRight, which does not affect the output, or turnLeft, turnLeft, turnLeft, whose output could be achieved by a single turnRight. All employed elimination sequences can be found in the supplementary material. The entire list of constraints applied on the solution code in Fig. 4d is shown in Fig. 4f.
Step 3: SMT query. For a sketch Q generated from code C and its constraints, we pose the following query to an SMT solver: (sketch Q, Q-constraints). As a result, the solver generates a set of instantiations, which are conceptually similar to C. In our implementation, we used the Z3 solver [7]. For the code in Fig. 4d, Z3 generated 66 mutated codes in 0.8s from an exhaustive space of 2, 997 possible codes with δsize = 2. One such mutation is shown in Fig. 1d.
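As a toy illustration of this step (not the paper's actual encoding, which covers the full sketch language and constraints ∆0–∆6), the snippet below enumerates satisfying instantiations of two sketch variables with Z3's Python API by repeatedly blocking the previous model.

```python
# Toy illustration of an enumeration query in Z3's Python API; the paper's actual
# sketch encoding and constraints are richer than this.
from z3 import Int, Solver, And, Or, sat

X = Int('X')        # Repeat-counter sketch variable
A1 = Int('A1')      # one action slot, encoded as an integer id (0=move, 1=turnLeft, 2=turnRight)
s = Solver()
s.add(And(X >= 2, X <= 6))             # keep the counter within an allowed range
s.add(Or(A1 == 0, A1 == 1, A1 == 2))   # allowed values for the action slot

mutations = []
while s.check() == sat:
    m = s.model()
    mutations.append((m[X].as_long(), m[A1].as_long()))
    s.add(Or(X != m[X], A1 != m[A1]))  # block the current model to enumerate the next one
print(len(mutations), "instantiations")
```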
While this approach generates codes that are devoid of most semantic irregularities, it has its limitations. Certain irregularities continue to exist in some generated codes: An example of such a code included the action sequence move, turnLeft, move, turnLeft, move, turnLeft, move, turnLeft, which results in the agent circling back to its initial location in the task space. This kind of undesirable behaviour is eliminated in the symbolic execution stage of our pipeline.
3.2 Symbolic Execution
Symbolic execution [13] is an automated test-generation technique that symbolically explores execution paths in a program. During exploration of a path, it gathers symbolic constraints over program inputs from statements along the path. These constraints are then mutated (according to a search strategy), and an SMT solver is queried to generate new inputs that explore another path.
Obtaining visual tasks with symbolic execution. This stage in our pipeline applies symbolic execution on each generated code Cout to obtain a suitable visual task Tout. The program inputs of Cout are the agent’s initial location/orientation and the status of the grid cells (unknown, free, blocked, marker, goal), which is initially unknown. Symbolic execution collects constraints over these from code statements. As in Fig. 5 for one path, symbolic execution generates a visual task for each path in Cout.
However, not all of these tasks are suitable. For instance, if the goal is reached after the first move in Fig. 1d, all other statements in Cout are not covered, rendering the task less suitable for this code. Naïvely, symbolic execution could first enumerate all paths in Cout and their corresponding tasks, and then rank them in terms of suitability. However, solution codes may have an unbounded number of paths, which leads to path explosion, that is, the inability to cover all paths with tractable resources.
Guiding symbolic execution using Monte Carlo Tree Search (MCTS). To address this issue, we use MCTS [14] as a search strategy in symbolic execution with the goal of generating more suitable tasks with fewer resources—we define task suitability next. Symbolic execution has been previously combined with MCTS in order to direct the exploration towards costly paths [15]. In the supplementary material, we provide an example demonstrating how MCTS could guide the symbolic execution in generating more suitable tasks.
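For readers unfamiliar with MCTS, the following generic UCT selection rule sketches how the next branch to descend into is chosen; this is vanilla MCTS rather than the paper's exact procedure, with the rollout reward given by Fscore of the generated task.

```python
# Generic UCT selection rule of vanilla MCTS, shown only to make the search step concrete.
import math

def uct_select(children, c=2.0):
    """children: list of (total_reward, visit_count) pairs; returns the child index to follow."""
    total_visits = sum(n for _, n in children)
    def uct(value, n):
        if n == 0:
            return float('inf')                      # always expand unvisited branches first
        return value / n + c * math.sqrt(math.log(total_visits) / n)
    return max(range(len(children)), key=lambda i: uct(*children[i]))
```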
As previously observed [12], a critical component of effectively applying MCTS is to define an evaluation function that describes the desired properties of the output, i.e., the visual tasks. Tailoring the evaluation function to our unique setting is exactly what differentiates our approach from existing work. In particular, our evaluation function, Fscore, distinguishes suitable tasks by assigning a score (∈ [0, 1]) to them, which guides the MCTS search. A higher Fscore indicates a more suitable task.
Its constituent components are: (i) Fcov(Toutvis , Cout) ∈ {0, 1}, which evaluates to 1 in the event of complete coverage of code Cout by task Toutvis and 0 otherwise; (ii) Fdiss(Toutvis , Tinvis) ∈ [0, 1], which evaluates the dissimilarity of Tout to Tin (see Section 2); (iii) Fqual(Toutvis , Cout) ∈ [0, 1], which evaluates the quality and validity of Tout; (iv) Fnocrash(Toutvis , Cout) ∈ {0, 1}, which evaluates to 0 in case the agent crashes into a wall and 1 otherwise; and (v) Fnocut(Toutvis , Cout) ∈ {0, 1}, which evaluates to 0 if there is a shortcut sequence of actions (a in Fig. 4a) smaller than Coutsize that solves T
out and 1 otherwise. Fqual and Fnocut also resolve the limitations of our mutation stage by eliminating codes and tasks that lead to undesirable agent behavior. We instantiate Fscore in the next section.
4 Experimental Evaluation
In this section, we evaluate our task synthesis algorithm on HOC and Karel tasks. Our implementation is publicly available.2 While we give an overview of key results here, a detailed description of our setup and additional experiments can be found in the supplementary material.
4.1 Reference Tasks and Specifications
Reference tasks. We use a set of ten reference tasks from HOC and Karel, shown in Fig. 6. The HOC tasks were selected from the Hour of Code: Classic Maze challenge by Code.org [23] and the Karel tasks from the Intro to Programming with Karel course by CodeHS.com [22]. The DSL of Fig. 4a is generic in that it includes both HOC and Karel codes, with the following differences: (i) construct While, marker-related actions putM, pickM, and conditions noPathA, noPathL, noPathR, marker, noMarker are specific to Karel only; (ii) construct RepeatUntil and goal are specific to HOC only. Furthermore, the puzzles for HOC and Karel are of different styles (see Fig. 1 and Fig. 2). For all tasks, the grid size of the puzzles is fixed to 10× 10 cells (grid-size parameter n = 10). Specification of scoring functions. Fqual(Toutvis , Cout) ∈ [0, 1] was approximated as the sum of the normalized counts of ‘moves’, ‘turns’, ‘segments’, and ‘long-segments’ in the grid; segments and longsegments are sequences of ≥ 3 and ≥ 5 move actions respectively. More precisely, for HOC tasks, we used the following function where features are computed by executing Cout on Toutvis :
\[
\mathcal{F}^{\text{HOC}}_{\text{qual}}(T^{\text{out}}_{\text{vis}}, C^{\text{out}}) = \frac{1}{4}\left( \frac{\#\text{moves}}{2n} + \frac{\#\text{turns}}{n} + \frac{\#\text{segments}}{n/2} + \frac{\#\text{long-segments}}{n/3} \right).
\]
Furthermore, in our implementation, Fqual(·) value was set to 0 when Fnocrash(·) = 0. For Karel tasks, Fqual additionally included the normalized counts of putM and pickM, and is provided in the supplementary material. Fdiss(Toutvis , Tinvis) ∈ [0, 1] was computed based on the dissimilarity of the agent’s initial location/orientation w.r.t. Tinvis, and the grid-cell level dissimilarity based on the Hamming distance between Toutvis and T in vis. More precisely, we used the following function:
\[
\mathcal{F}_{\text{diss}}(T^{\text{out}}_{\text{vis}}, T^{\text{in}}_{\text{vis}}) = \frac{1}{3}\left( \text{diss}(\text{loc} \mid T^{\text{out}}_{\text{vis}}, T^{\text{in}}_{\text{vis}}) + \text{diss}(\text{dir} \mid T^{\text{out}}_{\text{vis}}, T^{\text{in}}_{\text{vis}}) + \text{diss}(\text{grid-cells} \mid T^{\text{out}}_{\text{vis}}, T^{\text{in}}_{\text{vis}}) \right)
\]
where diss(loc | T^out_vis, T^in_vis) ∈ {0, 1}, diss(dir | T^out_vis, T^in_vis) ∈ {0, 1}, and diss(grid-cells | T^out_vis, T^in_vis) ∈ [0, 1] (after the Hamming distance is normalized with a factor of 2n²).
2https://github.com/adishs/neurips2020_synthesizing-tasks_code
Next, we define the evaluation function Fscore(T^out, C^out, T^in, C^in) ∈ [0, 1] used by MCTS:
\[
\mathcal{F}_{\text{score}}(T^{\text{out}}, C^{\text{out}}, T^{\text{in}}, C^{\text{in}}) = \underbrace{\mathbb{1}\big(\mathcal{F}_{\text{qual}}(T^{\text{out}}_{\text{vis}}, C^{\text{out}}) \geq \delta_{\text{qual}},\ \mathcal{F}_{\text{nocrash}}(T^{\text{out}}_{\text{vis}}, C^{\text{out}}) = 1,\ \mathcal{F}_{\text{nocut}}(T^{\text{out}}_{\text{vis}}, C^{\text{out}}) = 1\big)}_{\text{(i)}} \cdot \underbrace{\big[\alpha_1 \mathcal{F}_{\text{cov}}(T^{\text{out}}_{\text{vis}}, C^{\text{out}}) + \alpha_2 \mathcal{F}_{\text{qual}}(T^{\text{out}}_{\text{vis}}, C^{\text{out}}) + \alpha_3 \mathcal{F}_{\text{diss}}(T^{\text{out}}_{\text{vis}}, T^{\text{in}}_{\text{vis}})\big]}_{\text{(ii)}}
\]
where 1 is an indicator function and each constant α = 1/3. Component (ii) in the above function supplies the gradients for guiding the search in MCTS; Component (i) is applied at the end of the MCTS run to pick the output. More precisely, the best task (i.e, the one with the highest Fscore value) is picked only from the pool of generated tasks which have Fscore(·) > 0 and satisfy Fcov(·) = 1. Specification of task synthesis and MCTS. As per Section 2, we set the following thresholds for our algorithm: (i) δsize = 2, (ii) δdiss = 0.33, and (iii) δqual = 0.2 for codes with While or RepeatUntil, and 0.05 otherwise. We run MCTS 10 times per code, with each run generating one task. We set the maximum iterations of a run to 2 million (M) and the exploration constant to 2 [14]. Even when considering a tree depth of 2n (= 20), there are millions of leaves for difficult tasks H5 and H6, reflecting the complexity of task generation. For each code Cout, we generated 10 different visual tasks. To ensure sufficient diversity among the tasks generated for the same code, we introduced a measure Fdiversity. This measure, not only ensures visual task dissimilarity, but also ensures sufficient diversity in entire symbolic paths during generation (for details, see supplementary material).
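A hypothetical straight-line implementation of these scoring functions is sketched below; the feature counts are assumed to be produced by executing Cout on Toutvis, and the function names are ours.

```python
# Hypothetical sketch of the HOC scoring pieces defined above (n = grid size).
def f_qual_hoc(moves, turns, segments, long_segments, n=10, crashed=False):
    if crashed:                                   # F_nocrash = 0 forces F_qual = 0
        return 0.0
    return 0.25 * (moves / (2 * n) + turns / n + segments / (n / 2) + long_segments / (n / 3))

def f_score(f_cov, f_qual, f_diss, f_nocrash, f_nocut, delta_qual):
    valid = (f_qual >= delta_qual) and f_nocrash == 1 and f_nocut == 1
    return float(valid) * (f_cov + f_qual + f_diss) / 3.0    # alpha_1 = alpha_2 = alpha_3 = 1/3
```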
4.2 Results
Performance of task synthesis algorithm. Fig. 7 shows the results of our algorithm. The second column illustrates the enormity of the unconstrained space of mutated codes; we only impose size constraint ∆0 from Fig. 4c. We then additionally impose constraint ∆1 resulting in a partially constrained space of mutated codes (column 3), and finally apply all constraints from Fig. 4c to obtain the final set of generated codes (column 4). This reflects the systematic reduction in the space of mutated codes by our constraints. Column 5 shows the total running time for generating the final codes, which denotes the time taken by Z3 to compute solutions to our mutation query. As discussed in Section 3.1, few codes with semantic irregularities still remain after the mutation stage. The symbolic execution stage eliminates these to obtain the reduced set of valid codes (column 6). Column 7 shows the final number of generated tasks and column 8 is the average time per output task (i.e., one MCTS run).
Analyzing output tasks. We further analyze the generated tasks based on the objectives of Section 2. All tasks satisfy properties (I)–(III) by design. Objective (IV) is easily achieved by excluding generated tasks for which Cout = Cin. For a random sample of 100 of the generated tasks per reference task, we performed manual validation to determine whether objectives (V) and (VI) are met. The fraction of tasks that satisfy these objectives is listed in the last three columns of Fig. 7. We observe that the vast majority of tasks meet the objectives, even if not by design. For H6, the fraction of tasks satisfying (VI) is low because the corresponding codes are generic enough to solve several puzzles.
Deep dive into an MCTS run. To offer more insight into the task generation process, we take a closer look at an MCTS run for task H5, shown in Fig. 8. Fig. 8a illustrates the improvement in various components of Fscore as the number of MCTS iterations increases. Best tasks at different iterations are shown in Fig. 8b, 8c, 8d. As expected, the more the iterations, the better the tasks are.
Remarks. We also ran the mutation stage by enumerating the programs within size constraints and then post-checking other constraints without Z3. This implementation leads to a run-time increase by a factor of 10 to 100 for different tasks. So, Z3 seems to be very effective by jointly considering all the constraints. As a search method, although MCTS seems computationally expensive, the actual run-time and memory footprint of an MCTS run depend on the unique traces explored (i.e., unique symbolic executions done)—this number is typically much lower than the number of iterations, also see discussion in the supplementary material. Considering the MCTS output in Figs. 8c, 8d, to obtain a comparable evaluation score through a random search, the corresponding number of unique symbolic executions required is at least 10 times more than executed by MCTS. We note that while we considered one I/O pair for Karel tasks, our methodology can be easily extended to multiple I/O pairs by adapting techniques designed for generating diverse tasks.
5 User Study and Comparison with Alternate Methods
In this section, we evaluate our task synthesis algorithm with a user study focusing on tasks H2, H4, H5, and H6. We developed an online app3, which uses the publicly available toolkit of Blockly Games [10] and provides an interface for a participant to practice block-based programming tasks for HOC. Each “practice session” of the study involves three steps: (i) a reference task Tin ∈ {H2,H4,H5,H6} is shown to the participant along with its solution code Cin, (ii) a new task Tout is generated for which the participant has to provide a solution code, and (iii) a post-survey asks the participant to assess the visual dissimilarity of the two tasks on a 4-point Likert scale as used in [25]. Details on the app interface and questionnaire are provided in the supplementary material. Participants for the study were recruited through Amazon Mechanical Turk. We only selected four tasks due to the high cost involved in conducting the study (about 1.8 USD per participant). The number of participants and their performance are documented in Fig. 9.
Baselines and methods evaluated. We evaluated four different methods, including three baselines (SAME, TUTOR, MUTTASK) and our algorithm (SYNTASK). SAME generates tasks such that Tin = Tout. TUTOR produces tasks that are similar to Tin and designed by an expert. We picked similar problems from the set of 20 Classic Maze challenge [23] tasks exercising the same programming concepts: Maze 6, 9 for H2, Maze 11, 13 for H4, Maze 15, 17 for H5, and Maze 19 for H6.
MUTTASK generated tasks by directly mutating the grid-world of the original task, i.e., by moving the agent or goal by up to two cells and potentially changing the agent’s orientation. A total of 18, 20, 15, and 17 tasks were generated for H2, H4, H5, and H6, respectively. Fig. 10 shows two output tasks for H4 and illustrates the challenge in directly mutating the input task, given the high discontinuity in mapping from the space of tasks to their codes. For H4, a total of 14 out of 20 new tasks were structurally very different from the input.
SYNTASK uses our algorithm to generate tasks. We picked the generated tasks from three groups based on the size of the code mutations from which they were produced, differing from the reference solution code by +δsize for δsize ∈ {0, 1, 2}. For H2 and H4, we randomly selected 5 tasks from each group, for a total of 15 new tasks per reference task. For H5 and H6, we selected 10 tasks from the first group (δsize = 0) only, due to their complexity stemming from nested constructs in their codes. We observed that TUTOR tasks for H5, H6 were also of δsize = 0, i.e., Coutsize = C in size. All the generated tasks picked for SYNTASK adhere to properties (I)–(VI) in Section 2.
3https://www.teaching-blocks.cc/
Results on task solving. In terms of successfully solving the generated tasks, SAME performed best (mean success = 0.94) in comparison to TUTOR (mean = 0.90), SYNTASK (mean = 0.89), and MUTTASK (mean = 0.68)—this is expected given the tasks generated by SAME. In comparison to TUTOR, the performance of SYNTASK was not significantly different (χ2 = 0.04, p = 0.83); in comparison to MUTTASK, SYNTASK performed significantly better (χ2 = 28.74, p < e−8). The complexity of the generated tasks is also reflected in the average time that participants spent on solving them. As shown in Fig. 9, they spent more time solving the tasks generated by MUTTASK.
Results on visual task dissimilarity. Visual dissimilarity was measured on a Likert scale ranging from 1–4, 1 being highly similar and 4 highly dissimilar. Comparing the dissimilarity of the generated tasks w.r.t. the reference task, we found that the performance of SAME was worst (mean dissimilarity = 1.07), while that of TUTOR was best (mean = 2.90). SYNTASK (mean = 2.63) performed significantly better than MUTTASK (mean = 2.17), yet slightly worse than TUTOR. This is because TUTOR generates tasks with additional distracting paths and noise, which can also be done by our algorithm (although not done for this study). Moreover, for H2, which had no conditionals, the resulting codes were somewhat similar, and so were the generated puzzles. When excluding H2 from the analysis, the difference between SYNTASK (mean = 2.72) and TUTOR (mean =2.93) was not statistically significant. A detailed distribution of the responses can be found in the supplementary material.
Remarks. SAME’s performance in terms of tasks solved is below 1.00, possibly because participants overlooked the solution of Step 1, unaware they will be receiving the same task in Step 2, and the app did not allow them to go back to Step 1. This user study provides a proof-of-concept; more elaborate studies are needed to fully reach the motivational goal of teaching K-12 students, and evaluate the long term impact on students’ concept learning. As additional studies, it would be important to understand the sensitivity of user study results w.r.t. the Likert scale definition; another possibility is to use pairwise comparisons in eliciting user evaluations.
6 Conclusions and Outlook
We developed techniques for a critical aspect of pedagogy in block-based programming: Automatically generating new tasks that exercise specific programming concepts, while looking visually dissimilar to input. We demonstrated the effectiveness of our methodology through an extensive empirical evaluation and user study on reference tasks from popular programming platforms. We believe our techniques have the potential to drastically improve the success of pedagogy in block-based visual programming environments by providing tutors and students with a substantial pool of new tasks. Beyond the application domain of programming education, our methodology can be used for generating large-scale datasets consisting of tasks and solution codes with desirable characteristics—this can be potentially useful for training neural program synthesis methods.
There are several promising directions for future work, including but not limited to: Learning a policy to guide the MCTS procedure (instead of running vanilla MCTS); automatically learning the constraints and cost function from a human-generated pool of problems; and applying our methodology to other programming environments (e.g., Python problems).
Broader Impact
This paper develops new techniques for improving pedagogy in block-based visual programming environments. Such programming environments are increasingly used nowadays to introduce computing concepts to novice programmers, and our work is motivated by the clear societal need of enhancing K-12 computing education. In existing systems, the programming tasks are hand-curated by tutors, and the available set of tasks is typically very limited. This severely limits the utility of existing systems for long-term learning as students do not have access to practice tasks for mastering the programming concepts.
We take a step towards tackling this challenge by developing a methodology to generate new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. Our task synthesis algorithm is able to generate 1000’s of new similar tasks for reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. Our extensive experiments and user study further validate the quality of the generated tasks. Our task synthesis algorithm could be useful in many different ways in practical systems. For instance, tutors can assign new practice tasks as homework or quizzes to students to check their knowledge, students can automatically obtain new similar tasks after they failed to solve a given task and received assistance, and intelligent tutoring systems could automatically generate a personalized curriculum of problems for a student for long-term learning.
Acknowledgments and Disclosure of Funding
We would like to thank the anonymous reviewers for their helpful comments. Ahana Ghosh was supported by Microsoft Research through its PhD Scholarship Programme. Umair Z. Ahmed and Abhik Roychoudhury were supported by the National Research Foundation, Singapore and National University of Singapore through its National Satellite of Excellence in Trustworthy Software Systems (NSOE-TSS) project under the National Cybersecurity R&D (NCR) Grant award no. NRF2018NCRNSOE003-0001. | 1. What is the main contribution of the paper in terms of generating puzzles for programming courses?
2. What are the strengths of the proposed approach, particularly in combining different techniques?
3. Do you have any concerns about the novelty or complexity of the proposed method?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. What are the limitations regarding the applicability and impact of the technique on student learning outcomes? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The work presents a technique for automatic generation of puzzles for block-based introductory programming courses. The goal is to generate puzzles that exercise the same concepts as a given reference (problem, solution) but on a different input. The method is based on (a) constrained mutation of a reference solution using an SMT solver, and (b) generation of a corresponding input for a new solution using MCTS over symbolic paths in the program. It was evaluated automatically on a sample of real HOC/Karel introductory tasks and also with user studies regarding the solvability and distinctness of the generated puzzles.
Strengths
+ An end-to-end pipeline that generates realistic, solvable, and distinct puzzles. + Exciting combination of constraint solving, symbolic execution, and MCTS to ensure a complex set of objectives. + MCTS as a method to constrain symbolic execution could be used for many input generation tasks in the ML4Code community. + Comprehensive and systematic evaluation both on automatically verifiable metrics and on the effects of generated problems on user engagement.
Weaknesses
- Apart from the fundamental parts of MCTS, nothing is really learned. While the application is clearly relevant to the NeurIPS community, much of the solution is not. - Due to the complexity of the overall pipeline that has to be presented in 8 pages, most of the technical meat of the paper is left to the Appendix. - Some important parts are not elaborated even in the Appendix. - The broader impact of this technique on the students' concept learning and progression is unclear. |
NIPS | Title
Synthesizing Tasks for Block-based Programming
Abstract
Block-based visual programming environments play a critical role in introducing computing concepts to K-12 students. One of the key pedagogical challenges in these environments is in designing new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. In this paper, we formalize the problem of synthesizing visual programming tasks. In particular, given a reference visual task Tin and its solution code Cin, we propose a novel methodology to automatically generate a set {(Tout, Cout)} of new tasks along with solution codes such that tasks Tin and Tout are conceptually similar but visually dissimilar. Our methodology is based on the realization that the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, directly mutating reference task Tin to generate new tasks is futile. Our task synthesis algorithm operates by first mutating code Cin to obtain a set of codes {Cout}. Then, the algorithm performs symbolic execution over a code Cout to obtain a visual task Tout; this step uses the Monte Carlo Tree Search (MCTS) procedure to guide the search in the symbolic tree. We demonstrate the effectiveness of our algorithm through an extensive empirical evaluation and user study on reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com.
1 Introduction
Block-based visual programming environments are increasingly used nowadays to introduce computing concepts to novice programmers including children and K-12 students. Led by the success of environments like Scratch [29], initiatives like Hour of Code by Code.org [24] (HOC) and online platforms like CodeHS.com [21], block-based programming has become an integral part of introductory computer science education. Considering HOC alone, over one billion hours of block-based programming activity has been performed so far by over 50 million unique students worldwide [24, 35].
The societal need for enhancing K-12 computing education has led to a surge of interest in developing AI-driven systems for pedagogy of block-based programming [33, 26, 27, 34, 16]. Existing works have studied various aspects of intelligent support, including providing real-time next-step hints when a student is stuck solving a task [20, 36, 18, 17, 9], giving data-driven feedback about a student’s misconceptions [31, 19, 28, 30, 35], and demonstrating a worked-out solution for a task when a student lacks the required programming concepts [37]. An underlying assumption when providing such intelligent support is that afterwards the student can practice new similar tasks to finally learn the missing concepts. However, this assumption is far from reality in existing systems—the programming tasks are typically hand-curated by experts/tutors, and the available set of tasks is limited. Consider HOC’s Classic Maze challenge [23], which provides a progression of 20 tasks: Millions of students have attempted these tasks, yet when students fail to solve a task and receive assistance, they cannot practice similar tasks, hindering their ability to master the desired concepts. We seek to tackle this pedagogical challenge by developing techniques for synthesizing new programming tasks.
∗Authors listed alphabetically; Correspondence to: Ahana Ghosh <gahana@mpi-sws.org>.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
We formalize the problem of synthesizing visual programming tasks of the kind found in popular learning platforms like Code.org (see Fig. 1) and CodeHS.com (see Fig. 2). As input, we are given a reference task Tin, specified as a visual puzzle, and its solution code Cin. Our goal is to synthesize a set {(Tout, Cout)} of new tasks along with their solution codes that are conceptually similar but visually dissimilar to the input. This is motivated by the need for practice tasks that on one hand exercise the same concepts, while looking fresh in order to maintain student engagement.
When tackling the problem of synthesizing new tasks with the above desirable properties, three key challenges emerge. First, we are generating problems in a conceptual domain with no well-defined procedure that students follow to solve a task—consequently, existing work on educational problem generation in procedural domains does not apply in our setting [3, 11]. Second, the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, template-based problem generation techniques [32, 25] that rely on directly mutating the input to generate new tasks are ineffective (see Section 5 where we use this approach as a baseline). Furthermore, such a direct task-mutation approach would require access to an automated solution synthesizer; however, state-of-the-art program synthesis techniques are not yet on par with experts and their minimal solutions [5, 8, 6]. Third, the space of possible tasks and their solutions is potentially unbounded, and thus, any problem generation technique that relies on exhaustive enumeration is intractable [32, 1, 2].
To overcome these challenges, we propose a novel methodology that operates by first mutating the solution code Cin to obtain a set of codes {Cout}, and then performing symbolic execution over a code Cout to obtain a visual puzzle Tout. Mutation is efficient by creating an abstract representation of Cin along with appropriate constraints and querying an SMT solver [4]; any solution to this query is a mutated code Cout. During symbolic execution, we use Monte Carlo Tree Search (MCTS) to guide the search over the (unbounded) symbolic execution tree. We demonstrate the effectiveness of our methodology by performing an extensive empirical evaluation and user study on a set of reference tasks from the Hour of code challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. In summary, our main contributions are:
• We formalize the problem of synthesizing block-based visual programming tasks (Section 2).
• We present a novel approach for generating new visual tasks along with solution codes such that they are conceptually similar but visually dissimilar to a given reference task (Section 3).
• We demonstrate the effectiveness of our approach through an extensive empirical evaluation and user study on reference tasks from real-world programming platforms (Section 4 and Section 5).
2 Problem Formulation
The space of tasks. We define a task as a tuple T := (Tvis, Tstore, Tsize), where Tvis denotes the visual puzzle, Tstore the available block types, and Tsize the maximum number of blocks allowed in the
solution code. For instance, considering the task T := Tin in Fig. 1a, Tvis is illustrated in Fig. 1a, Tstore = {move, turnL, turnR, RepeatUntil, If}, and Tsize = 4. The space of codes. The programming environment has a domain-specific language (DSL), which defines the set of valid codes C and is shown in Fig. 4a. A code C ∈ C is characterized by several properties, such as the set Cblocks of block types in C, the number of blocks Csize, the depth Cdepth of the corresponding Abstract Syntax Tree (AST), and the nesting structure Cstruct representing programming concepts exercised by C. For instance, considering the code C := Cin in Fig. 1b, Cblocks = {move, turnL, RepeatUntil, If}, Csize = 4, Cdepth = 3, and Cstruct = {Run{RepeatUntil{If}}}. Below, we introduce two useful definitions relating the task and code space. Definition 1 (Solution code). C is a solution code for T if the following holds: C successfully solves the visual puzzle Tvis, Cblocks ⊆ Tstore, and Csize ≤ Tsize. CT denotes the set of all solution codes for T. Definition 2 (Minimality of a task). Given a solvable task T with |CT| ≥ 1 and a threshold δ ∈ N, the task is minimal if there is no C ∈ CT such that Csize < Tsize − δ.
Next, we introduce two definitions formalizing the notion of conceptual similarity. Definition 3 formalizes conceptual similarity of a task T along with one solution code C. Since a task can have multiple solution codes, Definition 4 provides a stricter notion of conceptual similarity of a task T for all its solution codes. These definitions are used in our objective of task synthesis in conditions (I) and (V) below. Definition 3 (Conceptual similarity of (T, C)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T along with a solution code C is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and Cstruct = Cinstruct. Definition 4 (Conceptual similarity of (T, ·)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and ∀C ∈ CT, Cstruct = Cinstruct.
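To make the formalism concrete, here is a minimal Python sketch of the task and code tuples together with the Definition 3 check. The class layout, field names, and the string encoding of Cstruct are our own illustrative choices rather than anything prescribed by the paper; only the example values at the end are taken from the text (Fig. 1a-b).

from dataclasses import dataclass
from typing import Any, Set

@dataclass
class Task:                      # T := (Tvis, Tstore, Tsize)
    vis: Any                     # visual puzzle (grid); representation left abstract
    store: Set[str]              # available block types
    size: int                    # maximum number of blocks in a solution code

@dataclass
class Code:                      # properties of a code C
    blocks: Set[str]             # block types used in C
    size: int                    # number of blocks
    depth: int                   # depth of the AST
    struct: str                  # nesting structure, e.g. "Run{RepeatUntil{If}}"

def is_solution_shape(task: Task, code: Code) -> bool:
    # Necessary conditions from Definition 1 (ignoring whether the puzzle is solved).
    return code.blocks <= task.store and code.size <= task.size

def conceptually_similar(t_out: Task, c_out: Code, t_in: Task, c_in: Code,
                         delta_size: int) -> bool:
    # Definition 3: same block store, size within delta_size, same nesting structure.
    return (t_out.store == t_in.store
            and abs(t_out.size - t_in.size) <= delta_size
            and c_out.struct == c_in.struct)

# Values of the reference task/code of Fig. 1a-b, as listed in the text above.
t_in = Task(vis=None, store={"move", "turnL", "turnR", "RepeatUntil", "If"}, size=4)
c_in = Code(blocks={"move", "turnL", "RepeatUntil", "If"}, size=4, depth=3,
            struct="Run{RepeatUntil{If}}")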
Environment domain knowledge. We now formalize our domain knowledge about the block-based environment to measure visual dissimilarity of two tasks, and capture some notion of interestingness and quality of a task. Given tasks T and T′, we measure their visual dissimilarity by an environmentspecific function Fdiss(Tvis, T′vis) ∈ [0, 1]. Moreover, we measure generic quality of a task with function Fqual(Tvis, C) ∈ [0, 1]. We provide specific instantiations of Fdiss and Fqual in our evaluation.
Objective of task synthesis. Given a reference task Tin and a solution code Cin ∈ CTin as input, we seek to generate a set {(Tout, Cout)} of new tasks along with solution codes that are conceptually similar but visually dissimilar to the input. Formally, given parameters (δsize, δdiss, δqual), our objective is to synthesize new tasks meeting the following conditions:
(I) (Tout, Cout) is conceptually similar to (Tin, Cin) with threshold δsize in Definition 3. (II) Tout is visually dissimilar to Tin with margin δdiss, i.e., Fdiss(Tinvis, Toutvis ) ≥ δdiss.
(III) Tout has a quality score above threshold δqual, i.e., Fqual(Toutvis , Cout) ≥ δqual.
In addition, depending on the use case, it is desirable that the new tasks satisfy the following criteria: (IV) Cout is different from the input solution code, i.e., Cout ≠ Cin. (V) Tout is conceptually similar to (Tin, Cin) with threshold δsize in Definition 4.
(VI) Tout is minimal as per Definition 2 for a desired value of δmini (e.g., δmini = 0 or δmini = 1).
3 Our Task Synthesis Algorithm
We now present the pipeline of our algorithm (see Fig. 3), which takes as input a reference task Tin and its solution code Cin, and generates a set {(Tout, Cout)} of new tasks with their solution codes. The goal is for this set to be conceptually similar to (Tin, Cin), but for new tasks {Tout} to
be visually dissimilar to Tin. This is achieved by two main stages: (1) mutation of Cin to obtain a set {Cout}, and (2) symbolic execution of each Cout to create a task Tout. The first stage, presented in Section 3.1, converts Cin into an abstract representation restricted by a set of constraints (Fig. 3(a)), which must be satisfied by any generated Cout (Fig. 3(b)). The second stage, described in Section 3.2, applies symbolic execution on each code Cout to create a corresponding visual task Tout (Fig. 3(c)) while using Monte Carlo Tree Search (MCTS) to guide the search in the symbolic execution tree.
3.1 Code Mutation
This stage in our pipeline mutates code Cin of task Tin such that its conceptual elements are preserved. Our mutation procedure consists of three main steps. First, we generate an abstract representation of Cin, called sketch. Second, we restrict the sketch with constraints that describe the space of its concrete instantiations. Although this formulation is inspired from work on generating algebra problems [32], we use it in the entirely different context of generating conceptually similar mutations of Cin. This is achieved in the last step, where we use the sketch and its constraints to query an SMT solver [4]; the query solutions are mutated codes {Cout} such that Coutstruct = Cinstruct (see Definition 3). Step 1: Sketch. The sketch of code C, denoted by Q, is an abstraction of C capturing its skeleton and generalizing C to the space of conceptually similar codes. Q, expressed in the language of Fig. 4b, is generated from C with mapping Ω. In particular, the map exploits the AST structure of the code: the AST is traversed in a depth-first manner, and all values are replaced with their corresponding sketch variables, i.e., action a, bool b, and iter x are replaced with A, B, and X, respectively. In the following, we also use mapping ω(·| C), which takes a sketch variable in Q and returns its value in C. In addition to the above, we may extend a variable A to an action sequence A, since any A is allowed to be empty (φ). We may also add an action sequence of length δsize at the beginning and end of the obtained sketch. As an example, consider the code in Fig. 4d and the resulting sketch in Fig. 4e. Notice that, while we add an action sequence at the beginning of the sketch (A1), no action sequence is appended at the end because construct RepeatUntil renders any succeeding code unreachable.
Step 2: Sketch constraints. Sketch constraints restrict the possible concrete instantiations of a sketch by encoding the required semantics of the mutated codes. All constraint types are in Fig. 4c.
In particular, ∆0 restricts the size of the mutated code within δsize. ∆1 specifies the allowed mutations to an action sequence based on its value in the code, given by ω(A | C). For instance, this constraint could result in converting all turnLeft actions of a sequence to turnRight. ∆2 restricts the possible values of the Repeat counter within threshold δiter. ∆3 ensures that the Repeat counter is optimal, i.e., action subsequences before and after this construct are not nested in it. ∆4 specifies the possible values of the If condition based on its value in the code, given by ω(B | C). ∆5 refers to constraints imposed on action sequences nested within conditionals. As an example, consider
∆5 in Fig. 4f, which states that if B1 = pathLeft, then the nested action sequence must have at least one turnLeft action, and the first occurrence of this action must not be preceded by a move or turnRight, thus preventing invalid actions within the conditional. ∆6 ensures minimality of an action sequence, i.e., optimality of the constituent actions to obtain the desired output. This constraint would, for instance, eliminate redundant sequences such as turnLeft, turnRight, which does not affect the output, or turnLeft, turnLeft, turnLeft, whose output could be achieved by a single turnRight. All employed elimination sequences can be found in the supplementary material. The entire list of constraints applied on the solution code in Fig. 4d is shown in Fig. 4f.
Step 3: SMT query. For a sketch Q generated from code C and its constraints, we pose the following query to an SMT solver: (sketch Q, Q-constraints). As a result, the solver generates a set of instantiations, which are conceptually similar to C. In our implementation, we used the Z3 solver [7]. For the code in Fig. 4d, Z3 generated 66 mutated codes in 0.8s from an exhaustive space of 2,997 possible codes with δsize = 2. One such mutation is shown in Fig. 1d.
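The mutation query itself follows the standard pattern of enumerating all models of a constraint system. The toy snippet below (using the z3-solver Python package, since the paper reports using Z3) illustrates that pattern on an invented two-slot action sequence with a single ∆6-style minimality constraint; it is not the paper's actual encoding, whose sketch variables and constraints (Fig. 4c) are much richer.

from z3 import Solver, Int, And, Or, Not, sat

PHI, MOVE, TURN_L, TURN_R = 0, 1, 2, 3          # values a slot may take
slots = [Int(f"a{i}") for i in range(2)]        # a two-slot action sequence

s = Solver()
for v in slots:
    s.add(v >= PHI, v <= TURN_R)
# Delta_6-style minimality: forbid turnLeft,turnRight and turnRight,turnLeft pairs,
# which cancel out and could be removed without changing the agent's behaviour.
s.add(Not(And(slots[0] == TURN_L, slots[1] == TURN_R)))
s.add(Not(And(slots[0] == TURN_R, slots[1] == TURN_L)))

mutations = []
while s.check() == sat:
    m = s.model()
    mutations.append([m[v].as_long() for v in slots])
    s.add(Or([v != m[v] for v in slots]))        # block this model, ask for the next
print(len(mutations), "instantiations")          # 14 of the 16 raw combinations

The same blocking-clause loop scales to a full sketch such as the one in Fig. 4e; only the variable set and the constraints change.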
While this approach generates codes that are devoid of most semantic irregularities, it has its limitations. Certain irregularities continue to exist in some generated codes: An example of such a code included the action sequence move, turnLeft, move, turnLeft, move, turnLeft, move, turnLeft, which results in the agent circling back to its initial location in the task space. This kind of undesirable behaviour is eliminated in the symbolic execution stage of our pipeline.
3.2 Symbolic Execution
Symbolic execution [13] is an automated test-generation technique that symbolically explores execution paths in a program. During exploration of a path, it gathers symbolic constraints over program inputs from statements along the path. These constraints are then mutated (according to a search strategy), and an SMT solver is queried to generate new inputs that explore another path.
Obtaining visual tasks with symbolic execution. This stage in our pipeline applies symbolic execution on each generated code Cout to obtain a suitable visual task Tout. The program inputs of Cout are the agent’s initial location/orientation and the status of the grid cells (unknown, free, blocked, marker, goal), which is initially unknown. Symbolic execution collects constraints over these from code statements. As in Fig. 5 for one path, symbolic execution generates a visual task for each path in Cout.
However, not all of these tasks are suitable. For instance, if the goal is reached after the first move in Fig. 1d, all other statements in Cout are not covered, rendering the task less suitable for this code. Naïvely, symbolic execution could first enumerate all paths in Cout and their corresponding tasks, and then rank them in terms of suitability. However, solution codes may have an unbounded number of paths, which leads to path explosion, that is, the inability to cover all paths with tractable resources.
Guiding symbolic execution using Monte Carlo Tree Search (MCTS). To address this issue, we use MCTS [14] as a search strategy in symbolic execution with the goal of generating more suitable tasks with fewer resources—we define task suitability next. Symbolic execution has been previously combined with MCTS in order to direct the exploration towards costly paths [15]. In the supplementary material, we provide an example demonstrating how MCTS could guide the symbolic execution in generating more suitable tasks.
As previously observed [12], a critical component of effectively applying MCTS is to define an evaluation function that describes the desired properties of the output, i.e., the visual tasks. Tailoring the evaluation function to our unique setting is exactly what differentiates our approach from existing work. In particular, our evaluation function, Fscore, distinguishes suitable tasks by assigning a score (∈ [0, 1]) to them, which guides the MCTS search. A higher Fscore indicates a more suitable task.
Its constituent components are: (i) Fcov(Toutvis, Cout) ∈ {0, 1}, which evaluates to 1 in the event of complete coverage of code Cout by task Toutvis and 0 otherwise; (ii) Fdiss(Toutvis, Tinvis) ∈ [0, 1], which evaluates the dissimilarity of Tout to Tin (see Section 2); (iii) Fqual(Toutvis, Cout) ∈ [0, 1], which evaluates the quality and validity of Tout; (iv) Fnocrash(Toutvis, Cout) ∈ {0, 1}, which evaluates to 0 in case the agent crashes into a wall and 1 otherwise; and (v) Fnocut(Toutvis, Cout) ∈ {0, 1}, which evaluates to 0 if there is a shortcut sequence of actions (a in Fig. 4a) smaller than Coutsize that solves Tout and 1 otherwise. Fqual and Fnocut also resolve the limitations of our mutation stage by eliminating codes and tasks that lead to undesirable agent behavior. We instantiate Fscore in the next section.
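To illustrate structurally how MCTS balances exploration and exploitation while searching a large tree, the following self-contained Python sketch runs UCT-style MCTS over an artificial tree of binary decisions whose leaves are scored by a stand-in evaluate() function. The real search operates over symbolic-execution branches and is scored by Fscore (Section 4.1); only the exploration constant of 2 is taken from the paper, everything else here is a simplification.

import math, random

DEPTH = 8            # stand-in for the depth of the symbolic-execution tree
C_EXPLORE = 2.0      # exploration constant (the value used in the paper's experiments)

class Node:
    def __init__(self, prefix, parent=None):
        self.prefix = prefix             # decisions taken so far
        self.parent = parent
        self.children = {}               # decision -> Node
        self.visits = 0
        self.value = 0.0                 # sum of rollout scores seen below this node

def evaluate(prefix):
    # Stand-in for F_score: reward prefixes containing many 1s.
    return sum(prefix) / DEPTH

def uct(child, parent):
    if child.visits == 0:
        return float("inf")
    exploit = child.value / child.visits
    explore = C_EXPLORE * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def mcts(iterations=1000):
    root, best_score, best_leaf = Node(()), -1.0, None
    for _ in range(iterations):
        node = root
        # 1) Selection: descend through fully expanded nodes by the UCT rule.
        while len(node.prefix) < DEPTH and len(node.children) == 2:
            node = max(node.children.values(), key=lambda c: uct(c, node))
        # 2) Expansion: attach one unexplored child, if the node is not a leaf.
        if len(node.prefix) < DEPTH:
            d = 0 if 0 not in node.children else 1
            node.children[d] = Node(node.prefix + (d,), parent=node)
            node = node.children[d]
        # 3) Rollout: complete the prefix with random decisions and score the leaf.
        leaf = node.prefix + tuple(random.choice((0, 1))
                                   for _ in range(DEPTH - len(node.prefix)))
        score = evaluate(leaf)
        if score > best_score:
            best_score, best_leaf = score, leaf
        # 4) Backpropagation: update statistics on the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += score
            node = node.parent
    return best_score, best_leaf

print(mcts())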
4 Experimental Evaluation
In this section, we evaluate our task synthesis algorithm on HOC and Karel tasks. Our implementation is publicly available.2 While we give an overview of key results here, a detailed description of our setup and additional experiments can be found in the supplementary material.
4.1 Reference Tasks and Specifications
Reference tasks. We use a set of ten reference tasks from HOC and Karel, shown in Fig. 6. The HOC tasks were selected from the Hour of Code: Classic Maze challenge by Code.org [23] and the Karel tasks from the Intro to Programming with Karel course by CodeHS.com [22]. The DSL of Fig. 4a is generic in that it includes both HOC and Karel codes, with the following differences: (i) construct While, marker-related actions putM, pickM, and conditions noPathA, noPathL, noPathR, marker, noMarker are specific to Karel only; (ii) construct RepeatUntil and goal are specific to HOC only. Furthermore, the puzzles for HOC and Karel are of different styles (see Fig. 1 and Fig. 2). For all tasks, the grid size of the puzzles is fixed to 10× 10 cells (grid-size parameter n = 10). Specification of scoring functions. Fqual(Toutvis , Cout) ∈ [0, 1] was approximated as the sum of the normalized counts of ‘moves’, ‘turns’, ‘segments’, and ‘long-segments’ in the grid; segments and longsegments are sequences of ≥ 3 and ≥ 5 move actions respectively. More precisely, for HOC tasks, we used the following function where features are computed by executing Cout on Toutvis :
FHOCqual(Toutvis, Cout) = (1/4) · ( #moves/(2n) + #turns/n + #segments/(n/2) + #long-segments/(n/3) ).
Furthermore, in our implementation, Fqual(·) value was set to 0 when Fnocrash(·) = 0. For Karel tasks, Fqual additionally included the normalized counts of putM and pickM, and is provided in the supplementary material. Fdiss(Toutvis , Tinvis) ∈ [0, 1] was computed based on the dissimilarity of the agent’s initial location/orientation w.r.t. Tinvis, and the grid-cell level dissimilarity based on the Hamming distance between Toutvis and T in vis. More precisely, we used the following function:
Fdiss(Toutvis, Tinvis) = (1/3) · ( diss(loc | Toutvis, Tinvis) + diss(dir | Toutvis, Tinvis) + diss(grid-cells | Toutvis, Tinvis) ),
where diss(loc | Toutvis, Tinvis) ∈ {0, 1}, diss(dir | Toutvis, Tinvis) ∈ {0, 1}, and diss(grid-cells | Toutvis, Tinvis) ∈ [0, 1] (the Hamming distance between Toutvis and Tinvis normalized by a factor of 2n²).
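Read literally, the two scoring functions above amount to a few lines of Python. In the sketch below, the clipping of the quality score to [0, 1] and the reading of the Hamming-distance normalisation constant as 2n² are our assumptions where the text is terse; the feature counts are those obtained by executing Cout on Toutvis.

def f_qual_hoc(moves, turns, segments, long_segments, crashed, n=10):
    # F_qual for HOC: feature counts come from executing C^out on the candidate grid.
    if crashed:                           # F_nocrash = 0 forces F_qual = 0
        return 0.0
    raw = 0.25 * (moves / (2 * n) + turns / n
                  + segments / (n / 2) + long_segments / (n / 3))
    return min(raw, 1.0)                  # keep the score in [0, 1] (assumed clipping)

def f_diss(loc_differs, dir_differs, grid_out, grid_in, n=10):
    # F_diss: initial location, initial orientation, and grid-cell dissimilarity.
    # grid_out / grid_in are flat sequences of the n x n cell statuses.
    hamming = sum(1 for a, b in zip(grid_out, grid_in) if a != b)
    grid_term = min(hamming / (2 * n * n), 1.0)
    return (float(loc_differs) + float(dir_differs) + grid_term) / 3.0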
2https://github.com/adishs/neurips2020_synthesizing-tasks_code
Next, we define the evaluation function Fscore(Tout, Cout, Tin, Cin) ∈ [0, 1] used by MCTS:
Fscore(Tout, Cout, Tin, Cin) = 1( Fqual(Toutvis, Cout) ≥ δqual, Fnocrash(Toutvis, Cout) = 1, Fnocut(Toutvis, Cout) = 1 )   [component (i)]
· [ α1 Fcov(Toutvis, Cout) + α2 Fqual(Toutvis, Cout) + α3 Fdiss(Toutvis, Tinvis) ]   [component (ii)]
where 1 is an indicator function and each constant α = 1/3. Component (ii) in the above function supplies the gradients for guiding the search in MCTS; Component (i) is applied at the end of the MCTS run to pick the output. More precisely, the best task (i.e., the one with the highest Fscore value) is picked only from the pool of generated tasks that have Fscore(·) > 0 and satisfy Fcov(·) = 1. Specification of task synthesis and MCTS. As per Section 2, we set the following thresholds for our algorithm: (i) δsize = 2, (ii) δdiss = 0.33, and (iii) δqual = 0.2 for codes with While or RepeatUntil, and 0.05 otherwise. We run MCTS 10 times per code, with each run generating one task. We set the maximum iterations of a run to 2 million (M) and the exploration constant to 2 [14]. Even when considering a tree depth of 2n (= 20), there are millions of leaves for difficult tasks H5 and H6, reflecting the complexity of task generation. For each code Cout, we generated 10 different visual tasks. To ensure sufficient diversity among the tasks generated for the same code, we introduced a measure Fdiversity. This measure not only ensures visual task dissimilarity but also ensures sufficient diversity in entire symbolic paths during generation (for details, see supplementary material).
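Putting the pieces together, component (i) acts as a hard gate and component (ii) supplies the search signal. A direct transcription of the definition and of the selection rule above might look as follows; the dictionary-based interface is our own convention.

def f_score(components, delta_qual, alphas=(1/3, 1/3, 1/3)):
    # components: dict with keys "cov", "qual", "diss", "nocrash", "nocut".
    gate = (components["qual"] >= delta_qual
            and components["nocrash"] == 1 and components["nocut"] == 1)
    if not gate:                                   # component (i)
        return 0.0
    a1, a2, a3 = alphas                            # component (ii)
    return a1 * components["cov"] + a2 * components["qual"] + a3 * components["diss"]

def pick_output_task(candidates, delta_qual):
    # Keep tasks with F_score > 0 and full code coverage, return the best one.
    scored = [(f_score(c, delta_qual), c) for c in candidates]
    valid = [(s, c) for s, c in scored if s > 0 and c["cov"] == 1]
    return max(valid, key=lambda sc: sc[0])[1] if valid else None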
4.2 Results
Performance of task synthesis algorithm. Fig. 7 shows the results of our algorithm. The second column illustrates the enormity of the unconstrained space of mutated codes; we only impose size constraint ∆0 from Fig. 4c. We then additionally impose constraint ∆1 resulting in a partially constrained space of mutated codes (column 3), and finally apply all constraints from Fig. 4c to obtain the final set of generated codes (column 4). This reflects the systematic reduction in the space of mutated codes by our constraints. Column 5 shows the total running time for generating the final codes, which denotes the time taken by Z3 to compute solutions to our mutation query. As discussed in Section 3.1, few codes with semantic irregularities still remain after the mutation stage. The symbolic execution stage eliminates these to obtain the reduced set of valid codes (column 6). Column 7 shows the final number of generated tasks and column 8 is the average time per output task (i.e., one MCTS run).
Analyzing output tasks. We further analyze the generated tasks based on the objectives of Section 2. All tasks satisfy properties (I)–(III) by design. Objective (IV) is easily achieved by excluding generated tasks for which Cout = Cin. For a random sample of 100 of the generated tasks per reference task, we performed manual validation to determine whether objectives (V) and (VI) are met. The fraction of tasks that satisfy these objectives is listed in the last three columns of Fig. 7. We observe that the vast majority of tasks meet the objectives, even if not by design. For H6, the fraction of tasks satisfying (VI) is low because the corresponding codes are generic enough to solve several puzzles.
Deep dive into an MCTS run. To offer more insight into the task generation process, we take a closer look at an MCTS run for task H5, shown in Fig. 8. Fig. 8a illustrates the improvement in various components of Fscore as the number of MCTS iterations increases. Best tasks at different iterations are shown in Fig. 8b, 8c, 8d. As expected, the more the iterations, the better the tasks are.
Remarks. We also ran the mutation stage by enumerating the programs within size constraints and then post-checking other constraints without Z3. This implementation leads to a run-time increase by a factor of 10 to 100 for different tasks. So, Z3 seems to be very effective by jointly considering all the constraints. As a search method, although MCTS seems computationally expensive, the actual run-time and memory footprint of an MCTS run depend on the unique traces explored (i.e., unique symbolic executions done)—this number is typically much lower than the number of iterations, also see discussion in the supplementary material. Considering the MCTS output in Figs. 8c, 8d, to obtain a comparable evaluation score through a random search, the corresponding number of unique symbolic executions required is at least 10 times more than executed by MCTS. We note that while we considered one I/O pair for Karel tasks, our methodology can be easily extended to multiple I/O pairs by adapting techniques designed for generating diverse tasks.
5 User Study and Comparison with Alternate Methods
In this section, we evaluate our task synthesis algorithm with a user study focusing on tasks H2, H4, H5, and H6. We developed an online app3, which uses the publicly available toolkit of Blockly Games [10] and provides an interface for a participant to practice block-based programming tasks for HOC. Each “practice session” of the study involves three steps: (i) a reference task Tin ∈ {H2,H4,H5,H6} is shown to the participant along with its solution code Cin, (ii) a new task Tout is generated for which the participant has to provide a solution code, and (iii) a post-survey asks the participant to assess the visual dissimilarity of the two tasks on a 4-point Likert scale as used in [25]. Details on the app interface and questionnaire are provided in the supplementary material. Participants for the study were recruited through Amazon Mechanical Turk. We only selected four tasks due to the high cost involved in conducting the study (about 1.8 USD per participant). The number of participants and their performance are documented in Fig. 9.
Baselines and methods evaluated. We evaluated four different methods, including three baselines (SAME, TUTOR, MUTTASK) and our algorithm (SYNTASK). SAME generates tasks such that Tin = Tout. TUTOR produces tasks that are similar to Tin and designed by an expert. We picked similar problems from the set of 20 Classic Maze challenge [23] tasks exercising the same programming concepts: Maze 6, 9 for H2, Maze 11, 13 for H4, Maze 15, 17 for H5, and Maze 19 for H6.
MUTTASK generated tasks by directly mutating the grid-world of the original task, i.e., by moving the agent or goal by up to two cells and potentially changing the agent’s orientation. A total of 18, 20, 15, and 17 tasks were generated for H2, H4, H5, and H6, respectively. Fig. 10 shows two output tasks for H4 and illustrates the challenge in directly mutating the input task, given the high discontinuity in mapping from the space of tasks to their codes. For H4, a total of 14 out of 20 new tasks were structurally very different from the input.
SYNTASK uses our algorithm to generate tasks. We picked the generated tasks from three groups based on the size of the code mutations from which they were produced, differing from the reference solution code by +δsize for δsize ∈ {0, 1, 2}. For H2 and H4, we randomly selected 5 tasks from each group, for a total of 15 new tasks per reference task. For H5 and H6, we selected 10 tasks from the first group (δsize = 0) only, due to their complexity stemming from nested constructs in their codes. We observed that TUTOR tasks for H5, H6 were also of δsize = 0, i.e., Coutsize = Cinsize. All the generated tasks picked for SYNTASK adhere to properties (I)–(VI) in Section 2.
3https://www.teaching-blocks.cc/
Results on task solving. In terms of successfully solving the generated tasks, SAME performed best (mean success = 0.94) in comparison to TUTOR (mean = 0.90), SYNTASK (mean = 0.89), and MUTTASK (mean = 0.68)—this is expected given the tasks generated by SAME. In comparison to TUTOR, the performance of SYNTASK was not significantly different (χ2 = 0.04, p = 0.83); in comparison to MUTTASK, SYNTASK performed significantly better (χ2 = 28.74, p < e−8). The complexity of the generated tasks is also reflected in the average time that participants spent on solving them. As shown in Fig. 9, they spent more time solving the tasks generated by MUTTASK.
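The significance tests quoted above are plain chi-squared tests on solved/failed counts, which reduce to a one-liner with SciPy. The counts below are hypothetical, since the section reports only the resulting statistics and not the underlying contingency tables.

from scipy.stats import chi2_contingency

# Hypothetical solved/failed counts for two methods on their generated tasks.
table = [[178, 22],     # e.g. SynTask: solved, failed
         [136, 64]]     # e.g. MutTask: solved, failed
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}")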
Results on visual task dissimilarity. Visual dissimilarity was measured on a Likert scale ranging from 1–4, 1 being highly similar and 4 highly dissimilar. Comparing the dissimilarity of the generated tasks w.r.t. the reference task, we found that the performance of SAME was worst (mean dissimilarity = 1.07), while that of TUTOR was best (mean = 2.90). SYNTASK (mean = 2.63) performed significantly better than MUTTASK (mean = 2.17), yet slightly worse than TUTOR. This is because TUTOR generates tasks with additional distracting paths and noise, which can also be done by our algorithm (although not done for this study). Moreover, for H2, which had no conditionals, the resulting codes were somewhat similar, and so were the generated puzzles. When excluding H2 from the analysis, the difference between SYNTASK (mean = 2.72) and TUTOR (mean =2.93) was not statistically significant. A detailed distribution of the responses can be found in the supplementary material.
Remarks. SAME’s performance in terms of tasks solved is below 1.00, possibly because participants overlooked the solution shown in Step 1, unaware that they would receive the same task in Step 2, and because the app did not allow them to go back to Step 1. This user study provides a proof-of-concept; more elaborate studies are needed to fully reach the motivational goal of teaching K-12 students and to evaluate the long-term impact on students’ concept learning. In follow-up studies, it would also be important to understand the sensitivity of the results w.r.t. the Likert scale definition; another possibility is to use pairwise comparisons when eliciting user evaluations.
6 Conclusions and Outlook
We developed techniques for a critical aspect of pedagogy in block-based programming: Automatically generating new tasks that exercise specific programming concepts, while looking visually dissimilar to the input. We demonstrated the effectiveness of our methodology through an extensive empirical evaluation and user study on reference tasks from popular programming platforms. We believe our techniques have the potential to drastically improve the success of pedagogy in block-based visual programming environments by providing tutors and students with a substantial pool of new tasks. Beyond the application domain of programming education, our methodology can be used for generating large-scale datasets consisting of tasks and solution codes with desirable characteristics—this can potentially be useful for training neural program synthesis methods.
There are several promising directions for future work, including but not limited to: Learning a policy to guide the MCTS procedure (instead of running vanilla MCTS); automatically learning the constraints and cost function from a human-generated pool of problems; and applying our methodology to other programming environments (e.g., Python problems).
Broader Impact
This paper develops new techniques for improving pedagogy in block-based visual programming environments. Such programming environments are increasingly used nowadays to introduce computing concepts to novice programmers, and our work is motivated by the clear societal need of enhancing K-12 computing education. In existing systems, the programming tasks are hand-curated by tutors, and the available set of tasks is typically very limited. This severely limits the utility of existing systems for long-term learning as students do not have access to practice tasks for mastering the programming concepts.
We take a step towards tackling this challenge by developing a methodology to generate new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. Our task synthesis algorithm is able to generate 1000’s of new similar tasks for reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. Our extensive experiments and user study further validate the quality of the generated tasks. Our task synthesis algorithm could be useful in many different ways in practical systems. For instance, tutors can assign new practice tasks as homework or quizzes to students to check their knowledge, students can automatically obtain new similar tasks after they failed to solve a given task and received assistance, and intelligent tutoring systems could automatically generate a personalized curriculum of problems for a student for long-term learning.
Acknowledgments and Disclosure of Funding
We would like to thank the anonymous reviewers for their helpful comments. Ahana Ghosh was supported by Microsoft Research through its PhD Scholarship Programme. Umair Z. Ahmed and Abhik Roychoudhury were supported by the National Research Foundation, Singapore and National University of Singapore through its National Satellite of Excellence in Trustworthy Software Systems (NSOE-TSS) project under the National Cybersecurity R&D (NCR) Grant award no. NRF2018NCRNSOE003-0001. | 1. What is the main contribution of the paper in the field of programming education?
2. What are the strengths of the proposed technique, particularly in its two-level decomposition?
3. What are the weaknesses of the paper, especially regarding the domain-specificity of the sketch constraints and the lack of baseline comparisons?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper presents a new technique to automatically synthesize practice tasks for block-based programming problems given a reference task. The key idea of the technique is to first perform mutations in the program solution that satisfy certain set of constraints. The second step of the solution is to use MCTS for generating task descriptions (visual inputs) given the mutated code in the first step. The technique is evaluated on Hour of Code and Karel problems, and the synthesized problems are evaluated using a user study that shows that the approach performs in a comparable fashion to the expert designed problems.
Strengths
+ Interesting problem domain of automatically generating practice programming problems + Nice two level decomposition of the solution strategy to first generate mutated code and then generate corresponding visual inputs + Detailed user study to evaluate the generated problems
Weaknesses
- The sketch constraints for performing code mutations seem a bit domain-specific - Manual effort to specify domain constraints for Sketch generation and MCTS - No baseline comparisons for mutation and symbolic execution |
NIPS | Title
Synthesizing Tasks for Block-based Programming
Abstract
Block-based visual programming environments play a critical role in introducing computing concepts to K-12 students. One of the key pedagogical challenges in these environments is in designing new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. In this paper, we formalize the problem of synthesizing visual programming tasks. In particular, given a reference visual task Tin and its solution code Cin, we propose a novel methodology to automatically generate a set {(Tout, Cout)} of new tasks along with solution codes such that tasks Tin and Tout are conceptually similar but visually dissimilar. Our methodology is based on the realization that the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, directly mutating reference task Tin to generate new tasks is futile. Our task synthesis algorithm operates by first mutating code Cin to obtain a set of codes {Cout}. Then, the algorithm performs symbolic execution over a code Cout to obtain a visual task Tout; this step uses the Monte Carlo Tree Search (MCTS) procedure to guide the search in the symbolic tree. We demonstrate the effectiveness of our algorithm through an extensive empirical evaluation and user study on reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com.
1 Introduction
Block-based visual programming environments are increasingly used nowadays to introduce computing concepts to novice programmers including children and K-12 students. Led by the success of environments like Scratch [29], initiatives like Hour of Code by Code.org [24] (HOC) and online platforms like CodeHS.com [21], block-based programming has become an integral part of introductory computer science education. Considering HOC alone, over one billion hours of block-based programming activity has been performed so far by over 50 million unique students worldwide [24, 35].
The societal need for enhancing K-12 computing education has led to a surge of interest in developing AI-driven systems for pedagogy of block-based programming [33, 26, 27, 34, 16]. Existing works have studied various aspects of intelligent support, including providing real-time next-step hints when a student is stuck solving a task [20, 36, 18, 17, 9], giving data-driven feedback about a student’s misconceptions [31, 19, 28, 30, 35], and demonstrating a worked-out solution for a task when a student lacks the required programming concepts [37]. An underlying assumption when providing such intelligent support is that afterwards the student can practice new similar tasks to finally learn the missing concepts. However, this assumption is far from reality in existing systems—the programming tasks are typically hand-curated by experts/tutors, and the available set of tasks is limited. Consider HOC’s Classic Maze challenge [23], which provides a progression of 20 tasks: Millions of students have attempted these tasks, yet when students fail to solve a task and receive assistance, they cannot practice similar tasks, hindering their ability to master the desired concepts. We seek to tackle this pedagogical challenge by developing techniques for synthesizing new programming tasks.
∗Authors listed alphabetically; Correspondence to: Ahana Ghosh <gahana@mpi-sws.org>.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
We formalize the problem of synthesizing visual programming tasks of the kind found in popular learning platforms like Code.org (see Fig. 1) and CodeHS.com (see Fig. 2). As input, we are given a reference task Tin, specified as a visual puzzle, and its solution code Cin. Our goal is to synthesize a set {(Tout, Cout)} of new tasks along with their solution codes that are conceptually similar but visually dissimilar to the input. This is motivated by the need for practice tasks that on one hand exercise the same concepts, while looking fresh in order to maintain student engagement.
When tackling the problem of synthesizing new tasks with the above desirable properties, three key challenges emerge. First, we are generating problems in a conceptual domain with no well-defined procedure that students follow to solve a task—consequently, existing work on educational problem generation in procedural domains does not apply in our setting [3, 11]. Second, the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, template-based problem generation techniques [32, 25] that rely on directly mutating the input to generate new tasks are ineffective (see Section 5 where we use this approach as a baseline). Furthermore, such a direct task-mutation approach would require access to an automated solution synthesizer; however, state-of-the-art program synthesis techniques are not yet on par with experts and their minimal solutions [5, 8, 6]. Third, the space of possible tasks and their solutions is potentially unbounded, and thus, any problem generation technique that relies on exhaustive enumeration is intractable [32, 1, 2].
To overcome these challenges, we propose a novel methodology that operates by first mutating the solution code Cin to obtain a set of codes {Cout}, and then performing symbolic execution over a code Cout to obtain a visual puzzle Tout. Mutation is efficient by creating an abstract representation of Cin along with appropriate constraints and querying an SMT solver [4]; any solution to this query is a mutated code Cout. During symbolic execution, we use Monte Carlo Tree Search (MCTS) to guide the search over the (unbounded) symbolic execution tree. We demonstrate the effectiveness of our methodology by performing an extensive empirical evaluation and user study on a set of reference tasks from the Hour of code challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. In summary, our main contributions are:
• We formalize the problem of synthesizing block-based visual programming tasks (Section 2).
• We present a novel approach for generating new visual tasks along with solution codes such that they are conceptually similar but visually dissimilar to a given reference task (Section 3).
• We demonstrate the effectiveness of our approach through an extensive empirical evaluation and user study on reference tasks from real-world programming platforms (Section 4 and Section 5).
2 Problem Formulation
The space of tasks. We define a task as a tuple T := (Tvis, Tstore, Tsize), where Tvis denotes the visual puzzle, Tstore the available block types, and Tsize the maximum number of blocks allowed in the
solution code. For instance, considering the task T := Tin in Fig. 1a, Tvis is illustrated in Fig. 1a, Tstore = {move, turnL, turnR, RepeatUntil, If}, and Tsize = 4. The space of codes. The programming environment has a domain-specific language (DSL), which defines the set of valid codes C and is shown in Fig. 4a. A code C ∈ C is characterized by several properties, such as the set Cblocks of block types in C, the number of blocks Csize, the depth Cdepth of the corresponding Abstract Syntax Tree (AST), and the nesting structure Cstruct representing programming concepts exercised by C. For instance, considering the code C := Cin in Fig. 1b, Cblocks = {move, turnL, RepeatUntil, If}, Csize = 4, Cdepth = 3, and Cstruct = {Run{RepeatUntil{If}}}. Below, we introduce two useful definitions relating the task and code space. Definition 1 (Solution code). C is a solution code for T if the following holds: C successfully solves the visual puzzle Tvis, Cblocks ⊆ Tstore, and Csize ≤ Tsize. CT denotes the set of all solution codes for T. Definition 2 (Minimality of a task). Given a solvable task T with |CT| ≥ 1 and a threshold δ ∈ N, the task is minimal if there is no C ∈ CT such that Csize < Tsize − δ.
Next, we introduce two definitions formalizing the notion of conceptual similarity. Definition 3 formalizes conceptual similarity of a task T along with one solution code C. Since a task can have multiple solution codes, Definition 4 provides a stricter notion of conceptual similarity of a task T for all its solution codes. These definitions are used in our objective of task synthesis in conditions (I) and (V) below. Definition 3 (Conceptual similarity of (T, C)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T along with a solution code C is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and Cstruct = Cinstruct. Definition 4 (Conceptual similarity of (T, ·)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and ∀C ∈ CT, Cstruct = Cinstruct.
Environment domain knowledge. We now formalize our domain knowledge about the block-based environment to measure visual dissimilarity of two tasks, and capture some notion of interestingness and quality of a task. Given tasks T and T′, we measure their visual dissimilarity by an environmentspecific function Fdiss(Tvis, T′vis) ∈ [0, 1]. Moreover, we measure generic quality of a task with function Fqual(Tvis, C) ∈ [0, 1]. We provide specific instantiations of Fdiss and Fqual in our evaluation.
Objective of task synthesis. Given a reference task Tin and a solution code Cin ∈ CTin as input, we seek to generate a set {(Tout, Cout)} of new tasks along with solution codes that are conceptually similar but visually dissimilar to the input. Formally, given parameters (δsize, δdiss, δqual), our objective is to synthesize new tasks meeting the following conditions:
(I) (Tout, Cout) is conceptually similar to (Tin, Cin) with threshold δsize in Definition 3. (II) Tout is visually dissimilar to Tin with margin δdiss, i.e., Fdiss(Tinvis, Toutvis ) ≥ δdiss.
(III) Tout has a quality score above threshold δqual, i.e., Fqual(Toutvis , Cout) ≥ δqual.
In addition, depending on the use case, it is desirable that the new tasks satisfy the following criteria: (IV) Cout is different from the input solution code, i.e., Cout ≠ Cin. (V) Tout is conceptually similar to (Tin, Cin) with threshold δsize in Definition 4.
(VI) Tout is minimal as per Definition 2 for a desired value of δmini (e.g., δmini = 0 or δmini = 1).
3 Our Task Synthesis Algorithm
We now present the pipeline of our algorithm (see Fig. 3), which takes as input a reference task Tin and its solution code Cin, and generates a set {(Tout, Cout)} of new tasks with their solution codes. The goal is for this set to be conceptually similar to (Tin, Cin), but for new tasks {Tout} to
be visually dissimilar to Tin. This is achieved by two main stages: (1) mutation of Cin to obtain a set {Cout}, and (2) symbolic execution of each Cout to create a task Tout. The first stage, presented in Section 3.1, converts Cin into an abstract representation restricted by a set of constraints (Fig. 3(a)), which must be satisfied by any generated Cout (Fig. 3(b)). The second stage, described in Section 3.2, applies symbolic execution on each code Cout to create a corresponding visual task Tout (Fig. 3(c)) while using Monte Carlo Tree Search (MCTS) to guide the search in the symbolic execution tree.
3.1 Code Mutation
This stage in our pipeline mutates code Cin of task Tin such that its conceptual elements are preserved. Our mutation procedure consists of three main steps. First, we generate an abstract representation of Cin, called sketch. Second, we restrict the sketch with constraints that describe the space of its concrete instantiations. Although this formulation is inspired from work on generating algebra problems [32], we use it in the entirely different context of generating conceptually similar mutations of Cin. This is achieved in the last step, where we use the sketch and its constraints to query an SMT solver [4]; the query solutions are mutated codes {Cout} such that Coutstruct = Cinstruct (see Definition 3). Step 1: Sketch. The sketch of code C, denoted by Q, is an abstraction of C capturing its skeleton and generalizing C to the space of conceptually similar codes. Q, expressed in the language of Fig. 4b, is generated from C with mapping Ω. In particular, the map exploits the AST structure of the code: the AST is traversed in a depth-first manner, and all values are replaced with their corresponding sketch variables, i.e., action a, bool b, and iter x are replaced with A, B, and X, respectively. In the following, we also use mapping ω(·| C), which takes a sketch variable in Q and returns its value in C. In addition to the above, we may extend a variable A to an action sequence A, since any A is allowed to be empty (φ). We may also add an action sequence of length δsize at the beginning and end of the obtained sketch. As an example, consider the code in Fig. 4d and the resulting sketch in Fig. 4e. Notice that, while we add an action sequence at the beginning of the sketch (A1), no action sequence is appended at the end because construct RepeatUntil renders any succeeding code unreachable.
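A bare-bones version of the mapping Ω can be written as a depth-first traversal that swaps concrete tokens for sketch variables. The tuple-based AST encoding below is our own simplification, and it omits the extension of single actions to action sequences and the padding with sequences of length δsize described in Step 1.

ACTIONS = {"move", "turnLeft", "turnRight", "putMarker", "pickMarker"}

def to_sketch(node, counts=None):
    # Depth-first replacement of actions, booleans, and iteration counts by A, B, X.
    if counts is None:
        counts = {"A": 0, "B": 0, "X": 0}
    if isinstance(node, str) and node in ACTIONS:       # action a -> A_i
        counts["A"] += 1
        return f"A{counts['A']}"
    if isinstance(node, int):                           # iter x -> X_i
        counts["X"] += 1
        return f"X{counts['X']}"
    if isinstance(node, str):                           # bool b -> B_i
        counts["B"] += 1
        return f"B{counts['B']}"
    construct, *children = node                         # internal AST node
    return (construct, *(to_sketch(c, counts) for c in children))

code_in = ("Run", "move",
           ("RepeatUntil", "goal", ("If", "pathLeft", "turnLeft"), "move"))
print(to_sketch(code_in))
# -> ('Run', 'A1', ('RepeatUntil', 'B1', ('If', 'B2', 'A2'), 'A3'))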
Step 2: Sketch constraints. Sketch constraints restrict the possible concrete instantiations of a sketch by encoding the required semantics of the mutated codes. All constraint types are in Fig. 4c.
In particular, ∆0 restricts the size of the mutated code within δsize. ∆1 specifies the allowed mutations to an action sequence based on its value in the code, given by ω(A | C). For instance, this constraint could result in converting all turnLeft actions of a sequence to turnRight. ∆2 restricts the possible values of the Repeat counter within threshold δiter. ∆3 ensures that the Repeat counter is optimal, i.e., action subsequences before and after this construct are not nested in it. ∆4 specifies the possible values of the If condition based on its value in the code, given by ω(B | C). ∆5 refers to constraints imposed on action sequences nested within conditionals. As an example, consider
∆5 in Fig. 4f, which states that if B1 = pathLeft, then the nested action sequence must have at least one turnLeft action, and the first occurrence of this action must not be preceded by a move or turnRight, thus preventing invalid actions within the conditional. ∆6 ensures minimality of an action sequence, i.e., optimality of the constituent actions to obtain the desired output. This constraint would, for instance, eliminate redundant sequences such as turnLeft, turnRight, which does not affect the output, or turnLeft, turnLeft, turnLeft, whose output could be achieved by a single turnRight. All employed elimination sequences can be found in the supplementary material. The entire list of constraints applied on the solution code in Fig. 4d is shown in Fig. 4f.
Step 3: SMT query. For a sketch Q generated from code C and its constraints, we pose the following query to an SMT solver: (sketch Q, Q-constraints). As a result, the solver generates a set of instantiations, which are conceptually similar to C. In our implementation, we used the Z3 solver [7]. For the code in Fig. 4d, Z3 generated 66 mutated codes in 0.8s from an exhaustive space of 2,997 possible codes with δsize = 2. One such mutation is shown in Fig. 1d.
While this approach generates codes that are devoid of most semantic irregularities, it has its limitations. Certain irregularities continue to exist in some generated codes: An example of such a code included the action sequence move, turnLeft, move, turnLeft, move, turnLeft, move, turnLeft, which results in the agent circling back to its initial location in the task space. This kind of undesirable behaviour is eliminated in the symbolic execution stage of our pipeline.
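The "circling back" irregularity mentioned above can be detected by simulating the net effect of an action sequence on an unobstructed grid, as in the small check below. This is purely illustrative: the paper does not use such a simulator, but instead filters these codes out in the symbolic-execution stage via the quality and shortcut criteria.

TURN = {"turnLeft": 1, "turnRight": -1}
STEP = [(0, 1), (-1, 0), (0, -1), (1, 0)]      # facing N, W, S, E (counter-clockwise)

def net_effect(actions):
    # Net displacement and orientation change of a straight-line action sequence.
    x, y, d = 0, 0, 0
    for a in actions:
        if a == "move":
            dx, dy = STEP[d]
            x, y = x + dx, y + dy
        else:
            d = (d + TURN[a]) % 4
    return x, y, d

def circles_back(actions):
    # True if the agent ends exactly where it started, facing the same way.
    return net_effect(actions) == (0, 0, 0)

print(circles_back(["move", "turnLeft"] * 4))        # True: the example from the text
print(circles_back(["move", "move", "turnLeft"]))    # False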
3.2 Symbolic Execution
Symbolic execution [13] is an automated test-generation technique that symbolically explores execution paths in a program. During exploration of a path, it gathers symbolic constraints over program inputs from statements along the path. These constraints are then mutated (according to a search strategy), and an SMT solver is queried to generate new inputs that explore another path.
Obtaining visual tasks with symbolic execution. This stage in our pipeline applies symbolic execution on each generated code Cout to obtain a suitable visual task Tout. The program inputs of Cout are the agent’s initial location/orientation and the status of the grid cells (unknown, free, blocked, marker, goal), which is initially unknown. Symbolic execution collects constraints over these from code statements. As in Fig. 5 for one path, symbolic execution generates a visual task for each path in Cout.
However, not all of these tasks are suitable. For instance, if the goal is reached after the first move in Fig. 1d, all other statements in Cout are not covered, rendering the task less suitable for this code. Naïvely, symbolic execution could first enumerate all paths in Cout and their corresponding tasks, and then rank them in terms of suitability. However, solution codes may have an unbounded number of paths, which leads to path explosion, that is, the inability to cover all paths with tractable resources.
Guiding symbolic execution using Monte Carlo Tree Search (MCTS). To address this issue, we use MCTS [14] as a search strategy in symbolic execution with the goal of generating more suitable tasks with fewer resources—we define task suitability next. Symbolic execution has been previously combined with MCTS in order to direct the exploration towards costly paths [15]. In the supplementary material, we provide an example demonstrating how MCTS could guide the symbolic execution in generating more suitable tasks.
As previously observed [12], a critical component of effectively applying MCTS is to define an evaluation function that describes the desired properties of the output, i.e., the visual tasks. Tailoring the evaluation function to our unique setting is exactly what differentiates our approach from existing work. In particular, our evaluation function, Fscore, distinguishes suitable tasks by assigning a score (∈ [0, 1]) to them, which guides the MCTS search. A higher Fscore indicates a more suitable task.
Its constituent components are: (i) Fcov(Toutvis, Cout) ∈ {0, 1}, which evaluates to 1 in the event of complete coverage of code Cout by task Toutvis and 0 otherwise; (ii) Fdiss(Toutvis, Tinvis) ∈ [0, 1], which evaluates the dissimilarity of Tout to Tin (see Section 2); (iii) Fqual(Toutvis, Cout) ∈ [0, 1], which evaluates the quality and validity of Tout; (iv) Fnocrash(Toutvis, Cout) ∈ {0, 1}, which evaluates to 0 in case the agent crashes into a wall and 1 otherwise; and (v) Fnocut(Toutvis, Cout) ∈ {0, 1}, which evaluates to 0 if there is a shortcut sequence of actions (a in Fig. 4a) smaller than Coutsize that solves Tout and 1 otherwise. Fqual and Fnocut also resolve the limitations of our mutation stage by eliminating codes and tasks that lead to undesirable agent behavior. We instantiate Fscore in the next section.
4 Experimental Evaluation
In this section, we evaluate our task synthesis algorithm on HOC and Karel tasks. Our implementation is publicly available.2 While we give an overview of key results here, a detailed description of our setup and additional experiments can be found in the supplementary material.
4.1 Reference Tasks and Specifications
Reference tasks. We use a set of ten reference tasks from HOC and Karel, shown in Fig. 6. The HOC tasks were selected from the Hour of Code: Classic Maze challenge by Code.org [23] and the Karel tasks from the Intro to Programming with Karel course by CodeHS.com [22]. The DSL of Fig. 4a is generic in that it includes both HOC and Karel codes, with the following differences: (i) construct While, marker-related actions putM, pickM, and conditions noPathA, noPathL, noPathR, marker, noMarker are specific to Karel only; (ii) construct RepeatUntil and goal are specific to HOC only. Furthermore, the puzzles for HOC and Karel are of different styles (see Fig. 1 and Fig. 2). For all tasks, the grid size of the puzzles is fixed to 10 × 10 cells (grid-size parameter n = 10).

Specification of scoring functions. Fqual(Toutvis, Cout) ∈ [0, 1] was approximated as the sum of the normalized counts of ‘moves’, ‘turns’, ‘segments’, and ‘long-segments’ in the grid; segments and long-segments are sequences of ≥ 3 and ≥ 5 move actions respectively. More precisely, for HOC tasks, we used the following function where features are computed by executing Cout on Toutvis:
FHOCqual(Toutvis, Cout) = (1/4) · ( #moves/(2n) + #turns/n + #segments/(n/2) + #long-segments/(n/3) ).
Furthermore, in our implementation, the Fqual(·) value was set to 0 when Fnocrash(·) = 0. For Karel tasks, Fqual additionally included the normalized counts of putM and pickM, and is provided in the supplementary material. Fdiss(Toutvis, Tinvis) ∈ [0, 1] was computed based on the dissimilarity of the agent’s initial location/orientation w.r.t. Tinvis, and the grid-cell level dissimilarity based on the Hamming distance between Toutvis and Tinvis. More precisely, we used the following function:
Fdiss(Toutvis, Tinvis) = (1/3) · ( diss(loc | Toutvis, Tinvis) + diss(dir | Toutvis, Tinvis) + diss(grid-cells | Toutvis, Tinvis) ),

where diss(loc | Toutvis, Tinvis) ∈ {0, 1}, diss(dir | Toutvis, Tinvis) ∈ {0, 1}, and diss(grid-cells | Toutvis, Tinvis) ∈ [0, 1] (the Hamming distance normalized with a factor of 2n²).
2https://github.com/adishs/neurips2020_synthesizing-tasks_code
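A direct transcription of these two scoring functions into Python could look as follows; the feature counts are assumed to be extracted beforehand by executing Cout on the candidate grid, the capping of the quality score at 1 is our reading of its stated [0, 1] range, and the function and field names are ours rather than those of the released implementation.

def f_qual_hoc(moves, turns, segments, long_segments, n=10):
    # Quality of an HOC task: normalized counts of trace features (capped at 1).
    score = 0.25 * (moves / (2 * n) + turns / n
                    + segments / (n / 2) + long_segments / (n / 3))
    return min(score, 1.0)

def f_diss(out_task, in_task, n=10):
    # Dissimilarity of two tasks: agent location, orientation, grid Hamming distance.
    loc = 1.0 if out_task["agent_loc"] != in_task["agent_loc"] else 0.0
    direction = 1.0 if out_task["agent_dir"] != in_task["agent_dir"] else 0.0
    hamming = sum(out_task["grid"][i][j] != in_task["grid"][i][j]
                  for i in range(n) for j in range(n))
    return (loc + direction + hamming / (2 * n * n)) / 3.0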
Next, we define the evaluation function Fscore(Tout, Cout, Tin, Cin) ∈ [0, 1] used by MCTS:

Fscore(Tout, Cout, Tin, Cin) = 1( Fqual(Toutvis, Cout) ≥ δqual, Fnocrash(Toutvis, Cout) = 1, Fnocut(Toutvis, Cout) = 1 ) · [ α1 Fcov(Toutvis, Cout) + α2 Fqual(Toutvis, Cout) + α3 Fdiss(Toutvis, Tinvis) ]

where 1 is an indicator function and each constant α = 1/3. Component (ii), the bracketed sum, supplies the gradients for guiding the search in MCTS; Component (i), the indicator gate, is applied at the end of the MCTS run to pick the output. More precisely, the best task (i.e., the one with the highest Fscore value) is picked only from the pool of generated tasks which have Fscore(·) > 0 and satisfy Fcov(·) = 1.

Specification of task synthesis and MCTS. As per Section 2, we set the following thresholds for our algorithm: (i) δsize = 2, (ii) δdiss = 0.33, and (iii) δqual = 0.2 for codes with While or RepeatUntil, and 0.05 otherwise. We run MCTS 10 times per code, with each run generating one task. We set the maximum iterations of a run to 2 million (M) and the exploration constant to 2 [14]. Even when considering a tree depth of 2n (= 20), there are millions of leaves for difficult tasks H5 and H6, reflecting the complexity of task generation. For each code Cout, we generated 10 different visual tasks. To ensure sufficient diversity among the tasks generated for the same code, we introduced a measure Fdiversity. This measure not only ensures visual task dissimilarity, but also ensures sufficient diversity in entire symbolic paths during generation (for details, see supplementary material).
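Putting the preceding pieces together, the gated form of Fscore could be written as in the short sketch below; the component scores and threshold are passed in as plain numbers, and the helper names are ours.

def f_score(f_cov, f_qual, f_diss, f_nocrash, f_nocut, delta_qual=0.2, alpha=1 / 3):
    # Component (i): hard gate; component (ii): weighted sum guiding the MCTS search.
    gate = 1.0 if (f_qual >= delta_qual and f_nocrash == 1 and f_nocut == 1) else 0.0
    return gate * (alpha * f_cov + alpha * f_qual + alpha * f_diss)

def pick_best(candidates):
    # Keep only fully covering tasks with positive score; return the highest-scoring one.
    ok = [c for c in candidates if c["f_cov"] == 1 and c["score"] > 0]
    return max(ok, key=lambda c: c["score"]) if ok else None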
4.2 Results
Performance of task synthesis algorithm. Fig. 7 shows the results of our algorithm. The second column illustrates the enormity of the unconstrained space of mutated codes; we only impose size constraint ∆0 from Fig. 4c. We then additionally impose constraint ∆1 resulting in a partially constrained space of mutated codes (column 3), and finally apply all constraints from Fig. 4c to obtain the final set of generated codes (column 4). This reflects the systematic reduction in the space of mutated codes by our constraints. Column 5 shows the total running time for generating the final codes, which denotes the time taken by Z3 to compute solutions to our mutation query. As discussed in Section 3.1, few codes with semantic irregularities still remain after the mutation stage. The symbolic execution stage eliminates these to obtain the reduced set of valid codes (column 6). Column 7 shows the final number of generated tasks and column 8 is the average time per output task (i.e., one MCTS run).
Analyzing output tasks. We further analyze the generated tasks based on the objectives of Section 2. All tasks satisfy properties (I)–(III) by design. Objective (IV) is easily achieved by excluding generated tasks for which Cout = Cin. For a random sample of 100 of the generated tasks per reference task, we performed manual validation to determine whether objectives (V) and (VI) are met. The fraction of tasks that satisfy these objectives is listed in the last three columns of Fig. 7. We observe that the vast majority of tasks meet the objectives, even if not by design. For H6, the fraction of tasks satisfying (VI) is low because the corresponding codes are generic enough to solve several puzzles.
Deep dive into an MCTS run. To offer more insight into the task generation process, we take a closer look at an MCTS run for task H5, shown in Fig. 8. Fig. 8a illustrates the improvement in various components of Fscore as the number of MCTS iterations increases. Best tasks at different iterations are shown in Fig. 8b, 8c, 8d. As expected, the more the iterations, the better the tasks are.
Remarks. We also ran the mutation stage by enumerating the programs within size constraints and then post-checking other constraints without Z3. This implementation leads to a run-time increase by a factor of 10 to 100 for different tasks. So, Z3 seems to be very effective by jointly considering all the constraints. As a search method, although MCTS seems computationally expensive, the actual run-time and memory footprint of an MCTS run depend on the unique traces explored (i.e., unique symbolic executions done)—this number is typically much lower than the number of iterations, also see discussion in the supplementary material. Considering the MCTS output in Figs. 8c, 8d, to obtain a comparable evaluation score through a random search, the corresponding number of unique symbolic executions required is at least 10 times more than executed by MCTS. We note that while we considered one I/O pair for Karel tasks, our methodology can be easily extended to multiple I/O pairs by adapting techniques designed for generating diverse tasks.
5 User Study and Comparison with Alternate Methods
In this section, we evaluate our task synthesis algorithm with a user study focusing on tasks H2, H4, H5, and H6. We developed an online app3, which uses the publicly available toolkit of Blockly Games [10] and provides an interface for a participant to practice block-based programming tasks for HOC. Each “practice session” of the study involves three steps: (i) a reference task Tin ∈ {H2,H4,H5,H6} is shown to the participant along with its solution code Cin, (ii) a new task Tout is generated for which the participant has to provide a solution code, and (iii) a post-survey asks the participant to assess the visual dissimilarity of the two tasks on a 4-point Likert scale as used in [25]. Details on the app interface and questionnaire are provided in the supplementary material. Participants for the study were recruited through Amazon Mechanical Turk. We only selected four tasks due to the high cost involved in conducting the study (about 1.8 USD per participant). The number of participants and their performance are documented in Fig. 9.
Baselines and methods evaluated. We evaluated four different methods, including three baselines (SAME, TUTOR, MUTTASK) and our algorithm (SYNTASK). SAME generates tasks such that Tin = Tout. TUTOR produces tasks that are similar to Tin and designed by an expert. We picked similar problems from the set of 20 Classic Maze challenge [23] tasks exercising the same programming concepts: Maze 6, 9 for H2, Maze 11, 13 for H4, Maze 15, 17 for H5, and Maze 19 for H6.
MUTTASK generated tasks by directly mutating the grid-world of the original task, i.e., by moving the agent or goal by up to two cells and potentially changing the agent’s orientation. A total of 18, 20, 15, and 17 tasks were generated for H2, H4, H5, and H6, respectively. Fig. 10 shows two output tasks for H4 and illustrates the challenge in directly mutating the input task, given the high discontinuity in mapping from the space of tasks to their codes. For H4, a total of 14 out of 20 new tasks were structurally very different from the input.
SYNTASK uses our algorithm to generate tasks. We picked the generated tasks from three groups based on the size of the code mutations from which they were produced, differing from the reference solution code by +δsize for δsize ∈ {0, 1, 2}. For H2 and H4, we randomly selected 5 tasks from each group, for a total of 15 new tasks per reference task. For H5 and H6, we selected 10 tasks from the first group (δsize = 0) only, due to their complexity stemming from nested constructs in their codes. We observed that TUTOR tasks for H5, H6 were also of δsize = 0, i.e., Coutsize = Cinsize. All the generated tasks picked for SYNTASK adhere to properties (I)–(VI) in Section 2.
3https://www.teaching-blocks.cc/
Results on task solving. In terms of successfully solving the generated tasks, SAME performed best (mean success = 0.94) in comparison to TUTOR (mean = 0.90), SYNTASK (mean = 0.89), and MUTTASK (mean = 0.68)—this is expected given the tasks generated by SAME. In comparison to TUTOR, the performance of SYNTASK was not significantly different (χ2 = 0.04, p = 0.83); in comparison to MUTTASK, SYNTASK performed significantly better (χ2 = 28.74, p < e−8). The complexity of the generated tasks is also reflected in the average time that participants spent on solving them. As shown in Fig. 9, they spent more time solving the tasks generated by MUTTASK.
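For completeness, comparisons of this kind can be reproduced from the 2 × 2 contingency table of solved/unsolved counts; the snippet below uses SciPy on placeholder counts, not the study's raw data.

from scipy.stats import chi2_contingency

# Hypothetical solved/unsolved counts for two methods (placeholders, not study data).
table = [[178, 22],    # method A: solved, not solved
         [136, 64]]    # method B: solved, not solved
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}")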
Results on visual task dissimilarity. Visual dissimilarity was measured on a Likert scale ranging from 1–4, 1 being highly similar and 4 highly dissimilar. Comparing the dissimilarity of the generated tasks w.r.t. the reference task, we found that the performance of SAME was worst (mean dissimilarity = 1.07), while that of TUTOR was best (mean = 2.90). SYNTASK (mean = 2.63) performed significantly better than MUTTASK (mean = 2.17), yet slightly worse than TUTOR. This is because TUTOR generates tasks with additional distracting paths and noise, which can also be done by our algorithm (although not done for this study). Moreover, for H2, which had no conditionals, the resulting codes were somewhat similar, and so were the generated puzzles. When excluding H2 from the analysis, the difference between SYNTASK (mean = 2.72) and TUTOR (mean =2.93) was not statistically significant. A detailed distribution of the responses can be found in the supplementary material.
Remarks. SAME’s performance in terms of tasks solved is below 1.00, possibly because participants overlooked the solution of Step 1, unaware that they would receive the same task in Step 2, and the app did not allow them to go back to Step 1. This user study provides a proof-of-concept; more elaborate studies are needed to fully reach the motivational goal of teaching K-12 students, and to evaluate the long-term impact on students’ concept learning. As additional studies, it would be important to understand the sensitivity of user study results w.r.t. the Likert scale definition; another possibility is to use pairwise comparisons in eliciting user evaluations.
6 Conclusions and Outlook
We developed techniques for a critical aspect of pedagogy in block-based programming: Automatically generating new tasks that exercise specific programming concepts, while looking visually dissimilar to input. We demonstrated the effectiveness of our methodology through an extensive empirical evaluation and user study on reference tasks from popular programming platforms. We believe our techniques have the potential to drastically improve the success of pedagogy in block-based visual programming environments by providing tutors and students with a substantial pool of new tasks. Beyond the application domain of programming education, our methodology can be used for generating large-scale datasets consisting of tasks and solution codes with desirable characteristics—this can be potentially useful for training neural program synthesis methods.
There are several promising directions for future work, including but not limited to: Learning a policy to guide the MCTS procedure (instead of running vanilla MCTS); automatically learning the constraints and cost function from a human-generated pool of problems; and applying our methodology to other programming environments (e.g., Python problems).
Broader Impact
This paper develops new techniques for improving pedagogy in block-based visual programming environments. Such programming environments are increasingly used nowadays to introduce computing concepts to novice programmers, and our work is motivated by the clear societal need of enhancing K-12 computing education. In existing systems, the programming tasks are hand-curated by tutors, and the available set of tasks is typically very limited. This severely limits the utility of existing systems for long-term learning as students do not have access to practice tasks for mastering the programming concepts.
We take a step towards tackling this challenge by developing a methodology to generate new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. Our task synthesis algorithm is able to generate 1000’s of new similar tasks for reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. Our extensive experiments and user study further validate the quality of the generated tasks. Our task synthesis algorithm could be useful in many different ways in practical systems. For instance, tutors can assign new practice tasks as homework or quizzes to students to check their knowledge, students can automatically obtain new similar tasks after they failed to solve a given task and received assistance, and intelligent tutoring systems could automatically generate a personalized curriculum of problems for a student for long-term learning.
Acknowledgments and Disclosure of Funding
We would like to thank the anonymous reviewers for their helpful comments. Ahana Ghosh was supported by Microsoft Research through its PhD Scholarship Programme. Umair Z. Ahmed and Abhik Roychoudhury were supported by the National Research Foundation, Singapore and National University of Singapore through its National Satellite of Excellence in Trustworthy Software Systems (NSOE-TSS) project under the National Cybersecurity R&D (NCR) Grant award no. NRF2018NCRNSOE003-0001. | 1. What is the main contribution of the paper regarding visual programming tasks?
2. What are the strengths of the proposed approach, particularly in terms of its application in various domains?
3. What are the weaknesses of the paper, especially regarding the evaluation methodology?
4. How can the visual dissimilarity measures be improved to enhance the performance of the algorithm?
5. What are some potential applications of the proposed approach in various fields, such as multi-agent emergent communication? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper attempts to create/synthesise visual programming tasks by methods that automatically generate candidate referents based on similarity. Their algorithm is composed of first mutating existing codes, then performing symbolic execution over them, using MCTS to guide search in the execution tree. Effectiveness of the algorithm is measured with empirical experiments on several block-based games as well as user studies with humans. Update: I think this is a good-enough contribution to warrant an accept and I hope the authors take into account the notes on more detailed evaluation, because I think that would be helpful for anyone who wants to use this/have work that spins off of this. Updating my score slightly.
Strengths
1. This is an interesting problem that ties to many other relevant problems in machine learning; e.g., in general, learning to generate candidate sets of referents based on characteristics of the objects, as well as user/pragmatic outputs, will be largely helpful in the multi-agent emergent communication community as well. 2. The paper is well written and all the components of the algorithm and its working are clearly explained. 3. It's nice that there is a user study component in the evaluation that assesses how this works.
Weaknesses
1. The visual dissimilarity measures (Likert scale) could possibly be improved/made better (since it seems like this affects performance as well)
NIPS | Title
Synthesizing Tasks for Block-based Programming
Abstract
Block-based visual programming environments play a critical role in introducing computing concepts to K-12 students. One of the key pedagogical challenges in these environments is in designing new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. In this paper, we formalize the problem of synthesizing visual programming tasks. In particular, given a reference visual task Tin and its solution code Cin, we propose a novel methodology to automatically generate a set {(Tout, Cout)} of new tasks along with solution codes such that tasks Tin and Tout are conceptually similar but visually dissimilar. Our methodology is based on the realization that the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, directly mutating reference task Tin to generate new tasks is futile. Our task synthesis algorithm operates by first mutating code Cin to obtain a set of codes {Cout}. Then, the algorithm performs symbolic execution over a code Cout to obtain a visual task Tout; this step uses the Monte Carlo Tree Search (MCTS) procedure to guide the search in the symbolic tree. We demonstrate the effectiveness of our algorithm through an extensive empirical evaluation and user study on reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com.
1 Introduction
Block-based visual programming environments are increasingly used nowadays to introduce computing concepts to novice programmers including children and K-12 students. Led by the success of environments like Scratch [29], initiatives like Hour of Code by Code.org [24] (HOC) and online platforms like CodeHS.com [21], block-based programming has become an integral part of introductory computer science education. Considering HOC alone, over one billion hours of block-based programming activity has been performed so far by over 50 million unique students worldwide [24, 35].
The societal need for enhancing K-12 computing education has led to a surge of interest in developing AI-driven systems for pedagogy of block-based programming [33, 26, 27, 34, 16]. Existing works have studied various aspects of intelligent support, including providing real-time next-step hints when a student is stuck solving a task [20, 36, 18, 17, 9], giving data-driven feedback about a student’s misconceptions [31, 19, 28, 30, 35], and demonstrating a worked-out solution for a task when a student lacks the required programming concepts [37]. An underlying assumption when providing such intelligent support is that afterwards the student can practice new similar tasks to finally learn the missing concepts. However, this assumption is far from reality in existing systems—the programming tasks are typically hand-curated by experts/tutors, and the available set of tasks is limited. Consider HOC’s Classic Maze challenge [23], which provides a progression of 20 tasks: Millions of students have attempted these tasks, yet when students fail to solve a task and receive assistance, they cannot practice similar tasks, hindering their ability to master the desired concepts. We seek to tackle this pedagogical challenge by developing techniques for synthesizing new programming tasks.
∗Authors listed alphabetically; Correspondence to: Ahana Ghosh <gahana@mpi-sws.org>.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
We formalize the problem of synthesizing visual programming tasks of the kind found in popular learning platforms like Code.org (see Fig. 1) and CodeHS.com (see Fig. 2). As input, we are given a reference task Tin, specified as a visual puzzle, and its solution code Cin. Our goal is to synthesize a set {(Tout, Cout)} of new tasks along with their solution codes that are conceptually similar but visually dissimilar to the input. This is motivated by the need for practice tasks that on one hand exercise the same concepts, while looking fresh in order to maintain student engagement.
When tackling the problem of synthesizing new tasks with the above desirable properties, three key challenges emerge. First, we are generating problems in a conceptual domain with no well-defined procedure that students follow to solve a task—consequently, existing work on educational problem generation in procedural domains does not apply in our setting [3, 11]. Second, the mapping from the space of visual tasks to their solution codes is highly discontinuous; hence, template-based problem generation techniques [32, 25] that rely on directly mutating the input to generate new tasks are ineffective (see Section 5 where we use this approach as a baseline). Furthermore, such a direct task-mutation approach would require access to an automated solution synthesizer; however, state-of-the-art program synthesis techniques are not yet on par with experts and their minimal solutions [5, 8, 6]. Third, the space of possible tasks and their solutions is potentially unbounded, and thus, any problem generation technique that relies on exhaustive enumeration is intractable [32, 1, 2].
To overcome these challenges, we propose a novel methodology that operates by first mutating the solution code Cin to obtain a set of codes {Cout}, and then performing symbolic execution over a code Cout to obtain a visual puzzle Tout. Mutation is efficient by creating an abstract representation of Cin along with appropriate constraints and querying an SMT solver [4]; any solution to this query is a mutated code Cout. During symbolic execution, we use Monte Carlo Tree Search (MCTS) to guide the search over the (unbounded) symbolic execution tree. We demonstrate the effectiveness of our methodology by performing an extensive empirical evaluation and user study on a set of reference tasks from the Hour of code challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. In summary, our main contributions are:
• We formalize the problem of synthesizing block-based visual programming tasks (Section 2). • We present a novel approach for generating new visual tasks along with solution codes such that
they are conceptually similar but visually dissimilar to a given reference task (Section 3). • We demonstrate the effectiveness of our approach through an extensive empirical evaluation and
user study on reference tasks from real-world programming platforms (Section 4 and Section 5).
2 Problem Formulation
The space of tasks. We define a task as a tuple T := (Tvis, Tstore, Tsize), where Tvis denotes the visual puzzle, Tstore the available block types, and Tsize the maximum number of blocks allowed in the
solution code. For instance, considering the task T := Tin in Fig. 1a, Tvis is illustrated in Fig. 1a, Tstore = {move, turnL, turnR, RepeatUntil, If}, and Tsize = 4. The space of codes. The programming environment has a domain-specific language (DSL), which defines the set of valid codes C and is shown in Fig. 4a. A code C ∈ C is characterized by several properties, such as the set Cblocks of block types in C, the number of blocks Csize, the depth Cdepth of the corresponding Abstract Syntax Tree (AST), and the nesting structure Cstruct representing programming concepts exercised by C. For instance, considering the code C := Cin in Fig. 1b, Cblocks = {move, turnL, RepeatUntil, If}, Csize = 4, Cdepth = 3, and Cstruct = {Run{RepeatUntil{If}}}. Below, we introduce two useful definitions relating the task and code space. Definition 1 (Solution code). C is a solution code for T if the following holds: C successfully solves the visual puzzle Tvis, Cblocks ⊆ Tstore, and Csize ≤ Tsize. CT denotes the set of all solution codes for T. Definition 2 (Minimality of a task). Given a solvable task T with |CT| ≥ 1 and a threshold δ ∈ N, the task is minimal if ∄C ∈ CT such that Csize < Tsize − δ.
Next, we introduce two definitions formalizing the notion of conceptual similarity. Definition 3 formalizes conceptual similarity of a task T along with one solution code C. Since a task can have multiple solution codes, Definition 4 provides a stricter notion of conceptual similarity of a task T for all its solution codes. These definitions are used in our objective of task synthesis in conditions (I) and (V) below. Definition 3 (Conceptual similarity of (T, C)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T along with a solution code C is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and Cstruct = Cinstruct. Definition 4 (Conceptual similarity of (T, ·)). Given a reference (Tin, Cin) and a threshold δ ∈ N, a task T is conceptually similar to (Tin, Cin) if the following holds: Tstore = Tinstore, |Tsize − Tinsize| ≤ δ, and ∀C ∈ CT, Cstruct = Cinstruct.
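To make Definitions 1–3 concrete, a lightweight Python encoding could look as follows; only the attributes used by the definitions are kept, solves is a hypothetical oracle that runs a code on a puzzle, and all field names are ours.

from dataclasses import dataclass

@dataclass
class Task:
    vis: object            # the visual puzzle (grid, agent, goal)
    store: frozenset       # available block types
    size: int              # maximum number of blocks allowed

@dataclass
class Code:
    blocks: frozenset      # block types used in the code
    size: int              # number of blocks
    depth: int             # depth of the AST
    struct: str            # nesting structure, e.g. "Run{RepeatUntil{If}}"

def is_solution(code, task, solves):
    # Definition 1: solves(code, puzzle) is an execution oracle.
    return solves(code, task.vis) and code.blocks <= task.store and code.size <= task.size

def conceptually_similar(task, code, ref_task, ref_code, delta=2):
    # Definition 3: same block store, sizes within delta, same nesting structure.
    return (task.store == ref_task.store
            and abs(task.size - ref_task.size) <= delta
            and code.struct == ref_code.struct)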
Environment domain knowledge. We now formalize our domain knowledge about the block-based environment to measure visual dissimilarity of two tasks, and capture some notion of interestingness and quality of a task. Given tasks T and T′, we measure their visual dissimilarity by an environment-specific function Fdiss(Tvis, T′vis) ∈ [0, 1]. Moreover, we measure generic quality of a task with function Fqual(Tvis, C) ∈ [0, 1]. We provide specific instantiations of Fdiss and Fqual in our evaluation.
Objective of task synthesis. Given a reference task Tin and a solution code Cin ∈ CTin as input, we seek to generate a set {(Tout, Cout)} of new tasks along with solution codes that are conceptually similar but visually dissimilar to the input. Formally, given parameters (δsize, δdiss, δqual), our objective is to synthesize new tasks meeting the following conditions:
(I) (Tout, Cout) is conceptually similar to (Tin, Cin) with threshold δsize in Definition 3. (II) Tout is visually dissimilar to Tin with margin δdiss, i.e., Fdiss(Tinvis, Toutvis ) ≥ δdiss.
(III) Tout has a quality score above threshold δqual, i.e., Fqual(Toutvis , Cout) ≥ δqual.
In addition, depending on the use case, it is desirable that the new tasks satisfy the following criteria: (IV) Cout is different from the input solution code, i.e., Cout 6= Cin. (V) Tout is conceptually similar to (Tin, Cin) with threshold δsize in Definition 4.
(VI) Tout is minimal as per Definition 2 for a desired value of δmini (e.g., δmini = 0 or δmini = 1).
3 Our Task Synthesis Algorithm
We now present the pipeline of our algorithm (see Fig. 3), which takes as input a reference task Tin and its solution code Cin, and generates a set {(Tout, Cout)} of new tasks with their solution codes. The goal is for this set to be conceptually similar to (Tin, Cin), but for new tasks {Tout} to
be visually dissimilar to Tin. This is achieved by two main stages: (1) mutation of Cin to obtain a set {Cout}, and (2) symbolic execution of each Cout to create a task Tout. The first stage, presented in Section 3.1, converts Cin into an abstract representation restricted by a set of constraints (Fig. 3(a)), which must be satisfied by any generated Cout (Fig. 3(b)). The second stage, described in Section 3.2, applies symbolic execution on each code Cout to create a corresponding visual task Tout (Fig. 3(c)) while using Monte Carlo Tree Search (MCTS) to guide the search in the symbolic execution tree.
3.1 Code Mutation
This stage in our pipeline mutates code Cin of task Tin such that its conceptual elements are preserved. Our mutation procedure consists of three main steps. First, we generate an abstract representation of Cin, called sketch. Second, we restrict the sketch with constraints that describe the space of its concrete instantiations. Although this formulation is inspired from work on generating algebra problems [32], we use it in the entirely different context of generating conceptually similar mutations of Cin. This is achieved in the last step, where we use the sketch and its constraints to query an SMT solver [4]; the query solutions are mutated codes {Cout} such that Coutstruct = Cinstruct (see Definition 3). Step 1: Sketch. The sketch of code C, denoted by Q, is an abstraction of C capturing its skeleton and generalizing C to the space of conceptually similar codes. Q, expressed in the language of Fig. 4b, is generated from C with mapping Ω. In particular, the map exploits the AST structure of the code: the AST is traversed in a depth-first manner, and all values are replaced with their corresponding sketch variables, i.e., action a, bool b, and iter x are replaced with A, B, and X, respectively. In the following, we also use mapping ω(·| C), which takes a sketch variable in Q and returns its value in C. In addition to the above, we may extend a variable A to an action sequence A, since any A is allowed to be empty (φ). We may also add an action sequence of length δsize at the beginning and end of the obtained sketch. As an example, consider the code in Fig. 4d and the resulting sketch in Fig. 4e. Notice that, while we add an action sequence at the beginning of the sketch (A1), no action sequence is appended at the end because construct RepeatUntil renders any succeeding code unreachable.
Step 2: Sketch constraints. Sketch constraints restrict the possible concrete instantiations of a sketch by encoding the required semantics of the mutated codes. All constraint types are in Fig. 4c.
In particular, ∆0 restricts the size of the mutated code within δsize. ∆1 specifies the allowed mutations to an action sequence based on its value in the code, given by ω(A | C). For instance, this constraint could result in converting all turnLeft actions of a sequence to turnRight. ∆2 restricts the possible values of the Repeat counter within threshold δiter. ∆3 ensures that the Repeat counter is optimal, i.e., action subsequences before and after this construct are not nested in it. ∆4 specifies the possible values of the If condition based on its value in the code, given by ω(B | C). ∆5 refers to constraints imposed on action sequences nested within conditionals. As an example, consider
∆5 in Fig. 4f, which states that if B1 = pathLeft, then the nested action sequence must have at least one turnLeft action, and the first occurrence of this action must not be preceded by a move or turnRight, thus preventing invalid actions within the conditional. ∆6 ensures minimality of an action sequence, i.e., optimality of the constituent actions to obtain the desired output. This constraint would, for instance, eliminate redundant sequences such as turnLeft, turnRight, which does not affect the output, or turnLeft, turnLeft, turnLeft, whose output could be achieved by a single turnRight. All employed elimination sequences can be found in the supplementary material. The entire list of constraints applied on the solution code in Fig. 4d is shown in Fig. 4f.
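The minimality constraint ∆6 can be checked with a simple pattern scan; the forbidden patterns below are a small illustrative subset rather than the full list from the supplementary material.

FORBIDDEN = [
    ("turnLeft", "turnRight"),                # cancels out
    ("turnRight", "turnLeft"),                # cancels out
    ("turnLeft", "turnLeft", "turnLeft"),     # equivalent to a single turnRight
    ("turnRight", "turnRight", "turnRight"),  # equivalent to a single turnLeft
]

def is_minimal(actions):
    # True if the action sequence contains none of the forbidden subsequences.
    return not any(tuple(actions[i:i + len(p)]) == p
                   for p in FORBIDDEN
                   for i in range(len(actions) - len(p) + 1))

print(is_minimal(["move", "turnLeft", "move"]))        # True
print(is_minimal(["move", "turnLeft", "turnRight"]))   # False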
Step 3: SMT query. For a sketch Q generated from code C and its constraints, we pose the following query to an SMT solver: (sketch Q, Q-constraints). As a result, the solver generates a set of instantiations, which are conceptually similar to C. In our implementation, we used the Z3 solver [7]. For the code in Fig. 4d, Z3 generated 66 mutated codes in 0.8s from an exhaustive space of 2,997 possible codes with δsize = 2. One such mutation is shown in Fig. 1d.
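To give a flavour of such a query, the toy Z3 snippet below enumerates instantiations of a two-slot action sketch under a size bound, a mutation constraint, and one elimination rule; the integer encoding of actions and the specific constraints are our own drastic simplification of Fig. 4c.

from z3 import Int, If, Or, And, Solver, sat

# Toy encoding: 0 = empty, 1 = move, 2 = turnLeft, 3 = turnRight.
EMPTY, MOVE, TL, TR = 0, 1, 2, 3
a1, a2 = Int("a1"), Int("a2")

s = Solver()
s.add(And(a1 >= 0, a1 <= 3, a2 >= 0, a2 <= 3))               # domain of each slot
s.add(Or(a1 == MOVE, a2 == MOVE))                            # keep at least one move
s.add(If(a1 != EMPTY, 1, 0) + If(a2 != EMPTY, 1, 0) <= 2)    # size bound (toy Delta_0)
s.add(Or(a1 != TL, a2 != TR))                                # forbid turnLeft,turnRight (toy Delta_6)

solutions = []
while s.check() == sat:
    m = s.model()
    v1 = m.evaluate(a1, model_completion=True).as_long()
    v2 = m.evaluate(a2, model_completion=True).as_long()
    solutions.append((v1, v2))
    s.add(Or(a1 != v1, a2 != v2))                            # block this model, ask for the next
print(len(solutions), "instantiations:", solutions)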
While this approach generates codes that are devoid of most semantic irregularities, it has its limitations. Certain irregularities continue to exist in some generated codes: An example of such a code included the action sequence move, turnLeft, move, turnLeft, move, turnLeft, move, turnLeft, which results in the agent circling back to its initial location in the task space. This kind of undesirable behaviour is eliminated in the symbolic execution stage of our pipeline.
3.2 Symbolic Execution
Symbolic execution [13] is an automated test-generation technique that symbolically explores execution paths in a program. During exploration of a path, it gathers symbolic constraints over program inputs from statements along the path. These constraints are then mutated (according to a search strategy), and an SMT solver is queried to generate new inputs that explore another path.
Obtaining visual tasks with symbolic execution. This stage in our pipeline applies symbolic execution on each generated code Cout to obtain a suitable visual task Tout. The program inputs of Cout are the agent’s initial location/orientation and the status of the grid cells (unknown, free, blocked, marker, goal), which is initially unknown. Symbolic execution collects constraints over these from code statements. As in Fig. 5 for one path, symbolic execution generates a visual task for each path in Cout.
However, not all of these tasks are suitable. For instance, if the goal is reached after the first move in Fig. 1d, all other statements in Cout are not covered, rendering the task less suitable for this code. Naïvely, symbolic execution could first enumerate all paths in Cout and their corresponding tasks, and then rank them in terms of suitability. However, solution codes may have an unbounded number of paths, which leads to path explosion, that is, the inability to cover all paths with tractable resources.
Guiding symbolic execution using Monte Carlo Tree Search (MCTS). To address this issue, we use MCTS [14] as a search strategy in symbolic execution with the goal of generating more suitable tasks with fewer resources—we define task suitability next. Symbolic execution has been previously combined with MCTS in order to direct the exploration towards costly paths [15]. In the supplementary material, we provide an example demonstrating how MCTS could guide the symbolic execution in generating more suitable tasks.
As previously observed [12], a critical component of effectively applying MCTS is to define an evaluation function that describes the desired properties of the output, i.e., the visual tasks. Tailoring the evaluation function to our unique setting is exactly what differentiates our approach from existing work. In particular, our evaluation function, Fscore, distinguishes suitable tasks by assigning a score (∈ [0, 1]) to them, which guides the MCTS search. A higher Fscore indicates a more suitable task.
Its constituent components are: (i) Fcov(Toutvis, Cout) ∈ {0, 1}, which evaluates to 1 in the event of complete coverage of code Cout by task Toutvis and 0 otherwise; (ii) Fdiss(Toutvis, Tinvis) ∈ [0, 1], which evaluates the dissimilarity of Tout to Tin (see Section 2); (iii) Fqual(Toutvis, Cout) ∈ [0, 1], which evaluates the quality and validity of Tout; (iv) Fnocrash(Toutvis, Cout) ∈ {0, 1}, which evaluates to 0 in case the agent crashes into a wall and 1 otherwise; and (v) Fnocut(Toutvis, Cout) ∈ {0, 1}, which evaluates to 0 if there is a shortcut sequence of actions (a in Fig. 4a) smaller than Coutsize that solves Tout and 1 otherwise. Fqual and Fnocut also resolve the limitations of our mutation stage by eliminating codes and tasks that lead to undesirable agent behavior. We instantiate Fscore in the next section.
4 Experimental Evaluation
In this section, we evaluate our task synthesis algorithm on HOC and Karel tasks. Our implementation is publicly available.2 While we give an overview of key results here, a detailed description of our setup and additional experiments can be found in the supplementary material.
4.1 Reference Tasks and Specifications
Reference tasks. We use a set of ten reference tasks from HOC and Karel, shown in Fig. 6. The HOC tasks were selected from the Hour of Code: Classic Maze challenge by Code.org [23] and the Karel tasks from the Intro to Programming with Karel course by CodeHS.com [22]. The DSL of Fig. 4a is generic in that it includes both HOC and Karel codes, with the following differences: (i) construct While, marker-related actions putM, pickM, and conditions noPathA, noPathL, noPathR, marker, noMarker are specific to Karel only; (ii) construct RepeatUntil and goal are specific to HOC only. Furthermore, the puzzles for HOC and Karel are of different styles (see Fig. 1 and Fig. 2). For all tasks, the grid size of the puzzles is fixed to 10 × 10 cells (grid-size parameter n = 10).

Specification of scoring functions. Fqual(Toutvis, Cout) ∈ [0, 1] was approximated as the sum of the normalized counts of ‘moves’, ‘turns’, ‘segments’, and ‘long-segments’ in the grid; segments and long-segments are sequences of ≥ 3 and ≥ 5 move actions respectively. More precisely, for HOC tasks, we used the following function where features are computed by executing Cout on Toutvis:
FHOCqual(Toutvis, Cout) = (1/4) · ( #moves/(2n) + #turns/n + #segments/(n/2) + #long-segments/(n/3) ).
Furthermore, in our implementation, the Fqual(·) value was set to 0 when Fnocrash(·) = 0. For Karel tasks, Fqual additionally included the normalized counts of putM and pickM, and is provided in the supplementary material. Fdiss(Toutvis, Tinvis) ∈ [0, 1] was computed based on the dissimilarity of the agent’s initial location/orientation w.r.t. Tinvis, and the grid-cell level dissimilarity based on the Hamming distance between Toutvis and Tinvis. More precisely, we used the following function:
Fdiss(Toutvis, Tinvis) = (1/3) · ( diss(loc | Toutvis, Tinvis) + diss(dir | Toutvis, Tinvis) + diss(grid-cells | Toutvis, Tinvis) ),

where diss(loc | Toutvis, Tinvis) ∈ {0, 1}, diss(dir | Toutvis, Tinvis) ∈ {0, 1}, and diss(grid-cells | Toutvis, Tinvis) ∈ [0, 1] (the Hamming distance normalized with a factor of 2n²).
2https://github.com/adishs/neurips2020_synthesizing-tasks_code
Next, we define the evaluation function Fscore(Tout, Cout, Tin, Cin) ∈ [0, 1] used by MCTS:

Fscore(Tout, Cout, Tin, Cin) = 1( Fqual(Toutvis, Cout) ≥ δqual, Fnocrash(Toutvis, Cout) = 1, Fnocut(Toutvis, Cout) = 1 ) · [ α1 Fcov(Toutvis, Cout) + α2 Fqual(Toutvis, Cout) + α3 Fdiss(Toutvis, Tinvis) ]

where 1 is an indicator function and each constant α = 1/3. Component (ii), the bracketed sum, supplies the gradients for guiding the search in MCTS; Component (i), the indicator gate, is applied at the end of the MCTS run to pick the output. More precisely, the best task (i.e., the one with the highest Fscore value) is picked only from the pool of generated tasks which have Fscore(·) > 0 and satisfy Fcov(·) = 1.

Specification of task synthesis and MCTS. As per Section 2, we set the following thresholds for our algorithm: (i) δsize = 2, (ii) δdiss = 0.33, and (iii) δqual = 0.2 for codes with While or RepeatUntil, and 0.05 otherwise. We run MCTS 10 times per code, with each run generating one task. We set the maximum iterations of a run to 2 million (M) and the exploration constant to 2 [14]. Even when considering a tree depth of 2n (= 20), there are millions of leaves for difficult tasks H5 and H6, reflecting the complexity of task generation. For each code Cout, we generated 10 different visual tasks. To ensure sufficient diversity among the tasks generated for the same code, we introduced a measure Fdiversity. This measure not only ensures visual task dissimilarity, but also ensures sufficient diversity in entire symbolic paths during generation (for details, see supplementary material).
4.2 Results
Performance of task synthesis algorithm. Fig. 7 shows the results of our algorithm. The second column illustrates the enormity of the unconstrained space of mutated codes; we only impose size constraint ∆0 from Fig. 4c. We then additionally impose constraint ∆1 resulting in a partially constrained space of mutated codes (column 3), and finally apply all constraints from Fig. 4c to obtain the final set of generated codes (column 4). This reflects the systematic reduction in the space of mutated codes by our constraints. Column 5 shows the total running time for generating the final codes, which denotes the time taken by Z3 to compute solutions to our mutation query. As discussed in Section 3.1, few codes with semantic irregularities still remain after the mutation stage. The symbolic execution stage eliminates these to obtain the reduced set of valid codes (column 6). Column 7 shows the final number of generated tasks and column 8 is the average time per output task (i.e., one MCTS run).
Analyzing output tasks. We further analyze the generated tasks based on the objectives of Section 2. All tasks satisfy properties (I)–(III) by design. Objective (IV) is easily achieved by excluding generated tasks for which Cout = Cin. For a random sample of 100 of the generated tasks per reference task, we performed manual validation to determine whether objectives (V) and (VI) are met. The fraction of tasks that satisfy these objectives is listed in the last three columns of Fig. 7. We observe that the vast majority of tasks meet the objectives, even if not by design. For H6, the fraction of tasks satisfying (VI) is low because the corresponding codes are generic enough to solve several puzzles.
Deep dive into an MCTS run. To offer more insight into the task generation process, we take a closer look at an MCTS run for task H5, shown in Fig. 8. Fig. 8a illustrates the improvement in various components of Fscore as the number of MCTS iterations increases. Best tasks at different iterations are shown in Fig. 8b, 8c, 8d. As expected, the more the iterations, the better the tasks are.
Remarks. We also ran the mutation stage by enumerating the programs within size constraints and then post-checking other constraints without Z3. This implementation leads to a run-time increase by a factor of 10 to 100 for different tasks. So, Z3 seems to be very effective by jointly considering all the constraints. As a search method, although MCTS seems computationally expensive, the actual run-time and memory footprint of an MCTS run depend on the unique traces explored (i.e., unique symbolic executions done)—this number is typically much lower than the number of iterations, also see discussion in the supplementary material. Considering the MCTS output in Figs. 8c, 8d, to obtain a comparable evaluation score through a random search, the corresponding number of unique symbolic executions required is at least 10 times more than executed by MCTS. We note that while we considered one I/O pair for Karel tasks, our methodology can be easily extended to multiple I/O pairs by adapting techniques designed for generating diverse tasks.
5 User Study and Comparison with Alternate Methods
In this section, we evaluate our task synthesis algorithm with a user study focusing on tasks H2, H4, H5, and H6. We developed an online app3, which uses the publicly available toolkit of Blockly Games [10] and provides an interface for a participant to practice block-based programming tasks for HOC. Each “practice session” of the study involves three steps: (i) a reference task Tin ∈ {H2,H4,H5,H6} is shown to the participant along with its solution code Cin, (ii) a new task Tout is generated for which the participant has to provide a solution code, and (iii) a post-survey asks the participant to assess the visual dissimilarity of the two tasks on a 4-point Likert scale as used in [25]. Details on the app interface and questionnaire are provided in the supplementary material. Participants for the study were recruited through Amazon Mechanical Turk. We only selected four tasks due to the high cost involved in conducting the study (about 1.8 USD per participant). The number of participants and their performance are documented in Fig. 9.
Baselines and methods evaluated. We evaluated four different methods, including three baselines (SAME, TUTOR, MUTTASK) and our algorithm (SYNTASK). SAME generates tasks such that Tin = Tout. TUTOR produces tasks that are similar to Tin and designed by an expert. We picked similar problems from the set of 20 Classic Maze challenge [23] tasks exercising the same programming concepts: Maze 6, 9 for H2, Maze 11, 13 for H4, Maze 15, 17 for H5, and Maze 19 for H6.
MUTTASK generated tasks by directly mutating the grid-world of the original task, i.e., by moving the agent or goal by up to two cells and potentially changing the agent’s orientation. A total of 18, 20, 15, and 17 tasks were generated for H2, H4, H5, and H6, respectively. Fig. 10 shows two output tasks for H4 and illustrates the challenge in directly mutating the input task, given the high discontinuity in mapping from the space of tasks to their codes. For H4, a total of 14 out of 20 new tasks were structurally very different from the input.
SYNTASK uses our algorithm to generate tasks. We picked the generated tasks from three groups based on the size of the code mutations from which they were produced, differing from the reference solution code by +δsize for δsize ∈ {0, 1, 2}. For H2 and H4, we randomly selected 5 tasks from each group, for a total of 15 new tasks per reference task. For H5 and H6, we selected 10 tasks from the first group (δsize = 0) only, due to their complexity stemming from nested constructs in their codes. We observed that TUTOR tasks for H5, H6 were also of δsize = 0, i.e., Coutsize = Cinsize. All the generated tasks picked for SYNTASK adhere to properties (I)–(VI) in Section 2.
3https://www.teaching-blocks.cc/
Results on task solving. In terms of successfully solving the generated tasks, SAME performed best (mean success = 0.94) in comparison to TUTOR (mean = 0.90), SYNTASK (mean = 0.89), and MUTTASK (mean = 0.68)—this is expected given the tasks generated by SAME. In comparison to TUTOR, the performance of SYNTASK was not significantly different (χ2 = 0.04, p = 0.83); in comparison to MUTTASK, SYNTASK performed significantly better (χ2 = 28.74, p < e−8). The complexity of the generated tasks is also reflected in the average time that participants spent on solving them. As shown in Fig. 9, they spent more time solving the tasks generated by MUTTASK.
Results on visual task dissimilarity. Visual dissimilarity was measured on a Likert scale ranging from 1–4, 1 being highly similar and 4 highly dissimilar. Comparing the dissimilarity of the generated tasks w.r.t. the reference task, we found that the performance of SAME was worst (mean dissimilarity = 1.07), while that of TUTOR was best (mean = 2.90). SYNTASK (mean = 2.63) performed significantly better than MUTTASK (mean = 2.17), yet slightly worse than TUTOR. This is because TUTOR generates tasks with additional distracting paths and noise, which can also be done by our algorithm (although not done for this study). Moreover, for H2, which had no conditionals, the resulting codes were somewhat similar, and so were the generated puzzles. When excluding H2 from the analysis, the difference between SYNTASK (mean = 2.72) and TUTOR (mean =2.93) was not statistically significant. A detailed distribution of the responses can be found in the supplementary material.
Remarks. SAME’s performance in terms of tasks solved is below 1.00, possibly because participants overlooked the solution of Step 1, unaware that they would receive the same task in Step 2, and the app did not allow them to go back to Step 1. This user study provides a proof-of-concept; more elaborate studies are needed to fully reach the motivational goal of teaching K-12 students, and to evaluate the long-term impact on students’ concept learning. As additional studies, it would be important to understand the sensitivity of user study results w.r.t. the Likert scale definition; another possibility is to use pairwise comparisons in eliciting user evaluations.
6 Conclusions and Outlook
We developed techniques for a critical aspect of pedagogy in block-based programming: Automatically generating new tasks that exercise specific programming concepts, while looking visually dissimilar to input. We demonstrated the effectiveness of our methodology through an extensive empirical evaluation and user study on reference tasks from popular programming platforms. We believe our techniques have the potential to drastically improve the success of pedagogy in block-based visual programming environments by providing tutors and students with a substantial pool of new tasks. Beyond the application domain of programming education, our methodology can be used for generating large-scale datasets consisting of tasks and solution codes with desirable characteristics—this can be potentially useful for training neural program synthesis methods.
There are several promising directions for future work, including but not limited to: Learning a policy to guide the MCTS procedure (instead of running vanilla MCTS); automatically learning the constraints and cost function from a human-generated pool of problems; and applying our methodology to other programming environments (e.g., Python problems).
Broader Impact
This paper develops new techniques for improving pedagogy in block-based visual programming environments. Such programming environments are increasingly used nowadays to introduce computing concepts to novice programmers, and our work is motivated by the clear societal need of enhancing K-12 computing education. In existing systems, the programming tasks are hand-curated by tutors, and the available set of tasks is typically very limited. This severely limits the utility of existing systems for long-term learning as students do not have access to practice tasks for mastering the programming concepts.
We take a step towards tackling this challenge by developing a methodology to generate new practice tasks for a student that match a desired level of difficulty and exercise specific programming concepts. Our task synthesis algorithm is able to generate 1000’s of new similar tasks for reference tasks taken from the Hour of Code: Classic Maze challenge by Code.org and the Intro to Programming with Karel course by CodeHS.com. Our extensive experiments and user study further validate the quality of the generated tasks. Our task synthesis algorithm could be useful in many different ways in practical systems. For instance, tutors can assign new practice tasks as homework or quizzes to students to check their knowledge, students can automatically obtain new similar tasks after they failed to solve a given task and received assistance, and intelligent tutoring systems could automatically generate a personalized curriculum of problems for a student for long-term learning.
Acknowledgments and Disclosure of Funding
We would like to thank the anonymous reviewers for their helpful comments. Ahana Ghosh was supported by Microsoft Research through its PhD Scholarship Programme. Umair Z. Ahmed and Abhik Roychoudhury were supported by the National Research Foundation, Singapore and National University of Singapore through its National Satellite of Excellence in Trustworthy Software Systems (NSOE-TSS) project under the National Cybersecurity R&D (NCR) Grant award no. NRF2018NCRNSOE003-0001. | 1. What is the main contribution of the paper regarding generating new programming tasks?
2. How does the proposed algorithm work, and what are its strengths and weaknesses?
3. What are some potential limitations of the approach, such as the use of Monte Carlo Tree Search?
4. Are there any concerns about the empirical user study, such as the choice of baselines or participant demographics?
5. How might the method be adapted to other programming languages or domains? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper is about generating new programming tasks structurally similar to existing ones for pedagogical purposes. The domain is "block-based programming", where students write programs that control an agent in a grid-world, based on a visual specification for what the program should do; for example, the student is provided an initial state and a final state for the world, and tasked with writing the program that would transform the initial state into the final state. When provided with an existing specification, the algorithm in the paper produces new specifications which can be solved with programs that are similar to the solution for the original specification. By doing this, we can automatically generate many new interesting tasks that exercise similar concepts as an existing one, enabling students to practice their understanding of the concepts. The paper presents a formalization of the task of generating new specifications, an algorithm based on symbolic execution and an SMT solver, and an empirical evaluation comparing against baseline methods in a user study.
Strengths
The paper defines and motivates an interesting problem, presents a compelling algorithm for the problem, and shows that it works well with a user study. The methods used in the paper (applying an SMT solver, using MCTS, etc.) are not novel on their own, but they make intuitive sense and appear to be a good fit for the problem in the empirical evaluation. I liked that the paper has a formal presentation of the problem and then uses it meaningfully as part of the solution (with the SMT solver). In other papers, it is not uncommon that the formalisms are mostly for expository purposes, and not directly relevant in the proposed method. The paper only evaluated the method on one domain, but most of the approach appears generic enough that it would be easily adapted to other programming languages (for example, teaching how to use regular expressions).
Weaknesses
In Section 3.2, I felt that Monte Carlo Tree Search was an unnecessarily opaque method for the guidance of the symbolic execution. The advantage is that it can work with many kinds of constraints and cost functions, but it is not very computationally efficient as many evaluations are needed. For the evaluation functions used in this paper, would it be possible to use a more efficient method tailored for them? Also, how would MCTS compare with other methods for searching the space of paths, like evolutionary algorithms? I was glad to see the empirical user study for the method, but I felt that the baselines used could have been more varied. In particular, with an ablation study where important components of the overall system are removed or modified, we can see more clearly their importance to achieving the end result. If the study budget allows, I would encourage the authors to consider more user studies that show the importance of e.g. different evaluation functions used in the MCTS. I was also a bit surprised to see that the study participants were recruited using Mechanical Turk, as the programming classes are targeted at K-12 students but Mechanical Turk requires workers to be at least 18 years old. The users participating in the study would have different demographics than the intended users for the system, so the latter may also behave differently in ways that invalidate conclusions drawn from the user study. Finally, there is not much learning involved in the paper, compared to most other NeurIPS submissions. |
NIPS | Title
Sharpness, Restart and Acceleration
Abstract
The Łojasiewicz inequality shows that sharpness bounds on the minimum of convex optimization problems hold almost generically. Sharpness directly controls the performance of restart schemes, as observed by Nemirovskii and Nesterov [1985]. The constants quantifying error bounds are of course unobservable, but we show that optimal restart strategies are robust, and searching for the best scheme only increases the complexity by a logarithmic factor compared to the optimal bound. Overall then, restart schemes generically accelerate accelerated methods.
N/A
Introduction
We study convex optimization problems of the form
minimize f(x) (P)
where f is a convex function defined on Rn. The complexity of these problems using first order methods is generically controlled by smoothness assumptions on f such as Lipschitz continuity of its gradient. Additional assumptions such as strong convexity or uniform convexity provide respectively linear [Nesterov, 2013b] and faster polynomial [Juditski and Nesterov, 2014] rates of convergence. However, these assumptions are often too restrictive to be applied. Here, we make a much weaker and generic assumption that describes the sharpness of the function around its minimizers by constants µ ≥ 0 and r ≥ 1 such that
(µ/r) d(x, X∗)^r ≤ f(x) − f∗, for every x ∈ K, (Sharp)
where f∗ is the minimum of f, K ⊂ R^n is a compact set, and d(x, X∗) = min_{y∈X∗} ‖x − y‖ is the distance from x to the set X∗ ⊂ K of minimizers of f,¹ measured in the Euclidean norm ‖ · ‖. This defines a lower bound on the function around its minimizers: for r = 1, f shows a kink around its minimizers, and the larger r is, the flatter the function is around them. We exploit this property through restart schemes for classical convex optimization algorithms.
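To make the constants concrete, a few standard instances are easy to check directly from the definition; these particular examples are our own illustration rather than part of the original text:

f(x) = \|x\| \;\Rightarrow\; d(x, X^*) \le f(x) - f^* \quad (r = 1,\ \mu = 1),
f(x) = \tfrac{1}{p}\|x\|^p,\ p \ge 1 \;\Rightarrow\; \tfrac{1}{p}\, d(x, X^*)^p \le f(x) - f^* \quad (r = p,\ \mu = 1),
f\ \sigma\text{-strongly convex} \;\Rightarrow\; \tfrac{\sigma}{2}\, d(x, X^*)^2 \le f(x) - f^* \quad (r = 2,\ \mu = \sigma).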
Sharpness assumption (Sharp) is better known as a Hölderian error bound on the distance to the set of minimizers. Hoffman [Hoffman, 1952] first introduced error bounds to study systems of linear inequalities. Natural extensions were then developed for convex optimization [Robinson, 1975; Mangasarian, 1985; Auslender and Crouzeix, 1988], notably through the concept of sharp minima [Polyak, 1979; Burke and Ferris, 1993; Burke and Deng, 2002]. But the most striking discovery was made by Łojasiewicz [Łojasiewicz, 1963, 1993], who proved inequality (Sharp) for real analytic and subanalytic functions. It has since been extended to non-smooth subanalytic convex functions by Bolte et al. [2007]. Overall, since (Sharp) essentially measures the sharpness of minimizers, it holds somewhat generically. On the other hand, this inequality is purely descriptive, as we have no hope of ever observing either r or µ, and deriving adaptive schemes is crucial to ensure practical relevance.
¹We assume the problem is feasible, i.e. X∗ ≠ ∅.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Łojasiewicz inequalities, either in the form of (Sharp) or as gradient domination properties [Polyak, 1979], led to new simple convergence results [Karimi et al., 2016], in particular for alternating and splitting methods [Attouch et al., 2010; Frankel et al., 2015], even in the non-convex case [Bolte et al., 2014]. Here we focus on Hölderian error bounds as they offer a simple explanation of the accelerated rates of restart schemes.
Restart schemes were already studied for strongly or uniformly convex functions [Nemirovskii and Nesterov, 1985; Nesterov, 2013a; Juditski and Nesterov, 2014; Lin and Xiao, 2014]. In particular, Nemirovskii and Nesterov [1985] link a “strict minimum” condition akin to (Sharp) with faster convergence rates using restart schemes, which form the basis of our results, but they do not study the cost of adaptation and do not tackle the non-smooth case. In a similar spirit, weaker versions of this strict minimum condition were used more recently to study the performance of restart schemes in [Renegar, 2014; Freund and Lu, 2015; Roulet et al., 2015]. The fundamental question for a restart scheme is naturally to know when an algorithm must be stopped and relaunched. Several heuristics [O’Donoghue and Candes, 2015; Su et al., 2014; Giselsson and Boyd, 2014] studied adaptive restart schemes to speed up the convergence of optimal methods. The robustness of restart schemes was then theoretically studied by Fercoq and Qu [2016] for quadratic error bounds, i.e. (Sharp) with r = 2, which the LASSO problem satisfies for example. Fercoq and Qu [2017] recently extended their work to produce adaptive restarts with theoretical guarantees of optimal performance, still for quadratic error bounds. The previous references focus on smooth problems, but error bounds also appear for non-smooth ones: Gilpin et al. [2012] prove, for example, linear convergence of restart schemes in bilinear matrix games where the minimum is sharp, i.e. (Sharp) with r = 1.
Our contribution here is to derive optimal scheduled restart schemes for general convex optimization problems with smooth, non-smooth or Hölder smooth functions satisfying the sharpness assumption. We then show that for smooth functions these schemes can be made adaptive with nearly optimal complexity (up to a squared log term) for a wide array of sharpness assumptions. We also analyze a restart criterion based on a sufficient decrease of the gap to the minimum value of the problem, when the latter is known in advance. In that case, restart schemes are shown to be optimal without requiring any additional information on the function.
1 Problem assumptions
1.1 Smoothness
Convex optimization problems (P) are generally divided in two classes: smooth problems, for which f has Lipschitz continuous gradients, and non-smooth problems, for which f is not differentiable. Nesterov [2015] proposed to unify both points of view by assuming that there exist constants 1 ≤ s ≤ 2 and L > 0 such that
‖∇f(x) − ∇f(y)‖ ≤ L ‖x − y‖^{s−1}, for all x, y ∈ R^n, (Smooth)
where ∇f(x) is any sub-gradient of f at x if s = 1 (otherwise this implies differentiability of f). For s = 2, we retrieve the classical definition of smoothness [Nesterov, 2013b]. For s = 1 we get a classical assumption made in non-smooth convex optimization, i.e., that sub-gradients of the function are bounded. For 1 < s < 2, this assumes the gradient of f to be Hölder continuous. In a first step, we will analyze restart schemes for smooth convex optimization problems, then generalize to the general smoothness assumption (Smooth) using appropriate accelerated algorithms developed by Nesterov [2015].
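For concreteness, a few standard instances (our own illustrations, easy to verify but not taken from the original text) cover the whole range of exponents s:

f(x) = \tfrac{1}{2} x^\top A x - b^\top x \;\Rightarrow\; s = 2,\ L = \|A\|_2,
f(x) = \|Ax - b\|_1 \;\Rightarrow\; s = 1,\ L = \sup_x \|\nabla f(x)\| \ \text{(bounded sub-gradients)},
f(x) = \tfrac{1}{p}\|x\|^p,\ 1 < p < 2 \;\Rightarrow\; s = p \ \text{(Hölder continuous gradient)}.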
1.2 Error bounds
In general, an error bound is an inequality of the form
d(x,X∗) ≤ ω(f(x)− f∗),
where ω is an increasing function at 0, called the residual function, and x may evolve either in the whole space or in a bounded set; see Bolte et al. [2015] for more details. We focus on Hölderian error bounds (Sharp) as they are the most common in practice. They are notably satisfied by analytic and subanalytic functions, but the proof (see e.g. Bierstone and Milman [1988, Theorem 6.4]) relies on topological arguments that are far from constructive. Hence, outside of some
particular cases (e.g. strong convexity), we cannot assume that the constants in (Sharp) are known, even approximately.
Error bounds can generically be linked to the Łojasiewicz inequality, which upper bounds the magnitude of the gradient by the values of the function [Bolte et al., 2015]. This property paved the way to many recent results in optimization [Attouch et al., 2010; Frankel et al., 2015; Bolte et al., 2014]. Here we will see that (Sharp) is sufficient to accelerate convex optimization algorithms by restarting them. Note finally that in most cases, error bounds are local properties, hence the convergence results that follow will generally be local.
1.3 Sharpness and smoothness
Let f be a convex function on R^n satisfying (Smooth) with parameters (s, L). This property ensures that f(x) ≤ f∗ + (L/s) ‖x − y‖^s, for given x ∈ R^n and y ∈ X∗. Setting y to be the projection of x onto X∗, this yields the following upper bound on suboptimality
f(x) − f∗ ≤ (L/s) d(x, X∗)^s. (1)
Now, assume that f satisfies the error bound (Sharp) on a set K with parameters (r, µ). Combining (1) and (Sharp) leads, for every x ∈ K, to
sµ/(rL) ≤ d(x, X∗)^{s−r}.
This means that necessarily s ≤ r by taking x → X∗. Moreover if s < r, this last inequality can only be valid on a bounded set, i.e. either smoothness or error bound or both are valid only on a bounded set. In the following, we write
κ := L^{2/s} / µ^{2/r} and τ := 1 − s/r, (2)
respectively a generalized condition number for the function f and a condition number based on the ratio of powers in inequalities (Smooth) and (Sharp). If r = s = 2, κ matches the classical condition number of the function.
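As a small numerical illustration (entirely ours, with hypothetical constants), the generalized condition number κ and exponent gap τ of (2) are immediate to compute once estimates of (s, L) and (r, µ) are available:

import numpy as np

def condition_numbers(L, s, mu, r):
    """Generalized condition number kappa and exponent gap tau from eq. (2)."""
    kappa = L ** (2.0 / s) / mu ** (2.0 / r)
    tau = 1.0 - s / r
    return kappa, tau

# Smooth, strongly convex case (s = r = 2): kappa is the usual L/mu and tau = 0.
print(condition_numbers(L=10.0, s=2, mu=0.1, r=2))   # (100.0, 0.0)
# Smooth but only sharp of order r = 4: tau = 1/2, so the restart schedule must grow geometrically.
print(condition_numbers(L=10.0, s=2, mu=0.1, r=4))   # (kappa, 0.5)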
2 Scheduled restarts for smooth convex problems
In this section, we seek to solve (P) assuming that the function f is smooth, i.e. satisfies (Smooth) with s = 2 and L > 0. Without further assumptions on f , an optimal algorithm to solve the smooth convex optimization problem (P) is Nesterov’s accelerated gradient method [Nesterov, 1983]. Given an initial point x0, this algorithm outputs, after t iterations, a point x = A(x0, t) such that
f(x) − f∗ ≤ (cL/t^2) d(x0, X∗)^2, (3)
where c > 0 denotes a universal constant (whose value will be allowed to vary in what follows, with c = 4 here). We assume without loss of generality that f(x) ≤ f(x0). More details about Nesterov’s algorithm are given in Supplementary Material.
In what follows, we will also assume that f satisfies (Sharp) with parameters (r, µ) on a set K ⊇ X∗, which means
(µ/r) d(x, X∗)^r ≤ f(x) − f∗, for every x ∈ K. (Sharp)
As mentioned before if r > s = 2, this property is necessarily local, i.e. K is bounded. We assume then that given a starting point x0 ∈ Rn, sharpness is satisfied on the sublevel set {x| f(x) ≤ f(x0)}. Remark that if this property is valid on an open set K ⊃ X∗, it will also be valid on any compact set K ′ ⊃ K with the same exponent r but a potentially lower constant µ. The scheduled restart schemes we present here rely on a global sharpness hypothesis on the sublevel set defined by the initial point and are not adaptive to constant µ on smaller sublevel sets. On the other hand, restarts on criterion that we present in Section 4, assuming that f∗ is known, adapt to the value of µ. We now describe a restart scheme exploiting this extra regularity assumption to improve the computational complexity of solving problem (P) using accelerated methods.
2.1 Scheduled restarts
Here, we schedule the number of iterations tk made by Nesterov’s algorithm between restarts, with tk the number of (inner) iterations at the kth algorithm run (outer iteration). Our scheme is described in Algorithm 1 below.
Algorithm 1 Scheduled restarts for smooth convex minimization
Inputs: x0 ∈ R^n and a sequence t_k for k = 1, . . . , R.
for k = 1, . . . , R do
    x_k := A(x_{k−1}, t_k)
end for
Output: x̂ := x_R
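A minimal sketch of Algorithm 1 in Python, assuming a routine accelerated_gradient(x, t) implementing Nesterov's method A(x, t) is available; the routine name and signature are our assumption, not part of the paper:

def scheduled_restart(accelerated_gradient, x0, schedule):
    """Algorithm 1: run Nesterov's method for t_k iterations, then restart from its output.

    accelerated_gradient(x, t) should return A(x, t), the point obtained after t
    iterations of Nesterov's accelerated gradient method started at x.
    schedule is the list [t_1, ..., t_R] of inner-iteration counts.
    """
    x = x0
    for t_k in schedule:
        x = accelerated_gradient(x, t_k)  # warm-start each run at the previous output
    return x

With a constant schedule t_k = C this reduces to the classical restart scheme for strongly convex functions; the geometric schedules of Proposition 2.2 simply correspond to passing a geometrically growing list.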
The analysis of this scheme and the following ones relies on two steps. We first choose schedules that ensure linear convergence in the iterates xk at a given rate. We then adjust this linear rate to minimize the complexity in terms of the total number of iterations.
We begin with a technical lemma which assumes linear convergence holds, and connects the growth of t_k, the precision reached and the total number of inner iterations N.
Lemma 2.1. Let x_k be a sequence whose kth iterate is generated from the previous one by an algorithm that runs t_k iterations, and write N = Σ_{k=1}^R t_k the total number of iterations to output a point x_R. Suppose setting t_k = C e^{αk}, k = 1, . . . , R, for some C > 0 and α ≥ 0 ensures that outer iterations satisfy
f(x_k) − f∗ ≤ ν e^{−γk}, (4)
for all k ≥ 0, with ν ≥ 0 and γ ≥ 0. Then the precision at the output is given by
f(x_R) − f∗ ≤ ν exp(−γN/C), when α = 0,
and
f(x_R) − f∗ ≤ ν / (α e^{−α} C^{−1} N + 1)^{γ/α}, when α > 0.
Proof. When α = 0, N = RC, and inserting this in (4) at the last point x_R yields the desired result. On the other hand, when α > 0, we have N = Σ_{k=1}^R t_k = C e^α (e^{αR} − 1)/(e^α − 1), which gives
R = log((e^α − 1) N / (e^α C) + 1) / α.
Inserting this in (4) at the last point, we get
f(x_R) − f∗ ≤ ν exp(−(γ/α) log((e^α − 1) N / (e^α C) + 1)) ≤ ν / (α e^{−α} C^{−1} N + 1)^{γ/α},
where we used e^x − 1 ≥ x. This yields the second part of the result.
The last approximation in the case α > 0 simplifies the analysis that follows without significantly affecting the bounds. We also show in the Supplementary Material that using t̃_k = ⌈t_k⌉ does not significantly affect the bounds above. Remark that convergence bounds are generally linear or polynomial, such that we can extract a subsequence that converges linearly. Therefore our approach does not restrict the analysis of our scheme. It simplifies it and can be used for other algorithms, like gradient descent, as detailed in the Supplementary Material.
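A quick numerical sanity check (our own illustration) of the geometric-sum identity used in the proof of Lemma 2.1:

import numpy as np

C, alpha, R = 5.0, 0.3, 12
t = [C * np.exp(alpha * k) for k in range(1, R + 1)]
N = sum(t)
# Closed form used in the proof: N = C e^alpha (e^{alpha R} - 1) / (e^alpha - 1)
closed_form = C * np.exp(alpha) * (np.exp(alpha * R) - 1) / (np.exp(alpha) - 1)
print(abs(N - closed_form) < 1e-8)  # True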
We now analyze restart schedules t_k that ensure linear convergence. Our choice of t_k will heavily depend on the ratio between r and s (with s = 2 for smooth functions here), incorporated in the parameter τ = 1 − s/r defined in (2). Below, we show that if τ = 0, a constant schedule is sufficient to ensure linear convergence. When τ > 0, we need a geometrically increasing number of iterations for each cycle.
Proposition 2.2. Let f be a smooth convex function satisfying (Smooth) with parameters (2, L) and (Sharp) with parameters (r, µ) on a set K. Assume that we are given x0 ∈ R^n such that {x | f(x) ≤ f(x0)} ⊂ K. Run Algorithm 1 from x0 with iteration schedule t_k = C∗_{κ,τ} e^{τk}, for k = 1, . . . , R, where
C∗_{κ,τ} := e^{1−τ} (cκ)^{1/2} (f(x0) − f∗)^{−τ/2}, (5)
with κ and τ defined in (2) and c = 4e^{2/e} here. The precision reached at the last point x̂ is given by,
f(x̂) − f∗ ≤ exp(−2e^{−1}(cκ)^{−1/2} N) (f(x0) − f∗) = O(exp(−κ^{−1/2} N)), when τ = 0, (6)
while,
f(x̂) − f∗ ≤ (f(x0) − f∗) / (τ e^{−1} (f(x0) − f∗)^{τ/2} (cκ)^{−1/2} N + 1)^{2/τ} = O(N^{−2/τ}), when τ > 0, (7)
where N = Σ_{k=1}^R t_k is the total number of iterations.
Proof. Our strategy is to choose t_k such that the objective is linearly decreasing, i.e.
f(x_k) − f∗ ≤ e^{−γk} (f(x0) − f∗), (8)
for some γ ≥ 0 depending on the choice of t_k. This directly holds for k = 0 and any γ ≥ 0. Combining (Sharp) with the complexity bound in (3), we get
f(x_k) − f∗ ≤ (cκ / t_k^2) (f(x_{k−1}) − f∗)^{2/r},
where c = 4e^{2/e}, using that r^{2/r} ≤ e^{2/e}. Assuming recursively that (8) is satisfied at iteration k − 1 for a given γ, we have
f(x_k) − f∗ ≤ (cκ / t_k^2) e^{−γ(2/r)(k−1)} (f(x0) − f∗)^{2/r},
and to ensure (8) at iteration k, we impose
(cκ / t_k^2) e^{−γ(2/r)(k−1)} (f(x0) − f∗)^{2/r} ≤ e^{−γk} (f(x0) − f∗).
Rearranging terms in this last inequality, using τ defined in (2), we get
t_k ≥ e^{γ(1−τ)/2} (cκ)^{1/2} (f(x0) − f∗)^{−τ/2} e^{τγk/2}. (9)
For a given γ ≥ 0, we can set t_k = C e^{αk} where
C = e^{γ(1−τ)/2} (cκ)^{1/2} (f(x0) − f∗)^{−τ/2} and α = τγ/2, (10)
and Lemma 2.1 then yields
f(x̂) − f∗ ≤ exp(−γ e^{−γ/2} (cκ)^{−1/2} N) (f(x0) − f∗), when τ = 0,
while
f(x̂) − f∗ ≤ (f(x0) − f∗) / ((τ/2) γ e^{−γ/2} (cκ)^{−1/2} (f(x0) − f∗)^{τ/2} N + 1)^{2/τ}, when τ > 0.
These bounds are minimal for γ = 2, which yields the desired result.
When τ = 0, bound (6) matches the classical complexity bound for smooth strongly convex functions [Nesterov, 2013b]. When τ > 0 on the other hand, bound (7) highlights a much faster convergence rate than accelerated gradient methods. The sharper the function (i.e. the smaller r), the faster the convergence. This matches the lower bounds for optimizing smooth and sharp functions [Arjevani and Shamir, 2016; Nemirovskii and Nesterov, 1985, Page 6] up to constant factors. Also, setting t_k = C∗_{κ,τ} e^{τk} yields continuous bounds on precision, i.e. when τ → 0, bound (7) converges to bound (6), which also shows that for τ near zero, constant restart schemes are almost optimal.
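A short sketch (ours) of the oracle schedule of Proposition 2.2, assuming the sharpness parameters and the initial gap are known; the rounding t̃_k = ⌈t_k⌉ mentioned above is applied:

import numpy as np

def oracle_schedule(L, mu, r, f0_gap, R, c=4 * np.e ** (2 / np.e)):
    """Schedule t_k = C*_{kappa,tau} e^{tau k} from Proposition 2.2 (smooth case, s = 2)."""
    s = 2.0
    kappa = L ** (2 / s) / mu ** (2 / r)
    tau = 1 - s / r
    C_star = np.e ** (1 - tau) * np.sqrt(c * kappa) * f0_gap ** (-tau / 2)
    return [int(np.ceil(C_star * np.exp(tau * k))) for k in range(1, R + 1)]

# For r = 2 (tau = 0) the schedule is constant; for r > 2 it grows geometrically.
print(oracle_schedule(L=1.0, mu=0.1, r=2, f0_gap=1.0, R=5))
print(oracle_schedule(L=1.0, mu=0.1, r=4, f0_gap=1.0, R=5))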
2.2 Adaptive scheduled restart
The previous restart schedules depend on the sharpness parameters (r, µ) in (Sharp). In general of course, these values are neither observed nor known a priori. Making our restart scheme adaptive is thus crucial to its practical performance. Fortunately, we show below that a simple logarithmic grid search strategy on these parameters is enough to guarantee nearly optimal performance.
We run several schemes with a fixed total number of inner iterations N to perform a log-scale grid search on τ and κ. We define these schemes as follows:
S_{i,0}: Algorithm 1 with t_k = C_i,
S_{i,j}: Algorithm 1 with t_k = C_i e^{τ_j k}, (11)
where C_i = 2^i and τ_j = 2^{−j}. We stop these schemes when the total number of inner algorithm iterations has exceeded N, i.e. at the smallest R such that Σ_{k=1}^R t_k ≥ N. The size of the grid search in C_i is naturally bounded, as we cannot restart the algorithm after more than N total inner iterations, so i ∈ [1, . . . , ⌊log₂ N⌋]. We will also show that when τ is smaller than 1/N, a constant schedule performs as well as the optimal geometrically increasing schedule, which crucially means we can also choose j ∈ [1, . . . , ⌈log₂ N⌉], and this limits the cost of the grid search. The following result details the convergence of this method; its notations are the same as in Proposition 2.2 and its technical proof can be found in the Supplementary Material. Proposition 2.3. Let f be a smooth convex function satisfying (Smooth) with parameters (2, L) and (Sharp) with parameters (r, µ) on a set K. Assume that we are given x0 ∈ R^n such that {x | f(x) ≤ f(x0)} ⊂ K and denote by N a given number of iterations. Run schemes S_{i,j} defined in (11) to solve (P) for i ∈ [1, . . . , ⌊log₂ N⌋] and j ∈ [0, . . . , ⌈log₂ N⌉], stopping each time after N total inner algorithm iterations, i.e. for R such that Σ_{k=1}^R t_k ≥ N.
Assume N is large enough, so that N ≥ 2C∗_{κ,τ}, and that if 1/N > τ > 0, then C∗_{κ,τ} > 1.
If τ = 0, there exists i ∈ [1, . . . , ⌊log₂ N⌋] such that scheme S_{i,0} achieves a precision given by
f(x̂) − f∗ ≤ exp(−e^{−1}(cκ)^{−1/2} N) (f(x0) − f∗).
If τ > 0, there exist i ∈ [1, . . . , ⌊log₂ N⌋] and j ∈ [1, . . . , ⌈log₂ N⌉] such that scheme S_{i,j} achieves a precision given by
f(x̂) − f∗ ≤ (f(x0) − f∗) / (τ e^{−1} (cκ)^{−1/2} (f(x0) − f∗)^{τ/2} (N − 1)/4 + 1)^{2/τ}.
Overall, running the logarithmic grid search has a complexity (log₂ N)^2 times higher than running N iterations using the optimal (oracle) scheme.
As shown in the Supplementary Material, scheduled restart schemes are theoretically efficient only if the algorithm itself makes a sufficient number of iterations to decrease the objective value. Therefore we need N large enough to ensure the efficiency of the adaptive method. If τ = 0, we naturally have C∗_{κ,0} ≥ 1; therefore if 1/N > τ > 0 and N is large, assuming C∗_{κ,τ} ≈ C∗_{κ,0}, we get C∗_{κ,τ} ≥ 1. This adaptive bound is similar to the one of Nesterov [2013b] for optimizing smooth strongly convex functions, in the sense that we lose approximately a log factor of the condition number of the function. However, our assumptions are weaker and we are able to tackle all regimes of the sharpness property, i.e. any exponent r ∈ [2, +∞], not just the strongly convex case. In the Supplementary Material we also analyze the simple gradient descent method under the sharpness assumption (Sharp). It shows that simple gradient descent achieves an O(ε^{−τ}) complexity for a given accuracy ε. Therefore restarting accelerated gradient methods reduces the complexity to O(ε^{−τ/2}) compared to simple gradient descent. This result is similar to the acceleration of gradient descent. We now extend this restart scheme to solve non-smooth or Hölder smooth convex optimization problems under the sharpness assumption.
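A sketch (ours) of the adaptive grid search (11); accelerated_gradient and f stand for the inner routine A and the objective, as in the earlier sketch, and each candidate scheme is simply cut off once N total inner iterations have been spent:

import numpy as np

def adaptive_restart(accelerated_gradient, f, x0, N):
    """Log-scale grid search over C_i = 2^i and tau_j = 2^{-j} (schemes (11))."""
    best_x, best_val = x0, f(x0)
    n_grid = int(np.floor(np.log2(N)))
    for i in range(1, n_grid + 1):
        for j in range(0, n_grid + 1):            # j = 0 encodes the constant schedule S_{i,0}
            C = 2.0 ** i
            tau = 0.0 if j == 0 else 2.0 ** (-j)
            x, used, k = x0, 0, 1
            while used < N:
                t_k = int(np.ceil(C * np.exp(tau * k)))
                x = accelerated_gradient(x, t_k)
                used += t_k
                k += 1
            if f(x) < best_val:
                best_x, best_val = x, f(x)
    return best_x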
3 Universal scheduled restarts for convex problems
In this section, we use the framework introduced by Nesterov [2015] to describe the smoothness of a convex function f, namely, we assume that there exist s ∈ [1, 2] and L > 0 such that, on a set J ⊂ R^n,
‖∇f(x) − ∇f(y)‖ ≤ L ‖x − y‖^{s−1}, for every x, y ∈ J.
Without further assumptions on f, the optimal rate of convergence for this class of functions is bounded as O(1/N^ρ), where N is the total number of iterations and
ρ = 3s/2 − 1, (12)
which gives ρ = 2 for smooth functions and ρ = 1/2 for non-smooth functions. The universal fast gradient method [Nesterov, 2015] achieves this rate by requiring only a target accuracy ε and a starting point x0. It outputs after t iterations a point x := U(x0, ε, t) such that
f(x) − f∗ ≤ ε/2 + (cL^{2/s} d(x0, X∗)^2) / (ε^{2/s} t^{2ρ/s}) · ε/2, (13)
where c is a constant (c = 2^{(4s−2)/s}). More details about the universal fast gradient method are given in the Supplementary Material.
We will again assume that f is sharp with parameters (r, µ) on a set K ⊇ X∗, i.e.
(µ/r) d(x, X∗)^r ≤ f(x) − f∗, for every x ∈ K. (Sharp)
As mentioned in Section 1.2, if r > s, smoothness or sharpness are local properties, i.e. either J or K or both are bounded; our analysis is therefore local. In the following we assume for simplicity, given an initial point x0, that smoothness and sharpness are satisfied simultaneously on the sublevel set {x | f(x) ≤ f(x0)}. The key difference with the smooth case described in the previous section is that here we schedule both the target accuracy ε_k used by the algorithm and the number of iterations t_k made at the kth run of the algorithm. Our scheme is described in Algorithm 2.
Algorithm 2 Universal scheduled restarts for convex minimization
Inputs: x0 ∈ R^n, ε_0 ≥ f(x0) − f∗, γ ≥ 0 and a sequence t_k for k = 1, . . . , R.
for k = 1, . . . , R do
    ε_k := e^{−γ} ε_{k−1}, x_k := U(x_{k−1}, ε_k, t_k)
end for
Output: x̂ := x_R
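A sketch (ours) of Algorithm 2, assuming a routine universal_fast_gradient(x, eps, t) implementing U(x, ε, t) is available; the name and signature are our assumption:

import numpy as np

def universal_scheduled_restart(universal_fast_gradient, x0, eps0, gamma, schedule):
    """Algorithm 2: restart the universal fast gradient method with a geometrically
    decreasing sequence of target accuracies eps_k = e^{-gamma} eps_{k-1}."""
    x, eps = x0, eps0
    for t_k in schedule:
        eps = np.exp(-gamma) * eps
        x = universal_fast_gradient(x, eps, t_k)
    return x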
Our strategy is to choose a sequence t_k that ensures
f(x_k) − f∗ ≤ ε_k,
for the geometrically decreasing sequence ε_k. The overall complexity of our method will then depend on the growth of t_k as described in Lemma 2.1. The proof is similar to the smooth case and can be found in the Supplementary Material.
Proposition 3.1. Let f be a convex function satisfying (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, µ) on a set K. Given x0 ∈ R^n, assume that {x | f(x) ≤ f(x0)} ⊂ J ∩ K. Run Algorithm 2 from x0 for a given ε_0 ≥ f(x0) − f∗ with
γ = ρ, t_k = C∗_{κ,τ,ρ} e^{τk}, where C∗_{κ,τ,ρ} := e^{1−τ} (cκ)^{s/(2ρ)} ε_0^{−τ/ρ},
where ρ is defined in (12), κ and τ are defined in (2) and c = 8e^{2/e} here. The precision reached at the last point x̂ is given by,
f(x̂) − f∗ ≤ exp(−ρ e^{−1} (cκ)^{−s/(2ρ)} N) ε_0 = O(exp(−κ^{−s/(2ρ)} N)), when τ = 0,
while,
f(x̂) − f∗ ≤ ε_0 (τ e^{−1} (cκ)^{−s/(2ρ)} ε_0^{τ/ρ} N + 1)^{−ρ/τ} = O(κ^{s/(2τ)} N^{−ρ/τ}), when τ > 0,
where N = Σ_{k=1}^R t_k is the total number of iterations.
This bound matches the lower bounds for optimizing smooth and sharp functions [Nemirovskii and Nesterov, 1985, Page 6] up to constant factors. Notice that, compared to Nemirovskii and Nesterov [1985], we can tackle non-smooth convex optimization by using the universal fast gradient algorithm of Nesterov [2015]. The rate of convergence in Proposition 3.1 is controlled by the ratio between τ and ρ. If these are unknown, a log-scale grid search will not be able to reach the optimal rate; even if ρ is known, we will miss the optimal rate by a constant factor. If both are known, in the case of non-smooth strongly convex functions for example, a grid search on C recovers nearly the optimal bound. We now show that if f∗ is known, restarting produces adaptive optimal rates.
4 Restart with termination criterion
Here, we assume that we know the optimum f∗ of (P), or have an exact termination criterion. This is the case for example in zero-sum matrix game problems or non-degenerate least-squares problems without regularization. We assume again that f satisfies (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, µ) on a set K. Given an initial point x0, we assume that smoothness and sharpness are satisfied simultaneously on the sublevel set {x | f(x) ≤ f(x0)}. We use again the universal fast gradient method U. Here however, we can stop the algorithm when it reaches the target accuracy, as we know the optimum f∗, i.e. we stop after t_ε inner iterations such that x = U(x0, ε, t_ε) satisfies f(x) − f∗ ≤ ε, and write x := C(x0, ε) for the output of this method. Here we simply restart this method and decrease the target accuracy by a constant factor after each restart. Our scheme is described in Algorithm 3.
Algorithm 3 Restart on criterion
Inputs: x0 ∈ R^n, f∗, γ ≥ 0, ε_0 = f(x0) − f∗
for k = 1, . . . , R do
    ε_k := e^{−γ} ε_{k−1}, x_k := C(x_{k−1}, ε_k)
end for
Output: x̂ := x_R
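A sketch (ours) of Algorithm 3; here each inner run is stopped as soon as the known optimal value certifies the current target accuracy, so no iteration schedule is needed. universal_fast_gradient_step is a hypothetical single-iteration routine of the universal method, returning an updated point and internal state:

import numpy as np

def restart_on_criterion(universal_fast_gradient_step, f, x0, f_star, gamma, n_restarts, max_inner=10**6):
    """Algorithm 3: shrink the target accuracy by e^{-gamma} after each restart and
    stop each inner run once f(x) - f_star falls below the current target."""
    x = x0
    eps = f(x0) - f_star
    for _ in range(n_restarts):
        eps *= np.exp(-gamma)
        state, inner = None, 0
        while f(x) - f_star > eps and inner < max_inner:   # termination criterion using the known f*
            x, state = universal_fast_gradient_step(x, eps, state)
            inner += 1
        # restart: the next run is warm-started at x with a smaller target accuracy
    return x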
The following result describes the convergence of this method. It relies on the idea that it cannot do more iterations than the best scheduled restart to achieve the target accuracy at each iteration. Its proof can be found in the Supplementary Material.
Proposition 4.1. Let f be a convex function satisfying (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, µ) on a set K. Given x0 ∈ R^n, assume that {x | f(x) ≤ f(x0)} ⊂ J ∩ K. Run Algorithm 3 from x0 with parameter γ = ρ. The precision reached at the last point x̂ is given by
f(x̂) − f∗ ≤ exp(−ρ e^{−1} (cκ)^{−s/(2ρ)} N) (f(x0) − f∗) = O(exp(−κ^{−s/(2ρ)} N)), when τ = 0,
while
f(x̂) − f∗ ≤ (f(x0) − f∗) / (τ e^{−1} (cκ)^{−s/(2ρ)} (f(x0) − f∗)^{τ/ρ} N + 1)^{ρ/τ} = O(κ^{s/(2τ)} N^{−ρ/τ}), when τ > 0,
where N is the total number of iterations, ρ is defined in (12), κ and τ are defined in (2) and c = 8e^{2/e} here.
Therefore if f∗ is known, this method is adaptive, contrary to the general case in Proposition 3.1. It can even adapt to the local values of L or µ, as we use a criterion instead of a preset schedule. Here, stopping using f(x_k) − f∗ implicitly yields optimal choices of C and τ. A closer look at the proof shows that the dependency in γ of this restart scheme is a factor h(γ) = γ e^{−γ/ρ} of the number of iterations. Taking γ = 1 then leads to a suboptimal constant factor of at most h(ρ)/h(1) ≤ e/2 ≈ 1.3 for ρ ∈ [1/2, 2], so running this scheme with γ = 1 makes it parameter-free while achieving nearly optimal bounds.
5 Numerical Results
We illustrate our results by testing our adaptive restart methods, denoted Adap and Crit, introduced respectively in Sections 2.2 and 4 on several problems and compare them against simple gradient descent (Grad), accelerated gradient methods (Acc), and the restart heuristic enforcing monotonicity (Mono in [O’Donoghue and Candes, 2015]). For Adap we plot the convergence of the best method found by grid search to compare with the restart heuristic. This implicitly assumes that the grid search is run in parallel with enough servers. For Crit we use the optimal f∗ found by another solver. This gives an overview of its performance in order to potentially approximate it along the iterations
in a future work as done with Polyak steps [Polyak, 1987]. All restart schemes were done using the accelerated gradient with backtracking line search detailed in the Supplementary Material, with large dots representing restart iterations.
The results focused on unconstrained problems but our approach can directly be extended to composite problems by using the proximal variant of the gradient, accelerated gradient and universal fast gradient methods [Nesterov, 2015] as detailed in the Supplementary Material. This includes constrained optimization as a particular case by adding the indicator function of the constraint set to the objective (as in the SVM example below).
In Figure 1, we solve classification problems with various losses on the UCI Sonar data set [Asuncion and Newman, 2007]. For the least squares loss on the Sonar data set, we observe much faster convergence of the restart schemes compared to the accelerated method. These results were already observed by O’Donoghue and Candes [2015]. For the logistic loss, we observe that restart does not provide much improvement. The backtracking line search on the Lipschitz constant may be sufficient to capture the geometry of the problem. For the hinge loss, we regularize by a squared norm and optimize the dual, which means solving a quadratic problem with box constraints. We observe here that the scheduled restart scheme converges much faster, while restart heuristics may be activated too late. We observe similar results for the LASSO problem. In general, Crit ensures the theoretical accelerated rate but Adap exhibits more consistent behavior. This highlights the benefits of a sharpness assumption for these last two problems. Precisely quantifying sharpness from data/problem structure is a key open problem.
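To illustrate the kind of experiment described above, the following self-contained sketch (entirely ours, with synthetic data standing in for the 208 x 60 Sonar matrix and a plain Nesterov inner routine in place of the line-search variant) runs a constant restart schedule on a least-squares problem:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((208, 60))      # synthetic stand-in for the Sonar design matrix
b = rng.standard_normal(208)
L = np.linalg.norm(A, 2) ** 2           # smoothness constant of the least-squares loss

def f(w):
    return 0.5 * np.linalg.norm(A @ w - b) ** 2

def grad(w):
    return A.T @ (A @ w - b)

def accelerated_gradient(x0, t):
    """Plain Nesterov acceleration with step 1/L, playing the role of A(x0, t)."""
    x, y = x0.copy(), x0.copy()
    for k in range(1, t + 1):
        x_next = y - grad(y) / L
        y = x_next + (k - 1) / (k + 2) * (x_next - x)
        x = x_next
    return x

x = np.zeros(60)
for t_k in [50] * 10:                   # constant schedule, i.e. a scheme of type S_{i,0}
    x = accelerated_gradient(x, t_k)    # restart from the last iterate
print(f(x))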
Acknowledgments
The authors would like to acknowledge support from the chaire Économie des nouvelles données with the data science joint research initiative with the fonds AXA pour la recherche, a gift from Société Générale Cross Asset Quantitative Research and an AMX fellowship. The authors are affiliated to PSL Research University, Paris, France. | 1. What is the focus of the paper regarding first-order algorithms for Holder-smooth convex optimization?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis and practical applications?
3. How does the reviewer assess the novelty and significance of the paper's contributions compared to prior works, such as Nemirovski and Nesterov's original paper?
4. Are there any concerns or suggestions regarding the paper's emphasis on adaptive methods and log-scale grid search?
5. How does the reviewer evaluate the paper's overall quality and relevance to NIPS? | Review | Review
This paper considers first-order algorithms for Holder-smooth convex optimization in the oracle model with an additional sharpness assumption, guaranteeing that, within a neighborhood of the optimum, a reduction in objective value yields a reduction in distance from the optimum. Recently, there has been growing interest in the algorithmic consequences of the presence of sharpness, particularly in the setting of alternating minimization and of compressed sensing.
Sharpness can be exploited to speed up the convergence of first-order methods, such as Nesterov's accelerated gradient descent, by appropriately restarting the algorithm after a certain number of iterations, possibly changing with the number of rounds. First, the authors provide asymptotically optimal restart schedules for this class of problems for given sharpness parameters mu and r. While this is interesting, the result is essentially the same as that appearing, in more obscure terms, in Nemirovski and Nesterov's original 1985 paper "Optimal methods of smooth convex optimization". See paragraph 5 of that paper.
More importantly, the authors show that a log-scale grid search can be performed to construct adaptive methods that work in settings where mu and r are unknown, which is typical in sharpness applications. This appears to be the main novel idea of the paper. From a theoretical point of view, I find this to be a fairly straightforward observation. On the other hand, such an observation may be important in practice. Indeed, the authors also show a small number of practical examples in the context of classification, in which the restart schedules significantly improve performance. At the same time, the fact that restarts can greatly help the convergence of accelerated methods has already been observed before (see O'Donoghue and Candes, as cited in the paper).
In conclusion, I find the paper interesting from a practical point of view and I wish that the authors had focused more on the empirical comparison of their restart schedule vs that of Nemirovski and Nesterov and others. From a theoretical point of view, my feeling is that the contribution is good but probably not good enough for NIPS. It might help if the authors, in their rebuttal, explained more clearly the relation of their non-adaptive bounds with those of Nemirovski and Nesterov. |
NIPS | Title
Sharpness, Restart and Acceleration
Abstract
The Łojasiewicz inequality shows that sharpness bounds on the minimum of convex optimization problems hold almost generically. Sharpness directly controls the performance of restart schemes, as observed by Nemirovskii and Nesterov [1985]. The constants quantifying error bounds are of course unobservable, but we show that optimal restart strategies are robust, and searching for the best scheme only increases the complexity by a logarithmic factor compared to the optimal bound. Overall then, restart schemes generically accelerate accelerated methods.
N/A
Introduction
We study convex optimization problems of the form
minimize f(x) (P)
where f is a convex function defined on Rn. The complexity of these problems using first order methods is generically controlled by smoothness assumptions on f such as Lipschitz continuity of its gradient. Additional assumptions such as strong convexity or uniform convexity provide respectively linear [Nesterov, 2013b] and faster polynomial [Juditski and Nesterov, 2014] rates of convergence. However, these assumptions are often too restrictive to be applied. Here, we make a much weaker and generic assumption that describes the sharpness of the function around its minimizers by constants µ ≥ 0 and r ≥ 1 such that
µ r d(x,X∗)r ≤ f(x)− f∗, for every x ∈ K, (Sharp)
where f∗ is the minimum of f , K ⊂ Rn is a compact set, d(x,X∗) = miny∈X∗ ‖x − y‖ is the distance from x to the set X∗ ⊂ K of minimizers of f 1 for the Euclidean norm ‖ · ‖. This defines a lower bound on the function around its minimizers: for r = 1, f shows a kink around its minimizers and the larger is r the flatter is the function around its minimizers. We tackle this property by restart schemes of classical convex optimization algorithms.
Sharpness assumption (Sharp) is better known as a Hölderian error bound on the distance to the set of minimizers. Hoffman [Hoffman, 1952] first introduced error bounds to study system of linear inequalities. Natural extensions were then developed for convex optimization [Robinson, 1975; Mangasarian, 1985; Auslender and Crouzeix, 1988], notably through the concept of sharp minima [Polyak, 1979; Burke and Ferris, 1993; Burke and Deng, 2002]. But the most striking discovery was made by Łojasiewicz [Łojasiewicz, 1963, 1993] who proved inequality (Sharp) for real analytic and subanalytic functions. It has then been extended to non-smooth subanalytic convex functions by Bolte et al. [2007]. Overall, since (Sharp) essentially measures the sharpness of minimizers, it holds somewhat generically. On the other hand, this inequality is purely descriptive as we have no hope of ever observing either r or µ, and deriving adaptive schemes is crucial to ensure practical relevance.
1We assume the problem feasible, i.e. X∗ 6= ∅.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Łojasiewicz inequalities either in the form of (Sharp) or as gradient dominated properties [Polyak, 1979] led to new simple convergence results [Karimi et al., 2016], in particular for alternating and splitting methods [Attouch et al., 2010; Frankel et al., 2015], even in the non-convex case [Bolte et al., 2014]. Here we focus on Hölderian error bounds as they offer simple explanation of accelerated rates of restart schemes.
Restart schemes were already studied for strongly or uniformly convex functions [Nemirovskii and Nesterov, 1985; Nesterov, 2013a; Juditski and Nesterov, 2014; Lin and Xiao, 2014]. In particular, Nemirovskii and Nesterov [1985] link a “strict minimum” condition akin to (Sharp) with faster convergence rates using restart schemes which form the basis of our results, but do not study the cost of adaptation and do not tackle the non-smooth case. In a similar spirit, weaker versions of this strict minimum condition were used more recently to study the performance of restart schemes in [Renegar, 2014; Freund and Lu, 2015; Roulet et al., 2015]. The fundamental question of a restart scheme is naturally to know when must an algorithm be stopped and relaunched. Several heuristics [O’Donoghue and Candes, 2015; Su et al., 2014; Giselsson and Boyd, 2014] studied adaptive restart schemes to speed up convergence of optimal methods. The robustness of restart schemes was then theoretically studied by Fercoq and Qu [2016] for quadratic error bounds, i.e. (Sharp) with r = 2, that LASSO problem satisfies for example. Fercoq and Qu [2017] extended recently their work to produce adaptive restarts with theoretical guarantees of optimal performance, still for quadratic error bounds. Previous references focus on smooth problems, but error bounds appear also for non-smooth ones, Gilpin et al. [2012] prove for example linear converge of restart schemes in bilinear matrix games where the minimum is sharp, i.e. (Sharp) with r = 1.
Our contribution here is to derive optimal scheduled restart schemes for general convex optimization problems for smooth, non-smooth or Hölder smooth functions satisfying the sharpness assumption. We then show that for smooth functions these schemes can be made adaptive with nearly optimal complexity (up to a squared log term) for a wide array of sharpness assumptions. We also analyze restart criterion based on a sufficient decrease of the gap to the minimum value of the problem, when this latter is known in advance. In that case, restart schemes are shown ot be optimal without requiring any additional information on the function.
1 Problem assumptions
1.1 Smoothness
Convex optimization problems (P) are generally divided in two classes: smooth problems, for which f has Lipschitz continuous gradients, and non-smooth problems for which f is not differentiable. Nesterov [2015] proposed to unify point of views by assuming generally that there exist constants 1 ≤ s ≤ 2 and L > 0 such that
‖∇f(x)−∇f(y)‖ ≤ L‖x− y‖s−1, for all x, y ∈ Rn (Smooth)
where ∇f(x) is any sub-gradient of f at x if s = 1 (otherwise this implies differentiability of f ). For s = 2, we retrieve the classical definition of smoothness [Nesterov, 2013b]. For s = 1 we get a classical assumption made in non-smooth convex optimization, i.e., that sub-gradients of the function are bounded. For 1 < s < 2, this assumes gradient of f to be Hölder Lipschitz continuous. In a first step, we will analyze restart schemes for smooth convex optimization problems, then generalize to general smoothness assumption (Smooth) using appropriate accelerated algorithms developed by Nesterov [2015].
1.2 Error bounds
In general, an error bound is an inequality of the form
d(x,X∗) ≤ ω(f(x)− f∗),
where ω is an increasing function at 0, called the residual function, and x may evolve either in the whole space or in a bounded set, see Bolte et al. [2015] for more details. We focus on Hölderian Error Bounds (Sharp) as they are the most common in practice. They are notably satisfied by a analytic and subanalytic functions but the proof (see e.g. Bierstone and Milman [1988, Theorem 6.4]) is shown using topological arguments that are far from constructive. Hence, outside of some
particular cases (e.g. strong convexity), we cannot assume that the constants in (Sharp) are known, even approximately.
Error bounds can generically be linked to Łojasiewicz inequality that upper bounds magnitude of the gradient by values of the function [Bolte et al., 2015]. Such property paved the way to many recent results in optimization [Attouch et al., 2010; Frankel et al., 2015; Bolte et al., 2014]. Here we will see that (Sharp) is sufficient to acceleration of convex optimization algorithms by their restart. Note finally that in most cases, error bounds are local properties hence the convergence results that follow will generally be local.
1.3 Sharpness and smoothness
Let f be a convex function on Rn satisfying (Smooth) with parameters (s, L). This property ensures that, f(x) ≤ f∗ + Ls ‖x − y‖
s, for given x ∈ Rn and y ∈ X∗. Setting y to be the projection of x onto X∗, this yields the following upper bound on suboptimality
f(x)− f∗ ≤ L s d(x,X∗)s. (1)
Now, assume that f satisfies the error bound (Sharp) on a setK with parameters (r, µ). Combining (1) and (Sharp) this leads for every x ∈ K,
sµ rL ≤ d(x,X∗)s−r.
This means that necessarily s ≤ r by taking x → X∗. Moreover if s < r, this last inequality can only be valid on a bounded set, i.e. either smoothness or error bound or both are valid only on a bounded set. In the following, we write
κ , L 2 s /µ 2 r and τ , 1− s
r (2)
respectively a generalized condition number for the function f and a condition number based on the ratio of powers in inequalities (Smooth) and (Sharp). If r = s = 2, κ matches the classical condition number of the function.
2 Scheduled restarts for smooth convex problems
In this section, we seek to solve (P) assuming that the function f is smooth, i.e. satisfies (Smooth) with s = 2 and L > 0. Without further assumptions on f , an optimal algorithm to solve the smooth convex optimization problem (P) is Nesterov’s accelerated gradient method [Nesterov, 1983]. Given an initial point x0, this algorithm outputs, after t iterations, a point x = A(x0, t) such that
f(x)− f∗ ≤ cL t2 d(x0, X ∗)2, (3)
where c > 0 denotes a universal constant (whose value will be allowed to vary in what follows, with c = 4 here). We assume without loss of generality that f(x) ≤ f(x0). More details about Nesterov’s algorithm are given in Supplementary Material.
In what follows, we will also assume that f satisfies (Sharp) with parameters (r, µ) on a set K ⊇ X∗, which means
µ r d(x,X∗)r ≤ f(x)− f∗, for every x ∈ K. (Sharp)
As mentioned before if r > s = 2, this property is necessarily local, i.e. K is bounded. We assume then that given a starting point x0 ∈ Rn, sharpness is satisfied on the sublevel set {x| f(x) ≤ f(x0)}. Remark that if this property is valid on an open set K ⊃ X∗, it will also be valid on any compact set K ′ ⊃ K with the same exponent r but a potentially lower constant µ. The scheduled restart schemes we present here rely on a global sharpness hypothesis on the sublevel set defined by the initial point and are not adaptive to constant µ on smaller sublevel sets. On the other hand, restarts on criterion that we present in Section 4, assuming that f∗ is known, adapt to the value of µ. We now describe a restart scheme exploiting this extra regularity assumption to improve the computational complexity of solving problem (P) using accelerated methods.
2.1 Scheduled restarts
Here, we schedule the number of iterations tk made by Nesterov’s algorithm between restarts, with tk the number of (inner) iterations at the kth algorithm run (outer iteration). Our scheme is described in Algorithm 1 below.
Algorithm 1 Scheduled restarts for smooth convex minimization Inputs : x0 ∈ Rn and a sequence tk for k = 1, . . . , R. for k = 1, . . . , R do
xk := A(xk−1, tk) end for Output : x̂ := xR
The analysis of this scheme and the following ones relies on two steps. We first choose schedules that ensure linear convergence in the iterates xk at a given rate. We then adjust this linear rate to minimize the complexity in terms of the total number of iterations.
We begin with a technical lemma which assumes linear convergence holds, and connects the growth of tk, the precision reached and the total number of inner iterations N . Lemma 2.1. Let xk be a sequence whose kth iterate is generated from the previous one by an algorithm that runs tk iterations and write N = ∑R k=1 tk the total number of iterations to output a point xR. Suppose setting tk = Ceαk, k = 1, . . . , R for some C > 0 and α ≥ 0 ensures that outer iterations satisfy f(xk)− f∗ ≤ νe−γk, (4) for all k ≥ 0 with ν ≥ 0 and γ ≥ 0. Then precision at the output is given by,
f(xR)− f∗ ≤ ν exp(−γN/C), when α = 0,
and f(xR)− f∗ ≤
ν
(αe−αC−1N + 1) γ α
, when α > 0.
Proof. When α = 0, N = RC, and inserting this in (4) at the last point xR yields the desired result. On the other hand, when α > 0, we have N = ∑R k=1 tk = Ce α eαR−1 eα−1 , which gives
R = log ( eα−1 eαC N + 1 ) /α. Inserting this in (4) at the last point, we get
f(xR)− f∗ ≤ ν exp ( − γα log ( eα−1 eαC N + 1 )) ≤ ν
(αe−αC−1N+1) γ α ,
where we used ex − 1 ≥ x. This yields the second part of the result.
The last approximation in the case α > 0 simplifies the analysis that follows without significantly affecting the bounds. We also show in Supplementary Material that using t̃k = dtke does not significantly affect the bounds above. Remark that convergence bounds are generally linear or polynomial such that we can extract a subsequence that converges linearly. Therefore our approach does not restrict the analysis of our scheme. It simplifies it and can be used for other algorithms like the gradient descent as detailed in Supplementary Material.
We now analyze restart schedules tk that ensure linear convergence. Our choice of tk will heavily depend on the ratio between r and s (with s = 2 for smooth functions here), incorporated in the parameter τ = 1− s/r defined in (2). Below, we show that if τ = 0, a constant schedule is sufficient to ensure linear convergence. When τ > 0, we need a geometrically increasing number of iterations for each cycle. Proposition 2.2. Let f be a smooth convex function satisfying (Smooth) with parameters (2, L) and (Sharp) with parameters (r, µ) on a set K. Assume that we are given x0 ∈ Rn such that {x| f(x) ≤ f(x0)} ⊂ K. Run Algorithm 1 from x0 with iteration schedule tk = C∗κ,τeτk, for k = 1, . . . , R, where
C∗κ,τ , e 1−τ (cκ) 1 2 (f(x0)− f∗)− τ 2 , (5)
with κ and τ defined in (2) and c = 4e2/e here. The precision reached at the last point x̂ is given by,
f(x̂)− f∗ ≤ exp ( −2e−1(cκ)− 12N ) (f(x0)− f∗) = O ( exp(−κ− 12N) ) , when τ = 0, (6)
while,
f(x̂)− f∗ ≤ f(x0)− f ∗(
τe−1(f(x0)− f∗) τ 2 (cκ)− 1 2N + 1
) 2 τ
= O ( N− 2 τ ) , when τ > 0, (7)
where N = ∑R k=1 tk is the total number of iterations.
Proof. Our strategy is to choose tk such that the objective is linearly decreasing, i.e.
f (xk)− f∗ ≤ e−γk(f(x0)− f∗), (8)
for some γ ≥ 0 depending on the choice of tk. This directly holds for k = 0 and any γ ≥ 0. Combining (Sharp) with the complexity bound in (3), we get
f (xk)− f∗ ≤ cκt2k (f (xk−1)− f ∗) 2 r ,
where c = 4e2/e using that r2/r ≤ e2/e. Assuming recursively that (8) is satisfied at iteration k − 1 for a given γ, we have
f (xk)− f∗ ≤ cκe −γ 2 r (k−1)
t2k (f(x0)− f∗) 2 r ,
and to ensure (8) at iteration k, we impose
cκe−γ 2 r (k−1)
t2k (f(x0)− f∗) 2 r ≤ e−γk(f(x0)− f∗).
Rearranging terms in this last inequality, using τ defined in (2), we get
tk ≥ e γ(1−τ) 2 (cκ) 1 2 (f(x0)− f∗)− τ 2 e τγ 2 k. (9)
For a given γ ≥ 0, we can set tk = Ceαk where
C = e γ(1−τ) 2 (cκ) 1 2 (f(x0)− f∗)− τ 2 and α = τγ/2, (10)
and Lemma 2.1 then yields, f(x̂)− f∗ ≤ exp ( −γe− γ 2 (cκ)− 1 2N ) (f(x0)− f∗),
when τ = 0, while
f(x̂)− f∗ ≤ (f(x0)−f ∗)(
τ 2 γe
− γ 2 (cκ)− 1 2 (f(x0)−f∗) τ 2 N+1
) 2 τ ,
when τ > 0. These bounds are minimal for γ = 2, which yields the desired result.
When τ = 0, bound (6) matches the classical complexity bound for smooth strongly convex functions [Nesterov, 2013b]. When τ > 0 on the other hand, bound (7) highlights a much faster convergence rate than accelerated gradient methods. The sharper the function (i.e. the smaller r), the faster the convergence. This matches the lower bounds for optimizing smooth and sharp functions functions [Arjevani and Shamir, 2016; Nemirovskii and Nesterov, 1985, Page 6] up to constant factors. Also, setting tk = C∗κ,τe
τk yields continuous bounds on precision, i.e. when τ → 0, bound (7) converges to bound (6), which also shows that for τ near zero, constant restart schemes are almost optimal.
2.2 Adaptive scheduled restart
The previous restart schedules depend on the sharpness parameters (r, µ) in (Sharp). In general of course, these values are neither observed nor known a priori. Making our restart scheme adaptive is thus crucial to its practical performance. Fortunately, we show below that a simple logarithmic grid search strategy on these parameters is enough to guarantee nearly optimal performance.
We run several schemes with a fixed number of inner iterations N to perform a log-scale grid search on τ and κ. We define these schemes as follows.{
Si,0 : Algorithm 1 with tk = Ci, Si,j : Algorithm 1 with tk = Cieτjk,
(11)
where Ci = 2i and τj = 2−j . We stop these schemes when the total number of inner algorithm iterations has exceed N , i.e. at the smallest R such that ∑R k=1 tk ≥ N . The size of the grid search in Ci is naturally bounded as we cannot restart the algorithm after more than N total inner iterations, so i ∈ [1, . . . , blog2Nc]. We will also show that when τ is smaller than 1/N , a constant schedule performs as well as the optimal geometrically increasing schedule, which crucially means we can also choose j ∈ [1, . . . , dlog2Ne] and limits the cost of grid search. The following result details the convergence of this method, its notations are the same as in Proposition 2.2 and its technical proof can be found in Supplementary Material. Proposition 2.3. Let f be a smooth convex function satisfying (Smooth) with parameters (2, L) and (Sharp) with parameters (r, µ) on a set K. Assume that we are given x0 ∈ Rn such that {x| f(x) ≤ f(x0)} ⊂ K and denote N a given number of iterations. Run schemes Si,j defined in (11) to solve (P) for i ∈ [1, . . . , blog2Nc] and j ∈ [0, . . . , dlog2Ne], stopping each time after N total inner algorithm iterations i.e. for R such that ∑R k=1 tk ≥ N .
Assume N is large enough, so N ≥ 2C∗κ,τ , and if 1N > τ > 0, C ∗ κ,τ > 1.
If τ = 0, there exists i ∈ [1, . . . , blog2Nc] such that scheme Si,0 achieves a precision given by f(x̂)− f∗ ≤ exp ( −e−1(cκ)− 12N ) (f(x0)− f∗).
If τ > 0, there exist i ∈ [1, . . . , blog2Nc] and j ∈ [1, . . . , dlog2Ne] such that scheme Si,j achieves a precision given by
f(x̂)− f∗ ≤ f(x0)−f ∗(
τe−1(cκ)− 1 2 (f(x0)−f∗) τ 2 (N−1)/4+1
) 2 τ .
Overall, running the logarithmic grid search has a complexity (log2N) 2 times higher than running
N iterations using the optimal (oracle) scheme.
As showed in Supplementary Material, scheduled restart schemes are theoretically efficient only if the algorithm itself makes a sufficient number of iterations to decrease the objective value. Therefore we need N large enough to ensure the efficiency of the adaptive method. If τ = 0, we naturally have C∗κ,0 ≥ 1, therefore if 1N > τ > 0 and N is large, assuming C ∗ κ,τ ≈ C∗κ,0, we get C∗κ,τ ≥ 1. This adaptive bound is similar to the one of Nesterov [2013b] to optimize smooth strongly convex functions in the sense that we lose approximately a log factor of the condition number of the function. However our assumptions are weaker and we are able to tackle all regimes of the sharpness property, i.e. any exponent r ∈ [2,+∞], not just the strongly convex case. In the supplementary material we also analyze the simple gradient descent method under the sharpness (Sharp) assumption. It shows that simple gradient descent achieves a O( −τ ) complexity for a given accuracy . Therefore restarting accelerated gradient methods reduces complexity to O( −τ/2) compared to simple gradient descent. This result is similar to the acceleration of gradient descent. We extend now this restart scheme to solve non-smooth or Hölder smooth convex optimization problem under the sharpness assumption.
3 Universal scheduled restarts for convex problems
In this section, we use the framework introduced by Nesterov [2015] to describe smoothness of a convex function f , namely, we assume that there exist s ∈ [1, 2] and L > 0 on a set J ⊂ Rn, i.e.
‖∇f(x)−∇f(y)‖ ≤ L‖x− y‖s−1, for every x, y ∈ J.
Without further assumptions on f , the optimal rate of convergence for this class of functions is bounded as O(1/Nρ), where N is the total number of iterations and
ρ = 3s/2− 1, (12) which gives ρ = 2 for smooth functions and ρ = 1/2 for non-smooth functions. The universal fast gradient method [Nesterov, 2015] achieves this rate by requiring only a target accuracy and a starting point x0. It outputs after t iterations a point x , U(x0, , t), such that
f(x)− f∗ ≤ 2 + cL
2 s d(x0, X ∗)2
2 s t 2ρ s
2 , (13)
where c is a constant (c = 2 4s−2 s ). More details about the universal fast gradient method are given in Supplementary Material.
We will again assume that f is sharp with parameters (r, µ) on a set K ⊇ X∗, i.e. µ
r d(x,X∗)r ≤ f(x)− f∗, for every x ∈ K. (Sharp)
As mentioned in Section 1.2, if r > s, smoothness or sharpness are local properties, i.e. either J or K or both are bounded, our analysis is therefore local. In the following we assume for simplicity, given an initial point x0, that smoothness and sharpness are satisfied simultaneously on the sublevel set {x| f(x) ≤ f(x0)}. The key difference with the smooth case described in the previous section is that here we schedule both the target accuracy k used by the algorithm and the number of iterations tk made at the kth run of the algorithm. Our scheme is described in Algorithm 2.
Algorithm 2 Universal scheduled restarts for convex minimization Inputs : x0 ∈ Rn, 0 ≥ f(x0)− f∗, γ ≥ 0 and a sequence tk for k = 1, . . . , R. for k = 1, . . . , R do
k := e −γ k−1, xk := U(xk−1, k, tk)
end for Output : x̂ := xR
Our strategy is to choose a sequence tk that ensures f(xk)− f∗ ≤ k,
for the geometrically decreasing sequence k. The overall complexity of our method will then depend on the growth of tk as described in Lemma 2.1. The proof is similar to the smooth case and can be found in Supplementary Material. Proposition 3.1. Let f be a convex function satisfying (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, µ) on a set K. Given x0 ∈ Rn assume that {x|f(x) ≤ f(x0)} ⊂ J ∩K. Run Algorithm 2 from x0 for a given 0 ≥ f(x0)− f∗ with
γ = ρ, tk = C ∗ κ,τ,ρe τk, where C∗κ,τ,ρ , e 1−τ (cκ) s 2ρ − τρ 0
where ρ is defined in (12), κ and τ are defined in (2) and c = 8e2/e here. The precision reached at the last point x̂ is given by,
f(x̂)− f∗ ≤ exp ( −ρe−1(cκ)− s 2ρN ) 0 = O ( exp(−κ− s 2ρN) ) , when τ = 0,
while,
f(x̂)− f∗ ≤ 0( τe−1(cκ)− s 2ρ τ ρ 0 N + 1 )− ρτ = O (κ s2τN− ρτ ) , when τ > 0,
where N = ∑R k=1 tk is total number of iterations.
This bound matches the lower bounds for optimizing smooth and sharp functions [Nemirovskii and Nesterov, 1985, Page 6] up to constant factors. Notice that, compared to Nemirovskii and Nesterov [1985], we can tackle non-smooth convex optimization by using the universal fast gradient algorithm of Nesterov [2015]. The rate of convergence in Proposition 3.1 is controlled by the ratio between τ and ρ. If these are unknown, a log-scale grid search won’t be able to reach the optimal rate, even if ρ is known since we will miss the optimal rate by a constant factor. If both are known, in the case of non-smooth strongly convex functions for example, a grid-search on C recovers nearly the optimal bound. Now we will see that if f∗ is known, restart produces adaptive optimal rates.
4 Restart with termination criterion
Here, we assume that we know the optimum f∗ of (P), or have an exact termination criterion. This is the case for example in zero-sum matrix games problems or non-degenerate least-squares without regularization. We assume again that f satisfies (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, µ) on a set K. Given an initial point x0 we assume that smoothness and sharpness are satisfied simultaneously on the sublevel set {x| f(x) ≤ f(x0)}. We use again the universal gradient method U . Here however, we can stop the algorithm when it reaches the target accuracy as we know the optimum f∗, i.e. we stop after t inner iterations such that x = U(x0, , t ) satisfies f(x)− f∗ ≤ , and write x , C(x0, ) the output of this method. Here we simply restart this method and decrease the target accuracy by a constant factor after each restart. Our scheme is described in Algorithm 3.
Algorithm 3 Restart on criterion Inputs : x0 ∈ Rn, f∗, γ ≥ 0, 0 = f(x0)− f∗ for k = 1, . . . , R do
k := e −γ k−1, xk := C(xk−1, k)
end for Output : x̂ := xR
The following result describes the convergence of this method. It relies on the idea that it cannot do more iterations than the best scheduled restart to achieve the target accuracy at each iteration. Its proof can be found in Supplementary Material. Proposition 4.1. Let f be a convex function satisfying (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, µ) on a set K. Given x0 ∈ Rn assume that {x, f(x) ≤ f(x0)} ⊂ J ∩K. Run Algorithm 3 from x0 with parameter γ = ρ. The precision reached at the last point x̂ is given by,
f(x̂)− f∗ ≤ exp ( −ρe−1(cκ)− s 2ρN ) (f(x0)− f∗) = O ( exp(−κ− s 2ρN) ) , when τ = 0,
while,
f(x̂)− f∗ ≤ f(x0)− f ∗(
τe−1(cκ)− s 2ρ (f(x0)− f∗) τ ρN + 1
) ρ τ
= O ( κ s 2τN− ρ τ ) , when τ > 0,
whereN is the total number of iterations, ρ is defined in (12), κ and τ are defined in (2) and c = 8e2/e here.
Therefore if f∗ is known, this method is adaptive, contrary to the general case in Proposition 3.1. It can even adapt to the local values of L or µ as we use a criterion instead of a preset schedule. Here, stopping using f(xk) − f∗ implicitly yields optimal choices of C and τ . A closer look at the proof shows that the dependency in γ of this restart scheme is a factor h(γ) = γe−γ/ρ of the number of iterations. Taking γ = 1, leads then to a suboptimal constant factor of at most h(ρ)/h(1) ≤ e/2 ≈ 1.3 for ρ ∈ [1/2, 2], so running this scheme with γ = 1 makes it parameter-free while getting nearly optimal bounds.
5 Numerical Results
We illustrate our results by testing our adaptive restart methods, denoted Adap and Crit, introduced respectively in Sections 2.2 and 4, on several problems, and compare them against simple gradient descent (Grad), accelerated gradient methods (Acc), and the restart heuristic enforcing monotonicity (Mono in [O'Donoghue and Candes, 2015]). For Adap we plot the convergence of the best method found by grid search, to compare with the restart heuristic; this implicitly assumes that the grid search is run in parallel on enough servers. For Crit we use the optimal value f* found by another solver. This gives an overview of its performance, with a view to approximating f* along the iterations in future work, as done with Polyak steps [Polyak, 1987]. All restart schemes were run using the accelerated gradient method with backtracking line search detailed in the Supplementary Material, with large dots representing restart iterations.
The results focus on unconstrained problems, but our approach directly extends to composite problems by using the proximal variants of the gradient, accelerated gradient, and universal fast gradient methods [Nesterov, 2015], as detailed in the Supplementary Material. This includes constrained optimization as a particular case, by adding the indicator function of the constraint set to the objective (as in the SVM example below).
In Figure 1, we solve classification problems with various losses on the UCI Sonar data set [Asuncion and Newman, 2007]. For the least-squares loss on the Sonar data set, we observe much faster convergence of the restart schemes compared to the accelerated method; these results were already observed by O'Donoghue and Candes [2015]. For the logistic loss, we observe that restarting does not provide much improvement; the backtracking line search on the Lipschitz constant may be sufficient to capture the geometry of the problem. For the hinge loss, we regularize by a squared norm and optimize the dual, which means solving a quadratic problem with box constraints. We observe here that the scheduled restart scheme converges much faster, while restart heuristics may be activated too late. We observe similar results for the LASSO problem. In general, Crit ensures the theoretical accelerated rate, but Adap exhibits more consistent behavior. This highlights the benefits of a sharpness assumption for these last two problems. Precisely quantifying sharpness from data/problem structure is a key open problem.
Acknowledgments
The authors would like to acknowledge support from the chaire Économie des nouvelles données with the data science joint research initiative with the fonds AXA pour la recherche, a gift from Société Générale Cross Asset Quantitative Research and an AMX fellowship. The authors are affiliated to PSL Research University, Paris, France. | 1. What is the focus of the paper regarding restarting schemes and growth conditions?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any concerns or suggestions regarding the paper's content, such as discussions, references, or minor issues? | Review | Review
Summary of the paper
====================
This paper considers restarting schemes which allow one to explicitly incorporate growth properties (namely, sharpness) of convex functions into algorithms which do not necessarily exploit these additional favorable assumptions. First, the number of inner iterations per epoch is scheduled based on the parameters of the growth condition. As these parameters are hard to approximate, an adaptive scheduling is devised based on a parameter grid search. Finally, it is shown that one can obtain a near-optimal rate only by knowing the optimal value (omitting the requirement of knowing the sharpness parameters).
Evaluation
==========
The main contribution of the paper is combining the mechanism of restarting schemes with the growth conditions of convex functions. The actual rate obtained by this technique seems to be of somewhat narrow practical value (it requires strong prior knowledge or a grid search). However, from a theoretical standpoint, it is an interesting general approach to exploiting sharpness. That said, the paper seems to contribute to the study of restarting schemes only incrementally. The paper is well-written and easy to follow.
General Comments
================
- A more thorough discussion regarding other existing algorithms which obtain the same optimal rate is missing.
- Related Work which may be worth mentioning:
- similar upper bound: https://arxiv.org/pdf/1609.07358.pdf
- lower bound using restarting scheme
http://proceedings.mlr.press/v48/arjevani16.pdf
Minor Comments
==============
- L16: Might be worth emphasizing that f^* is taken over K (and not, e.g., the domain over which f is defined).
- L153: In what sense should we expect convergence?
- L217: Didn't find the definition for Q before this line (I did find a definition in the next section).
- L285 (appendix): broken reference ?? |
NIPS | Title
TPU-KNN: K Nearest Neighbor Search at Peak FLOP/s
Abstract
This paper presents a novel nearest neighbor search algorithm achieving TPU (Google Tensor Processing Unit) peak performance, outperforming state-of-theart GPU algorithms with similar level of recall. The design of the proposed algorithm is motivated by an accurate accelerator performance model that takes into account both the memory and instruction bottlenecks. Our algorithm comes with an analytical guarantee of recall in expectation and does not require maintaining sophisticated index data structure or tuning, making it suitable for applications with frequent updates. Our work is available in the open-source package of Jax and Tensorflow on TPU.
1 Introduction
The K-nearest neighbor (K-NN) search problem has a wide range of applications in machine learning and information retrieval systems, including image search (Jia et al., 2021; Babenko and Lempitsky, 2016), semantic textual retrieval (Liu et al., 2009; Cer et al., 2018), anomaly detection (Gu et al., 2019; Omar et al., 2013), recommendation systems (Sarwar et al., 2002; Zhao et al., 2019), as well as serving as a component for a downstream tasks (Borgeaud et al., 2021; Guu et al., 2020; Lindgren et al., 2021; Shazeer et al., 2017). Given a query, the objective of K-NN is to identify K closest datapoints from a database of finite number of data points in a vector space. The main challenge of designing a good K-NN algorithm is to compute accurate K-NN results while being computationally efficient.
Solving the K-NN problem on accelerators has emerging interests from both the academia and the industry (Johnson et al., 2021; Shanbhag et al., 2018; Zhao et al., 2020). Many accelerators can deliver hundreds of Tera Floating Point Operations Per Seconds (TFLOPS) vital to the neighbor distance computation. However, utilizing accelerators in K-NN problems is not straightforward; multiple issues in data locality, memory bandwidth, and multiple types of hardware parallelism need to be carefully considered to achieve high utilization. In this paper we extend the roofline performance model (Williams et al., 2009) to quantify the hardware characteristics accurately. As a result, we designed a K-NN algorithm to reach peak performance by the precise modeling of the accelerators, and our TPU implementation aligned with our predicted performance.
The main contributions of this work are:
*Equal contributions.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
• We extend the roofline model to address the operation throughput differences of the instructions, essential to the algorithm analysis in this paper.
• We design an approximate K-NN algorithm with recall and performance guarantees based on our proposed roofline model.
• We conduct experiments verifying our TPU implementation of the algorithm accurately aligned with the performance model and achieves state-of-the-art speed-recall trade-offs on standard nearest neighbor search benchmarks.
2 Preliminaries
This section covers the necessary notations to work with the nearest neighbor search problem. Given a matrix A ∈ R^{M×N}, we let a_{i,j} denote the item at the ith row and jth column of A, and a_i denote the ith row-vector of A. We use the matrix X ∈ R^{N×D} to abbreviate a set-representation of a database X = {x_i}_{i=1,2,...,N} with N data points, where each data point x_i ∈ R^D is a row vector of the matrix X in a D-dimensional vector space. The set and matrix representations of the database X are used interchangeably in this paper.
The K nearest neighbor search problem is stated as follows. Given a database X ∈ R^{N×D} and a query vector q ∈ R^D, find the subset S_q^* ⊂ X collecting the K closest data points to q:

S_q^* = K-argmin_{x∈X} D(q, x),   (1)
where D(x, y) is a distance measure such as the Euclidean distance D_{ℓ2}(x, y) := ‖x − y‖₂ or the cosine distance D_cos(x, y) := 1 − ⟨x, y⟩/(‖x‖‖y‖). A related problem is the maximum inner product search (MIPS), where the goal is to find the data points that have the highest inner products with the query:

S_q^* = K-argmax_{x∈X} ⟨q, x⟩.   (2)
MIPS is equivalent to the cosine similarity search when all data points are ℓ2-normalized.
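As a quick illustration of this equivalence (not taken from the original paper), the following NumPy sketch checks that maximum inner product and cosine similarity induce the same ranking once the database rows are ℓ2-normalized; the array sizes and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # database
q = rng.normal(size=(64,))               # query

X_unit = X / np.linalg.norm(X, axis=1, keepdims=True)   # l2-normalize rows

mips_order = np.argsort(-X_unit @ q)                     # maximum inner product ranking
cos = (X @ q) / (np.linalg.norm(X, axis=1) * np.linalg.norm(q))
cosine_order = np.argsort(-cos)                          # cosine similarity ranking

assert np.array_equal(mips_order, cosine_order)
```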
3 Related work
Exhaustively searching all pair-wise distances between the query and the entire database is compute-intensive and often infeasible on many platforms. Therefore, a problem extensively discussed in the literature (Wang et al., 2014, 2015) is to find approximate nearest neighbors (ANN) in exchange for speed. By convention, the quality of ANN is measured by
Recall := |S_q ∩ S_q^*| / |S_q^*|,   (3)

where S_q ⊂ X denotes the set of data points retrieved by the search method.
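For concreteness, a direct Python translation of the recall definition in (3) is given below; it is a trivial helper written for this text, not part of the paper's released code.

```python
def recall(retrieved, true_neighbors):
    """Recall = |S_q ∩ S_q*| / |S_q*| for one query, as in equation (3)."""
    retrieved, true_neighbors = set(retrieved), set(true_neighbors)
    return len(retrieved & true_neighbors) / len(true_neighbors)

# Example: 8 of the true top-10 neighbors were retrieved -> recall 0.8.
assert recall(range(8), range(10)) == 0.8
```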
Compressed domain search One class of ANN approaches is to search on a lossy-compressed problem domain. These methods are composed of two steps: a) search on a compressed representation² of the original problem to find a set of candidate data points; b) compute the distances between the query and the candidate data points to select the top-K results. Since only a subset of data points requires exact distance computation, the overall cost is reduced.
The two steps can be composed in arbitrary ways. Locality sensitive hashing (Andoni et al., 2015; Neyshabur and Srebro, 2015) applies search followed by scoring; tree-search (Muja and Lowe, 2014; Dasgupta and Freund, 2008) applies the two steps recursively; graph-search (Malkov and Yashunin, 2018) iterates between two steps until the stopping condition is met. And the inverted file (IVF)
2Here we mean data structures like tree, graph, locality sensitive hash etc.
method (Jegou et al., 2010; Babenko and Lempitsky, 2014; Baranchuk et al., 2018; Guo et al., 2020) searches on a subset of data points indexed by the k-means centroids.
We see that there are two major challenges with the compressed domain search:
• Fractional search has a poor cache reuse rate because the candidate data points for each query rarely overlap. We show in Section 4.2 that optimizing cache usage has huge headroom on accelerators.
• The speed-recall trade-off is data-dependent and non-trivial to tune. The key result of Beyer et al. (1999) states that the distance contrast of neighbors diminishes with increasing dimensionality (also known as the curse of high dimensionality). Furthermore, the key result of Rubinstein (2018) states that sub-linear time nearest neighbor search with high recall is impossible for the Euclidean, Manhattan, or Hamming distance; otherwise, it would contradict the Strong Exponential Time Hypothesis (Impagliazzo and Paturi, 1999).
Our work takes an opposite approach to focus on machine efficiency with zero search space pruning. Moreover, since our method computes all the distances, it is immune to the curse of high dimensionality.
Accelerators In this paper, the term accelerators refers to a class of specialized hardware for accelerating machine learning workloads. In particular, we are interested in the novel platforms that deliver high FLOP/s for distance computation, namely Google TPU V3, V4 and Nvidia GPU V100, A100 in our analysis and evaluation.
Modern accelerators have special computation units for matrix multiplication, providing a higher operation throughput over the regular coefficient-wise operations. The corresponding units are tensor cores in Nvidia GPUs (Markidis et al., 2018) and systolic arrays in Google TPUs (Jouppi et al., 2017; Norrie et al., 2021). Addressing these operation throughput differences is essential to our algorithm design.
While accelerators excel in parallelism, developing an efficient K-selection algorithm on accelerators is still an active research area (Monroe et al., 2011; Shanbhag et al., 2018; Johnson et al., 2021; Zhao et al., 2020). Accelerators with higher FLOP/s introduce a higher opportunity cost of computing the K-selection problem instead of the distance computation. The trend of the increasing FLOP/s in accelerators motivated us to optimize the FLOP/s usage by reducing the time required for computing K-selection.
4 Methodology
This section presents a performance model to identify non-trivial bottlenecks on multiple platforms and demonstrates some fundamental limits when designing algorithms for K-NN and related problems, and we see that the cache inefficiency of the compressed domain methods introduces a significant cost on accelerators.
We model the accelerator’s runtime as executing a sequence of computation kernels, where each kernel is a compiled subroutine on the accelerator used by the main program on the CPU. A kernel may be composed of one or several high-level operators: Einsum, ReLU, ArgMax, etc., and each kernel can have different performance characteristics.
Given a sequence of kernels k_i, we let W_i denote the total amount of work and P_i denote the operational speed of kernel k_i. Our goal is to estimate the total time of a program:
t = ∑_i W_i / P_i.   (4)
In the following example, we focus on the MIPS problem. Let Q ∈ R^{M×D} and X ∈ R^{N×D} denote the queries and the database; the runtime of a generic approximate-MIPS program can be modeled as

t = σ W_D / P + O(Auxiliary) ≥ σ W_D / P,   (5)
where W_D denotes the total FLOPs required for searching the entire database, and σ denotes the search fraction. We note that P varies by algorithm and platform. Traditionally, compressed domain search methods minimize σ but sacrifice cache efficiency. Our method takes an alternative route and optimizes P instead.
4.1 Instruction throughput-aware roofline model
This subsection describes how we model the kernel-dependent performance P on multiple platforms with a small extension of the roofline model.
The classic roofline model (Williams et al., 2009) is a function of the machine peak performance π measured in FLOP/s, the machine peak memory bandwidth β measured in bytes/s, and the arithmetic intensity I_MEM expressed as the ratio of floating-point operations performed to data movement (FLOP/byte). The model states that the performance is bounded by P ≤ min(π, β × I_MEM).
We wish to model kernels that have a mixture of floating-point operations accelerated by dedicated hardware as well as other coefficient-wise operations. The coefficient-wise operations are abbreviated as COPs. Almost every non matrix multiplication operation is a COP, including vectorized add, multiply, compare, conditional-move, etc. We use the symbol γ for the peak COP/s of a platform, and define the instruction throughput intensity I_COP as the ratio between the number of FLOPs and the number of COPs performed in a kernel (FLOP/COP). The attainable performance of a kernel is bounded by:
P ≤ min(π, β × I_MEM, γ × I_COP).   (6)
The statement is self-explanatory because an inadequate supply of any of these resources impedes the kernel throughput. Table 1 lists the properties of selected accelerators for our analysis.³ The roofline model is commonly used in accelerator profiling tools but is not as frequently discussed in algorithm design. The following sections show how the model prevents pitfalls due to the hardware constraints.
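The bound in (6) is easy to evaluate numerically. The sketch below is an illustrative helper written for this text (not from the paper); the hardware numbers in the example call are placeholders and should be replaced by the peak FLOP/s, bytes/s, and COP/s of the accelerator under study (e.g., from Table 1).

```python
def attainable_performance(pi, beta, gamma, i_mem, i_cop):
    """Attainable FLOP/s of a kernel under the extended roofline model, eq. (6).

    pi:    peak FLOP/s of the accelerator
    beta:  peak memory bandwidth (bytes/s)
    gamma: peak coefficient-wise operations per second (COP/s)
    i_mem: memory arithmetic intensity (FLOP/byte)
    i_cop: instruction throughput intensity (FLOP/COP)
    """
    return min(pi, beta * i_mem, gamma * i_cop)

# Placeholder numbers for illustration only: a 100 TFLOP/s chip with 1 TB/s of
# memory bandwidth and 5 TCOP/s, running a BLAS-3 kernel with D = 128, so that
# i_mem = D/2 (eq. 7) and i_cop = 2D/C with C = 3 COPs per dot product (Algorithm 1).
P = attainable_performance(pi=100e12, beta=1e12, gamma=5e12,
                           i_mem=128 / 2, i_cop=2 * 128 / 3)
```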
4.2 The memory bandwidth bound
This subsection demonstrates how to evaluate if a kernel hits the memory bandwidth wall. We associate the distance computation with three levels of BLAS (Dongarra et al., 1990). Level 1 BLAS describes vector operations on non-consecutive memory access, such as computing distances while traversing through a graph. Level 2 BLAS represents scoring a query with consecutively stored data points. Level 3 BLAS expresses batched query-database distance computation, often used in brute-force scoring.
Compressed domain searches are either level 1 or level 2 BLAS due to their cache inefficiency. They have O(1) memory arithmetic intensity because the number of FLOPs is proportional to the bytes read. Combining (5) and (6) we have the following remark:
Remark 1. Distance computations in compressed domain searches are memory bandwidth bounded. In our model, the runtime is lower bounded by: t ≥ O(σ W_D / β).
3Readers can find these numbers from the accelerators’ specification sheets.
To estimate the memory arithmetic intensity for level 3 BLAS, we continue to use Q ∈ R^{M×D} and X ∈ R^{N×D} to denote the queries and the database. In many K-NN applications N and M are much greater than D. The corresponding memory arithmetic intensity is:

I_MEM = 2MND / (4MN + o(MN)) ≈ D/2.   (7)
The largest term in the denominator of (7) is the 4MN bytes of the query-database distances. We omit the insignificant terms and refer readers to (Golub and Van Loan, 2013, Section 1.5.4) for a comprehensive review on memory transfers in block matrix multiplications.
Figure 1 shows that the distance scoring kernels of different BLAS levels can easily hit the memory bandwidth wall. In order to attain high performance, we designed our algorithm to aggregate the results within the kernel to avoid writing the O(MN) bytes into memory.
4.3 The instruction bandwidth bound
The use of COPs (non matrix multiplication instructions) introduces another slowdown. We let C denote the number of COPs used per dot-product score in a kernel equipped with COPs and matrix multiplication instructions. There are M × N dot-product scores, so the total number of COPs used in a kernel is CMN. To prevent hitting the COP bandwidth wall, we must satisfy:
I_COP = 2MND / (CMN) = 2D/C ≥ π/γ,   (8)

⟹ C ≤ 2D × γ/π.   (9)
The number of COPs we can afford in the kernels is scarce. Taking D = 128 as an example and substituting it into (9), we can only use 4 coefficient-wise instructions per dot-product on TPU V4, and 16 on GPU A100. We conclude with the following remark: Remark 2. An exact and generic K-selection algorithm cannot be efficiently implemented with the coefficient-wise operations on the selected platforms (GPU V100, A100, TPU V3 and V4).
Because of Remark 2, we develop an approximate approach to achieve peak performance.
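To see how tight the budget in (9) is, the snippet below (illustrative only) computes the per-dot-product COP budget quoted above for D = 128; the γ/π ratios are stand-ins chosen to reproduce the values stated in the text and should be read from Table 1 for a real accelerator.

```python
def cop_budget(d, gamma_over_pi):
    """Maximum COPs allowed per dot-product before the COP wall is hit, from eq. (9)."""
    return 2 * d * gamma_over_pi

# Ratios gamma/pi picked so the budgets match the values quoted in the text
# (4 COPs per dot-product for TPU V4, 16 for GPU A100); treat them as assumptions.
print(cop_budget(128, 1 / 64))  # -> 4.0
print(cop_budget(128, 1 / 16))  # -> 16.0
```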
5 Algorithm
Algorithm 1: PartialReduce for MIPS
Input:  Q ∈ R^{M×D}   (batch queries)
Input:  X ∈ R^{N×D}   (database)
Input:  2^W           (bin size)
Output: V ∈ R^{M×L}   (top-K values)
Output: A ∈ N^{M×L}   (top-K indices)
for i ← 1 to M do
    for j ← 1 to N do
        y_{i,j} ← ⟨q_i, x_j⟩
        l ← ShiftRight(j, W)                       /* Unrolled and does not cost a COP */
        b ← y_{i,j} > v_{i,l}                      /* COP 1: vectorized compare */
        v_{i,l} ← if b then y_{i,j} else v_{i,l}   /* COP 2: vectorized conditional move */
        a_{i,l} ← if b then j else a_{i,l}         /* COP 3: vectorized conditional move */
    end
end
Our algorithm consists of two kernels:
1. The PartialReduce kernel computes the distances and partially aggregates the results from M × N distances to M × L distances with the original indices.
2. ExactRescoring kernel is an optional kernel that aggregates the final top-K results. The complexity is O(ML log2(L)) by a bitonic sort followed by a truncation.
The PartialReduce kernel is where most of the time and compute is spent. See Algorithm 1 for an outline of the algorithm. We collect the top-1 distance from each of the L non-overlapping bins of size 2^W for each query, resulting in high arithmetic intensities:

I_MEM ≈ O(min(M, N)),   (10)

I_COP = 2MND / (CMN) = 2D/C.   (11)
We show in Section 6.1 that these arithmetic intensities achieve high performance on real-world databases. See Appendix A.3 for the detailed expansion of the algorithm and how the arithmetic intensities are derived.
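For readers who want a functional (though not performance-faithful) reference, the following NumPy sketch mimics the two kernels: a PartialReduce that keeps the best score and index per bin of size 2^W, and an ExactRescoring that sorts the L survivors. It is a simplified re-implementation written for this text, not the TPU kernel itself.

```python
import numpy as np

def partial_reduce_mips(Q, X, w):
    """Best (value, index) per bin of size 2**w for every query; mirrors Algorithm 1."""
    scores = Q @ X.T                                   # (M, N) dot products
    M, N = scores.shape
    L = -(-N // 2**w)                                  # number of bins, ceil(N / 2**w)
    pad = L * 2**w - N
    scores = np.pad(scores, ((0, 0), (0, pad)), constant_values=-np.inf)
    binned = scores.reshape(M, L, 2**w)                # (M, L, 2**w)
    arg_in_bin = binned.argmax(axis=-1)                # top-1 within each bin
    indices = arg_in_bin + np.arange(L) * 2**w         # map back to database indices
    values = np.take_along_axis(binned, arg_in_bin[..., None], axis=-1)[..., 0]
    return values, indices

def exact_rescoring(values, indices, k):
    """Sort the L candidates per query and keep the top-k."""
    order = np.argsort(-values, axis=-1)[:, :k]
    return (np.take_along_axis(values, order, axis=-1),
            np.take_along_axis(indices, order, axis=-1))

# Usage: approximate top-10 MIPS with bins of size 2**6 = 64.
Q = np.random.randn(4, 128)
X = np.random.randn(10_000, 128)
vals, idx = exact_rescoring(*partial_reduce_mips(Q, X, w=6), k=10)
```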
5.1 Recall estimation
This section shows that the PartialReduce kernel can achieve high recall with good speed. We reformulate our problem in terms of balls and bins. We have K balls, representing the top-K distances, that are thrown into L bins. The location of each ball is chosen independently and uniformly at random. We let Z denote the random variable counting the number of balls that do not have collisions. Following the recall definition (3) we have:
Recall ≥ Z/K,   (12)
which is a standard Birthday problem:
E[Recall] ≥ E[Z]/K = ((L − 1)/L)^{K−1}.   (13)
Our goal is to find the minimal L such that the expected recall is greater than or equal to the target recall r. Finding L is simple because (13) is invertible in the natural range 0 < r < 1.
E[Recall] ≥ r ⟹ L ≥ 1 / (1 − r^{1/(K−1)}) ≈ (K − 1)/(1 − r).   (14)
The approximation in (14) follows from Appendix A.4. Since L is of the order of K, and in most applications K ≪ N, the cost of the ExactRescoring kernel is amortized out. Thus we affirm the claim that our method attains high performance with an analytical recall guarantee.
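Equation (14) translates directly into a one-line rule for choosing the number of bins L from the target recall. The helper below is a small illustration written for this text.

```python
import math

def num_bins(k, target_recall):
    """Smallest L with expected recall >= target_recall, from equation (14)."""
    assert 0 < target_recall < 1 and k > 1
    return math.ceil(1.0 / (1.0 - target_recall ** (1.0 / (k - 1))))

# Example: top-10 search with 95% expected recall.
print(num_bins(10, 0.95))   # -> 176; the approximation (K-1)/(1-r) gives 180
```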
6 Evaluation
In this section, we show that our proposed algorithm and implementation are near the hardware limit and lead to superior performance over baselines of similar recall. We applied our algorithm to two datasets from the public ANN benchmarks (Aumüller et al., 2020). In our first evaluation, we compare the measured FLOP/s to the theoretical peak governed by the proposed refinement of the roofline model (6), showing that our implementation reaches the hardware peak performance. In the second benchmark, we compare the end-to-end performance with competitive baselines with pre-tuned parameters. We plot each algorithm's speed-recall curve and show that ours achieves the state of the art. Finally, we measure the algorithm's scalability by varying the dataset size and the number of TPUs used.
6.1 Comparison with the theoretical peak
This section shows that our refined roofline model (6) captures additional performance characteristics beyond the classic roofline model, and demonstrates that our kernels have near-optimal performance. We select the Glove⁴ (Pennington et al., 2014) and Sift⁵ (Jegou et al., 2010) datasets from the ANN benchmarks. Their corresponding distances are the cosine distance and the Euclidean distance. See the code snippets in Appendix A.1 and A.2.
⁴Released under the Apache 2.0 license. ⁵Released into the CC0 public domain.
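The abstract and acknowledgments mention that this work is released as the approx_max_k op in open-source JAX; a minimal usage sketch in that spirit is shown below. The exact module path and keyword names are assumptions based on the public JAX API at the time of writing and may differ between versions.

```python
import jax
import jax.numpy as jnp
from jax import lax

db = jax.random.normal(jax.random.PRNGKey(0), (100_000, 128))   # database
qy = jax.random.normal(jax.random.PRNGKey(1), (16, 128))        # batch of queries

scores = qy @ db.T                      # MIPS scores, shape (16, 100000)
# Approximate top-10 per query with a 95% expected recall target.
values, indices = lax.approx_max_k(scores, k=10, recall_target=0.95)
```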
See Figure 2: the colored lines represent the machines' peak performances, and the dots represent each benchmark with its measured FLOP/s. The classic roofline on the left shows that our in-cache aggregation strategy has a large memory arithmetic intensity (∼4,700), exceeding the memory-bandwidth ridge points π/β. However, it is difficult to diagnose from the classic roofline plot why the Euclidean distance search does not perform well on TPU V4.
Fortunately, when combined with the instruction bandwidth roofline, we can tell that the performance regression is caused by hitting the coefficient-wise operation throughput wall. Therefore we affirm the claim that our MIPS solution reaches the peak FLOP/s, and our Euclidean distance search solution meets the compute bound on TPU V4 and attains the peak FLOP/s on TPU V3.
6.2 Recall-speed benchmark
To evaluate the effectiveness of the K-NN algorithm in a realistic setting, we adopted the methodology of the public ANN benchmarks (Aumüller et al., 2020) to compare the end-to-end performance against other methods on the following datasets: Glove (Pennington et al., 2014), Sift (Jegou et al., 2010), NYTimes (Dua and Graff, 2017), and Last.fm (Bertin-Mahieux et al., 2011). The typical ANN benchmarks are performed on a single platform, and it is non-trivial to port our TPU algorithm to GPU or vice versa. Instead, we selected GPUs with parity in peak performance to the TPUs (Table 1).
We select the Faiss GPU (Johnson et al., 2021) implementation as our baseline. Faiss provides three algorithms: Flat, IVF-Flat, and IVF-PQ. The Flat algorithm performs a brute-force search, and the IVF-Flat and IVF-PQ algorithms correspond to the inverted file method without and with product quantization, respectively (Jegou et al., 2010; Johnson et al., 2021). We use the repository's suggested inverted file size (16384) in the IVF methods.
Figure 3 shows that our method significantly outperforms competing methods in the high-recall regions. We highlight that our method has a consistent recall-speed trade-off over different datasets, because our recall relies only on the order statistics instead of the information encoded by compressed domain search methods, which may vary across datasets. Since our method scores all the pair-wise distances, it is immune to the curse of high dimensionality.
6.3 Scalability benchmark
In the final benchmark, we examine the scalability of the algorithm from three aspects. First, we verify whether the measured performance is inversely proportional to the database size. Second, we compare the scaling characteristics to the fastest GPU implementation. Last but not least, we are interested in whether our algorithm can scale horizontally with the number of TPUs.
We conduct our evaluation on TPU V4 and Nvidia GPU A100, which have similar peak performance and memory bandwidth. We sample the Yandex Deep dataset⁶ (Babenko and Lempitsky, 2016) into ten different scales and measure the QPS of each approach at a similar recall. Figure 4 verifies that all measurements align with the ideal scalability model: QPS ∝ #chips/N. Our method remains the top performer at all database sizes and scales linearly with the number of TPU chips.
7 Discussion and future work
In Section 6, we benchmarked our method against others on platforms with similar performance. Some questions might arise: "Is the performance gain an algorithmic optimization or due to platform efficiency?" "Can we achieve the same performance gain on GPU?" "Does an efficient fractional search exist on accelerators?" We address these questions in this section.
7.1 Platform discussions
We first discuss the performance differences between platforms from a modeling perspective. In Section 4, we showed that the memory bandwidth and instruction throughput bounds apply to both GPU and TPU. For instance, it follows that to attain peak performance on any hardware platform, keeping the number of instructions used for collecting (approximate) top-k elements within 2D·γ/π per distance computation is a necessary condition.
Although our Algorithm 1 is platform-independent, achieving the hardware peak performance requires many low-level implementation details at the machine level, including cache management, preventing cross-core memory synchronization, in-register accumulation, and instruction scheduling. Typical high-performance libraries such as MKL, cuBLAS, and the Google TPU compiler use platform-specific assembly to take full control of the stated requirements.
6Released in CC BY 4.0.
Nevertheless, we cannot use the high-level interfaces of these libraries, because Algorithm 1 only performs well when it is integrated into the inner loop of the distance computations. Moreover, these libraries are closed-source, which further increases the implementation difficulty.
Fortunately, we have access to the TPU compiler internals, and we have integrated Algorithm 1 into the compiler to generate the desired assembly code, solidifying our analysis. We leave implementations on other platforms to future work.
7.2 Algorithm discussions
The roofline complexity of the fractional search is identical to BLAS-2 (matrix-vector multiplication), which is memory bandwidth bound. When the cycles spent on data transfer are mutually exclusive with our method, it introduces an enormous opportunity cost. Nevertheless, we see an opportunity in heterogeneous architectures, because a fractional search on the host is not mutually exclusive with applying our method on accelerators.
A motivating example is multi-billion-scale nearest neighbor search, where fitting the dataset into device memory is possible (through device sharding, which TensorFlow and Jax support natively) but not economical. Since brute-force distance computations are often involved in the auxiliary data structures used by the fractional search, we may run the brute-force portion on TPU in conjunction with the remaining search off-device. We note that heterogeneous architectures with off-device storage such as host RAM or even SSD (Jayaram Subramanya et al., 2019; Ren et al., 2020; Chen et al., 2021) are great starting points for future research.
8 Conclusion
Accelerator-based machine learning has become mainstream in academia and industry. However, the performance characteristics of accelerators are counter-intuitive and difficult to program for. In this paper, we propose a roofline-based complexity analysis framework to discuss the optimality of algorithms without low-level optimization details: unrolling factors, batch window sizes, vectorization, and systolic array scheduling, which are platform-dependent and lengthy to read. We demonstrated several examples of inferring the hardware performance limits by simply accounting for a kernel's total FLOPs, bytes transferred, and number of coefficient-wise instructions used. Our refined model foreshadowed a non-trivial performance regression caused by the coefficient-wise instruction bandwidth. We took it into account to design a new algorithm for K-NN and achieved peak performance on TPU. Finally, our experiments showed that our method outperforms state-of-the-art baselines on platforms with similar performance characteristics, which are known to be hard to beat.
Acknowledgments and Disclosure of Funding
We would like to thank the XLA team for their continuous effort on developing the state-of-the-art compiler and their full support in enabling our new op: approx_max_k. We are also grateful to the Google ScaNN team for the joint effort on bringing the impactful K-NN problem into the accelerator ecosystem. Last but not least, we thank Peter Hawkins, Edward Schwartz, and Mani Varadarajan for code reviews in Jax and Tensorflow, and Erik Lindgren for proofreading this paper.
This work was performed and funded by Google. | 1. What is the focus of the paper regarding NN-search algorithms and TPUs?
2. What are the strengths of the proposed approach, particularly its connection to theoretical observations and practical impact?
3. What are the weaknesses of the paper regarding its limited evaluation and lack of generalizability across different hardware platforms?
4. How straightforward is it to implement the proposed algorithm on other platforms, and what challenges might arise?
5. Have the authors sufficiently addressed the limitations of their work? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper presents a new NN-search algorithm that reaches peak performance on TPUs. The paper is based on observations of hardware architectural properties and the so-called roofline performance model. The algorithm is implemented for TPUs. The evaluation is done on two TPU versions using two KNN datasets, and compared to several GPU algorithms.
Strengths And Weaknesses
Strengths
Nice connection between theoretical observations and practical results
Good performance
Can have significant practical impact
Weaknesses
Limited evaluation: only evaluated on two versions of TPUs and two datasets
It would have been interesting to see how general the algorithm is, i.e., would it reach the same performance limits on other hardware platforms as well?
Questions
The algorithm descriptions (Algorithm 1 and Algorithm 2 (suppl. mtrl)) looks relatively clear and straight-forward to implement on other platforms. Can you elaborate a bit on why it would be such a substantial effort to do?
Limitations
I think the authors have adequately addressed the limitations of their work. |
NIPS | Title
TPU-KNN: K Nearest Neighbor Search at Peak FLOP/s
Abstract
This paper presents a novel nearest neighbor search algorithm achieving TPU (Google Tensor Processing Unit) peak performance, outperforming state-of-theart GPU algorithms with similar level of recall. The design of the proposed algorithm is motivated by an accurate accelerator performance model that takes into account both the memory and instruction bottlenecks. Our algorithm comes with an analytical guarantee of recall in expectation and does not require maintaining sophisticated index data structure or tuning, making it suitable for applications with frequent updates. Our work is available in the open-source package of Jax and Tensorflow on TPU.
1 Introduction
The K-nearest neighbor (K-NN) search problem has a wide range of applications in machine learning and information retrieval systems, including image search (Jia et al., 2021; Babenko and Lempitsky, 2016), semantic textual retrieval (Liu et al., 2009; Cer et al., 2018), anomaly detection (Gu et al., 2019; Omar et al., 2013), recommendation systems (Sarwar et al., 2002; Zhao et al., 2019), as well as serving as a component for a downstream tasks (Borgeaud et al., 2021; Guu et al., 2020; Lindgren et al., 2021; Shazeer et al., 2017). Given a query, the objective of K-NN is to identify K closest datapoints from a database of finite number of data points in a vector space. The main challenge of designing a good K-NN algorithm is to compute accurate K-NN results while being computationally efficient.
Solving the K-NN problem on accelerators has emerging interests from both the academia and the industry (Johnson et al., 2021; Shanbhag et al., 2018; Zhao et al., 2020). Many accelerators can deliver hundreds of Tera Floating Point Operations Per Seconds (TFLOPS) vital to the neighbor distance computation. However, utilizing accelerators in K-NN problems is not straightforward; multiple issues in data locality, memory bandwidth, and multiple types of hardware parallelism need to be carefully considered to achieve high utilization. In this paper we extend the roofline performance model (Williams et al., 2009) to quantify the hardware characteristics accurately. As a result, we designed a K-NN algorithm to reach peak performance by the precise modeling of the accelerators, and our TPU implementation aligned with our predicted performance.
The main contributions of this work are:
⇤Equal contributions.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
• We extend the roofline model to address the operation throughput differences of the instructions, essential to the algorithm analysis in this paper.
• We design an approximate K-NN algorithm with recall and performance guarantees based on our proposed roofline model.
• We conduct experiments verifying our TPU implementation of the algorithm accurately aligned with the performance model and achieves state-of-the-art speed-recall trade-offs on standard nearest neighbor search benchmarks.
2 Preliminaries
This section covers the necessary notations to work with the nearest neighbor search problem. Given a matrix A 2 RM⇥N , we let ai,j denote the item at the ith row and jth column of A, and ai denote the ith row-vector of A. We use the matrix X 2 RN⇥D to abbreviate a set-representation of a database X = {xi}i=1,2,...,N with N data points, where each data point xi 2 RD is a row vector of the matrix X in a D dimensional vector space. The set and matrix representation of database X are used interchangeably in this paper.
The K nearest neighbor search problem is stated as follows. Given a database X 2 RN⇥D and a query vector q 2 RD, find the subset S⇤ ⇢ X collecting the K-closest data points to q:
Sq ⇤ = K-argmin
x2X D(q,x), (1)
where D(x,y) is a distance measure such as Euclidean distance D`2(x,y) := kx yk2 or the cosine distance Dcos(x,y) := 1 hx,yi kxkkyk . A related problem is the maximum inner product search (MIPS), where the goal is to find the data points that have the highest inner products with the query:
Sq ⇤ = K-argmax
x2X hq,xi. (2)
MIPS is equivalent to the cosine similarity search when all data points are `2-normalized.
3 Related work
Exhaustively searching all pair-wise distances between the query and the entire database is computeintensive and often infeasible on many platforms. Therefore, a problem extensively discussed in the literature (Wang et al., 2014, 2015) is to find approximate nearest neighbors (ANN) in exchange of speed. By convention, the quality of ANN is measured by
Recall := |Sq \ Sq
⇤ |
|Sq ⇤ |
, (3)
where Sq ⇢ X denotes the set of data points retrieved by the search method.
Compressed domain search One class of ANN approaches is to search on a lossy-compressed problem domain. These methods are composed in two steps: a) search on compressed representation2 of the original problem to find a set of candidate data points, b) compute the distances between the query and the candidate data points to select the top-K results. Since only a subset of data points requires the exact distance computation, the overall cost is reduced.
The two steps can be composed in arbitrary ways. Locality sensitive hashing (Andoni et al., 2015; Neyshabur and Srebro, 2015) applies search followed by scoring; tree-search (Muja and Lowe, 2014; Dasgupta and Freund, 2008) applies the two steps recursively; graph-search (Malkov and Yashunin, 2018) iterates between two steps until the stopping condition is met. And the inverted file (IVF)
2Here we mean data structures like tree, graph, locality sensitive hash etc.
method (Jegou et al., 2010; Babenko and Lempitsky, 2014; Baranchuk et al., 2018; Guo et al., 2020) search on subset of data points indexed by the k-means centroids.
We see that there are two major challenges with the compressed domain search:
• Fractional search has a poor cache reuse rate because the candidate data points for each query rarely overlaps. We show optimizing the cache usage has a huge headroom for accelerators in Section 4.2.
• Tweaking the speed-recall trade-off is data-dependent and non-trivial to tune. The key result of Beyer et al. (1999) states that the distance contrast of neighbors diminishes with increasing dimensionality (also known as the curse of high dimensionality). Furthermore, the key result of Rubinstein (2018) states that sub-linear time nearest neighbor search with high recall is impossible for Euclidean, Manhattan, or Hamming distance; otherwise, it contradicts the Strong Exponential Time Hypothesis (Impagliazzo and Paturi, 1999).
Our work takes an opposite approach to focus on machine efficiency with zero search space pruning. Moreover, since our method computes all the distances, it is immune to the curse of high dimensionality.
Accelerators In this paper, the phrase accelerators represents a class of specialized hardware to accelerate machine learning workloads. In particular, we are interested in the novel platforms that deliver high FLOP/s for distance computation, namely Google TPU V3, V4, Nvidia GPU V100, and A100 in our analysis and evaluation.
Modern accelerators have special computation units for matrix multiplication, providing a higher operation throughput over the regular coefficient-wise operations. The corresponding units are tensor cores in Nvidia GPUs (Markidis et al., 2018) and systolic arrays in Google TPUs (Jouppi et al., 2017; Norrie et al., 2021). Addressing these operation throughput differences is essential to our algorithm design.
While accelerators excel in parallelism, developing an efficient K-selection algorithm on accelerators is still an active research area (Monroe et al., 2011; Shanbhag et al., 2018; Johnson et al., 2021; Zhao et al., 2020). Accelerators with higher FLOP/s introduce a higher opportunity cost of computing the K-selection problem instead of the distance computation. The trend of the increasing FLOP/s in accelerators motivated us to optimize the FLOP/s usage by reducing the time required for computing K-selection.
4 Methodology
This section presents a performance model to identify non-trivial bottlenecks on multiple platforms and demonstrates some fundamental limits when designing algorithms for K-NN and related problems, and we see that the cache inefficiency of the compressed domain methods introduces a significant cost on accelerators.
We model the accelerator’s runtime as executing a sequence of computation kernels, where each kernel is a compiled subroutine on the accelerator used by the main program on the CPU. A kernel may be composed of one or several high-level operators: Einsum, ReLU, ArgMax, etc., and each kernel can have different performance characteristics.
Given a sequence of kernels ki, we let Wi denotes the total amount of work and Pi denotes the operational speed. Our goal is to estimate the total time of a program:
t = X
i
Wi Pi . (4)
In the following example, we focus on the MIPS problem. Let Q 2 RM⇥D and X 2 RN⇥D denote the queries and the database, the runtime of a generic approximate-MIPS program can be modeled as
t = WD P +O(Auxiliary) WD P , (5)
where WD denotes the total FLOPs required for searching the entire database, and denotes the search fraction. We note that P varies by algorithm and platform. Traditionally, compressed domain search methods minimize but sacrifice cache efficiency. Our method use an alternative route to optimize P instead.
4.1 Instruction throughput-aware roofline model
This subsection describes how we model the kernel-dependent performance P on multiple platforms with a small extension of the roofline model.
The classic roofline model (Williams et al., 2009) is a function of machine peak performance ⇡ measured in FLOP/s, machine peak memory bandwidth measured in bytes/s, and arithmetic intensity IMEM expressed as the ratio of floating-point operations performed to data movement (FLOP/byte). The model states the performance is bounded by P min(⇡, ⇥ IMEM).
We desire to model kernels that has a mixture of floating point operations accelerated by dedicated hardware as well as other coefficient-wise operations. The coefficient-wise operations are abbreviated as COPs. Almost every non matrix multiplication operations are COPs, including vectorized add, multiply, compare, conditional-move, etc. We use the symbol for peak COP/s on platforms, and define the instruction throughput intensity ICOP as the ratio between the number FLOPs and the number of COPs performed in a kernel (FLOP/COP). The attainable performance of a kernel is bounded by:
P min
8 <
: ⇡ ⇥ IMEM ⇥ ICOP.
(6)
The statement is self-explanatory because the inadequate resources impede the kernel throughput. Table 1 lists the properties of selected accelerators for our analysis3. The roofline model is commonly used in accelerator profiling tools but not as frequently discussed in algorithm designs. The following sections show how the model prevents pitfalls due to the hardware constraints.
4.2 The memory bandwidth bound
This subsection demonstrates how to evaluate if a kernel hits the memory bandwidth wall. We associate the distance computation with three levels of BLAS (Dongarra et al., 1990). Level 1 BLAS describes vector operations on non-consecutive memory access, such as computing distances while traversing through a graph. Level 2 BLAS represents scoring a query with consecutively stored data points. Level 3 BLAS expresses batched query-database distance computation, often used in brute-force scoring.
Compressed domain searches are either level 1 or 2 BLAS due to the cache inefficiency. It has O(1) memory arithmetic intensity because the number of FLOPs is proportion to the bytes read. Combining (5) and (6) we have the following remark:
Remark 1. Distance computations in compressed domain searches are memory bandwidth bounded. In our model, the runtime is lower bounded by: t O ( WD/ ).
3Readers can find these numbers from the accelerators’ specification sheets.
To estimate the memory arithmetic intensity for level 3 BLAS, we continue to use Q 2 RM⇥D and X 2 RN⇥D for denoting queries and database. In many K-NN applications N and M are much greater than D. The corresponding memory arithmetic intensity is:
IMEM = 2MND
4MN + o(MN) ⇡
D 2 . (7)
The largest term in the denominator of (7) is the 4MN bytes of the query-database distances. We omit the insignificant terms and refer readers to (Golub and Van Loan, 2013, Section 1.5.4) for a comprehensive review on memory transfers in block matrix multiplications.
Figure 1 shows that the distance scoring kernels of different BLAS levels can easily hit the memory bandwidth wall. In order to attain high performance, we designed our algorithm to aggregate the results within the kernel to avoid writing the O(MN) bytes into memory.
4.3 The instruction bandwidth bound
The use of COPs (non matrix multiplication instructions) introduce another slowdown. We let C denotes the number of COPs used per dot-product score in a kernel equipped with COPs and matrix multiplication instructions. There are M ⇥N dot-product scores, so the total COPs used in a kernel is CMN . To prevent hitting the COPs bandwidth wall, we must satisfy:
ICOP = 2⇠⇠MND C⇠⇠MN
⇡
, (8)
) C 2D ⇥
⇡ . (9)
The number of COPs we can afford in the kernels is scarce. We take D = 128 as an example and substitute it into (9). We can only use 4 coefficient-wise instructions per dot-product for TPU V4, and 16 for GPU A100. We conclude with the following remark: Remark 2. Exact and generic K-selection algorithm cannot be efficiently implemented with the coefficient-wise operations for the selected platforms (GPU V100, A100, TPU V3 and V4).
Because of Remark 2, we develop an approximate approach to achieve the peak performances.
5 Algorithm
Algorithm 1: PartialReduce for MIPS Input: Q 2 RM⇥D Batch queries Input: X 2 RN⇥D Database Input: 2W Bin size Output: V 2 RM⇥L Top-K values Output: A 2 NM⇥L Top-K indices
1 for i 1 to M do 2 for j 1 to N do 3 yi,j hqi,xji ; 4 l ShiftRight(j, W) ; /* Unrolled and does not cost COP */ 5 b yi,j > vi,l ; /* COP 1: Vectorized compare */ 6 vi,l if b then yi,j else vi,l ; /* COP 2: Vectorized conditional move */ 7 ai,l if b then j else ai,l ; /* COP 3: Vectorized conditional move */ 8 end 9 end
Our algorithm consists of two kernels:
1. PartialReduce kernel computes the distances and partially aggregate the results from M ⇥N distances to M ⇥ L distances with original indices.
2. ExactRescoring kernel is an optional kernel that aggregates the final top-K results. The complexity is O(ML log2(L)) by a bitonic sort followed by a truncation.
The PartialReduce kernel is where most of the time and compute takes place. See Algorithm 1 for an outline of the algorithm. We collect top-1 distances from the L non-overlapping bins of size 2W for each query, resulting high arithmetic intensities:
IMEM ⇡ O (min (M,N)) , (10)
ICOP = 2⇠⇠MND C⇠⇠MN = 2D C . (11)
We show these arithmetic intensities can achieve high performance on real world database in section 6.1. See Appendix A.3 for the detailed expansion of the algorithm and how the arithmetic intensities are derived.
5.1 Recall estimation
This section shows the PartialReduce kernel can achieve high recall with good speed. We reformulate our problem in terms of balls and bins. We have K balls representing the top-K distances that are thrown into L bins. The location of each ball is chosen independently and uniformly at random. We let Z denotes the random variable of the number of balls that do not have collisions. Following the recall definition (3) we have:
Recall Z
K , (12)
which is a standard Birthday problem:
E[Recall] E[Z] K =
✓ L 1
L
◆K 1 . (13)
Our goal is to find the minimal L such that the expected recall is greater equals to the target recall r. Finding L is simple because (13) is invertible in the natural range 0 < r < 1.
E[Recall] r ) L 1 1 r1/(K 1) ⇡ K 1 1 r . (14)
The approximation in (14) follows from Appendix A.4. Since L is at the order of K, and in most applications K ⌧ N , the cost of the ExactRescoring kernel is amortized out. Thus we affirm the claim that our method attains high performance with an analytical recall guarantee.
6 Evaluation
In this section, we show that our proposed algorithm and implementation are near the hardware limit and lead to superior performance over the baselines of similar recalls. We applied our algorithm to two datasets from the public ANN benchmarks (Aumüller et al., 2020). In our first evaluation, we compare the measured FLOP/s to the theoretical peak governed by the proposed refinement of the roofline model (6), proclaiming our implementation is reaching the hardware peak performance. In the second benchmark, we compare the end-to-end performance with competitive baselines with pre-tuned parameters. We plot each algorithm’s speed-recall curve and show ours achieves the state-of-the-art. Finally, we measure the algorithm’s scalability by varying the dataset size and number of TPUs used.
6.1 Comparison with the theoretical peak
This section shows that our refined roofline model (6) captures additional performance characteristic over the classic roofline model, and demonstrates our kernels are having near optimal performances. We select the Glove4 (Pennington et al., 2014) and Sift5 (Jegou et al., 2010) datasets from the ANN benchmarks. Their corresponding distances are the cosine distance and the Euclidean distance. See the code snippets in Appendix A.1 and A.2.
4Released in Apache license 2.0. 5Released in CC0 public domain.
See Figure 2, the colored lines represent machines’ max performances, and the dots represent each benchmark with its measured FLOP/s. The classic roofline on the left shows that our incache aggregation strategy has a large memory arithmetic intensity (⇠4,700) exceeding the memory bandwidth ridge points ⇡/ . However, it is difficult to diagnose why the Euclidean distance search does not perform well on TPU V4 from the classic roofline plot.
Fortunately, when combined with the instruction bandwidth roofline we can tell the performance regression is caused by hitting the coefficient-wise operation throughput wall. Therefore we affirms the claim that our MIPS solution is reaching the peak FLOP/s, and our Euclidean distance search solution is meeting the compute bound on TPU V4 and attaining the peak FLOP/s on TPU V3.
6.2 Recall-speed benchmark
To evaluate the effectiveness of the K-NN algorithm in a realistic setting, we adopted the methodology of public ANN benchmarks (Aumüller et al., 2020) to compare the end-to-end performance against other methods on the following datasets: Glove (Pennington et al., 2014), Sift (Jegou et al., 2010), NYTimes (Dua and Graff, 2017), and Last.fm (Bertin-Mahieux et al., 2011). The typical ANN benchmarks are only performed on a single platform. However, it is non-trivial to either port our TPU algorithm to GPU or vice versa. Alternatively, we selected the following GPUs with parity in peak performance to TPU (Table 1).
We select the Faiss GPU (Johnson et al., 2021) implementation as our baseline. Faiss provides three algorithms: Flat, IVF-Flat, and IVF-PQ. The Flat algorithm performs a brute-force search, and the IVF-Flat and IVF-PQ algorithms corresponds to the inverted file method with and without the product quantization (Jegou et al., 2010; Johnson et al., 2021). We use the repository’s suggested inverted file size (16384) in the IVF methods.
Figure 3 shows our performance significantly outperforms competing methods in the high recall regions. We highlight that our method has a consistent recall-speed trade-off over different datasets, because our recall only rely on the order statistics instead of the information encoded in the compression domain search methods, which may vary by the datasets. Since our method scores all the pair-wise distances, our method is immune from the curse of high dimensionality.
6.3 Scalability benchmark
In the final benchmark, we examine the scalability of the algorithm from three aspects. First, we verify if the measured performance is inverse proportional to the database size. Second, we compare the scaling characteristics to the fastest GPU implementation. Last but not least, we are interested in knowing if our algorithm can horizontally scale by the number of TPUs.
We conduct our evaluation on TPU V4 and Nvidia GPU A100, which have similar peak performance and memory bandwidth. We sample the Yandex Deep dataset6 (Babenko and Lempitsky, 2016) into ten different scales and measure the QPS of each approach with a similar recall. Figure 4 verifies all measurements align with the ideal scalability model: QPS / #chips/N . Our method remains top performance on all database sizes and linearly scales with the number of TPU chips.
7 Discussion and future work
In Section 6, we benchmark our method against others on platforms with similar performances. Some questions might arise: "Is the performance gain an algorithmic optimization or due to platform efficiency?" "Can we achieve the same performance gain on GPU?" "The existence of efficient fractional-search on accelerators?" We address these questions in this section.
7.1 Platform discussions
We first discuss the modeling perspective of performance differences between platforms. In Section 4, we show that the memory bandwidth and instruction throughput bound applies to both GPU and TPU. For instance, it follows that to attain peak performance on every hardware platform, having the number of instructions used for collecting (approximate) top-k elements within 2 ·D/⇡ per distance computation is a necessary condition.
Although our Algorithm 1 is platform-independent, achieving the hardware peak performance requires many low level implementation details at the machine level, including cache management, preventing cross-core memory synchronization, in-register accumulation, and instruction scheduling. Typical high-performance libraries such as MKL, cuBLAS, and Google TPU compiler use platform-specific assembly to take full control of the stated requirements.
6Released in CC BY 4.0.
Nevertheless, we cannot use the high-level interface of these libraries, because Algorithm 1 only performs well when it is integrated into the inner loop of distance computations7. Moreover, these libraries are all close-sourced, thus increases the difficulty on the implementation.
Fortunately, we have the access to TPU compiler internals, and we have integrated Algorithm 1 into the compiler to generate the desired assembly code to solidify our analysis. Thus we leave implementations of other platforms to future works.
7.2 Algorithm discussions
The roofline complexity of the fractional search is identical to BLAS-2 (matrix-vector multiplication), which is memory bandwidth bound. When the cycles spend on data transfer are mutually exclusive to our method, it introduces an enormous opportunity cost. Nevertheless, we see an opportunity in a heterogeneous architecture because a fractional search on the host is not mutually exclusive to applying our method to accelerators.
A motivating example is the multi-billion nearest neighbor search, where fitting the dataset into device memory is possible (through device sharding, which TensorFlow and Jax have native support) but not economical. Since brute-force distance computations are often involved in the auxiliary data structures when performing the fractional search, we may replace the brute-force portion with TPU in conduction with the remaining search off-device. We note that heterogeneous architectures with off-device storage such as host-RAM or even SSD (Jayaram Subramanya et al., 2019; Ren et al., 2020; Chen et al., 2021) are great starting points for future research.
8 Conclusion
Accelerator-based machine learning has become the mainstream in academics and industries. However, the performance characteristics of accelerators are counter-intuitive and difficult to program. In this paper, we propose a roofline-based complexity analysis framework to discuss the optimality of the algorithms without low-level optimization details: unrolling factors, batch window sizes, vectorization, and systolic array scheduling, which are platform-dependent and lengthy to read. We demonstrated several examples of inferring the hardware performance limits by simply addressing the kernel’s total FLOPs, byte transferred, and the number of coefficient-wise instructions used. Our refined model foreshadowed non-trivial performance regression caused by the coefficient-wise instructions bandwidth. We took it into account to design a new algorithm for K-NN and achieved peak performance on TPU. Finally, our experiments showed that our method outperformed state-of-the-art baselines on platforms with similar performance characteristics, which are known to be hard to beat.
Acknowledgments and Disclosure of Funding
We would like to thank the XLA team for the continuous effort on developing the state-of-the-art compiler and the full support in enabling our new op, approx_max_k. We are also grateful to the Google ScaNN team for the joint effort on bridging the impactful K-NN problem into the accelerator ecosystem. Last but not least, we thank Peter Hawkins, Edward Schwartz, and Mani Varadarajan for code reviews in Jax and Tensorflow, and Erik Lindgren for proofreading this paper.
This work was performed and funded by Google. | 1. What is the focus of the paper regarding ANN algorithms and TPU?
2. What are the strengths of the paper in terms of motivation, related work, and methodology?
3. What are the weaknesses of the paper regarding experiment evaluation and technical contributions?
4. Do you have any questions regarding the limitations of the algorithm and its adaptability to other similarity measures? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper presents an ANN algorithm on TPU and analyzes the memory and instruction bandwidth of ANN algorithms.
Strengths And Weaknesses
Strong Points
The problem is well-motivated.
The related work is comprehensively studied and discussed. And the entire story is easy to follow for readers.
The methodology to study the algorithm from the hardware bottlenecks is interesting.
Weak Points
I suggest the authors include a brief discussion of TPU, e.g., what can and cannot be done (efficiently) on it.
The experimental evaluation is a bit problematic. How about other methods on GPUs/TPUs, e.g., hashing and graph-based methods besides the FAISS baseline?
In Figure 3, the highest recall shown for Glove is 0.9. Is there a recall limitation for the proposed algorithm?
The technical contribution of the bi-stage partial reduction and scoring is limited.
Questions
Could we have other methods on GPUs/TPUs, e.g., hashing and graph-based methods besides the FAISS baseline, on more measures, e.g. cosine?
Could we show that the proposed algorithm can achieve high recalls for most ANN datasets?
Limitations
I suggest the authors discuss the limitations of the algorithm, e.g., how to adapt it to other common similarity measures.
NIPS | Title
TPU-KNN: K Nearest Neighbor Search at Peak FLOP/s
Abstract
This paper presents a novel nearest neighbor search algorithm achieving TPU (Google Tensor Processing Unit) peak performance, outperforming state-of-the-art GPU algorithms with a similar level of recall. The design of the proposed algorithm is motivated by an accurate accelerator performance model that takes into account both the memory and instruction bottlenecks. Our algorithm comes with an analytical guarantee of recall in expectation and does not require maintaining sophisticated index data structures or tuning, making it suitable for applications with frequent updates. Our work is available in the open-source packages of Jax and Tensorflow on TPU.
1 Introduction
The K-nearest neighbor (K-NN) search problem has a wide range of applications in machine learning and information retrieval systems, including image search (Jia et al., 2021; Babenko and Lempitsky, 2016), semantic textual retrieval (Liu et al., 2009; Cer et al., 2018), anomaly detection (Gu et al., 2019; Omar et al., 2013), recommendation systems (Sarwar et al., 2002; Zhao et al., 2019), as well as serving as a component for a downstream tasks (Borgeaud et al., 2021; Guu et al., 2020; Lindgren et al., 2021; Shazeer et al., 2017). Given a query, the objective of K-NN is to identify K closest datapoints from a database of finite number of data points in a vector space. The main challenge of designing a good K-NN algorithm is to compute accurate K-NN results while being computationally efficient.
Solving the K-NN problem on accelerators has attracted emerging interest from both academia and industry (Johnson et al., 2021; Shanbhag et al., 2018; Zhao et al., 2020). Many accelerators can deliver hundreds of Tera Floating Point Operations Per Second (TFLOPS), vital to the neighbor distance computation. However, utilizing accelerators for K-NN problems is not straightforward; multiple issues in data locality, memory bandwidth, and multiple types of hardware parallelism need to be carefully considered to achieve high utilization. In this paper we extend the roofline performance model (Williams et al., 2009) to quantify the hardware characteristics accurately. As a result, we designed a K-NN algorithm that reaches peak performance through precise modeling of the accelerators, and our TPU implementation aligns with the predicted performance.
The main contributions of this work are:
* Equal contributions.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
• We extend the roofline model to address the operation throughput differences of the instructions, essential to the algorithm analysis in this paper.
• We design an approximate K-NN algorithm with recall and performance guarantees based on our proposed roofline model.
• We conduct experiments verifying that our TPU implementation of the algorithm aligns accurately with the performance model and achieves state-of-the-art speed-recall trade-offs on standard nearest neighbor search benchmarks.
2 Preliminaries
This section covers the necessary notation for the nearest neighbor search problem. Given a matrix $A \in \mathbb{R}^{M \times N}$, we let $a_{i,j}$ denote the item at the $i$th row and $j$th column of $A$, and $a_i$ denote the $i$th row vector of $A$. We use the matrix $X \in \mathbb{R}^{N \times D}$ to abbreviate a set representation of a database $X = \{x_i\}_{i=1,2,\ldots,N}$ with $N$ data points, where each data point $x_i \in \mathbb{R}^D$ is a row vector of the matrix $X$ in a $D$-dimensional vector space. The set and matrix representations of the database $X$ are used interchangeably in this paper.
The K nearest neighbor search problem is stated as follows. Given a database $X \in \mathbb{R}^{N \times D}$ and a query vector $q \in \mathbb{R}^D$, find the subset $S_q^* \subset X$ collecting the $K$ closest data points to $q$:
$$S_q^* = \operatorname*{K\text{-}argmin}_{x \in X} \; D(q, x), \qquad (1)$$
where $D(x, y)$ is a distance measure such as the Euclidean distance $D_{\ell_2}(x, y) := \|x - y\|_2$ or the cosine distance $D_{\cos}(x, y) := 1 - \frac{\langle x, y\rangle}{\|x\|\,\|y\|}$. A related problem is maximum inner product search (MIPS), where the goal is to find the data points that have the highest inner products with the query:
$$S_q^* = \operatorname*{K\text{-}argmax}_{x \in X} \; \langle q, x\rangle. \qquad (2)$$
MIPS is equivalent to the cosine similarity search when all data points are $\ell_2$-normalized.
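As a concrete reference for these definitions, here is a small NumPy sketch; the array sizes and helper names are illustrative and not taken from the paper's released code.

```python
import numpy as np

def k_argmin_l2(q, X, k):
    """Exact K-NN of eq. (1) under Euclidean distance."""
    d = np.linalg.norm(X - q, axis=1)      # D_l2(q, x_i) for every row x_i
    return np.argsort(d)[:k]

def k_argmax_mips(q, X, k):
    """Maximum inner product search of eq. (2)."""
    return np.argsort(-(X @ q))[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
q = rng.normal(size=16)
print(k_argmin_l2(q, X, 5))

# With l2-normalized data points, cosine search and MIPS return the same neighbors.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
cos_top = np.argsort(1.0 - (Xn @ q) / np.linalg.norm(q))[:10]
assert set(cos_top) == set(k_argmax_mips(q, Xn, 10))
```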
3 Related work
Exhaustively searching all pair-wise distances between the query and the entire database is compute-intensive and often infeasible on many platforms. Therefore, a problem extensively discussed in the literature (Wang et al., 2014, 2015) is to find approximate nearest neighbors (ANN) in exchange for speed. By convention, the quality of ANN is measured by
$$\text{Recall} := \frac{|S_q \cap S_q^*|}{|S_q^*|}, \qquad (3)$$
where $S_q \subset X$ denotes the set of data points retrieved by the search method.
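A short helper makes the metric concrete; the index sets below are toy values, not benchmark results.

```python
def recall(retrieved, exact):
    """Eq. (3): fraction of the true top-K neighbors that the search method returned."""
    retrieved, exact = set(retrieved), set(exact)
    return len(retrieved & exact) / len(exact)

# The method found 8 of the true top-10 neighbors.
print(recall(retrieved=[0, 1, 2, 3, 4, 5, 6, 7, 98, 99], exact=range(10)))  # 0.8
```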
Compressed domain search One class of ANN approaches is to search on a lossy-compressed problem domain. These methods are composed of two steps: a) search on a compressed representation2 of the original problem to find a set of candidate data points, b) compute the distances between the query and the candidate data points to select the top-K results. Since only a subset of data points requires the exact distance computation, the overall cost is reduced.
The two steps can be composed in arbitrary ways. Locality sensitive hashing (Andoni et al., 2015; Neyshabur and Srebro, 2015) applies search followed by scoring; tree-search (Muja and Lowe, 2014; Dasgupta and Freund, 2008) applies the two steps recursively; graph-search (Malkov and Yashunin, 2018) iterates between two steps until the stopping condition is met. And the inverted file (IVF)
2 Here we mean data structures like trees, graphs, locality sensitive hashes, etc.
method (Jegou et al., 2010; Babenko and Lempitsky, 2014; Baranchuk et al., 2018; Guo et al., 2020) searches on the subset of data points indexed by the k-means centroids.
We see that there are two major challenges with the compressed domain search:
• Fractional search has a poor cache reuse rate because the candidate data points for each query rarely overlap. We show that optimizing the cache usage has a huge headroom for accelerators in Section 4.2.
• Tweaking the speed-recall trade-off is data-dependent and non-trivial to tune. The key result of Beyer et al. (1999) states that the distance contrast of neighbors diminishes with increasing dimensionality (also known as the curse of high dimensionality). Furthermore, the key result of Rubinstein (2018) states that sub-linear time nearest neighbor search with high recall is impossible for Euclidean, Manhattan, or Hamming distance; otherwise, it contradicts the Strong Exponential Time Hypothesis (Impagliazzo and Paturi, 1999).
Our work takes the opposite approach and focuses on machine efficiency with zero search space pruning. Moreover, since our method computes all the distances, it is immune to the curse of high dimensionality.
Accelerators In this paper, the phrase accelerators represents a class of specialized hardware to accelerate machine learning workloads. In particular, we are interested in the novel platforms that deliver high FLOP/s for distance computation, namely Google TPU V3, V4, Nvidia GPU V100, and A100 in our analysis and evaluation.
Modern accelerators have special computation units for matrix multiplication, providing a higher operation throughput over the regular coefficient-wise operations. The corresponding units are tensor cores in Nvidia GPUs (Markidis et al., 2018) and systolic arrays in Google TPUs (Jouppi et al., 2017; Norrie et al., 2021). Addressing these operation throughput differences is essential to our algorithm design.
While accelerators excel in parallelism, developing an efficient K-selection algorithm on accelerators is still an active research area (Monroe et al., 2011; Shanbhag et al., 2018; Johnson et al., 2021; Zhao et al., 2020). Accelerators with higher FLOP/s introduce a higher opportunity cost of computing the K-selection problem instead of the distance computation. The trend of the increasing FLOP/s in accelerators motivated us to optimize the FLOP/s usage by reducing the time required for computing K-selection.
4 Methodology
This section presents a performance model to identify non-trivial bottlenecks on multiple platforms and demonstrates some fundamental limits in designing algorithms for K-NN and related problems; we see that the cache inefficiency of the compressed domain methods introduces a significant cost on accelerators.
We model the accelerator’s runtime as executing a sequence of computation kernels, where each kernel is a compiled subroutine on the accelerator used by the main program on the CPU. A kernel may be composed of one or several high-level operators: Einsum, ReLU, ArgMax, etc., and each kernel can have different performance characteristics.
Given a sequence of kernels $k_i$, we let $W_i$ denote the total amount of work and $P_i$ denote the operational speed. Our goal is to estimate the total time of a program:
$$t = \sum_i \frac{W_i}{P_i}. \qquad (4)$$
In the following example, we focus on the MIPS problem. Let $Q \in \mathbb{R}^{M \times D}$ and $X \in \mathbb{R}^{N \times D}$ denote the queries and the database; the runtime of a generic approximate-MIPS program can be modeled as
$$t = \gamma \frac{W_D}{P} + O(\text{Auxiliary}) \;\geq\; \gamma \frac{W_D}{P}, \qquad (5)$$
where $W_D$ denotes the total FLOPs required for searching the entire database, and $\gamma$ denotes the search fraction. We note that $P$ varies by algorithm and platform. Traditionally, compressed domain search methods minimize $\gamma$ but sacrifice cache efficiency. Our method uses an alternative route and optimizes $P$ instead.
4.1 Instruction throughput-aware roofline model
This subsection describes how we model the kernel-dependent performance P on multiple platforms with a small extension of the roofline model.
The classic roofline model (Williams et al., 2009) is a function of the machine peak performance $\pi$ measured in FLOP/s, the machine peak memory bandwidth $\beta$ measured in bytes/s, and the arithmetic intensity $I_{\text{MEM}}$ expressed as the ratio of floating-point operations performed to data movement (FLOP/byte). The model states that the performance is bounded by $P \leq \min(\pi,\; \beta \times I_{\text{MEM}})$.
We desire to model kernels that have a mixture of floating-point operations accelerated by dedicated hardware as well as other coefficient-wise operations. The coefficient-wise operations are abbreviated as COPs. Almost every non-matrix-multiplication operation is a COP, including vectorized add, multiply, compare, conditional move, etc. We use the symbol $\Psi$ for the peak COP/s of a platform, and define the instruction throughput intensity $I_{\text{COP}}$ as the ratio between the number of FLOPs and the number of COPs performed in a kernel (FLOP/COP). The attainable performance of a kernel is bounded by:
$$P \leq \min \begin{cases} \pi \\ \beta \times I_{\text{MEM}} \\ \Psi \times I_{\text{COP}}. \end{cases} \qquad (6)$$
The statement is self-explanatory because inadequate resources impede the kernel throughput. Table 1 lists the properties of the selected accelerators for our analysis3. The roofline model is commonly used in accelerator profiling tools but is not as frequently discussed in algorithm design. The following sections show how the model prevents pitfalls due to the hardware constraints.
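A small helper expressing the bound in (6). The accelerator numbers below are hypothetical placeholders, not the entries of Table 1; real values should be read from the specification sheets.

```python
def attainable_flops(pi, beta, psi, i_mem, i_cop):
    """Refined roofline of eq. (6).

    pi    -- machine peak FLOP/s
    beta  -- peak memory bandwidth, bytes/s
    psi   -- peak coefficient-wise operations (COPs) per second
    i_mem -- kernel arithmetic intensity, FLOP/byte
    i_cop -- kernel instruction throughput intensity, FLOP/COP
    """
    return min(pi, beta * i_mem, psi * i_cop)

# Hypothetical accelerator: 100 TFLOP/s, 1 TB/s memory bandwidth, 5 TCOP/s.
pi, beta, psi = 100e12, 1e12, 5e12
print(attainable_flops(pi, beta, psi, i_mem=0.5, i_cop=4))     # memory bound: 5e11 FLOP/s
print(attainable_flops(pi, beta, psi, i_mem=4700, i_cop=100))  # compute bound: 1e14 FLOP/s
```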
4.2 The memory bandwidth bound
This subsection demonstrates how to evaluate if a kernel hits the memory bandwidth wall. We associate the distance computation with three levels of BLAS (Dongarra et al., 1990). Level 1 BLAS describes vector operations on non-consecutive memory access, such as computing distances while traversing through a graph. Level 2 BLAS represents scoring a query with consecutively stored data points. Level 3 BLAS expresses batched query-database distance computation, often used in brute-force scoring.
Compressed domain searches are either level 1 or level 2 BLAS due to the cache inefficiency. They have $O(1)$ memory arithmetic intensity because the number of FLOPs is proportional to the bytes read. Combining (5) and (6) we have the following remark:
Remark 1. Distance computations in compressed domain searches are memory bandwidth bounded. In our model, the runtime is lower bounded by $t \geq O(\gamma W_D / \beta)$.
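To make Remark 1 concrete, the following back-of-the-envelope sketch uses made-up problem sizes and a hypothetical 1 TB/s device; only the structure of the bound comes from the text.

```python
M, N, D = 10_000, 10_000_000, 128       # hypothetical batch of queries and database
gamma = 0.05                            # fraction of the database each query visits
beta = 1.0e12                           # hypothetical peak memory bandwidth, bytes/s

W_D = 2 * M * N * D                     # FLOPs needed to score the entire database
i_mem = 0.5                             # ~2D FLOPs per 4D bytes streamed in a level-2 scan
t_floor = gamma * W_D / (i_mem * beta)  # Remark 1: runtime floor from memory traffic alone
print(f"bandwidth-bound floor: {t_floor:.1f} s, i.e. at most {M / t_floor:.0f} QPS")
```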
3 Readers can find these numbers in the accelerators' specification sheets.
To estimate the memory arithmetic intensity of level 3 BLAS, we continue to use $Q \in \mathbb{R}^{M \times D}$ and $X \in \mathbb{R}^{N \times D}$ to denote the queries and the database. In many K-NN applications $N$ and $M$ are much greater than $D$. The corresponding memory arithmetic intensity is:
$$I_{\text{MEM}} = \frac{2MND}{4MN + o(MN)} \approx \frac{D}{2}. \qquad (7)$$
The largest term in the denominator of (7) is the 4MN bytes of the query-database distances. We omit the insignificant terms and refer readers to (Golub and Van Loan, 2013, Section 1.5.4) for a comprehensive review on memory transfers in block matrix multiplications.
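The approximation in (7) is easy to verify numerically; the sizes below are arbitrary.

```python
def blas3_memory_intensity(M, N, D, bytes_per_scalar=4):
    """Eq. (7): FLOPs per byte moved for a batched query-database scoring kernel.

    The dominant traffic is the M*N float32 score matrix written back to memory.
    """
    flops = 2 * M * N * D
    bytes_moved = bytes_per_scalar * (M * N + M * D + N * D)
    return flops / bytes_moved

print(blas3_memory_intensity(M=8192, N=1_000_000, D=128))  # ~63, close to D/2 = 64
```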
Figure 1 shows that the distance scoring kernels of different BLAS levels can easily hit the memory bandwidth wall. In order to attain high performance, we designed our algorithm to aggregate the results within the kernel to avoid writing the O(MN) bytes into memory.
4.3 The instruction bandwidth bound
The use of COPs (non-matrix-multiplication instructions) introduces another slowdown. We let $C$ denote the number of COPs used per dot-product score in a kernel equipped with COPs and matrix multiplication instructions. There are $M \times N$ dot-product scores, so the total number of COPs used in a kernel is $CMN$. To prevent hitting the COP bandwidth wall, we must satisfy:
$$I_{\text{COP}} = \frac{2MND}{CMN} = \frac{2D}{C} \;\geq\; \frac{\pi}{\Psi}, \qquad (8)$$
$$\Rightarrow\quad C \;\leq\; \frac{2D\,\Psi}{\pi}. \qquad (9)$$
The number of COPs we can afford in the kernels is scarce. We take $D = 128$ as an example and substitute it into (9): we can only use 4 coefficient-wise instructions per dot product on TPU V4, and 16 on GPU A100. We conclude with the following remark:
Remark 2. An exact and generic K-selection algorithm cannot be efficiently implemented with the coefficient-wise operations on the selected platforms (GPU V100, A100, TPU V3 and V4).
Because of Remark 2, we develop an approximate approach to achieve the peak performances.
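A quick evaluation of the budget in (9). The Ψ/π ratios below (1/64 and 1/16) are the ones implied by the D = 128 example in the text, not official vendor figures.

```python
def cop_budget(D, psi_over_pi):
    """Eq. (9): largest number of coefficient-wise ops affordable per dot product."""
    return 2 * D * psi_over_pi

print(cop_budget(128, 1 / 64))  # 4.0  COPs per dot product (TPU V4-like ratio)
print(cop_budget(128, 1 / 16))  # 16.0 COPs per dot product (A100-like ratio)
```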
5 Algorithm
Algorithm 1: PartialReduce for MIPS
Input: $Q \in \mathbb{R}^{M \times D}$ (batch queries); $X \in \mathbb{R}^{N \times D}$ (database); $2^W$ (bin size)
Output: $V \in \mathbb{R}^{M \times L}$ (top-K values); $A \in \mathbb{N}^{M \times L}$ (top-K indices)
1  for i ← 1 to M do
2    for j ← 1 to N do
3      y_{i,j} ← ⟨q_i, x_j⟩
4      l ← ShiftRight(j, W)                       /* unrolled and does not cost a COP */
5      b ← (y_{i,j} > v_{i,l})                    /* COP 1: vectorized compare */
6      v_{i,l} ← if b then y_{i,j} else v_{i,l}   /* COP 2: vectorized conditional move */
7      a_{i,l} ← if b then j else a_{i,l}         /* COP 3: vectorized conditional move */
8    end
9  end
Our algorithm consists of two kernels:
1. The PartialReduce kernel computes the distances and partially aggregates the results from $M \times N$ distances down to $M \times L$ distances together with their original indices.
2. The ExactRescoring kernel is an optional kernel that aggregates the final top-K results. The complexity is $O(ML\log^2(L))$ by a bitonic sort followed by a truncation. A reference sketch of both kernels is given below.
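The following is a vectorized NumPy sketch of the two kernels, written for readability rather than TPU execution; it assumes N is a multiple of the bin size 2^W and only mirrors the logic of Algorithm 1.

```python
import numpy as np

def partial_reduce_mips(Q, X, W):
    """Top-1 score and original index inside each of the L = N / 2^W bins, per query."""
    M, N = Q.shape[0], X.shape[0]
    bin_size = 1 << W
    L = N // bin_size
    scores = Q @ X.T                                    # (M, N) inner products
    binned = scores.reshape(M, L, bin_size)
    offset = np.argmax(binned, axis=2)                  # winner position within each bin
    values = np.take_along_axis(binned, offset[:, :, None], axis=2)[:, :, 0]
    indices = offset + np.arange(L) * bin_size          # map back to database row ids
    return values, indices

def exact_rescoring(values, indices, k):
    """Optional second kernel: sort the L candidates per query and keep the top-K."""
    order = np.argsort(-values, axis=1)[:, :k]
    return (np.take_along_axis(values, order, axis=1),
            np.take_along_axis(indices, order, axis=1))

rng = np.random.default_rng(0)
Q, X = rng.normal(size=(4, 32)), rng.normal(size=(4096, 32))
v, a = partial_reduce_mips(Q, X, W=5)   # 4096 / 2^5 = 128 bins per query
top_v, top_a = exact_rescoring(v, a, k=10)
```

The released implementation is exposed through Jax and Tensorflow as an approximate top-k op (approx_max_k, mentioned in the acknowledgments); the sketch above is only a reference for the algorithmic structure.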
The PartialReduce kernel is where most of the time and compute is spent. See Algorithm 1 for an outline of the algorithm. We collect the top-1 distances from the $L$ non-overlapping bins of size $2^W$ for each query, resulting in high arithmetic intensities:
$$I_{\text{MEM}} \approx O(\min(M, N)), \qquad (10)$$
$$I_{\text{COP}} = \frac{2MND}{CMN} = \frac{2D}{C}. \qquad (11)$$
We show that these arithmetic intensities achieve high performance on real-world databases in Section 6.1. See Appendix A.3 for the detailed expansion of the algorithm and how the arithmetic intensities are derived.
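Plugging arbitrary sizes into (10) and (11), with the C = 3 coefficient-wise operations of Algorithm 1 and float32 values plus int32 indices leaving the kernel:

```python
def partial_reduce_intensities(M, N, D, L, C=3, bytes_per_scalar=4):
    """Eqs. (10)-(11): intensities when only M*L partial results are written out."""
    flops = 2 * M * N * D
    bytes_moved = bytes_per_scalar * (M * D + N * D + 2 * M * L)
    return flops / bytes_moved, 2 * D / C

i_mem, i_cop = partial_reduce_intensities(M=8192, N=1_000_000, D=128, L=100)
print(i_mem, i_cop)  # i_mem is on the order of min(M, N); i_cop = 2*128/3 ≈ 85.3
```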
5.1 Recall estimation
This section shows that the PartialReduce kernel can achieve high recall at good speed. We reformulate our problem in terms of balls and bins. We have $K$ balls, representing the top-K distances, that are thrown into $L$ bins. The location of each ball is chosen independently and uniformly at random. We let $Z$ denote the random variable counting the number of balls that do not have collisions. Following the recall definition (3) we have:
$$\text{Recall} \geq \frac{Z}{K}, \qquad (12)$$
which is a standard birthday problem:
$$\mathbb{E}[\text{Recall}] \geq \frac{\mathbb{E}[Z]}{K} = \left(\frac{L-1}{L}\right)^{K-1}. \qquad (13)$$
Our goal is to find the minimal $L$ such that the expected recall is greater than or equal to the target recall $r$. Finding $L$ is simple because (13) is invertible in the natural range $0 < r < 1$:
$$\mathbb{E}[\text{Recall}] \geq r \;\Rightarrow\; L \geq \frac{1}{1 - r^{1/(K-1)}} \approx \frac{K-1}{1-r}. \qquad (14)$$
The approximation in (14) follows from Appendix A.4. Since $L$ is on the order of $K$, and in most applications $K \ll N$, the cost of the ExactRescoring kernel is amortized out. Thus we affirm the claim that our method attains high performance with an analytical recall guarantee.
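The inversion in (14) and the balls-in-bins bound in (13) are straightforward to sanity-check numerically; the K and r values below are arbitrary.

```python
import numpy as np

def bins_for_recall(K, r):
    """Eq. (14): smallest L whose expected recall is at least r, plus its approximation."""
    exact = 1.0 / (1.0 - r ** (1.0 / (K - 1)))
    return int(np.ceil(exact)), (K - 1) / (1.0 - r)

print(bins_for_recall(K=10, r=0.95))   # (176, 180.0)

# Monte Carlo check of E[Recall] >= ((L-1)/L)**(K-1)
rng = np.random.default_rng(0)
K, L, trials = 10, 176, 20_000
bins = rng.integers(0, L, size=(trials, K))
z = np.array([np.sum(np.unique(b, return_counts=True)[1] == 1) for b in bins])
print(z.mean() / K, ((L - 1) / L) ** (K - 1))   # both close to 0.95
```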
6 Evaluation
In this section, we show that our proposed algorithm and implementation are near the hardware limit and lead to superior performance over baselines of similar recall. We applied our algorithm to two datasets from the public ANN benchmarks (Aumüller et al., 2020). In our first evaluation, we compare the measured FLOP/s to the theoretical peak governed by the proposed refinement of the roofline model (6), showing that our implementation reaches the hardware peak performance. In the second benchmark, we compare the end-to-end performance with competitive baselines with pre-tuned parameters. We plot each algorithm's speed-recall curve and show that ours achieves the state of the art. Finally, we measure the algorithm's scalability by varying the dataset size and the number of TPUs used.
6.1 Comparison with the theoretical peak
This section shows that our refined roofline model (6) captures additional performance characteristics over the classic roofline model, and demonstrates that our kernels attain near-optimal performance. We select the Glove4 (Pennington et al., 2014) and Sift5 (Jegou et al., 2010) datasets from the ANN benchmarks. Their corresponding distances are the cosine distance and the Euclidean distance, respectively. See the code snippets in Appendix A.1 and A.2.
4 Released under the Apache 2.0 license. 5 Released into the CC0 public domain.
See Figure 2: the colored lines represent the machines' maximum performances, and the dots represent each benchmark with its measured FLOP/s. The classic roofline on the left shows that our in-cache aggregation strategy has a large memory arithmetic intensity (~4,700), exceeding the memory bandwidth ridge point $\pi/\beta$. However, it is difficult to diagnose from the classic roofline plot why the Euclidean distance search does not perform well on TPU V4.
Fortunately, when combined with the instruction bandwidth roofline, we can tell that the performance regression is caused by hitting the coefficient-wise operation throughput wall. Therefore we affirm the claim that our MIPS solution reaches the peak FLOP/s, and that our Euclidean distance search solution meets the compute bound on TPU V4 and attains the peak FLOP/s on TPU V3.
6.2 Recall-speed benchmark
To evaluate the effectiveness of the K-NN algorithm in a realistic setting, we adopted the methodology of public ANN benchmarks (Aumüller et al., 2020) to compare the end-to-end performance against other methods on the following datasets: Glove (Pennington et al., 2014), Sift (Jegou et al., 2010), NYTimes (Dua and Graff, 2017), and Last.fm (Bertin-Mahieux et al., 2011). The typical ANN benchmarks are only performed on a single platform. However, it is non-trivial to either port our TPU algorithm to GPU or vice versa. Alternatively, we selected the following GPUs with parity in peak performance to TPU (Table 1).
We select the Faiss GPU (Johnson et al., 2021) implementation as our baseline. Faiss provides three algorithms: Flat, IVF-Flat, and IVF-PQ. The Flat algorithm performs a brute-force search, and the IVF-Flat and IVF-PQ algorithms correspond to the inverted file method without and with product quantization, respectively (Jegou et al., 2010; Johnson et al., 2021). We use the repository's suggested inverted file size (16384) for the IVF methods.
Figure 3 shows that our method significantly outperforms competing methods in the high-recall regions. We highlight that our method has a consistent recall-speed trade-off over different datasets, because our recall relies only on the order statistics instead of the information encoded by the compressed domain search methods, which may vary by dataset. Since our method scores all the pair-wise distances, it is immune to the curse of high dimensionality.
6.3 Scalability benchmark
In the final benchmark, we examine the scalability of the algorithm from three aspects. First, we verify whether the measured performance is inversely proportional to the database size. Second, we compare the scaling characteristics to the fastest GPU implementation. Last but not least, we are interested in knowing whether our algorithm can horizontally scale with the number of TPUs.
We conduct our evaluation on TPU V4 and Nvidia GPU A100, which have similar peak performance and memory bandwidth. We sample the Yandex Deep dataset6 (Babenko and Lempitsky, 2016) into ten different scales and measure the QPS of each approach at a similar recall. Figure 4 verifies that all measurements align with the ideal scalability model: QPS ∝ #chips/N. Our method remains the top performer on all database sizes and scales linearly with the number of TPU chips.
7 Discussion and future work
In Section 6, we benchmarked our method against others on platforms with similar performance. Some questions might arise: "Is the performance gain an algorithmic optimization or due to platform efficiency?" "Can we achieve the same performance gain on GPU?" "Does an efficient fractional search exist on accelerators?" We address these questions in this section.
7.1 Platform discussions
We first discuss the modeling perspective of performance differences between platforms. In Section 4, we show that the memory bandwidth and instruction throughput bounds apply to both GPU and TPU. For instance, it follows that to attain peak performance on any hardware platform, keeping the number of instructions used for collecting (approximate) top-k elements within $2D\,\Psi/\pi$ per distance computation is a necessary condition.
Although our Algorithm 1 is platform-independent, achieving the hardware peak performance requires many low-level implementation details at the machine level, including cache management, preventing cross-core memory synchronization, in-register accumulation, and instruction scheduling. Typical high-performance libraries such as MKL, cuBLAS, and the Google TPU compiler use platform-specific assembly to take full control of the stated requirements.
6 Released under CC BY 4.0.
Nevertheless, we cannot use the high-level interfaces of these libraries, because Algorithm 1 only performs well when it is integrated into the inner loop of the distance computations7. Moreover, these libraries are all closed-source, which increases the difficulty of the implementation.
Fortunately, we have access to the TPU compiler internals, and we have integrated Algorithm 1 into the compiler to generate the desired assembly code and solidify our analysis. We thus leave implementations on other platforms to future work.
7.2 Algorithm discussions
The roofline complexity of the fractional search is identical to BLAS-2 (matrix-vector multiplication), which is memory bandwidth bound. When the cycles spent on data transfer are mutually exclusive with our method, it introduces an enormous opportunity cost. Nevertheless, we see an opportunity in a heterogeneous architecture, because a fractional search on the host is not mutually exclusive with applying our method to accelerators.
A motivating example is multi-billion-scale nearest neighbor search, where fitting the dataset into device memory is possible (through device sharding, which TensorFlow and Jax support natively) but not economical. Since brute-force distance computations are often involved in the auxiliary data structures used by the fractional search, we may replace the brute-force portion with TPU in conjunction with the remaining search running off-device. We note that heterogeneous architectures with off-device storage such as host RAM or even SSD (Jayaram Subramanya et al., 2019; Ren et al., 2020; Chen et al., 2021) are great starting points for future research.
8 Conclusion
Accelerator-based machine learning has become mainstream in academia and industry. However, the performance characteristics of accelerators are counter-intuitive and difficult to program for. In this paper, we propose a roofline-based complexity analysis framework to discuss the optimality of algorithms without low-level optimization details: unrolling factors, batch window sizes, vectorization, and systolic array scheduling, which are platform-dependent and lengthy to read. We demonstrated several examples of inferring the hardware performance limits simply from a kernel's total FLOPs, bytes transferred, and number of coefficient-wise instructions used. Our refined model foreshadowed a non-trivial performance regression caused by the coefficient-wise instruction bandwidth. We took it into account to design a new algorithm for K-NN and achieved peak performance on TPU. Finally, our experiments showed that our method outperforms state-of-the-art baselines on platforms with similar performance characteristics, which are known to be hard to beat.
Acknowledgments and Disclosure of Funding
We would like to thank the XLA team for the continuous effort on developing the state-of-the-art compiler and the full support in enabling our new op, approx_max_k. We are also grateful to the Google ScaNN team for the joint effort on bridging the impactful K-NN problem into the accelerator ecosystem. Last but not least, we thank Peter Hawkins, Edward Schwartz, and Mani Varadarajan for code reviews in Jax and Tensorflow, and Erik Lindgren for proofreading this paper.
This work was performed and funded by Google. | 1. What is the focus and contribution of the paper regarding K nearest neighbor search?
2. What are the strengths of the proposed solution, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding its unclear hardware motivation, vague definition of COPs, and incomplete evaluation?
4. Do you have any questions regarding the proposed algorithm's specialization for TPU and its usage of COPs?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper presents a new algorithm implementation of K nearest neighbor search that is able to increase the arithmetic intensity. The distances between all queries and entries in the database are calculated in L3 BLAS fashion. Then only the top-1 entry is kept in each bin. The bin size can be adjusted based on the requirement of the recall. The roofline analysis shows a better peak throughput while the end-to-end evaluation demonstrates a better throughput-recall trade-off than the previous solutions.
Strengths And Weaknesses
Strengths
This paper presents a detailed analytical model of the kernel implementation of the BLAS-based operation. The authors conclude the impact of COPs and introduce the algorithm accordingly.
The proposed solution is simple yet effective. The adjustment of the bin size provides a simple method to balance recall and throughput.
Weaknesses
Unclear hardware motivation. The paper is entitled 'TPU-KNN'. However, throughout this paper, I'm not able to find any information indicating that the proposed algorithm requires any special architectural support from TPU rather than other accelerators. It seems that the same analysis could also apply to GPU or other accelerators.
Vague definition of COPs. I cannot understand the usage of the concept of coefficient-wise operations (COPs). In Table 1, comparing with the datasheet of A100/V100, I can understand COPs as a FLOP in the vector cores (or SMs in NV's terminology) while the FLOP means a half-precision FLOP in the tensor core. However, in the later description like Algorithm 1, one COP seems to be a general non-matrix operation without considering the problem size. In this case, one COP, for example, vectorized comparison, could include multiple FLOPs and seems to be unmatched by what is presented in Table 1.
Incomplete Evaluation. Based on what I've mentioned in 1, the evaluation part lacks a fair baseline. I can understand the advantage of recall of the proposed algorithm. However, it is hard to understand whether the performance gain over GPU comes from the algorithm itself or the higher efficiency of TPU. To isolate this factor, the authors should either provide the result of the proposed algorithm implemented on GPUs or the baseline algorithms implemented on TPU.
Questions
Is the proposed algorithm specialized for TPU? If it is, what special architectural support from TPU does it utilize?
How do you define ONE COP? For example, are vectorized comparisons between two 256 arrays and the comparison between two 16 arrays each counted as one COP? If not, how should I interpret the data presented in Table 1?
Could you provide a direct comparison of the proposed algorithm on the same hardware platform?
Limitations
The authors discussed the limitation of the current implementation. |
1. What is the focus and contribution of the paper regarding TPU's utilization for approximate nearest neighbor search?
2. What are the strengths of the proposed approach, particularly in terms of its extended roofline analysis and partial reduce scheme?
3. What are the weaknesses of the paper, especially regarding its omission of introducing GPU and TPU characteristics and its limitation in reducing coefficient-wise operations?
4. Do you have any concerns regarding the paper's experimental study and its ability to handle large datasets or varying window sizes?
5. What are the limitations of the proposed method, and how might it be improved to address these limitations? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies using TPU for approximate nearest neighbor search. With careful analysis, it finds that search performance is bound by memory access and coefficient-wise operation instead of distance computation. Thus, it proposes to conduct partial reduce to reduce memory access such that search can reach peak FLOPs. Experiment results show that the proposed solution outperforms existing ones in both QPS and recall.
Strengths And Weaknesses
Strength
The extended roofline analysis is interesting and shows the bottleneck of nearest neighbor search with TPU.
The proposed partial reduce scheme reduces the amount of memory access and comes with theoretical analysis.
The experiment results show that the proposed solution has good performance.
Weakness
It would improve the paper if the authors could give a brief introduction to the characteristics of GPUs and TPUs. Are TPUs widely available as commodity hardware?
The proposed solution may also work for GPUs as GPUs are also limited by memory access from Table 1. Moreover, as shown in Figure 2, the proposed solution is still bound by coefficient-wise operations for Euclidean distance search. Any thoughts on reducing the number of coefficient-wise operations?
The experiment study is far from the NeurIPS standard. Does the method still outperform the partial search methods (e.g., IVF) when the dataset is large (e.g., SIFT10M or 100M)? How does the proposed method perform when changing the window size W?
Questions
See weakness.
Limitations
Yes |
NIPS | Title
Learning from Distributed Users in Contextual Linear Bandits Without Sharing the Context
Abstract
Contextual linear bandits is a rich and theoretically important model that has many practical applications. Recently, this setup gained a lot of interest in applications over wireless where communication constraints can be a performance bottleneck, especially when the contexts come from a large d-dimensional space. In this paper, we consider a distributed memoryless contextual linear bandit learning problem, where the agents who observe the contexts and take actions are geographically separated from the learner who performs the learning while not seeing the contexts. We assume that contexts are generated from a distribution and propose a method that uses ≈ 5d bits per context for the case of unknown context distribution and 0 bits per context if the context distribution is known, while achieving nearly the same regret bound as if the contexts were directly observable. The former bound improves upon existing bounds by a log(T) factor, where T is the length of the horizon, while the latter achieves information theoretical tightness.
1 Introduction
Contextual linear bandits offer a sequential decision-making framework that combines fundamental theoretical importance with significant practical popularity [8], as it offers a tractable way to capture side information (context), as well as a potentially infinite set of decisions (actions). The most prominent application is in recommendation systems [30], but it has also been used in applications such as virtual support agents [39], clinical trials [12], transportation systems [9], wireless optimization [26, 25], health [10], robotics [31] and online education [34].
In this paper, we develop algorithms that support the deployment of contextual linear bandits in distributed settings. In particular, we consider the case where a central learner wishes to solve a contextual linear bandit problem with the help of transient agents. That is, we assume that the agents do not keep memory of past actions and may not be present for the whole duration of learning; learning in our setup can happen thanks to the persistent presence of the central learner. We view the central learner as a “knowledge repository” that accumulates knowledge from the experience of the transient agents and makes it available to the next agents. The central learner, through the information it keeps, could help passing-by devices decide how to perform an action, for example: passing-by drones decide how to perform a maneuver; agricultural robots decide what amounts of substances such as pesticides to release; and passing-by mobile devices decide which local restaurants to recommend.
The main challenge we try to address in this paper is the efficient communication of the context the agents experience. More specifically, in our setup, each time an agent joins, she receives from the
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
central learner information on the system, such as current estimates of the system parameters; she observes her current context, selects and plays an action, and collects the corresponding reward. Note that although the distributed agent knows her context, the action she plays and the observed reward, the central learner does not - and needs this information to update its estimate of the system parameters. The context in particular can be communication heavy - in the examples we mentioned before, for drones the context could be their navigation capabilities, physical attributes, and environmental factors such as wind speed; for agricultural robots, it could be images that indicate the state of plants and sensor measurements such as of soil consistency; for restaurant recommendations, it could be the personal dietary preferences and restrictions, budget, and emotional state. Moreover, because of geographical separation, the central learner may not have any other way to learn the context beyond communication. Unlike the reward, which is usually a single scalar value, the context can be a vector of a large dimension d from an infinite alphabet, and thus, communicating the context efficiently is heavily nontrivial.
The technical question we ask is, how many bits do we need to convey per context to solve the linear bandit problem without downgrading the performance as compared to the non-distributed setting?
In this paper, we design algorithms that support this goal. We note that our algorithms optimize the uplink communication (from the agents to the central learner), and assume unlimited (cost-free) downlink communication. This is a standard assumption in wireless [7, 33, 21] for several reasons: uplink wireless links tend to be much more bandwidth restricted, since several users may be sharing the same channel; uplink communication may also be battery-powered and thus more expensive to sustain; in our particular case, the agents may have less incentive to communicate (provide their feedback) than the central learner (who needs to learn). Having said that, we note that our algorithms (in Sections 3 and 4) make frugal use of the downlink channels, only using them to transmit system parameters.
Below we summarize our main contributions: 1. We show the surprising result that, if the central learner knows the distribution of the contexts, we do not need to communicate the context at all - the agent does not need to send any information on the actual context she observes and the action she plays. It is sufficient for the agent to just send 1 bit to convey quantized information on her observed reward and nothing else. But for this very limited communication, the central learner can learn a policy that achieves the same order of regret as if full information about the context and reward is received. This result holds for nearly all context distributions and it is the best we can hope for - zero bits of communication for the context. 2. If the central learner has no knowledge of the context distribution, we show that $\approx 5d$ bits per context (where $d$ is the context dimension) is sufficient to achieve the same order regret as knowing the context in full precision. Note that previous algorithms, which rely on constructing $1/T$-nets for the set of feature vectors, use $O(d\log T)$ bits per context to achieve the same order regret, where $T$ is the length of the horizon [24], and require time complexity of $O(T^d)$, which is exponential in $d$.
Related Work and Distinction. Contextual linear bandits is a rich and important model that has attracted significant interest both in theory and applications [8, 24]. Popular algorithms for this setup include LinUCB [1, 37] and contextual Thompson sampling [2]. Under Assumption 1, these algorithms achieve a regret of $\tilde{O}(d\sqrt{T})$, where $d$ is the dimension of an unknown system parameter and $T$ is the time horizon, while the best known lower bound for this setup is $\Omega(d\sqrt{T})$ [37]. These algorithms assume perfect knowledge of the contexts and rewards. Within this space, our work focuses on operation under communication constraints in a distributed setting.
There is large body of work focusing on distributed linear contextual bandits settings, but mainly within the framework of federated learning, where batched algorithms have been proposed for communication efficiency [43, 41, 6, 5, 23] that aggregate together observations and parameter learning across a large number of iterations. This is possible because in federated learning, the agents themselves wish to learn the system parameters, remain active playing multiple actions throughout the learning process, and exchange information with the goal of speeding up their learning [43, 41]. In contrast, in our setup batched algorithms cannot reduce the communication cost because each agent only plays a single action; this may be because agents are transient, but also because they may not be interested in learning - this may not be a task that the agents wish to consistently perform - and thus do not wish to devote resources to it. For example, an agent may wish to try a restaurant in a special occasion, but would not be interested in sampling multiple restaurants/learning recommendation system parameters. In other words, we consider a scenario where the user benefits from receiving an action (or policy) from the central learner, e.g., a recommendation. In response, the user gives
feedback to the central learner in terms of (compressed) context/reward. The compression operations benefit the user by helping reduce her communication cost. In principle, the user is not required to respond. But the central learner will be able to learn whenever there is a feedback; creating an incentive for the user to respond could be an interesting future topic. Our setup supports a different (and complementary) set of applications than the federated learning framework, and requires a new set of algorithms that operate without requiring agents to keep memory of past actions.1
There is a long line of research on compression for machine learning and distributed optimization, e.g., compression for distributed gradient descent [40, 3, 32, 18], and distributed inference [19]. However, such schemes are not optimized for active learning applications. Our compression schemes can be seen as quantization schemes for contexts and rewards tailored to active learning applications.
Our work also differs from traditional vector compression schemes [15] that aim to reconstruct the data potentially with some distortion (achieve rate-distortion trade-offs). In our case, we do not aim to reconstruct the data, but instead to distinguish the best arm for each context. Indeed, using 0 bits, as we do in Section 3, we cannot reconstruct a meaningful estimate of the context.
To the best of our knowledge, our framework has not been examined before for linear contextual bandits. Work in the literature has examined compression for distributed memoryless MABs [21], but only for rewards (scalar values) and not the contexts (large vectors), and thus these techniques also do not extend to our case.
Paper organization. Section 2 reviews our notation and problem formulation; Section 3 provides and analyzes our algorithm for known and Section 4 for unknown context distributions.
2 Notation and Problem Formulation
Notation. We use the following notation throughout the paper. For a vector $X$ we use $X_i$ or $(X)_i$ to denote the $i$-th element of the vector $X$; similarly, for a matrix $V$ we use $V_{ij}$ or $(V)_{ij}$ to denote the element at row $i$ and column $j$. We use $\|V\|_2$ to denote the matrix spectral norm. For a function $f$, we denote its domain and range by $\mathrm{dom}(f)$ and $\mathrm{ran}(f)$, respectively. When $\mathrm{dom}(f) \subseteq \mathbb{R}$, we use $f(X)$ for a vector $X \in \mathbb{R}^d$ to denote $f(X) := [f(X_1), \ldots, f(X_d)]$, i.e., the function $f$ is applied element-wise; for example, we use $X^2$ to denote the element-wise square of $X$. We denote the inverse of a function $f$ by $f^{-1}$; if $f$ is not one-to-one, with abuse of notation we use $f^{-1}$ to denote a function that satisfies $f(f^{-1}(x)) = x\ \forall x \in \mathrm{ran}(f)$ (this is justified due to the axiom of choice [22]). For a matrix $V$, we use $V^{-1}$ to denote its inverse; if $V$ is singular, we use $V^{-1}$ to denote its pseudo-inverse. We use $[N]$ for $N \in \mathbb{N}$ to denote $\{1, \ldots, N\}$, and $\{X_a\}_{a \in A}$ to denote the set $\{(a, X_a) \,|\, a \in A\}$. We say that $y = O(f(x))$ if there are $x_0$ and a constant $C$ such that $y \le C f(x)\ \forall x > x_0$; we also use $\tilde{O}(f(x))$ to omit log factors.
Contextual Linear Bandits. We consider a contextual linear bandits problem over a horizon of length $T$ [8], where at each iteration $t = 1, \ldots, T$, an agent, taking into account the context, chooses an action $a_t \in A$ and receives a reward $r_t$. For each action $a \in A$, the agent has access to a corresponding feature vector $X_{t,a} \in \mathbb{R}^d$. The set of all such vectors $\{X_{t,a}\}_{a \in A}$ is the context at time $t$, and the agent can use it to decide which action $a_t$ to play. We assume that the context is generated from a distribution, i.e., given $a$, $X_{t,a}$ is generated from a distribution $P_a$. As a specific example, we could have that $a \in \mathbb{R}^d$ and $X_{t,a}$ is generated from a Gaussian distribution with zero mean and covariance matrix $\|a\|^2 I$, where $I$ is the identity matrix, i.e., $P_a = \mathcal{N}(0, \|a\|^2 I)$. The selection of $a_t$ may depend not only on the current context $\{X_{t,a}\}_{a \in A}$ but also on the history $H_t \triangleq \{\{X_{1,a}\}_{a \in A}, a_1, r_1, \ldots, \{X_{t-1,a}\}_{a \in A}, a_{t-1}, r_{t-1}\}$, namely, all previously selected actions, observed contexts and rewards. Once an action is selected, the reward is generated according to
$r_t = \langle X_{t,a_t}, \theta_\star \rangle + \eta_t$,   (1)
where $\langle \cdot, \cdot \rangle$ denotes the dot product, $\theta_\star$ is an unknown (but fixed) parameter vector in $\mathbb{R}^d$, and $\eta_t$ is noise. We assume that the noise follows an unknown distribution with $\mathbb{E}[\eta_t \,|\, \mathcal{F}_t] = 0$ and $\mathbb{E}[\exp(\lambda \eta_t) \,|\, \mathcal{F}_t] \le \exp(\lambda^2/2)\ \forall \lambda \in \mathbb{R}$, where $\mathcal{F}_t = \sigma(\{X_{1,a}\}_{a \in A}, a_1, r_1, \ldots, \{X_{t,a}\}_{a \in A}, a_t)$ is the filtration [13] of historic information up to time $t$, and $\sigma(X)$ is the $\sigma$-algebra generated by $X$ [13].
Footnote 1: Our techniques could be adapted to additionally improve the communication efficiency of batched algorithms, but this is not the focus of our work.
The objective is to minimize the regret $R_T$ over a horizon of length $T$, where
$R_T = \sum_{t=1}^{T} \left( \max_{a \in A} \langle X_{t,a}, \theta_\star \rangle - \langle X_{t,a_t}, \theta_\star \rangle \right)$.   (2)
The best performing algorithms for this problem, such as LinUCB and contextual Thompson sampling, achieve a worst-case regret of $\tilde{O}(d\sqrt{T})$ [29, 28, 1, 2]. The best known lower bound is $\Omega(d\sqrt{T})$ [37].
In the rest of this paper, we make the following assumptions that are standard in the literature [24].
Assumption 1. We consider contextual linear bandits that satisfy: (1.) $\|X_{t,a}\|_2 \le 1$, $\forall t \in [T], a \in A$. (2.) $\|\theta_\star\|_2 \le 1$. (3.) $r_t \in [0, 1]$, $\forall t \in [T]$.
The boundedness assumption on $r_t$ can be relaxed using [21], which only requires approximately 3.5 bits on average to send $r_t$, even if it is unbounded.
Memoryless Distributed Contextual Linear Bandits. We consider a distributed setting that consists of a central learner communicating with geographically separated agents. For example, the agents are drones that interact with a traffic policeman (central learner) as they fly by. We assume that the agents do not keep memory of past actions and may not be present for the whole duration of learning; learning in our setup can happen thanks to the persistent presence of the central learner.
At each time $t$, $t = 1, \ldots, T$, a distributed agent joins the system; she receives from the central learner information on the system, such as the current estimate of the parameter vector $\theta_\star$ or the history $H_t$; she observes the current context $\{X_{t,a}\}_{a \in A}$, selects and plays an action $a_t$ and collects the corresponding reward $r_t$. Note that although the distributed agent knows the context $\{X_{t,a}\}_{a \in A}$, the action $a_t$ and the observed reward $r_t$, the central learner does not. The central learner may need this information to update its estimate of the system parameters, such as the unknown parameter vector $\theta_\star$, and the history $H_{t+1}$. However, we assume that the agent is restricted to utilize a communication-constrained channel and thus may not be able to send the full information to the central learner.
The main question we ask in this paper is: can we design a compression scheme, where the agent sends to the central learner only one message using $B_t$ bits (for as small as possible a value of $B_t$), that enables the central learner to learn equally well (experience the same order of regret) as if there were no communication constraints? With no communication constraints, the agent could send unquantized the full information $\{\{X_{t,a}\}_{a \in A}, a_t, r_t\}$. Instead, the agent transmits a message that could be a function of all locally available information at the agent. For example, it could be a function of $(H_t, \{X_{t,a}\}_{a \in A}, a_t, r_t)$, if the agent had received $H_t$ from the central learner. It could also be a function of just $(X_{t,a_t}, r_t)$, which could be sufficient if the central learner employs an algorithm such as LinUCB [1, 37]. In summary, we set the following goal.
Goal. Design contextual linear bandit schemes for the memoryless distributed setting that achieve the best known regret of $O(d\sqrt{T \log T})$, while communicating a small number of bits $B_t$.
We only impose communication constraints on the uplink communication (from the agents to the central learner) and assume cost-free downlink communication (see discussion in Section 1).
Stochastic Quantizer (SQ) [16]. Our proposed algorithms use stochastic quantization, which we next review. We define $SQ_\ell$, $\ell \in \mathbb{N}$, to be a quantizer that uses $\log(\ell+1)$ bits, consisting of an encoder and a decoder described as follows. The encoder $\xi_\ell$ takes a value $x \in [0, \ell]$ and outputs an integer value
$\xi_\ell(x) = \begin{cases} \lfloor x \rfloor & \text{with probability } \lceil x \rceil - x \\ \lceil x \rceil & \text{with probability } x - \lfloor x \rfloor. \end{cases}$   (3)
The output $\xi_\ell(x)$ is represented with $\log(\ell+1)$ bits. The decoder $D_\ell$ takes as input the binary representation of $\xi_\ell(x)$ and outputs the real value $\xi_\ell(x)$. The composition of the encoder $\xi_\ell$, the binary mapping, and the decoder $D_\ell$ is denoted by $SQ_\ell$. We notice that since the decoder only inverts the binary mapping operation, we have that $SQ_\ell = \xi_\ell$. When $SQ_\ell$ is applied at the agent's side, the agent encodes its data $x$ as $\xi_\ell(x)$, then sends the corresponding binary mapping to the central learner, which applies $D_\ell$ to get $SQ_\ell(x)$. With a slight abuse of notation, this operation is described in the paper by saying that the agent sends $SQ_\ell(x)$ to the central learner.
The quantizer $SQ_\ell$ is a form of dithering [16] and it has the following properties:
$\mathbb{E}[SQ_\ell(x) \,|\, x] = \lfloor x \rfloor(\lceil x \rceil - x) + \lceil x \rceil(x - \lfloor x \rfloor) = x(\lceil x \rceil - \lfloor x \rfloor) = x$, and $|SQ_\ell(x) - x| \le 1$.
In particular, it conveys an unbiased estimate of the input with a difference that is bounded by 1 almost surely. We also define a generalization of $SQ_\ell$, denoted by $SQ_\ell^{[a,b]}$, where the input $x$ of the encoder is in $[a, b]$ instead of $[0, \ell]$. The encoder first shifts and scales $x$ using $\tilde{x} = \frac{\ell}{b-a}(x - a)$ to make it lie in $[0, \ell]$, then feeds $\tilde{x}$ to the encoder in (3). This operation is inverted at the decoder. It is easy to see that $SQ_\ell^{[a,b]}$ satisfies
$\mathbb{E}[SQ_\ell^{[a,b]}(x) \,|\, x] = x$, $\quad |SQ_\ell^{[a,b]}(x) - x| \le \frac{b-a}{\ell}$.
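A small NumPy sketch of these two quantizers (our own illustration, not the authors' code) may help fix ideas; the function names are ours.

```python
import numpy as np

def sq(x, l, rng=np.random.default_rng()):
    """SQ_l: stochastic quantizer for inputs in [0, l]; unbiased, error <= 1."""
    x = np.clip(x, 0, l)
    lo, hi = np.floor(x), np.ceil(x)
    return np.where(rng.random(np.shape(x)) < x - lo, hi, lo)

def sq_interval(x, l, a, b, rng=np.random.default_rng()):
    """SQ_l^{[a,b]}: shift/scale to [0, l], quantize, undo the scaling at the decoder."""
    x_tilde = l * (x - a) / (b - a)
    q = sq(x_tilde, l, rng)
    return q * (b - a) / l + a      # unbiased, error <= (b - a) / l

# Example: the 1-bit reward quantization SQ_1 used later by Algorithm 1.
r_hat = sq_interval(0.73, l=1, a=0.0, b=1.0)   # 1.0 with probability 0.73, else 0.0
```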
3 Contextual Linear Bandits with Known Context Distribution
In this section, we show that if the central learner knows the distributions of the vectors $X_{t,a}$, then the agent does not need to convey the specific realization of the vector $X_{t,a}$ she observes at all - it is sufficient to just send 1 bit to convey some information on the observed reward and nothing else. But for this very limited communication, the central learner can experience the same order of regret as when receiving in full precision all the information that the agents have, namely, $R_T = O(d\sqrt{T \log T})$. Algorithm 1, which we describe in this section, provides a method to achieve this. Algorithm 1 is clearly optimal, as we cannot hope to use less than zero bits for the vector $X_{t,a}$.
Remark 1. Knowledge of the distribution of $X_{t,a}$ is possible in practice, since many times the context may be capturing well-studied statistics (e.g., male or female, age, weight, income, race, dietary restrictions, emotional state, etc.) - the advent of large data has made, and will continue to make, such distributions available. Similarly, actions may be finite (e.g., restaurants to visit) or well described (e.g., released amounts of substances), and thus the distribution of $X_{t,a}$ could be derived. When the distribution is approximately known, we provide later in this section a bound on the misspecification performance penalty in terms of regret.
Main Idea. The intuition behind Algorithm 1 is that it reduces the multi-context linear bandit problem to a single-context problem. In particular, it calls as a subroutine an algorithm we term $\Lambda$, which serves as a placeholder for any current (or future) bandit algorithm that achieves regret $O(d\sqrt{T \log T})$ for the case of a single context (for example, LinUCB [1, 37]). The central learner uses $\Lambda$ to convey to the agents the information they need to select a good action. Our aim is to parametrize the single-context problem appropriately, so that, by solving it, we also solve our original problem.
Recall that in a single-context problem, at each iteration $t$, any standard linear bandit algorithm $\Lambda$ selects a feature vector (an action) $x_t$ from a set of allowable actions $\mathcal{X}$, and observes a reward
$r_t = \langle x_t, \theta'_\star \rangle + \eta_t$,   (4)
where $\theta'_\star$ is an unknown parameter and $\eta_t$ is noise that satisfies the same assumptions as in (1). The objective of $\Lambda$ is to minimize the standard linear regret $R_T(\Lambda)$ over a horizon of length $T$, namely
$R_T(\Lambda) = \sum_{t=1}^{T} \left( \max_{x \in \mathcal{X}} \langle x, \theta'_\star \rangle - \langle x_t, \theta'_\star \rangle \right)$.   (5)
Our reduction proceeds as follows. We assume that $\Lambda$ operates over the same horizon of length $T$ and is parametrized by an unknown parameter $\theta'_\star$. We will design the action set $\mathcal{X}$ that we provide to $\Lambda$ using our knowledge of the distributions $P_a$ (see footnote 2), as we will describe later in (7). During each iteration, the central learner asks $\Lambda$ to select an action $x_t \in \mathcal{X}$ and then provides to $\Lambda$ a reward for this action (our design ensures that this reward satisfies (4) with $\theta'_\star = \theta_\star$). $\Lambda$ operates with this information, oblivious to what else the central learner does. Yet, the action $x_t$ is never actually played: the central learner uses the selected action $x_t$ to create an updated estimate of the parameter vector $\hat\theta_t$, as we will describe later, and only sends this parameter vector estimate to the distributed agent. The agent observes her context, selects what action to play, and sends back her observed quantized reward to the central learner. This is the reward that the central learner provides to $\Lambda$. We design the set $\mathcal{X}$ and the agent operation so that: (4) holds; and $R_T - R_T(\Lambda)$ is small, where $R_T$ is the regret for our original multi-context problem and $R_T(\Lambda)$ the regret of $\Lambda$. We next try to provide some intuition on how we achieve this.
We first describe how we construct the set $\mathcal{X}$. Let $\Theta$ be the set of all values that $\theta_\star$ could possibly take. For each possible parameter vector value $\theta \in \Theta$, the central learner considers the quantity
$X^\star(\theta) = \mathbb{E}_{\{x_a : x_a \sim P_a\}}\left[\arg\max_{x \in \{x_a : a \in A\}} \langle x, \theta \rangle\right]$,   (6)
where $x_a$ is the random variable that follows the distribution $P_a$. Ties in (6) can be broken uniformly at random. In fact, any pre-selected choice function would work as long as the same function is also used in step 12 of Algorithm 1. Note that the function $X^\star : \mathbb{R}^d \to \mathbb{R}^d$ can be computed offline before the learning starts, see Example 1. We then use
$\mathcal{X} = \{X^\star(\theta) \,|\, \theta \in \Theta\}$.   (7)
Footnote 2: Recall that given $a$, $X_{t,a}$ is generated from distribution $P_a$, see Section 2.
Intuitively, for each value of $\theta$, we optimistically assume that the distributed agent may select the best possible realization $X_{t,a}$ for this $\theta$ (that has the expectation in (6)), and receive the associated reward; accordingly, we restrict the action space $\mathcal{X}$ of $\Lambda$ to only contain the expectation of these “best” $X_{t,a}$. The vector $x_t \in \mathcal{X}$ may not actually be the vector corresponding to the action the agent selects; it is only used to convey to the agent an estimate of the unknown parameter $\hat\theta_t$ that satisfies $x_t = X^\star(\hat\theta_t)$. Although the central learner does not control which action the agent plays, this is influenced by $\hat\theta_t$; we show in App. A that $X_{t,a_t}$ is an unbiased estimate of $x_t$, and the generated reward follows the linear model in (4) with $\theta'_\star = \theta_\star$. In Theorem 1, we prove that
$\arg\max_{x \in \mathcal{X}} \langle x, \theta_\star \rangle = X^\star(\theta_\star)$.   (8)
Hence, if $\Lambda$ converges to selecting the best action for the single-context problem, we will have that $\hat\theta_t$ converges to $\theta_\star$ if the maximizer in (8) is unique. If there are multiple values of $\theta$ with $X^\star(\theta) = X^\star(\theta_\star)$, we show in the proof of Theorem 1 that they all lead to the same expected reward for the original multi-context problem.
Example 1. Consider the case where $d = 1$, $A = \{1, 2\}$, $X_{t,a} \in \{-1, 1\}\ \forall a \in A$, $\Theta = \{-1, 1\}$, $\theta_\star = 1$, and $X_{t,1}$ takes the value $-1$ with probability $p$ and $1$ otherwise, while $X_{t,2}$ takes the value $-1$ with probability $q$ and $1$ otherwise. Then, we have that
$\arg\max_{X_{t,a}} \langle X_{t,a}, 1 \rangle = \begin{cases} 1 & \text{with probability } 1 - pq \\ -1 & \text{with probability } pq, \end{cases}$   (9)
where we use the fact that if $\arg\max_{X_{t,a}} \langle X_{t,a}, 1 \rangle \ne 1$, it must be the case that both $X_{t,1}$ and $X_{t,2}$ are $-1$. Thus, $X^\star(1) = \mathbb{E}[\arg\max_{X_{t,a}} \langle X_{t,a}, 1 \rangle] = 1 - 2pq$, and similarly $X^\star(-1) = -1 + 2(1-p)(1-q)$, and hence, $\mathcal{X} = \{1 - 2pq, -1 + 2(1-p)(1-q)\}$. If $\Lambda$ decides to pick $x_t = 1 - 2pq$, we have that $\hat\theta_t = 1$; otherwise $\hat\theta_t = -1$. This estimate $\hat\theta_t$ is then conveyed to the agent to help her pick the action.
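A quick Monte Carlo check of these closed forms (our own illustration) is below; it draws the two feature values with the probabilities of Example 1 and averages the maximizer of $\langle x, \theta \rangle$.

```python
import numpy as np

def x_star_mc(theta, p, q, n=200_000, rng=np.random.default_rng(0)):
    # X_{t,1} = -1 w.p. p, X_{t,2} = -1 w.p. q (else +1), as in Example 1.
    x1 = np.where(rng.random(n) < p, -1.0, 1.0)
    x2 = np.where(rng.random(n) < q, -1.0, 1.0)
    best = np.where(x1 * theta >= x2 * theta, x1, x2)   # argmax_x <x, theta>
    return best.mean()

p, q = 0.3, 0.4
print(x_star_mc(+1.0, p, q), 1 - 2 * p * q)                 # both ~ 0.76
print(x_star_mc(-1.0, p, q), -1 + 2 * (1 - p) * (1 - q))    # both ~ -0.16
```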
Algorithm Operation. The pseudo-code is provided in Algorithm 1.
• First, the central learner calculates the function
$X^\star(\theta) = \mathbb{E}_{\{x_a : x_a \sim P_a\}}[\arg\max_{x \in \{x_a : a \in A\}} \langle x, \theta \rangle]$,   (10)
and creates the action set $\mathcal{X} = \{X^\star(\theta) \,|\, \theta \in \Theta\}$ that algorithm $\Lambda$ is going to use.
• At each time $t$, based on past history, $\Lambda$ decides on a next action $x_t \in \mathcal{X}$. The central learner uses $x_t$ to calculate the new update $\hat\theta_t = X^{-1}(x_t)$, where $X^{-1}$ is the inverse of $X^\star$ (see Section 2).
• The agent receives $\hat\theta_t$ from the central learner, observes her context, plays an action $a_t = \arg\max_{a \in A} \langle X_{t,a}, \hat\theta_t \rangle$, and observes the reward $r_t$. She then quantizes the reward using a stochastic quantizer $SQ_1$ (see Section 2), and communicates the outcome using one bit to the central learner.
• The central learner provides the quantized reward as input to $\Lambda$. Note that $\Lambda$ is oblivious to what actions are actually played; it treats the received reward as corresponding to the action $x_t$ it had decided.
The following theorem proves that Algorithm 1 achieves a regret $R_T(\Lambda) + O(\sqrt{T \log T})$, where $R_T(\Lambda)$ is the regret of $\Lambda$ in (5). Hence, if $\Lambda$ satisfies the best known regret bound of $O(d\sqrt{T \log T})$, e.g., LinUCB, Algorithm 1 achieves a regret of $O(d\sqrt{T \log T})$. The theorem holds under the mild set of assumptions that we stated in Section 2.
Theorem 1. Algorithm 1 uses 1 bit per reward and 0 bits per context. Under Assumption 1, it achieves a regret $R_T = R_T(\Lambda) + O(\sqrt{T \log T})$ with probability at least $1 - \frac{1}{T}$.
Proof outline. The complete proof is available in App. A. We next provide a short outline. From the definition of $X^\star$ in (10), we notice the following. Recall that the distributed agent receives $\hat\theta_t$ from the central learner, and pulls the best action for this $\hat\theta_t$, i.e., $a_t = \arg\max_{a \in A} \langle X_{t,a}, \hat\theta_t \rangle$.
Algorithm 1 Communication-efficient contextual linear bandits with known distribution
1: Input: an algorithm $\Lambda$ for the one-context case, underlying set of actions $\mathcal{X}$, and time horizon $T$.
2: Initialize: $X^\star(\theta) = \mathbb{E}_{\{x_a : x_a \sim P_a\}}[\arg\max_{x \in \{x_a : a \in A\}} \langle x, \theta \rangle]$, $\mathcal{X} = \{X^\star(\theta) \,|\, \theta \in \Theta\}$, $\hat{r}_0 = 0$.
3: Let $X^{-1}$ be an inverse of $X^\star$.
4: for $t = 1 : T$ do
5:   Central learner:
6:   Receive $\hat{r}_{t-1}$ and provide it to $\Lambda$.
7:   $\Lambda$, using the history $(x_1, \hat{r}_1, \ldots, x_{t-1}, \hat{r}_{t-1})$, selects $x_t$.
8:   Send $\hat\theta_t = X^{-1}(x_t)$ to the agent.
9:   Agent:
10:  Receive $\hat\theta_t$ from the central learner.
11:  Observe context realization $\{X_{t,a}\}_{a \in A}$.
12:  Pull arm $a_t = \arg\max_{a \in A} \langle X_{t,a}, \hat\theta_t \rangle$ and receive reward $r_t$.
13:  Send $\hat{r}_t = SQ_1(r_t)$ to the central learner using 1 bit.
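The sketch below (our own illustration) simulates this interaction on Example 1, with a naive explore-then-commit rule standing in for the single-context algorithm $\Lambda$; the paper would instead plug in an off-the-shelf method such as LinUCB, and rewards here are rescaled to $[0,1]$ only to satisfy Assumption 1 in the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, theta_star, T = 0.3, 0.4, 1.0, 2000
# Action set of Lambda: X*(theta) for theta in Theta = {+1, -1} (Example 1).
x_star = {+1.0: 1 - 2 * p * q, -1.0: -1 + 2 * (1 - p) * (1 - q)}
stats = {th: [0.0, 0] for th in x_star}          # [sum of 1-bit rewards, pulls]

for t in range(T):
    # Central learner: Lambda picks x_t, i.e., a theta_hat (explore, then commit).
    if t < T // 4:
        theta_hat = +1.0 if t % 2 == 0 else -1.0
    else:
        theta_hat = max(stats, key=lambda th: stats[th][0] / max(stats[th][1], 1))
    # Agent: observe context, pull the arm maximizing <X_{t,a}, theta_hat>.
    x1 = -1.0 if rng.random() < p else 1.0
    x2 = -1.0 if rng.random() < q else 1.0
    x_played = x1 if x1 * theta_hat >= x2 * theta_hat else x2
    r = np.clip(0.5 + 0.5 * x_played * theta_star + 0.05 * rng.standard_normal(), 0, 1)
    r_hat = float(rng.random() < r)               # SQ_1: one bit, unbiased
    # Central learner: feed the 1-bit reward back to Lambda as the reward of x_t.
    stats[theta_hat][0] += r_hat
    stats[theta_hat][1] += 1

print("committed theta_hat:",
      max(stats, key=lambda th: stats[th][0] / max(stats[th][1], 1)))
```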
We show that conditioned on $x_t$, the associated vector $X_{t,a_t}$ is an unbiased estimate of $x_t$ with a small variance. Given this, we prove that $\hat{r}_t$ satisfies (4), and thus the rewards observed by $\Lambda$ are generated according to a linear bandit model with an unknown parameter that is the same as $\theta_\star$.
We next decompose the difference $R_T - R_T(\Lambda)$ into two terms: $\Sigma_T = \sum_{t=1}^{T} \left( \langle \arg\max_{X_{t,a}} \langle X_{t,a}, \theta_\star \rangle, \theta_\star \rangle - \langle x_t, \theta_\star \rangle \right)$ and $\Sigma'_T = \sum_{t=1}^{T} \left( \langle \arg\max_{X_{t,a}} \langle X_{t,a}, \hat\theta_t \rangle, \theta_\star \rangle - \max_{x \in \mathcal{X}} \langle x, \theta_\star \rangle \right)$. To bound the first term, we show that the unbiasedness property together with Assumption 1 implies that $\Sigma_T$ is a martingale with bounded differences. This implies that $|\Sigma_T| = O(\sqrt{T \log T})$ with high probability. To bound $\Sigma'_T$, we first show that $\arg\max_{x \in \mathcal{X}} \langle x, \theta_\star \rangle = X^\star(\theta_\star)$ (we note that this is why the algorithm converges to a $\hat\theta_t$ that is equal to, or results in the same expected reward as, $\theta_\star$). Then, following a similar approach, we can show that $\Sigma'_T$ is a martingale with bounded differences, which implies that $|\Sigma'_T| = O(\sqrt{T \log T})$ with high probability. □
Downlink Communication. The downlink cost of our scheme is $O(d)$ (see App. A for discussion).
Operation Complexity. The main complexity that our algorithm adds beyond the complexity of $\Lambda$ is the computation of the function $X^\star$. The time complexity of $X^\star(\theta)$ depends on the context distribution. While computing $X^\star(\theta)$ can be computationally expensive in worst-case scenarios, it can be computed/approximated efficiently for many practical distributions, even in closed form. We give the following examples:
• For $d = 1$, $\theta > 0$, we have that $X^\star(\theta)$ is the expectation of the maximum of multiple random variables, i.e., $X^\star(\theta) = \mathbb{E}_{x_a \sim P_a}[\max_{a \in A} x_a]$, which can be computed/approximated efficiently if the distributions $P_a$ are given in a closed form.
• If $\{P_a\}_{a \in A}$ are continuous distributions, then $X^\star(\theta)$, $\theta \ne 0$, can be expressed as
$X^\star(\theta) = \sum_{a \in A} \int_{x_a \sim P_a} x_a\, \mathbb{E}_{x_{a'} \sim P_{a'},\, a' \in A \setminus \{a\}}\big[\mathbb{I}[\langle x_{a'}, \theta \rangle < \langle x_a, \theta \rangle\ \forall a' \ne a] \,\big|\, x_a\big]\, dP_a$.   (11)
For many distributions, the previous expression can be computed/approximated efficiently. For instance, consider the case where $d \ge 1$ and the $x_a$ are independent, identically distributed $d$-dimensional Gaussian vectors with mean $\mu$ and covariance matrix $\Sigma = U^T D U$, where $D$ is a diagonal matrix and $U$ is upper triangular. The expectation in (11) is equal to $\big(Q\big(-\langle x_a - \mu, \theta \rangle / \|\sqrt{D}\, U \theta\|_2\big)\big)^{|A|-1}$, where $Q(c) = \frac{1}{\sqrt{2\pi}} \int_c^{\infty} \exp(-\frac{1}{2}x^2)\, dx$. Hence, $X^\star(\theta)$ can be approximated efficiently in that case.
• For discrete distributions, $X^\star(\theta)$ can be computed efficiently depending on the number of mass points of the distribution and whether the distribution has structures/properties that simplify the expression.
Imperfect Knowledge of Distributions. Since we only use the distributions to calculate $X^\star$, imperfect knowledge of the distributions only affects us to the degree that it affects the calculation of $X^\star$. Suppose that we have an estimate $\tilde{X}^\star$ of $X^\star$ that satisfies
$\sup_{\theta \in \Theta} \|X^\star(\theta) - \tilde{X}^\star(\theta)\|_2 \le \epsilon$.   (12)
Using Theorem 1, we prove in App. A the following corollary.
Corollary 1. Suppose we are given $\tilde{X}^\star$ that satisfies (12). Then, there exists an algorithm $\Lambda$ for which Algorithm 1 achieves $R_T = \tilde{O}(d\sqrt{T} + \epsilon T \sqrt{d})$ with probability at least $1 - \frac{1}{T}$.
Privacy. Our result may be useful for applications beyond communication efficiency; indeed, the context may contain private information (e.g., personal preferences, financial information, etc.); use of our algorithm enables the agent to not share this private information at all with the central learner, without impeding the learning process. Surprisingly, work in [48], motivated by privacy considerations, has shown that if an agent adds a small amount of zero-mean noise to the true context before sending it to the central learner, this can severely affect the regret in some cases - and yet our algorithm essentially enables the central learner to “guess” the context with no regret penalty if the distributions are known. Although adding zero-mean noise to the observed feature vector conveys an unbiased estimate of the observation, the difference between this and our case is technical and mainly due to the fact that the unbiasedness is required to hold conditioned on the central learner's observation (the noisy context).
Note that we do not make formal privacy claims in this paper, but simply observe that our approach could potentially be leveraged for privacy purposes. It is true that the reward can reveal some information about the context, e.g., if all the actions result in a small reward for one context and a large reward for another context. However, privatizing the reward (which implies a private context in our case) is much easier than privatizing the context, and there are many proposed optimal algorithms with little to no regret loss, e.g., see [20, 38, 44, 35]. This is not the case when privatizing the context. In fact, it was shown in [42] that privatizing the context can lead to linear regret, and relaxed definitions of privacy have been proposed to avoid this.
4 Contextual Linear Bandits with Unknown Context Distribution
We now consider the case where the learner does not know the context distributions, and thus Algorithm 1, which uses zero bits for the context, cannot be applied. In this case, related literature conjectures a lower bound of $\Omega(d)$ [46, 47] – which is discouraging, since it is probably impossible to establish an algorithm with communication depending only logarithmically on $d$. Additionally, in practice we use $32d$ bits to convey full-precision values - thus this conjecture indicates that in practice we may not be able to achieve order improvements in terms of bits communicated, without performance loss.
In this section, we provide Algorithm 2 that uses $\approx 5d$ bits per context and achieves (optimal) regret $R_T = O(d\sqrt{T \log T})$. We believe Algorithm 2 is interesting for two reasons: 1. In theory, we need an infinite number of bits to convey full-precision values - we prove that a constant number of bits per dimension per context is sufficient. Previously best-known algorithms, which rely on constructing $1/T$-nets for the set of feature vectors, use $O(d \log T)$ bits per context, which goes to infinity as $T$ goes to infinity. Moreover, these algorithms require exponential complexity [24] while ours is computationally efficient. 2. In practice, especially for large values of $d$, reducing the number of bits conveyed from $32d$ to $\approx 5d$ is quite significant - this is a reduction by a factor of six, which implies six times less communication.
Main Idea. The intuition behind Algorithm 2 is the following. The central learner is going to use an estimate of the $d \times d$ least-squares matrix $V_t = \sum_{i=1}^{t} X_{i,a_i} X_{i,a_i}^T$ to update her estimates of the parameter vector $\theta_\star$. Thus, when quantizing the vector $X_{t,a}$, we want to make sure that not only this vector is conveyed with sufficient accuracy, but also that the central learner can calculate the matrix $V_t$ accurately. In particular, we would like the central learner to be able to calculate an unbiased estimator of each entry of $X_{t,a}$ and each entry of the matrix $V_t$. Our algorithm achieves this by quantizing the feature vectors $X_{t,a_t}$, and also the diagonal (only the diagonal) entries of the least-squares matrix $V_t$. We prove that by doing so, with only $\approx 5d$ bits we can provide an unbiased estimate and guarantee an $O(\frac{1}{\sqrt{d}})$ quantization error for each entry in the matrix almost surely.
Quantization Scheme. We here describe the proposed quantization scheme.
• To quantize $X_{t,a_t}$: Let $m \triangleq \lceil \sqrt{d} \rceil$. We first send the sign of each coordinate of $X_{t,a_t}$ using $d$ bits, namely, we send the vector $s_t = X_{t,a_t} / |X_{t,a_t}|$. To quantize the magnitude $|X_{t,a_t}|$, we scale each coordinate of $|X_{t,a_t}|$ by $m$ and quantize it using a Stochastic Quantizer (SQ, see footnote 3) with $m+1$ levels in the interval $[0, m]$. Let $X_t \triangleq SQ_m(m |X_{t,a_t}|)$ denote the resulting SQ outputs; we note that $X_t$ takes non-negative integer values and lies in a norm-1 ball of radius $2d$ (this holds since the original vector lies in a norm-2 ball of radius 1 and the error in each coordinate is at most $1/m$). That is, it holds that $X_t \in Q = \{x \in \mathbb{N}^d \,|\, \|x\|_1 \le 2d\}$. We then use any enumeration $h : Q \to [|Q|]$ of this set to encode $X_t$ using $\log(|Q|)$ bits.
Footnote 3: As described in (3) in Section 2, SQ maps a value $x$ to an integer value, namely $\lfloor x \rfloor$ with probability $\lceil x \rceil - x$ and $\lceil x \rceil$ with probability $x - \lfloor x \rfloor$.
• To quantize $X_{t,a_t} X_{t,a_t}^T$: Let $X_{t,a_t}^2$ denote a vector that collects the diagonal entries of $X_{t,a_t} X_{t,a_t}^T$. Let $\hat{X}_t \triangleq s_t X_t / m$ be the estimate of $X_{t,a_t}$ that the central learner retrieves. Note that $\hat{X}_t^2$ is not an unbiased estimate of $X_{t,a_t}^2$; however, $|(X_{t,a_t}^2 - \hat{X}_t^2)_i| \le 3/m$ for all coordinates $i$ (proved in App. B). Our scheme simply conveys the difference $X_{t,a_t}^2 - \hat{X}_t^2$ with 1 bit per coordinate using an $SQ_1^{[-3/m, 3/m]}$ quantizer. A sketch of the encoder/decoder pair follows.
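The following NumPy sketch is our own illustration of this scheme (the enumeration/bit-packing step $h$ is omitted); it mirrors the agent-side encoder and the learner-side decoding.

```python
import numpy as np

def quantize_context(x, rng=np.random.default_rng()):
    """Agent side: signs, SQ_m of the scaled magnitude, 1-bit square correction."""
    d = x.shape[0]
    m = int(np.ceil(np.sqrt(d)))
    s = np.where(x >= 0, 1.0, -1.0)              # d sign bits
    scaled = m * np.abs(x)                       # in [0, m] since ||x||_2 <= 1
    lo = np.floor(scaled)
    X_int = lo + (rng.random(d) < scaled - lo)   # SQ_m output: integers in [0, m]
    x_hat = s * X_int / m                        # what the learner will reconstruct
    diff = x**2 - x_hat**2                       # lies in [-3/m, 3/m]
    frac = (diff + 3 / m) * m / 6                # rescale to [0, 1] for SQ_1
    bits = (rng.random(d) < frac).astype(float)  # d correction bits
    return s, X_int, bits

def dequantize_context(s, X_int, bits):
    """Learner side: rebuild x_hat and an unbiased estimate of the squares."""
    d = s.shape[0]
    m = int(np.ceil(np.sqrt(d)))
    x_hat = s * X_int / m
    e2 = bits * 6 / m - 3 / m                    # decode SQ_1^{[-3/m, 3/m]}
    x2_hat = x_hat**2 + e2                       # unbiased estimate of x^2
    return x_hat, x2_hat
```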
The central learner and distributed agent operations are presented in Algorithm 2.
Example 2. Consider the case where $d = 5$. Then each coordinate of $|X_{t,a_t}|$ is scaled by $3$ and quantized using $SQ_3$ to one of the values $0, 1, 2, 3$ to get $X_t$. The function $h$ then maps the values of $X_t$ that satisfy $\|X_t\|_1 \le 10$ to a unique value (a code) in the set $[|Q|]$. For instance, the value $3 \cdot \mathbf{1}$ is not given a code, where $\mathbf{1}$ is the vector of all ones. However, note that for $|X_{t,a_t}|$ to be mapped to $3 \cdot \mathbf{1}$, we must have $3 |(X_{t,a_t})_i| \ge 2$ for all coordinates $i$, which cannot happen since it implies that $\|X_{t,a_t}\|_2 \ge 2\sqrt{5}/3 > 1$, which contradicts Assumption 1.
Algorithm 2 Communication-efficient contextual linear bandits with unknown distribution
1: Input: underlying set of actions $A$, and time horizon $T$.
2: $\hat\theta_0 = 0$, $\tilde{V}_0 = 0$, $u_0 = 0$, $m = \lceil \sqrt{d} \rceil$.
3: Let $h$ be an enumeration of the set $Q = \{x \in \mathbb{N}^d \,|\, \|x\|_1 \le 2d\}$.
4: for $t = 1 : T$ do
5:   Agent:
6:   Receive $\hat\theta_{t-1}$ from the central learner.
7:   Observe context realization $\{X_{t,a}\}_{a \in A}$.
8:   Pull arm $a_t = \arg\max_{a \in A} \langle X_{t,a}, \hat\theta_{t-1} \rangle$ and receive reward $r_t$.
9:   Compute the signs $s_t = X_{t,a_t} / |X_{t,a_t}|$ of $X_{t,a_t}$.
10:  Let $X_t = SQ_m(m |X_{t,a_t}|)$.
11:  $e^2_t = SQ_1^{[-3/m, 3/m]}(X_{t,a_t}^2 - \hat{X}_t^2)$, where $\hat{X}_t = s_t X_t / m$.
12:  Send to the central learner $h(X_t)$, $s_t$, and $e^2_t$ using $\log_2(|Q|)$, $d$, and $d$ bits, respectively.
13:  Send $\hat{r}_t = SQ_1(r_t)$ using 1 bit.
14:  Central learner:
15:  Receive $X_t$, $s_t$, $e^2_t$, and $\hat{r}_t$ from the distributed agent.
16:  $\hat{X}_t = s_t X_t / m$, $\hat{X}^{(D)}_t = \hat{X}_t^2 + e^2_t$.
17:  $u_t \leftarrow u_{t-1} + \hat{r}_t \hat{X}_t$.
18:  $\tilde{V}_t \leftarrow \tilde{V}_{t-1} + \hat{X}_t \hat{X}_t^T - \mathrm{diag}(\hat{X}_t \hat{X}_t^T) + \mathrm{diag}(\hat{X}^{(D)}_t)$.
19:  $\hat\theta_t \leftarrow \tilde{V}_t^{-1} u_t$.
20:  Send $\hat\theta_t$ to the next agent.
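A compact sketch of the central-learner side of these updates (our own illustration) is given below; it consumes the dequantized quantities produced by the earlier quantization sketch.

```python
import numpy as np

class CentralLearner:
    """Steps 15-20 of Algorithm 2: accumulate u_t, V~_t and re-estimate theta."""

    def __init__(self, d):
        self.u = np.zeros(d)
        self.V = np.zeros((d, d))
        self.theta_hat = np.zeros(d)

    def update(self, x_hat, x2_hat, r_hat):
        # x_hat : s_t X_t / m, the estimate of the played feature vector
        # x2_hat: unbiased estimate of its squared coordinates (x_hat^2 + e_t^2)
        # r_hat : 1-bit stochastically quantized reward
        self.u += r_hat * x_hat
        outer = np.outer(x_hat, x_hat)
        # Keep the off-diagonal of x_hat x_hat^T; replace the diagonal by the corrected one.
        self.V += outer - np.diag(np.diag(outer)) + np.diag(x2_hat)
        self.theta_hat = np.linalg.pinv(self.V) @ self.u   # pseudo-inverse while V is singular
        return self.theta_hat
```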
Algorithm Performance. Theorem 2, stated next, holds under Assumption 1 in Section 2 and some additional regulatory assumptions on the distributions $P_a$ provided in Assumption 2.
Assumption 2. There exist constants $c, c'$ such that for any sequence $\theta_1, \ldots, \theta_T$, where $\theta_t$ depends only on $H_t$, with probability at least $1 - \frac{c'}{T}$, it holds that
$\sum_{i=1}^{t} X_{i,a_i} X_{i,a_i}^T \succeq \frac{ct}{d} I \quad \forall t \in [T]$,   (13)
where $a_t = \arg\max_{a \in A} \langle X_{t,a}, \theta_t \rangle$, and $I$ is the identity matrix.
We note that several common assumptions in the literature imply (13), for example, bounded eigenvalues for the covariance matrix of $X_{t,a_t}$ [11, 27, 17]. Such assumptions hold for a wide range of distributions, including subgaussian distributions with bounded density [36].
Challenge in relaxing Assumption 2 (diversity assumption). The main challenge in relaxing the diversity assumption for LinUCB (or Thompson sampling) based algorithms is that the regret of those algorithms is bounded as $\tilde{O}(\sqrt{T}\, \|\hat\theta_T - \theta_\star\|_{V_T})$. Without quantization, the quantity $\|\hat\theta_T - \theta_\star\|_{V_T}$ grows slowly and is nearly a constant; however, without the diversity assumption, the quantization error can make $\|\hat\theta_T - \theta_\star\|^2_{V_T}$ grow as $\sqrt{T}$ in the worst case. This is due to the fact that sub-optimal arms do not have a large number of pulls; hence, we do not have a good estimate of $\theta_\star$ in those directions. On the other hand, the quantization errors in estimating $V_T$ are accumulated in all directions. As a result, the regret bound increases by a factor of $T^{1/4}$. We leave it as future work to either relax the diversity assumption (which is required in our paper only in the case of unknown context distribution) or else show that removing it will unavoidably increase the regret order.
Theorem 2. Algorithm 2 satisfies that for all $t$: $X_t \in Q$; and $B_t \le 1 + \log_2(2d+1) + 5.03d$ bits. Under Assumptions 1 and 2, it achieves a regret $R_T = O(d\sqrt{T \log T})$ with probability at least $1 - \frac{1}{T}$.
Proof Outline. To bound the number of bits $B_t$, we first bound the size of $Q$ by formulating a standard counting problem: we find the number of non-negative integer solutions of a linear equation. To bound the regret $R_T$, we start by proving that our quantization scheme guarantees some desirable properties, namely, unbiasedness and $O(\frac{1}{\sqrt{d}})$ quantization error for each vector coordinate. We then upper bound the regret in terms of $\|\hat\theta_t - \theta_\star\|_2$ and show that this difference can be decomposed as
$\|\hat\theta_t - \theta_\star\|_2 = \|V_t^{-1}\|_2 \big( \|\textstyle\sum_{i=1}^{t} E_i\|_2 + (1 + |\eta_i|) \|\textstyle\sum_{i=1}^{t} e_i\|_2 + \|\textstyle\sum_{i=1}^{t} \hat\eta_i X_{i,a_i}\|_2 \big)$,   (14)
where $E_t$ captures the error in estimating the matrix $X_{t,a_t} X_{t,a_t}^T$, $e_t$ is the error in estimating $X_{t,a_t}$, and $\hat\eta_t$ is a noise that satisfies the same properties as $\eta_t$. Using Assumption 2, we prove that $V_t^{-1}$ grows as $O(\frac{d}{t})$ with high probability, and from the unbiasedness and boundedness of all error quantities we show that they grow as $O(\sqrt{t \log t})$ with high probability. This implies that $\|\hat\theta_t - \theta_\star\|_2 = O(d\sqrt{\frac{\log t}{t}})$, and hence, $R_T = O(d\sqrt{T \log T})$. The complete proof is provided in App. B. □
Algorithm Complexity. If we do not count the quantization operations, it is easy to see that the complexity of the rest of the algorithm is dominated by the complexity of computing $V_t^{-1}$, which can be done in $O(d^{2.373})$ [4]. For the quantization, we note that each coordinate of $X_t$ can be computed in $\tilde{O}(1)$ time (see footnote 4). Moreover, the computation of $h(x)$ for $x \in Q$ can be done in constant time with high probability using hash tables, where $h$ is the enumeration function in Step 3. Hence, the added computational complexity is almost linear in $d$. Although a hash table for $h$ can consume $\Omega(2^{5d})$ memory, by sacrificing a constant factor in the number of bits, we can find enumeration functions that can be stored efficiently. As an example, consider the scheme in [14] that can find a one-to-one function $h : Q \to \mathbb{N}_+$ which can be stored and computed efficiently, but only gives guarantees in expectation that $\mathbb{E}[\log(h(x))] = O(d)$ for all $x \in Q$.
Downlink Communication Cost. Although we assume cost-free downlink communication, as was also the case for Algorithm 1, the downlink in Algorithm 2 is only used to send the updated parameter vector $\hat\theta_t$ to the agents. If desired, these estimates can be quantized using the same method as for $X_{t,a_t}$, which (following a similar proof to that of Theorem 2) can be shown to not affect the order of the regret while reducing the downlink communication to $\approx 5d$ bits per iteration.
Offloading To Agents. For applications where the agents wish to computationally help the central learner, the central learner may simply aggregate information to keep track of $u_t, \tilde{V}_t$ and broadcast these values to the agents; the estimate $\hat\theta_t$ can then be calculated at each agent. Moving the computational load to the agents does not affect the regret order or the number of bits communicated on the uplink.
Remark 2. Under the regulatory assumptions in [17], the regret bound can be improved by a factor of $\sqrt{\log(K)/d}$, where $K = |A|$ is the number of actions. However, this does not improve the regret in the worst case, as the worst-case number of actions is $O(C^d)$, $C > 1$ [24].
Societal Impact. Results in this work can be used in decision-making systems, which, if used without care, can potentially lead to biased decisions against racial, gender, or other minority groups.
Acknowledgment. CF and OH are supported in part by NSF award 2007714, NSF award 2221871 and Army Research Laboratory grant under Cooperative Agreement W911NF-17-2-0196. LY is supported in part by DARPA grant HR00112190130, NSF Award 2221871.
Footnote 4: Multiplication by $\sqrt{d}$ can take $O(\log d)$ time. | 1. What is the focus of the paper regarding contextual linear bandits?
2. What are the strengths of the proposed algorithm, particularly in reducing the communication overhead?
3. What are the weaknesses or concerns regarding the motivation and formulation of the problem?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or issues with the terminology, discussion, or limitations of the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies contextual linear bandits with communication constraints, where it is desired to transmit as few number of bits as possible from the agent to the learner. The authors considered two different settings, i.e., the learner knows the distribution of contexts, and the learner doesn’t know the distribution of contexts, respectively. For the former case, the authors proposed an interesting reduction method that converts the original bandit problem with time-varying candidate set (multiple contexts) on the agent side to one with fixed candidate set (one context) on the learner side, so that there is no need to spend any bits on the contexts. For the latter case, the authors proposed an algorithm that quantizes the context with 5d bits via a stochastic quantizer. The authors proved that the proposed algorithms match the regret of standard contextual linear bandit algorithms.
Strengths And Weaknesses
This paper is well written and easy to follow for the most part. The proposed algorithm for the setting with known context distribution seems novel, and the technique may be useful for other related problems. Recently there is an increasing number of works in distributed bandit learning that tries to minimize the total number of communication rounds needed in the time horizon. In comparison, this work aims to minimize the bits required for each round, so it may help further improve communication efficiency in a parallel direction.
My main concern is about the motivation for the current formulation.
As mentioned in introduction, the agents are assumed to be transient and "they may not be interested in learning - this may not be a task that the agents wish to consistently perform - and thus do not wish to devote resources to it". However, in the algorithm design, these agents still need to be cooperative, in the sense that they are required to perform computations to compress and communicate the required messages to help the learner, e.g. line 6-13, even though it will not benefit them at all.
Maybe I am missing something here. Why the focus is only put on the compression of context, while it is okay to transfer the uncompressed \hat{\theta}? For both algorithms, the learner still needs to send the uncompressed \hat{\theta} to the agent, which has the same dimension as the context vector for an arm, which is also communication expensive.
Questions
I am not sure if distributed contextual linear bandits is a suitable title, as in this paper, learning only happens on the central learner, while the distributed agents are only responsible for pulling arms and compressing the required messages. This is very different from the existing works in distributed contextual bandits, where each agent usually maintains its own model estimate.
The terms used for the agents in this problem should be made more consistent. Currently, we have "central agent", "central learner", and "learner" all referring to the same thing.
I am also a bit confused about the discussion about privacy in line 278. It is true that the proposed algorithms avoid directly sending context vectors, but doesn't sending $\hat{\theta}$ also reveal private information, considering it will eventually converge to the true parameter $\theta_\star$?
Limitations
No |
NIPS | Title
Learning from Distributed Users in Contextual Linear Bandits Without Sharing the Context
Abstract
Contextual linear bandits is a rich and theoretically important model that has many practical applications. Recently, this setup gained a lot of interest in applications over wireless where communication constraints can be a performance bottleneck, especially when the contexts come from a large d-dimensional space. In this paper, we consider a distributed memoryless contextual linear bandit learning problem, where the agents who observe the contexts and take actions are geographically separated from the learner who performs the learning while not seeing the contexts. We assume that contexts are generated from a distribution and propose a method that uses ⇡ 5d bits per context for the case of unknown context distribution and 0 bits per context if the context distribution is known, while achieving nearly the same regret bound as if the contexts were directly observable. The former bound improves upon existing bounds by a log(T ) factor, where T is the length of the horizon, while the latter achieves information theoretical tightness.
1 Introduction
Contextual linear bandits offer a sequential decision-making framework that combines fundamental theoretical importance with significant practical popularity [8], as it offers a tractable way to capture side information (context), as well as a potentially infinite set of decisions (actions). The most prominent application is in recommendation systems [30], but it has also been used in applications such as virtual support agents [39], clinical trials [12], transportation systems [9], wireless optimization [26, 25], health [10], robotics [31] and online education [34].
In this paper, we develop algorithms that support the deployment of contextual linear bandits in distributed settings. In particular, we consider the case where a central learner wishes to solve a contextual linear bandit problem with the help of transient agents. That is, we assume that the agents do not keep memory of past actions and may not be present for the whole duration of learning; learning in our setup can happen thanks to the persistent presence of the central learner. We view the central learner as a “knowledge repository”, that accumulates knowledge from the experience of the transient agents and makes it available to next agents. The central learner, through the information it keeps, could help passing by devices decide how to perform an action, for example: passing by drones decide how to perform a manoeuver; agricultural robots decide what amounts of substances such as pesticids to release; and passing by mobile devices decide which local restaurants to recommend.
The main challenge we try to address in this paper is the efficient communication of the context the agents experience. More specifically, in our setup, each time an agent joins, she receives from the
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
central learner information on the system, such as current estimates of the system parameters; she observes her current context, selects and plays an action and collects the corresponding reward. Note that although the distributed agent knows her context, the action she plays and the observed reward, the central learner does not - and needs this information to update its estimate of the system parameters. The context in particular can be communication heavy - in the examples we mentioned before, for drones the context could be their navigation capabilities, physical attributes, and enviromental factors such as wind speed; for agricultural robots, it could be images that indicate state of plants and sensor measurements such as of soil consistency; for restaurant recommendations, it could be the personal dietary preferences and restrictions, budget, and emotional state. Moreover, because of geographical separation, the central learner may not have any other way to learn the context beyond communication. Unlike the reward, that is usually a single scalar value, the context can be a vector of a large dimension d from an infinite alphabet, and thus, communicating the context efficiently is heavily nontrivial.
The technical question we ask is, how many bits do we need to convey per context to solve the linear bandit problem without downgrading the performance as compared to the non-distributed setting?
In this paper, we design algorithms that support this goal. We note that our algorithms optimize the uplink communication (from the agents to the central learner), and assume unlimited (cost-free) downlink communication. This is a standard assumption in wireless [7, 33, 21] for several reasons: uplink wireless links tend to be much more bandwidth restricted, since several users may be sharing the same channel; uplink communication may also be battery-powered and thus more expensive to sustain; in our particular case, the agents may have less incentive to communicate (provide their feedback) than the central learner (who needs to learn). Having said that, we note that our algorithms (in Sections 3 and 4) make frugal use of the downlink channels, only using them to transmit system parameters.
Below we summarize our main contributions: 1. We show the surprising result that, if the central learner knows the distribution of the contexts, we do not need to communicate the context at all - the agent does not need to send any information on the actual context she observes and the action she plays. It is sufficient for the agent to just send 1 bit to convey quantized information on her observed reward and nothing else. But for this very limited communication, the central learner can learn a policy that achieves the same order of regret as if full information about the context and reward is received. This result holds for nearly all context distributions and it is the best we can hope for - zero bits of communication for the context. 2. If the central learner has no knowledge of the context distribution, we show that ⇡ 5d bits per context (where d is the context dimension) is sufficient to achieve the same order regret as knowing the context in full precision. Note that previous algorithms, that rely on constructing 1/T -net for the set of feature vectors, use O(d log T ) bits per context to achieve the same order regret, where T is the length of the horizon [24], and require time complexity of O(T d) which is exponential in d.
Related Work and Distinction. Contextual linear bandits is a rich and important model that has attracted significant interest both in theory and applications [8, 24]. Popular algorithms for this setup include LinUCB [1, 37] and cotextual Thompson sampling [2]. Under Assumption 1, these algorithms achieve a regret of Õ(d p T ), where d is the dimension of an unknown system parameter and T is the time horizon, while the best known lower bound for this setup is ⌦(d p T ) [37]. These algorithms assume perfect knowledge of the contexts and rewards. Within this space, our work focuses on operation under communications constraints in a distributed setting.
There is large body of work focusing on distributed linear contextual bandits settings, but mainly within the framework of federated learning, where batched algorithms have been proposed for communication efficiency [43, 41, 6, 5, 23] that aggregate together observations and parameter learning across a large number of iterations. This is possible because in federated learning, the agents themselves wish to learn the system parameters, remain active playing multiple actions throughout the learning process, and exchange information with the goal of speeding up their learning [43, 41]. In contrast, in our setup batched algorithms cannot reduce the communication cost because each agent only plays a single action; this may be because agents are transient, but also because they may not be interested in learning - this may not be a task that the agents wish to consistently perform - and thus do not wish to devote resources to it. For example, an agent may wish to try a restaurant in a special occasion, but would not be interested in sampling multiple restaurants/learning recommendation system parameters. In other words, we consider a scenario where the user benefits from receiving an action (or policy) from the central learner, e.g., a recommendation. In response, the user gives
feedback to the central learner in terms of (compressed) context/reward. The compression operations benefit the user by helping reduce her communication cost. In principle, the user is not required to respond. But the central learner will be able to learn whenever there is a feedback; creating an incentive for the user to respond could be an interesting future topic. Our setup supports a different (and complementary) set of applications than the federated learning framework, and requires a new set of algorithms that operate without requiring agents to keep memory of past actions.1
There is a long line of research on compression for machine learning and distributed optimization, e.g., compression for distributed gradient descent [40, 3, 32, 18], and distributed inference [19]. However, such schemes are not optimized for active learning applications. Our compression schemes can be seen as quantization schemes for contexts and rewards tailored to active learning applications.
Our work also differs from traditional vector compression schemes [15] that aim to reconstruct the data potentially with some distortion (achieve rate-distortion trade-offs). In our case, we do not aim to reconstruct the data, but instead to distinguish the best arm for each context. Indeed, using 0 bits, as we do in Section 3, we cannot reconstruct a meaningful estimate of the context.
To the best of our knowledge, our framework has not been examined before for linear contextual bandits. Work in the literature has examined compression for distributed memoryless MABs [21], but only for rewards (scalar values) and not the contexts (large vectors), and thus these techniques also do not extend to our case.
Paper organization. Section 2 reviews our notation and problem formulation; Section 3 provides and analyzes our algorithm for known and Section 4 for unknown context distributions.
2 Notation and Problem Formulation
Notation. We use the following notation throughout the paper. For a vector X we use X_i or (X)_i to denote the i-th element of the vector X; similarly, for a matrix V, we use V_{ij} or (V)_{ij} to denote the element at row i and column j. We use ‖V‖₂ to denote the matrix spectral norm. For a function f, we denote its domain and range by dom(f) and ran(f), respectively. When dom(f) ⊆ R, we use f(X) for a vector X ∈ R^d to denote f(X) := [f(X_1), ..., f(X_d)], i.e., the function f is applied element-wise; for example, we use X² to denote the element-wise square of X. We denote the inverse of a function f by f⁻¹; if f is not one-to-one, with abuse of notation we use f⁻¹ to denote a function that satisfies f(f⁻¹(x)) = x for all x ∈ ran(f) (this is justified by the axiom of choice [22]). For a matrix V, we use V⁻¹ to denote its inverse; if V is singular, we use V⁻¹ to denote its pseudo-inverse. We use [N] for N ∈ ℕ to denote {1, ..., N}, and {X_a}_{a∈A} to denote the set {(a, X_a) | a ∈ A}. We say that y = O(f(x)) if there exist x_0 and a constant C such that y ≤ C f(x) for all x > x_0; we also use Õ(f(x)) to omit log factors.
Contextual Linear Bandits. We consider a contextual linear bandits problem over a horizon of length T [8], where at each iteration t = 1, ..., T, an agent, taking into account the context, chooses an action a_t ∈ A and receives a reward r_t. For each action a ∈ A, the agent has access to a corresponding feature vector X_{t,a} ∈ R^d. The set of all such vectors {X_{t,a}}_{a∈A} is the context at time t, and the agent can use it to decide which action a_t to play. We assume that the context is generated from a distribution, i.e., given a, X_{t,a} is generated from a distribution P_a. As a specific example, we could have that a ∈ R^d and X_{t,a} is generated from a Gaussian distribution with zero mean and covariance matrix ‖a‖₂ I, where I is the identity matrix, i.e., P_a = N(0, ‖a‖₂ I). The selection of a_t may depend not only on the current context {X_{t,a}}_{a∈A} but also on the history H_t ≜ {{X_{1,a}}_{a∈A}, a_1, r_1, ..., {X_{t−1,a}}_{a∈A}, a_{t−1}, r_{t−1}}, namely, all previously selected actions, observed contexts and rewards. Once an action is selected, the reward is generated according to

r_t = ⟨X_{t,a_t}, θ⋆⟩ + η_t,    (1)

where ⟨·,·⟩ denotes the dot product, θ⋆ is an unknown (but fixed) parameter vector in R^d, and η_t is noise. We assume that the noise follows an unknown distribution with E[η_t | F_t] = 0 and E[exp(λ η_t) | F_t] ≤ exp(λ²/2) for all λ ∈ R, where F_t = σ({X_{1,a}}_{a∈A}, a_1, r_1, ..., {X_{t,a}}_{a∈A}, a_t) is the filtration [13] of historic information up to time t, and σ(X) is the σ-algebra generated by X [13].
1Our techniques could be adapted to additionally improve the communication efficiency of batched algorithms, but this is not the focus of our work.
The objective is to minimize the regret R_T over a horizon of length T, where

R_T = Σ_{t=1}^{T} [ max_{a∈A} ⟨X_{t,a}, θ⋆⟩ − ⟨X_{t,a_t}, θ⋆⟩ ].    (2)

The best performing algorithms for this problem, such as LinUCB and contextual Thompson sampling, achieve a worst case regret of Õ(d√T) [29, 28, 1, 2]. The best known lower bound is Ω(d√T) [37].
In the rest of this paper, we make the following assumptions that are standard in the literature [24].
Assumption 1. We consider contextual linear bandits that satisfy: (1.) ‖X_{t,a}‖₂ ≤ 1, ∀t ∈ [T], a ∈ A. (2.) ‖θ⋆‖₂ ≤ 1. (3.) r_t ∈ [0, 1], ∀t ∈ [T].
The boundedness assumption on rt can be relaxed using [21], which only requires approximately 3.5 bits on average to send rt, even if it is unbounded.
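To make the formulation above concrete, the following short Python sketch (our own illustration; the arm set, noise level, and greedy ridge-regression policy are arbitrary choices, not part of the paper) generates contexts, produces rewards according to (1), and tracks the regret (2).

```python
import numpy as np

class ToyContextualEnv:
    """Toy instance of (1)-(2): contexts drawn i.i.d. Gaussian, reward <X_{t,a_t}, theta*> + noise.
    Norms only approximately satisfy Assumption 1; this is just to illustrate the bookkeeping."""
    def __init__(self, d=5, n_arms=10, noise=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.d, self.n_arms, self.noise = d, n_arms, noise
        theta = self.rng.normal(size=d)
        self.theta_star = theta / np.linalg.norm(theta)          # ||theta*||_2 = 1

    def step(self, policy_theta):
        X = self.rng.normal(size=(self.n_arms, self.d)) / np.sqrt(self.d)   # contexts {X_{t,a}}
        a = int(np.argmax(X @ policy_theta))                      # agent plays the greedy action
        reward = float(X[a] @ self.theta_star + self.noise * self.rng.normal())
        regret = float(np.max(X @ self.theta_star) - X[a] @ self.theta_star)
        return X[a], reward, regret

env = ToyContextualEnv()
V, b, total_regret = np.eye(env.d), np.zeros(env.d), 0.0
for t in range(2000):
    x, r, inst = env.step(np.linalg.solve(V, b))   # greedy w.r.t. a ridge estimate of theta*
    V, b, total_regret = V + np.outer(x, x), b + r * x, total_regret + inst
print("cumulative regret after 2000 steps:", round(total_regret, 2))
```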
Memoryless Distributed Contextual Linear Bandits. We consider a distributed setting that consists of a central learner communicating with geographically separated agents. For example, the agents are drones that interact with a traffic policeman (central learner) as they fly by. We assume that the agents do not keep memory of past actions and may not be present for the whole duration of learning; learning in our setup can happen thanks to the persistent presence of the central learner.
At each time t, t = 1 . . . T, a distributed agent joins the system; she receives from the central learner information on the system, such as the current estimate of the parameter vector θ⋆ or the history H_t; she observes the current context {X_{t,a}}_{a∈A}, selects and plays an action a_t and collects the corresponding reward r_t. Note that although the distributed agent knows the context {X_{t,a}}_{a∈A}, the action a_t and the observed reward r_t, the central learner does not. The central learner may need this information to update its estimate of the system parameters, such as the unknown parameter vector θ⋆, and the history H_{t+1}. However, we assume that the agent is restricted to utilize a communication-constrained channel and thus may not be able to send the full information to the central learner.
The main question we ask in this paper is: can we design a compression scheme, where the agent sends to the central learner only one message using Bt bits (for as small as possible a value of Bt) that enables the central learner to learn equally well (experience the same order of regret) as if there were no communication constraints? With no communication constraints the agent could send unquantized the full information {{Xt,a}a2A, at, rt}. Instead, the agent transmits a message that could be a function of all locally available information at the agent. For example, it could be a function of (Ht, {Xt,at}a2A, at, rt), if the agent had received Ht from the central learner. It could also be a function of just (Xt,at , rt), which could be sufficient if the central learner employs an algorithm such as LinUCB [1, 37]. In summary, we set the following goal.
Goal. Design contextual linear bandit schemes for the memoryless distributed setting that achieve the best known regret of O(d√(T log(T))), while communicating a small number of bits B_t.
We only impose communication constraints on the uplink communication (from the agents to the central learner) and assume no-cost downlink communication (see discussion in Section 1).
Stochastic Quantizer (SQ) [16]. Our proposed algorithms use stochastic quantization, which we next review. We define SQ_ℓ, ℓ ∈ ℕ, to be a quantizer that uses log(ℓ+1) bits, consisting of an encoder and a decoder described as follows. The encoder ξ_ℓ takes a value x ∈ [0, ℓ] and outputs an integer value

ξ_ℓ(x) = ⌊x⌋ with probability ⌈x⌉ − x,  and  ⌈x⌉ with probability x − ⌊x⌋.    (3)

The output ξ_ℓ(x) is represented with log(ℓ + 1) bits. The decoder D_ℓ takes as input the binary representation of ξ_ℓ(x) and outputs the real value ξ_ℓ(x). The composition of the encoder ξ_ℓ, the binary mapping, and the decoder D_ℓ is denoted by SQ_ℓ. We notice that since the decoder only inverts the binary mapping operation, we have that SQ_ℓ = ξ_ℓ. When SQ_ℓ is applied at the agent's side, the agent encodes its data x as ξ_ℓ(x), then sends the corresponding binary mapping to the central learner, which applies D_ℓ to get SQ_ℓ(x). With a slight abuse of notation, this operation is described in the paper by saying that the agent sends SQ_ℓ(x) to the central learner.
The quantizer SQ_ℓ is a form of dithering [16] and it has the following properties:

E[SQ_ℓ(x) | x] = ⌊x⌋(⌈x⌉ − x) + ⌈x⌉(x − ⌊x⌋) = x(⌈x⌉ − ⌊x⌋) = x,  and  |SQ_ℓ(x) − x| ≤ 1.

In particular, it conveys an unbiased estimate of the input with a difference that is bounded by 1 almost surely. We also define a generalization of SQ_ℓ, denoted by SQ_ℓ^{[a,b]}, where the input x of the encoder is in [a, b] instead of [0, ℓ]. The encoder first shifts and scales x using x̃ = (ℓ/(b − a))(x − a) to make it lie in [0, ℓ], then feeds x̃ to the encoder in (3). This operation is inverted at the decoder. It is easy to see that SQ_ℓ^{[a,b]} satisfies

E[SQ_ℓ^{[a,b]}(x) | x] = x,  and  |SQ_ℓ^{[a,b]}(x) − x| ≤ (b − a)/ℓ.
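A minimal Python sketch of the stochastic quantizer described above may help make the encoder/decoder pair concrete; the function names and the use of NumPy are our own choices, not part of the paper.

```python
import numpy as np

def sq_encode(x, ell, rng=np.random.default_rng()):
    """Stochastic quantizer SQ_ell: x in [0, ell] -> integer in {0, ..., ell}."""
    lo = np.floor(x)
    # round up with probability (x - floor(x)), round down otherwise (dithering)
    up = rng.random(np.shape(x)) < (x - lo)
    return (lo + up).astype(int)

def sq_range(x, ell, a, b, rng=np.random.default_rng()):
    """SQ_ell^{[a,b]}: shift/scale x in [a, b] to [0, ell], quantize, and invert the scaling."""
    x_tilde = (ell / (b - a)) * (np.asarray(x, dtype=float) - a)
    q = sq_encode(x_tilde, ell, rng)
    return a + (b - a) * q / ell   # decoder output; unbiased, error <= (b - a) / ell

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = 0.37
    samples = [sq_encode(x, 1, rng) for _ in range(100000)]   # SQ_1: one bit per value
    print("mean of SQ_1(0.37):", np.mean(samples))            # close to 0.37 (unbiased)
    print("SQ_4^[-1,1](0.2):", sq_range(0.2, 4, -1.0, 1.0, rng))
```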
3 Contextual Linear Bandits with Known Context Distribution
In this section, we show that if the central learner knows the distributions for the vectors X_{t,a}, then the agent does not need to convey the specific realization of the vector X_{t,a} she observes at all - it is sufficient to just send 1 bit to convey some information on the observed reward and nothing else. But for this very limited communication, the central learner can experience the same order regret as when receiving in full precision all the information that the agents have, namely, R_T = O(d√(T log T)). Algorithm 1, which we describe in this section, provides a method to achieve this. Algorithm 1 is clearly optimal, as we cannot hope to use less than zero bits for the vector X_{t,a}. Remark 1. Knowledge of the distribution of X_{t,a} is possible in practice, since many times the context may be capturing well studied statistics (e.g., male or female, age, weight, income, race, dietary restrictions, emotional state, etc.) - the advent of large data has made and will continue to make such distributions available. Similarly, actions may be finite (e.g., restaurants to visit) or well described (e.g., released amounts of substances), and thus the distribution of X_{t,a} could be derived. When the distribution is approximately known, we provide later in this section a bound on the misspecification performance penalty in terms of regret.
Main Idea. The intuition behind Algorithm 1 is that it reduces the multi-context linear bandit problem to a single context problem. In particular, it calls as a subroutine an algorithm we term Λ, which serves as a placeholder for any current (or future) bandit algorithm that achieves regret O(d√(T log T)) for the case of a single context (for example, LinUCB [1, 37]). The central learner uses Λ to convey to the agents the information they need to select a good action. Our aim is to parametrize the single context problem appropriately, so that by solving it we also solve our original problem.
Recall that in a single context problem, at each iteration t, any standard linear bandit algorithm Λ selects a feature vector (an action) x_t from a set of allowable actions X, and observes a reward

r_t = ⟨x_t, θ'⋆⟩ + η_t,    (4)

where θ'⋆ is an unknown parameter and η_t is noise that satisfies the same assumptions as in (1). The objective of Λ is to minimize the standard linear regret R_T(Λ) over a horizon of length T, namely

R_T(Λ) = Σ_{t=1}^{T} [ max_{x∈X} ⟨x, θ'⋆⟩ − ⟨x_t, θ'⋆⟩ ].    (5)

Our reduction proceeds as follows. We assume that Λ operates over the same horizon of length T and is parametrized by an unknown parameter θ'⋆. We will design the action set X that we provide to Λ using our knowledge of the distributions P_a (see footnote 2), as we will describe later in (7). During each iteration, the central learner asks Λ to select an action x_t ∈ X and then provides to Λ a reward for this action (our design ensures that this reward satisfies (4) with θ'⋆ = θ⋆). Λ operates with this information, oblivious to what else the central learner does. Yet, the action x_t is never actually played: the central learner uses the selected action x_t to create an updated estimate of the parameter vector θ̂_t, as we will describe later, and only sends this parameter vector estimate to the distributed agent. The agent observes her context, selects what action to play, and sends back her observed quantized reward to the central learner. This is the reward that the central learner provides to Λ. We design the set X and the agent operation so that: (4) holds; and R_T − R_T(Λ) is small, where R_T is the regret for our original multi-context problem and R_T(Λ) the regret of Λ. We next try to provide some intuition on how we achieve this.
We first describe how we construct the set X. Let Θ be the set of all values that θ⋆ could possibly take. For each possible parameter vector value θ ∈ Θ, the central learner considers the quantity

X⋆(θ) = E_{x_a ∼ P_a, a∈A} [ argmax_{x ∈ {x_a : a∈A}} ⟨x, θ⟩ ]    (6)
2Recall that given a, Xt,a is generated from distribution Pa, see Section 2.
where x_a is the random variable that follows the distribution P_a. Ties in (6) can be broken uniformly at random. In fact, any pre-selected choice function would work as long as the same function is also used in step 12 of Algorithm 1. Note that the function X⋆ : R^d → R^d can be computed offline before the learning starts, see Example 1. We then use

X = {X⋆(θ) | θ ∈ Θ}.    (7)
Intuitively, for each value of θ, we optimistically assume that the distributed agent may select the best possible realization X_{t,a} for this θ (that has the expectation in (6)), and receive the associated reward; accordingly, we restrict the action space X of Λ to only contain the expectation of these “best” X_{t,a}. The vector x_t ∈ X may not actually be the vector corresponding to the action the agent selects; it is only used to convey to the agent an estimate of the unknown parameter θ̂_t that satisfies x_t = X⋆(θ̂_t). Although the central learner does not control which action the agent plays, this is influenced by θ̂_t; we show in App. A that X_{t,a_t} is an unbiased estimate of x_t, and the generated reward follows the linear model in (4) with θ'⋆ = θ⋆. In Theorem 1, we prove that

argmax_{x∈X} ⟨x, θ⋆⟩ = X⋆(θ⋆).    (8)

Hence, if Λ converges to selecting the best action for the single context problem, we will have that θ̂_t converges to θ⋆ if the maximizer in (8) is unique. If there are multiple values for θ with X⋆(θ) = X⋆(θ⋆), we show in the proof of Theorem 1 that they all lead to the same expected reward for the original multi-context problem.
Example 1. Consider the case where d = 1, A = {1, 2}, X_{t,a} ∈ {−1, 1} ∀a ∈ A, Θ = {−1, 1}, θ⋆ = 1, and X_{t,1} takes the value −1 with probability p and 1 otherwise, while X_{t,2} takes the value −1 with probability q and 1 otherwise. Then, we have that

argmax_{X_{t,a}} ⟨X_{t,a}, 1⟩ = { 1 with probability 1 − pq;  −1 with probability pq },    (9)

where we use the fact that if argmax_{X_{t,a}} ⟨X_{t,a}, 1⟩ ≠ 1, it must be the case that both X_{t,1} and X_{t,2} are −1. Thus, X⋆(1) = E[argmax_{X_{t,a}} ⟨X_{t,a}, 1⟩] = 1 − 2pq, and similarly X⋆(−1) = −1 + 2(1 − p)(1 − q), and hence, X = {1 − 2pq, −1 + 2(1 − p)(1 − q)}. If Λ decides to pick x_t = 1 − 2pq, we have that θ̂_t = 1, otherwise θ̂_t = −1. This estimate θ̂_t is then conveyed to the agent to help her pick the action.
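As a sanity check on Example 1, the closed-form values of X⋆(θ) can be compared against a Monte Carlo estimate of the expectation in (6); this small Python sketch (our own illustration, not from the paper) does exactly that.

```python
import numpy as np

def x_star_example1(theta, p, q, n_samples=200000, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of X*(theta) = E[argmax_x <x, theta>] for Example 1 (d = 1)."""
    # X_{t,1} = -1 w.p. p (else +1); X_{t,2} = -1 w.p. q (else +1)
    x1 = np.where(rng.random(n_samples) < p, -1.0, 1.0)
    x2 = np.where(rng.random(n_samples) < q, -1.0, 1.0)
    best = np.where(theta * x1 >= theta * x2, x1, x2)  # realization maximizing <x, theta>
    return best.mean()

p, q = 0.3, 0.6
print("X*(1):  MC =", round(x_star_example1(+1, p, q), 3), " closed form =", 1 - 2 * p * q)
print("X*(-1): MC =", round(x_star_example1(-1, p, q), 3), " closed form =", -1 + 2 * (1 - p) * (1 - q))
```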
Algorithm Operation. The pseudo-code is provided in Algorithm 1.
• First, the central learner calculates the function

X⋆(θ) = E_{x_a ∼ P_a, a∈A} [ argmax_{x ∈ {x_a : a∈A}} ⟨x, θ⟩ ],    (10)

and creates the action set X = {X⋆(θ) | θ ∈ Θ} that algorithm Λ is going to use.
• At each time t, based on past history, Λ decides on a next action x_t ∈ X. The central learner uses x_t to calculate the new update θ̂_t = X⁻¹(x_t), where X⁻¹ is the inverse of X⋆ (see Section 2).
• The agent receives θ̂_t from the central learner, observes her context, plays an action a_t = argmax_{a∈A} ⟨X_{t,a}, θ̂_t⟩, and observes the reward r_t. She then quantizes the reward using a stochastic quantizer SQ_1 (see Section 2), and communicates the outcome using one bit to the central learner.
• The central learner provides the quantized reward as input to Λ. Note that Λ is oblivious to what actions are actually played; it treats the received reward as corresponding to the action x_t it had decided.
The following theorem proves that Algorithm 1 achieves a regret R_T(Λ) + O(√(T log T)), where R_T(Λ) is the regret of Λ in (5). Hence, if Λ satisfies the best known regret bound of O(d√(T log T)), e.g., LinUCB, Algorithm 1 achieves a regret of O(d√(T log T)). The theorem holds under the mild set of assumptions that we stated in Section 2.
Theorem 1. Algorithm 1 uses 1 bit per reward and 0 bits per context. Under Assumption 1, it achieves a regret R_T = R_T(Λ) + O(√(T log T)) with probability at least 1 − 1/T.
Algorithm 1 Communication-efficient scheme for contextual linear bandits with known distribution
1: Input: an algorithm Λ for the one-context case, underlying set of actions X, and time horizon T.
2: Initialize: X⋆(θ) = E_{x_a ∼ P_a, a∈A}[argmax_{x ∈ {x_a : a∈A}} ⟨x, θ⟩], X = {X⋆(θ) | θ ∈ Θ}, r̂_0 = 0.
3: Let X⁻¹ be an inverse of X⋆.
4: for t = 1 : T do
5:   Central learner:
6:   Receive r̂_{t−1} and provide it to Λ.
7:   Λ, using the history (x_1, r̂_1, ..., x_{t−1}, r̂_{t−1}), selects x_t.
8:   Send θ̂_t = X⁻¹(x_t) to the agent.
9:   Agent:
10:  Receive θ̂_t from the central learner.
11:  Observe context realization {X_{t,a}}_{a∈A}.
12:  Pull arm a_t = argmax_{a∈A} ⟨X_{t,a}, θ̂_t⟩ and receive reward r_t.
13:  Send r̂_t = SQ_1(r_t) to the central learner using 1 bit.

Proof outline. The complete proof is available in App. A. We next provide a short outline. From the definition of X⋆ in (10), we notice the following. Recall that the distributed agent receives θ̂_t from the central learner, and pulls the best action for this θ̂_t, i.e., a_t = argmax_{a∈A} ⟨X_{t,a}, θ̂_t⟩. We show that conditioned on x_t, the associated vector X_{t,a_t} is an unbiased estimate of x_t with a small variance. Given this, we prove that r̂_t satisfies (4), and thus the rewards observed by Λ are generated according to a linear bandit model with unknown parameter that is the same as θ⋆.
We next decompose the difference R_T − R_T(Λ) into two terms: Σ_T = Σ_{t=1}^{T} [⟨argmax_{X_{t,a}} ⟨X_{t,a}, θ⋆⟩, θ⋆⟩ − ⟨x_t, θ⋆⟩] and Σ'_T = Σ_{t=1}^{T} [⟨argmax_{X_{t,a}} ⟨X_{t,a}, θ̂_t⟩, θ⋆⟩ − max_{x∈X} ⟨x, θ⋆⟩]. To bound the first term, we show that the unbiasedness property together with Assumption 1 implies that Σ_T is a martingale with bounded differences. This implies that |Σ_T| = O(√(T log T)) with high probability. To bound Σ'_T, we first show that argmax_{x∈X} ⟨x, θ⋆⟩ = X⋆(θ⋆) (we note that this is why the algorithm converges to a θ̂_t that is equal to, or results in the same expected reward as, θ⋆). Then, following a similar approach, we can show that Σ'_T is a martingale with bounded differences, which implies that |Σ'_T| = O(√(T log T)) with high probability. □
Downlink Communication. The downlink cost of our scheme is O(d) (see App. A for discussion).
Operation Complexity. The main complexity that our algorithm adds beyond the complexity of Λ is the computation of the function X⋆. The time complexity of X⋆(θ) depends on the context distribution. While computing X⋆(θ) can be computationally expensive in worst-case scenarios, it can be computed/approximated efficiently for many practical distributions, even in a closed form. We give the following examples:
• For d = 1, θ > 0, we have that X⋆(θ) is the expectation of the maximum of multiple random variables, i.e., X⋆(θ) = E_{x_a ∼ P_a}[max_{a∈A} x_a], which can be computed/approximated efficiently if the distributions P_a are given in a closed form.
• If {P_a}_{a∈A} are continuous distributions, then X⋆(θ), θ ≠ 0, can be expressed as

X⋆(θ) = Σ_{a∈A} ∫_{x_a ∼ P_a} x_a · E_{x_{a'} ∼ P_{a'}, a'∈A\{a}} [ 1[⟨x_{a'}, θ⟩ < ⟨x_a, θ⟩ ∀a' ≠ a] | x_a ] dP_a.    (11)

For many distributions, the previous expression can be computed/approximated efficiently. For instance, consider the case where d ≥ 1 and the x_a are independent, identically distributed d-dimensional Gaussian vectors with mean µ and covariance matrix Σ = UᵀDU, where D is a diagonal matrix and U is upper triangular. The expectation in (11) is equal to (Q(−⟨x_a − µ, θ⟩ / ‖√D U θ‖₂))^{|A|−1}, where Q(c) = (1/√(2π)) ∫_c^∞ exp(−x²/2) dx. Hence, X⋆(θ) can be approximated efficiently in that case (a Monte Carlo sketch of this computation is given right after this list).
• For discrete distributions, X⋆(θ) can be computed efficiently depending on the number of mass points of the distribution and on whether the distribution has structure/properties that simplify the expression.
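For distributions without a convenient closed form, X⋆(θ) can also be approximated by directly sampling the expectation in (6). The sketch below is our own illustration (the Gaussian means, covariance, and sample sizes are arbitrary), not part of the paper.

```python
import numpy as np

def estimate_x_star(theta, means, cov, n_samples=50000, rng=np.random.default_rng(2)):
    """Monte Carlo estimate of X*(theta) = E[argmax over arms of <x_a, theta>],
    where arm a has context x_a ~ N(means[a], cov)."""
    n_arms, d = means.shape
    # one context realization per arm per sample: shape (n_samples, n_arms, d)
    xs = np.stack([rng.multivariate_normal(means[a], cov, size=n_samples)
                   for a in range(n_arms)], axis=1)
    scores = xs @ theta                      # (n_samples, n_arms)
    winners = scores.argmax(axis=1)          # best arm per sample
    return xs[np.arange(n_samples), winners].mean(axis=0)   # average winning context

rng = np.random.default_rng(2)
d, n_arms = 3, 4
means = 0.2 * rng.normal(size=(n_arms, d))
cov = 0.05 * np.eye(d)
theta = np.array([1.0, 0.0, -0.5])
print("X*(theta) ~", estimate_x_star(theta, means, cov))
```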
Imperfect Knowledge of Distributions. Since we only use the distributions to calculate X⋆, imperfect knowledge of the distributions only affects us to the degree that it affects the calculation of X⋆. Suppose that we have an estimate X̃⋆ of X⋆ that satisfies

sup_{θ∈Θ} ‖X⋆(θ) − X̃⋆(θ)‖₂ ≤ ε.    (12)
Using Theorem 1 we prove in App. A the following corollary.
Corollary 1. Suppose we are given X̃⋆ that satisfies (12). Then, there exists an algorithm Λ for which Algorithm 1 achieves R_T = Õ(d√T + εT√d) with probability at least 1 − 1/T.
Privacy. Our result may be useful for applications beyond communication efficiency; indeed, the context may contain private information (e.g., personal preferences, financial information, etc.); using our algorithm makes it possible not to share this private information with the central learner at all, without impeding the learning process. Surprisingly, work in [48], motivated by privacy considerations, has shown that if an agent adds a small amount of zero-mean noise to the true context before sending it to the central learner, this can severely affect the regret in some cases - and yet our algorithm essentially enables the learner to “guess” the context with no regret penalty if the distributions are known. Although adding zero-mean noise to the observed feature vector conveys an unbiased estimate of the observation, the difference between this and our case is technical and mainly due to the fact that the unbiasedness is required to hold conditioned on the central learner's observation (the noisy context).
Note that we do not make formal privacy claims in this paper, but simply observe that our approach could potentially be leveraged for privacy purposes. It is true that the reward can reveal some information about the context, e.g., if all the actions result in a small reward for one context and a large reward for another context. However, privatizing the reward (which implies a private context in our case) is much easier than privatizing the context, and there are many proposed optimal algorithms with little to no regret loss, e.g., see [20, 38, 44, 35]. This is not the case when privatizing the context. In fact, it was shown in [42] that privatizing the context can lead to linear regret, and relaxed definitions of privacy have been proposed to avoid this.
4 Contextual Linear Bandits with Unknown Context Distribution
We now consider the case where the learner does not know the context distributions, and thus Algorithm 1, which uses zero bits for the context, cannot be applied. In this case, related literature conjectures a lower bound of Ω(d) [46, 47] – which is discouraging since it is probably impossible to establish an algorithm with communication logarithmically depending on d. Additionally, in practice we use 32d bits to convey full-precision values - thus this conjecture indicates that in practice we may not be able to achieve order improvements in terms of bits communicated, without performance loss.
In this section, we provide Algorithm 2 that uses ≈ 5d bits per context and achieves (optimal) regret R_T = O(d√(T log T)). We believe Algorithm 2 is interesting for two reasons: 1. In theory, we need an infinite number of bits to convey full-precision values - we prove that a constant number of bits per dimension per context is sufficient. Previously best-known algorithms, which rely on constructing a 1/T-net for the set of feature vectors, use O(d log T) bits per context, which goes to infinity as T goes to infinity. Moreover, these algorithms require exponential complexity [24] while ours is computationally efficient. 2. In practice, especially for large values of d, reducing the number of bits conveyed from 32d to ≈ 5d is quite significant - this is a reduction by a factor of six, which implies six times less communication.
Main Idea. The intuition behind Algorithm 2 is the following. The central learner is going to use an estimate of the d × d least-squares matrix V_t = Σ_{i=1}^{t} X_{i,a_i} X_{i,a_i}ᵀ to update her estimates for the parameter vector θ⋆. Thus, when quantizing the vector X_{t,a}, we want to make sure that not only this vector is conveyed with sufficient accuracy, but also that the central learner can calculate the matrix V_t accurately. In particular, we would like the central learner to be able to calculate an unbiased estimator for each entry of X_{t,a} and each entry of the matrix V_t. Our algorithm achieves this by quantizing the feature vectors X_{t,a_t}, and also the diagonal (only the diagonal) entries of the least squares matrix V_t. We prove that by doing so, with only ≈ 5d bits we can provide an unbiased estimate and guarantee an O(1/√d) quantization error for each entry in the matrix almost surely.
Quantization Scheme. We here describe the proposed quantization scheme.
• To quantize X_{t,a_t}: Let m ≜ ⌈√d⌉. We first send the sign of each coordinate of X_{t,a_t} using d bits, namely, we send the vector s_t = X_{t,a_t}/|X_{t,a_t}|. To quantize the magnitude |X_{t,a_t}|, we scale each coordinate of |X_{t,a_t}| by m and quantize it using a Stochastic Quantizer (SQ) (see footnote 3) with m + 1 levels in the interval [0, m]. Let X_t ≜ SQ_m(m|X_{t,a_t}|) denote the resulting SQ outputs; we note that X_t takes non-negative integer values and lies in a norm-1 ball of radius 2d (this holds since the original vector lies in a norm-2 ball of radius 1 and the error in each coordinate is at most 1/m). That is, it holds that X_t ∈ Q = {x ∈ ℕ^d | ‖x‖₁ ≤ 2d}. We then use any enumeration h : Q → [|Q|] of this set to encode X_t using log(|Q|) bits.
• To quantize X_{t,a_t} X_{t,a_t}ᵀ: Let X²_{t,a_t} denote a vector that collects the diagonal entries of X_{t,a_t} X_{t,a_t}ᵀ. Let X̂_t ≜ s_t X_t / m be the estimate of X_{t,a_t} that the central learner retrieves. Note that X̂²_t is not an unbiased estimate of X²_{t,a_t}; however, |(X²_{t,a_t} − X̂²_t)_i| ≤ 3/m for all coordinates i (proved in App. B). Our scheme simply conveys the difference X²_{t,a_t} − X̂²_t with 1 bit per coordinate using an SQ_1^{[−3/m, 3/m]} quantizer.
³As described in (3) in Section 2, SQ maps a value x to an integer value, namely ⌊x⌋ with probability ⌈x⌉ − x and ⌈x⌉ with probability x − ⌊x⌋.
The central learner and distributed agent operations are presented in Algorithm 2.
Example 2. Consider the case where d = 5. Then each coordinate of |X_{t,a_t}| is scaled by 3 and quantized using SQ_3 to one of the values 0, 1, 2, 3 to get X_t. The function h then maps the values for X_t that satisfy ‖X_t‖₁ ≤ 10 to a unique value (a code) in the set [|Q|]. For instance, the value 3·1 is not given a code, where 1 is the vector of all ones. However, note that for |X_{t,a_t}| to be mapped to 3·1, we must have 3|(X_{t,a_t})_i| ≥ 2 for all coordinates i, which cannot happen since it implies that ‖X_{t,a_t}‖₂ ≥ 2√5/3 > 1, which contradicts Assumption 1.
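To make the bit counts in Example 2 concrete, one can enumerate the set Q = {x ∈ ℕ^d : ‖x‖₁ ≤ 2d} by brute force for small d and check the resulting message length; the short Python sketch below is our own illustration, not part of the paper.

```python
import itertools
import math

def build_codebook(d):
    """Enumerate Q = {x in N^d : ||x||_1 <= 2d} and assign each vector a code h(x)."""
    budget = 2 * d
    members = [x for x in itertools.product(range(budget + 1), repeat=d) if sum(x) <= budget]
    return {x: code for code, x in enumerate(members)}

d = 5
h = build_codebook(d)
bits_per_context = math.log2(len(h)) + d + d   # code for X_t, d sign bits, d bits for e_t^2
print("|Q| =", len(h))
print("bits per context ~", round(bits_per_context, 2), "( ~", round(bits_per_context / d, 2), "d )")
print("(3,3,3,3,3) in Q?", (3, 3, 3, 3, 3) in h)   # 3*1 gets no code, as argued in Example 2
```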
Algorithm 2 Communication-efficient scheme for contextual linear bandits with unknown distribution
1: Input: underlying set of actions A, and time horizon T.
2: θ̂_0 = 0, Ṽ_0 = 0, u_0 = 0, m = ⌈√d⌉.
3: Let h be an enumeration of the set Q = {x ∈ ℕ^d | ‖x‖₁ ≤ 2d}.
4: for t = 1 : T do
5:   Agent:
6:   Receive θ̂_{t−1} from the central learner.
7:   Observe context realization {X_{t,a}}_{a∈A}.
8:   Pull arm a_t = argmax_{a∈A} ⟨X_{t,a}, θ̂_{t−1}⟩ and receive reward r_t.
9:   Compute the signs s_t = X_{t,a_t}/|X_{t,a_t}| of X_{t,a_t}.
10:  Let X_t = SQ_m(m|X_{t,a_t}|).
11:  e²_t = SQ_1^{[−3/m, 3/m]}(X²_{t,a_t} − X̂²_t), where X̂_t = s_t X_t / m.
12:  Send to the central learner h(X_t), s_t, and e²_t using log₂(|Q|), d, and d bits, respectively.
13:  Send r̂_t = SQ_1(r_t) using 1 bit.
14:  Central learner:
15:  Receive X_t, s_t, e²_t, and r̂_t from the distributed agent.
16:  X̂_t = s_t X_t / m,  X̂_t^{(D)} = X̂²_t + e²_t.
17:  u_t ← u_{t−1} + r̂_t X̂_t.
18:  Ṽ_t ← Ṽ_{t−1} + X̂_t X̂_tᵀ − diag(X̂_t X̂_tᵀ) + diag(X̂_t^{(D)}).
19:  θ̂_t ← Ṽ_t⁻¹ u_t.
20:  Send θ̂_t to the next agent.
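The following Python sketch mirrors the agent-side quantization (steps 9-13) and the central learner's update (steps 15-19) of Algorithm 2. It is our own illustration of the scheme, reusing the stochastic quantizer sketched in Section 2, and is not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq(x, ell):
    """Elementwise stochastic quantizer SQ_ell on values in [0, ell]."""
    lo = np.floor(x)
    return lo + (rng.random(x.shape) < (x - lo))

def agent_encode(x, d):
    """Agent side: encode the played context x = X_{t,a_t} (signs, SQ_m magnitudes, 1-bit square fix)."""
    m = int(np.ceil(np.sqrt(d)))
    s = np.where(x >= 0, 1.0, -1.0)                          # d sign bits
    X_q = sq(m * np.abs(x), m)                               # integer vector in Q, sent as h(X_q)
    x_hat = s * X_q / m
    diff = x ** 2 - x_hat ** 2                               # lies in [-3/m, 3/m]
    e2 = -3 / m + (6 / m) * sq((diff + 3 / m) * m / 6, 1)    # SQ_1^{[-3/m, 3/m]}, 1 bit per coord
    return s, X_q, e2

def learner_update(u, V_tilde, s, X_q, e2, r_hat, d):
    """Central learner side: rebuild the estimates and update u_t, V_tilde_t, theta_hat_t."""
    m = int(np.ceil(np.sqrt(d)))
    x_hat = s * X_q / m
    diag_est = x_hat ** 2 + e2                               # unbiased estimate of diag(x x^T)
    u = u + r_hat * x_hat
    V_tilde = V_tilde + np.outer(x_hat, x_hat) - np.diag(x_hat ** 2) + np.diag(diag_est)
    theta_hat = np.linalg.pinv(V_tilde) @ u                  # pseudo-inverse while V_tilde is singular
    return u, V_tilde, theta_hat

d = 5
x = rng.normal(size=d); x = x / (2 * np.linalg.norm(x))      # a context with ||x||_2 <= 1
s, X_q, e2 = agent_encode(x, d)
u, V, theta_hat = learner_update(np.zeros(d), np.zeros((d, d)), s, X_q, e2, r_hat=1.0, d=d)
print("theta_hat:", np.round(theta_hat, 3))
```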
Algorithm Performance. Theorem 2, stated next, holds under Assumption 1 in Section 2 and some additional regularity assumptions on the distributions P_a provided in Assumption 2.
Assumption 2. There exist constants c, c' such that for any sequence θ_1, ..., θ_T, where θ_t depends only on H_t, with probability at least 1 − c'/T it holds that

Σ_{i=1}^{t} X_{i,a_i} X_{i,a_i}ᵀ ⪰ (ct/d) I   ∀t ∈ [T],    (13)

where a_t = argmax_{a∈A} ⟨X_{t,a}, θ_t⟩, and I is the identity matrix.
We note that several common assumptions in the literature imply (13), for example, bounded eigenvalues for the covariance matrix of X_{t,a_t} [11, 27, 17]. Such assumptions hold for a wide range of distributions, including subgaussian distributions with bounded density [36].
Challenge in relaxing Assumption 2 (diversity assumption). The main challenge in relaxing the diversity assumption for LinUCB (or Thompson sampling) based algorithms is that the regret of those algorithms is bounded as Õ(√T ‖θ̂_T − θ⋆‖_{V_T}). Without quantization, the quantity ‖θ̂_T − θ⋆‖_{V_T} grows slowly and is nearly a constant; however, without the diversity assumption, the quantization error can make ‖θ̂_T − θ⋆‖²_{V_T} grow as √T in the worst case. This is due to the fact that sub-optimal arms do not have a large number of pulls; hence, we do not have a good estimate of θ⋆ in those directions; on the other hand, the quantization errors in estimating V_T are accumulated in all directions. As a result, the regret bound increases by a factor of T^{1/4}. We leave it as future work to either relax the diversity assumption (which is required in our paper only in the case of unknown context distribution) or else show that removing it will unavoidably increase the regret order.
Theorem 2. Algorithm 2 satisfies that for all t: X_t ∈ Q; and B_t ≤ 1 + log₂(2d + 1) + 5.03d bits. Under Assumptions 1 and 2, it achieves a regret R_T = O(d√(T log T)) with probability at least 1 − 1/T.
Proof Outline. To bound the number of bits B_t, we first bound the size of Q by formulating a standard counting problem: we find the number of non-negative integer solutions of a linear equation. To bound the regret R_T, we start by proving that our quantization scheme guarantees some desirable properties, namely, unbiasedness and O(1/√d) quantization error for each vector coordinate. We then upper bound the regret in terms of ‖θ̂_t − θ⋆‖₂ and show that this difference can be decomposed as

‖θ̂_t − θ⋆‖₂ = ‖V_t⁻¹‖₂ ( ‖Σ_{i=1}^{t} E_i‖₂ + (1 + |η_i|) ‖Σ_{i=1}^{t} e_i‖₂ + ‖Σ_{i=1}^{t} η'_i X_{i,a_i}‖₂ ),    (14)

where E_t captures the error in estimating the matrix X_{t,a_t} X_{t,a_t}ᵀ, e_t is the error in estimating X_{t,a_t}, and η'_t is a noise that satisfies the same properties as η_t. Using Assumption 2, we prove that ‖V_t⁻¹‖₂ grows as O(d/t) with high probability, and from the unbiasedness and boundedness of all error quantities we show that they grow as O(√(t log t)) with high probability. This implies that ‖θ̂_t − θ⋆‖₂ = O(d√(log t / t)), and hence, R_T = O(d√(T log T)). The complete proof is provided in App. B. □
Algorithm Complexity. If we do not count the quantization operations, it is easy to see that the complexity of the rest of the algorithm is dominated by the complexity of computing V_t⁻¹, which can be done in O(d^{2.373}) [4]. For the quantization, we note that each coordinate of X_t can be computed in Õ(1) time (see footnote 4). Moreover, the computation of h(x) for x ∈ Q can be done in constant time with high probability using hash tables, where h is the enumeration function in Step 3. Hence, the added computational complexity is almost linear in d. Although a hash table for h can consume Ω(2^{5d}) memory, by sacrificing a constant factor in the number of bits, we can find enumeration functions that can be stored efficiently (a simple table-free ranking is sketched after Remark 2). As an example, consider the scheme in [14] that can find a one-to-one function h : Q → ℕ⁺ which can be stored and computed efficiently, but only gives guarantees in expectation that E[log(h(x))] = O(d) for all x ∈ Q.
Downlink Communication Cost. Although we assume no-cost downlink communication, as was also the case for Algorithm 1, the downlink in Algorithm 2 is only used to send the updated parameter vector θ̂_t to the agents. If desired, these estimates can be quantized using the same method as for X_{t,a_t}, which (following a similar proof to that of Theorem 2) can be shown to not affect the order of the regret while reducing the downlink communication to ≈ 5d bits per iteration.
Offloading To Agents. For applications where the agents wish to computationally help the central learner, the central learner may simply aggregate information to keep track of u_t, Ṽ_t and broadcast these values to the agents; the estimate θ̂_t can then be calculated at each agent. Moving the computational load to the agents does not affect the regret order or the number of bits communicated on the uplink.
Remark 2. Under the regularity assumptions in [17], the regret bound can be improved by a factor of √(log(K)/d), where K = |A| is the number of actions. However, this does not improve the regret in the worst case, as the worst case number of actions is O(C^d), C > 1 [24].
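The Algorithm Complexity discussion above notes that storing a full hash table for the enumeration h can be memory-heavy. A simple table-free alternative (our own illustration, not the scheme of [14]) is a combinatorial ranking function that computes a unique index for any x ∈ Q on the fly:

```python
from math import comb

def count_leq(k, b):
    """Number of vectors in N^k with l1 norm at most b (stars and bars)."""
    return comb(b + k, k) if b >= 0 else 0

def h_rank(x, budget):
    """Lexicographic index of x within Q = {v in N^d : ||v||_1 <= budget}; no stored table needed."""
    r, remaining, d = 0, budget, len(x)
    for i, xi in enumerate(x):
        for v in range(xi):                       # vectors agreeing on x[:i] but smaller at position i
            r += count_leq(d - i - 1, remaining - v)
        remaining -= xi
    return r

d = 5
print("h((1, 0, 2, 0, 3)) =", h_rank((1, 0, 2, 0, 3), 2 * d))
print("|Q| =", count_leq(d, 2 * d))               # = C(3d, d) = 3003 for d = 5
```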
Societal Impact. Results in this work can be used in decision making systems which can potentially lead to biased decisions against racial, sex, or minority groups if used without care.
Acknowledgment. CF and OH are supported in part by NSF award 2007714, NSF award 2221871 and Army Research Laboratory grant under Cooperative Agreement W911NF-17-2-0196. LY is supported in part by DARPA grant HR00112190130, NSF Award 2221871.
4Multiplication by √d can take O(log d) time. | 1. What is the focus of the paper regarding distributed linear bandit settings?
2. What are the strengths of the proposed solution, particularly in terms of the reduction framework and the leveraging of existing results?
3. What are the weaknesses of the paper, especially regarding assumptions and communication limitations?
4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
5. Are there any minor issues or typos noticed by the reviewer? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies a distributed linear bandit setting with a central learner and agents that observe contexts and execute decisions. As the central learner does not observe the context, each agent needs to communicate its decision and observations. The goal of this paper is to minimize "uplink" communication, i.e. the number of bits each agents needs to transfer in order to enable central low regret learning. Two results are provided under different assumptions on the context distribution. The proposed solution is a reduction framework that leverages existing results on linear bandits.
Strengths And Weaknesses
Clarity:
Overall, the paper is well written. I have some minor remarks to further improve clarity:
line 66 and 300: Please discuss why previous works require O(d log(T)) bits and explain what is meant by "exponential complexity". Note that the reference [19] does not provide algorithms for the distributed setting, as far as I know.
line 91: Are two different notations needed for vector indices? Consider using only one.
line 120: The cited bounds are not quite correct; [1] achieves d√(T) log(T); [28] achieves d√(T) log(T)^{3/2}; [2] achieves roughly d√(T log(K)) (which is a factor √(log(K)) worse). Moreover, these bounds are not the best known, see e.g. https://arxiv.org/abs/1904.00242 and https://arxiv.org/abs/1905.01435
below line 190: I find the notation θ∗(Λ) very confusing: Why is the unknown parameter a function of the algorithm?
line 216: It might help to say that the agent plays the argmax action for θ̂_t!
line 220: In what sense do we have θ_t = θ∗? As written this is almost certainly not true.
Algorithm 1 / 2: Consider using the same ordering for central learner and agents in the pseudo code.
Eq (14) in the appendix: Why do you introduce a conditioning on θ∗ in the expectation? θ∗ is not a random variable.
Quality:
The assumptions and results are clearly stated. I liked that the main paper provides proof sketches. I spent some time checking the proofs in the appendix and did not identify any technical flaws.
Originality & Significance:
This paper studies a well motivated setting. Related work is discussed. I am not an expert in distributed computation, but to me the ideas in the context of linear bandits are novel.
Minor:
191: typo in 'uknown'
254: typo in 'unbiasdness'
Questions
How is (6) defined for parameters with multiple optimal actions? For instance at θ = 0, all actions are optimal, and the definition of X∗(θ) depends on this choice.
Limitations
It is unclear if X∗(θ) and its inverse can be computed. The provided example is for d=1, whereas d>1 will be relevant for many applications. This is discussed around 267-271, but I do not see that the authors provide a good resolution. This is potentially quite a big limitation of the method.
The regret bound for unknown contextual distribution requires a strong diversity assumption (Assumption 2). This allows the agents to essentially always play the greedy action. Without this assumption, the agents would need to engage in active exploration, and it is perhaps less clear how to coordinate this. A straightforward idea could be to use Thompson sampling, and communicate the sampled parameter to the agent. |
Below we summarize our main contributions: 1. We show the surprising result that, if the central learner knows the distribution of the contexts, we do not need to communicate the context at all - the agent does not need to send any information on the actual context she observes and the action she plays. It is sufficient for the agent to just send 1 bit to convey quantized information on her observed reward and nothing else. But for this very limited communication, the central learner can learn a policy that achieves the same order of regret as if full information about the context and reward is received. This result holds for nearly all context distributions and it is the best we can hope for - zero bits of communication for the context. 2. If the central learner has no knowledge of the context distribution, we show that ⇡ 5d bits per context (where d is the context dimension) is sufficient to achieve the same order regret as knowing the context in full precision. Note that previous algorithms, that rely on constructing 1/T -net for the set of feature vectors, use O(d log T ) bits per context to achieve the same order regret, where T is the length of the horizon [24], and require time complexity of O(T d) which is exponential in d.
Related Work and Distinction. Contextual linear bandits is a rich and important model that has attracted significant interest both in theory and applications [8, 24]. Popular algorithms for this setup include LinUCB [1, 37] and cotextual Thompson sampling [2]. Under Assumption 1, these algorithms achieve a regret of Õ(d p T ), where d is the dimension of an unknown system parameter and T is the time horizon, while the best known lower bound for this setup is ⌦(d p T ) [37]. These algorithms assume perfect knowledge of the contexts and rewards. Within this space, our work focuses on operation under communications constraints in a distributed setting.
There is large body of work focusing on distributed linear contextual bandits settings, but mainly within the framework of federated learning, where batched algorithms have been proposed for communication efficiency [43, 41, 6, 5, 23] that aggregate together observations and parameter learning across a large number of iterations. This is possible because in federated learning, the agents themselves wish to learn the system parameters, remain active playing multiple actions throughout the learning process, and exchange information with the goal of speeding up their learning [43, 41]. In contrast, in our setup batched algorithms cannot reduce the communication cost because each agent only plays a single action; this may be because agents are transient, but also because they may not be interested in learning - this may not be a task that the agents wish to consistently perform - and thus do not wish to devote resources to it. For example, an agent may wish to try a restaurant in a special occasion, but would not be interested in sampling multiple restaurants/learning recommendation system parameters. In other words, we consider a scenario where the user benefits from receiving an action (or policy) from the central learner, e.g., a recommendation. In response, the user gives
feedback to the central learner in terms of (compressed) context/reward. The compression operations benefit the user by helping reduce her communication cost. In principle, the user is not required to respond. But the central learner will be able to learn whenever there is a feedback; creating an incentive for the user to respond could be an interesting future topic. Our setup supports a different (and complementary) set of applications than the federated learning framework, and requires a new set of algorithms that operate without requiring agents to keep memory of past actions.1
There is a long line of research on compression for machine learning and distributed optimization, e.g., compression for distributed gradient descent [40, 3, 32, 18], and distributed inference [19]. However, such schemes are not optimized for active learning applications. Our compression schemes can be seen as quantization schemes for contexts and rewards tailored to active learning applications.
Our work also differs from traditional vector compression schemes [15] that aim to reconstruct the data potentially with some distortion (achieve rate-distortion trade-offs). In our case, we do not aim to reconstruct the data, but instead to distinguish the best arm for each context. Indeed, using 0 bits, as we do in Section 3, we cannot reconstruct a meaningful estimate of the context.
To the best of our knowledge, our framework has not been examined before for linear contextual bandits. Work in the literature has examined compression for distributed memoryless MABs [21], but only for rewards (scalar values) and not the contexts (large vectors), and thus these techniques also do not extend to our case.
Paper organization. Section 2 reviews our notation and problem formulation; Section 3 provides and analyzes our algorithm for known and Section 4 for unknown context distributions.
2 Notation and Problem Formulation
Notation. We use the following notation throughout the paper. For a vector X we use Xi or (X)i to denote the i-th element of the vector X; similarly for a matrix V , we use Vij or (V )ij to denote the element at row i, and column j. We use kV k2 to denote the matrix spectral norm. For a function f , we denote its domain and range by dom(f), ran(f) respectively. When dom(f) ✓ R, we use f(X) for a vector X 2 Rd to denote f(X) := [f(X1), ..., f(Xd)], i.e., the function f is applied element-wise; for example we use X2 to denote the element-wise square of X . We denote the inverse of a function f by f 1; if f is not one-to-one, with abuse of notation we use f 1 to denote a function that satisfies f(f 1(x)) = x8x 2 ran(f) (this is justified due to the axiom of choice [22]). For a matrix V , we use V 1 to denote its inverse; if V is singular, we use V 1 to denote its pseudo-inverse. We use [N ] for N 2 N to denote {1, ..., N}, and {Xa}a2A to denote the set {(a,Xa)|a 2 A}. We say that y = O(f(x)) if there is x0 and a constant C such that y Cf(x) 8x > x0; we also use Õ(f(x)) to omit log factors.
Contextual Linear Bandits. We consider a contextual linear bandits problem over a horizon of length T [8], where at each iteration t = 1, ..., T , an agent, taking into account the context, chooses an action at 2 A and receives a reward rt. For each action a 2 A, the agent has access to a corresponding feature vector Xt,a 2 Rd. The set of all such vectors {Xt,a}a2A is the context at time t, and the agent can use it to decide which action at to play. We assume that the context is generated from a distribution, i.e., given a, Xt,a is generated from a distribution Pa. As a specific example, we could have that a 2 Rd and Xt,a is generated from a Gaussian distribution with zero mean and covariance matrix ||a||2I , where I is the identity matrix, i.e., Pa = N (0, ||a||2I). The selection of at may depend not only on the current context {Xt,a}a2A but also on the history Ht , {{X1,a}a2A, a1, r1, ..., {Xt 1,a}a2A, at 1, rt 1}, namely, all previously selected actions, observed contexts and rewards. Once an action is selected, the reward is generated according to
rt = hXt,at , ✓?i+ ⌘t, (1)
where h., .i denotes the dot product, ✓? is an unknown (but fixed) parameter vector in Rd, and ⌘t is noise. We assume that the noise follows an unknown distribution with E[⌘t|Ft] = 0 and E[exp( ⌘t)|Ft] exp( 2/2)8 2 R, where Ft = ({X1,a}a2A, a1, r1, ..., {Xt,a}a2A, at) is the filtration [13] of historic information up to time t, and (X) is the -algebra generated by X [13].
1Our techniques could be adapted to additionally improve the communication efficiency of batched algorithms, but this is not the focus of our work.
The objective is to minimize the regret RT over a horizon of length T , where
RT = TX
t=1
max a2A hXt,a, ✓?i hXt,at , ✓?i. (2)
The best performing algorithms for this problem, such as LinUCB and contextual Thompson sampling, achieve a worst case regret of Õ(d p T ) [29, 28, 1, 2]. The best known lower bound is ⌦(d p T ) [37].
In the rest of this paper, we make the following assumptions that are standard in the literature [24]. Assumption 1. We consider contextual linear bandits that satisfy: (1.) kXt,ak2 1, 8t 2 [T ], a 2 A. (2.) k✓?k2 1. (3.) rt 2 [0, 1], 8t 2 [T ].
The boundedness assumption on rt can be relaxed using [21], which only requires approximately 3.5 bits on average to send rt, even if it is unbounded.
Memoryless Distributed Contextual Linear Bandits. We consider a distributed setting that consists of a central learner communicating with geographically separated agents. For example, the agents are drones that interact with a traffic policeman (central learner) as they fly by. We assume that the agents do not keep memory of past actions and may not be present for the whole duration of learning; learning in our setup can happen thanks to the persistent presence of the central learner.
At each time t, t = 1 . . . T , a distributed agent joins the system; she receives from the central learner information on the system, such as the current estimate of the parameter vector ✓? or the history Ht; she observes the current context {Xt,a}a2A, selects and plays an action at and collects the corresponding reward rt. Note that although the distributed agent knows the context {Xt,a}a2A, the action at and the observed reward rt, the central learner does not. The central learner may need this information to update its estimate of the system parameters, such as the unknown parameter vector ✓⇤, and the history Ht+1. However, we assume that the agent is restricted to utilize a communicationconstrained channel and thus may not be able to send the full information to the central learner.
The main question we ask in this paper is: can we design a compression scheme, where the agent sends to the central learner only one message using Bt bits (for as small as possible a value of Bt) that enables the central learner to learn equally well (experience the same order of regret) as if there were no communication constraints? With no communication constraints the agent could send unquantized the full information {{Xt,a}a2A, at, rt}. Instead, the agent transmits a message that could be a function of all locally available information at the agent. For example, it could be a function of (Ht, {Xt,at}a2A, at, rt), if the agent had received Ht from the central learner. It could also be a function of just (Xt,at , rt), which could be sufficient if the central learner employs an algorithm such as LinUCB [1, 37]. In summary, we set the following goal.
Goal. Design contextual linear bandit schemes for the memoryless distributed setting that achieve the best known regret of O(d p T log(T )), while communicating a small number of bits Bt.
We only impose communication constraints on the uplink communication (from the agents to the central learner) and assume no cost downlink communication (see discussion in Secttion 1).
Stochastic Quantizer (SQ) [16]. Our proposed algorithms use stochastic quantization, that we next review. We define SQ`, ` 2 N to be a quantizer, that uses log(`+1) bits, consisting of an encoder and decoder described as following. The encoder ⇠` takes a value x 2 [0, `] and outputs an integer value
⇠` = ⇢ bxc with probability dxe x dxe with probability x bxc. (3)
The output ⇠` is represented with log(` + 1) bits. The decoder D` takes as input the binary representation of ⇠`(x) and outputs the real value ⇠`(x). The composition of the encoder ⇠`, the binary mapping, and decoder D` is denoted by SQ`. We notice that since the decoder only inverts the binary mapping operation, we have that SQ` = ⇠`. When SQ` is applied at the agents side, the agent encodes its data, x, as ⇠`(x), then sends the corresponding binary mapping to the central learner that applies D` to get SQ`(x). With slightly abuse of notation, this operation is described in the paper, by saying that the agent sends SQ` to the central learner.
The quantizer SQ` is a form of dithering [16] and it has the following properties
E[SQ`(x)|x] = bxc(dxe x) + dxe(x bxc) = x(dxe bxc) = x, and |SQ`(x) x| 1.
In particular, it conveys an unbiased estimate of the input with a difference that is bounded by 1 almost surely. We also define a generalization of SQ` denoted by SQ [a,b] ` where the input x of the encoder is in [a, b] instead of [0, `]. The encoder first shifts and scales x using x̃ = `b a (x a) to make it lie in [0, `], then feeds x̃ to the encoder in (3). This operation is inverted at the decoder. It is easy to see that SQ[a,b]` satisfies
E[SQ[a,b]` (x)|x] = x, |SQ [a,b] ` (x) x| b a ` .
3 Contextual Linear Bandits with Known Context Distribution
In this section, we show that if the central learner knows the distributions for the vectors Xt,a, then the agent does not need to convey the specific realization of the vector Xt,a she observes at all - it is sufficient to just send 1 bit to convey some information on the observed reward and nothing else. But for this very limited communication, the central learner can experience the same order regret, as when receiving in full precision all the information that the agents have, namely, RT = O(d p T log T ). Algorithm 1, that we describe in this section, provides a method to achieve this. Algorithm 1 is clearly optimal, as we cannot hope to use less than zero bits for the vector Xt,a. Remark 1. Knowledge of the distribution of Xt,a is possible in practice, since many times the context may be capturing well studied statistics (e.g., male or female, age, weight, income, race, dietary restrictions, emotional state, etc) - the advent of large data has made and will continue to make such distributions available. Similarly, actions may be finite (eg., restaurants to visit) or well described (e.g., released amounts of substances), and thus the distribution of Xt,a could be derived. When the distribution is approximately known, we provide later in this section a bound on the misspefication performance penalty in terms of regret.
Main Idea. The intuition behind Algorithm 1 is that it reduces the multi-context linear bandit problem to a single context problem. In particular, it calls as a subroutine an algorithm we term ⇤, that serves as a placeholder for any current (or future) bandit algorithm that achieves regret O(d p T log T ) for the case of a single context (for example, LinUCB [1, 37]). The central learner uses ⇤ to convey to the agents the information they need to select a good action. Our aim is to parametrize the single context problem appropriately, so that, by solving it we also solve our original problem.
Recall that in a single-context problem, at each iteration $t$, a standard linear bandit algorithm $\Lambda$ selects a feature vector (an action) $x_t$ from a set of allowable actions $\mathcal{X}$ and observes a reward $r_t = \langle x_t, \theta'_\star \rangle + \eta_t$, (4) where $\theta'_\star$ is an unknown parameter and $\eta_t$ is noise that satisfies the same assumptions as in (1). The objective of $\Lambda$ is to minimize the standard linear regret $R_T(\Lambda)$ over a horizon of length $T$, namely
$$R_T(\Lambda) = \sum_{t=1}^{T} \Big( \max_{x \in \mathcal{X}} \langle x, \theta'_\star \rangle - \langle x_t, \theta'_\star \rangle \Big). \qquad (5)$$
Our reduction proceeds as follows. We assume that $\Lambda$ operates over the same horizon of length $T$ and is parametrized by an unknown parameter $\theta'_\star$. We design the action set $\mathcal{X}$ that we provide to $\Lambda$ using our knowledge of the distributions $P_a$,² as described later in (7). During each iteration, the central learner asks $\Lambda$ to select an action $x_t \in \mathcal{X}$ and then provides to $\Lambda$ a reward for this action (our design ensures that this reward satisfies (4) with $\theta'_\star = \theta_\star$). $\Lambda$ operates with this information, oblivious to what else the central learner does. Yet the action $x_t$ is never actually played: the central learner uses the selected action $x_t$ to create an updated estimate $\hat{\theta}_t$ of the parameter vector, as described later, and only sends this parameter estimate to the distributed agent. The agent observes her context, selects what action to play, and sends back her observed quantized reward to the central learner. This is the reward that the central learner provides to $\Lambda$. We design the set $\mathcal{X}$ and the agent operation so that (4) holds and $R_T - R_T(\Lambda)$ is small, where $R_T$ is the regret of our original multi-context problem and $R_T(\Lambda)$ is the regret of $\Lambda$. We next provide some intuition on how we achieve this.
We first describe how we construct the set $\mathcal{X}$. Let $\Theta$ be the set of all values that $\theta_\star$ could possibly take. For each possible parameter value $\theta \in \Theta$, the central learner considers the quantity
$$X_\star(\theta) = \mathbb{E}_{\{x_a : x_a \sim P_a\}}\Big[ \arg\max_{x \in \{x_a : a \in \mathcal{A}\}} \langle x, \theta \rangle \Big] \qquad (6)$$
²Recall that given $a$, $X_{t,a}$ is generated from distribution $P_a$; see Section 2.
where $x_a$ is the random variable that follows the distribution $P_a$. Ties in (6) can be broken uniformly at random; in fact, any pre-selected choice function would work, as long as the same function is also used in step 12 of Algorithm 1. Note that the function $X_\star : \mathbb{R}^d \to \mathbb{R}^d$ can be computed offline before the learning starts; see Example 1. We then use
$$\mathcal{X} = \{X_\star(\theta) \mid \theta \in \Theta\}. \qquad (7)$$
Intuitively, for each value of $\theta$, we optimistically assume that the distributed agent may select the best possible realization $X_{t,a}$ for this $\theta$ (the one whose expectation appears in (6)) and receive the associated reward; accordingly, we restrict the action space $\mathcal{X}$ of $\Lambda$ to only contain the expectations of these "best" $X_{t,a}$. The vector $x_t \in \mathcal{X}$ may not actually be the vector corresponding to the action the agent selects; it is only used to convey to the agent an estimate $\hat{\theta}_t$ of the unknown parameter, satisfying $x_t = X_\star(\hat{\theta}_t)$. Although the central learner does not control which action the agent plays, this choice is influenced by $\hat{\theta}_t$; we show in App. A that $X_{t,a_t}$ is an unbiased estimate of $x_t$, and that the generated reward follows the linear model in (4) with $\theta'_\star = \theta_\star$. In Theorem 1, we prove that
$$\arg\max_{x \in \mathcal{X}} \langle x, \theta_\star \rangle = X_\star(\theta_\star). \qquad (8)$$
Hence, if $\Lambda$ converges to selecting the best action for the single-context problem, then $\hat{\theta}_t$ converges to $\theta_\star$, provided the maximizer in (8) is unique. If there are multiple values of $\theta$ with $X_\star(\theta) = X_\star(\theta_\star)$, we show in the proof of Theorem 1 that they all lead to the same expected reward for the original multi-context problem.
Example 1. Consider the case where $d = 1$, $\mathcal{A} = \{1, 2\}$, $X_{t,a} \in \{-1, 1\}$ for all $a \in \mathcal{A}$, $\Theta = \{-1, 1\}$, $\theta_\star = 1$, and $X_{t,1}$ takes the value $-1$ with probability $p$ and $1$ otherwise, while $X_{t,2}$ takes the value $-1$ with probability $q$ and $1$ otherwise. Then, we have that
$$\arg\max_{X_{t,a}} \langle X_{t,a}, 1 \rangle = \begin{cases} 1 & \text{with probability } 1 - pq\\ -1 & \text{with probability } pq, \end{cases} \qquad (9)$$
where we use the fact that if $\arg\max_{X_{t,a}} \langle X_{t,a}, 1 \rangle \neq 1$, it must be the case that both $X_{t,1}$ and $X_{t,2}$ are $-1$. Thus, $X_\star(1) = \mathbb{E}[\arg\max_{X_{t,a}} \langle X_{t,a}, 1 \rangle] = 1 - 2pq$, and similarly $X_\star(-1) = -1 + 2(1-p)(1-q)$; hence, $\mathcal{X} = \{1 - 2pq,\; -1 + 2(1-p)(1-q)\}$. If $\Lambda$ decides to pick $x_t = 1 - 2pq$, we have $\hat{\theta}_t = 1$; otherwise $\hat{\theta}_t = -1$. This estimate $\hat{\theta}_t$ is then conveyed to the agent to help her pick the action.
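As a sanity check of Example 1, the short sketch below (our own, not part of the paper) estimates $X_\star(\pm 1)$ by Monte Carlo and compares it with the closed forms above; the function name and sample size are arbitrary choices.

```python
import random

def mc_x_star(theta, p, q, n=200_000):
    """Monte Carlo estimate of X_star(theta) for Example 1 (d = 1).

    Assumes X_{t,1} = -1 w.p. p (else +1) and X_{t,2} = -1 w.p. q (else +1).
    """
    total = 0.0
    for _ in range(n):
        x1 = -1.0 if random.random() < p else 1.0
        x2 = -1.0 if random.random() < q else 1.0
        total += x1 if x1 * theta >= x2 * theta else x2   # argmax_a <x_a, theta>
    return total / n

p, q = 0.3, 0.6
print(mc_x_star(1.0, p, q), 1 - 2 * p * q)                  # both approx 0.64
print(mc_x_star(-1.0, p, q), -1 + 2 * (1 - p) * (1 - q))    # both approx -0.44
```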
Algorithm Operation. The pseudo-code is provided in Algorithm 1. • First, the central learner calculates the function
$$X_\star(\theta) = \mathbb{E}_{\{x_a : x_a \sim P_a\}}\Big[ \arg\max_{x \in \{x_a : a \in \mathcal{A}\}} \langle x, \theta \rangle \Big], \qquad (10)$$
and creates the action set $\mathcal{X} = \{X_\star(\theta) \mid \theta \in \Theta\}$ that algorithm $\Lambda$ is going to use. • At each time $t$, based on past history, $\Lambda$ decides on a next action $x_t \in \mathcal{X}$. The central learner uses $x_t$ to calculate the new update $\hat{\theta}_t = X_\star^{-1}(x_t)$, where $X_\star^{-1}$ is an inverse of $X_\star$ (see Section 2). • The agent receives $\hat{\theta}_t$ from the central learner, observes her context, plays the action $a_t = \arg\max_{a \in \mathcal{A}} \langle X_{t,a}, \hat{\theta}_t \rangle$, and observes the reward $r_t$. She then quantizes the reward using a stochastic quantizer $SQ_1$ (see Section 2) and communicates the outcome to the central learner using one bit. • The central learner provides the quantized reward as input to $\Lambda$. Note that $\Lambda$ is oblivious to which actions are actually played; it treats the received reward as corresponding to the action $x_t$ it had decided on.
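A minimal Python sketch of this interaction loop follows; it is our own translation of the bullets above, with placeholder interfaces (`Lambda.select_action()`, `Lambda.update()`, `X_star_inv`, `agent_step`) that are not part of the paper.

```python
def run_algorithm1(Lambda, X_star_inv, agent_step, sq1, T):
    """Interaction loop of Algorithm 1 (sketch, placeholder interfaces).

    Lambda     : single-context bandit algorithm with select_action() / update(x, r)
    X_star_inv : maps a selected action x_t in X back to a parameter estimate theta_hat
    agent_step : simulates the distributed agent; given theta_hat it observes its
                 context, pulls argmax_a <X_{t,a}, theta_hat>, and returns reward r_t
    sq1        : 1-bit stochastic quantizer for the reward, e.g. the sq_encode sketch
                 above with ell = 1 (assuming rewards normalized to [0, 1])
    """
    for _ in range(T):
        x_t = Lambda.select_action()      # Lambda picks x_t in the constructed set X
        theta_hat = X_star_inv(x_t)       # central learner inverts X_star
        r_t = agent_step(theta_hat)       # agent plays and observes its reward
        r_hat = sq1(r_t)                  # 1-bit quantized reward sent uplink
        Lambda.update(x_t, r_hat)         # Lambda treats r_hat as the reward of x_t
```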
The following theorem proves that Algorithm 1 achieves a regret of $R_T(\Lambda) + O(\sqrt{T\log T})$, where $R_T(\Lambda)$ is the regret of $\Lambda$ in (5). Hence, if $\Lambda$ satisfies the best known regret bound of $O(d\sqrt{T\log T})$, e.g., LinUCB, then Algorithm 1 achieves a regret of $O(d\sqrt{T\log T})$. The theorem holds under the mild set of assumptions stated in Section 2. Theorem 1. Algorithm 1 uses 1 bit per reward and 0 bits per context. Under Assumption 1, it achieves a regret $R_T = R_T(\Lambda) + O(\sqrt{T\log T})$ with probability at least $1 - \frac{1}{T}$.
Proof outline. The complete proof is available in App. A; we next provide a short outline. From the definition of $X_\star$ in (10), we notice the following. Recall that the distributed agent receives $\hat{\theta}_t$ from the central learner and pulls the best action for this $\hat{\theta}_t$, i.e., $a_t = \arg\max_{a \in \mathcal{A}} \langle X_{t,a}, \hat{\theta}_t \rangle$.
Algorithm 1 Communication-efficient contextual linear bandits with known distribution
1: Input: an algorithm $\Lambda$ for the single-context case, underlying set of actions $\mathcal{X}$, and time horizon $T$.
2: Initialize: $X_\star(\theta) = \mathbb{E}_{\{x_a : x_a \sim P_a\}}[\arg\max_{x \in \{x_a : a \in \mathcal{A}\}} \langle x, \theta \rangle]$, $\mathcal{X} = \{X_\star(\theta) \mid \theta \in \Theta\}$, $\hat{r}_0 = 0$.
3: Let $X_\star^{-1}$ be an inverse of $X_\star$.
4: for $t = 1 : T$ do
5: Central learner:
6: Receive $\hat{r}_{t-1}$ and provide it to $\Lambda$.
7: $\Lambda$, using the history $(x_1, \hat{r}_1, \ldots, x_{t-1}, \hat{r}_{t-1})$, selects $x_t$.
8: Send $\hat{\theta}_t = X_\star^{-1}(x_t)$ to the agent.
9: Agent:
10: Receive $\hat{\theta}_t$ from the central learner.
11: Observe context realization $\{X_{t,a}\}_{a \in \mathcal{A}}$.
12: Pull arm $a_t = \arg\max_{a \in \mathcal{A}} \langle X_{t,a}, \hat{\theta}_t \rangle$ and receive reward $r_t$.
13: Send $\hat{r}_t = SQ_1(r_t)$ to the central learner using 1 bit.
We show that, conditioned on $x_t$, the associated vector $X_{t,a_t}$ is an unbiased estimate of $x_t$ with a small variance. Given this, we prove that $\hat{r}_t$ satisfies (4), and thus the rewards observed by $\Lambda$ are generated according to a linear bandit model whose unknown parameter is the same as $\theta_\star$.
We next decompose the difference $R_T - R_T(\Lambda)$ into two terms: $\Sigma_T = \sum_{t=1}^{T} \langle \arg\max_{X_{t,a}} \langle X_{t,a}, \theta_\star \rangle, \theta_\star \rangle - \langle x_t, \theta_\star \rangle$ and $\Sigma'_T = \sum_{t=1}^{T} \langle \arg\max_{X_{t,a}} \langle X_{t,a}, \hat{\theta}_t \rangle, \theta_\star \rangle - \max_{x \in \mathcal{X}} \langle x, \theta_\star \rangle$. To bound the first term, we show that the unbiasedness property together with Assumption 1 implies that $\Sigma_T$ is a martingale with bounded differences. This implies that $|\Sigma_T| = O(\sqrt{T\log T})$ with high probability. To bound $\Sigma'_T$, we first show that $\arg\max_{x \in \mathcal{X}} \langle x, \theta_\star \rangle = X_\star(\theta_\star)$ (we note that this is why the algorithm converges to a $\hat{\theta}_t$ that is equal to, or results in the same expected reward as, $\theta_\star$). Then, following a similar approach, we show that $\Sigma'_T$ is a martingale with bounded differences, which implies that $|\Sigma'_T| = O(\sqrt{T\log T})$ with high probability. $\square$ Downlink Communication. The downlink cost of our scheme is $O(d)$ (see App. A for a discussion). Operation Complexity. The main complexity that our algorithm adds beyond that of $\Lambda$ is the computation of the function $X_\star$. The time complexity of computing $X_\star(\theta)$ depends on the context distribution. While computing $X_\star(\theta)$ can be expensive in worst-case scenarios, it can be computed or approximated efficiently for many practical distributions, even in closed form. We give the following examples:
• For $d = 1$ and $\theta > 0$, $X_\star(\theta)$ is the expectation of the maximum of multiple random variables, i.e., $X_\star(\theta) = \mathbb{E}_{x_a \sim P_a}[\max_{a \in \mathcal{A}} x_a]$, which can be computed or approximated efficiently if the distributions $P_a$ are given in closed form. • If $\{P_a\}_{a \in \mathcal{A}}$ are continuous distributions, then $X_\star(\theta)$, $\theta \neq 0$, can be expressed as
$$X_\star(\theta) = \sum_{a \in \mathcal{A}} \int_{x_a \sim P_a} x_a\, \mathbb{E}_{x_{a'} \sim P_{a'},\, a' \in \mathcal{A}\setminus\{a\}}\big[\mathbb{I}[\langle x_{a'}, \theta\rangle < \langle x_a, \theta\rangle\ \forall a' \neq a]\,\big|\, x_a\big]\, dP_a. \qquad (11)$$
For many distributions, the previous expression can be computed or approximated efficiently. For instance, consider the case where $d \ge 1$ and the $x_a$ are independent, identically distributed $d$-dimensional Gaussian vectors with mean $\mu$ and covariance matrix $\Sigma = U^T D U$, where $D$ is a diagonal matrix and $U$ is upper triangular. The inner expectation in (11) is equal to $\big(Q\big(-\tfrac{\langle x_a - \mu, \theta\rangle}{\|\sqrt{D}U\theta\|_2}\big)\big)^{|\mathcal{A}|-1}$, where $Q(c) = \frac{1}{\sqrt{2\pi}} \int_c^{\infty} \exp(-\tfrac{1}{2}x^2)\, dx$. Hence, $X_\star(\theta)$ can be approximated efficiently in that case.
• For discrete distributions, $X_\star(\theta)$ can be computed efficiently, depending on the number of mass points of the distribution and on whether the distribution has structure that simplifies the expression. More generally, $X_\star(\theta)$ can also be approximated by Monte Carlo sampling, as sketched below.
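The following minimal sketch (ours, not from the paper) approximates $X_\star(\theta)$ by sampling and builds the action set $\mathcal{X}$ of (7) for a finite candidate set $\Theta$; `sample_context` stands for a user-supplied sampler of one draw $x_a \sim P_a$ per action.

```python
import numpy as np

def approx_x_star(theta, sample_context, n_samples=10_000):
    """Monte Carlo approximation of X_star(theta) as defined in (6)/(10).

    sample_context() must return an array of shape (num_actions, d) holding
    one draw x_a ~ P_a for every action a.
    """
    theta = np.asarray(theta, dtype=float)
    acc = np.zeros_like(theta)
    for _ in range(n_samples):
        xs = sample_context()                 # realizations {x_a}_{a in A}
        acc += xs[np.argmax(xs @ theta)]      # the argmax feature vector for this draw
    return acc / n_samples

def build_action_set(Theta, sample_context):
    """The action set X = {X_star(theta) : theta in Theta} of (7), for finite Theta."""
    return [approx_x_star(theta, sample_context) for theta in Theta]
```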
Imperfect Knowledge of Distributions. Since we only use the distributions to calculate $X_\star$, imperfect knowledge of the distributions affects us only to the degree that it affects the calculation of $X_\star$. Suppose that we have an estimate $\tilde{X}_\star$ of $X_\star$ that satisfies
$$\sup_{\theta \in \Theta} \|X_\star(\theta) - \tilde{X}_\star(\theta)\|_2 \le \epsilon. \qquad (12)$$
Using Theorem 1 we prove in App. A the following corollary.
Corollary 1. Suppose we are given $\tilde{X}_\star$ satisfying (12). Then, there exists an algorithm $\Lambda$ for which Algorithm 1 achieves $R_T = \tilde{O}(d\sqrt{T} + \epsilon T \sqrt{d})$ with probability at least $1 - \frac{1}{T}$.
Privacy. Our result may be useful for applications beyond communication efficiency; indeed, the context may contain private information (e.g., personal preferences, financial information), and our algorithm enables the agents to not share this private information with the central learner at all, without impeding the learning process. Surprisingly, work in [48], motivated by privacy considerations, has shown that if an agent adds a small amount of zero-mean noise to the true context before sending it to the central learner, this can severely affect the regret in some cases; and yet our algorithm essentially enables the learner to "guess" the context with no regret penalty when the distributions are known. Although adding zero-mean noise to the observed feature vector conveys an unbiased estimate of the observation, the difference between that setting and ours is technical and mainly due to the fact that the unbiasedness is required to hold conditioned on the central learner's observation (the noisy context).
Note that we do not make formal privacy claims in this paper, but simply observe that our approach could potentially be leveraged for privacy purposes. It is true that the reward can reveal some information about the context, e.g., if all actions result in a small reward for one context and a large reward for another. However, privatizing the reward (which implies a private context in our case) is much easier than privatizing the context, and there are many proposed optimal algorithms with little to no regret loss, e.g., [20, 38, 44, 35]. This is not the case when privatizing the context: it was shown in [42] that privatizing the context can lead to linear regret, and relaxed definitions of privacy have been proposed to avoid this.
4 Contextual Linear Bandits with Unknown Context Distribution
We now consider the case where the learner does not know the context distributions, and thus Algorithm 1, which uses zero bits for the context, cannot be applied. In this case, related literature conjectures a lower bound of $\Omega(d)$ bits [46, 47], which is discouraging since it suggests it is probably impossible to design an algorithm whose communication depends only logarithmically on $d$. Additionally, in practice we use $32d$ bits to convey full-precision values; thus this conjecture indicates that in practice we may not be able to achieve order improvements in the number of bits communicated without a performance loss.
In this section, we provide Algorithm 2, which uses $\approx 5d$ bits per context and achieves the (optimal) regret $R_T = O(d\sqrt{T\log T})$. We believe Algorithm 2 is interesting for two reasons: 1. In theory, an infinite number of bits is needed to convey full-precision values; we prove that a constant number of bits per dimension per context is sufficient. The previously best-known algorithms, which rely on constructing a $1/T$-net for the set of feature vectors, use $O(d\log T)$ bits per context, which goes to infinity as $T$ goes to infinity. Moreover, those algorithms require exponential complexity [24], while ours is computationally efficient. 2. In practice, especially for large values of $d$, reducing the number of bits conveyed from $32d$ to $\approx 5d$ is quite significant: it is a reduction by a factor of six, which implies six times less communication.
Main Idea. The intuition behind Algorithm 2 is the following. The central learner uses an estimate of the $d\times d$ least-squares matrix $V_t = \sum_{i=1}^{t} X_{i,a_i} X_{i,a_i}^T$ to update her estimate of the parameter vector $\theta_\star$. Thus, when quantizing the vector $X_{t,a}$, we want to make sure not only that this vector is conveyed with sufficient accuracy, but also that the central learner can calculate the matrix $V_t$ accurately. In particular, we would like the central learner to be able to calculate an unbiased estimate of each entry of $X_{t,a}$ and of each entry of the matrix $V_t$. Our algorithm achieves this by quantizing the feature vector $X_{t,a_t}$ and also the diagonal (and only the diagonal) entries of the least-squares matrix $V_t$. We prove that, by doing so, with only $\approx 5d$ bits we can provide unbiased estimates and guarantee an $O(\frac{1}{\sqrt{d}})$ quantization error for each entry of the matrix almost surely.
Quantization Scheme. We here describe the proposed quantization scheme. • To quantize $X_{t,a_t}$: Let $m \triangleq \lceil \sqrt{d} \rceil$. We first send the sign of each coordinate of $X_{t,a_t}$ using $d$ bits, namely, we send the vector $s_t = X_{t,a_t}/|X_{t,a_t}|$. To quantize the magnitude $|X_{t,a_t}|$, we scale each coordinate of $|X_{t,a_t}|$ by $m$ and quantize it using a Stochastic Quantizer (SQ)³ with $m+1$ levels in
³As described in (3) in Section 2, SQ maps a value $x$ to an integer value, namely $\lfloor x \rfloor$ with probability $\lceil x\rceil - x$ and $\lceil x \rceil$ with probability $x - \lfloor x \rfloor$.
the interval $[0, m]$. Let $X_t \triangleq SQ_m(m|X_{t,a_t}|)$ denote the resulting SQ output; we note that $X_t$ takes non-negative integer values and lies in a norm-1 ball of radius $2d$ (this holds since the original vector lies in a norm-2 ball of radius 1 and the error in each coordinate is at most $1/m$). That is, it holds that $X_t \in Q = \{x \in \mathbb{N}^d \mid \|x\|_1 \le 2d\}$. We then use any enumeration $h : Q \to [|Q|]$ of this set to encode $X_t$ using $\log(|Q|)$ bits. • To quantize $X_{t,a_t}X_{t,a_t}^T$: Let $X^2_{t,a_t}$ denote the vector that collects the diagonal entries of $X_{t,a_t}X_{t,a_t}^T$. Let $\hat{X}_t \triangleq s_t X_t / m$ be the estimate of $X_{t,a_t}$ that the central learner retrieves. Note that $\hat{X}^2_t$ is not an unbiased estimate of $X^2_{t,a_t}$; however, $|(X^2_{t,a_t} - \hat{X}^2_t)_i| \le 3/m$ for all coordinates $i$ (proved in App. B). Our scheme simply conveys the difference $X^2_{t,a_t} - \hat{X}^2_t$ with 1 bit per coordinate using an $SQ_1^{[-3/m, 3/m]}$ quantizer.
The central learner and distributed agent operations are presented in Algorithm 2.
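To make the scheme concrete, here is a sketch (ours, with an assumed vectorized helper `sq_vec` implementing the stochastic quantizer of Section 2) of the agent-side quantization of $X_{t,a_t}$ and of the diagonal correction; the enumeration $h$ is left abstract.

```python
import numpy as np

def sq_vec(x, levels, lo, hi, rng):
    """Vectorized stochastic quantizer SQ_levels^{[lo,hi]}; returns integer levels."""
    x_tilde = levels * (np.asarray(x, dtype=float) - lo) / (hi - lo)
    base = np.floor(x_tilde)
    round_up = rng.random(x_tilde.shape) < (x_tilde - base)
    return base + round_up

def quantize_context(x, rng):
    """Agent-side quantization of X_{t,a_t} in Algorithm 2 (steps 9-13), sketch only."""
    d = x.shape[0]
    m = int(np.ceil(np.sqrt(d)))
    s = np.sign(x)
    s[s == 0] = 1.0                                         # arbitrary sign for zero entries
    X_int = sq_vec(m * np.abs(x), m, 0.0, float(m), rng)    # magnitudes on m+1 levels
    x_hat = s * X_int / m                                   # reconstruction the learner will form
    diff = np.clip(x**2 - x_hat**2, -3.0 / m, 3.0 / m)      # paper shows |diff_i| <= 3/m
    level = sq_vec(diff, 1, -3.0 / m, 3.0 / m, rng)         # 1 bit per coordinate
    e2 = -3.0 / m + (6.0 / m) * level                       # decoded diagonal correction
    return s, X_int, e2   # transmitted as d sign bits, h(X_int), and d correction bits
```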
Example 2. Consider the case where $d = 5$. Then each coordinate of $|X_{t,a_t}|$ is scaled by $3$ and quantized using $SQ_3$ to one of the values $0, 1, 2, 3$ to obtain $X_t$. The function $h$ then maps each value of $X_t$ satisfying $\|X_t\|_1 \le 10$ to a unique value (a code) in the set $[|Q|]$. For instance, the value $3\cdot\mathbf{1}$ is not given a code, where $\mathbf{1}$ is the vector of all ones. However, note that for $|X_{t,a_t}|$ to be mapped to $3\cdot\mathbf{1}$, we must have $3|(X_{t,a_t})_i| \ge 2$ for all coordinates $i$, which cannot happen since it would imply $\|X_{t,a_t}\|_2 \ge 2\sqrt{5}/3 > 1$, contradicting Assumption 1.
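To see how many bits the code $h(X_t)$ costs, the following sketch (ours) counts $|Q| = |\{x \in \mathbb{N}^d : \|x\|_1 \le 2d\}|$ by stars and bars and reports the uplink bits per context for a few dimensions; the resulting count is consistent with (slightly below) the $1 + \log_2(2d+1) + 5.03d$ bound stated in Theorem 2.

```python
from math import comb, log2

def uplink_bits_per_context(d):
    """Bits sent by the agent for one context in Algorithm 2 (reward bit excluded)."""
    # |Q| = #{x in N^d : ||x||_1 <= 2d}, counted by stars and bars.
    size_Q = sum(comb(k + d - 1, d - 1) for k in range(2 * d + 1))
    return d + d + log2(size_Q)   # d sign bits + d correction bits + code h(X_t)

for d in (5, 20, 100):
    print(d, round(uplink_bits_per_context(d) / d, 2))   # roughly 4.3-4.8 bits per dimension
```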
Algorithm 2 Communication-efficient contextual linear bandits with unknown distribution
1: Input: underlying set of actions $\mathcal{A}$, and time horizon $T$.
2: $\hat{\theta}_0 = 0$, $\tilde{V}_0 = 0$, $u_0 = 0$, $m = \lceil\sqrt{d}\rceil$.
3: Let $h$ be an enumeration of the set $Q = \{x \in \mathbb{N}^d \mid \|x\|_1 \le 2d\}$.
4: for $t = 1 : T$ do
5: Agent:
6: Receive $\hat{\theta}_{t-1}$ from the central learner.
7: Observe context realization $\{X_{t,a}\}_{a \in \mathcal{A}}$.
8: Pull arm $a_t = \arg\max_{a \in \mathcal{A}} \langle X_{t,a}, \hat{\theta}_{t-1} \rangle$ and receive reward $r_t$.
9: Compute the signs $s_t = X_{t,a_t}/|X_{t,a_t}|$ of $X_{t,a_t}$.
10: Let $X_t = SQ_m(m|X_{t,a_t}|)$.
11: $e^2_t = SQ_1^{[-3/m,3/m]}(X^2_{t,a_t} - \hat{X}^2_t)$, where $\hat{X}_t = s_t X_t / m$.
12: Send to the central learner $h(X_t)$, $s_t$, and $e^2_t$ using $\log_2(|Q|)$, $d$, and $d$ bits, respectively.
13: Send $\hat{r}_t = SQ_1(r_t)$ using 1 bit.
14: Central learner:
15: Receive $X_t$, $s_t$, $e^2_t$, and $\hat{r}_t$ from the distributed agent.
16: $\hat{X}_t = s_t X_t / m$, $\hat{X}^{(D)}_t = \hat{X}^2_t + e^2_t$.
17: $u_t \leftarrow u_{t-1} + \hat{r}_t \hat{X}_t$.
18: $\tilde{V}_t \leftarrow \tilde{V}_{t-1} + \hat{X}_t\hat{X}_t^T - \mathrm{diag}(\hat{X}_t\hat{X}_t^T) + \mathrm{diag}(\hat{X}^{(D)}_t)$.
19: $\hat{\theta}_t \leftarrow \tilde{V}_t^{-1} u_t$.
20: Send $\hat{\theta}_t$ to the next agent.
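The central-learner side of Algorithm 2 (steps 16-19) is a least-squares update in which the diagonal of the outer product is replaced by its separately quantized estimate. A minimal sketch, assuming the quantities produced by `quantize_context` above and a decoded 1-bit reward `r_hat`:

```python
import numpy as np

def learner_update(u, V, s, X_int, e2, r_hat, d):
    """Central-learner update of Algorithm 2 (steps 16-19), sketch only."""
    m = int(np.ceil(np.sqrt(d)))
    x_hat = s * X_int / m                       # step 16: reconstruct X_hat_t
    x_diag = x_hat**2 + e2                      # step 16: corrected diagonal estimate
    u = u + r_hat * x_hat                       # step 17
    outer = np.outer(x_hat, x_hat)
    V = V + outer - np.diag(np.diag(outer)) + np.diag(x_diag)   # step 18
    theta_hat = np.linalg.solve(V, u)           # step 19 (assumes V_t is invertible)
    return u, V, theta_hat
```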
Algorithm Performance. Theorem 2, stated next, holds under Assumption 1 in Section 2 and some additional regularity assumptions on the distributions $P_a$, provided in Assumption 2.
Assumption 2. There exist constants $c, c'$ such that for any sequence $\theta_1, \ldots, \theta_T$, where $\theta_t$ depends only on $H_t$, with probability at least $1 - \frac{c'}{T}$ it holds that
$$\sum_{i=1}^{t} X_{i,a_i} X_{i,a_i}^T \succeq \frac{ct}{d}\, I \qquad \forall t \in [T], \qquad (13)$$
where $a_t = \arg\max_{a \in \mathcal{A}} \langle X_{t,a}, \theta_t \rangle$, and $I$ is the identity matrix.
We note that several common assumptions in the literature imply (13), for example, bounded eigenvalues for the covariance matrix of $X_{t,a_t}$ [11, 27, 17]. Such assumptions hold for a wide range of distributions, including subgaussian distributions with bounded density [36].
Challenge in relaxing Assumption 2 (diversity assumption). The main challenge in relaxing the diversity assumption for LinUCB-based (or Thompson-sampling-based) algorithms is that the regret of those algorithms is bounded as $\tilde{O}(\sqrt{T}\,\|\hat{\theta}_T - \theta_\star\|_{V_T})$. Without quantization, the quantity $\|\hat{\theta}_T - \theta_\star\|_{V_T}$ grows slowly and is nearly a constant; however, without the diversity assumption, the quantization error can make $\|\hat{\theta}_T - \theta_\star\|^2_{V_T}$ grow as $\sqrt{T}$ in the worst case. This is due to the fact that sub-optimal arms do not receive a large number of pulls; hence, we do not have a good estimate of $\theta_\star$ in those directions, while the quantization error in estimating $V_T$ accumulates in all directions. As a result, the regret bound increases by a factor of $T^{1/4}$. We leave it as future work to either relax the diversity assumption (which is required in our paper only in the case of unknown context distribution) or show that removing it unavoidably increases the regret order. Theorem 2. Algorithm 2 satisfies, for all $t$: $X_t \in Q$; and $B_t \le 1 + \log_2(2d+1) + 5.03d$ bits. Under Assumptions 1 and 2, it achieves a regret $R_T = O(d\sqrt{T\log T})$ with probability at least $1 - \frac{1}{T}$.
Proof Outline. To bound the number of bits $B_t$, we first bound the size of $Q$ by formulating a standard counting problem: we find the number of non-negative integer solutions to a linear equation. To bound the regret $R_T$, we start by proving that our quantization scheme guarantees some desirable properties, namely, unbiasedness and an $O(\frac{1}{\sqrt{d}})$ quantization error for each vector coordinate. We then upper bound the regret in terms of $\|\hat{\theta}_t - \theta_\star\|_2$ and show that this difference can be decomposed as
$$\|\hat{\theta}_t - \theta_\star\|_2 = \|V_t^{-1}\|_2\Big(\big\|\textstyle\sum_{i=1}^{t} E_i\big\|_2 + (1 + |\eta_i|)\big\|\textstyle\sum_{i=1}^{t} e_i\big\|_2 + \big\|\textstyle\sum_{i=1}^{t} \hat{\eta}_i X_{i,a_i}\big\|_2\Big), \qquad (14)$$
where $E_t$ captures the error in estimating the matrix $X_{t,a_t}X_{t,a_t}^T$, $e_t$ is the error in estimating $X_{t,a_t}$, and $\hat{\eta}_t$ is a noise that satisfies the same properties as $\eta_t$. Using Assumption 2, we prove that $\|V_t^{-1}\|$ is $O(\frac{d}{t})$ with high probability, and from the unbiasedness and boundedness of all error quantities we show that they grow as $O(\sqrt{t\log t})$ with high probability. This implies that $\|\hat{\theta}_t - \theta_\star\|_2 = O(d\sqrt{\frac{\log t}{t}})$, and hence $R_T = O(d\sqrt{T\log T})$. The complete proof is provided in App. B. $\square$
Algorithm Complexity. If we do not count the quantization operations, it is easy to see that the complexity of the rest of the algorithm is dominated by the computation of $\tilde{V}_t^{-1}$, which can be done in $O(d^{2.373})$ [4]. For the quantization, we note that each coordinate of $X_t$ can be computed in $\tilde{O}(1)$ time⁴. Moreover, the computation of $h(x)$ for $x \in Q$ can be done in constant time with high probability using hash tables, where $h$ is the enumeration function in Step 3. Hence, the added computational complexity is almost linear in $d$. Although a hash table for $h$ can consume $\Omega(2^{5d})$ memory, by sacrificing a constant factor in the number of bits, we can find enumeration functions that can be stored efficiently. As an example, consider the scheme in [14], which finds a one-to-one function $h : Q \to \mathbb{N}^+$ that can be stored and computed efficiently but only guarantees in expectation that $\mathbb{E}[\log(h(x))] = O(d)$ for all $x \in Q$. Downlink Communication Cost. Although we assume no-cost downlink communication, as was also the case for Algorithm 1, the downlink in Algorithm 2 is only used to send the updated parameter vector $\hat{\theta}_t$ to the agents. If desired, these estimates can be quantized using the same method as for $X_{t,a_t}$, which (following a proof similar to that of Theorem 2) can be shown not to affect the order of the regret while reducing the downlink communication to $\approx 5d$ bits per iteration. Offloading To Agents. For applications where the agents wish to help the central learner computationally, the central learner may simply aggregate information to keep track of $u_t, \tilde{V}_t$ and broadcast these values to the agents; the estimate $\hat{\theta}_t$ can then be calculated at each agent. Moving the computational load to the agents does not affect the regret order or the number of bits communicated on the uplink. Remark 2. Under the regularity assumptions in [17], the regret bound can be improved by a factor of $\sqrt{\log(K)/d}$, where $K = |\mathcal{A}|$ is the number of actions. However, this does not improve the regret in the worst case, as the worst-case number of actions is $O(C^d)$, $C > 1$ [24].
Societal Impact. Results in this work can be used in decision making systems which can potentially lead to biased decisions against racial, sex, or minority groups if used without care.
Acknowledgment. CF and OH are supported in part by NSF award 2007714, NSF award 2221871 and Army Research Laboratory grant under Cooperative Agreement W911NF-17-2-0196. LY is supported in part by DARPA grant HR00112190130, NSF Award 2221871.
⁴Multiplication by $\sqrt{d}$ can take $O(\log d)$ time. | 1. What is the focus and contribution of the paper regarding contextual bandit problems?
2. What are the strengths of the proposed algorithms, particularly in their ability to handle limited communication?
3. What are the weaknesses of the paper, especially regarding the practicality of the settings and the lack of discussion on operational complexity?
4. Do you have any concerns or questions about the computational complexity of Algorithm 1 or the limitations of Algorithm 2?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper considers the contextual bandit problem with limited communication. In this problem, each arm has a context distribution, and a context at each round t = 1, 2, ..., T is drawn i.i.d. from the corresponding distribution. The goal is to learn the coefficient theta* so as to choose the arm with the largest reward r_{t,i} = <X_{t,i}, theta_*> + eta_t, where eta_t is i.i.d. noise. For ease of discussion, all contexts and theta_* lie in a unit ball. It is well known that UCB and Thompson sampling algorithms are effective in this setting. Each agent receives the estimated theta from the central learner, chooses an arm, and sends information to the learner. Regarding the learner's ability, this paper considers two settings: the first is that the learner knows the context distribution; the second is that the learner does not know the context distribution.
Algorithm 1 for the first setting sends 1 bit of information, which suffices to achieve the optimal sqrt{T log T}-order rate. Algorithm 2 for the second setting sends about 5d bits of information and also has the optimal rate.
This paper is mostly well-written. The introduction of the algorithms could be improved: for example, Line154-160 and L194-206 are not very helpful before one actually sees the algorithm with an example. I feel the results somewhat lack integration (for example, alg 1 is black-box whereas alg 2 is base-algorithm dependent), but overall the results are slightly above the bar.
Details:
Algorithm 1 is a meta-algorithm that internally uses the standard bandit algorithm \Lambda. The crux in this part is that, if the algorithm sends the agent hattheta_t = X^{-1}(xt), then the average context is xt and the 1-bit signal is an unbiased estimator of rt = <xt, theta>. Since agents never send contextual information to the learner, this algorithm depends on the context distribution, but some remedy is discussed in Line272-274.
Algorithm 2 is an algorithm that sends partial information about the contexts. The algorithm carefully quantizes contexts. Note that if several terms are multiplied in the update formula, the composite terms also need to be quantized. It assumes that every path of arms linearly increases each eigenvalue, and the exploration-exploitation tradeoff is not seriously dealt with.
The model depends on the assumption that rt is bounded, and extending it to Gaussian bandits is plausible.
Strengths And Weaknesses
Strengths: Elegant setup and algorithms. Robustness to the estimation error of the context distribution.
Weaknesses: Impractical settings (unlimited downlink communication). Insufficient discussion of the operational complexity of obtaining X^*(theta) and its inverse (alg 1).
Questions
Is the computational complexity of alg 1 addressed?
Is it possible to analyze incorrectly specified context distribution?
Can we make algorithm 2 black-box?
What happens when the minimum eigenvalue is not guaranteed to grow in algorithm 2?
Is there any justification for the limited uplink communication while allowing unlimited downlink communication?
Limitations
I found no ethical problem. Algorithm 2 omits the exploration-exploitation tradeoff, but this is acceptable for the conceptual ideas in these results.