| venue | paper_content | prompt | format | review |
|---|---|---|---|---|
| stringclasses (2 values) | stringlengths (7.54k–83.7k) | stringlengths (161–2.5k) | stringclasses (5 values) | stringlengths (293–9.84k) |
NIPS | Title
Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
Abstract
We study two time-scale linear stochastic approximation algorithms, which can be used to model well-known reinforcement learning algorithms such as GTD, GTD2, and TDC. We present finite-time performance bounds for the case where the learning rate is fixed. The key idea in obtaining these bounds is to use a Lyapunov function motivated by singular perturbation theory for linear differential equations. We use the bound to design an adaptive learning rate scheme which significantly improves the convergence rate over the known optimal polynomial decay rule in our experiments, and can be used to potentially improve the performance of any other schedule where the learning rate is changed at pre-determined time instants.
1 Introduction
A key component of reinforcement learning algorithms is to learn or approximate value functions under a given policy [Sutton, 1988], [Bertsekas and Tsitsiklis, 1996], [Szepesvári, 2010], [Bertsekas, 2011], [Bhatnagar et al., 2012], [Sutton and Barto, 2018]. Many existing algorithms for learning value functions are variants of the temporal-difference (TD) learning algorithms [Sutton, 1988], [Tsitsiklis and Van Roy, 1997], and can be viewed as stochastic approximation algorithms for minimizing the Bellman error (or objectives related to the Bellman error). Characterizing the convergence of these algorithms, such as TD(0), TD(λ), GTD, and nonlinear GTD, has been an important objective of reinforcement learning [Szepesvári, 2010], [Bhatnagar et al., 2009], and [Sutton et al., 2016]. The asymptotic convergence of these algorithms with diminishing step sizes has been established using stochastic approximation theory in many prior works (comprehensive surveys on stochastic approximation can be found in [Benveniste et al., 2012], [Kushner and Yin, 2003], and [Borkar, 2009]).
The conditions required for theoretically establishing asymptotic convergence in an algorithm with diminishing step sizes imply that the learning rate becomes very small very quickly. As a result, the algorithm will require a very large number of samples to converge. Reinforcement learning algorithms used in practice follow a pre-determined learning rate (step-size) schedule which, in most cases, uses decaying step sizes first and then a fixed step size. This gap between theory and practice has prompted a sequence of works on finite-time performance of temporal difference learning algorithms with either time-varying step sizes or constant step sizes [Dalal et al., 2017a,b, Liu et al.,
2018, Lakshminarayanan and Szepesvari, 2018, Bhandari et al., 2018, Srikant and Ying, 2019]. Most of these results are for single time-scale TD algorithms, except [Dalal et al., 2017b] which considers two time-scale algorithms with decaying step sizes. Two time-scale TD algorithms are an important class of reinforcement learning algorithms because they can improve the convergence rate of TD learning or remedy the instability of single time-scale TD in some cases. This paper focuses on two time-scale linear stochastic approximation algorithms with constant step sizes. The model includes TDC, GTD and GTD2 as special cases (see [Sutton et al., 2008], [Sutton et al., 2009] and [Szepesvári, 2010] for more details). We note that, in contemporaneous work, [Xu et al., 2019] have carried out a two-time-scale analysis of linear stochastic approximation with diminishing step sizes.
Besides the theoretical analysis of finite-time performance of two time-scale reinforcement learning algorithms, another important aspect of reinforcement learning algorithms, which is imperative in practice but has been largely overlooked, is the design of learning rate schedule, i.e., how to choose proper step-sizes to improve the learning accuracy and reduce the learning time. This paper addresses this important question by developing principled heuristics based on the finite-time performance bounds.
The main contributions of this paper are summarized below.
• Finite Time Performance Bounds: We study two time-scale linear stochastic approximation algorithms, driven by Markovian samples. We establish finite time bounds on the mean-square error with respect to the fixed point of the corresponding ordinary differential equations (ODEs). The performance bound consists of two parts: a steady-state error and a transient error, where the steady-state error is determined by the step sizes but independent of the number of samples (or number of iterations), and the transient error depends on both step sizes and the number of samples. The transient error decays geometrically as the number of samples increases. The key differences between this paper and [Dalal et al., 2017b] include (i) we do not require a sparse projection step in the algorithm; and (ii) we assume constant step-sizes which allows us to develop the adaptive step-size selection heuristic mentioned next.
• Adaptive Learning Rate Selection: Based on the finite-time performance bounds, in particular, the steady-state error and the transient error terms in the bounds, we propose an adaptive learning rate selection scheme. The intuition is to use a constant learning rate until the transient error is dominated by the steady-state error; after that, running the algorithm further with the same learning rate is not very useful and therefore, we reduce the learning rate at this time. To apply adaptive learning rate selection in a model-free fashion, we develop data-driven heuristics to determine the time at which the transient error is close to the steady-state error. A useful property of our adaptive rate selection scheme is that it can be used with any learning rate schedule which already exists in many machine learning software platforms: one can start with the initial learning rate suggested by such schedules and get improved performance by using our adaptive scheme. Our experiments on Mountain Car and Inverted Pendulum show that our adaptive learning rate selection significantly improves the convergence rates as compared to optimal polynomial decay learning rate strategies (see [Dalal et al., 2017b] and [Konda et al., 2004] for more details on polynomial decay step-size rules).
2 Model, Notation and Assumptions
We consider the following two time-scale linear stochastic approximation algorithm:
$$U_{k+1} = U_k + \epsilon^{\alpha}\,\big(A_{uu}(X_k)U_k + A_{uv}(X_k)V_k + b_u(X_k)\big)$$
$$V_{k+1} = V_k + \epsilon^{\beta}\,\big(A_{vu}(X_k)U_k + A_{vv}(X_k)V_k + b_v(X_k)\big), \qquad (1)$$
where $\{X_k\}$ are the samples from a Markov process and $\epsilon^{\alpha}$, $\epsilon^{\beta}$ (with $0 < \epsilon < 1$) are the step sizes of the two updates. We assume $\beta < \alpha$ so that, over $\epsilon^{-\beta}$ iterations, the change in $V$ is $O(1)$ while the change in $U$ is $O(\epsilon^{\alpha-\beta})$. Therefore, $V$ is updated at a faster time scale than $U$.
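To make the recursion (1) concrete, here is a minimal NumPy sketch of the two time-scale iteration driven by a toy Markov chain. All matrices, dimensions, and the values of $\epsilon$, $\alpha$, $\beta$ below are illustrative placeholders, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, alpha, beta = 0.05, 1.5, 1.0            # step sizes eps**alpha (slow) and eps**beta (fast)
step_u, step_v = eps ** alpha, eps ** beta

# Toy 2-state Markov chain and per-state coefficient matrices (placeholders).
P_chain = np.array([[0.9, 0.1], [0.2, 0.8]])
A = {x: {blk: -np.eye(2) + 0.1 * rng.standard_normal((2, 2))
         for blk in ("uu", "uv", "vu", "vv")} for x in (0, 1)}
b = {x: {blk: 0.1 * rng.standard_normal(2) for blk in ("u", "v")} for x in (0, 1)}

U, V, x = np.zeros(2), np.zeros(2), 0
for k in range(20_000):
    U_next = U + step_u * (A[x]["uu"] @ U + A[x]["uv"] @ V + b[x]["u"])
    V_next = V + step_v * (A[x]["vu"] @ U + A[x]["vv"] @ V + b[x]["v"])
    U, V = U_next, V_next
    x = rng.choice(2, p=P_chain[x])          # next state of the Markov chain
```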
In the context of reinforcement learning, when combined with linear function approximation of the value function, GTD, GTD2, and TDC can be viewed as two time-scale linear stochastic approximation algorithms, and can be described in the same form as (1). For example, GTD2 with
linear function approximation is as follows:
$$U_{k+1} = U_k + \epsilon^{\alpha}\,\big(\phi(X_k) - \zeta\phi(X_{k+1})\big)\,\phi^{\top}(X_k)V_k$$
$$V_{k+1} = V_k + \epsilon^{\beta}\,\big(\delta_k - \phi^{\top}(X_k)V_k\big)\,\phi(X_k),$$
where $\zeta$ is the discount factor, $\phi(x)$ is the feature vector of state $x$, $U_k$ is the weight vector such that $\phi^{\top}(x)U_k$ is the approximation of the value function of state $x$ at iteration $k$, $\delta_k = c(X_k) + \zeta\phi^{\top}(X_{k+1})U_k - \phi^{\top}(X_k)U_k$ is the TD error, and $V_k$ is the weight vector that estimates $\mathbb{E}[\phi(X_k)\phi(X_k)^{\top}]^{-1}\mathbb{E}[\delta_k\phi(X_k)]$. We now summarize the notation we use throughout the paper and the assumptions we make.
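Before turning to the assumptions, the following is a hedged sketch of the GTD2 iteration above with linear features. The Markov chain, cost vector, and feature matrix are stand-ins chosen only to make the snippet self-contained; they are not from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, dim = 5, 3
zeta = 0.95                                  # discount factor
eps, alpha, beta = 0.05, 1.5, 1.0
step_u, step_v = eps ** alpha, eps ** beta   # slow (U) and fast (V) step sizes

Phi = rng.standard_normal((n_states, dim))                # phi(x): one feature row per state
P_chain = np.full((n_states, n_states), 1.0 / n_states)   # placeholder transition matrix
cost = rng.standard_normal(n_states)                      # placeholder per-state cost c(x)

U, V, x = np.zeros(dim), np.zeros(dim), 0
for k in range(50_000):
    x_next = rng.choice(n_states, p=P_chain[x])
    phi, phi_next = Phi[x], Phi[x_next]
    delta = cost[x] + zeta * phi_next @ U - phi @ U       # TD error delta_k
    U_new = U + step_u * (phi - zeta * phi_next) * (phi @ V)
    V_new = V + step_v * (delta - phi @ V) * phi
    U, V, x = U_new, V_new, x_next
# phi(x) @ U is the linear approximation of the value of state x
```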
• Assumption 1: $\{X_k\}$ is a Markov chain with state space $\mathcal{S}$. We assume that the following two limits exist:
$$\begin{pmatrix} \bar A_{uu} & \bar A_{uv} \\ \bar A_{vu} & \bar A_{vv} \end{pmatrix} = \lim_{k\to\infty} \begin{pmatrix} \mathbb{E}[A_{uu}(X_k)] & \mathbb{E}[A_{uv}(X_k)] \\ \mathbb{E}[A_{vu}(X_k)] & \mathbb{E}[A_{vv}(X_k)] \end{pmatrix}, \qquad \begin{pmatrix} \bar b_u \\ \bar b_v \end{pmatrix} = \lim_{k\to\infty} \begin{pmatrix} \mathbb{E}[b_u(X_k)] \\ \mathbb{E}[b_v(X_k)] \end{pmatrix} = 0.$$
Note that, without loss of generality, we assume $\bar b = 0$, which allows the fixed point of the associated ODEs to be 0. This can be guaranteed by appropriate centering. We define
$$B(X_k) = A_{uu}(X_k) - A_{uv}(X_k)\bar A_{vv}^{-1}\bar A_{vu}, \qquad \tilde B(X_k) = A_{vu}(X_k) - A_{vv}(X_k)\bar A_{vv}^{-1}\bar A_{vu},$$
$$\bar B = \bar A_{uu} - \bar A_{uv}\bar A_{vv}^{-1}\bar A_{vu}, \qquad \bar{\tilde B} = \bar A_{vu} - \bar A_{vv}\bar A_{vv}^{-1}\bar A_{vu}.$$
• Assumption 2: We assume that $\max\{\|b_u(x)\|, \|b_v(x)\|\} \le b_{\max} < \infty$ for any $x \in \mathcal{S}$. We also assume that $\max\{\|B(x)\|, \|\tilde B(x)\|, \|A_{uu}(x)\|, \|A_{vu}(x)\|, \|A_{uv}(x)\|, \|A_{vv}(x)\|\} \le 1$ for any $x \in \mathcal{S}$. Note that these assumptions imply that the steady-state limits of the random matrices/vectors will also satisfy the same inequalities.
• Assumption 3: We assume Āvv and B̄ are Hurwitz and Āvv is invertible. Let Pu and Pv be the solutions to the following Lyapunov equations:
$$-I = \bar B^{\top} P_u + P_u \bar B, \qquad -I = \bar A_{vv}^{\top} P_v + P_v \bar A_{vv}.$$
Since both $\bar A_{vv}$ and $\bar B$ are Hurwitz, $P_u$ and $P_v$ are real positive definite matrices.
• Assumption 4: Define $\tau_{\Delta} \ge 1$ to be the mixing time of the Markov chain $\{X_k\}$. We assume that, for all $i$ and all $k \ge \tau_{\Delta}$,
$$\|\mathbb{E}[b_k \mid X_0 = i]\| \le \Delta, \qquad \|\bar B - \mathbb{E}[B(X_k) \mid X_0 = i]\| \le \Delta, \qquad \|\bar{\tilde B} - \mathbb{E}[\tilde B(X_k) \mid X_0 = i]\| \le \Delta,$$
$$\|\bar A_{uv} - \mathbb{E}[A_{uv}(X_k) \mid X_0 = i]\| \le \Delta, \qquad \|\bar A_{vv} - \mathbb{E}[A_{vv}(X_k) \mid X_0 = i]\| \le \Delta.$$
• Assumption 5: As in [Srikant and Ying, 2019], we assume that there exists $K \ge 1$ such that $\tau_{\Delta} \le K\log(\frac{1}{\Delta})$. For convenience, we choose
$$\Delta = \tilde\epsilon = 2\epsilon^{\alpha}\big(1 + \|\bar A_{vv}^{-1}\bar A_{vu}\| + \epsilon^{\beta-\alpha}\big)$$
and drop the subscript from $\tau_{\Delta}$, i.e., $\tau_{\Delta} = \tau$. Also, for convenience, we assume that $\epsilon$ is small enough such that $\tilde\epsilon\tau \le \frac{1}{4}$.
We further define the following notation:
• Define the matrix
$$P = \begin{pmatrix} \frac{\xi_v}{\xi_u+\xi_v} P_u & 0 \\ 0 & \frac{\xi_u}{\xi_u+\xi_v} P_v \end{pmatrix}, \qquad (2)$$
where $\xi_u = 2\|P_u \bar A_{uv}\|$ and $\xi_v = 2\|P_v \bar A_{vv}^{-1}\bar A_{vu}\bar B\|$.
• Let γmax and γmin denote the largest and smallest eigenvalues of Pu and Pv, respectively. So γmax and γmin are also upper and lower bounds on the eigenvalues of P.
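As a sanity check on the quantities defined above, the following sketch builds $P_u$, $P_v$, $\xi_u$, $\xi_v$, and the block-diagonal matrix $P$ of (2) for a small random instance; the matrices $\bar A_{uu}, \bar A_{uv}, \bar A_{vu}, \bar A_{vv}$ here are placeholders (chosen so that $\bar A_{vv}$ and $\bar B$ are almost surely Hurwitz), not from any particular RL problem.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
Abar_uu, Abar_uv = -np.eye(2), 0.1 * rng.standard_normal((2, 2))
Abar_vu, Abar_vv = 0.1 * rng.standard_normal((2, 2)), -np.eye(2)

B_bar = Abar_uu - Abar_uv @ np.linalg.inv(Abar_vv) @ Abar_vu

# Solve  B_bar^T P_u + P_u B_bar = -I  and  Abar_vv^T P_v + P_v Abar_vv = -I.
P_u = solve_continuous_lyapunov(B_bar.T, -np.eye(2))
P_v = solve_continuous_lyapunov(Abar_vv.T, -np.eye(2))

xi_u = 2 * np.linalg.norm(P_u @ Abar_uv, 2)
xi_v = 2 * np.linalg.norm(P_v @ np.linalg.inv(Abar_vv) @ Abar_vu @ B_bar, 2)

# Block-diagonal P from (2).
s = xi_u + xi_v
P = np.block([[xi_v / s * P_u, np.zeros((2, 2))],
              [np.zeros((2, 2)), xi_u / s * P_v]])
```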
3 Finite-Time Performance Bounds
To establish the finite-time performance guarantees of the two time-scale linear stochastic approximation algorithm (1), we define
$$Z_k = V_k + \bar A_{vv}^{-1}\bar A_{vu}U_k \qquad \text{and} \qquad \Theta_k = \begin{pmatrix} U_k \\ Z_k \end{pmatrix}.$$
Then we consider the following Lyapunov function:
$$W(\Theta_k) = \Theta_k^{\top} P\, \Theta_k, \qquad (3)$$
where P is a symmetric positive definite matrix defined in (2) (P is positive definite because both Pu and Pv are positive definite matrices). The reason to introduce Zk will become clear when we introduce the key idea of our analysis based on singular perturbation theory.
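A small helper that forms $Z_k$, stacks $\Theta_k$, and evaluates the Lyapunov function (3); all inputs are assumed to be NumPy arrays of compatible shapes (for example the $P$ constructed in the previous sketch).

```python
import numpy as np

def lyapunov_value(U, V, P, Abar_vv, Abar_vu):
    """W(Theta) = Theta^T P Theta with Theta = (U, Z) and Z = V + Abar_vv^{-1} Abar_vu U."""
    Z = V + np.linalg.solve(Abar_vv, Abar_vu @ U)   # avoids forming the explicit inverse
    theta = np.concatenate([U, Z])
    return float(theta @ P @ theta)
```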
The following lemma bounds the expected change in the Lyapunov function in one time step.
Lemma 1. For any $k \ge \tau$ and $\epsilon$, $\alpha$, $\beta$ such that $\eta_1\tilde\epsilon\tau + \frac{2\tilde\epsilon^{2}}{\epsilon^{\alpha}}\gamma_{\max} \le \frac{\kappa_1}{2}$, the following inequality holds:
$$\mathbb{E}[W(\Theta_{k+1}) - W(\Theta_k)] \le -\frac{\epsilon^{\alpha}}{\gamma_{\max}}\Big(\frac{\kappa_1}{2} - \kappa_2\,\epsilon^{\alpha-\beta}\Big)\,\mathbb{E}[W(\Theta_k)] + \epsilon^{2\beta}\tau\eta_2,$$
where $\tilde\epsilon = 2\epsilon^{\alpha}\big(1 + \|\bar A_{vv}^{-1}\bar A_{vu}\| + \epsilon^{\beta-\alpha}\big)$, and $\eta_1$, $\eta_2$, $\kappa_1$, and $\kappa_2$ are constants independent of $\epsilon$.
The proof of Lemma 1 is somewhat involved, and is provided in the supplementary material. The definitions of η1, η2, κ1 and κ2 can be found in the supplementary material as well. Here, we provide some intuition behind the result by studying a related ordinary differential equation (ODE). In particular, consider the expected change in the stochastic system divided by the slow time-scale step size $\epsilon^{\alpha}$:
$$\frac{\mathbb{E}[U_{k+1} - U_k \mid U_{k-\tau}=u, V_{k-\tau}=v, X_{k-\tau}=x]}{\epsilon^{\alpha}} = \mathbb{E}\big[\big(A_{uu}(X_k)U_k + A_{uv}(X_k)V_k + b_u(X_k)\big) \,\big|\, U_{k-\tau}=u, V_{k-\tau}=v, X_{k-\tau}=x\big]$$
$$\epsilon^{\alpha-\beta}\,\frac{\mathbb{E}[V_{k+1} - V_k \mid U_{k-\tau}=u, V_{k-\tau}=v, X_{k-\tau}=x]}{\epsilon^{\alpha}} = \mathbb{E}\big[\big(A_{vu}(X_k)U_k + A_{vv}(X_k)V_k + b_v(X_k)\big) \,\big|\, U_{k-\tau}=u, V_{k-\tau}=v, X_{k-\tau}=x\big], \qquad (4)$$
where the expectation is conditioned sufficiently in the past in terms of the underlying Markov chain (i.e. conditioned on the state at time k − τ instead of k) so the expectation is approximately in steady-state.
Approximating the left-hand side by derivatives and the right-hand side using steady-state expectations, we get the following ODEs:
$$\dot u = \bar A_{uu}u + \bar A_{uv}v \qquad (5)$$
$$\epsilon^{\alpha-\beta}\,\dot v = \bar A_{vu}u + \bar A_{vv}v. \qquad (6)$$
Note that, in the limit as $\epsilon^{\alpha-\beta} \to 0$, the second of the above two ODEs becomes an algebraic equation, instead of a differential equation. In the control theory literature, such systems are called singularly perturbed differential equations, see for example [Kokotovic et al., 1999]. In [Khalil, 2002, Chapter 11], the following Lyapunov function has been suggested to study the stability of such singularly perturbed ODEs:
$$W(u, v) = d\,u^{\top}P_u u + (1-d)\big(v + \bar A_{vv}^{-1}\bar A_{vu}u\big)^{\top} P_v \big(v + \bar A_{vv}^{-1}\bar A_{vu}u\big), \qquad (7)$$
for d ∈ [0, 1]. The function W mentioned earlier in (3) is the same as above for a carefully chosen d. The rationale behind the use of the Lyapunov function (7) is presented in the appendix.
The intuition behind the result in Lemma 1 can be understood by studying the dynamics of the above Lyapunov function in the ODE setting. To simplify the notation, we define $z = v + \bar A_{vv}^{-1}\bar A_{vu}u$, so the Lyapunov function can also be written as
$$W(u, z) = d\,u^{\top}P_u u + (1-d)\,z^{\top}P_v z, \qquad (8)$$
and adapting the manipulations for nonlinear ODEs in [Khalil, 2002, Chapter 11] to our linear model, we get
$$\dot W = 2d\,u^{\top}P_u\dot u + 2(1-d)\,z^{\top}P_v\dot z \qquad (9)$$
$$\le -\begin{pmatrix} \|u\| & \|z\| \end{pmatrix}\,\tilde\Psi\,\begin{pmatrix} \|u\| \\ \|z\| \end{pmatrix}, \qquad (10)$$
where
$$\tilde\Psi = \begin{pmatrix} d & -d\gamma_{\max} - (1-d)\gamma_{\max}\sigma_{\min} \\ -d\gamma_{\max} - (1-d)\gamma_{\max}\sigma_{\min} & \dfrac{1-d}{2\,\epsilon^{\alpha-\beta}} - (1-d)\gamma_{\max}\sigma_{\min} \end{pmatrix}. \qquad (11)$$
Note that Ψ̃ is positive definite when
$$d\left(\frac{1-d}{2\,\epsilon^{\alpha-\beta}} - (1-d)\gamma_{\max}\sigma_{\min}\right) \ge \big(d\gamma_{\max} + (1-d)\gamma_{\max}\sigma_{\min}\big)^{2}, \qquad (12)$$
i.e., when
$$\epsilon^{\alpha-\beta} \le \frac{d(1-d)}{2d(1-d)\gamma_{\max}\sigma_{\min} + \big(d\gamma_{\max} + (1-d)\gamma_{\max}\sigma_{\min}\big)^{2}}. \qquad (13)$$
Let λ̃min denote the smallest eigenvalue of Ψ̃. We have
$$\dot W \le -\tilde\lambda_{\min}\big(\|u\|^{2} + \|z\|^{2}\big) \le -\frac{\tilde\lambda_{\min}}{\gamma_{\max}}\,W. \qquad (14)$$
In particular, recall that we obtained the ODEs by dividing by the step-size $\epsilon^{\alpha}$. Therefore, for the discrete equations, we would expect
$$\mathbb{E}[W(\Theta_{k+1}) - W(\Theta_k)] \lessapprox -\epsilon^{\alpha}\,\frac{\tilde\lambda_{\min}}{\gamma_{\max}}\,\mathbb{E}[W(\Theta_k)], \qquad (15)$$
which resembles the transient term of the upper bound in Lemma 1. The exact expression in the discrete, stochastic case is of course different and additionally includes a steady-state term, which is not captured by the ODE analysis above.
Now, we are ready to state the main theorem.
Theorem 1. For any $k \ge \tau$ and $\epsilon$, $\alpha$, $\beta$ such that $\eta_1\tilde\epsilon\tau + \frac{2\tilde\epsilon^{2}}{\epsilon^{\alpha}}\gamma_{\max} \le \frac{\kappa_1}{2}$, we have
$$\mathbb{E}[\|\Theta_k\|^{2}] \le \frac{\gamma_{\max}}{\gamma_{\min}}\left(1 - \frac{\epsilon^{\alpha}}{\gamma_{\max}}\Big(\frac{\kappa_1}{2} - \kappa_2\,\epsilon^{\alpha-\beta}\Big)\right)^{k-\tau}\big(1.5\|\Theta_0\| + 0.5\,b_{\max}\big)^{2} + \epsilon^{2\beta-\alpha}\,\frac{\gamma_{\max}}{\gamma_{\min}}\,\frac{\eta_2\tau}{\frac{\kappa_1}{2} - \kappa_2\,\epsilon^{\alpha-\beta}}.$$
Proof. Applying Lemma 1 recursively, we obtain
$$\mathbb{E}[W(\Theta_k)] \le u^{k-\tau}\,\mathbb{E}[W(\Theta_\tau)] + v\,\frac{1 - u^{k-\tau}}{1-u} \le u^{k-\tau}\,\mathbb{E}[W(\Theta_\tau)] + v\,\frac{1}{1-u}, \qquad (16)$$
where $u = 1 - \frac{\epsilon^{\alpha}}{\gamma_{\max}}\big(\frac{\kappa_1}{2} - \kappa_2\,\epsilon^{\alpha-\beta}\big)$ and $v = \eta_2\tau\epsilon^{2\beta}$. Also, we have that
$$\mathbb{E}[\|\Theta_k\|^{2}] \le \frac{1}{\gamma_{\min}}\,\mathbb{E}[W(\Theta_k)] \le \frac{1}{\gamma_{\min}}\,u^{k-\tau}\,\mathbb{E}[W(\Theta_\tau)] + v\,\frac{1}{\gamma_{\min}(1-u)}. \qquad (17)$$
Furthermore,
$$\mathbb{E}[W(\Theta_\tau)] \le \gamma_{\max}\,\mathbb{E}[\|\Theta_\tau\|^{2}] \le \gamma_{\max}\,\mathbb{E}\big[(\|\Theta_\tau - \Theta_0\| + \|\Theta_0\|)^{2}\big] \le \gamma_{\max}\big((1 + 2\tilde\epsilon\tau)\|\Theta_0\| + 2\tilde\epsilon\tau\, b_{\max}\big)^{2}. \qquad (18)$$
The theorem then holds using the fact that $\tilde\epsilon\tau \le \frac{1}{4}$.
Theorem 1 essentially states that the expected error for a two time-scale linear stochastic approximation algorithm comprises two terms: a transient error term which decays geometrically with time and a steady-state error term which is directly proportional to $\epsilon^{2\beta-\alpha}$ and the mixing time. This characterization of the finite-time error is useful in understanding the impact of different algorithmic and problem parameters on the rate of convergence, allowing the design of efficient techniques such as the adaptive learning rate rule which we will present in the next section.
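To see how the two terms in Theorem 1 trade off, the snippet below evaluates the bound for made-up constants $\kappa_1$, $\kappa_2$, $\eta_2$, $\tau$, $\gamma_{\max}$, $\gamma_{\min}$; only the functional form comes from the theorem, all numbers are illustrative placeholders.

```python
# Illustrative placeholder constants; in the paper these are problem dependent.
kappa1, kappa2, eta2, tau = 1.0, 0.1, 0.01, 10
gamma_max, gamma_min = 2.0, 0.5

def mse_bound(k, eps, alpha, beta, theta0_norm=1.0, b_max=1.0):
    """Transient + steady-state upper bound of Theorem 1 (illustrative constants)."""
    step = eps ** alpha
    gap = kappa1 / 2 - kappa2 * eps ** (alpha - beta)
    transient = (gamma_max / gamma_min) * (1 - step / gamma_max * gap) ** (k - tau) \
        * (1.5 * theta0_norm + 0.5 * b_max) ** 2
    steady = eps ** (2 * beta - alpha) * (gamma_max / gamma_min) * eta2 * tau / gap
    return transient + steady

for k in (1_000, 10_000, 100_000):
    print(k, mse_bound(k, eps=0.05, alpha=1.5, beta=1.0))
```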
4 Adaptive Selection of Learning Rates
Equipped with the theoretical results from the previous section, one interesting question that arises is the following: given a time-scale ratio $\lambda = \alpha/\beta$, can we use the finite-time performance bound to design a rule for adapting the learning rate to optimize performance?
In order to simplify the discussion, let $\epsilon^{\beta} = \mu$ and $\epsilon^{\alpha} = \mu^{\lambda}$. Therefore, Theorem 1 can be simplified and written as
$$\mathbb{E}[\|\Theta_k\|^{2}] \le K_1\left(1 - \mu^{\lambda}\Big(\frac{\kappa_1}{2\gamma_{\max}} - \frac{\kappa_2}{\gamma_{\max}}\,\mu^{\lambda-1}\Big)\right)^{k} + \mu^{2-\lambda}\,\frac{K_2}{\frac{\kappa_1}{2} - \kappa_2\mu^{\lambda-1}}, \qquad (19)$$
where $K_1$ and $K_2$ are problem-dependent positive constants. Since we want the system to be stable, we will assume that $\mu$ is small enough such that $\frac{\kappa_1}{2\gamma_{\max}} - \frac{\kappa_2}{\gamma_{\max}}\,\mu^{\lambda-1} = c > 0$. Plugging this condition in (19), we get
$$\mathbb{E}[\|\Theta_k\|^{2}] \le K_1\big(1 - c\mu^{\lambda}\big)^{k} + \frac{K_2\,\mu^{2-\lambda}}{\gamma_{\max}c}. \qquad (20)$$
In order to optimize performance for a given number of samples, we would like to choose the learning rate µ as a function of the time step. In principle, one can assume time-varying learning rates, derive more general mean-squared error expressions (similar to Theorem 1), and then try to optimize over the learning rates to minimize the error for a given number of samples. However, this optimization problem is computationally intractable. We note that even if we assume that we are only going to change the learning rate a finite number of times, the resulting optimization problem of finding the times at which such changes are performed and finding the learning rate at these change points is an equally intractable optimization problem. Therefore, we have to devise simpler adaptive learning rate rules.
To motivate our learning rate rule, we first consider a time $T$ such that the errors due to the transient and steady-state parts in (20) are equal, i.e.,
$$K_1\big(1 - c\mu^{\lambda}\big)^{T} = \frac{K_2\,\mu^{2-\lambda}}{\gamma_{\max}c}. \qquad (21)$$
From this time onwards, running the two time-scale stochastic approximation algorithm any further with $\mu$ as the learning rate is not going to significantly improve the mean-squared error. In particular, the mean-squared error beyond this time is upper bounded by twice the steady-state error $\frac{K_2\mu^{2-\lambda}}{\gamma_{\max}c}$. Thus, at time $T$, it makes sense to reset $\mu$ as $\mu \leftarrow \mu/\xi$, where $\xi > 1$ is a hyperparameter. Roughly speaking, $T$ is the time at which one is close to steady-state for a given learning rate, and therefore, it is the time to reduce the learning rate to get to a new "steady-state" with a smaller error.
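Under the same kind of illustrative constants, (21) can be solved for the switching time $T$ in closed form; $K_1$, $K_2$, $c$, $\gamma_{\max}$ below are stand-ins, since in practice they are unknown, which is exactly why the data-driven test described next is needed.

```python
import numpy as np

K1, K2, c, gamma_max, lam = 1.0, 1.0, 0.5, 1.0, 1.5   # illustrative constants only

def switch_time(mu):
    """T solving K1 (1 - c mu^lam)^T = K2 mu^(2-lam) / (gamma_max c), from (21)."""
    steady = K2 * mu ** (2 - lam) / (gamma_max * c)
    return np.log(steady / K1) / np.log(1.0 - c * mu ** lam)

mu, xi = 0.1, 1.2
for _ in range(3):
    print(f"mu = {mu:.3f}: reduce the learning rate after ~{switch_time(mu):.0f} iterations")
    mu /= xi
```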
The key difficulty in implementing the above idea is that it is difficult to determine T . For ease of exposition, we considered a system centered around 0 in our analysis (i.e., Θ∗ = 0). More generally, the results presented in Theorem 1 and (19) - (20) will have Θk replaced by Θk−Θ∗. In any practical application, Θ∗ will be unknown. Thus, we cannot determine ‖Θk − Θ∗‖ as a function of k and hence, it is difficult to use this approach.
Our idea to overcome this difficulty is to estimate whether the algorithm is close to its steady-state by observing ‖Θk −Θ0‖ where Θ0 is our initial guess for the unknown parameter vector and is thus known to us. Note that ‖Θk −Θ0‖ is zero at k = 0 and will increase (with some fluctuations due to randomness) to ‖Θ∗ −Θ0‖ in steady-state (see Figure 1 for an illustration). Roughly speaking, we approximate the curve in this figure by a sequence of straight lines, i.e., perform a piecewise linear approximation, and conclude that the system has reached steady-state when the lines become approximately horizontal. We provide the details next.
To derive a test to estimate whether ‖Θk −Θ0‖ has reached steady-state, we first note the following inequality for k ≥ T (i.e., after the steady-state time defined in (21)):
$$\mathbb{E}[\|\Theta_0 - \Theta^*\|] - \mathbb{E}[\|\Theta_k - \Theta^*\|] \le \mathbb{E}[\|\Theta_k - \Theta_0\|] \le \mathbb{E}[\|\Theta_k - \Theta^*\|] + \mathbb{E}[\|\Theta_0 - \Theta^*\|]$$
$$\Rightarrow\quad d - \sqrt{\frac{2K_2\mu^{2-\lambda}}{\gamma_{\max}c}} \le \mathbb{E}[\|\Theta_k - \Theta_0\|] \le d + \sqrt{\frac{2K_2\mu^{2-\lambda}}{\gamma_{\max}c}}, \qquad (22)$$
where the first pair of inequalities follows from the triangle inequality and the second pair of inequalities follows from (20) - (21), Jensen's inequality, and letting $d = \mathbb{E}[\|\Theta_0 - \Theta^*\|]$. Now, for $k \ge T$, consider the following $N$ points: $\{X_i = i,\; Y_i = \|\Theta_{k+i} - \Theta_0\|\}_{i=1}^{N}$. Since these points are all obtained after "steady-state" is reached, if we draw the best-fit line through these points, its slope should be small. More precisely, let $\psi_N$ denote the slope of the best-fit line passing through these $N$ points. Using (22) along with formulas for the slope in linear regression, and after some algebraic manipulations (see Appendix ?? for detailed calculations), one can show that:
$$|\mathbb{E}[\psi_N]| = O\left(\frac{\mu^{1-\frac{\lambda}{2}}}{N}\right), \qquad \mathrm{Var}(\psi_N) = O\left(\frac{1}{N^{2}}\right). \qquad (23)$$
Therefore, if $N \ge \frac{\chi}{\mu^{\lambda/2}}$, then the slope of the best-fit line connecting $\{X_i, Y_i\}$ will be $O\big(\frac{\mu^{1-\lambda/2}}{N}\big)$ with high probability (for a sufficiently large constant $\chi > 0$). On the other hand, when the algorithm is in the transient state, the difference between $\|\Theta_{k+m} - \Theta_0\|$ and $\|\Theta_k - \Theta_0\|$ will be $O(m\mu)$ since $\Theta_k$ changes by $O(\mu)$ from one time slot to the next (see Lemma 3 in Appendix ?? for more details). Using this fact, the slope of the best-fit line through $N$ consecutive points in the transient state can be shown to be $O(\mu)$, similar to (23). Since we choose $N \ge \frac{\chi}{\mu^{\lambda/2}}$, the slope of the best-fit line in steady state, i.e., $O\big(\frac{\mu^{1-\lambda/2}}{N}\big)$, will be lower than the slope of the best-fit line in the transient phase, i.e., $O(\mu)$ (for a sufficiently large $\chi$). We use this fact as a diagnostic test to determine whether or not the algorithm has entered steady-state. If the diagnostic test returns true, we update the learning rate (see Algorithm 1).
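A hedged sketch of the diagnostic just described: fit a least-squares line to the last $N$ values of $\|\Theta_k - \Theta_0\|$ and compare its slope to the threshold $\sigma\mu^{1-\lambda/2}/N$. The threshold constant $\sigma$ is a hyperparameter, as in Algorithm 1 below.

```python
import numpy as np

def drift_slope(dist_history, N):
    """Slope of the best-fit line through the last N values of ||Theta_k - Theta_0||."""
    y = np.asarray(dist_history[-N:], dtype=float)
    x = np.arange(len(y))
    slope, _intercept = np.polyfit(x, y, 1)      # degree-1 least-squares fit
    return slope

def reached_steady_state(dist_history, mu, lam, sigma, N):
    """Diagnostic: a slope below sigma * mu^(1 - lam/2) / N suggests steady state."""
    if len(dist_history) < N:
        return False
    return drift_slope(dist_history, N) < sigma * mu ** (1 - lam / 2) / N
```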
Algorithm 1 Adaptive Learning Rate Rule
Hyperparameters: $\rho$, $\sigma$, $\xi$, $N$
Initialize $\mu = \rho$, $\psi_N = 2\sigma\mu^{1-\lambda/2}$, $\Theta_0$, $\Theta_{\mathrm{ini}} = \Theta_0$.
for $i = 1, 2, \ldots$ do
  Perform the two time-scale algorithm update.
  Compute $\psi_N = \mathrm{Slope}\big(\{(k, \|\Theta_{i-k} - \Theta_{\mathrm{ini}}\|)\}_{k=0}^{N-1}\big)$.
  if $\psi_N < \sigma\mu^{1-\lambda/2}/N$ then
    $\mu = \mu/\xi$, $\Theta_{\mathrm{ini}} = \Theta_i$.
  end if
end for
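Below is a minimal Python rendering of Algorithm 1. It assumes the Section 4 parametrization $\epsilon^{\alpha} = \mu^{\lambda}$, $\epsilon^{\beta} = \mu$, and `update_fn` is a stand-in for whatever two time-scale update is being run (e.g., TDC or GTD2); clearing the history after a rate change is an implementation choice, since older distances are measured against the previous $\Theta_{\mathrm{ini}}$.

```python
import numpy as np

def adaptive_two_time_scale(update_fn, theta0, rho, sigma, xi, N, lam, n_iters):
    """Algorithm 1: shrink the learning rate by xi whenever the slope diagnostic fires.
    `update_fn(theta, step_slow, step_fast)` must return the next iterate."""
    mu = rho
    theta = theta0.copy()
    theta_ini = theta0.copy()
    history = []                                   # ||Theta_i - Theta_ini|| values
    for i in range(n_iters):
        theta = update_fn(theta, mu ** lam, mu)    # eps^alpha = mu^lam, eps^beta = mu
        history.append(np.linalg.norm(theta - theta_ini))
        if len(history) >= N:
            slope, _ = np.polyfit(np.arange(N), np.asarray(history[-N:]), 1)
            if slope < sigma * mu ** (1 - lam / 2) / N:
                mu /= xi                           # decay the learning rate
                theta_ini = theta.copy()
                history.clear()                    # distances to the new Theta_ini start fresh
    return theta
```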
We note that our adaptive learning rate rule will also work for single time-scale reinforcement learning algorithms such as TD(λ), since our expressions for the mean-square error, when specialized to the case of a single time-scale, will recover the result in [Srikant and Ying, 2019] (see [Gupta et al., 2019] for more details). Therefore, an interesting question that arises from (19) is whether one can optimize the rate of convergence with respect to the time-scale ratio λ. Since the RHS in (19) depends on a variety of problem-dependent parameters, it is difficult to optimize it over λ. An interesting direction of further research
is to investigate if practical adaptive strategies for λ can be developed in order to improve the rate of convergence further.
5 Experiments
We implemented our adaptive learning rate schedule on two popular classic control problems in reinforcement learning - Mountain Car and Inverted Pendulum, and compared its performance with the optimal polynomial decay learning rate rule suggested in [Dalal et al., 2017b] (described in the next subsection). See Appendix ?? for more details on the Mountain Car and Inverted Pendulum problems. We evaluated the following policies using the two time-scale TDC algorithm (see [Sutton et al., 2009] for more details regarding TDC):
• Mountain Car - At each time step, choose a random action ∈ {0, 2}, i.e., accelerate randomly to the left or right.
• Inverted Pendulum - At each time step, choose a random action in the entire action space, i.e., apply a random torque ∈ [−2.0, 2.0] at the pivot point.
Since the true value of Θ∗ is not known in both the problems we consider, to quantify the performance of the TDC algorithm, we used the error metric known as the norm of the expected TD update (NEU, see [Sutton et al., 2009] for more details). For both problems, we used a O(3) Fourier basis (see [Konidaris et al., 2011] for more details) to approximate the value function and used 0.95 as the discount factor.
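For reference, a minimal implementation of the order-3 Fourier basis of [Konidaris et al., 2011] used above; the Mountain Car state bounds in the usage example are the standard Gym limits and are included only for illustration.

```python
import numpy as np
from itertools import product

def fourier_features(state, low, high, order=3):
    """Order-n Fourier basis: one feature cos(pi * c . s) for every integer coefficient
    vector c in {0, ..., order}^d, with the state s rescaled to [0, 1]^d."""
    s = (np.asarray(state, dtype=float) - low) / (high - low)   # normalize each dimension
    coeffs = np.array(list(product(range(order + 1), repeat=len(s))))
    return np.cos(np.pi * coeffs @ s)

# Example: Mountain Car has a 2-D state (position, velocity), giving 4^2 = 16 features.
low, high = np.array([-1.2, -0.07]), np.array([0.6, 0.07])
phi = fourier_features(np.array([-0.5, 0.0]), low, high, order=3)
```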
5.1 Learning Rate Rules and Tuning
1. The optimal polynomial decay rule suggested in [Dalal et al., 2017b] is the following: at time step $k$, choose $\alpha_k = \frac{1}{(k+1)^{\alpha}}$ and $\beta_k = \frac{1}{(k+1)^{\beta}}$, where $\alpha \to 1$ and $\beta \to \frac{2}{3}$. For our experiments, we chose $\alpha = 0.99$ and $\beta = 0.66$. This implies $\lambda = \alpha/\beta = 1.5$. Since the problems we considered require smaller initial step-sizes for convergence, we let $\alpha_k = \frac{\rho_0}{(k+1)^{\alpha}}$ and $\beta_k = \frac{\rho_0}{(k+1)^{\beta}}$ and did a grid search to determine the best $\rho_0$, i.e., the best initial learning rate. The following values for $\rho_0$ were found to be the best: Mountain Car - $\rho_0 = 0.2$, Inverted Pendulum - $\rho_0 = 0.2$.
2. For our proposed adaptive learning rate rule, we fixed ξ = 1.2, N = 200 in both problems since we did not want the decay in the learning rate to be too aggressive and the resource consumption for slope computation to be high. We also set λ = 1.5 as in the polynomial decay case to have a fair comparison. We then fixed ρ and conducted a grid search to find the best σ. Subsequently, we conducted a grid search over ρ. Interestingly, the adaptive learning rate rule was reasonably robust to the value of ρ. We used ρ = 0.05 in Inverted Pendulum and ρ = 0.1 in Mountain Car. Effectively, the only hyperparameter that affected the rule’s performance significantly was σ. The following values for σ were found to be the best: Mountain Car - σ = 0.001, Inverted Pendulum - σ = 0.01.
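For completeness, the polynomial decay schedule of item 1 above is a one-liner; the exponents 0.99 and 0.66 and the scale $\rho_0$ default to the values reported there.

```python
def polynomial_decay_step_sizes(k, rho0=0.2, a=0.99, b=0.66):
    """alpha_k = rho0 / (k+1)^a (slow), beta_k = rho0 / (k+1)^b (fast), as in item 1 above."""
    return rho0 / (k + 1) ** a, rho0 / (k + 1) ** b
```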
5.2 Results
For each experiment, one run involved the following: 10,000 episodes with the number of iterations in each episode being 50 and 200 for Inverted Pendulum and Mountain Car respectively. After every 1,000 episodes, training/learning was paused and the NEU was computed by averaging over 1,000 test episodes. We initialized $\Theta_0 = 0$. For Mountain Car, 50 such runs were conducted and the results were computed by averaging over these runs. For Inverted Pendulum, 100 runs were conducted and the results were computed by averaging over these runs. Note that the learning rate for each adaptive strategy was adapted at the episodic level due to the episodic nature of the problems. The results are reported in Figures 2a and 2b. As is clear from the figures, our proposed adaptive learning rate rule significantly outperforms the optimal polynomial decay rule.
6 Conclusion
We have presented finite-time bounds quantifying the performance of two time-scale linear stochastic approximation algorithms. The bounds give insight into how the different time-scale and learning rate parameters affect the rate of convergence. We utilized these insights and designed an adaptive learning rate selection rule. We implemented our rule on popular classical control problems in reinforcement learning and showed that the proposed rule significantly outperforms the optimal polynomial decay strategy suggested in literature.
Acknowledgements
Research supported by ONR Grant N00014-19-1-2566, NSF Grants CPS ECCS 1739189, NeTS 1718203, CMMI 1562276, ECCS 16-09370, and NSF/USDA Grant AG 2018-67007-28379. Lei Ying’s work supported by NSF grants CNS 1618768, ECCS 1609202, IIS 1715385, ECCS 1739344, CNS 1824393 and CNS 1813392. | 1. How does the reviewer suggest expanding the literature review, and what specific works should be included?
2. How do the proposed methods compare to prior works, specifically Liu et al.'s proximal gradient TD learning and GTD2-MP?
3. What are the limitations of the analysis regarding its focus on linear function approximation, and how might the approach be adapted for use with deep nonlinear neural networks?
4. How could the presentation be improved, particularly in terms of simplifying complex concepts and providing easier-to-understand summaries of key results? | Review | Review
Nice paper, and very clear presentation of the main results. Here are a few suggestions for expanding your presentation: 1. The literature review needs to be broadened. In particular, you should discuss the work of Liu et al., JAIR 2019 (Proximal Gradient TD Learning), which analyzes two time-scale algorithms that include a proximal gradient step. The results in that paper show improved finite sample bounds over classic gradient TD methods, like GTD2. How do your results compare with those in that paper, and in particular, can your analysis be extended to GTD2-MP (the mirror-prox variant of GTD2), which has an improved finite sample convergence rate compared to GTD2? 2. Two time scale algorithms are somewhat more complex than the standard TD method, and Sutton et al. and others have developed a variant of TD called emphatic TD (JMLR 2016: "An Emphatic Approach to the Problem of Off-policy Temporal-Difference Learning") that is stable under off-policy training. How does your analysis relate to emphatic TD methods? 3. Your analysis is largely set in the context of linear function approximation, but of course, all the recent excitement in RL is over deep nonlinear function approximation networks. Does your learning rate adaptation scheme apply to nonlinear deep neural networks and have you done any experiments on such networks? 4. The presentation can be improved. Some of the main theoretical results (e.g., Theorem 1) would benefit from some simpler exposition. Rather than just state the exact theorem, it would help to add a sentence or two distilling the main implication into easier-to-parse language for those who want to get a gist of the main result.
NIPS | Title
Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
[paper content identical to the first entry above] | 1. What are the strengths and weaknesses of the paper regarding its quality, significance, clarity, and impact on the field?
2. How does the reviewer assess the technical aspects of the experiments and their validation of the proposed heuristic?
3. What are the concerns regarding the mathematical notation and its potential effect on the paper's clarity?
4. How does the reviewer suggest the authors justify the significance of their analysis to practitioners in the reinforcement learning community? | Review | Review
**Things to like** I like this paper. 1. Quality. First of all, it is well written and pleasant to read. I couldn't find any grammatical mistakes. 2. Significance. The authors use their analysis to produce something useful: a heuristic for learning rate selection. Then they provide some experimental validation that this heuristic is useful. I appreciate seeing this in a theory paper. 3. Quality. The technical aspects of the experiments are solid. **Concerns** Take these concerns with a grain of salt. I set my confidence score to 2, as I have a Sutton & Barto-level knowledge of reinforcement learning theory. 1. Clarity. I am worried about how the amount of mathematical notation affects clarity. The authors are not obfuscating anything; I believe this topic simply involves a lot of math. However, I doubt if the majority of NeurIPS attendees will be able to understand this paper. Since there are previous RL theory papers accepted to NeurIPS with this density of math [2], I don't believe this is a fatal flaw and I did not take this into account in my overall score. 2. Significance. As a practitioner of RL, I'd like the authors to provide a more convincing argument about why I should care about this analysis. I'm not saying I don't care (I'm quite interested), but I would love to hear the authors make a case for why and how this work affects RL "experimentalists." [1] Dalal, Gal, et al. "Finite sample analysis of two-timescale stochastic approximation with applications to reinforcement learning." arXiv preprint arXiv:1703.05376 (2017). [2] Hasselt, Hado V. "Double Q-learning." Advances in Neural Information Processing Systems. 2010.
NIPS | Title
Attention is All you Need
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 Englishto-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.
1 Introduction
Recurrent neural networks, long short-term memory [12] and gated recurrent [7] neural networks in particular, have been firmly established as state-of-the-art approaches in sequence modeling and transduction problems such as language modeling and machine translation [29, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [31, 21, 13].

∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research. †Work performed while at Google Brain. ‡Work performed while at Google Research.
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states ht, as a function of the previous hidden state ht−1 and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [18] and conditional computation [26], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 16]. In all but a few cases [22], however, such attention mechanisms are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
2 Background
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [20], ByteNet [15] and ConvS2S [8], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [11]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 22, 23, 19].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [28].

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [14, 15] and [8].
3 Model Architecture
Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 29]. Here, the encoder maps an input sequence of symbol representations (x1, ..., xn) to a sequence of continuous representations z = (z1, ..., zn). Given z, the decoder then generates an output sequence (y1, ..., ym) of symbols one element at a time. At each step the model is auto-regressive [9], consuming the previously generated symbols as additional input when generating the next.
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.
3.1 Encoder and Decoder Stacks
Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [10] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel = 512.

Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.
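The sub-layer pattern just described — LayerNorm(x + Sublayer(x)) around every sub-layer, with all outputs of dimension dmodel = 512 — can be sketched in a few lines. The snippet below is an illustrative numpy reconstruction under stated assumptions (in particular, the learned gain and bias of layer normalization are omitted); it is not the authors' implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the last (feature) dimension; learned gain/bias omitted in this sketch.
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    # Output of every encoder/decoder sub-layer: LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x))

# Example: wrap an arbitrary sub-layer (here the identity) around 10 positions of width 512.
x = np.random.randn(10, 512)
print(residual_sublayer(x, lambda h: h).shape)  # (10, 512)
```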
3.2 Attention
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
3.2.1 Scaled Dot-Product Attention
We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension dk, and values of dimension dv. We compute the dot products of the query with all keys, divide each by √dk, and apply a softmax function to obtain the weights on the values.

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \qquad (1)$$
The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√dk. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of dk the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of dk [3]. We suspect that for large values of dk, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.⁴ To counteract this effect, we scale the dot products by 1/√dk.
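As a concrete illustration of Equation (1) and of footnote 4, the numpy sketch below implements scaled dot-product attention and then checks empirically that the unscaled dot product of two random vectors with unit-variance components has variance roughly equal to their dimension. Shapes and variable names are assumptions made for the example, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Equation (1): Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

d_k = 64
Q, K, V = (np.random.randn(10, d_k) for _ in range(3))  # 10 query and 10 key/value positions
print(scaled_dot_product_attention(Q, K, V).shape)       # (10, 64)

# Footnote 4 in numbers: the unscaled dot product of two random d-dimensional vectors
# (components with mean 0 and variance 1) has variance of roughly d.
d = 512
dots = [np.random.randn(d) @ np.random.randn(d) for _ in range(2000)]
print(np.var(dots))  # close to 512
```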
3.2.2 Multi-Head Attention
Instead of performing a single attention function with dmodel-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
⁴ To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean 0 and variance dk.
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\,W^O$$

$$\text{where } \mathrm{head}_i = \mathrm{Attention}(QW_i^Q,\, KW_i^K,\, VW_i^V)$$

where the projections are parameter matrices $W_i^Q \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{\mathrm{model}} \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times d_{\mathrm{model}}}$. In this work we employ h = 8 parallel attention layers, or heads. For each of these we use dk = dv = dmodel/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
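The multi-head construction with h = 8 heads and dk = dv = 64 can be sketched as follows; the weight dictionary, random initialization, and explicit loop over heads are illustrative assumptions (an efficient implementation would batch the heads), not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Q, K, V, weights, h=8):
    heads = []
    for i in range(h):
        q, k, v = Q @ weights["W_q"][i], K @ weights["W_k"][i], V @ weights["W_v"][i]
        d_k = q.shape[-1]
        heads.append(softmax(q @ k.T / np.sqrt(d_k)) @ v)   # head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
    return np.concatenate(heads, axis=-1) @ weights["W_o"]  # Concat(head_1, ..., head_h) W^O

d_model, h = 512, 8
d_k = d_v = d_model // h  # 64, as in the paper
weights = {
    "W_q": [np.random.randn(d_model, d_k) for _ in range(h)],
    "W_k": [np.random.randn(d_model, d_k) for _ in range(h)],
    "W_v": [np.random.randn(d_model, d_v) for _ in range(h)],
    "W_o": np.random.randn(h * d_v, d_model),
}
x = np.random.randn(10, d_model)  # 10 positions of self-attention: Q = K = V = x
print(multi_head_attention(x, x, x, weights).shape)  # (10, 512)
```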
3.2.3 Applications of Attention in our Model
The Transformer uses multi-head attention in three different ways:
• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [31, 2, 8].
• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.
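One way to realize the masking described in the last point is to add −∞ to the illegal entries of the score matrix before the softmax, so those positions receive exactly zero weight. The sketch below is an assumed construction consistent with that description, not the authors' code.

```python
import numpy as np

def causal_mask(n):
    # Entries (i, j) with j > i are "illegal" for query position i; mark them with -inf.
    mask = np.zeros((n, n))
    mask[np.triu_indices(n, k=1)] = -np.inf
    return mask

def masked_softmax(scores, mask):
    scores = scores + mask  # -inf entries become zero weights after the softmax
    scores = scores - scores.max(axis=-1, keepdims=True)
    e = np.exp(scores)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.randn(5, 5)  # raw Q K^T / sqrt(d_k) scores for 5 decoder positions
print(np.round(masked_softmax(scores, causal_mask(5)), 2))  # strictly upper-triangular part is 0
```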
3.3 Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
$$\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)\,W_2 + b_2 \qquad (2)$$
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality dff = 2048.
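Equation (2) translates directly into a position-wise map. The snippet below is a minimal numpy sketch with the stated dimensions (dmodel = 512, dff = 2048); the small random initialization is an assumption for the example.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # Equation (2): FFN(x) = max(0, x W1 + b1) W2 + b2, applied to every position identically.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

d_model, d_ff = 512, 2048
W1, b1 = 0.02 * np.random.randn(d_model, d_ff), np.zeros(d_ff)
W2, b2 = 0.02 * np.random.randn(d_ff, d_model), np.zeros(d_model)
x = np.random.randn(10, d_model)  # 10 positions
print(position_wise_ffn(x, W1, b1, W2, b2).shape)  # (10, 512)
```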
3.4 Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [24]. In the embedding layers, we multiply those weights by √dmodel.
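A small sketch of the shared-weight embedding and pre-softmax projection, including the √dmodel scaling on the input side, is given below; the vocabulary size and the initialization are illustrative assumptions (the shared source-target vocabulary in the paper is about 37000 tokens).

```python
import numpy as np

vocab, d_model = 1000, 512                              # small vocabulary for illustration only
E = np.random.randn(vocab, d_model) * d_model ** -0.5   # shared embedding matrix (assumed init)

def embed(token_ids):
    # Input and output embeddings are rows of E, scaled by sqrt(d_model).
    return E[token_ids] * np.sqrt(d_model)

def output_logits(decoder_states):
    # The pre-softmax linear transformation reuses the same matrix (weight tying).
    return decoder_states @ E.T

ids = np.array([5, 17, 123])
print(embed(ids).shape, output_logits(np.random.randn(3, d_model)).shape)  # (3, 512) (3, 1000)
```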
3.5 Positional Encoding
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the
bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [8].
In this work, we use sine and cosine functions of different frequencies:
$$PE_{(pos,\,2i)} = \sin\!\left(pos/10000^{2i/d_{\mathrm{model}}}\right)$$

$$PE_{(pos,\,2i+1)} = \cos\!\left(pos/10000^{2i/d_{\mathrm{model}}}\right)$$
where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
We also experimented with using learned positional embeddings [8] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
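The sinusoidal encoding can be generated directly from the two formulas above; the snippet below is a straightforward numpy rendering with an assumed maximum length of 100 positions.

```python
import numpy as np

def positional_encoding(max_len, d_model):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(100, 512)
print(pe.shape)  # (100, 512); added to the token embeddings at the bottom of each stack
```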
4 Why Self-Attention
In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x1, ..., xn) to another sequence of equal length (z1, ..., zn), with xi, zi ∈ Rd, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.
One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.
The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [11]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece [31] and byte-pair [25] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in
the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.
A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(log_k(n)) in the case of dilated convolutions [15], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k · n · d + n · d²). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.
As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.
5 Training
This section describes the training regime for our models.
5.1 Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [31]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.
5.2 Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).
5.3 Optimizer
We used the Adam optimizer [17] with β1 = 0.9, β2 = 0.98 and ε = 10^{-9}. We varied the learning rate over the course of training, according to the formula:
$$lrate = d_{\mathrm{model}}^{-0.5} \cdot \min\!\left(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5}\right) \qquad (3)$$
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
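Formula (3) translates into a one-line schedule; the sketch below reproduces the warm-up-then-decay shape with the paper's dmodel = 512 and warmup_steps = 4000 (the printed values are for illustration only).

```python
def transformer_lrate(step, d_model=512, warmup_steps=4000):
    # Equation (3): linear warm-up for the first warmup_steps steps, then inverse-square-root decay.
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

for step in (1, 1000, 4000, 10000, 100000):
    print(step, round(transformer_lrate(step), 6))  # peaks around 7e-4 at step 4000
```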
5.4 Regularization
We employ three types of regularization during training:
Residual Dropout We apply dropout [27] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of Pdrop = 0.1.
Label Smoothing During training, we employed label smoothing of value εls = 0.1 [30]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
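One common way to implement label smoothing with value εls = 0.1 is to keep 1 − εls of the probability mass on the gold token and spread εls uniformly over the vocabulary; the sketch below uses that formulation, which is an assumption here — details such as the treatment of padding tokens are not specified in the text.

```python
import numpy as np

def smoothed_targets(target_ids, vocab_size, eps_ls=0.1):
    # Soft target distribution: (1 - eps_ls) on the gold token, eps_ls spread uniformly.
    t = np.full((len(target_ids), vocab_size), eps_ls / vocab_size)
    t[np.arange(len(target_ids)), target_ids] += 1.0 - eps_ls
    return t

print(smoothed_targets(np.array([2, 0]), vocab_size=5))  # each row sums to 1
```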
6 Results
6.1 Machine Translation
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate Pdrop = 0.1, instead of 0.3.
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty α = 0.6 [31]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [31].
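Checkpoint averaging of the kind described (last 5 or last 20 checkpoints) amounts to an element-wise mean over the saved parameter tensors; the sketch below is schematic and not the authors' tooling.

```python
import numpy as np

def average_checkpoints(checkpoints):
    # checkpoints: list of dicts mapping parameter name -> np.ndarray of identical shape.
    return {name: np.mean([c[name] for c in checkpoints], axis=0) for name in checkpoints[0]}

ckpts = [{"W": np.full((2, 2), float(i))} for i in range(5)]  # stand-in for the last 5 checkpoints
print(average_checkpoints(ckpts)["W"])  # every entry is 2.0, the mean of 0..4
```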
Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU 5.
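As a worked example of this estimate, the big model's 3.5-day run on 8 P100 GPUs, at the assumed sustained 9.5 TFLOPS per GPU from footnote 5, works out to roughly 2.3 × 10^19 floating point operations:

```python
# Training-cost estimate: wall-clock time x number of GPUs x sustained throughput per GPU.
seconds = 3.5 * 24 * 3600   # 3.5 days for the big model
gpus = 8
flops_per_gpu = 9.5e12      # sustained single-precision throughput assumed for a P100 (footnote 5)
print(f"{seconds * gpus * flops_per_gpu:.2e}")  # ~2.30e+19
```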
6.2 Model Variations
To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.
In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.
⁵ We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.
In Table 3 rows (B), we observe that reducing the attention key size dk hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [8], and observe nearly identical results to the base model.
7 Conclusion
In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.
For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.
We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours.
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration.

1. What is the main contribution of the paper, and how does it differ from previous sequence-to-sequence modeling approaches?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and performance?
3. Are there any concerns or challenges regarding the implementation and training of the model?
4. How do the authors address the issue of hyperparameters and their impact on the model's performance?
5. Can the authors provide more insights into the choice of temporal embeddings, layer norm, and other techniques used in the model?

Review
This work introduces a quite strikingly different approach to the problem of sequence-to-sequence modeling, by utilizing several different layers of self-attention combined with a standard attention. The work uses a variant of dot-product attention with multiple heads that can both be computed very quickly (particularly on GPU). When combined with temporal embeddings, layer norm, and several other tricks, this technique can replace the use of RNNs entirely on seq2seq models. Since this removes a serial training bottleneck, the whole system can be trained much more efficiently. Even better the system achieves state-of-the-art results on translation, and significantly improves the performance of seq2seq only parsing models.
I feel this work is a clear accept. Seq2seq is so influential that major improvements of this form will have significant impact on the field of NLP almost instantly. This work is already the talk of the community, and many people are trying to replicate these results already. While none of the underlying techniques here are strikingly novel in themselves, the combination of them and the details necessary for getting it to work as well as LSTMs is a major achievement.
As part of this review, I spent a lot of time reimplementing the work and looking through the code. Here are a couple suggestions of areas that I got tripped up on:
- There are a lot of hyperparameters in the code itself that I had to extract, might be nice to include these in the paper.
- The learning rate schedule seems to really matter. Using simple SGD works fine for LSTM, but seems to fail here
- Inference for this problem is quite different than other NMT systems, might be worth discussing a bit more. |
1. What is the focus of the paper regarding sequence-to-sequence modeling?
2. What are the advantages of the proposed architecture according to the reviewer?
3. What are the weaknesses of the paper, particularly regarding architectural details and hyperparameters?
4. Do you have any questions about the minor comments mentioned by the reviewer?

Review
The paper presents a new architecture for encoder/decoder models for sequence-to-sequence modeling that is solely based on (multi-layered) attention networks combined with standard Feed-Forward networks as opposed to the common scheme of using recurrent or convolutional neural networks. The paper presents two main advantages of this new architecture: (1) Reduced training time due to reduced complexity of the architecture, and (2) new State-of-the-Art result on standard WMT data sets, outperforming previous work by about 1 BLEU point.
Strengths:
- The paper argues well that (1) can be achieved by avoiding recurrent or convolutional layers and the complexity analysis in Table 1 strengthens the argument.
- (2) is shown by comparing the model performance against strong baselines on two language pairs, English-German and English-French.
The main strengths of the paper are that it proposes an entirely novel architecture without recurrence or convolutions, and advances state of the art.
Weaknesses:
- While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and values. Are the same vectors used for keys and values here or different sections of them? A formal definition of this would greatly help readers understand this.
- The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3.
- The complexity argument claims that self-attention models have a maximum path length of 1 which should help maintaining information flow between distant symbols (i.e. long-range dependencies). It would be good to see this empirically validated by evaluating performance on long sentences specifically.
Minor comments:
- Are you using dropout on the source/target embeddings?
- Line 146: There seems to be dangling "2" |
NIPS | Title
Attention is All you Need
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 Englishto-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.
1 Introduction
Recurrent neural networks, long short-term memory [12] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [29, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [31, 21, 13]. ∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research. †Work performed while at Google Brain. ‡Work performed while at Google Research.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states ht, as a function of the previous hidden state ht−1 and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [18] and conditional computation [26], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 16]. In all but a few cases [22], however, such attention mechanisms are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
2 Background
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [20], ByteNet [15] and ConvS2S [8], all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [11]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in Section 3.2.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 22, 23, 19].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [28].
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [14, 15] and [8].
3 Model Architecture
Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 29]. Here, the encoder maps an input sequence of symbol representations (x1, ..., xn) to a sequence of continuous representations z = (z1, ..., zn). Given z, the decoder then generates an output sequence (y1, ..., ym) of symbols one element at a time. At each step the model is auto-regressive [9], consuming the previously generated symbols as additional input when generating the next.
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.
3.1 Encoder and Decoder Stacks
Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [10] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel = 512.
Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.
3.2 Attention
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
3.2.1 Scaled Dot-Product Attention
We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension dk, and values of dimension dv. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V . We compute the matrix of outputs as:
$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{QK^\top}{\sqrt{d_k}}\Big)V \quad (1)$
The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
While for small values of dk the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of dk [3]. We suspect that for large values of dk, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.4 To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
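To make the computation concrete, here is a minimal NumPy sketch of Equation (1); the function name, array shapes, and the optional mask argument are our own illustrative choices rather than code from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Sketch of Equation (1): softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k); K: (n_k, d_k); V: (n_k, d_v).
    mask, if provided, is a boolean array of shape (n_q, n_k) that is
    True at positions that must not be attended to (illegal connections).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # dot products, scaled by 1/sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, -1e9, scores)  # effectively -inf before the softmax
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                          # weighted sum of the values
```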
3.2.2 Multi-Head Attention
Instead of performing a single attention function with dmodel-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
4 To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean 0 and variance $d_k$.
$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^O$
$\text{where } \mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$
where the projections are parameter matrices $W_i^Q \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{\mathrm{model}} \times d_v}$ and $W^O \in \mathbb{R}^{h d_v \times d_{\mathrm{model}}}$. In this work we employ h = 8 parallel attention layers, or heads. For each of these we use $d_k = d_v = d_{\mathrm{model}}/h = 64$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
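The projections above can be sketched as follows, reusing the scaled_dot_product_attention function from the earlier snippet; representing the per-head projections as lists of matrices is an illustrative simplification.

```python
import numpy as np

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o):
    """Sketch of multi-head attention: project h times, attend in parallel,
    concatenate the head outputs, and apply a final output projection.

    W_q, W_k, W_v are lists of h matrices with shapes (d_model, d_k),
    (d_model, d_k) and (d_model, d_v); W_o has shape (h * d_v, d_model).
    """
    heads = [
        scaled_dot_product_attention(Q @ wq, K @ wk, V @ wv)
        for wq, wk, wv in zip(W_q, W_k, W_v)
    ]
    return np.concatenate(heads, axis=-1) @ W_o
```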
3.2.3 Applications of Attention in our Model
The Transformer uses multi-head attention in three different ways:
• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [31, 2, 8].
• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.
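A small sketch of how such a causal mask can be built and combined with the scaled dot-product attention snippet above (the boolean convention, with True marking disallowed positions, is our own choice):

```python
import numpy as np

seq_len = 5
# True strictly above the diagonal: position i may not attend to positions j > i
causal_mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
# Passing this mask to the scaled_dot_product_attention sketch sets the
# corresponding scores to (effectively) -inf before the softmax, so each
# decoder position attends only to itself and to earlier positions.
```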
3.3 Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
$\mathrm{FFN}(x) = \max(0,\ xW_1 + b_1)W_2 + b_2 \quad (2)$
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality dff = 2048.
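A minimal sketch of Equation (2), applied to a whole sequence at once; the shapes in the comments are illustrative.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """Sketch of Equation (2): two linear transformations with a ReLU in
    between, applied identically and independently at every position.

    x: (seq_len, d_model); W1: (d_model, d_ff); b1: (d_ff,);
    W2: (d_ff, d_model); b2: (d_model,).
    """
    return np.maximum(0, x @ W1 + b1) @ W2 + b2
```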
3.4 Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [24]. In the embedding layers, we multiply those weights by $\sqrt{d_{\mathrm{model}}}$.
3.5 Positional Encoding
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the
bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [8].
In this work, we use sine and cosine functions of different frequencies:
$PE_{(pos, 2i)} = \sin\big(pos / 10000^{2i/d_{\mathrm{model}}}\big)$
$PE_{(pos, 2i+1)} = \cos\big(pos / 10000^{2i/d_{\mathrm{model}}}\big)$
where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
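The encodings can be sketched as follows; the helper name is ours and the implementation assumes an even dmodel.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """Sketch of the sinusoidal positional encodings: even dimensions use
    sine and odd dimensions use cosine, with wavelengths forming a
    geometric progression. Assumes d_model is even."""
    pos = np.arange(max_len)[:, None]            # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model // 2)
    angle = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe
```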
We also experimented with using learned positional embeddings [8] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
4 Why Self-Attention
In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x1, ..., xn) to another sequence of equal length (z1, ..., zn), with xi, zi ∈ Rd, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.
One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.
The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [11]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece [31] and byte-pair [25] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in
the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.
A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(logk(n)) in the case of dilated convolutions [15], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k · n · d + n · d2). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.
As side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.
5 Training
This section describes the training regime for our models.
5.1 Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [31]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.
5.2 Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).
5.3 Optimizer
We used the Adam optimizer [17] with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-9}$. We varied the learning rate over the course of training, according to the formula:
$lrate = d_{\mathrm{model}}^{-0.5} \cdot \min\big(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5}\big) \quad (3)$
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
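A minimal sketch of this schedule (Equation (3)); the function name and default arguments are illustrative.

```python
def transformer_lrate(step_num, d_model=512, warmup_steps=4000):
    """Sketch of Equation (3): linear warmup for warmup_steps steps,
    then decay proportional to the inverse square root of the step number."""
    step_num = max(step_num, 1)  # guard against step 0
    return d_model ** -0.5 * min(step_num ** -0.5,
                                 step_num * warmup_steps ** -1.5)
```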
5.4 Regularization
We employ three types of regularization during training:
Residual Dropout We apply dropout [27] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of Pdrop = 0.1.
Label Smoothing During training, we employed label smoothing of value $\epsilon_{ls} = 0.1$ [30]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
6 Results
6.1 Machine Translation
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate Pdrop = 0.1, instead of 0.3.
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty α = 0.6 [31]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [31].
Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU.5
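As a rough, illustrative back-of-the-envelope calculation under this method (using the per-P100 figure from the footnote below), the base model's training cost comes out on the order of 10^18 FLOPs; the numbers here are our own arithmetic, not values quoted from this excerpt.

```python
# Illustrative estimate for the base model: 12 hours on 8 P100 GPUs,
# assuming 9.5 TFLOPS sustained single-precision throughput per GPU.
train_seconds = 12 * 3600
flops_estimate = train_seconds * 8 * 9.5e12
print(f"approximately {flops_estimate:.1e} FLOPs")  # roughly 3e18
```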
6.2 Model Variations
To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.
In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.
5We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.
In Table 3 rows (B), we observe that reducing the attention key size dk hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [8], and observe nearly identical results to the base model.
7 Conclusion
In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.
For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.
We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours.
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration. | 1. What is the novel approach introduced by the paper in machine translation?
2. What are the strengths of the proposed model, particularly in its architecture and performance?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Are there any suggestions or recommendations for future works related to this research? | Review | Review
Summary: This paper presents an approach for machine translation using attention-based layers. The model does not include convolution or RNNs and still achieves state of the art on WMT14 English-German and English-French data sets. The model uses parallel attention layers whose outputs are concatenated and then fed to a feed-forward position-wise layer.
Qualitative Assessment:
The paper reads well and is easy to follow. The experimental setup is clear and provides enough details for replication.
The paper provides many useful hints such as scaled dot product attention which improves gradient flow.
A lot of content is presented and I hope to see a more in depth version. |
NIPS | Title
Multi-Class $H$-Consistency Bounds
Abstract
We present an extensive study of H-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayesconsistency, H-calibration or H-consistency, and more informative than excess error bounds derived for H being the family of all measurable functions. We give a series of new H-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, both in the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions used. We also prove that no non-trivial H-consistency bound can be given in some cases. To our knowledge, these are the first H-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees.
1 Introduction
The loss functions optimized by learning algorithms are often distinct from the original one specified for a task. This is typically because optimizing the original loss is computationally intractable or because it does not admit some favorable properties of differentiability or smoothness. As an example, the loss function minimized by the support vector machine (SVM) algorithm is the hinge loss (Cortes and Vapnik, 1995) or the one associated to AdaBoost is the exponential loss (Schapire and Freund, 2012), both distinct from the binary classification loss used as a benchmark in applications. But, what learning guarantees can we rely on when using a surrogate loss? This is a fundamental question in learning theory that directly relates to the design of algorithms.
The standard property of Bayes-consistency, which has been shown to hold for several surrogate losses (Zhang, 2004a,b; Bartlett, Jordan, and McAuliffe, 2006; Tewari and Bartlett, 2007; Steinwart, 2007), does not supply a sufficient guarantee, since it only ensures that, asymptotically, near optimal minimizers of the surrogate excess loss nearly optimally minimize the target excess error. Moreover, this asymptotic property only holds for the full family of measurable functions, which of course is distinct from the more restricted hypothesis set used by a learning algorithm. In fact, it has been shown by Long and Servedio (2013), both theoretically and empirically, that for some hypothesis sets and distributions, the expected error of an algorithm minimizing a Bayes-consistent loss is bounded below by a positive constant, while that of an algorithm minimizing an inconsistent loss goes to zero.
This suggests that a hypothesis set-dependent notion of H-consistency is more pertinent to the study of consistency for learning (Long and Servedio, 2013), which has been used by Kuznetsov et al. (2014); Cortes et al. (2016a,b) and Zhang and Agarwal (2020) and more generally by Awasthi, Frank, Mao, Mohri, and Zhong (2021a) in an extensive study of both binary classification and adversarial binary classification losses, as defined in (Goodfellow et al., 2014; Madry et al., 2017; Tsipras et al., 2018; Carlini and Wagner, 2017). Nevertheless, H-consistency remains an asymptotic property and does not provide guarantees for approximate surrogate loss minimizers that rely on finite samples.
Awasthi, Mao, Mohri, and Zhong (2022a) recently presented a series of results providing H-consistency bounds in binary classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. These guarantees are significantly stronger than the H-calibration or H-consistency properties studied by Awasthi et al. (2021b,c). They are also more informative than similar excess error bounds derived in the literature, which correspond to the special case where H is the family of all measurable functions (Zhang, 2004a; Bartlett et al., 2006; Mohri et al., 2018). Combining H-consistency bounds with existing surrogate loss estimation bounds directly yields finite sample bounds on the estimation error for the original loss. See Appendix C for a more detailed discussion.
This paper presents an extensive study of H-consistency bounds for multi-class classification. We show in Section 4.1 that, in general, no non-trivial H-consistency bounds can be derived for multiclass max losses such as those of Crammer and Singer (2001), when used with a convex loss auxiliary function such as the hinge loss. On the positive side, we prove multi-class H-consistency bounds for max losses under a realizability assumption and give multi-class H-consistency bounds using as an auxiliary function the ρ-margin loss, without requiring a realizability assumption. For sum losses, that is multi-class losses such as that of Weston and Watkins (1998), we give a series of results, including a negative result when using as auxiliary function the hinge-loss, and H-consistency bounds when using the exponential loss, the squared hinge-loss, and the ρ-margin loss (Section 4.2). We also present a series of results for the so-called constrained losses, such as the loss function adopted by Lee et al. (2004) in the analysis of multi-class SVM. Here, we prove multi-class H-consistency bounds when using as an auxiliary function the hinge-loss, the squared hinge-loss, the exponential loss, and the ρ-margin loss (Section 4.3). We further give multi-class adversarial H-consistency bounds for all three of the general multi-class losses just mentioned (max losses, sum losses and constrained losses) in Section 5.
We are not aware of any prior H-consistency bound derived in the multi-class setting, even in the special case of H being the family of all measurable functions, whether in the non-adversarial or adversarial setting. All of our results are novel, including our proof techniques. Our results are given for the hypothesis set H being the family of all measurable functions, the family of linear functions, or the family of one-hidden-layer ReLU neural networks. The binary classification results of Awasthi et al. (2022a) do not readily extend to the multi-class setting since the study of calibration and conditional risk is more complex, the form of the surrogate losses is more diverse, and in general the analysis is more involved and requires entirely novel proof techniques in the multi-class setting (see Section 3 for a more detailed discussion of this point).
We give a detailed discussion of related work in Appendix A. We start with the introduction of several multi-class definitions, as well as key concepts and definitions related to the study of H-consistency bounds (Section 2).
2 Preliminaries
We consider the familiar multi-class classification scenario with c ≥ 2 classes. We denote by X the input space and by Y = {1, . . . , c} the set of classes or categories. Let H be a hypothesis set of functions mapping from X × Y to R. The label h(x) associated by a hypothesis h ∈ H to x ∈ X is the one with the largest score: h(x) = argmaxy∈Y h(x, y) with an arbitrary but fixed deterministic strategy used for breaking ties. For simplicity, we fix that strategy to be the one selecting the label with the highest index under the natural ordering of labels. See Appendix B for a more detailed discussion of this choice.
The margin $\rho_h(x, y)$ of a hypothesis h ∈ H for a labeled example (x, y) ∈ X × Y is defined by
$\rho_h(x, y) = h(x, y) - \max_{y' \neq y} h(x, y')$,
that is the difference between the score assigned to (x, y) and that of the runner-up. Given a distribution D over X × Y and a loss function $\ell\colon H \times X \times Y \to \mathbb{R}$, the generalization error of a hypothesis h ∈ H and the minimal generalization error are defined as follows:
$R_\ell(h) = \mathbb{E}_{(x,y)\sim D}[\ell(h, x, y)]$ and $R^*_{\ell,H} = \inf_{h \in H} R_\ell(h)$.
The goal in multi-class classification is to select a hypothesis h ∈ H with small generalization error with respect to the multi-class 0/1 loss defined, for any h ∈ H, by $\ell_{0-1}(h, x, y) = 1_{h(x) \neq y}$. In the adversarial scenario, the goal is to select a hypothesis h ∈ H with small adversarial generalization error defined, for any γ ∈ (0, 1) and p ∈ [1, +∞], by $R_{\ell_\gamma}(h) = \mathbb{E}_{(x,y)\sim D}[\ell_\gamma(h, x, y)]$, where
$\ell_\gamma(h, x, y) = \sup_{x'\colon \|x - x'\|_p \leq \gamma} 1_{\rho_h(x', y) \leq 0} = 1_{\inf_{x'\colon \|x - x'\|_p \leq \gamma} \rho_h(x', y) \leq 0}$
is the adversarial multi-class 0/1 loss. More generally, the adversarial generalization error and minimal adversarial generalization error for a loss function $\ell(h, x, y)$ are defined as follows:
$R_{\tilde\ell}(h) = \mathbb{E}_{(x,y)\sim D}[\tilde\ell(h, x, y)]$ and $R^*_{\tilde\ell,H} = \inf_{h \in H} R_{\tilde\ell}(h)$,
where $\tilde\ell(h, x, y) = \sup_{x'\colon \|x - x'\|_p \leq \gamma} \ell(h, x', y)$ is the supremum-based counterpart of $\ell$.
For a distribution D over X × Y, we define, for any x ∈ X, p(x) = (p(x, 1), . . . , p(x, c)), where p(x, y) = D(Y = y | X = x) is the conditional probability of Y = y given X = x. We can then write the generalization error as $R_\ell(h) = \mathbb{E}_X[\mathcal{C}_\ell(h, x)]$, where $\mathcal{C}_\ell(h, x)$ is the conditional $\ell$-risk defined by $\mathcal{C}_\ell(h, x) = \sum_{y \in Y} p(x, y)\, \ell(h, x, y)$. We will denote by P a set of distributions D over X × Y and by $P_{\mathrm{all}}$ the set of all such distributions. For convenience, we define $y_{\max}$ by $y_{\max} = \operatorname{argmax}_{y \in Y} p(x, y)$. When there is a tie, we pick the label with the highest index under the natural ordering of labels.
The minimal conditional $\ell$-risk is denoted by $\mathcal{C}^*_{\ell,H}(x) = \inf_{h \in H} \mathcal{C}_\ell(h, x)$. We also use the following shorthand for the gap, $\Delta\mathcal{C}_{\ell,H}(h, x) = \mathcal{C}_\ell(h, x) - \mathcal{C}^*_{\ell,H}(x)$, and call $\Delta\mathcal{C}_{\ell,H}(h, x)\, 1_{\Delta\mathcal{C}_{\ell,H}(h, x) > \epsilon}$ the conditional $\epsilon$-regret for $\ell$. For convenience, we also define, for any vector $\tau = (\tau_1, \ldots, \tau_c)$ in the probability simplex of $\mathbb{R}^c$, $\mathcal{C}_\ell(h, x, \tau) = \sum_{y \in Y} \tau_y\, \ell(h, x, y)$, $\mathcal{C}^*_{\ell,H}(x, \tau) = \inf_{h \in H} \mathcal{C}_\ell(h, x, \tau)$ and $\Delta\mathcal{C}_{\ell,H}(h, x, \tau) = \mathcal{C}_\ell(h, x, \tau) - \mathcal{C}^*_{\ell,H}(x, \tau)$. Thus, we have $\Delta\mathcal{C}_{\ell,H}(h, x, p(x)) = \Delta\mathcal{C}_{\ell,H}(h, x)$. For any $\epsilon > 0$, we will denote by $[t]_\epsilon$ the $\epsilon$-truncation of $t \in \mathbb{R}$ defined by $t\, 1_{t > \epsilon}$. Thus, the conditional $\epsilon$-regret can be rewritten as $[\Delta\mathcal{C}_{\ell,H}(h, x)]_\epsilon$. For a hypothesis set H and distribution D, we also define the $(\ell, H)$-minimizability gap as $\mathcal{M}_{\ell,H} = R^*_{\ell,H} - \mathbb{E}_X[\mathcal{C}^*_{\ell,H}(x)]$, that is the difference between the best-in-class error and the expectation of the minimal conditional $\ell$-risk. This is a key quantity appearing in our bounds that we cannot hope to estimate or minimize. Its value only depends on the distribution D and the hypothesis set H. As an example, when H is the family of all measurable functions, then the minimizability gap for the multi-class 0/1 loss is zero for any distribution D.
3 General theorems
The general form of the H-consistency bounds that we are seeking for a surrogate loss $\ell_1$ of a target loss $\ell_2$ is $R_{\ell_2}(h) - R^*_{\ell_2,H} \leq f\big(R_{\ell_1}(h) - R^*_{\ell_1,H}\big)$ for all h ∈ H, for some non-decreasing function f. To derive such bounds for surrogate multi-class losses, we draw on the following two general theorems, which show that, under some conditions, the target loss estimation error can be bounded by some functional form of the surrogate loss estimation error involving minimizability gaps.
Theorem 1 (Distribution-dependent Ψ-bound). Assume that there exists a convex function $\Psi\colon \mathbb{R}_+ \to \mathbb{R}$ with $\Psi(0) \geq 0$ and $\epsilon \geq 0$ such that the following holds for all h ∈ H, x ∈ X and D ∈ P: $\Psi\big([\Delta\mathcal{C}_{\ell_2,H}(h, x)]_\epsilon\big) \leq \Delta\mathcal{C}_{\ell_1,H}(h, x)$. Then, for any hypothesis h ∈ H and any distribution D ∈ P,
$\Psi\big(R_{\ell_2}(h) - R^*_{\ell_2,H} + \mathcal{M}_{\ell_2,H}\big) \leq R_{\ell_1}(h) - R^*_{\ell_1,H} + \mathcal{M}_{\ell_1,H} + \max\{\Psi(0), \Psi(\epsilon)\}$.
Theorem 2 (Distribution-dependent Γ-bound). Assume that there exists a concave function $\Gamma\colon \mathbb{R}_+ \to \mathbb{R}$ and $\epsilon \geq 0$ such that the following holds for all h ∈ H, x ∈ X and D ∈ P: $[\Delta\mathcal{C}_{\ell_2,H}(h, x)]_\epsilon \leq \Gamma\big(\Delta\mathcal{C}_{\ell_1,H}(h, x)\big)$. Then, for any hypothesis h ∈ H and any distribution D ∈ P,
$R_{\ell_2}(h) - R^*_{\ell_2,H} \leq \Gamma\big(R_{\ell_1}(h) - R^*_{\ell_1,H} + \mathcal{M}_{\ell_1,H}\big) - \mathcal{M}_{\ell_2,H} + \epsilon$.
The theorems show that, to derive such bounds for a specific hypothesis set and a set of distributions, it suffices to verify that, for the same hypothesis set and set of distributions, the conditional $\epsilon$-regret for the target loss can be upper bounded with the same functional form of the gap between the conditional risk and minimal conditional risk of the surrogate loss. These results are similar to their binary classification counterparts due to Awasthi et al. (2022b). In particular, the conditional $\ell$-risk $\mathcal{C}_\ell(h, x)$ in our theorems is the multi-class generalization of their binary definition. The proofs are similar and are included in Appendix E for completeness.
For a given hypothesis set H, the resulting bounds suggest three key ingredients for the choice of a surrogate loss: (1) the functional form of the H-consistency bound, which is specified by the function Ψ or Γ; (2) the smoothness of the loss and more generally its optimization virtues, as needed for the minimization of $R_{\ell_1}(h) - R^*_{\ell_1,H}$; (3) and the approximation properties of the surrogate loss function which determine the value of the minimizability gap $\mathcal{M}_{\ell_1,H}$. Our quantitative H-consistency bounds can help select the most favorable surrogate loss function among surrogate losses with good optimization merits and comparable approximation properties.
In Section 4 and Section 5, we will apply Theorem 1 and Theorem 2 to the analysis of multi-class loss functions and hypothesis sets widely used in practice. Here, we wish to first comment on the novelty of our results and proof techniques. Let us emphasize that although the general tools of Theorems 1 and 2 are the multi-class generalization of that in (Awasthi et al., 2022a), the binary classification results of Awasthi et al. (2022a) do not readily extend to the multi-class setting. This is true, even in the classical study of Bayes-consistency, where the multi-class setting (Tewari and Bartlett, 2007) does not readily follow the binary case (Bartlett et al., 2006) and required an alternative analysis and new proofs. Note that, additionally, in the multi-class setting, surrogate losses are more diverse: we will distinguish max losses, sum losses, and constrained losses and present an analysis for each loss family with various auxiliary functions for each (see Section 4).
Proof techniques. More specifically, the need for novel proof techniques stems from the following. To use Theorem 1 and Theorem 2, we need to find Ψ and Γ such that the inequality conditions in these theorems hold. This requires us to characterize the conditional risk and the minimal conditional risk of the multi-class zero-one loss function and the corresponding ones for diverse surrogate loss functions in both the non-adversarial and adversarial scenario. Unlike the binary case, such a characterization in the multi-class setting is very difficult. For example, for the constrained loss, solving the minimal conditional risk given a hypothesis set is equivalent to solving a c-dimensional constrained optimization problem, which does not admit an analytical expression. In contrast, in the binary case, solving the minimal conditional risk is equivalent to solving a minimization problem for a univariate function and the needed function Ψ can be characterized explicitly by the H-estimation error transformation, as shown in (Awasthi et al., 2022a). Unfortunately, such binary classification transformation tools cannot be adapted to the multi-class setting. Instead, in our proof for the multiclass setting, we adopt a new idea that avoids directly characterizing the explicit expression of the minimal conditional risk.
For example, for the constrained loss, we leverage the condition of (Lee et al., 2004) that the scores sum to zero, and appropriately choose a hypothesis h that differs from h only by its scores for h(x) and ymax (see Appendix K). Then, we can upper bound the minimal conditional risk by the conditional risk of h without having to derive the closed form expression of the minimal conditional risk. Therefore, the conditional regret of the surrogate loss can be lower bounded by that of the zero-one loss with an appropriate function Ψ. To the best of our knowledge, this proof idea and technique are entirely novel. We believe that they can be used for the analysis of other multi-class surrogate losses. Furthermore, all of our multi-class H-consistency results are new. Likewise, our proofs of the H-consistency bounds for sum losses for the squared hinge loss and exponential loss use similarly a new technique and idea, and so does the proof for the ρ-margin loss. Furthermore, we also present an analysis of the adversarial scenario (see Section 5), for which the multi-class proofs are also novel. Finally, our bounds in the multi-class setting are more general: for c = 2, we recover the binary classification bounds of (Awasthi et al., 2022a). Thus, our bounds benefit from the same tightness guarantees shown by (Awasthi et al., 2022a). A further analysis of the tightness of our guarantees in the multi-class setting is left to future work.
4 H-consistency bounds
In this section, we discuss H-consistency bounds in the non-adversarial scenario where the target loss $\ell_2$ is $\ell_{0-1}$, the multi-class 0/1 loss. The lemma stated next characterizes the minimal conditional $\ell_{0-1}$-risk and the corresponding conditional $\epsilon$-regret, which will be helpful for instantiating Theorems 1 and 2 in the non-adversarial scenario. For any x ∈ X, we will denote by H(x) the set of labels generated by hypotheses in H: H(x) = {h(x) : h ∈ H}.
Lemma 3. For any x ∈ X, the minimal conditional $\ell_{0-1}$-risk and the conditional $\epsilon$-regret for $\ell_{0-1}$ can be expressed as follows:
$\mathcal{C}^*_{\ell_{0-1},H}(x) = 1 - \max_{y \in H(x)} p(x, y)$
$[\Delta\mathcal{C}_{\ell_{0-1},H}(h, x)]_\epsilon = \big[\max_{y \in H(x)} p(x, y) - p(x, h(x))\big]_\epsilon$.
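As a small numerical illustration of the second identity (for a symmetric hypothesis set, so that H(x) = Y, and ignoring the ε-truncation), with hypothetical values:

```python
import numpy as np

p_x = np.array([0.5, 0.3, 0.2])     # hypothetical conditional distribution p(x, .)
scores = np.array([1.0, 2.0, 0.5])  # hypothetical scores h(x, .)
h_x = int(np.argmax(scores))        # predicted label h(x)

conditional_risk = 1.0 - p_x[h_x]            # conditional zero-one risk of h at x
minimal_conditional_risk = 1.0 - p_x.max()   # minimal conditional zero-one risk
regret = conditional_risk - minimal_conditional_risk
assert np.isclose(regret, p_x.max() - p_x[h_x])  # matches Lemma 3
```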
The proof of Lemma 3 is given in Appendix F. By Lemma 3, Theorems 1 and 2 can be instantiated as Theorems 4 and 5 in the non-adversarial scenario as follows, where H-consistency bounds are provided between the multi-class 0/1 loss and a surrogate loss $\ell$.
Theorem 4 (Non-adversarial distribution-dependent Ψ-bound). Assume that there exists a convex function $\Psi\colon \mathbb{R}_+ \to \mathbb{R}$ with $\Psi(0) \geq 0$ and $\epsilon \geq 0$ such that the following holds for all h ∈ H, x ∈ X and D ∈ P:
$\Psi\big(\big[\max_{y \in H(x)} p(x, y) - p(x, h(x))\big]_\epsilon\big) \leq \Delta\mathcal{C}_{\ell,H}(h, x). \quad (1)$
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
$\Psi\big(R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} + \mathcal{M}_{\ell_{0-1},H}\big) \leq R_{\ell}(h) - R^*_{\ell,H} + \mathcal{M}_{\ell,H} + \max\{\Psi(0), \Psi(\epsilon)\}. \quad (2)$
Theorem 5 (Non-adversarial distribution-dependent Γ-bound). Assume that there exists a concave function $\Gamma\colon \mathbb{R}_+ \to \mathbb{R}$ and $\epsilon \geq 0$ such that the following holds for all h ∈ H, x ∈ X and D ∈ P:
$\big[\max_{y \in H(x)} p(x, y) - p(x, h(x))\big]_\epsilon \leq \Gamma\big(\Delta\mathcal{C}_{\ell,H}(h, x)\big). \quad (3)$
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
$R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq \Gamma\big(R_{\ell}(h) - R^*_{\ell,H} + \mathcal{M}_{\ell,H}\big) - \mathcal{M}_{\ell_{0-1},H} + \epsilon. \quad (4)$
In the following, we will apply Theorems 4 and 5 to study the H-consistency bounds for different families of multi-class losses parameterized by various auxiliary functions, for several general hypothesis sets. It is worth emphasizing that the form of the surrogate losses is more diverse in the multi-class setting and each case requires a careful analysis and that the techniques used in the binary case (Awasthi et al., 2022a) do not apply and cannot be readily extended to our case.
Hypothesis sets. Let $B^d_p(r) = \{z \in \mathbb{R}^d \mid \|z\|_p \leq r\}$ denote the d-dimensional $\ell_p$-ball with radius r, with p ∈ [1, +∞]. Without loss of generality, in the following, we choose $X = B^d_p(1)$. Let p, q ∈ [1, +∞] be conjugate indices, that is $\frac{1}{p} + \frac{1}{q} = 1$. In the following, we will specifically study three families: the family of all measurable functions $H_{\mathrm{all}}$, the family of linear hypotheses
$H_{\mathrm{lin}} = \{(x, y) \mapsto w_y \cdot x + b_y \mid \|w_y\|_q \leq W,\ |b_y| \leq B\}$,
and that of one-hidden-layer ReLU networks defined by the following, where $(\cdot)_+ = \max(\cdot, 0)$:
$H_{\mathrm{NN}} = \{(x, y) \mapsto \sum_{j=1}^{n} u_{y,j}\,(w_{y,j} \cdot x + b_{y,j})_+ \mid \|u_y\|_1 \leq \Lambda,\ \|w_{y,j}\|_q \leq W,\ |b_{y,j}| \leq B\}$.
Multi-class loss families. We will study three broad families of multi-class loss functions: max losses, sum losses and constrained losses, each parameterized by an auxiliary function Φ on R, assumed to be non-increasing and non-negative. In particular, we will consider the following
common auxiliary functions: the hinge loss $\Phi_{\mathrm{hinge}}(t) = \max\{0, 1 - t\}$, the squared hinge loss $\Phi_{\text{sq-hinge}}(t) = \max\{0, 1 - t\}^2$, the exponential loss $\Phi_{\exp}(t) = e^{-t}$, and the ρ-margin loss $\Phi_\rho(t) = \min\{\max\{0, 1 - t/\rho\}, 1\}$. Note that the first three auxiliary functions are convex, while the last one is not. Figure 1 shows plots of these auxiliary functions.
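For concreteness, here are minimal Python sketches of these four auxiliary functions; the function names are ours.

```python
import numpy as np

def phi_hinge(t):
    return np.maximum(0.0, 1.0 - t)

def phi_sq_hinge(t):
    return np.maximum(0.0, 1.0 - t) ** 2

def phi_exp(t):
    return np.exp(-t)

def phi_rho(t, rho=1.0):
    # non-convex rho-margin loss, clipped between 0 and 1
    return np.minimum(np.maximum(0.0, 1.0 - t / rho), 1.0)
```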
We will say that a hypothesis set H is symmetric if there exists a family F of functions f mapping from X to R such that {[h(x,1), . . . , h(x, c)]∶h ∈H} = {[f1(x), . . . , fc(x)]∶ f1, . . . , fc ∈ F} and ∣{f(x)∶ f ∈ F}∣ ≥ 2 for any x ∈ X. The hypothesis sets defined above (Hall, Hlin and HNN) are all symmetric. Note that for a symmetric hypothesis set H, we have H(x) = Y. We will say that a hypothesis set H is complete if the set of scores it generates spans R, that is, {h(x, y)∶h ∈H} = R, for any (x, y) ∈ X × Y. The hypothesis sets defined above, Hall, Hlin and HNN with B = +∞ are all complete.
4.1 Max losses
In this section, we discuss guarantees for max losses, that is loss functions that can be defined by the application of an auxiliary function Φ to the margin ρh(x, y), as in (Crammer and Singer, 2001):
$\forall (x, y) \in X \times Y, \quad \Phi^{\max}(h, x, y) = \max_{y' \neq y} \Phi\big(h(x, y) - h(x, y')\big) = \Phi\big(\rho_h(x, y)\big). \quad (5)$
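A minimal sketch of Equation (5), with any of the auxiliary-function sketches above passed in as phi; since Φ is non-increasing, taking the maximum over y′ is equivalent to evaluating Φ at the margin.

```python
import numpy as np

def max_loss(scores, y, phi):
    """Sketch of the max loss: Phi(h(x, y) - max_{y' != y} h(x, y')) = Phi(rho_h(x, y)).

    scores plays the role of the score vector (h(x, 1), ..., h(x, c)).
    """
    others = np.delete(scores, y)
    margin = scores[y] - others.max()   # rho_h(x, y)
    return phi(margin)                  # equals the max over y' because phi is non-increasing
```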
i) Negative results. We first give negative results showing that max losses Φmax(h,x, y) with convex and non-increasing auxiliary functions Φ do not admit useful H-consistency bounds for multi-class classification (c > 2). The proof is given in Appendix G. Theorem 6 (Negative results for convex Φ). Assume that c > 2. Suppose that Φ is convex and non-increasing, and H satisfies there exist x ∈ X and h ∈ H such that ∣H(x)∣ ≥ 2 and h(x, y) are equal for all y ∈ Y. If for a non-decreasing function f ∶R+ → R+, the following H-consistency bound holds for any hypothesis h ∈H and any distribution D:
$R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq f\big(R_{\Phi^{\max}}(h) - R^*_{\Phi^{\max},H}\big), \quad (6)$
then, f is lower bounded by 1/2.
The condition on the hypothesis set in Theorem 6 is very general and all symmetric hypothesis sets verify the condition, e.g. Hall, Hlin and HNN. It is also worth pointing out that when c = 2, that is, in binary classification, Theorem 6 does not hold. Indeed, Awasthi et al. (2022a) present a series of results providing H-consistency bounds for convex Φ in the binary case. In the proof, we make use of the assumption that c > 2 and thus are able to take a probability vector p(x) whose dimension is at least three, which is crucial for the proof.
ii) Positive results without distributional assumptions. On the positive side, the max loss with the non-convex auxiliary function Φ = Φρ admits H-consistency bounds. Theorem 7 (H-consistency bound of Φmaxρ ). Suppose that H is symmetric. Then, for any hypothesis h ∈H and any distribution D,
$R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq \dfrac{R_{\Phi^{\max}_\rho}(h) - R^*_{\Phi^{\max}_\rho,H} + \mathcal{M}_{\Phi^{\max}_\rho,H}}{\min\big\{1,\ \inf_{x \in X} \sup_{h \in H} \frac{\rho_h(x, h(x))}{\rho}\big\}} - \mathcal{M}_{\ell_{0-1},H}. \quad (7)$
See Appendix G for the proof. Theorem 7 is very powerful since it only requires H to be symmetric. We can use it to derive H-consistency bounds for Φmaxρ with common symmetric hypothesis sets
such as Hall, Hlin and HNN, as summarized in Table 1. The proofs with corresponding summarized Corollaries 18, 19 and 20 are included in Appendix H. In the proofs, we characterize the term infx∈X suph∈H ρh(x,h(x)) for each hypothesis set. Note that by Theorem 6, there is no useful H-consistency bound for the max loss with Φ = Φhinge, Φsq−hinge or Φexp in these cases. However, under the realizability assumption (Definition 8), we will show that such bounds hold.
iii) Positive results with realizable distributions. We consider the H-realizability condition (Long and Servedio, 2013; Kuznetsov et al., 2014; Cortes et al., 2016a,b; Zhang and Agarwal, 2020; Awasthi et al., 2021a) which is defined as follows. Definition 8 (H-realizability). A distribution D over X × Y is H-realizable if it labels points according to a deterministic model in H, i.e., if ∃h ∈H such that P(x,y)∼D(ρh(x, y) > 0) = 1. Theorem 9 (Realizable H-consistency bound of Φmax). Suppose that H is symmetric and complete, and Φ is non-increasing and satisfies that limt→+∞ Φ(t) = 0. Then, for any hypothesis h ∈ H and any H-realizable distribution D, we have
$R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq R_{\Phi^{\max}}(h) - R^*_{\Phi^{\max},H} + \mathcal{M}_{\Phi^{\max},H}. \quad (8)$
See Appendix G for the proof. Long and Servedio (2013, Theorem 9) show that Φmaxhinge is realizable H-consistent for any symmetric hypothesis set H that is closed under scaling. Since for any Hrealizable distribution, the assumption that H is closed under scaling implies that H is complete and MΦmax,H = 0, Theorem 9 also yields a quantitative relationship in that case that is stronger than the asymptotic consistency property of that previous work.
4.2 Sum losses
In this section, we discuss guarantees for sum losses, that is loss functions defined via a sum, as in (Weston and Watkins, 1998):
$\Phi^{\mathrm{sum}}(h, x, y) = \sum_{y' \neq y} \Phi\big(h(x, y) - h(x, y')\big). \quad (9)$
i) Negative results. We first give a negative result showing that when using as auxiliary function the hinge-loss, the sum loss cannot benefit from any useful H-consistency guarantee. The proof is deferred to Appendix J. Theorem 10 (Negative results for hinge loss). Assume that c > 2. Suppose that H is symmetric and complete. If for a non-decreasing function f ∶R+ → R+, the following H-consistency bound holds for any hypothesis h ∈H and any distribution D:
$R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq f\big(R_{\Phi^{\mathrm{sum}}_{\mathrm{hinge}}}(h) - R^*_{\Phi^{\mathrm{sum}}_{\mathrm{hinge}},H}\big), \quad (10)$
then, f is lower bounded by 1/6.
ii) Positive results. We then complement this negative result with positive results when using the exponential loss, the squared hinge-loss, and the ρ-margin loss, as summarized in Table 2. The proofs with corresponding summarized Theorems 22, 23 and 24 are included in Appendix J for completeness. For Φsumρ , the symmetry and completeness assumption can be relaxed to symmetry and the condition that for any x ∈ X, there exists a hypothesis h ∈H such that ∣h(x, i) − h(x, j)∣ ≥ ρ for any i ≠ j ∈ Y, as shown in Theorem 24. In the proof, we introduce an auxiliary Lemma 21 in Appendix I, which would be helpful for lower bounding the conditional regret of Φsumρ with that of the multi-class 0/1 loss.
Table 2: H-consistency bounds for sum losses with symmetric and complete hypothesis sets.
Sum loss and its H-consistency bound (Theorems 22, 23 and 24):
$\Phi^{\mathrm{sum}}_{\text{sq-hinge}}$: $R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq \big(R_{\Phi^{\mathrm{sum}}_{\text{sq-hinge}}}(h) - R^*_{\Phi^{\mathrm{sum}}_{\text{sq-hinge}},H} + \mathcal{M}_{\Phi^{\mathrm{sum}}_{\text{sq-hinge}},H}\big)^{1/2} - \mathcal{M}_{\ell_{0-1},H}$
$\Phi^{\mathrm{sum}}_{\exp}$: $R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq \sqrt{2}\,\big(R_{\Phi^{\mathrm{sum}}_{\exp}}(h) - R^*_{\Phi^{\mathrm{sum}}_{\exp},H} + \mathcal{M}_{\Phi^{\mathrm{sum}}_{\exp},H}\big)^{1/2} - \mathcal{M}_{\ell_{0-1},H}$
$\Phi^{\mathrm{sum}}_{\rho}$: $R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq R_{\Phi^{\mathrm{sum}}_{\rho}}(h) - R^*_{\Phi^{\mathrm{sum}}_{\rho},H} + \mathcal{M}_{\Phi^{\mathrm{sum}}_{\rho},H} - \mathcal{M}_{\ell_{0-1},H}$
Table 3: H-consistency bounds for constrained losses with symmetric and complete hypothesis sets.
Constrained loss and its H-consistency bound (Theorems 25, 26, 27 and 28):
$\Phi^{\mathrm{cstnd}}_{\mathrm{hinge}}$: $R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq R_{\Phi^{\mathrm{cstnd}}_{\mathrm{hinge}}}(h) - R^*_{\Phi^{\mathrm{cstnd}}_{\mathrm{hinge}},H} + \mathcal{M}_{\Phi^{\mathrm{cstnd}}_{\mathrm{hinge}},H} - \mathcal{M}_{\ell_{0-1},H}$
$\Phi^{\mathrm{cstnd}}_{\text{sq-hinge}}$: $R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq \big(R_{\Phi^{\mathrm{cstnd}}_{\text{sq-hinge}}}(h) - R^*_{\Phi^{\mathrm{cstnd}}_{\text{sq-hinge}},H} + \mathcal{M}_{\Phi^{\mathrm{cstnd}}_{\text{sq-hinge}},H}\big)^{1/2} - \mathcal{M}_{\ell_{0-1},H}$
$\Phi^{\mathrm{cstnd}}_{\exp}$: $R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq \sqrt{2}\,\big(R_{\Phi^{\mathrm{cstnd}}_{\exp}}(h) - R^*_{\Phi^{\mathrm{cstnd}}_{\exp},H} + \mathcal{M}_{\Phi^{\mathrm{cstnd}}_{\exp},H}\big)^{1/2} - \mathcal{M}_{\ell_{0-1},H}$
$\Phi^{\mathrm{cstnd}}_{\rho}$: $R_{\ell_{0-1}}(h) - R^*_{\ell_{0-1},H} \leq R_{\Phi^{\mathrm{cstnd}}_{\rho}}(h) - R^*_{\Phi^{\mathrm{cstnd}}_{\rho},H} + \mathcal{M}_{\Phi^{\mathrm{cstnd}}_{\rho},H} - \mathcal{M}_{\ell_{0-1},H}$
4.3 Constrained losses
In this section, we discuss guarantees for constrained loss, that is loss functions defined via a constraint, as in (Lee et al., 2004):
$\Phi^{\mathrm{cstnd}}(h, x, y) = \sum_{y' \neq y} \Phi\big(-h(x, y')\big) \quad (11)$
with the constraint that $\sum_{y \in Y} h(x, y) = 0$. We present a series of positive results by proving multi-class H-consistency bounds when using as an auxiliary function the hinge-loss, the squared hinge-loss, the exponential loss, and the ρ-margin loss, as summarized in Table 3. As with the binary case (Awasthi et al., 2022a), the bound admits a linear dependency for $\Phi^{\mathrm{cstnd}}_{\mathrm{hinge}}$ and $\Phi^{\mathrm{cstnd}}_{\rho}$, in contrast with a square-root dependency for $\Phi^{\mathrm{cstnd}}_{\text{sq-hinge}}$ and $\Phi^{\mathrm{cstnd}}_{\exp}$, as illustrated in Figure 1. The proofs with corresponding summarized Theorems 25, 26, 27 and 28 are included in Appendix K for completeness. For $\Phi^{\mathrm{cstnd}}_{\rho}$, the symmetric and complete assumption can be relaxed to be symmetric and satisfy that for any x ∈ X, there exists a hypothesis h ∈ H such that h(x, y) ≤ −ρ for any y ≠ $y_{\max}$, as shown in Theorem 28.
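For concreteness, here are minimal sketches of the sum loss (9) and the constrained loss (11), again with phi standing for one of the auxiliary-function sketches above; the names and the explicit zero-sum check are our own.

```python
import numpy as np

def sum_loss(scores, y, phi):
    """Sketch of Equation (9): sum over y' != y of Phi(h(x, y) - h(x, y'))."""
    others = np.delete(scores, y)
    return phi(scores[y] - others).sum()

def constrained_loss(scores, y, phi):
    """Sketch of Equation (11): sum over y' != y of Phi(-h(x, y')),
    for score vectors satisfying the constraint sum_y h(x, y) = 0."""
    assert np.isclose(scores.sum(), 0.0), "constrained loss assumes scores sum to zero"
    others = np.delete(scores, y)
    return phi(-others).sum()
```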
The main idea of the proofs in this section is to leverage the constraint condition of Lee et al. (2004) that the scores sum to zero, and appropriately choose a hypothesis h that differs from h only by its scores for h(x) and ymax. We can then upper bound the minimal conditional risk by the conditional risk of h, without having to derive the closed form expression of the minimal conditional risk.
As shown by Steinwart (2007, Theorem 3.2), for the family of all measurable functions, the minimizability gaps vanish: M`0−1,Hall = MΦsum,Hall = MΦcstnd,Hall = 0, for Φ = Φhinge, Φsq−hinge, Φexp and Φρ. Therefore, when H =Hall, our quantitative bounds in Table 2 and Table 3 imply the asymptotic consistency results of those multi-class losses in (Tewari and Bartlett, 2007), which shows that our results are stronger and more significant. We also provide bounds for multi-class losses using a non-convex auxiliary function, which are not studied in the previous work.
5 Adversarial H-consistency bounds
In this section, we analyze multi-class H-consistency bounds in the adversarial scenario ($\ell_2 = \ell_\gamma$). For any x ∈ X, we denote by $H_\gamma(x)$ the set of hypotheses h with a positive margin on the ball of radius γ around x, $H_\gamma(x) = \{h \in H : \inf_{x'\colon \|x - x'\|_p \leq \gamma} \rho_h(x', h(x)) > 0\}$, and by $H_\gamma(x)$ the set of labels generated by these hypotheses, $H_\gamma(x) = \{h(x) : h \in H_\gamma(x)\}$. When H is symmetric, we have $H_\gamma(x) = Y$ iff $H_\gamma(x) \neq \emptyset$. The following lemma characterizes the conditional $\epsilon$-regret for the adversarial 0/1 loss, which will be helpful for applying Theorem 1 and Theorem 2 to the adversarial scenario.
Lemma 11. For any x ∈ X, the minimal conditional $\ell_\gamma$-risk and the conditional $\epsilon$-regret for $\ell_\gamma$ can be expressed as follows:
$\mathcal{C}^*_{\ell_\gamma,H}(x) = 1 - \max_{y \in H_\gamma(x)} p(x, y)\, 1_{H_\gamma(x) \neq \emptyset}$
$[\Delta\mathcal{C}_{\ell_\gamma,H}(h, x)]_\epsilon = \big[\max_{y \in H_\gamma(x)} p(x, y) - p(x, h(x))\, 1_{h \in H_\gamma(x)}\big]_\epsilon$ if $H_\gamma(x) \neq \emptyset$, and $0$ otherwise.
The proof of Lemma 11 is presented in Appendix F. By Lemma 11, Theorems 1 and 2 can be instantiated as Theorems 12 and 13 in the adversarial scenario as follows, where H-consistency bounds are provided between the adversarial multi-class 0/1 loss and a surrogate loss $\ell$.
Theorem 12 (Adversarial distribution-dependent Ψ-bound). Assume that there exists a convex function $\Psi\colon \mathbb{R}_+ \to \mathbb{R}$ with $\Psi(0) = 0$ and $\epsilon \geq 0$ such that the following holds for all h ∈ H, $x \in \{x \in X : H_\gamma(x) \neq \emptyset\}$ and D ∈ P:
$\Psi\big(\big[\max_{y \in H_\gamma(x)} p(x, y) - p(x, h(x))\, 1_{h \in H_\gamma(x)}\big]_\epsilon\big) \leq \Delta\mathcal{C}_{\ell,H}(h, x). \quad (12)$
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
$\Psi\big(R_{\ell_\gamma}(h) - R^*_{\ell_\gamma,H} + \mathcal{M}_{\ell_\gamma,H}\big) \leq R_{\ell}(h) - R^*_{\ell,H} + \mathcal{M}_{\ell,H} + \max\{0, \Psi(\epsilon)\}. \quad (13)$
Theorem 13 (Adversarial distribution-dependent Γ-bound). Assume that there exists a non-negative concave function $\Gamma\colon \mathbb{R}_+ \to \mathbb{R}$ and $\epsilon \geq 0$ such that the following holds for all h ∈ H, $x \in \{x \in X : H_\gamma(x) \neq \emptyset\}$ and D ∈ P:
$\big[\max_{y \in H_\gamma(x)} p(x, y) - p(x, h(x))\, 1_{h \in H_\gamma(x)}\big]_\epsilon \leq \Gamma\big(\Delta\mathcal{C}_{\ell,H}(h, x)\big). \quad (14)$
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
$R_{\ell_\gamma}(h) - R^*_{\ell_\gamma,H} \leq \Gamma\big(R_{\ell}(h) - R^*_{\ell,H} + \mathcal{M}_{\ell,H}\big) - \mathcal{M}_{\ell_\gamma,H} + \epsilon. \quad (15)$
Next, we will apply Theorem 12 and Theorem 13 to study various hypothesis sets and adversarial surrogate loss functions in Sections 5.1 for negative results and Section 5.2, 5.3, and 5.4 for positive results. A careful analysis is presented in each case (see Appendix L, M, N and O).
5.1 Negative results for adversarial robustness
The following result rules out the H-consistency guarantee of multi-class losses with a convex auxiliary function, which are commonly used in practice. The proof is given in Appendix L. Theorem 14 (Negative results for convex functions). Fix c = 2. Suppose that Φ is convex and nonincreasing, and H contains 0 and satisfies the condition that there exists x ∈ X such that Hγ(x) ≠ ∅. If for a non-decreasing function f ∶R+ → R+, the following H-consistency bound holds for any hypothesis h ∈H and any distribution D:
$R_{\ell_\gamma}(h) - R^*_{\ell_\gamma,H} \leq f\big(R_{\tilde\ell}(h) - R^*_{\tilde\ell,H}\big), \quad (16)$
then, f is lower bounded by 1/2, for $\tilde\ell = \widetilde{\Phi}^{\max}$, $\widetilde{\Phi}^{\mathrm{sum}}$ and $\widetilde{\Phi}^{\mathrm{cstnd}}$.
Instead, we show in Sections 5.2, 5.3, and 5.4 that the max, sum and constrained losses using as auxiliary function the non-convex ρ-margin loss admit favorable H-consistency bounds in the multi-class setting, thereby significantly generalizing the binary counterpart in (Awasthi et al., 2022a).
5.2 Adversarial max losses
We first consider the adversarial max loss Φ̃max defined as the supremum based counterpart of (5):
$\widetilde{\Phi}^{\max}(h, x, y) = \sup_{x'\colon \|x - x'\|_p \leq \gamma} \Phi\big(\rho_h(x', y)\big). \quad (17)$
For the adversarial max loss with Φ = Φρ, we can obtain H-consistency bounds as follows.
Theorem 15 (H-consistency bound of Φ̃maxρ ). Suppose that H is symmetric. Then, for any hypothesis h ∈H and any distribution D, we have
$R_{\ell_\gamma}(h) - R^*_{\ell_\gamma,H} \leq \dfrac{R_{\widetilde{\Phi}^{\max}_\rho}(h) - R^*_{\widetilde{\Phi}^{\max}_\rho,H} + \mathcal{M}_{\widetilde{\Phi}^{\max}_\rho,H}}{\min\big\{1,\ \inf_{x \in \{x \in X : H_\gamma(x) \neq \emptyset\}} \sup_{h \in H_\gamma(x)} \inf_{x'\colon \|x - x'\|_p \leq \gamma} \frac{\rho_h(x', h(x))}{\rho}\big\}} - \mathcal{M}_{\ell_\gamma,H}. \quad (18)$
5.3 Adversarial sum losses
Next, we consider the adversarial sum loss Φ̃sum defined as the supremum based counterpart of (9):
$\widetilde{\Phi}^{\mathrm{sum}}(h, x, y) = \sup_{x'\colon \|x - x'\|_p \leq \gamma} \sum_{y' \neq y} \Phi\big(h(x', y) - h(x', y')\big). \quad (19)$
Using the auxiliary Lemma 21 in Appendix I, we can obtain the H-consistency bound of Φ̃sumρ .
Theorem 16 (H-consistency bound of $\widetilde{\Phi}^{\mathrm{sum}}_\rho$). Assume that H is symmetric and that for any x ∈ X, there exists a hypothesis h ∈ H inducing the same ordering of the labels for any $x' \in \{x' : \|x - x'\|_p \leq \gamma\}$ and such that $\inf_{x'\colon \|x - x'\|_p \leq \gamma} |h(x', i) - h(x', j)| \geq \rho$ for any i ≠ j ∈ Y. Then, for any hypothesis h ∈ H and any distribution D, the following inequality holds:
$R_{\ell_\gamma}(h) - R^*_{\ell_\gamma,H} \leq R_{\widetilde{\Phi}^{\mathrm{sum}}_\rho}(h) - R^*_{\widetilde{\Phi}^{\mathrm{sum}}_\rho,H} + \mathcal{M}_{\widetilde{\Phi}^{\mathrm{sum}}_\rho,H} - \mathcal{M}_{\ell_\gamma,H}. \quad (20)$
5.4 Adversarial constrained loss
Similarly, we define the adversarial constrained loss Φ̃cstnd as supremum based counterpart of (11):
$\widetilde{\Phi}^{\mathrm{cstnd}}(h, x, y) = \sup_{x'\colon \|x - x'\|_p \leq \gamma} \sum_{y' \neq y} \Phi\big(-h(x', y')\big) \quad (21)$
with the constraint that ∑y∈Y h(x, y) = 0. For the adversarial constrained loss with Φ = Φρ, we can obtain the H-consistency bound of Φ̃cstndρ as follows.
Theorem 17 (H-consistency bound of $\widetilde{\Phi}^{\mathrm{cstnd}}_\rho$). Suppose that H is symmetric and satisfies that for any x ∈ X, there exists a hypothesis h ∈ H with the constraint $\sum_{y \in Y} h(x, y) = 0$ such that $\sup_{x'\colon \|x - x'\|_p \leq \gamma} h(x', y) \leq -\rho$ for any y ≠ $y_{\max}$. Then, for any hypothesis h ∈ H and any distribution,
$R_{\ell_\gamma}(h) - R^*_{\ell_\gamma,H} \leq R_{\widetilde{\Phi}^{\mathrm{cstnd}}_\rho}(h) - R^*_{\widetilde{\Phi}^{\mathrm{cstnd}}_\rho,H} + \mathcal{M}_{\widetilde{\Phi}^{\mathrm{cstnd}}_\rho,H} - \mathcal{M}_{\ell_\gamma,H}. \quad (22)$
The proofs of Theorems 15, 16 and 17 are included in Appendix M, N and O respectively. These results are significant since they apply to general hypothesis sets. In particular, symmetric hypothesis sets Hall, Hlin and HNN with B = +∞ all verify the conditions of those theorems. When B < +∞, the conditions in Theorems 16 and 17 can still be verified with a suitable choice of ρ, where we can consider the hypotheses such that wy = 0 in Hlin and HNN, while Theorem 15 holds for any ρ > 0.
6 Conclusion
We presented a comprehensive study of H-consistency bounds for multi-class classification, including the analysis of the three most commonly used families of multi-class surrogate losses (max losses, sum losses and constrained losses) and the study of surrogate losses for adversarial robustness. Our theoretical analysis helps determine which surrogate losses admit a favorable guarantee for a given hypothesis set H. Our bounds can help guide the design of multi-class classification algorithms for both the adversarial and non-adversarial settings. They also help compare different surrogate losses for the same setting and the same hypothesis set. Of course, in addition to the functional form of the H-consistency bound, the approximation property of a surrogate loss function combined with the hypothesis set plays an important role.

1. What is the focus of the paper regarding multi-class classification problems?
2. What are the strengths of the proposed approach, particularly in its ability to generalize previous works?
3. Do you have any concerns or questions about the negative result in Theorem 6?
4. How can one estimate the bounds, and do the results provide insights into choosing the best surrogate loss for a given hypothesis set?
5. Are there any limitations or potential negative societal impacts associated with the work?
Summary Of The Paper
The target loss in multi-class classification problems is usually the 0/1 loss, while the learning algorithms usually optimize with respect to surrogate losses. The work provided a comprehensive study of how well an optimization problem with respect to surrogate losses represents the results using the target loss in multi-class classification problems. The work considered the max loss, the sum loss, and the constrained loss, and provided the corresponding bounds or negative results.
Strengths And Weaknesses
The paper is well presented. The basis of the results (Theorem 1 - Theorem 5) is a generalization of the results in Awasthi et al. (2022) from binary classification to multi-class classification. However, since loss functions used in binary classification and multi-class classification are different, most of the results in the following sections are new. The work provided a systematic and comprehensive study of surrogate losses in multi-class classification problems, which can most likely lead the community to a better understanding of the use of different surrogate losses under different problem settings.
Questions
For the negative result in Theorem 6, it is not yet clear to me whether the problem comes from using a convex \Phi or from having an h that gives equal scores. Will the problem be solved if one removes such h from the hypothesis class?
How can one estimate the bounds? Can the results indicate which surrogate loss is a better choice for a given hypothesis set H?
Figure 1 showed \rho-margin loss but didn’t specify the value of \rho in the figure.
L175, L187: any distribution “D”
Limitations
The reviewer has not yet seen the potential negative societal impact of the work.
NIPS | Title
Multi-Class $H$-Consistency Bounds
Abstract
We present an extensive study of H-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayesconsistency, H-calibration or H-consistency, and more informative than excess error bounds derived for H being the family of all measurable functions. We give a series of new H-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, both in the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions used. We also prove that no non-trivial H-consistency bound can be given in some cases. To our knowledge, these are the first H-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees.
N/A
We present an extensive study of H-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayesconsistency, H-calibration or H-consistency, and more informative than excess error bounds derived for H being the family of all measurable functions. We give a series of new H-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, both in the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions used. We also prove that no non-trivial H-consistency bound can be given in some cases. To our knowledge, these are the first H-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees.
1 Introduction
The loss functions optimized by learning algorithms are often distinct from the original one specified for a task. This is typically because optimizing the original loss is computationally intractable or because it does not admit some favorable properties of differentiability or smoothness. As an example, the loss function minimized by the support vector machine (SVM) algorithm is the hinge loss (Cortes and Vapnik, 1995) or the one associated to AdaBoost is the exponential loss (Schapire and Freund, 2012), both distinct from the binary classification loss used as a benchmark in applications. But, what learning guarantees can we rely on when using a surrogate loss? This is a fundamental question in learning theory that directly relates to the design of algorithms.
The standard property of Bayes-consistency, which has been shown to hold for several surrogate losses (Zhang, 2004a,b; Bartlett, Jordan, and McAuliffe, 2006; Tewari and Bartlett, 2007; Steinwart, 2007), does not supply a sufficient guarantee, since it only ensures that, asymptotically, near optimal minimizers of the surrogate excess loss nearly optimally minimize the target excess error. Moreover, this asymptotic property only holds for the full family of measurable functions, which of course is distinct from the more restricted hypothesis set used by a learning algorithm. In fact, it has been shown by Long and Servedio (2013), both theoretically and empirically, that for some hypothesis sets and distributions, the expected error of an algorithm minimizing a Bayes-consistent loss is bounded below by a positive constant, while that of an algorithm minimizing an inconsistent loss goes to zero.
This suggests that a hypothesis set-dependent notion of H-consistency is more pertinent to the study of consistency for learning (Long and Servedio, 2013), which has been used by Kuznetsov et al. (2014); Cortes et al. (2016a,b) and Zhang and Agarwal (2020) and more generally by Awasthi, Frank, Mao, Mohri, and Zhong (2021a) in an extensive study of both binary classification and adversarial binary classification losses, as defined in (Goodfellow et al., 2014; Madry et al., 2017; Tsipras et al., 2018; Carlini and Wagner, 2017). Nevertheless, H-consistency remains an asymptotic property and does not provide guarantees for approximate surrogate loss minimizers that rely on finite samples.
Awasthi, Mao, Mohri, and Zhong (2022a) recently presented a series of results providing Hconsistency bounds in binary classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. These guarantees are significantly stronger than the H-calibration or H-consistency properties studied by Awasthi et al. (2021b,c). They are also more informative than similar excess error bounds derived in the literature, which correspond to the special case where H is the family of all measurable functions (Zhang, 2004a; Bartlett et al., 2006; Mohri et al., 2018). Combining H-consistency bounds with existing surrogate loss estimation bounds directly yields finite sample bounds on the estimation error for the original loss. See Appendix C for a more detailed discussion.
This paper presents an extensive study of H-consistency bounds for multi-class classification. We show in Section 4.1 that, in general, no non-trivial H-consistency bounds can be derived for multiclass max losses such as those of Crammer and Singer (2001), when used with a convex loss auxiliary function such as the hinge loss. On the positive side, we prove multi-class H-consistency bounds for max losses under a realizability assumption and give multi-class H-consistency bounds using as an auxiliary function the ρ-margin loss, without requiring a realizability assumption. For sum losses, that is multi-class losses such as that of Weston and Watkins (1998), we give a series of results, including a negative result when using as auxiliary function the hinge-loss, and H-consistency bounds when using the exponential loss, the squared hinge-loss, and the ρ-margin loss (Section 4.2). We also present a series of results for the so-called constrained losses, such as the loss function adopted by Lee et al. (2004) in the analysis of multi-class SVM. Here, we prove multi-class H-consistency bounds when using as an auxiliary function the hinge-loss, the squared hinge-loss, the exponential loss, and the ρ-margin loss (Section 4.3). We further give multi-class adversarial H-consistency bounds for all three of the general multi-class losses just mentioned (max losses, sum losses and constrained losses) in Section 5.
We are not aware of any prior H-consistency bound derived in the multi-class setting, even in the special case of H being the family of all measurable functions, whether in the non-adversarial or adversarial setting. All of our results are novel, including our proof techniques. Our results are given for the hypothesis set H being the family of all measurable functions, the family of linear functions, or the family of one-hidden-layer ReLU neural networks. The binary classification results of Awasthi et al. (2022a) do not readily extend to the multi-class setting since the study of calibration and conditional risk is more complex, the form of the surrogate losses is more diverse, and in general the analysis is more involved and requires entirely novel proof techniques in the multi-class setting (see Section 3 for a more detailed discussion of this point).
We give a detailed discussion of related work in Appendix A. We start with the introduction of several multi-class definitions, as well as key concepts and definitions related to the study of H-consistency bounds (Section 2).
2 Preliminaries
We consider the familiar multi-class classification scenario with c ≥ 2 classes. We denote by X the input space and by Y = {1, . . . , c} the set of classes or categories. Let H be a hypothesis set of functions mapping from X × Y to R. The label h(x) associated by a hypothesis h ∈ H to x ∈ X is the one with the largest score: h(x) = argmaxy∈Y h(x, y) with an arbitrary but fixed deterministic strategy used for breaking ties. For simplicity, we fix that strategy to be the one selecting the label with the highest index under the natural ordering of labels. See Appendix B for a more detailed discussion of this choice.
The margin ρ_h(x, y) of a hypothesis h ∈ H for a labeled example (x, y) ∈ X × Y is defined by
ρ_h(x, y) = h(x, y) − max_{y′≠y} h(x, y′),
that is the difference between the score assigned to (x, y) and that of the runner-up. Given a distribution D over X × Y and a loss function ℓ: H × X × Y → R, the generalization error of a hypothesis h ∈ H and the minimal generalization error are defined as follows:
R_ℓ(h) = E_{(x,y)∼D}[ℓ(h, x, y)]  and  R*_{ℓ,H} = inf_{h∈H} R_ℓ(h).
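As a quick illustration (a sketch with made-up scores, not from the paper), the prediction rule with highest-index tie-breaking and the margin ρ_h can be computed as follows.

```python
import numpy as np

def predict(scores):
    # argmax over labels, breaking ties in favor of the highest index
    best = scores.max()
    return int(np.flatnonzero(scores == best)[-1])

def rho_margin(scores, y):
    # rho_h(x, y) = h(x, y) - max_{y' != y} h(x, y')
    others = np.delete(scores, y)
    return float(scores[y] - others.max())

scores = np.array([0.7, 0.7, 0.2])      # hypothetical scores for c = 3 classes
print(predict(scores))                   # -> 1: tie between labels 0 and 1, higher index wins
print(rho_margin(scores, y=2))           # -> -0.5: negative margin, label 2 is not predicted
```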
The goal in multi-class classification is to select a hypothesis h ∈ H with small generalization error with respect to the multi-class 0/1 loss defined, for any h ∈ H, by ℓ_{0−1}(h, x, y) = 1_{h(x)≠y}. In the adversarial scenario, the goal is to select a hypothesis h ∈ H with small adversarial generalization error defined, for any γ ∈ (0,1) and p ∈ [1, +∞], by R_{ℓ_γ}(h) = E_{(x,y)∼D}[ℓ_γ(h, x, y)], where
ℓ_γ(h, x, y) = sup_{x′: ∥x−x′∥_p ≤ γ} 1_{ρ_h(x′,y)≤0} = 1_{inf_{x′: ∥x−x′∥_p ≤ γ} ρ_h(x′,y)≤0},
is the adversarial multi-class 0/1 loss. More generally, the adversarial generalization error and minimal adversarial generalization error for a loss function ℓ(h, x, y) are defined as follows:
R_{ℓ̃}(h) = E_{(x,y)∼D}[ℓ̃(h, x, y)]  and  R*_{ℓ̃,H} = inf_{h∈H} R_{ℓ̃}(h),
where ℓ̃(h, x, y) = sup_{x′: ∥x−x′∥_p ≤ γ} ℓ(h, x′, y) is the supremum-based counterpart of ℓ.
For a distribution D over X × Y, we define, for any x ∈ X, p(x) = (p(x,1), . . . , p(x, c)), where p(x, y) = D(Y = y | X = x) is the conditional probability of Y = y given X = x. We can then write the generalization error as R_ℓ(h) = E_X[C_ℓ(h, x)], where C_ℓ(h, x) is the conditional ℓ-risk defined by C_ℓ(h, x) = ∑_{y∈Y} p(x, y) ℓ(h, x, y). We will denote by P a set of distributions D over X × Y and by P_all the set of all such distributions. For convenience, we define y_max by y_max = argmax_{y∈Y} p(x, y). When there is a tie, we pick the label with the highest index under the natural ordering of labels.
The minimal conditional ℓ-risk is denoted by C*_{ℓ,H}(x) = inf_{h∈H} C_ℓ(h, x). We also use the shorthand ∆C_{ℓ,H}(h, x) = C_ℓ(h, x) − C*_{ℓ,H}(x) for the gap, and call ∆C_{ℓ,H}(h, x) 1_{∆C_{ℓ,H}(h,x)>ε} the conditional ε-regret for ℓ. For convenience, we also define, for any vector τ = (τ_1, . . . , τ_c) in the probability simplex of R^c, C_ℓ(h, x, τ) = ∑_{y∈Y} τ_y ℓ(h, x, y), C*_{ℓ,H}(x, τ) = inf_{h∈H} C_ℓ(h, x, τ) and ∆C_{ℓ,H}(h, x, τ) = C_ℓ(h, x, τ) − C*_{ℓ,H}(x, τ). Thus, we have ∆C_{ℓ,H}(h, x, p(x)) = ∆C_{ℓ,H}(h, x). For any ε > 0, we will denote by [t]_ε the ε-truncation of t ∈ R defined by t 1_{t>ε}. Thus, the conditional ε-regret can be rewritten as [∆C_{ℓ,H}(h, x)]_ε. For a hypothesis set H and distribution D, we also define the (ℓ, H)-minimizability gap as M_{ℓ,H} = R*_{ℓ,H} − E_X[C*_{ℓ,H}(x)], that is the difference between the best-in-class error and the expectation of the minimal conditional ℓ-risk. This is a key quantity appearing in our bounds that we cannot hope to estimate or minimize. Its value only depends on the distribution D and the hypothesis set H. As an example, when H is the family of all measurable functions, then the minimizability gap for the multi-class 0/1 loss is zero for any distribution D.
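The minimizability gap can be made concrete on a toy example. The following self-contained sketch (hypothetical numbers, not from the paper) uses a two-point input space, three labels and a hypothesis set containing only two constant-prediction score tables; because neither hypothesis can adapt its prediction to x, the best-in-class error exceeds the expected pointwise minimum and the gap M_{ℓ_{0−1},H} is strictly positive.

```python
import numpy as np

p_x = np.array([0.5, 0.5])                        # marginal over X = {0, 1}
p_y_given_x = np.array([[0.6, 0.3, 0.1],           # p(y | x = 0)
                        [0.1, 0.2, 0.7]])          # p(y | x = 1)
# Two hypotheses given as score tables h[x, y]; predictions are the row-wise argmax.
H = [np.array([[3.0, 0.0, 0.0], [3.0, 0.0, 0.0]]),   # always predicts label 0
     np.array([[0.0, 0.0, 3.0], [0.0, 0.0, 3.0]])]   # always predicts label 2

def cond_risk(h, x):
    # C_{0-1}(h, x) = 1 - p(x, h(x))
    return 1.0 - p_y_given_x[x, int(np.argmax(h[x]))]

R = [sum(p_x[x] * cond_risk(h, x) for x in range(2)) for h in H]
R_star = min(R)                                        # best-in-class error
E_C_star = sum(p_x[x] * min(cond_risk(h, x) for h in H) for x in range(2))
print(f"R* = {R_star:.2f}, E[C*] = {E_C_star:.2f}, minimizability gap M = {R_star - E_C_star:.2f}")
```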
3 General theorems
The general form of the H-consistency bounds that we are seeking for a surrogate loss ℓ_1 of a target loss ℓ_2 is R_{ℓ_2}(h) − R*_{ℓ_2,H} ≤ f(R_{ℓ_1}(h) − R*_{ℓ_1,H}) for all h ∈ H, for some non-decreasing function f. To derive such bounds for surrogate multi-class losses, we draw on the following two general theorems, which show that, under some conditions, the target loss estimation error can be bounded by some functional form of the surrogate loss estimation error involving minimizability gaps.
Theorem 1 (Distribution-dependent Ψ-bound). Assume that there exists a convex function Ψ: R_+ → R with Ψ(0) ≥ 0 and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P: Ψ([∆C_{ℓ_2,H}(h, x)]_ε) ≤ ∆C_{ℓ_1,H}(h, x). Then, for any hypothesis h ∈ H and any distribution D ∈ P,
Ψ(R_{ℓ_2}(h) − R*_{ℓ_2,H} + M_{ℓ_2,H}) ≤ R_{ℓ_1}(h) − R*_{ℓ_1,H} + M_{ℓ_1,H} + max{Ψ(0), Ψ(ε)}.
Theorem 2 (Distribution-dependent Γ-bound). Assume that there exists a concave function Γ: R_+ → R and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P: [∆C_{ℓ_2,H}(h, x)]_ε ≤ Γ(∆C_{ℓ_1,H}(h, x)). Then, for any hypothesis h ∈ H and any distribution D ∈ P,
R_{ℓ_2}(h) − R*_{ℓ_2,H} ≤ Γ(R_{ℓ_1}(h) − R*_{ℓ_1,H} + M_{ℓ_1,H}) − M_{ℓ_2,H} + ε.
The theorems show that, to derive such bounds for a specific hypothesis set and a set of distributions, it suffices to verify that for the same hypothesis set and set of distributions, the conditional -regret for the target loss can be upper bounded with the same functional form of the gap between the conditional risk and minimal conditional risk of the surrogate loss. These results are similar to their binary classification counterparts due to Awasthi et al. (2022b). In particular, the conditional `-risk C`(h,x) in our theorems is the multi-class generalization of their binary definition. The proofs are similar and are included in Appendix E for completeness.
For a given hypothesis set H, the resulting bounds suggest three key ingredients for the choice of a surrogate loss: (1) the functional form of the H-consistency bound, which is specified by the function Ψ or Γ; (2) the smoothness of the loss and more generally its optimization virtues, as needed for the minimization of R`1(h) − R∗`1,H; (3) and the approximation properties of the surrogate loss function which determine the value of the minimizability gap M`1,H. Our quantitative H-consistency bounds can help select the most favorable surrogate loss function among surrogate losses with good optimization merits and comparable approximation properties.
In Section 4 and Section 5, we will apply Theorem 1 and Theorem 2 to the analysis of multi-class loss functions and hypothesis sets widely used in practice. Here, we wish to first comment on the novelty of our results and proof techniques. Let us emphasize that although the general tools of Theorems 1 and 2 are the multi-class generalization of that in (Awasthi et al., 2022a), the binary classification results of Awasthi et al. (2022a) do not readily extend to the multi-class setting. This is true, even in the classical study of Bayes-consistency, where the multi-class setting (Tewari and Bartlett, 2007) does not readily follow the binary case (Bartlett et al., 2006) and required an alternative analysis and new proofs. Note that, additionally, in the multi-class setting, surrogate losses are more diverse: we will distinguish max losses, sum losses, and constrained losses and present an analysis for each loss family with various auxiliary functions for each (see Section 4).
Proof techniques. More specifically, the need for novel proof techniques stems from the following. To use Theorem 1 and Theorem 2, we need to find Ψ and Γ such that the inequality conditions in these theorems hold. This requires us to characterize the conditional risk and the minimal conditional risk of the multi-class zero-one loss function and the corresponding ones for diverse surrogate loss functions in both the non-adversarial and adversarial scenario. Unlike the binary case, such a characterization in the multi-class setting is very difficult. For example, for the constrained loss, solving the minimal conditional risk given a hypothesis set is equivalent to solving a c-dimensional constrained optimization problem, which does not admit an analytical expression. In contrast, in the binary case, solving the minimal conditional risk is equivalent to solving a minimization problem for a univariate function and the needed function Ψ can be characterized explicitly by the H-estimation error transformation, as shown in (Awasthi et al., 2022a). Unfortunately, such binary classification transformation tools cannot be adapted to the multi-class setting. Instead, in our proof for the multiclass setting, we adopt a new idea that avoids directly characterizing the explicit expression of the minimal conditional risk.
For example, for the constrained loss, we leverage the condition of (Lee et al., 2004) that the scores sum to zero, and appropriately choose a hypothesis h̄ that differs from h only by its scores for h(x) and y_max (see Appendix K). Then, we can upper bound the minimal conditional risk by the conditional risk of h̄ without having to derive the closed-form expression of the minimal conditional risk. Therefore, the conditional regret of the surrogate loss can be lower bounded by that of the zero-one loss with an appropriate function Ψ. To the best of our knowledge, this proof idea and technique are entirely novel. We believe that they can be used for the analysis of other multi-class surrogate losses. Furthermore, all of our multi-class H-consistency results are new. Likewise, our proofs of the H-consistency bounds for sum losses with the squared hinge loss and exponential loss similarly use a new technique and idea, and so does the proof for the ρ-margin loss. Furthermore, we also present an analysis of the adversarial scenario (see Section 5), for which the multi-class proofs are also novel. Finally, our bounds in the multi-class setting are more general: for c = 2, we recover the binary classification bounds of (Awasthi et al., 2022a). Thus, our bounds benefit from the same tightness guarantees shown by (Awasthi et al., 2022a). A further analysis of the tightness of our guarantees in the multi-class setting is left to future work.
4 H-consistency bounds
In this section, we discuss H-consistency bounds in the non-adversarial scenario where the target loss ℓ_2 is ℓ_{0−1}, the multi-class 0/1 loss. The lemma stated next characterizes the minimal conditional ℓ_{0−1}-risk and the corresponding conditional ε-regret, which will be helpful for instantiating Theorems 1 and 2 in the non-adversarial scenario. For any x ∈ X, we will denote by H(x) the set of labels generated by hypotheses in H: H(x) = {h(x): h ∈ H}.
Lemma 3. For any x ∈ X, the minimal conditional ℓ_{0−1}-risk and the conditional ε-regret for ℓ_{0−1} can be expressed as follows:
C*_{ℓ_{0−1},H}(x) = 1 − max_{y∈H(x)} p(x, y)
[∆C_{ℓ_{0−1},H}(h, x)]_ε = [max_{y∈H(x)} p(x, y) − p(x, h(x))]_ε.
The proof of Lemma 3 is given in Appendix F. By Lemma 3, Theorems 1 and 2 can be instantiated as Theorems 4 and 5 in the non-adversarial scenario as follows, where H-consistency bounds are provided between the multi-class 0/1 loss and a surrogate loss ℓ.
Theorem 4 (Non-adversarial distribution-dependent Ψ-bound). Assume that there exists a convex function Ψ: R_+ → R with Ψ(0) ≥ 0 and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P:
Ψ([max_{y∈H(x)} p(x, y) − p(x, h(x))]_ε) ≤ ∆C_{ℓ,H}(h, x).  (1)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
Ψ(R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} + M_{ℓ_{0−1},H}) ≤ R_ℓ(h) − R*_{ℓ,H} + M_{ℓ,H} + max{Ψ(0), Ψ(ε)}.  (2)
Theorem 5 (Non-adversarial distribution-dependent Γ-bound). Assume that there exists a concave function Γ: R_+ → R and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P:
[max_{y∈H(x)} p(x, y) − p(x, h(x))]_ε ≤ Γ(∆C_{ℓ,H}(h, x)).  (3)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ Γ(R_ℓ(h) − R*_{ℓ,H} + M_{ℓ,H}) − M_{ℓ_{0−1},H} + ε.  (4)
In the following, we will apply Theorems 4 and 5 to study the H-consistency bounds for different families of multi-class losses parameterized by various auxiliary functions, for several general hypothesis sets. It is worth emphasizing that the form of the surrogate losses is more diverse in the multi-class setting and each case requires a careful analysis and that the techniques used in the binary case (Awasthi et al., 2022a) do not apply and cannot be readily extended to our case.
Hypothesis sets. Let B^d_p(r) = {z ∈ R^d | ∥z∥_p ≤ r} denote the d-dimensional ℓ_p-ball with radius r, with p ∈ [1, +∞]. Without loss of generality, in the following, we choose X = B^d_p(1). Let p, q ∈ [1, +∞] be conjugate indices, that is 1/p + 1/q = 1. In the following, we will specifically study three families: the family of all measurable functions H_all, the family of linear hypotheses
H_lin = {(x, y) ↦ w_y ⋅ x + b_y | ∥w_y∥_q ≤ W, |b_y| ≤ B},
and that of one-hidden-layer ReLU networks defined by the following, where (⋅)_+ = max(⋅, 0):
H_NN = {(x, y) ↦ ∑_{j=1}^{n} u_{y,j} (w_{y,j} ⋅ x + b_{y,j})_+ | ∥u_y∥_1 ≤ Λ, ∥w_{y,j}∥_q ≤ W, |b_{y,j}| ≤ B}.
Multi-class loss families. We will study three broad families of multi-class loss functions: max losses, sum losses and constrained losses, each parameterized by an auxiliary function Φ on R, assumed to be non-increasing and non-negative. In particular, we will consider the following
common auxiliary functions: the hinge loss Φ_hinge(t) = max{0, 1 − t}, the squared hinge loss Φ_{sq−hinge}(t) = max{0, 1 − t}^2, the exponential loss Φ_exp(t) = e^{−t}, and the ρ-margin loss Φ_ρ(t) = min{max{0, 1 − t/ρ}, 1}. Note that the first three auxiliary functions are convex, while the last one is not. Figure 1 shows plots of these auxiliary functions.
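For reference, here is a minimal sketch of these four auxiliary functions (standard definitions restated in code; not tied to any particular result in the paper).

```python
import numpy as np

def phi_hinge(t):           return np.maximum(0.0, 1.0 - t)
def phi_sq_hinge(t):        return np.maximum(0.0, 1.0 - t) ** 2
def phi_exp(t):             return np.exp(-t)
def phi_rho(t, rho=1.0):    return np.clip(1.0 - t / rho, 0.0, 1.0)

for t in (-1.0, 0.0, 0.5, 2.0):
    print(t, phi_hinge(t), phi_sq_hinge(t), phi_exp(t), phi_rho(t))
```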
We will say that a hypothesis set H is symmetric if there exists a family F of functions f mapping from X to R such that {[h(x,1), . . . , h(x, c)]∶h ∈H} = {[f1(x), . . . , fc(x)]∶ f1, . . . , fc ∈ F} and ∣{f(x)∶ f ∈ F}∣ ≥ 2 for any x ∈ X. The hypothesis sets defined above (Hall, Hlin and HNN) are all symmetric. Note that for a symmetric hypothesis set H, we have H(x) = Y. We will say that a hypothesis set H is complete if the set of scores it generates spans R, that is, {h(x, y)∶h ∈H} = R, for any (x, y) ∈ X × Y. The hypothesis sets defined above, Hall, Hlin and HNN with B = +∞ are all complete.
4.1 Max losses
In this section, we discuss guarantees for max losses, that is loss functions that can be defined by the application of an auxiliary function Φ to the margin ρh(x, y), as in (Crammer and Singer, 2001):
∀(x, y) ∈ X × Y,  Φ^max(h, x, y) = max_{y′≠y} Φ(h(x, y) − h(x, y′)) = Φ(ρ_h(x, y)).  (5)
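A short sketch of Eq. (5) for a generic auxiliary function (the scores and the choice of the hinge loss below are only illustrative):

```python
import numpy as np

def max_loss(scores, y, phi):
    # Phi^max(h, x, y) = max_{y' != y} Phi(h(x, y) - h(x, y'));
    # for a non-increasing Phi this equals Phi(rho_h(x, y)).
    others = np.delete(scores, y)
    return max(phi(scores[y] - s) for s in others)

phi_hinge = lambda t: max(0.0, 1.0 - t)
print(max_loss(np.array([2.0, 0.5, -1.0]), y=1, phi=phi_hinge))   # -> 2.5
```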
i) Negative results. We first give negative results showing that max losses Φ^max(h, x, y) with convex and non-increasing auxiliary functions Φ do not admit useful H-consistency bounds for multi-class classification (c > 2). The proof is given in Appendix G.
Theorem 6 (Negative results for convex Φ). Assume that c > 2. Suppose that Φ is convex and non-increasing, and that H satisfies the condition that there exist x ∈ X and h ∈ H such that |H(x)| ≥ 2 and the scores h(x, y) are equal for all y ∈ Y. If, for a non-decreasing function f: R_+ → R_+, the following H-consistency bound holds for any hypothesis h ∈ H and any distribution D:
R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ f(R_{Φ^max}(h) − R*_{Φ^max,H}),  (6)
then f is lower bounded by 1/2.
The condition on the hypothesis set in Theorem 6 is very general and all symmetric hypothesis sets verify the condition, e.g. Hall, Hlin and HNN. It is also worth pointing out that when c = 2, that is, in binary classification, Theorem 6 does not hold. Indeed, Awasthi et al. (2022a) present a series of results providing H-consistency bounds for convex Φ in the binary case. In the proof, we make use of the assumption that c > 2 and thus are able to take a probability vector p(x) whose dimension is at least three, which is crucial for the proof.
ii) Positive results without distributional assumptions. On the positive side, the max loss with the non-convex auxiliary function Φ = Φρ admits H-consistency bounds. Theorem 7 (H-consistency bound of Φmaxρ ). Suppose that H is symmetric. Then, for any hypothesis h ∈H and any distribution D,
R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ (R_{Φ^max_ρ}(h) − R*_{Φ^max_ρ,H} + M_{Φ^max_ρ,H}) / min{1, inf_{x∈X} sup_{h∈H} ρ_h(x, h(x)) / ρ} − M_{ℓ_{0−1},H}.  (7)
See Appendix G for the proof. Theorem 7 is very powerful since it only requires H to be symmetric. We can use it to derive H-consistency bounds for Φmaxρ with common symmetric hypothesis sets
such as Hall, Hlin and HNN, as summarized in Table 1. The proofs with corresponding summarized Corollaries 18, 19 and 20 are included in Appendix H. In the proofs, we characterize the term infx∈X suph∈H ρh(x,h(x)) for each hypothesis set. Note that by Theorem 6, there is no useful H-consistency bound for the max loss with Φ = Φhinge, Φsq−hinge or Φexp in these cases. However, under the realizability assumption (Definition 8), we will show that such bounds hold.
iii) Positive results with realizable distributions. We consider the H-realizability condition (Long and Servedio, 2013; Kuznetsov et al., 2014; Cortes et al., 2016a,b; Zhang and Agarwal, 2020; Awasthi et al., 2021a) which is defined as follows. Definition 8 (H-realizability). A distribution D over X × Y is H-realizable if it labels points according to a deterministic model in H, i.e., if ∃h ∈H such that P(x,y)∼D(ρh(x, y) > 0) = 1. Theorem 9 (Realizable H-consistency bound of Φmax). Suppose that H is symmetric and complete, and Φ is non-increasing and satisfies that limt→+∞ Φ(t) = 0. Then, for any hypothesis h ∈ H and any H-realizable distribution D, we have
R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ R_{Φ^max}(h) − R*_{Φ^max,H} + M_{Φ^max,H}.  (8)
See Appendix G for the proof. Long and Servedio (2013, Theorem 9) show that Φmaxhinge is realizable H-consistent for any symmetric hypothesis set H that is closed under scaling. Since for any Hrealizable distribution, the assumption that H is closed under scaling implies that H is complete and MΦmax,H = 0, Theorem 9 also yields a quantitative relationship in that case that is stronger than the asymptotic consistency property of that previous work.
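A minimal empirical analogue of Definition 8 (illustrative only; the linear scorer and the data are hypothetical): on a finite sample, one can check whether a hypothesis attains a strictly positive margin on every example.

```python
import numpy as np

def margins(W, X, y):
    scores = X @ W.T                                  # (n, c) score matrix
    top = scores[np.arange(len(y)), y]
    scores[np.arange(len(y)), y] = -np.inf            # mask the true label
    return top - scores.max(axis=1)

W = np.array([[2.0, 0.0], [0.0, 2.0]])                # hypothetical 2-class linear scorer
X = np.array([[1.0, -0.5], [-0.3, 1.0]])
y = np.array([0, 1])
print("all margins positive:", bool(np.all(margins(W, X, y) > 0)))
```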
4.2 Sum losses
In this section, we discuss guarantees for sum losses, that is loss functions defined via a sum, as in (Weston and Watkins, 1998):
Φ^sum(h, x, y) = ∑_{y′≠y} Φ(h(x, y) − h(x, y′)).  (9)
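An analogous sketch of Eq. (9) (again with hypothetical scores and the hinge loss used only as an example):

```python
import numpy as np

def sum_loss(scores, y, phi):
    # Phi^sum(h, x, y) = sum_{y' != y} Phi(h(x, y) - h(x, y'))
    return sum(phi(scores[y] - s) for i, s in enumerate(scores) if i != y)

phi_hinge = lambda t: max(0.0, 1.0 - t)
print(sum_loss(np.array([2.0, 0.5, -1.0]), y=1, phi=phi_hinge))   # 2.5 + 0.0 = 2.5
```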
i) Negative results. We first give a negative result showing that when using as auxiliary function the hinge-loss, the sum loss cannot benefit from any useful H-consistency guarantee. The proof is deferred to Appendix J. Theorem 10 (Negative results for hinge loss). Assume that c > 2. Suppose that H is symmetric and complete. If for a non-decreasing function f ∶R+ → R+, the following H-consistency bound holds for any hypothesis h ∈H and any distribution D:
R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ f(R_{Φ^sum_hinge}(h) − R*_{Φ^sum_hinge,H}),  (10)
then f is lower bounded by 1/6.
ii) Positive results. We then complement this negative result with positive results when using the exponential loss, the squared hinge-loss, and the ρ-margin loss, as summarized in Table 2. The proofs with corresponding summarized Theorems 22, 23 and 24 are included in Appendix J for completeness. For Φsumρ , the symmetry and completeness assumption can be relaxed to symmetry and the condition that for any x ∈ X, there exists a hypothesis h ∈H such that ∣h(x, i) − h(x, j)∣ ≥ ρ for any i ≠ j ∈ Y, as shown in Theorem 24. In the proof, we introduce an auxiliary Lemma 21 in Appendix I, which would be helpful for lower bounding the conditional regret of Φsumρ with that of the multi-class 0/1 loss.
Table 2: H-consistency bounds for sum losses with symmetric and complete hypothesis sets.
Sum loss   H-consistency bound (Theorems 22, 23 and 24)
Φ^sum_{sq−hinge}:  R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ (R_{Φ^sum_{sq−hinge}}(h) − R*_{Φ^sum_{sq−hinge},H} + M_{Φ^sum_{sq−hinge},H})^{1/2} − M_{ℓ_{0−1},H}
Φ^sum_exp:  R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ √2 (R_{Φ^sum_exp}(h) − R*_{Φ^sum_exp,H} + M_{Φ^sum_exp,H})^{1/2} − M_{ℓ_{0−1},H}
Φ^sum_ρ:  R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ R_{Φ^sum_ρ}(h) − R*_{Φ^sum_ρ,H} + M_{Φ^sum_ρ,H} − M_{ℓ_{0−1},H}
Table 3: H-consistency bounds for constrained losses with symmetric and complete hypothesis sets.
Constrained loss   H-consistency bound (Theorems 25, 26, 27 and 28)
Φ^cstnd_hinge:  R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ R_{Φ^cstnd_hinge}(h) − R*_{Φ^cstnd_hinge,H} + M_{Φ^cstnd_hinge,H} − M_{ℓ_{0−1},H}
Φ^cstnd_{sq−hinge}:  R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ (R_{Φ^cstnd_{sq−hinge}}(h) − R*_{Φ^cstnd_{sq−hinge},H} + M_{Φ^cstnd_{sq−hinge},H})^{1/2} − M_{ℓ_{0−1},H}
Φ^cstnd_exp:  R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ √2 (R_{Φ^cstnd_exp}(h) − R*_{Φ^cstnd_exp,H} + M_{Φ^cstnd_exp,H})^{1/2} − M_{ℓ_{0−1},H}
Φ^cstnd_ρ:  R_{ℓ_{0−1}}(h) − R*_{ℓ_{0−1},H} ≤ R_{Φ^cstnd_ρ}(h) − R*_{Φ^cstnd_ρ,H} + M_{Φ^cstnd_ρ,H} − M_{ℓ_{0−1},H}
4.3 Constrained losses
In this section, we discuss guarantees for constrained losses, that is loss functions defined via a constraint, as in (Lee et al., 2004):
Φ^cstnd(h, x, y) = ∑_{y′≠y} Φ(−h(x, y′))  (11)
with the constraint that ∑_{y∈Y} h(x, y) = 0. We present a series of positive results by proving multi-class H-consistency bounds when using as an auxiliary function the hinge loss, the squared hinge loss, the exponential loss, and the ρ-margin loss, as summarized in Table 3. As with the binary case (Awasthi et al., 2022a), the bound admits a linear dependency for Φ^cstnd_hinge and Φ^cstnd_ρ, in contrast with a square-root dependency for Φ^cstnd_{sq−hinge} and Φ^cstnd_exp, as illustrated in Figure 1. The proofs, with the corresponding summarized Theorems 25, 26, 27 and 28, are included in Appendix K for completeness. For Φ^cstnd_ρ, the symmetric and complete assumption can be relaxed to symmetry together with the condition that, for any x ∈ X, there exists a hypothesis h ∈ H such that h(x, y) ≤ −ρ for any y ≠ y_max, as shown in Theorem 28.
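A sketch of Eq. (11) under the zero-sum constraint (hypothetical scores chosen to sum to zero; the hinge loss is used only as an example):

```python
import numpy as np

def constrained_loss(scores, y, phi):
    # Phi^cstnd(h, x, y) = sum_{y' != y} Phi(-h(x, y')), with sum_y h(x, y) = 0
    assert abs(scores.sum()) < 1e-9, "scores must sum to zero"
    return sum(phi(-s) for i, s in enumerate(scores) if i != y)

phi_hinge = lambda t: max(0.0, 1.0 - t)
scores = np.array([1.0, 0.5, -1.5])                    # sums to zero
print(constrained_loss(scores, y=0, phi=phi_hinge))    # phi(-0.5) + phi(1.5) = 1.5
```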
The main idea of the proofs in this section is to leverage the constraint condition of Lee et al. (2004) that the scores sum to zero, and to appropriately choose a hypothesis h̄ that differs from h only by its scores for h(x) and y_max. We can then upper bound the minimal conditional risk by the conditional risk of h̄, without having to derive the closed-form expression of the minimal conditional risk.
As shown by Steinwart (2007, Theorem 3.2), for the family of all measurable functions, the minimizability gaps vanish: M`0−1,Hall = MΦsum,Hall = MΦcstnd,Hall = 0, for Φ = Φhinge, Φsq−hinge, Φexp and Φρ. Therefore, when H =Hall, our quantitative bounds in Table 2 and Table 3 imply the asymptotic consistency results of those multi-class losses in (Tewari and Bartlett, 2007), which shows that our results are stronger and more significant. We also provide bounds for multi-class losses using a non-convex auxiliary function, which are not studied in the previous work.
5 Adversarial H-consistency bounds
In this section, we analyze multi-class H-consistency bounds in the adversarial scenario (ℓ_2 = ℓ_γ). For any x ∈ X, we denote by H_γ(x) the set of hypotheses h with a positive margin on the ball of radius γ around x, H_γ(x) = {h ∈ H: inf_{x′: ∥x−x′∥_p ≤ γ} ρ_h(x′, h(x)) > 0}, and by H_γ(x) the set of labels generated by these hypotheses, H_γ(x) = {h(x): h ∈ H_γ(x)}. When H is symmetric, we have H_γ(x) = Y iff H_γ(x) ≠ ∅. The following lemma characterizes the conditional ε-regret for the adversarial 0/1 loss, which will be helpful for applying Theorem 1 and Theorem 2 to the adversarial scenario.
Lemma 11. For any x ∈ X, the minimal conditional ℓ_γ-risk and the conditional ε-regret for ℓ_γ can be expressed as follows:
C*_{ℓ_γ,H}(x) = 1 − max_{y∈H_γ(x)} p(x, y) 1_{H_γ(x)≠∅}
[∆C_{ℓ_γ,H}(h, x)]_ε = [max_{y∈H_γ(x)} p(x, y) − p(x, h(x)) 1_{h∈H_γ(x)}]_ε if H_γ(x) ≠ ∅, and 0 otherwise.
The proof of Lemma 11 is presented in Appendix F. By Lemma 11, Theorems 1 and 2 can be instantiated as Theorems 12 and 13 in the adversarial scenario as follows, where H-consistency bounds are provided between the adversarial multi-class 0/1 loss and a surrogate loss ℓ.
Theorem 12 (Adversarial distribution-dependent Ψ-bound). Assume that there exists a convex function Ψ: R_+ → R with Ψ(0) = 0 and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ {x ∈ X: H_γ(x) ≠ ∅} and D ∈ P:
Ψ([max_{y∈H_γ(x)} p(x, y) − p(x, h(x)) 1_{h∈H_γ(x)}]_ε) ≤ ∆C_{ℓ,H}(h, x).  (12)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
Ψ(R_{ℓ_γ}(h) − R*_{ℓ_γ,H} + M_{ℓ_γ,H}) ≤ R_ℓ(h) − R*_{ℓ,H} + M_{ℓ,H} + max{0, Ψ(ε)}.  (13)
Theorem 13 (Adversarial distribution-dependent Γ-bound). Assume that there exists a non-negative concave function Γ: R_+ → R and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ {x ∈ X: H_γ(x) ≠ ∅} and D ∈ P:
[max_{y∈H_γ(x)} p(x, y) − p(x, h(x)) 1_{h∈H_γ(x)}]_ε ≤ Γ(∆C_{ℓ,H}(h, x)).  (14)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
R_{ℓ_γ}(h) − R*_{ℓ_γ,H} ≤ Γ(R_ℓ(h) − R*_{ℓ,H} + M_{ℓ,H}) − M_{ℓ_γ,H} + ε.  (15)
6 Conclusion
We presented a comprehensive study of H-consistency bounds for multi-class classification, including the analysis of the three most commonly used families of multi-class surrogate losses (max losses, sum losses and constrained losses) and the study of surrogate losses for adversarial robustness. Our theoretical analysis helps determine which surrogate losses admit a favorable guarantee for a given hypothesis set H. Our bounds can help guide the design of multi-class classification algorithms for both the adversarial and non-adversarial settings. They also help compare different surrogate losses for the same setting and the same hypothesis set. Of course, in addition to the functional form of the H-consistency bound, the approximation property of a surrogate loss function combined with the hypothesis set plays an important role.

1. What is the focus and contribution of the paper regarding theoretical guarantees in multi-class settings?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and precision?
3. What are the weaknesses of the paper, especially regarding its clarity and accessibility?
4. Do you have any concerns or suggestions regarding the relevance and impact of the work?
5. What are the limitations of the paper, and how might they be addressed in future research?
Summary Of The Paper
This work is interested in deriving theoretical guarantees on the generalization error for the 0-1 loss in a multi-class setting when the learner minimizes a surrogate loss like the hinge loss. This problem has already been widely studied in the literature with notions like Bayes-consistency or H-consistency. However, the authors argue that all the previous notions either hold only for the full family of measurable functions and/or are asymptotic. In contrast, this work aims at deriving non-asymptotic guarantees of consistency for a specific set H called H-consistency bounds. A previous paper has investigated the case of binary classification. This paper shows similar results for the multi-class setting, also proving negative results when such extensions were not possible.
Strengths And Weaknesses
The main contribution of the paper is to show H-consistency bounds (notion introduced in a previous paper) in the multi-class setting. Four surrogates are studied. The results are summarized in the three tables of the paper. The paper is well written and the mathematical statements are clear and precise. Nevertheless, I found the paper hard to follow. This may be due to my lack of familiarity with this kind of bounds. The paper may benefit from moving some of the theorems to the appendices (keeping the tables unchanged) and using the remaining space to add more informal explanations, maybe focusing on one result. The paper is clean but a bit arid in the current state. I was not able to assess the relevance and the impact of this work.
Questions
.
Limitations
.
NIPS | Title
Multi-Class $H$-Consistency Bounds
Abstract
We present an extensive study of H-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayesconsistency, H-calibration or H-consistency, and more informative than excess error bounds derived for H being the family of all measurable functions. We give a series of new H-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, both in the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions used. We also prove that no non-trivial H-consistency bound can be given in some cases. To our knowledge, these are the first H-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees.
N/A
We present an extensive study of H-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayesconsistency, H-calibration or H-consistency, and more informative than excess error bounds derived for H being the family of all measurable functions. We give a series of new H-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, both in the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions used. We also prove that no non-trivial H-consistency bound can be given in some cases. To our knowledge, these are the first H-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees.
1 Introduction
The loss functions optimized by learning algorithms are often distinct from the original one specified for a task. This is typically because optimizing the original loss is computationally intractable or because it does not admit some favorable properties of differentiability or smoothness. As an example, the loss function minimized by the support vector machine (SVM) algorithm is the hinge loss (Cortes and Vapnik, 1995) or the one associated to AdaBoost is the exponential loss (Schapire and Freund, 2012), both distinct from the binary classification loss used as a benchmark in applications. But, what learning guarantees can we rely on when using a surrogate loss? This is a fundamental question in learning theory that directly relates to the design of algorithms.
The standard property of Bayes-consistency, which has been shown to hold for several surrogate losses (Zhang, 2004a,b; Bartlett, Jordan, and McAuliffe, 2006; Tewari and Bartlett, 2007; Steinwart, 2007), does not supply a sufficient guarantee, since it only ensures that, asymptotically, near optimal minimizers of the surrogate excess loss nearly optimally minimize the target excess error. Moreover, this asymptotic property only holds for the full family of measurable functions, which of course is distinct from the more restricted hypothesis set used by a learning algorithm. In fact, it has been shown by Long and Servedio (2013), both theoretically and empirically, that for some hypothesis sets and distributions, the expected error of an algorithm minimizing a Bayes-consistent loss is bounded below by a positive constant, while that of an algorithm minimizing an inconsistent loss goes to zero.
This suggests that a hypothesis set-dependent notion of H-consistency is more pertinent to the study of consistency for learning (Long and Servedio, 2013), which has been used by Kuznetsov et al. (2014); Cortes et al. (2016a,b) and Zhang and Agarwal (2020) and more generally by Awasthi, Frank, Mao, Mohri, and Zhong (2021a) in an extensive study of both binary classification and adversarial binary classification losses, as defined in (Goodfellow et al., 2014; Madry et al., 2017; Tsipras et al., 2018; Carlini and Wagner, 2017). Nevertheless, H-consistency remains an asymptotic property and does not provide guarantees for approximate surrogate loss minimizers that rely on finite samples.
Awasthi, Mao, Mohri, and Zhong (2022a) recently presented a series of results providing Hconsistency bounds in binary classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set H, expressed in terms of the surrogate loss estimation error of that predictor. These guarantees are significantly stronger than the H-calibration or H-consistency properties studied by Awasthi et al. (2021b,c). They are also more informative than similar excess error bounds derived in the literature, which correspond to the special case where H is the family of all measurable functions (Zhang, 2004a; Bartlett et al., 2006; Mohri et al., 2018). Combining H-consistency bounds with existing surrogate loss estimation bounds directly yields finite sample bounds on the estimation error for the original loss. See Appendix C for a more detailed discussion.
This paper presents an extensive study of H-consistency bounds for multi-class classification. We show in Section 4.1 that, in general, no non-trivial H-consistency bounds can be derived for multiclass max losses such as those of Crammer and Singer (2001), when used with a convex loss auxiliary function such as the hinge loss. On the positive side, we prove multi-class H-consistency bounds for max losses under a realizability assumption and give multi-class H-consistency bounds using as an auxiliary function the ρ-margin loss, without requiring a realizability assumption. For sum losses, that is multi-class losses such as that of Weston and Watkins (1998), we give a series of results, including a negative result when using as auxiliary function the hinge-loss, and H-consistency bounds when using the exponential loss, the squared hinge-loss, and the ρ-margin loss (Section 4.2). We also present a series of results for the so-called constrained losses, such as the loss function adopted by Lee et al. (2004) in the analysis of multi-class SVM. Here, we prove multi-class H-consistency bounds when using as an auxiliary function the hinge-loss, the squared hinge-loss, the exponential loss, and the ρ-margin loss (Section 4.3). We further give multi-class adversarial H-consistency bounds for all three of the general multi-class losses just mentioned (max losses, sum losses and constrained losses) in Section 5.
We are not aware of any prior H-consistency bound derived in the multi-class setting, even in the special case of H being the family of all measurable functions, whether in the non-adversarial or adversarial setting. All of our results are novel, including our proof techniques. Our results are given for the hypothesis set H being the family of all measurable functions, the family of linear functions, or the family of one-hidden-layer ReLU neural networks. The binary classification results of Awasthi et al. (2022a) do not readily extend to the multi-class setting since the study of calibration and conditional risk is more complex, the form of the surrogate losses is more diverse, and in general the analysis is more involved and requires entirely novel proof techniques in the multi-class setting (see Section 3 for a more detailed discussion of this point).
We give a detailed discussion of related work in Appendix A. We start with the introduction of several multi-class definitions, as well as key concepts and definitions related to the study of H-consistency bounds (Section 2).
2 Preliminaries
We consider the familiar multi-class classification scenario with c ≥ 2 classes. We denote by X the input space and by Y = {1, . . . , c} the set of classes or categories. Let H be a hypothesis set of functions mapping from X × Y to R. The label h(x) associated by a hypothesis h ∈ H to x ∈ X is the one with the largest score: h(x) = argmaxy∈Y h(x, y) with an arbitrary but fixed deterministic strategy used for breaking ties. For simplicity, we fix that strategy to be the one selecting the label with the highest index under the natural ordering of labels. See Appendix B for a more detailed discussion of this choice.
The margin ρ_h(x, y) of a hypothesis h ∈ H for a labeled example (x, y) ∈ X × Y is defined by
ρ_h(x, y) = h(x, y) − max_{y′≠y} h(x, y′),
that is the difference between the score assigned to (x, y) and that of the runner-up. Given a distribution D over X × Y and a loss function ℓ: H × X × Y → R, the generalization error of a hypothesis h ∈ H and the minimal generalization error are defined as follows:
R_ℓ(h) = E_{(x,y)∼D}[ℓ(h, x, y)]  and  R*_{ℓ,H} = inf_{h∈H} R_ℓ(h).
The goal in multi-class classification is to select a hypothesis h ∈ H with small generalization error with respect to the multi-class 0/1 loss defined, for any h ∈ H, by ℓ_{0−1}(h, x, y) = 1_{h(x)≠y}. In the adversarial scenario, the goal is to select a hypothesis h ∈ H with small adversarial generalization error defined, for any γ ∈ (0,1) and p ∈ [1, +∞], by R_{ℓ_γ}(h) = E_{(x,y)∼D}[ℓ_γ(h, x, y)], where
ℓ_γ(h, x, y) = sup_{x′: ∥x−x′∥_p ≤ γ} 1_{ρ_h(x′,y)≤0} = 1_{inf_{x′: ∥x−x′∥_p ≤ γ} ρ_h(x′,y)≤0},
is the adversarial multi-class 0/1 loss. More generally, the adversarial generalization error and minimal adversarial generalization error for a loss function ℓ(h, x, y) are defined as follows:
R_{ℓ̃}(h) = E_{(x,y)∼D}[ℓ̃(h, x, y)]  and  R*_{ℓ̃,H} = inf_{h∈H} R_{ℓ̃}(h),
where ℓ̃(h, x, y) = sup_{x′: ∥x−x′∥_p ≤ γ} ℓ(h, x′, y) is the supremum-based counterpart of ℓ.
For a distribution D over X × Y, we define, for any x ∈ X, p(x) = (p(x,1), . . . , p(x, c)), where p(x, y) = D(Y = y | X = x) is the conditional probability of Y = y given X = x. We can then write the generalization error as R_ℓ(h) = E_X[C_ℓ(h, x)], where C_ℓ(h, x) is the conditional ℓ-risk defined by C_ℓ(h, x) = ∑_{y∈Y} p(x, y) ℓ(h, x, y). We will denote by P a set of distributions D over X × Y and by P_all the set of all such distributions. For convenience, we define y_max by y_max = argmax_{y∈Y} p(x, y). When there is a tie, we pick the label with the highest index under the natural ordering of labels.
The minimal conditional ℓ-risk is denoted by C*_{ℓ,H}(x) = inf_{h∈H} C_ℓ(h, x). We also use the shorthand ∆C_{ℓ,H}(h, x) = C_ℓ(h, x) − C*_{ℓ,H}(x) for the gap, and call ∆C_{ℓ,H}(h, x) 1_{∆C_{ℓ,H}(h,x)>ε} the conditional ε-regret for ℓ. For convenience, we also define, for any vector τ = (τ_1, . . . , τ_c) in the probability simplex of R^c, C_ℓ(h, x, τ) = ∑_{y∈Y} τ_y ℓ(h, x, y), C*_{ℓ,H}(x, τ) = inf_{h∈H} C_ℓ(h, x, τ) and ∆C_{ℓ,H}(h, x, τ) = C_ℓ(h, x, τ) − C*_{ℓ,H}(x, τ). Thus, we have ∆C_{ℓ,H}(h, x, p(x)) = ∆C_{ℓ,H}(h, x). For any ε > 0, we will denote by [t]_ε the ε-truncation of t ∈ R defined by t 1_{t>ε}. Thus, the conditional ε-regret can be rewritten as [∆C_{ℓ,H}(h, x)]_ε. For a hypothesis set H and distribution D, we also define the (ℓ, H)-minimizability gap as M_{ℓ,H} = R*_{ℓ,H} − E_X[C*_{ℓ,H}(x)], that is the difference between the best-in-class error and the expectation of the minimal conditional ℓ-risk. This is a key quantity appearing in our bounds that we cannot hope to estimate or minimize. Its value only depends on the distribution D and the hypothesis set H. As an example, when H is the family of all measurable functions, then the minimizability gap for the multi-class 0/1 loss is zero for any distribution D.
3 General theorems
The general form of the H-consistency bounds that we are seeking for a surrogate loss ℓ_1 of a target loss ℓ_2 is R_{ℓ_2}(h) − R*_{ℓ_2,H} ≤ f(R_{ℓ_1}(h) − R*_{ℓ_1,H}) for all h ∈ H, for some non-decreasing function f. To derive such bounds for surrogate multi-class losses, we draw on the following two general theorems, which show that, under some conditions, the target loss estimation error can be bounded by some functional form of the surrogate loss estimation error involving minimizability gaps.
Theorem 1 (Distribution-dependent Ψ-bound). Assume that there exists a convex function Ψ: R_+ → R with Ψ(0) ≥ 0 and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P: Ψ([∆C_{ℓ_2,H}(h, x)]_ε) ≤ ∆C_{ℓ_1,H}(h, x). Then, for any hypothesis h ∈ H and any distribution D ∈ P,
Ψ(R_{ℓ_2}(h) − R*_{ℓ_2,H} + M_{ℓ_2,H}) ≤ R_{ℓ_1}(h) − R*_{ℓ_1,H} + M_{ℓ_1,H} + max{Ψ(0), Ψ(ε)}.
Theorem 2 (Distribution-dependent Γ-bound). Assume that there exists a concave function Γ: R_+ → R and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P: [∆C_{ℓ_2,H}(h, x)]_ε ≤ Γ(∆C_{ℓ_1,H}(h, x)). Then, for any hypothesis h ∈ H and any distribution D ∈ P,
R_{ℓ_2}(h) − R*_{ℓ_2,H} ≤ Γ(R_{ℓ_1}(h) − R*_{ℓ_1,H} + M_{ℓ_1,H}) − M_{ℓ_2,H} + ε.
The theorems show that, to derive such bounds for a specific hypothesis set and a set of distributions, it suffices to verify that for the same hypothesis set and set of distributions, the conditional -regret for the target loss can be upper bounded with the same functional form of the gap between the conditional risk and minimal conditional risk of the surrogate loss. These results are similar to their binary classification counterparts due to Awasthi et al. (2022b). In particular, the conditional `-risk C`(h,x) in our theorems is the multi-class generalization of their binary definition. The proofs are similar and are included in Appendix E for completeness.
For a given hypothesis set H, the resulting bounds suggest three key ingredients for the choice of a surrogate loss: (1) the functional form of the H-consistency bound, which is specified by the function Ψ or Γ; (2) the smoothness of the loss and more generally its optimization virtues, as needed for the minimization of R`1(h) − R∗`1,H; (3) and the approximation properties of the surrogate loss function which determine the value of the minimizability gap M`1,H. Our quantitative H-consistency bounds can help select the most favorable surrogate loss function among surrogate losses with good optimization merits and comparable approximation properties.
In Section 4 and Section 5, we will apply Theorem 1 and Theorem 2 to the analysis of multi-class loss functions and hypothesis sets widely used in practice. Here, we wish to first comment on the novelty of our results and proof techniques. Let us emphasize that although the general tools of Theorems 1 and 2 are the multi-class generalization of that in (Awasthi et al., 2022a), the binary classification results of Awasthi et al. (2022a) do not readily extend to the multi-class setting. This is true, even in the classical study of Bayes-consistency, where the multi-class setting (Tewari and Bartlett, 2007) does not readily follow the binary case (Bartlett et al., 2006) and required an alternative analysis and new proofs. Note that, additionally, in the multi-class setting, surrogate losses are more diverse: we will distinguish max losses, sum losses, and constrained losses and present an analysis for each loss family with various auxiliary functions for each (see Section 4).
Proof techniques. More specifically, the need for novel proof techniques stems from the following. To use Theorem 1 and Theorem 2, we need to find Ψ and Γ such that the inequality conditions in these theorems hold. This requires us to characterize the conditional risk and the minimal conditional risk of the multi-class zero-one loss function and the corresponding ones for diverse surrogate loss functions in both the non-adversarial and adversarial scenario. Unlike the binary case, such a characterization in the multi-class setting is very difficult. For example, for the constrained loss, solving the minimal conditional risk given a hypothesis set is equivalent to solving a c-dimensional constrained optimization problem, which does not admit an analytical expression. In contrast, in the binary case, solving the minimal conditional risk is equivalent to solving a minimization problem for a univariate function and the needed function Ψ can be characterized explicitly by the H-estimation error transformation, as shown in (Awasthi et al., 2022a). Unfortunately, such binary classification transformation tools cannot be adapted to the multi-class setting. Instead, in our proof for the multiclass setting, we adopt a new idea that avoids directly characterizing the explicit expression of the minimal conditional risk.
For example, for the constrained loss, we leverage the condition of (Lee et al., 2004) that the scores sum to zero, and appropriately choose an auxiliary hypothesis h̄ that differs from h only by its scores for h(x) and ymax (see Appendix K). Then, we can upper bound the minimal conditional risk by the conditional risk of h̄ without having to derive the closed-form expression of the minimal conditional risk. Therefore, the conditional regret of the surrogate loss can be lower bounded by that of the zero-one loss with an appropriate function Ψ. To the best of our knowledge, this proof idea and technique are entirely novel. We believe that they can be used for the analysis of other multi-class surrogate losses. Furthermore, all of our multi-class H-consistency results are new. Likewise, our proofs of the H-consistency bounds for sum losses for the squared hinge loss and exponential loss similarly use a new technique and idea, and so does the proof for the ρ-margin loss. Furthermore, we also present an analysis of the adversarial scenario (see Section 5), for which the multi-class proofs are also novel. Finally, our bounds in the multi-class setting are more general: for c = 2, we recover the binary classification bounds of (Awasthi et al., 2022a). Thus, our bounds benefit from the same tightness guarantees shown by (Awasthi et al., 2022a). A further analysis of the tightness of our guarantees in the multi-class setting is left to future work.
4 H-consistency bounds
In this section, we discuss H-consistency bounds in the non-adversarial scenario where the target loss `2 is `0−1, the multi-class 0/1 loss. The lemma stated next characterizes the minimal conditional `0−1-risk and the corresponding conditional ε-regret, which will be helpful for instantiating Theorems 1 and 2 in the non-adversarial scenario. For any x ∈ X, we will denote by H(x) the set of labels generated by hypotheses in H: H(x) = {h(x)∶ h ∈ H}. Lemma 3. For any x ∈ X, the minimal conditional `0−1-risk and the conditional ε-regret for `0−1 can be expressed as follows:
C∗`0−1,H(x) = 1 − maxy∈H(x) p(x, y)
[∆C`0−1,H(h,x)]ε = [maxy∈H(x) p(x, y) − p(x, h(x))]ε.
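As a quick numeric illustration of Lemma 3 (hypothetical values, not taken from the paper), these quantities can be computed directly from the conditional probabilities p(x, ·) and the predicted label:

# Hypothetical example with H(x) = Y = {0, 1, 2} and a fixed truncation threshold eps.
p = [0.5, 0.3, 0.2]    # conditional probabilities p(x, y)
h_x = 1                # label predicted by h at x
eps = 0.25

min_cond_risk = 1 - max(p)                                   # C*(x) = 0.5
cond_regret = max(p) - p[h_x]                                # 0.2
trunc_regret = cond_regret if cond_regret > eps else 0.0     # eps-truncation: 0.0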
The proof of Lemma 3 is given in Appendix F. By Lemma 3, Theorems 1 and 2 can be instantiated as Theorems 4 and 5 in the non-adversarial scenario as follows, where H-consistency bounds are provided between the multi-class 0/1 loss and a surrogate loss `. Theorem 4 (Non-adversarial distribution-dependent Ψ-bound). Assume that there exists a convex function Ψ∶R+ → R with Ψ(0) ≥ 0 and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P:
Ψ([maxy∈H(x) p(x, y) − p(x, h(x))]ε) ≤ ∆C`,H(h,x). (1)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
Ψ(R`0−1(h) − R∗`0−1,H + M`0−1,H) ≤ R`(h) − R∗`,H + M`,H + max{Ψ(0), Ψ(ε)}. (2)
Theorem 5 (Non-adversarial distribution-dependent Γ-bound). Assume that there exists a concave function Γ∶R+ → R and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ X and D ∈ P:
[maxy∈H(x) p(x, y) − p(x, h(x))]ε ≤ Γ(∆C`,H(h,x)). (3)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have
R`0−1(h) − R∗`0−1,H ≤ Γ(R`(h) − R∗`,H + M`,H) − M`0−1,H + ε. (4)
In the following, we will apply Theorems 4 and 5 to study the H-consistency bounds for different families of multi-class losses parameterized by various auxiliary functions, for several general hypothesis sets. It is worth emphasizing that the form of the surrogate losses is more diverse in the multi-class setting, that each case requires a careful analysis, and that the techniques used in the binary case (Awasthi et al., 2022a) do not apply and cannot be readily extended to our setting.
Hypothesis sets. Let Bdp(r) = {z ∈ Rd ∣ ∥z∥p ≤ r} denote the d-dimensional `p-ball with radius r, with p ∈ [1,+∞]. Without loss of generality, in the following, we choose X = Bdp(1). Let p, q ∈ [1,+∞] be conjugate indices, that is, 1/p + 1/q = 1. In the following, we will specifically study three families: the family of all measurable functions Hall, the family of linear hypotheses
Hlin = {(x, y) ↦ wy ⋅ x + by ∣ ∥wy∥q ≤ W, ∣by∣ ≤ B},
and that of one-hidden-layer ReLU networks defined by the following, where (⋅)+ = max(⋅, 0):
HNN = {(x, y) ↦ ∑nj=1 uy,j (wy,j ⋅ x + by,j)+ ∣ ∥uy∥1 ≤ Λ, ∥wy,j∥q ≤ W, ∣by,j∣ ≤ B}.
Multi-class loss families. We will study three broad families of multi-class loss functions: max losses, sum losses and constrained losses, each parameterized by an auxiliary function Φ on R, assumed to be non-increasing and non-negative. In particular, we will consider the following
common auxiliary functions: the hinge loss Φhinge(t) = max{0, 1 − t}, the squared hinge loss Φsq−hinge(t) = max{0, 1 − t}², the exponential loss Φexp(t) = e−t, and the ρ-margin loss Φρ(t) = min{max{0, 1 − t/ρ}, 1}. Note that the first three auxiliary functions are convex, while the last one is not. Figure 1 shows plots of these auxiliary functions.
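For concreteness, the four auxiliary functions can be transcribed directly into code (a small illustrative sketch; the parameter rho is the margin parameter of Φρ):

import math

def phi_hinge(t):        return max(0.0, 1.0 - t)
def phi_sq_hinge(t):     return max(0.0, 1.0 - t) ** 2
def phi_exp(t):          return math.exp(-t)
def phi_rho(t, rho=1.0): return min(max(0.0, 1.0 - t / rho), 1.0)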
We will say that a hypothesis set H is symmetric if there exists a family F of functions f mapping from X to R such that {[h(x,1), . . . , h(x, c)]∶h ∈H} = {[f1(x), . . . , fc(x)]∶ f1, . . . , fc ∈ F} and ∣{f(x)∶ f ∈ F}∣ ≥ 2 for any x ∈ X. The hypothesis sets defined above (Hall, Hlin and HNN) are all symmetric. Note that for a symmetric hypothesis set H, we have H(x) = Y. We will say that a hypothesis set H is complete if the set of scores it generates spans R, that is, {h(x, y)∶h ∈H} = R, for any (x, y) ∈ X × Y. The hypothesis sets defined above, Hall, Hlin and HNN with B = +∞ are all complete.
4.1 Max losses
In this section, we discuss guarantees for max losses, that is loss functions that can be defined by the application of an auxiliary function Φ to the margin ρh(x, y), as in (Crammer and Singer, 2001):
∀(x, y) ∈ X × Y, Φmax(h, x, y) = maxy′≠y Φ(h(x, y) − h(x, y′)) = Φ(ρh(x, y)). (5)
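A minimal sketch of how the max loss of Equation (5) is evaluated on a single example, assuming the scores h(x, ·) are given as a list indexed by label; the second equality in (5) holds because Φ is non-increasing, which is exactly what the code exploits:

def max_loss(scores, y, phi):
    # rho_h(x, y): margin of label y against the best competing label
    margin = scores[y] - max(s for j, s in enumerate(scores) if j != y)
    return phi(margin)   # = max_{y' != y} phi(scores[y] - scores[y'])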
i) Negative results. We first give negative results showing that max losses Φmax(h,x, y) with convex and non-increasing auxiliary functions Φ do not admit useful H-consistency bounds for multi-class classification (c > 2). The proof is given in Appendix G. Theorem 6 (Negative results for convex Φ). Assume that c > 2. Suppose that Φ is convex and non-increasing, and that H satisfies the following condition: there exist x ∈ X and h ∈ H such that ∣H(x)∣ ≥ 2 and the scores h(x, y) are equal for all y ∈ Y. If for a non-decreasing function f ∶R+ → R+, the following H-consistency bound holds for any hypothesis h ∈ H and any distribution D:
R`0−1(h) − R∗`0−1,H ≤ f(RΦmax(h) − R∗Φmax,H), (6)
then f is lower bounded by 1/2.
The condition on the hypothesis set in Theorem 6 is very general and all symmetric hypothesis sets verify the condition, e.g. Hall, Hlin and HNN. It is also worth pointing out that when c = 2, that is, in binary classification, Theorem 6 does not hold. Indeed, Awasthi et al. (2022a) present a series of results providing H-consistency bounds for convex Φ in the binary case. In the proof, we make use of the assumption that c > 2 and thus are able to take a probability vector p(x) whose dimension is at least three, which is crucial for the proof.
ii) Positive results without distributional assumptions. On the positive side, the max loss with the non-convex auxiliary function Φ = Φρ admits H-consistency bounds. Theorem 7 (H-consistency bound of Φmaxρ ). Suppose that H is symmetric. Then, for any hypothesis h ∈H and any distribution D,
R`0−1(h) − R∗`0−1,H ≤ (RΦmaxρ(h) − R∗Φmaxρ,H + MΦmaxρ,H) / min{1, infx∈X suph∈H ρh(x, h(x))/ρ} − M`0−1,H. (7)
See Appendix G for the proof. Theorem 7 is very powerful since it only requires H to be symmetric. We can use it to derive H-consistency bounds for Φmaxρ with common symmetric hypothesis sets
such as Hall, Hlin and HNN, as summarized in Table 1. The proofs with corresponding summarized Corollaries 18, 19 and 20 are included in Appendix H. In the proofs, we characterize the term infx∈X suph∈H ρh(x,h(x)) for each hypothesis set. Note that by Theorem 6, there is no useful H-consistency bound for the max loss with Φ = Φhinge, Φsq−hinge or Φexp in these cases. However, under the realizability assumption (Definition 8), we will show that such bounds hold.
iii) Positive results with realizable distributions. We consider the H-realizability condition (Long and Servedio, 2013; Kuznetsov et al., 2014; Cortes et al., 2016a,b; Zhang and Agarwal, 2020; Awasthi et al., 2021a) which is defined as follows. Definition 8 (H-realizability). A distribution D over X × Y is H-realizable if it labels points according to a deterministic model in H, i.e., if ∃h ∈H such that P(x,y)∼D(ρh(x, y) > 0) = 1. Theorem 9 (Realizable H-consistency bound of Φmax). Suppose that H is symmetric and complete, and Φ is non-increasing and satisfies that limt→+∞ Φ(t) = 0. Then, for any hypothesis h ∈ H and any H-realizable distribution D, we have
R`0−1(h) − R∗`0−1,H ≤ RΦmax(h) − R∗Φmax,H + MΦmax,H. (8)
See Appendix G for the proof. Long and Servedio (2013, Theorem 9) show that Φmaxhinge is realizable H-consistent for any symmetric hypothesis set H that is closed under scaling. Since for any Hrealizable distribution, the assumption that H is closed under scaling implies that H is complete and MΦmax,H = 0, Theorem 9 also yields a quantitative relationship in that case that is stronger than the asymptotic consistency property of that previous work.
4.2 Sum losses
In this section, we discuss guarantees for sum losses, that is loss functions defined via a sum, as in (Weston and Watkins, 1998):
Φsum(h, x, y) = ∑y′≠y Φ(h(x, y) − h(x, y′)). (9)
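The sum loss of Equation (9) on a single example, in the same illustrative style as the earlier sketches (scores and phi as before):

def sum_loss(scores, y, phi):
    # Phi^sum(h, x, y) = sum_{y' != y} phi(h(x, y) - h(x, y'))
    return sum(phi(scores[y] - s) for j, s in enumerate(scores) if j != y)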
i) Negative results. We first give a negative result showing that when using as auxiliary function the hinge-loss, the sum loss cannot benefit from any useful H-consistency guarantee. The proof is deferred to Appendix J. Theorem 10 (Negative results for hinge loss). Assume that c > 2. Suppose that H is symmetric and complete. If for a non-decreasing function f ∶R+ → R+, the following H-consistency bound holds for any hypothesis h ∈H and any distribution D:
R`0−1(h) − R∗`0−1,H ≤ f(RΦsumhinge(h) − R∗Φsumhinge,H), (10)
then f is lower bounded by 1/6.
ii) Positive results. We then complement this negative result with positive results when using the exponential loss, the squared hinge-loss, and the ρ-margin loss, as summarized in Table 2. The proofs with corresponding summarized Theorems 22, 23 and 24 are included in Appendix J for completeness. For Φsumρ , the symmetry and completeness assumption can be relaxed to symmetry and the condition that for any x ∈ X, there exists a hypothesis h ∈H such that ∣h(x, i) − h(x, j)∣ ≥ ρ for any i ≠ j ∈ Y, as shown in Theorem 24. In the proof, we introduce an auxiliary Lemma 21 in Appendix I, which would be helpful for lower bounding the conditional regret of Φsumρ with that of the multi-class 0/1 loss.
Table 2: H-consistency bounds for sum losses with symmetric and complete hypothesis sets.
Sum loss and H-consistency bound (Theorems 22, 23 and 24):
Φsumsq−hinge: R`0−1(h) − R∗`0−1,H ≤ (RΦsumsq−hinge(h) − R∗Φsumsq−hinge,H + MΦsumsq−hinge,H)^(1/2) − M`0−1,H
Φsumexp: R`0−1(h) − R∗`0−1,H ≤ √2 (RΦsumexp(h) − R∗Φsumexp,H + MΦsumexp,H)^(1/2) − M`0−1,H
Φsumρ: R`0−1(h) − R∗`0−1,H ≤ RΦsumρ(h) − R∗Φsumρ,H + MΦsumρ,H − M`0−1,H
Table 3: H-consistency bounds for constrained losses with symmetric and complete hypothesis sets.
Constrained loss and H-consistency bound (Theorems 25, 26, 27 and 28):
Φcstndhinge: R`0−1(h) − R∗`0−1,H ≤ RΦcstndhinge(h) − R∗Φcstndhinge,H + MΦcstndhinge,H − M`0−1,H
Φcstndsq−hinge: R`0−1(h) − R∗`0−1,H ≤ (RΦcstndsq−hinge(h) − R∗Φcstndsq−hinge,H + MΦcstndsq−hinge,H)^(1/2) − M`0−1,H
Φcstndexp: R`0−1(h) − R∗`0−1,H ≤ √2 (RΦcstndexp(h) − R∗Φcstndexp,H + MΦcstndexp,H)^(1/2) − M`0−1,H
Φcstndρ: R`0−1(h) − R∗`0−1,H ≤ RΦcstndρ(h) − R∗Φcstndρ,H + MΦcstndρ,H − M`0−1,H
4.3 Constrained losses
In this section, we discuss guarantees for constrained loss, that is loss functions defined via a constraint, as in (Lee et al., 2004):
Φcstnd(h, x, y) = ∑y′≠y Φ(−h(x, y′)) (11)
with the constraint that ∑y∈Y h(x, y) = 0. We present a series of positive results by proving multi-class H-consistency bounds when using as an auxiliary function the hinge loss, the squared hinge loss, the exponential loss, and the ρ-margin loss, as summarized in Table 3. As in the binary case (Awasthi et al., 2022a), the bound admits a linear dependency for Φcstndhinge and Φcstndρ, in contrast with a square-root dependency for Φcstndsq−hinge and Φcstndexp, as illustrated in Figure 1. The proofs with corresponding summarized Theorems 25, 26, 27 and 28 are included in Appendix K for completeness. For Φcstndρ, the symmetry and completeness assumption can be relaxed to symmetry together with the condition that, for any x ∈ X, there exists a hypothesis h ∈ H such that h(x, y) ≤ −ρ for any y ≠ ymax, as shown in Theorem 28.
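The constrained loss of Equation (11), again as an illustrative sketch; the zero-sum condition is a constraint on the hypothesis set, so here it is simply asserted on the given scores:

def constrained_loss(scores, y, phi, tol=1e-8):
    assert abs(sum(scores)) < tol, "scores must sum to zero (Lee et al., 2004)"
    # Phi^cstnd(h, x, y) = sum_{y' != y} phi(-h(x, y'))
    return sum(phi(-s) for j, s in enumerate(scores) if j != y)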
The main idea of the proofs in this section is to leverage the constraint condition of Lee et al. (2004) that the scores sum to zero, and to appropriately choose an auxiliary hypothesis h̄ that differs from h only by its scores for h(x) and ymax. We can then upper bound the minimal conditional risk by the conditional risk of h̄, without having to derive the closed-form expression of the minimal conditional risk.
As shown by Steinwart (2007, Theorem 3.2), for the family of all measurable functions, the minimizability gaps vanish: M`0−1,Hall = MΦsum,Hall = MΦcstnd,Hall = 0, for Φ = Φhinge, Φsq−hinge, Φexp and Φρ. Therefore, when H =Hall, our quantitative bounds in Table 2 and Table 3 imply the asymptotic consistency results of those multi-class losses in (Tewari and Bartlett, 2007), which shows that our results are stronger and more significant. We also provide bounds for multi-class losses using a non-convex auxiliary function, which are not studied in the previous work.
5 Adversarial H-consistency bounds
In this section, we analyze multi-class H-consistency bounds in the adversarial scenario (`2 = `γ). For any x ∈ X, we denote by Hγ(x) the set of hypotheses h with a positive margin on the ball of radius γ around x, Hγ(x) = {h ∈ H ∶ infx′∶∥x−x′∥p≤γ ρh(x′, h(x)) > 0}, and by Hγ(x) the set of labels generated by these hypotheses, Hγ(x) = {h(x)∶ h ∈ Hγ(x)}. When H is symmetric, we have Hγ(x) = Y iff Hγ(x) ≠ ∅. The following lemma characterizes the conditional ε-regret for the adversarial 0/1 loss, which will be helpful for applying Theorem 1 and Theorem 2 to the adversarial scenario. Lemma 11. For any x ∈ X, the minimal conditional `γ-risk and the conditional ε-regret for `γ can be expressed as follows:
C∗`γ,H(x) = 1 − maxy∈Hγ(x) p(x, y)1Hγ(x)≠∅
[∆C`γ,H(h,x)]ε = [maxy∈Hγ(x) p(x, y) − p(x, h(x))1h∈Hγ(x)]ε if Hγ(x) ≠ ∅, and 0 otherwise.
The proof of Lemma 11 is presented in Appendix F. By Lemma 11, Theorems 1 and 2 can be instantiated as Theorems 12 and 13 in the adversarial scenario as follows, where H-consistency bounds are provided between the adversarial multi-class 0/1 loss and a surrogate loss `. Theorem 12 (Adversarial distribution-dependent Ψ-bound). Assume that there exists a convex function Ψ∶R+ → R with Ψ(0) = 0 and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ {x ∈ X ∶ Hγ(x) ≠ ∅} and D ∈ P:
Ψ([maxy∈Hγ(x) p(x, y) − p(x, h(x))1h∈Hγ(x)]ε) ≤ ∆C`,H(h,x). (12)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have Ψ(R`γ(h) − R∗`γ,H + M`γ,H) ≤ R`(h) − R∗`,H + M`,H + max{0, Ψ(ε)}. (13)
Theorem 13 (Adversarial distribution-dependent Γ-bound). Assume that there exists a non-negative concave function Γ∶R+ → R and ε ≥ 0 such that the following holds for all h ∈ H, x ∈ {x ∈ X ∶ Hγ(x) ≠ ∅} and D ∈ P:
[maxy∈Hγ(x) p(x, y) − p(x, h(x))1h∈Hγ(x)]ε ≤ Γ(∆C`,H(h,x)). (14)
Then, for any hypothesis h ∈ H and any distribution D ∈ P, we have R`γ(h) − R∗`γ,H ≤ Γ(R`(h) − R∗`,H + M`,H) − M`γ,H + ε. (15)
Next, we will apply Theorem 12 and Theorem 13 to study various hypothesis sets and adversarial surrogate loss functions: Section 5.1 presents negative results, and Sections 5.2, 5.3, and 5.4 present positive results. A careful analysis is presented in each case (see Appendices L, M, N and O).
5.1 Negative results for adversarial robustness
The following result rules out the H-consistency guarantee of multi-class losses with a convex auxiliary function, which are commonly used in practice. The proof is given in Appendix L. Theorem 14 (Negative results for convex functions). Fix c = 2. Suppose that Φ is convex and nonincreasing, and H contains 0 and satisfies the condition that there exists x ∈ X such that Hγ(x) ≠ ∅. If for a non-decreasing function f ∶R+ → R+, the following H-consistency bound holds for any hypothesis h ∈H and any distribution D:
R`γ(h) − R∗`γ,H ≤ f(R`̃(h) − R∗`̃,H), (16)
then f is lower bounded by 1/2, for `̃ = Φ̃max, Φ̃sum and Φ̃cstnd.
Instead, we show in Sections 5.2, 5.3, and 5.4 that the max, sum and constrained losses using as auxiliary function the non-convex ρ-margin loss admit favorable H-consistency bounds in the multi-class setting, thereby significantly generalizing the binary counterpart in (Awasthi et al., 2022a).
5.2 Adversarial max losses
We first consider the adversarial max loss Φ̃max defined as the supremum based counterpart of (5):
Φ̃max(h, x, y) = supx′∶∥x−x′∥p≤γ Φ(ρh(x′, y)). (17)
For the adversarial max loss with Φ = Φρ, we can obtain H-consistency bounds as follows.
Theorem 15 (H-consistency bound of Φ̃maxρ ). Suppose that H is symmetric. Then, for any hypothesis h ∈H and any distribution D, we have
R`γ(h) − R∗`γ,H ≤ (RΦ̃maxρ(h) − R∗Φ̃maxρ,H + MΦ̃maxρ,H) / min{1, infx∈{x∈X∶Hγ(x)≠∅} suph∈Hγ(x) infx′∶∥x−x′∥p≤γ ρh(x′, h(x))/ρ} − M`γ,H. (18)
5.3 Adversarial sum losses
Next, we consider the adversarial sum loss Φ̃sum defined as the supremum based counterpart of (9):
Φ̃sum(h, x, y) = supx′∶∥x−x′∥p≤γ ∑y′≠y Φ(h(x′, y) − h(x′, y′)). (19)
Using the auxiliary Lemma 21 in Appendix I, we can obtain the H-consistency bound of Φ̃sumρ .
Theorem 16 (H-consistency bound of Φ̃sumρ). Assume that H is symmetric and that, for any x ∈ X, there exists a hypothesis h ∈ H inducing the same ordering of the labels for any x′ ∈ {x′∶ ∥x − x′∥p ≤ γ} and such that infx′∶∥x−x′∥p≤γ ∣h(x′, i) − h(x′, j)∣ ≥ ρ for any i ≠ j ∈ Y. Then, for any hypothesis h ∈ H and any distribution D, the following inequality holds:
R`γ(h) − R∗`γ,H ≤ RΦ̃sumρ(h) − R∗Φ̃sumρ,H + MΦ̃sumρ,H − M`γ,H. (20)
5.4 Adversarial constrained loss
Similarly, we define the adversarial constrained loss Φ̃cstnd as supremum based counterpart of (11):
Φ̃cstnd(h, x, y) = supx′∶∥x−x′∥p≤γ ∑y′≠y Φ(−h(x′, y′)) (21)
with the constraint that ∑y∈Y h(x, y) = 0. For the adversarial constrained loss with Φ = Φρ, we can obtain the H-consistency bound of Φ̃cstndρ as follows.
Theorem 17 (H-consistency bound of Φ̃cstndρ). Suppose that H is symmetric and satisfies that, for any x ∈ X, there exists a hypothesis h ∈ H with the constraint ∑y∈Y h(x, y) = 0 such that supx′∶∥x−x′∥p≤γ h(x′, y) ≤ −ρ for any y ≠ ymax. Then, for any hypothesis h ∈ H and any distribution,
R`γ(h) − R∗`γ,H ≤ RΦ̃cstndρ(h) − R∗Φ̃cstndρ,H + MΦ̃cstndρ,H − M`γ,H. (22)
The proofs of Theorems 15, 16 and 17 are included in Appendix M, N and O respectively. These results are significant since they apply to general hypothesis sets. In particular, symmetric hypothesis sets Hall, Hlin and HNN with B = +∞ all verify the conditions of those theorems. When B < +∞, the conditions in Theorems 16 and 17 can still be verified with a suitable choice of ρ, where we can consider the hypotheses such that wy = 0 in Hlin and HNN, while Theorem 15 holds for any ρ > 0.
6 Conclusion
We presented a comprehensive study of H-consistency bounds for multi-class classification, including the analysis of the three most commonly used families of multi-class surrogate losses (max losses, sum losses and constrained losses) and including the study of surrogate losses for the adversarial robustness. Our theoretical analysis helps determine which surrogate losses admit a favorable guarantee for a given hypothesis set H. Our bounds can help guide the design of multi-class classification algorithms for both the adversarial and non-adversarial settings. They also help compare different surrogate losses for the same setting and the same hypothesis set. Of course, in addition to the functional form of the H-consistency bound, the approximation property of a surrogate loss function combined with the hypothesis set plays an important role. | 1. What is the focus and contribution of the paper on multi-class classification?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works?
4. Do you have any questions regarding the novelty of the proof techniques used in the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The aim of this work is to study the H-consistency bounds for multi-class classification. To this end, the authors give a series bounds for surrogate multi-class losses, including max losses, sum losses and constrained loss. Then they extend their results to adversarial setting. Lastly, they prove that no non-trivial H-consistency bound can be given in some cases.
Strengths And Weaknesses
Strength:
To my knowledge, this paper is the first work to study multi-class H-consistency.
This paper is well-written and easy to follow.
The theoretical results and proofs are sound, based on my judgement.
Weakness: The H-consistency for the binary setting has been well studied in [1], which is somewhat similar to this paper in structure. Most results are also generalized from [1]. Besides, some important aspects of [1] are not covered in this paper, e.g. the tightness of the bounds.
[1] Pranjal Awasthi, et al. "H-Consistency Estimation Error of Surrogate Loss Minimizers." International Conference on Machine Learning (2022).
Questions
As mentioned above, I have some concerns about the contribution of this work, so I would appreciate it if the authors can describe the novelty of their proof techniques.
Limitations
N/A |
NIPS | Title
Meta-Gradient Reinforcement Learning with an Objective Discovered Online
Abstract
Deep reinforcement learning includes a broad family of algorithms that parameterise an internal representation, such as a value function or policy, by a deep neural network. Each algorithm optimises its parameters with respect to an objective, such as Q-learning or policy gradient, that defines its semantics. In this work, we propose an algorithm based on meta-gradient descent that discovers its own objective, flexibly parameterised by a deep neural network, solely from interactive experience with its environment. Over time, this allows the agent to learn how to learn increasingly effectively. Furthermore, because the objective is discovered online, it can adapt to changes over time. We demonstrate that the algorithm discovers how to address several important issues in RL, such as bootstrapping, non-stationarity, and off-policy learning. On the Atari Learning Environment, the meta-gradient algorithm adapts over time to learn with greater efficiency, eventually outperforming the median score of a strong actor-critic baseline.
1 Introduction
Recent advances in supervised and unsupervised learning have been driven by a transition from handcrafted expert features to deep representations [15]; these are typically learned by gradient descent on a suitable objective function to adjust a rich parametric function approximator. As a field, reinforcement learning (RL) has also largely embraced the transition from handcrafting features to handcrafting objectives: deep function approximation has been successfully combined with ideas such as TD-learning [30, 34], Q-learning [42, 23], double Q-learning [36, 37], n-step updates [32, 14], general value functions [33, 18], distributional value functions [7, 3], policy gradients [43, 21] and a variety of off-policy actor-critics [8, 10, 29]. In RL, the agent doesn’t have access to a differentiable performance metric; thus, choosing the right proxy is of particular importance: indeed, each of the aforementioned algorithms differs fundamentally in its choice of objective, designed in each case by expert human knowledge. The deep RL version of these algorithms is otherwise very similar in essence: updating parameters via gradient descent on the corresponding objective function.
Our goal is an algorithm that instead learns its own objective, and hence its own deep reinforcement learning algorithm, solely from experience of interacting with its environment. Following the principles of deep learning, we parameterise the objective function by a rich function approximator, and update it by meta-gradient learning [28, 1, 11, 44, 47, 39, 2, 20] – i.e. by gradient descent on the sequence of gradient descent updates resulting from the choice of objective function – so as to maximise a naive outer loss function (such as REINFORCE) with minimal initial knowledge.
Importantly, and in contrast to the majority of recent work on meta-learning [11, 2, 20], our metagradient algorithm learns online, on a single task, during a single “lifetime" of training. This online approach to meta-learning confers several advantages. First, an online learning algorithm can be applied to any RL environment, and does not require a distribution of related environments, nor the
ability to reset and rerun on different environments. Second, an online learning algorithm can adapt the objective function as learning progresses, rather than assume a global, static “one-size-fits-all" objective. Our hypothesis is that an online meta-gradient learning agent will, over time, learn to learn with greater efficiency, and in the long-run this will outperform a fixed (handcrafted) objective.
We show in toy problems that our approach can discover how to address important issues in RL, such as bootstrapping and non-stationarity. We also applied our algorithm for online discovery of an off-policy learning objective to independent training runs on each of 57 classic Atari games. Augmented with a simple heuristic to encourage consistent predictions, our meta-gradient algorithm outperformed the median score of a strong actor-critic baseline on this benchmark.
2 Related Work
The idea of learning to learn by gradient descent has a long history. In supervised learning, IDBD and SMD [31, 28] used a meta-gradient approach to adapt the learning rate online so as to optimise future performance. “Learning by gradient descent to learn by gradient descent" [1] used meta-gradients, offline and over multiple lifetimes, to learn a gradient-based optimiser, parameterised by a “black-box” neural network. MAML [11] and REPTILE [24] also use meta-gradients, offline and over multiple lifetimes, to learn initial parameters that can be optimised more efficiently.
In reinforcement learning, methods such as meta reinforcement learning [40] and RL2 [9] allow a recurrent network to jointly represent, in its activations, both the agent’s representation of state and also its internal parameters. Xu et al [44] introduced metagradients as a general but efficient approach for optimising the meta-parameters of gradient-based RL agents. This approach has since been applied to many different meta-parameters of RL algorithms, such as the discount γ and bootstrapping parameter λ [44], intrinsic rewards [47, 46], auxiliary tasks [39], off-policy corrections [45], and to parameterise returns as a linear combination of rewards [41] (without any bootstrapping). The metagradient approach has also been applied, offline and over multiple lifetimes, to black-box parameterisations, via deep neural networks, of the entire RL algorithm [2, 20] and [25] (contemporaneous work); evolutionary approaches have also been applied [17].
No prior work has addressed the most ambitious combination: a black-box approach that can parameterise the RL algorithm, meta-learned online, during the course of a single agent lifetime.
The table below catalogs related work on meta-gradient RL. We differentiate methods by several key properties. First, whether they are single lifetime (i.e. they learn online to improve performance by interacting with an environment), or require multiple lifetimes (i.e. they improve performance across repeated agent “lifetimes", each of which faces a different environment sampled from a suitable distribution). Second, whether they are white-box methods (i.e. they meta-learn the hyper-parameters of an existing RL update rule) or black-box methods (i.e. they meta-learn a general-purpose neural network encoding an RL update rule). Third, whether they compute meta-gradients by forward-mode or backward-mode differentiation (or do not use meta-gradients at all). Finally, what is meta-learned.
Algorithm (meta-gradient mode, and what is meta-learned):
IDBD, SMD [31, 28]: → learning rate
SGD2 [1]: ← optimiser
RL2, Meta-RL [9, 40]: X recurrent network
MAML, REPTILE [11, 24]: ← initial params
Meta-Gradient [44, 47]: → γ, λ, reward
Meta-Gradient [39, 45, 41]: ← auxiliary tasks, hyperparams, reward weights
ML3, MetaGenRL [2, 20]: ← loss function
Evolved PG [17]: X loss function
Oh et al. 2020 [25]: ← target vector
This paper: ← target
Algorithm properties: white box or black box; single lifetime or multi-lifetime; ← backward mode, → forward mode, X no meta-gradient.
3 Algorithm
In this section, we describe our proposed algorithm for online learning of reinforcement learning objectives using meta-gradients. Our starting point is the observation that a single quantity, the update target G, plays a pivotal role in characterising the objective of most deep RL algorithms; therefore, the update target offers an ideal entry point to flexibly parameterise the overall RL algorithm.
We first review the objectives used in classic RL algorithms and discuss how they may be flexibly parameterised via a neural network in order to expose them to learning instead of being manually designed by human researchers. We then recap the overall idea of meta-gradient reinforcement learning, and we illustrate how it can be used to meta-learn, online, the neural network that parametrises the update target. We discuss this for prediction, value-based control and actor-critic algorithms.
3.1 Update Targets in Reinforcement Learning Algorithms
To learn a value function vθ(S) in temporal difference (TD) prediction algorithms, in each state we use bootstrapping to construct a target G for the value function. For a trajectory τt = {St, At, Rt+1, . . . }, the target Gt for the one-step TD algorithm is
Gt = Rt+1 + γvθ(St+1); (1)
with stochastic gradient descent, we can then update the parameter θ of the value function as follows:
θ ← θ + α(Gt − vθ(St))∇θvθ(St), (2) where α is the learning rate used to update the parameter θ.
Similarly in value-based algorithms for control, we can construct the update target for the actionvalues qθ(St, At); for instance, in one-step Q-learning parameters θ for the action-value function qθ can be updated by:
θ ← θ + α(Gt − qθ(St, At))∇θqθ(St, At), where Gt = Rt+1 + γ maxa qθ(St+1, a). (3)
More generally, n-step update targets can bootstrap from the value estimation after accumulating rewards for n steps, instead of just considering the immediate reward. For instance in the case of prediction, we may consider the n-step truncated and bootstrapped return defined as:
Gt = Rt+1 + γRt+2 + γ2Rt+3 + · · · + γnvθ(St+n). (4)
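As an illustrative sketch of how such targets are formed from a stored trajectory (the indexing convention is an assumption of this snippet: rewards[k] holds R_{k+1} and values[k] holds vθ(S_k)):

def n_step_target(rewards, values, t, n, gamma):
    # G_t = R_{t+1} + gamma*R_{t+2} + ... + gamma^(n-1)*R_{t+n} + gamma^n * v(S_{t+n})
    g = gamma ** n * values[t + n]
    for i in range(n):
        g += gamma ** i * rewards[t + i]
    return g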
3.2 Parameterising RL objectives
In this work, instead of using a discounted cumulative return, we fully parameterise the update target by a neural network. This meta-network takes the trajectory as input and produces a scalar update target, i.e., G = gη(τt), where the function gη : τt → R is a neural network with parameters η. We train the meta-network using an end-to-end meta-gradient algorithm, so as to learn an update target that leads to good subsequent performance.
A different way to learn the RL objective is to directly parameterise a loss by a meta-network [2, 20], rather than the target of a loss. For instance, a standard TD learning update can either be represented by a TD loss gη(τ) = (Rt+1 + γ⊥(vθ(St+1)) − vθ(St))2, or by a squared loss with respect to a TD target, gη(τ) = Rt+1 + γ⊥(vθ(St+1)), where ⊥ represents a gradient-stopping operation. Both forms of representation are rich enough to include a rich variety of reinforcement learning objectives.
However, we hypothesise that learning a target will lead to more stable online meta-gradients than learning a loss. This is because the induced learning rule is inherently moving towards a target, rather than potentially away from it, thereby reducing the chance of immediate divergence. Because we are operating in an online meta-learning regime, avoiding divergence is of critical importance. This contrasts to prior work in offline meta-learning [2, 20], where divergence may be corrected in a subsequent lifetime.
3.3 Meta-Gradient Reinforcement Learning
Meta-gradient reinforcement learning is a family of gradient-based meta learning algorithms for learning and adapting differentiable components (denoted as meta-parameters η) in the RL update
rule. The key insight of meta-gradient RL is that most commonly used update rules are differentiable, and thus the effect of a sequence of updates is also differentiable.
Meta-gradient RL is a two-level optimisation process. A meta-learned inner loss Linnerη is parameterised by meta-parameters η and the agent tries to optimise Linnerη to update θ. Given a sequence of trajectories T = {τi, τi+1, τi+2, . . . , τi+M , τi+M+1}, we apply multiple steps of gradient descent updates to the agent θ according to the inner losses Linnerη (τi, θi). For each trajectory τ ∈ T , we have:
∆θi ∝ ∇θi Linnerη(τi, θi), θi+1 = θi + ∆θi. (5)
Consider keeping η fixed for M updates to the agent parameter θ:
θi η−→ θi+1 η−→ . . . η−→ θi+M−1 η−→ θi+M . (6)
A differentiable outer loss Louter(τi+M+1, θi+M ) is then applied to the updated agent parameters θ′ = θi+M . The gradient of this loss is taken w.r.t. meta-parameters η and then used to update η via gradient descent:
∆η ∝ ∇η Louter(τi+M+1, θi+M), η ← η + ∆η. (7)
We call this quantity ∇ηLouterη the meta-gradient. We can iterate this procedure during the training of the agent θ and repeatedly update the meta-parameters η.
The meta-gradient flows through the multiple gradient descent updates to the agent θ, i.e., the whole update procedure from θi to the outer loss of the final updated agent parameter θi+M. By applying the chain rule, we have ∂Louter/∂η = (∂Louter/∂θ′)(∂θ′/∂η). In practice, we can use automatic differentiation packages to compute the meta-gradient ∂Louter/∂η with compute complexity comparable to that of the forward pass.
The meta-gradient algorithm above can be applied to any differentiable component of the update rule, for example to learn the discount factor γ and bootstrapping factor λ [44], intrinsic rewards [47, 46], and auxiliary tasks [39]. In this paper, we apply meta-gradients to learn the meta-parameters of the update target gη online, where η are the parameters of a neural network. We call this algorithm FRODO (Flexible Reinforcement Objective Discovered Online). The following sections instantiate the FRODO algorithm for value prediction, value-based control and actor-critic control, respectively.
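A schematic of this two-level optimisation in JAX (the framework used for the experiments in Section 5); inner_loss, outer_loss and the step sizes alpha, beta are placeholders standing in for Equations (5)-(7), not the authors' actual implementation:

import jax

def inner_update(theta, eta, traj, alpha):
    # One step of Equation (5): gradient step on the eta-parameterised inner loss.
    g = jax.grad(inner_loss)(theta, eta, traj)          # d L^inner_eta / d theta
    return jax.tree_util.tree_map(lambda p, gp: p - alpha * gp, theta, g)

def unrolled_outer_loss(eta, theta, trajs, val_traj, alpha):
    # Equation (6): hold eta fixed for M inner updates, then evaluate the outer loss.
    for traj in trajs:
        theta = inner_update(theta, eta, traj, alpha)
    return outer_loss(theta, val_traj)

# Equation (7): the meta-gradient flows back through all M inner updates.
meta_grad = jax.grad(unrolled_outer_loss)(eta, theta, trajs, val_traj, alpha)
eta = jax.tree_util.tree_map(lambda p, gp: p - beta * gp, eta, meta_grad)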
3.4 Learned Update Target for Prediction and Value-based Control
Given a learned update target gη , we propose updating the predictions vθ towards the target gη . With a squared loss, this results in the update
∆θ ∝ (gη(τ)− vθ(S))∇θvθ(S), (8) where the meta-network parameterised by η takes the trajectory τ as input and outputs a scalar gη(τ). After M updates, we compute the outer loss Louter from a validation trajectory τ ′ as the squared difference between the predicted value and a canonical multi-step bootstrapped return G(τ ′), as used in classic RL: ∇θ′Louter = (G(τ ′)− vθ′(S′))∇θ′vθ′(S′) (9) can then be plugged into Equation (7) to update η and continue with the next iteration. Here θ′ is interpreted and treated as a function of η, which was held fixed during several updates to θ.
For value-based algorithms in control, the inner update is similar, but the learned target is used to update an action-value function qθ(S,A) in the inner update. Any standard RL update can be used in the outer loss, such as Q-learning [42], Q(λ), or (Expected) Sarsa [27, 38].
3.5 Learned Update Target for Actor-Critic Algorithms in Control
In actor-critic algorithms, the update target is used both to compute policy gradient update to the policy, as well as to update the critic. We form an A2C [21] update with gη(τ):
∆θ ∝ (gη(τ)− V (S))∇θ log π(S,A) + c1(gη(τ)− V (S))∇θv(S) + c2∇θH(π(S)), (10)
where H(·) denotes the entropy of the agent’s policy, and c1 and c2 denote the weightings of the critic loss and entropy regularisation terms, respectively.
The meta-gradient can be computed on the validation trajectory τ ′ using the classic actor-critic update:
∇θ′Louter = (G(τ ′)− V (S′))∇θ′ log π(S′, A′) + c1(G(τ ′)− V (S′))∇θ′v(S′) + c2∇θ′H(π(S′)), (11) where θ′ is the updated agent after M updates to the agent parameter θ. Applying the meta-gradient chain rule, we obtain the gradient with respect to η and update η accordingly. Note that one can use either an n-step return (including the λ-return [32]) as G(τ), or a VTrace return [10] to enable off-policy correction.
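A sketch of the corresponding inner loss for this actor-critic case, written as a scalar whose gradient reproduces the update in Equation (10); agent_apply, meta_target, log_prob and policy_entropy are illustrative placeholders, and feeding the meta-network precomputed trajectory data (so that gη depends on η but not on θ) is a simplifying assumption of the sketch:

import jax

def a2c_inner_loss(theta, eta, traj, c1=0.5, c2=0.01):
    logits, v = agent_apply(theta, traj.obs)
    g = meta_target(eta, traj)                      # scalar targets g_eta(tau)
    adv = g - jax.lax.stop_gradient(v)              # coefficient of the score function
    policy_loss = -(adv * log_prob(logits, traj.actions)).mean()
    critic_loss = 0.5 * ((g - v) ** 2).mean()
    entropy_bonus = policy_entropy(logits).mean()
    return policy_loss + c1 * critic_loss - c2 * entropy_bonus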
4 Motivating Examples
In this section, we explore the capability of FRODO to discover how to address fundamental issues in RL, such as bootstrapping and non-stationarity, based on simple toy domains. Larger scale experiments will subsequently be presented addressing off-policy learning.
Bootstrapping: We use a simple 6 × 11 environment called “Catch” [22]. The agent controls a paddle located on the bottom row of the grid. The agent starts in the centre and, on each step, it can move one cell to the left, one cell to the right, or stand still. At the beginning of each episode a pellet appears in a random start location on the top row. On each step, the pellet moves down one cell. When the pellet reaches the bottom row the episode terminates. The agent receives a reward of 1 if it catches the pellet and -1 otherwise. All other rewards are zero. See Figure 1a for a depiction of the environment.
We applied FRODO to learning to control the agent. We used the full Monte Carlo return as the update target in the outer loss. In the inner update, instead, the agent only received a trajectory with
3 transitions. This requires FRODO to discover the concepts of temporal-difference prediction and bootstrapping – learning to estimate and use a prediction about events outside of the data – since the game cannot be solved perfectly by looking ahead just three steps. We conducted 10 independent runs with random seeds. In Figure 2a, in orange, we report the average episode return of FRODO observed during the course of training. The policy learned by FRODO surpassed the best possible performance for a 3-step look-ahead agent (the dashed blue line), and learned to control the agent optimally (an average episode return of 1).
Non-Stationarity: We use a non-stationary variant of the “5-state Random Walk” environment [32]. The agent starts in the centre of a chain, depicted on Figure 1b, and moves randomly left or right on each step. Episodes terminate when the agent steps out of either boundary, on termination a reward is provided to the agent; on the left side, the reward is either 0 or −1 (switching every 960 time-steps, which corresponds to 10 iterations of FRODO), on the right side the reward is always 1. Each trajectory has 16 time steps.
We applied FRODO to predict the value function, using a TD(1) update as an outer loss. The critical issue is the non-stationarity; the agent must quickly adapt its prediction whenever the reward on the leftmost side switches from 0 to −1, or vice versa. FRODO learned an update capable of dealing with such non-stationarity effectively. In the experiment, we performed 10 independent runs. In Figure 2b, we report the mean squared error of the predictions learned by FRODO, in orange. The dashed horizontal lines correspond to the average error of the predictions learned by TD(λ) at convergence. The update learned online by FRODO resulted in more accurate predictions, compared to those learned by the TD(λ) update, for any value of λ. The plot zooms into the period around 5M steps for FRODO (in orange) and for the best-performing TD(λ), i.e. λ = 0.4; the predictions learned by FRODO adapted much more robustly to change-points than those of TD(λ), as demonstrated by the significantly lower spikes in the prediction error.
5 Large-Scale Experiments
In this section we scale up the actor-critic variant of FRODO from Section 3.5 to more complex RL environments in the Arcade Learning Environment. We instantiate our algorithm within a distributed framework, based on an actor-learner decomposition [10], and implemented in JAX [5]. The implementation details, computing infrastructure and pseudo-code are provided in the Appendix.
5.1 Off-Policy Learning
In actor-learner architectures [10], a parallel group of actors collect trajectories by interacting with separate copies of the environment, and send the trajectories to a queue for the learner to process in batch. This architecture enables excellent throughput, but the latency in communications between the actors and the learner introduces off-policy learning issues, because the parameters of the actors’ policy (the behaviour policy µ) lag behind the parameters in the learner’s policy (the target policy π). Actor-critic updates such as VTrace can correct for such off-policyness by using the action probabilities under the behaviour policy to perform importance sampling.
To address this in our actor-critic instantiation of FRODO, we use VTrace [10], rather than a vanilla actor-critic, as outer update. In the inner loop, the meta network takes trajectories from the behaviour policy µ as input. Specifically it receives the rewards Rt, discounts γt, and, as in the motivating examples from Section 4, the values from future time-steps v(St+1), to allow bootstrapping from the learned predictions. To address off-policy learning, the probabilities of the target policy and behaviour policy for the current actionAt selected in state St, (i.e., π(St, At) and µ(St, At)), are also fed as inputs. This allows the inner loss to potentially discover off-policy algorithms, by constructing suitable off-policy update targets for the policy and value function. Thus, inputs to the meta network include {Rt+1, γt+1, v(St+1), π(St, At), µ(St, At), · · · }. The meta network is parameterised by an LSTM [16] and processes each trajectory in reverse order to compute the target Gη(τ).
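A sketch of one possible meta-network of this kind; the input layout, the recurrent cell lstm_step and the readout are assumptions of this snippet rather than the authors' released architecture:

import jax
import jax.numpy as jnp

def meta_target(eta, traj):
    # Per-step inputs (Section 5.1): reward, discount, bootstrap value, and the
    # target/behaviour probabilities of the action actually taken.
    xs = jnp.stack([traj.rewards, traj.discounts, traj.values,
                    traj.pi_taken, traj.mu_taken], axis=-1)
    def step(state, x):
        state, out = lstm_step(eta, state, x)   # any recurrent cell parameterised by eta
        return state, out
    # Process the trajectory in reverse time order, as described above.
    _, outs = jax.lax.scan(step, initial_state(eta), xs[::-1])
    return readout(eta, outs[::-1])             # one scalar target G_eta per time-step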
5.2 Consistent Prediction
Figure 3: (a) Comparison between FRODO and an IMPALA baseline, in terms of the median human-normalised score across 57 Atari games (median score versus environment frames); FRODO takes longer to take-off but eventually outperforms IMPALA. (b) Comparison (on 8 games) of several meta-gradient algorithms, where the meta-network either parametrises the loss (in blue), or the target, with (in orange) and without (in green) regularisation.
In our large scale experiments, we consider a simple yet effective heuristic which enables dramatic speed ups of learning in complex environments. While the meta-networks have the expressive power to model any target function that takes a trajectory as input, we regularise the learning space of the
target function towards targets that are self-consistent over time (a property that is common to most update targets in deep reinforcement learning - c.f. [26]). Concretely, we suggest to regularise the learned update targets Gη towards functions that decompose as:
Gηt = Rt+1 + γGηt+1. (12)
To incorporate the heuristic into our meta-gradient framework, we transform the above equations into a prediction loss, and add this component into Louter to learn our meta-network η. For example, using a one-step prediction consistency loss:
Louter ← Louter + c||⊥(Rt+1 + γGηt+1)−G η t ||2, (13)
where c is for a coefficient for the consistency loss and ⊥ denotes stop gradient operator. Extension to n-step self-consistent prediction can be obtained by decomposing Gηt into n-step cumulative discounted rewards with bootstrapping in the final step.
5.3 Atari Experiments
We evaluated the performance of our method on a challenging and diverse set of classic Atari games, from the Arcade Learning Environment (ALE) [4].
We applied the FRODO algorithm to learn a target online, using an outer loss based on the actor-critic algorithm IMPALA [10] and a consistency loss with coefficient c = 0.1. The agent network is parameterised with a two-layer convolutional neural network (detailed configurations
and hyperparameters can be found in the Appendix). We evaluate our agents over 16 billion frames of training. We ran separate training runs for all 57 standard benchmark games in the ALE. We computed the median of human-normalised scores throughout training, and compared to the same IMPALA actor-critic algorithm without any objective discovery. Note that FRODO algorithm does introduce algorithmic complexity compared to the IMPALA baseline, thus we provide pseudo-code in Appendix C to facilitate understanding and reproducibility.
In Figure 3a we see that the meta-gradient algorithm learned slowly and gradually to discover an effective objective. However, over time the meta-gradient algorithm learned to learn more rapidly, ultimately overtaking the actor-critic baseline and achieving significantly stronger final results. We hypothesise that the performance advantage is driven by the adaptive nature of the learned objective, which allows the agent to find the most suitable objective according to its learning context along the way, instead of using a traditional global static objective function.
5.4 Analysis
We now examine the technical contributions that facilitate our primary goal of online objective discovery: representing targets in the meta-network versus representing a loss; and the introduction of a consistency loss. In these analysis experiments, we use a subset of 8 Atari games, namely, “kung fu master”, “qbert”, “crazy climber”, “asterix”, “beam rider”, “defender”, “pong” and “seaquest”, and train each of the games over three independent runs. In each of the plots, we show the median human-normalised score over all three runs; the shaded area shows the standard deviation across random seeds. Ablation runs were performed for 4 billion frames.
Discovering targets v.s. Discovering loss: Our first experiment compares the performance of online objective discovery between a meta-network that represents the target and a meta-network that represents the loss, similarly to prior work in offline, multi-lifetime setups such as ML3 [2], MetaGenRL [20]. As we illustrate in Figure 3b, directly representing the loss by the meta-network performs poorly across all games. We hypothesise that this is due to significant instabilities in the learning dynamics, which may at any time form a loss that leads to rapid divergence. In contrast, representing the target by the meta-network performs much more stably across all games.
Consistency loss: Next, we examine the effectiveness of a consistency loss in large-scale experiments. We use values of different magnitude as the coefficient of the consistency loss in FRODO, varying between disabling consistency loss (coefficient c = 0) and a large consistency loss (c = 1). The aggregated median score learning curves are shown in Figure 3c. The introduction of a modest level (c = 0.1) of consistency loss led to a dramatic improvements in learning speed and achieved significantly higher performance. Without the consistency heuristic, performance dropped significantly and was also more unstable, presumably due to an increased likelihood of uninformative or misleading targets. Additionally, regularising too strongly (c = 1) led to significantly worse performance.
Analysis of Learned Objective: Finally, we analysed the objective learned by FRODO over time. Our primary question was whether the discovered target used in the inner loss differed significantly from the VTrace target used in the outer loss. For each of the eight games, we computed the meansquared error, averaged over the time-steps in the trajectory, between the VTrace return and the meta-network return gη(τ). Figure 3d shows that the discovered objective both varies over time, and varies significantly away from the VTrace target, with a very different characteristic in each game. Only in the game of “Pong” was a target close to VTrace preferred throughout training, perhaps because nothing more complex was required in this case.
6 Conclusion
In this paper, we proposed an algorithm that allows RL agents to learn their own objective during online interactions with their environment. The objective, specifically the target used to update the policy and value function, is parameterised by a deep neural meta-network. The nature of the meta-network, and hence the objective of the RL algorithm, is discovered by meta-gradient descent over the sequence of updates based upon the discovered target.
Our results in toy domains demonstrate that FRODO can successfully discover how to address key issues in RL, such as bootstrapping and non-stationarity, through online adaptation of its objective. Our results in Atari demonstrate that FRODO can successfully discover and adapt off-policy learning
objectives that are distinct from, and performed better than, strong benchmark RL algorithms. Taken together, these examples illustrate the generality of the proposed method, and suggest its potential both to recover existing concepts, and to discover new concepts for RL algorithms.
Broader Impact
This work is situated within a broad and important long-term research program for reinforcement learning: how might a machine discover its own algorithm for RL? Specifically, the research considers one strand of potential research in this area, which is how a machine might discover its own objective for RL. Because our research focuses on the online setting (compared, for example, to much prior work on meta-learning that learns offline from a distribution of tasks), it is applicable to any RL problem. In principle, therefore, any benefits demonstrated in this paper might potentially be applicable to other future RL problems. Thus, the ethical consequences of this research are similar to those for any other research on the RL problem itself: it may provide some progress and accelerate research towards all RL problems, which may benefit any users and use-cases of RL (both "good" and "bad"). Our algorithm learns entirely from interaction with its environment, and does not utilise any external source of data.
Acknowledgments and Disclosure of Funding
The authors would like to thank Manuel Kroiss, Iurii Kemaev and developers of JAX, Haiku, RLax, Optax for their kind engineering support; and thank Joseph Modayil, Doina Precup for their comments and suggestions on the paper. | 1. What is the focus and contribution of the paper on meta-gradient reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its novelty and differences from prior works?
3. What are the weaknesses of the paper, especially regarding the choice of environments for experiments?
4. Do you have any concerns about the significance of the promising empirical results presented in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes an approach for meta-gradient reinforcement learning where the value target used in loss functions is learned in an outer optimization loop. The authors hypothesize that this provides more freedom to an RL agent to learn its own objective for policy optimization, as opposed to priors (such as the Bellman equation) being applied to the learning algorithm. The authors present their algorithm (called FRODO) in both value function and actor critic variants and provide empirical evidence for the effectiveness of their method on toy tasks and the ALE benchmark. UPDATE: Thanks to the authors for their clarifications!
Strengths
* Novel method for meta-RL that is clearly different from prior work (also clearly highlighted in section 2). * Promising empirical results on the tested environments along with useful analyses * Clear exposition and writing
Weaknesses
* Would have liked to see some environments more substantial than Atari (e.g. Mujoco or Deepmind 3D lab) |
NIPS | Title
Meta-Gradient Reinforcement Learning with an Objective Discovered Online
Abstract
Deep reinforcement learning includes a broad family of algorithms that parameterise an internal representation, such as a value function or policy, by a deep neural network. Each algorithm optimises its parameters with respect to an objective, such as Q-learning or policy gradient, that defines its semantics. In this work, we propose an algorithm based on meta-gradient descent that discovers its own objective, flexibly parameterised by a deep neural network, solely from interactive experience with its environment. Over time, this allows the agent to learn how to learn increasingly effectively. Furthermore, because the objective is discovered online, it can adapt to changes over time. We demonstrate that the algorithm discovers how to address several important issues in RL, such as bootstrapping, non-stationarity, and off-policy learning. On the Atari Learning Environment, the meta-gradient algorithm adapts over time to learn with greater efficiency, eventually outperforming the median score of a strong actor-critic baseline.
1 Introduction
Recent advances in supervised and unsupervised learning have been driven by a transition from handcrafted expert features to deep representations [15]; these are typically learned by gradient descent on a suitable objective function to adjust a rich parametric function approximator. As a field, reinforcement learning (RL) has also largely embraced the transition from handcrafting features to handcrafting objectives: deep function approximation has been successfully combined with ideas such as TD-learning [30, 34], Q-learning [42, 23], double Q-learning [36, 37], n-step updates [32, 14], general value functions [33, 18], distributional value functions [7, 3], policy gradients [43, 21] and a variety of off-policy actor-critics [8, 10, 29]. In RL, the agent doesn’t have access to a differentiable performance metric; thus, choosing the right proxy is of particular importance: indeed, each of the aforementioned algorithms differs fundamentally in its choice of objective, designed in each case by expert human knowledge. The deep RL version of these algorithms is otherwise very similar in essence: updating parameters via gradient descent on the corresponding objective function.
Our goal is an algorithm that instead learns its own objective, and hence its own deep reinforcement learning algorithm, solely from experience of interacting with its environment. Following the principles of deep learning, we parameterise the objective function by a rich function approximator, and update it by meta-gradient learning [28, 1, 11, 44, 47, 39, 2, 20] – i.e. by gradient descent on the sequence of gradient descent updates resulting from the choice of objective function – so as to maximise a naive outer loss function (such as REINFORCE) with minimal initial knowledge.
Importantly, and in contrast to the majority of recent work on meta-learning [11, 2, 20], our metagradient algorithm learns online, on a single task, during a single “lifetime" of training. This online approach to meta-learning confers several advantages. First, an online learning algorithm can be applied to any RL environment, and does not require a distribution of related environments, nor the
ability to reset and rerun on different environments. Second, an online learning algorithm can adapt the objective function as learning progresses, rather than assume a global, static “one-size-fits-all" objective. Our hypothesis is that an online meta-gradient learning agent will, over time, learn to learn with greater efficiency, and in the long-run this will outperform a fixed (handcrafted) objective.
We show in toy problems that our approach can discover how to address important issues in RL, such as bootstrapping and non-stationarity. We also applied our algorithm for online discovery of an off-policy learning objective to independent training runs on each of 57 classic Atari games. Augmented with a simple heuristic to encourage consistent predictions, our meta-gradient algorithm outperformed the median score of a strong actor-critic baseline on this benchmark.
2 Related Work
The idea of learning to learn by gradient descent has a long history. In supervised learning, IDBD and SMD [31, 28] used a meta-gradient approach to adapt the learning rate online so as to optimise future performance. “Learning by gradient descent to learn by gradient descent" [1] used meta-gradients, offline and over multiple lifetimes, to learn a gradient-based optimiser, parameterised by a “black-box” neural network. MAML [11] and REPTILE [24] also use meta-gradients, offline and over multiple lifetimes, to learn initial parameters that can be optimised more efficiently.
In reinforcement learning, methods such as meta reinforcement learning [40] and RL2 [9] allow a recurrent network to jointly represent, in its activations, both the agent’s representation of state and also its internal parameters. Xu et al. [44] introduced meta-gradients as a general but efficient approach for optimising the meta-parameters of gradient-based RL agents. This approach has since been applied to many different meta-parameters of RL algorithms, such as the discount γ and bootstrapping parameter λ [44], intrinsic rewards [47, 46], auxiliary tasks [39], off-policy corrections [45], and to parameterise returns as a linear combination of rewards [41] (without any bootstrapping). The meta-gradient approach has also been applied, offline and over multiple lifetimes, to black-box parameterisations, via deep neural networks, of the entire RL algorithm [2, 20] and [25] (contemporaneous work); evolutionary approaches have also been applied [17].
No prior work has addressed the most ambitious combination: a black-box approach that can parameterise the RL algorithm, meta-learned online, during the course of a single agent lifetime.
The table below catalogs related work on meta-gradient RL. We differentiate methods by several key properties. First, whether they are single lifetime (i.e. they learn online to improve performance by interacting with an environment), or require multiple lifetimes (i.e. they improve performance across repeated agent “lifetimes", each of which faces a different environment sampled from a suitable distribution). Second, whether they are white-box methods (i.e. they meta-learn the hyper-parameters of an existing RL update rule) or black-box methods (i.e. they meta-learn a general-purpose neural network encoding an RL update rule). Third, whether they compute meta-gradients by forward-mode or backward-mode differentiation (or do not use meta-gradients at all). Finally, what is meta-learned.
Algorithm                      Meta-gradient    What is meta-learned?
IDBD, SMD [31, 28]             → (forward)      learning rate
SGD2 [1]                       ← (backward)     optimiser
RL2, Meta-RL [9, 40]           X (none)         recurrent network
MAML, REPTILE [11, 24]         ← (backward)     initial params
Meta-Gradient [44, 47]         → (forward)      γ, λ, reward
Meta-Gradient [39, 45, 41]     ← (backward)     auxiliary tasks, hyperparams, reward weights
ML3, MetaGenRL [2, 20]         ← (backward)     loss function
Evolved PG [17]                X (none)         loss function
Oh et al. 2020 [25]            ← (backward)     target vector
This paper                     ← (backward)     target
Legend for algorithm properties: white box / black box, single lifetime / multi-lifetime; ← backward mode, → forward mode, X no meta-gradient.
3 Algorithm
In this section, we describe our proposed algorithm for online learning of reinforcement learning objectives using meta-gradients. Our starting point is the observation that a single quantity, the update target G, plays a pivotal role in characterising the objective of most deep RL algorithms; therefore, the update target offers an ideal entry point to flexibly parameterise the overall RL algorithm.
We first review the objectives used in classic RL algorithms and discuss how they may be flexibly parameterised via a neural network in order to expose them to learning instead of being manually designed by human researchers. We then recap the overall idea of meta-gradient reinforcement learning, and we illustrate how it can be used to meta-learn, online, the neural network that parametrises the update target. We discuss this for prediction, value-based control and actor-critic algorithms.
3.1 Update Targets in Reinforcement Learning Algorithms
To learn a value function vθ(S) in temporal difference (TD) prediction algorithms, in each state we use bootstrapping to construct a target G for the value function. For a trajectory τt = {St, At, Rt+1, . . . }, the target Gt for the one-step TD algorithm is
$G_t = R_{t+1} + \gamma v_\theta(S_{t+1})$;   (1)
with stochastic gradient descent, we can then update the parameter θ of the value function as follows:
$\theta \leftarrow \theta + \alpha (G_t - v_\theta(S_t)) \nabla_\theta v_\theta(S_t)$,   (2)
where α is the learning rate used to update the parameter θ.
Similarly, in value-based algorithms for control, we can construct the update target for the action-values q_θ(S_t, A_t); for instance, in one-step Q-learning the parameters θ of the action-value function q_θ can be updated by:
$\theta \leftarrow \theta + \alpha (G_t - q_\theta(S_t, A_t)) \nabla_\theta q_\theta(S_t, A_t)$, where $G_t = R_{t+1} + \gamma \max_a q_\theta(S_{t+1}, a)$.   (3)
More generally, n-step update targets can bootstrap from the value estimation after accumulating rewards for n steps, instead of just considering the immediate reward. For instance in the case of prediction, we may consider the n-step truncated and bootstrapped return defined as:
$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots + \gamma^n v_\theta(S_{t+n})$.   (4)
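To make these targets concrete, here is a minimal NumPy sketch (the function names and the toy trajectory are our own illustrative assumptions, not from the paper) that computes the one-step TD target of Eq. (1) and the n-step truncated, bootstrapped return of Eq. (4):

```python
import numpy as np

def td_target(reward_next, value_next, gamma):
    """One-step TD target G_t = R_{t+1} + gamma * v(S_{t+1})  (Eq. 1)."""
    return reward_next + gamma * value_next

def n_step_return(rewards, values, t, n, gamma):
    """n-step truncated, bootstrapped return from time t  (Eq. 4).

    rewards[k] stores R_{k+1}; values[k] stores v(S_k).
    """
    g, discount = 0.0, 1.0
    for k in range(n):                    # accumulate R_{t+1}, ..., R_{t+n}
        g += discount * rewards[t + k]
        discount *= gamma
    return g + discount * values[t + n]   # bootstrap with gamma^n * v(S_{t+n})

# toy usage on a short trajectory
rewards = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])
values = np.array([0.1, 0.2, 0.5, 0.3, 0.2, 0.9, 0.0])
print(td_target(rewards[0], values[1], gamma=0.99))
print(n_step_return(rewards, values, t=0, n=3, gamma=0.99))
```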
3.2 Parameterising RL objectives
In this work, instead of using a discounted cumulative return, we fully parameterise the update target by a neural network. This meta-network takes the trajectory as input and produces a scalar update target, i.e., G = gη(τt), where the function gη : τt → R is a neural network with parameters η. We train the meta-network using an end-to-end meta-gradient algorithm, so as to learn an update target that leads to good subsequent performance.
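As a rough sketch of what such a meta-network could look like, the toy model below maps per-step trajectory features to a single scalar target; the feature choice, pooling, and layer sizes are illustrative assumptions only (the architecture actually used at scale is the LSTM described in Section 5.1):

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMetaNet:
    """Illustrative g_eta: maps a trajectory's per-step features to a scalar target G."""
    def __init__(self, feat_dim, hidden=16):
        self.W1 = 0.1 * rng.standard_normal((feat_dim, hidden))
        self.W2 = 0.1 * rng.standard_normal(hidden)

    def __call__(self, traj_feats):
        # traj_feats: [T, feat_dim]; pool over time, then map to a scalar.
        h = np.tanh(traj_feats @ self.W1).mean(axis=0)
        return float(h @ self.W2)

g_eta = TinyMetaNet(feat_dim=3)
trajectory = rng.standard_normal((5, 3))   # e.g. (reward, discount, value) per step
print(g_eta(trajectory))
```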
A different way to learn the RL objective is to directly parameterise a loss by a meta-network [2, 20], rather than the target of a loss. For instance, a standard TD learning update can either be represented by a TD loss $g_\eta(\tau) = (R_{t+1} + \gamma \perp(v_\theta(S_{t+1})) - v_\theta(S_t))^2$, or by a squared loss with respect to a TD target, $g_\eta(\tau) = R_{t+1} + \gamma \perp(v_\theta(S_{t+1}))$, where $\perp$ represents a gradient-stopping operation. Both forms of representation are expressive enough to cover a rich variety of reinforcement learning objectives.
However, we hypothesise that learning a target will lead to more stable online meta-gradients than learning a loss. This is because the induced learning rule is inherently moving towards a target, rather than potentially away from it, thereby reducing the chance of immediate divergence. Because we are operating in an online meta-learning regime, avoiding divergence is of critical importance. This contrasts to prior work in offline meta-learning [2, 20], where divergence may be corrected in a subsequent lifetime.
3.3 Meta-Gradient Reinforcement Learning
Meta-gradient reinforcement learning is a family of gradient-based meta learning algorithms for learning and adapting differentiable components (denoted as meta-parameters η) in the RL update
rule. The key insight of meta-gradient RL is that most commonly used update rules are differentiable, and thus the effect of a sequence of updates is also differentiable.
Meta-gradient RL is a two-level optimisation process. A meta-learned inner loss $L^{\text{inner}}_\eta$ is parameterised by meta-parameters η and the agent tries to optimise $L^{\text{inner}}_\eta$ to update θ. Given a sequence of trajectories $\mathcal{T} = \{\tau_i, \tau_{i+1}, \tau_{i+2}, \ldots, \tau_{i+M}, \tau_{i+M+1}\}$, we apply multiple steps of gradient descent updates to the agent θ according to the inner losses $L^{\text{inner}}_\eta(\tau_i, \theta_i)$. For each trajectory $\tau \in \mathcal{T}$, we have:
$\Delta\theta_i \propto \nabla_{\theta_i} L^{\text{inner}}_\eta(\tau_i, \theta_i), \qquad \theta_{i+1} = \theta_i + \Delta\theta_i$.   (5)
Consider keeping η fixed for M updates to the agent parameter θ:
$\theta_i \xrightarrow{\eta} \theta_{i+1} \xrightarrow{\eta} \cdots \xrightarrow{\eta} \theta_{i+M-1} \xrightarrow{\eta} \theta_{i+M}$.   (6)
A differentiable outer loss $L^{\text{outer}}(\tau_{i+M+1}, \theta_{i+M})$ is then applied to the updated agent parameters $\theta' = \theta_{i+M}$. The gradient of this loss is taken w.r.t. meta-parameters η and then used to update η via gradient descent:
$\Delta\eta \propto \nabla_\eta L^{\text{outer}}(\tau_{i+M+1}, \theta_{i+M}), \qquad \eta \leftarrow \eta + \Delta\eta$.   (7)
We call this quantity $\nabla_\eta L^{\text{outer}}$ the meta-gradient. We can iterate this procedure during the training of the agent θ and repeatedly update the meta-parameters η.
The meta-gradient flows through the multiple gradient descent updates to the agent θ, i.e., the whole update procedure from $\theta_i$ to the outer loss of the final updated agent parameters $\theta_{i+M}$. By applying the chain rule, we have
$\frac{\partial L^{\text{outer}}}{\partial \eta} = \frac{\partial L^{\text{outer}}}{\partial \theta'} \frac{\partial \theta'}{\partial \eta}$.
In practice, we can use automatic differentiation packages to compute the meta-gradient $\frac{\partial L^{\text{outer}}}{\partial \eta}$ with compute cost comparable to that of the forward pass.
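The two-level procedure of Eqs. (5)-(7) can be written in a few lines with an automatic differentiation library. The sketch below uses JAX with deliberately tiny linear stand-ins for the meta-network and value function; all names, shapes, and step sizes are illustrative assumptions, not the paper's implementation.

```python
import jax
import jax.numpy as jnp

def inner_loss(theta, eta, tau):
    """Illustrative inner loss: squared error to a learned target g_eta(tau) (cf. Eq. 5)."""
    g = jnp.dot(eta, tau["feats"])          # stand-in for the meta-network g_eta
    v = jnp.dot(theta, tau["state"])        # stand-in for v_theta(S)
    return (g - v) ** 2

def outer_loss(theta, tau):
    """Outer loss: squared error to a canonical bootstrapped return G(tau')."""
    v = jnp.dot(theta, tau["state"])
    return (tau["G"] - v) ** 2

def meta_objective(eta, theta, trajs, val_traj, alpha=0.1):
    # M inner gradient steps on theta, all traced and differentiable w.r.t. eta (Eq. 6).
    for tau in trajs:
        theta = theta - alpha * jax.grad(inner_loss)(theta, eta, tau)
    return outer_loss(theta, val_traj)

eta, theta = jnp.ones(4), jnp.zeros(3)
trajs = [{"feats": jnp.arange(4.0), "state": jnp.ones(3)} for _ in range(3)]
val_traj = {"state": jnp.ones(3), "G": 1.0}

meta_grad = jax.grad(meta_objective)(eta, theta, trajs, val_traj)  # dL_outer / d eta
eta = eta - 0.01 * meta_grad                                       # descent form of Eq. (7)
```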
The meta-gradient algorithm above can be applied to any differentiable component of the update rule, for example to learn the discount factor γ and bootstrapping factor λ [44], intrinsic rewards [47, 46], and auxiliary tasks [39]. In this paper, we apply meta-gradients to learn the meta-parameters of the update target gη online, where η are the parameters of a neural network. We call this algorithm FRODO (Flexible Reinforcement Objective Discovered Online). The following sections instantiate the FRODO algorithm for value prediction, value-based control and actor-critic control, respectively.
3.4 Learned Update Target for Prediction and Value-based Control
Given a learned update target gη , we propose updating the predictions vθ towards the target gη . With a squared loss, this results in the update
$\Delta\theta \propto (g_\eta(\tau) - v_\theta(S)) \nabla_\theta v_\theta(S)$,   (8)
where the meta-network parameterised by η takes the trajectory τ as input and outputs a scalar g_η(τ). After M updates, we compute the outer loss L^outer from a validation trajectory τ′ as the squared difference between the predicted value and a canonical multi-step bootstrapped return G(τ′), as used in classic RL:
$\nabla_{\theta'} L^{\text{outer}} = (G(\tau') - v_{\theta'}(S')) \nabla_{\theta'} v_{\theta'}(S')$   (9)
can then be plugged into Equation (7) to update η and continue with the next iteration. Here θ′ is interpreted and treated as a function of η, which was held fixed during several updates to θ.
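For a linear value function v_θ(S) = θ·φ(S), the inner update of Eq. (8) and the outer quantity of Eq. (9) reduce to a couple of lines; the sketch below is ours (feature vectors and step size are made up for illustration):

```python
import numpy as np

def inner_prediction_step(theta, phi_s, g_eta, alpha=0.1):
    """Inner update (Eq. 8): move v_theta(S) = theta . phi(S) towards the learned target g_eta."""
    v = theta @ phi_s
    return theta + alpha * (g_eta - v) * phi_s

def outer_prediction_direction(theta, phi_s_val, G):
    """The quantity written as grad_{theta'} L_outer in Eq. (9), on a validation state."""
    v = theta @ phi_s_val
    return (G - v) * phi_s_val

theta = np.zeros(4)
theta = inner_prediction_step(theta, phi_s=np.array([1.0, 0.0, 1.0, 0.0]), g_eta=0.8)
print(outer_prediction_direction(theta, phi_s_val=np.array([0.0, 1.0, 1.0, 0.0]), G=0.5))
```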
For value-based algorithms in control, the inner update is similar, but the learned target is used to update an action-value function qθ(S,A) in the inner update. Any standard RL update can be used in the outer loss, such as Q-learning [42], Q(λ), or (Expected) Sarsa [27, 38].
3.5 Learned Update Target for Actor-Critic Algorithms in Control
In actor-critic algorithms, the update target is used both to compute the policy-gradient update to the policy and to update the critic. We form an A2C [21] update with g_η(τ):
$\Delta\theta \propto (g_\eta(\tau) - V(S)) \nabla_\theta \log \pi(S, A) + c_1 (g_\eta(\tau) - V(S)) \nabla_\theta v(S) + c_2 \nabla_\theta H(\pi(S))$,   (10)
where H(·) denotes the entropy of the agent’s policy, and c1 and c2 denote the weightings of the critic loss and entropy regularisation terms, respectively.
The meta-gradient can be computed on the validation trajectory τ ′ using the classic actor-critic update:
$\nabla_{\theta'} L^{\text{outer}} = (G(\tau') - V(S')) \nabla_{\theta'} \log \pi(S', A') + c_1 (G(\tau') - V(S')) \nabla_{\theta'} v(S') + c_2 \nabla_{\theta'} H(\pi(S'))$,   (11)
where θ′ is the updated agent after M updates to the agent parameters θ. According to the chain rule of the meta-gradient, we obtain the gradient of η and update η accordingly. Note that one can use either an n-step return (including the λ-return [32]) as G(τ), or the VTrace return [10] to enable off-policy correction.
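For intuition, a tabular sketch of the inner A2C-style step of Eq. (10) with a softmax policy is shown below; the table sizes, coefficients and learning rate are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def a2c_inner_update(theta_pi, theta_v, s, a, g_eta, lr=0.1, c1=0.5, c2=0.01):
    """One inner A2C-style step (Eq. 10) with a learned target g_eta, tabular case.

    theta_pi: [num_states, num_actions] policy logits; theta_v: [num_states] values.
    """
    logits = theta_pi[s]
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    log_pi = np.log(pi)

    adv = g_eta - theta_v[s]                           # (g_eta(tau) - V(S))

    # policy-gradient term: adv * d/dlogits log pi(a|s) = adv * (onehot(a) - pi)
    grad_logits = adv * (np.eye(len(pi))[a] - pi)
    # entropy term: d/dlogits H(pi) = -pi * (log pi - sum_b pi_b log pi_b)
    grad_entropy = -pi * (log_pi - np.dot(pi, log_pi))

    theta_pi, theta_v = theta_pi.copy(), theta_v.copy()
    theta_pi[s] += lr * (grad_logits + c2 * grad_entropy)
    theta_v[s] += lr * c1 * adv                        # critic moves towards g_eta
    return theta_pi, theta_v

theta_pi, theta_v = np.zeros((3, 2)), np.zeros(3)
theta_pi, theta_v = a2c_inner_update(theta_pi, theta_v, s=0, a=1, g_eta=0.7)
```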
4 Motivating Examples
In this section, we explore the capability of FRODO to discover how to address fundamental issues in RL, such as bootstrapping and non-stationarity, based on simple toy domains. Larger scale experiments will subsequently be presented addressing off-policy learning.
Bootstrapping: We use a simple 6 × 11 environment called “Catch” [22]. The agent controls a paddle located on the bottom row of the grid. The agent starts in the centre and, on each step, it can move one cell to the left, one cell to the right, or stand still. At the beginning of each episode a pellet appears in a random start location on the top row. On each step, the pellet moves down one cell. When the pellet reaches the bottom row the episode terminates. The agent receives a reward of 1 if it catches the pellet and -1 otherwise. All other rewards are zero. See Figure 1a for a depiction of the environment.
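A minimal sketch of such an environment is given below; the exact grid dimensions, observation format and action encoding here are our own assumptions for illustration.

```python
import numpy as np

class Catch:
    """Minimal Catch-style environment: a pellet falls, a paddle on the bottom row catches it."""
    def __init__(self, rows=11, cols=6, seed=0):
        self.rows, self.cols = rows, cols
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.paddle = self.cols // 2                   # paddle starts in the centre
        self.ball_row = 0
        self.ball_col = int(self.rng.integers(self.cols))
        return (self.ball_row, self.ball_col, self.paddle)

    def step(self, action):                            # action in {-1, 0, +1}
        self.paddle = int(np.clip(self.paddle + action, 0, self.cols - 1))
        self.ball_row += 1                             # pellet falls one row per step
        done = self.ball_row == self.rows - 1          # terminate at the bottom row
        reward = (1.0 if self.ball_col == self.paddle else -1.0) if done else 0.0
        return (self.ball_row, self.ball_col, self.paddle), reward, done

env = Catch()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, r, done = env.step(0)                         # stand-still policy
    total += r
print(total)
```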
We applied FRODO to learn to control the agent. We used the full Monte Carlo return as the update target in the outer loss. In the inner update, instead, the agent only received a trajectory with
3 transitions. This requires FRODO to discover the concepts of temporal-difference prediction and bootstrapping – learning to estimate and use a prediction about events outside of the data – since the game cannot be solved perfectly by looking ahead just three steps. We conducted 10 independent runs with random seeds. In Figure 2a, in orange, we report the average episode return of FRODO observed during the course of training. The policy learned by FRODO surpassed the best possible performance of a 3-step look-ahead agent (the dashed blue line), and learned to control the agent optimally (an average episode return of 1).
Non-Stationarity: We use a non-stationary variant of the “5-state Random Walk” environment [32]. The agent starts in the centre of a chain, depicted in Figure 1b, and moves randomly left or right on each step. Episodes terminate when the agent steps out of either boundary; on termination a reward is provided to the agent. On the left side, the reward is either 0 or −1 (switching every 960 time-steps, which corresponds to 10 iterations of FRODO); on the right side the reward is always 1. Each trajectory has 16 time steps.
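The dynamics are simple enough to sketch directly; the loop below simulates episodes under the switching left-terminal reward (variable names and the simulation horizon are illustrative only).

```python
import numpy as np

def random_walk_episode(rng, left_reward):
    """One episode of the 5-state random walk; returns (steps_taken, terminal_reward)."""
    pos, steps = 2, 0                       # start in the centre of states {0, ..., 4}
    while True:
        pos += rng.choice([-1, 1])
        steps += 1
        if pos < 0:
            return steps, left_reward       # left terminal: 0 or -1, non-stationary
        if pos > 4:
            return steps, 1.0               # right terminal: always +1

rng = np.random.default_rng(0)
t, switch_period = 0, 960
while t < 5000:
    left_reward = 0.0 if (t // switch_period) % 2 == 0 else -1.0
    steps, reward = random_walk_episode(rng, left_reward)
    t += steps
```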
We applied FRODO to predict the value function, using a TD(1) update as an outer loss. The critical issue is the non-stationarity; the agent must quickly adapt its prediction whenever the reward on the leftmost side switches from 0 to −1, or vice versa. FRODO learned an update capable of dealing with such non-stationarity effectively. In the experiment, we performed 10 independent runs. In Figure 2b, we report the mean squared error of the predictions learned by FRODO, in orange. The dashed horizontal lines correspond to the average error of the predictions learned by TD(λ) at convergence. The update learned online by FRODO resulted in more accurate predictions, compared to those learned by the TD(λ) update, for any value of λ. The plot zooms into the period around 5M steps for FRODO (in orange) and for the best performing TD(λ), i.e. λ = 0.4; the predictions learned by FRODO adapted much more robustly to change-points than those of TD(λ), as demonstrated by the significantly lower spikes in the prediction error.
5 Large-Scale Experiments
In this section we scale up the actor-critic variant of FRODO from Section 3.5 to more complex RL environments from the Arcade Learning Environment. We instantiate our algorithm within a distributed framework, based on an actor-learner decomposition [10], and implemented in JAX [5]. The implementation details, computing infrastructure and pseudo-code are provided in the Appendix.
5.1 Off-Policy Learning
In actor-learner architectures [10], a parallel group of actors collect trajectories by interacting with separate copies of the environment, and send the trajectories to a queue for the learner to process in batch. This architecture enables excellent throughput, but the latency in communications between the actors and the learner introduces off-policy learning issues, because the parameters of the actors’ policy (the behaviour policy µ) lag behind the parameters in the learner’s policy (the target policy π). Actor-critic updates such as VTrace can correct for such off-policyness by using the action probabilities under the behaviour policy to perform importance sampling.
To address this in our actor-critic instantiation of FRODO, we use VTrace [10], rather than a vanilla actor-critic, as the outer update. In the inner loop, the meta-network takes trajectories from the behaviour policy µ as input. Specifically, it receives the rewards R_t, discounts γ_t, and, as in the motivating examples from Section 4, the values from future time-steps v(S_{t+1}), to allow bootstrapping from the learned predictions. To address off-policy learning, the probabilities of the target policy and behaviour policy for the current action A_t selected in state S_t (i.e., π(S_t, A_t) and µ(S_t, A_t)) are also fed as inputs. This allows the inner loss to potentially discover off-policy algorithms, by constructing suitable off-policy update targets for the policy and value function. Thus, inputs to the meta-network include {R_{t+1}, γ_{t+1}, v(S_{t+1}), π(S_t, A_t), µ(S_t, A_t), ...}. The meta-network is parameterised by an LSTM [16] and processes each trajectory in reverse order to compute the target G_η(τ).
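The sketch below mimics this setup with a plain RNN in place of the paper's LSTM, scanning the per-step inputs in reverse to emit a target for every step; the sizes, the recurrence, and the input ordering are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReverseRNNMetaNet:
    """Toy meta-network: scans the trajectory backwards and emits a target G_t per step."""
    def __init__(self, in_dim, hidden=8):
        self.Wx = 0.1 * rng.standard_normal((in_dim, hidden))
        self.Wh = 0.1 * rng.standard_normal((hidden, hidden))
        self.Wo = 0.1 * rng.standard_normal(hidden)

    def __call__(self, inputs):
        # inputs: [T, in_dim] rows of (R_{t+1}, gamma_{t+1}, v(S_{t+1}), pi(S_t,A_t), mu(S_t,A_t))
        h, targets = np.zeros(self.Wh.shape[0]), []
        for x in inputs[::-1]:                          # process the trajectory in reverse
            h = np.tanh(x @ self.Wx + h @ self.Wh)
            targets.append(float(h @ self.Wo))
        return np.array(targets[::-1])                  # G_t re-aligned with time order

T = 6
inputs = np.stack([rng.standard_normal(T),              # rewards R_{t+1}
                   np.full(T, 0.99),                    # discounts gamma_{t+1}
                   rng.standard_normal(T),              # bootstrap values v(S_{t+1})
                   rng.uniform(size=T),                 # pi(S_t, A_t)
                   rng.uniform(size=T)], axis=1)        # mu(S_t, A_t)
print(ReverseRNNMetaNet(in_dim=5)(inputs).shape)        # -> (6,)
```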
5.2 Consistent Prediction
[Figure 3: median human-normalised score vs. environment frames. (a) Comparison between FRODO and an IMPALA baseline, in terms of the median human-normalised score across 57 Atari games; FRODO takes longer to take off but eventually outperforms IMPALA. (b) Comparison (on 8 games) of several meta-gradient algorithms, where the meta-network either parametrises the loss (in blue), or the target, with (in orange) and without (in green) regularisation.]

In our large-scale experiments, we consider a simple yet effective heuristic which enables dramatic speed-ups of learning in complex environments. While the meta-networks have the expressive power to model any target function that takes a trajectory as input, we regularise the learning space of the target function towards targets that are self-consistent over time (a property that is common to most update targets in deep reinforcement learning - c.f. [26]). Concretely, we suggest regularising the learned update targets $G^\eta$ towards functions that decompose as:
$G^\eta_t = R_{t+1} + \gamma G^\eta_{t+1}$.   (12)
To incorporate the heuristic into our meta-gradient framework, we transform the above equation into a prediction loss and add this component to $L^{\text{outer}}$ used to learn our meta-network η. For example, using a one-step prediction consistency loss:
$L^{\text{outer}} \leftarrow L^{\text{outer}} + c \lVert \perp(R_{t+1} + \gamma G^\eta_{t+1}) - G^\eta_t \rVert^2$,   (13)
where c is a coefficient for the consistency loss and $\perp$ denotes the stop-gradient operator. An extension to n-step self-consistent prediction can be obtained by decomposing $G^\eta_t$ into n-step cumulative discounted rewards with bootstrapping in the final step.
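A minimal sketch of this regulariser (Eq. 13) over a whole trajectory is shown below; in NumPy there is no autodiff, so the stop-gradient on the bootstrap term is only a comment rather than an operation, and the array layout is our own convention.

```python
import numpy as np

def consistency_loss(g, rewards, gamma, c=0.1):
    """One-step self-consistency penalty (Eq. 13), summed over a trajectory.

    g[t] is the learned target G^eta_t; rewards[t] stores R_{t+1}.
    """
    # R_{t+1} + gamma * G^eta_{t+1} is treated as a fixed (stop-gradient) regression target.
    bootstrap_target = rewards[:-1] + gamma * g[1:]
    return c * float(np.sum((bootstrap_target - g[:-1]) ** 2))

g = np.array([0.5, 0.7, 0.9, 1.0])            # learned targets G^eta_t
rewards = np.array([0.0, 0.0, 1.0, 0.0])      # rewards R_{t+1}
print(consistency_loss(g, rewards, gamma=0.99))
```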
5.3 Atari Experiments
We evaluated the performance of our method on a challenging and diverse set of classic Atari games, from the Arcade Learning Environment (ALE) [4].
We applied the FRODO algorithm to learn a target online, using an outer loss based on the actor-critic algorithm IMPALA [10], with a consistency loss included with c = 0.1. The agent network is parameterised with a two-layer convolutional neural network (detailed configurations
and hyperparameters can be found in the Appendix). We evaluated our agents over 16 billion frames of training. We ran separate training runs for all 57 standard benchmark games in the ALE. We computed the median of human-normalised scores throughout training, and compared to the same IMPALA actor-critic algorithm without any objective discovery. Note that the FRODO algorithm does introduce additional algorithmic complexity compared to the IMPALA baseline; we therefore provide pseudo-code in Appendix C to facilitate understanding and reproducibility.
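The evaluation metric can be computed as below; this uses the standard definition of the human-normalised score, (agent − random) / (human − random), which we assume matches the paper's convention (the per-game reference scores here are made up).

```python
import numpy as np

def median_human_normalised(agent, random, human):
    """Median human-normalised score across games (the metric plotted in Figure 3)."""
    agent, random, human = map(np.asarray, (agent, random, human))
    scores = (agent - random) / (human - random)
    return float(np.median(scores))

print(median_human_normalised(agent=[120.0, 80.0, 400.0],
                              random=[10.0, 5.0, 50.0],
                              human=[100.0, 90.0, 300.0]))
```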
In Figure 3a we see that the meta-gradient algorithm discovered an effective objective only slowly and gradually. However, over time the meta-gradient algorithm learned to learn more rapidly, ultimately overtaking the actor-critic baseline and achieving significantly stronger final results. We hypothesise that the performance advantage is driven by the adaptive nature of the learned objective, which allows the agent to find the most suitable objective according to its learning context along the way, instead of using a traditional global, static objective function.
5.4 Analysis
We now examine the technical contributions that facilitate our primary goal of online objective discovery: representing targets in the meta-network versus representing a loss; and the introduction of a consistency loss. In these analysis experiments, we use a subset of 8 Atari games, namely “kung fu master”, “qbert”, “crazy climber”, “asterix”, “beam rider”, “defender”, “pong” & “seaquest”, and train each of the games over three independent runs. In each of the plots, we show the median human-normalised score over all three runs; the shaded area shows the standard deviation across random seeds. Ablation runs were performed for 4 billion frames.
Discovering targets vs. discovering a loss: Our first experiment compares the performance of online objective discovery between a meta-network that represents the target and a meta-network that represents the loss, similarly to prior work in offline, multi-lifetime setups such as ML3 [2] and MetaGenRL [20]. As we illustrate in Figure 3b, directly representing the loss by the meta-network performs poorly across all games. We hypothesise that this is due to significant instabilities in the learning dynamics, which may at any time form a loss that leads to rapid divergence. In contrast, representing the target by the meta-network performs much more stably across all games.
Consistency loss: Next, we examine the effectiveness of a consistency loss in large-scale experiments. We use values of different magnitude as the coefficient of the consistency loss in FRODO, varying between disabling the consistency loss (coefficient c = 0) and a large consistency loss (c = 1). The aggregated median score learning curves are shown in Figure 3c. The introduction of a modest level (c = 0.1) of consistency loss led to a dramatic improvement in learning speed and achieved significantly higher performance. Without the consistency heuristic, performance dropped significantly and was also more unstable, presumably due to an increased likelihood of uninformative or misleading targets. Additionally, regularising too strongly (c = 1) led to significantly worse performance.
Analysis of Learned Objective: Finally, we analysed the objective learned by FRODO over time. Our primary question was whether the discovered target used in the inner loss differed significantly from the VTrace target used in the outer loss. For each of the eight games, we computed the mean-squared error, averaged over the time-steps in the trajectory, between the VTrace return and the meta-network return g_η(τ). Figure 3d shows that the discovered objective both varies over time, and varies significantly away from the VTrace target, with a very different characteristic in each game. Only in the game of “Pong” was a target close to VTrace preferred throughout training, perhaps because nothing more complex was required in this case.
6 Conclusion
In this paper, we proposed an algorithm that allows RL agents to learn their own objective during online interactions with their environment. The objective, specifically the target used to update the policy and value function, is parameterised by a deep neural meta-network. The nature of the meta-network, and hence the objective of the RL algorithm, is discovered by meta-gradient descent over the sequence of updates based upon the discovered target.
Our results in toy domains demonstrate that FRODO can successfully discover how to address key issues in RL, such as bootstrapping and non-stationarity, through online adaptation of its objective. Our results in Atari demonstrate that FRODO can successfully discover and adapt off-policy learning
objectives that are distinct from, and performed better than, strong benchmark RL algorithms. Taken together, these examples illustrate the generality of the proposed method, and suggest its potential both to recover existing concepts, and to discover new concepts for RL algorithms.
Broader Impact
This work is situated within a broad and important long-term research program for reinforcement learning: how might a machine discover its own algorithm for RL? Specifically, the research considers one strand of potential research in this area, which is how a machine might discover its own objective for RL. Because our research focuses on the online setting (compared, for example, to much prior work on meta-learning that learns offline from a distribution of tasks), it is applicable to any RL problem. In principle, therefore, any benefits demonstrated in this paper might potentially be applicable to other future RL problems. Thus, the ethical consequences of this research are similar to those for any other research on the RL problem itself: it may provide some progress and accelerate research towards all RL problems, which may benefit any users and use-cases of RL (both "good" and "bad"). Our algorithm learns entirely from interaction with its environment, and does not utilise any external source of data.
Acknowledgments and Disclosure of Funding
The authors would like to thank Manuel Kroiss, Iurii Kemaev and developers of JAX, Haiku, RLax, Optax for their kind engineering support; and thank Joseph Modayil, Doina Precup for their comments and suggestions on the paper. | 1. What is the focus and contribution of the paper regarding meta RL algorithms?
2. What are the strengths of the proposed approach, particularly in terms of its novel formulation and experimental results?
3. What are the weaknesses of the paper, especially regarding the lack of comparisons with other meta RL methods?
4. Do you have any concerns about the self-discovered objective in the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
In this paper, the authors propose a new meta-RL algorithm where the value prediction target is self-learned, i.e., generated by a trained prediction model. The value function learns to predict the self-generated value target in the inner loop of the meta-RL algorithm, whereas in the outer loop, the value function learns to predict a canonical multi-step bootstrapped return. The target in the outer loss could be replaced by any standard RL update target.
------->>>> Post-rebuttal update: The main results for this method are presented as aggregated learning curves over a set of Atari games (Figure 3 from the main paper), and the individual playing scores for each of the games are not provided in the appendix. It would be good to see such scores in the updated version of this paper so that it is easier for follow-up works to compare with this method.
Strengths
+ The idea of formulating the inner loss for meta-RL as learning from an objective the algorithm discovers on its own is interesting and novel. Generally, letting the algorithm self-discover its objective moves the learning algorithm one step closer towards automated machine intelligence, compared to conventional meta-RL methods which rely heavily on experts' design choices, such as which hyperparameters to tune, to perform learning-to-learn.
+ The authors present extensive experimental results to evaluate the proposed method. The proposed method has been evaluated on three task domains: a catch game to demonstrate that the method can effectively learn bootstrapping, a 5-state random walk to demonstrate that the method works in non-stationary environments, and ALE, which is a large-scale RL testbed. In all the task domains, the proposed method achieves a noticeable performance improvement over the compared baselines.
+ The authors propose a consistency loss for large-scale experiments, which is used to regularize the output of the target prediction model to be self-consistent over time. The consistency loss can bring significant performance improvement when tested on ALE, if its weight is properly set.
Weaknesses
- My main concern with the paper is that the proposed method has not been compared with many meta-RL methods. It would be good to have some meta-RL baselines that perform hyperparameter optimization (e.g., [1]), or that use a classic RL target different from the outer-loop target return as the inner-loop prediction target, etc. The effect of incorporating a self-discovered objective could have been evaluated more thoroughly. [1] Meta-Gradient Reinforcement Learning (NeurIPS 2018). |
NIPS | Title
Meta-Gradient Reinforcement Learning with an Objective Discovered Online
Abstract
Deep reinforcement learning includes a broad family of algorithms that parameterise an internal representation, such as a value function or policy, by a deep neural network. Each algorithm optimises its parameters with respect to an objective, such as Q-learning or policy gradient, that defines its semantics. In this work, we propose an algorithm based on meta-gradient descent that discovers its own objective, flexibly parameterised by a deep neural network, solely from interactive experience with its environment. Over time, this allows the agent to learn how to learn increasingly effectively. Furthermore, because the objective is discovered online, it can adapt to changes over time. We demonstrate that the algorithm discovers how to address several important issues in RL, such as bootstrapping, non-stationarity, and off-policy learning. On the Atari Learning Environment, the meta-gradient algorithm adapts over time to learn with greater efficiency, eventually outperforming the median score of a strong actor-critic baseline.
1 Introduction
Recent advances in supervised and unsupervised learning have been driven by a transition from handcrafted expert features to deep representations [15]; these are typically learned by gradient descent on a suitable objective function to adjust a rich parametric function approximator. As a field, reinforcement learning (RL) has also largely embraced the transition from handcrafting features to handcrafting objectives: deep function approximation has been successfully combined with ideas such as TD-learning [30, 34], Q-learning [42, 23], double Q-learning [36, 37], n-step updates [32, 14], general value functions [33, 18], distributional value functions [7, 3], policy gradients [43, 21] and a variety of off-policy actor-critics [8, 10, 29]. In RL, the agent doesn’t have access a differentiable performance metric, thus choosing the right proxy is of particular importance: indeed, each of the aforementioned algorithms differs fundamentally in their choice of objective, designed in each case by expert human knowledge. The deep RL version of these algorithms is otherwise very similar in essence: updating parameters via gradient descent on the corresponding objective function.
Our goal is an algorithm that instead learns its own objective, and hence its own deep reinforcement learning algorithm, solely from experience of interacting with its environment. Following the principles of deep learning, we parameterise the objective function by a rich function approximator, and update it by meta-gradient learning [28, 1, 11, 44, 47, 39, 2, 20] – i.e. by gradient descent on the sequence of gradient descent updates resulting from the choice of objective function – so as to maximise a naive outer loss function (such as REINFORCE) with minimal initial knowledge.
Importantly, and in contrast to the majority of recent work on meta-learning [11, 2, 20], our metagradient algorithm learns online, on a single task, during a single “lifetime" of training. This online approach to meta-learning confers several advantages. First, an online learning algorithm can be applied to any RL environment, and does not require a distribution of related environments, nor the
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
ability to reset and rerun on different environments. Second, an online learning algorithm can adapt the objective function as learning progresses, rather than assume a global, static “one-size-fits-all" objective. Our hypothesis is that an online meta-gradient learning agent will, over time, learn to learn with greater efficiency, and in the long-run this will outperform a fixed (handcrafted) objective.
We show in toy problems that our approach can discover how to address important issues in RL, such as bootstrapping and non-stationarity. We also applied our algorithm for online discovery of an off-policy learning objective to independent training runs on each of 57 classic Atari games. Augmented with a simple heuristic to encourage consistent predictions, our meta-gradient algorithm outperformed the median score of a strong actor-critic baseline on this benchmark.
2 Related Work
The idea of learning to learn by gradient descent has a long history. In supervised learning, IDBD and SMD [31, 28] used a meta-gradient approach to adapt the learning rate online so as to optimise future performance. “Learning by gradient descent to learn by gradient descent" [1] used meta-gradients, offline and over multiple lifetimes, to learn a gradient-based optimiser, parameterised by a “black-box” neural network. MAML [11] and REPTILE [24] also use meta-gradients, offline and over multiple lifetimes, to learn initial parameters that can be optimised more efficiently.
In reinforcement learning, methods such as meta reinforcement learning [40] and RL2 [9] allow a recurrent network to jointly represent, in its activations, both the agent’s representation of state and also its internal parameters. Xu et al [44] introduced metagradients as a general but efficient approach for optimising the meta-parameters of gradient-based RL agents. This approach has since been applied to many different meta-parameters of RL algorithms, such as the discount γ and bootstrapping parameter λ [44], intrinsic rewards [47, 46], auxiliary tasks [39], off-policy corrections [45], and to parameterise returns as a linear combination of rewards [41] (without any bootstrapping). The metagradient approach has also been applied, offline and over multiple lifetimes, to black-box parameterisations, via deep neural networks, of the entire RL algorithm [2, 20] and [25] (contemporaneous work); evolutionary approaches have also been applied [17].
No prior work has addressed the most ambitious combination: a black-box approach that can parameterise the RL algorithm, meta-learned online, during the course of a single agent lifetime.
The table below catalogs related work on meta-gradient RL. We differentiate methods by several key properties. First, whether they are single lifetime (i.e. they learn online to improve performance by interacting with an environment), or require multiple lifetimes (i.e. they improve performance across repeated agent “lifetimes", each of which faces a different environment sampled from a suitable distribution). Second, whether they are white-box methods (i.e. they meta-learn the hyper-parameters of an existing RL update rule) or black-box methods (i.e. they meta-learn a general-purpose neural network encoding an RL update rule). Third, whether they compute meta-gradients by forward-mode or backward-mode differentiation (or do not use meta-gradients at all). Finally, what is meta-learned.
Algorithm Algorithm properties What is meta-learned? IDBD, SMD [31, 28] → learning rate SGD2 [1] ← optimiser RL2, Meta-RL [9, 40] X recurrent network MAML, REPTILE [11, 24] ← initial params Meta-Gradient [44, 47] → γ, λ, reward Meta-Gradient [39, 45, 41] ← auxiliary tasks, hyperparams, reward weights ML3, MetaGenRL [2, 20] ← loss function Evolved PG [17] X loss function Oh et al. 2020 [25] ← target vector This paper ← target white box, black box, single lifetime, multi-lifetime ← backward mode,→ forward mode, X no meta-gradient
3 Algorithm
In this section, we describe our proposed algorithm for online learning of reinforcement learning objectives using meta-gradients. Our starting point is the observation that a single quantity, the update target G, plays a pivotal role in characterising the objective of most deep RL algorithms; therefore, the update target offers an ideal entry point to flexibly parameterise the overall RL algorithm.
We first review the objectives used in classic RL algorithms and discuss how they may be flexibly parameterised via a neural network in order to expose them to learning instead of being manually designed by human researchers. We then recap the overall idea of meta-gradient reinforcement learning, and we illustrate how it can be used to meta-learn, online, the neural network that parametrises the update target. We discuss this for prediction, value-based control and actor-critic algorithms.
3.1 Update Targets in Reinforcement Learning Algorithms
To learn a value function vθ(S) in temporal difference (TD) prediction algorithms, in each state we use bootstrapping to construct a target G for the value function. For a trajectory τt = {St, At, Rt+1, . . . }, the target Gt for the one-step TD algorithm is
Gt = Rt+1 + γvθ(St+1); (1)
with stochastic gradient descent, we can then update the parameter θ of the value function as follows:
θ ← θ + α(Gt − vθ(St))∇θvθ(St), (2) where α is the learning rate used to update the parameter θ.
Similarly in value-based algorithms for control, we can construct the update target for the actionvalues qθ(St, At); for instance, in one-step Q-learning parameters θ for the action-value function qθ can be updated by:
θ ← θ + α(Gt − qθ(St, At))∇θqθ(St, At) where Gt = Rt+1 + γmax a qθ(St+1, a) . (3)
More generally, n-step update targets can bootstrap from the value estimation after accumulating rewards for n steps, instead of just considering the immediate reward. For instance in the case of prediction, we may consider the n-step truncated and bootstrapped return defined as:
Gt = Rt+1 + γRt+2 + γ 2Rt+3 + · · ·+ γnvθ(St+n). (4)
3.2 Parameterising RL objectives
In this work, instead of using a discounted cumulative return, we fully parameterise the update target by a neural network. This meta-network takes the trajectory as input and produces a scalar update target, i.e., G = gη(τt), where the function gη : τt → R is a neural network with parameters η. We train the meta-network using an end-to-end meta-gradient algorithm, so as to learn an update target that leads to good subsequent performance.
A different way to learn the RL objective is to directly parameterise a loss by a meta-network [2, 20], rather than the target of a loss. For instance, a standard TD learning update can either be represented by a TD loss gη(τ) = (Rt+1 + γ⊥(vθ(St+1)) − vθ(St))2, or by a squared loss with respect to a TD target, gη(τ) = Rt+1 + γ⊥(vθ(St+1)), where ⊥ represents a gradient-stopping operation. Both forms of representation are rich enough to include a rich variety of reinforcement learning objectives.
However, we hypothesise that learning a target will lead to more stable online meta-gradients than learning a loss. This is because the induced learning rule is inherently moving towards a target, rather than potentially away from it, thereby reducing the chance of immediate divergence. Because we are operating in an online meta-learning regime, avoiding divergence is of critical importance. This contrasts to prior work in offline meta-learning [2, 20], where divergence may be corrected in a subsequent lifetime.
3.3 Meta-Gradient Reinforcement Learning
Meta-gradient reinforcement learning is a family of gradient-based meta learning algorithms for learning and adapting differentiable components (denoted as meta-parameters η) in the RL update
rule. The key insight of meta-gradient RL is that most commonly used update rules are differentiable, and thus the effect of a sequence of updates is also differentiable.
Meta-gradient RL is a two-level optimisation process. A meta-learned inner loss Linnerη is parameterised by meta-parameters η and the agent tries to optimise Linnerη to update θ. Given a sequence of trajectories T = {τi, τi+1, τi+2, . . . , τi+M , τi+M+1}, we apply multiple steps of gradient descent updates to the agent θ according to the inner losses Linnerη (τi, θi). For each trajectory τ ∈ T , we have:
∆θi ∝ ∇θiLinnerη (τi, θi) θi+1 = θi + ∆θi . (5)
Consider keeping η fixed for M updates to the agent parameter θ:
θi η−→ θi+1 η−→ . . . η−→ θi+M−1 η−→ θi+M . (6)
A differentiable outer loss Louter(τi+M+1, θi+M ) is then applied to the updated agent parameters θ′ = θi+M . The gradient of this loss is taken w.r.t. meta-parameters η and then used to update η via gradient descent:
∆η ∝ ∇ηLouter(τi+M+1, θi+M ) η ← η + ∆η . (7)
We call this quantity ∇ηLouterη the meta-gradient. We can iterate this procedure during the training of the agent θ and repeatedly update the meta-parameters η.
The meta-gradient flows through the multiple gradient descent updates to the agent θ, i.e., the whole update procedure from θi to the outer loss of the final updated agent parameter θi+M . By applying chain rule, we have ∂L outer
∂η = ∂Louter ∂θ′ ∂θ′ ∂η . In practice, we can use automatic differentiation packages to
compute the meta gradient ∂L outer
∂η with comparable compute complexity as the forward pass.
The meta-gradient algorithm above can be applied to any differentiable component of the update rule, for example to learn the discount factor γ and bootstrapping factor λ [44], intrinsic rewards [47, 46], and auxiliary tasks [39]. In this paper, we apply meta-gradients to learn the meta-parameters of the update target gη online, where η are the parameters of a neural network. We call this algorithm FRODO (Flexible Reinforcement Objective Discovered Online). The following sections instantiate the FRODO algorithm for value prediction, value-based control and actor-critic control, respectively.
3.4 Learned Update Target for Prediction and Value-based Control
Given a learned update target gη , we propose updating the predictions vθ towards the target gη . With a squared loss, this results in the update
∆θ ∝ (gη(τ)− vθ(S))∇θvθ(S), (8) where the meta-network parameterised by η takes the trajectory τ as input and outputs a scalar gη(τ). After M updates, we compute the outer loss Louter from a validation trajectory τ ′ as the squared difference between the predicted value and a canonical multi-step bootstrapped return G(τ ′), as used in classic RL: ∇θ′Louter = (G(τ ′)− vθ′(S′))∇θ′vθ′(S′) (9) can then be plugged in Equation (7), to update η and continue with the next iteration. Here θt is interpreted and treated as a function of η, which was held fixed during several updates to θ.
For value-based algorithms in control, the inner update is similar, but the learned target is used to update an action-value function qθ(S,A) in the inner update. Any standard RL update can be used in the outer loss, such as Q-learning [42], Q(λ), or (Expected) Sarsa [27, 38].
3.5 Learned Update Target for Actor-Critic Algorithms in Control
In actor-critic algorithms, the update target is used both to compute policy gradient update to the policy, as well as to update the critic. We form an A2C [21] update with gη(τ):
∆θ ∝ (gη(τ)− V (S))∇θ log π(S,A) + c1(gη(τ)− V (S))∇θv(S) + c2∇θH(π(S)), (10)
where H(·) denotes the entropy of the agent’s policy, and c1 and c2 denote the weightings of the critic loss and entropy regularisation terms, respectively.
The meta-gradient can be computed on the validation trajectory τ ′ using the classic actor-critic update:
∇θ′Louter = (G(τ ′)− V (S′))∇θ′ log π(S′, A′) + c1(G(τ ′)− V (S′))∇θ′v(S′) + c2∇θ′H(π(S′)), (11) where θ′ is the updated agent after M updates to the agent parameter θ. According to the chain rule of meta-gradient, we obtain the gradient of η and update η accordingly. Note that one can use either n-step return (including λ-return [32]) as G(τ), or to use VTrace-return [10] to enable off-policy correction.
4 Motivating Examples
In this section, we explore the capability of FRODO to discover how to address fundamental issues in RL, such as bootstrapping and non-stationarity, based on simple toy domains. Larger scale experiments will subsequently be presented addressing off-policy learning.
Bootstrapping: We use a simple 6 × 11 environment called “Catch” [22]. The agent controls a paddle located on the bottom row of the grid. The agent starts in the centre and, on each step, it can move on cell to the left, one cell to the right or stand still. At the beginning of each episode a pellet appears in a random start location on the top row. On each step, the pellet move down on cell. When the pellet reaches the top row the episode terminates. The agent receives a reward of 1 if it caught the pellet, -1 otherwise. All other rewards are zero. See Figure 1a, for a depiction of the environment.
We applied FRODO to learning to control the agent. We used the the full Monte Carlo return as update target in the outer loss. In the inner update, instead, the agent only received a trajectory with
3 transitions. This requires FRODO to discover the concepts of temporal-difference prediction and bootstrapping – learning to estimate and use a prediction about events outside of the data – since the game cannot be solved perfectly by looking ahead just three steps. We conduct 10 independent runs with random seeds. From Figure 2a, in orange, we report the average episode return of FRODO observed during the course of training. The policy learned by FRODO surpassed the best possible performance for a 3 step look-ahead agent (the dashed blue line), and learned to control the agent optimally (an average episode return of 1).
Non-Stationarity: We use a non-stationary variant of the “5-state Random Walk” environment [32]. The agent starts in the centre of a chain, depicted on Figure 1b, and moves randomly left or right on each step. Episodes terminate when the agent steps out of either boundary, on termination a reward is provided to the agent; on the left side, the reward is either 0 or −1 (switching every 960 time-steps, which corresponds to 10 iterations of FRODO), on the right side the reward is always 1. Each trajectory has 16 time steps.
We applied FRODO to predict the value function, using a TD(1) update as an outer loss. The critical issue is the non-stationarity; the agent must quickly adapt its prediction whenever the reward on the leftmost side switched from 0 to 1, or vice versa. FRODO learned an update capable of dealing with such non-stationarity effectively. In the experiment, we perform 10 independent runs. In Figure 2b, we report the mean squared error of the predictions learned by FRODO, in orange. The dashed horizontal lines correspond to the average error of the predictions learned by TD(λ) at convergence. The update learned online by FRODO resulted in more accurate predictions, compared to those learned by the TD(λ) update, for any value of λ. The plot zooms into the period around 5M steps for FRODO (in orange) and for the best performing TD(λ), i.e. λ = 0.4; the predictions learned by FRODO adapted much more robustly to change-points than those of TD(λ), as demonstrated by the significantly lower spikes in the prediction error.
5 Large-Scale Experiments
In this section we scale up the actor-critic variant of FRODO from Section 3.5 to more complex RL environments in the Arcade Learning Environment. We instantiate our algorithm within a distributed framework, based on an actor-learner decomposition [10], and implemented in JAX [5]. The implementation details, computing infrastructure and pseudo-code are provided in Appendix.
5.1 Off-Policy Learning
In actor-learner architectures [10], a parallel group of actors collect trajectories by interacting with separate copies of the environment, and send the trajectories to a queue for the learner to process in batch. This architecture enables excellent throughput, but the latency in communications between the actors and the learner introduces off-policy learning issues, because the parameters of the actors’ policy (the behaviour policy µ) lag behind the parameters in the learner’s policy (the target policy π). Actor-critic updates such as VTrace can correct for such off-policyness by using the action probabilities under the behaviour policy to perform importance sampling.
To address this in our actor-critic instantiation of FRODO, we use VTrace [10], rather than a vanilla actor-critic, as outer update. In the inner loop, the meta network takes trajectories from the behaviour policy µ as input. Specifically it receives the rewards Rt, discounts γt, and, as in the motivating examples from Section 4, the values from future time-steps v(St+1), to allow bootstrapping from the learned predictions. To address off-policy learning, the probabilities of the target policy and behaviour policy for the current actionAt selected in state St, (i.e., π(St, At) and µ(St, At)), are also fed as inputs. This allows the inner loss to potentially discover off-policy algorithms, by constructing suitable off-policy update targets for the policy and value function. Thus, inputs to the meta network include {Rt+1, γt+1, v(St+1), π(St, At), µ(St, At), · · · }. The meta network is parameterised by an LSTM [16] and processes each trajectory in reverse order to compute the target Gη(τ).
5.2 Consistent Prediction
In our large scale experiments, we consider a simple yet effective heuristic which enables dramatic speed ups of learning in complex environments. While the meta-networks have the expressive power to model any target function that takes a trajectory as input, we regularise the learning space of the
0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 Environment Frames 1e10
0.0
0.5
1.0
1.5
2.0
2.5
M ed
ia n
Sc or
e
FRODO IMPALA
(a) Comparison between FRODO and an IMPALA baseline, in terms of the median human-normalised score across 57 Atari games. FRODO takes longer to take-off but eventually outperforms IMPALA.
0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 Environment Frames 1e9
0.0
0.5
1.0
1.5
2.0
2.5
3.0
M ed
ia n
Sc or
e
Learned Loss FRODO FRODO (No Consistency)
(b) Comparison (on 8 games) of several metagradient algorithms, where the meta-network either parametrises the loss (in blue), or the target, with (in orange) and without (in green) regularisation.
target function towards targets that are self-consistent over time (a property that is common to most update targets in deep reinforcement learning - c.f. [26]). Concretely, we suggest to regularise the learned update targets Gη towards functions that decompose as:
Gηt = Rt+1 + γG η t+1. (12)
To incorporate the heuristic into our meta-gradient framework, we transform the above equations into a prediction loss, and add this component into Louter to learn our meta-network η. For example, using a one-step prediction consistency loss:
Louter ← Louter + c||⊥(Rt+1 + γGηt+1)−G η t ||2, (13)
where c is for a coefficient for the consistency loss and ⊥ denotes stop gradient operator. Extension to n-step self-consistent prediction can be obtained by decomposing Gηt into n-step cumulative discounted rewards with bootstrapping in the final step.
5.3 Atari Experiments
We evaluated the performance of our method on a challenging and diverse set of classic Atari games, from the Arcade Learning Environment (ALE) [4].
We applied the FRODO algorithm to learn a target online, using an outer loss based on the actorcritic algorithm IMPALA [10], and using a consistency loss was included with c = 0.1. The agent network is parameterised with a two-layer convolutional neural network (detailed configurations
and hyperparameters can be found in the Appendix). We evaluate our agents over 16 billion frames of training. We ran separate training runs for all 57 standard benchmark games in the ALE. We computed the median of human-normalised scores throughout training, and compared to the same IMPALA actor-critic algorithm without any objective discovery. Note that FRODO algorithm does introduce algorithmic complexity compared to the IMPALA baseline, thus we provide pseudo-code in Appendix C to facilitate understanding and reproducibility.
In Figure 3a we see that the meta-gradient algorithm learned slowly and gradually to discover an effective objective. However, over time the meta-gradient algorithm learned to learn more rapidly, ultimately overtaking the actor-critic baseline and achieving significantly stronger final results. We hypothesise the performance advantage is led by the adaptive nature of the learned objective, which allows the agent to find most suitable objective according to its learning context along the way, instead of using a traditional global static objective function.
5.4 Analysis
We now examine the technical contributions that facilitate our primary goal of online objective discovery: representing targets in the meta-network versus representing a loss; and the introduction of a consistency loss. In these analysis experiments, we use a subset of 8 Atari games, namely, “kung fu master”, “qbert”, “crazy climber”, “asterix”, “beam rider”, “defender”, “pong” &“seaquest”, and train each of the games over three independent runs. In each of the plots, we show the median human-normalised score over all three runs; the shaded area shows standard derivation across random seeds. Ablation runs were performed for 4 billion frames.
Discovering targets v.s. Discovering loss: Our first experiment compares the performance of online objective discovery between a meta-network that represents the target and a meta-network that represents the loss, similarly to prior work in offline, multi-lifetime setups such as ML3 [2], MetaGenRL [20]. As we illustrate in Figure 3b, directly representing the loss by the meta-network performs poorly across all games. We hypothesise that this is due to significant instabilities in the learning dynamics, which may at any time form a loss that leads to rapid divergence. In contrast, representing the target by the meta-network performs much more stably across all games.
Consistency loss: Next, we examine the effectiveness of a consistency loss in large-scale experiments. We use values of different magnitude as the coefficient of the consistency loss in FRODO, varying between disabling consistency loss (coefficient c = 0) and a large consistency loss (c = 1). The aggregated median score learning curves are shown in Figure 3c. The introduction of a modest level (c = 0.1) of consistency loss led to a dramatic improvements in learning speed and achieved significantly higher performance. Without the consistency heuristic, performance dropped significantly and was also more unstable, presumably due to an increased likelihood of uninformative or misleading targets. Additionally, regularising too strongly (c = 1) led to significantly worse performance.
Analysis of Learned Objective: Finally, we analysed the objective learned by FRODO over time. Our primary question was whether the discovered target used in the inner loss differed significantly from the VTrace target used in the outer loss. For each of the eight games, we computed the meansquared error, averaged over the time-steps in the trajectory, between the VTrace return and the meta-network return gη(τ). Figure 3d shows that the discovered objective both varies over time, and varies significantly away from the VTrace target, with a very different characteristic in each game. Only in the game of “Pong” was a target close to VTrace preferred throughout training, perhaps because nothing more complex was required in this case.
6 Conclusion
In this paper, we proposed an algorithm that allows RL agents to learn their own objective during online interactions with their environment. The objective, specifically the target used to update the policy and value function, is parameterised by a deep neural meta-network. The nature of the meta-network, and hence the objective of the RL algorithm, is discovered by meta-gradient descent over the sequence of updates based upon the discovered target.
Our results in toy domains demonstrate that FRODO can successfully discover how to address key issues in RL, such as bootstrapping and non-stationarity, through online adaptation of its objective. Our results in Atari demonstrate that FRODO can successfully discover and adapt off-policy learning
objectives that are distinct from, and perform better than, strong benchmark RL algorithms. Taken together, these examples illustrate the generality of the proposed method, and suggest its potential both to recover existing concepts, and to discover new concepts for RL algorithms.
Broader Impact
This work is situated within a broad and important long-term research program for reinforcement learning: how might a machine discover its own algorithm for RL? Specifically, the research considers one strand of potential research in this area, which is how a machine might discover its own objective for RL. Because our research focuses on the online setting (compared, for example, to much prior work on meta-learning that learns offline from a distribution of tasks), it is applicable to any RL problem. In principle, therefore, any benefits demonstrated in this paper might potentially be applicable to other future RL problems. Thus, the ethical consequences of this research are similar to those for any other research on the RL problem itself: it may provide some progress and accelerate research towards all RL problems, which may benefit any users and use-cases of RL (both "good" and "bad"). Our algorithm learns entirely from interaction with its environment, and does not utilise any external source of data.
Acknowledgments and Disclosure of Funding
The authors would like to thank Manuel Kroiss, Iurii Kemaev and developers of JAX, Haiku, RLax, Optax for their kind engineering support; and thank Joseph Modayil, Doina Precup for their comments and suggestions on the paper. | 1. What is the focus and contribution of the paper regarding meta-learning in reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its ability to adapt to changing contexts and learn from online trajectories?
3. Do you have any concerns or questions about the method's performance compared to other approaches, such as model-free RL or POMDP?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential drawbacks to the proposed method that could be addressed in future research? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes to allow RL agents to learn their own objective during online interactions with their environment. First, a meta-learner takes the trajectory as input and meta-learns an update target. Then, a traditional TD-like update or policy-gradient update is performed based on the learned target. Because the target is learned from online trajectories, it can adapt to the changing context of learning over time. To speed up learning in complex environments, a simple heuristic is added to regularize the learning space of the target function towards targets that are self-consistent over time. This paper provides motivating examples to show that the proposed method can address issues of bootstrapping, non-stationarity, and off-policy learning. In addition, the authors conduct large-scale experiments on Atari to further evaluate their method. Contributions: 1. This paper takes a first step towards meta-learning the update target in the context of meta-gradient RL. 2. The proposed method makes use of online trajectories to allow the agent to learn its own online objective and thus learn how to learn increasingly effectively.
Strengths
1. This paper tackles a very valuable problem of meta-learning knowledge from online trajectories. The idea of building a learnable update target for RL is interesting and novel. This paper is relevant to a broad range of research on meta-RL. 2. They provide motivating examples to validate the motivation of their algorithm design. These experiments are well-designed to support the main claims.
Weaknesses
1. I wonder whether R2D2, or an extended version of a conventional model-free RL method that supports POMDPs, would have similar performance to the proposed method in this paper. For example, what if we simply use a trajectory encoder to capture the true state through history and then add it to the value network and policy network as an additional input feature? Maybe this naïve method can also address tasks like “Catch” and “5-state Random Walk”. More experiments would be helpful to further understand the contribution of this paper. 2. The proposed approach has only comparable or slightly better performance than the baseline method (IMPALA) on the large-scale standard Atari benchmark, but it has a much more complex implementation than the baseline (a meta-learning algorithm seems to be harder to train and use compared to a simple model-free method, and may be less stable in practice). In addition, some recent stronger baselines (e.g., R2D2, NGU, Agent57, and MuZero) on this benchmark are not included.
NIPS | Title
VIME: Variational Information Maximizing Exploration
Abstract
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as ε-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent’s belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
1 Introduction
Reinforcement learning (RL) studies how an agent can maximize its cumulative reward in a previously unknown environment, which it learns about through experience. A long-standing problem is how to manage the trade-off between exploration and exploitation. In exploration, the agent experiments with novel strategies that may improve returns in the long run; in exploitation, it maximizes rewards through behavior that is known to be successful. An effective exploration strategy allows the agent to generate trajectories that are maximally informative about the environment. For small tasks, this trade-off can be handled effectively through Bayesian RL [1] and PAC-MDP methods [2–6], which offer formal guarantees. However, these guarantees assume discrete state and action spaces. Hence, in settings where state-action discretization is infeasible, many RL algorithms use heuristic exploration strategies. Examples include acting randomly using ε-greedy or Boltzmann exploration [7], and utilizing Gaussian noise on the controls in policy gradient methods [8]. These heuristics often rely on random walk behavior which can be highly inefficient, for example Boltzmann exploration requires a training time exponential in the number of states in order to solve the well-known n-chain MDP [9]. In between formal methods and simple heuristics, several works have proposed to address the exploration problem using less formal, but more expressive methods [10–14]. However, none of them fully address exploration in continuous control, as discretization of the state-action space scales exponentially in its dimensionality. For example, the Walker2D task [15] has a 26-dim state-action space. If we assume a coarse discretization into 10 bins for each dimension, a table of state-action visitation counts would require $10^{26}$ entries.
This paper proposes a curiosity-driven exploration strategy, making use of information gain about the agent’s internal belief of the dynamics model as a driving force. This principle can be traced back to the concepts of curiosity and surprise [16–18]. Within this framework, agents are encouraged to take actions that result in states they deem surprising—i.e., states that cause large updates to the dynamics model distribution. We propose a practical implementation of measuring information gain using variational inference. Herein, the agent’s current understanding of the environment dynamics is represented by a Bayesian neural network (BNN) [19, 20]. We also show how this can be interpreted as measuring compression improvement, a proposed model of curiosity [21]. In contrast to previous curiosity-based approaches [10, 22], our model scales naturally to continuous state and action spaces. The presented approach is evaluated on a range of continuous control tasks, and multiple underlying RL algorithms. Experimental results show that VIME achieves significantly better performance than naïve exploration strategies.
2 Methodology
In Section 2.1, we establish notation for the subsequent equations. Next, in Section 2.2, we explain the theoretical foundation of curiosity-driven exploration. In Section 2.3 we describe how to adapt this idea to continuous control, and we show how to build on recent advances in variational inference for Bayesian neural networks (BNNs) to make this formulation practical. Thereafter, we make explicit the intuitive link between compression improvement and the variational lower bound in Section 2.4. Finally, Section 2.5 describes how our method is practically implemented.
2.1 Preliminaries
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ0, γ, T), in which S ⊆ R^n is a state set, A ⊆ R^m an action set, P : S × A × S → R≥0 a transition probability distribution, r : S × A → R a bounded reward function, ρ0 : S → R≥0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. States and actions viewed as random variables are abbreviated as S and A. The presented models are based on the optimization of a stochastic policy πα : S × A → R≥0, parametrized by α. Let µ(πα) denote its expected discounted return: $\mu(\pi_\alpha) = \mathbb{E}_\tau\big[\sum_{t=0}^{T} \gamma^t\, r(s_t, a_t)\big]$, where τ = (s0, a0, . . .) denotes the whole trajectory, s0 ∼ ρ0(s0), at ∼ πα(at|st), and st+1 ∼ P(st+1|st, at).
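To make the return definition concrete, here is a minimal sketch (ours, not from the paper) of estimating $\mu(\pi_\alpha)$ by Monte Carlo rollouts; the `sample_rewards` rollout helper is an assumed interface introduced only for illustration.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Sum_{t=0}^{T} gamma^t * r(s_t, a_t) for one finite-horizon trajectory."""
    rewards = np.asarray(rewards, dtype=np.float64)
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))

def estimate_expected_return(sample_rewards, gamma, n_trajectories=100):
    """Monte Carlo estimate of mu(pi_alpha).

    `sample_rewards` is an assumed helper that rolls out the policy once and
    returns the reward sequence [r(s_0, a_0), ..., r(s_T, a_T)].
    """
    returns = [discounted_return(sample_rewards(), gamma)
               for _ in range(n_trajectories)]
    return float(np.mean(returns))
```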
2.2 Curiosity
Our method builds on the theory of curiosity-driven exploration [16, 17, 21, 22], in which the agent engages in systematic exploration by seeking out state-action regions that are relatively unexplored. The agent models the environment dynamics via a model p(st+1|st, at; θ), parametrized by the random variable Θ with values θ ∈ Θ. Assuming a prior p(θ), it maintains a distribution over dynamic models through a distribution over θ, which is updated in a Bayesian manner (as opposed to a point estimate). The history of the agent up until time step t is denoted as ξt = {s1, a1, . . . , st}. According to curiosity-driven exploration [17], the agent should take actions that maximize the reduction in uncertainty about the dynamics. This can be formalized as maximizing the sum of reductions in entropy
$$\sum_t \big( H(\Theta \mid \xi_t, a_t) - H(\Theta \mid S_{t+1}, \xi_t, a_t) \big), \qquad (1)$$
through a sequence of actions {at}. According to information theory, the individual terms equal the mutual information between the next state distribution St+1 and the model parameter Θ, namely $I(S_{t+1}; \Theta \mid \xi_t, a_t)$. Therefore, the agent is encouraged to take actions that lead to states that are maximally informative about the dynamics model. Furthermore, we note that
$$I(S_{t+1}; \Theta \mid \xi_t, a_t) = \mathbb{E}_{s_{t+1} \sim \mathcal{P}(\cdot \mid \xi_t, a_t)}\big[ D_{\mathrm{KL}}[\, p(\theta \mid \xi_t, a_t, s_{t+1}) \,\|\, p(\theta \mid \xi_t) \,] \big], \qquad (2)$$
the KL divergence from the agent’s new belief over the dynamics model to the old one, taking expectation over all possible next states according to the true dynamics P . This KL divergence can be interpreted as information gain.
If calculating the posterior dynamics distribution is tractable, it is possible to optimize Eq. (2) directly by maintaining a belief over the dynamics model [17]. However, this is not generally the case. Therefore, a common practice [10, 23] is to use RL to approximate planning for maximal mutual information along a trajectory, $\sum_t I(S_{t+1}; \Theta \mid \xi_t, a_t)$, by adding each term $I(S_{t+1}; \Theta \mid \xi_t, a_t)$ as an intrinsic reward, which captures the agent’s surprise in the form of a reward function. This is practically realized by taking actions at ∼ πα(st) and sampling st+1 ∼ P(·|st, at) in order to add DKL[p(θ|ξt, at, st+1)‖p(θ|ξt)] to the external reward. The trade-off between exploitation and exploration can now be realized explicitly as follows:
$$r'(s_t, a_t, s_{t+1}) = r(s_t, a_t) + \eta\, D_{\mathrm{KL}}[\, p(\theta \mid \xi_t, a_t, s_{t+1}) \,\|\, p(\theta \mid \xi_t) \,], \qquad (3)$$
with η ∈ R+ a hyperparameter controlling the urge to explore. In conclusion, the biggest practical issue with maximizing information gain for exploration is that the computation of Eq. (3) requires calculating the posterior p(θ|ξt, at, st+1), which is generally intractable.
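As a minimal illustration of the trade-off in Eq. (3), the intrinsic term is simply added to the environment reward, scaled by η; the `information_gain` value stands in for whatever approximation of the KL term is available and is an assumption of this sketch, not an API defined by the paper.

```python
def augment_reward(external_reward, information_gain, eta):
    """Total reward r' = r + eta * D_KL[new belief || old belief], as in Eq. (3).

    external_reward:  r(s_t, a_t) returned by the environment.
    information_gain: an approximation of the KL between the updated and the
                      previous belief over the dynamics parameters.
    eta:              exploration coefficient; larger values favor curiosity.
    """
    return external_reward + eta * information_gain

# Illustrative usage with made-up numbers:
r_prime = augment_reward(external_reward=0.0, information_gain=0.37, eta=1e-3)
```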
2.3 Variational Bayes
We propose a tractable solution to maximize the information gain objective presented in the previous section. In a purely Bayesian setting, we can derive the posterior distribution given a new state-action pair through Bayes’ rule as
$$p(\theta \mid \xi_t, a_t, s_{t+1}) = \frac{p(\theta \mid \xi_t)\, p(s_{t+1} \mid \xi_t, a_t; \theta)}{p(s_{t+1} \mid \xi_t, a_t)}, \qquad (4)$$
with p(θ|ξt, at) = p(θ|ξt) as actions do not influence beliefs about the environment [17]. Herein, the denominator is computed through the integral
$$p(s_{t+1} \mid \xi_t, a_t) = \int_{\Theta} p(s_{t+1} \mid \xi_t, a_t; \theta)\, p(\theta \mid \xi_t)\, d\theta. \qquad (5)$$
In general, this integral tends to be intractable when using highly expressive parametrized models (e.g., neural networks), which are often needed to accurately capture the environment model in high-dimensional continuous control.
We propose a practical solution through variational inference [24]. Herein, we embrace the fact that calculating the posterior p(θ|D) for a data set D is intractable. Instead we approximate it through an alternative distribution q(θ;φ), parameterized by φ, by minimizing DKL[q(θ;φ)‖p(θ|D)]. This is done through maximization of the variational lower bound L[q(θ;φ),D]:
$$L[q(\theta;\phi), \mathcal{D}] = \mathbb{E}_{\theta \sim q(\cdot;\phi)}\big[\log p(\mathcal{D} \mid \theta)\big] - D_{\mathrm{KL}}[\, q(\theta;\phi) \,\|\, p(\theta) \,]. \qquad (6)$$
Rather than computing information gain in Eq. (3) explicitly, we compute an approximation to it, leading to the following total reward:
$$r'(s_t, a_t, s_{t+1}) = r(s_t, a_t) + \eta\, D_{\mathrm{KL}}[\, q(\theta;\phi_{t+1}) \,\|\, q(\theta;\phi_t) \,], \qquad (7)$$
with φt+1 the updated and φt the old parameters representing the agent’s belief. Natural candidates for parametrizing the agent’s dynamics model are Bayesian neural networks (BNNs) [19], as they maintain a distribution over their weights. This allows us to view the BNN as an infinite neural network ensemble by integrating out its parameters:
$$p(y \mid x) = \int_{\Theta} p(y \mid x; \theta)\, q(\theta;\phi)\, d\theta. \qquad (8)$$
In particular, we utilize a BNN parametrized by a fully factorized Gaussian distribution [20]. Practical BNN implementation details are deferred to Section 2.5, while we give some intuition into the behavior of BNNs in the appendix.
2.4 Compression
It is possible to derive an interesting relationship between compression improvement—an intrinsic reward objective defined in [25], and the information gain of Eq. (2). In [25], the agent’s curiosity is
equated with compression improvement, measured through C(ξt;φt−1)− C(ξt;φt), where C(ξ;φ) is the description length of ξ using φ as a model. Furthermore, it is known that the negative variational lower bound can be viewed as the description length [19]. Hence, we can write compression improvement as L[q(θ;φt), ξt] − L[q(θ;φt−1), ξt]. In addition, an alternative formulation of the variational lower bound in Eq. (6) is given by
$$\log p(\mathcal{D}) = \overbrace{\int_{\Theta} q(\theta;\phi) \log \frac{p(\theta, \mathcal{D})}{q(\theta;\phi)}\, d\theta}^{L[q(\theta;\phi),\, \mathcal{D}]} + D_{\mathrm{KL}}[\, q(\theta;\phi) \,\|\, p(\theta \mid \mathcal{D}) \,]. \qquad (9)$$
Thus, compression improvement can now be written as
$$\big(\log p(\xi_t) - D_{\mathrm{KL}}[\, q(\theta;\phi_t) \,\|\, p(\theta \mid \xi_t) \,]\big) - \big(\log p(\xi_t) - D_{\mathrm{KL}}[\, q(\theta;\phi_{t-1}) \,\|\, p(\theta \mid \xi_t) \,]\big). \qquad (10)$$
If we assume that φt perfectly optimizes the variational lower bound for the history ξt, then DKL[q(θ;φt)‖p(θ|ξt)] = 0, which occurs when the approximation equals the true posterior, i.e., q(θ;φt) = p(θ|ξt). Hence, compression improvement becomes DKL[p(θ|ξt−1)‖p(θ|ξt)]. Therefore, optimizing for compression improvement comes down to optimizing the KL divergence from the posterior given the past history ξt−1 to the posterior given the total history ξt. As such, we arrive at an alternative way to encode curiosity than information gain, namely DKL[p(θ|ξt)‖p(θ|ξt, at, st+1)], its reversed KL divergence. In experiments, we noticed no significant difference between the two KL divergence variants. This can be explained as both variants are locally equal when introducing small changes to the parameter distributions. Investigation of how to combine both information gain and compression improvement is deferred to future work.
2.5 Implementation
The complete method is summarized in Algorithm 1. We first set forth implementation and parametrization details of the dynamics BNN. The BNN weight distribution q(θ;φ) is given by the fully factorized Gaussian distribution [20]:
$$q(\theta;\phi) = \prod_{i=1}^{|\Theta|} \mathcal{N}(\theta_i \mid \mu_i; \sigma_i^2). \qquad (11)$$
Hence, φ = {µ, σ}, with µ the Gaussian’s mean vector and σ the covariance matrix diagonal. This is particularly convenient as it allows for a simple analytical formulation of the KL divergence. This is described later in this section. Because of the restriction σ > 0, the standard deviation of the Gaussian BNN parameter is parametrized as $\sigma = \log(1 + e^{\rho})$, with $\rho \in \mathbb{R}$ [20].
Now the training of the dynamics BNN through optimization of the variational lower bound is described. The second term in Eq. (6) is approximated through sampling, $\mathbb{E}_{\theta \sim q(\cdot;\phi)}[\log p(\mathcal{D} \mid \theta)] \approx \frac{1}{N}\sum_{i=1}^{N} \log p(\mathcal{D} \mid \theta_i)$, with N samples drawn according to θ ∼ q(·;φ) [20]. Optimizing the variational lower bound in Eq. (6) in combination with the reparametrization trick is called stochastic gradient variational Bayes (SGVB) [26] or Bayes by Backprop [20]. Furthermore, we make use of the local reparametrization trick proposed in [26], in which sampling at the weights is replaced by sampling the neuron pre-activations, which is more computationally efficient and reduces gradient variance. The optimization of the variational lower bound is done at regular intervals during the RL training process, by sampling D from a FIFO replay pool that stores recent samples (st, at, st+1). This is to break up the strong intra-trajectory sample correlation, which destabilizes learning, in favor of obtaining i.i.d. data [7]. Moreover, it diminishes the effect of compounding posterior approximation errors.
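The following is a minimal NumPy sketch of the pieces described above, using a toy linear dynamics model for clarity: a fully factorized Gaussian posterior with σ = log(1 + e^ρ), weight sampling via the reparametrization trick, the KL-to-prior term of the lower bound in closed form, and a FIFO replay pool. It is an illustration under our own simplifying assumptions (no local reparametrization, no gradient step shown), not the authors' implementation.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def softplus(rho):
    return np.log1p(np.exp(rho))              # sigma = log(1 + e^rho) > 0

class FactorizedGaussianDynamicsBNN:
    """Toy model s_{t+1} ~ N(W s_t + b, obs_std^2 I) with a fully factorized
    Gaussian posterior over the flattened parameters (W, b)."""

    def __init__(self, state_dim, obs_std=0.1):
        self.d = state_dim
        n = state_dim * state_dim + state_dim
        self.mu = np.zeros(n)
        self.rho = -3.0 * np.ones(n)           # small initial sigma
        self.obs_std = obs_std

    def sample_theta(self):
        eps = rng.standard_normal(self.mu.shape)
        return self.mu + softplus(self.rho) * eps   # reparametrization trick

    def log_likelihood(self, theta, states, next_states):
        W = theta[: self.d * self.d].reshape(self.d, self.d)
        b = theta[self.d * self.d:]
        resid = next_states - (states @ W.T + b)
        # Gaussian log-likelihood, up to an additive constant.
        return -0.5 * np.sum(resid ** 2) / self.obs_std ** 2

    def kl_to_prior(self):
        # KL[q(theta; mu, sigma) || N(0, I)], the regularizer in Eq. (6).
        sigma2 = softplus(self.rho) ** 2
        return 0.5 * np.sum(sigma2 + self.mu ** 2 - np.log(sigma2) - 1.0)

    def elbo(self, states, next_states, n_samples=4):
        # Sampled estimate of Eq. (6); minibatch/dataset rescaling is omitted.
        lik = np.mean([self.log_likelihood(self.sample_theta(), states, next_states)
                       for _ in range(n_samples)])
        return lik - self.kl_to_prior()

# FIFO replay pool of (s_t, a_t, s_{t+1}) triplets.
replay_pool = deque(maxlen=10_000)
model = FactorizedGaussianDynamicsBNN(state_dim=3)
for _ in range(256):                            # fake transitions for the demo
    s = rng.standard_normal(3)
    replay_pool.append((s, rng.standard_normal(1), 0.9 * s))
idx = rng.choice(len(replay_pool), size=32, replace=False)
batch_s = np.stack([replay_pool[i][0] for i in idx])
batch_s_next = np.stack([replay_pool[i][2] for i in idx])
# In practice the ELBO is maximized by stochastic gradient ascent with an
# autodiff framework (SGVB / Bayes by Backprop); here we only evaluate it.
print(model.elbo(batch_s, batch_s_next))
```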
The posterior distribution of the dynamics parameter, which is needed to compute the KL divergence in the total reward function r′ of Eq. (7), can be computed through the following minimization
$$\phi' = \arg\min_{\phi} \Big[\, \overbrace{\underbrace{D_{\mathrm{KL}}[\, q(\theta;\phi) \,\|\, q(\theta;\phi_{t-1}) \,]}_{\ell_{\mathrm{KL}}(q(\theta;\phi))} - \mathbb{E}_{\theta \sim q(\cdot;\phi)}\big[\log p(s_t \mid \xi_t, a_t; \theta)\big]}^{\ell(q(\theta;\phi),\, s_t)} \,\Big], \qquad (12)$$
where we replace the expectation over θ with samples θ ∼ q(·;φ). Because we only update the model periodically based on samples drawn from the replay pool, this optimization can be performed in parallel for each st, keeping φt−1 fixed. Once φ′ has been obtained, we can use it to compute the intrinsic reward.
Algorithm 1: Variational Information Maximizing Exploration (VIME)
for each epoch n do
    for each timestep t in each trajectory generated during n do
        Generate action a_t ∼ π_α(s_t) and sample state s_{t+1} ∼ P(·|ξ_t, a_t); obtain reward r(s_t, a_t).
        Add triplet (s_t, a_t, s_{t+1}) to the FIFO replay pool R.
        Compute D_KL[q(θ; φ'_{n+1}) ‖ q(θ; φ_{n+1})] by the approximation ∇^⊤ H^{-1} ∇, following Eq. (16) for diagonal BNNs, or by optimizing Eq. (12) to obtain φ'_{n+1} for general BNNs.
        Divide D_KL[q(θ; φ'_{n+1}) ‖ q(θ; φ_{n+1})] by the median of previous KL divergences.
        Construct r'(s_t, a_t, s_{t+1}) ← r(s_t, a_t) + η D_KL[q(θ; φ'_{n+1}) ‖ q(θ; φ_{n+1})], following Eq. (7).
    Minimize D_KL[q(θ; φ_n) ‖ p(θ)] − E_{θ ∼ q(·; φ_n)}[log p(D|θ)] following Eq. (6), with D sampled randomly from R, leading to the updated posterior q(θ; φ_{n+1}).
    Use rewards {r'(s_t, a_t, s_{t+1})} to update the policy π_α using any standard RL method.
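To complement the pseudocode, here is a minimal control-flow sketch of one epoch of Algorithm 1 in Python; every object it touches (environment, policy, dynamics BNN, RL update) is an assumed interface introduced only for illustration, and the median normalization is a slight simplification of the averaging scheme described in Section 2.5.

```python
from statistics import median

def vime_epoch(env, policy, dynamics_bnn, replay_pool, kl_history,
               eta, rl_update, n_trajectories=10, horizon=500):
    """One epoch of Algorithm 1 (control-flow sketch, not the authors' code)."""
    trajectories = []
    for _ in range(n_trajectories):
        s = env.reset()
        trajectory = []
        for _ in range(horizon):
            a = policy.act(s)
            s_next, r = env.step(a)
            replay_pool.append((s, a, s_next))       # FIFO replay pool R
            # Approximate D_KL[q(.; phi') || q(.; phi)] for this transition,
            # e.g. via Eq. (16) for a diagonal-Gaussian BNN.
            kl = dynamics_bnn.info_gain(s, a, s_next)
            kl_history.append(kl)
            kl_scaled = kl / max(median(kl_history), 1e-8)   # simplified normalization
            r_prime = r + eta * kl_scaled                    # Eq. (7)
            trajectory.append((s, a, r_prime))
            s = s_next
        trajectories.append(trajectory)
    # Update q(theta; phi_{n+1}) by maximizing the lower bound of Eq. (6)
    # on samples drawn from the replay pool.
    dynamics_bnn.fit_elbo(replay_pool)
    # Update the policy with any standard RL method (TRPO, REINFORCE, ERWR, ...).
    rl_update(policy, trajectories)
    return trajectories
```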
To optimize Eq. (12) efficiently, we only take a single second-order step. This way, the gradient is rescaled according to the curvature of the KL divergence at the origin. As such, we compute DKL[q(θ;φ+ λ∆φ)‖q(θ;φ)], with the update step ∆φ defined as
$$\Delta\phi = H^{-1}(\ell)\, \nabla_{\phi}\, \ell(q(\theta;\phi), s_t), \qquad (13)$$
in which $H(\ell)$ is the Hessian of $\ell(q(\theta;\phi), s_t)$. Since we assume that the variational approximation is a fully factorized Gaussian, the KL divergence from posterior to prior has a particularly simple form:
$$D_{\mathrm{KL}}[\, q(\theta;\phi) \,\|\, q(\theta;\phi') \,] = \frac{1}{2}\sum_{i=1}^{|\Theta|}\left( \left(\frac{\sigma_i}{\sigma_i'}\right)^2 + 2\log \sigma_i' - 2\log \sigma_i + \frac{(\mu_i' - \mu_i)^2}{\sigma_i'^2} \right) - \frac{|\Theta|}{2}. \qquad (14)$$
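Eq. (14) translates directly into a few lines of NumPy; in this sketch we assume the variational parameters are given as (µ, ρ) vectors with σ = log(1 + e^ρ), matching the parametrization above (names and shapes are our own illustrative choices).

```python
import numpy as np

def softplus(rho):
    return np.log1p(np.exp(rho))

def kl_factorized_gaussians(mu, rho, mu_prime, rho_prime):
    """Closed-form KL[q(theta; mu, sigma) || q(theta; mu', sigma')] of Eq. (14)
    for fully factorized Gaussians, with sigma = log(1 + e^rho). Inputs are
    1-D NumPy arrays of equal length."""
    sigma, sigma_p = softplus(rho), softplus(rho_prime)
    terms = (sigma / sigma_p) ** 2 \
        + 2.0 * np.log(sigma_p) - 2.0 * np.log(sigma) \
        + (mu_prime - mu) ** 2 / sigma_p ** 2
    return 0.5 * np.sum(terms) - 0.5 * mu.size
```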
Because this KL divergence is approximately quadratic in its parameters, and the log-likelihood term can be seen as locally linear compared to this highly curved KL term, we approximate H by only calculating it for the KL term $\ell_{\mathrm{KL}}(q(\theta;\phi))$. This can be computed very efficiently in the case of a fully factorized Gaussian distribution, as this approximation becomes a diagonal matrix. Looking at Eq. (14), we can calculate the following Hessian at the origin. The µ and ρ entries are defined as
$$\frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \mu_i^2} = \frac{1}{\log^2(1 + e^{\rho_i})} \qquad \text{and} \qquad \frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \rho_i^2} = \frac{2 e^{2\rho_i}}{(1 + e^{\rho_i})^2}\, \frac{1}{\log^2(1 + e^{\rho_i})}, \qquad (15)$$
while all other entries are zero. Furthermore, it is also possible to approximate the KL divergence through a second-order Taylor expansion as $\frac{1}{2}\Delta\phi^\top H \Delta\phi = \frac{1}{2}\big(H^{-1}\nabla\big)^\top H \big(H^{-1}\nabla\big)$, since both the value and gradient of the KL divergence are zero at the origin. This gives us
$$D_{\mathrm{KL}}[\, q(\theta;\phi + \lambda\Delta\phi) \,\|\, q(\theta;\phi) \,] \approx \tfrac{1}{2}\,\lambda^2\, \nabla_\phi \ell^\top\, H^{-1}(\ell_{\mathrm{KL}})\, \nabla_\phi \ell. \qquad (16)$$
Note that $H^{-1}(\ell_{\mathrm{KL}})$ is diagonal, so this expression can be computed efficiently. Instead of using the KL divergence DKL[q(θ;φt+1)‖q(θ;φt)] directly as an intrinsic reward in Eq. (7), we normalize it by dividing by the average of the median KL divergences taken over a fixed number of previous trajectories. Rather than focusing on its absolute value, we emphasize the relative difference in KL divergence between samples. This accomplishes the same effect, since the variance of the KL divergence converges to zero once the model is fully learned.
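Putting Eqs. (13)–(16) and the normalization step together, a sketch of the per-step intrinsic reward might look as follows; the diagonal Hessian uses the entries of Eq. (15), the gradient of ℓ is assumed to be supplied by whatever autodiff framework trains the BNN, the (µ, ρ) ordering of the parameter vector is our own illustrative convention, and the normalization uses a single running median as a simplification of the per-trajectory averaging described above.

```python
import numpy as np
from statistics import median

def hessian_diag(rho):
    """Diagonal of the Hessian of l_KL at the origin, Eq. (15), ordered as
    (all mu entries, then all rho entries)."""
    sigma = np.log1p(np.exp(rho))               # sigma = log(1 + e^rho)
    h_mu = 1.0 / sigma ** 2
    h_rho = 2.0 * np.exp(2.0 * rho) / (1.0 + np.exp(rho)) ** 2 / sigma ** 2
    return np.concatenate([h_mu, h_rho])

def approx_info_gain(grad_l, rho, lam=1.0):
    """Approximate KL of Eq. (16): 0.5 * lambda^2 * g^T H^{-1}(l_KL) g.

    grad_l: gradient of l(q(theta; phi), s_t) w.r.t. (mu, rho) at the current
            phi; since the KL gradient vanishes at the origin, this equals the
            negative log-likelihood gradient (assumed to come from autodiff).
    """
    h = hessian_diag(rho)
    return 0.5 * lam ** 2 * float(np.sum(grad_l ** 2 / h))

def normalized_intrinsic_reward(kl_value, previous_kls, eta):
    """Scale the raw KL by the median of recently observed KL values before
    adding it to the external reward (a simplified version of the
    per-trajectory median averaging described above)."""
    scale = max(median(previous_kls), 1e-8) if previous_kls else 1.0
    return eta * kl_value / scale
```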
3 Experiments
In this section, we investigate (i) whether VIME can succeed in domains that have extremely sparse rewards, (ii) whether VIME improves learning when the reward is shaped to guide the agent towards its goal, and (iii) how η, as used in Eq. (3), trades off exploration and exploitation behavior. All experiments make use of the rllab [15] benchmark code base and the complementary continuous control tasks suite. The following tasks are part of the experimental setup: CartPole (S ⊆ R^4, A ⊆ R^1), CartPoleSwingup (S ⊆ R^4, A ⊆ R^1), DoublePendulum (S ⊆ R^6, A ⊆ R^1), MountainCar (S ⊆ R^3, A ⊆ R^1), locomotion tasks HalfCheetah (S ⊆ R^20, A ⊆ R^6), Walker2D (S ⊆ R^20, A ⊆ R^6), and the hierarchical task SwimmerGather (S ⊆ R^33, A ⊆ R^2).
Performance is measured through the average return (not including the intrinsic rewards) over the trajectories generated (y-axis) at each iteration (x-axis). More specifically, the darker-colored lines in each plot represent the median performance over a fixed set of 10 random seeds while the shaded areas show the interquartile range at each iteration. Moreover, the number in each legend shows this performance measure, averaged over all iterations. The exact setup is described in the Appendix.
Domains with sparse rewards are difficult to solve through naïve exploration behavior because, before the agent obtains any reward, it lacks a feedback signal on how to improve its policy. This allows us to test whether an exploration strategy is truly capable of systematic exploration, rather than improving existing RL algorithms by adding more hyperparameters. Therefore, VIME is compared with heuristic exploration strategies on the following tasks with sparse rewards. A reward of +1 is given when the car escapes the valley on the right side in MountainCar; when the pole is pointed upwards in CartPoleSwingup; and when the cheetah moves forward over five units in HalfCheetah. We compare VIME with the following baselines: only using Gaussian control noise [15] and using the $\ell_2$ BNN prediction error as an intrinsic reward, a continuous extension of [10]. TRPO [8] is used as the RL algorithm, as it performs very well compared to other methods [15]. Figure 1 shows the performance results. We notice that naïve exploration performs very poorly, as it is almost never able to reach the goal in any of the tasks. Similarly, using $\ell_2$ errors does not perform well. In contrast, VIME performs much better, achieving the goal in most cases. This experiment demonstrates that curiosity drives the agent to explore, even in the absence of any initial reward, where naïve exploration completely breaks down.
To further strengthen this point, we have evaluated VIME on the highly difficult hierarchical task SwimmerGather in Figure 5 whose reward signal is naturally sparse. In this task, a two-link robot needs to reach “apples” while avoiding “bombs” that are perceived through a laser scanner. However, before it can make any forward progress, it has to learn complex locomotion primitives in the absence of any reward. None of the RL methods tested previously in [15] were able to make progress with naïve exploration. Remarkably, VIME leads the agent to acquire coherent motion primitives without any reward guidance, achieving promising results on this challenging task.
Next, we investigate whether VIME is widely applicable by (i) testing it on environments where the reward is well shaped, and (ii) pairing it with different RL methods. In addition to TRPO, we choose to equip REINFORCE [27] and ERWR [28] with VIME because these two algorithms usually suffer from premature convergence to suboptimal policies [15, 29], which can potentially be alleviated by better exploration. Their performance is shown in Figure 2 on several well-established continuous control tasks. Furthermore, Figure 3 shows the same comparison for the Walker2D locomotion task. In the majority of cases, VIME leads to a significant performance gain over heuristic exploration. Our exploration method allows the RL algorithms to converge faster, and notably helps REINFORCE and ERWR avoid converging to a locally optimal solution on DoublePendulum and MountainCar. We note that in environments such as CartPole, a better exploration strategy is redundant as following the policy gradient direction leads to the globally optimal solution. Additionally, we tested adding Gaussian noise to the rewards as a baseline, which did not improve performance.
To give an intuitive understanding of VIME’s exploration behavior, the distribution of visited states for both naïve exploration and VIME after convergence is investigated. Figure 1d shows that using Gaussian control noise exhibits random walk behavior: the state visitation plot is more condensed and ball-shaped around the center. In comparison, VIME leads to a more diffused visitation pattern, exploring the states more efficiently, and hence reaching the goal more quickly.
Finally, we investigate how η, as used in Eq. (3), trades off exploration and exploitation behavior. On the one hand, higher η values should lead to a higher curiosity drive, causing more exploration. On the other hand, very low η values should reduce VIME to traditional Gaussian control noise. Figure 4 shows the performance on MountainCar for different η values. Setting η too high clearly results in prioritizing exploration over getting additional external reward. Too low of an η value reduces the method to the baseline algorithm, as the intrinsic reward contribution to the total reward r′ becomes negligible. Most importantly, this figure highlights that there is a wide η range for which the task is best solved, across different algorithms.
4 Related Work
A body of theoretically oriented work demonstrates exploration strategies that are able to learn online in a previously unknown MDP and incur a polynomial amount of regret—as a result, these algorithms find a near-optimal policy in a polynomial amount of time. Some of these algorithms are based on the principle of optimism under uncertainty: E3 [3], R-Max [4], UCRL [30]. An alternative approach is Bayesian reinforcement learning methods, which maintain a distribution over possible MDPs [1, 17, 23, 31]. The optimism-based exploration strategies have been extended to continuous state spaces, for example, [6, 9], however these methods do not accommodate nonlinear function approximators.
Practical RL algorithms often rely on simple exploration heuristics, such as ε-greedy and Boltzmann exploration [32]. However, these heuristics exhibit random walk exploratory behavior, which can lead
to exponential regret even in the case of small MDPs [9]. Our proposed method of utilizing information gain can be traced back to [22], and has been further explored in [17, 33, 34]. Other metrics for curiosity have also been proposed, including prediction error [10, 35], prediction error improvement [36], leverage [14], neuro-correlates [37], and predictive information [38]. These methods have not been applied directly to high-dimensional continuous control tasks without discretization. We refer the reader to [21, 39] for an extensive review on curiosity and intrinsic rewards.
Recently, there have been various exploration strategies proposed in the context of deep RL. [10] proposes to use the $\ell_2$ prediction error as the intrinsic reward. [12] performs approximate visitation counting in a learned state embedding using Gaussian kernels. [11] proposes a form of Thompson sampling, training multiple value functions using bootstrapping. Although these approaches can scale up to high-dimensional state spaces, they generally assume discrete action spaces. [40] make use of mutual information for gait stabilization in continuous control, but rely on state discretization. Finally, [41] proposes a variational method for information maximization in the context of optimizing empowerment, which, as noted by [42], does not explicitly favor exploration.
5 Conclusions
We have proposed Variational Information Maximizing Exploration (VIME), a curiosity-driven exploration strategy for continuous control tasks. Variational inference is used to approximate the posterior distribution of a Bayesian neural network that represents the environment dynamics. Using information gain in this learned dynamics model as intrinsic rewards allows the agent to optimize for both external reward and intrinsic surprise simultaneously. Empirical results show that VIME performs significantly better than heuristic exploration methods across various continuous control tasks and algorithms. As future work, we would like to investigate measuring surprise in the value function and using the learned dynamics model for planning.
Acknowledgments
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, Berkeley Deep Drive (BDD), and ONR through a PECASE award. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. | 1. What is the main contribution of the paper regarding reinforcement learning methods?
2. What are the strengths of the proposed approach, particularly in terms of technicality and efficiency?
3. Are there any concerns or limitations regarding the exploration strategy and its implementation?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are some minor comments or suggestions for improving the paper? | Review | Review
The paper presents a new exploration technique for reinforcement learning methods. The approach is based on computing the information gain for the posterior distribution of a learned dynamics model. The dynamics model is modeled by a possibly deep neural network. Actions get higher rewards if the posterior distribution over the parameters of the learned dynamics model is likely to change (in a KL sense, which is equivalent to the information gain). As the posterior distribution cannot be represented exactly, the authors use variational Bayes to approximate the posterior by a fully factorized Gaussian distribution. The paper includes efficient update equations for the variational distribution, which has to be computed for each experienced state-action sample. The authors evaluate their exploration strategy with different reinforcement learning algorithms on a couple of challenging continuous control problems.
Positive points:
- I like the idea of using the information gain for exploration. This paper is an important step for scaling such exploration to complex systems.
- The paper is technically very strong, presenting a clever idea to drive exploration and efficient algorithms to implement this idea.
- Exploration in continuous action control problems is an unsolved problem and the algorithm seems to be very effective.
- The evaluations are convincing, including several easy but also some challenging control tasks. The exploration strategy is tested with different RL algorithms.
Negative points:
- Not many... maybe clarity could be slightly improved, but in general, the paper reads well.
Minor comments:
- Equation 7 would be easier to understand if the authors would indicate the dependency of \phi_{t+1} on s_t and a_t. Also do that in the algorithm box.
- Why does the system dynamics in line 3 of the algorithm box depend on the history and not on the state s_t?
- I do not understand how to compute p(s_t|\theta) in Eq. 13. Should it not be p(s_t|\theta, s_{t-1}, a_{t-1})?
- There are typos in the in-text equation before Eq. 17. The last l should be in the brackets and the first nabla operator is missing the l.
- Even if it is in the supplement, it would be good to briefly describe what models are used for the learned system dynamics.
- Citation [11] is not published in ICML; it is only available on arXiv.
NIPS | Title
VIME: Variational Information Maximizing Exploration
Abstract
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as -greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent’s belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
N/A
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as -greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent’s belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
1 Introduction
Reinforcement learning (RL) studies how an agent can maximize its cumulative reward in a previously unknown environment, which it learns about through experience. A long-standing problem is how to manage the trade-off between exploration and exploitation. In exploration, the agent experiments with novel strategies that may improve returns in the long run; in exploitation, it maximizes rewards through behavior that is known to be successful. An effective exploration strategy allows the agent to generate trajectories that are maximally informative about the environment. For small tasks, this trade-off can be handled effectively through Bayesian RL [1] and PAC-MDP methods [2–6], which offer formal guarantees. However, these guarantees assume discrete state and action spaces. Hence, in settings where state-action discretization is infeasible, many RL algorithms use heuristic exploration strategies. Examples include acting randomly using -greedy or Boltzmann exploration [7], and utilizing Gaussian noise on the controls in policy gradient methods [8]. These heuristics often rely on random walk behavior which can be highly inefficient, for example Boltzmann exploration requires a training time exponential in the number of states in order to solve the well-known n-chain MDP [9]. In between formal methods and simple heuristics, several works have proposed to address the exploration problem using less formal, but more expressive methods [10–14]. However, none of them fully address exploration in continuous control, as discretization of the state-action space scales exponentially in its dimensionality. For example, the Walker2D task [15] has a 26-dim state-action space. If we assume a coarse discretization into 10 bins for each dimension, a table of state-action visitation counts would require 1026 entries.
This paper proposes a curiosity-driven exploration strategy, making use of information gain about the agent’s internal belief of the dynamics model as a driving force. This principle can be traced back to the concepts of curiosity and surprise [16–18]. Within this framework, agents are encouraged to take actions that result in states they deem surprising—i.e., states that cause large updates to the dynamics model distribution. We propose a practical implementation of measuring information gain using variational inference. Herein, the agent’s current understanding of the environment dynamics is represented by a Bayesian neural networks (BNN) [19, 20]. We also show how this can be interpreted as measuring compression improvement, a proposed model of curiosity [21]. In contrast to previous curiosity-based approaches [10, 22], our model scales naturally to continuous state and action spaces. The presented approach is evaluated on a range of continuous control tasks, and multiple underlying RL algorithms. Experimental results show that VIME achieves significantly better performance than naïve exploration strategies.
2 Methodology
In Section 2.1, we establish notation for the subsequent equations. Next, in Section 2.2, we explain the theoretical foundation of curiosity-driven exploration. In Section 2.3 we describe how to adapt this idea to continuous control, and we show how to build on recent advances in variational inference for Bayesian neural networks (BNNs) to make this formulation practical. Thereafter, we make explicit the intuitive link between compression improvement and the variational lower bound in Section 2.4. Finally, Section 2.5 describes how our method is practically implemented.
2.1 Preliminaries
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S,A,P, r, ρ0, γ, T ), in which S ⊆ Rn is a state set, A ⊆ Rm an action set, P : S ×A×S → R≥0 a transition probability distribution, r : S × A → R a bounded reward function, ρ0 : S → R≥0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. States and actions viewed as random variables are abbreviated as S and A. The presented models are based on the optimization of a stochastic policy πα : S × A → R≥0, parametrized by α. Let µ(πα) denote its expected discounted return: µ(πα) = Eτ [ ∑T t=0 γ
tr(st, at)], where τ = (s0, a0, . . .) denotes the whole trajectory, s0 ∼ ρ0(s0), at ∼ πα(at|st), and st+1 ∼ P(st+1|st, at).
2.2 Curiosity
Our method builds on the theory of curiosity-driven exploration [16, 17, 21, 22], in which the agent engages in systematic exploration by seeking out state-action regions that are relatively unexplored. The agent models the environment dynamics via a model p(st+1|st, at; θ), parametrized by the random variable Θ with values θ ∈ Θ. Assuming a prior p(θ), it maintains a distribution over dynamic models through a distribution over θ, which is updated in a Bayesian manner (as opposed to a point estimate). The history of the agent up until time step t is denoted as ξt = {s1, a1, . . . , st}. According to curiosity-driven exploration [17], the agent should take actions that maximize the reduction in uncertainty about the dynamics. This can be formalized as maximizing the sum of reductions in entropy ∑ t (H(Θ|ξt, at)−H(Θ|St+1, ξt, at)) , (1) through a sequence of actions {at}. According to information theory, the individual terms equal the mutual information between the next state distribution St+1 and the model parameter Θ, namely I (St+1; Θ|ξt, at). Therefore, the agent is encouraged to take actions that lead to states that are maximally informative about the dynamics model. Furthermore, we note that
I (St+1; Θ|ξt, at) = Est+1∼P(·|ξt,at) [ DKL[p(θ|ξt, at, st+1)‖p(θ|ξt)] ] , (2)
the KL divergence from the agent’s new belief over the dynamics model to the old one, taking expectation over all possible next states according to the true dynamics P . This KL divergence can be interpreted as information gain.
If calculating the posterior dynamics distribution is tractable, it is possible to optimize Eq. (2) directly by maintaining a belief over the dynamics model [17]. However, this is not generally the case. Therefore, a common practice [10, 23] is to use RL to approximate planning for maximal mutual information along a trajectory ∑ t I (St+1; Θ|ξt, at) by adding each term I (St+1; Θ|ξt, at) as an intrinsic reward, which captures the agent’s surprise in the form of a reward function. This is practically realized by taking actions at ∼ πα(st) and sampling st+1 ∼ P(·|st, at) in order to add DKL[p(θ|ξt, at, st+1)‖p(θ|ξt)] to the external reward. The trade-off between exploitation and exploration can now be realized explicitly as follows:
r′(st, at, st+1) = r(st, at) + ηDKL[p(θ|ξt, at, st+1)‖p(θ|ξt)], (3)
with η ∈ R+ a hyperparameter controlling the urge to explore. In conclusion, the biggest practical issue with maximizing information gain for exploration is that the computation of Eq. (3) requires calculating the posterior p(θ|ξt, at, st+1), which is generally intractable.
2.3 Variational Bayes
We propose a tractable solution to maximize the information gain objective presented in the previous section. In a purely Bayesian setting, we can derive the posterior distribution given a new state-action pair through Bayes’ rule as
p(θ|ξt, at, st+1) = p(θ|ξt)p(st+1|ξt, at; θ)
p(st+1|ξt, at) , (4)
with p(θ|ξt, at) = p(θ|ξt) as actions do not influence beliefs about the environment [17]. Herein, the denominator is computed through the integral
p(st+1|ξt, at) = ∫ Θ p(st+1|ξt, at; θ)p(θ|ξt)dθ. (5)
In general, this integral tends to be intractable when using highly expressive parametrized models (e.g., neural networks), which are often needed to accurately capture the environment model in high-dimensional continuous control.
We propose a practical solution through variational inference [24]. Herein, we embrace the fact that calculating the posterior p(θ|D) for a data set D is intractable. Instead we approximate it through an alternative distribution q(θ;φ), parameterized by φ, by minimizing DKL[q(θ;φ)‖p(θ|D)]. This is done through maximization of the variational lower bound L[q(θ;φ),D]:
L[q(θ;φ),D] = Eθ∼q(·;φ) [log p(D|θ)]−DKL[q(θ;φ)‖p(θ)]. (6)
Rather than computing information gain in Eq. (3) explicitly, we compute an approximation to it, leading to the following total reward:
r′(st, at, st+1) = r(st, at) + ηDKL[q(θ;φt+1)‖q(θ;φt)], (7)
with φt+1 the updated and φt the old parameters representing the agent’s belief. Natural candidates for parametrizing the agent’s dynamics model are Bayesian neural networks (BNNs) [19], as they maintain a distribution over their weights. This allows us to view the BNN as an infinite neural network ensemble by integrating out its parameters:
p(y|x) = ∫ Θ p(y|x; θ)q(θ;φ)dθ. (8)
In particular, we utilize a BNN parametrized by a fully factorized Gaussian distribution [20]. Practical BNN implementation details are deferred to Section 2.5, while we give some intuition into the behavior of BNNs in the appendix.
2.4 Compression
It is possible to derive an interesting relationship between compression improvement—an intrinsic reward objective defined in [25], and the information gain of Eq. (2). In [25], the agent’s curiosity is
equated with compression improvement, measured through C(ξt;φt−1)− C(ξt;φt), where C(ξ;φ) is the description length of ξ using φ as a model. Furthermore, it is known that the negative variational lower bound can be viewed as the description length [19]. Hence, we can write compression improvement as L[q(θ;φt), ξt] − L[q(θ;φt−1), ξt]. In addition, an alternative formulation of the variational lower bound in Eq. (6) is given by
log p(D) = L[q(θ;φ),D]︷ ︸︸ ︷∫ Θ q(θ;φ) log p(θ,D) q(θ;φ) dθ+DKL[q(θ;φ)‖p(θ|D)]. (9)
Thus, compression improvement can now be written as
(log p(ξt)−DKL[q(θ;φt)‖p(θ|ξt)])− (log p(ξt)−DKL[q(θ;φt−1)‖p(θ|ξt)]) . (10) If we assume that φt perfectly optimizes the variational lower bound for the history ξt, then DKL[q(θ;φt)‖p(θ|ξt)] = 0, which occurs when the approximation equals the true posterior, i.e., q(θ;φt) = p(θ|ξt). Hence, compression improvement becomes DKL[p(θ|ξt−1)‖p(θ|ξt)]. Therefore, optimizing for compression improvement comes down to optimizing the KL divergence from the posterior given the past history ξt−1 to the posterior given the total history ξt. As such, we arrive at an alternative way to encode curiosity than information gain, namely DKL[p(θ|ξt)‖p(θ|ξt, at, st+1)], its reversed KL divergence. In experiments, we noticed no significant difference between the two KL divergence variants. This can be explained as both variants are locally equal when introducing small changes to the parameter distributions. Investigation of how to combine both information gain and compression improvement is deferred to future work.
2.5 Implementation
The complete method is summarized in Algorithm 1. We first set forth implementation and parametrization details of the dynamics BNN. The BNN weight distribution q(θ;φ) is given by the fully factorized Gaussian distribution [20]:
q(θ;φ) = ∏|Θ| i=1N (θi|µi;σ2i ). (11)
Hence, φ = {µ, σ}, with µ the Gaussian’s mean vector and σ the covariance matrix diagonal. This is particularly convenient as it allows for a simple analytical formulation of the KL divergence. This is described later in this section. Because of the restriction σ > 0, the standard deviation of the Gaussian BNN parameter is parametrized as σ = log(1 + eρ), with ρ ∈ R [20].
Now the training of the dynamics BNN through optimization of the variational lower bound is described. The second term in Eq. (6) is approximated through sampling Eθ∼q(·;φ) [log p(D|θ)] ≈ 1 N ∑N i=1 log p(D|θi) withN samples drawn according to θ ∼ q(·;φ) [20]. Optimizing the variational lower bound in Eq. (6) in combination with the reparametrization trick is called stochastic gradient variational Bayes (SGVB) [26] or Bayes by Backprop [20]. Furthermore, we make use of the local reparametrization trick proposed in [26], in which sampling at the weights is replaced by sampling the neuron pre-activations, which is more computationally efficient and reduces gradient variance. The optimization of the variational lower bound is done at regular intervals during the RL training process, by sampling D from a FIFO replay pool that stores recent samples (st, at, st+1). This is to break up the strong intratrajectory sample correlation which destabilizes learning in favor of obtaining i.i.d. data [7]. Moreover, it diminishes the effect of compounding posterior approximation errors.
The posterior distribution of the dynamics parameter, which is needed to compute the KL divergence in the total reward function r′ of Eq. (7), can be computed through the following minimization
φ′ = arg min φ [ `(q(θ;φ),st)︷ ︸︸ ︷ DKL[q(θ;φ)‖q(θ;φt−1)]︸ ︷︷ ︸
`KL(q(θ;φ))
−Eθ∼q(·;φ) [log p(st|ξt, at; θ)] ] , (12)
where we replace the expectation over θ with samples θ ∼ q(·;φ). Because we only update the model periodically based on samples drawn from the replay pool, this optimization can be performed in parallel for each st, keeping φt−1 fixed. Once φ′ has been obtained, we can use it to compute the intrinsic reward.
Algorithm 1: Variational Information Maximizing Exploration (VIME) for each epoch n do
for each timestep t in each trajectory generated during n do Generate action at ∼ πα(st) and sample state st+1 ∼ P(·|ξt, at), get r(st, at). Add triplet (st, at, st+1) to FIFO replay poolR. Compute DKL[q(θ;φ′n+1)‖q(θ;φn+1)] by approximation∇>H−1∇, following Eq. (16) for diagonal BNNs, or by optimizing Eq. (12) to obtain φ′n+1 for general BNNs.
Divide DKL[q(θ;φ′n+1)‖q(θ;φn+1)] by median of previous KL divergences. Construct r′(st, at, st+1)← r(st, at) + ηDKL[q(θ;φ′n+1)‖q(θ;φn+1)], following Eq. (7).
Minimize DKL[q(θ;φn)‖p(θ)]− Eθ∼q(·;φn) [log p(D|θ)] following Eq. (6), with D sampled randomly fromR, leading to updated posterior q(θ;φn+1). Use rewards {r′(st, at, st+1)} to update policy πα using any standard RL method.
To optimize Eq. (12) efficiently, we only take a single second-order step. This way, the gradient is rescaled according to the curvature of the KL divergence at the origin. As such, we compute DKL[q(θ;φ+ λ∆φ)‖q(θ;φ)], with the update step ∆φ defined as
∆φ = H−1(`)∇φ`(q(θ;φ), st), (13)
in which H(`) is the Hessian of `(q(θ;φ), st). Since we assume that the variational approximation is a fully factorized Gaussian, the KL divergence from posterior to prior has a particularly simple form:
DKL[q(θ;φ)‖q(θ;φ′)] = 12 ∑|Θ| i=1 (( σi σ′i )2 + 2 log σ′i − 2 log σi + (µ′i−µi) 2 σ′2i ) − |Θ|2 . (14)
Because this KL divergence is approximately quadratic in its parameters and the log-likelihood term can be seen as locally linear compared to this highly curved KL term, we approximate H by only calculating it for the term KL term `KL(q(θ;φ)). This can be computed very efficiently in case of a fully factorized Gaussian distribution, as this approximation becomes a diagonal matrix. Looking at Eq. (14), we can calculate the following Hessian at the origin. The µ and ρ entries are defined as
∂2`KL ∂µ2i = 1 log2(1 + eρi) and ∂2`KL ∂ρ2i = 2e2ρi (1 + eρi)2 1 log2(1 + eρi) , (15)
while all other entries are zero. Furthermore, it is also possible to approximate the KL divergence through a second-order Taylor expansion as 12∆φH∆φ = 1 2 ( H−1∇ )> H ( H−1∇ ) , since both the value and gradient of the KL divergence are zero at the origin. This gives us
DKL[q(θ;φ+ λ∆φ)‖q(θ;φ)] ≈ 12λ 2∇φ`>H−1(`KL)∇φ`. (16)
Note that H−1(`KL) is diagonal, so this expression can be computed efficiently. Instead of using the KL divergence DKL[q(θ;φt+1)‖q(θ;φt)] directly as an intrinsic reward in Eq. (7), we normalize it by division through the average of the median KL divergences taken over a fixed number of previous trajectories. Rather than focusing on its absolute value, we emphasize relative difference in KL divergence between samples. This accomplishes the same effect since the variance of KL divergence converges to zero, once the model is fully learned.
3 Experiments
In this section, we investigate (i) whether VIME can succeed in domains that have extremely sparse rewards, (ii) whether VIME improves learning when the reward is shaped to guide the agent towards its goal, and (iii) how η, as used in in Eq. (3), trades off exploration and exploitation behavior. All experiments make use of the rllab [15] benchmark code base and the complementary continuous control tasks suite. The following tasks are part of the experimental setup: CartPole (S ⊆ R4, A ⊆ R1), CartPoleSwingup (S ⊆ R4,A ⊆ R1), DoublePendulum (S ⊆ R6,A ⊆ R1), MountainCar (S ⊆ R3, A ⊆ R1), locomotion tasks HalfCheetah (S ⊆ R20, A ⊆ R6), Walker2D (S ⊆ R20, A ⊆ R6), and the hierarchical task SwimmerGather (S ⊆ R33, A ⊆ R2).
Performance is measured through the average return (not including the intrinsic rewards) over the trajectories generated (y-axis) at each iteration (x-axis). More specifically, the darker-colored lines in each plot represent the median performance over a fixed set of 10 random seeds while the shaded areas show the interquartile range at each iteration. Moreover, the number in each legend shows this performance measure, averaged over all iterations. The exact setup is described in the Appendix.
Domains with sparse rewards are difficult to solve through naïve exploration behavior because, before the agent obtains any reward, it lacks a feedback signal on how to improve its policy. This allows us to test whether an exploration strategy is truly capable of systematic exploration, rather than improving existing RL algorithms by adding more hyperparameters. Therefore, VIME is compared with heuristic exploration strategies on the following tasks with sparse rewards. A reward of +1 is given when the car escapes the valley on the right side in MountainCar; when the pole is pointed upwards in CartPoleSwingup; and when the cheetah moves forward over five units in HalfCheetah. We compare VIME with the following baselines: only using Gaussian control noise [15] and using the `2 BNN prediction error as an intrinsic reward, a continuous extension of [10]. TRPO [8] is used as the RL algorithm, as it performs very well compared to other methods [15]. Figure 1 shows the performance results. We notice that using a naïve exploration performs very poorly, as it is almost never able to reach the goal in any of the tasks. Similarly, using `2 errors does not perform well. In contrast, VIME performs much better, achieving the goal in most cases. This experiment demonstrates that curiosity drives the agent to explore, even in the absence of any initial reward, where naïve exploration completely breaks down.
To further strengthen this point, we have evaluated VIME on the highly difficult hierarchical task SwimmerGather in Figure 5 whose reward signal is naturally sparse. In this task, a two-link robot needs to reach “apples” while avoiding “bombs” that are perceived through a laser scanner. However, before it can make any forward progress, it has to learn complex locomotion primitives in the absence of any reward. None of the RL methods tested previously in [15] were able to make progress with naïve exploration. Remarkably, VIME leads the agent to acquire coherent motion primitives without any reward guidance, achieving promising results on this challenging task.
Next, we investigate whether VIME is widely applicable by (i) testing it on environments where the reward is well shaped, and (ii) pairing it with different RL methods. In addition to TRPO, we choose to equip REINFORCE [27] and ERWR [28] with VIME because these two algorithms usually suffer from premature convergence to suboptimal policies [15, 29], which can potentially be alleviated by better exploration. Their performance is shown in Figure 2 on several well-established continuous control tasks. Furthermore, Figure 3 shows the same comparison for the Walker2D locomotion task. In the majority of cases, VIME leads to a significant performance gain over heuristic exploration. Our exploration method allows the RL algorithms to converge faster, and notably helps REINFORCE and ERWR avoid converging to a locally optimal solution on DoublePendulum and MountainCar. We note that in environments such as CartPole, a better exploration strategy is redundant as following the policy gradient direction leads to the globally optimal solution. Additionally, we tested adding Gaussian noise to the rewards as a baseline, which did not improve performance.
To give an intuitive understanding of VIME’s exploration behavior, the distribution of visited states for both naïve exploration and VIME after convergence is investigated. Figure 1d shows that using Gaussian control noise exhibits random walk behavior: the state visitation plot is more condensed and ball-shaped around the center. In comparison, VIME leads to a more diffused visitation pattern, exploring the states more efficiently, and hence reaching the goal more quickly.
Finally, we investigate how η, as used in Eq. (3), trades off exploration and exploitation behavior. On the one hand, higher η values should lead to a higher curiosity drive, causing more exploration. On the other hand, very low η values should reduce VIME to traditional Gaussian control noise. Figure 4 shows the performance on MountainCar for different η values. Setting η too high clearly results in prioritizing exploration over getting additional external reward. Too low an η value reduces the method to the baseline algorithm, as the intrinsic reward contribution to the total reward r′ becomes negligible. Most importantly, this figure highlights that there is a wide η range for which the task is best solved, across different algorithms.
4 Related Work
A body of theoretically oriented work demonstrates exploration strategies that are able to learn online in a previously unknown MDP and incur a polynomial amount of regret; as a result, these algorithms find a near-optimal policy in a polynomial amount of time. Some of these algorithms are based on the principle of optimism under uncertainty: E3 [3], R-Max [4], UCRL [30]. An alternative approach is Bayesian reinforcement learning, which maintains a distribution over possible MDPs [1, 17, 23, 31]. The optimism-based exploration strategies have been extended to continuous state spaces, for example in [6, 9]; however, these methods do not accommodate nonlinear function approximators.
Practical RL algorithms often rely on simple exploration heuristics, such as ε-greedy and Boltzmann exploration [32]. However, these heuristics exhibit random walk exploratory behavior, which can lead
to exponential regret even in the case of small MDPs [9]. Our proposed method of utilizing information gain can be traced back to [22], and has been further explored in [17, 33, 34]. Other metrics for curiosity have also been proposed, including prediction error [10, 35], prediction error improvement [36], leverage [14], neuro-correlates [37], and predictive information [38]. These methods have not been applied directly to high-dimensional continuous control tasks without discretization. We refer the reader to [21, 39] for an extensive review on curiosity and intrinsic rewards.
Recently, there have been various exploration strategies proposed in the context of deep RL. [10] proposes to use the ℓ2 prediction error as the intrinsic reward. [12] performs approximate visitation counting in a learned state embedding using Gaussian kernels. [11] proposes a form of Thompson sampling, training multiple value functions using bootstrapping. Although these approaches can scale up to high-dimensional state spaces, they generally assume discrete action spaces. [40] make use of mutual information for gait stabilization in continuous control, but rely on state discretization. Finally, [41] proposes a variational method for information maximization in the context of optimizing empowerment, which, as noted by [42], does not explicitly favor exploration.
5 Conclusions
We have proposed Variational Information Maximizing Exploration (VIME), a curiosity-driven exploration strategy for continuous control tasks. Variational inference is used to approximate the posterior distribution of a Bayesian neural network that represents the environment dynamics. Using information gain in this learned dynamics model as intrinsic rewards allows the agent to optimize for both external reward and intrinsic surprise simultaneously. Empirical results show that VIME performs significantly better than heuristic exploration methods across various continuous control tasks and algorithms. As future work, we would like to investigate measuring surprise in the value function and using the learned dynamics model for planning.
Acknowledgments
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, Berkeley Deep Drive (BDD), and ONR through a PECASE award. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. | 1. What is the main contribution of the paper regarding curiosity in deep reinforcement learning?
2. What are the strengths and weaknesses of the proposed variational approach and algorithm?
3. How does the reviewer assess the relationship between the proposed approach and Schmidhuber's compression improvement?
4. What are the concerns regarding the brittleness of the proposed algorithm, specifically with regards to computing the Hessian and dividing by the median?
5. Can you explain the purpose and necessity of the two intrinsic rewards proposed in Algorithm 1?
6. Are there any issues with learned transition models?
7. What is the significance of line 230, "very low eta values should reduce VIME to traditional Gaussian control noise"?
8. How does the reviewer suggest improving the paper, particularly regarding empirical evidence for the median and citing more references on Bayesian networks? | Review | Review
This paper looks at the notion of curiosity in deep reinforcement learning. It returns to the idea of equating exploratory behavior with maximizing information. Key contributions are the formulation of a variational perspective on information gain, an algorithm based on Blundell et al.'s Bayesian neural networks and the exposition of the relationship between their approach and Schmidhuber's compression improvement. Results on simple domains are given.

The paper shows a pleasant breadth of understanding of the literature. It provides a number of insights into curiosity for RL with neural networks. I think it could be improved by focusing on the development of the variational approach and the immediately resulting algorithm. As is, there are a number of asides that detract from the main contribution. My main concern is that the proposed algorithm seems relatively brittle. In the case of Eqn 17, computing the Hessian might only be a good idea in the diagonal case. Dividing by the median suggests an underlying instability.

Questions:
* The median trick bothers me. Suppose the model & KL have converged. Then at best the intrinsic reward is 1 everywhere and this does not change the value function. In the worst case the KL is close to 0 and you end up with a high variance in your intrinsic rewards. Why isn't this an issue here?
* Eqn 13: updating the posterior at every step is different from updating the posterior given all data from the prior. Do you think there are issues with the resulting "posterior chaining"?
* How good are the learned transition models?
* Can you explain line 230: "very low eta values should reduce VIME to traditional Gaussian control noise"?
* Why do you propose two intrinsic rewards on line 5 of algorithm 1? I'd like to see a clear position.

Suggestions:
* Eqn 2: P(. | s_t, a_t)
* Line 116: For another connection, you may want to look at Lopes et al. (2012), "Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress"
* Line 119: You should specify which description length you mean... the statement is possibly imprecise/incorrect as is
* Line 128: in expectation propagation (which I know from Bishop (2006)) the KLs end up getting reversed, too... is there a relationship?
* Eqn 13: log p(s_t | theta) should be p(s_t | s_t-1, a_t-1, theta), no?
* It would be good to give empirical evidence showing why the median is needed
* Graphs: unreadable
* It might be good to cite more than just [20] as a reference on Bayesian networks. Again, Bishop (2006) (Section 5.7) provides a nice list. |
NIPS | Title
VIME: Variational Information Maximizing Exploration
Abstract
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as ε-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent’s belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
1 Introduction
Reinforcement learning (RL) studies how an agent can maximize its cumulative reward in a previously unknown environment, which it learns about through experience. A long-standing problem is how to manage the trade-off between exploration and exploitation. In exploration, the agent experiments with novel strategies that may improve returns in the long run; in exploitation, it maximizes rewards through behavior that is known to be successful. An effective exploration strategy allows the agent to generate trajectories that are maximally informative about the environment. For small tasks, this trade-off can be handled effectively through Bayesian RL [1] and PAC-MDP methods [2–6], which offer formal guarantees. However, these guarantees assume discrete state and action spaces. Hence, in settings where state-action discretization is infeasible, many RL algorithms use heuristic exploration strategies. Examples include acting randomly using ε-greedy or Boltzmann exploration [7], and utilizing Gaussian noise on the controls in policy gradient methods [8]. These heuristics often rely on random walk behavior which can be highly inefficient; for example, Boltzmann exploration requires a training time exponential in the number of states in order to solve the well-known n-chain MDP [9]. In between formal methods and simple heuristics, several works have proposed to address the exploration problem using less formal, but more expressive methods [10–14]. However, none of them fully address exploration in continuous control, as discretization of the state-action space scales exponentially in its dimensionality. For example, the Walker2D task [15] has a 26-dim state-action space. If we assume a coarse discretization into 10 bins for each dimension, a table of state-action visitation counts would require 10^26 entries.
This paper proposes a curiosity-driven exploration strategy, making use of information gain about the agent’s internal belief of the dynamics model as a driving force. This principle can be traced back to the concepts of curiosity and surprise [16–18]. Within this framework, agents are encouraged to take actions that result in states they deem surprising, i.e., states that cause large updates to the dynamics model distribution. We propose a practical implementation of measuring information gain using variational inference. Herein, the agent’s current understanding of the environment dynamics is represented by a Bayesian neural network (BNN) [19, 20]. We also show how this can be interpreted as measuring compression improvement, a proposed model of curiosity [21]. In contrast to previous curiosity-based approaches [10, 22], our model scales naturally to continuous state and action spaces. The presented approach is evaluated on a range of continuous control tasks, and multiple underlying RL algorithms. Experimental results show that VIME achieves significantly better performance than naïve exploration strategies.
2 Methodology
In Section 2.1, we establish notation for the subsequent equations. Next, in Section 2.2, we explain the theoretical foundation of curiosity-driven exploration. In Section 2.3 we describe how to adapt this idea to continuous control, and we show how to build on recent advances in variational inference for Bayesian neural networks (BNNs) to make this formulation practical. Thereafter, we make explicit the intuitive link between compression improvement and the variational lower bound in Section 2.4. Finally, Section 2.5 describes how our method is practically implemented.
2.1 Preliminaries
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ_0, γ, T), in which S ⊆ R^n is a state set, A ⊆ R^m an action set, P : S × A × S → R_{≥0} a transition probability distribution, r : S × A → R a bounded reward function, ρ_0 : S → R_{≥0} an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. States and actions viewed as random variables are abbreviated as S and A. The presented models are based on the optimization of a stochastic policy π_α : S × A → R_{≥0}, parametrized by α. Let µ(π_α) denote its expected discounted return: µ(π_α) = E_τ[Σ_{t=0}^{T} γ^t r(s_t, a_t)], where τ = (s_0, a_0, . . .) denotes the whole trajectory, s_0 ∼ ρ_0(s_0), a_t ∼ π_α(a_t|s_t), and s_{t+1} ∼ P(s_{t+1}|s_t, a_t).
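As a small illustration of this return objective, the following sketch (plain NumPy; the helper name and trajectory format are illustrative assumptions) computes the discounted return of one sampled trajectory:

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Compute sum_{t=0}^{T} gamma^t * r(s_t, a_t) for one finite-horizon trajectory."""
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))
```

For example, discounted_return([0.0, 0.0, 1.0], gamma=0.99) evaluates to 0.99^2 ≈ 0.98.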
2.2 Curiosity
Our method builds on the theory of curiosity-driven exploration [16, 17, 21, 22], in which the agent engages in systematic exploration by seeking out state-action regions that are relatively unexplored. The agent models the environment dynamics via a model p(st+1|st, at; θ), parametrized by the random variable Θ with values θ ∈ Θ. Assuming a prior p(θ), it maintains a distribution over dynamic models through a distribution over θ, which is updated in a Bayesian manner (as opposed to a point estimate). The history of the agent up until time step t is denoted as ξt = {s1, a1, . . . , st}. According to curiosity-driven exploration [17], the agent should take actions that maximize the reduction in uncertainty about the dynamics. This can be formalized as maximizing the sum of reductions in entropy ∑ t (H(Θ|ξt, at)−H(Θ|St+1, ξt, at)) , (1) through a sequence of actions {at}. According to information theory, the individual terms equal the mutual information between the next state distribution St+1 and the model parameter Θ, namely I (St+1; Θ|ξt, at). Therefore, the agent is encouraged to take actions that lead to states that are maximally informative about the dynamics model. Furthermore, we note that
I (St+1; Θ|ξt, at) = Est+1∼P(·|ξt,at) [ DKL[p(θ|ξt, at, st+1)‖p(θ|ξt)] ] , (2)
the KL divergence from the agent’s new belief over the dynamics model to the old one, taking expectation over all possible next states according to the true dynamics P . This KL divergence can be interpreted as information gain.
If calculating the posterior dynamics distribution is tractable, it is possible to optimize Eq. (2) directly by maintaining a belief over the dynamics model [17]. However, this is not generally the case. Therefore, a common practice [10, 23] is to use RL to approximate planning for maximal mutual information along a trajectory ∑ t I (St+1; Θ|ξt, at) by adding each term I (St+1; Θ|ξt, at) as an intrinsic reward, which captures the agent’s surprise in the form of a reward function. This is practically realized by taking actions at ∼ πα(st) and sampling st+1 ∼ P(·|st, at) in order to add DKL[p(θ|ξt, at, st+1)‖p(θ|ξt)] to the external reward. The trade-off between exploitation and exploration can now be realized explicitly as follows:
r′(st, at, st+1) = r(st, at) + ηDKL[p(θ|ξt, at, st+1)‖p(θ|ξt)], (3)
with η ∈ R+ a hyperparameter controlling the urge to explore. In conclusion, the biggest practical issue with maximizing information gain for exploration is that the computation of Eq. (3) requires calculating the posterior p(θ|ξt, at, st+1), which is generally intractable.
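Although the posterior itself is intractable, the reward shaping of Eq. (3) is mechanically simple once an information-gain estimate is available; a minimal sketch (illustrative names, with the information-gain values assumed to be supplied by whatever belief update is used):

```python
def relabel_rewards(rewards, info_gains, eta):
    """Augment each external reward along a trajectory with the information-gain
    bonus, realizing the exploration/exploitation trade-off of Eq. (3)."""
    return [r + eta * gain for r, gain in zip(rewards, info_gains)]
```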
2.3 Variational Bayes
We propose a tractable solution to maximize the information gain objective presented in the previous section. In a purely Bayesian setting, we can derive the posterior distribution given a new state-action pair through Bayes’ rule as
$$p(\theta \mid \xi_t, a_t, s_{t+1}) = \frac{p(\theta \mid \xi_t)\, p(s_{t+1} \mid \xi_t, a_t; \theta)}{p(s_{t+1} \mid \xi_t, a_t)}, \quad (4)$$
with p(θ|ξt, at) = p(θ|ξt) as actions do not influence beliefs about the environment [17]. Herein, the denominator is computed through the integral
p(st+1|ξt, at) = ∫ Θ p(st+1|ξt, at; θ)p(θ|ξt)dθ. (5)
In general, this integral tends to be intractable when using highly expressive parametrized models (e.g., neural networks), which are often needed to accurately capture the environment model in high-dimensional continuous control.
We propose a practical solution through variational inference [24]. Herein, we embrace the fact that calculating the posterior p(θ|D) for a data set D is intractable. Instead we approximate it through an alternative distribution q(θ;φ), parameterized by φ, by minimizing DKL[q(θ;φ)‖p(θ|D)]. This is done through maximization of the variational lower bound L[q(θ;φ),D]:
L[q(θ;φ),D] = Eθ∼q(·;φ) [log p(D|θ)]−DKL[q(θ;φ)‖p(θ)]. (6)
Rather than computing information gain in Eq. (3) explicitly, we compute an approximation to it, leading to the following total reward:
r′(st, at, st+1) = r(st, at) + ηDKL[q(θ;φt+1)‖q(θ;φt)], (7)
with φt+1 the updated and φt the old parameters representing the agent’s belief. Natural candidates for parametrizing the agent’s dynamics model are Bayesian neural networks (BNNs) [19], as they maintain a distribution over their weights. This allows us to view the BNN as an infinite neural network ensemble by integrating out its parameters:
p(y|x) = ∫ Θ p(y|x; θ)q(θ;φ)dθ. (8)
In particular, we utilize a BNN parametrized by a fully factorized Gaussian distribution [20]. Practical BNN implementation details are deferred to Section 2.5, while we give some intuition into the behavior of BNNs in the appendix.
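As an illustration of Eq. (8), the predictive distribution can be approximated by Monte Carlo averaging over sampled weights; in the following sketch, sample_weights_fn and forward_fn are placeholder interfaces rather than parts of the paper's implementation:

```python
import numpy as np

def bnn_predict(x, sample_weights_fn, forward_fn, n_samples=20):
    """Approximate Eq. (8) by averaging the network's predictive output over
    weight samples theta ~ q(.; phi), i.e. an (approximate) ensemble view of the BNN."""
    outputs = [forward_fn(x, sample_weights_fn()) for _ in range(n_samples)]
    return np.mean(outputs, axis=0)
```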
2.4 Compression
It is possible to derive an interesting relationship between compression improvement, an intrinsic reward objective defined in [25], and the information gain of Eq. (2). In [25], the agent’s curiosity is
equated with compression improvement, measured through C(ξt;φt−1)− C(ξt;φt), where C(ξ;φ) is the description length of ξ using φ as a model. Furthermore, it is known that the negative variational lower bound can be viewed as the description length [19]. Hence, we can write compression improvement as L[q(θ;φt), ξt] − L[q(θ;φt−1), ξt]. In addition, an alternative formulation of the variational lower bound in Eq. (6) is given by
$$\log p(\mathcal{D}) = \overbrace{\int_\Theta q(\theta;\phi)\, \log \frac{p(\theta, \mathcal{D})}{q(\theta;\phi)}\, d\theta}^{L[q(\theta;\phi),\, \mathcal{D}]} + D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,p(\theta \mid \mathcal{D})]. \quad (9)$$
Thus, compression improvement can now be written as
(log p(ξt)−DKL[q(θ;φt)‖p(θ|ξt)])− (log p(ξt)−DKL[q(θ;φt−1)‖p(θ|ξt)]) . (10) If we assume that φt perfectly optimizes the variational lower bound for the history ξt, then DKL[q(θ;φt)‖p(θ|ξt)] = 0, which occurs when the approximation equals the true posterior, i.e., q(θ;φt) = p(θ|ξt). Hence, compression improvement becomes DKL[p(θ|ξt−1)‖p(θ|ξt)]. Therefore, optimizing for compression improvement comes down to optimizing the KL divergence from the posterior given the past history ξt−1 to the posterior given the total history ξt. As such, we arrive at an alternative way to encode curiosity than information gain, namely DKL[p(θ|ξt)‖p(θ|ξt, at, st+1)], its reversed KL divergence. In experiments, we noticed no significant difference between the two KL divergence variants. This can be explained as both variants are locally equal when introducing small changes to the parameter distributions. Investigation of how to combine both information gain and compression improvement is deferred to future work.
2.5 Implementation
The complete method is summarized in Algorithm 1. We first set forth implementation and parametrization details of the dynamics BNN. The BNN weight distribution q(θ;φ) is given by the fully factorized Gaussian distribution [20]:
$$q(\theta;\phi) = \prod_{i=1}^{|\Theta|} \mathcal{N}(\theta_i \mid \mu_i; \sigma_i^2). \quad (11)$$
Hence, φ = {µ, σ}, with µ the Gaussian’s mean vector and σ the covariance matrix diagonal. This is particularly convenient as it allows for a simple analytical formulation of the KL divergence. This is described later in this section. Because of the restriction σ > 0, the standard deviation of the Gaussian BNN parameter is parametrized as σ = log(1 + e^ρ), with ρ ∈ R [20].
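A minimal sketch of this parametrization (NumPy; the helper name is illustrative): drawing one weight sample θ ∼ q(·;φ) from φ = {µ, ρ} with σ = log(1 + e^ρ):

```python
import numpy as np

def sample_weights(mu, rho, rng=np.random):
    """Sample theta ~ q(.; phi) for the fully factorized Gaussian posterior,
    with the standard deviation parametrized as sigma = log(1 + exp(rho))."""
    mu, rho = np.asarray(mu, dtype=float), np.asarray(rho, dtype=float)
    sigma = np.log1p(np.exp(rho))        # softplus keeps sigma positive
    eps = rng.standard_normal(mu.shape)  # reparametrization noise
    return mu + sigma * eps
```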
Now the training of the dynamics BNN through optimization of the variational lower bound is described. The expected log-likelihood term in Eq. (6) is approximated through sampling, E_{θ∼q(·;φ)}[log p(D|θ)] ≈ (1/N) Σ_{i=1}^{N} log p(D|θ_i), with N samples drawn according to θ ∼ q(·;φ) [20]. Optimizing the variational lower bound in Eq. (6) in combination with the reparametrization trick is called stochastic gradient variational Bayes (SGVB) [26] or Bayes by Backprop [20]. Furthermore, we make use of the local reparametrization trick proposed in [26], in which sampling at the weights is replaced by sampling the neuron pre-activations, which is more computationally efficient and reduces gradient variance. The optimization of the variational lower bound is done at regular intervals during the RL training process, by sampling D from a FIFO replay pool that stores recent samples (s_t, a_t, s_{t+1}). This is to break up the strong intra-trajectory sample correlation which destabilizes learning, in favor of obtaining i.i.d. data [7]. Moreover, it diminishes the effect of compounding posterior approximation errors.
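A sketch of the sampled lower-bound estimate just described; the weight-sampling, log-likelihood, and KL-to-prior terms are placeholders for whatever the BNN implementation provides:

```python
def elbo_estimate(data, sample_weights_fn, log_likelihood_fn, kl_to_prior, n_samples=10):
    """Monte Carlo estimate of L[q, D] from Eq. (6): average log p(D | theta_i)
    over sampled weights, minus KL[q(theta; phi) || p(theta)]."""
    log_liks = [log_likelihood_fn(data, sample_weights_fn()) for _ in range(n_samples)]
    return sum(log_liks) / n_samples - kl_to_prior
```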
The posterior distribution of the dynamics parameter, which is needed to compute the KL divergence in the total reward function r′ of Eq. (7), can be computed through the following minimization
$$\phi' = \arg\min_{\phi} \overbrace{\Big[\, \underbrace{D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,q(\theta;\phi_{t-1})]}_{\ell_{\mathrm{KL}}(q(\theta;\phi))} \;-\; \mathbb{E}_{\theta \sim q(\cdot;\phi)}\big[\log p(s_t \mid \xi_t, a_t; \theta)\big] \,\Big]}^{\ell(q(\theta;\phi),\, s_t)}, \quad (12)$$
where we replace the expectation over θ with samples θ ∼ q(·;φ). Because we only update the model periodically based on samples drawn from the replay pool, this optimization can be performed in parallel for each st, keeping φt−1 fixed. Once φ′ has been obtained, we can use it to compute the intrinsic reward.
Algorithm 1: Variational Information Maximizing Exploration (VIME)
for each epoch n do
    for each timestep t in each trajectory generated during n do
        Generate action a_t ∼ π_α(s_t) and sample state s_{t+1} ∼ P(·|ξ_t, a_t); get r(s_t, a_t).
        Add triplet (s_t, a_t, s_{t+1}) to FIFO replay pool R.
        Compute D_KL[q(θ; φ′_{n+1}) ‖ q(θ; φ_{n+1})] by the approximation ∇^⊤ H^{-1} ∇, following Eq. (16) for diagonal BNNs, or by optimizing Eq. (12) to obtain φ′_{n+1} for general BNNs.
        Divide D_KL[q(θ; φ′_{n+1}) ‖ q(θ; φ_{n+1})] by the median of previous KL divergences.
        Construct r′(s_t, a_t, s_{t+1}) ← r(s_t, a_t) + η D_KL[q(θ; φ′_{n+1}) ‖ q(θ; φ_{n+1})], following Eq. (7).
    Minimize D_KL[q(θ; φ_n) ‖ p(θ)] − E_{θ∼q(·;φ_n)}[log p(D|θ)] following Eq. (6), with D sampled randomly from R, leading to updated posterior q(θ; φ_{n+1}).
    Use rewards {r′(s_t, a_t, s_{t+1})} to update policy π_α using any standard RL method.
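The following high-level sketch illustrates one data-collection pass of Algorithm 1; every interface here (env, policy, bnn, the replay pool) is an illustrative placeholder rather than the paper's code, and the intrinsic reward uses the diagonal-BNN approximation of Eq. (16):

```python
import numpy as np

def vime_rollout(env, policy, bnn, replay_pool, kl_history, eta, horizon):
    """Collect one trajectory, augmenting external rewards with the
    median-normalized information-gain bonus, as in Algorithm 1.

    After enough trajectories have been collected for the epoch, the caller
    refits the BNN posterior on replayed data (Eq. (6)) and updates the policy
    with any standard RL method using the augmented rewards.
    """
    s = env.reset()
    samples = []
    for _ in range(horizon):
        a = policy.sample_action(s)
        s_next, r_ext, done = env.step(a)
        replay_pool.append((s, a, s_next))
        kl = bnn.approx_info_gain(s, a, s_next)          # Eq. (16) for diagonal BNNs
        kl_history.append(kl)
        kl_norm = kl / max(np.median(kl_history), 1e-8)  # divide by median of past KLs
        samples.append((s, a, r_ext + eta * kl_norm))    # augmented reward, Eq. (7)
        s = s_next
        if done:
            break
    return samples
```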
To optimize Eq. (12) efficiently, we only take a single second-order step. This way, the gradient is rescaled according to the curvature of the KL divergence at the origin. As such, we compute DKL[q(θ;φ+ λ∆φ)‖q(θ;φ)], with the update step ∆φ defined as
$$\Delta\phi = H^{-1}(\ell)\, \nabla_\phi\, \ell(q(\theta;\phi), s_t), \quad (13)$$
in which H(ℓ) is the Hessian of ℓ(q(θ;φ), s_t). Since we assume that the variational approximation is a fully factorized Gaussian, the KL divergence from posterior to prior has a particularly simple form:
$$D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,q(\theta;\phi')] = \tfrac{1}{2}\sum_{i=1}^{|\Theta|}\left( \left(\frac{\sigma_i}{\sigma'_i}\right)^{2} + 2\log\sigma'_i - 2\log\sigma_i + \frac{(\mu'_i-\mu_i)^2}{\sigma'^{2}_i} \right) - \frac{|\Theta|}{2}. \quad (14)$$
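Eq. (14) transcribes directly into code; a sketch (NumPy), where the inputs are the mean and standard-deviation vectors of the two factorized Gaussians:

```python
import numpy as np

def kl_factorized_gaussians(mu, sigma, mu_prime, sigma_prime):
    """KL[ q(theta; {mu, sigma}) || q(theta; {mu', sigma'}) ] for fully
    factorized Gaussians, following Eq. (14)."""
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    mu_prime, sigma_prime = np.asarray(mu_prime, dtype=float), np.asarray(sigma_prime, dtype=float)
    ratio = (sigma / sigma_prime) ** 2
    log_term = 2.0 * (np.log(sigma_prime) - np.log(sigma))
    mean_term = (mu_prime - mu) ** 2 / sigma_prime ** 2
    return 0.5 * np.sum(ratio + log_term + mean_term) - mu.size / 2.0
```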
Because the KL divergence in Eq. (14) is approximately quadratic in its parameters and the log-likelihood term can be seen as locally linear compared to this highly curved KL term, we approximate H by only calculating it for the KL term ℓ_KL(q(θ;φ)). This can be computed very efficiently in the case of a fully factorized Gaussian distribution, as this approximation becomes a diagonal matrix. Looking at Eq. (14), we can calculate the following Hessian at the origin. The µ and ρ entries are defined as
$$\frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \mu_i^2} = \frac{1}{\log^2(1 + e^{\rho_i})} \quad\text{and}\quad \frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \rho_i^2} = \frac{2 e^{2\rho_i}}{(1 + e^{\rho_i})^2}\, \frac{1}{\log^2(1 + e^{\rho_i})}, \quad (15)$$
while all other entries are zero. Furthermore, it is also possible to approximate the KL divergence through a second-order Taylor expansion as $\tfrac{1}{2}\Delta\phi^{\top} H \Delta\phi = \tfrac{1}{2}\left(H^{-1}\nabla\right)^{\top} H \left(H^{-1}\nabla\right)$, since both the value and gradient of the KL divergence are zero at the origin. This gives us
$$D_{\mathrm{KL}}[q(\theta;\phi + \lambda\Delta\phi)\,\|\,q(\theta;\phi)] \approx \tfrac{1}{2}\lambda^2\, \nabla_\phi \ell^{\top} H^{-1}(\ell_{\mathrm{KL}})\, \nabla_\phi \ell. \quad (16)$$
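Putting Eqs. (15) and (16) together gives a fast approximation of the intrinsic reward for diagonal BNNs; in the sketch below, the gradients of ℓ with respect to µ and ρ are assumed to come from backpropagation, all inputs are NumPy arrays, and all names are illustrative:

```python
import numpy as np

def approx_info_gain(grad_mu, grad_rho, rho, lam=1.0):
    """Approximate KL[q(.; phi + lam * dphi) || q(.; phi)] via Eq. (16), using
    the diagonal Hessian of the KL term from Eq. (15)."""
    sigma = np.log1p(np.exp(rho))                        # sigma = log(1 + e^rho)
    h_mu = 1.0 / sigma ** 2                              # d^2 l_KL / d mu_i^2
    h_rho = 2.0 * np.exp(2.0 * rho) / ((1.0 + np.exp(rho)) ** 2 * sigma ** 2)
    # 0.5 * lam^2 * grad^T H^{-1} grad, with H diagonal
    return 0.5 * lam ** 2 * (np.sum(grad_mu ** 2 / h_mu) + np.sum(grad_rho ** 2 / h_rho))
```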
Note that H^{-1}(ℓ_KL) in Eq. (16) is diagonal, so the expression can be computed efficiently. Instead of using the KL divergence D_KL[q(θ;φ_{t+1})‖q(θ;φ_t)] directly as an intrinsic reward in Eq. (7), we normalize it by dividing it by the average of the median KL divergences taken over a fixed number of previous trajectories. Rather than focusing on its absolute value, this emphasizes the relative difference in KL divergence between samples, which accomplishes the same effect since the variance of the KL divergence converges to zero once the model is fully learned.
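A sketch of this normalization; the exact window over previous trajectories is a design choice, and this version simply keeps a fixed-length history of recent KL values:

```python
from collections import deque
import numpy as np

class KLNormalizer:
    """Divide each new KL value by the median of recently observed KLs, so the
    intrinsic reward reflects relative rather than absolute surprise."""
    def __init__(self, history_len=1000):
        self.history = deque(maxlen=history_len)

    def normalize(self, kl_value):
        self.history.append(kl_value)
        med = float(np.median(self.history))
        return kl_value / med if med > 0 else kl_value
```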
3 Experiments
In this section, we investigate (i) whether VIME can succeed in domains that have extremely sparse rewards, (ii) whether VIME improves learning when the reward is shaped to guide the agent towards its goal, and (iii) how η, as used in Eq. (3), trades off exploration and exploitation behavior. All experiments make use of the rllab [15] benchmark code base and the complementary continuous control tasks suite. The following tasks are part of the experimental setup: CartPole (S ⊆ R^4, A ⊆ R^1), CartPoleSwingup (S ⊆ R^4, A ⊆ R^1), DoublePendulum (S ⊆ R^6, A ⊆ R^1), MountainCar (S ⊆ R^3, A ⊆ R^1), the locomotion tasks HalfCheetah (S ⊆ R^20, A ⊆ R^6) and Walker2D (S ⊆ R^20, A ⊆ R^6), and the hierarchical task SwimmerGather (S ⊆ R^33, A ⊆ R^2).
Performance is measured through the average return (not including the intrinsic rewards) over the trajectories generated (y-axis) at each iteration (x-axis). More specifically, the darker-colored lines in each plot represent the median performance over a fixed set of 10 random seeds while the shaded areas show the interquartile range at each iteration. Moreover, the number in each legend shows this performance measure, averaged over all iterations. The exact setup is described in the Appendix.
Domains with sparse rewards are difficult to solve through naïve exploration behavior because, before the agent obtains any reward, it lacks a feedback signal on how to improve its policy. This allows us to test whether an exploration strategy is truly capable of systematic exploration, rather than merely improving existing RL algorithms by adding more hyperparameters. Therefore, VIME is compared with heuristic exploration strategies on the following tasks with sparse rewards. A reward of +1 is given when the car escapes the valley on the right side in MountainCar; when the pole is pointed upwards in CartPoleSwingup; and when the cheetah moves forward over five units in HalfCheetah. We compare VIME with the following baselines: only using Gaussian control noise [15] and using the ℓ2 BNN prediction error as an intrinsic reward, a continuous extension of [10]. TRPO [8] is used as the RL algorithm, as it performs very well compared to other methods [15]. Figure 1 shows the performance results. We notice that naïve exploration performs very poorly, as it is almost never able to reach the goal in any of the tasks. Similarly, using ℓ2 errors does not perform well. In contrast, VIME performs much better, achieving the goal in most cases. This experiment demonstrates that curiosity drives the agent to explore, even in the absence of any initial reward, where naïve exploration completely breaks down.
To further strengthen this point, we have evaluated VIME on the highly difficult hierarchical task SwimmerGather, whose reward signal is naturally sparse (Figure 5). In this task, a two-link robot needs to reach “apples” while avoiding “bombs” that are perceived through a laser scanner. However, before it can make any forward progress, it has to learn complex locomotion primitives in the absence of any reward. None of the RL methods tested previously in [15] were able to make progress with naïve exploration. Remarkably, VIME leads the agent to acquire coherent motion primitives without any reward guidance, achieving promising results on this challenging task.
Next, we investigate whether VIME is widely applicable by (i) testing it on environments where the reward is well shaped, and (ii) pairing it with different RL methods. In addition to TRPO, we choose to equip REINFORCE [27] and ERWR [28] with VIME because these two algorithms usually suffer from premature convergence to suboptimal policies [15, 29], which can potentially be alleviated by better exploration. Their performance is shown in Figure 2 on several well-established continuous control tasks. Furthermore, Figure 3 shows the same comparison for the Walker2D locomotion task. In the majority of cases, VIME leads to a significant performance gain over heuristic exploration. Our exploration method allows the RL algorithms to converge faster, and notably helps REINFORCE and ERWR avoid converging to a locally optimal solution on DoublePendulum and MountainCar. We note that in environments such as CartPole, a better exploration strategy is redundant as following the policy gradient direction leads to the globally optimal solution. Additionally, we tested adding Gaussian noise to the rewards as a baseline, which did not improve performance.
To give an intuitive understanding of VIME’s exploration behavior, the distribution of visited states for both naïve exploration and VIME after convergence is investigated. Figure 1d shows that using Gaussian control noise exhibits random walk behavior: the state visitation plot is more condensed and ball-shaped around the center. In comparison, VIME leads to a more diffused visitation pattern, exploring the states more efficiently, and hence reaching the goal more quickly.
Finally, we investigate how η, as used in Eq. (3), trades off exploration and exploitation behavior. On the one hand, higher η values should lead to a higher curiosity drive, causing more exploration. On the other hand, very low η values should reduce VIME to traditional Gaussian control noise. Figure 4 shows the performance on MountainCar for different η values. Setting η too high clearly results in prioritizing exploration over getting additional external reward. Too low an η value reduces the method to the baseline algorithm, as the intrinsic reward contribution to the total reward r′ becomes negligible. Most importantly, this figure highlights that there is a wide η range for which the task is best solved, across different algorithms.
4 Related Work
A body of theoretically oriented work demonstrates exploration strategies that are able to learn online in a previously unknown MDP and incur a polynomial amount of regret; as a result, these algorithms find a near-optimal policy in a polynomial amount of time. Some of these algorithms are based on the principle of optimism under uncertainty: E3 [3], R-Max [4], UCRL [30]. An alternative approach is Bayesian reinforcement learning, which maintains a distribution over possible MDPs [1, 17, 23, 31]. The optimism-based exploration strategies have been extended to continuous state spaces, for example in [6, 9]; however, these methods do not accommodate nonlinear function approximators.
Practical RL algorithms often rely on simple exploration heuristics, such as ε-greedy and Boltzmann exploration [32]. However, these heuristics exhibit random walk exploratory behavior, which can lead
to exponential regret even in the case of small MDPs [9]. Our proposed method of utilizing information gain can be traced back to [22], and has been further explored in [17, 33, 34]. Other metrics for curiosity have also been proposed, including prediction error [10, 35], prediction error improvement [36], leverage [14], neuro-correlates [37], and predictive information [38]. These methods have not been applied directly to high-dimensional continuous control tasks without discretization. We refer the reader to [21, 39] for an extensive review on curiosity and intrinsic rewards.
Recently, there have been various exploration strategies proposed in the context of deep RL. [10] proposes to use the ℓ2 prediction error as the intrinsic reward. [12] performs approximate visitation counting in a learned state embedding using Gaussian kernels. [11] proposes a form of Thompson sampling, training multiple value functions using bootstrapping. Although these approaches can scale up to high-dimensional state spaces, they generally assume discrete action spaces. [40] make use of mutual information for gait stabilization in continuous control, but rely on state discretization. Finally, [41] proposes a variational method for information maximization in the context of optimizing empowerment, which, as noted by [42], does not explicitly favor exploration.
5 Conclusions
We have proposed Variational Information Maximizing Exploration (VIME), a curiosity-driven exploration strategy for continuous control tasks. Variational inference is used to approximate the posterior distribution of a Bayesian neural network that represents the environment dynamics. Using information gain in this learned dynamics model as intrinsic rewards allows the agent to optimize for both external reward and intrinsic surprise simultaneously. Empirical results show that VIME performs significantly better than heuristic exploration methods across various continuous control tasks and algorithms. As future work, we would like to investigate measuring surprise in the value function and using the learned dynamics model for planning.
Acknowledgments
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, Berkeley Deep Drive (BDD), and ONR through a PECASE award. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. | 1. What is the focus of the paper in terms of reinforcement learning?
2. What are the strengths of the proposed approach, particularly in utilizing Bayesian neural networks?
3. What are the weaknesses of the paper regarding its exploration strategy?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any concerns or questions regarding the application of the proposed method in different domains? | Review | Review
The paper describes a curiosity-driven RL technique for continuous state and action spaces. It selects actions to maximize the information gain. Since Bayesian learning is intractable most of the time, a variational Bayes approximation is described. The approach is applied to domains where the transition dynamics are represented by Bayesian neural networks. The approach advances the state of the art in curiosity-driven RL by using Bayesian neural networks and showing how to maximize information gain in that context. This is good work.

I have one high-level comment regarding the reason for focusing on curiosity-driven RL. Why maximize information gain instead of expected rewards? Bayesian RL induces a distribution over rewards. If we maximize expected rewards, exploration naturally occurs in Bayesian RL and the exploration/exploitation tradeoff is optimally balanced. Curiosity-driven RL based on information gain makes sense when there are no rewards. However, if there are rewards, adding an information gain might yield unnecessary exploration. For instance, suppose that an action yields a reward of exactly 10, while a second action yields uncertain rewards of at most 9. Curiosity-driven RL would explore the second action in order to resolve the uncertainty even though this action will never be optimal. Bayesian RL that optimizes rewards only will explore the environment systematically, but will explore only what is necessary. |
Finally, we investigate how η, as used in Eq. (3), trades off exploration and exploitation behavior. On the one hand, higher η values should lead to a higher curiosity drive, causing more exploration. On the other hand, very low η values should reduce VIME to traditional Gaussian control noise. Figure 4 shows the performance on MountainCar for different η values. Setting η too high clearly results in prioritizing exploration over getting additional external reward. Too low an η value reduces the method to the baseline algorithm, as the intrinsic reward contribution to the total reward r′ becomes negligible. Most importantly, this figure highlights that there is a wide η range for which the task is best solved, across different algorithms.
4 Related Work
A body of theoretically oriented work demonstrates exploration strategies that are able to learn online in a previously unknown MDP and incur a polynomial amount of regret—as a result, these algorithms find a near-optimal policy in a polynomial amount of time. Some of these algorithms are based on the principle of optimism under uncertainty: E3 [3], R-Max [4], UCRL [30]. An alternative approach is Bayesian reinforcement learning methods, which maintain a distribution over possible MDPs [1, 17, 23, 31]. The optimism-based exploration strategies have been extended to continuous state spaces, for example, [6, 9], however these methods do not accommodate nonlinear function approximators.
Practical RL algorithms often rely on simple exploration heuristics, such as ε-greedy and Boltzmann exploration [32]. However, these heuristics exhibit random walk exploratory behavior, which can lead
to exponential regret even in case of small MDPs [9]. Our proposed method of utilizing information gain can be traced back to [22], and has been further explored in [17, 33, 34]. Other metrics for curiosity have also been proposed, including prediction error [10, 35], prediction error improvement [36], leverage [14], neuro-correlates [37], and predictive information [38]. These methods have not been applied directly to high-dimensional continuous control tasks without discretization. We refer the reader to [21, 39] for an extensive review on curiosity and intrinsic rewards.
Recently, there have been various exploration strategies proposed in the context of deep RL. [10] proposes to use the `2 prediction error as the intrinsic reward. [12] performs approximate visitation counting in a learned state embedding using Gaussian kernels. [11] proposes a form of Thompson sampling, training multiple value functions using bootstrapping. Although these approaches can scale up to high-dimensional state spaces, they generally assume discrete action spaces. [40] make use of mutual information for gait stabilization in continuous control, but rely on state discretization. Finally, [41] proposes a variational method for information maximization in the context of optimizing empowerment, which, as noted by [42], does not explicitly favor exploration.
5 Conclusions
We have proposed Variational Information Maximizing Exploration (VIME), a curiosity-driven exploration strategy for continuous control tasks. Variational inference is used to approximate the posterior distribution of a Bayesian neural network that represents the environment dynamics. Using information gain in this learned dynamics model as intrinsic rewards allows the agent to optimize for both external reward and intrinsic surprise simultaneously. Empirical results show that VIME performs significantly better than heuristic exploration methods across various continuous control tasks and algorithms. As future work, we would like to investigate measuring surprise in the value function and using the learned dynamics model for planning.
Acknowledgments
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, Berkeley Deep Drive (BDD), and ONR through a PECASE award. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. | 1. What is the main contribution of the paper, and how does it build upon previous work in the field?
2. How does the proposed method differ from other exploration strategies in reinforcement learning, and what are its advantages?
3. What are the limitations of the experimental comparison presented in the paper, and how could they be improved?
4. Are there any minor errors or unclear points in the paper's equations and explanations? | Review | Review
This paper proposes Variational Information Maximizing Exploration for RL, which is heavily based on Schmidhuber's work on curiosity-driven learning and his more recent work on utilizing Kolmogorov Complexity. Experiments: The experiments are nice, but unfortunately not a fair comparison. It would be more useful to compare how your information-theoretic reward compares to a simple maximization of the entropy, which can also function as an exploration term, as in e.g. "Information-Theoretic Neuro-Correlates Boost Evolution of Cognitive Systems" by Jory Schossau, Christoph Adami and Arend Hintze, and "Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis" by Keyan Zahedi, Georg Martius and Nihat Ay. Minor remarks: Equation 2: This equation results from Eq. (1), in which H(\theta|\xi_t, a_t) is compared with H(\theta|S_t+1, \xi_t, a_t). It seems that a_t is missing in p(\theta|\xi_t); if not, please mention why it can be omitted. Equation 4: Bayes' rule is formulated as p(a|b) = p(b|a) p(a) / p(b) (which I am sure the authors know). Unfortunately, I cannot see how Eq. 4 fits into this formulation. It seems that there are terms omitted. I would see it if, e.g., it were p(\theta|\xi_t,s_t+1) p(s_t+1|\xi_t, a_t;\theta). |
NIPS | Title
VIME: Variational Information Maximizing Exploration
Abstract
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as ε-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent’s belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
1 Introduction
Reinforcement learning (RL) studies how an agent can maximize its cumulative reward in a previously unknown environment, which it learns about through experience. A long-standing problem is how to manage the trade-off between exploration and exploitation. In exploration, the agent experiments with novel strategies that may improve returns in the long run; in exploitation, it maximizes rewards through behavior that is known to be successful. An effective exploration strategy allows the agent to generate trajectories that are maximally informative about the environment. For small tasks, this trade-off can be handled effectively through Bayesian RL [1] and PAC-MDP methods [2–6], which offer formal guarantees. However, these guarantees assume discrete state and action spaces. Hence, in settings where state-action discretization is infeasible, many RL algorithms use heuristic exploration strategies. Examples include acting randomly using ε-greedy or Boltzmann exploration [7], and utilizing Gaussian noise on the controls in policy gradient methods [8]. These heuristics often rely on random walk behavior which can be highly inefficient; for example, Boltzmann exploration requires a training time exponential in the number of states in order to solve the well-known n-chain MDP [9]. In between formal methods and simple heuristics, several works have proposed to address the exploration problem using less formal, but more expressive methods [10–14]. However, none of them fully address exploration in continuous control, as discretization of the state-action space scales exponentially in its dimensionality. For example, the Walker2D task [15] has a 26-dim state-action space. If we assume a coarse discretization into 10 bins for each dimension, a table of state-action visitation counts would require 10^26 entries.
This paper proposes a curiosity-driven exploration strategy, making use of information gain about the agent’s internal belief of the dynamics model as a driving force. This principle can be traced back to the concepts of curiosity and surprise [16–18]. Within this framework, agents are encouraged to take actions that result in states they deem surprising—i.e., states that cause large updates to the dynamics model distribution. We propose a practical implementation of measuring information gain using variational inference. Herein, the agent’s current understanding of the environment dynamics is represented by a Bayesian neural network (BNN) [19, 20]. We also show how this can be interpreted as measuring compression improvement, a proposed model of curiosity [21]. In contrast to previous curiosity-based approaches [10, 22], our model scales naturally to continuous state and action spaces. The presented approach is evaluated on a range of continuous control tasks, and multiple underlying RL algorithms. Experimental results show that VIME achieves significantly better performance than naïve exploration strategies.
2 Methodology
In Section 2.1, we establish notation for the subsequent equations. Next, in Section 2.2, we explain the theoretical foundation of curiosity-driven exploration. In Section 2.3 we describe how to adapt this idea to continuous control, and we show how to build on recent advances in variational inference for Bayesian neural networks (BNNs) to make this formulation practical. Thereafter, we make explicit the intuitive link between compression improvement and the variational lower bound in Section 2.4. Finally, Section 2.5 describes how our method is practically implemented.
2.1 Preliminaries
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ0, γ, T), in which S ⊆ R^n is a state set, A ⊆ R^m an action set, P : S × A × S → R≥0 a transition probability distribution, r : S × A → R a bounded reward function, ρ0 : S → R≥0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. States and actions viewed as random variables are abbreviated as S and A. The presented models are based on the optimization of a stochastic policy πα : S × A → R≥0, parametrized by α. Let µ(πα) denote its expected discounted return: $\mu(\pi_\alpha) = \mathbb{E}_\tau\big[\sum_{t=0}^{T}\gamma^t r(s_t, a_t)\big]$, where τ = (s0, a0, . . .) denotes the whole trajectory, s0 ∼ ρ0(s0), at ∼ πα(at|st), and st+1 ∼ P(st+1|st, at).
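As a small illustration of the objective, the expected discounted return can be estimated from sampled trajectories as follows; representing a trajectory by its list of rewards is an assumption of the example.

```python
def discounted_return(rewards, gamma):
    """sum_{t=0}^{T} gamma^t * r(s_t, a_t) for one trajectory."""
    ret, discount = 0.0, 1.0
    for r in rewards:
        ret += discount * r
        discount *= gamma
    return ret

def expected_return(reward_trajectories, gamma):
    """Monte-Carlo estimate of mu(pi_alpha) over sampled trajectories."""
    return sum(discounted_return(rs, gamma) for rs in reward_trajectories) / len(reward_trajectories)
```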
2.2 Curiosity
Our method builds on the theory of curiosity-driven exploration [16, 17, 21, 22], in which the agent engages in systematic exploration by seeking out state-action regions that are relatively unexplored. The agent models the environment dynamics via a model p(st+1|st, at; θ), parametrized by the random variable Θ with values θ ∈ Θ. Assuming a prior p(θ), it maintains a distribution over dynamic models through a distribution over θ, which is updated in a Bayesian manner (as opposed to a point estimate). The history of the agent up until time step t is denoted as ξt = {s1, a1, . . . , st}. According to curiosity-driven exploration [17], the agent should take actions that maximize the reduction in uncertainty about the dynamics. This can be formalized as maximizing the sum of reductions in entropy $\sum_t \big(H(\Theta \mid \xi_t, a_t) - H(\Theta \mid S_{t+1}, \xi_t, a_t)\big)$ (1) through a sequence of actions {at}. According to information theory, the individual terms equal the mutual information between the next state distribution St+1 and the model parameter Θ, namely I(St+1; Θ | ξt, at). Therefore, the agent is encouraged to take actions that lead to states that are maximally informative about the dynamics model. Furthermore, we note that
$I(S_{t+1}; \Theta \mid \xi_t, a_t) = \mathbb{E}_{s_{t+1}\sim \mathcal{P}(\cdot \mid \xi_t, a_t)}\big[D_{\mathrm{KL}}[p(\theta \mid \xi_t, a_t, s_{t+1})\,\|\,p(\theta \mid \xi_t)]\big]$, (2)
the KL divergence from the agent’s new belief over the dynamics model to the old one, taking expectation over all possible next states according to the true dynamics P . This KL divergence can be interpreted as information gain.
If calculating the posterior dynamics distribution is tractable, it is possible to optimize Eq. (2) directly by maintaining a belief over the dynamics model [17]. However, this is not generally the case. Therefore, a common practice [10, 23] is to use RL to approximate planning for maximal mutual information along a trajectory ∑ t I (St+1; Θ|ξt, at) by adding each term I (St+1; Θ|ξt, at) as an intrinsic reward, which captures the agent’s surprise in the form of a reward function. This is practically realized by taking actions at ∼ πα(st) and sampling st+1 ∼ P(·|st, at) in order to add DKL[p(θ|ξt, at, st+1)‖p(θ|ξt)] to the external reward. The trade-off between exploitation and exploration can now be realized explicitly as follows:
r′(st, at, st+1) = r(st, at) + ηDKL[p(θ|ξt, at, st+1)‖p(θ|ξt)], (3)
with η ∈ R+ a hyperparameter controlling the urge to explore. In conclusion, the biggest practical issue with maximizing information gain for exploration is that the computation of Eq. (3) requires calculating the posterior p(θ|ξt, at, st+1), which is generally intractable.
2.3 Variational Bayes
We propose a tractable solution to maximize the information gain objective presented in the previous section. In a purely Bayesian setting, we can derive the posterior distribution given a new state-action pair through Bayes’ rule as
$p(\theta \mid \xi_t, a_t, s_{t+1}) = \frac{p(\theta \mid \xi_t)\, p(s_{t+1} \mid \xi_t, a_t; \theta)}{p(s_{t+1} \mid \xi_t, a_t)}$, (4)
with p(θ|ξt, at) = p(θ|ξt) as actions do not influence beliefs about the environment [17]. Herein, the denominator is computed through the integral
p(st+1|ξt, at) = ∫ Θ p(st+1|ξt, at; θ)p(θ|ξt)dθ. (5)
In general, this integral tends to be intractable when using highly expressive parametrized models (e.g., neural networks), which are often needed to accurately capture the environment model in high-dimensional continuous control.
We propose a practical solution through variational inference [24]. Herein, we embrace the fact that calculating the posterior p(θ|D) for a data set D is intractable. Instead we approximate it through an alternative distribution q(θ;φ), parameterized by φ, by minimizing DKL[q(θ;φ)‖p(θ|D)]. This is done through maximization of the variational lower bound L[q(θ;φ),D]:
L[q(θ;φ),D] = Eθ∼q(·;φ) [log p(D|θ)]−DKL[q(θ;φ)‖p(θ)]. (6)
Rather than computing information gain in Eq. (3) explicitly, we compute an approximation to it, leading to the following total reward:
r′(st, at, st+1) = r(st, at) + ηDKL[q(θ;φt+1)‖q(θ;φt)], (7)
with φt+1 the updated and φt the old parameters representing the agent’s belief. Natural candidates for parametrizing the agent’s dynamics model are Bayesian neural networks (BNNs) [19], as they maintain a distribution over their weights. This allows us to view the BNN as an infinite neural network ensemble by integrating out its parameters:
p(y|x) = ∫ Θ p(y|x; θ)q(θ;φ)dθ. (8)
In particular, we utilize a BNN parametrized by a fully factorized Gaussian distribution [20]. Practical BNN implementation details are deferred to Section 2.5, while we give some intuition into the behavior of BNNs in the appendix.
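To make Eq. (8) concrete, the predictive distribution can be approximated by Monte-Carlo sampling of weights from q(θ;φ); in the sketch below, `sample_weights()` drawing θ ∼ q(·;φ) and `likelihood(x, y, theta)` returning p(y|x; θ) are assumed to be provided by the BNN implementation.

```python
import numpy as np

def predictive_likelihood(x, y, sample_weights, likelihood, n_samples=10):
    """Monte-Carlo estimate of p(y | x) = int p(y | x; theta) q(theta; phi) dtheta, Eq. (8)."""
    draws = [likelihood(x, y, sample_weights()) for _ in range(n_samples)]
    return float(np.mean(draws))
```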
2.4 Compression
It is possible to derive an interesting relationship between compression improvement—an intrinsic reward objective defined in [25], and the information gain of Eq. (2). In [25], the agent’s curiosity is
equated with compression improvement, measured through C(ξt;φt−1)− C(ξt;φt), where C(ξ;φ) is the description length of ξ using φ as a model. Furthermore, it is known that the negative variational lower bound can be viewed as the description length [19]. Hence, we can write compression improvement as L[q(θ;φt), ξt] − L[q(θ;φt−1), ξt]. In addition, an alternative formulation of the variational lower bound in Eq. (6) is given by
$\log p(\mathcal{D}) = \overbrace{\int_\Theta q(\theta;\phi)\log\frac{p(\theta,\mathcal{D})}{q(\theta;\phi)}\,d\theta}^{L[q(\theta;\phi),\,\mathcal{D}]} + D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,p(\theta \mid \mathcal{D})]$. (9)
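The identity in Eq. (9) follows in one line from $p(\theta,\mathcal{D}) = p(\theta \mid \mathcal{D})\,p(\mathcal{D})$; a short derivation in the notation above:

$\int_\Theta q(\theta;\phi)\log\frac{p(\theta,\mathcal{D})}{q(\theta;\phi)}\,d\theta + D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,p(\theta \mid \mathcal{D})]$
$= \int_\Theta q(\theta;\phi)\left(\log\frac{p(\theta \mid \mathcal{D})\,p(\mathcal{D})}{q(\theta;\phi)} + \log\frac{q(\theta;\phi)}{p(\theta \mid \mathcal{D})}\right)d\theta$
$= \int_\Theta q(\theta;\phi)\,\log p(\mathcal{D})\,d\theta = \log p(\mathcal{D}).$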
Thus, compression improvement can now be written as
(log p(ξt)−DKL[q(θ;φt)‖p(θ|ξt)])− (log p(ξt)−DKL[q(θ;φt−1)‖p(θ|ξt)]) . (10) If we assume that φt perfectly optimizes the variational lower bound for the history ξt, then DKL[q(θ;φt)‖p(θ|ξt)] = 0, which occurs when the approximation equals the true posterior, i.e., q(θ;φt) = p(θ|ξt). Hence, compression improvement becomes DKL[p(θ|ξt−1)‖p(θ|ξt)]. Therefore, optimizing for compression improvement comes down to optimizing the KL divergence from the posterior given the past history ξt−1 to the posterior given the total history ξt. As such, we arrive at an alternative way to encode curiosity than information gain, namely DKL[p(θ|ξt)‖p(θ|ξt, at, st+1)], its reversed KL divergence. In experiments, we noticed no significant difference between the two KL divergence variants. This can be explained as both variants are locally equal when introducing small changes to the parameter distributions. Investigation of how to combine both information gain and compression improvement is deferred to future work.
2.5 Implementation
The complete method is summarized in Algorithm 1. We first set forth implementation and parametrization details of the dynamics BNN. The BNN weight distribution q(θ;φ) is given by the fully factorized Gaussian distribution [20]:
$q(\theta;\phi) = \prod_{i=1}^{|\Theta|} \mathcal{N}(\theta_i \mid \mu_i; \sigma_i^2)$. (11)
Hence, φ = {µ, σ}, with µ the Gaussian’s mean vector and σ the covariance matrix diagonal. This is particularly convenient as it allows for a simple analytical formulation of the KL divergence. This is described later in this section. Because of the restriction σ > 0, the standard deviation of the Gaussian BNN parameter is parametrized as σ = log(1 + eρ), with ρ ∈ R [20].
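A minimal sketch of this parametrization: the free parameters φ are the means µ and pre-standard-deviations ρ, and a weight draw θ ∼ q(·;φ) uses the reparametrization θ = µ + σ ⊙ ε with ε ∼ N(0, I). The flat array layout is an assumption of the example.

```python
import numpy as np

def sigma_from_rho(rho):
    """sigma = log(1 + e^rho) keeps every standard deviation positive."""
    return np.log1p(np.exp(rho))

def sample_weights(mu, rho, rng=np.random):
    """Draw theta ~ q(.; phi) = prod_i N(theta_i | mu_i, sigma_i^2) via reparametrization."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma_from_rho(rho) * eps
```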
Now the training of the dynamics BNN through optimization of the variational lower bound is described. The expected log-likelihood term in Eq. (6) is approximated through sampling, $\mathbb{E}_{\theta\sim q(\cdot;\phi)}[\log p(\mathcal{D} \mid \theta)] \approx \frac{1}{N}\sum_{i=1}^{N}\log p(\mathcal{D} \mid \theta_i)$, with N samples drawn according to θ ∼ q(·;φ) [20]. Optimizing the variational lower bound in Eq. (6) in combination with the reparametrization trick is called stochastic gradient variational Bayes (SGVB) [26] or Bayes by Backprop [20]. Furthermore, we make use of the local reparametrization trick proposed in [26], in which sampling at the weights is replaced by sampling the neuron pre-activations, which is more computationally efficient and reduces gradient variance. The optimization of the variational lower bound is done at regular intervals during the RL training process, by sampling D from a FIFO replay pool that stores recent samples (st, at, st+1). This is to break up the strong intratrajectory sample correlation, which destabilizes learning, in favor of obtaining i.i.d. data [7]. Moreover, it diminishes the effect of compounding posterior approximation errors.
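A schematic training step for the dynamics BNN then looks as follows, reusing the `sample_weights`, `sigma_from_rho`, and `kl_factorized_gaussians` helpers sketched earlier; `log_likelihood(theta, batch)` (the dynamics log-likelihood Σ log p(st+1|st, at; θ) on a replay-pool minibatch) is a placeholder, and in practice the gradient of this estimate is taken with SGVB / Bayes by Backprop and the local reparametrization trick inside an autodiff framework.

```python
import numpy as np

def elbo_estimate(mu, rho, prior_mu, prior_rho, batch, log_likelihood, n_samples=5):
    """Monte-Carlo estimate of Eq. (6):
    E_{theta ~ q(.; phi)}[log p(D | theta)] - D_KL[q(theta; phi) || p(theta)]."""
    ll = np.mean([log_likelihood(sample_weights(mu, rho), batch)
                  for _ in range(n_samples)])
    kl = kl_factorized_gaussians(mu, sigma_from_rho(rho),
                                 prior_mu, sigma_from_rho(prior_rho))
    return ll - kl

# One training iteration: sample a minibatch from the FIFO replay pool and
# ascend the (reparametrized) gradient of elbo_estimate with respect to (mu, rho).
```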
The posterior distribution of the dynamics parameter, which is needed to compute the KL divergence in the total reward function r′ of Eq. (7), can be computed through the following minimization
$\phi' = \arg\min_{\phi}\Big[\overbrace{\underbrace{D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,q(\theta;\phi_{t-1})]}_{\ell_{\mathrm{KL}}(q(\theta;\phi))} - \mathbb{E}_{\theta\sim q(\cdot;\phi)}\big[\log p(s_t \mid \xi_t, a_t; \theta)\big]}^{\ell(q(\theta;\phi),\, s_t)}\Big]$, (12)
where we replace the expectation over θ with samples θ ∼ q(·;φ). Because we only update the model periodically based on samples drawn from the replay pool, this optimization can be performed in parallel for each st, keeping φt−1 fixed. Once φ′ has been obtained, we can use it to compute the intrinsic reward.
Algorithm 1: Variational Information Maximizing Exploration (VIME)
for each epoch n do
    for each timestep t in each trajectory generated during n do
        Generate action at ∼ πα(st) and sample state st+1 ∼ P(·|ξt, at); get r(st, at).
        Add triplet (st, at, st+1) to FIFO replay pool R.
        Compute DKL[q(θ;φ′n+1)‖q(θ;φn+1)] by the approximation ∇⊤H−1∇, following Eq. (16) for diagonal BNNs, or by optimizing Eq. (12) to obtain φ′n+1 for general BNNs.
        Divide DKL[q(θ;φ′n+1)‖q(θ;φn+1)] by the median of previous KL divergences.
        Construct r′(st, at, st+1) ← r(st, at) + η DKL[q(θ;φ′n+1)‖q(θ;φn+1)], following Eq. (7).
    Minimize DKL[q(θ;φn)‖p(θ)] − Eθ∼q(·;φn)[log p(D|θ)] following Eq. (6), with D sampled randomly from R, leading to the updated posterior q(θ;φn+1).
    Use rewards {r′(st, at, st+1)} to update policy πα using any standard RL method.
To optimize Eq. (12) efficiently, we only take a single second-order step. This way, the gradient is rescaled according to the curvature of the KL divergence at the origin. As such, we compute DKL[q(θ;φ+ λ∆φ)‖q(θ;φ)], with the update step ∆φ defined as
$\Delta\phi = H^{-1}(\ell)\,\nabla_\phi\,\ell(q(\theta;\phi), s_t)$, (13)
in which H(`) is the Hessian of `(q(θ;φ), st). Since we assume that the variational approximation is a fully factorized Gaussian, the KL divergence from posterior to prior has a particularly simple form:
$D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,q(\theta;\phi')] = \frac{1}{2}\sum_{i=1}^{|\Theta|}\left(\left(\frac{\sigma_i}{\sigma'_i}\right)^2 + 2\log\sigma'_i - 2\log\sigma_i + \frac{(\mu'_i-\mu_i)^2}{\sigma'^2_i}\right) - \frac{|\Theta|}{2}$. (14)
Because this KL divergence is approximately quadratic in its parameters, and the log-likelihood term can be seen as locally linear compared to this highly curved KL term, we approximate H by only calculating it for the KL term ℓKL(q(θ;φ)). This can be computed very efficiently in the case of a fully factorized Gaussian distribution, as this approximation becomes a diagonal matrix. Looking at Eq. (14), we can calculate the following Hessian at the origin. The µ and ρ entries are defined as
$\frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \mu_i^2} = \frac{1}{\log^2(1 + e^{\rho_i})}$ and $\frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \rho_i^2} = \frac{2e^{2\rho_i}}{(1 + e^{\rho_i})^2}\,\frac{1}{\log^2(1 + e^{\rho_i})}$, (15)
while all other entries are zero. Furthermore, it is also possible to approximate the KL divergence through a second-order Taylor expansion as $\frac{1}{2}\Delta\phi^\top H\,\Delta\phi = \frac{1}{2}\left(H^{-1}\nabla\right)^\top H \left(H^{-1}\nabla\right)$, since both the value and gradient of the KL divergence are zero at the origin. This gives us
$D_{\mathrm{KL}}[q(\theta;\phi + \lambda\Delta\phi)\,\|\,q(\theta;\phi)] \approx \frac{1}{2}\lambda^2\,\nabla_\phi\ell^\top H^{-1}(\ell_{\mathrm{KL}})\,\nabla_\phi\ell$. (16)
Note that H−1(ℓKL) is diagonal, so this expression can be computed efficiently. Instead of using the KL divergence DKL[q(θ;φt+1)‖q(θ;φt)] directly as an intrinsic reward in Eq. (7), we normalize it by dividing by the average of the median KL divergences taken over a fixed number of previous trajectories. Rather than focusing on its absolute value, we emphasize the relative difference in KL divergence between samples. This accomplishes the same effect, since the variance of the KL divergence converges to zero once the model is fully learned.
3 Experiments
In this section, we investigate (i) whether VIME can succeed in domains that have extremely sparse rewards, (ii) whether VIME improves learning when the reward is shaped to guide the agent towards its goal, and (iii) how η, as used in Eq. (3), trades off exploration and exploitation behavior. All experiments make use of the rllab [15] benchmark code base and the complementary continuous control tasks suite. The following tasks are part of the experimental setup: CartPole (S ⊆ R^4, A ⊆ R^1), CartPoleSwingup (S ⊆ R^4, A ⊆ R^1), DoublePendulum (S ⊆ R^6, A ⊆ R^1), MountainCar (S ⊆ R^3, A ⊆ R^1), the locomotion tasks HalfCheetah (S ⊆ R^20, A ⊆ R^6) and Walker2D (S ⊆ R^20, A ⊆ R^6), and the hierarchical task SwimmerGather (S ⊆ R^33, A ⊆ R^2).
Performance is measured through the average return (not including the intrinsic rewards) over the trajectories generated (y-axis) at each iteration (x-axis). More specifically, the darker-colored lines in each plot represent the median performance over a fixed set of 10 random seeds while the shaded areas show the interquartile range at each iteration. Moreover, the number in each legend shows this performance measure, averaged over all iterations. The exact setup is described in the Appendix.
Domains with sparse rewards are difficult to solve through naïve exploration behavior because, before the agent obtains any reward, it lacks a feedback signal on how to improve its policy. This allows us to test whether an exploration strategy is truly capable of systematic exploration, rather than improving existing RL algorithms by adding more hyperparameters. Therefore, VIME is compared with heuristic exploration strategies on the following tasks with sparse rewards. A reward of +1 is given when the car escapes the valley on the right side in MountainCar; when the pole is pointed upwards in CartPoleSwingup; and when the cheetah moves forward over five units in HalfCheetah. We compare VIME with the following baselines: only using Gaussian control noise [15] and using the `2 BNN prediction error as an intrinsic reward, a continuous extension of [10]. TRPO [8] is used as the RL algorithm, as it performs very well compared to other methods [15]. Figure 1 shows the performance results. We notice that using a naïve exploration performs very poorly, as it is almost never able to reach the goal in any of the tasks. Similarly, using `2 errors does not perform well. In contrast, VIME performs much better, achieving the goal in most cases. This experiment demonstrates that curiosity drives the agent to explore, even in the absence of any initial reward, where naïve exploration completely breaks down.
To further strengthen this point, we have evaluated VIME on the highly difficult hierarchical task SwimmerGather in Figure 5 whose reward signal is naturally sparse. In this task, a two-link robot needs to reach “apples” while avoiding “bombs” that are perceived through a laser scanner. However, before it can make any forward progress, it has to learn complex locomotion primitives in the absence of any reward. None of the RL methods tested previously in [15] were able to make progress with naïve exploration. Remarkably, VIME leads the agent to acquire coherent motion primitives without any reward guidance, achieving promising results on this challenging task.
Next, we investigate whether VIME is widely applicable by (i) testing it on environments where the reward is well shaped, and (ii) pairing it with different RL methods. In addition to TRPO, we choose to equip REINFORCE [27] and ERWR [28] with VIME because these two algorithms usually suffer from premature convergence to suboptimal policies [15, 29], which can potentially be alleviated by better exploration. Their performance is shown in Figure 2 on several well-established continuous control tasks. Furthermore, Figure 3 shows the same comparison for the Walker2D locomotion task. In the majority of cases, VIME leads to a significant performance gain over heuristic exploration. Our exploration method allows the RL algorithms to converge faster, and notably helps REINFORCE and ERWR avoid converging to a locally optimal solution on DoublePendulum and MountainCar. We note that in environments such as CartPole, a better exploration strategy is redundant as following the policy gradient direction leads to the globally optimal solution. Additionally, we tested adding Gaussian noise to the rewards as a baseline, which did not improve performance.
To give an intuitive understanding of VIME’s exploration behavior, the distribution of visited states for both naïve exploration and VIME after convergence is investigated. Figure 1d shows that using Gaussian control noise exhibits random walk behavior: the state visitation plot is more condensed and ball-shaped around the center. In comparison, VIME leads to a more diffused visitation pattern, exploring the states more efficiently, and hence reaching the goal more quickly.
Finally, we investigate how η, as used in Eq. (3), trades off exploration and exploitation behavior. On the one hand, higher η values should lead to a higher curiosity drive, causing more exploration. On the other hand, very low η values should reduce VIME to traditional Gaussian control noise. Figure 4 shows the performance on MountainCar for different η values. Setting η too high clearly results in prioritizing exploration over getting additional external reward. Too low an η value reduces the method to the baseline algorithm, as the intrinsic reward contribution to the total reward r′ becomes negligible. Most importantly, this figure highlights that there is a wide η range for which the task is best solved, across different algorithms.
4 Related Work
A body of theoretically oriented work demonstrates exploration strategies that are able to learn online in a previously unknown MDP and incur a polynomial amount of regret—as a result, these algorithms find a near-optimal policy in a polynomial amount of time. Some of these algorithms are based on the principle of optimism under uncertainty: E3 [3], R-Max [4], UCRL [30]. An alternative approach is Bayesian reinforcement learning methods, which maintain a distribution over possible MDPs [1, 17, 23, 31]. The optimism-based exploration strategies have been extended to continuous state spaces, for example, [6, 9], however these methods do not accommodate nonlinear function approximators.
Practical RL algorithms often rely on simple exploration heuristics, such as ε-greedy and Boltzmann exploration [32]. However, these heuristics exhibit random walk exploratory behavior, which can lead
to exponential regret even in case of small MDPs [9]. Our proposed method of utilizing information gain can be traced back to [22], and has been further explored in [17, 33, 34]. Other metrics for curiosity have also been proposed, including prediction error [10, 35], prediction error improvement [36], leverage [14], neuro-correlates [37], and predictive information [38]. These methods have not been applied directly to high-dimensional continuous control tasks without discretization. We refer the reader to [21, 39] for an extensive review on curiosity and intrinsic rewards.
Recently, there have been various exploration strategies proposed in the context of deep RL. [10] proposes to use the `2 prediction error as the intrinsic reward. [12] performs approximate visitation counting in a learned state embedding using Gaussian kernels. [11] proposes a form of Thompson sampling, training multiple value functions using bootstrapping. Although these approaches can scale up to high-dimensional state spaces, they generally assume discrete action spaces. [40] make use of mutual information for gait stabilization in continuous control, but rely on state discretization. Finally, [41] proposes a variational method for information maximization in the context of optimizing empowerment, which, as noted by [42], does not explicitly favor exploration.
5 Conclusions
We have proposed Variational Information Maximizing Exploration (VIME), a curiosity-driven exploration strategy for continuous control tasks. Variational inference is used to approximate the posterior distribution of a Bayesian neural network that represents the environment dynamics. Using information gain in this learned dynamics model as intrinsic rewards allows the agent to optimize for both external reward and intrinsic surprise simultaneously. Empirical results show that VIME performs significantly better than heuristic exploration methods across various continuous control tasks and algorithms. As future work, we would like to investigate measuring surprise in the value function and using the learned dynamics model for planning.
Acknowledgments
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, Berkeley Deep Drive (BDD), and ONR through a PECASE award. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. | 1. How does the proposed method compute curiosity-based intrinsic rewards for reinforcement learning?
2. What is the main contribution of the paper regarding exploration strategies?
3. What are the concerns regarding the use of model-free reinforcement learning methods despite estimating the dynamics model?
4. Why did the proposed method not improve the learning performance in the cart pole swing-up task?
5. Can using the original KL divergence as an intrinsic reward lead to instability in the learning process?
6. How many previous trajectories are required for computing the KL divergence? | Review | Review
This paper proposes a new method to compute a curiosity-based intrinsic reward for reinforcement learning to promote exploration. The learning agent maintains the dynamics model explicitly, and it is used to compute the KL divergence between the approximated distribution of the dynamics with the current parameters and that with the old parameters. The computed KL divergence is used as an intrinsic reward, and the augmented reward values can be used with model-free policy search methods such as TRPO, REINFORCE, and ERWR. The authors also show the relation between the proposed idea and compression improvement, which was developed by Schmidhuber and his colleagues. The proposed method is evaluated on several control tasks, some of which have high-dimensional state-action spaces. The agent's dynamics model is estimated explicitly by Bayesian neural networks (BNNs), and it is used to compute the intrinsic reward. My major concern is that the estimated model is not used to find an optimal policy. In other words, model-free reinforcement learning methods such as TRPO, REINFORCE, and ERWR are used. Since the model is estimated, model-based reinforcement learning seems more appropriate for this setting. Please discuss this point in more detail. The second concern is the performance on the cart pole swing-up task. Figure 2 shows that REINFORCE and ERWR obtained sub-optimal policies as compared with the result of TRPO. In this task, there was no significant difference between the reinforcement learning agent with and without VIME exploration. Please discuss in more detail why the proposed method did not improve the learning performance in the cart pole swing-up task. Is it related to the property of the dynamics itself? Lines 170-174 claim that the KL divergence is divided by the median of previous KL divergences. What happens if you directly use the original KL divergence as the intrinsic reward? Does it make the learning process unstable? In addition, how many previous trajectories are needed? |
NIPS | Title
VIME: Variational Information Maximizing Exploration
Abstract
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as ε-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent’s belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
1 Introduction
Reinforcement learning (RL) studies how an agent can maximize its cumulative reward in a previously unknown environment, which it learns about through experience. A long-standing problem is how to manage the trade-off between exploration and exploitation. In exploration, the agent experiments with novel strategies that may improve returns in the long run; in exploitation, it maximizes rewards through behavior that is known to be successful. An effective exploration strategy allows the agent to generate trajectories that are maximally informative about the environment. For small tasks, this trade-off can be handled effectively through Bayesian RL [1] and PAC-MDP methods [2–6], which offer formal guarantees. However, these guarantees assume discrete state and action spaces. Hence, in settings where state-action discretization is infeasible, many RL algorithms use heuristic exploration strategies. Examples include acting randomly using ε-greedy or Boltzmann exploration [7], and utilizing Gaussian noise on the controls in policy gradient methods [8]. These heuristics often rely on random walk behavior which can be highly inefficient; for example, Boltzmann exploration requires a training time exponential in the number of states in order to solve the well-known n-chain MDP [9]. In between formal methods and simple heuristics, several works have proposed to address the exploration problem using less formal, but more expressive methods [10–14]. However, none of them fully address exploration in continuous control, as discretization of the state-action space scales exponentially in its dimensionality. For example, the Walker2D task [15] has a 26-dim state-action space. If we assume a coarse discretization into 10 bins for each dimension, a table of state-action visitation counts would require 10^26 entries.
This paper proposes a curiosity-driven exploration strategy, making use of information gain about the agent’s internal belief of the dynamics model as a driving force. This principle can be traced back to the concepts of curiosity and surprise [16–18]. Within this framework, agents are encouraged to take actions that result in states they deem surprising—i.e., states that cause large updates to the dynamics model distribution. We propose a practical implementation of measuring information gain using variational inference. Herein, the agent’s current understanding of the environment dynamics is represented by a Bayesian neural network (BNN) [19, 20]. We also show how this can be interpreted as measuring compression improvement, a proposed model of curiosity [21]. In contrast to previous curiosity-based approaches [10, 22], our model scales naturally to continuous state and action spaces. The presented approach is evaluated on a range of continuous control tasks, and multiple underlying RL algorithms. Experimental results show that VIME achieves significantly better performance than naïve exploration strategies.
2 Methodology
In Section 2.1, we establish notation for the subsequent equations. Next, in Section 2.2, we explain the theoretical foundation of curiosity-driven exploration. In Section 2.3 we describe how to adapt this idea to continuous control, and we show how to build on recent advances in variational inference for Bayesian neural networks (BNNs) to make this formulation practical. Thereafter, we make explicit the intuitive link between compression improvement and the variational lower bound in Section 2.4. Finally, Section 2.5 describes how our method is practically implemented.
2.1 Preliminaries
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ0, γ, T), in which S ⊆ R^n is a state set, A ⊆ R^m an action set, P : S × A × S → R≥0 a transition probability distribution, r : S × A → R a bounded reward function, ρ0 : S → R≥0 an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. States and actions viewed as random variables are abbreviated as S and A. The presented models are based on the optimization of a stochastic policy πα : S × A → R≥0, parametrized by α. Let µ(πα) denote its expected discounted return: $\mu(\pi_\alpha) = \mathbb{E}_\tau\big[\sum_{t=0}^{T}\gamma^t r(s_t, a_t)\big]$, where τ = (s0, a0, . . .) denotes the whole trajectory, s0 ∼ ρ0(s0), at ∼ πα(at|st), and st+1 ∼ P(st+1|st, at).
2.2 Curiosity
Our method builds on the theory of curiosity-driven exploration [16, 17, 21, 22], in which the agent engages in systematic exploration by seeking out state-action regions that are relatively unexplored. The agent models the environment dynamics via a model p(st+1|st, at; θ), parametrized by the random variable Θ with values θ ∈ Θ. Assuming a prior p(θ), it maintains a distribution over dynamic models through a distribution over θ, which is updated in a Bayesian manner (as opposed to a point estimate). The history of the agent up until time step t is denoted as ξt = {s1, a1, . . . , st}. According to curiosity-driven exploration [17], the agent should take actions that maximize the reduction in uncertainty about the dynamics. This can be formalized as maximizing the sum of reductions in entropy $\sum_t \big(H(\Theta \mid \xi_t, a_t) - H(\Theta \mid S_{t+1}, \xi_t, a_t)\big)$ (1) through a sequence of actions {at}. According to information theory, the individual terms equal the mutual information between the next state distribution St+1 and the model parameter Θ, namely I(St+1; Θ | ξt, at). Therefore, the agent is encouraged to take actions that lead to states that are maximally informative about the dynamics model. Furthermore, we note that
$I(S_{t+1}; \Theta \mid \xi_t, a_t) = \mathbb{E}_{s_{t+1}\sim \mathcal{P}(\cdot \mid \xi_t, a_t)}\big[D_{\mathrm{KL}}[p(\theta \mid \xi_t, a_t, s_{t+1})\,\|\,p(\theta \mid \xi_t)]\big]$, (2)
the KL divergence from the agent’s new belief over the dynamics model to the old one, taking expectation over all possible next states according to the true dynamics P . This KL divergence can be interpreted as information gain.
If calculating the posterior dynamics distribution is tractable, it is possible to optimize Eq. (2) directly by maintaining a belief over the dynamics model [17]. However, this is not generally the case. Therefore, a common practice [10, 23] is to use RL to approximate planning for maximal mutual information along a trajectory ∑ t I (St+1; Θ|ξt, at) by adding each term I (St+1; Θ|ξt, at) as an intrinsic reward, which captures the agent’s surprise in the form of a reward function. This is practically realized by taking actions at ∼ πα(st) and sampling st+1 ∼ P(·|st, at) in order to add DKL[p(θ|ξt, at, st+1)‖p(θ|ξt)] to the external reward. The trade-off between exploitation and exploration can now be realized explicitly as follows:
r′(st, at, st+1) = r(st, at) + ηDKL[p(θ|ξt, at, st+1)‖p(θ|ξt)], (3)
with η ∈ R+ a hyperparameter controlling the urge to explore. In conclusion, the biggest practical issue with maximizing information gain for exploration is that the computation of Eq. (3) requires calculating the posterior p(θ|ξt, at, st+1), which is generally intractable.
2.3 Variational Bayes
We propose a tractable solution to maximize the information gain objective presented in the previous section. In a purely Bayesian setting, we can derive the posterior distribution given a new state-action pair through Bayes’ rule as
$p(\theta \mid \xi_t, a_t, s_{t+1}) = \frac{p(\theta \mid \xi_t)\, p(s_{t+1} \mid \xi_t, a_t; \theta)}{p(s_{t+1} \mid \xi_t, a_t)}$, (4)
with p(θ|ξt, at) = p(θ|ξt) as actions do not influence beliefs about the environment [17]. Herein, the denominator is computed through the integral
p(st+1|ξt, at) = ∫ Θ p(st+1|ξt, at; θ)p(θ|ξt)dθ. (5)
In general, this integral tends to be intractable when using highly expressive parametrized models (e.g., neural networks), which are often needed to accurately capture the environment model in high-dimensional continuous control.
We propose a practical solution through variational inference [24]. Herein, we embrace the fact that calculating the posterior p(θ|D) for a data set D is intractable. Instead we approximate it through an alternative distribution q(θ;φ), parameterized by φ, by minimizing DKL[q(θ;φ)‖p(θ|D)]. This is done through maximization of the variational lower bound L[q(θ;φ),D]:
L[q(θ;φ),D] = Eθ∼q(·;φ) [log p(D|θ)]−DKL[q(θ;φ)‖p(θ)]. (6)
Rather than computing information gain in Eq. (3) explicitly, we compute an approximation to it, leading to the following total reward:
r′(st, at, st+1) = r(st, at) + ηDKL[q(θ;φt+1)‖q(θ;φt)], (7)
with φt+1 the updated and φt the old parameters representing the agent’s belief. Natural candidates for parametrizing the agent’s dynamics model are Bayesian neural networks (BNNs) [19], as they maintain a distribution over their weights. This allows us to view the BNN as an infinite neural network ensemble by integrating out its parameters:
p(y|x) = ∫ Θ p(y|x; θ)q(θ;φ)dθ. (8)
In particular, we utilize a BNN parametrized by a fully factorized Gaussian distribution [20]. Practical BNN implementation details are deferred to Section 2.5, while we give some intuition into the behavior of BNNs in the appendix.
2.4 Compression
It is possible to derive an interesting relationship between compression improvement—an intrinsic reward objective defined in [25], and the information gain of Eq. (2). In [25], the agent’s curiosity is
equated with compression improvement, measured through C(ξt;φt−1)− C(ξt;φt), where C(ξ;φ) is the description length of ξ using φ as a model. Furthermore, it is known that the negative variational lower bound can be viewed as the description length [19]. Hence, we can write compression improvement as L[q(θ;φt), ξt] − L[q(θ;φt−1), ξt]. In addition, an alternative formulation of the variational lower bound in Eq. (6) is given by
$\log p(\mathcal{D}) = \overbrace{\int_\Theta q(\theta;\phi)\log\frac{p(\theta,\mathcal{D})}{q(\theta;\phi)}\,d\theta}^{L[q(\theta;\phi),\,\mathcal{D}]} + D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,p(\theta \mid \mathcal{D})]$. (9)
Thus, compression improvement can now be written as
(log p(ξt)−DKL[q(θ;φt)‖p(θ|ξt)])− (log p(ξt)−DKL[q(θ;φt−1)‖p(θ|ξt)]) . (10) If we assume that φt perfectly optimizes the variational lower bound for the history ξt, then DKL[q(θ;φt)‖p(θ|ξt)] = 0, which occurs when the approximation equals the true posterior, i.e., q(θ;φt) = p(θ|ξt). Hence, compression improvement becomes DKL[p(θ|ξt−1)‖p(θ|ξt)]. Therefore, optimizing for compression improvement comes down to optimizing the KL divergence from the posterior given the past history ξt−1 to the posterior given the total history ξt. As such, we arrive at an alternative way to encode curiosity than information gain, namely DKL[p(θ|ξt)‖p(θ|ξt, at, st+1)], its reversed KL divergence. In experiments, we noticed no significant difference between the two KL divergence variants. This can be explained as both variants are locally equal when introducing small changes to the parameter distributions. Investigation of how to combine both information gain and compression improvement is deferred to future work.
2.5 Implementation
The complete method is summarized in Algorithm 1. We first set forth implementation and parametrization details of the dynamics BNN. The BNN weight distribution q(θ;φ) is given by the fully factorized Gaussian distribution [20]:
$q(\theta;\phi) = \prod_{i=1}^{|\Theta|} \mathcal{N}(\theta_i \mid \mu_i; \sigma_i^2)$. (11)
Hence, φ = {µ, σ}, with µ the Gaussian’s mean vector and σ the covariance matrix diagonal. This is particularly convenient as it allows for a simple analytical formulation of the KL divergence. This is described later in this section. Because of the restriction σ > 0, the standard deviation of the Gaussian BNN parameter is parametrized as σ = log(1 + eρ), with ρ ∈ R [20].
Now the training of the dynamics BNN through optimization of the variational lower bound is described. The expected log-likelihood term in Eq. (6) is approximated through sampling, $\mathbb{E}_{\theta\sim q(\cdot;\phi)}[\log p(\mathcal{D} \mid \theta)] \approx \frac{1}{N}\sum_{i=1}^{N}\log p(\mathcal{D} \mid \theta_i)$, with N samples drawn according to θ ∼ q(·;φ) [20]. Optimizing the variational lower bound in Eq. (6) in combination with the reparametrization trick is called stochastic gradient variational Bayes (SGVB) [26] or Bayes by Backprop [20]. Furthermore, we make use of the local reparametrization trick proposed in [26], in which sampling at the weights is replaced by sampling the neuron pre-activations, which is more computationally efficient and reduces gradient variance. The optimization of the variational lower bound is done at regular intervals during the RL training process, by sampling D from a FIFO replay pool that stores recent samples (st, at, st+1). This is to break up the strong intratrajectory sample correlation, which destabilizes learning, in favor of obtaining i.i.d. data [7]. Moreover, it diminishes the effect of compounding posterior approximation errors.
The posterior distribution of the dynamics parameter, which is needed to compute the KL divergence in the total reward function r′ of Eq. (7), can be computed through the following minimization
$\phi' = \arg\min_{\phi}\Big[\overbrace{\underbrace{D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,q(\theta;\phi_{t-1})]}_{\ell_{\mathrm{KL}}(q(\theta;\phi))} - \mathbb{E}_{\theta\sim q(\cdot;\phi)}\big[\log p(s_t \mid \xi_t, a_t; \theta)\big]}^{\ell(q(\theta;\phi),\, s_t)}\Big]$, (12)
where we replace the expectation over θ with samples θ ∼ q(·;φ). Because we only update the model periodically based on samples drawn from the replay pool, this optimization can be performed in parallel for each st, keeping φt−1 fixed. Once φ′ has been obtained, we can use it to compute the intrinsic reward.
Algorithm 1: Variational Information Maximizing Exploration (VIME)
for each epoch n do
    for each timestep t in each trajectory generated during n do
        Generate action at ∼ πα(st) and sample state st+1 ∼ P(·|ξt, at); get r(st, at).
        Add triplet (st, at, st+1) to FIFO replay pool R.
        Compute DKL[q(θ;φ′n+1)‖q(θ;φn+1)] by the approximation ∇⊤H−1∇, following Eq. (16) for diagonal BNNs, or by optimizing Eq. (12) to obtain φ′n+1 for general BNNs.
        Divide DKL[q(θ;φ′n+1)‖q(θ;φn+1)] by the median of previous KL divergences.
        Construct r′(st, at, st+1) ← r(st, at) + η DKL[q(θ;φ′n+1)‖q(θ;φn+1)], following Eq. (7).
    Minimize DKL[q(θ;φn)‖p(θ)] − Eθ∼q(·;φn)[log p(D|θ)] following Eq. (6), with D sampled randomly from R, leading to the updated posterior q(θ;φn+1).
    Use rewards {r′(st, at, st+1)} to update policy πα using any standard RL method.
To optimize Eq. (12) efficiently, we only take a single second-order step. This way, the gradient is rescaled according to the curvature of the KL divergence at the origin. As such, we compute DKL[q(θ;φ+ λ∆φ)‖q(θ;φ)], with the update step ∆φ defined as
$\Delta\phi = H^{-1}(\ell)\,\nabla_\phi\,\ell(q(\theta;\phi), s_t)$, (13)
in which H(`) is the Hessian of `(q(θ;φ), st). Since we assume that the variational approximation is a fully factorized Gaussian, the KL divergence from posterior to prior has a particularly simple form:
$D_{\mathrm{KL}}[q(\theta;\phi)\,\|\,q(\theta;\phi')] = \frac{1}{2}\sum_{i=1}^{|\Theta|}\left(\left(\frac{\sigma_i}{\sigma'_i}\right)^2 + 2\log\sigma'_i - 2\log\sigma_i + \frac{(\mu'_i-\mu_i)^2}{\sigma'^2_i}\right) - \frac{|\Theta|}{2}$. (14)
Because this KL divergence is approximately quadratic in its parameters, and the log-likelihood term can be seen as locally linear compared to this highly curved KL term, we approximate H by only calculating it for the KL term ℓKL(q(θ;φ)). This can be computed very efficiently in the case of a fully factorized Gaussian distribution, as this approximation becomes a diagonal matrix. Looking at Eq. (14), we can calculate the following Hessian at the origin. The µ and ρ entries are defined as
∂²ℓ_KL/∂µ_i² = 1 / log²(1 + e^{ρ_i})   and   ∂²ℓ_KL/∂ρ_i² = ( 2e^{2ρ_i} / (1 + e^{ρ_i})² ) · ( 1 / log²(1 + e^{ρ_i}) ),   (15)
while all other entries are zero. Furthermore, it is also possible to approximate the KL divergence through a second-order Taylor expansion as ½ Δφᵀ H Δφ = ½ (H⁻¹∇)ᵀ H (H⁻¹∇), since both the value and the gradient of the KL divergence are zero at the origin. This gives us
D_KL[q(θ;φ + λΔφ) ‖ q(θ;φ)] ≈ ½ λ² ∇_φℓᵀ H⁻¹(ℓ_KL) ∇_φℓ.   (16)
Note that H⁻¹(ℓ_KL) is diagonal, so this expression can be computed efficiently. Instead of using the KL divergence D_KL[q(θ;φ_{t+1}) ‖ q(θ;φ_t)] directly as an intrinsic reward in Eq. (7), we normalize it by dividing by the average of the median KL divergences taken over a fixed number of previous trajectories. Rather than focusing on its absolute value, we emphasize the relative difference in KL divergence between samples. This accomplishes the same effect, since the variance of the KL divergence converges to zero once the model is fully learned.
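Putting Eqs. (13)–(16) together, the fast intrinsic-reward computation for a diagonal (fully factorized Gaussian) BNN might look as follows; grad_l is assumed to be the flattened gradient of ℓ with respect to (µ, ρ) obtained elsewhere, and all names and the step size λ are illustrative assumptions.

import numpy as np

def kl_hessian_diag(rho):
    """Diagonal Hessian of l_KL at the origin, Eq. (15), for (mu, rho) parameters."""
    sigma = np.log1p(np.exp(rho))              # sigma_i = log(1 + e^{rho_i})
    h_mu = 1.0 / sigma ** 2                    # d^2 l_KL / d mu_i^2
    h_rho = (2.0 * np.exp(2 * rho) / (1 + np.exp(rho)) ** 2) / sigma ** 2
    return np.concatenate([h_mu, h_rho])       # order: all mu entries, then all rho entries

def approx_info_gain(grad_l, h_diag, lam=0.01):
    """Approximate KL of Eq. (16): 0.5 * lam^2 * grad^T H^{-1} grad (H is diagonal)."""
    return 0.5 * lam ** 2 * np.sum(grad_l ** 2 / h_diag)

def intrinsic_reward(r_ext, kl_value, kl_history, eta=0.1):
    """Eq. (7) with the median normalization used in Algorithm 1."""
    scale = np.median(kl_history) if len(kl_history) > 0 else 1.0
    return r_ext + eta * kl_value / max(scale, 1e-8)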
3 Experiments
In this section, we investigate (i) whether VIME can succeed in domains that have extremely sparse rewards, (ii) whether VIME improves learning when the reward is shaped to guide the agent towards its goal, and (iii) how η, as used in Eq. (3), trades off exploration and exploitation behavior. All experiments make use of the rllab [15] benchmark code base and the complementary continuous control tasks suite. The following tasks are part of the experimental setup: CartPole (S ⊆ R4, A ⊆ R1), CartPoleSwingup (S ⊆ R4, A ⊆ R1), DoublePendulum (S ⊆ R6, A ⊆ R1), MountainCar (S ⊆ R3, A ⊆ R1), locomotion tasks HalfCheetah (S ⊆ R20, A ⊆ R6), Walker2D (S ⊆ R20, A ⊆ R6), and the hierarchical task SwimmerGather (S ⊆ R33, A ⊆ R2).
Performance is measured through the average return (not including the intrinsic rewards) over the trajectories generated (y-axis) at each iteration (x-axis). More specifically, the darker-colored lines in each plot represent the median performance over a fixed set of 10 random seeds while the shaded areas show the interquartile range at each iteration. Moreover, the number in each legend shows this performance measure, averaged over all iterations. The exact setup is described in the Appendix.
Domains with sparse rewards are difficult to solve through naïve exploration behavior because, before the agent obtains any reward, it lacks a feedback signal on how to improve its policy. This allows us to test whether an exploration strategy is truly capable of systematic exploration, rather than improving existing RL algorithms by adding more hyperparameters. Therefore, VIME is compared with heuristic exploration strategies on the following tasks with sparse rewards. A reward of +1 is given when the car escapes the valley on the right side in MountainCar; when the pole is pointed upwards in CartPoleSwingup; and when the cheetah moves forward over five units in HalfCheetah. We compare VIME with the following baselines: only using Gaussian control noise [15] and using the ℓ2 BNN prediction error as an intrinsic reward, a continuous extension of [10]. TRPO [8] is used as the RL algorithm, as it performs very well compared to other methods [15]. Figure 1 shows the performance results. We notice that naïve exploration performs very poorly, as it is almost never able to reach the goal in any of the tasks. Similarly, using ℓ2 errors does not perform well. In contrast, VIME performs much better, achieving the goal in most cases. This experiment demonstrates that curiosity drives the agent to explore, even in the absence of any initial reward, where naïve exploration completely breaks down.
To further strengthen this point, we have evaluated VIME on the highly difficult hierarchical task SwimmerGather in Figure 5 whose reward signal is naturally sparse. In this task, a two-link robot needs to reach “apples” while avoiding “bombs” that are perceived through a laser scanner. However, before it can make any forward progress, it has to learn complex locomotion primitives in the absence of any reward. None of the RL methods tested previously in [15] were able to make progress with naïve exploration. Remarkably, VIME leads the agent to acquire coherent motion primitives without any reward guidance, achieving promising results on this challenging task.
Next, we investigate whether VIME is widely applicable by (i) testing it on environments where the reward is well shaped, and (ii) pairing it with different RL methods. In addition to TRPO, we choose to equip REINFORCE [27] and ERWR [28] with VIME because these two algorithms usually suffer from premature convergence to suboptimal policies [15, 29], which can potentially be alleviated by better exploration. Their performance is shown in Figure 2 on several well-established continuous control tasks. Furthermore, Figure 3 shows the same comparison for the Walker2D locomotion task. In the majority of cases, VIME leads to a significant performance gain over heuristic exploration. Our exploration method allows the RL algorithms to converge faster, and notably helps REINFORCE and ERWR avoid converging to a locally optimal solution on DoublePendulum and MountainCar. We note that in environments such as CartPole, a better exploration strategy is redundant as following the policy gradient direction leads to the globally optimal solution. Additionally, we tested adding Gaussian noise to the rewards as a baseline, which did not improve performance.
To give an intuitive understanding of VIME’s exploration behavior, the distribution of visited states for both naïve exploration and VIME after convergence is investigated. Figure 1d shows that using Gaussian control noise exhibits random walk behavior: the state visitation plot is more condensed and ball-shaped around the center. In comparison, VIME leads to a more diffused visitation pattern, exploring the states more efficiently, and hence reaching the goal more quickly.
Finally, we investigate how η, as used in Eq. (3), trades off exploration and exploitation behavior. On the one hand, higher η values should lead to a higher curiosity drive, causing more exploration. On the other hand, very low η values should reduce VIME to traditional Gaussian control noise. Figure 4 shows the performance on MountainCar for different η values. Setting η too high clearly results in prioritizing exploration over getting additional external reward. Too low of an η value reduces the method to the baseline algorithm, as the intrinsic reward contribution to the total reward r′ becomes negligible. Most importantly, this figure highlights that there is a wide η range for which the task is best solved, across different algorithms.
4 Related Work
A body of theoretically oriented work demonstrates exploration strategies that are able to learn online in a previously unknown MDP and incur a polynomial amount of regret—as a result, these algorithms find a near-optimal policy in a polynomial amount of time. Some of these algorithms are based on the principle of optimism under uncertainty: E3 [3], R-Max [4], UCRL [30]. An alternative approach is Bayesian reinforcement learning methods, which maintain a distribution over possible MDPs [1, 17, 23, 31]. The optimism-based exploration strategies have been extended to continuous state spaces, for example, [6, 9], however these methods do not accommodate nonlinear function approximators.
Practical RL algorithms often rely on simple exploration heuristics, such as ε-greedy and Boltzmann exploration [32]. However, these heuristics exhibit random walk exploratory behavior, which can lead
to exponential regret even in case of small MDPs [9]. Our proposed method of utilizing information gain can be traced back to [22], and has been further explored in [17, 33, 34]. Other metrics for curiosity have also been proposed, including prediction error [10, 35], prediction error improvement [36], leverage [14], neuro-correlates [37], and predictive information [38]. These methods have not been applied directly to high-dimensional continuous control tasks without discretization. We refer the reader to [21, 39] for an extensive review on curiosity and intrinsic rewards.
Recently, there have been various exploration strategies proposed in the context of deep RL. [10] proposes to use the `2 prediction error as the intrinsic reward. [12] performs approximate visitation counting in a learned state embedding using Gaussian kernels. [11] proposes a form of Thompson sampling, training multiple value functions using bootstrapping. Although these approaches can scale up to high-dimensional state spaces, they generally assume discrete action spaces. [40] make use of mutual information for gait stabilization in continuous control, but rely on state discretization. Finally, [41] proposes a variational method for information maximization in the context of optimizing empowerment, which, as noted by [42], does not explicitly favor exploration.
5 Conclusions
We have proposed Variational Information Maximizing Exploration (VIME), a curiosity-driven exploration strategy for continuous control tasks. Variational inference is used to approximate the posterior distribution of a Bayesian neural network that represents the environment dynamics. Using information gain in this learned dynamics model as intrinsic rewards allows the agent to optimize for both external reward and intrinsic surprise simultaneously. Empirical results show that VIME performs significantly better than heuristic exploration methods across various continuous control tasks and algorithms. As future work, we would like to investigate measuring surprise in the value function and using the learned dynamics model for planning.
Acknowledgments
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, Berkeley Deep Drive (BDD), and ONR through a PECASE award. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. | 1. What is the main contribution of the paper in reinforcement learning?
2. How does the proposed method, VIME, optimize exploration and exploitation in reinforcement learning tasks?
3. Can you explain how the Bayesian neural network used in VIME helps in maximizing information gain about the environment's model?
4. What are the strengths of the paper, particularly in its demonstrations and comparisons?
5. Are there any concerns or limitations regarding the use of VIME in certain environments or scenarios? | Review | Review
The authors introduce a method, VIME, for optimally trading off exploration and exploitation in reinforcement learning tasks. The idea is to add an auxiliary cost that maximizes the information gain about the model of the environment, encouraging exploration into regions with larger uncertainty. This requires use of a model that has a notion of uncertainty, for which the authors use a Bayesian neural network in the style of Blundell et al. Experiments demonstrate the utility of the method, especially in instances which require some amount of exploration before rewards are obtained. I enjoyed the paper. Demonstrations and comparisons seem great, and the improvements are hard to ignore. The method is novel and appears to be generally useful. The figure legends are difficult to read and should be made larger, especially in Figures 1 and 2.
NIPS | Title
Big Bird: Transformers for Longer Sequences
Abstract
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BIGBIRD, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BIGBIRD drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
1 Introduction
Models based on Transformers [92], such as BERT [22, 63], are wildly successful for a wide variety of Natural Language Processing (NLP) tasks and consequently are mainstay of modern NLP research. Their versatility and robustness are the primary drivers behind the wide-scale adoption of Transformers. The model is easily adapted for a diverse range of sequence based tasks – as a seq2seq model for translation [92], summarization [66], generation [15], etc. or as a standalone encoders for sentiment analysis [84], POS tagging [65], machine reading comprehension [94], etc. – and it is known to vastly outperform previous sequence models like LSTM [37]. The key innovation in Transformers is the introduction of a self-attention mechanism, which can be evaluated in parallel for each token of the input sequence, eliminating the sequential dependency in recurrent neural networks, like LSTM. This parallelism enables Transformers to leverage the full power of modern SIMD hardware accelerators like GPUs/TPUs, thereby facilitating training of NLP models on datasets of unprecedented size. This ability to train on large scale data has led to surfacing of models like BERT [22] and T5 [75], which pretrain transformers on large general purpose corpora and transfer the knowledge to down-stream task. The pretraining has led to significant improvement in low data regime downstream tasks [51] as well as tasks with sufficient data [102] and thus have been a major force behind the ubiquity of transformers in contemporary NLP.
The self-attention mechanism overcomes constraints of RNNs (namely the sequential nature of RNN) by allowing each token in the input sequence to attend independently to every other token in the sequence. This design choice has several interesting repercussions. In particular, the full self-attention have computational and memory requirement that is quadratic in the sequence length. We note that while the corpus can be large, the sequence length, which provides the context in many applications is very limited. Using commonly available current hardware and model sizes, this requirement translates to roughly being able to handle input sequences of length 512 tokens. This reduces its direct applicability to tasks that require larger context, like QA [60], document classification, etc.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
However, while we know that self-attention and Transformers are useful, our theoretical understanding is rudimentary. What aspects of the self-attention model are necessary for its performance? What can we say about the expressivity of Transformers and similar models? Apriori, it was not even clear from the design if the proposed self-attention mechanism was as effective as RNNs. For example, the self-attention does not even obey sequence order as it is permutation equivariant. This concern has been partially resolved, as Yun et al. [105] showed that transformers are expressive enough to capture all continuous sequence to sequence functions with a compact domain. Meanwhile, Pérez et al. [72] showed that the full transformer is Turing Complete (i.e. can simulate a full Turing machine). Two natural questions arise: Can we achieve the empirical benefits of a fully quadratic self-attention scheme using fewer inner-products? Do these sparse attention mechanisms preserve the expressivity and flexibility of the original network?
In this paper, we address both the above questions and produce a sparse attention mechanism that improves performance on a multitude of tasks that require long contexts. We systematically develop BIGBIRD, an attention mechanism whose complexity is linear in the number of tokens (Sec. 2). We take inspiration from graph sparsification methods and understand where the proof for expressiveness of Transformers breaks down when full-attention is relaxed to form the proposed attention pattern. This understanding helped us develop BIGBIRD, which is theoretically as expressive and also empirically useful. In particular, our BIGBIRD consists of three main parts:
• A set of g global tokens attending on all parts of the sequence.
• All tokens attending to a set of w local neighboring tokens.
• All tokens attending to a set of r random tokens.
This leads to a high performing attention mechanism scaling to much longer sequence lengths (8x). To summarize, our main contributions are:
1. BIGBIRD satisfies all the known theoretical properties of the full transformer (Sec. 3). In particular, we show that adding extra tokens allows one to express all continuous sequence to sequence functions with only O(n) inner products. Furthermore, we show that under standard assumptions regarding precision, BIGBIRD is Turing complete.
2. Empirically, we show that the extended context modelled by BIGBIRD benefits a variety of NLP tasks. We achieve state of the art results for question answering and document summarization on a number of different datasets. A summary of these results is presented in Sec. 4.
3. Lastly, we introduce a novel application of attention based models where long contexts are beneficial: extracting contextual representations of genomics sequences like DNA. With longer masked LM pretraining, BIGBIRD improves performance on downstream tasks such as promoter-region and chromatin profile prediction (Sec. 5).
1.1 Related Work
There have been a number of interesting attempts aimed at alleviating the quadratic dependency of Transformers, which can be broadly categorized into two directions. The first line of work embraces the length limitation and develops methods around it. The simplest methods in this category just employ a sliding window [94], but in general most work fits the following general paradigm: using some other mechanism, select a smaller subset of relevant contexts to feed into the transformer and optionally iterate, i.e. call the transformer block multiple times with different contexts each time. Most prominently, SpanBERT [42], ORQA [54], REALM [34], RAG [57] have achieved strong performance on different tasks. However, it is worth noting that these methods often require significant engineering efforts (like backprop through large scale nearest neighbor search) and are hard to train.
The second line of work questions if full attention is essential and has tried to come up with approaches that do not require full attention, thereby reducing the memory and computation requirements. Prominently, Dai et al. [21], Sukhbaatar et al. [83], Rae et al. [74] have proposed auto-regressive models that work well for left-to-right language modeling but suffer in tasks which require bidirectional context. Child et al. [16] proposed a sparse model that reduces the complexity to O(n√n), and Kitaev et al. [49] further reduced the complexity to O(n log(n)) by using LSH to compute nearest neighbors. Ye et al. [104] proposed binary partitions of the data whereas Qiu et al. [73] reduced complexity by using block sparsity. Recently, Longformer [8] introduced a localized sliding window based mask with a few global masks to reduce computation and extended BERT to longer sequence based tasks. Finally, our work is closely related to and built on the work of Extended Transformers Construction [4]. This work was designed to encode structure in text for transformers. The idea of global tokens was used extensively by them to achieve their goals. Our theoretical work can be seen as providing a justification for the success of these models as well. It is important to note that most of the
aforementioned methods are heuristic based and empirically are not as versatile and robust as the original transformer, i.e. the same architecture does not attain SoTA on multiple standard benchmarks. (There is one exception, Longformer, which we include in all our comparisons; see App. E.3 for a more detailed comparison.) Moreover, these approximations do not come with theoretical guarantees.
2 BIGBIRD Architecture
In this section, we describe the BIGBIRD model using the generalized attention mechanism that is used in each layer of a transformer operating on an input sequence X = (x_1, ..., x_n) ∈ ℝ^{n×d}. The generalized attention mechanism is described by a directed graph D whose vertex set is [n] = {1, . . . , n}. The set of arcs (directed edges) represents the set of inner products that the attention mechanism will consider. Let N(i) denote the out-neighbor set of node i in D; then the i-th output vector of the generalized attention mechanism is defined as
ATTN_D(X)_i = x_i + Σ_{h=1}^{H} σ( Q_h(x_i) K_h(X_{N(i)})ᵀ ) · V_h(X_{N(i)})   (AT)
where Q_h, K_h : ℝ^d → ℝ^m are query and key functions respectively, V_h : ℝ^d → ℝ^d is a value function, σ is a scoring function (e.g. softmax or hardmax) and H denotes the number of heads. Also note X_{N(i)} corresponds to the matrix formed by only stacking {x_j : j ∈ N(i)} and not all the inputs. If D is the complete digraph, we recover the full quadratic attention mechanism of Vaswani et al. [92]. To simplify our exposition, we will operate on the adjacency matrix A of the graph D even though the underlying graph may be sparse. To elaborate, A ∈ [0, 1]^{n×n} with A(i, j) = 1 if query i attends to key j and is zero otherwise. For example, when A is the ones matrix (as in BERT), it leads to quadratic complexity, since all tokens attend to every other token. This view of self-attention as a fully connected graph allows us to exploit existing graph theory to help reduce its complexity. The problem of reducing the quadratic complexity of self-attention can now be seen as a graph sparsification problem. It is well-known that random graphs are expanders and can approximate complete graphs in a number of different contexts, including in their spectral properties [80, 38]. We believe a sparse random graph for the attention mechanism should have two desiderata: small average path length between nodes and a notion of locality, each of which we discuss below.
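Before turning to the random-graph and locality constructions discussed next, the following is a minimal numpy sketch (an illustrative assumption, not the released BIGBIRD implementation) of a single head of the generalized attention in Eq. (AT), restricted to a per-query neighborhood N(i):

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sparse_attention(X, Wq, Wk, Wv, neighbors):
    """Single-head generalized attention over a directed graph D.

    X         : (n, d) token embeddings
    Wq, Wk    : (d, m) query / key projections
    Wv        : (d, d) value projection
    neighbors : list where neighbors[i] holds the key indices N(i) attended by query i
    """
    n, d = X.shape
    out = X.copy()                              # residual connection, as in Eq. (AT)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    for i in range(n):
        idx = np.asarray(neighbors[i])
        scores = softmax(Q[i] @ K[idx].T)       # attention only over N(i)
        out[i] = out[i] + scores @ V[idx]
    return out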
Let us consider the simplest random graph construction, known as the Erdős–Rényi model, where each edge is chosen independently with a fixed probability. In such a random graph with just Θ̃(n) edges, the shortest path between any two nodes is logarithmic in the number of nodes [17, 43]. As a consequence, such a random graph approximates the complete graph spectrally and its second eigenvalue (of the adjacency matrix) is quite far from the first eigenvalue [9, 10, 6]. This property leads to a rapid mixing time for random walks in the graph, which informally suggests that information can flow fast between any pair of nodes. Thus, we propose a sparse attention where each query attends over r random keys, i.e. A(i, ·) = 1 for r randomly chosen keys (see Fig. 1a). The second viewpoint which inspired the creation of BIGBIRD is that most contexts within NLP and computational biology have data which displays a great deal of locality of reference. In this phenomenon, a great deal of information about a token can be derived from its neighboring tokens. Most pertinently, Clark et al. [19] investigated self-attention models in NLP tasks and concluded that neighboring inner-products are extremely important. The concept of locality, proximity of tokens in linguistic structure, also forms the basis of various linguistic theories such as transformational-generative grammar. In the terminology of graph theory, the clustering coefficient is a measure of locality
of connectivity, and is high when the graph contains many cliques or near-cliques (subgraphs that are almost fully interconnected). Simple Erdős-Rényi random graphs do not have a high clustering coefficient [85], but a class of random graphs, known as small world graphs, exhibit high clustering coefficient [95]. A particular model introduced by Watts and Strogatz [95] is of high relevance to us as it achieves a good balance between average shortest path and the notion of locality. The generative process of their model is as follows: Construct a regular ring lattice, a graph with n nodes each connected to w neighbors, w/2 on each side.
In other words we begin with a sliding window on the nodes. Then a random subset (k%) of all connections is replaced with a random connection. The other (100 - k)% local connections are retained. However, deleting such random edges might be inefficient on modern hardware, so we retain it, which will not affect its properties. In summary, to capture
these local structures in the context, in BIGBIRD we define a sliding window attention, so that during self attention of width w, the query at location i attends from i − w/2 to i + w/2 keys. In our notation, A(i, i − w/2 : i + w/2) = 1 (see Fig. 1b). As an initial sanity check, we performed basic experiments to test whether these intuitions are sufficient in getting performance close to BERT-like models, while keeping attention linear in the number of tokens. We found that random blocks and local windows were insufficient in capturing all the context necessary to compete with the performance of BERT.
The final piece of BIGBIRD is inspired by our theoretical analysis (Sec. 3), which is critical for empirical performance. More specifically, our theory utilizes the importance of "global tokens": tokens that attend to all tokens in the sequence and to whom all tokens attend (see Fig. 1c). These global tokens can be defined in two ways:
• BIGBIRD-ITC: In internal transformer construction (ITC), we make some existing tokens "global", which attend over the entire sequence. Concretely, we choose a subset G of indices (with g := |G|), such that A(i, :) = 1 and A(:, i) = 1 for all i ∈ G.
• BIGBIRD-ETC: In extended transformer construction (ETC), we include additional "global" tokens such as CLS. Concretely, we add g global tokens that attend to all existing tokens. In our notation, this corresponds to creating a new matrix B ∈ [0, 1]^{(N+g)×(N+g)} by adding g rows to matrix A, such that B(i, :) = 1 and B(:, i) = 1 for all i ∈ {1, 2, . . . , g}, and B(g + i, g + j) = A(i, j) for all i, j ∈ {1, . . . , N}. This adds extra locations to store context and, as we will see in the experiments, improves performance.
The final attention mechanism for BIGBIRD (Fig. 1d) has all three of these properties: queries attend to r random keys, each query attends to w/2 tokens to the left of its location and w/2 to the right of its location and they contain g global tokens (The global tokens can be from existing tokens or extra added tokens). We provide implementation details in App. D.
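For concreteness, below is a minimal sketch of how the three attention patterns can be combined into a boolean adjacency mask for BIGBIRD-ITC; the window size, number of random keys, choice of global indices, and all names are illustrative assumptions rather than the paper's exact blocked implementation.

import numpy as np

def bigbird_itc_mask(n, window=3, num_random=2, global_idx=(0,), seed=0):
    """Boolean adjacency A[i, j] = True if query i attends to key j."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        A[i, lo:hi] = True                                            # sliding-window (local) keys
        A[i, rng.choice(n, size=num_random, replace=False)] = True    # random keys
    A[list(global_idx), :] = True                                     # global tokens attend everywhere
    A[:, list(global_idx)] = True                                     # and are attended by every token
    return A

mask = bigbird_itc_mask(16)
print(mask.sum(axis=1))  # number of keys each query attends to (roughly w + r + g)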
3 Theoretical Results about Sparse Attention Mechanism
In this section, we will show that sparse attention mechanisms are as powerful and expressive as full-attention mechanisms in two respects. First, we show that when sparse attention mechanisms are used in a standalone encoder (such as BERT), they are Universal Approximators of sequence to sequence functions in the style of Yun et al. [105]. We note that this property was also explored theoretically in the contemporary work of Yun et al. [106]. Second, unlike [106], we further show that sparse encoder-decoder transformers are Turing Complete (assuming the same conditions defined in [72]). Complementing the above positive results, we also show that moving to a sparse-attention mechanism incurs a cost, i.e. there is no free lunch. In Sec. 3.4, we show lower bounds by exhibiting a natural task where any sufficiently sparse mechanism will require polynomially more layers.
3.1 Notation
The complete Transformer encoder stack is nothing but the repeated application of a single-layer encoder (with independent parameters). We denote the class of such Transformer encoder stacks, defined using the generalized encoder (Sec. 2), by 𝒯_D^{H,m,q}, which consists of H heads with head size m, where q is the hidden layer size of the output network and the attention layer is defined by the directed graph D.
The key difference between our proposed attention mechanism to that of Vaswani et al. [92], Yun et al. [105] is that we add a special token at the beginning of each sequence and assign it a special vector.
We will refer to this as x_0. Therefore our graph D will have vertex set {0} ∪ [n] = {0, 1, 2, . . . , n}. We will assume that this extra node and its respective vector will be dropped at the final output layer of the transformer. To avoid cumbersome notation, we will still treat the transformer as mapping sequences X ∈ ℝ^{n×d} to ℝ^{n×d}. We will also allow the transformer to append position embeddings E ∈ ℝ^{d×n} to matrix X in the input layer.
Finally, we need to define the function class and distance measure for proving the universal approximation property. Let ℱ_CD denote the set of continuous functions f : [0, 1]^{n×d} → ℝ^{n×d} which are continuous with respect to the topology defined by the ℓ_p norm. Recall that for any p ≥ 1, the ℓ_p distance is d_p(f_1, f_2) = ( ∫ ‖f_1(X) − f_2(X)‖_p^p dX )^{1/p}.
3.2 Universal Approximators
Definition 1. The star-graph S centered at 0 is the graph defined on {0, . . . , n}. The neighborhood of all vertices i is N(i) = {0, i} for i ∈ {1, . . . , n} and N(0) = {1, . . . , n}.
Our main theorem is that the sparse attention mechanism defined by any graph containing S is a universal approximator: Theorem 1. Given 1 < p < ∞ and ε > 0, for any f ∈ ℱ_CD, there exists a transformer with sparse attention, g ∈ 𝒯_D^{H,m,q}, such that d_p(f, g) ≤ ε, where D is any graph containing the star graph S.
To prove the theorem, we will follow the standard proof structure outlined in [105].
Step 1: Approximate ℱ_CD by piece-wise constant functions. Since f is a continuous function with bounded domain [0, 1)^{n×d}, we will approximate it with a suitable piece-wise constant function. This is accomplished by a suitable partition of the region [0, 1) into a grid of granularity δ to get a discrete set G_δ. Therefore, we can assume that we are dealing with a function f̄ : G_δ → ℝ^{n×d}, where d_p(f, f̄) ≤ ε/3. Step 2: Approximate piece-wise constant functions by modified transformers. This is the key step of the proof where the self-attention mechanism is used to generate a contextual-mapping of the input. Informally, a contextual mapping is a unique code for the pair consisting of a matrix (X, x_i) and a column. Its uniqueness allows the feed-forward layers to use each code to map it to a unique output column.
The main technical challenge is computing the contextual mapping using only a sparse attention mechanism. This was done in [105] using a "selective" shift operator which shifts up entries that are in a specific interval. Key to their proof was the fact that the shift was exactly the range from the largest entry to the smallest entry.
Creating a contextual mapping with a sparse attention mechanism is quite a challenge. In particular, because each query only attends to a few keys, it is not at all clear that sufficient information can be corralled to make a contextual embedding of the entire matrix. To get around this, we develop a sparse shift operator which shifts the entries of the matrices if they lie in a certain range. The exact amount of the shift is controlled by the directed sparse attention graph D. The second key ingredient is the use of an additional global token. By carefully applying the operator to a set of chosen ranges, we will show that each column will contain a unique mapping of the full mapping. Therefore, we can compensate for the loss of inner-products in the self attention mechanism by using multiple layers and an auxiliary global token.
Step 3: Approximate modified transformers by original Transformers: The final step is to approximate the modified transformers by the original transformer which uses ReLU and softmax.
We provide the full details in App. A.
3.3 Turing Completeness
Transformers are a very general class. In the original paper of Vaswani et al. [92], they were used in both an encoder and a decoder. While the previous section outlined how powerful just the encoders are, another natural question is to ask what the additional power of a decoder along with an encoder is. Pérez et al. [72] showed that the full transformer based on a quadratic attention mechanism is Turing Complete. This result makes one unrealistic assumption, namely that the model works with arbitrary precision. Of course, this is necessary as otherwise, Transformers are bounded finite state machines and cannot be Turing Complete.
It is natural to ask if the full attention mechanism is necessary. Or can a sparse attention mechanism also be used to simulate any Turing Machine? We show that this is indeed the case: we can use a sparse encoder and sparse decoder to simulate any Turing Machine.
To use the sparse attention mechanism in the transformer architecture, we need to define a suitable modification where each token only reacts to previous tokens. Unlike the case for BERT, where the entire attention mechanism is applied once, in full transformers, the sparse attention mechanism at decoder side is used token by token. Secondly the work of Pérez et al. [72], uses each token as a representation of the tape history and uses the full attention to move and retrieve the correct tape symbol. Most of the construction of Pérez et al. [72] goes through for sparse attentions, except for their addressing scheme to point back in history (Lemma B.4 in [72]). We show how to simulate this using a sparse attention mechanism and defer the details to App. B.
3.4 Limitations
We demonstrate a natural task which can be solved by the full attention mechanism in O(1) layers. However, under standard complexity-theoretic assumptions, this problem requires Ω̃(n) layers for any sparse attention layer with Õ(n) edges (not just BIGBIRD). (Here Õ hides poly-logarithmic factors.) Consider the simple problem of finding the corresponding furthest vector for each vector in the given sequence of length n. Formally,
Task 1. Given n unit vectors {u_1, . . . , u_n}, find f(u_1, . . . , u_n) → (u_{1*}, . . . , u_{n*}) where for a fixed j ∈ [n], we define j* = argmax_k ‖u_k − u_j‖_2². Finding the vectors that are furthest apart boils down to a minimum inner product search in the case of unit vectors. For a full-attention mechanism with appropriate queries and keys, this task is very easy, as we can evaluate all pair-wise inner products.
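As an illustration (not taken from the paper), Task 1 can be solved in one shot from all pairwise inner products, which is exactly what a single full-attention layer can evaluate:

import numpy as np

def furthest_vectors(U):
    """U: (n, d) array of unit vectors. Returns indices j* = argmax_k ||u_k - u_j||^2."""
    # For unit vectors ||u_k - u_j||^2 = 2 - 2 <u_k, u_j>, so maximizing the distance
    # is the same as minimizing the inner product.
    inner = U @ U.T                      # all n^2 pairwise inner products
    return inner.argmin(axis=1)          # furthest (minimum inner product) index per row

U = np.random.default_rng(1).standard_normal((6, 4))
U /= np.linalg.norm(U, axis=1, keepdims=True)
print(furthest_vectors(U))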
The impossibility for sparse attention follows from hardness results stemming from the Orthogonal Vector Conjecture (OVC) [1, 2, 7, 97]. The OVC is a widely used assumption in fine-grained complexity. Informally, it states that one cannot determine if the minimum inner product among n boolean vectors is 0 in subquadratic time. In App. C, we show a reduction using OVC to show that if a transformer g ∈ 𝒯_D^{H=1,m=2d,q=0} for any sparse directed graph D can evaluate Task 1, it can solve the orthogonal vector problem. Proposition 1. There exists a single-layer full self-attention g ∈ 𝒯^{H=1,m=2d,q=0} that can evaluate Task 1, i.e. g(u_1, ..., u_n) = [u_{1*}, . . . , u_{n*}], but any sparse-attention graph D with Õ(n) edges (i.e. inner product evaluations) would require Ω̃(n^{1−o(1)}) layers. We give a formal proof of this fact in App. C.
4 Experiments: Natural Language Processing
In this section our goal is to showcase the benefits of modeling longer input sequences for NLP tasks, for which we select three representative tasks. We begin with basic masked language modeling (MLM; Devlin et al. 22) to check if better contextual representations can be learnt by utilizing longer contiguous sequences. Next, we consider QA with supporting evidence, for which the capability to handle longer sequences allows us to retrieve more evidence using crude systems like TF-IDF/BM25. Finally, we tackle long document classification where discriminating information may not be located in the first 512 tokens. Below we summarize the results for BIGBIRD using sequence length 4096, while we defer all other setup details including computational resources, batch size, and step size, to App. E.
Pretraining and MLM We follow [22, 63] to create base and large versions of BIGBIRD and pretrain it using MLM objective. This task involves predicting a random subset of tokens which have been masked out. We use four standard data-sets for pretraining (listed in App. E.1, Tab. 9), warm-starting from the public RoBERTa checkpoint2. We compare performance in predicting the masked out tokens in terms of bits per character, following [8]. As seen in App. E.1, Tab. 10, both BIGBIRD and Longformer perform better than limited length RoBERTa, with BIGBIRD-ETC performing the best. We note that we trained our models on a reasonable 16GB memory/chip with batch size of 32-64. Our memory efficiency is due to efficient blocking and sparsity structure of the sparse attention mechanism described in Sec. 2.
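For illustration, a minimal sketch of the MLM corruption step follows; the 15% masking rate and the special-token handling here are common defaults and assumptions, not necessarily the exact recipe used in the paper.

import numpy as np

def mask_for_mlm(token_ids, mask_id, mask_prob=0.15, rng=None):
    """Return (corrupted_ids, labels) where labels are -100 at unmasked positions."""
    rng = rng or np.random.default_rng()
    token_ids = np.asarray(token_ids)
    corrupted = token_ids.copy()
    labels = np.full_like(token_ids, -100)
    positions = rng.random(token_ids.shape) < mask_prob
    corrupted[positions] = mask_id            # replace selected tokens with the [MASK] id
    labels[positions] = token_ids[positions]  # the model is trained to recover these tokens
    return corrupted, labels

ids = np.arange(10, 30)
corrupted, labels = mask_for_mlm(ids, mask_id=0, rng=np.random.default_rng(0))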
1code available at http://goo.gle/bigbird-transformer 2https://github.com/pytorch/fairseq/tree/master/examples/roberta
Question Answering (QA) We considered following four challenging datasets: 1. Natural Questions [52]: For the given question, find a short span of answer (SA) from the given
evidences, as well as highlight the paragraph from the given evidences containing information about the correct answer (LA). 2. HotpotQA-distractor [101]: Similar to Natural Questions, it requires finding the answer (Ans) as well as the supporting facts (Sup) over different documents needed for multi-hop reasoning from the given evidences. 3. TriviaQA-wiki [41]: We need to provide an answer for the given question using the provided Wikipedia evidence; however, the answer might not be present in the given evidence. On a smaller verified subset of questions, the given evidence is guaranteed to contain the answer. Nevertheless, we model the answer as a span selection problem in this case as well. 4. WikiHop [96]: Choose the correct option from multiple-choice questions (MCQ), by aggregating information spread across multiple documents given in the evidences.
As these tasks are very competitive, multiple highly engineered systems have been designed specifically for each dataset, conforming to the respective output formats. For a fair comparison, we had to use some additional regularization for training BIGBIRD, details of which are provided in App. E.2 along with the exact architecture description. We experiment using the base sized model and select the best configuration on the development set for each dataset (as reported in Tab. 2). We can see that BIGBIRD-ETC, with expanded global tokens, consistently outperforms all other models. Thus, we chose this configuration to train a large sized model to be used for evaluation on the hidden test set.
In Tab. 3, we compare BIGBIRD-ETC model to top-3 entries from the leaderboard excluding BIGBIRD. One can clearly see the importance of using longer context as both Longformer and BIGBIRD outperform models with smaller contexts. Also, it is worth noting that BIGBIRD submission is a single model, whereas the other top-3 entries for Natural Questions are ensembles, which might explain the slightly lower accuracy in exact answer phrase selection. Classification We experiment on datasets of different lengths and contents, specifically various document classification and GLUE tasks. Following BERT, we used one layer with cross entropy loss on top of the first [CLS] token. We see that gains of using BIGBIRD are more significant when we have longer documents and fewer training examples. For instance, using base sized model,
BIGBIRD improves state-of-the-art for Arxiv dataset by about 5% points. On Patents dataset, there is improvement over using simple BERT/RoBERTa, but given the large size of training data the improvement over SoTA (which is not BERT based) is not significant. Note that this performance gain is not seen for much smaller IMDb dataset. Along with experimental setup detail, we present detailed results in App. E.4 which show competitive performance.
4.1 Encoder-Decoder Tasks
For an encoder-decoder setup, one can easily see that both the encoder and the decoder suffer from quadratic complexity due to the full self attention. We focus on introducing the sparse attention mechanism of BIGBIRD only at the encoder side. This is because, in practical generative applications, the length of the output sequence is typically small as compared to the input. For example, for text summarization, we see in realistic scenarios (c.f. App. E.5 Tab. 18) that the median output sequence length is ∼200 whereas the input sequence's median length is > 3000. For such applications, it is more efficient to use the sparse attention mechanism for the encoder and full self-attention for the decoder.
Summarization Document summarization is the task of creating a short and accurate summary of a text document. We used three long document datasets for testing our model, details of which are mentioned in Tab. 18. In this paper we focus on abstractive summarization of long documents, where using a longer contextual encoder should improve performance. The reasons are two fold: First, the salient content can be evenly distributed in the long document, not just in the first 512 tokens, and this is by design in the BigPatents dataset [78]. Second, longer documents exhibit a richer discourse structure and summaries are considerably more abstractive, so observing more context helps. As has been pointed out recently [76, 108], pretraining helps in generative tasks, so we warm start from our general purpose MLM pretraining on base-sized models as well as utilizing state-of-the-art summarization specific pretraining from Pegasus [108] on large-sized models. The results of training the BIGBIRD sparse encoder along with a full decoder on these long document datasets are presented in Tab. 4. We can clearly see that modeling longer context brings significant improvement. Along with hyperparameters, we also present results on shorter but more widespread datasets in App. E.5, which show that using sparse attention does not hamper performance either.
5 Experiments: Genomics
There has been a recent upsurge in using deep learning for genomics data [87, 107, 13], which has resulted in improved performance on several biologically-significant tasks such as promoter site prediction [71], methylation analysis [55], predicting functional effects of non-coding variant [110], etc. These approaches consume DNA sequence fragments as inputs, and therefore we believe longer input sequence handling capability of BIGBIRD would be beneficial as many functional effects
in DNA are highly non-local [12]. Furthermore, taking inspiration from NLP, we learn powerful contextual representations for DNA fragments utilizing abundant unlabeled data (e.g. human reference genome, Saccharomyces Genome Database) via MLM pretraining. Next, we showcase that our long input BIGBIRD along with the proposed pretraining significantly improves performances in two downstream tasks. Detailed experimental setup for the two tasks are provided in App. F.
Pre-training and MLM As explored in Liang [58], instead of operating on base pairs, we propose to first segment DNA into tokens so as to further increase the context length (App. F, Fig. 7). In particular, we build a byte-pair encoding [50] table for the DNA sequence of size 32K, with each token representing 8.78 base pairs on average. We learn contextual representation of these token on the human reference genome (GRCh37)3 using MLM objective. We then report the bits per
character (BPC) on a held-out set in Tab. 5. We find that attention based contextual representation of DNA does improve BPC, which is further improved by using longer context.
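As a toy illustration of byte-pair encoding on a DNA string (the paper builds a 32K-entry vocabulary with a full tokenizer; the greedy merge loop below is only a simplified sketch):

from collections import Counter

def learn_bpe(sequence, num_merges):
    """Greedy BPE on a DNA string: repeatedly merge the most frequent adjacent pair."""
    tokens = list(sequence)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

tokens, vocab = learn_bpe("ATGCGCGATATGCGCATTAGCGC", num_merges=5)
print(tokens, vocab)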
Promoter Region Prediction Promoter is a DNA region typically located upstream of the gene, which is the site of transcription initiation. Multiple methods have been proposed to identify the promoter regions in a given DNA sequence [100, 59, 11, 99, 71], as it is an important first step in understanding gene regulation. The corresponding machine learning task is to classify a given DNA fragment as promoter or non-promoter sequence. We use the dataset compiled by Oubounyt et al. [71] which was
built from Eukaryotic Promoter Database (EPDnew) [24] 4. We finetuned the pretrained BIGBIRD model from above, using the training data and report F1 on test dataset. We compare our results to the previously reported best method in Tab. 6. We see that BIGBIRD achieve nearly perfect accuracy with a 5% jump from the previous best reported accuracy.
Chromatin-Profile Prediction Non-coding regions of DNA do not code for proteins. Majority of diseases and other trait associated single-nucleotide polymorphism are correlated to non-coding genomic variations [110, 46]. Thus, understanding the functional effects of non-coding regions of DNA is a very important task. An important step in this process, as defined by Zhou and Troyanskaya
[110], is to predict large-scale chromatin-profiling from non-coding genomic sequence. To this effect, DeepSea [110], compiled 919 chromatin-profile of 2.4M non-coding variants from Encyclopedia of DNA Elements (ENCODE)5 and Roadmap Epigenomics projects6. The corresponding ML task is to predict, for a given non-coding region of DNA, these 919 chromatin-profile including 690 transcription factors (TF) binding profiles for 160 different TFs, 125 DNase I sensitivity (DHS) profiles and 104 histone-mark (HM) profiles. We jointly learn 919 binary classifiers to predict these functional effects from sequence of DNA fragments. On held-out chromosomes, we compare AUC with the baselines in Tab. 7 and see that we significantly improve on performance on the harder task HM, which is known to have longer-range correlations [27] than others.
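As an illustrative sketch (not the paper's implementation), jointly learning many binary classifiers on top of a shared sequence representation amounts to a linear head with independent sigmoid outputs and a summed binary cross-entropy loss:

import numpy as np

def multilabel_head(h, W, b):
    """h: (batch, d) pooled sequence representations; returns (batch, 919) probabilities."""
    logits = h @ W + b
    return 1.0 / (1.0 + np.exp(-logits))        # independent sigmoid per chromatin profile

def bce_loss(probs, targets, eps=1e-7):
    """Summed binary cross-entropy over all profiles, averaged over the batch."""
    probs = np.clip(probs, eps, 1 - eps)
    return -np.mean(np.sum(targets * np.log(probs) + (1 - targets) * np.log(1 - probs), axis=1))

rng = np.random.default_rng(0)
h = rng.standard_normal((2, 64))
probs = multilabel_head(h, rng.standard_normal((64, 919)) * 0.01, np.zeros(919))
print(bce_loss(probs, rng.integers(0, 2, size=(2, 919))))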
6 Conclusion
We propose BIGBIRD: a sparse attention mechanism that is linear in the number of tokens. BIGBIRD satisfies a number of theoretical results: it is a universal approximator of sequence to sequence functions and is also Turing complete. Theoretically, we use the power of extra global tokens to preserve the expressive power of the model. We complement these results by showing that moving to a sparse attention mechanism does incur a cost. Empirically, BIGBIRD gives state-of-the-art performance on a number of NLP tasks such as question answering and long document classification. We further introduce an attention based contextual language model for DNA and fine-tune it for downstream tasks such as promoter region prediction and predicting effects of non-coding variants.
3https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.13/ 4https://epd.epfl.ch/human/human_database.php?db=human 5https://www.encodeproject.org/ 6http://www.roadmapepigenomics.org/
Broader Impacts Inference Efficiency: At practical sequence lengths, quadratic attention mechanisms cannot capture the long-range dependencies that exist in natural text and other datasets. Moreover, there is a growing concern in the ML community about the resource and energy requirements of training large-scale systems [81]. Sparse, computationally efficient systems like BIGBIRD can capture long-range dependencies in an energy-efficient way without losing expressive power.
Wide Applicability: Beyond the impact of our model on NLP tasks that require longer context, our proposed contextualized representations of DNA using attention based models, should help in better modeling effects of longer sequences of DNA. Our effort continues a long line of research that bridges the gap between computational models designed for NLP and those for computational biology. | 1. What is the focus and contribution of the paper on BERT-based models?
2. What are the strengths of the proposed approach, particularly in terms of handling long sequences?
3. What are the weaknesses of the paper, especially regarding the limitation of the sliding window approach?
4. Do you have any concerns about the inference time comparison between BIGBIRD and other methods? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The BIGBIRD model proposed in this paper contains a sparse attention mechanism that can handle long sequences without increasing hardware requirements. The results on various NLP tasks support their method.
Strengths
- Modeling long-range text dependencies with BERT-based models is challenging. The sparse attention proposed in this paper is quite interesting; in particular, they claim that it can handle sequences up to 8x the length of previous works. - They also provide theoretical results to support their sparse attention. - The experiments on QA and document classification look quite good (compared with the state-of-the-art methods)
Weaknesses
I quite agree that modeling long text is challenging for the current Transformer (Vaswani et al.). One of the inspirations of this work is "locality of reference", which assumes that a token can be derived from its neighboring tokens. However, for a document with a series of paragraphs, a token may sometimes be related to a sentence in another paragraph; I think this is a weakness of the sliding window in BIGBIRD. Did the authors conduct a speed (inference time) comparison between BIGBIRD and other methods?
NIPS | Title
Big Bird: Transformers for Longer Sequences
Abstract
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BIGBIRD, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BIGBIRD drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
1 Introduction
Models based on Transformers [92], such as BERT [22, 63], are wildly successful for a wide variety of Natural Language Processing (NLP) tasks and consequently are mainstay of modern NLP research. Their versatility and robustness are the primary drivers behind the wide-scale adoption of Transformers. The model is easily adapted for a diverse range of sequence based tasks – as a seq2seq model for translation [92], summarization [66], generation [15], etc. or as a standalone encoders for sentiment analysis [84], POS tagging [65], machine reading comprehension [94], etc. – and it is known to vastly outperform previous sequence models like LSTM [37]. The key innovation in Transformers is the introduction of a self-attention mechanism, which can be evaluated in parallel for each token of the input sequence, eliminating the sequential dependency in recurrent neural networks, like LSTM. This parallelism enables Transformers to leverage the full power of modern SIMD hardware accelerators like GPUs/TPUs, thereby facilitating training of NLP models on datasets of unprecedented size. This ability to train on large scale data has led to surfacing of models like BERT [22] and T5 [75], which pretrain transformers on large general purpose corpora and transfer the knowledge to down-stream task. The pretraining has led to significant improvement in low data regime downstream tasks [51] as well as tasks with sufficient data [102] and thus have been a major force behind the ubiquity of transformers in contemporary NLP.
The self-attention mechanism overcomes constraints of RNNs (namely the sequential nature of RNN) by allowing each token in the input sequence to attend independently to every other token in the sequence. This design choice has several interesting repercussions. In particular, the full self-attention have computational and memory requirement that is quadratic in the sequence length. We note that while the corpus can be large, the sequence length, which provides the context in many applications is very limited. Using commonly available current hardware and model sizes, this requirement translates to roughly being able to handle input sequences of length 512 tokens. This reduces its direct applicability to tasks that require larger context, like QA [60], document classification, etc.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
However, while we know that self-attention and Transformers are useful, our theoretical understanding is rudimentary. What aspects of the self-attention model are necessary for its performance? What can we say about the expressivity of Transformers and similar models? Apriori, it was not even clear from the design if the proposed self-attention mechanism was as effective as RNNs. For example, the self-attention does not even obey sequence order as it is permutation equivariant. This concern has been partially resolved, as Yun et al. [105] showed that transformers are expressive enough to capture all continuous sequence to sequence functions with a compact domain. Meanwhile, Pérez et al. [72] showed that the full transformer is Turing Complete (i.e. can simulate a full Turing machine). Two natural questions arise: Can we achieve the empirical benefits of a fully quadratic self-attention scheme using fewer inner-products? Do these sparse attention mechanisms preserve the expressivity and flexibility of the original network?
In this paper, we address both the above questions and produce a sparse attention mechanism that improves performance on a multitude of tasks that require long contexts. We systematically develop BIGBIRD, an attention mechanism whose complexity is linear in the number of tokens (Sec. 2). We take inspiration from graph sparsification methods and understand where the proof for expressiveness of Transformers breaks down when full-attention is relaxed to form the proposed attention pattern. This understanding helped us develop BIGBIRD, which is theoretically as expressive and also empirically useful. In particular, our BIGBIRD consists of three main part: • A set of g global tokens attending on all parts of the sequence. • All tokens attending to a set of w local neighboring tokens. • All tokens attending to a set of r random tokens.
This leads to a high performing attention mechanism scaling to much longer sequence lengths (8x). To summarize, our main contributions are: 1. BIGBIRD satisfies all the known theoretical properties of full transformer (Sec. 3). In particular,
we show that adding extra tokens allows one to express all continuous sequence to sequence functions with only O(n)-inner products. Furthermore, we show that under standard assumptions regarding precision, BIGBIRD is Turing complete. 2. Empirically, we show that the extended context modelled by BIGBIRD benefits variety of NLP tasks. We achieve state of the art results for question answering and document summarization on a number of different datasets. Summary of these results are presented in Sec. 4. 3. Lastly, we introduce a novel application of attention based models where long contexts are beneficial: extracting contextual representations of genomics sequences like DNA. With longer masked LM pretraining, BIGBIRD improves performance on downstream tasks such as promoterregion and chromatin profile prediction (Sec. 5).
1.1 Related Work
There have been a number of interesting attempts, that were aimed at alleviating the quadratic dependency of Transformers, which can broadly categorized into two directions. First line of work embraces the length limitation and develops method around it. Simplest methods in this category just employ sliding window [94], but in general most work fits in the following general paradigm: using some other mechanism select a smaller subset of relevant contexts to feed in the transformer and optionally iterate, i.e. call transformer block multiple time with different contexts each time. Most prominently, SpanBERT [42], ORQA [54], REALM [34], RAG [57] have achieved strong performance for different tasks. However, it is worth noting that these methods often require significant engineering efforts (like back prop through large scale nearest neighbor search) and are hard to train.
The second line of work questions whether full attention is essential and tries to come up with approaches that do not require full attention, thereby reducing the memory and computation requirements. Prominently, Dai et al. [21], Sukhbaatar et al. [83], and Rae et al. [74] have proposed auto-regressive models that work well for left-to-right language modeling but suffer in tasks which require bidirectional context. Child et al. [16] proposed a sparse model that reduces the complexity to O(n√n), and Kitaev et al. [49] further reduced the complexity to O(n log(n)) by using LSH to compute nearest neighbors. Ye et al. [104] proposed binary partitions of the data, whereas Qiu et al. [73] reduced complexity by using block sparsity. Recently, Longformer [8] introduced a localized sliding-window-based mask with a few global masks to reduce computation and extended BERT to longer sequence based tasks. Finally, our work is closely related to and built on the work of Extended Transformers Construction [4]. This work was designed to encode structure in text for transformers. The idea of global tokens was used extensively by them to achieve their goals. Our theoretical work can be seen as providing a justification for the success of these models as well. It is important to note that most of the aforementioned methods are heuristic based and empirically are not as versatile and robust as the original transformer, i.e. the same architecture does not attain SoTA on multiple standard benchmarks. (There is one exception, Longformer, which we include in all our comparisons; see App. E.3 for a more detailed comparison.) Moreover, these approximations do not come with theoretical guarantees.
2 BIGBIRD Architecture
In this section, we describe the BIGBIRD model using the generalized attention mechanism that is used in each layer of a transformer operating on an input sequence X = (x_1, . . . , x_n) ∈ R^{n×d}. The generalized attention mechanism is described by a directed graph D whose vertex set is [n] = {1, . . . , n}. The set of arcs (directed edges) represents the set of inner products that the attention mechanism will consider. Let N(i) denote the out-neighbors set of node i in D; then the i-th output vector of the generalized attention mechanism is defined as
ATTN_D(X)_i = x_i + ∑_{h=1}^{H} σ( Q_h(x_i) K_h(X_{N(i)})^T ) · V_h(X_{N(i)})        (AT)
where Q_h, K_h : R^d → R^m are query and key functions respectively, V_h : R^d → R^d is a value function, σ is a scoring function (e.g. softmax or hardmax) and H denotes the number of heads. Also note X_{N(i)} corresponds to the matrix formed by only stacking {x_j : j ∈ N(i)} and not all the inputs. If D is the complete digraph, we recover the full quadratic attention mechanism of Vaswani et al. [92]. To simplify our exposition, we will operate on the adjacency matrix A of the graph D even though the underlying graph may be sparse. To elaborate, A ∈ [0, 1]^{n×n} with A(i, j) = 1 if query i attends to key j and zero otherwise. For example, when A is the all-ones matrix (as in BERT), it leads to quadratic complexity, since all tokens attend to every other token. This view of self-attention as a fully connected graph allows us to exploit existing graph theory to help reduce its complexity. The problem of reducing the quadratic complexity of self-attention can now be seen as a graph sparsification problem. It is well known that random graphs are expanders and can approximate complete graphs in a number of different contexts, including in their spectral properties [80, 38]. We believe a sparse random graph for the attention mechanism should have two desiderata: small average path length between nodes and a notion of locality, each of which we discuss below.
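To make Eq. (AT) concrete, the following is a minimal NumPy sketch of the generalized attention mechanism for a single head, with a plain softmax standing in for the scoring function σ and linear maps W_q, W_k, W_v standing in for Q_h, K_h, V_h; these names, and the toy graph at the end, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def generalized_attention(X, neighbors, W_q, W_k, W_v):
    """X: (n, d) input sequence; neighbors[i]: list of out-neighbors N(i) in the graph D."""
    n, d = X.shape
    out = np.empty_like(X)
    for i in range(n):
        N_i = neighbors[i]
        q = X[i] @ W_q                    # Q_h(x_i), shape (m,)
        K = X[N_i] @ W_k                  # K_h(X_{N(i)}), shape (|N(i)|, m)
        V = X[N_i] @ W_v                  # V_h(X_{N(i)}), shape (|N(i)|, d)
        scores = softmax(q @ K.T)         # sigma(Q_h(x_i) K_h(X_{N(i)})^T)
        out[i] = X[i] + scores @ V        # residual term x_i + attention output
    return out

# Example: a sparse graph where each token attends to itself and its predecessor.
n, d, m = 6, 4, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
neighbors = [[i] if i == 0 else [i - 1, i] for i in range(n)]
W_q, W_k, W_v = (rng.normal(size=s) for s in [(d, m), (d, m), (d, d)])
print(generalized_attention(X, neighbors, W_q, W_k, W_v).shape)  # (6, 4)
```

With a complete digraph D the loop above performs all n² inner products; the rest of this section replaces D with sparser graphs.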
Let us consider the simplest random graph construction, known as the Erdős-Rényi model, where each edge is independently chosen with a fixed probability. In such a random graph with just Θ̃(n) edges, the shortest path between any two nodes is logarithmic in the number of nodes [17, 43]. As a consequence, such a random graph approximates the complete graph spectrally and its second eigenvalue (of the adjacency matrix) is quite far from the first eigenvalue [9, 10, 6]. This property leads to a rapid mixing time for random walks in the graph, which informally suggests that information can flow fast between any pair of nodes. Thus, we propose a sparse attention where each query attends over r random keys, i.e. A(i, ·) = 1 for r randomly chosen keys (see Fig. 1a). The second viewpoint which inspired the creation of BIGBIRD is that most contexts within NLP and computational biology have data which displays a great deal of locality of reference. In this phenomenon, a great deal of information about a token can be derived from its neighboring tokens. Most pertinently, Clark et al. [19] investigated self-attention models in NLP tasks and concluded that neighboring inner-products are extremely important. The concept of locality, proximity of tokens in linguistic structure, also forms the basis of various linguistic theories such as transformational-generative grammar. In the terminology of graph theory, clustering coefficient is a measure of locality
of connectivity, and is high when the graph contains many cliques or near-cliques (subgraphs that are almost fully interconnected). Simple Erdős-Rényi random graphs do not have a high clustering coefficient [85], but a class of random graphs, known as small world graphs, exhibits a high clustering coefficient [95]. A particular model introduced by Watts and Strogatz [95] is of high relevance to us as it achieves a good balance between average shortest path length and the notion of locality. The generative process of their model is as follows: construct a regular ring lattice, a graph with n nodes each connected to w neighbors, w/2 on each side.
In other words, we begin with a sliding window on the nodes. Then a random subset (k%) of all connections is replaced with a random connection, while the other (100 − k)% of local connections are retained. However, deleting such edges might be inefficient on modern hardware, so we retain them, which does not affect the relevant properties. In summary, to capture
these local structures in the context, in BIGBIRD, we define a sliding window attention, so that during self attention of width w, query at location i attends from i − w/2 to i + w/2 keys. In our notation, A(i, i − w/2 : i + w/2) = 1 (see Fig. 1b). As an initial sanity check, we performed basic experiments to test whether these intuitions are sufficient in getting performance close to BERT like models, while keeping attention linear in the number of tokens. We found that random blocks and local window were insufficient in capturing all the context necessary to compete with the performance of BERT.
The final piece of BIGBIRD is inspired by our theoretical analysis (Sec. 3), which is critical for empirical performance. More specifically, our theory utilizes the importance of “global tokens”: tokens that attend to all tokens in the sequence and to which all tokens attend (see Fig. 1c). These global tokens can be defined in two ways:
• BIGBIRD-ITC: In internal transformer construction (ITC), we make some existing tokens “global”, which attend over the entire sequence. Concretely, we choose a subset G of indices (with g := |G|), such that A(i, :) = 1 and A(:, i) = 1 for all i ∈ G.
• BIGBIRD-ETC: In extended transformer construction (ETC), we include additional “global” tokens such as CLS. Concretely, we add g global tokens that attend to all existing tokens. In our notation, this corresponds to creating a new matrix B ∈ [0, 1]^{(N+g)×(N+g)} by adding g rows to matrix A, such that B(i, :) = 1 and B(:, i) = 1 for all i ∈ {1, 2, . . . , g}, and B(g + i, g + j) = A(i, j) for all i, j ∈ {1, . . . , N}. This adds extra locations to store context and, as we will see in the experiments, improves performance.
The final attention mechanism for BIGBIRD (Fig. 1d) has all three of these properties: queries attend to r random keys, each query attends to w/2 tokens to the left of its location and w/2 to the right of its location, and the mechanism contains g global tokens (the global tokens can be existing tokens or extra added tokens). We provide implementation details in App. D.
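As an illustration, here is a minimal sketch that assembles the three components above into a single 0/1 adjacency matrix for the ITC variant. The parameter values w, r, g and the function name are illustrative assumptions, not the released implementation (which uses a blocked layout; see App. D).

```python
import numpy as np

def bigbird_mask(n, w=3, r=2, g=1, seed=0):
    """0/1 adjacency A combining a sliding window of width w, r random keys per
    query, and g global tokens (ITC-style: the first g positions are made global)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=np.int8)
    for i in range(n):
        # sliding window: query i attends from i - w//2 to i + w//2
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        A[i, lo:hi] = 1
        # r random keys per query
        A[i, rng.choice(n, size=min(r, n), replace=False)] = 1
    # ITC global tokens: the first g tokens attend everywhere and are attended by all
    A[:g, :] = 1
    A[:, :g] = 1
    return A

A = bigbird_mask(n=10)
print(A.sum(), "of", A.size, "entries are non-zero")  # O(n), not O(n^2), edges
```

For the ETC variant, one would instead extend this matrix with g additional all-ones rows and columns, exactly as in the definition of B above.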
3 Theoretical Results about Sparse Attention Mechanism
In this section, we will show that sparse attention mechanisms are as powerful and expressive as full-attention mechanisms in two respects. First, we show that when sparse attention mechanisms are used in a standalone encoder (such as BERT), they are Universal Approximators of sequence to sequence functions in the style of Yun et al. [105]. We note that this property was also explored theoretically in the contemporary work of Yun et al. [106]. Second, unlike [106], we further show that sparse encoder-decoder transformers are Turing Complete (assuming the same conditions defined in [72]). Complementing the above positive results, we also show that moving to a sparse-attention mechanism incurs a cost, i.e. there is no free lunch. In Sec. 3.4, we show lower bounds by exhibiting a natural task where any sufficiently sparse mechanism will require polynomially more layers.
3.1 Notation
The complete Transformer encoder stack is nothing but the repeated application of a single-layer encoder (with independent parameters). We denote the class of such Transformer encoder stacks, defined using the generalized encoder (Sec. 2), by T_D^{H,m,q}, which consists of H heads with head size m, where q is the hidden layer size of the output network and the attention layer is defined by the directed graph D.
The key difference between our proposed attention mechanism and those of Vaswani et al. [92] and Yun et al. [105] is that we add a special token at the beginning of each sequence and assign it a special vector.
We will refer to this as x_0. Therefore our graph D will have vertex set {0} ∪ [n] = {0, 1, 2, . . . , n}. We will assume that this extra node and its respective vector will be dropped at the final output layer of the transformer. To avoid cumbersome notation, we will still treat the transformer as mapping sequences X ∈ R^{n×d} to R^{n×d}. We will also allow the transformer to append position embeddings E ∈ R^{d×n} to matrix X in the input layer.
Finally, we need to define the function class and distance measure for proving the universal approximation property. Let F_CD denote the set of continuous functions f : [0, 1]^{n×d} → R^{n×d} which are continuous with respect to the topology defined by the ℓ_p norm. Recall that for any p ≥ 1, the ℓ_p distance is d_p(f_1, f_2) = ( ∫ ‖f_1(X) − f_2(X)‖_p^p dX )^{1/p}.
3.2 Universal Approximators
Definition 1. The star-graph S centered at 0 is the graph defined on {0, . . . , n}. The neighborhood of all vertices i is N(i) = {0, i} for i ∈ {1, . . . , n} and N(0) = {1, . . . , n}.
Our main theorem is that the sparse attention mechanism defined by any graph containing S is a universal approximator: Theorem 1. Given 1 < p < ∞ and ε > 0, for any f ∈ F_CD, there exists a transformer with sparse-attention, g ∈ T_D^{H,m,q}, such that d_p(f, g) ≤ ε, where D is any graph containing the star graph S.
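The containment condition in Theorem 1 is easy to check mechanically. The following small sketch (an illustration, not part of the proof) builds the star graph S of Definition 1 as an adjacency matrix and verifies that any attention graph in which token 0 is global and every token attends to itself contains S.

```python
import numpy as np

def star_graph(n):
    S = np.zeros((n + 1, n + 1), dtype=np.int8)  # vertices {0, 1, ..., n}
    S[0, 1:] = 1                                  # N(0) = {1, ..., n}
    S[1:, 0] = 1                                  # 0 in N(i) for every i >= 1
    idx = np.arange(1, n + 1)
    S[idx, idx] = 1                               # i in N(i) for every i >= 1
    return S

def contains(D, S):
    return bool(np.all(D >= S))                   # every arc of S is also an arc of D

n = 8
S = star_graph(n)
D = S.copy()                                      # any graph with at least these arcs qualifies
D[3, 5] = 1                                       # extra window/random arcs do not hurt
print(contains(D, S))                             # True
```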
To prove the theorem, we will follow the standard proof structure outlined in [105].
Step 1: Approximate F_CD by piece-wise constant functions. Since f is a continuous function with bounded domain [0, 1)^{n×d}, we will approximate it with a suitable piece-wise constant function. This is accomplished by a suitable partition of the region [0, 1) into a grid of granularity δ to get a discrete set G_δ. Therefore, we can assume that we are dealing with a function f̄ : G_δ → R^{n×d}, where d_p(f, f̄) ≤ ε/3. Step 2: Approximate piece-wise constant functions by modified transformers. This is the key step of the proof, where the self-attention mechanism is used to generate a contextual mapping of the input. Informally, a contextual mapping is a unique code for the pair (X, x_i) consisting of a matrix and one of its columns. Its uniqueness allows the feed-forward layers to use each code to map it to a unique output column.
The main technical challenge is computing the contextual mapping using only the sparse attention mechanism. This was done in [105] using a “selective” shift operator which shifts up entries that are in a specific interval. Key to their proof was the fact that the shift was exactly the range from the largest entry to the smallest entry.
Creating a contextual mapping with a sparse attention mechanism is quite a challenge. In particular, because each query only attends to a few keys, it is not at all clear that sufficient information can be corralled to make a contextual embedding of the entire matrix. To get around this, we develop a sparse shift operator which shifts the entries of the matrices if they lie in a certain range. The exact amount of the shift is controlled by the directed sparse attention graph D. The second key ingredient is the use of an additional global token. By carefully applying the operator to a set of chosen ranges, we will show that each column will contain a unique encoding of the full mapping. Therefore, we can compensate for the loss of inner-products in the self-attention mechanism by using multiple layers and an auxiliary global token.
Step 3: Approximate modified transformers by original Transformers: The final step is to approximate the modified transformers by the original transformer which uses ReLU and softmax.
We provide the full details in App. A.
3.3 Turing Completeness
Transformers are a very general class. In the original paper of Vaswani et al. [92], they were used in both an encoder and a decoder. While the previous section outlined how powerful the encoders alone are, another natural question is what additional power a decoder provides along with an encoder. Pérez et al. [72] showed that the full transformer based on a quadratic attention mechanism is Turing Complete. This result makes one unrealistic assumption, which is that the model works with arbitrary precision. Of course, this is necessary as otherwise Transformers are bounded finite state machines and cannot be Turing Complete.
It is natural to ask if the full attention mechanism is necessary. Or can a sparse attention mechanism also be used to simulate any Turing Machine? We show that this is indeed the case: we can use a sparse encoder and sparse decoder to simulate any Turing Machine.
To use the sparse attention mechanism in the transformer architecture, we need to define a suitable modification where each token only reacts to previous tokens. Unlike the case for BERT, where the entire attention mechanism is applied once, in full transformers the sparse attention mechanism at the decoder side is used token by token. Secondly, the work of Pérez et al. [72] uses each token as a representation of the tape history and uses the full attention to move and retrieve the correct tape symbol. Most of the construction of Pérez et al. [72] goes through for sparse attention, except for their addressing scheme to point back in history (Lemma B.4 in [72]). We show how to simulate this using a sparse attention mechanism and defer the details to App. B.
3.4 Limitations
We demonstrate a natural task which can be solved by the full attention mechanism in O(1) layers. However, under standard complexity-theoretic assumptions, this problem requires Ω̃(n) layers for any sparse attention mechanism with Õ(n) edges (not just BIGBIRD). (Here Õ hides poly-logarithmic factors.) Consider the simple problem of finding the corresponding furthest vector for each vector in the given sequence of length n. Formally,
Task 1. Given n unit vectors {u_1, . . . , u_n}, find f(u_1, . . . , u_n) → (u_{1*}, . . . , u_{n*}) where for a fixed j ∈ [n], we define j* = argmax_k ‖u_k − u_j‖_2^2. Finding vectors that are furthest apart boils down to minimum inner product search in the case of unit vectors. For a full-attention mechanism with appropriate queries and keys, this task is very easy, as we can evaluate all pair-wise inner products.
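A minimal sketch of the full-attention argument (illustrative only): for unit vectors, ‖u_k − u_j‖² = 2 − 2⟨u_k, u_j⟩, so a single n×n matrix of inner products — exactly what one full quadratic attention layer evaluates — answers every query at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 8
U = rng.normal(size=(n, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # n unit vectors u_1, ..., u_n

G = U @ U.T                                     # all n^2 pairwise inner products <u_j, u_k>
furthest = np.argmin(G, axis=1)                 # j* = argmax_k ||u_k - u_j||^2 = argmin_k <u_j, u_k>
print(furthest)                                 # index of the furthest vector for each u_j
```

A sparse-attention layer with Õ(n) edges evaluates only Õ(n) of these n² inner products, which is where the hardness argument below bites.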
The impossibility for sparse attention follows from hardness results stemming from the Orthogonal Vectors Conjecture (OVC) [1, 2, 7, 97]. The OVC is a widely used assumption in fine-grained complexity. Informally, it states that one cannot determine if the minimum inner product among n boolean vectors is 0 in subquadratic time. In App. C, we show a reduction using OVC to show that if a transformer g ∈ T_D^{H=1,m=2d,q=0} for any sparse directed graph D can evaluate Task 1, it can solve the orthogonal vectors problem. Proposition 1. There exists a single-layer full self-attention g ∈ T^{H=1,m=2d,q=0} that can evaluate Task 1, i.e. g(u_1, . . . , u_n) = [u_{1*}, . . . , u_{n*}], but any sparse-attention graph D with Õ(n) edges (i.e. inner product evaluations) would require Ω̃(n^{1−o(1)}) layers. We give a formal proof of this fact in App. C.
4 Experiments: Natural Language Processing
In this section our goal is to showcase the benefits of modeling longer input sequences for NLP tasks, for which we select three representative tasks. We begin with basic masked language modeling (MLM; Devlin et al. 22) to check if better contextual representations can be learnt by utilizing longer contiguous sequences. Next, we consider QA with supporting evidence, for which the capability to handle longer sequences would allow us to retrieve more evidence using crude systems like TF-IDF/BM25. Finally, we tackle long document classification where discriminating information may not be located in the first 512 tokens. Below we summarize the results for BIGBIRD using sequence length 4096¹, while we defer all other setup details, including computational resources, batch size, and step size, to App. E.
Pretraining and MLM We follow [22, 63] to create base and large versions of BIGBIRD and pretrain them using the MLM objective. This task involves predicting a random subset of tokens which have been masked out. We use four standard data-sets for pretraining (listed in App. E.1, Tab. 9), warm-starting from the public RoBERTa checkpoint². We compare performance in predicting the masked out tokens in terms of bits per character, following [8]. As seen in App. E.1, Tab. 10, both BIGBIRD and Longformer perform better than limited-length RoBERTa, with BIGBIRD-ETC performing the best. We note that we trained our models on a reasonable 16GB memory/chip with a batch size of 32–64. Our memory efficiency is due to the efficient blocking and sparsity structure of the sparse attention mechanism described in Sec. 2.
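For concreteness, here is a minimal sketch of one common way to compute bits per character for an MLM, under the assumption that BPC is the total negative log2-likelihood of the masked tokens divided by the number of characters those tokens span; the probabilities and token strings below are illustrative stand-ins, not values from our runs.

```python
import numpy as np

def bits_per_character(token_probs, token_strings):
    """token_probs[i]: model probability of the i-th masked token;
    token_strings[i]: the surface string of that token."""
    total_bits = -np.sum(np.log2(token_probs))
    total_chars = sum(len(t) for t in token_strings)
    return total_bits / total_chars

probs = np.array([0.42, 0.10, 0.73])          # p(correct token) at masked positions
tokens = ["attention", "sparse", "##bird"]     # masked-out wordpieces (illustrative)
print(round(bits_per_character(probs, tokens), 3))
```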
¹ Code available at http://goo.gle/bigbird-transformer  ² https://github.com/pytorch/fairseq/tree/master/examples/roberta
Question Answering (QA) We considered the following four challenging datasets:
1. Natural Questions [52]: For the given question, find a short answer span (SA) from the given evidence, as well as highlight the paragraph from the given evidence containing information about the correct answer (LA).
2. HotpotQA-distractor [101]: Similar to Natural Questions, it requires finding the answer (Ans) as well as the supporting facts (Sup) over different documents needed for multi-hop reasoning from the given evidence.
3. TriviaQA-wiki [41]: We need to provide an answer for the given question using the provided Wikipedia evidence; however, the answer might not be present in the given evidence. On a smaller verified subset of questions, the given evidence is guaranteed to contain the answer. Nevertheless, we model the answer as a span selection problem in this case as well.
4. WikiHop [96]: Choose the correct option in multiple-choice questions (MCQ) by aggregating information spread across multiple documents given in the evidence.
As these tasks are very competitive, multiple highly engineered systems have been designed specifically for each dataset, conforming to the respective output formats. For a fair comparison, we had to use some additional regularization for training BIGBIRD, details of which are provided in App. E.2 along with the exact architecture description. We experiment using the base sized model and select the best configuration on the development set for each dataset (as reported in Tab. 2). We can see that BIGBIRD-ETC, with expanded global tokens, consistently outperforms all other models. Thus, we chose this configuration to train a large sized model to be used for evaluation on the hidden test set.
In Tab. 3, we compare the BIGBIRD-ETC model to the top-3 entries from the leaderboard excluding BIGBIRD. One can clearly see the importance of using longer context, as both Longformer and BIGBIRD outperform models with smaller contexts. Also, it is worth noting that the BIGBIRD submission is a single model, whereas the other top-3 entries for Natural Questions are ensembles, which might explain the slightly lower accuracy in exact answer phrase selection. Classification We experiment on datasets of different lengths and contents, specifically various document classification and GLUE tasks. Following BERT, we used one layer with cross entropy loss on top of the first [CLS] token. We see that the gains of using BIGBIRD are more significant when we have longer documents and fewer training examples. For instance, using the base sized model, BIGBIRD improves the state of the art for the Arxiv dataset by about 5 percentage points. On the Patents dataset, there is an improvement over using simple BERT/RoBERTa, but given the large size of the training data the improvement over SoTA (which is not BERT based) is not significant. Note that this performance gain is not seen for the much smaller IMDb dataset. Along with the experimental setup details, we present detailed results in App. E.4, which show competitive performance.
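As a concrete illustration of the classification head described above (a sketch under assumed shapes, not the training code): take the encoder output for a document, read off the first [CLS] position, and apply a single linear layer followed by cross entropy.

```python
import numpy as np

def cls_cross_entropy(encoder_output, W, b, label):
    cls = encoder_output[0]                      # representation of the first [CLS] token
    logits = cls @ W + b                         # one linear layer, shape (num_classes,)
    logits = logits - logits.max()               # numerically stable log-softmax
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]                     # cross entropy for the true class

rng = np.random.default_rng(0)
seq_len, d, num_classes = 4096, 768, 5           # illustrative shapes
H = rng.normal(size=(seq_len, d))                # stand-in for BIGBIRD encoder output
W, b = rng.normal(size=(d, num_classes)) * 0.01, np.zeros(num_classes)
print(cls_cross_entropy(H, W, b, label=2))
```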
4.1 Encoder-Decoder Tasks
For an encoder-decoder setup, one can easily see that both the encoder and the decoder suffer from quadratic complexity due to the full self-attention. We focus on introducing the sparse attention mechanism of BIGBIRD only at the encoder side. This is because, in practical generative applications, the length of the output sequence is typically small compared to the input. For example, for text summarization, we see in realistic scenarios (c.f. App. E.5 Tab. 18) that the median output sequence length is ∼200 whereas the input sequence's median length is > 3000. For such applications, it is more efficient to use the sparse attention mechanism for the encoder and full self-attention for the decoder.
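A back-of-the-envelope sketch of this trade-off, using the median lengths quoted above; the values of w, r, g are illustrative assumptions for the example, not the tuned hyperparameters.

```python
n_in, n_out = 3000, 200          # median input / output lengths from App. E.5
w, r, g = 64, 3, 2               # illustrative window, random, and global sizes

full_encoder   = n_in * n_in                      # full self-attention over the input
sparse_encoder = n_in * (w + r) + 2 * g * n_in    # window + random + global tokens
full_decoder   = n_out * n_out + n_out * n_in     # causal self-attention + cross-attention

print(f"full encoder self-attention : {full_encoder:>10,} inner products")
print(f"sparse encoder (BIGBIRD)    : {sparse_encoder:>10,} inner products")
print(f"full decoder (self + cross) : {full_decoder:>10,} inner products")
```

The full encoder dominates the cost by more than an order of magnitude, which is why sparsifying only the encoder recovers most of the savings.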
Summarization Document summarization is the task of creating a short and accurate summary of a text document. We used three long document datasets for testing our model, details of which are mentioned in Tab. 18. In this paper we focus on abstractive summarization of long documents, where using a longer contextual encoder should improve performance. The reasons are two-fold: First, the salient content can be evenly distributed in the long document, not just in the first 512 tokens, and this is by design in the BigPatents dataset [78]. Second, longer documents exhibit a richer discourse structure and summaries are considerably more abstractive, so observing more context helps. As has been pointed out recently [76, 108], pretraining helps in generative tasks, so we warm start from our general purpose MLM pretraining on base-sized models as well as utilizing state-of-the-art summarization specific pretraining from Pegasus [108] on large-sized models. The results of training the BIGBIRD sparse encoder along with a full decoder on these long document datasets are presented in Tab. 4. We can clearly see that modeling longer context brings significant improvement. Along with hyperparameters, we also present results on shorter but more widespread datasets in App. E.5, which show that using sparse attention does not hamper performance either.
5 Experiments: Genomics
There has been a recent upsurge in using deep learning for genomics data [87, 107, 13], which has resulted in improved performance on several biologically significant tasks such as promoter site prediction [71], methylation analysis [55], predicting functional effects of non-coding variants [110], etc. These approaches consume DNA sequence fragments as inputs, and therefore we believe the longer input sequence handling capability of BIGBIRD would be beneficial, as many functional effects in DNA are highly non-local [12]. Furthermore, taking inspiration from NLP, we learn powerful contextual representations for DNA fragments utilizing abundant unlabeled data (e.g. the human reference genome, the Saccharomyces Genome Database) via MLM pretraining. Next, we showcase that our long-input BIGBIRD along with the proposed pretraining significantly improves performance in two downstream tasks. The detailed experimental setup for the two tasks is provided in App. F.
Pre-training and MLM As explored in Liang [58], instead of operating on base pairs, we propose to first segment DNA into tokens so as to further increase the context length (App. F, Fig. 7). In particular, we build a byte-pair encoding [50] table of size 32K for the DNA sequence, with each token representing 8.78 base pairs on average. We learn contextual representations of these tokens on the human reference genome (GRCh37)³ using the MLM objective. We then report the bits per character (BPC) on a held-out set in Tab. 5. We find that attention based contextual representation of DNA does improve BPC, which is further improved by using longer context.
Promoter Region Prediction A promoter is a DNA region typically located upstream of a gene, which is the site of transcription initiation. Multiple methods have been proposed to identify the promoter regions in a given DNA sequence [100, 59, 11, 99, 71], as it is an important first step in understanding gene regulation. The corresponding machine learning task is to classify a given DNA fragment as a promoter or non-promoter sequence. We use the dataset compiled by Oubounyt et al. [71], which was built from the Eukaryotic Promoter Database (EPDnew) [24]⁴. We finetuned the pretrained BIGBIRD model from above using the training data and report F1 on the test dataset. We compare our results to the previously reported best method in Tab. 6. We see that BIGBIRD achieves nearly perfect accuracy, with a 5% jump over the previous best reported accuracy.
Chromatin-Profile Prediction Non-coding regions of DNA do not code for proteins. The majority of diseases and other trait-associated single-nucleotide polymorphisms are correlated with non-coding genomic variations [110, 46]. Thus, understanding the functional effects of non-coding regions of DNA is a very important task. An important step in this process, as defined by Zhou and Troyanskaya [110], is to predict large-scale chromatin profiling from non-coding genomic sequence. To this effect, DeepSea [110] compiled 919 chromatin profiles of 2.4M non-coding variants from the Encyclopedia of DNA Elements (ENCODE)⁵ and Roadmap Epigenomics⁶ projects. The corresponding ML task is to predict, for a given non-coding region of DNA, these 919 chromatin profiles, including 690 transcription factor (TF) binding profiles for 160 different TFs, 125 DNase I sensitivity (DHS) profiles and 104 histone-mark (HM) profiles. We jointly learn 919 binary classifiers to predict these functional effects from sequences of DNA fragments. On held-out chromosomes, we compare AUC with the baselines in Tab. 7 and see that we significantly improve performance on the harder HM task, which is known to have longer-range correlations [27] than the others.
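A minimal sketch of the joint 919-way binary prediction described above; the shapes, initialization, and use of a single pooled feature vector are illustrative assumptions, not the exact architecture.

```python
import numpy as np

def multilabel_bce(features, W, b, targets):
    """features: (d,) representation of a DNA fragment; targets: (919,) 0/1 labels."""
    logits = features @ W + b                       # one logit per chromatin profile
    probs = 1.0 / (1.0 + np.exp(-logits))           # 919 independent sigmoid classifiers
    eps = 1e-12
    bce = -(targets * np.log(probs + eps) + (1 - targets) * np.log(1 - probs + eps))
    return bce.mean()

rng = np.random.default_rng(0)
d, num_profiles = 768, 919                           # 690 TF + 125 DHS + 104 HM profiles
x = rng.normal(size=d)                               # stand-in for a pooled BIGBIRD feature
W, b = rng.normal(size=(d, num_profiles)) * 0.01, np.zeros(num_profiles)
y = rng.integers(0, 2, size=num_profiles)            # illustrative 0/1 targets
print(multilabel_bce(x, W, b, y))
```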
6 Conclusion
We propose BIGBIRD: a sparse attention mechanism that is linear in the number of tokens. BIGBIRD satisfies a number of theoretical results: it is a universal approximator of sequence to sequence functions and is also Turing complete. Theoretically, we use the power of extra global tokens to preserve the expressive power of the model. We complement these results by showing that moving to a sparse attention mechanism does incur a cost. Empirically, BIGBIRD gives state-of-the-art performance on a number of NLP tasks such as question answering and long document classification. We further introduce an attention based contextual language model for DNA and fine-tune it for downstream tasks such as promoter region prediction and predicting the effects of non-coding variants.
³ https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.13/  ⁴ https://epd.epfl.ch/human/human_database.php?db=human  ⁵ https://www.encodeproject.org/  ⁶ http://www.roadmapepigenomics.org/
Broader Impacts Inference Efficiency: Quadratic attention mechanisms cannot be scaled to capture the long range dependencies that exist in natural text and other datasets. Moreover, there is a growing concern in the ML community about the resource and energy requirements of training large scale systems [81]. We show that sparse, computationally efficient systems like BIGBIRD can capture long range dependencies in an energy efficient way without losing expressive power.
Wide Applicability: Beyond the impact of our model on NLP tasks that require longer context, our proposed contextualized representations of DNA using attention based models should help in better modeling the effects of longer sequences of DNA. Our effort continues a long line of research that bridges the gap between computational models designed for NLP and those for computational biology. | 1. What is the focus and contribution of the paper on sequence attention mechanisms?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and experimental results?
3. What are the weaknesses of the paper, especially regarding its task selection and potential limitations in short text performances? | Summary and Contributions
The authors point out that full self-attention has a computational and memory requirement that is quadratic in the sequence length. They propose a sparse attention mechanism that improves performance on a multitude of tasks requiring long contexts. Meanwhile, they prove that the proposed BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model.
Strengths
The overall motivation and the theoretical part of the article are detailed and clear. The proposed sparse attention mechanism improves performance on a multitude of tasks that require long contexts and satisfies all the known theoretical properties of the full transformer. Moreover, the experiments demonstrating the model's performance are also sufficient.
Weaknesses
The authors select three representative NLP tasks to showcase the benefits of modeling longer input sequences. However, there is no experiment showing whether the model performs as well as other models on short texts.
NIPS | Title
Big Bird: Transformers for Longer Sequences
Abstract
Transformer-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose BIGBIRD, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS) that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BIGBIRD drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
1 Introduction
Models based on Transformers [92], such as BERT [22, 63], are wildly successful for a wide variety of Natural Language Processing (NLP) tasks and consequently are a mainstay of modern NLP research. Their versatility and robustness are the primary drivers behind the wide-scale adoption of Transformers. The model is easily adapted for a diverse range of sequence based tasks – as a seq2seq model for translation [92], summarization [66], generation [15], etc. or as a standalone encoder for sentiment analysis [84], POS tagging [65], machine reading comprehension [94], etc. – and it is known to vastly outperform previous sequence models like LSTM [37]. The key innovation in Transformers is the introduction of a self-attention mechanism, which can be evaluated in parallel for each token of the input sequence, eliminating the sequential dependency of recurrent neural networks like LSTM. This parallelism enables Transformers to leverage the full power of modern SIMD hardware accelerators like GPUs/TPUs, thereby facilitating training of NLP models on datasets of unprecedented size. This ability to train on large scale data has led to the emergence of models like BERT [22] and T5 [75], which pretrain transformers on large general purpose corpora and transfer the knowledge to down-stream tasks. The pretraining has led to significant improvement in low data regime downstream tasks [51] as well as tasks with sufficient data [102] and has thus been a major force behind the ubiquity of transformers in contemporary NLP.
The self-attention mechanism overcomes constraints of RNNs (namely the sequential nature of RNNs) by allowing each token in the input sequence to attend independently to every other token in the sequence. This design choice has several interesting repercussions. In particular, the full self-attention has a computational and memory requirement that is quadratic in the sequence length. We note that while the corpus can be large, the sequence length, which provides the context in many applications, is very limited. Using commonly available current hardware and model sizes, this requirement translates to roughly being able to handle input sequences of length 512 tokens. This reduces its direct applicability to tasks that require larger context, like QA [60], document classification, etc.
However, while we know that self-attention and Transformers are useful, our theoretical understanding is rudimentary. What aspects of the self-attention model are necessary for its performance? What can we say about the expressivity of Transformers and similar models? Apriori, it was not even clear from the design if the proposed self-attention mechanism was as effective as RNNs. For example, the self-attention does not even obey sequence order as it is permutation equivariant. This concern has been partially resolved, as Yun et al. [105] showed that transformers are expressive enough to capture all continuous sequence to sequence functions with a compact domain. Meanwhile, Pérez et al. [72] showed that the full transformer is Turing Complete (i.e. can simulate a full Turing machine). Two natural questions arise: Can we achieve the empirical benefits of a fully quadratic self-attention scheme using fewer inner-products? Do these sparse attention mechanisms preserve the expressivity and flexibility of the original network?
In this paper, we address both the above questions and produce a sparse attention mechanism that improves performance on a multitude of tasks that require long contexts. We systematically develop BIGBIRD, an attention mechanism whose complexity is linear in the number of tokens (Sec. 2). We take inspiration from graph sparsification methods and understand where the proof for expressiveness of Transformers breaks down when full-attention is relaxed to form the proposed attention pattern. This understanding helped us develop BIGBIRD, which is theoretically as expressive and also empirically useful. In particular, our BIGBIRD consists of three main part: • A set of g global tokens attending on all parts of the sequence. • All tokens attending to a set of w local neighboring tokens. • All tokens attending to a set of r random tokens.
This leads to a high performing attention mechanism scaling to much longer sequence lengths (8x). To summarize, our main contributions are: 1. BIGBIRD satisfies all the known theoretical properties of full transformer (Sec. 3). In particular,
we show that adding extra tokens allows one to express all continuous sequence to sequence functions with only O(n)-inner products. Furthermore, we show that under standard assumptions regarding precision, BIGBIRD is Turing complete. 2. Empirically, we show that the extended context modelled by BIGBIRD benefits variety of NLP tasks. We achieve state of the art results for question answering and document summarization on a number of different datasets. Summary of these results are presented in Sec. 4. 3. Lastly, we introduce a novel application of attention based models where long contexts are beneficial: extracting contextual representations of genomics sequences like DNA. With longer masked LM pretraining, BIGBIRD improves performance on downstream tasks such as promoterregion and chromatin profile prediction (Sec. 5).
1.1 Related Work
There have been a number of interesting attempts, that were aimed at alleviating the quadratic dependency of Transformers, which can broadly categorized into two directions. First line of work embraces the length limitation and develops method around it. Simplest methods in this category just employ sliding window [94], but in general most work fits in the following general paradigm: using some other mechanism select a smaller subset of relevant contexts to feed in the transformer and optionally iterate, i.e. call transformer block multiple time with different contexts each time. Most prominently, SpanBERT [42], ORQA [54], REALM [34], RAG [57] have achieved strong performance for different tasks. However, it is worth noting that these methods often require significant engineering efforts (like back prop through large scale nearest neighbor search) and are hard to train.
Second line of work questions if full attention is essential and have tried to come up with approaches that do not require full attention, thereby reducing the memory and computation requirements. Prominently, Dai et al. [21], Sukhbaatar et al. [83], Rae et al. [74] have proposed auto-regresive models that work well for left-to-right language modeling but suffer in tasks which require bidirectional context. Child et al. [16] proposed a sparse model that reduces the complexity to O(n p n), Kitaev et al. [49] further reduced the complexity to O(n log(n)) by using LSH to compute nearest neighbors. Ye et al. [104] proposed binary partitions of the data where as Qiu et al. [73] reduced complexity by using block sparsity. Recently, Longformer [8] introduced a localized sliding window based mask with few global mask to reduce computation and extended BERT to longer sequence based tasks. Finally, our work is closely related to and built on the work of Extended Transformers Construction [4]. This work was designed to encode structure in text for transformers. The idea of global tokens was used extensively by them to achieve their goals. Our theoretical work can be seen as providing a justification for the success of these models as well. It is important to note that most of the
aforementioned methods are heuristic based and empirically are not as versatile and robust as the original transformer, i.e. the same architecture do not attain SoTA on multiple standard benchmarks. (There is one exception of Longformer which we include in all our comparisons, see App. E.3 for a more detailed comparison). Moreover, these approximations do not come with theoretical guarantees.
2 BIGBIRD Architecture
In this section, we describe the BIGBIRD model using the generalised attention mechanism that is used in each layer of transformer operating on an input sequence X = (x1, ...,xn) 2 Rn⇥d. The generalized attention mechanism is described by a directed graph D whose vertex set is [n] = {1, . . . , n}. The set of arcs (directed edges) represent the set of inner products that the attention mechanism will consider. Let N(i) denote the out-neighbors set of node i in D, then the ith output vector of the generalized attention mechanism is defined as
ATTND(X)i = xi + HX
h=1
⇣ Qh(xi)Kh(XN(i)) T ⌘ · Vh(XN(i)) (AT)
where Qh,Kh : Rd ! Rm are query and key functions respectively, Vh : Rd ! Rd is a value function, is a scoring function (e.g. softmax or hardmax) and H denotes the number of heads. Also note XN(i) corresponds to the matrix formed by only stacking {xj : j 2 N(i)} and not all the inputs. If D is the complete digraph, we recover the full quadratic attention mechanism of Vaswani et al. [92]. To simplify our exposition, we will operate on the adjacency matrix A of the graph D even though the underlying graph maybe sparse. To elaborate, A 2 [0, 1]n⇥n with A(i, j) = 1 if query i attends to key j and is zero otherwise. For example, when A is the ones matrix (as in BERT), it leads to quadratic complexity, since all tokens attend on every other token. This view of self-attention as a fully connected graph allows us to exploit existing graph theory to help reduce its complexity. The problem of reducing the quadratic complexity of self-attention can now be seen as a graph sparsification problem. It is well-known that random graphs are expanders and can approximate complete graphs in a number of different contexts including in their spectral properties [80, 38]. We believe sparse random graph for attention mechanism should have two desiderata: small average path length between nodes and a notion of locality, each of which we discuss below.
Let us consider the simplest random graph construction, known as Erdős-Rényi model, where each edge is independently chosen with a fixed probability. In such a random graph with just ⇥̃(n) edges, the shortest path between any two nodes is logarithmic in the number of nodes [17, 43]. As a consequence, such a random graph approximates the complete graph spectrally and its second eigenvalue (of the adjacency matrix) is quite far from the first eigenvalue [9, 10, 6]. This property leads to a rapid mixing time for random walks in the grpah, which informally suggests that information can flow fast between any pair of nodes. Thus, we propose a sparse attention where each query attends over r random number of keys i.e. A(i, ·) = 1 for r randomly chosen keys (see Fig. 1a). The second viewpoint which inspired the creation of BIGBIRD is that most contexts within NLP and computational biology have data which displays a great deal of locality of reference. In this phenomenon, a great deal of information about a token can be derived from its neighboring tokens. Most pertinently, Clark et al. [19] investigated self-attention models in NLP tasks and concluded that that neighboring inner-products are extremely important. The concept of locality, proximity of tokens in linguistic structure, also forms the basis of various linguistic theories such as transformationalgenerative grammar. In the terminology of graph theory, clustering coefficient is a measure of locality
of connectivity, and is high when the graph contains many cliques or near-cliques (subgraphs that are almost fully interconnected). Simple Erdős-Rényi random graphs do not have a high clustering coefficient [85], but a class of random graphs, known as small world graphs, exhibit high clustering coefficient [95]. A particular model introduced by Watts and Strogatz [95] is of high relevance to us as it achieves a good balance between average shortest path and the notion of locality. The generative process of their model is as follows: Construct a regular ring lattice, a graph with n nodes each connected to w neighbors, w/2 on each side.
In other words we begin with a sliding window on the nodes. Then a random subset (k%) of all connections is replaced with a random connection. The other (100 - k)% local connections are retained. However, deleting such random edges might be inefficient on modern hardware, so we retain it, which will not affect its properties. In summary, to capture
these local structures in the context, in BIGBIRD, we define a sliding window attention, so that during self attention of width w, query at location i attends from i w/2 to i+ w/2 keys. In our notation, A(i, i w/2 : i+w/2) = 1 (see Fig. 1b). As an initial sanity check, we performed basic experiments to test whether these intuitions are sufficient in getting performance close to BERT like models, while keeping attention linear in the number of tokens. We found that random blocks and local window were insufficient in capturing all the context necessary to compete with the performance of BERT.
The final piece of BIGBIRD is inspired from our theoretical analysis (Sec. 3), which is critical for empirical performance. More specifically, our theory utilizes the importance of “global tokens” (tokens that attend to all tokens in the sequence and to whom all tokens attend to (see Fig. 1c). These global tokens can be defined in two ways: • BIGBIRD-ITC: In internal transformer construction (ITC), we make some existing tokens “global”,
which attend over the entire sequence. Concretely, we choose a subset G of indices (with g := |G|), such that A(i, :) = 1 and A(:, i) = 1 for all i 2 G. • BIGBIRD-ETC: In extended transformer construction (ETC), we include additional “global” tokens such as CLS. Concretely, we add g global tokens that attend to all existing tokens. In our notation, this corresponds to creating a new matrix B 2 [0, 1](N+g)⇥(N+g) by adding g rows to matrix A, such that B(i, :) = 1, and B(:, i) = 1 for all i 2 {1, 2, . . . g}, and B(g + i, g + j) = A(i, j)8 i, j 2 {1, . . . , N}. This adds extra location to store context and as we will see in the experiments improves performance.
The final attention mechanism for BIGBIRD (Fig. 1d) has all three of these properties: queries attend to r random keys, each query attends to w/2 tokens to the left of its location and w/2 to the right of its location and they contain g global tokens (The global tokens can be from existing tokens or extra added tokens). We provide implementation details in App. D.
3 Theoretical Results about Sparse Attention Mechanism
In this section, we will show that that sparse attention mechanisms are as powerful and expressive as full-attention mechanisms in two respects. First, we show that when sparse attention mechanisms are used in a standalone encoder (such as BERT), they are Universal Approximators of sequence to sequence functions in the style of Yun et al. [105]. We note that this property was also explored theoretically in contemporary work Yun et al. [106]. Second, unlike [106], we further show that sparse encoder-decoder transformers are Turing Complete (assuming the same conditions defined in [72]). Complementing the above positive results, we also show that moving to a sparse-attention mechanism incurs a cost, i.e. there is no free lunch. In Sec. 3.4, we show lower bounds by exhibiting a natural task where any sufficiently sparse mechanism will require polynomially more layers.
3.1 Notation
The complete Transformer encoder stack is nothing but the repeated application of a single-layer encoder (with independent parameters). We denote class of such Transformer encoders stack, defined using generalized encoder (Sec. 2), by T H,m,qD which consists of H-heads with head size m and q is the hidden layer size of the output network, and the attention layer is defined by the directed graph D.
The key difference between our proposed attention mechanism to that of Vaswani et al. [92], Yun et al. [105] is that we add a special token at the beginning of each sequence and assign it a special vector.
We will refer to this as x0. Therefore our graph D will have vertex set {0} [ [n] = {0, 1, 2, . . . , n}. We will assume that this extra node and its respective vector will be dropped at the final output layer of transformer. To avoid cumbersome notation, we will still treat transformer as mapping sequences X 2 Rn⇥d to Rn⇥d. We will also allow the transformer to append position embeddings E 2 Rd⇥n to matrix X in the input layer.
Finally, we need to define the function class and distance measure for proving universal approximation property. Let FCD denote the set of continuous functions f : [0, 1]n⇥d ! Rn⇥d which are continuous with respect to the topology defined by `p norm. Recall for any p 1, the `p distance is dp(f1, f2) = R
kf1(X) f2(X)kppdX 1/p.
3.2 Universal Approximators
Definition 1. The star-graph S centered at 0 is the graph defined on {0, . . . , n}. The neighborhood of all vertices i is N(i) = {0, i} for i 2 {1 . . . n} and N(0) = {1, . . . n}.
Our main theorem is that the sparse attention mechanism defined by any graph containing S is a universal approximator: Theorem 1. Given 1 < p < 1 and ✏ > 0, for any f 2 FCD, there exists a transformer with sparse-attention, g 2 T H,m,qD such that dp(f, g) ✏ where D is any graph containing star graph S.
To prove the theorem, we will follow the standard proof structure outlined in [105].
Step 1: Approximate FCD by piece-wise constant functions. Since f is a continuous function with bounded domain [0, 1)n⇥d, we will approximate it with a suitable piece-wise constant function. This is accomplished by a suitable partition of the region [0, 1) into a grid of granularity to get a discrete set G . Therefore, we can assume that we are dealing with a function f̄ : G ! Rn⇥d, where dp(f, f̄) ✏3 . Step 2: Approximate piece-wise constant functions by modified transformers. This is the key step of the proof where the self-attention mechanism is used to generate a contextual-mapping of the input. Informally, a contextual mapping is a unique code for the pair consisting of a matrix (X,xi) and a column. Its uniqueness allows the Feed forward layers to use each code to map it to a unique output column.
The main technical challenge is computing the contextual mapping using only sparse attention mechanism. This was done in [105] using a “selective” shift operator which shift up entries that are in a specific interval. Key to their proof was the fact that the shift, was exactly the range of the largest entry to the smallest entry.
Creating a contextual mapping with a sparse attention mechanism is quite a challenge. In particular, because each query only attends to a few keys, it is not at all clear that sufficient information can be corralled to make a contextual embedding of the entire matrix. To get around this, we develop a sparse shift operator which shifts the entries of the matrices if they lie in a certain range. The exact amount of the shift is controlled by the directed sparse attention graphg D. The second key ingredient is the use of additional global token. By carefully applying the operator to a set of chosen ranges, we will show that each column will contain a unique mapping of the full mapping. Therefore, we can augment the loss of inner-products in the self attention mechanism by using multiple layers and an auxiliary global token.
Step 3: Approximate modified transformers by original Transformers: The final step is to approximate the modified transformers by the original transformer which uses ReLU and softmax.
We provide the full details in App. A.
3.3 Turing Completeness
Transformers are a very general class. In the original paper of Vaswani et al. [92], they were used in both an encoder and a decoder. While the previous section outlined how powerful just the encoders were, another natural question is to ask what the additional power of both a decoder along with an encoder is? Pérez et al. [72] showed that the full transformer based on a quadratic attention mechanism is Turing Complete. This result makes one unrealistic assumption, which is that the model works on arbitrary precision model. Of course, this is necessary as otherwise, Transformers are bounded finite state machines and cannot be Turing Complete.
It is natural to ask if the full attention mechanism is necessary. Or can a sparse attention mechanism also be used to simulate any Turing Machine? We show that this is indeed the case: we can use a sparse encoder and sparse decoder to simulate any Turing Machine.
To use the sparse attention mechanism in the transformer architecture, we need to define a suitable modification where each token only reacts to previous tokens. Unlike the case for BERT, where the entire attention mechanism is applied once, in full transformers, the sparse attention mechanism at decoder side is used token by token. Secondly the work of Pérez et al. [72], uses each token as a representation of the tape history and uses the full attention to move and retrieve the correct tape symbol. Most of the construction of Pérez et al. [72] goes through for sparse attentions, except for their addressing scheme to point back in history (Lemma B.4 in [72]). We show how to simulate this using a sparse attention mechanism and defer the details to App. B.
3.4 Limitations
We demonstrate a natural task which can be solved by the full attention mechanism in O(1)-layers. However, under standard complexity theoretic assumptions, this problem requires ⌦̃(n)-layers for any sparse attention layers with Õ(n) edges (not just BIGBIRD). (Here Õ hides poly-logarthmic factors). Consider the simple problem of finding the corresponding furthest vector for each vector in the given sequence of length n. Formally,
Task 1. Given n unit vectors {u1, . . . , un}, find f(u1, . . . , un) ! (u1⇤ , . . . , un⇤) where for a fixed j 2 [n], we define j⇤ = argmaxk kuk ujk22. Finding vectors that are furthest apart boils down to minimize inner product search in case of unit vectors. For a full-attention mechanism with appropriate query and keys, this task is very easy as we can evaluate all pair-wise inner products.
The impossibility for sparse-attention follows from hardness results stemming from Orthogonal Vector Conjecture(OVC) [1, 2, 7, 97]. The OVC is a widely used assumption in fine-grained complexity. Informally, it states that one cannot determine if the minimum inner product among n boolean vectors is 0 in subquadratic time. In App. C, we show a reduction using OVC to show that if a transformer g 2 T H=1,m=2d,q=0D for any sparse directed graph D can evaluate the Task 1, it can solve the orthogonal vector problem. Proposition 1. There exists a single layer full self-attention g 2 T H=1,m=2d,q=0 that can evaluate Task 1, i.e. g(u1, ..., un) = [u1⇤ , . . . , un⇤ ], but for any sparse-attention graph D with Õ(n) edges (i.e. inner product evaluations), would require ⌦̃(n1 o(1)) layers. We give a formal proof of this fact in App. C.
4 Experiments: Natural Language Processing
In this section our goal is to showcase benefits of modeling longer input sequence for NLP tasks, for which we select three representative tasks. We begin with basic masked language modeling (MLM; Devlin et al. 22) to check if better contextual representations can be learnt by utilizing longer contiguous sequences. Next, we consider QA with supporting evidence, for which capability to handle longer sequence would allow us to retrieve more evidence using crude systems like TF-IDF/BM25. Finally, we tackle long document classification where discriminating information may not be located in first 512 tokens. Below we summarize the results for BIGBIRD using sequence length 40961, while we defer all other setup details including computational resources, batch size, step size, to App. E.
Pretraining and MLM We follow [22, 63] to create base and large versions of BIGBIRD and pretrain it using MLM objective. This task involves predicting a random subset of tokens which have been masked out. We use four standard data-sets for pretraining (listed in App. E.1, Tab. 9), warm-starting from the public RoBERTa checkpoint2. We compare performance in predicting the masked out tokens in terms of bits per character, following [8]. As seen in App. E.1, Tab. 10, both BIGBIRD and Longformer perform better than limited length RoBERTa, with BIGBIRD-ETC performing the best. We note that we trained our models on a reasonable 16GB memory/chip with batch size of 32-64. Our memory efficiency is due to efficient blocking and sparsity structure of the sparse attention mechanism described in Sec. 2.
1code available at http://goo.gle/bigbird-transformer 2https://github.com/pytorch/fairseq/tree/master/examples/roberta
Question Answering (QA) We considered following four challenging datasets: 1. Natural Questions [52]: For the given question, find a short span of answer (SA) from the given
evidences as well highlight the paragraph from the given evidences containing information about the correct answer (LA). 2. HotpotQA-distractor [101]: Similar to natural questions, it requires finding the answer (Ans) as well as the supporting facts (Sup) over different documents needed for multi-hop reasoning from the given evidences. 3. TriviaQA-wiki [41]: We need to provide an answer for the given question using provided Wikipedia evidence, however, the answer might not be present in the given evidence. On a smaller verified subset of question, the given evidence is guaranteed to contain the answer. Nevertheless, we model the answer as span selection problem in this case as well. 4. WikiHop [96]: Chose correct option from multiple-choice questions (MCQ), by aggregating information spread across multiple documents given in the evidences.
As these tasks are very competitive, multiple highly engineered systems have been designed specifically for each dataset, conforming to the respective output formats. For a fair comparison, we had to use some additional regularization for training BIGBIRD, details of which are provided in App. E.2 along with the exact architecture description. We experiment using the base sized model and select the best configuration on the development set for each dataset (as reported in Tab. 2). We can see that BIGBIRD-ETC, with expanded global tokens, consistently outperforms all other models. Thus, we chose this configuration to train a large sized model to be used for evaluation on the hidden test set.
In Tab. 3, we compare the BIGBIRD-ETC model to the top-3 entries from the leaderboard excluding BIGBIRD. One can clearly see the importance of using longer context, as both Longformer and BIGBIRD outperform models with smaller contexts. Also, it is worth noting that the BIGBIRD submission is a single model, whereas the other top-3 entries for Natural Questions are ensembles, which might explain the slightly lower accuracy in exact answer phrase selection. Classification We experiment on datasets of different lengths and contents, specifically various document classification and GLUE tasks. Following BERT, we used one layer with cross entropy loss on top of the first [CLS] token. We see that the gains of using BIGBIRD are more significant when we have longer documents and fewer training examples. For instance, using the base sized model,
BIGBIRD improves the state-of-the-art for the Arxiv dataset by about 5 percentage points. On the Patents dataset, there is an improvement over using simple BERT/RoBERTa, but given the large size of the training data the improvement over the SoTA (which is not BERT based) is not significant. Note that this performance gain is not seen for the much smaller IMDb dataset. Along with experimental setup details, we present detailed results in App. E.4, which show competitive performance.
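As a rough sketch of the classification setup described above (one linear layer with cross-entropy loss on the first [CLS] token), the following PyTorch snippet is illustrative only; the encoder output, hidden size, and label count are placeholders rather than the released BIGBIRD code.

```python
import torch
import torch.nn as nn

class ClsHead(nn.Module):
    """One linear layer + cross entropy on the first ([CLS]) token representation."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, sequence_output: torch.Tensor, labels: torch.Tensor):
        # sequence_output: (batch, seq_len, hidden) produced by the encoder
        cls_repr = sequence_output[:, 0, :]      # representation of the first token
        logits = self.classifier(cls_repr)
        return self.loss_fn(logits, labels), logits

# Toy usage with random encoder outputs (hidden size 768, 5 document classes).
head = ClsHead(hidden_size=768, num_labels=5)
fake_encoded = torch.randn(2, 4096, 768)
loss, logits = head(fake_encoded, torch.tensor([0, 3]))
```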
4.1 Encoder-Decoder Tasks
For an encoder-decoder setup, one can easily see that both the encoder and the decoder suffer from quadratic complexity due to full self-attention. We focus on introducing the sparse attention mechanism of BIGBIRD only on the encoder side. This is because, in practical generative applications, the length of the output sequence is typically small compared to the input. For example, for text summarization, we see in realistic scenarios (c.f. App. E.5 Tab. 18) that the median output sequence length is about 200, whereas the input sequence's median length is greater than 3000. For such applications, it is more efficient to use the sparse attention mechanism for the encoder and full self-attention for the decoder.
Summarization Document summarization is the task of creating a short and accurate summary of a text document. We used three long document datasets for testing our model, details of which are mentioned in Tab. 18. In this paper we focus on abstractive summarization of long documents, where using a longer contextual encoder should improve performance. The reasons are two-fold: First, the salient content can be evenly distributed in the long document, not just in the first 512 tokens, and this is by design in the BigPatents dataset [78]. Second, longer documents exhibit a richer discourse structure and summaries are considerably more abstractive, so observing more context helps. As has been pointed out recently [76, 108], pretraining helps in generative tasks, so we warm-start from our general purpose MLM pretraining on base-sized models as well as utilizing state-of-the-art summarization-specific pretraining from Pegasus [108] on large-sized models. The results of training the BIGBIRD sparse encoder along with a full decoder on these long document datasets are presented in Tab. 4. We can clearly see that modeling longer context brings significant improvement. Along with hyperparameters, we also present results on shorter but more widespread datasets in App. E.5, which show that using sparse attention does not hamper performance either.
5 Experiments: Genomics
There has been a recent upsurge in using deep learning for genomics data [87, 107, 13], which has resulted in improved performance on several biologically significant tasks such as promoter site prediction [71], methylation analysis [55], and predicting functional effects of non-coding variants [110]. These approaches consume DNA sequence fragments as inputs, and therefore we believe the longer-input-sequence handling capability of BIGBIRD would be beneficial, as many functional effects
in DNA are highly non-local [12]. Furthermore, taking inspiration from NLP, we learn powerful contextual representations for DNA fragments utilizing abundant unlabeled data (e.g. human reference genome, Saccharomyces Genome Database) via MLM pretraining. Next, we showcase that our long-input BIGBIRD, along with the proposed pretraining, significantly improves performance on two downstream tasks. Detailed experimental setups for the two tasks are provided in App. F.
Pre-training and MLM As explored in Liang [58], instead of operating on base pairs, we propose to first segment DNA into tokens so as to further increase the context length (App. F, Fig. 7). In particular, we build a byte-pair encoding [50] table for the DNA sequence of size 32K, with each token representing 8.78 base pairs on average. We learn contextual representations of these tokens on the human reference genome (GRCh37)3 using the MLM objective. We then report the bits per
character (BPC) on a held-out set in Tab. 5. We find that attention-based contextual representation of DNA does improve BPC, which is further improved by using longer context.
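To illustrate the idea of byte-pair encoding DNA into multi-base tokens, here is a toy from-scratch sketch (our illustration only; the actual pipeline builds a 32K-entry vocabulary with a production BPE implementation and a full genome as training data).

```python
from collections import Counter

def learn_bpe_merges(sequence: str, num_merges: int):
    """Toy byte-pair encoding over a DNA string: repeatedly merge the most
    frequent adjacent pair of symbols, growing multi-base tokens."""
    tokens = list(sequence)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append(a + b)
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

tokens, merges = learn_bpe_merges("ACGTACGTACGGTTACGT", num_merges=4)
print(merges)   # learned multi-base tokens, e.g. 'AC', 'ACG', ...
print(tokens)   # the sequence segmented into longer-than-one-base tokens
```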
Promoter Region Prediction A promoter is a DNA region typically located upstream of the gene, which is the site of transcription initiation. Multiple methods have been proposed to identify the promoter regions in a given DNA sequence [100, 59, 11, 99, 71], as it is an important first step in understanding gene regulation. The corresponding machine learning task is to classify a given DNA fragment as a promoter or non-promoter sequence. We use the dataset compiled by Oubounyt et al. [71], which was
built from the Eukaryotic Promoter Database (EPDnew) [24] 4. We finetuned the pretrained BIGBIRD model from above, using the training data, and report F1 on the test dataset. We compare our results to the previously reported best method in Tab. 6. We see that BIGBIRD achieves nearly perfect accuracy with a 5% jump from the previous best reported accuracy.
Chromatin-Profile Prediction Non-coding regions of DNA do not code for proteins. The majority of disease- and other trait-associated single-nucleotide polymorphisms are correlated to non-coding genomic variations [110, 46]. Thus, understanding the functional effects of non-coding regions of DNA is a very important task. An important step in this process, as defined by Zhou and Troyanskaya
[110], is to predict large-scale chromatin profiles from non-coding genomic sequence. To this effect, DeepSea [110] compiled 919 chromatin profiles of 2.4M non-coding variants from the Encyclopedia of DNA Elements (ENCODE)5 and Roadmap Epigenomics6 projects. The corresponding ML task is to predict, for a given non-coding region of DNA, these 919 chromatin profiles, including 690 transcription factor (TF) binding profiles for 160 different TFs, 125 DNase I sensitivity (DHS) profiles and 104 histone-mark (HM) profiles. We jointly learn 919 binary classifiers to predict these functional effects from sequences of DNA fragments. On held-out chromosomes, we compare AUC with the baselines in Tab. 7 and see that we significantly improve performance on the harder HM task, which is known to have longer-range correlations [27] than the others.
6 Conclusion
We propose BIGBIRD: a sparse attention mechanism that is linear in the number of tokens. BIGBIRD satisfies a number of theoretical results: it is a universal approximator of sequence-to-sequence functions and is also Turing complete. Theoretically, we use the power of extra global tokens to preserve the expressive power of the model. We complement these results by showing that moving to a sparse attention mechanism does incur a cost. Empirically, BIGBIRD gives state-of-the-art performance on a number of NLP tasks such as question answering and long document classification. We further introduce an attention-based contextual language model for DNA and fine-tune it for downstream tasks such as promoter region prediction and predicting effects of non-coding variants.
3https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.13/ 4https://epd.epfl.ch/human/human_database.php?db=human 5https://www.encodeproject.org/ 6http://www.roadmapepigenomics.org/
Broader Impacts Inference Efficiency: The quadratic cost of full attention makes it expensive to capture the long range dependencies which exist in natural text and other datasets. Moreover, there is a growing concern in the ML community about the resource and energy requirements of training large scale systems [81]. We show that sparse, computationally efficient systems, like BIGBIRD, can capture long range dependencies in an energy efficient way without losing expressive power.
Wide Applicability: Beyond the impact of our model on NLP tasks that require longer context, our proposed contextualized representations of DNA using attention based models, should help in better modeling effects of longer sequences of DNA. Our effort continues a long line of research that bridges the gap between computational models designed for NLP and those for computational biology. | 1. What is the main contribution of the paper in the field of Natural Language Processing?
2. What are the strengths of the proposed approach, particularly in terms of its ability to reduce the quadratic dependency issue in self-attention?
3. Are there any weaknesses or limitations of the proposed approach, especially when compared to other works in the field such as sparse-transformer?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors propose a sparse attention mechanism that reduces this quadratic dependency issue in self-attention. The model consists of three types of attention mechanism: global attention on fixed positions, local attention in sliding-window and attention on random positions. The authors prove that the proposed method can preserve the properties of full attention model and achieve SOTA on a variety of NLP tasks.
Strengths
1. The authors provide theoretical analysis on sparse attention mechanism which is interesting and useful for further work in this direction. 2. It's interesting to see random attention only can also work on SQuAD and MNLI tasks. 3. The experiment results are quite solid. The proposed model achieves SOTA on a variety of NLP tasks which cover multi-hop QA, QA with longer context and document classification. It is also tested on genomics data.
Weaknesses
1. Although the random attention in BigBird is interesting, the global attention and local attention in sliding window are not novel, similar to sparse-transformer (Generating Long Sequences with Sparse Transformers). 2. The model doesn't work well on short answer extraction of natural question dataset. |
NIPS | Title
Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees
Abstract
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy that best fits observed sequences of states and actions implemented by an expert. Many algorithms for IRL have an inherently nested structure: the inner loop finds the optimal policy given parametrized rewards while the outer loop updates the estimates towards optimizing a measure of fit. For high dimensional environments such nested-loop structure entails a significant computational burden. To reduce the computational burden of a nested loop, novel methods such as SQIL [1] and IQ-Learn [2] emphasize policy estimation at the expense of reward estimation accuracy. However, without accurate estimated rewards, it is not possible to do counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. In this paper we develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm provably converges to a stationary solution with a finite-time guarantee. If the reward is parameterized linearly, we show the identified solution corresponds to the solution of the maximum entropy IRL problem. Finally, by using robotics control problems in MuJoCo and their transfer settings, we show that the proposed algorithm achieves superior performance compared with other IRL and imitation learning benchmarks.
1 Introduction
Given observed trajectories of states and actions implemented by an expert, we consider the problem of estimating the reinforcement learning environment in which the expert was trained. This problem is generally referred to as inverse reinforcement learning (IRL) (see [3] for a recent survey). Assuming the environment dynamics are known (or available online), the IRL problem consists of estimating the reward function and the expert’s policy (optimizing such rewards) that best fits the data. While there are limitations on the identifiability of rewards [4], the estimation of rewards based upon expert trajectories enables important counterfactual analysis such as the estimation of optimal policies under different environment dynamics and/or reinforcement learning of new tasks.
In the seminal work [5], the authors developed an IRL formulation, in which the model for the expert’s behavior is the policy that maximizes entropy subject to a constraint requiring that the expected
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
features under such policy match the empirical averages in the expert’s observation dataset. The algorithms developed for MaxEnt-IRL [5–7] have a nested loop structure, alternating between an outer loop with a reward update step, and an inner loop that calculates the explicit policy estimates. The computational burden of this nested structure is manageable in tabular environments, but it becomes significant in high dimensional settings requiring function approximation.
Towards developing more efficient IRL algorithms, a number of works [8–12] propose to leverage the idea of adversarial training [13]. These algorithms learn a non-stationary reward function through training a discriminator, which is then used to guide the policy to match the behavior trajectories from the expert dataset. However, [14] pointed out that the resulting discriminator (hence the reward function) typically cannot be used in new learning tasks, since it is highly dependent on the corresponding policy and current environment dynamics. Moreover, due to the brittle approximation techniques and sensitive hyperparameter choice in the adversarial training, these IRL algorithms can be unstable. [15, 16].
More recent works [1, 2] have developed algorithms to alleviate the computational burden of the nested-loop training procedures. In [1], the authors propose to model the IRL using certain maximum entropy RL problem with specific reward function (which assigns r = +1 for matching expert demonstrations and r = 0 for all other behaviors). Then a soft Q imitation learning (SQIL) algorithm is developed. In [2], the authors propose to transform the standard formulation of IRL (discussed above) into a single-level problem, through learning a soft Q-function to implicitly represent the reward function and the policy. An inverse soft-Q learning (IQ-Learn) algorithm is then developed, which is shown to be effective in estimating the policy for the environment that it is trained on. Despite being computationally efficient, IQ-Learn sacrifices the accuracy in estimating the rewards since it indirectly recovers rewards from a soft Q-function approximator which is highly dependent upon the environment dynamics and does not strictly satisfy the soft-Bellman equation. Therefore it is not well-suited for counterfactual prediction or transfer learning setting.
Finally, in f -IRL [14] the authors consider an approach for estimating rewards based on the minimization of several measures of divergence with respect to the expert’s state visitation measure. The approach is limited to estimating rewards that only depend on state. Moreover, while the results reported are based upon a single-loop implementation, the paper does not provide a convergence guarantee to support performance. We refer the readers to Appendix A for other related works.
Our Contributions. The goal of this work is to develop an algorithm for IRL which is capable of producing high-quality estimates of both rewards and behavior policies with finite-time guarantees. The major contributions of this work are listed below.
• We consider a formulation of IRL based on maximum likelihood (ML) estimation over optimal (entropy-regularized) policies, and prove that a strong duality relationship with maximum entropy IRL holds if rewards are represented by a linear combination of features. 1 The ML formulation is a bi-level optimization problem, where the upper-level problem maximizes the likelihood function, while the lower-level finds the optimal policy under the current reward parameterization. Such a bi-level structure is not only instrumental to the subsequent algorithm design, but is also flexible to incorporate the use of state-only, as well as the regular reward function (which depends on the state and action pair). The former is suitable for transfer learning since it is insensitive to the changes of the environment dynamics, while the latter can be used to efficiently imitate the expert policy.
• Based on the ML-IRL formulation, we develop an efficient algorithm. To avoid the computational burden of repeatedly solving the lower-level policy optimization problem, the proposed algorithm has a single-loop structure where the policy improvement step and reward optimization step are performed alternatingly so that each step can be performed relatively cheaply. Further, we show that the algorithm has strong theoretical guarantees: to achieve a certain $\epsilon$-approximate stationary solution for a non-linearly parameterized problem, it requires $\mathcal{O}(\epsilon^{-2})$ steps of policy and reward updates each. To our knowledge, it is the first algorithm which has a finite-time guarantee for the IRL problem under nonlinear parameterization of reward functions.
• We conduct extensive experiments to demonstrate that the proposed algorithm outperforms many state-of-the-art IRL algorithms in both policy estimation and reward recovery. In particular, when
1Heuristic arguments for this duality result are discussed in [5] wherein the distribution of state-action paths is approximated (see equation (4) in [5]) and the equivalence between maximum entropy estimation and maximum likelihood (over the class of exponential distributions) [17] is invoked.
transferring to a new environment, RL algorithms using rewards recovered by the proposed algorithm outperform those that use rewards recovered from existing IRL and imitation learning benchmarks.
2 Preliminaries
In this section, we review the fundamentals of maximum entropy inverse reinforcement learning (MaxEnt-IRL). We consider an MDP defined by the tuple $(\mathcal{S},\mathcal{A},\mathcal{P},\eta,r,\gamma)$; $\mathcal{S}$ and $\mathcal{A}$ denote the state space and the action space respectively; $\mathcal{P}(s'|s,a): \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ denotes the transition probability; $\eta(\cdot)$ denotes the distribution of the initial state; $r(s,a): \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is the reward function and $\gamma$ is a discount factor.
The MaxEnt-IRL formulation [6, 18–20] consists of finding a policy maximizing entropy subject to the expected features under such policy matching the empirical averages in the expert’s observation dataset. Specifically, the MaxEnt-IRL formulation is given by:
$$\max_{\pi}\;\; H(\pi) := \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty} -\gamma^t \log \pi(a_t|s_t)\Big] \qquad \text{(MaxEnt-IRL)}$$
$$\text{s.t.}\;\; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^t \phi(s_t,a_t)\Big] = \mathbb{E}_{\tau\sim\pi_E}\Big[\sum_{t=0}^{\infty}\gamma^t \phi(s_t,a_t)\Big]$$
where $\tau = \{(s_t,a_t)\}_{t=0}^{\infty}$ denotes a trajectory, $\phi(s,a)$ is the feature vector of the state-action pair $(s,a)$ and $\pi_E$ denotes the expert policy. Let $\theta$ denote the dual variable for the linear constraint; then the Lagrangian of (MaxEnt-IRL) is given by
$$L(\pi,\theta) := H(\pi) + \Big\langle \theta,\; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^t\phi(s_t,a_t)\Big] - \mathbb{E}_{\tau\sim\pi_E}\Big[\sum_{t=0}^{\infty}\gamma^t\phi(s_t,a_t)\Big]\Big\rangle. \quad (1)$$
In [6, 18, 19], the authors proposed a "dual descent" algorithm, which alternates between i) solving $\max_{\pi} L(\pi,\theta)$ for fixed $\theta$, and ii) a gradient descent step to optimize the dual variable $\theta$. It is shown that the optimizer $\pi^*_\theta$ in step i) can be recursively defined as $\pi^*_\theta(a_t|s_t) = Z_{a_t|s_t,\theta}/Z_{s_t,\theta}$, where $\log Z_{a_t|s_t,\theta} = \phi(s_t,a_t)^{\top}\theta + \gamma\,\mathbb{E}_{s_{t+1}\sim\mathcal{P}(\cdot|s_t,a_t)}\big[\log Z_{s_{t+1},\theta}\big]$ and $\log Z_{s_t,\theta} = \log\sum_{a\in\mathcal{A}} Z_{a|s_t,\theta}$.
From a computational perspective, the above algorithm is not efficient: it has a nested-loop structure, which repeatedly computes the optimal policy $\pi^*_\theta$ for each value of $\theta$. It is known that when the underlying MDP is of high dimension, such an algorithm can be computationally prohibitive [9, 10].
Recent work [2] proposed an algorithm called IQ-Learn to improve upon the MaxEnt-IRL by considering a saddle-point formulation:
$$\min_{r}\max_{\pi}\;\Big\{ H(\pi) + \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^t\, r(s_t,a_t)\Big] - \mathbb{E}_{\tau\sim\pi_E}\Big[\sum_{t=0}^{\infty}\gamma^t\, r(s_t,a_t)\Big]\Big\} \quad (2)$$
where r(st, at) is the reward associated with state-action pair (st, at). The authors show that this problem can be transformed into an optimization problem only defined in terms of the soft Q-function, which implicitly represents both reward and policy. IQ-Learn is shown to be effective in imitating the expert behavior while only relying on the estimation of the soft Q-function. However, the implicit reward estimate obtained is not necessarily accurate since its soft Q-function estimate depends on the environment dynamics and does not strictly satisfy the soft-Bellman equation. Hence, it is difficult to transfer the recovered reward function to new environments.
3 Problem Formulation
In this section, we consider a ML formulation of the IRL problem and formalize a duality relationship with maximum entropy-based formulation (MaxEnt-IRL).
Maximum Log-Likelihood IRL (ML-IRL) A model of the expert's behavior is a randomized policy $\pi_\theta(\cdot|s)$, where $\pi_\theta$ is a specific policy corresponding to the reward parameter $\theta$. With the state dynamics $\mathcal{P}(s_{t+1}|s_t,a_t)$, the discounted log-likelihood of observing the expert trajectory $\tau$ under model $\pi_\theta$ can be written as follows:
$$\mathbb{E}_{\tau\sim\pi_E}\Big[\log\prod_{t\ge 0}\big(\mathcal{P}(s_{t+1}|s_t,a_t)\,\pi_\theta(a_t|s_t)\big)^{\gamma^t}\Big] = \mathbb{E}_{\tau\sim\pi_E}\Big[\sum_{t\ge 0}\gamma^t\log\pi_\theta(a_t|s_t)\Big] + \mathbb{E}_{\tau\sim\pi_E}\Big[\sum_{t\ge 0}\gamma^t\log\mathcal{P}(s_{t+1}|s_t,a_t)\Big].$$
Then we consider the following maximum log-likelihood IRL formulation:
$$\max_{\theta}\; L(\theta) := \mathbb{E}_{\tau\sim\pi_E}\Big[\sum_{t=0}^{\infty}\gamma^t\log\pi_\theta(a_t|s_t)\Big] \qquad \text{(ML-IRL)}$$
$$\text{s.t.}\;\; \pi_\theta := \arg\max_{\pi}\; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^t\big(r(s_t,a_t;\theta) + \mathcal{H}(\pi(\cdot|s_t))\big)\Big], \quad (3a)$$
where $r(s,a;\theta)$ is the reward function and $\mathcal{H}(\pi(\cdot|s)) := -\sum_{a\in\mathcal{A}}\pi(a|s)\log\pi(a|s)$. We now make some remarks about ML-IRL. First, the problem takes the form of a bi-level optimization problem, where the upper-level problem (ML-IRL) optimizes the reward parameter $\theta$, while the lower-level problem describes the corresponding policy $\pi_\theta$ as the solution to an entropy-regularized MDP ([21, 22]). In what follows we will leverage recently developed (stochastic) algorithms for bi-level optimization [23–25] that avoid the high complexity resulting from nested-loop algorithms. Second, it is reasonable to use the ML function as the loss, because it searches for a reward function which generates a behavior policy that best fits the expert demonstrations. While the ML function has been considered in [26, 27], those works rely on heuristic algorithms with nested-loop computations to solve their IRL formulations, and the theoretical properties are not studied. Finally, the lower-level problem has been well studied in the literature [21, 22, 28–30]. The entropy regularization in (3a) ensures the uniqueness of the optimal policy $\pi_\theta$ under the fixed reward function $r(s,a;\theta)$ [21, 28]. Even when the underlying MDP is high-dimensional and/or complex, the optimal policy can still be obtained; see recent developments in [21, 22]. We close this section by formally establishing a connection between (MaxEnt-IRL) and (ML-IRL). Theorem 1. (Strong Duality) Suppose that the reward function is given as $r(s,a;\theta) := \phi(s,a)^{\top}\theta$, for all $s\in\mathcal{S}$ and $a\in\mathcal{A}$. Then (ML-IRL) is the Lagrangian dual of (MaxEnt-IRL). Furthermore, strong duality holds, that is, $L(\theta^*) = H(\pi^*)$, where $\theta^*$ and $\pi^*$ are the global optimal solutions of problems (ML-IRL) and (MaxEnt-IRL), respectively.
The proof of Theorem 1 is relegated to Appendix G. To our knowledge this result which specifically addresses the (MaxEnt-IRL) formulation is novel. Under finite horizon, a duality between ML estimation and maximum causal entropy is obtained in [18, Theorem 3]. However, the problem considered in that paper is not in RL nor IRL setting, therefore they cannot be directly used in the context of the present paper.
The above duality result reveals a strong connection between the two formulations under linear reward parameterization. Due to the duality result, we know that (ML-IRL) is a concave problem under linear reward parameterization. In this case, any stationary solution to (ML-IRL) is a global optimal estimator of the reward parameter.
4 The Proposed Algorithm
In this section, we design algorithms for (ML-IRL). Recall that one major drawback of algorithms for (MaxEnt-IRL) is that, they repeatedly solve certain policy optimization problem in the inner loop. Even though the recently proposed algorithm IQ-Learn [2] tries to improve the computational efficiency through implicitly representing the reward function and the policy by a Q-function approximator, it has sacrificed the estimation accuracy of the recovered reward. Therefore, one important goal of our design is to find provably efficient algorithms that can avoid high-complexity operations and accurately recover the reward function. Specifically, it is desirable that the resulting algorithm only uses a finite number of reward and policy updates to reach certain high-quality solutions.
To proceed, we will leverage the special bi-level structure of the ML-IRL problem. The idea is to alternate between one step of policy update to improve the solution of the lower-level problem, and
Algorithm 1 Maximum Likelihood Inverse Reinforcement Learning (ML-IRL)
Input: initialize the reward parameter $\theta_0$ and policy $\pi_0$; set the reward parameter's stepsize $\alpha$.
for $k = 0, 1, \ldots, K-1$ do
    Policy Evaluation: compute $Q^{\rm soft}_{r_{\theta_k},\pi_k}(\cdot,\cdot)$ under the reward function $r(\cdot,\cdot;\theta_k)$
    Policy Improvement: $\pi_{k+1}(\cdot|s) \propto \exp\big(Q^{\rm soft}_{r_{\theta_k},\pi_k}(s,\cdot)\big)$, $\forall s\in\mathcal{S}$
    Data Sampling I: sample an expert trajectory $\tau^E_k := \{s_t,a_t\}_{t\ge 0}$
    Data Sampling II: sample a trajectory $\tau^A_k := \{s_t,a_t\}_{t\ge 0}$ from the current policy $\pi_{k+1}$
    Estimating Gradient: $g_k := h(\theta_k;\tau^E_k) - h(\theta_k;\tau^A_k)$, where $h(\theta;\tau) := \sum_{t\ge 0}\gamma^t \nabla_\theta r(s_t,a_t;\theta)$
    Reward Parameter Update: $\theta_{k+1} := \theta_k + \alpha g_k$
end for
one step of the parameter update which improves the upper-level loss function. At each iteration $k$, given the current policy $\pi_k$ and the reward parameter $\theta_k$, a new policy $\pi_{k+1}$ is generated from the policy improvement step, and $\theta_{k+1}$ is generated by the reward optimization step.
This kind of alternating update is efficient, because there is no need to completely solve the policy optimization subproblem, before updating the reward parameters. It has been used in many other RL related settings as well. For example, the well-known actor-critic (AC) algorithm for policy optimization [31, 32, 23] alternates between one step of policy update, and one step of critic parameter update. Below we present the details of our algorithm at a given iteration k.
Policy Improvement Step. Let us consider optimizing the lower-level problem when the reward parameter $\theta_k$ is held fixed. Towards this end, define the so-called soft Q-function and soft value function for a given policy $\pi_k$ and reward parameter $\theta_k$:
$$V^{\rm soft}_{r_k,\pi_k}(s) = \mathbb{E}_{\pi_k}\Big[\sum_{t=0}^{\infty}\gamma^t\big(r(s_t,a_t;\theta_k) + \mathcal{H}(\pi_k(\cdot|s_t))\big)\,\Big|\, s_0 = s\Big] \quad (4a)$$
$$Q^{\rm soft}_{r_k,\pi_k}(s,a) = r(s,a;\theta_k) + \gamma\,\mathbb{E}_{s'\sim\mathcal{P}(\cdot|s,a)}\big[V^{\rm soft}_{r_k,\pi_k}(s')\big] \quad (4b)$$
We will adopt the well-known soft policy iteration [21] to optimize the lower-level problem (3a). Under the current reward parameter $\theta_k$ and the policy $\pi_k$, the soft policy iteration generates a new policy $\pi_{k+1}$ as follows:
$$\pi_{k+1}(a|s) \propto \exp\big(Q^{\rm soft}_{r_{\theta_k},\pi_k}(s,a)\big), \quad \forall\, s\in\mathcal{S},\; a\in\mathcal{A}. \quad (5)$$
Under a fixed reward function, it can be shown that the new policy $\pi_{k+1}$ monotonically improves upon $\pi_k$, and it converges linearly to the optimal policy; see [21, Theorem 4] and [28, Theorem 1].
Note that in practice we usually do not have direct access to the exact soft Q-function in (4b). In order to perform the policy improvement, a few stochastic update steps of soft Q-learning [21] or soft actor-critic (SAC) [22] can be used in place of the one-step soft policy iteration (5). In the appendix, we present Alg. 2 to demonstrate such a practical implementation of our proposed algorithm.
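As a concrete illustration of the soft Bellman equations (4a)-(4b) and the soft policy iteration step (5), here is a small tabular NumPy sketch (our illustration, not code from the paper); the MDP arrays, discount factor, and iteration counts are placeholders.

```python
import numpy as np

def soft_policy_evaluation(P, R, pi, gamma=0.99, n_iters=200):
    """Tabular sketch of (4a)-(4b): iterate the soft Bellman equations for a
    fixed policy pi. P: (S, A, S) transition tensor, R: (S, A) rewards, pi: (S, A)."""
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(n_iters):
        # Soft value: V(s) = sum_a pi(a|s) * (Q(s,a) - log pi(a|s)), i.e. E_pi[Q] + entropy.
        V = np.sum(pi * (Q - np.log(pi + 1e-12)), axis=1)
        Q = R + gamma * P.reshape(S * A, S).dot(V).reshape(S, A)
    return Q

def soft_policy_improvement(Q):
    """Soft policy iteration step (5): pi_{k+1}(a|s) proportional to exp(Q(s,a))."""
    Z = np.exp(Q - Q.max(axis=1, keepdims=True))   # numerically stabilized softmax
    return Z / Z.sum(axis=1, keepdims=True)

# Toy usage on a random 4-state, 2-action MDP.
rng = np.random.default_rng(0)
P = rng.random((4, 2, 4)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((4, 2))
pi = np.full((4, 2), 0.5)
Q = soft_policy_evaluation(P, R, pi)
pi_next = soft_policy_improvement(Q)
```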
Reward Optimization Step. We propose to use a stochastic gradient-type algorithm to optimize $\theta$. Towards this end, let us first derive the exact gradient $\nabla L(\theta)$; see Appendix D for the detailed proof. Lemma 1. The gradient of the likelihood function $L(\theta)$ can be expressed as follows:
$$\nabla L(\theta) = \mathbb{E}_{\tau\sim\pi_E}\Big[\sum_{t\ge 0}\gamma^t\nabla_\theta r(s_t,a_t;\theta)\Big] - \mathbb{E}_{\tau\sim\pi_\theta}\Big[\sum_{t\ge 0}\gamma^t\nabla_\theta r(s_t,a_t;\theta)\Big]. \quad (6)$$
To obtain stochastic estimators of the exact gradient $\nabla L(\theta_k)$, we take the following approximation steps: 1) approximate the optimal policy $\pi_{\theta_k}$ by $\pi_{k+1}$ in (5), since the optimal policy $\pi_{\theta_k}$ is not available throughout the algorithm; 2) sample one expert trajectory $\tau^E_k$ which has already been generated by the expert policy $\pi_E$; 3) sample one trajectory $\tau^A_k$ from the current policy $\pi_{k+1}$.
Following the approximation steps mentioned above, we construct a stochastic estimator $g_k$ to approximate the exact gradient $\nabla L(\theta_k)$ in (6) as follows:
$$g_k := h(\theta_k;\tau^E_k) - h(\theta_k;\tau^A_k), \quad \text{where } h(\theta;\tau) := \sum_{t\ge 0}\gamma^t\nabla_\theta r(s_t,a_t;\theta). \quad (7)$$
With the stochastic gradient estimator $g_k$, the reward parameter $\theta_k$ is updated as
$$\theta_{k+1} = \theta_k + \alpha g_k, \quad (8)$$
where $\alpha$ is the stepsize for updating the reward parameter.
In summary, the proposed algorithm for solving the ML-IRL problem (ML-IRL) is given in Alg. 1.
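To make the reward optimization step concrete, the following NumPy sketch (our illustration, not the authors' released code) implements the estimator (7) and update (8) for a linear reward $r(s,a;\theta)=\phi(s,a)^{\top}\theta$, whose gradient is simply the feature vector. The random feature arrays stand in for $\phi(s_t,a_t)$ along sampled trajectories, and the policy improvement step is assumed to be handled separately (e.g., by a few SAC updates).

```python
import numpy as np

def discounted_feature_sum(traj_features, gamma=0.99):
    """h(theta; tau) for a linear reward: since grad_theta r = phi(s_t, a_t),
    the estimator is the discounted sum of features along the trajectory."""
    return sum((gamma ** t) * phi for t, phi in enumerate(traj_features))

def reward_update(theta, expert_traj_features, agent_traj_features, alpha=1e-2):
    """One stochastic gradient step (7)-(8): expert minus agent feature sums."""
    g = discounted_feature_sum(expert_traj_features) - discounted_feature_sum(agent_traj_features)
    return theta + alpha * g

# Toy usage with random 8-dimensional features standing in for phi(s_t, a_t).
rng = np.random.default_rng(0)
theta = np.zeros(8)
expert_traj = [rng.random(8) for _ in range(50)]   # one expert trajectory tau^E_k
agent_traj = [rng.random(8) for _ in range(50)]    # one trajectory from pi_{k+1}
theta = reward_update(theta, expert_traj, agent_traj)
```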
5 Theoretical Analysis
In this section, we present finite-time guarantees for the proposed algorithm.
To begin with, recall from Sec. 3 that (ML-IRL) is a bi-level problem, where the upper level (resp. the lower level) problem optimizes the reward parameter (resp. the policy). In order to solve (ML-IRL), our Algorithm 1 has a single-loop structure, which alternates between one step of policy update and one step of reward parameter update. Such a single-loop structure indeed has computational benefits, but it also leads to potential instability, since the lower-level iterate can stay far away from its true solution. Specifically, at each iteration $k$, the potential instability is induced by the distribution mismatch between the policies $\pi_{k+1}$ and $\pi_{\theta_k}$ when we use the estimator $g_k$ in (7) to approximate the exact gradient $\nabla L(\theta_k)$ in (6) for updating the reward parameter $\theta_k$. Towards stabilizing the algorithm, we adopt the so-called two-timescale stochastic approximation (TTSA) approach [33, 23], where the lower-level problem updates on a faster timescale (i.e., converges faster) compared with its upper-level counterpart. Intuitively, TTSA enables $\pi_{k+1}$ to track the optimal $\pi_{\theta_k}$, leading to a stable algorithm. In the proposed Algorithm 1, the policy (lower-level variable) is continuously updated by the soft policy iteration (5), and it is 'fast' because it converges linearly to the optimal policy under a fixed reward function [28, Theorem 1]. On the other hand, the reward parameter update (8) does not have such a linear convergence property, and therefore it works on a 'slow' timescale. To begin our analysis, let us first present a few technical assumptions. Assumption 1 (Ergodicity). For any policy $\pi$, assume the Markov chain with transition kernel $\mathcal{P}$ is irreducible and aperiodic under policy $\pi$. Then there exist constants $C > 0$ and $\rho\in(0,1)$ such that
$$\sup_{s\in\mathcal{S}}\;\big\|\mathbb{P}(s_t\in\cdot\,|\,s_0=s,\pi) - \mu_\pi(\cdot)\big\|_{TV} \le C\rho^t, \quad \forall\, t\ge 0,$$
where $\|\cdot\|_{TV}$ is the total variation (TV) norm and $\mu_\pi$ is the stationary state distribution under $\pi$.
Assumption 1 assumes that the Markov chain mixes at a geometric rate. It is a common assumption in the literature on RL [34, 35, 32], and it holds for any time-homogeneous Markov chain with finite state space or any uniformly ergodic Markov chain with general state space. Assumption 2. For any $s\in\mathcal{S}$, $a\in\mathcal{A}$ and any reward parameter $\theta$, the following holds:
$$\|\nabla_\theta r(s,a;\theta)\| \le L_r, \quad (9a)$$
$$\|\nabla_\theta r(s,a;\theta_1) - \nabla_\theta r(s,a;\theta_2)\| \le L_g\|\theta_1 - \theta_2\|, \quad (9b)$$
where $L_r$ and $L_g$ are positive constants.
Assumption 2 assumes that the parameterized reward function has bounded gradients and is Lipschitz smooth. Such Lipschitz-type assumptions are common in the literature on min-max / bi-level optimization [36, 23, 37, 25, 38].
Based on Assumptions 1 - 2, we next provide the following Lipschitz properties. Lemma 2. Suppose Assumptions 1 - 2 hold. For any reward parameters $\theta_1$ and $\theta_2$, the following results hold:
$$|Q^{\rm soft}_{r_{\theta_1},\pi_{\theta_1}}(s,a) - Q^{\rm soft}_{r_{\theta_2},\pi_{\theta_2}}(s,a)| \le L_q\|\theta_1-\theta_2\|, \quad \forall\, s\in\mathcal{S},\, a\in\mathcal{A}, \quad (10a)$$
$$\|\nabla L(\theta_1) - \nabla L(\theta_2)\| \le L_c\|\theta_1-\theta_2\|, \quad (10b)$$
where $Q^{\rm soft}_{r_\theta,\pi_\theta}(\cdot,\cdot)$ denotes the soft Q-function under the reward function $r(\cdot,\cdot;\theta)$ and the policy $\pi_\theta$. The positive constants $L_q$ and $L_c$ are defined in Appendix E.
The Lipschitz properties identified in Lemma 2 are vital for the convergence analysis. Then we present the main results, which show the convergence speed of the policies $\{\pi_k\}_{k\ge 0}$ and the reward parameters $\{\theta_k\}_{k\ge 0}$ in Alg. 1. Please see Appendix E for the detailed proof.
Theorem 2. Suppose Assumptions 1 - 2 hold. Select the stepsize $\alpha := \alpha_0 K^{-\sigma}$ for the reward update step (8), where $\alpha_0 > 0$ and $\sigma\in(0,1)$ are fixed constants, and $K$ is the total number of iterations to be run by the algorithm. Then the following results hold:
$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\big[\|\log\pi_{k+1} - \log\pi_{\theta_k}\|_\infty\big] = \mathcal{O}(K^{-1}) + \mathcal{O}(K^{-\sigma}), \quad (11a)$$
$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\big[\|\nabla L(\theta_k)\|^2\big] = \mathcal{O}(K^{-\sigma}) + \mathcal{O}(K^{-1+\sigma}) + \mathcal{O}(K^{-1}), \quad (11b)$$
where we denote $\|\log\pi_{k+1} - \log\pi_{\theta_k}\|_\infty := \max_{s\in\mathcal{S},a\in\mathcal{A}} |\log\pi_{k+1}(a|s) - \log\pi_{\theta_k}(a|s)|$. In particular, setting $\sigma = 1/2$, both quantities in (11a) and (11b) converge at the rate $\mathcal{O}(K^{-1/2})$.
In Theorem 2, we present the finite-time guarantee for the convergence of Alg. 1. Moreover, as a special case, when the reward is parameterized as a linear function, we know that (ML-IRL) is concave, and Theorem 2 provides a stronger guarantee which identifies the globally optimal reward estimator in finite time.
We provide a proof sketch below to present the key steps. The detailed proof is in Appendix H.
Proof sketch. We outline our main steps in analyzing (11a) and (11b) respectively.
In order to show the convergence of the policy estimates in (11a), there are several key steps. First, we note that both policies $\pi_{k+1}$ and $\pi_{\theta_k}$ are in the softmax parameterization, where $\pi_{k+1}(\cdot|s) \propto \exp\big(Q^{\rm soft}_{r_{\theta_k},\pi_k}(s,\cdot)\big)$ and $\pi_{\theta_k}(\cdot|s) \propto \exp\big(Q^{\rm soft}_{r_{\theta_k},\pi_{\theta_k}}(s,\cdot)\big)$. Then, we can show a Lipschitz continuity property between the policy and the soft Q-function:
$$\|\log\pi_{k+1} - \log\pi_{\theta_k}\|_\infty \le 2\,\big\|Q^{\rm soft}_{r_{\theta_k},\pi_k} - Q^{\rm soft}_{r_{\theta_k},\pi_{\theta_k}}\big\|_\infty,$$
where the infinity norm $\|\cdot\|_\infty$ is defined over the state-action space $\mathcal{S}\times\mathcal{A}$. Moreover, by analyzing the contraction property of the soft policy iteration (5), we bound $\|Q^{\rm soft}_{r_{\theta_k},\pi_k} - Q^{\rm soft}_{r_{\theta_k},\pi_{\theta_k}}\|_\infty$ as
$$\big\|Q^{\rm soft}_{r_{\theta_k},\pi_k} - Q^{\rm soft}_{r_{\theta_k},\pi_{\theta_k}}\big\|_\infty \le \gamma\,\big\|Q^{\rm soft}_{r_{\theta_{k-1}},\pi_{k-1}} - Q^{\rm soft}_{r_{\theta_{k-1}},\pi_{\theta_{k-1}}}\big\|_\infty + 2L_q\|\theta_k - \theta_{k-1}\|.$$
To ensure that the error term $\|\theta_k - \theta_{k-1}\|$ is small, we select the stepsize of the reward parameters as $\alpha := \alpha_0 K^{-\sigma}$, where $K$ is the total number of iterations and $\sigma > 0$. Then, by combining the previous two steps, we can further show the convergence rate of the policy estimates in (11a).
To prove the convergence of the reward parameters in (11b), we first leverage the Lipschitz smoothness of $L(\theta)$ in (10b). However, one technical challenge in the convergence analysis is how to handle the bias between the gradient estimator $g_k$ defined in (7) and the exact gradient $\nabla L(\theta_k)$. When we construct the gradient estimator $g_k$ in (7), we sample trajectories from the current policy $\pi_{k+1}$ and the expert dataset $\mathcal{D}$. However, according to the expression of $\nabla L(\theta_k)$ in (6), the trajectories should be sampled from the optimal policy $\pi_{\theta_k}$ and the dataset $\mathcal{D}$. Hence, there is a distribution mismatch between $\pi_{k+1}$ and $\pi_{\theta_k}$. Our key idea is to leverage (11a) to handle this distribution mismatch error, and thus show that the bias between $g_k$ and $\nabla L(\theta_k)$ can be controlled. To the best of our knowledge, Theorem 2 is the first non-asymptotic convergence result for IRL with nonlinear reward parameterization.
6 A Discussion over State-Only Reward
In this section we consider IRL problems modeled with rewards that are only a function of the state. A lower-dimensional representation of the agent's preferences (i.e., in terms of states only, as opposed to states and actions) is more likely to facilitate counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. This is because the estimation of preferences defined only in terms of states is less sensitive to the specific environment dynamics in the expert's demonstration dataset. Moreover, in applications such as healthcare [39] and autonomous driving [40], simply imitating the expert policy can potentially result in poor performance, since the learner and the expert may have different transition dynamics. Similar points have also been argued in recent works [14, 41–43].
Next, let us briefly discuss how we can understand (ML-IRL) and Alg. 1 when the reward is parameterized as a state-only function. First, it turns out that there is an equivalent formulation of (ML-IRL) when the expert trajectories only contain the visited states. Lemma 3. Suppose the expert trajectories $\tau$ are sampled from a policy $\pi_E$, and the reward is parameterized as a state-only function $r(s;\theta)$. Then ML-IRL is equivalent to the following:
$$\min_{\theta}\;\; \mathbb{E}_{s_0\sim\eta(\cdot)}\big[V^{\rm soft}_{r_\theta,\pi_\theta}(s_0)\big] - \mathbb{E}_{s_0\sim\eta(\cdot)}\big[V^{\rm soft}_{r_\theta,\pi_E}(s_0)\big] \quad (12a)$$
$$\text{s.t.}\;\; \pi_\theta := \arg\max_{\pi}\;\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^t\big(r(s_t;\theta) + \mathcal{H}(\pi(\cdot|s_t))\big)\Big]. \quad (12b)$$
Please see Appendix F for the detailed derivation. Intuitively, the above lemma says that, when dealing with state-only IRL, (ML-IRL) minimizes the gap between the soft value functions of the optimal policy $\pi_\theta$ and of the expert policy $\pi_E$. Moreover, Alg. 1 can also be easily implemented with a state-only reward. In fact, the entire algorithm stays essentially the same, and the only change is that $r(s,a;\theta)$ is replaced by $r(s;\theta)$. In this way, by only using the visited states in the trajectories, one can still compute the stochastic gradient estimator in (7). Therefore, even in the state-only IRL setting where the expert dataset only contains visited states, our formulation and the proposed algorithm still work if we parameterize the reward as a state-only function.
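To make the state-only variant concrete, the following PyTorch-style sketch (an illustration, not the authors' implementation) contrasts a state-action reward network with a state-only one; the network sizes and input dimensions are placeholders chosen to resemble a MuJoCo task.

```python
import torch
import torch.nn as nn

class StateActionReward(nn.Module):
    """r(s, a; theta): the input is the concatenated state-action pair."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class StateOnlyReward(nn.Module):
    """r(s; theta): the only change is that the action is dropped from the input."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, s):
        return self.net(s)

# Both can be plugged into the same gradient estimator (7); the state-only
# version uses only the visited states of each trajectory.
r_sa = StateActionReward(state_dim=17, action_dim=6)
r_s = StateOnlyReward(state_dim=17)
```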
7 Numerical Results
In this section, we test the performance of our algorithm on a diverse collection of RL tasks and environments. In each experiment set, we train algorithms until convergence and average the scores of the trajectories over multiple random seeds. The hyperparameter settings and simulation details are provided in Appendix B.
MuJoCo Tasks For Inverse Reinforcement Learning. In this experiment set, we test the performance of our algorithm on imitating the expert behavior. We consider several high-dimensional robotics control tasks in MuJoCo [44]. Two class of existing algorithms are considered as the comparison baselines: 1) imitation learning algorithms that only learn the policy to imitate the expert, including Behavior Cloning (BC) [45] and Generative Adversarial Imitation Learning (GAIL) [10]; 2) IRL algorithms which learn a reward function and a policy simultaneously, including Adversarial Inverse Reinforcement Learning (AIRL) [11], f -IRL [14] and IQ-Learn [2]. To ensure fair comparison, all imitation learning / IRL algorithms use soft Actor-Critic [22] as the base RL algorithm. For the expert dataset, we use the data provided in the official implementation2 of f -IRL.
In this experiment, we implement two versions of our proposed algorithm: ML-IRL(State-Action), where the reward is parameterized as a function of state and action, and ML-IRL(State-Only), which utilizes the state-only reward function. In Table 1, we present the simulation results under a limited data regime where the expert dataset only contains a single expert trajectory. The scores (cumulative rewards) reported in the table are averaged over 6 random seeds. For each random seed, we train the algorithm from initialization and collect 20 trajectories, averaging their cumulative rewards after the algorithm converges. The results reported in Table 1 show that our proposed algorithms outperform the baselines. The numerical results with confidence intervals are in Table 3 (see Appendix).
We observe that BC fails to imitate the expert's behavior. This is because BC is based on supervised learning and thus cannot learn a good policy under such a limited data regime. Moreover, we notice that the training of IQ-Learn is unstable, which may be due to its inaccurate approximation of the soft Q-function. Therefore, for the MuJoCo tasks where IQ-Learn does not perform well, so that we cannot match the results presented in the original paper [2], we directly report the results from there (and mark them by * in Table 1). The results of AIRL are not presented in Table 1 since it performs poorly even after significant effort in parameter tuning (similar observations have been made in [46, 14]).
Transfer Learning Across Changing Dynamics. We further evaluate the IRL algorithms in the transfer learning setting. We follow the environment setup in [11], where two environments with different dynamics are considered: Custom-Ant vs. Disabled-Ant. We compare ML-IRL(State-Only) with several existing IRL methods: 1) AIRL [11]; 2) f-IRL [14]; 3) IQ-Learn [2].
2https://github.com/twni2016/f-IRL
We consider two transfer learning settings: 1) data transfer; 2) reward transfer. For both settings, the expert dataset / trajectories are generated in Custom-Ant. In the data transfer setting, we train IRL agents in Disabled-Ant by using the expert trajectories, which are generated in Custom-Ant. In the reward transfer setting, we first use IRL algorithms to infer the reward functions in Custom-Ant, and then transfer these recovered reward functions to Disabled-Ant for further evaluation. In both settings, we also train SAC with the ground-truth reward in Disabled-Ant and report the scores.
The numerical results are reported in Table 2. The proposed ML-IRL(State-Only) achieves superior performance compared with the existing IRL benchmarks in both settings. We notice that IQ-Learn fails in both settings since it indirectly recovers the reward function from a soft Q-function approximator, which can be inaccurate and is highly dependent upon the environment dynamics. Therefore, the reward function recovered by IQ-Learn cannot be disentangled from the expert actions and environment dynamics, which leads to its failures in the transfer learning tasks.
8 Conclusion
In this paper, we present a maximum likelihood IRL formulation and propose a provably efficient algorithm with a single-loop structure. To our knowledge, we provide the first non-asymptotic analysis for IRL algorithm under nonlinear reward parameterization. As a by-product, when we parameterize the reward as a state-only function, our algorithm could work in state-only IRL setting and enable reward transfer to new environments with different dynamics. Our algorithm outperforms existing IRL methods on high-dimensional robotics control tasks and corresponding transfer learning settings. A limitation of our method is the requirement for online training, so one future direction of this work is to further extend our algorithm and the theoretical analysis to the offline IRL setting.
Potential Negative Social Impacts
Since IRL methods aim to recover the reward function and the associated optimal policy from the observed expert dataset, potential negative social impacts may occur if there are bad demonstrations included in the expert dataset. Thus, for sensitive applications such as autonomous driving and clinical decision support, additional care should be taken to avoid negative biases from the expert demonstrations and ensure safe adaptation.
Acknowledgments
We thank the anonymous reviewers for their valuable comments. M. Hong and S. Zeng are partially supported by NSF grants CIF-1910385, CMMI-1727757, and AFOSR grant 19RT0424. A. Garcia would like to acknowledge partial support from grant FA9550-19-1-00347 by AFOSR. | 1. What is the focus and contribution of the paper on maximum likelihood approach for IRL?
2. What are the strengths of the proposed method, particularly in its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. Do you have any questions regarding the implementation and empirical results of the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors propose a maximum likelihood approach for IRL. Theoretical results show that the algorithm converges in finite time and that for linear rewards that the ML-IRL problem is dual with MaxEnt-IRL and that there is strong duality. Empirical results show that ML-IRL outperforms state-of-the-art IRL algorithms across several standard benchmarks.
Strengths And Weaknesses
+The strong duality theorem appears novel and draws an interesting connection between ML-IRL and MaxEnt-IRL
+The authors prove the first finite guarantees for IRL with nonlinear reward functions
+The empirical results are promising and improve upon SOA
-Some of the claims of novelty seem to be standard practice and are used in prior work.
-It is unclear why an ML approach to IRL should perform better than other approaches. The authors have nice theory, but in practice the implemented algorithm seems very similar to prior work.
-It is unclear how statistically significant the empirical results are with only 3 seeds and no confidence intervals.
Questions
The authors are missing related work that also seeks to make IRL more tractable.
Wang, Ruohan, et al. "Random expert distillation: Imitation learning via expert policy support estimation." International Conference on Machine Learning. PMLR, 2019. and Brown, Daniel S., Wonjoon Goo, and Scott Niekum. "Better-than-demonstrator imitation learning via automatically-ranked demonstrations." Conference on robot learning. PMLR, 2020. Propose approaches that do not require interleaving reward learning and policy learning. Recent work Barde, Paul, et al. "Adversarial soft advantage fitting: Imitation learning without policy optimization." Advances in Neural Information Processing Systems 33 (2020): 12334-12344. proposes an adversarial IL approach that does not require any RL.
One of the stated contributions is to implement single-loop approach. This doesn't seem like a contribution since most Adversarial IRL methods do the same thing. This idea is well studied, going back to GAIL and Guided cost learning, and probably earlier. I do not think the authors can claim this as novel. Furthermore, the two-timescale approach is common in GANs and GAIL-like algorithms. I don't think this can be claimed as a contribution.
Equation 3: The theta parameterizes r which informs argmax to get pi. This is confusing with line 119 where the policy is parameterized by theta. Policies are not usually parameterized by rewards.
Line 155: is supposed to be an example of solving a hard optimization problem in the inner loop, but it refers to IQ-Learn, which was previously noted to be computationally efficient because it avoids this inner-loop MDP solver.
Lemma 1: How is this different from the guided cost learning update? In guided cost learning the partition function is estimated with on-policy rollouts, so it appears quite similar.
Line 188: Why only sample one demo and one policy rollout? Doesn't this have very high variance?
Allowing the algorithm to work with state only demonstrations is nice, but many other algorithms already do this. E.g. Brown, Daniel S., Wonjoon Goo, and Scott Niekum. "Better-than-demonstrator imitation learning via automatically-ranked demonstrations." Conference on robot learning. PMLR, 2020. Torabi, Faraz, Garrett Warnell, and Peter Stone. "Generative adversarial imitation from observation." arXiv preprint arXiv:1807.06158 (2018). As well as all of the classical IRL algorithms. It would be good to motivate more why this is a contribution.
Limitations
A discussion of limitations are lacking in the main body of the paper. |
NIPS | Title
Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees
Abstract
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy that best fits observed sequences of states and actions implemented by an expert. Many algorithms for IRL have an inherently nested structure: the inner loop finds the optimal policy given parametrized rewards while the outer loop updates the estimates towards optimizing a measure of fit. For high dimensional environments such nested-loop structure entails a significant computational burden. To reduce the computational burden of a nested loop, novel methods such as SQIL [1] and IQ-Learn [2] emphasize policy estimation at the expense of reward estimation accuracy. However, without accurate estimated rewards, it is not possible to do counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. In this paper we develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm provably converges to a stationary solution with a finite-time guarantee. If the reward is parameterized linearly, we show the identified solution corresponds to the solution of the maximum entropy IRL problem. Finally, by using robotics control problems in MuJoCo and their transfer settings, we show that the proposed algorithm achieves superior performance compared with other IRL and imitation learning benchmarks.
1 Introduction
Given observed trajectories of states and actions implemented by an expert, we consider the problem of estimating the reinforcement learning environment in which the expert was trained. This problem is generally referred to as inverse reinforcement learning (IRL) (see [3] for a recent survey). Assuming the environment dynamics are known (or available online), the IRL problem consists of estimating the reward function and the expert’s policy (optimizing such rewards) that best fits the data. While there are limitations on the identifiability of rewards [4], the estimation of rewards based upon expert trajectories enables important counterfactual analysis such as the estimation of optimal policies under different environment dynamics and/or reinforcement learning of new tasks.
In the seminal work [5], the authors developed an IRL formulation, in which the model for the expert’s behavior is the policy that maximizes entropy subject to a constraint requiring that the expected
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
features under such policy match the empirical averages in the expert’s observation dataset. The algorithms developed for MaxEnt-IRL [5–7] have a nested loop structure, alternating between an outer loop with a reward update step, and an inner loop that calculates the explicit policy estimates. The computational burden of this nested structure is manageable in tabular environments, but it becomes significant in high dimensional settings requiring function approximation.
Towards developing more efficient IRL algorithms, a number of works [8–12] propose to leverage the idea of adversarial training [13]. These algorithms learn a non-stationary reward function through training a discriminator, which is then used to guide the policy to match the behavior trajectories from the expert dataset. However, [14] pointed out that the resulting discriminator (hence the reward function) typically cannot be used in new learning tasks, since it is highly dependent on the corresponding policy and current environment dynamics. Moreover, due to the brittle approximation techniques and sensitive hyperparameter choice in the adversarial training, these IRL algorithms can be unstable. [15, 16].
More recent works [1, 2] have developed algorithms to alleviate the computational burden of the nested-loop training procedures. In [1], the authors propose to model the IRL using certain maximum entropy RL problem with specific reward function (which assigns r = +1 for matching expert demonstrations and r = 0 for all other behaviors). Then a soft Q imitation learning (SQIL) algorithm is developed. In [2], the authors propose to transform the standard formulation of IRL (discussed above) into a single-level problem, through learning a soft Q-function to implicitly represent the reward function and the policy. An inverse soft-Q learning (IQ-Learn) algorithm is then developed, which is shown to be effective in estimating the policy for the environment that it is trained on. Despite being computationally efficient, IQ-Learn sacrifices the accuracy in estimating the rewards since it indirectly recovers rewards from a soft Q-function approximator which is highly dependent upon the environment dynamics and does not strictly satisfy the soft-Bellman equation. Therefore it is not well-suited for counterfactual prediction or transfer learning setting.
Finally, in f -IRL [14] the authors consider an approach for estimating rewards based on the minimization of several measures of divergence with respect to the expert’s state visitation measure. The approach is limited to estimating rewards that only depend on state. Moreover, while the results reported are based upon a single-loop implementation, the paper does not provide a convergence guarantee to support performance. We refer the readers to Appendix A for other related works.
Our Contributions. The goal of this work is to develop an algorithm for IRL which is capable of producing high-quality estimates of both rewards and behavior policies with finite-time guarantees. The major contributions of this work are listed below.
• We consider a formulation of IRL based on maximum likelihood (ML) estimation over optimal (entropy-regularized) policies, and prove that a strong duality relationship with maximum entropy IRL holds if rewards are represented by a linear combination of features. 1 The ML formulation is a bi-level optimization problem, where the upper-level problem maximizes the likelihood function, while the lower-level finds the optimal policy under the current reward parameterization. Such a bi-level structure is not only instrumental to the subsequent algorithm design, but is also flexible to incorporate the use of state-only, as well as the regular reward function (which depends on the state and action pair). The former is suitable for transfer learning since it is insensitive to the changes of the environment dynamics, while the latter can be used to efficiently imitate the expert policy.
• Based on the ML-IRL formulation, we develop an efficient algorithm. To avoid the computational burden of repeatedly solving the lower-level policy optimization problem, the proposed algorithm has a single-loop structure where the policy improvement step and reward optimization step are performed alternatingly so that each step can be performed relatively cheaply. Further, we show that the algorithm has strong theoretical guarantees: to achieve a certain $\epsilon$-approximate stationary solution for a non-linearly parameterized problem, it requires $\mathcal{O}(\epsilon^{-2})$ steps of policy and reward updates each. To our knowledge, it is the first algorithm which has a finite-time guarantee for the IRL problem under nonlinear parameterization of reward functions.
• We conduct extensive experiments to demonstrate that the proposed algorithm outperforms many state-of-the-art IRL algorithms in both policy estimation and reward recovery. In particular, when transferring to a new environment, RL algorithms using rewards recovered by the proposed algorithm outperform those that use rewards recovered from existing IRL and imitation learning benchmarks.
1Heuristic arguments for this duality result are discussed in [5], wherein the distribution of state-action paths is approximated (see equation (4) in [5]) and the equivalence between maximum entropy estimation and maximum likelihood (over the class of exponential distributions) [17] is invoked.
2 Preliminaries
In this section, we review the fundamentals of maximum entropy inverse reinforcement learning (MaxEnt-IRL). We consider an MDP defined by the tuple $(\mathcal{S},\mathcal{A},\mathcal{P},\eta,r,\gamma)$; $\mathcal{S}$ and $\mathcal{A}$ denote the state space and the action space respectively; $\mathcal{P}(s'|s,a):\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ denotes the transition probability; $\eta(\cdot)$ denotes the distribution for the initial state; $r(s,a):\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is the reward function and $\gamma$ is a discount factor.
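For concreteness, the code sketches included below assume a small tabular MDP of exactly this form; the following is a minimal Python container for such an MDP (our own illustration; the names TabularMDP and random_mdp do not appear in the paper).

# Minimal tabular MDP container, used purely for illustration in later sketches.
import numpy as np
from dataclasses import dataclass

@dataclass
class TabularMDP:
    P: np.ndarray      # transition probabilities, shape (S, A, S)
    r: np.ndarray      # reward table r(s, a), shape (S, A)
    eta: np.ndarray    # initial-state distribution, shape (S,)
    gamma: float       # discount factor

def random_mdp(n_states=5, n_actions=3, gamma=0.9, seed=0):
    """Draw a small random MDP for testing the algorithm sketches below."""
    rng = np.random.default_rng(seed)
    P = rng.random((n_states, n_actions, n_states))
    P /= P.sum(axis=-1, keepdims=True)       # normalize rows into probabilities
    r = rng.standard_normal((n_states, n_actions))
    eta = rng.random(n_states)
    eta /= eta.sum()
    return TabularMDP(P=P, r=r, eta=eta, gamma=gamma)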
The MaxEnt-IRL formulation [6, 18–20] consists of finding a policy maximizing entropy subject to the expected features under such policy matching the empirical averages in the expert’s observation dataset. Specifically, the MaxEnt-IRL formulation is given by:
\[
\max_{\pi}\;\; H(\pi) := \mathbb{E}_{\tau\sim\pi}\Big[-\sum_{t=0}^{\infty}\gamma^{t}\log\pi(a_t|s_t)\Big] \tag{MaxEnt-IRL}
\]
\[
\text{s.t.}\;\; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_t,a_t)\Big] = \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_t,a_t)\Big]
\]
where $\tau=\{(s_t,a_t)\}_{t=0}^{\infty}$ denotes a trajectory, $\phi(s,a)$ is the feature vector of the state-action pair $(s,a)$, and $\pi_{\mathrm{E}}$ denotes the expert policy. Let $\theta$ denote the dual variable for the linear constraint; then the Lagrangian of (MaxEnt-IRL) is given by
\[
L(\pi,\theta) := H(\pi) + \Big\langle \theta,\; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_t,a_t)\Big] - \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_t,a_t)\Big] \Big\rangle. \tag{1}
\]
In [6, 18, 19], the authors proposed a "dual descent" algorithm, which alternates between i) solving $\max_{\pi} L(\pi,\theta)$ for fixed $\theta$, and ii) a gradient descent step to optimize the dual variable $\theta$. It is shown that the optimizer $\pi^{*}_{\theta}$ in step i) can be recursively defined as $\pi^{*}_{\theta}(a_t|s_t) = \frac{Z_{a_t|s_t,\theta}}{Z_{s_t,\theta}}$, where $\log Z_{a_t|s_t,\theta} = \phi(s_t,a_t)^{T}\theta + \gamma\,\mathbb{E}_{s_{t+1}\sim\mathcal{P}(\cdot|s_t,a_t)}\big[\log Z_{s_{t+1},\theta}\big]$ and $\log Z_{s_t,\theta} = \log\sum_{a\in\mathcal{A}} Z_{a|s_t,\theta}$.
From a computational perspective, the above algorithm is not efficient: it has a nested-loop structure, which repeatedly computes the optimal policy $\pi^{*}_{\theta}$ under each variable $\theta$. It is known that when the underlying MDP is high-dimensional, such an algorithm can be computationally prohibitive [9, 10].
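To make the nested-loop cost concrete, here is a rough Python sketch (ours, assuming a small tabular MDP and a fixed reward table r[s, a] corresponding to $\phi(s,a)^{T}\theta$) of the inner-loop computation implied by the recursion above: a soft value iteration for $\log Z$ that must be re-run to near convergence after every dual update of $\theta$.

# Inner loop of the nested MaxEnt-IRL scheme (sketch, not the authors' code):
# soft value iteration computing log Z and the optimal soft policy for one fixed reward.
import numpy as np
from scipy.special import logsumexp

def soft_value_iteration(P, r, gamma, n_iters=500):
    """P: (S, A, S) transition tensor, r: (S, A) reward table. Returns pi*: (S, A)."""
    S, A, _ = P.shape
    log_z_state = np.zeros(S)
    for _ in range(n_iters):                        # fixed-point iteration on log Z
        log_z_sa = r + gamma * P @ log_z_state      # log Z_{a|s, theta}
        log_z_state = logsumexp(log_z_sa, axis=1)   # log Z_{s, theta}
    return np.exp(log_z_sa - log_z_state[:, None])  # pi*_theta(a|s) = Z_{a|s}/Z_s

Every outer update of the dual variable would call this routine again, which is the repeated policy computation criticized above.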
Recent work [2] proposed an algorithm called IQ-Learn to improve upon the MaxEnt-IRL by considering a saddle-point formulation:
\[
\min_{r}\;\max_{\pi}\;\Big\{ H(\pi) + \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}\, r(s_t,a_t)\Big] - \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t=0}^{\infty}\gamma^{t}\, r(s_t,a_t)\Big] \Big\} \tag{2}
\]
where r(st, at) is the reward associated with state-action pair (st, at). The authors show that this problem can be transformed into an optimization problem only defined in terms of the soft Q-function, which implicitly represents both reward and policy. IQ-Learn is shown to be effective in imitating the expert behavior while only relying on the estimation of the soft Q-function. However, the implicit reward estimate obtained is not necessarily accurate since its soft Q-function estimate depends on the environment dynamics and does not strictly satisfy the soft-Bellman equation. Hence, it is difficult to transfer the recovered reward function to new environments.
3 Problem Formulation
In this section, we consider an ML formulation of the IRL problem and formalize a duality relationship with the maximum entropy-based formulation (MaxEnt-IRL).
Maximum Log-Likelihood IRL (ML-IRL)
A model of the expert's behavior is a randomized policy $\pi_{\theta}(\cdot|s)$, where $\pi_{\theta}$ is a specific policy corresponding to the reward parameter $\theta$. With the state dynamics $\mathcal{P}(s_{t+1}|s_t,a_t)$, the discounted log-likelihood of observing the expert trajectory $\tau$ under model $\pi_{\theta}$ can be written as follows:
\[
\mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\log\prod_{t\ge 0}\big(\mathcal{P}(s_{t+1}|s_t,a_t)\,\pi_{\theta}(a_t|s_t)\big)^{\gamma^{t}}\Big]
= \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t\ge 0}\gamma^{t}\log\pi_{\theta}(a_t|s_t)\Big]
+ \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t\ge 0}\gamma^{t}\log\mathcal{P}(s_{t+1}|s_t,a_t)\Big].
\]
Since the second term does not depend on the reward parameter $\theta$, we consider the following maximum log-likelihood IRL formulation:
\[
\max_{\theta}\;\; L(\theta) := \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t=0}^{\infty}\gamma^{t}\log\pi_{\theta}(a_t|s_t)\Big] \tag{ML-IRL}
\]
\[
\text{s.t.}\;\; \pi_{\theta} := \arg\max_{\pi}\; \mathbb{E}_{\tau\sim\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}\big(r(s_t,a_t;\theta) + \mathcal{H}(\pi(\cdot|s_t))\big)\Big], \tag{3a}
\]
where $r(s,a;\theta)$ is the reward function and $\mathcal{H}(\pi(\cdot|s)) := -\sum_{a\in\mathcal{A}}\pi(a|s)\log\pi(a|s)$. We now make some remarks about ML-IRL. First, the problem takes the form of a bi-level optimization problem, where the upper-level problem (ML-IRL) optimizes the reward parameter $\theta$, while the lower-level problem describes the corresponding policy $\pi_{\theta}$ as the solution to an entropy-regularized MDP ([21, 22]). In what follows we will leverage recently developed (stochastic) algorithms for bi-level optimization [23–25], which avoid the high complexity resulting from nested-loop algorithms. Second, it is reasonable to use the ML function as the loss, because it searches for a reward function which generates a behavior policy that best fits the expert demonstrations. While the ML function has been considered in [26, 27], those works rely on heuristic algorithms with nested-loop computations to solve their IRL formulations, and the theoretical properties are not studied. Finally, the lower-level problem has been well-studied in the literature [21, 22, 28–30]. The entropy regularization in (3a) ensures the uniqueness of the optimal policy $\pi_{\theta}$ under the fixed reward function $r(s,a;\theta)$ [21, 28]. Even when the underlying MDP is high-dimensional and/or complex, the optimal policy can still be obtained; see recent developments in [21, 22]. We close this section by formally establishing a connection between (MaxEnt-IRL) and (ML-IRL).
Theorem 1. (Strong Duality) Suppose that the reward function is given as $r(s,a;\theta) := \phi(s,a)^{T}\theta$ for all $s\in\mathcal{S}$ and $a\in\mathcal{A}$. Then (ML-IRL) is the Lagrangian dual of (MaxEnt-IRL). Furthermore, strong duality holds, that is, $L(\theta^{*}) = H(\pi^{*})$, where $\theta^{*}$ and $\pi^{*}$ are the global optimal solutions of problems (ML-IRL) and (MaxEnt-IRL), respectively.
The proof of Theorem 1 is relegated to Appendix G. To our knowledge, this result, which specifically addresses the (MaxEnt-IRL) formulation, is novel. Under a finite horizon, a duality between ML estimation and maximum causal entropy is obtained in [18, Theorem 3]. However, the problem considered in that paper is set in neither the RL nor the IRL setting, and therefore the result cannot be directly used in the context of the present paper.
The above duality result reveals a strong connection between the two formulations under linear reward parameterization. Due to the duality result, we know that (ML-IRL) is a concave problem under linear reward parameterization. In this case, any stationary solution to (ML-IRL) is a global optimal estimator of the reward parameter.
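To make this connection concrete, the following short calculation (our own remark, anticipating the gradient expression in Lemma 1 of Section 4) shows why stationarity of (ML-IRL) under linear rewards recovers the feature-matching constraint of (MaxEnt-IRL). With $r(s,a;\theta) = \phi(s,a)^{T}\theta$ we have $\nabla_{\theta} r(s,a;\theta) = \phi(s,a)$, so
\[
\nabla L(\theta) = \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t\ge 0}\gamma^{t}\phi(s_t,a_t)\Big] - \mathbb{E}_{\tau\sim\pi_{\theta}}\Big[\sum_{t\ge 0}\gamma^{t}\phi(s_t,a_t)\Big],
\]
which vanishes exactly when the discounted feature expectations of $\pi_{\theta}$ match those of the expert, i.e., when the constraint of (MaxEnt-IRL) is satisfied.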
4 The Proposed Algorithm
In this section, we design algorithms for (ML-IRL). Recall that one major drawback of algorithms for (MaxEnt-IRL) is that they repeatedly solve a policy optimization problem in the inner loop. Even though the recently proposed algorithm IQ-Learn [2] tries to improve the computational efficiency by implicitly representing the reward function and the policy through a Q-function approximator, it sacrifices the estimation accuracy of the recovered reward. Therefore, one important goal of our design is to find provably efficient algorithms that can avoid high-complexity operations and accurately recover the reward function. Specifically, it is desirable that the resulting algorithm only uses a finite number of reward and policy updates to reach certain high-quality solutions.
To proceed, we will leverage the special bi-level structure of the ML-IRL problem. The idea is to alternate between one step of policy update to improve the solution of the lower-level problem, and
Algorithm 1 Maximum Likelihood Inverse Reinforcement Learning (ML-IRL)
Input: Initialize reward parameter $\theta_{0}$ and policy $\pi_{0}$. Set the reward parameter's stepsize as $\alpha$.
for $k = 0, 1, \ldots, K-1$ do
    Policy Evaluation: Compute $Q^{\mathrm{soft}}_{r_{\theta_k},\pi_k}(\cdot,\cdot)$ under reward function $r(\cdot,\cdot;\theta_k)$
    Policy Improvement: $\pi_{k+1}(\cdot|s) \propto \exp\big(Q^{\mathrm{soft}}_{r_{\theta_k},\pi_k}(s,\cdot)\big)$, $\forall\, s\in\mathcal{S}$
    Data Sampling I: Sample an expert trajectory $\tau^{\mathrm{E}}_{k} := \{s_t,a_t\}_{t\ge 0}$
    Data Sampling II: Sample a trajectory $\tau^{\mathrm{A}}_{k} := \{s_t,a_t\}_{t\ge 0}$ from the current policy $\pi_{k+1}$
    Estimating Gradient: $g_k := h(\theta_k;\tau^{\mathrm{E}}_{k}) - h(\theta_k;\tau^{\mathrm{A}}_{k})$, where $h(\theta;\tau) := \sum_{t\ge 0}\gamma^{t}\nabla_{\theta} r(s_t,a_t;\theta)$
    Reward Parameter Update: $\theta_{k+1} := \theta_k + \alpha g_k$
end for
one step of the parameter update which improves the upper-level loss function. At each iteration $k$, given the current policy $\pi_k$ and the reward parameter $\theta_k$, a new policy $\pi_{k+1}$ is generated from the policy improvement step, and $\theta_{k+1}$ is generated by the reward optimization step.
This kind of alternating update is efficient because there is no need to completely solve the policy optimization subproblem before updating the reward parameters. It has been used in many other RL-related settings as well. For example, the well-known actor-critic (AC) algorithm for policy optimization [31, 32, 23] alternates between one step of policy update and one step of critic parameter update. Below we present the details of our algorithm at a given iteration $k$.
Policy Improvement Step. Let us consider optimizing the lower-level problem when the reward parameter $\theta_k$ is held fixed. Towards this end, define the so-called soft Q and soft value functions for a given policy $\pi_k$ and a reward parameter $\theta_k$:
\[
V^{\mathrm{soft}}_{r_k,\pi_k}(s) = \mathbb{E}_{\pi_k}\Big[\sum_{t=0}^{\infty}\gamma^{t}\big(r(s_t,a_t;\theta_k) + \mathcal{H}(\pi_k(\cdot|s_t))\big)\,\Big|\, s_0 = s\Big] \tag{4a}
\]
\[
Q^{\mathrm{soft}}_{r_k,\pi_k}(s,a) = r(s,a;\theta_k) + \gamma\,\mathbb{E}_{s'\sim\mathcal{P}(\cdot|s,a)}\big[V^{\mathrm{soft}}_{r_k,\pi_k}(s')\big] \tag{4b}
\]
We will adopt the well-known soft policy iteration [21] to optimize the lower-level problem (3a). Under the current reward parameter $\theta_k$ and the policy $\pi_k$, the soft policy iteration generates a new policy $\pi_{k+1}$ as follows:
\[
\pi_{k+1}(a|s) \propto \exp\big(Q^{\mathrm{soft}}_{r_{\theta_k},\pi_k}(s,a)\big), \quad \forall\, s\in\mathcal{S},\; a\in\mathcal{A}. \tag{5}
\]
Under a fixed reward function, it can be shown that the new policy $\pi_{k+1}$ monotonically improves upon $\pi_k$, and that the iteration converges linearly to the optimal policy; see [21, Theorem 4] and [28, Theorem 1].
Note that in practice, we usually do not have direct access to the exact soft Q-function in (4b). In order to perform the policy improvement, a few stochastic update steps of soft Q-learning [21] or soft Actor-Critic (SAC) [22] can be used to replace the one-step soft policy iteration (5). In the appendix, we present Alg. 2 to demonstrate such a practical implementation of our proposed algorithm.
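For illustration, the following is a tabular sketch (ours, not the authors' implementation) of one exact soft policy iteration step, combining the policy evaluation of (4a)-(4b) with the improvement rule (5); in practice this is replaced by the stochastic soft Q-learning or SAC updates mentioned above.

# One exact soft policy iteration step on a small finite MDP (illustrative sketch).
import numpy as np

def soft_policy_iteration_step(P, r, pi, gamma, n_eval_iters=500):
    """P: (S, A, S), r: (S, A), pi: (S, A) current policy. Returns (new_pi, Q_soft)."""
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(n_eval_iters):
        # Soft value under pi: V(s) = sum_a pi(a|s) * (Q(s,a) - log pi(a|s)), i.e. E[Q] + entropy
        V = np.sum(pi * (Q - np.log(pi + 1e-12)), axis=1)
        Q = r + gamma * P @ V                       # soft Bellman backup (4b)
    Q_shift = Q - Q.max(axis=1, keepdims=True)      # subtract max for numerical stability
    new_pi = np.exp(Q_shift)
    new_pi /= new_pi.sum(axis=1, keepdims=True)     # pi_{k+1}(a|s) proportional to exp(Q^soft(s,a))
    return new_pi, Q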
Reward Optimization Step. We propose to use a stochastic gradient-type algorithm to optimize $\theta$. Towards this end, let us first derive the exact gradient $\nabla L(\theta)$; see Appendix D for the detailed proof.
Lemma 1. The gradient of the likelihood function $L(\theta)$ can be expressed as follows:
\[
\nabla L(\theta) = \mathbb{E}_{\tau\sim\pi_{\mathrm{E}}}\Big[\sum_{t\ge 0}\gamma^{t}\nabla_{\theta} r(s_t,a_t;\theta)\Big] - \mathbb{E}_{\tau\sim\pi_{\theta}}\Big[\sum_{t\ge 0}\gamma^{t}\nabla_{\theta} r(s_t,a_t;\theta)\Big]. \tag{6}
\]
To obtain stochastic estimators of the exact gradient $\nabla L(\theta_k)$, we take the following approximation steps: 1) approximate the optimal policy $\pi_{\theta_k}$ by $\pi_{k+1}$ in (5), since the optimal policy $\pi_{\theta_k}$ is not available throughout the algorithm; 2) sample one expert trajectory $\tau^{\mathrm{E}}_{k}$, which has already been generated by the expert policy $\pi_{\mathrm{E}}$; 3) sample one trajectory $\tau^{\mathrm{A}}_{k}$ from the current policy $\pi_{k+1}$.
Following the approximation steps mentioned above, we construct a stochastic estimator $g_k$ to approximate the exact gradient $\nabla L(\theta_k)$ in (6) as follows:
\[
g_k := h(\theta_k;\tau^{\mathrm{E}}_{k}) - h(\theta_k;\tau^{\mathrm{A}}_{k}), \quad\text{where}\quad h(\theta;\tau) := \sum_{t\ge 0}\gamma^{t}\nabla_{\theta} r(s_t,a_t;\theta). \tag{7}
\]
With the stochastic gradient estimator $g_k$, the reward parameter $\theta_k$ is updated as:
\[
\theta_{k+1} = \theta_k + \alpha g_k, \tag{8}
\]
where $\alpha$ is the stepsize in updating the reward parameter.
In summary, the proposed algorithm for solving the ML-IRL problem (ML-IRL) is given in Alg. 1.
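As a minimal sketch (ours) of the reward-optimization half of Alg. 1, assume a linear reward $r(s,a;\theta) = \phi(s,a)^{T}\theta$, so that $\nabla_{\theta} r(s_t,a_t;\theta) = \phi(s_t,a_t)$; here `phi` is a hypothetical feature table and trajectories are lists of (state, action) index pairs.

# Reward-update step (7)-(8) of Algorithm 1 for a linear reward (illustrative sketch).
import numpy as np

def discounted_feature_sum(phi, traj, gamma):
    """h(theta; tau) = sum_t gamma^t * grad_theta r(s_t, a_t; theta) = sum_t gamma^t * phi(s_t, a_t)."""
    return sum((gamma ** t) * phi[s, a] for t, (s, a) in enumerate(traj))

def reward_update(theta, phi, expert_traj, agent_traj, gamma, alpha):
    """One stochastic gradient ascent step (8) on the likelihood L(theta)."""
    g = discounted_feature_sum(phi, expert_traj, gamma) \
        - discounted_feature_sum(phi, agent_traj, gamma)   # estimator g_k in (7)
    return theta + alpha * g

Alternating this reward step with the policy-improvement sketch given earlier reproduces the single-loop structure of Alg. 1: each iteration consumes one expert trajectory and one trajectory freshly sampled from the updated policy $\pi_{k+1}$.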
5 Theoretical Analysis
In this section, we present finite-time guarantees for the proposed algorithm.
To begin with, first recall that in Sec. 3 we mentioned that (ML-IRL) is a bi-level problem, where the upper-level (resp. lower-level) problem optimizes the reward parameter (resp. the policy). In order to solve (ML-IRL), our Algorithm 1 has a single-loop structure, which alternates between one step of policy update and one step of the reward parameter update. Such a single-loop structure indeed has computational benefits, but it also leads to potential instability, since the lower-level problem can stay far away from its true solution. Specifically, at each iteration $k$, the potential instability is induced by the distribution mismatch between the policies $\pi_{k+1}$ and $\pi_{\theta_k}$ when we use the estimator $g_k$ in (7) to approximate the exact gradient $\nabla L(\theta_k)$ in (6) for updating the reward parameter $\theta_k$. Towards stabilizing the algorithm, we adopt the so-called two-timescale stochastic approximation (TTSA) approach [33, 23], where the lower-level problem updates on a faster timescale (i.e., converges faster) compared with its upper-level counterpart. Intuitively, TTSA enables $\pi_{k+1}$ to track the optimal $\pi_{\theta_k}$, leading to a stable algorithm. In the proposed Algorithm 1, the policy (lower-level variable) is continuously updated by the soft policy iteration (5), and it is 'fast' because it converges linearly to the optimal policy under a fixed reward function [28, Theorem 1]. On the other hand, the reward parameter update (8) does not have such a linear convergence property, and therefore it works on a 'slow' timescale. To begin our analysis, let us first present a few technical assumptions.
Assumption 1 (Ergodicity). For any policy $\pi$, assume the Markov chain with transition kernel $\mathcal{P}$ is irreducible and aperiodic under policy $\pi$. Then there exist constants $C > 0$ and $\rho\in(0,1)$ such that
\[
\sup_{s\in\mathcal{S}}\;\big\| \mathbb{P}(s_t\in\cdot\,|\,s_0=s,\pi) - \mu_{\pi}(\cdot) \big\|_{\mathrm{TV}} \le C\rho^{t}, \quad \forall\, t\ge 0,
\]
where $\|\cdot\|_{\mathrm{TV}}$ is the total variation (TV) norm and $\mu_{\pi}$ is the stationary state distribution under $\pi$.
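As a concrete numerical illustration of the geometric mixing required by Assumption 1 (a toy example of ours, not taken from the paper), consider a two-state chain:

# Toy check: TV distance to the stationary distribution decays geometrically in t.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                   # row-stochastic transition matrix
evals, evecs = np.linalg.eig(P.T)
mu = np.real(evecs[:, np.argmax(np.real(evals))])
mu /= mu.sum()                               # stationary distribution: mu P = mu

dist = np.array([1.0, 0.0])                  # start deterministically in state 0
for t in range(1, 6):
    dist = dist @ P
    tv = 0.5 * np.abs(dist - mu).sum()
    print(f"t={t}: TV distance = {tv:.4f}")  # shrinks by the second eigenvalue, 0.7, per step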
Assumption 1 states that the Markov chain mixes at a geometric rate. It is a common assumption in the literature on RL [34, 35, 32], which holds for any time-homogeneous Markov chain with finite state space or any uniformly ergodic Markov chain with general state space.
Assumption 2. For any $s\in\mathcal{S}$, $a\in\mathcal{A}$ and any reward parameter $\theta$, the following holds:
\[
\big\|\nabla_{\theta} r(s,a;\theta)\big\| \le L_r, \tag{9a}
\]
\[
\big\|\nabla_{\theta} r(s,a;\theta_1) - \nabla_{\theta} r(s,a;\theta_2)\big\| \le L_g\|\theta_1 - \theta_2\|, \tag{9b}
\]
where $L_r$ and $L_g$ are positive constants.
Assumption 2 requires that the parameterized reward function has bounded gradients and is Lipschitz smooth. Such Lipschitz-type assumptions are common in the literature on min-max / bi-level optimization [36, 23, 37, 25, 38].
Based on Assumptions 1 - 2, we next provide the following Lipschitz properties:
Lemma 2. Suppose Assumptions 1 - 2 hold. For any reward parameters $\theta_1$ and $\theta_2$, the following results hold:
\[
\big|Q^{\mathrm{soft}}_{r_{\theta_1},\pi_{\theta_1}}(s,a) - Q^{\mathrm{soft}}_{r_{\theta_2},\pi_{\theta_2}}(s,a)\big| \le L_q\|\theta_1 - \theta_2\|, \quad \forall\, s\in\mathcal{S},\; a\in\mathcal{A}, \tag{10a}
\]
\[
\big\|\nabla L(\theta_1) - \nabla L(\theta_2)\big\| \le L_c\|\theta_1 - \theta_2\|, \tag{10b}
\]
where $Q^{\mathrm{soft}}_{r_{\theta},\pi_{\theta}}(\cdot,\cdot)$ denotes the soft Q-function under the reward function $r(\cdot,\cdot;\theta)$ and the policy $\pi_{\theta}$. The positive constants $L_q$ and $L_c$ are defined in Appendix E.
The Lipschitz properties identified in Lemma 2 are vital for the convergence analysis. We then present the main results, which show the convergence speed of the policy $\{\pi_k\}_{k\ge 0}$ and the reward parameter $\{\theta_k\}_{k\ge 0}$ in Alg. 1. Please see Appendix E for the detailed proof.
Theorem 2. Suppose Assumptions 1 - 2 hold. Select the stepsize $\alpha := \alpha_0 K^{-\sigma}$ for the reward update step (8), where $\alpha_0 > 0$ and $\sigma\in(0,1)$ are some fixed constants, and $K$ is the total number of iterations to be run by the algorithm. Then the following results hold:
\[
\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\big[\|\log\pi_{k+1} - \log\pi_{\theta_k}\|_{\infty}\big] = \mathcal{O}(K^{-1}) + \mathcal{O}(K^{-\sigma}), \tag{11a}
\]
\[
\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\big[\|\nabla L(\theta_k)\|^{2}\big] = \mathcal{O}(K^{-\sigma}) + \mathcal{O}(K^{-1+\sigma}) + \mathcal{O}(K^{-1}), \tag{11b}
\]
where we denote $\|\log\pi_{k+1} - \log\pi_{\theta_k}\|_{\infty} := \max_{s\in\mathcal{S},\,a\in\mathcal{A}}\big|\log\pi_{k+1}(a|s) - \log\pi_{\theta_k}(a|s)\big|$. In particular, setting $\sigma = 1/2$, both quantities in (11a) and (11b) converge with the rate $\mathcal{O}(K^{-1/2})$.
In Theorem 2, we present the finite-time guarantee for the convergence of Alg. 1. Moreover, as a special case, when the reward is parameterized as a linear function, we know that (ML-IRL) is concave, and Theorem 2 then provides a stronger guarantee which identifies a globally optimal reward estimator in finite time.
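The choice $\sigma = 1/2$ can be read off directly from (11b); the following short calculation (ours) makes the trade-off explicit:
\[
\min_{\sigma\in(0,1)}\;\max\big\{K^{-\sigma},\, K^{-1+\sigma}\big\}\quad\text{is attained at}\quad -\sigma = -1+\sigma \;\Longleftrightarrow\; \sigma = \tfrac{1}{2},
\]
which balances the two dominant terms at $\mathcal{O}(K^{-1/2})$; the remaining $\mathcal{O}(K^{-1})$ term is of higher order and does not affect the rate.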
We provide a proof sketch below to present the key steps. The detailed proof is in Appendix H.
Proof sketch. We outline our main steps in analyzing (11a) and (11b) respectively.
In order to show the convergence of the policy estimates in (11a), there are several key steps. First, we note that both policies $\pi_{k+1}$ and $\pi_{\theta_k}$ are in the softmax parameterization, where $\pi_{k+1}(\cdot|s) \propto \exp\big(Q^{\mathrm{soft}}_{r_{\theta_k},\pi_k}(s,\cdot)\big)$ and $\pi_{\theta_k}(\cdot|s) \propto \exp\big(Q^{\mathrm{soft}}_{r_{\theta_k},\pi_{\theta_k}}(s,\cdot)\big)$. Then, we can show a Lipschitz continuity property between the policy and the soft Q-function:
\[
\|\log\pi_{k+1} - \log\pi_{\theta_k}\|_{\infty} \le 2\,\big\|Q^{\mathrm{soft}}_{r_{\theta_k},\pi_k} - Q^{\mathrm{soft}}_{r_{\theta_k},\pi_{\theta_k}}\big\|_{\infty},
\]
where the infinity norm $\|\cdot\|_{\infty}$ is defined over the state-action space $\mathcal{S}\times\mathcal{A}$. Moreover, by analyzing the contraction property of the soft policy iteration (5), we bound $\|Q^{\mathrm{soft}}_{r_{\theta_k},\pi_k} - Q^{\mathrm{soft}}_{r_{\theta_k},\pi_{\theta_k}}\|_{\infty}$ as:
\[
\big\|Q^{\mathrm{soft}}_{r_{\theta_k},\pi_k} - Q^{\mathrm{soft}}_{r_{\theta_k},\pi_{\theta_k}}\big\|_{\infty} \le \gamma\,\big\|Q^{\mathrm{soft}}_{r_{\theta_{k-1}},\pi_{k-1}} - Q^{\mathrm{soft}}_{r_{\theta_{k-1}},\pi_{\theta_{k-1}}}\big\|_{\infty} + 2L_q\|\theta_k - \theta_{k-1}\|.
\]
To ensure that the error term $\|\theta_k - \theta_{k-1}\|$ is small, we select the stepsize of the reward parameters as $\alpha := \alpha_0 K^{-\sigma}$, where $K$ is the total number of iterations and $\sigma > 0$. Then, by combining the previous two steps, we can further show the convergence rate of the policy estimates in (11a).
To prove the convergence of the reward parameters in (11b), we first leverage the Lipschitz smoothness property of $L(\theta)$ in (10b). However, one technical challenge in the convergence analysis is how to handle the bias between the gradient estimator $g_k$ defined in (7) and the exact gradient $\nabla L(\theta_k)$. When we construct the gradient estimator $g_k$ in (7), we need to sample trajectories from the current policy $\pi_{k+1}$ and the expert dataset $\mathcal{D}$. However, according to the expression of $\nabla L(\theta_k)$ in (6), the trajectories should be sampled from the optimal policy $\pi_{\theta_k}$ and the dataset $\mathcal{D}$. Hence, there is a distribution mismatch between $\pi_{k+1}$ and $\pi_{\theta_k}$. Our key idea is to leverage (11a) to handle this distribution mismatch error, and thus show that the bias between $g_k$ and $\nabla L(\theta_k)$ can be controlled. To the best of our knowledge, Theorem 2 is the first non-asymptotic convergence result for IRL with nonlinear reward parameterization.
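To make the source of this bias explicit, note that by (6) and (7) the conditional bias of the estimator can be written as (a restatement of the argument above, not an additional result):
\[
\mathbb{E}\big[g_k \mid \theta_k, \pi_{k+1}\big] - \nabla L(\theta_k) = \mathbb{E}_{\tau\sim\pi_{\theta_k}}\Big[\sum_{t\ge 0}\gamma^{t}\nabla_{\theta} r(s_t,a_t;\theta_k)\Big] - \mathbb{E}_{\tau\sim\pi_{k+1}}\Big[\sum_{t\ge 0}\gamma^{t}\nabla_{\theta} r(s_t,a_t;\theta_k)\Big],
\]
so the bias vanishes when $\pi_{k+1}$ coincides with $\pi_{\theta_k}$, and its magnitude is controlled through the policy gap bounded in (11a).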
6 A Discussion over State-Only Reward
In this section we consider IRL problems modeled by rewards that are only a function of the state. A lower-dimensional representation of the agent's preferences (i.e., in terms only of states as opposed to states and actions) is more likely to facilitate counterfactual analysis, such as predicting the optimal policy under different environment dynamics and/or learning new tasks. This is because the estimation of preferences which are only defined in terms of states is less sensitive to the specific environment dynamics in the expert's demonstration dataset. Moreover, in applications such as healthcare [39] and autonomous driving [40], simply imitating the expert policy can potentially result in poor performance, since the learner and the expert may have different transition dynamics. Similar points have also been argued in recent works [14, 41–43].
Next, let us briefly discuss how we can understand (ML-IRL) and Alg. 1 when the reward is parameterized as a state-only function. First, it turns out that there is an equivalent formulation of (ML-IRL) when the expert trajectories only contain the visited states.
Lemma 3. Suppose the expert trajectories $\tau$ are sampled from a policy $\pi_{\mathrm{E}}$, and the reward is parameterized as a state-only function $r(s;\theta)$. Then (ML-IRL) is equivalent to the following:
\[
\min_{\theta}\;\; \mathbb{E}_{s_0\sim\eta(\cdot)}\big[V^{\mathrm{soft}}_{r_{\theta},\pi_{\theta}}(s_0)\big] - \mathbb{E}_{s_0\sim\eta(\cdot)}\big[V^{\mathrm{soft}}_{r_{\theta},\pi_{\mathrm{E}}}(s_0)\big] \tag{12a}
\]
\[
\text{s.t.}\;\; \pi_{\theta} := \arg\max_{\pi}\; \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}\big(r(s_t;\theta) + \mathcal{H}(\pi(\cdot|s_t))\big)\Big]. \tag{12b}
\]
Please see Appendix F for the detailed derivation. Intuitively, the above lemma says that, when dealing with state-only IRL, (ML-IRL) minimizes the gap between the soft value function of the optimal policy $\pi_{\theta}$ and that of the expert policy $\pi_{\mathrm{E}}$. Moreover, Alg. 1 can also be easily implemented with the state-only reward. In fact, the entire algorithm essentially stays the same, and the only change is that $r(s,a;\theta)$ is replaced by $r(s;\theta)$. In this way, by only using the visited states in the trajectories, one can still compute the stochastic gradient estimator in (7). Therefore, even under the state-only IRL setting where the expert dataset only contains visited states, our formulation and the proposed algorithm still work if we parameterize the reward as a state-only function.
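As a minimal illustration (ours) of this point: with a state-only linear reward $r(s;\theta) = \phi_s(s)^{T}\theta$, the trajectory statistic $h(\theta;\tau)$ in (7) depends only on the visited states, so actions never need to be recorded in the expert dataset. Here `phi_s` is a hypothetical state-feature table.

# State-only variant of the gradient statistic h(theta; tau) in (7) (illustrative sketch).
import numpy as np

def discounted_state_feature_sum(phi_s, visited_states, gamma):
    """h(theta; tau) = sum_t gamma^t * phi_s[s_t]; no actions are required."""
    return sum((gamma ** t) * phi_s[s] for t, s in enumerate(visited_states))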
7 Numerical Results
In this section, we test the performance of our algorithm on a diverse collection of RL tasks and environments. In each experiment set, we train algorithms until convergence and average the scores of the trajectories over multiple random seeds. The hyperparameter settings and simulation details are provided in Appendix B.
MuJoCo Tasks For Inverse Reinforcement Learning. In this experiment set, we test the performance of our algorithm on imitating the expert behavior. We consider several high-dimensional robotics control tasks in MuJoCo [44]. Two classes of existing algorithms are considered as comparison baselines: 1) imitation learning algorithms that only learn the policy to imitate the expert, including Behavior Cloning (BC) [45] and Generative Adversarial Imitation Learning (GAIL) [10]; 2) IRL algorithms which learn a reward function and a policy simultaneously, including Adversarial Inverse Reinforcement Learning (AIRL) [11], f-IRL [14] and IQ-Learn [2]. To ensure a fair comparison, all imitation learning / IRL algorithms use soft Actor-Critic [22] as the base RL algorithm. For the expert dataset, we use the data provided in the official implementation2 of f-IRL.
In this experiment, we implement two versions of our proposed algorithm: ML-IRL(State-Action), where the reward is parameterized as a function of state and action, and ML-IRL(State-Only), which utilizes the state-only reward function. In Table 1, we present the simulation results under a limited data regime where the expert dataset only contains a single expert trajectory. The scores (cumulative rewards) reported in the table are averaged over 6 random seeds. For each random seed, we train the algorithm from initialization and collect 20 trajectories to average their cumulative rewards after the algorithm converges. The results reported in Table 1 show that our proposed algorithms outperform the baselines. The numerical results with confidence intervals are in Table 3 (see Appendix).
We observe that BC fails to imitate the expert's behavior. This is due to the fact that BC is based on supervised learning and thus cannot learn a good policy under such a limited data regime. Moreover, we notice that the training of IQ-Learn is unstable, which may be due to its inaccurate approximation of the soft Q-function. Therefore, for the MuJoCo tasks where IQ-Learn does not perform well and we cannot match the results presented in the original paper [2], we directly report the results from that paper (and mark them by * in Table 1). The results of AIRL are not presented in Table 1 since it performs poorly even after we spent significant effort on parameter tuning (similar observations have been made in [46, 14]).
Transfer Learning Across Changing Dynamics. We further evaluate IRL algorithms in the transfer learning setting. We follow the environment setup in [11], where two environments with different dynamics are considered: Custom-Ant vs. Disabled-Ant. We compare ML-IRL(State-Only) with several existing IRL methods: 1) AIRL [11]; 2) f-IRL [14]; 3) IQ-Learn [2].
2https://github.com/twni2016/f-IRL
We consider two transfer learning settings: 1) data transfer; 2) reward transfer. For both settings, the expert dataset / trajectories are generated in Custom-Ant. In the data transfer setting, we train IRL agents in Disabled-Ant by using the expert trajectories, which are generated in Custom-Ant. In the reward transfer setting, we first use IRL algorithms to infer the reward functions in Custom-Ant, and then transfer these recovered reward functions to Disabled-Ant for further evaluation. In both settings, we also train SAC with the ground-truth reward in Disabled-Ant and report the scores.
The numerical results are reported in Table 2. The proposed ML-IRL(State-Only) achieves superior performance compared with the existing IRL benchmarks in both settings. We notice that IQ-Learn fails in both settings since it indirectly recovers the reward function from a soft Q-function approximator, which can be inaccurate and is highly dependent upon the environment dynamics. Therefore, the reward function recovered by IQ-Learn cannot be disentangled from the expert actions and environment dynamics, which leads to its failures in the transfer learning tasks.
8 Conclusion
In this paper, we present a maximum likelihood IRL formulation and propose a provably efficient algorithm with a single-loop structure. To our knowledge, we provide the first non-asymptotic analysis for an IRL algorithm under nonlinear reward parameterization. As a by-product, when we parameterize the reward as a state-only function, our algorithm works in the state-only IRL setting and enables reward transfer to new environments with different dynamics. Our algorithm outperforms existing IRL methods on high-dimensional robotics control tasks and the corresponding transfer learning settings. A limitation of our method is the requirement for online training, so one future direction of this work is to further extend our algorithm and the theoretical analysis to the offline IRL setting.
Potential Negative Social Impacts
Since IRL methods aim to recover the reward function and the associated optimal policy from the observed expert dataset, potential negative social impacts may occur if there are bad demonstrations included in the expert dataset. Thus, for sensitive applications such as autonomous driving and clinical decision support, additional care should be taken to avoid negative biases from the expert demonstrations and ensure safe adaptation.
Acknowledgments
We thank the anonymous reviewers for their valuable comments. M. Hong and S. Zeng are partially supported by NSF grants CIF-1910385, CMMI-1727757, and AFOSR grant 19RT0424. A. Garcia would like to acknowledge partial support from grant FA9550-19-1-00347 by AFOSR. | 1. What is the focus and contribution of the paper on IRL?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis and empirical evaluation?
3. What are the weaknesses of the paper regarding its assumptions, comparisons with other works, and experimental limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper that the reviewer would like to raise? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper formalizes IRL as an entropy-regularized maximum likelihood problem. The authors show that this problem is dual to maximum entropy IRL, and then formulate this problem as a bi-level optimization. They propose an algorithm that iterates between single steps of evaluation and policy improvement (in contrast to most max entropy IRL methods that have to solve RL problems in an inner loop). For this algorithm, the authors provide a finite-time convergence guarantee under some regularity conditions. Finally, they provide an empirical evaluation on standard MuJoCo tasks, demonstrating that their algorithm outperforms alternative IRL approaches.
Strengths And Weaknesses
Strengths
The paper provides the first finite-time convergence analysis without assuming linear rewards, as far as I can tell. The assumptions of the main theoretical results seem pretty reasonable; this is a solid theoretical contribution.
The duality between MaxEnt IRL and the proposed maximum likelihood formulation might be interesting to the IRL community and could motivate interesting follow-up work.
The authors provide a good experimental evaluation that covers most relevant baselines and shows that their algorithm can perform well empirically.
The authors discuss the novelty of their work and provide particularly appropriate comparisons to results in prior work at multiple points, e.g., for their ML formulation, the proposed algorithm, and the main theoretical result.
Weaknesses
The duality with MaxEnt IRL is not particularly novel and has been observed before in other context (as noted by the authors).
The results in Table 1 don't show very large differences, and it is hard to evaluate if these are significant differences, given that only 3 random seeds were run.
The experimental evaluation is limited to MuJoCo locomotion tasks which can sometimes be too simple to draw strong conclusions.
Overall, the writing could be clearer at times, and the structure of the paper could be improved. For example, the Introduction spends a lot of time discussing prior work before stating the contributions of this paper, which is sub-optimal.
Questions
Can you provide standard errors for the results in Table 1?
Why does the state-action version of the algorithm perform worse than the state-only algorithm in the half-cheetah environment?
Typos:
"inherent nested" -> "inherently nested" (line 4)
"Mujoco" -> "MuJoCo" (line 19)
"computational efficient" -> "computationally efficient" (line 53)
Limitations
I did not find any discussion of potential broader impact of this work. |
More recent works [1, 2] have developed algorithms to alleviate the computational burden of the nested-loop training procedures. In [1], the authors propose to model the IRL using certain maximum entropy RL problem with specific reward function (which assigns r = +1 for matching expert demonstrations and r = 0 for all other behaviors). Then a soft Q imitation learning (SQIL) algorithm is developed. In [2], the authors propose to transform the standard formulation of IRL (discussed above) into a single-level problem, through learning a soft Q-function to implicitly represent the reward function and the policy. An inverse soft-Q learning (IQ-Learn) algorithm is then developed, which is shown to be effective in estimating the policy for the environment that it is trained on. Despite being computationally efficient, IQ-Learn sacrifices the accuracy in estimating the rewards since it indirectly recovers rewards from a soft Q-function approximator which is highly dependent upon the environment dynamics and does not strictly satisfy the soft-Bellman equation. Therefore it is not well-suited for counterfactual prediction or transfer learning setting.
Finally, in f -IRL [14] the authors consider an approach for estimating rewards based on the minimization of several measures of divergence with respect to the expert’s state visitation measure. The approach is limited to estimating rewards that only depend on state. Moreover, while the results reported are based upon a single-loop implementation, the paper does not provide a convergence guarantee to support performance. We refer the readers to Appendix A for other related works.
Our Contributions. The goal of this work is to develop an algorithm for IRL which is capable of producing high-quality estimates of both rewards and behavior policies with finite-time guarantees. The major contributions of this work are listed below.
• We consider a formulation of IRL based on maximum likelihood (ML) estimation over optimal (entropy-regularized) policies, and prove that a strong duality relationship with maximum entropy IRL holds if rewards are represented by a linear combination of features. 1 The ML formulation is a bi-level optimization problem, where the upper-level problem maximizes the likelihood function, while the lower-level finds the optimal policy under the current reward parameterization. Such a bi-level structure is not only instrumental to the subsequent algorithm design, but is also flexible to incorporate the use of state-only, as well as the regular reward function (which depends on the state and action pair). The former is suitable for transfer learning since it is insensitive to the changes of the environment dynamics, while the latter can be used to efficiently imitate the expert policy.
• Based on the ML-IRL formulation, we develop an efficient algorithm. To avoid the computational burden of repeatedly solving the lower-level policy optimization problem, the proposed algorithm has a single-loop structure where the policy improvement step and reward optimization step are performed alternatingly so that each step can be performed relatively cheaply. Further, we show that the algorithm has strong theoretical guarantees: to achieve certain ✏-approximate stationary solution for a non-linearly parameterized problem, it requires O(✏ 2) steps of policy and reward updates each. To our knowledge, it is the first algorithm which has finite-time guarantee for the IRL problem under nonlinear parameterization of reward functions.
• We conduct extensive experiments to demonstrate that the proposed algorithm outperforms many state-of-the-art IRL algorithms in both policy estimation and reward recovery. In particular, when
1Heuristic arguments for this duality result are discussed in [5] wherein the distribution of state-action paths is approximated (see equation (4) in [5]) and the equivalence between maximum entropy estimation and maximum likelihood (over the class of exponential distributions) [17] is invoked.
transferring to a new environment, RL algorithms using rewards recovered by the proposed algorithm outperform those that use rewards recovered from existing IRL and imitation learning benchmarks.
2 Preliminaries
In this section, we review the fundamentals of the maximum entropy inverse reinforcement learning (MaxEnt-IRL). We consider an MDP defined by the tuple (S,A,P, ⌘, r, ); S and A denote the state space and the action space respectively; P(s0|s, a) : S ⇥A⇥ S ! [0, 1] denotes the transition probability; ⌘(·) denotes the distribution for the initial state; r(s, a) : S ⇥ A ! R is the reward function and is a discount factor.
The MaxEnt-IRL formulation [6, 18–20] consists of finding a policy maximizing entropy subject to the expected features under such policy matching the empirical averages in the expert’s observation dataset. Specifically, the MaxEnt-IRL formulation is given by:
max ⇡
H(⇡) := E⌧⇠⇡ 1X
t=0
t log ⇡(at|st)
(MaxEnt-IRL)
s.t. E⌧⇠⇡ 1X
t=0
t (st, at) = E⌧⇠⇡E
1X
t=0
t (st, at)
where ⌧ = {(st, at)}1t=0 denotes a trajectory, (s, a) is the feature vector of the state-action pair (s, a) and ⇡E denotes the expert policy. Let ✓ denote the dual variable for the linear constraint, then the Lagrangian of (MaxEnt-IRL) is given by
L(⇡, ✓) := H(⇡) + * ✓,E⌧⇠⇡ 1X
t=0
t (st, at)
E⌧⇠⇡E
1X
t=0
t (st, at) + . (1)
In [6, 18, 19], the authors proposed a "dual descent" algorithm, which alternates between i) solving max⇡ L(⇡, ✓) for fixed ✓, and ii) a gradient descent step to optimize the dual variable ✓. It is shown that the optimizer ⇡⇤✓ in step i) can be recursively defined as ⇡ ⇤ ✓(at|st) = Zat|st,✓ Zst,✓
, where logZat|st,✓ = (st, at) T ✓ + Est+1⇠P(·|st,at) ⇥ logZst+1,✓ ⇤ and logZst,✓ = log P a2A Za|st,✓ .
From a computational perspective, the above algorithm is not efficient: it has a nested-loop structure, which repeatedly computes the optimal policy ⇡⇤✓ under each variable ✓. It is known that when the underlying MDP is of high-dimension, such an algorithm can be computationally prohibitive [9, 10].
Recent work [2] proposed an algorithm called IQ-Learn to improve upon the MaxEnt-IRL by considering a saddle-point formulation:
min r max ⇡
n H(⇡) + E⌧⇠⇡ ⇥ 1X
t=0
t · r(st, at) ⇤ E⌧⇠⇡E
⇥ 1X
t=0
t · r(st, at) ⇤o (2)
where r(st, at) is the reward associated with state-action pair (st, at). The authors show that this problem can be transformed into an optimization problem only defined in terms of the soft Q-function, which implicitly represents both reward and policy. IQ-Learn is shown to be effective in imitating the expert behavior while only relying on the estimation of the soft Q-function. However, the implicit reward estimate obtained is not necessarily accurate since its soft Q-function estimate depends on the environment dynamics and does not strictly satisfy the soft-Bellman equation. Hence, it is difficult to transfer the recovered reward function to new environments.
3 Problem Formulation
In this section, we consider a ML formulation of the IRL problem and formalize a duality relationship with maximum entropy-based formulation (MaxEnt-IRL).
Maximum Log-Likelihood IRL (ML-IRL) A model of the expert’s behavior is a randomized policy ⇡✓(·|s), where ⇡✓ is a specific policy corresponding to the reward parameter ✓. With the state dynamics P(st+1|st, at), the discounted
log-likelihood of observing the expert trajectory ⌧ under model ⇡✓ can be written follows:
E⌧⇠⇡E ⇥ log
Y t 0 (P(st+1|st, at)⇡✓(at|st)) t⇤ = E⌧⇠⇡E ⇥X t 0 t log ⇡✓(at|st) ⇤
+ E⌧⇠⇡E ⇥X
t 0 t logP(st+1|st, at)
⇤ .
Then we consider the following maximum log-likelihood IRL formulation:
max ✓
L(✓) := E⌧⇠⇡E ⇥ 1X
t=0
t log ⇡✓(at|st) ⇤ (ML-IRL)
s.t ⇡✓ := argmax ⇡
E⌧⇠⇡ 1X
t=0
t ✓ r(st, at; ✓) +H(⇡(·|st)) ◆ , (3a)
where r(s, a; ✓) is the reward function and H(⇡(·|s)) := P
a2A ⇡(a|s) log ⇡(a|s). We now make some remarks about ML-IRL. First, the problem takes the form of a bi-level optimization problem, where the upper-level problem (ML-IRL) optimizes the reward parameter ✓, while the lower-level problem describes the corresponding policy ⇡✓ as the solution to an entropy-regularized MDP ([21, 22]). In what follows we will leverage recently developed (stochastic) algorithms for bi-level optimization [23–25], that avoid the high complexity resulted from nested loop algorithms. Second, it is reasonable to use the ML function as the loss, because it searches for a reward function which generates a behavior policy that can best fit the expert demonstrations. While the ML function has been considered in [26, 27], they rely on heuristic algorithms with nested-loop computations to solve their IRL formulations, and the theoretical properties are not studied. Finally, the lower-level problem has been well-studied in the literature [21, 22, 28–30]. The entropy regularization in (3a) ensures the uniqueness of the optimal policy ⇡✓ under the fixed reward function r(s, a; ✓) [21, 28]. Even when the underlying MDP is high-dimensional and/or complex, the optimal policy could still be obtained; see recent developments in [21, 22]. We close this section by formally establishing a connection between (MaxEnt-IRL) and (ML-IRL). Theorem 1. (Strong Duality) Suppose that the reward function is given as: r(s, a; ✓) := (s, a)T ✓, for all s 2 S and a 2 A. Then (ML-IRL) is the Lagrangian dual of (MaxEnt-IRL). Furthermore, strong duality holds, that is: L(✓⇤) = H(⇡⇤), where ✓⇤ and ⇡⇤ are the global optimal solutions for problems (ML-IRL) and (MaxEnt-IRL), respectively.
The proof of Theorem 1 is relegated to Appendix G. To our knowledge this result which specifically addresses the (MaxEnt-IRL) formulation is novel. Under finite horizon, a duality between ML estimation and maximum causal entropy is obtained in [18, Theorem 3]. However, the problem considered in that paper is not in RL nor IRL setting, therefore they cannot be directly used in the context of the present paper.
The above duality result reveals a strong connection between the two formulations under linear reward parameterization. Due to the duality result, we know that (ML-IRL) is a concave problem under linear reward parameterization. In this case, any stationary solution to (ML-IRL) is a global optimal estimator of the reward parameter.
4 The Proposed Algorithm
In this section, we design algorithms for (ML-IRL). Recall that one major drawback of algorithms for (MaxEnt-IRL) is that, they repeatedly solve certain policy optimization problem in the inner loop. Even though the recently proposed algorithm IQ-Learn [2] tries to improve the computational efficiency through implicitly representing the reward function and the policy by a Q-function approximator, it has sacrificed the estimation accuracy of the recovered reward. Therefore, one important goal of our design is to find provably efficient algorithms that can avoid high-complexity operations and accurately recover the reward function. Specifically, it is desirable that the resulting algorithm only uses a finite number of reward and policy updates to reach certain high-quality solutions.
To proceed, we will leverage the special bi-level structure of the ML-IRL problem. The idea is to alternate between one step of policy update to improve the solution of the lower-level problem, and
Algorithm 1 Maximum Likelihood Inverse Reinforcement Learning (ML-IRL) Input: Initialize reward parameter ✓0 and policy ⇡0. Set the reward parameter’s stepsize as ↵. for k = 0, 1, . . . ,K 1 do
Policy Evaluation: Compute Qsoftr✓k ,⇡k(·, ·) under reward function r(·, ·; ✓k) Policy Improvement: ⇡k+1(·|s) / exp(Qsoftr✓k ,⇡k(s, ·)), 8s 2 S . Data Sampling I: Sampling an expert trajectory ⌧Ek := {st, at}t 0 Data Sampling II: Sampling a trajectory ⌧Ak := {st, at}t 0 from the current policy ⇡k+1 Estimating Gradient: gk := h(✓k; ⌧Ek ) h(✓k; ⌧Ak ) where h(✓; ⌧) := P t 0
tr✓r(st, at; ✓) Reward Parameter Update: ✓k+1 := ✓k + ↵gk
end for
one step of the parameter update which improves the upper-level loss function. At each iteration k, given the current policy ⇡k and the reward parameter ✓k, a new policy ⇡k+1 is generated from the policy improvement step, and ✓k+1 is generated by the reward optimization step.
This kind of alternating update is efficient, because there is no need to completely solve the policy optimization subproblem, before updating the reward parameters. It has been used in many other RL related settings as well. For example, the well-known actor-critic (AC) algorithm for policy optimization [31, 32, 23] alternates between one step of policy update, and one step of critic parameter update. Below we present the details of our algorithm at a given iteration k.
Policy Improvement Step. Let us consider optimizing the lower-level problem, when the reward parameter ✓k is held fixed. Towards this end, define the so-called soft Q and soft value functions for a given policy ⇡k and a reward parameter ✓k:
V soft rk,⇡k(s) = E⇡k
1X
t=0
t ✓ r(st, at; ✓k) +H(⇡k(·|st)) ◆ s0 = s
(4a)
Q soft rk,⇡k(s, a) = r(s, a; ✓k) + Es0⇠P(·|s,a)
⇥ V
soft rk,⇡k(s)
⇤ (4b)
We will adopt the well-known soft policy iteration [21] to optimize the lower-level problem (3a). Under the current reward parameter ✓k and the policy ⇡k, the soft policy iteration generates a new policy ⇡k+1 as follows
⇡k+1(a|s) / exp Q soft r✓k ,⇡k (s, a) , 8s 2 S, a 2 A. (5)
Under a fixed reward function, it can be shown that the new policy ⇡k+1 monotonically improves ⇡k, and it converges linearly to the optimal policy; see [21, Theorem 4] and [28, Thoerem 1].
Note that in practice, we usually do not have direct access to the exact soft Q-function in (4b). In order to perform the policy improvement, a few stochastic update steps in soft Q-learning [21] or soft Actor-Critic (SAC) [22] could be used to replace the one-step soft policy iteration (5). In the appendix, we present Alg. 2 to demonstrate such practical implementation of our proposed algorithm.
Reward Optimization Step. We propose to use a stochastic gradient-type algorithm to optimize ✓. Towards this end, let us first derive the exact gradient rL(✓). See Appendix D for detailed proof. Lemma 1. The gradient of the likelihood function L(✓) can be expressed as follows:
rL(✓) = E⌧⇠⇡E X
t 0 tr✓r(st, at; ✓)
E⌧⇠⇡✓
X
t 0 tr✓r(st, at; ✓) . (6)
To obtain stochastic estimators of the exact gradient rL(✓k), we take two approximation steps: 1) approximate the optimal policy ⇡✓k by ⇡k+1 in (5), since the optimal policy ⇡✓k is not available throughout the algorithm; 2) sample one expert trajectory ⌧Ek which is already generated by the expert policy ⇡E; 3) sample one trajectory ⌧Ak from the current policy ⇡k+1.
Following the approximation steps mentioned above, we construct a stochastic estimator gk to approximate the exact gradient rL(✓k) in (6) as follows:
gk := h(✓k; ⌧ E k ) h(✓k; ⌧Ak ), where h(✓; ⌧) :=
X t 0 tr✓r(st, at; ✓). (7)
With the stochastic gradient estimator gk, the reward parameter ✓k is updated as:
✓k+1 = ✓k + ↵gk. (8)
where ↵ is the stepsize in updating the reward parameter.
In summary, the proposed algorithm for solving the ML-IRL problem (ML-IRL) is given in Alg. 1.
5 Theoretical Analysis
In this section, we present finite-time guarantees for the proposed algorithm.
To begin with, first recall that in Sec. 3, we have mentioned that (ML-IRL) is a bi-level problem, where the upper level (resp. the lower level) problem optimizes the reward parameter (resp. the policy). In order to solve (ML-IRL), our algorithm 1 has a singe-loop structure, which alternates between one step of policy update and one step of the reward parameter update. Such a single-loop structure indeed has computational benefit, but it also leads to potential unstableness, since the lower level problem can stay far away from its true solutions. Specifically, at each iteration k, the potential unstableness is induced by the distribution mismatch between the policy ⇡k+1 and ⇡✓k , when we use estimator gk (7) to approximate the exact gradient rL(✓k) (6) in updating the reward parameter ✓k. Towards stabilizing the algorithm, we adopt the so-called two-timescale stochastic approximation (TTSA) approach [33, 23], where the lower-level problem updates in a faster time-scale (i.e., converges faster) compared with its upper-level counterpart. Intuitively, the TTSA enables the ⇡k+1 tracks the optimal ⇡✓k , leading to a stable algorithm. In the proposed Algorithm 1, the policy (lower-level variable) is continuously updated by the soft policy iteration (5), and it is ‘fast’ because it converges linearly to the optimal policy under a fixed reward function [28, Theorem 1]. On the other hand, the reward parameter update (8) does not have such linear convergence property, therefore it works in a ‘slow’ timescale. To begin our analysis, let us first present a few technical assumptions. Assumption 1 (Ergodicity). For any policy ⇡, assume the Markov chain with transition kernel P is irreducible and aperiodic under policy ⇡. Then there exist constants > 0 and ⇢ 2 (0, 1) such that
sup s2S
kP(st 2 ·|s0 = s,⇡) µ⇡(·)kTV ⇢t, 8 t 0
where k · kTV is the total variation (TV) norm; µ⇡ is the stationary state distribution under ⇡.
Assumption 1 assumes the Markov chain mixes at a geometric rate. It is a common assumption in the iterature of RL [34, 35, 32], which holds for any time-homogeneous Markov chain with finite-state space or any uniformly ergodic Markov chain with general state space. Assumption 2. For any s 2 S , a 2 A and any reward parameter ✓, the following holds:
r✓r(s, a; ✓) Lr, (9a) r✓r(s, a; ✓1) r✓r(s, a; ✓2) Lgk✓1 ✓2k (9b)
where Lr and Lg are positive constants.
Assumption 2 assumes that the parameterized reward function has bounded gradient and is Lipschitz smooth. Such assumption in Lipschitz property are common in the literature of min-max / bi-level optimization [36, 23, 37, 25, 38].
Based on Assumptions 1 - 2, we next provide the following Lipschitz properties: Lemma 2. Suppose Assumptions 1 - 2 hold. For any reward parameter ✓1 and ✓2, the following results hold:
|Qsoftr✓1 ,⇡✓1 (s, a) Q soft r✓2 ,⇡✓2 (s, a)| Lqk✓1 ✓2k, 8s 2 S, a 2 A (10a)
krL(✓1) rL(✓2)k Lck✓1 ✓2k (10b)
where Q soft r✓,⇡✓ (·, ·) denotes the soft Q-function under the reward function r(·, ·; ✓) and the policy ⇡✓. The positive constants Lq and Lc are defined in Appendix E.
The Lipschitz properties identified in Lemma 2 are vital for the convergence analysis. Then we present the main results, which show the convergence speed of the policy {⇡k}k 0 and the reward parameter {✓k}k 0 in the Alg. 1. Please see Appendix E for the detailed proof.
Theorem 2. Suppose Assumptions 1 - 2 hold. Selecting stepsize ↵ := ↵0K for the reward update step (8) where ↵0 > 0 and 2 (0, 1) are some fixed constants, and K is the total number of iterations to be run by the algorithm. Then the following result holds:
1
K
K 1X
k=0
E ⇥ log ⇡k+1 log ⇡✓k 1 ⇤ = O(K 1) +O(K ) (11a)
1
K
K 1X
k=0
E ⇥ krL(✓k)k2 ⇤ = O(K ) +O(K 1+ ) +O(K 1) (11b)
where we denote k log ⇡k+1 log ⇡✓kk1 := maxs2S,a2A log ⇡k+1(a|s) log ⇡✓k(a|s)
. In particular, setting = 1/2, then both quantities in (11a) and (11b) converge with the rate O(K 1/2).
In Theorem 2, we present the finite-time guarantee for the convergence of the Alg.1. Moreover, as a special case, when the reward is parameterized as a linear function, we know that (ML-IRL) is concave and Theorem 2 provides a stronger guarantee which identify the global optimal reward estimator in finite time.
We provide a proof sketch below to present the key steps. The detailed proof is in Appendix H.
Proof sketch. We outline our main steps in analyzing (11a) and (11b) respectively.
In order to show the convergence of policy estimates in (11a), there are several key steps. First, we note that both policies ⇡k+1 and ⇡✓k are in the softmax parameterization, where ⇡k+1(·|s) / exp Q
soft r✓k ,⇡k (s, ·) and ⇡✓k(·|s) / exp Q soft r✓k ,⇡✓k (s, ·) . Then, we can show a Lipschitz continuity
property between the policy and the soft Q-function:
klog⇡k+1 log⇡✓kk1 2kQsoftr✓k ,⇡k Q soft r✓k ,⇡✓k k1,
where the infinty norm k · k1 is defined over the state-action space S ⇥A. Moreover, by analyzing the contraction property of the soft policy iteration (5), we bound kQsoftr✓k ,⇡k Q soft r✓k ,⇡✓k k1 as:
kQsoftr✓k ,⇡k Q soft r✓k ,⇡✓k k1 kQsoftr✓k 1 ,⇡k 1 Q soft r✓k 1 ,⇡✓k 1 k1 + 2Lqk✓k ✓k 1k.
To ensure that the error term k✓k ✓k 1k is small, we select the stepsize of reward parameters as ↵ := ↵0K , where K is the total number of iterations and > 0. Then, by combining previous two steps, we could further show the convergence rate of the policy estimates in (11a).
To prove the convergence of the reward parameters in (11b), we first leverage the Lipschitz smooth property of L(✓) in (10b). However, one technical challenge in the convergence analysis is how to handle the bias between the gradient estimator gk defined in (7) and the exact gradient rL(✓k). When we construct the gradient estimator gk in (7), we need to sample trajectories from the current policy ⇡k+1 and the expert dataset D. However, according to the expression of rL(✓k) in (6), the trajectories are sampled from the optimal policy ⇡✓k and the dataset D. Hence, there is a distribution mismatch between ⇡k+1 and ⇡✓k . Our key idea is to leverage (11a) to handle this distribution mismatch error, and thus show that the bias between gk and rL(✓k) could be controlled. To the best of our knowledge, Theorem 2 is the first non-asymptotic convergence result for IRL with nonlinear reward parameterization.
6 A Discussion over State-Only Reward
In this section we consider the IRL problems modeled by using rewards that are only a function of the state. A lower dimensional representation of the agent’s preferences (i.e. in terms only of states as opposed to states and actions) is more likely to facilitate counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. This is because the estimation of preferences which are only defined in terms of states is less sensitive to the specific environment dynamics in the expert’s demonstration dataset. Moreover, in application such as healthcare [39] and autonomous driving [40], where simply imitating the expert policy can potentially result in poor performance, since the learner and the expert may have different transition dynamics. Similar points have also been argued in recent works [14, 41–43].
Next, let us briefly discuss how we can understand (ML-IRL) and Alg.1, when the reward is parameterized as a state-only function. First, it turns out that there is an equivalent formulation of (ML-IRL), when the expert trajectories only contain the visited states. Lemma 3. Suppose the expert trajectories ⌧ is sampled from a policy ⇡E, and the reward is parameterized as a state-only function r(s; ✓). Then ML-IRL is equivalent to the following:
min ✓
Es0⇠⌘(·) ⇥ V soft r✓,⇡✓ (s0) ⇤ Es0⇠⌘(·) ⇥ V soft r✓,⇡E (s0) ⇤ (12a)
s.t. ⇡✓ := argmax ⇡
E⇡ 1X
t=0
t ✓ r(st; ✓) +H(⇡(·|st)) ◆ . (12b)
Please see Appendix F for the detailed derivation. Intuitively, the above lemma says that, when dealing with the state-only IRL, (ML-IRL) minimizes the gap between the soft value functions of the optimal policy ⇡✓ and that of the expert policy ⇡E. Moreover, Alg.1 can also be easily implemented with the state-only reward. In fact, the entire algorithm essentially stays the same, and the only change is that r(s, a; ✓) will be replaced by r(s; ✓). In this way, by only using the visited states in the trajectories, one can still compute the stochastic gradient estimator in (7). Therefore, even under the state-only IRL setting where the expert dataset only contains visited states, our formulation and the proposed algorithm still work if we parameterize the reward as a state-only function.
7 Numerical Results
In this section, we test the performance of our algorithm on a diverse collection of RL tasks and environments. In each experiment set, we train algorithms until convergence and average the scores of the trajectories over multiple random seeds. The hyperparameter settings and simulation details are provided in Appendix B.
MuJoCo Tasks For Inverse Reinforcement Learning. In this experiment set, we test the performance of our algorithm on imitating the expert behavior. We consider several high-dimensional robotics control tasks in MuJoCo [44]. Two class of existing algorithms are considered as the comparison baselines: 1) imitation learning algorithms that only learn the policy to imitate the expert, including Behavior Cloning (BC) [45] and Generative Adversarial Imitation Learning (GAIL) [10]; 2) IRL algorithms which learn a reward function and a policy simultaneously, including Adversarial Inverse Reinforcement Learning (AIRL) [11], f -IRL [14] and IQ-Learn [2]. To ensure fair comparison, all imitation learning / IRL algorithms use soft Actor-Critic [22] as the base RL algorithm. For the expert dataset, we use the data provided in the official implementation2 of f -IRL.
In this experiment, we implement two versions of our proposed algorithm: ML-IRL(State-Action), where the reward is parameterized as a function of state and action, and ML-IRL(State-Only), which utilizes the state-only reward function. In Table 1, we present the simulation results under a limited data regime where the expert dataset only contains a single expert trajectory. The scores (cumulative rewards) reported in the table are averaged over 6 random seeds. For each random seed, we train each algorithm from initialization and, after it converges, collect 20 trajectories and average their cumulative rewards. The results reported in Table 1 show that our proposed algorithms outperform the baselines. The numerical results with confidence intervals are in Table 3 (see Appendix).
We observe that BC fails to imitate the expert's behavior. This is due to the fact that BC is based on supervised learning and thus cannot learn a good policy under such a limited data regime. Moreover, we notice that the training of IQ-Learn is unstable, which may be due to its inaccurate approximation of the soft Q-function. Therefore, in the MuJoCo tasks where IQ-Learn does not perform well and we cannot match the results presented in the original paper [2], we directly report the results from there (and mark them by * in Table 1). The results of AIRL are not presented in Table 1 since it performs poorly even after spending significant effort on parameter tuning (similar observations have been made in [46, 14]).
Transfer Learning Across Changing Dynamics. We further evaluate IRL algorithms in the transfer learning setting. We follow the environment setup in [11], where two environments with different dynamics are considered: Custom-Ant vs Disabled-Ant. We compare ML-IRL(State-Only) with several existing IRL methods: 1) AIRL [11]; 2) f -IRL [14]; 3) IQ-Learn [2].
2https://github.com/twni2016/f-IRL
We consider two transfer learning settings: 1) data transfer; 2) reward transfer. For both settings, the expert dataset / trajectories are generated in Custom-Ant. In the data transfer setting, we train IRL agents in Disabled-Ant by using the expert trajectories, which are generated in Custom-Ant. In the reward transfer setting, we first use IRL algorithms to infer the reward functions in Custom-Ant, and then transfer these recovered reward functions to Disabled-Ant for further evaluation. In both settings, we also train SAC with the ground-truth reward in Disabled-Ant and report the scores.
The numerical results are reported in Table 2. The proposed ML-IRL(State-Only) achieves superior performance compared with the existing IRL benchmarks in both settings. We notice that IQ-Learn fails in both settings since it indirectly recovers the reward function from a soft Q-function approximator, which could be inaccurate and is highly dependent upon the environment dynamics. Therefore, the reward function recovered by IQ-Learn cannot be disentangled from the expert actions and environment dynamics, which leads to its failures in the transfer learning tasks.
8 Conclusion
In this paper, we present a maximum likelihood IRL formulation and propose a provably efficient algorithm with a single-loop structure. To our knowledge, we provide the first non-asymptotic analysis for IRL algorithm under nonlinear reward parameterization. As a by-product, when we parameterize the reward as a state-only function, our algorithm could work in state-only IRL setting and enable reward transfer to new environments with different dynamics. Our algorithm outperforms existing IRL methods on high-dimensional robotics control tasks and corresponding transfer learning settings. A limitation of our method is the requirement for online training, so one future direction of this work is to further extend our algorithm and the theoretical analysis to the offline IRL setting.
Potential Negative Social Impacts
Since IRL methods aim to recover the reward function and the associated optimal policy from the observed expert dataset, potential negative social impacts may occur if there are bad demonstrations included in the expert dataset. Thus, for sensitive applications such as autonomous driving and clinical decision support, additional care should be taken to avoid negative biases from the expert demonstrations and ensure safe adaptation.
Acknowledgments
We thank the anonymous reviewers for their valuable comments. M. Hong and S. Zeng are partially supported by NSF grants CIF-1910385, CMMI-1727757, and AFOSR grant 19RT0424. A. Garcia would like to acknowledge partial support from grant FA9550-19-1-00347 by AFOSR. | 1. What is the focus and contribution of the paper on IRL?
2. What are the strengths of the proposed approach, particularly in terms of reducing computational burden?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. Do you have any concerns or questions about the applicability of the proposed method in transfer learning tasks?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes a novel single-loop IRL algorithm to reduce the computational burden of the previous IRL framework. The authors prove that the solution of the proposed framework is equivalent to the maximum entropy IRL framework with a linearly parameterized reward function. They also provide a convergence guarantee of the proposed algorithm. The experiment result seems promising.
Strengths And Weaknesses
Strengths:
The paper is well written.
The problem studied in this paper is important and the proposed method is novel.
The authors provide theoretical guarantees for the proposed framework.
Weaknesses:
There may need more literature review on the MaxEnt-IRL.
Although the ML-IRL framework could reduce the computational complexity, it is not obvious why the proposed ML-IRL framework works for the transfer learning tasks.
Questions
Could the authors discuss the MaxEnt-IRL literature?
Could the authors explain why the ML-IRL framework works for the transfer learning tasks?
Limitations
The authors adequately addressed the limitations and potential negative societal impact of their work. |
NIPS | Title
Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees
Abstract
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy that best fits observed sequences of states and actions implemented by an expert. Many algorithms for IRL have an inherently nested structure: the inner loop finds the optimal policy given parametrized rewards while the outer loop updates the estimates towards optimizing a measure of fit. For high dimensional environments such nested-loop structure entails a significant computational burden. To reduce the computational burden of a nested loop, novel methods such as SQIL [1] and IQ-Learn [2] emphasize policy estimation at the expense of reward estimation accuracy. However, without accurate estimated rewards, it is not possible to do counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. In this paper we develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm provably converges to a stationary solution with a finite-time guarantee. If the reward is parameterized linearly, we show the identified solution corresponds to the solution of the maximum entropy IRL problem. Finally, by using robotics control problems in MuJoCo and their transfer settings, we show that the proposed algorithm achieves superior performance compared with other IRL and imitation learning benchmarks.
1 Introduction
Given observed trajectories of states and actions implemented by an expert, we consider the problem of estimating the reinforcement learning environment in which the expert was trained. This problem is generally referred to as inverse reinforcement learning (IRL) (see [3] for a recent survey). Assuming the environment dynamics are known (or available online), the IRL problem consists of estimating the reward function and the expert’s policy (optimizing such rewards) that best fits the data. While there are limitations on the identifiability of rewards [4], the estimation of rewards based upon expert trajectories enables important counterfactual analysis such as the estimation of optimal policies under different environment dynamics and/or reinforcement learning of new tasks.
In the seminal work [5], the authors developed an IRL formulation, in which the model for the expert’s behavior is the policy that maximizes entropy subject to a constraint requiring that the expected
features under such policy match the empirical averages in the expert’s observation dataset. The algorithms developed for MaxEnt-IRL [5–7] have a nested loop structure, alternating between an outer loop with a reward update step, and an inner loop that calculates the explicit policy estimates. The computational burden of this nested structure is manageable in tabular environments, but it becomes significant in high dimensional settings requiring function approximation.
Towards developing more efficient IRL algorithms, a number of works [8–12] propose to leverage the idea of adversarial training [13]. These algorithms learn a non-stationary reward function through training a discriminator, which is then used to guide the policy to match the behavior trajectories from the expert dataset. However, [14] pointed out that the resulting discriminator (hence the reward function) typically cannot be used in new learning tasks, since it is highly dependent on the corresponding policy and current environment dynamics. Moreover, due to the brittle approximation techniques and sensitive hyperparameter choice in the adversarial training, these IRL algorithms can be unstable. [15, 16].
More recent works [1, 2] have developed algorithms to alleviate the computational burden of the nested-loop training procedures. In [1], the authors propose to model the IRL using certain maximum entropy RL problem with specific reward function (which assigns r = +1 for matching expert demonstrations and r = 0 for all other behaviors). Then a soft Q imitation learning (SQIL) algorithm is developed. In [2], the authors propose to transform the standard formulation of IRL (discussed above) into a single-level problem, through learning a soft Q-function to implicitly represent the reward function and the policy. An inverse soft-Q learning (IQ-Learn) algorithm is then developed, which is shown to be effective in estimating the policy for the environment that it is trained on. Despite being computationally efficient, IQ-Learn sacrifices the accuracy in estimating the rewards since it indirectly recovers rewards from a soft Q-function approximator which is highly dependent upon the environment dynamics and does not strictly satisfy the soft-Bellman equation. Therefore it is not well-suited for counterfactual prediction or transfer learning setting.
Finally, in f -IRL [14] the authors consider an approach for estimating rewards based on the minimization of several measures of divergence with respect to the expert’s state visitation measure. The approach is limited to estimating rewards that only depend on state. Moreover, while the results reported are based upon a single-loop implementation, the paper does not provide a convergence guarantee to support performance. We refer the readers to Appendix A for other related works.
Our Contributions. The goal of this work is to develop an algorithm for IRL which is capable of producing high-quality estimates of both rewards and behavior policies with finite-time guarantees. The major contributions of this work are listed below.
• We consider a formulation of IRL based on maximum likelihood (ML) estimation over optimal (entropy-regularized) policies, and prove that a strong duality relationship with maximum entropy IRL holds if rewards are represented by a linear combination of features. 1 The ML formulation is a bi-level optimization problem, where the upper-level problem maximizes the likelihood function, while the lower-level finds the optimal policy under the current reward parameterization. Such a bi-level structure is not only instrumental to the subsequent algorithm design, but is also flexible to incorporate the use of state-only, as well as the regular reward function (which depends on the state and action pair). The former is suitable for transfer learning since it is insensitive to the changes of the environment dynamics, while the latter can be used to efficiently imitate the expert policy.
• Based on the ML-IRL formulation, we develop an efficient algorithm. To avoid the computational burden of repeatedly solving the lower-level policy optimization problem, the proposed algorithm has a single-loop structure where the policy improvement step and reward optimization step are performed alternatingly so that each step can be performed relatively cheaply. Further, we show that the algorithm has strong theoretical guarantees: to achieve certain ✏-approximate stationary solution for a non-linearly parameterized problem, it requires O(✏ 2) steps of policy and reward updates each. To our knowledge, it is the first algorithm which has finite-time guarantee for the IRL problem under nonlinear parameterization of reward functions.
• We conduct extensive experiments to demonstrate that the proposed algorithm outperforms many state-of-the-art IRL algorithms in both policy estimation and reward recovery. In particular, when
1Heuristic arguments for this duality result are discussed in [5] wherein the distribution of state-action paths is approximated (see equation (4) in [5]) and the equivalence between maximum entropy estimation and maximum likelihood (over the class of exponential distributions) [17] is invoked.
transferring to a new environment, RL algorithms using rewards recovered by the proposed algorithm outperform those that use rewards recovered from existing IRL and imitation learning benchmarks.
2 Preliminaries
In this section, we review the fundamentals of maximum entropy inverse reinforcement learning (MaxEnt-IRL). We consider an MDP defined by the tuple (S, A, P, η, r, γ); S and A denote the state space and the action space respectively; P(s'|s, a) : S × A × S → [0, 1] denotes the transition probability; η(·) denotes the distribution of the initial state; r(s, a) : S × A → R is the reward function and γ is a discount factor.
The MaxEnt-IRL formulation [6, 18–20] consists of finding a policy maximizing entropy subject to the expected features under such policy matching the empirical averages in the expert’s observation dataset. Specifically, the MaxEnt-IRL formulation is given by:
$$\max_{\pi} \;\; H(\pi) := \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} -\gamma^t \log \pi(a_t|s_t) \Big] \qquad \text{(MaxEnt-IRL)}$$
$$\text{s.t.} \quad \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t) \Big] = \mathbb{E}_{\tau \sim \pi_E}\Big[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t) \Big]$$
where τ = {(s_t, a_t)}_{t=0}^∞ denotes a trajectory, φ(s, a) is the feature vector of the state-action pair (s, a) and π_E denotes the expert policy. Let θ denote the dual variable for the linear constraint; then the Lagrangian of (MaxEnt-IRL) is given by
$$L(\pi, \theta) := H(\pi) + \Big\langle \theta, \; \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t) \Big] - \mathbb{E}_{\tau \sim \pi_E}\Big[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t) \Big] \Big\rangle. \qquad (1)$$
In [6, 18, 19], the authors proposed a "dual descent" algorithm, which alternates between i) solving max_π L(π, θ) for fixed θ, and ii) a gradient descent step to optimize the dual variable θ. It is shown that the optimizer π*_θ in step i) can be recursively defined as π*_θ(a_t|s_t) = Z_{a_t|s_t,θ} / Z_{s_t,θ}, where log Z_{a_t|s_t,θ} = φ(s_t, a_t)^T θ + γ E_{s_{t+1}∼P(·|s_t,a_t)}[log Z_{s_{t+1},θ}] and log Z_{s_t,θ} = log ∑_{a∈A} Z_{a|s_t,θ}.
From a computational perspective, the above algorithm is not efficient: it has a nested-loop structure, which repeatedly computes the optimal policy ⇡⇤✓ under each variable ✓. It is known that when the underlying MDP is of high-dimension, such an algorithm can be computationally prohibitive [9, 10].
Recent work [2] proposed an algorithm called IQ-Learn to improve upon the MaxEnt-IRL by considering a saddle-point formulation:
$$\min_{r} \max_{\pi} \;\; \Big\{ H(\pi) + \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} \gamma^t \cdot r(s_t, a_t) \Big] - \mathbb{E}_{\tau \sim \pi_E}\Big[ \sum_{t=0}^{\infty} \gamma^t \cdot r(s_t, a_t) \Big] \Big\} \qquad (2)$$
where r(st, at) is the reward associated with state-action pair (st, at). The authors show that this problem can be transformed into an optimization problem only defined in terms of the soft Q-function, which implicitly represents both reward and policy. IQ-Learn is shown to be effective in imitating the expert behavior while only relying on the estimation of the soft Q-function. However, the implicit reward estimate obtained is not necessarily accurate since its soft Q-function estimate depends on the environment dynamics and does not strictly satisfy the soft-Bellman equation. Hence, it is difficult to transfer the recovered reward function to new environments.
3 Problem Formulation
In this section, we consider a ML formulation of the IRL problem and formalize a duality relationship with maximum entropy-based formulation (MaxEnt-IRL).
Maximum Log-Likelihood IRL (ML-IRL) A model of the expert’s behavior is a randomized policy ⇡✓(·|s), where ⇡✓ is a specific policy corresponding to the reward parameter ✓. With the state dynamics P(st+1|st, at), the discounted
log-likelihood of observing the expert trajectory τ under model π_θ can be written as follows:
$$\mathbb{E}_{\tau \sim \pi_E}\Big[ \log \prod_{t \geq 0} \big( P(s_{t+1}|s_t, a_t)\, \pi_\theta(a_t|s_t) \big)^{\gamma^t} \Big] = \mathbb{E}_{\tau \sim \pi_E}\Big[ \sum_{t \geq 0} \gamma^t \log \pi_\theta(a_t|s_t) \Big] + \mathbb{E}_{\tau \sim \pi_E}\Big[ \sum_{t \geq 0} \gamma^t \log P(s_{t+1}|s_t, a_t) \Big].$$
Then we consider the following maximum log-likelihood IRL formulation:
$$\max_{\theta} \;\; L(\theta) := \mathbb{E}_{\tau \sim \pi_E}\Big[ \sum_{t=0}^{\infty} \gamma^t \log \pi_\theta(a_t|s_t) \Big] \qquad \text{(ML-IRL)}$$
$$\text{s.t.} \quad \pi_\theta := \arg\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} \gamma^t \big( r(s_t, a_t; \theta) + H(\pi(\cdot|s_t)) \big) \Big], \qquad (3a)$$
where r(s, a; ✓) is the reward function and H(⇡(·|s)) := P
a2A ⇡(a|s) log ⇡(a|s). We now make some remarks about ML-IRL. First, the problem takes the form of a bi-level optimization problem, where the upper-level problem (ML-IRL) optimizes the reward parameter ✓, while the lower-level problem describes the corresponding policy ⇡✓ as the solution to an entropy-regularized MDP ([21, 22]). In what follows we will leverage recently developed (stochastic) algorithms for bi-level optimization [23–25], that avoid the high complexity resulted from nested loop algorithms. Second, it is reasonable to use the ML function as the loss, because it searches for a reward function which generates a behavior policy that can best fit the expert demonstrations. While the ML function has been considered in [26, 27], they rely on heuristic algorithms with nested-loop computations to solve their IRL formulations, and the theoretical properties are not studied. Finally, the lower-level problem has been well-studied in the literature [21, 22, 28–30]. The entropy regularization in (3a) ensures the uniqueness of the optimal policy ⇡✓ under the fixed reward function r(s, a; ✓) [21, 28]. Even when the underlying MDP is high-dimensional and/or complex, the optimal policy could still be obtained; see recent developments in [21, 22]. We close this section by formally establishing a connection between (MaxEnt-IRL) and (ML-IRL). Theorem 1. (Strong Duality) Suppose that the reward function is given as: r(s, a; ✓) := (s, a)T ✓, for all s 2 S and a 2 A. Then (ML-IRL) is the Lagrangian dual of (MaxEnt-IRL). Furthermore, strong duality holds, that is: L(✓⇤) = H(⇡⇤), where ✓⇤ and ⇡⇤ are the global optimal solutions for problems (ML-IRL) and (MaxEnt-IRL), respectively.
The proof of Theorem 1 is relegated to Appendix G. To our knowledge this result which specifically addresses the (MaxEnt-IRL) formulation is novel. Under finite horizon, a duality between ML estimation and maximum causal entropy is obtained in [18, Theorem 3]. However, the problem considered in that paper is not in RL nor IRL setting, therefore they cannot be directly used in the context of the present paper.
The above duality result reveals a strong connection between the two formulations under linear reward parameterization. Due to the duality result, we know that (ML-IRL) is a concave problem under linear reward parameterization. In this case, any stationary solution to (ML-IRL) is a global optimal estimator of the reward parameter.
4 The Proposed Algorithm
In this section, we design algorithms for (ML-IRL). Recall that one major drawback of algorithms for (MaxEnt-IRL) is that, they repeatedly solve certain policy optimization problem in the inner loop. Even though the recently proposed algorithm IQ-Learn [2] tries to improve the computational efficiency through implicitly representing the reward function and the policy by a Q-function approximator, it has sacrificed the estimation accuracy of the recovered reward. Therefore, one important goal of our design is to find provably efficient algorithms that can avoid high-complexity operations and accurately recover the reward function. Specifically, it is desirable that the resulting algorithm only uses a finite number of reward and policy updates to reach certain high-quality solutions.
To proceed, we will leverage the special bi-level structure of the ML-IRL problem. The idea is to alternate between one step of policy update to improve the solution of the lower-level problem, and
Algorithm 1 Maximum Likelihood Inverse Reinforcement Learning (ML-IRL)
Input: Initialize reward parameter θ_0 and policy π_0. Set the reward parameter's stepsize as α.
for k = 0, 1, ..., K − 1 do
    Policy Evaluation: Compute Q^soft_{r_{θ_k}, π_k}(·, ·) under reward function r(·, ·; θ_k)
    Policy Improvement: π_{k+1}(·|s) ∝ exp(Q^soft_{r_{θ_k}, π_k}(s, ·)), ∀ s ∈ S
    Data Sampling I: Sample an expert trajectory τ^E_k := {s_t, a_t}_{t≥0}
    Data Sampling II: Sample a trajectory τ^A_k := {s_t, a_t}_{t≥0} from the current policy π_{k+1}
    Gradient Estimation: g_k := h(θ_k; τ^E_k) − h(θ_k; τ^A_k), where h(θ; τ) := ∑_{t≥0} γ^t ∇_θ r(s_t, a_t; θ)
    Reward Parameter Update: θ_{k+1} := θ_k + α g_k
end for
one step of the parameter update which improves the upper-level loss function. At each iteration k, given the current policy ⇡k and the reward parameter ✓k, a new policy ⇡k+1 is generated from the policy improvement step, and ✓k+1 is generated by the reward optimization step.
This kind of alternating update is efficient, because there is no need to completely solve the policy optimization subproblem, before updating the reward parameters. It has been used in many other RL related settings as well. For example, the well-known actor-critic (AC) algorithm for policy optimization [31, 32, 23] alternates between one step of policy update, and one step of critic parameter update. Below we present the details of our algorithm at a given iteration k.
Policy Improvement Step. Let us consider optimizing the lower-level problem, when the reward parameter ✓k is held fixed. Towards this end, define the so-called soft Q and soft value functions for a given policy ⇡k and a reward parameter ✓k:
$$V^{\mathrm{soft}}_{r_k, \pi_k}(s) = \mathbb{E}_{\pi_k}\Big[ \sum_{t=0}^{\infty} \gamma^t \big( r(s_t, a_t; \theta_k) + H(\pi_k(\cdot|s_t)) \big) \,\Big|\, s_0 = s \Big] \qquad (4a)$$
$$Q^{\mathrm{soft}}_{r_k, \pi_k}(s, a) = r(s, a; \theta_k) + \gamma\, \mathbb{E}_{s' \sim P(\cdot|s, a)}\big[ V^{\mathrm{soft}}_{r_k, \pi_k}(s') \big] \qquad (4b)$$
We will adopt the well-known soft policy iteration [21] to optimize the lower-level problem (3a). Under the current reward parameter ✓k and the policy ⇡k, the soft policy iteration generates a new policy ⇡k+1 as follows
$$\pi_{k+1}(a|s) \propto \exp\big( Q^{\mathrm{soft}}_{r_{\theta_k}, \pi_k}(s, a) \big), \quad \forall\, s \in S,\; a \in A. \qquad (5)$$
Under a fixed reward function, it can be shown that the new policy π_{k+1} monotonically improves π_k, and it converges linearly to the optimal policy; see [21, Theorem 4] and [28, Theorem 1].
Note that in practice, we usually do not have direct access to the exact soft Q-function in (4b). In order to perform the policy improvement, a few stochastic update steps in soft Q-learning [21] or soft Actor-Critic (SAC) [22] could be used to replace the one-step soft policy iteration (5). In the appendix, we present Alg. 2 to demonstrate such practical implementation of our proposed algorithm.
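For illustration, a tabular sketch of one such policy-improvement step is given below; it assumes a small finite state/action space and a known transition tensor, which is an expository simplification (the paper replaces this step with soft Q-learning or SAC updates in practice).

```python
import numpy as np

def soft_policy_improvement(reward, P, pi, gamma=0.99, n_backups=200):
    """Evaluate Q^soft_{r, pi} via the soft Bellman backups (4a)-(4b), then return
    the improved policy pi_{k+1}(a|s) ∝ exp(Q^soft(s, a)) from Eq. (5)."""
    S, A = reward.shape                    # reward[s, a] = r(s, a; theta_k)
    Q = np.zeros((S, A))                   # P[s, a, s'] = transition probabilities
    for _ in range(n_backups):
        # Soft value of pi: expected Q under pi plus the entropy of pi(.|s).
        V = (pi * (Q - np.log(pi + 1e-12))).sum(axis=1)               # (4a)
        Q = reward + gamma * (P.reshape(S * A, S) @ V).reshape(S, A)  # (4b)
    logits = Q - Q.max(axis=1, keepdims=True)                         # for stability
    new_pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # (5)
    return new_pi, Q
```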
Reward Optimization Step. We propose to use a stochastic gradient-type algorithm to optimize ✓. Towards this end, let us first derive the exact gradient rL(✓). See Appendix D for detailed proof. Lemma 1. The gradient of the likelihood function L(✓) can be expressed as follows:
$$\nabla L(\theta) = \mathbb{E}_{\tau \sim \pi_E}\Big[ \sum_{t \geq 0} \gamma^t \nabla_\theta r(s_t, a_t; \theta) \Big] - \mathbb{E}_{\tau \sim \pi_\theta}\Big[ \sum_{t \geq 0} \gamma^t \nabla_\theta r(s_t, a_t; \theta) \Big]. \qquad (6)$$
To obtain stochastic estimators of the exact gradient ∇L(θ_k), we take three approximation steps: 1) approximate the optimal policy π_{θ_k} by π_{k+1} in (5), since the optimal policy π_{θ_k} is not available throughout the algorithm; 2) sample one expert trajectory τ^E_k which has already been generated by the expert policy π_E; 3) sample one trajectory τ^A_k from the current policy π_{k+1}.
Following the approximation steps mentioned above, we construct a stochastic estimator gk to approximate the exact gradient rL(✓k) in (6) as follows:
$$g_k := h(\theta_k; \tau^E_k) - h(\theta_k; \tau^A_k), \quad \text{where } h(\theta; \tau) := \sum_{t \geq 0} \gamma^t \nabla_\theta r(s_t, a_t; \theta). \qquad (7)$$
With the stochastic gradient estimator gk, the reward parameter ✓k is updated as:
$$\theta_{k+1} = \theta_k + \alpha g_k. \qquad (8)$$
where ↵ is the stepsize in updating the reward parameter.
In summary, the proposed algorithm for solving the ML-IRL problem (ML-IRL) is given in Alg. 1.
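As a rough sketch of how this single loop can be implemented with a differentiable reward model, consider the following; the helpers `policy_improvement` and `sample_trajectory`, and the way the reward network consumes state-action pairs, are hypothetical placeholders rather than the paper's released code.

```python
import torch

def ml_irl(reward_net, policy, env, expert_trajs, K=1000, alpha=1e-3, gamma=0.99):
    """Single-loop ML-IRL (Alg. 1): one policy-improvement step followed by one
    stochastic gradient ascent step on the reward parameters, per iteration."""
    opt = torch.optim.SGD(reward_net.parameters(), lr=alpha)

    def h(traj):
        # h(theta; tau) = sum_t gamma^t r(s_t, a_t; theta), differentiable in theta.
        return sum((gamma ** t) * reward_net(s, a) for t, (s, a) in enumerate(traj))

    for k in range(K):
        # Lower level: improve the policy under r(., .; theta_k), e.g. via a few
        # soft Q-learning / SAC updates (placeholder helper).
        policy = policy_improvement(policy, reward_net, env)

        tau_E = expert_trajs[k % len(expert_trajs)]   # one expert trajectory
        tau_A = sample_trajectory(policy, env)        # one trajectory from pi_{k+1}

        # Upper level: theta_{k+1} = theta_k + alpha * g_k, with g_k from Eq. (7).
        opt.zero_grad()
        loss = -(h(tau_E) - h(tau_A))                 # ascent = SGD on the negative
        loss.backward()
        opt.step()
    return reward_net, policy
```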
5 Theoretical Analysis
In this section, we present finite-time guarantees for the proposed algorithm.
To begin with, first recall that in Sec. 3, we have mentioned that (ML-IRL) is a bi-level problem, where the upper level (resp. the lower level) problem optimizes the reward parameter (resp. the policy). In order to solve (ML-IRL), our algorithm 1 has a singe-loop structure, which alternates between one step of policy update and one step of the reward parameter update. Such a single-loop structure indeed has computational benefit, but it also leads to potential unstableness, since the lower level problem can stay far away from its true solutions. Specifically, at each iteration k, the potential unstableness is induced by the distribution mismatch between the policy ⇡k+1 and ⇡✓k , when we use estimator gk (7) to approximate the exact gradient rL(✓k) (6) in updating the reward parameter ✓k. Towards stabilizing the algorithm, we adopt the so-called two-timescale stochastic approximation (TTSA) approach [33, 23], where the lower-level problem updates in a faster time-scale (i.e., converges faster) compared with its upper-level counterpart. Intuitively, the TTSA enables the ⇡k+1 tracks the optimal ⇡✓k , leading to a stable algorithm. In the proposed Algorithm 1, the policy (lower-level variable) is continuously updated by the soft policy iteration (5), and it is ‘fast’ because it converges linearly to the optimal policy under a fixed reward function [28, Theorem 1]. On the other hand, the reward parameter update (8) does not have such linear convergence property, therefore it works in a ‘slow’ timescale. To begin our analysis, let us first present a few technical assumptions. Assumption 1 (Ergodicity). For any policy ⇡, assume the Markov chain with transition kernel P is irreducible and aperiodic under policy ⇡. Then there exist constants > 0 and ⇢ 2 (0, 1) such that
$$\sup_{s \in S} \; \big\| P(s_t \in \cdot \,|\, s_0 = s, \pi) - \mu_\pi(\cdot) \big\|_{TV} \leq \kappa \rho^t, \quad \forall\, t \geq 0,$$
where k · kTV is the total variation (TV) norm; µ⇡ is the stationary state distribution under ⇡.
Assumption 1 assumes the Markov chain mixes at a geometric rate. It is a common assumption in the literature on RL [34, 35, 32], which holds for any time-homogeneous Markov chain with finite state space or any uniformly ergodic Markov chain with general state space. Assumption 2. For any s ∈ S, a ∈ A and any reward parameter θ, the following holds:
$$\big\| \nabla_\theta r(s, a; \theta) \big\| \leq L_r, \qquad (9a)$$
$$\big\| \nabla_\theta r(s, a; \theta_1) - \nabla_\theta r(s, a; \theta_2) \big\| \leq L_g \|\theta_1 - \theta_2\|, \qquad (9b)$$
where Lr and Lg are positive constants.
Assumption 2 assumes that the parameterized reward function has bounded gradient and is Lipschitz smooth. Such Lipschitz-type assumptions are common in the literature on min-max / bi-level optimization [36, 23, 37, 25, 38].
Based on Assumptions 1 - 2, we next provide the following Lipschitz properties: Lemma 2. Suppose Assumptions 1 - 2 hold. For any reward parameter ✓1 and ✓2, the following results hold:
$$\big| Q^{\mathrm{soft}}_{r_{\theta_1}, \pi_{\theta_1}}(s, a) - Q^{\mathrm{soft}}_{r_{\theta_2}, \pi_{\theta_2}}(s, a) \big| \leq L_q \|\theta_1 - \theta_2\|, \quad \forall\, s \in S, a \in A \qquad (10a)$$
$$\big\| \nabla L(\theta_1) - \nabla L(\theta_2) \big\| \leq L_c \|\theta_1 - \theta_2\| \qquad (10b)$$
where Q soft r✓,⇡✓ (·, ·) denotes the soft Q-function under the reward function r(·, ·; ✓) and the policy ⇡✓. The positive constants Lq and Lc are defined in Appendix E.
The Lipschitz properties identified in Lemma 2 are vital for the convergence analysis. Then we present the main results, which show the convergence speed of the policy {⇡k}k 0 and the reward parameter {✓k}k 0 in the Alg. 1. Please see Appendix E for the detailed proof.
Theorem 2. Suppose Assumptions 1 - 2 hold. Select the stepsize α := α_0 K^{-σ} for the reward update step (8), where α_0 > 0 and σ ∈ (0, 1) are fixed constants, and K is the total number of iterations to be run by the algorithm. Then the following result holds:
$$\frac{1}{K} \sum_{k=0}^{K-1} \mathbb{E}\big[ \| \log \pi_{k+1} - \log \pi_{\theta_k} \|_\infty \big] = \mathcal{O}(K^{-1}) + \mathcal{O}(K^{-\sigma}) \qquad (11a)$$
$$\frac{1}{K} \sum_{k=0}^{K-1} \mathbb{E}\big[ \| \nabla L(\theta_k) \|^2 \big] = \mathcal{O}(K^{-\sigma}) + \mathcal{O}(K^{-1+\sigma}) + \mathcal{O}(K^{-1}) \qquad (11b)$$
where we denote $\| \log \pi_{k+1} - \log \pi_{\theta_k} \|_\infty := \max_{s \in S, a \in A} | \log \pi_{k+1}(a|s) - \log \pi_{\theta_k}(a|s) |$. In particular, setting σ = 1/2, both quantities in (11a) and (11b) converge with the rate O(K^{-1/2}).
In Theorem 2, we present the finite-time guarantee for the convergence of Alg. 1. Moreover, as a special case, when the reward is parameterized as a linear function, we know that (ML-IRL) is concave and Theorem 2 provides a stronger guarantee which identifies the global optimal reward estimator in finite time.
We provide a proof sketch below to present the key steps. The detailed proof is in Appendix H.
Proof sketch. We outline our main steps in analyzing (11a) and (11b) respectively.
In order to show the convergence of the policy estimates in (11a), there are several key steps. First, we note that both policies π_{k+1} and π_{θ_k} are in the softmax parameterization, where π_{k+1}(·|s) ∝ exp(Q^soft_{r_{θ_k}, π_k}(s, ·)) and π_{θ_k}(·|s) ∝ exp(Q^soft_{r_{θ_k}, π_{θ_k}}(s, ·)). Then, we can show a Lipschitz continuity property between the policy and the soft Q-function:
$$\| \log \pi_{k+1} - \log \pi_{\theta_k} \|_\infty \leq 2 \, \| Q^{\mathrm{soft}}_{r_{\theta_k}, \pi_k} - Q^{\mathrm{soft}}_{r_{\theta_k}, \pi_{\theta_k}} \|_\infty,$$
where the infinity norm ‖·‖_∞ is defined over the state-action space S × A. Moreover, by analyzing the contraction property of the soft policy iteration (5), we bound ‖Q^soft_{r_{θ_k}, π_k} − Q^soft_{r_{θ_k}, π_{θ_k}}‖_∞ as:
$$\| Q^{\mathrm{soft}}_{r_{\theta_k}, \pi_k} - Q^{\mathrm{soft}}_{r_{\theta_k}, \pi_{\theta_k}} \|_\infty \leq \gamma \, \| Q^{\mathrm{soft}}_{r_{\theta_{k-1}}, \pi_{k-1}} - Q^{\mathrm{soft}}_{r_{\theta_{k-1}}, \pi_{\theta_{k-1}}} \|_\infty + 2 L_q \|\theta_k - \theta_{k-1}\|.$$
To ensure that the error term ‖θ_k − θ_{k−1}‖ is small, we select the stepsize of the reward parameters as α := α_0 K^{-σ}, where K is the total number of iterations and σ > 0. Then, by combining the previous two steps, we can show the convergence rate of the policy estimates in (11a).
To prove the convergence of the reward parameters in (11b), we first leverage the Lipschitz smooth property of L(✓) in (10b). However, one technical challenge in the convergence analysis is how to handle the bias between the gradient estimator gk defined in (7) and the exact gradient rL(✓k). When we construct the gradient estimator gk in (7), we need to sample trajectories from the current policy ⇡k+1 and the expert dataset D. However, according to the expression of rL(✓k) in (6), the trajectories are sampled from the optimal policy ⇡✓k and the dataset D. Hence, there is a distribution mismatch between ⇡k+1 and ⇡✓k . Our key idea is to leverage (11a) to handle this distribution mismatch error, and thus show that the bias between gk and rL(✓k) could be controlled. To the best of our knowledge, Theorem 2 is the first non-asymptotic convergence result for IRL with nonlinear reward parameterization.
6 A Discussion over State-Only Reward
In this section we consider IRL problems in which the reward is modeled as a function of the state only. A lower dimensional representation of the agent's preferences (i.e., in terms only of states as opposed to states and actions) is more likely to facilitate counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. This is because the estimation of preferences which are only defined in terms of states is less sensitive to the specific environment dynamics in the expert's demonstration dataset. Moreover, this is important in applications such as healthcare [39] and autonomous driving [40], where simply imitating the expert policy can result in poor performance, since the learner and the expert may have different transition dynamics. Similar points have also been argued in recent works [14, 41–43].
Next, let us briefly discuss how we can understand (ML-IRL) and Alg. 1 when the reward is parameterized as a state-only function. First, it turns out that there is an equivalent formulation of (ML-IRL) when the expert trajectories only contain the visited states. Lemma 3. Suppose the expert trajectory τ is sampled from a policy π_E, and the reward is parameterized as a state-only function r(s; θ). Then (ML-IRL) is equivalent to the following:
$$\min_{\theta} \;\; \mathbb{E}_{s_0 \sim \eta(\cdot)}\big[ V^{\mathrm{soft}}_{r_\theta, \pi_\theta}(s_0) \big] - \mathbb{E}_{s_0 \sim \eta(\cdot)}\big[ V^{\mathrm{soft}}_{r_\theta, \pi_E}(s_0) \big] \qquad (12a)$$
$$\text{s.t.} \quad \pi_\theta := \arg\max_{\pi} \; \mathbb{E}_{\pi}\Big[ \sum_{t=0}^{\infty} \gamma^t \big( r(s_t; \theta) + H(\pi(\cdot|s_t)) \big) \Big]. \qquad (12b)$$
Please see Appendix F for the detailed derivation. Intuitively, the above lemma says that, when dealing with the state-only IRL, (ML-IRL) minimizes the gap between the soft value functions of the optimal policy ⇡✓ and that of the expert policy ⇡E. Moreover, Alg.1 can also be easily implemented with the state-only reward. In fact, the entire algorithm essentially stays the same, and the only change is that r(s, a; ✓) will be replaced by r(s; ✓). In this way, by only using the visited states in the trajectories, one can still compute the stochastic gradient estimator in (7). Therefore, even under the state-only IRL setting where the expert dataset only contains visited states, our formulation and the proposed algorithm still work if we parameterize the reward as a state-only function.
7 Numerical Results
In this section, we test the performance of our algorithm on a diverse collection of RL tasks and environments. In each experiment set, we train algorithms until convergence and average the scores of the trajectories over multiple random seeds. The hyperparameter settings and simulation details are provided in Appendix B.
MuJoCo Tasks For Inverse Reinforcement Learning. In this experiment set, we test the performance of our algorithm on imitating the expert behavior. We consider several high-dimensional robotics control tasks in MuJoCo [44]. Two classes of existing algorithms are considered as comparison baselines: 1) imitation learning algorithms that only learn a policy to imitate the expert, including Behavior Cloning (BC) [45] and Generative Adversarial Imitation Learning (GAIL) [10]; 2) IRL algorithms which learn a reward function and a policy simultaneously, including Adversarial Inverse Reinforcement Learning (AIRL) [11], f -IRL [14] and IQ-Learn [2]. To ensure a fair comparison, all imitation learning / IRL algorithms use soft Actor-Critic [22] as the base RL algorithm. For the expert dataset, we use the data provided in the official implementation2 of f -IRL.
In this experiment, we implement two versions of our proposed algorithm: ML-IRL(State-Action), where the reward is parameterized as a function of state and action, and ML-IRL(State-Only), which utilizes the state-only reward function. In Table 1, we present the simulation results under a limited data regime where the expert dataset only contains a single expert trajectory. The scores (cumulative rewards) reported in the table are averaged over 6 random seeds. For each random seed, we train each algorithm from initialization and, after it converges, collect 20 trajectories and average their cumulative rewards. The results reported in Table 1 show that our proposed algorithms outperform the baselines. The numerical results with confidence intervals are in Table 3 (see Appendix).
We observe that BC fails to imitate the expert's behavior. This is due to the fact that BC is based on supervised learning and thus cannot learn a good policy under such a limited data regime. Moreover, we notice that the training of IQ-Learn is unstable, which may be due to its inaccurate approximation of the soft Q-function. Therefore, in the MuJoCo tasks where IQ-Learn does not perform well and we cannot match the results presented in the original paper [2], we directly report the results from there (and mark them by * in Table 1). The results of AIRL are not presented in Table 1 since it performs poorly even after spending significant effort on parameter tuning (similar observations have been made in [46, 14]).
Transfer Learning Across Changing Dynamics. We further evaluate IRL algorithms in the transfer learning setting. We follow the environment setup in [11], where two environments with different dynamics are considered: Custom-Ant vs Disabled-Ant. We compare ML-IRL(State-Only) with several existing IRL methods: 1) AIRL [11]; 2) f -IRL [14]; 3) IQ-Learn [2].
2https://github.com/twni2016/f-IRL
We consider two transfer learning settings: 1) data transfer; 2) reward transfer. For both settings, the expert dataset / trajectories are generated in Custom-Ant. In the data transfer setting, we train IRL agents in Disabled-Ant by using the expert trajectories, which are generated in Custom-Ant. In the reward transfer setting, we first use IRL algorithms to infer the reward functions in Custom-Ant, and then transfer these recovered reward functions to Disabled-Ant for further evaluation. In both settings, we also train SAC with the ground-truth reward in Disabled-Ant and report the scores.
The numerical results are reported in Table 2. The proposed ML-IRL(State-Only) achieves superior performance compared with the existing IRL benchmarks in both settings. We notice that IQ-Learn fails in both settings since it indirectly recovers the reward function from a soft Q-function approximator, which could be inaccurate and is highly dependent upon the environment dynamics. Therefore, the reward function recovered by IQ-Learn cannot be disentangled from the expert actions and environment dynamics, which leads to its failures in the transfer learning tasks.
8 Conclusion
In this paper, we present a maximum likelihood IRL formulation and propose a provably efficient algorithm with a single-loop structure. To our knowledge, we provide the first non-asymptotic analysis for IRL algorithm under nonlinear reward parameterization. As a by-product, when we parameterize the reward as a state-only function, our algorithm could work in state-only IRL setting and enable reward transfer to new environments with different dynamics. Our algorithm outperforms existing IRL methods on high-dimensional robotics control tasks and corresponding transfer learning settings. A limitation of our method is the requirement for online training, so one future direction of this work is to further extend our algorithm and the theoretical analysis to the offline IRL setting.
Potential Negative Social Impacts
Since IRL methods aim to recover the reward function and the associated optimal policy from the observed expert dataset, potential negative social impacts may occur if there are bad demonstrations included in the expert dataset. Thus, for sensitive applications such as autonomous driving and clinical decision support, additional care should be taken to avoid negative biases from the expert demonstrations and ensure safe adaptation.
Acknowledgments
We thank the anonymous reviewers for their valuable comments. M. Hong and S. Zeng are partially supported by NSF grants CIF-1910385, CMMI-1727757, and AFOSR grant 19RT0424. A. Garcia would like to acknowledge partial support from grant FA9550-19-1-00347 by AFOSR. | 1. What is the main contribution of the paper regarding IRL?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any questions or concerns about the new formulation of IRL?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any limitations or potential drawbacks of the proposed method that the authors should consider? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
In this paper, the authors proposed a new formulation of IRL based on maximum likelihood, which is equivalent to MaxEnt when the reward function is linear. By leveraging this formulation, the authors propose a new computationally efficient gradient-based iterative algorithm that does not require solving MDP in each iteration. The authors further prove a non-asymptotically rate of how fast the algorithm converges to a stationary point, which is the first non-asymptotic convergence result for ILR with nonlinear reward parameterization. Finally, the authors conduct extensive experiments that show that the proposed algorithm outperforms several state-of-the-art IRL algorithms.
Strengths And Weaknesses
Strength:
The authors claim that Thm 2 is the first non-asymptotic convergence result for ILR with nonlinear reward parameterization. I believe this is true as a recent paper [1] from ICML 2021 proves a non-asymptotic rate for a gradient-based method with linear reward parameterization (although that paper’s proof does not seem to have the assumption the value function can be accurately estimated).
Weakness:
I think the contribution above is important to the IRL community. However, other contributions of this paper may not be as significant as this one.
The new formulation of IRL. This maximum likelihood formulation is known [2]. The proof seems to be similar to Thm 3 in [3].
The algorithm. The authors mentioned that the algorithm enjoys computational efficiency and is capable of reward transferring. However, GAIL-type of algorithms that use state-dependent rewards in the discriminator [4] and scalable MaxEnt IRL algorithms [5] are able to achieve those benefits as well.
The experiments. I like the experiments on reward transferring. However, the main goal of proposing the new algorithm and theory is to reduce computation burden, but the experiments focus on the quality of the learned policy / reward after convergence. It’s unclear from the experiments whether the new algorithm is indeed faster. For example, an interesting experiment would be a comparison with MaxEnt IRL as in theory the proposed method is able to converge to a stationary point that’s the same as MaxEnt IRL, but faster.
Furthermore, the clarity of the paper can be improved. Line 118 to 135 is rather confusing and only until the second reading I was able to understand the math. In Line 119,\pi_\theta is parameterized by \theta. But in Line 123, \theta is used to parameterize the reward function.
[1] Kamoutsi, Angeliki, Goran Banjac, and John Lygeros. "Efficient Performance Bounds for Primal-Dual Reinforcement Learning from Demonstrations." International Conference on Machine Learning. PMLR, 2021.
[2] Jain, Vinamra, Prashant Doshi, and Bikramjit Banerjee. "Model-free IRL using maximum likelihood estimation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.
[3] Ziebart, Brian D., J. Andrew Bagnell, and Anind K. Dey. "The principle of maximum causal entropy for estimating interacting processes." IEEE Transactions on Information Theory 59.4 (2013): 1966-1980.
[4] Torabi, Faraz, Garrett Warnell, and Peter Stone. "Generative adversarial imitation from observation." arXiv preprint arXiv:1807.06158 (2018).
[5] Finn, Chelsea, Sergey Levine, and Pieter Abbeel. "Guided cost learning: Deep inverse optimal control via policy optimization." International conference on machine learning. PMLR, 2016.
Questions
In the algorithm, it’s assumed that the soft-Q function can be accurately estimated. Can the approximation in soft-Q function be considered in the theorem as well?
Interestingly, the gradient in Eq.(6) is the same as the gradient to r with fixed pi in Eq. (2) (MaxEnt). And the inner optimization in Eq. (2) is finding the optimal policy given r which is the same as solving Eq. (3a). Therefore, the proposed algorithm can be considered as solving Eq. (2) directly without the ML-IRL formulation. Would the authors provide some explanation on why formulating IRL in ML-IRL is fundamental to the convergence proof?
In the text the authors try to distinguish between MaxEnt IRL and adversarial IRL (e.g. Line 30 - 44). I wonder why that's the case as these two have been shown to be closely related [6, 7].
[6] Finn, Chelsea, et al. "A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models." arXiv preprint arXiv:1611.03852 (2016).
[7] Swamy, Gokul, et al. "Of moments and matching: A game-theoretic framework for closing the imitation gap." International Conference on Machine Learning. PMLR, 2021.
Limitations
Yes |
NIPS | Title
Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve
Abstract
We find a surprising connection between multitask learning and robustness to neuron failures. Our experiments show that bilingual language models retain higher performance under various neuron perturbations, such as random deletions, magnitude pruning and weight noise compared to equivalent monolingual ones. We provide a theoretical justification of this robustness by mathematically analyzing linear representation learning and showing that multitasking creates more robust representations. Our analysis connects robustness to spectral properties of the learned representation and proves that multitasking leads to higher robustness for diverse task vectors. We open-source our code and models in the following URL: https://github.com/giannisdaras/multilingual robustness.
1 Introduction
Converging evidence from cognitive science research indicates that bilingualism increases brain robustness by reducing the rate of cognitive decline due to aging [1, 2] and delaying the onset of symptoms of dementia [3, 4]. It appears that individuals who speak more than one language on a regular basis are able to maintain typical cognitive functioning despite neural degeneration. This mismatch between cognitive functioning and brain pathology is called Cognitive Reserve [5], and its underlying mechanisms are poorly understood and are an active topic of investigation.
Inspired by this research, we study whether artificial neural networks are more robust when trained on multiple languages or multiple tasks. Our experiments demonstrate that training on multiple tasks indeed increases structural robustness. We train monolingual and bilingual GPT-2 models with the same architecture and dataset sizes. Initially, monolingual GPT-2 [6] models are slightly outperforming the bilingual ones, but when we introduce structural noise (by randomly deleting neurons or adding noise to the weights) bilingual models degrade more gracefully and eventually outperform the monolingual models in the high-noise regime. For some amount of noise, bilingual models start outperforming the monolingual ones demonstrating a cross-over in performance due to their increased robustness. We observe this phenomenon for numerous models across three different types of corruption: additive Gaussian noise to the weights, random weight pruning and magnitude-based weight pruning [7].
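As a rough sketch of these three corruption types (the exact experimental protocol, layers touched, and corruption fractions here are illustrative assumptions, not the paper's released configuration):

```python
import torch

@torch.no_grad()
def corrupt_(model, mode="noise", level=0.1):
    """Structurally perturb a model's weights in place.

    mode="noise":     add Gaussian noise with std = level * weight std
    mode="random":    randomly zero out a `level` fraction of weights
    mode="magnitude": zero out the `level` fraction of smallest-magnitude weights
    """
    for p in model.parameters():
        if p.dim() < 2:          # skip biases / layernorm scales in this sketch
            continue
        if mode == "noise":
            p.add_(torch.randn_like(p) * level * p.std())
        elif mode == "random":
            mask = torch.rand_like(p) > level
            p.mul_(mask)
        elif mode == "magnitude":
            k = int(level * p.numel())
            if k > 0:
                thresh = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > thresh).to(p.dtype))
    return model
```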
Our Contributions: We provide a theoretical justification of this phenomenon by mathematically analyzing linear multitask representation learning [8, 9]. Our analysis shows that introducing more
diverse tasks creates ℓ2 regularization in the linear task heads. Further, we formally connect the Euclidean norm of the learned representations to structural robustness under errors in the network weights. Our main theorem establishes that multitasking leads to higher robustness to additive noise for linear representations when the task vectors are selected as random and independent Gaussian vectors. Our results also establish that when the tasks are significantly overlapping, multitasking does not lead to higher robustness and hence task diversity is necessary.
We experimentally observe that multitasking increases structural robustness for numerous networks and multiple problems including MNIST, CIFAR10, Newsgroup20, GPT models and finetuned GPT models on GLUE tasks. We train networks under exactly comparable dataset and architecture conditions and show that models become more robust to structural failures as they are trained with more tasks. We experiment with three different types of structural failures and show robustness increases for all of them. We also experimentally observe that the addition of diverse tasks seems to regularize the model weights, as we predict in our theoretical analysis.
2 Theoretical Analysis
Building intuition. We start with a small numerical example to build intuition. Given a feature vector x ∈ R^d we compute a k-dimensional linear representation Wx using a matrix W ∈ R^{k×d}. We choose W such that we best approximate a set of ground truth task vectors, {c_1, c_2, ..., c_T}, that lie in R^d. The learned approximation is ĉ_i = W^T γ_i. Essentially, we use linear combinations of the columns of W^T to approximate the task vectors. For simplicity, we assume that the columns of W^T are unit norm. We study the case where k < T, otherwise there are infinite solutions. Assume we work in d = 3 dimensions with T = 3 total tasks, c_1 = [1, 0, 0]^T, c_2 = [0, 1, 0]^T, c_3 = [0, 0, 1]^T. Set our learned representation dimension to be k = 1. When T = 2, using only the first two tasks c_1, c_2, an optimal solution is W = (1/√2)[1, 1, 0]. The corresponding linear head is now the scalar γ_1 = 1/√2 = γ_2 and the approximate vectors are ĉ_1 = W^T γ_1 = [0.5, 0.5, 0]^T = ĉ_2. Therefore the best one-dimensional subspace to jointly approximate c_1, c_2 is the span of W = (1/√2)[1, 1, 0]. Now we introduce one more task and find the one-dimensional subspace that best approximates c_1, c_2, c_3. That becomes W' = (1/√3)[1, 1, 1] with linear heads γ'_1 = 1/√3 = γ'_2 = γ'_3. The approximate vectors now are ĉ'_1 = (W')^T γ'_1 = [1/3, 1/3, 1/3]^T = ĉ'_2 = ĉ'_3. Notice that ||ĉ'_i||^2 = 1/3 for three tasks but ||ĉ_i||^2 = 1/2 for two tasks. The point is that for more tasks, the vector that jointly approximates all task vectors becomes shorter. Equivalently, the ℓ2 norm of the linear task heads decreases from γ_i = 1/√2 to γ'_i = 1/√3 as the number of tasks increases from two to three, showing how multitasking creates regularization. A graphical representation of this example is given in Figure 2. It is important that the task vectors c_i are orthogonal, increasing the effective dimensionality of the problem. The intuition is that diverse tasks increase the effective dimension, making the best approximation vector shorter.
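This calculation can be checked numerically with a short sketch (assuming numpy):

```python
import numpy as np

# Columns are the orthogonal task vectors from the example (d = 3, k = 1).
C2 = np.array([[1., 0.], [0., 1.], [0., 0.]])   # two tasks
C3 = np.eye(3)                                  # three tasks

for C in (C2, C3):
    s = np.linalg.svd(C, compute_uv=False)
    # Squared norm of the joint rank-1 fit, averaged over tasks: 1/2 then 1/3.
    print((s[:1] ** 2).sum() / C.shape[1])
```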
Our main theoretical result is that this phenomenon is quite general and makes multitasking lead to structural robustness. We connect the norm of the approximated task vectors with robustness to weight perturbations and show that for Gaussian, independent task vectors the average norm shrinks as more tasks are added. This is intuitive since high dimensional Gaussian vectors are near-orthogonal. Surprisingly, we empirically show that task vectors for numerous problems also exhibit this behavior.
Analysis. We consider a neural network f_θ : R^d → R^k and a collection of tasks {T_1, ..., T_T}. We are trying to learn θ, γ_i ∈ R^k to solve the following optimization problem:
$$\arg\min_{\theta, \{\gamma_1, ..., \gamma_T\}} \; \sum_{i=1}^{T} \mathbb{E}_{(x, y) \in \mathcal{T}_i} \, L\big( \gamma_i^T f_\theta(x), y \big). \qquad (1)$$
The neural network f_θ can be as simple as a single matrix W : R^d → R^k. For linear networks, we consider the following dataset generation process: for task T_i, we sample a Gaussian x and we generate its label y by taking the inner-product with a task vector c_i, i.e. y = c_i^T x for task T_i. Given infinite samples and MSE loss, the optimization problem of (1) is equivalent to the following problem. Definition 2.1 (Optimization Problem). Let k < T < d. We define the Factorized Best Rank-k approximation of a matrix C ∈ R^{d×T} as the optimization problem:
$$W^*, \Gamma^* = \arg\min_{W \in \mathbb{R}^{k \times d}, \, \Gamma \in \mathbb{R}^{k \times T}} \; \big\| W^T \Gamma - C \big\|_F^2. \qquad (2)$$
We are interested in the case when the dimensionality of the representation k is smaller than the number of tasks T , otherwise the best Rank-k approximation of C is not unique.
The following Proposition states that in the considered setting, Problem (2) can be solved with SVD. Proposition 2.2. For any matrix C ∈ R^{d×T} with distinct singular values, any solution of Problem 2.1 satisfies:
$$(W^*)^T \Gamma^* = U \Sigma_k V^T, \qquad (3)$$
where UΣV^T is the SVD of C and Σ_k is the same as Σ except that the last T − k diagonal entries are zeroed out.
The fact that the Singular Value decomposition computes the best rank-k approximation to a matrix can be found in several textbooks e.g. Golub and Van Loan [10], Blum et al. [11].
This proposition establishes that W^* = U^T and Γ^* = Σ_k V^T is a valid solution of (2). Onwards, we will be calling this the SVD Solution. Definition 2.3. We define the SVD solution of (2) to be:
$$W_{\mathrm{SVD}} = U^T, \quad \Gamma_{\mathrm{SVD}} = \Sigma_k V^T. \qquad (4)$$
We note that if any multitask learning algorithm is used to obtain W˚,Γ˚, one can run Gram-Schmidt to make W˚ orthonormal and hence obtain the factorization we use. It is important that W stays normalized and all scaling is pushed to Γ since to measure robustness to weight shifts, we are going to add noise to W only, and higher W scaling is equivalent to lower effective noise.
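A direct sketch of this SVD solution (assuming numpy, with the task vectors stored as the columns of `C`):

```python
import numpy as np

def svd_solution(C, k):
    """SVD solution of (2): W_SVD = U^T (orthonormal rows), Gamma_SVD = Sigma_k V^T."""
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    W = U[:, :k].T                     # (k, d) shared representation matrix
    Gamma = np.diag(S[:k]) @ Vt[:k]    # (k, T) linear task heads carrying all scaling
    return W, Gamma
```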
We study how the performance is affected when the representation network, fθ, is corrupted. Definition 2.4. For any sample x, the Mean Squared Error (MSE) for task i is defined to be the expected error between the model prediction under noise and the true value y. Namely,
$$\mathrm{MSE}_i = \mathbb{E}_{\theta_c}\big[ ( \gamma_i^T f_{\theta_c}(x) - y )^2 \big], \qquad (5)$$
where f_{θ_c} is the model that emerges after corrupting f_θ.
This measures how well the model approximates the ground truth under the presence of noise and under the constraint of a joint representation for multiple tasks.
The simplest corruption process to study is adding noise to the representation matrix, i.e.
$$W_c = W + N, \quad N_{ij} \sim \mathcal{N}(0, \sigma^2) \;\; \text{i.i.d.} \qquad (6)$$
Then, we denote the mean squared error for the task i with $\mathrm{MSE}_{i,\sigma^2}$ and the average mean squared error across the T tasks with $\overline{\mathrm{MSE}}_{T,\sigma^2}$. We are now ready to introduce our results. Theorem 2.5 (Mean Squared Error for Additive Noise). Let C ∈ R^{d×T} be a matrix with distinct singular values σ_1 > σ_2 > ... > σ_T. Let W, Γ be the SVD solution of (2). Under the Additive Noise Model defined in (6), we have that:
$$\overline{\mathrm{MSE}}_{T,\sigma^2} = \overline{\mathrm{MSE}}_{T,0} + \frac{\sum_{i=1}^{k} \sigma_i(C)^2}{T} \cdot \sigma^2. \qquad (7)$$
As shown, the noisy MSE decomposes into the sum of the noiseless MSE plus the noise variance times a function that depends on the number of tasks:
$$R(T) = \frac{\sum_{i=1}^{k} \sigma_i(C)^2}{T}. \qquad (8)$$
It is important to emphasize that as more tasks are added, the matrix C changes, but the interlacing theorem allows us to connect the singular values of smaller submatrices, as discussed in the Appendix. RpT q is the robustness slope: if a model with T tasks has smaller slope, it will eventually outperform a model with, say T ´ 1 tasks and larger slope, for sufficiently large noise. This is true even if the noiseless performance for the T ´ 1-task model is better, indicating a cross-over in MSE. Therefore the key is understanding when the sum of the top k singular values of C scales sublinearly in T . This is not true for tasks that are aligned, but we can show it holds for independent Gaussian task vectors. We believe it holds for more general families of diverse task vectors and our experiments verify it also holds for numerous real task vectors learned from text and vision datasets.
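A quick sketch of how this slope can be estimated from a task matrix, and of its decay when the task vectors are independent Gaussians (assuming numpy; the dimensions are illustrative):

```python
import numpy as np

def robustness_slope(C, k):
    """R(T) from Eq. (8): sum of the top-k squared singular values of C over T."""
    s = np.linalg.svd(C, compute_uv=False)
    return (s[:k] ** 2).sum() / C.shape[1]

rng = np.random.default_rng(0)
d, k = 1000, 8
C = rng.normal(scale=1 / np.sqrt(d), size=(d, 32))   # i.i.d. Gaussian task vectors
for T in (8, 16, 32):
    print(T, robustness_slope(C[:, :T], k))          # slope decreases as tasks are added
```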
Connection with l2 regularization. For the SVD solution (see Definition 4), the sum of the top-k singular values squared is the squared Frobenius norm of Γ. Indeed, we have that ||ΓSVD||2F “ ||ΣkV T ||2F . Since Σk is a diagonal matrix, each row of ΣkV T is a rescaling of the corresponding row of V T . Rows of V T have norm 1, hence the i-th row of ΣkV T will have norm σi. The Frobenius norm squared is just the sum of the squared norms of the rows. Hence, we get that
$$\| \Gamma_{\mathrm{SVD}} \|_F^2 = \sum_{i=1}^{k} \sigma_i(C)^2. \qquad (9)$$
Using this simple observation, we can get the following alternative expression of Theorem 2.5. Corollary 2.6. Let C P RdˆT be a matrix with distinct singular values. Let W,Γ be the SVD solution of (2). Under the Additive Noise Model defined in (6), we have that:
$$\overline{\mathrm{MSE}}_{T,\sigma^2} = \overline{\mathrm{MSE}}_{T,0} + \frac{\| \Gamma \|_F^2}{T} \sigma^2. \qquad (10)$$
Corollary 2.6 provides two important insights: i) the normalization by the number of tasks that appears in (7) is justified since the Frobenius norm of Γ grows with the number of tasks, ii) if we can prove that the slope (defined in Equation (8)) is dropping, then we are effectively proving that multitasking gives ℓ2 regularization, as we showed in the toy introductory example. This also holds for the case of Gaussian, i.i.d. task vectors, as shown in the following theorem. Theorem 2.7. Let C ∈ R^{d×T} be a random matrix with Gaussian, i.i.d. entries of variance 1/d and d = Ω(T³). Let C_t, C_{t+1} be the matrices formed by selecting the first t and (t + 1) columns of C. Then, there is a noise level σ_thres such that with probability ≥ 1 − exp(−Ω(√d)), the SVD solutions (see (4)) of (2) (for C_t, C_{t+1} respectively), under the noise corruption model, satisfy:
$$\overline{\mathrm{MSE}}_{t+1,\sigma^2} < \overline{\mathrm{MSE}}_{t,\sigma^2}, \quad \forall\, \sigma \geq \sigma_{\mathrm{thres}}. \qquad (11)$$
Remark 2.8. In words, this result shows that adding new tasks gives provably increased robustness to high noise corruption in the weights, when the task vectors are Gaussian. Remark 2.9. Observe that the MSE under noise drops for every single new task added. The assumption d = Ω(T³) can be relaxed to d = Ω(t³), and we get increased robustness for the first t added tasks. Nevertheless, for most applications d = Ω(T³) is a realistic assumption: even for our smallest dataset, MNIST, d = 784, and we experiment with up to 10 tasks.
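Before moving to the experiments, the decomposition in (10) can be sanity-checked with a short Monte-Carlo simulation. The sketch below is ours: it fixes a unit-norm input x (this normalization is our assumption; the statement above does not spell out how x is scaled), builds the SVD solution for a random Gaussian task matrix, corrupts W as in (6), and compares the empirical noisy MSE against the predicted value.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, k, sigma = 64, 12, 3, 0.1

C = rng.normal(scale=1 / np.sqrt(d), size=(d, T))          # random Gaussian task vectors
U, S, Vt = np.linalg.svd(C, full_matrices=False)
W, Gamma = U[:, :k].T, np.diag(S[:k]) @ Vt[:k]              # SVD solution, Definition 2.3

x = rng.normal(size=d)
x /= np.linalg.norm(x)                                      # unit-norm sample (our assumption)
y = C.T @ x                                                 # ground-truth labels for all T tasks

noiseless = np.mean((Gamma.T @ (W @ x) - y) ** 2)
noisy = np.mean([
    np.mean((Gamma.T @ ((W + rng.normal(scale=sigma, size=W.shape)) @ x) - y) ** 2)
    for _ in range(5000)                                    # corrupt W as in Eq. (6)
])
predicted = noiseless + np.linalg.norm(Gamma, "fro") ** 2 / T * sigma ** 2
print(noisy, predicted)                                     # should agree up to Monte-Carlo error
```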
3 Experimental Evaluation
We divide the experimental section into two parts. In the first part, we add noise to the final linear representation layer of various networks and verify that our theoretical analysis agrees with experimentally observed multitasking robustness on real datasets (MNIST, CIFAR10, NewsGroup20). In the second part, we show that multitasking leads to robustness to general weight corruptions in any layer of a complex transformer. Specifically, we show that multilingual Language Models are more robust to weight shifts (across all the layers) compared to monolingual models trained under the same setting. This is the first evidence of increased Cognitive Reserve in bilingual artificial neural networks.
Experiments with Linear Representation Layers. We perform experiments on three datasets (MNIST, CIFAR10, Newsgroup20) and two modalities (Vision and Language). The datasets normally involve one classification task each. We create multiple binary tasks by distinguishing between pairs of labels. For example, in CIFAR10, one task might be to distinguish between dogs and cats and another between airplanes and cars. We assign a value in [0, 1] to each sample for each task to transform them to regression tasks (to match our theory). For example, if task i is to distinguish between dogs and cats, value 0 corresponds to dog and value 1 to cat.
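As an illustration of this construction, the following snippet (ours; the label indices are only examples) turns integer class labels into pairwise regression targets.

```python
import numpy as np

def make_pairwise_task(labels, class_a, class_b):
    """Keep samples labeled class_a or class_b and map them to regression targets:
    0.0 for class_a and 1.0 for class_b (e.g. dog -> 0, cat -> 1)."""
    labels = np.asarray(labels)
    idx = np.where((labels == class_a) | (labels == class_b))[0]
    return idx, (labels[idx] == class_b).astype(np.float32)

# CIFAR10-style labels: task "dog (5) vs cat (3)" and task "airplane (0) vs automobile (1)".
labels = np.random.randint(0, 10, size=1000)
tasks = [make_pairwise_task(labels, 5, 3), make_pairwise_task(labels, 0, 1)]
```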
The second issue is learning the task vectors from training data. For MNIST, we can simply learn a linear layer C with columns {c_1, ..., c_T} such that c_i^T x ≈ y for each task. For more complex datasets like CIFAR or Newsgroup20, linear networks have lower performance and hence it is less interesting to examine their robustness. Instead, we first use another network to extract representations g_θ(x) and then learn a linear layer acting on the encodings such that c_i^T g_θ(x) ≈ y. For CIFAR we used a pre-trained Resnet50 as the encoder, while for NewsGroup, a pre-trained BERT [12]. We would like to point out that our theory is still valid for this case – this is equivalent to the linear layer C receiving inputs from a learned representation as opposed to the features directly. As the number of tasks increases, we reduce the number of training examples per task. We do this to make sure that the total training dataset size stays the same as the number of tasks increases.
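One simple way to obtain such a layer is ordinary least squares on the frozen encoder features; this is our own sketch, not necessarily the exact fitting procedure used for the reported results.

```python
import numpy as np

def fit_task_vectors(features, task_targets):
    """features: (n, d) frozen encoder outputs g_theta(x).
    task_targets: list of (idx, y) pairs, one per task (see make_pairwise_task above).
    Returns C of shape (d, T) whose columns are least-squares task vectors."""
    C = np.zeros((features.shape[1], len(task_targets)))
    for t, (idx, y) in enumerate(task_targets):
        sol, *_ = np.linalg.lstsq(features[idx], y, rcond=None)
        C[:, t] = sol
    return C
```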
Figure 3 shows how the average MSE behaves as noise increases for different numbers of tasks. Note that even though all models begin from roughly the same performance in the noiseless setting, the multitask models are much more robust to the corruption of their weights, consistently across all the datasets and modalities. This is aligned with our theoretical analysis, which predicts that the robustness slope (defined in Equation (8)) decreases with the number of tasks. We calculate robustness slopes for learned task vectors for real datasets and plot their decay in the Appendix, where we further include all the details of how these models were trained.
Experiments with Language Models. Our objective is to compare robustness to neural weight perturbations in monolingual and bilingual language models. We use the following perturbation
models: 1) Random deletion of weight parameters: we zero-out p percent of the attention layer weights, 2) Magnitude pruning: we sort model attention weights by the magnitude and delete the smallest p percent of weights [7], 3) Random normal noise: we add zero-mean random Gaussian noise with standard deviation σ² to the attention weights.
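A sketch of these three corruptions for a PyTorch model is given below. It is our own simplification: we assume the attention parameters can be selected by a name filter (here the substring 'attn', as in the Hugging Face GPT-2 module names), and `level` plays the role of p or of the noise standard deviation.

```python
import torch

@torch.no_grad()
def perturb_attention_weights(model, mode, level, name_filter="attn"):
    """Corrupt parameters whose name contains `name_filter`.
    mode='delete': zero out a random fraction `level` of the entries.
    mode='prune' : zero out the `level` fraction of entries with smallest magnitude.
    mode='noise' : add zero-mean Gaussian noise with standard deviation `level`."""
    for name, p in model.named_parameters():
        if name_filter not in name or p.dim() < 2:
            continue
        if mode == "delete":
            p.mul_((torch.rand_like(p) >= level).float())
        elif mode == "prune":
            threshold = torch.quantile(p.abs().flatten(), level)
            p.mul_((p.abs() > threshold).float())
        elif mode == "noise":
            p.add_(level * torch.randn_like(p))
```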
On the selection of the linguistic pair, we selected Greek, a highly inflected language with very different morphology, syntax and phonology compared to English. It also uses a different script since Greek characters were not Romanized. This minimizes transfer between languages, something we wanted to avoid. In the Appendix, we present additional experiments for other Romance languages.
The dataset for the bilingual model is a concatenation of articles from English and Greek Wikipedia. To avoid the computational cost of training for a new language, we start from the pre-trained GPT-2 (small) [6] and we use the Language Model Recycling Technique, introduced in [13]. GPT-2 small is a transformer-based architecture for causal language modeling, with 12 attention blocks and 124M parameters. The tokenizer uses Byte Pair Encoding and has a vocabulary of 50,257 tokens. For the bilingual model, we generate a new tokenizer, vocabulary and embedding layer without changing the architecture. We keep the vocabulary size the same, as changing the vocabulary size can affect the scale of the perplexity score for these models. Note that Wikipedia documents were not in the original training of GPT-2, but our monolingual baseline was subsequently finetuned on English Wikipedia. Details on all our training hyperparameters are included in the Appendix.
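A rough sketch of this recycling setup, under our reading of [13], is shown below; `bilingual_corpus` is an assumed iterator over the English/Greek Wikipedia text, and the exact re-initialization details used for the reported models may differ.

```python
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel

base_tok = AutoTokenizer.from_pretrained("gpt2")
# Learn a new BPE vocabulary of the same size on the bilingual corpus (assumed iterator).
new_tok = base_tok.train_new_from_iterator(bilingual_corpus, vocab_size=base_tok.vocab_size)

model = GPT2LMHeadModel.from_pretrained("gpt2")
# Replace only the embedding layer; the transformer blocks keep their pre-trained weights.
model.set_input_embeddings(torch.nn.Embedding(new_tok.vocab_size, model.config.n_embd))
model.tie_weights()  # the LM head shares the (new) embedding matrix
```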
We measure the quality of generated text using perplexity. Our bilingual model achieves 89 perplexity on a randomly picked subset of the OSCAR [14] dataset and 76 perplexity on the English IMDB dataset [15]. The monolingual GPT-2 model achieves 36 perplexity on the IMDB dataset. In the Appendix we include generated text for both models. Although the perplexity of the bilingual model does not match the pre-trained GPT-2, the generated text is of reasonable quality in both languages.
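For reference, perplexity here can be computed along the following lines (our sketch; the actual evaluation script, chunking and token weighting may differ).

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

@torch.no_grad()
def perplexity(model, tokenizer, texts, max_length=512):
    model.eval()
    losses = []
    for text in texts:
        ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)["input_ids"]
        # For causal LMs, passing labels=input_ids returns the mean next-token cross-entropy.
        losses.append(model(ids, labels=ids).loss.item())
    return math.exp(sum(losses) / len(losses))  # simplified: unweighted average over texts

# e.g. perplexity(GPT2LMHeadModel.from_pretrained("gpt2"),
#                 GPT2TokenizerFast.from_pretrained("gpt2"), ["a movie review ..."])
```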
Text Generation. Our first experiment is to compare the performance of both models under various parameter perturbations. First, we try deleting a random portion p (p from 0% to 40%) of the attention layers’ weights to observe and compare the trend of decay in text generation quality between the two models. We evaluate both models on the IMDB dataset. As the graph in Figure 1 shows, the monolingual model starts with text predictions closer to the source text, resulting in lower perplexity without noise. However, as we delete a more significant portion of weights, the bilingual model matches the performance of the monolingual one and eventually outperforms it.
Next, we try magnitude-based pruning of a portion of weights, p, to observe and compare the trend of decay in text generation quality between the two models. We sort the attention layer weights by the magnitude and set p percent of weights with the lowest magnitude to zero. Again, we use the IMDB dataset to evaluate models. The graphs in Figure 4 show that as the training process continues, the model achieves a lower perplexity. Moreover, pruning additional weights has a less substantial impact on the model’s performance. This graph shows that training the pre-trained GPT-2 model for a few epochs on a bilingual dataset significantly improves robustness to weight perturbations.
In another experiment, we observe how the maximum singular value of the weight matrices changes throughout the training process. We track the maximum singular value of attention layer weights. We use a pretrained GPT-2 model baseline, and train this model for 16k iterations on English text data from Wikipedia. Resuming from this checkpoint, we train two new models: 1) We continue training model 1 on task 1 (English Wikipedia dataset) for 16k more iterations. 2) We train a second model on a different English dataset, the LAMBADA dataset [16], for 16k more iterations. Figure 5 indicates the results of this experiment by plotting maximum singular values of the first attention layer. As the Figure shows, training the model on a new dataset (task 2) results in a faster decay of the maximum singular value.
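The tracked quantity is simply the spectral norm of each attention weight matrix; a minimal way to compute it is sketched below (the module path in the comment assumes the Hugging Face GPT-2 implementation).

```python
import torch

def max_singular_value(weight: torch.Tensor) -> float:
    """Largest singular value (spectral norm) of a 2-D weight matrix."""
    return torch.linalg.svdvals(weight.detach().float()).max().item()

# e.g. max_singular_value(model.transformer.h[0].attn.c_attn.weight)
```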
Text Classification. We conduct another set of experiments to observe the robustness of fine-tuned monolingual and bilingual GPT-2 models for text classification. In this section, we fine-tune both the monolingual and the bilingual GPT-2 models (previously trained) for downstream classification tasks using the GLUE benchmark [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] to compare the robustness of models to weight perturbations. The two perturbation methods tested in this section are random weight deletion and random Gaussian noise added to attention weights. For each task, we fine-tune both models for ten epochs. When applying random pruning, the accuracy of each model is evaluated after deleting p percent of model weights, p ranging from 0% to 45%. When perturbing model weights by adding noise, we try various Gaussian noise distributions with standard deviations ranging from 0 to 0.09. Experiment results can be found in the Appendix section.
Random Pruning. We compare the classification accuracy between the fine-tuned model from the monolingual pre-trained network and the fine-tuned model using the bilingual network. Each element in attention parameters is pruned with probability p, where p ranges from .0 to .45. We evaluate the classification accuracy for the following GLUE tasks: CoLA, QQP, SST2, MRPC, QNLI, and RTE.
We expect the accuracy of both models to decay as we prune a larger number of parameters. The monolingual model shows a faster decay in almost all tasks. For some tasks such as SST2, QQP, and MRPC, we observe that the bilingual model starts with lower accuracy, and its performance exceeds the monolingual model as we prune ≈ 5% to ≈ 25% of parameters. A detailed set of results in Table 2 shows the models’ average prediction accuracy on the GLUE benchmark.
Random Noise. We also experiment with adding Gaussian noise to the weights. We vary the noise standard deviation from .0 to 0.09. We evaluate the classification accuracy for the same tasks. When no noise is added to model parameters, the monolingual model performs slightly better for tasks like QQP and SST2. As we increase the noise, the accuracies of both models drop at almost identical rates. However, both graphs illustrate a cross-over point after which the bilingual model outperforms the monolingual. The bilingual model achieves significantly higher accuracy on the MRPC task when the standard deviation is greater than ≈ 0.03. For CoLA and RTE, the monolingual model maintains higher performance regardless of the noise level. A detailed set of results in the Appendix section shows the models’ average prediction accuracy on the GLUE benchmark.
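The sweeps above follow a simple pattern: perturb a fresh copy of the fine-tuned model at each corruption level and re-evaluate. A generic sketch (ours) is below; `evaluate_accuracy` is a placeholder for the task-specific evaluation loop, and `perturb_attention_weights` is the helper sketched earlier.

```python
import copy
import numpy as np

def robustness_sweep(model, eval_fn, perturb_fn, levels):
    """Return {level: accuracy} after perturbing a fresh copy of `model` at each level."""
    results = {}
    for level in levels:
        corrupted = copy.deepcopy(model)
        perturb_fn(corrupted, level)
        results[level] = eval_fn(corrupted)
    return results

# e.g. robustness_sweep(finetuned_model, evaluate_accuracy,
#                       lambda m, p: perturb_attention_weights(m, "delete", p),
#                       levels=np.arange(0.0, 0.50, 0.05))
```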
4 Related Work
Cognitive Reserve and Bilingualism. Our work is inspired by Cognitive Science and evidence of Cognitive Reserve in bilinguals. One implication of our theory is that multitasking leads to smaller weights on average. This could be related to studies performed in healthy older adults that indicate that despite overall less gray matter volume and poorer white matter integrity (i.e., poorer structural brain connectivity), older healthy bilinguals perform equally well or outperform monolinguals in several cognitive tasks [1, 2].
We would like to emphasize that our research is solely on artificial networks, which differ greatly from biological neurons. No definite extrapolations should be made to Cognitive Neuroscience without further work. Nonetheless, we show that there is a simple mathematical abstraction that seems to align with the significantly more complex phenomena observed in bilingual cognitive reserve.
Multitask Learning. The most closely related work is by Mao et al. [29] which shows that multitask learning increases adversarial robustness. The intuition behind their proof is that, with task diversity, the gradient of the loss with respect to the wrong label is small as orthogonal tasks make gradients
that cancel out. Wu et al. [30] establishes a connection between robustness to weight perturbations and adversarial attacks. Our work is related but different since it directly establishes a connection between structural robustness and multitasking and shows a cross-over in performance across various domains and tasks. Our theoretical analysis is also completely different compared to prior works. More information on multitask learning can be found in Mao et al. [29] and Ghamizi et al. [31].
Many studies on network compression and the Lottery Ticket Hypothesis are related to our Magnitude Pruning experiments. LeCun et al. [32], Han et al. [7] find that selectively pruned networks can be trained from randomly initialized weights to match the performance of the original network. Frankle and Carbin [33] introduces the hypothesis that randomly initialized neural networks contain a very sparse sub-network that, if initialized correctly, can achieve the accuracy of the original model. Chen et al. [34] studies this in continual learning and examines various pruning methods.
5 Conclusions
We demonstrated a connection between multitask learning and robustness to structural failures for artificial neural networks. For linear representation learning we obtained a characterization of robustness through the spectrum of the task matrix. We showed that robustness comes from diverse tasks which imply a bounded spectral norm for C. One limitation of our theoretical work is that we did not analyze learning algorithms but directly used the SVD solution. It would be interesting to see if gradient descent introduces further regularization or other effects, especially in the non-linear case.
Experimentally, we observed increased robustness for both linguistic and non-linguistic tasks. More complex settings like multi-lingual models, cross-language transfer and their interactions remain to be explored. Finally, it remains open if bilingualism and cognitive reserve in humans can indeed be connected to our framework. It would be fascinating if neuroimaging techniques can measure any form of anatomical or functional regularization that bilingualism could be creating in humans.
6 Acknowledgments
This research has been supported by NSF Grants CCF 1763702, AF 1901292, CNS 2148141, Tripods CCF 1934932, IFML CCF 2019844, the Texas Advanced Computing Center (TACC) and research gifts by Western Digital, WNCG IAP, UT Austin Machine Learning Lab (MLL), Cisco and the Archie Straiton Endowed Faculty Fellowship. | 1. What is the focus and contribution of the paper regarding multitask learning and its relationship with robustness to neuron failures?
2. What are the strengths of the proposed approach, particularly in terms of its mathematical analysis and experimental demonstrations?
3. Do you have any concerns or difficulties in understanding the practical implications and insights of the paper's findings?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper investigates the relationship between multitask learning and robustness to neuron failures. The paper mathematically shows that multitasking models are more robust to noises, and diverse tasks lead to higher robustness than similar tasks. The paper also presents experiments that demonstrate higher robustness in multitasking models (both linguistic and non-linguistic).
Strengths And Weaknesses
I think the strengths of this paper include:
clearly written
interesting motivation bringing cognitive science and bilingualism into the space
interesting problem
experiments are conducted on several datasets and tasks.
the results could inspire other researchers
thorough appendix
The weaknesses are:
the connection between the human and what the paper does sounds interesting, but I cannot fully understand it.
I'm having trouble in understanding any practical implications and insights.
Questions
What would be practical implications and insights?
Limitations
The authors address limitations and ethical considerations.
NIPS | Title
Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve
Abstract
We find a surprising connection between multitask learning and robustness to neuron failures. Our experiments show that bilingual language models retain higher performance under various neuron perturbations, such as random deletions, magnitude pruning and weight noise compared to equivalent monolingual ones. We provide a theoretical justification of this robustness by mathematically analyzing linear representation learning and showing that multitasking creates more robust representations. Our analysis connects robustness to spectral properties of the learned representation and proves that multitasking leads to higher robustness for diverse task vectors. We open-source our code and models at the following URL: https://github.com/giannisdaras/multilingual_robustness.
1 Introduction
Converging evidence from cognitive science research indicates that bilingualism increases brain robustness by reducing the rate of cognitive decline due to aging [1, 2] and delaying the onset of symptoms of dementia [3, 4]. It appears that individuals who speak more than one language on a regular basis are able to maintain typical cognitive functioning despite neural degeneration. This mismatch between cognitive functioning and brain pathology is called Cognitive Reserve [5], and its underlying mechanisms are poorly understood and are an active topic of investigation.
Inspired by this research, we study whether artificial neural networks are more robust when trained on multiple languages or multiple tasks. Our experiments demonstrate that training on multiple tasks indeed increases structural robustness. We train monolingual and bilingual GPT-2 models with the same architecture and dataset sizes. Initially, the monolingual GPT-2 [6] models slightly outperform the bilingual ones, but when we introduce structural noise (by randomly deleting neurons or adding noise to the weights) the bilingual models degrade more gracefully and eventually outperform the monolingual models in the high-noise regime. For some amount of noise, bilingual models start outperforming the monolingual ones, demonstrating a cross-over in performance due to their increased robustness. We observe this phenomenon for numerous models across three different types of corruption: additive Gaussian noise to the weights, random weight pruning and magnitude-based weight pruning [7].
Our Contributions: We provide a theoretical justification of this phenomenon by mathematically analyzing linear multitask representation learning [8, 9]. Our analysis shows that introducing more
diverse tasks creates ℓ2 regularization in the linear task heads. Further, we formally connect the Euclidean norm of the learned representations to structural robustness under errors in the network weights. Our main theorem establishes that multitasking leads to higher robustness to additive noise for linear representations when the task vectors are selected as random and independent Gaussian vectors. Our results also establish that when the tasks are significantly overlapping, multitasking does not lead to higher robustness and hence task diversity is necessary.
We experimentally observe that multitasking increases structural robustness for numerous networks and multiple problems including MNIST, CIFAR10, Newsgroup20, GPT models and finetuned GPT models on GLUE tasks. We train networks under exactly comparable dataset and architecture conditions and show that models become more robust to structural failures as they are trained with more tasks. We experiment with three different types of structural failures and show robustness increases for all of them. We also experimentally observe that the addition of diverse tasks seems to regularize the model weights, as we predict in our theoretical analysis.
2 Theoretical Analysis
Building intuition. We start with a small numerical example to build intuition. Given a feature vector x ∈ R^d we compute a k-dimensional linear representation Wx using a matrix W ∈ R^{k×d}. We choose W such that we best approximate a set of ground truth task vectors, {c_1, c_2, ..., c_T}, that lie in R^d. The learned approximation is ĉ_i = W^T γ_i. Essentially, we use linear combinations of the columns of W^T to approximate the task vectors. For simplicity, we assume that the columns of W^T are unit norm. We study the case where k < T, otherwise there are infinite solutions. Assume we work in d = 3 dimensions with T = 3 total tasks, c_1 = [1, 0, 0]^T, c_2 = [0, 1, 0]^T, c_3 = [0, 0, 1]^T. Set our learned representation dimension to be k = 1. When T = 2, using only the first two tasks c_1, c_2, an optimal solution is W = (1/√2)[1, 1, 0]. The corresponding linear head is now the scalar γ_1 = 1/√2 = γ_2 and the approximate vectors are ĉ_1 = W^T γ_1 = [0.5, 0.5, 0]^T = ĉ_2. Therefore the best one-dimensional subspace to jointly approximate c_1, c_2 is the span of W = (1/√2)[1, 1, 0]. Now we introduce one more task and find the one-dimensional subspace that best approximates c_1, c_2, c_3. That becomes W′ = (1/√3)[1, 1, 1] with linear heads γ′_1 = 1/√3 = γ′_2 = γ′_3. The approximate vectors now are ĉ′_1 = (W′)^T γ′_1 = [1/3, 1/3, 1/3]^T = ĉ′_2 = ĉ′_3. Notice that ||ĉ′_i||² = 1/3 for 3 tasks but ||ĉ_i||² = 1/2 for two tasks. The point is that for more tasks, the vector that jointly approximates all task vectors becomes shorter. Equivalently, the ℓ2 norm of the linear task heads decreases from γ_i = 1/√2 to γ′_i = 1/√3 as the tasks increased from two to three, showing how multitasking creates regularization. A graphical representation of this example is given in Figure 2. It is important that the task vectors c_i are orthogonal, increasing the effective dimensionality of the problem. The intuition is that diverse tasks increase the effective dimension, making the best approximation vector shorter.
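The numbers in this example are easy to verify directly; the snippet below (ours) plugs the two closed-form solutions into the objective and checks that each attains the optimal rank-1 error (the sum of the discarded squared singular values), while the head norms shrink.

```python
import numpy as np

def check(C, W, gamma):
    err = np.linalg.norm(W.T @ gamma - C) ** 2                  # ||W^T Gamma - C||_F^2
    opt = np.sum(np.linalg.svd(C, compute_uv=False)[1:] ** 2)   # optimal rank-1 error
    print("heads", np.round(gamma, 3), " error", round(err, 3), " optimal", round(opt, 3))

check(np.eye(3)[:, :2], np.array([[1, 1, 0]]) / np.sqrt(2), np.array([[1, 1]]) / np.sqrt(2))
check(np.eye(3),        np.array([[1, 1, 1]]) / np.sqrt(3), np.array([[1, 1, 1]]) / np.sqrt(3))
```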
Our main theoretical result is that this phenomenon is quite general and makes multitasking lead to structural robustness. We connect the norm of the approximated task vectors with robustness to weight perturbations and show that for Gaussian, independent task vectors the average norm shrinks as more tasks are added. This is intuitive since high dimensional Gaussian vectors are near-orthogonal. Surprisingly, we empirically show that task vectors for numerous problems also exhibit this behavior.
Analysis. We consider a neural network f_θ : R^d → R^k and a collection of tasks {T_1, ..., T_T}. We are trying to learn θ, γ_i ∈ R^k to solve the following optimization problem:
argmin_{θ, {γ_1,...,γ_T}} Σ_{i=1}^T E_{(x,y)∈T_i} L(γ_i^T f_θ(x), y). (1)
The neural network f_θ can be as simple as a single matrix W : R^d → R^k. For linear networks, we consider the following dataset generation process: for task T_i, we sample a Gaussian x and we generate its label y by taking the inner-product with a task vector c_i, i.e. y = c_i^T x for task T_i. Given infinite samples and MSE loss, the optimization problem of (1) is equivalent to the following problem. Definition 2.1 (Optimization Problem). Let k < T < d. We define the Factorized Best Rank-k approximation of a matrix C ∈ R^{d×T} as the optimization problem:
W*, Γ* = argmin_{W ∈ R^{k×d}, Γ ∈ R^{k×T}} ||W^T Γ − C||²_F. (2)
We are interested in the case when the dimensionality of the representation k is smaller than the number of tasks T , otherwise the best Rank-k approximation of C is not unique.
The following Proposition states that in the considered setting, Problem 2 can be solved with SVD. Proposition 2.2. For any matrix C ∈ R^{d×T} with distinct singular values, any solution of 2.1 satisfies:
W*^T Γ* = U Σ_k V^T, (3)
where U Σ V^T is the SVD of C and Σ_k is the same as Σ except that the last T − k diagonal entries are zeroed out.
The fact that the Singular Value decomposition computes the best rank-k approximation to a matrix can be found in several textbooks e.g. Golub and Van Loan [10], Blum et al. [11].
This proposition establishes that W* = U^T and Γ* = Σ_k V^T is a valid solution of (2). Onwards, we will be calling this the SVD Solution. Definition 2.3. We define the SVD solution of (2) to be:
W_SVD = U^T, Γ_SVD = Σ_k V^T. (4)
We note that if any multitask learning algorithm is used to obtain W˚,Γ˚, one can run Gram-Schmidt to make W˚ orthonormal and hence obtain the factorization we use. It is important that W stays normalized and all scaling is pushed to Γ since to measure robustness to weight shifts, we are going to add noise to W only, and higher W scaling is equivalent to lower effective noise.
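A compact sketch of the SVD solution (and of the identity in Proposition 2.2) is given below; the dimensions are arbitrary and this is our own illustration, not code from the paper.

```python
import numpy as np

def svd_solution(C, k):
    """Definition 2.3: W_SVD has orthonormal rows, and all scaling sits in Gamma_SVD."""
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k].T, np.diag(S[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
C = rng.normal(size=(32, 8))                     # d = 32, T = 8
W, Gamma = svd_solution(C, k=3)

U, S, Vt = np.linalg.svd(C, full_matrices=False)
S_k = np.where(np.arange(len(S)) < 3, S, 0.0)    # zero out the last T - k singular values
assert np.allclose(W.T @ Gamma, (U * S_k) @ Vt)  # W^T Gamma equals U Sigma_k V^T, Eq. (3)
```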
1. What is the main contribution of the paper regarding neural networks' resilience to structural noise?
2. What are the strengths and weaknesses of the proposed hypothesis and its experimental validation?
3. Do you have any concerns about the experimental paradigm or the jump between the predictions in the linear case and the non-linear one?
4. How do you think the results would change if the authors used a CE loss or corrupted a non-linear network in the image recognition experiments?
5. What are your suggestions for improving the paper, such as using different symbols or letters for different concepts?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper introduces the hypothesis that neural networks are more resilient to structural noise when they have been trained to solve multiple tasks, rather than a single one. The hypothesis finds inspiration in a linguistic phenomenon under which cognitive functions of multilingual speakers are more resilient to dementia and cognitive decline. The paper proposes a formal explanation for this phenomenon based on an analysis of a linear case in which the task is that of approximating a number T of target vectors c_i using linear combinations of an orthonormal weight matrix W^T. In this case, they show that increasing the number of approximated tasks effectively decreases both the expected MSE of the solutions and the Frobenius norm of the linear coefficients (thus inducing L2 regularization on them), thus making a connection between structural robustness and L2 regularization. Then, the authors experiment both with linear representation layers on image recognition datasets and on language modelling, observing that models trained on more than 1 task tend to be more robust to different kinds of perturbations.
Strengths And Weaknesses
Strengths
This paper presents an intriguing hypothesis on the properties of optimal solutions to multi-task learning setups.
It presents an insightful theoretical analysis of the linear case that could possibly be extended in the future to account for more realistic scenarios.
Backs the theoretical claim with multiple experiments on different datasets.
The paper is reasonably clear, but there are several points that would need clarification (see Weaknesses below).
Weaknesses
Some parts of the experimental paradigm look a bit arbitrary. For instance, the authors use IMDB as an evaluation set. Why that is not clear. Do the results replicate, say, on validation sets of English Wikipedia? A second experiment uses the LAMBADA dataset as a secondary task. LAMBADA is not a language modeling dataset. It's just an evaluation dataset containing only a few selected passages, so it is not clear why it is a good choice for these experiments. It is also not clear what is the connection between the attention weights and the predicted effect on L2 regularization.
There seem to be missing references that come up on a quick google search, such as this one: https://arxiv.org/pdf/2004.11072.pdf
The paper introduces experiments, whose results are only presented in the appendix. This is not appropriate. Either these experiments should be only briefly mentioned in the main body, or the full results should be discussed in it. Only introducing them seems like a way of working around the page limit.
There seems to be a strong jump between the predictions in the linear case to the non-linear one. The latter ones are only explored in the Transformer LM experiments. It would have been good to exploit the simpler image recognition experiments to also explore whether the results followed the predictions in non-linear networks trained with CE.
Questions
It is not clear to me whether the experiments were adequately controlled. Were the "monotask" and "multitask" models trained for a comparable number of iterations on a comparable number of data points?
T as in the number of tasks and T in W^T are completely different T's, right? The second stands for transpose? If so, this is a tad confusing. I would strongly recommend using a different symbol for the transpose (see https://tex.stackexchange.com/questions/30619/what-is-the-best-symbol-for-vector-matrix-transpose) or using a different letter for the number of tasks.
The image recognition experiments are performed on the linear case. What do you think would change if using a CE loss or corrupting a non-linear network as in the LM experiments?
Limitations
The authors have acknowledged that the studied networks are not state-of-the-art, but that is also not relevant to the question they want to study. They also make clear that beyond the loose inspiration on the cognitive phenomenon of "Bilingual Cognitive Reserve", no extrapolations should be made without significant work. To stay true to that acknowledged limitation, the authors could revise the title fragment "A Neural Model for Bilingual Cognitive Reserve", which for the uncareful reader could warrant such extrapolations. |
NIPS | Title
Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve
Abstract
We find a surprising connection between multitask learning and robustness to neuron failures. Our experiments show that bilingual language models retain higher performance under various neuron perturbations, such as random deletions, magnitude pruning and weight noise compared to equivalent monolingual ones. We provide a theoretical justification of this robustness by mathematically analyzing linear representation learning and showing that multitasking creates more robust representations. Our analysis connects robustness to spectral properties of the learned representation and proves that multitasking leads to higher robustness for diverse task vectors. We open-source our code and models in the following URL: https://github.com/giannisdaras/multilingual robustness.
1 Introduction
Converging evidence from cognitive science research indicates that bilingualism increases brain robustness by reducing the rate of cognitive decline due to aging [1, 2] and delaying the onset of symptoms of dementia [3, 4]. It appears that individuals who speak more than one language on a regular basis are able to maintain typical cognitive functioning despite neural degeneration. This mismatch between cognitive functioning and brain pathology is called Cognitive Reserve [5], and its underlying mechanisms are poorly understood and are an active topic of investigation.
Inspired by this research, we study whether artificial neural networks are more robust when trained on multiple languages or multiple tasks. Our experiments demonstrate that training on multiple tasks indeed increases structural robustness. We train monolingual and bilingual GPT-2 models with the same architecture and dataset sizes. Initially, monolingual GPT-2 [6] models are slightly outperforming the bilingual ones, but when we introduce structural noise (by randomly deleting neurons or adding noise to the weights) bilingual models degrade more gracefully and eventually outperform the monolingual models in the high-noise regime. For some amount of noise, bilingual models start outperforming the monolingual ones demonstrating a cross-over in performance due to their increased robustness. We observe this phenomenon for numerous models across three different types of corruption: additive Gaussian noise to the weights, random weight pruning and magnitude-based weight pruning [7].
Our Contributions: We provide a theoretical justification of this phenomenon by mathematically analyzing linear multitask representation learning [8, 9]. Our analysis shows that introducing more
˚equal contribution.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
diverse tasks creates ℓ2 regularization in the linear task heads. Further, we formally connect the Euclidean norm of the learned representations to structural robustness under errors in the network weights. Our main theorem establishes that multitasking leads to higher robustness to additive noise for linear representations when the task vectors are selected as random and independent Gaussian vectors. Our results also establish that when the tasks are significantly overlapping, multitasking does not lead to higher robustness and hence task diversity is necessary.
We experimentally observe that multitasking increases structural robustness for numerous networks and multiple problems including MNIST, CIFAR10, Newsgroup20, GPT models and finetuned GPT models on GLUE tasks. We train networks under exactly comparable dataset and architecture conditions and show that models become more robust to structural failures as they are trained with more tasks. We experiment with three different types of structural failures and show robustness increases for all of them. We also experimentally observe that the addition of diverse tasks seems to regularize the model weights, as we predict in our theoretical analysis.
2 Theoretical Analysis
Building intuition. We start with a small numerical example to build intuition. Given a feature vector x P Rd we compute a k dimensional linear representation Wx using a matrix W P Rkˆd. We choose
W such that we best approximate a set of ground truth task vectors, tc1, c2, ..., cT u, that lie in Rd. The learned approximation is ĉi “ WT γi. Essentially, we use linear combinations of the columns of WT to approximate the task vectors. For simplicity, we assume that the columns of WT are unit norm. We study the case where k ă T , otherwise there are infinite solutions. Assume we work in d “ 3 dimensions with T “ 3 total tasks, c1 “ r1, 0, 0sT , c2 “ r0, 1, 0sT , c3 “ r0, 0, 1sT . Set our learned representation dimension to be k “ 1 dimensional. When T “ 2, using only the first two tasks c1, c2, an optimal solution is W “ 1?2 r1, 1, 0s. The corresponding linear head is now the scalar γ1 “ 1?2 “ γ2 and the approximate vectors are ĉ1 “ W
T γ1 “ r0.5, 0.5, 0sT “ ĉ2. Therefore the best one dimensional subspace to jointly approximate c1, c2 is the span of W “ 1? 2
r1, 1, 0s. Now we introduce one more task and find the one dimensional subspace that best approximates c1, c2, c3. That becomes W 1 “ 1?3 r1, 1, 1s with linear heads γ 1 1 “ 1?3 “ γ 1 2 “ γ13. The approximate vectors now are ĉ11 “ pW 1qT γ11 “ r1{3, 1{3, 1{3sT “ ĉ12 “ ĉ13. Notice that ||ĉ1i||2 “ 1{3 for 3 tasks but ||ĉi||2 “ 1{2 for two tasks. The point is that for more tasks, the vector that jointly approximates all task vectors becomes shorter. Equivalently, the ℓ2 norm of the linear task heads decreases from γi “ 1?2 to γ 1 i “ 1?3 as the tasks increased from two to three showing how multitasking creates regularization. A graphical representation of this example is given in Figure 2. It is important that the task vectors ci are orthogonal, increasing the effective dimensionality of the problem. The intuition is that diverse tasks increase the effective dimension, making the best approximation vector shorter.
Our main theoretical result is that this phenomenon is quite general and makes multitasking lead to structural robustness. We connect the norm of the approximated task vectors with robustness to weight perturbations and show that for Gaussian, independent task vectors the average norm shrinks as more tasks are added. This is intuitive since high dimensional Gaussian vectors are near-orthogonal. Surprisingly, we empirically show that task vectors for numerous problems also exhibit this behavior.
Analysis. We consider a neural network fθ : Rd Ñ Rk and a collection of tasks tT1, ..., TT u. We are trying to learn θ, γi P Rk to solve the following optimization problem:
argminθ,tγ1,...,γT u T ÿ
i“1 Epx,yqPTiLpγ T i fθpxq, yq. (1)
The neural network fθ can be as simple as a single matrix W : Rd Ñ Rk. For linear networks, we consider the following dataset generation process: for task Ti, we sample a Gaussian x and we generate its label y by taking the inner-product with a task vector ci, i.e. y “ cTi x for task Ti. Given infinite samples and MSE loss, the optimization problem of (1) is equivalent to the following problem. Definition 2.1 (Optimization Problem). Let k ă T ă d. We define the Factorized Best Rank-k approximation of a matrix C P RdˆT as the optimization problem:
W˚,Γ˚ “ argminWPRkˆd,ΓPRkˆT ˇ ˇ ˇ ˇWTΓ ´ C ˇ ˇ ˇ ˇ
2 F . (2)
We are interested in the case when the dimensionality of the representation k is smaller than the number of tasks T , otherwise the best Rank-k approximation of C is not unique.
The following Proposition states that in the considered setting, Problem 2 can be solved with SVD. Proposition 2.2. For any matrix C P RdˆT with distinct singular values, any solution of 2.1 satisfies:
W˚ T Γ˚ “ UΣkV T , (3) where UΣV T is the SVD of C and Σk is the same as Σ except than the last T ´ k diagonal entries that are zeroed out.
The fact that the Singular Value decomposition computes the best rank-k approximation to a matrix can be found in several textbooks e.g. Golub and Van Loan [10], Blum et al. [11].
This proposition establishes that W˚ “ UT and Γ˚ “ ΣkV T is a valid solution of (2). Onwards, we will be calling this the SVD Solution. Definition 2.3. We define the SVD solution of (2), to be:
WSVD “ UT , ΓSVD “ ΣkV T . (4)
We note that if any multitask learning algorithm is used to obtain W˚,Γ˚, one can run Gram-Schmidt to make W˚ orthonormal and hence obtain the factorization we use. It is important that W stays normalized and all scaling is pushed to Γ since to measure robustness to weight shifts, we are going to add noise to W only, and higher W scaling is equivalent to lower effective noise.
We study how the performance is affected when the representation network $f_\theta$ is corrupted.
Definition 2.4. For any sample $x$, the Mean Squared Error (MSE) for task $i$ is defined to be the expected error between the model prediction under noise and the true value $y$. Namely,
$$\mathrm{MSE}_i = \mathbb{E}_{\theta_c}\left[ (\gamma_i^T f_{\theta_c}(x) - y)^2 \right], \quad (5)$$
where $f_{\theta_c}$ is the model that emerges after corrupting $f_\theta$.
This measures how well the model approximates the ground truth under the presence of noise and under the constraint of a joint representation for multiple tasks.
The simplest corruption process to study is adding noise to the representation matrix, i.e.
$$W_c = W + N, \quad N_{ij} \sim \mathcal{N}(0, \sigma^2) \text{ i.i.d.} \quad (6)$$
Then, we denote the mean squared error for task $i$ by $\mathrm{MSE}_{i,\sigma^2}$ and the average mean squared error across the $T$ tasks by $\overline{\mathrm{MSE}}_{T,\sigma^2}$. We are now ready to introduce our results.
Theorem 2.5 (Mean Squared Error for Additive Noise). Let $C \in \mathbb{R}^{d \times T}$ be a matrix with distinct singular values $\sigma_1 > \sigma_2 > ... > \sigma_T$. Let $W, \Gamma$ be the SVD solution of (2). Under the Additive Noise Model defined in (6), we have that:
$$\underbrace{\overline{\mathrm{MSE}}_{T,\sigma^2}}_{\text{average MSE under noise}} \;=\; \underbrace{\overline{\mathrm{MSE}}_{T,0}}_{\text{average MSE without noise}} \;+\; \frac{\sum_{i=1}^{k} \sigma_i(C)^2}{T} \cdot \underbrace{\sigma^2}_{\text{noise variance}}. \quad (7)$$
As shown, the noisy MSE decomposes into the sum of the noiseless MSE plus the noise variance times a function that depends on the number of tasks:
$$R(T) = \frac{\sum_{i=1}^{k} \sigma_i(C)^2}{T}. \quad (8)$$
It is important to emphasize that as more tasks are added, the matrix $C$ changes, but the interlacing theorem allows us to connect the singular values of smaller submatrices, as discussed in the Appendix. $R(T)$ is the robustness slope: if a model with $T$ tasks has a smaller slope, it will eventually outperform a model with, say, $T-1$ tasks and a larger slope, for sufficiently large noise. This is true even if the noiseless performance of the $(T-1)$-task model is better, indicating a cross-over in MSE. Therefore the key is understanding when the sum of the top $k$ squared singular values of $C$ scales sublinearly in $T$. This is not true for tasks that are aligned, but we can show it holds for independent Gaussian task vectors. We believe it holds for more general families of diverse task vectors, and our experiments verify that it also holds for numerous real task vectors learned from text and vision datasets.
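The identity (7) is easy to check numerically. The sketch below is a Monte Carlo verification under the assumption that inputs are drawn on the unit sphere, so that $\mathbb{E}\|x\|^2 = 1$ (the excerpt does not pin down the input scaling, so this normalization is our assumption); sizes and the noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, k, sigma = 30, 8, 4, 0.3
C = rng.normal(scale=1 / np.sqrt(d), size=(d, T))

U, S, Vt = np.linalg.svd(C, full_matrices=False)    # SVD solution of Definition 2.3
W, Gamma = U[:, :k].T, S[:k, None] * Vt[:k]

def avg_mse(noise_std, n_trials=4000):
    """Average MSE over tasks under W_c = W + N, estimated by Monte Carlo.
    Inputs are drawn on the unit sphere so that E||x||^2 = 1 (our scaling assumption)."""
    total = 0.0
    for _ in range(n_trials):
        x = rng.normal(size=d)
        x /= np.linalg.norm(x)
        Wc = W + rng.normal(scale=noise_std, size=W.shape)
        preds = Gamma.T @ (Wc @ x)                  # gamma_i^T W_c x for all tasks
        targets = C.T @ x                           # ground-truth labels y_i = c_i^T x
        total += np.mean((preds - targets) ** 2)
    return total / n_trials

gap = avg_mse(sigma) - avg_mse(0.0)
slope = np.sum(S[:k] ** 2) / T                      # R(T) from Equation (8)
print("Monte Carlo noise penalty:", round(gap, 4))
print("theory R(T) * sigma^2    :", round(slope * sigma ** 2, 4))
```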
Connection with $\ell_2$ regularization. For the SVD solution (see Definition 2.3), the sum of the top-$k$ squared singular values is the squared Frobenius norm of $\Gamma$. Indeed, we have that $\|\Gamma_{\mathrm{SVD}}\|_F^2 = \|\Sigma_k V^T\|_F^2$. Since $\Sigma_k$ is a diagonal matrix, each row of $\Sigma_k V^T$ is a rescaling of the corresponding row of $V^T$. Rows of $V^T$ have norm 1, hence the $i$-th row of $\Sigma_k V^T$ has norm $\sigma_i$. The squared Frobenius norm is just the sum of the squared norms of the rows. Hence, we get that
$$\|\Gamma_{\mathrm{SVD}}\|_F^2 = \sum_{i=1}^{k} \sigma_i(C)^2. \quad (9)$$
Using this simple observation, we can get the following alternative expression of Theorem 2.5.
Corollary 2.6. Let $C \in \mathbb{R}^{d \times T}$ be a matrix with distinct singular values. Let $W, \Gamma$ be the SVD solution of (2). Under the Additive Noise Model defined in (6), we have that:
$$\overline{\mathrm{MSE}}_{T,\sigma^2} = \overline{\mathrm{MSE}}_{T,0} + \frac{\|\Gamma\|_F^2}{T}\, \sigma^2. \quad (10)$$
Corollary 2.6 provides two important insights: i) the normalization by the number of tasks that appears in (7) is justified, since the Frobenius norm of $\Gamma$ grows with the number of tasks; ii) if we can prove that the slope (defined in Equation (8)) is dropping, then we are effectively proving that multitasking gives $\ell_2$ regularization, as we showed in the toy introductory example. This also holds for the case of Gaussian, i.i.d. task vectors, as shown in the following theorem.
Theorem 2.7. Let $C \in \mathbb{R}^{d \times T}$ be a random matrix with Gaussian, i.i.d. entries of variance $1/d$ and $d = \Omega(T^3)$. Let $C_t, C_{t+1}$ be the matrices formed by selecting the first $t$ and $t+1$ columns of $C$. Then, there is a noise level $\sigma_{\mathrm{thres}}$ such that with probability $\geq 1 - \exp(-\Omega(\sqrt{d}))$, the SVD solutions (see (4)) of (2) (for $C_t, C_{t+1}$ respectively), under the noise corruption model, satisfy:
$$\overline{\mathrm{MSE}}_{t+1,\sigma^2} < \overline{\mathrm{MSE}}_{t,\sigma^2}, \quad \forall \sigma \geq \sigma_{\mathrm{thres}}. \quad (11)$$
Remark 2.8. In words, this result shows that adding new tasks gives provably increased robustness to high noise corruption in the weights when the task vectors are Gaussian.
Remark 2.9. Observe that the MSE under noise drops for every single new task added. The assumption $d = \Omega(T^3)$ can be relaxed to $d = \Omega(t^3)$, and we get increased robustness for the first $t$ added tasks. Nevertheless, for most applications $d = \Omega(T^3)$ is a realistic assumption: even for our smallest dataset, MNIST, $d = 784$, and we experiment with up to 10 tasks.
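A quick numerical illustration of the mechanism behind Theorem 2.7 (our sketch, with arbitrary parameters): for a Gaussian task matrix the columns are near-orthogonal, so the top-$k$ squared singular values stay close to 1 and the slope $R(t)$ decays roughly like $k/t$ as tasks are added.

```python
import numpy as np

rng = np.random.default_rng(2)
d, T_max, k = 1000, 10, 4
C = rng.normal(scale=1 / np.sqrt(d), size=(d, T_max))   # i.i.d. Gaussian task vectors

for t in range(k + 1, T_max + 1):
    S = np.linalg.svd(C[:, :t], compute_uv=False)
    R_t = np.sum(S[:k] ** 2) / t                         # robustness slope R(t), Eq. (8)
    print(f"t = {t:2d}   R(t) = {R_t:.4f}")
# Near-orthogonal Gaussian columns keep the top-k squared singular values close to 1,
# so R(t) decays roughly like k / t as tasks are added.
```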
3 Experimental Evaluation
We divide the experimental section into two parts. In the first part, we add noise to the final linear representation layer of various networks and verify that our theoretical analysis agrees with experimentally observed multitasking robustness on real datasets (MNIST, CIFAR10, NewsGroup20). In the second part, we show that multitasking leads to robustness to general weight corruptions in any layer of a complex transformer. Specifically, we show that multilingual language models are more robust to weight shifts (across all the layers) compared to monolingual models trained under the same setting. This is the first evidence of increased Cognitive Reserve in bilingual artificial neural networks.
Experiments with Linear Representation Layers. We perform experiments on three datasets (MNIST, CIFAR10, Newsgroup20) and two modalities (vision and language). The datasets normally involve one classification task each. We create multiple binary tasks by distinguishing between pairs of labels. For example, in CIFAR10, one task might be to distinguish between dogs and cats and another between airplanes and cars. We assign a value in $[0, 1]$ to each sample for each task to transform them into regression tasks (to match our theory). For example, if task $i$ is to distinguish between dogs and cats, value 0 corresponds to dog and value 1 to cat.
The second issue is learning the task vectors from training data. For MNIST, we can simply learn a linear layer $C$ with columns $\{c_1, ..., c_T\}$ such that $c_i^T x \approx y$ for each task. For more complex datasets like CIFAR or Newsgroup20, linear networks have lower performance, and hence it is less interesting to examine their robustness. Instead, we first use another network to extract representations $g_\theta(x)$ and then learn a linear layer acting on the encodings such that $c_i^T g_\theta(x) \approx y$. For CIFAR we used a pre-trained ResNet50 as the encoder, while for NewsGroup we used a pre-trained BERT [12]. We would like to point out that our theory is still valid for this case – this is equivalent to the linear layer $C$ receiving inputs from a learned representation as opposed to the features directly. As the number of tasks increases, we reduce the number of training examples per task. We do this to make sure that the total training dataset size stays the same as the number of tasks increases.
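A sketch of this fitting step using ordinary least squares on frozen encoder features is shown below; the synthetic features and targets are placeholders for $g_\theta(x)$ and the per-task labels, and all sizes are illustrative.

```python
import numpy as np

def fit_task_vectors(G, Y):
    """Least-squares task vectors: one column c_i per task so that c_i^T g(x) ~ y_i.
    G: (n, d) frozen encoder features, Y: (n, T) per-task targets."""
    C, *_ = np.linalg.lstsq(G, Y, rcond=None)
    return C                                        # shape (d, T)

# Synthetic stand-ins for encoder features g_theta(x) and per-task labels.
rng = np.random.default_rng(0)
n, d, T = 5000, 64, 6
G = rng.normal(size=(n, d))
true_C = rng.normal(scale=1 / np.sqrt(d), size=(d, T))
Y = G @ true_C + 0.05 * rng.normal(size=(n, T))

C = fit_task_vectors(G, Y)
print("per-task residual norms:", np.linalg.norm(G @ C - Y, axis=0).round(2))
```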
Figure 3 shows how the average MSE behaves as noise increases for different numbers of tasks. Note that even though all models begin from roughly the same performance in the noiseless setting, the multitask models are much more robust to the corruption of their weights, consistently across all the datasets and modalities. This is aligned with our theoretical analysis, which predicts that the robustness slope (defined in Equation (8)) decreases with the number of tasks. We calculate robustness slopes for learned task vectors for real datasets and plot their decay in the Appendix, where we further include all the details of how these models were trained.
Experiments with Language Models. Our objective is to compare robustness to neural weight perturbations in monolingual and bilingual language models. We use the following perturbation
models: 1) Random deletion of weight parameters: we zero out $p$ percent of the attention-layer weights. 2) Magnitude pruning: we sort the model's attention weights by magnitude and delete the smallest $p$ percent of weights [7]. 3) Random normal noise: we add zero-mean Gaussian noise with variance $\sigma^2$ to the attention weights.
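The three corruption models can be sketched as follows for any PyTorch model; the filter on parameter names containing "attn" assumes GPT-2-style module naming and is our heuristic rather than the authors' exact implementation.

```python
import torch

@torch.no_grad()
def perturb_attention_weights(model, mode, level):
    """Apply one of the three corruption models to attention-layer weights in place.
    mode: 'delete' (zero each weight with probability `level`), 'prune' (zero the
    smallest `level` fraction by magnitude), or 'noise' (add N(0, level^2) noise)."""
    for name, p in model.named_parameters():
        if "attn" not in name or p.dim() < 2:       # heuristic filter for attention weights
            continue
        if mode == "delete":
            p.mul_((torch.rand_like(p) > level).float())
        elif mode == "prune":
            k = int(level * p.numel())
            if k > 0:
                threshold = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > threshold).float())
        elif mode == "noise":
            p.add_(torch.randn_like(p) * level)
```

A perturbed copy can then be evaluated exactly like the clean model, e.g. by calling the function on a deep copy of the model with the desired mode and level.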
For the choice of linguistic pair, we selected Greek, a highly inflected language with very different morphology, syntax and phonology compared to English. It also uses a different script, since the Greek characters were not Romanized. This minimizes cross-language transfer, which we wanted to avoid. In the Appendix, we present additional experiments for other Romance languages.
The dataset for the bilingual model is a concatenation of articles from English and Greek Wikipedia. To avoid the computational cost of training for a new language, we start from the pre-trained GPT-2 (small) [6] and we use the Language Model Recycling Technique, introduced in [13]. GPT-2 small is a transformer-based architecture for causal language modeling, with 12 attention blocks and 124M parameters. The tokenizer uses Byte Pair Encoding and has a vocabulary of 50,257 tokens. For the bilingual model, we generate a new tokenizer, vocabulary and embedding layer without changing the architecture. We keep the vocabulary size the same, as changing the vocabulary size can affect the scale of the perplexity score for these models. Note that Wikipedia documents were not in the original training of GPT-2, but our monolingual baseline was subsequently finetuned on English Wikipedia. Details on all our training hyperparameters are included in the Appendix.
We measure the quality of generated text using perplexity. Our bilingual model achieves 89 perplexity on a randomly picked subset of the OSCAR [14] dataset and 76 perplexity on the English IMDB dataset [15]. The monolingual GPT-2 model achieves 36 perplexity on the IMDB dataset. In the Appendix we include generated text from both models. Although the perplexity of the bilingual model does not match the pre-trained GPT-2, the generated text is of reasonable quality in both languages.
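Perplexity here is the exponential of the average per-token negative log-likelihood. A standard recipe with the Hugging Face GPT-2 model looks like the sketch below (not necessarily the authors' exact evaluation script; the sample text is a placeholder).

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(texts, max_len=512):
    """Corpus perplexity = exp(mean per-token negative log-likelihood)."""
    nll, n_tokens = 0.0, 0
    for text in texts:
        ids = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=max_len).input_ids
        out = model(ids, labels=ids)        # loss = mean NLL over the shifted tokens
        n = ids.size(1) - 1
        nll += out.loss.item() * n
        n_tokens += n
    return math.exp(nll / n_tokens)

print(perplexity(["The movie was surprisingly good, and the acting was excellent."]))
```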
Text Generation. Our first experiment is to compare the performance of both models under various parameter perturbations. First, we delete a random portion $p$ (from 0% to 40%) of the attention layers’ weights to observe and compare the decay in text generation quality between the two models. We evaluate both models on the IMDB dataset. As the graph in Figure 1 shows, the monolingual model starts with text predictions closer to the source text, resulting in lower perplexity without noise. However, as we delete a more significant portion of the weights, the bilingual model matches the performance of the monolingual one and eventually outperforms it.
Next, we try magnitude-based pruning of a portion of weights, p, to observe and compare the trend of decay in text generation quality between the two models. We sort the attention layer weights by the magnitude and set p percent of weights with the lowest magnitude to zero. Again, we use the IMDB dataset to evaluate models. The graphs in Figure 4 show that as the training process continues, the model achieves a lower perplexity. Moreover, pruning additional weights has a less substantial impact on the model’s performance. This graph shows that training the pre-trained GPT-2 model for a few epochs on a bilingual dataset significantly improves robustness to weight perturbations.
In another experiment, we observe how the maximum singular value of the weight matrices changes throughout the training process. We track the maximum singular value of the attention layer weights. We use a pretrained GPT-2 model as the baseline and train it for 16k iterations on English text data from Wikipedia. Resuming from this checkpoint, we train two new models: 1) we continue training model 1 on task 1 (the English Wikipedia dataset) for 16k more iterations; 2) we train a second model on a different English dataset, the LAMBADA dataset [16], for 16k more iterations. Figure 5 shows the results of this experiment by plotting the maximum singular values of the first attention layer. As the figure shows, training the model on a new dataset (task 2) results in a faster decay of the maximum singular value.
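Tracking the largest singular value of each attention weight matrix is straightforward; a minimal sketch is below, and the checkpoint file names in the commented usage are hypothetical.

```python
import torch

@torch.no_grad()
def max_attention_singular_values(model):
    """Largest singular value of every 2-D attention weight matrix in the model."""
    return {name: torch.linalg.svdvals(p).max().item()
            for name, p in model.named_parameters()
            if "attn" in name and p.dim() == 2}

# Hypothetical usage across saved checkpoints:
# for step, path in [(0, "ckpt_0.pt"), (16000, "ckpt_16k.pt")]:
#     model.load_state_dict(torch.load(path))
#     print(step, max_attention_singular_values(model))
```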
Text Classification. We conduct another set of experiments to observe the robustness of fine-tuned monolingual and bilingual GPT-2 models for text classification. In this section, we fine-tune both the monolingual and the bilingual GPT-2 models (previously trained) for downstream classification tasks using the GLUE benchmark [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] to compare the robustness of models to weight perturbations. The two perturbation methods tested in this section are random weight deletion and random Gaussian noise added to attention weights. For each task, we fine-tune both models for ten epochs. When applying random pruning, the accuracy of each model is evaluated after deleting p percent of model weights, p ranging from 0% to 45%. When perturbing model weights by adding noise, we try various Gaussian noise distributions with standard deviations ranging from 0 to 0.09. Experiment results can be found in the Appendix section.
Random Pruning. We compare the classification accuracy between the fine-tuned model from the monolingual pre-trained network and the fine-tuned model using the bilingual network. Each element in attention parameters is pruned with probability p, where p ranges from .0 to .45. We evaluate the classification accuracy for the following GLUE tasks: CoLA, QQP, SST2, MRPC, QNLI, and RTE.
We expect the accuracy of both models to decay as we prune a more considerable number of parameters. The monolingual model shows a faster decay in almost all tasks. For some tasks such as SST2, QQP, and MRPC, we observe that the bilingual model starts with lower accuracy, and its performance exceeds the monolingual model as we prune ≈5% to ≈25% of the parameters. A detailed set of results in Table 2 shows the models’ average prediction accuracy on the GLUE benchmark.
Random Noise. We also experiment with adding Gaussian noise to the weights. We vary the noise standard deviation from 0.0 to 0.09. We evaluate the classification accuracy for the same tasks. When no noise is added to the model parameters, the monolingual model performs slightly better for tasks like QQP and SST2. As we increase the noise, the accuracies of both models drop at almost identical rates. However, both graphs illustrate a cross-over point after which the bilingual model outperforms the monolingual one. The bilingual model achieves significantly higher accuracy on the MRPC task when the standard deviation is greater than ≈0.03. For CoLA and RTE, the monolingual model maintains higher performance regardless of the noise level. A detailed set of results in the Appendix shows the models’ average prediction accuracy on the GLUE benchmark.
4 Related Work
Cognitive Reserve and Bilingualism. Our work is inspired by Cognitive Science and evidence of Cognitive Reserve in bilinguals. One implication of our theory is that multitasking leads to smaller weights on average. This could be related to studies performed in healthy older adults that indicate that despite overall less gray matter volume and poorer white matter integrity (i.e., poorer structural brain connectivity), older healthy bilinguals perform equally well or outperform monolinguals in several cognitive tasks [1, 2].
We would like to emphasize that our research is solely on artificial networks, which differ greatly from biological neurons. No definite extrapolations should be made to Cognitive Neuroscience without further work. Nonetheless, we show that there is a simple mathematical abstraction that seems to align with the significantly more complex phenomena observed in bilingual cognitive reserve.
Multitask Learning. The most closely related work is by Mao et al. [29] which shows that multitask learning increases adversarial robustness. The intuition behind their proof is that, with task diversity, the gradient of the loss with respect to the wrong label is small as orthogonal tasks make gradients
that cancel out. Wu et al. [30] establishes a connection between robustness to weight perturbations and adversarial attacks. Our work is related but different since it directly establishes a connection between structural robustness and multitasking and shows a cross-over in performance across various domains and tasks. Our theoretical analysis is also completely different compared to prior works. More information on multitask learning can be found in Mao et al. [29] and Ghamizi et al. [31].
Many studies on network compression and the Lottery Ticket Hypothesis are related to our Magnitude Pruning experiments. LeCun et al. [32], Han et al. [7] find that selectively pruned networks can be trained from randomly initialized weights to match the performance of the original network. Frankle and Carbin [33] introduces the hypothesis that randomly initialized neural networks contain a very sparse sub-network that, if initialized correctly, can achieve the accuracy of the original model. Chen et al. [34] studies this in continual learning and examines various pruning methods.
5 Conclusions
We demonstrated a connection between multitask learning and robustness to structural failures for artificial neural networks. For linear representation learning, we obtained a characterization of robustness through the spectrum of the task matrix. We showed that robustness comes from diverse tasks, which imply a bounded spectral norm for $C$. One limitation of our theoretical work is that we did not analyze learning algorithms but directly used the SVD solution. It would be interesting to see if gradient descent introduces further regularization or other effects, especially in the non-linear case.
Experimentally, we observed increased robustness for both linguistic and non-linguistic tasks. More complex settings like multi-lingual models, cross-language transfer and their interactions remain to be explored. Finally, it remains open if bilingualism and cognitive reserve in humans can indeed be connected to our framework. It would be fascinating if neuroimaging techniques can measure any form of anatomical or functional regularization that bilingualism could be creating in humans.
6 Acknowledgments
This research has been supported by NSF Grants CCF 1763702, AF 1901292, CNS 2148141, Tripods CCF 1934932, IFML CCF 2019844, the Texas Advanced Computing Center (TACC) and research gifts by Western Digital, WNCG IAP, UT Austin Machine Learning Lab (MLL), Cisco and the Archie Straiton Endowed Faculty Fellowship. | 1. What is the focus and contribution of the paper regarding multitask representation learning and artificial neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical basis and experimental studies?
3. What are the weaknesses of the paper, especially regarding its inspiration from cognitive science research and lack of discussion on certain neuroscience results?
4. Do you have any concerns regarding the addition of noise to the weights of language models or the association between linear mapping and mono or bilingual brain datasets?
5. What are the differences in the brain's language network for monolingual vs. bilingual people, and what linguistic properties are missing or active in these groups, both in humans and neural networks?
6. Are there any minor comments or suggestions you would like to provide, such as typos, missing citations, or unclear justifications? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper is inspired by cognitive research studies where bilingualism increases brain robustness and examines whether artificial neural networks are more robust when trained on multiple languages or tasks. The theoretical justification was provided by analyzing linear multitask representation learning across three types of corruption: additive Gaussian noise to the weights, random weight pruning, and magnitude-based weight pruning. The experimental studies on both vision tasks using CNNs, and language tasks using monolingual and bilingual GPT-2 models, establish that multitasking leads to higher robustness for diverse task vectors.
Strengths And Weaknesses
Strengths:
Interpretation of both monolingual and bilingual language models and how these models connect to structural robustness is interesting.
The paper formed a theoretical basis based on earlier cognitive bilingual studies and concluded that robustness comes from diverse tasks.
Another strength of the paper is its vast number of experiments.
Weaknesses:
The authors, inspired by cognitive science research, indicate that bilingualism increases brain robustness by reducing the rate of cognitive decline due to aging.
However, the current work lacks a discussion of some known results from neuroscience.
In particular, I would have liked to see a discussion on how the brain learns language structure in individuals who speak a single language vs. bilingual individuals, and how this compares to neural networks.
It is unclear which linguistic properties are lost when noise is added to neural network models.
Can the authors associate the linear mapping between these single or multi-task network models with mono or bilingual brain datasets?
Why do authors add noise to the weights of language models? Can we infer anything from the human brain? A clear justification is needed.
Questions
What is the difference in the brain's language network for monolingual vs. bilingual people?
What are linguistic properties missing in mono vs. bilingual people with aging or neural degeneration?
What are linguistic properties active in mono vs. bilingual people despite aging or neural degeneration?
Similarly, all the above questions need to be addressed for Neural networks as well.
Why do authors add noise to the weights of language models? Can we infer anything from the human brain? A clear justification is needed
Minor Comments: Typos:
Conclusion: For linear representation learning we obtained a characterization (, is missing before we)
No citations for GPT-2 model when first time was introduced in the introduction.
Missing citations for datasets and many acronyms
Limitations
The authors presented the problem clearly; however, the scope and applications of the problem are missing.
Authors could check the weaknesses section and add more limitations if they have not been addressed in this paper. |
NIPS | Title
Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve
Abstract
We find a surprising connection between multitask learning and robustness to neuron failures. Our experiments show that bilingual language models retain higher performance under various neuron perturbations, such as random deletions, magnitude pruning and weight noise compared to equivalent monolingual ones. We provide a theoretical justification of this robustness by mathematically analyzing linear representation learning and showing that multitasking creates more robust representations. Our analysis connects robustness to spectral properties of the learned representation and proves that multitasking leads to higher robustness for diverse task vectors. We open-source our code and models in the following URL: https://github.com/giannisdaras/multilingual robustness.
1 Introduction
Converging evidence from cognitive science research indicates that bilingualism increases brain robustness by reducing the rate of cognitive decline due to aging [1, 2] and delaying the onset of symptoms of dementia [3, 4]. It appears that individuals who speak more than one language on a regular basis are able to maintain typical cognitive functioning despite neural degeneration. This mismatch between cognitive functioning and brain pathology is called Cognitive Reserve [5], and its underlying mechanisms are poorly understood and are an active topic of investigation.
Inspired by this research, we study whether artificial neural networks are more robust when trained on multiple languages or multiple tasks. Our experiments demonstrate that training on multiple tasks indeed increases structural robustness. We train monolingual and bilingual GPT-2 models with the same architecture and dataset sizes. Initially, monolingual GPT-2 [6] models are slightly outperforming the bilingual ones, but when we introduce structural noise (by randomly deleting neurons or adding noise to the weights) bilingual models degrade more gracefully and eventually outperform the monolingual models in the high-noise regime. For some amount of noise, bilingual models start outperforming the monolingual ones demonstrating a cross-over in performance due to their increased robustness. We observe this phenomenon for numerous models across three different types of corruption: additive Gaussian noise to the weights, random weight pruning and magnitude-based weight pruning [7].
Our Contributions: We provide a theoretical justification of this phenomenon by mathematically analyzing linear multitask representation learning [8, 9]. Our analysis shows that introducing more
diverse tasks creates ℓ2 regularization in the linear task heads. Further, we formally connect the Euclidean norm of the learned representations to structural robustness under errors in the network weights. Our main theorem establishes that multitasking leads to higher robustness to additive noise for linear representations when the task vectors are selected as random and independent Gaussian vectors. Our results also establish that when the tasks are significantly overlapping, multitasking does not lead to higher robustness and hence task diversity is necessary.
We experimentally observe that multitasking increases structural robustness for numerous networks and multiple problems including MNIST, CIFAR10, Newsgroup20, GPT models and finetuned GPT models on GLUE tasks. We train networks under exactly comparable dataset and architecture conditions and show that models become more robust to structural failures as they are trained with more tasks. We experiment with three different types of structural failures and show robustness increases for all of them. We also experimentally observe that the addition of diverse tasks seems to regularize the model weights, as we predict in our theoretical analysis.
2 Theoretical Analysis
Building intuition. We start with a small numerical example to build intuition. Given a feature vector $x \in \mathbb{R}^d$ we compute a $k$-dimensional linear representation $Wx$ using a matrix $W \in \mathbb{R}^{k \times d}$. We choose $W$ such that we best approximate a set of ground-truth task vectors $\{c_1, c_2, ..., c_T\}$ that lie in $\mathbb{R}^d$. The learned approximation is $\hat{c}_i = W^T \gamma_i$. Essentially, we use linear combinations of the columns of $W^T$ to approximate the task vectors. For simplicity, we assume that the columns of $W^T$ are unit norm. We study the case where $k < T$, otherwise there are infinite solutions. Assume we work in $d = 3$ dimensions with $T = 3$ total tasks, $c_1 = [1, 0, 0]^T$, $c_2 = [0, 1, 0]^T$, $c_3 = [0, 0, 1]^T$. Set our learned representation dimension to be $k = 1$. When $T = 2$, using only the first two tasks $c_1, c_2$, an optimal solution is $W = \frac{1}{\sqrt{2}}[1, 1, 0]$. The corresponding linear head is now the scalar $\gamma_1 = \frac{1}{\sqrt{2}} = \gamma_2$ and the approximate vectors are $\hat{c}_1 = W^T \gamma_1 = [0.5, 0.5, 0]^T = \hat{c}_2$. Therefore the best one-dimensional subspace to jointly approximate $c_1, c_2$ is the span of $W = \frac{1}{\sqrt{2}}[1, 1, 0]$.
1. What is the focus and contribution of the paper regarding multi-task training in neural network optimization?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and experimental results?
3. Do you have any concerns or questions about the gap between the theory and experiments, or the lack of a baseline comparison using L2 regularization?
4. How do the results of the paper connect to the concept of cognitive reserve, and what future research directions could help explain this further?
5. Are there any minor issues or suggestions you have for improving the clarity and presentation of the paper's content, such as altering Figure 4 or Figure 5, or providing more context for certain tasks? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper argues that multi-task training functions as a regularizer in neural network optimization, yielding models which are more robust to different types of structural failures (perturbations or dropouts of weights). They demonstrate this formally by analyzing a representation learning task in which a model learns to predict multiple downstream task representations, and show a connection to L2-regularization under assumptions of the structural failure / noise process and orthogonality of task target representations. They next test the idea empirically in a broad set of experiments in multiple modalities (vision and language), and show that multi-task learning leads to slower performance decay (relative to single/fewer tasks) as the degree of various weight perturbations increases.
Strengths And Weaknesses
I think the paper presents an interesting idea, and the theory and experiments do a good job at backing up the claims as far as I can see.
The paper combines a clear theoretical presentation with several stages of theoretical experiments. But the theory addresses the properties of an ideal (SVD) solution rather than investigating actual optimization methods. This gap would be less problematic if the authors included L2 regularization as a baseline / comparison case in the experimental section (see Question 1 below).
The intuitive presentation is quite clear and worth the space in the paper, I think.
The link to bilingualism is interesting but I think not entirely necessary. While the path leading the authors to their discovery is interesting, there doesn't seem to be a necessary conceptual link in either direction: the theory or experiments aren't necessarily about bilingualism (certainly not -- there are experiments on vision tasks!), and the authors themselves say that these results shouldn't be taken to imply anything strong for the cognitive neuroscience of language/bilingualism.
Questions
The theory section of the paper reaches for an interesting connection between multitask training and L2 regularization, and this is later channeled in the weight norm table (Table 1) of the experimental section. But why not make this connection stronger in the experimental section, by using L2 regularization as a baseline / comparison case? I would expect to find that strong L2 leads to better performance, but that L2 models would still decay more quickly than multi-task trained models. The remaining gap in "cognitive reserve" between L2 and multi-task models would be explanatory work for future papers :)
The tasks in figure 3 don't seem to show a crossover point like the later language tasks (in which models trained on more tasks perform poorly in low-noise regimes, but perform better than fewer-task models in high-noise regimes). Why not?
Minor comments
Figure 4 (right) makes it hard to see the intended trend (change in performance as a function of epochs). Can we see a slice of the current x-axis with epoch on the x-axis? Or plot pruning probability as color and epoch as x-axis?
Figure 5 caption doesn't match text description. Is Task 2 bilingual training (according to caption) or training on English LAMBADA (according to main text description)?
Not sure, but it might be better to plot perplexity on a log scale in figure 1, 4 so that we can better see the differences between the two models.
Limitations
Yes. |
NIPS | Title
Bayesian Dyadic Trees and Histograms for Regression
Abstract
Many machine learning tools for regression are based on recursive partitioning of the covariate space into smaller regions, where the regression function can be estimated locally. Among these, regression trees and their ensembles have demonstrated impressive empirical performance. In this work, we shed light on the machinery behind Bayesian variants of these methods. In particular, we study Bayesian regression histograms, such as Bayesian dyadic trees, in the simple regression case with just one predictor. We focus on the reconstruction of regression surfaces that are piecewise constant, where the number of jumps is unknown. We show that with suitably designed priors, posterior distributions concentrate around the true step regression function at a near-minimax rate. These results do not require the knowledge of the true number of steps, nor the width of the true partitioning cells. Thus, Bayesian dyadic regression trees are fully adaptive and can recover the true piecewise regression function nearly as well as if we knew the exact number and location of jumps. Our results constitute the first step towards understanding why Bayesian trees and their ensembles have worked so well in practice. As an aside, we discuss prior distributions on balanced interval partitions and how they relate to an old problem in geometric probability. Namely, we relate the probability of covering the circumference of a circle with random arcs whose endpoints are confined to a grid, a new variant of the original problem.
1 Introduction
Histogram regression methods, such as regression trees [1] and their ensembles [2], have an impressive record of empirical success in many areas of application [3, 4, 5, 6, 7]. Tree-based machine learning (ML) methods build a piecewise constant reconstruction of the regression surface based on ideas of recursive partitioning. Perhaps the most popular partitioning schemes are the ones based on parallel-axis splits. One recent example is the Mondrian process [8], which was introduced to the ML community as a prior over tree data structures with interesting self-consistency properties. Many efficient algorithms exist that can be deployed to fit regression histograms underpinned by some partitioning scheme. Among these, Bayesian variants, such as Bayesian CART [9, 10] and BART [11], have appealed to umpteen practitioners. There are several reasons why. Bayesian tree-based regression tools (a) can adapt to regression surfaces without any need for pruning, (b) are reluctant to overfit, (c) provide an avenue for uncertainty statements via posterior distributions. While practical success stories abound [3, 4, 5, 6, 7], the theoretical understanding of Bayesian regression tree methods has been lacking. In this work, we study the quality of posterior distributions with regard to the three properties mentioned above. We provide first theoretical results that contribute to the understanding of Bayesian Gaussian regression methods based on recursive partitioning.
Our performance metric will be the speed of posterior concentration/contraction around the true regression function. This is ultimately a frequentist assessment, describing the typical behavior of the posterior under the true generative model [12]. Posterior concentration rate results are now slowly
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
entering the machine learning community as a tool for obtaining more insights into Bayesian methods [13, 14, 15, 16, 17]. Such results quantify not only the typical distance between a point estimator (posterior mean/median) and the truth, but also the typical spread of the posterior around the truth. Ideally, most of the posterior mass should be concentrated in a ball centered around the true value with a radius proportional to the minimax rate [12, 18]. Being inherently a performance measure of both location and spread, optimal posterior concentration provides a necessary certificate for further uncertainty quantification [19, 20, 21]. Beyond uncertainty assessment, theoretical guarantees that describe the average posterior shrinkage behavior have also been a valuable instrument for assessing the suitability of priors. As such, these results can often provide useful guidelines for the choice of tuning parameters, e.g. the latent Dirichlet allocation model [14].
Despite the rapid growth of this frequentist-Bayesian theory field, posterior concentration results for Bayesian regression histograms/trees/forests have, so far, been unavailable. Here, we adopt this theoretical framework to get new insights into why these methods work so well.
Related Work
Bayesian density estimation with step functions is a relatively well-studied problem [22, 23, 24]. The literature on Bayesian histogram regression is a bit less crowded. Perhaps the closest to our conceptual framework is the work by Coram and Lalley [25], who studied Bayesian non-parametric binary regression with uniform mixture priors on step functions. The authors focused on L1 consistency. Here, we focus on posterior concentration rather than consistency. We are not aware of any other related theoretical study of Bayesian histogram methods for Gaussian regression.
Our Contributions
In this work we focus on a canonical regression setting with merely one predictor. We study hierarchical priors on step functions and provide conditions under which the posteriors concentrate optimally around the true regression function. We consider the case when the true regression function itself is a step function, i.e. a tree or a tree ensemble, where the number and location of jumps is unknown.
We start with a very simple space of approximating step functions, supported on equally sized intervals where the number of splits is equipped with a prior. These partitions include dyadic regression trees. We show that for a suitable complexity prior, all relevant information about the true regression function (jump sizes and the number of jumps) is learned from the data automatically. During the course of the proof, we develop a notion of the complexity of a piecewise constant function relative to its approximating class.
Next, we take a larger approximating space consisting of functions supported on balanced partitions that do not necessarily have to be of equal size. These correspond to more general trees with splits at observed values. With a uniform prior over all balanced partitions, we are able to achieve a nearly ideal performance (as if we knew the number and the location of jumps). As an aside, we describe the distribution of interval lengths obtained when the splits are sampled uniformly from a grid. We relate this distribution to the probability of covering the circumference of a circle with random arcs, a problem in geometric probability that dates back to [26, 27]. Our version of this problem assumes that the splits are chosen from a discrete grid rather than from a unit interval.
Notation
With ∝ and ≲ we will denote an equality and an inequality, up to a constant. The ε-covering number of a set Ω for a semimetric d, denoted by N(ε, Ω, d), is the minimal number of d-balls of radius ε needed to cover the set Ω. We denote by φ(·) the standard normal density and by P_f^n = ⊗_{i=1}^n P_{f,i} the n-fold product measure of the n independent observations under (1) with a regression function f(·). By P_n^x = (1/n) \sum_{i=1}^n δ_{x_i} we denote the empirical distribution of the observed covariates, by ‖·‖_n the norm on L_2(P_n^x) and by ‖·‖_2 the standard Euclidean norm.
2 Bayesian Histogram Regression
We consider a classical nonparametric regression model, where response variables Y^(n) = (Y_1, . . . , Y_n)′ are related to input variables x^(n) = (x_1, . . . , x_n)′ through the function f_0 as follows

Y_i = f_0(x_i) + ε_i,   ε_i ∼ N(0, 1),   i = 1, . . . , n. (1)

We assume that the covariate values x_i are one-dimensional, fixed and have been rescaled so that x_i ∈ [0, 1]. Partitioning-based regression methods are often invariant to monotone transformations of observations. In particular, when f_0 is a step function, standardizing the distance between the observations, and thereby the split points, has no effect on the nature of the estimation problem. Without loss of generality, we will thereby assume that the observations are aligned on an equispaced grid.

Assumption 1. (Equispaced Grid) We assume that the scaled predictor values satisfy x_i = i/n for each i = 1, . . . , n.
This assumption implies that partitions that are balanced in terms of the Lebesgue measure will be balanced also in terms of the number of observations. A similar assumption was imposed by Donoho [28] in his study of Dyadic CART.
The underlying regression function f_0 : [0, 1] → R is assumed to be a step function, i.e.

f_0(x) = \sum_{k=1}^{K_0} β_k^0 I_{Ω_k^0}(x),

where {Ω_k^0}_{k=1}^{K_0} is a partition of [0, 1] into K_0 non-overlapping intervals. We assume that {Ω_k^0}_{k=1}^{K_0} is minimal, meaning that f_0 cannot be represented with a smaller partition (with less than K_0 pieces). Each partitioning cell Ω_k^0 is associated with a step size β_k^0, determining the level of the function f_0 on Ω_k^0. The entire vector of K_0 step sizes will be denoted by β^0 = (β_1^0, . . . , β_{K_0}^0)′.
One might like to think of f0 as a regression tree with K0 bottom leaves. Indeed, every step function can be associated with an equivalence class of trees that live on the same partition but differ in their tree topology. The number of bottom leaves K0 will be treated as unknown throughout this paper. Our goal will be designing a suitable class of priors on step functions so that the posterior concentrates tightly around f0. Our analysis with a single predictor has served as a precursor to a full-blown analysis for high-dimensional regression trees [29].
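For concreteness, here is a small simulation sketch (ours, not from the paper) that generates data from model (1) on the equispaced grid of Assumption 1, with a true step function f_0 having K_0 = 4 pieces; the jump locations 0.1, 0.4, 0.7 and the step heights are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
x = np.arange(1, n + 1) / n                       # equispaced grid x_i = i/n (Assumption 1)

# Illustrative true step function: K_0 = 4 pieces with interior jumps at 0.1, 0.4, 0.7.
breakpoints = np.array([0.1, 0.4, 0.7])
beta0 = np.array([-1.0, 0.5, 2.0, -0.5])          # step heights beta^0

def f0(x):
    """Evaluate the piecewise-constant regression function on half-open cells (a, b]."""
    return beta0[np.searchsorted(breakpoints, x, side="left")]

y = f0(x) + rng.standard_normal(n)                # model (1): Y_i = f_0(x_i) + eps_i
print(y[:5])
```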
We consider an approximating space of all step functions (with K = 1, 2, . . . bottom leaves)

F = ∪_{K=1}^∞ F_K, (2)

which consists of smaller spaces (or shells) of all K-step functions

F_K = { f_β : [0, 1] → R;  f_β(x) = \sum_{k=1}^{K} β_k I_{Ω_k}(x) },

each indexed by a partition {Ω_k}_{k=1}^K and a vector of K step heights β. The fundamental building block of our theoretical analysis will be the prior on F. This prior distribution has three main ingredients, described in detail below, (a) a prior on the number of steps K, (b) a prior on the partitions {Ω_k}_{k=1}^K of size K, and (c) a prior on step sizes β = (β_1, . . . , β_K)′.
2.1 Prior πK(·) on the Number of Steps K
To avoid overfitting, we assign an exponentially decaying prior distribution that penalizes partitions with too many jumps. Definition 2.1. (Prior on K) The prior on the number of partitioning cells K satisfies
πK(k) ≡ Π(K = k) ∝ exp(−cK k log k) for k = 1, 2, . . . . (3)
This prior is no stranger to non-parametric problems. It was deployed for stepwise reconstructions of densities [24, 23] and regression surfaces [25]. When cK is large, this prior is concentrated on models with small complexity where overfitting should not occur. Decreasing cK leads to the smearing of the prior mass over partitions with more jumps. This is illustrated in Figure 1, which depicts the prior for various choices of cK . We provide recommendations for the choice of cK in Section 3.1.
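To make the shape of this complexity prior concrete, the following short sketch (ours, not from the paper; the truncation point K_max and the normalization over that finite range are illustrative choices) evaluates the normalized prior π_K(k) ∝ exp(−c_K k log k) for a few values of c_K, mirroring the comparison in Figure 1.

```python
import numpy as np

def complexity_prior(c_K, K_max=20):
    """Normalized prior pi_K(k) proportional to exp(-c_K * k * log(k)), truncated at K_max."""
    k = np.arange(1, K_max + 1)
    log_weights = -c_K * k * np.log(k)                   # log of the unnormalized prior mass
    weights = np.exp(log_weights - log_weights.max())    # stabilize before normalizing
    return k, weights / weights.sum()

for c_K in [0.1, 0.5, 1.0]:
    k, pk = complexity_prior(c_K)
    print(f"c_K = {c_K}: pi_K(1..5) ~ {np.round(pk[:5], 3)}")
```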
2.2 Prior πΩ(· |K) on Interval Partitions {Ωk}Kk=1
After selecting the number of steps K from π_K(k), we assign a prior over interval partitions π_Ω(· | K). We will consider two important special cases.
[Figure 1: Left: the prior π_K(k) on the number of steps for several choices of c_K. Right: a step function f_0(x) with K_0 = 4 unequal pieces and its approximations by equispaced blocks.]
2.2.1 Equivalent Blocks
Perhaps the simplest partition is based on statistically equivalent blocks [30], where all the cells are required to have the same number of points. This is also known as the K-spacing rule that partitions the unit interval using order statistics of the observations. Definition 2.2. (Equivalent Blocks) Let x_{(i)} denote the ith order statistic of x = (x_1, . . . , x_n)′, where x_{(n)} ≡ 1 and n = Kc for some c ∈ N\{0}. Denote by x_{(0)} ≡ 0. A partition {Ω_k}_{k=1}^K consists of K equivalent blocks, when Ω_k = (x_{(j_k)}, x_{(j_{k+1})}], where j_k = (k − 1)c.
A variant of this definition can be obtained in terms of interval lengths rather than numbers of observations. Definition 2.3. (Equispaced Blocks) A partition {Ω_k}_{k=1}^K consists of K equispaced blocks Ω_k, when Ω_k = ((k−1)/K, k/K] for k = 1, . . . , K.
When K = 2^s for some s ∈ N\{0}, the equispaced partition corresponds to a full complete binary tree with splits at dyadic rationals. If the observations x_i lie on a regular grid (Assumption 1), then Definitions 2.2 and 2.3 are essentially equivalent. We will thereby focus on equivalent blocks (EB) and denote such a partition (for a given K > 0) with Ω_K^EB. Because there is only one such partition for each K, the prior π_Ω(·|K) has a single point mass at Ω_K^EB. With Ω^EB = ∪_{K=1}^∞ Ω_K^EB we denote the set of all EB partitions for K = 1, 2, . . . . We will use these partitioning schemes as a jump-off point.
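As a concrete illustration (our own sketch, not code from the paper), the snippet below constructs the K equivalent blocks of Definition 2.2 from the order statistics of x, and the equispaced blocks of Definition 2.3; on the regular grid of Assumption 1 with n = Kc the two coincide.

```python
import numpy as np

def equivalent_blocks(x, K):
    """K statistically equivalent blocks: each cell holds n/K consecutive order statistics."""
    x_sorted = np.sort(x)
    n = len(x_sorted)
    c = n // K                                           # assumes n = K * c
    edges = [0.0] + [float(x_sorted[j * c - 1]) for j in range(1, K)] + [1.0]
    return [(edges[k], edges[k + 1]) for k in range(K)]  # cells (left, right]

def equispaced_blocks(K):
    """K equispaced blocks ((k-1)/K, k/K]."""
    return [((k - 1) / K, k / K) for k in range(1, K + 1)]

n, K = 12, 4
x = np.arange(1, n + 1) / n                              # equispaced grid x_i = i/n
print(equivalent_blocks(x, K))
print(equispaced_blocks(K))                              # identical cells on this grid
```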
2.2.2 Balanced Intervals
Equivalent (equispaced) blocks are deterministic and, as such, do not provide much room for learning about the actual location of jumps in f0. Balanced intervals, introduced below, are a richer class of partitions that tolerate a bit more imbalance. First, we introduce the notion of cell counts µ(Ωk). For each interval Ωk, we write
µ(Ω_k) = \frac{1}{n} \sum_{i=1}^{n} I(x_i ∈ Ω_k), (4)
the proportion of observations falling inside Ωk. Note that for equivalent blocks, we can write µ(Ω1) = · · · = µ(ΩK) = c/n = 1/K. Definition 2.4. (Balanced Intervals) A partition {Ωk}Kk=1 is balanced if
C_min^2 / K ≤ µ(Ω_k) ≤ C_max^2 / K for all k = 1, . . . , K (5)
for some universal constants Cmin ≤ 1 ≤ Cmax not depending on K.
The following variant of the balancing condition uses interval widths rather than cell counts: C̃_min^2/K ≤ |Ω_k| ≤ C̃_max^2/K. Again, under Assumption 1, these two definitions are equivalent. In the sequel, we will denote by Ω_K^BI the set of all balanced partitions consisting of K intervals and by Ω^BI = ∪_{K=1}^∞ Ω_K^BI the set of all balanced intervals of sizes K = 1, 2, . . . . It is worth pointing out that the balance assumption on the interval partitions can be relaxed, at the expense of a log factor in the concentration rate [29].
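For intuition, this small sketch (ours; the constants C_min = 0.7 and C_max = 1.3 are arbitrary illustrative choices satisfying C_min ≤ 1 ≤ C_max) computes the cell proportions µ(Ω_k) of (4) for a candidate partition on the grid of Assumption 1 and checks the balancing condition (5).

```python
import numpy as np

def cell_proportions(x, edges):
    """mu(Omega_k) of (4): fraction of observations in each cell (edges[k], edges[k+1]]."""
    x = np.asarray(x)
    return np.array([np.mean((x > lo) & (x <= hi)) for lo, hi in zip(edges[:-1], edges[1:])])

def is_balanced(x, edges, C_min=0.7, C_max=1.3):
    """Check condition (5): C_min^2/K <= mu(Omega_k) <= C_max^2/K for every cell."""
    mu = cell_proportions(x, edges)
    K = len(mu)
    return bool(np.all((C_min**2 / K <= mu) & (mu <= C_max**2 / K)))

n = 20
x = np.arange(1, n + 1) / n
print(is_balanced(x, [0.0, 0.3, 0.55, 1.0]))  # True: mildly unequal cells are still balanced
print(is_balanced(x, [0.0, 0.1, 0.2, 1.0]))   # False: the last cell is far too large
```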
With balanced partitions, the Kth shell F_K of the approximating space F in (2) consists of all step functions that are supported on partitions Ω_K^BI and have K − 1 points of discontinuity u_k ∈ I_n ≡ {x_i : i = 1, . . . , n − 1} for k = 1, . . . , K − 1. For equispaced blocks in Definition 2.3, we assumed that the points of subdivision were deterministic, i.e. u_k = k/K. For balanced partitions, we assume that the u_k are random and chosen amongst the observed values x_i. The order statistics of the vector of splits u = (u_1, . . . , u_{K−1})′ uniquely define a segmentation of [0, 1] into K intervals Ω_k = (u_{(k−1)}, u_{(k)}], where u_{(k)} designates the kth smallest value in u and u_{(0)} ≡ 0, u_{(K)} = x_{(n)} ≡ 1.
Our prior over balanced intervals π_Ω(· | K) will be defined implicitly through a uniform prior over the split vectors u. Namely, the prior over balanced partitions Ω_K^BI satisfies

π_Ω({Ω_k}_{k=1}^K | K) = \frac{1}{card(Ω_K^BI)} I({Ω_k}_{k=1}^K ∈ Ω_K^BI). (6)
In the following Lemma, we obtain upper bounds on card(Ω_K^BI) and discuss how they relate to an old problem in geometric probability. In the sequel, we denote with |Ω_k| the lengths of the segments defined through the split points u.

Lemma 2.1. Assume that u = (u_1, . . . , u_{K−1})′ is a vector of independent random variables obtained by uniform sampling (without replacement) from I_n. Then under Assumption 1, we have for 1/n < C < 1/K

\Pi\Big( \min_{1 \le k \le K} |Ω_k| \ge C \Big) = \frac{\binom{\lfloor n(1-KC) \rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}} (7)

and

\Pi\Big( \max_{1 \le k \le K} |Ω_k| \le C \Big) = 1 - \sum_{k=1}^{\tilde{n}} (-1)^k \binom{n-1}{k} \frac{\binom{\lfloor n(1-kC) \rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}}, (8)

where \tilde{n} = \min\{ n-1, \lfloor 1/C \rfloor \}.
Proof. The denominator of (7) follows from the fact that there are n − 1 possible splits for the K − 1 points of discontinuity u_k. The numerator is obtained after adapting the proof of Lemma 2 of Flatto and Konheim [31]. Without loss of generality, we will assume that C = a/n for some a = 1, . . . , ⌊n/K⌋ so that n(1 − KC) is an integer. Because the jumps u_k can only occur on the grid I_n, we have |Ω_k| = j/n for some j = 1, . . . , n − 1. It follows from Lemma 1 of Flatto and Konheim [31] that the set E_K = {|Ω_k| : \sum_{k=1}^K |Ω_k| = 1 and |Ω_k| ≥ C for k = 1, . . . , K} lies in the interior of a convex hull of K points v_r = (1 − KC) e_r + C \sum_{k=1}^K e_k for r = 1, . . . , K, where e_r = (e_{r1}, . . . , e_{rK})′ are unit base vectors, i.e. e_{rj} = I(r = j). Two examples of the set E_K (for K = 2 and K = 3) are depicted in Figure 2. In both figures, n = 10 (i.e. 9 candidate split points) and a = 2. With K = 2 (Figure 2(a)), there are only 7 = \binom{n(1−KC)+K−1}{K−1} pairs of interval lengths (|Ω_1|, |Ω_2|)′ that satisfy the minimal cell condition. These points lie on a grid between the two vertices v_1 = (1 − C, C) and v_2 = (C, 1 − C). With K = 3, the convex hull of points v_1 = (1 − 2C, C, C)′, v_2 = (C, 1 − 2C, C)′ and v_3 = (C, C, 1 − 2C)′ corresponds to a diagonal dissection of a cube of a side length (1 − 3C) (Figure 2(b), again with a = 2 and n = 10). The number of lattice points in the interior (and on the boundary) of such a tetrahedron corresponds to an arithmetic sum \frac{1}{2}(n − 3a + 2)(n − 3a + 1) = \binom{n−3a+2}{2}. So far, we showed (7) for K = 2 and K = 3. To complete the induction argument, suppose that the formula holds for some arbitrary K > 0. Then the size of the lattice inside (and on the boundary) of a (K + 1)-tetrahedron of a side length [1 − (K + 1)C] can be obtained by summing lattice sizes inside K-tetrahedrons of increasing side lengths 0, √2/n, 2√2/n, . . . , [1 − (K + 1)C]√2/n, i.e.

\sum_{j=K-1}^{n[1-(K+1)C]+K-1} \binom{j}{K-1} = \binom{n[1-(K+1)C]+K}{K},

where we used the fact \sum_{j=K}^{N} \binom{j}{K} = \binom{N+1}{K+1}. The second statement (8) is obtained by writing the event as a complement of the union of events and applying the method of inclusion-exclusion.
Remark 2.1. Flatto and Konheim [31] showed that the probability of covering a circle with random arcs of length C is equal to the probability that all segments of the unit interval, obtained with iid random uniform splits, are smaller than C. Similarly, the probability (8) could be related to the probability of covering the circle with random arcs whose endpoints are chosen from a grid of n− 1 equidistant points on the circumference.
There are \binom{n−1}{K−1} partitions of size K, of which \binom{\lfloor n(1−\tilde{C}_{min}^2) \rfloor + K−1}{K−1} satisfy the minimal cell width balancing condition (where \tilde{C}_{min}^2 > K/n). This number gives an upper bound on the combinatorial complexity of balanced partitions card(Ω_K^BI).
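As a quick sanity check (our own sketch, not part of the paper), one can compare the closed-form probability (7), in the case C = a/n with a an integer, against a Monte Carlo estimate obtained by sampling the K − 1 split points uniformly without replacement from the grid I_n.

```python
import numpy as np
from math import comb

def prob_min_gap_at_least(n, K, a):
    """Closed form (7) with C = a/n: P(min_k |Omega_k| >= a/n)."""
    return comb(n - K * a + K - 1, K - 1) / comb(n - 1, K - 1)

def prob_min_gap_mc(n, K, a, reps=50_000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        splits = np.sort(rng.choice(np.arange(1, n), size=K - 1, replace=False))
        gaps = np.diff(np.concatenate(([0], splits, [n])))  # cell lengths in grid units
        hits += gaps.min() >= a
    return hits / reps

n, K, a = 10, 2, 2
print(prob_min_gap_at_least(n, K, a), prob_min_gap_mc(n, K, a))  # both close to 7/9
```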
2.3 Prior π(β |K) on Step Heights β
To complete the prior on FK , we take independent normal priors on each of the coefficients. Namely
π(β | K) = \prod_{k=1}^{K} φ(β_k), (9)
where φ(·) is the standard normal density.
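Putting the three ingredients together, the sketch below (ours; the truncation K_max inside sample_K and the grid size n are illustrative choices) draws a step function from the hierarchical prior: K from (3), split points sampled uniformly on the grid, and step heights from (9). Note that the paper's prior (6) additionally restricts to balanced partitions, which this simplified sketch skips.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_K(c_K=1.0, K_max=20):
    """Draw K from pi_K(k) prop. to exp(-c_K k log k), truncated at K_max for illustration."""
    k = np.arange(1, K_max + 1)
    w = np.exp(-c_K * k * np.log(k))
    return int(rng.choice(k, p=w / w.sum()))

def sample_step_function(n=100, c_K=1.0):
    """Draw (edges, heights) defining a step function f(x) = sum_k beta_k I(x in Omega_k)."""
    K = sample_K(c_K)
    # Uniform splits on the grid I_n; the restriction to balanced partitions is omitted here.
    splits = np.sort(rng.choice(np.arange(1, n), size=K - 1, replace=False)) / n
    beta = rng.standard_normal(K)                    # step heights, prior (9)
    edges = np.concatenate(([0.0], splits, [1.0]))
    return edges, beta

def evaluate(edges, beta, x):
    """Evaluate the sampled step function at points x in (0, 1]."""
    idx = np.searchsorted(edges, x, side="left") - 1
    return beta[np.clip(idx, 0, len(beta) - 1)]

edges, beta = sample_step_function()
x = np.arange(1, 101) / 100
print(len(beta), evaluate(edges, beta, x)[:5])
```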
3 Main Results
A crucial ingredient of our proof will be understanding how well one can approximate f_0 with other step functions (supported on partitions Ω, which are either equivalent blocks Ω^EB or balanced partitions Ω^BI). We will describe the approximation error in terms of the overlap between the true partition {Ω_k^0}_{k=1}^{K_0} and the approximating partitions {Ω_k}_{k=1}^K ∈ Ω. More formally, we define the restricted cell count (according to Nobel [32]) as

m(V ; {Ω_k^0}_{k=1}^{K_0}) = |{Ω_k^0 : Ω_k^0 ∩ V ≠ ∅}|,

the number of cells in {Ω_k^0}_{k=1}^{K_0} that overlap with an interval V ⊂ [0, 1]. Next, we define the complexity of f_0 as the smallest size of a partition in Ω needed to completely cover f_0 without any overlap.
Definition 3.1. (Complexity of f_0 w.r.t. Ω) We define K(f_0, Ω) as the smallest K such that there exists a K-partition {Ω_k}_{k=1}^K in the class of partitions Ω for which

m(Ω_k ; {Ω_k^0}_{k=1}^{K_0}) = 1 for all k = 1, . . . , K.

The number K(f_0, Ω) will be referred to as the complexity of f_0 w.r.t. Ω.
The complexity number K(f_0, Ω) indicates the optimal number of steps needed to approximate f_0 with a step function (supported on partitions in Ω) without any error. It depends on the true number of jumps K_0 as well as the true interval lengths |Ω_k^0|. If the minimal partition {Ω_k^0}_{k=1}^{K_0} resided in the approximating class, i.e. {Ω_k^0}_{k=1}^{K_0} ∈ Ω, then we would obtain K(f_0, Ω) = K_0, the true number of steps. On the other hand, when {Ω_k^0}_{k=1}^{K_0} ∉ Ω, the complexity number K(f_0, Ω) can be much larger. This is illustrated in Figure 1 (right), where the true partition {Ω_k^0}_{k=1}^{K_0} consists of K_0 = 4 unequal pieces and we approximate it with equispaced blocks with K = 2, 5, 10 steps. Because the intervals Ω_k^0 are not equal and the smallest one has a length 1/10, we need K(f_0, Ω^EB) = 10 equispaced blocks to perfectly approximate f_0. For our analysis, we do not need to assume that {Ω_k^0}_{k=1}^{K_0} ∈ Ω (i.e. f_0 does not need to be inside the approximating class) or that K(f_0, Ω) is finite. The complexity number can increase with n, where sharper performance is obtained when f_0 can be approximated error-free with some f ∈ Ω, where f has a small number of discontinuities relative to n. Another way to view K(f_0, Ω) is as the ideal partition size on which the posterior should concentrate. If this number were known, we could achieve a near-minimax posterior concentration rate n^{-1/2} \sqrt{K(f_0, Ω) \log[n / K(f_0, Ω)]} (Remark 3.3). The actual minimax rate for estimating a piecewise constant f_0 (consisting of K_0 > 2 pieces) is n^{-1/2} \sqrt{K_0 \log(n / K_0)} [33]. In our main results, we will target the nearly optimal rate expressed in terms of K(f_0, Ω).
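To make Definition 3.1 concrete for equispaced blocks, the following brute-force sketch (ours) finds the smallest K such that no true breakpoint of f_0 falls strictly inside an equispaced cell, so that every cell overlaps exactly one true piece. The jump locations 0.1, 0.4, 0.7 are an illustrative choice consistent with the description of Figure 1 (four unequal pieces, smallest of length 1/10), and the sketch returns 10 for them.

```python
from fractions import Fraction

def complexity_equispaced(breakpoints, K_max=1000):
    """Smallest K such that every true breakpoint lies on the grid {1/K, ..., (K-1)/K},
    i.e. no equispaced cell ((k-1)/K, k/K] straddles a jump of f_0."""
    bps = [Fraction(b).limit_denominator(10**6) for b in breakpoints]
    for K in range(len(breakpoints) + 1, K_max + 1):   # start at K_0 = number of pieces
        if all((b * K).denominator == 1 for b in bps):
            return K
    return None  # no equispaced partition up to K_max reproduces f_0 exactly

# Illustrative true partition: K_0 = 4 pieces with interior jumps at 0.1, 0.4, 0.7.
print(complexity_equispaced([0.1, 0.4, 0.7]))   # -> 10
```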
3.1 Posterior Concentration for Equivalent Blocks
Our first result shows that the minimax rate is nearly achieved, without any assumptions on the number of pieces of f_0 or the sizes of the pieces.

Theorem 3.1. (Equivalent Blocks) Let f_0 : [0, 1] → R be a step function with K_0 steps, where K_0 is unknown. Denote by F the set of all step functions supported on equivalent blocks, equipped with priors π_K(·) and π(β | K) as in (3) and (9). Denote with K_{f_0} ≡ K(f_0, Ω^EB) and assume ‖β^0‖_∞^2 ≲ log n and K_{f_0} ≲ √n. Then, under Assumption 1, we have

Π( f ∈ F : ‖f − f_0‖_n ≥ M_n n^{-1/2} \sqrt{K_{f_0} \log(n / K_{f_0})} | Y^(n) ) → 0 (10)

in P_{f_0}^n-probability, for every M_n → ∞ as n → ∞.
Before we proceed with the proof, a few remarks ought to be made. First, it is worthwhile to emphasize that the statement in Theorem 3.1 is a frequentist one as it relates to an aggregated behavior of the posterior distributions obtained under the true generative model P_{f_0}^n.

Second, the theorem shows that the Bayesian procedure performs an automatic adaptation to K(f_0, Ω^EB). The posterior will concentrate on EB partitions that are fine enough to approximate f_0 well. Thus, we are able to recover the true function as well as if we knew K(f_0, Ω^EB).
Third, it is worth mentioning that, under Assumption 1, Theorem 3.1 holds for equivalent as well as equisized blocks. In this vein, it describes the speed of posterior concentration for dyadic regression trees. Indeed, as mentioned previously, with K = 2^s for some s ∈ N\{0}, the equisized partition corresponds to a full binary tree with splits at dyadic rationals.
Another interesting insight is that the Gaussian prior (9), while selected for mathematical convenience, turns out to be sufficient for optimal recovery. In other words, despite the relatively large amount of mass near zero, the Gaussian prior does not rule out optimal posterior concentration. Our standard normal prior is a simpler version of the Bayesian CART prior, which determines the variance from the data [9].
Let K_{f_0} ≡ K(f_0, Ω^EB) be as in Definition 3.1. Theorem 3.1 is proved by verifying the three conditions of Theorem 4 of [18], for ε_n = n^{-1/2} \sqrt{K_{f_0} \log(n / K_{f_0})} and F_n = \bigcup_{K=0}^{k_n} F_K, with k_n of the order K_{f_0} \log(n / K_{f_0}). The approximating subspace F_n ⊂ F should be rich enough to approximate f_0 well and it should receive most of the prior mass. The conditions for posterior contraction at the rate ε_n are:

(C1) \sup_{ε > ε_n} \log N\big( \tfrac{ε}{36}, \{f ∈ F_n : ‖f − f_0‖_n < ε\}, ‖·‖_n \big) ≤ n ε_n^2,

(C2) \frac{Π(F \setminus F_n)}{Π(f ∈ F : ‖f − f_0‖_n^2 ≤ ε_n^2)} = o(e^{-2 n ε_n^2}),

(C3) \frac{Π(f ∈ F_n : j ε_n < ‖f − f_0‖_n ≤ 2 j ε_n)}{Π(f ∈ F : ‖f − f_0‖_n^2 ≤ ε_n^2)} ≤ e^{j^2 n ε_n^2 / 4} for all sufficiently large j.
The entropy condition (C1) restricts attention to EB partitions with small K. As will be seen from the proof, the largest allowed partitions have at most (a constant multiple of) K_{f_0} \log(n / K_{f_0}) pieces.
Condition (C2) requires that the prior does not promote partitions with more than Kf0 log (n/Kf0) pieces. This property is guaranteed by the exponentially decaying prior πK(·), which penalizes large partitions.
The final condition, (C3), requires that the prior charges a ‖.‖n neighborhood of the true function. In our proof, we verify this condition by showing that the prior mass on step functions of the optimal size Kf0 is sufficiently large.
Proof. We verify the three conditions (C1), (C2) and (C3).

(C1) Let ε > ε_n and K ∈ N. For f_α, f_β ∈ F_K, we have K^{-1} ‖α − β‖_2^2 = ‖f_α − f_β‖_n^2 because µ(Ω_k) = 1/K for each k. We now argue as in the proof of Theorem 12 of [18] to show that N\big( \tfrac{ε}{36}, \{f ∈ F_K : ‖f − f_0‖_n < ε\}, ‖·‖_n \big) can be covered by the number of \sqrt{K} ε/36-balls required to cover a \sqrt{K} ε-ball in R^K. This number is bounded above by 108^K. Summing over K, we recognize a geometric series. Taking the logarithm of the result, we find that (C1) is satisfied if \log(108)(k_n + 1) ≤ n ε_n^2.
(C2) We bound the denominator by: Π(f ∈ F : ‖f − f_0‖_n^2 ≤ ε^2) ≥ π_K(K_{f_0}) Π( β ∈ R^{K(f_0)} : ‖β − β_0^{ext}‖_2^2 ≤ ε^2 K_{f_0} ), where β_0^{ext} ∈ R^{K_{f_0}} is an extended version of β^0 ∈ R^{K_0}, containing the coefficients for f_0 expressed as a step function on the partition {Ω_k^0}_{k=1}^{K_{f_0}}. This can be bounded from below by

\frac{π_K(K_{f_0})}{e^{‖β_0^{ext}‖_2^2 / 2}} Π( β ∈ R^{K(f_0)} : ‖β‖_2^2 ≤ ε^2 K_{f_0}/2 ) > \frac{π_K(K_{f_0})}{e^{‖β_0^{ext}‖_2^2 / 2}} \int_0^{ε^2 K_{f_0}/2} \frac{x^{K_{f_0}/2 - 1} e^{-x/2}}{2^{K_{f_0}/2} Γ(K_{f_0}/2)} dx.

We bound this from below by bounding the exponential at the upper integration limit, yielding:

\frac{π_K(K_{f_0})}{e^{‖β_0^{ext}‖_2^2 / 2}} \frac{e^{-ε^2 K_{f_0}/4}}{2^{K_{f_0}} Γ(K_{f_0}/2 + 1)} ε^{K_{f_0}} K_{f_0}^{K_{f_0}/2}. (11)
For ε = ε_n → 0, we thus find that the denominator in (C2) can be lower bounded with e^{K_{f_0} \log ε_n − c_K K_{f_0} \log K_{f_0} − ‖β_0^{ext}‖_2^2/2 − (K_{f_0}/2)[\log 2 + ε_n^2/2]}. We bound the numerator:

Π(F \setminus F_n) = Π\big( \bigcup_{k=k_n+1}^{∞} F_k \big) ∝ \sum_{k=k_n+1}^{∞} e^{-c_K k \log k} ≤ e^{-c_K (k_n+1) \log(k_n+1)} + \int_{k_n+1}^{∞} e^{-c_K x \log x} dx,

which is of order e^{-c_K (k_n+1) \log(k_n+1)}. Combining this bound with (11), we find that (C2) is met if:

e^{-K_{f_0} \log ε_n + (c_K+1) K_{f_0} \log K_{f_0} + K_{f_0} ‖β^0‖_∞^2 − c_K (k_n+1) \log(k_n+1) + 2 n ε_n^2} → 0 as n → ∞.

(C3) We bound the numerator by one, and use the bound (11) for the denominator. As ε_n → 0, we obtain the condition −K_{f_0} \log ε_n + (c_K + 1) K_{f_0} \log K_{f_0} + K_{f_0} ‖β^0‖_∞^2 ≤ \frac{j^2}{4} n ε_n^2 for all sufficiently large j.
Conclusion. With ε_n = n^{-1/2} \sqrt{K_{f_0} \log(n / K_{f_0})}, letting k_n ∝ n ε_n^2 = K_{f_0} \log(n / K_{f_0}), the condition (C1) is met. With this choice of k_n, the condition (C2) holds as well as long as ‖β^0‖_∞^2 ≲ log n and K_{f_0} ≲ √n. Finally, the condition (C3) is met for K_{f_0} ≲ √n.

Remark 3.1. It is worth pointing out that the proof will hold for a larger class of priors on K, as long as the prior shrinks at least exponentially fast (meaning that it is bounded from above by a e^{-bK} for constants a, b > 0). However, a prior at this exponential limit will require tuning, because the optimal a and b will depend on K(f_0, Ω^EB). We recommend using the prior (2.1) that prunes somewhat more aggressively, because it does not require tuning by the user. Indeed, Theorem 3.1 holds regardless of the choice of c_K > 0. We conjecture, however, that values c_K ≥ 1/K(f_0, Ω^EB) lead to a faster concentration speed and we suggest c_K = 1 as a default option.

Remark 3.2. When K_{f_0} is known, there is no need for assigning a prior π_K(·) and the conditions (C1) and (C3) are verified similarly as before, fixing the number of steps at K_{f_0}.
3.2 Posterior Concentration for Balanced Intervals
An analogue of Theorem 3.1 can be obtained for balanced partitions from Section 2.2.2 that correspond to regression trees with splits at actual observations. Now, we assume that f_0 is Ω^BI-valid and carry out the proof with K(f_0, Ω^BI) instead of K(f_0, Ω^EB). The posterior concentration rate is only slightly worse.

Theorem 3.2. (Balanced Intervals) Let f_0 : [0, 1] → R be a step function with K_0 steps, where K_0 is unknown. Denote by F the set of all step functions supported on balanced intervals, equipped with priors π_K(·), π_Ω(·|K) and π(β | K) as in (3), (6) and (9). Denote with K_{f_0} ≡ K(f_0, Ω^BI) and assume ‖β^0‖_∞^2 ≲ \log^{2β} n and K(f_0, Ω^BI) ≲ √n. Then, under Assumption 1, we have

Π( f ∈ F : ‖f − f_0‖_n ≥ M_n n^{-1/2} \sqrt{K_{f_0} \log^{2β}(n / K_{f_0})} | Y^(n) ) → 0 (12)

in P_{f_0}^n-probability, for every M_n → ∞ as n → ∞, where β > 1/2.
Proof. All three conditions (C1), (C2) and (C3) hold if we choose k_n ∝ K_{f_0} [\log(n / K_{f_0})]^{2β−1}. The entropy condition will be satisfied when \log\big( \sum_{k=1}^{k_n} C^k card(Ω_k^BI) \big) ≲ n ε_n^2 for some C > 0, where ε_n = n^{-1/2} \sqrt{K_{f_0} \log^{2β}(n / K_{f_0})}. Using the upper bound card(Ω_k^BI) < \binom{n−1}{k−1} < \binom{n−1}{k_n−1} (because k_n < \frac{n−1}{2} for large enough n), the condition (C1) is verified. Using the fact that card(Ω_{K_{f_0}}) ≲ K_{f_0} \log(n / K_{f_0}), the condition (C2) will be satisfied when, for some D > 0, we have

e^{-K_{f_0} \log ε_n + (c_K+1) K_{f_0} \log K_{f_0} + D K_{f_0} \log(n / K_{f_0}) + K_{f_0} ‖β^0‖_∞^2 − c_K (k_n+1) \log(k_n+1) + 2 n ε_n^2} → 0. (13)

This holds for our choice of k_n under the assumption ‖β^0‖_∞^2 ≲ \log^{2β} n and K_{f_0} ≲ √n. These choices also yield (C3).

Remark 3.3. When K_{f_0} ≳ √n, Theorem 3.1 and Theorem 3.2 still hold, only with the bit slower concentration rate n^{-1/2} \sqrt{K_{f_0} \log n}.
4 Discussion
We provided the first posterior concentration rate results for Bayesian non-parametric regression with step functions. We showed that under suitable complexity priors, the Bayesian procedure adapts to the unknown aspects of the target step function. Our approach can be extended in three ways: (a) to smooth f0 functions, (b) to dimension reduction with high-dimensional predictors, (c) to more general partitioning schemes that correspond to methods like Bayesian CART and BART. These three extensions are developed in our followup manuscript [29].
5 Acknowledgment
This work was supported by the James S. Kemper Foundation Faculty Research Fund at the University of Chicago Booth School of Business. | 1. What is the focus of the paper in regression trees?
2. What are the strengths of the proposed methods?
3. What are the weaknesses of the paper regarding its claims and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
The paper gives a detailed theory of a very simple case of regression trees, where there is one predictor variable. The main contribution is probably the methods used. There is a follow-up manuscript, and I'd say the more extensive results should have been included to make the paper more interesting. |
1. What is the main contribution of the paper regarding Bayesian regression histograms?
2. What are the strengths of the paper, particularly in terms of notation and theoretical analysis?
3. Do you have any concerns or suggestions regarding the paper's submission and relevance to NIPS?
4. Are there any minor issues or typos that need to be addressed in the paper? | Review | Review
This paper analyses concentration rates (speed of posterior concentration) for Bayesian regression histograms and demonstrates that under certain conditions and priors, the posterior distribution concentrates around the true step regression function at the minimax rate.
The notation is clear. Different approximating functions are considered, starting from the set of step functions supported on equally sized intervals, up to more flexible functions supported on balanced partitions. The most important part of the paper is building the prior on the space of approximating functions.
The paper is relatively clear and brings up an interesting first theoretical result regarding speed of posterior concentration for Bayesian regression histograms. The authors assume very simple conditions (one predictor, piecewise-constant functions), but this is necessary in order to get a first analysis.
Proof seems correct, although it is out of this reviewer's expertise. This reviewer wonders why the authors did not considered sending this work to the Annals of Statistics instead of NIPS, given the type of analysis and provided results.
Minor:
- You might want to check the reference of Bayesian Hierarchical Clustering (Heller et.al, 2005) and the literature of infinite mixture of experts.
- l. 116: You mention recommendations for the choice of c_K in Section 3.3, but this Section does not exist.
- Assumption 1 is referred to multiple times in page 4, but it is not defined until page 5.
Writing (typos):
- l. 83: we will
- l. 231: mass near zero
- l. 296: (b) not clear (extend to multi-dimensional?) |
NIPS | Title
Bayesian Dyadic Trees and Histograms for Regression
Abstract
Many machine learning tools for regression are based on recursive partitioning of the covariate space into smaller regions, where the regression function can be estimated locally. Among these, regression trees and their ensembles have demonstrated impressive empirical performance. In this work, we shed light on the machinery behind Bayesian variants of these methods. In particular, we study Bayesian regression histograms, such as Bayesian dyadic trees, in the simple regression case with just one predictor. We focus on the reconstruction of regression surfaces that are piecewise constant, where the number of jumps is unknown. We show that with suitably designed priors, posterior distributions concentrate around the true step regression function at a near-minimax rate. These results do not require the knowledge of the true number of steps, nor the width of the true partitioning cells. Thus, Bayesian dyadic regression trees are fully adaptive and can recover the true piecewise regression function nearly as well as if we knew the exact number and location of jumps. Our results constitute the first step towards understanding why Bayesian trees and their ensembles have worked so well in practice. As an aside, we discuss prior distributions on balanced interval partitions and how they relate to an old problem in geometric probability. Namely, we relate the probability of covering the circumference of a circle with random arcs whose endpoints are confined to a grid, a new variant of the original problem.
1 Introduction
Histogram regression methods, such as regression trees [1] and their ensembles [2], have an impressive record of empirical success in many areas of application [3, 4, 5, 6, 7]. Tree-based machine learning (ML) methods build a piecewise constant reconstruction of the regression surface based on ideas of recursive partitioning. Perhaps the most popular partitioning schemes are the ones based on parallel-axis splits. One recent example is the Mondrian process [8], which was introduced to the ML community as a prior over tree data structures with interesting self-consistency properties. Many efficient algorithms exist that can be deployed to fit regression histograms underpinned by some partitioning scheme. Among these, Bayesian variants, such as Bayesian CART [9, 10] and BART [11], have appealed to umpteen practitioners. There are several reasons why. Bayesian tree-based regression tools (a) can adapt to regression surfaces without any need for pruning, (b) are reluctant to overfit, (c) provide an avenue for uncertainty statements via posterior distributions. While practical success stories abound [3, 4, 5, 6, 7], the theoretical understanding of Bayesian regression tree methods has been lacking. In this work, we study the quality of posterior distributions with regard to the three properties mentioned above. We provide first theoretical results that contribute to the understanding of Bayesian Gaussian regression methods based on recursive partitioning.
Our performance metric will be the speed of posterior concentration/contraction around the true regression function. This is ultimately a frequentist assessment, describing the typical behavior of the posterior under the true generative model [12]. Posterior concentration rate results are now slowly
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
entering the machine learning community as a tool for obtaining more insights into Bayesian methods [13, 14, 15, 16, 17]. Such results quantify not only the typical distance between a point estimator (posterior mean/median) and the truth, but also the typical spread of the posterior around the truth. Ideally, most of the posterior mass should be concentrated in a ball centered around the true value with a radius proportional to the minimax rate [12, 18]. Being inherently a performance measure of both location and spread, optimal posterior concentration provides a necessary certificate for further uncertainty quantification [19, 20, 21]. Beyond uncertainty assessment, theoretical guarantees that describe the average posterior shrinkage behavior have also been a valuable instrument for assessing the suitability of priors. As such, these results can often provide useful guidelines for the choice of tuning parameters, e.g. the latent Dirichlet allocation model [14].
Despite the rapid growth of this frequentist-Bayesian theory field, posterior concentration results for Bayesian regression histograms/trees/forests have, so far, been unavailable. Here, we adopt this theoretical framework to get new insights into why these methods work so well.
Related Work
Bayesian density estimation with step functions is a relatively well-studied problem [22, 23, 24]. The literature on Bayesian histogram regression is a bit less crowded. Perhaps the closest to our conceptual framework is the work by Coram and Lalley [25], who studied Bayesian non-parametric binary regression with uniform mixture priors on step functions. The authors focused on L1 consistency. Here, we focus on posterior concentration rather than consistency. We are not aware of any other related theoretical study of Bayesian histogram methods for Gaussian regression.
Our Contributions
In this work we focus on a canonical regression setting with merely one predictor. We study hierarchical priors on step functions and provide conditions under which the posteriors concentrate optimally around the true regression function. We consider the case when the true regression function itself is a step function, i.e. a tree or a tree ensemble, where the number and location of jumps is unknown.
We start with a very simple space of approximating step functions, supported on equally sized intervals where the number of splits is equipped with a prior. These partitions include dyadic regression trees. We show that for a suitable complexity prior, all relevant information about the true regression function (jump sizes and the number of jumps) is learned from the data automatically. During the course of the proof, we develop a notion of the complexity of a piecewise constant function relative to its approximating class.
Next, we take a larger approximating space consisting of functions supported on balanced partitions that do not necessarily have to be of equal size. These correspond to more general trees with splits at observed values. With a uniform prior over all balanced partitions, we are able to achieve a nearly ideal performance (as if we knew the number and the location of jumps). As an aside, we describe the distribution of interval lengths obtained when the splits are sampled uniformly from a grid. We relate this distribution to the probability of covering the circumference of a circle with random arcs, a problem in geometric probability that dates back to [26, 27]. Our version of this problem assumes that the splits are chosen from a discrete grid rather than from a unit interval.
Notation
With $\propto$ and $\lesssim$ we will denote an equality and an inequality, respectively, up to a constant. The $\varepsilon$-covering number of a set $\Omega$ for a semimetric $d$, denoted by $N(\varepsilon, \Omega, d)$, is the minimal number of $d$-balls of radius $\varepsilon$ needed to cover the set $\Omega$. We denote by $\phi(\cdot)$ the standard normal density and by $P^n_f = \bigotimes P_{f,i}$ the $n$-fold product measure of the $n$ independent observations under (1) with a regression function $f(\cdot)$. By $\mathbb{P}^x_n = \frac{1}{n}\sum_{i=1}^{n}\delta_{x_i}$ we denote the empirical distribution of the observed covariates, by $\|\cdot\|_n$ the norm on $L_2(\mathbb{P}^x_n)$ and by $\|\cdot\|_2$ the standard Euclidean norm.
2 Bayesian Histogram Regression
We consider a classical nonparametric regression model, where response variables $Y^{(n)} = (Y_1, \ldots, Y_n)'$ are related to input variables $x^{(n)} = (x_1, \ldots, x_n)'$ through the function $f_0$ as follows:
$$Y_i = f_0(x_i) + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0, 1), \quad i = 1, \ldots, n. \qquad (1)$$
We assume that the covariate values $x_i$ are one-dimensional, fixed and have been rescaled so that $x_i \in [0, 1]$. Partitioning-based regression methods are often invariant to monotone transformations of observations. In particular, when $f_0$ is a step function, standardizing the distance between the observations, and thereby the split points, has no effect on the nature of the estimation problem. Without loss of generality, we will thereby assume that the observations are aligned on an equispaced grid. Assumption 1. (Equispaced Grid) We assume that the scaled predictor values satisfy $x_i = \frac{i}{n}$ for each $i = 1, \ldots, n$.
This assumption implies that partitions that are balanced in terms of the Lebesgue measure will be balanced also in terms of the number of observations. A similar assumption was imposed by Donoho [28] in his study of Dyadic CART.
The underlying regression function f0 : [0, 1]→ R is assumed to be a step function, i.e.
$$f_0(x) = \sum_{k=1}^{K_0} \beta_k^0\, \mathbb{I}_{\Omega_k^0}(x),$$
where $\{\Omega_k^0\}_{k=1}^{K_0}$ is a partition of $[0,1]$ into $K_0$ non-overlapping intervals. We assume that $\{\Omega_k^0\}_{k=1}^{K_0}$ is minimal, meaning that $f_0$ cannot be represented with a smaller partition (with less than $K_0$ pieces). Each partitioning cell $\Omega_k^0$ is associated with a step size $\beta_k^0$, determining the level of the function $f_0$ on $\Omega_k^0$. The entire vector of $K_0$ step sizes will be denoted by $\beta^0 = (\beta_1^0, \ldots, \beta_{K_0}^0)'$.
One might like to think of f0 as a regression tree with K0 bottom leaves. Indeed, every step function can be associated with an equivalence class of trees that live on the same partition but differ in their tree topology. The number of bottom leaves K0 will be treated as unknown throughout this paper. Our goal will be designing a suitable class of priors on step functions so that the posterior concentrates tightly around f0. Our analysis with a single predictor has served as a precursor to a full-blown analysis for high-dimensional regression trees [29].
We consider an approximating space of all step functions (with K = 1, 2, . . . bottom leaves)
$$\mathcal{F} = \bigcup_{K=1}^{\infty}\mathcal{F}_K, \qquad (2)$$
which consists of smaller spaces (or shells) of all $K$-step functions
$$\mathcal{F}_K = \Big\{f_\beta : [0,1]\to\mathbb{R};\ f_\beta(x) = \sum_{k=1}^{K}\beta_k\,\mathbb{I}_{\Omega_k}(x)\Big\},$$
each indexed by a partition $\{\Omega_k\}_{k=1}^K$ and a vector of $K$ step heights $\beta$. The fundamental building block of our theoretical analysis will be the prior on $\mathcal{F}$. This prior distribution has three main ingredients, described in detail below: (a) a prior on the number of steps $K$, (b) a prior on the partitions $\{\Omega_k\}_{k=1}^K$ of size $K$, and (c) a prior on step sizes $\beta = (\beta_1, \ldots, \beta_K)'$.
2.1 Prior πK(·) on the Number of Steps K
To avoid overfitting, we assign an exponentially decaying prior distribution that penalizes partitions with too many jumps. Definition 2.1. (Prior on K) The prior on the number of partitioning cells K satisfies
$$\pi_K(k) \equiv \Pi(K = k) \propto \exp(-c_K\, k \log k) \quad \text{for } k = 1, 2, \ldots. \qquad (3)$$
This prior is no stranger to non-parametric problems. It was deployed for stepwise reconstructions of densities [24, 23] and regression surfaces [25]. When cK is large, this prior is concentrated on models with small complexity where overfitting should not occur. Decreasing cK leads to the smearing of the prior mass over partitions with more jumps. This is illustrated in Figure 1, which depicts the prior for various choices of cK . We provide recommendations for the choice of cK in Section 3.1.
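As a quick numerical illustration of how $c_K$ governs the penalty on large partitions in (3), the sketch below evaluates a truncated, normalized version of the prior; the truncation point and the grid of $c_K$ values are illustrative choices.

```python
import numpy as np

def prior_K(c_K, k_max=50):
    """Normalized version of pi_K(k) proportional to exp(-c_K * k * log k), truncated at k_max."""
    k = np.arange(1, k_max + 1)
    w = np.exp(-c_K * k * np.log(k))   # log(1) = 0, so K = 1 always gets unnormalized weight 1
    return w / w.sum()

for c_K in [0.5, 1.0, 2.0]:
    print(c_K, np.round(prior_K(c_K)[:5], 4))   # prior mass on K = 1, ..., 5
```

Larger $c_K$ visibly concentrates essentially all of the prior mass on the smallest values of $K$.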
2.2 Prior πΩ(· |K) on Interval Partitions {Ωk}Kk=1
After selecting the number of steps $K$ from $\pi_K(k)$, we assign a prior over interval partitions $\pi_\Omega(\cdot \mid K)$. We will consider two important special cases.

[Figure 1: Left: the prior $\pi_K(k)$ from (3) for several choices of $c_K$. Right: a step function $f_0(x)$ approximated by equispaced blocks.]
2.2.1 Equivalent Blocks
Perhaps the simplest partition is based on statistically equivalent blocks [30], where all the cells are required to have the same number of points. This is also known as the K-spacing rule that partitions the unit interval using order statistics of the observations. Definition 2.2. (Equivalent Blocks) Let x(i) denote the ith order statistic of x = (x1, . . . , xn)′, where x(n) ≡ 1 and n = Kc for some c ∈ N\{0}. Denote by x(0) ≡ 0. A partition {Ωk}Kk=1 consists of K equivalent blocks, when Ωk = (x(jk), x(jk+1)], where jk = (k − 1)c.
A variant of this definition can be obtained in terms of interval lengths rather than numbers of observations. Definition 2.3. (Equispaced Blocks) A partition {Ωk}Kk=1 consists of K equispaced blocks Ωk, when Ωk = ( k−1 K , k K ] for k = 1, . . . ,K.
When $K = 2^s$ for some $s \in \mathbb{N}\setminus\{0\}$, the equispaced partition corresponds to a full complete binary tree with splits at dyadic rationals. If the observations $x_i$ lie on a regular grid (Assumption 1), then Definitions 2.2 and 2.3 are essentially equivalent. We will thereby focus on equivalent blocks (EB) and denote such a partition (for a given $K > 0$) with $\Omega^{EB}_K$. Because there is only one such partition for each $K$, the prior $\pi_\Omega(\cdot\mid K)$ has a single point mass at $\Omega^{EB}_K$. With $\Omega^{EB} = \cup_{K=1}^{\infty}\Omega^{EB}_K$ we denote the set of all EB partitions for $K = 1, 2, \ldots$. We will use these partitioning schemes as a jump-off point.
2.2.2 Balanced Intervals
Equivalent (equispaced) blocks are deterministic and, as such, do not provide much room for learning about the actual location of jumps in f0. Balanced intervals, introduced below, are a richer class of partitions that tolerate a bit more imbalance. First, we introduce the notion of cell counts µ(Ωk). For each interval Ωk, we write
$$\mu(\Omega_k) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(x_i \in \Omega_k), \qquad (4)$$
the proportion of observations falling inside $\Omega_k$. Note that for equivalent blocks, we can write $\mu(\Omega_1) = \cdots = \mu(\Omega_K) = c/n = 1/K$.

Definition 2.4. (Balanced Intervals) A partition $\{\Omega_k\}_{k=1}^K$ is balanced if
$$\frac{C_{\min}^2}{K} \leq \mu(\Omega_k) \leq \frac{C_{\max}^2}{K} \quad \text{for all } k = 1, \ldots, K \qquad (5)$$
for some universal constants Cmin ≤ 1 ≤ Cmax not depending on K.
The following variant of the balancing condition uses interval widths rather than cell counts: C̃2min/K ≤ |Ωk| ≤ C̃2max/K. Again, under Assumption 1, these two definitions are equivalent. In the sequel, we will denote by ΩBIK the set of all balanced partitions consisting of K intervals and by ΩBI = ∪∞K=1Ω BI K the set of all balanced intervals of sizes K = 1, 2, . . . . It is worth pointing out that the balance assumption on the interval partitions can be relaxed, at the expense of a log factor in the concentration rate [29].
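For concreteness, the balancing condition (5) can be checked mechanically for a candidate split vector; the minimal sketch below assumes the equispaced design of Assumption 1, and the constants $C_{\min}$, $C_{\max}$ are illustrative.

```python
import numpy as np

def is_balanced(splits, n, C_min=0.5, C_max=2.0):
    """Check condition (5) for a partition of [0, 1] defined by split points,
    with cell counts mu(Omega_k) computed as in (4) on the grid x_i = i/n."""
    x = np.arange(1, n + 1) / n                     # equispaced design of Assumption 1
    edges = np.concatenate(([0.0], np.sort(splits), [1.0]))
    K = len(edges) - 1
    mu = np.array([np.mean((x > edges[k]) & (x <= edges[k + 1])) for k in range(K)])
    return bool(np.all((C_min**2 / K <= mu) & (mu <= C_max**2 / K))), mu

ok, mu = is_balanced(splits=[0.3, 0.5, 0.8], n=100)
print(ok, mu)   # True with cell proportions [0.3, 0.2, 0.3, 0.2]
```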
With balanced partitions, the Kth shell FK of the approximating space F in (2) consists of all step functions that are supported on partitions ΩBIK and haveK−1 points of discontinuity uk ∈ In ≡ {xi : i = 1, . . . , n− 1} for k = 1, . . .K − 1. For equispaced blocks in Definition 2.3, we assumed that the points of subdivision were deterministic, i.e. uk = k/K. For balanced partitions, we assume that uk are random and chosen amongst the observed values xi. The order statistics of the vector of splits u = (u1, . . . , uK−1)
′ uniquely define a segmentation of [0, 1] into K intervals Ωk = (u(k−1), u(k)], where u(k) designates the kth smallest value in u and u(0) ≡ 0, u(K) = x(n) ≡ 1.
Our prior over balanced intervals πΩ(· |K) will be defined implicitly through a uniform prior over the split vectors u. Namely, the prior over balanced partitions ΩBIK satisfies
$$\pi_\Omega\big(\{\Omega_k\}_{k=1}^{K} \mid K\big) = \frac{1}{\mathrm{card}(\Omega^{BI}_K)}\,\mathbb{I}\big(\{\Omega_k\}_{k=1}^{K} \in \Omega^{BI}_K\big). \qquad (6)$$
In the following Lemma, we obtain upper bounds on $\mathrm{card}(\Omega^{BI}_K)$ and discuss how they relate to an old problem in geometric probability. In the sequel, we denote with $|\Omega_k|$ the lengths of the segments defined through the split points $u$.

Lemma 2.1. Assume that $u = (u_1, \ldots, u_{K-1})'$ is a vector of independent random variables obtained by uniform sampling (without replacement) from $\mathcal{I}_n$. Then under Assumption 1, we have for $1/n < C < 1/K$
$$\Pi\Big(\min_{1\le k\le K}|\Omega_k| \ge C\Big) = \frac{\binom{\lfloor n(1-KC)\rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}} \qquad (7)$$
and
$$\Pi\Big(\max_{1\le k\le K}|\Omega_k| \le C\Big) = 1 - \sum_{k=1}^{\tilde n}(-1)^k\binom{n-1}{k}\frac{\binom{\lfloor n(1-kC)\rfloor + K - 1}{K-1}}{\binom{n-1}{K-1}}, \qquad (8)$$
where $\tilde n = \min\{n-1, \lfloor 1/C\rfloor\}$.
Proof. The denominator of (7) follows from the fact that there are n − 1 possible splits for the K − 1 points of discontinuity uk. The numerator is obtained after adapting the proof of Lemma
2 of Flatto and Konheim [31]. Without loss of generality, we will assume that $C = a/n$ for some $a = 1, \ldots, \lfloor n/K\rfloor$ so that $n(1-KC)$ is an integer. Because the jumps $u_k$ can only occur on the grid $\mathcal{I}_n$, we have $|\Omega_k| = j/n$ for some $j = 1, \ldots, n-1$. It follows from Lemma 1 of Flatto and Konheim [31] that the set $E_K = \{|\Omega_k| : \sum_{k=1}^{K}|\Omega_k| = 1 \text{ and } |\Omega_k| \geq C \text{ for } k = 1, \ldots, K\}$ lies
in the interior of a convex hull of K points vr = (1 − KC)er + C ∑K k=1 ek for r = 1, . . . ,K, where er = (er1, . . . , erK)′ are unit base vectors, i.e. erj = I(r = j). Two examples of the set EK (for K = 2 and K = 3) are depicted in Figure 2. In both figures, n = 10 (i.e. 9 candidate split points) and a = 2. With K = 2 (Figure 2(a)), there are only 7 = ( n(1−KC)+K−1 K−1 )
pairs of interval lengths (|Ω1|, |Ω2|)′ that satisfy the minimal cell condition. These points lie on a grid between the two vertices v1 = (1 − C,C) and v2 = (C, 1 − C). With K = 3, the convex hull of points v1 = (1 − 2C,C,C)′, v2 = (C, 1 − 2C,C)′ and v1 = (C,C, 1 − 2C)′ corresponds to a diagonal dissection of a cube of a side length (1 − 3C) (Figure 2(b), again with a = 2 and n = 10). The number of lattice points in the interior (and on the boundary) of such tetrahedron corresponds to an arithmetic sum 12 (n− 3a+ 2)(n− 3a+ 1) = ( n−3a+2 2 ) . So far, we showed (7) for K = 2 and K = 3. To complete the induction argument, suppose that the formula holds for some arbitrary K > 0. Then the size of the lattice inside (and on the boundary) of a (K + 1)-tetrahedron of a side length [1− (K + 1)C] can be obtained by summing lattice sizes inside K-tetrahedrons of increasing side lengths 0, √ 2/n, 2 √ 2/n, . . . , [1− (K + 1)C] √ 2/n, i.e.
$$\sum_{j=K-1}^{n[1-(K+1)C]+K-1}\binom{j}{K-1} = \binom{n[1-(K+1)C]+K}{K},$$
where we used the fact $\sum_{j=K}^{N}\binom{j}{K} = \binom{N+1}{K+1}$. The second statement (8) is obtained by writing the event as a complement of the union of events and applying the method of inclusion-exclusion.
Remark 2.1. Flatto and Konheim [31] showed that the probability of covering a circle with random arcs of length C is equal to the probability that all segments of the unit interval, obtained with iid random uniform splits, are smaller than C. Similarly, the probability (8) could be related to the probability of covering the circle with random arcs whose endpoints are chosen from a grid of n− 1 equidistant points on the circumference.
There are $\binom{n-1}{K-1}$ partitions of size $K$, of which $\binom{\lfloor n(1-\tilde C_{\min}^2)\rfloor + K-1}{K-1}$ satisfy the minimal cell width balancing condition (where $\tilde C_{\min}^2 > K/n$). This number gives an upper bound on the combinatorial complexity of balanced partitions $\mathrm{card}(\Omega^{BI}_K)$.
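Formula (7) is easy to sanity-check numerically; the sketch below compares the exact expression against a Monte Carlo estimate (the particular values of $n$, $K$ and $C = a/n$ are illustrative).

```python
import numpy as np
from math import comb

def prob_min_length_exact(n, K, a):
    """Formula (7) with C = a/n: P(min_k |Omega_k| >= a/n) when the K - 1 splits
    are drawn uniformly without replacement from the grid {1/n, ..., (n-1)/n}."""
    return comb(n - K * a + K - 1, K - 1) / comb(n - 1, K - 1)

def prob_min_length_mc(n, K, a, reps=50000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        splits = np.sort(rng.choice(np.arange(1, n), size=K - 1, replace=False))
        lengths = np.diff(np.concatenate(([0], splits, [n])))   # in grid units of 1/n
        hits += lengths.min() >= a
    return hits / reps

n, K, a = 20, 3, 2   # i.e. C = 2/20
print(prob_min_length_exact(n, K, a), prob_min_length_mc(n, K, a))   # ~0.70 for both
```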
2.3 Prior π(β |K) on Step Heights β
To complete the prior on FK , we take independent normal priors on each of the coefficients. Namely
$$\pi(\beta \mid K) = \prod_{k=1}^{K}\phi(\beta_k), \qquad (9)$$
where φ(·) is the standard normal density.
3 Main Results
A crucial ingredient of our proof will be understanding how well one can approximate f0 with other step functions (supported on partitions Ω, which are either equivalent blocks ΩEB or balanced partitions ΩBI ). We will describe the approximation error in terms of the overlap between the true partition {Ω0k} K0 k=1 and the approximating partitions {Ωk}Kk=1 ∈ Ω. More formally, we define the restricted cell count (according to Nobel [32]) as
$$m\big(V; \{\Omega_k^0\}_{k=1}^{K_0}\big) = \big|\{\Omega_k^0 : \Omega_k^0 \cap V \neq \emptyset\}\big|,$$
the number of cells in $\{\Omega_k^0\}_{k=1}^{K_0}$ that overlap with an interval $V \subset [0,1]$. Next, we define the complexity of $f_0$ as the smallest size of a partition in $\Omega$ needed to completely cover $f_0$ without any overlap.

Definition 3.1. (Complexity of $f_0$ w.r.t. $\Omega$) We define $K(f_0, \Omega)$ as the smallest $K$ such that there exists a $K$-partition $\{\Omega_k\}_{k=1}^{K}$ in the class of partitions $\Omega$ for which
$$m\big(\Omega_k; \{\Omega_k^0\}_{k=1}^{K_0}\big) = 1 \quad \text{for all } k = 1, \ldots, K.$$
The number K(f0,Ω) will be referred to as the complexity of f0 w.r.t. Ω.
The complexity number K(f0,Ω) indicates the optimal number of steps needed to approximate f0 with a step function (supported on partitions in Ω) without any error. It depends on the true number of jumps K0 as well as the true interval lengths |Ω0k|. If the minimal partition {Ω0k} K0 k=1 resided in the approximating class, i.e. {Ω0k} K0 k=1 ∈ Ω, then we would obtain K(f0,Ω) = K0, the true number of steps. On the other hand, when {Ω0k} K0 k=1 /∈ Ω, the complexity number K(f0,Ω) can be much larger. This is illustrated in Figure 1 (right), where the true partition {Ω0k} K0 k=1 consists of K0 = 4 unequal pieces and we approximate it with equispaced blocks with K = 2, 5, 10 steps. Because the intervals Ω0k are not equal and the smallest one has a length 1/10, we need K(f0,Ω
EB) = 10 equispaced blocks to perfectly approximate f0. For our analysis, we do not need to assume that {Ω0k} K0 k=1 ∈ Ω (i.e. f0 does not need to be inside the approximating class) or that K(f0,Ω) is finite. The complexity number can increase with n, where sharper performance is obtained when f0 can be approximated error-free with some f ∈ Ω, where f has a small number of discontinuities relative to n. Another way to view K(f0,Ω) is as the ideal partition size on which the posterior should concentrate. If this number were known, we could achieve a near-minimax posterior concentration rate n−1/2 √ K(f0,Ω) log[n/K(f0,Ω)] (Remark 3.3). The actual minimax rate for estimating a
piece-wise constant f0 (consisting of K0 > 2 pieces) is n−1/2 √ K0 log(n/K0) [33]. In our main results, we will target the nearly optimal rate expressed in terms of K(f0,Ω).
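To make Definition 3.1 concrete for equispaced blocks, $K(f_0, \Omega^{EB})$ is the coarsest equispaced grid on which all jumps of $f_0$ sit; a minimal sketch (the jump locations below are illustrative, chosen to mimic the Figure 1 example):

```python
from fractions import Fraction
from math import lcm

def complexity_equispaced(jump_locations):
    """K(f0, Omega^EB): the smallest K such that every jump of f0 lies on the grid
    {1/K, ..., (K-1)/K}, so that K equispaced blocks cover f0 without any overlap."""
    jumps = [Fraction(u) for u in jump_locations]
    return lcm(*(u.denominator for u in jumps))

# K0 = 4 pieces whose jumps sit on the 1/10 grid but on no coarser grid:
print(complexity_equispaced([Fraction(1, 10), Fraction(4, 10), Fraction(7, 10)]))  # 10
```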
3.1 Posterior Concentration for Equivalent Blocks
Our first result shows that the minimax rate is nearly achieved, without any assumptions on the number of pieces of $f_0$ or the sizes of the pieces.

Theorem 3.1. (Equivalent Blocks) Let $f_0 : [0,1]\to\mathbb{R}$ be a step function with $K_0$ steps, where $K_0$ is unknown. Denote by $\mathcal{F}$ the set of all step functions supported on equivalent blocks, equipped with priors $\pi_K(\cdot)$ and $\pi(\beta\mid K)$ as in (3) and (9). Denote $K_{f_0} \equiv K(f_0, \Omega^{EB})$ and assume $\|\beta^0\|_\infty^2 \lesssim \log n$ and $K_{f_0} \lesssim \sqrt{n}$. Then, under Assumption 1, we have
$$\Pi\Big(f \in \mathcal{F} : \|f - f_0\|_n \geq M_n\, n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})} \,\Big|\, Y^{(n)}\Big) \to 0 \qquad (10)$$
in $P^n_{f_0}$-probability, for every $M_n \to \infty$ as $n \to \infty$.
Before we proceed with the proof, a few remarks ought to be made. First, it is worthwhile to emphasize that the statement in Theorem 3.1 is a frequentist one as it relates to an aggregated behavior of the posterior distributions obtained under the true generative model Pnf0 .
Second, the theorem shows that the Bayesian procedure performs an automatic adaptation to K(f0,Ω
EB). The posterior will concentrate on EB partitions that are fine enough to approximate f0 well. Thus, we are able to recover the true function as well as if we knew K(f0,ΩEB).
Third, it is worth mentioning that, under Assumption 1, Theorem 3.1 holds for equivalent as well as equisized blocks. In this vein, it describes the speed of posterior concentration for dyadic regression trees. Indeed, as mentioned previously, with K = 2s for some s ∈ N\{0}, the equisized partition corresponds to a full binary tree with splits at dyadic rationals.
Another interesting insight is that the Gaussian prior (9), while selected for mathematical convenience, turns out to be sufficient for optimal recovery. In other words, despite the relatively large amount of mass near zero, the Gaussian prior does not rule out optimal posterior concentration. Our standard normal prior is a simpler version of the Bayesian CART prior, which determines the variance from the data [9].
Let $K_{f_0} \equiv K(f_0, \Omega^{EB})$ be as in Definition 3.1. Theorem 3.1 is proved by verifying the three conditions of Theorem 4 of [18], for $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})}$ and $\mathcal{F}_n = \bigcup_{K=0}^{k_n}\mathcal{F}_K$, with $k_n$ of the order $K_{f_0}\log(n/K_{f_0})$. The approximating subspace $\mathcal{F}_n \subset \mathcal{F}$ should be rich enough to approximate $f_0$ well and it should receive most of the prior mass. The conditions for posterior contraction at the rate $\varepsilon_n$ are:

(C1) $\sup_{\varepsilon > \varepsilon_n}\, \log N\big(\tfrac{\varepsilon}{36}, \{f \in \mathcal{F}_n : \|f - f_0\|_n < \varepsilon\}, \|\cdot\|_n\big) \leq n\varepsilon_n^2$,

(C2) $\dfrac{\Pi(\mathcal{F}\setminus\mathcal{F}_n)}{\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \leq \varepsilon_n^2)} = o(\mathrm{e}^{-2n\varepsilon_n^2})$,

(C3) $\dfrac{\Pi(f \in \mathcal{F}_n : j\varepsilon_n < \|f - f_0\|_n \leq 2j\varepsilon_n)}{\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \leq \varepsilon_n^2)} \leq \mathrm{e}^{\frac{j^2}{4} n\varepsilon_n^2}$ for all sufficiently large $j$.

The entropy condition (C1) restricts attention to EB partitions with small $K$. As will be seen from the proof, the largest allowed partitions have at most (a constant multiple of) $K_{f_0}\log(n/K_{f_0})$ pieces.
Condition (C2) requires that the prior does not promote partitions with more than Kf0 log (n/Kf0) pieces. This property is guaranteed by the exponentially decaying prior πK(·), which penalizes large partitions.
The final condition, (C3), requires that the prior charges a ‖.‖n neighborhood of the true function. In our proof, we verify this condition by showing that the prior mass on step functions of the optimal size Kf0 is sufficiently large.
Proof. We verify the three conditions (C1), (C2) and (C3).

(C1) Let $\varepsilon > \varepsilon_n$ and $K \in \mathbb{N}$. For $f_\alpha, f_\beta \in \mathcal{F}_K$, we have $K^{-1}\|\alpha - \beta\|_2^2 = \|f_\alpha - f_\beta\|_n^2$ because $\mu(\Omega_k) = 1/K$ for each $k$. We now argue as in the proof of Theorem 12 of [18] to show that $N\big(\tfrac{\varepsilon}{36}, \{f \in \mathcal{F}_K : \|f - f_0\|_n < \varepsilon\}, \|\cdot\|_n\big)$ is bounded by the number of $\sqrt{K}\varepsilon/36$-balls required to cover a $\sqrt{K}\varepsilon$-ball in $\mathbb{R}^K$. This number is bounded above by $108^K$. Summing over $K$, we recognize a geometric series. Taking the logarithm of the result, we find that (C1) is satisfied if $\log(108)(k_n + 1) \leq n\varepsilon_n^2$.

(C2) We bound the denominator by
$$\Pi(f \in \mathcal{F} : \|f - f_0\|_n^2 \leq \varepsilon^2) \geq \pi_K(K_{f_0})\,\Pi\big(\beta \in \mathbb{R}^{K_{f_0}} : \|\beta - \beta_0^{ext}\|_2^2 \leq \varepsilon^2 K_{f_0}\big),$$
where $\beta_0^{ext} \in \mathbb{R}^{K_{f_0}}$ is an extended version of $\beta^0 \in \mathbb{R}^{K_0}$, containing the coefficients for $f_0$ expressed as a step function on the partition $\{\Omega_k^0\}_{k=1}^{K_{f_0}}$. This can be bounded from below by
$$\frac{\pi_K(K_{f_0})}{\mathrm{e}^{\|\beta_0^{ext}\|_2^2/2}}\,\Pi\big(\beta \in \mathbb{R}^{K_{f_0}} : \|\beta\|_2^2 \leq \varepsilon^2 K_{f_0}/2\big) > \frac{\pi_K(K_{f_0})}{\mathrm{e}^{\|\beta_0^{ext}\|_2^2/2}}\int_0^{\varepsilon^2 K_{f_0}/2}\frac{x^{K_{f_0}/2-1}\mathrm{e}^{-x/2}}{2^{K_{f_0}/2}\,\Gamma(K_{f_0}/2)}\,\mathrm{d}x.$$
We bound this from below by bounding the exponential at the upper integration limit, yielding
$$\frac{\pi_K(K_{f_0})}{\mathrm{e}^{\|\beta_0^{ext}\|_2^2/2}}\,\frac{\mathrm{e}^{-\varepsilon^2 K_{f_0}/4}}{2^{K_{f_0}}\,\Gamma(K_{f_0}/2+1)}\,\varepsilon^{K_{f_0}} K_{f_0}^{K_{f_0}/2}. \qquad (11)$$
For $\varepsilon = \varepsilon_n \to 0$, we thus find that the denominator in (C2) can be lower bounded with $\mathrm{e}^{K_{f_0}\log\varepsilon_n - c_K K_{f_0}\log K_{f_0} - \|\beta_0^{ext}\|_2^2/2 - (K_{f_0}/2)[\log 2 + \varepsilon_n^2/2]}$. We bound the numerator:
$$\Pi(\mathcal{F}\setminus\mathcal{F}_n) = \Pi\Big(\bigcup_{k=k_n+1}^{\infty}\mathcal{F}_k\Big) \propto \sum_{k=k_n+1}^{\infty}\mathrm{e}^{-c_K k\log k} \leq \mathrm{e}^{-c_K(k_n+1)\log(k_n+1)} + \int_{k_n+1}^{\infty}\mathrm{e}^{-c_K x\log x}\,\mathrm{d}x,$$
which is of order $\mathrm{e}^{-c_K(k_n+1)\log(k_n+1)}$. Combining this bound with (11), we find that (C2) is met if
$$\mathrm{e}^{-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + K_{f_0}\|\beta^0\|_\infty^2 - c_K(k_n+1)\log(k_n+1) + 2n\varepsilon_n^2} \to 0 \quad \text{as } n \to \infty.$$
(C3) We bound the numerator by one, and use the bound (11) for the denominator. As $\varepsilon_n \to 0$, we obtain the condition $-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + K_{f_0}\|\beta^0\|_\infty^2 \le \frac{j^2}{4}\, n\varepsilon_n^2$ for all sufficiently large $j$.
Conclusion. With $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log(n/K_{f_0})}$, letting $k_n \propto n\varepsilon_n^2 = K_{f_0}\log(n/K_{f_0})$, the condition (C1) is met. With this choice of $k_n$, the condition (C2) holds as well, as long as $\|\beta^0\|_\infty^2 \lesssim \log n$ and $K_{f_0} \lesssim \sqrt{n}$. Finally, the condition (C3) is met for $K_{f_0} \lesssim \sqrt{n}$.

Remark 3.1. It is worth pointing out that the proof will hold for a larger class of priors on $K$, as long as the prior shrinks at least exponentially fast (meaning that it is bounded from above by $a\,\mathrm{e}^{-bK}$ for constants $a, b > 0$). However, a prior at this exponential limit will require tuning, because the optimal $a$ and $b$ will depend on $K(f_0, \Omega^{EB})$. We recommend using the prior (2.1) that prunes somewhat more aggressively, because it does not require tuning by the user. Indeed, Theorem 3.1 holds regardless of the choice of $c_K > 0$. We conjecture, however, that values $c_K \geq 1/K(f_0, \Omega^{EB})$ lead to a faster concentration speed and we suggest $c_K = 1$ as a default option.

Remark 3.2. When $K_{f_0}$ is known, there is no need for assigning a prior $\pi_K(\cdot)$ and the conditions (C1) and (C3) are verified similarly as before, fixing the number of steps at $K_{f_0}$.
3.2 Posterior Concentration for Balanced Intervals
An analogue of Theorem 3.1 can be obtained for balanced partitions from Section 2.2.2 that correspond to regression trees with splits at actual observations. Now, we assume that $f_0$ is $\Omega^{BI}$-valid and carry out the proof with $K(f_0, \Omega^{BI})$ instead of $K(f_0, \Omega^{EB})$. The posterior concentration rate is only slightly worse.

Theorem 3.2. (Balanced Intervals) Let $f_0 : [0,1]\to\mathbb{R}$ be a step function with $K_0$ steps, where $K_0$ is unknown. Denote by $\mathcal{F}$ the set of all step functions supported on balanced intervals, equipped with priors $\pi_K(\cdot)$, $\pi_\Omega(\cdot\mid K)$ and $\pi(\beta\mid K)$ as in (3), (6) and (9). Denote $K_{f_0} \equiv K(f_0, \Omega^{BI})$ and assume $\|\beta^0\|_\infty^2 \lesssim \log^{2\beta} n$ and $K(f_0, \Omega^{BI}) \lesssim \sqrt{n}$. Then, under Assumption 1, we have
$$\Pi\Big(f \in \mathcal{F} : \|f - f_0\|_n \geq M_n\, n^{-1/2}\sqrt{K_{f_0}\log^{2\beta}(n/K_{f_0})} \,\Big|\, Y^{(n)}\Big) \to 0 \qquad (12)$$
in $P^n_{f_0}$-probability, for every $M_n \to \infty$ as $n \to \infty$, where $\beta > 1/2$.
Proof. All three conditions (C1), (C2) and (C3) hold if we choose $k_n \propto K_{f_0}[\log(n/K_{f_0})]^{2\beta-1}$. The entropy condition will be satisfied when $\log\big(\sum_{k=1}^{k_n} C^k\,\mathrm{card}(\Omega^{BI}_k)\big) \lesssim n\varepsilon_n^2$ for some $C > 0$, where $\varepsilon_n = n^{-1/2}\sqrt{K_{f_0}\log^{2\beta}(n/K_{f_0})}$. Using the upper bound $\mathrm{card}(\Omega^{BI}_k) < \binom{n-1}{k-1} < \binom{n-1}{k_n-1}$ (because $k_n < \frac{n-1}{2}$ for large enough $n$), the condition (C1) is verified. Using the fact that $\mathrm{card}(\Omega_{K_{f_0}}) \lesssim K_{f_0}\log(n/K_{f_0})$, the condition (C2) will be satisfied when, for some $D > 0$, we have
$$\mathrm{e}^{-K_{f_0}\log\varepsilon_n + (c_K+1)K_{f_0}\log K_{f_0} + DK_{f_0}\log(n/K_{f_0}) + K_{f_0}\|\beta^0\|_\infty^2 - c_K(k_n+1)\log(k_n+1) + 2n\varepsilon_n^2} \to 0. \qquad (13)$$
This holds for our choice of $k_n$ under the assumptions $\|\beta^0\|_\infty^2 \lesssim \log^{2\beta} n$ and $K_{f_0} \lesssim \sqrt{n}$. These choices also yield (C3).

Remark 3.3. When $K_{f_0} \gtrsim \sqrt{n}$, Theorem 3.1 and Theorem 3.2 still hold, only with the slightly slower concentration rate $n^{-1/2}\sqrt{K_{f_0}\log n}$.
4 Discussion
We provided the first posterior concentration rate results for Bayesian non-parametric regression with step functions. We showed that under suitable complexity priors, the Bayesian procedure adapts to the unknown aspects of the target step function. Our approach can be extended in three ways: (a) to smooth f0 functions, (b) to dimension reduction with high-dimensional predictors, (c) to more general partitioning schemes that correspond to methods like Bayesian CART and BART. These three extensions are developed in our followup manuscript [29].
5 Acknowledgment
This work was supported by the James S. Kemper Foundation Faculty Research Fund at the University of Chicago Booth School of Business. | 1. What is the main contribution of the paper in Bayesian regression histograms?
2. What are the strengths and weaknesses of the proposed approach in the paper?
3. Are there any concerns or questions regarding the paper's content, such as unclear expressions, assumptions, or minor comments?
4. How does the reviewer assess the clarity and flexibility of the paper's methodology? | Review | Review
In this paper, the authors focus on Bayesian regression histograms aimed at regression with one explanatory variable. They develop an approach for constructing a regression surface that is piecewise constant, in which the number of jumps is unknown. The approach has some merits, and I have some concerns, given below, which I think the authors should address for better clarity.
1. Line 116: There is no section 3.3 in the paper.
2. In the proof of lemma 2.1 in the supplementary file, it is not clear ( at least to me) how you arrived at the expression above line 24. I followed your explanation but couldn't get it.
3. In line 178: There should be a better explanation for assuming x's are fixed. What if not?
4. Line 183: Do you refer this xi to be the scaled predictor?
Line 296: Minor comment: How flexible is the approach to extend to dimension reduction with many predictors?
In a practical application, how can one select c_K? And how sensitive is Theorem 3.1 to this choice?
NIPS | Title
The Many Faces of Adversarial Risk
Abstract
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data. Numerous definitions of adversarial risk—not all mathematically rigorous and differing subtly in the details—have appeared in the literature. In this paper, we revisit these definitions, make them rigorous, and critically examine their similarities and differences. Our technical tools derive from optimal transport, robust statistics, functional analysis, and game theory. Our contributions include the following: generalizing Strassen’s theorem to the unbalanced optimal transport setting with applications to adversarial classification with unequal priors; showing an equivalence between adversarial robustness and robust hypothesis testing with∞-Wasserstein uncertainty sets; proving the existence of a pure Nash equilibrium in the two-player game between the adversary and the algorithm; and characterizing adversarial risk by the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets. Our results generalize and deepen recently discovered connections between optimal transport and adversarial robustness and reveal new connections to Choquet capacities and game theory.
1 Introduction
Neural networks are known to be vulnerable to adversarial attacks, which are imperceptible perturbations to input data that maximize loss [38, 15, 5]. Developing algorithms resistant to such attacks has received considerable attention in recent years [8, 28, 24, 20], motivated by safety-critical applications such as autonomous driving [18, 27], medical imaging [17, 23, 22] and law [21, 6].
A classification algorithm with high accuracy (low risk) in the absence of an adversary may have poor accuracy (high risk) when an adversary is present. Thus, a modified notion known as adversarial risk is used to quantify the adversarial robustness of algorithms. Algorithms that minimize adversarial risk are deemed robust. Procedures for finding them have been effective in practice [24, 41, 28], spurring numerous theoretical investigations into adversarial risk and its minimizers.
There is no universally agreed upon definition of adversarial risk. Even the simplest setting of binary classification in Rd with an `2 adversary admits various definitions involving set expansions [9, 16], transport maps [29], Markov kernels [31], and couplings [26]. These works broadly interpret adversarial risk as a measure of robustness to small perturbations, but their definitions differ in subtle details such as the class of adversaries and algorithms considered, budget constraints placed on the adversary, assumptions on the loss function, and the geometry of decision boundaries.
Optimal adversarial risk is most commonly defined as the minimax risk under adversarial contamination [24, 33]. Other notable characterizations include an optimal transport cost between data generating distributions in [30, 2, 10, 11], the optimal value of a distributionally robust optimization problem [36, 35, 40], and the value of a two-player zero-sum game [26, 29, 3, 4].
The diversity of definitions for adversarial risk makes it challenging to compare approaches. Moreover, not all approaches are rigorous. For instance, the classes of adversarial strategies and classifier
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
algorithms are often unclear, and issues of measurability are ignored. Although this may be harmless for applied research, it has led to incorrect proofs and insufficient assumptions in some theoretical works; a mathematically rigorous foundation for adversarial risk is essential for future research.
In this paper, we examine various notions of adversarial risk for binary classification in a nonparametric setting, where the decision boundary (or decision region) of a classifier is an arbitrary subset of the input space. We present rigorous definitions of adversarial risk and identify conditions under which these definitions are equivalent. We consider the general setting of Polish spaces (complete, separable metric spaces), and present stronger results for the Euclidean space (Rd). Our contributions are as follows:
• We examine the definition of adversarial risk based on set expansions. For Polish spaces, we observe that adversarial risk is not Borel measurable, and hence, not well-defined when the decision region is an arbitrary set. We show that the problem can be resolved by considering a Polish space equipped with the universal completion of the Borel σ-algebra and restricting the decision regions to Borel sets. For the Euclidean space with the Lebesgue σ-algebra, we show that adversarial risk is well-defined for any Lebesgue measurable decision region. Our key lemma (Lemma 4.3) shows that the Lebesgue σ-algebra is preferred over the Borel σ-algebra because set expansions are Lebesgue measurable but not necessarily Borel measurable. These results are contained in Section 4.
• We show that the definition of adversarial risk using set expansions is identical to a notion of risk that appears in robust hypothesis testing with∞-Wasserstein uncertainty sets. We prove this result in Polish spaces using the theory of measurable selections [1, 43]. In Rd, we are able to use the powerful theory of Choquet capacities [7] (in particular, Huber and Strassen’s 2-alternating capacities [19]) to establish results of a similar nature. These results are contained in Section 5.
• We consider the binary classification setup with unequal priors and show (under suitable assumptions) that the optimal adversarial risk from the above definitions is characterized by an unbalanced optimal transport cost between data-generating distributions. For both Polish spaces and Rd, the main tool we use is Theorem 6.1 in which we generalize a classical result of Strassen on excess-cost optimal transport [37, 42] from probability measures to finite measures with possibly unequal mass. This generalizes results of [31, 2] on binary classification, which were only for equal priors. These results are contained in Section 6.
• We consider the setup of a zero-sum game between the adversary and the algorithm. We show that the value of this game (adversarial risk) is equal to the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets centered around true data-generating distributions. We prove the existence of a pure Nash equilibrium in this game for Rd and for Polish spaces with a midpoint property. This extends/strengthens the results of [26, 29, 3] to non-parametric classifiers. These results are contained in Section 7.
The paper is organized as follows: In Section 2, we present preliminary definitions from optimal transport and metric space topology. In Section 3, we discuss various definitions of adversarial risk and present more related work. Sections 4, 5, 6 and 7 contain our main contributions summarized above. We conclude the paper in Section 8 and discuss future research directions.
We emphasize that rectifying measure theoretic issues with existing formulations of adversarial risk is one of our contributions, but not the main focus of our paper. We start our presentation with fixing measurability and well-definedness (in Section 4) because otherwise we will not be able to rigorously present our main results in the subsequent sections, namely: relation to robust hypothesis testing and Choquet capacities in section 5, generalizing the results of [2, 30] in section, 6 proving minimax theorems and existence of Nash equilibria and extending the results of [26, 3, 29] in section 7.
Notation: Throughout the paper, we use X to denote a Polish space (a complete, separable metric space) with metric d and Borel σ-algebra B(X ). For x ∈ X and r ≥ 0, let Br(x) denote the closed ball of radius r centered at x. We use P(X ) andM(X ) to denote the space of probability measures and finite measures defined on the measure space (X ,B(X )), respectively. Let B(X ) denote the universal completion of B(X ). Let P(X ) andM(X ) denote the space of probability measures and finite measures defined on the complete measure space (X ,B(X )). For µ, ν ∈ M(X ), we say ν dominates µ if µ(A) ≤ ν(A) for all A ∈ B(X ) and write µ ν. When X is Rd, we use L(X ) to
denote the Lebesgue σ-algebra and λ to denote the d-dimensional Lebesgue measure. Note that L(X ) = B(X ) for X = Rd. For a positive integer n, we use [n] to denote the finite set {1, . . . , n}.
2 Preliminaries
2.1 Metric Space Topology
We introduce three different notions of set expansions. For $\varepsilon \geq 0$ and $A \in \mathcal{B}(\mathcal{X})$, the $\varepsilon$-Minkowski expansion of $A$ is given by $A^{\oplus\varepsilon} := \cup_{a\in A}B_\varepsilon(a)$. The $\varepsilon$-closed expansion of $A$ is defined as $A^{\varepsilon} := \{x \in \mathcal{X} : d(x, A) \leq \varepsilon\}$, where $d(x, A) = \inf_{a\in A} d(x, a)$. The $\varepsilon$-open expansion of $A$ is defined as $A^{\varepsilon)} := \{x \in \mathcal{X} : d(x, A) < \varepsilon\}$. We use the notation $A^{-\varepsilon}$ to denote $((A^c)^{\varepsilon})^c$, and analogously for the Minkowski version, $((A^c)^{\oplus\varepsilon})^c$. For example, consider the set $A = (0, 1]$ in the space $(\mathcal{X}, d) = (\mathbb{R}, |\cdot|)$ and $\varepsilon > 0$. Then $A^{\oplus\varepsilon} = (-\varepsilon, 1+\varepsilon]$, $A^{\varepsilon} = [-\varepsilon, 1+\varepsilon]$ and $A^{\varepsilon)} = (-\varepsilon, 1+\varepsilon)$. For any $A \in \mathcal{B}(\mathcal{X})$, $A^{\varepsilon}$ is closed and $A^{\varepsilon)}$ is open. Hence, $A^{\varepsilon}, A^{\varepsilon)} \in \mathcal{B}(\mathcal{X})$. Moreover, $A^{\varepsilon)} \subseteq A^{\oplus\varepsilon} \subseteq A^{\varepsilon}$. However, $A^{\oplus\varepsilon}$ may not be in $\mathcal{B}(\mathcal{X})$ (see Appendix for an example). In general, the Minkowski sum of two Borel sets need not be Borel [13], and that of two Lebesgue measurable sets need not be Lebesgue measurable [34].
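In one dimension the three expansions are easy to compute explicitly: for a finite union of intervals, each endpoint moves out by $\varepsilon$, and the three notions differ only in whether the new endpoints are included. A minimal sketch for the closed expansion (the intervals and budget below are illustrative):

```python
def closed_expansion(intervals, eps):
    """Closed eps-expansion {x : d(x, A) <= eps} of a finite union of closed
    intervals A in (R, |.|): endpoints move out by eps and overlapping pieces
    are merged.  The Minkowski and open expansions differ from this set only
    at finitely many endpoints."""
    out = []
    for a, b in sorted((a - eps, b + eps) for a, b in intervals):
        if out and a <= out[-1][1]:
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return out

print(closed_expansion([(0.0, 1.0), (2.5, 3.0)], eps=0.25))  # [[-0.25, 1.25], [2.25, 3.25]]
```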
2.2 Optimal Transport
Let µ, ν ∈ P(X ). A coupling between µ and ν is a joint probability measure π ∈ P(X 2) with marginals µ and ν. The set Π(µ, ν) ⊆ P(X 2) denotes the set of all couplings between µ and ν. The optimal transport cost between µ and ν under a cost function c : X × X → [0,∞) is defined as Tc(µ, ν) = infπ∈Π(µ,ν) ∫ X 2 c(x, x ′)dπ(x, x′). For a positive integer p, the p-Wasserstein distance between µ and ν is defined as, Wp(µ, ν) = (Tdp(µ, ν)) 1 p . The∞-Wasserstein metric is defined as W∞(µ, ν) = limp→∞Wp(µ, ν). It can also be expressed in the following ways:
$$W_\infty(\mu, \nu) = \inf_{\pi\in\Pi(\mu,\nu)}\ \operatorname*{ess\,sup}_{(x,x')\sim\pi} d(x, x') = \inf\{\delta > 0 : \mu(A) \leq \nu(A^{\delta})\ \forall A \in \mathcal{B}(\mathcal{X})\}. \qquad (1)$$
Given a µ ∈ P(X ) and a measurable function f : X → X , the push-forward of µ by f is defined as a probability measure f]µ ∈ P(X ) given by f]µ = µ(f−1(A)) for all A ∈ B(X ).
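For intuition, $W_\infty$ admits a closed form for one-dimensional empirical measures of equal size, since the monotone (sorted) coupling is optimal there; the sketch below is restricted to that special case.

```python
import numpy as np

def w_inf_1d(x, y):
    """W_infinity between the empirical measures (1/n) sum_i delta_{x_i} and
    (1/n) sum_i delta_{y_i} on (R, |.|): the sorted (monotone) coupling is
    optimal in one dimension, so the distance is the largest matched displacement."""
    x, y = np.sort(np.asarray(x, dtype=float)), np.sort(np.asarray(y, dtype=float))
    assert len(x) == len(y), "equal sample sizes are assumed in this sketch"
    return float(np.max(np.abs(x - y)))

print(w_inf_1d([0.0, 0.2, 0.9], [0.1, 0.5, 1.0]))  # 0.3
```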
3 Adversarial Risk: Definitions and Related Work
We consider a binary classification setting on feature space $\mathcal{X}$. Let $p_0, p_1 \in \mathcal{P}(\mathcal{X})$ be the data-generating distributions for labels 0 and 1, respectively. Let the prior probabilities for labels 0 and 1 be in the ratio $T:1$, where we assume $T \geq 1$ without loss of generality. For a space of classifiers parametrized by $w \in \mathcal{W}$ and a loss function $\ell : (\mathcal{X}\times\mathcal{Y})\times\mathcal{W} \to [0,\infty)$, the adversarial risk of a classifier $w \in \mathcal{W}$ under an adversarial budget of $\varepsilon \geq 0$ is defined as [24, 33],
$$R^{\oplus}_{\varepsilon}(\ell, w) = \mathbb{E}_{(x,y)}\Big[\sup_{d(x,x')\leq\varepsilon} \ell((x', y), w)\Big]. \qquad (2)$$
If the loss function $\ell(\cdot, w)$ is measurable, upper semi-continuous and bounded above for all $w \in \mathcal{W}$, [26] show that $R^{\oplus}_{\varepsilon}(\ell, w)$ is well-defined. But in general, it may not be so. A case of special interest is the 0-1 loss function with non-parametric classifiers of the form $f_A(x) := 1\{x \in A\}$ where $A \in \mathcal{B}(\mathcal{X})$. In this case, $\ell_{0/1}((x,y), A) = 1\{x \in A, y = 0\} + 1\{x \in A^c, y = 1\}$. Hence,
$$R^{\oplus}_{\varepsilon}(\ell_{0/1}, A) = \frac{T}{T+1}\,\mathbb{E}_{p_0}\Big[\sup_{d(x,x')\leq\varepsilon} 1\{x' \in A\}\Big] + \frac{1}{T+1}\,\mathbb{E}_{p_1}\Big[\sup_{d(x,x')\leq\varepsilon} 1\{x' \in A^c\}\Big]$$
$$= \frac{T}{T+1}\, p_0(A^{\oplus\varepsilon}) + \frac{1}{T+1}\, p_1((A^c)^{\oplus\varepsilon}). \qquad (3)$$
A problem with the formulation in equation 3 is the ambiguity over the measurability of the sets A⊕ and (Ac)⊕ . Even when A ∈ B(X ), it is not guaranteed that A⊕ , (Ac)⊕ ∈ B(X ) (see Appendix for an example). Hence, R⊕ (`0/1, A) is not well-defined for all A ∈ B(X ). It is shown in [31] that R⊕ (`0/1, A) is well-defined when A is either closed or open, but its validity beyond that is unknown.
A simple fix to this measurability problem is to use closed set expansion instead of the Minkowski set expansion, as done in [25]. This leads to the following formulation.
$$R_{\varepsilon}(\ell_{0/1}, A) = \frac{T}{T+1}\, p_0(A^{\varepsilon}) + \frac{1}{T+1}\, p_1((A^c)^{\varepsilon}). \qquad (4)$$
The above definition is well-defined for any $A \in \mathcal{B}(\mathcal{X})$ because $A^{\varepsilon}$ and $(A^c)^{\varepsilon}$ are both closed and hence, measurable. However, under the above definition, a point $x \in A$ may be perturbed to $x' \in A^{\varepsilon}$ such that $d(x, x') > \varepsilon$. For example, when $A = (0, 1)$, we have $A^{\varepsilon} = [-\varepsilon, 1+\varepsilon]$ and an adversary may transport $x = \delta > 0$ to $x' = -\varepsilon$, violating the budget constraint at $x$.

Remark 1. The formulations in equations (2), (3) and (4) can give a strictly positive adversarial risk even for a “perfect” (i.e. Bayes optimal) classifier. This is consistent with the literature on adversarial examples where even a perfect classifier is forced to make errors in the presence of evasion attacks. These formulations of adversarial risk correspond to the “constant-in-the-ball” risk of [16] and the “corrupted-instance” risk in [9, 25]. Here, an adversarial risk of zero is only possible if the supports of $p_0$ and $p_1$ are non-overlapping and separated by at least $2\varepsilon$. This is not the case with other formulations of adversarial risk such as the “exact-in-the-ball” risk [16], and the “prediction-change” and “error-region” risks [9, 25]. We focus on the “corrupted-instance” family of risks in this work.
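As a toy illustration of the corrupted-instance risk (3): for a half-line decision region in one dimension, both expanded sets are again half-lines, so the risk reduces to two Gaussian tail probabilities. A minimal sketch (the Gaussian class conditionals, threshold and budget are illustrative choices):

```python
from scipy.stats import norm

def adversarial_risk_halfline(t, eps, T=1.0, mu0=-1.0, mu1=1.0, sigma=1.0):
    """Risk (3) for A = [t, inf) with p0 = N(mu0, sigma^2), p1 = N(mu1, sigma^2):
    the expansions are [t - eps, inf) and (-inf, t + eps]."""
    r0 = norm.sf(t - eps, loc=mu0, scale=sigma)    # p0 of the expanded region A
    r1 = norm.cdf(t + eps, loc=mu1, scale=sigma)   # p1 of the expanded region A^c
    return (T * r0 + r1) / (T + 1)

print(adversarial_risk_halfline(t=0.0, eps=0.0))   # standard Bayes error, ~0.159
print(adversarial_risk_halfline(t=0.0, eps=0.5))   # ~0.309 under budget eps = 0.5
```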
Another approach to defining adversarial risk is by explicitly defining the class of adversaries of budget as measurable transport maps f : X → X that push-forward the true data distribution such that no point is transported by more than a distance of ; i.e., d(x, f(x)) ≤ . The transport map-based adversarial risk [29] is formally defined as follows:
$$R^{\mathcal{F}}_{\varepsilon}(\ell_{0/1}, A) = \sup_{\substack{f_0, f_1 : \mathcal{X}\to\mathcal{X} \\ \forall x\in\mathcal{X},\ d(x, f_i(x))\leq\varepsilon}}\ \frac{T}{T+1}\, f_{0\sharp}p_0(A) + \frac{1}{T+1}\, f_{1\sharp}p_1(A^c). \qquad (5)$$
Yet another definition uses the robust hypothesis testing framework with W∞ uncertainty sets. In this approach, an adversary perturbs the true distribution pi to a corrupted distribution p′i such that W∞(pi, p ′ i) ≤ . From (1), this is equivalent to the existence of a coupling π ∈ Π(pi, p′i) such that ess sup(x,x′)∼π d(x, x ′) ≤ . The adversarial risk with such an adversary is given by
$$R^{\Gamma}_{\varepsilon}(\ell_{0/1}, A) = \sup_{W_\infty(p_1, p_1'),\, W_\infty(p_0, p_0')\leq\varepsilon}\ \frac{T}{T+1}\, p_0'(A) + \frac{1}{T+1}\, p_1'(A^c). \qquad (6)$$
Clearly, RF (`0/1, A) ≤ RΓ (`0/1, A), but conditions for equality were not studied in prior work. Moreover, their relation to set expansion-based definitions in (3) and (4) was also unknown.
Next we discuss some characterizations of optimal adversarial risk, defined as $R^{\oplus*}_{\varepsilon} := \inf_{A\in\mathcal{B}(\mathcal{X})} R^{\oplus}_{\varepsilon}(\ell_{0/1}, A)$. In [30, 2], it is shown that $R^{\oplus*}_{\varepsilon} = \frac{1}{2}[1 - D_{\varepsilon}(p_0, p_1)]$ for equal priors ($T = 1$), where $D_{\varepsilon}$ is an optimal transport cost defined as follows.

Definition 1 ($D_{\varepsilon}$ cost). Let $\mu, \nu \in \mathcal{P}(\mathcal{X})$ and let $\varepsilon \geq 0$. Let $c_{\varepsilon} : \mathcal{X}^2 \to \{0,1\}$ be such that $c_{\varepsilon}(x, x') = 1\{(x, x') \in \mathcal{X}\times\mathcal{X} : d(x, x') > 2\varepsilon\}$. Then $D_{\varepsilon}(\mu, \nu) = T_{c_{\varepsilon}}(\mu, \nu)$.
For $\varepsilon = 0$, $D_{\varepsilon}$ reduces to the total variation distance. While $D_0$ is a metric on $\mathcal{P}(\mathcal{X})$, $D_{\varepsilon}$ (for $\varepsilon > 0$) is neither a metric nor a pseudometric [31].
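When both measures are finitely supported, $D_\varepsilon$ is a small linear program; a minimal sketch using SciPy's LP solver (the support points and weights below are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def D_eps(x, a, y, b, eps):
    """D_eps(mu, nu) for discrete measures mu = sum_i a_i delta_{x_i} and
    nu = sum_j b_j delta_{y_j} on the real line: optimal transport under the
    0/1 cost c_eps(x, y) = 1{|x - y| > 2 eps}, solved as a linear program."""
    n, m = len(x), len(y)
    cost = (np.abs(np.subtract.outer(x, y)) > 2 * eps).astype(float).ravel()
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):                       # row marginals: sum_j pi_ij = a_i
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):                       # column marginals: sum_i pi_ij = b_j
        A_eq[n + j, j::m] = 1.0
    res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([a, b]), bounds=(0, None))
    return res.fun

x, a = np.array([0.0, 0.5, 1.0]), np.array([0.5, 0.25, 0.25])
y, b = np.array([0.1, 0.9]), np.array([0.5, 0.5])
print(D_eps(x, a, y, b, eps=0.1))   # 0.25: only the mass at 0.5 must travel farther than 2*eps
```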
Other formulations of optimal adversarial risk are inspired from game theory [29, 26, 3]. Consider a game between two players: (1) the adversary, whose action space is pairs of distributions $p_0', p_1' \in \mathcal{P}(\mathcal{X})$, and (2) the algorithm, whose action space is the space of decision regions of the form $A \in \mathcal{B}(\mathcal{X})$. For $T > 0$, define $r : \mathcal{B}(\mathcal{X})\times\mathcal{P}(\mathcal{X})\times\mathcal{P}(\mathcal{X}) \to [0,1]$ as $r(A, \mu, \nu) = \frac{T}{T+1}\mu(A) + \frac{1}{T+1}\nu(A^c)$. The payoff when the algorithm plays first is given by $\inf_{A\in\mathcal{B}(\mathcal{X})}\ \sup_{W_\infty(p_0,p_0'),\, W_\infty(p_1,p_1')\leq\varepsilon}\ r(A, p_0', p_1')$, and this quantity is interpreted as the optimal adversarial risk in this setup.
4 Well-Definedness of Adversarial Risk
As stated in Section 3, R⊕ (`0/1, A) may not be well-defined for some decision regions A ∈ B(X ) because of the non-measurability of the sets A⊕ and (Ac)⊕ . Specifically, we have the following lemma.
Lemma 4.1. For any > 0, there exists A ∈ B(X ) such that A⊕ /∈ B(X ).
In this section, we lay down the conditions under which the ambiguity can be resolved. We begin by presenting a Lemma that shows that A⊕ is an analytic set (i.e. a continuous image of a Borel set) whenever A is Borel. It is known that an analytic sets are universally measurable, i.e. they belong in B(X ), the universal completion of the Borel σ-algebra B(X ), and are measurable with respect to any finite measure defined on the complete measure space, (X ,B(X )). Lemma 4.2. Let A ∈ B(X ). Then, A⊕ is an analytic set. Consequently, A⊕ ∈ B(X ).
By virtue of the previous lemma, we have the following. Theorem 4.1. Let p0, p1 ∈ P(X ). Then for any A ∈ B(X ), R⊕ (`0/1, A) is well-defined.
For the special case of X = Rd, we can further strengthen Theorem 4.1 to include all Lebesgue measurable sets L(X ) instead of just Borel sets B(X ). For this, we use the concept of porous sets. Definition 2 (Porous set). A set E ⊆ X is said to be porous if there exists α ∈ (0, 1) and r0 > 0 such that for every r ∈ (0, r0] and every x ∈ X , there is an x′ ∈ X such that Bαr(x′) ⊆ Br(x)\E.
Porous sets are a subclass of nowhere dense sets. Importantly, λ(E) = 0 for any porous set E ⊆ Rd [47]. By the following lemma, the set difference between the closed/open set expansions is porous. Lemma 4.3. Let (X , d) = (Rd, ‖ · ‖) and A ∈ L(X ). Then E = A \A ) is porous.
Lemma 4.3 plays a crucial role in proving that A⊕ ∈ L(X ) whenever A ∈ L(X ). We recall that A⊕ is the Minkowski sum of A with the closed -ball. In general, the Minkowski sum of two Lebesgue measurable sets is not always Lebesgue measurable [34, 14]. So the fact that one of them is a closed ball in case of A⊕ is important. In the following theorem, we use Lemma 4.3 to prove the measurability of A⊕ and in turn prove that R⊕ (`0/1, A) is well-defined for any A ∈ L(X ). Theorem 4.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) is well-defined. If, in addition, p0 and p1 are absolutely continuous with respect to the Lebesgue measure, then R⊕ (`0/1, A) = R (`0/1, A).
5 Equivalence with∞-Wasserstein Robustness
In this section, we show the conditions under which R⊕ (`0/1, A) is equivalent to other notions of adversarial risk based on transport maps and W∞ robustness.
5.1 W∞ Robustness in Polish Spaces via Measurable Selections
We begin by presenting a lemma that links the measure of the $\varepsilon$-Minkowski set expansion to the worst-case measure over a $W_\infty$ probability ball of radius $\varepsilon$.

Lemma 5.1. Let $\mu \in \mathcal{P}(\mathcal{X})$ and $A \in \mathcal{B}(\mathcal{X})$. Then $\sup_{W_\infty(\mu,\mu')\leq\varepsilon} \mu'(A) = \mu(A^{\oplus\varepsilon})$. Moreover, the supremum in the previous equation is achieved by a $\mu^* \in \mathcal{P}(\mathcal{X})$ that is induced from $\mu$ via a measurable transport map $\phi : \mathcal{X}\to\mathcal{X}$ (i.e. $\mu^* = \phi_{\sharp}\mu$) satisfying $d(x, \phi(x)) \leq \varepsilon$ for all $x \in \mathcal{X}$.
A crucial step in the proof of Lemma 5.1 is finding a measurable transport map φ such that φ−1(A) = A⊕ and d(x, φ(x)) ≤ for all x ∈ X . In the following theorem, we use Lemma 5.1 to establish the equivalence between three different notions of adversarial risk introduced in section 3. Theorem 5.1. Let p0, p1 ∈ P(X ) and A ∈ B(X ). Then R⊕ (`0/1, A) = RF (`0/1, A) = RΓ (`0/1, A). In addition, the supremum over f0 and f1 in RF (`0/1, A) is attained. Similarly, the supremum over p′0 and p ′ 1 in RΓ (`0/1, A) is attained.
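For an empirical measure, the optimal perturbation in Lemma 5.1 is explicit: every sample point within distance $\varepsilon$ of $A$ is moved to its nearest point in $A$, and all other points are left untouched. A minimal one-dimensional sketch (the sample, the set $A$ and the budget are illustrative):

```python
import numpy as np

def dist_to_intervals(x, intervals):
    """d(x, A) for A a finite union of closed intervals on the real line."""
    return min(max(a - x, 0.0, x - b) for a, b in intervals)

def worst_case_mass(sample, intervals, eps):
    """mu(A Minkowski-expanded by eps) for the empirical measure mu of `sample`;
    by Lemma 5.1 this is the largest mass any W_infinity-eps perturbation of mu
    can place on A, attained by projecting nearby points onto A."""
    d = np.array([dist_to_intervals(x, intervals) for x in sample])
    return float(np.mean(d <= eps))

rng = np.random.default_rng(0)
sample = rng.uniform(0, 3, size=1000)
print(worst_case_mass(sample, intervals=[(0.5, 1.0), (2.0, 2.4)], eps=0.1))  # ~0.43
```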
5.2 W∞ Robustness in Rd via 2-Alternating Capacities
In this subsection, we establish a connection between adversarial risk and Choquet capacities [7] in Rd. This connection allows us to extend Theorem 5.1 from Borel sets to the broader class of Lebesgue measurable sets. We will again use this connection for proving minimax theorems and existence of Nash equilibria in Section 7.1. We begin with the following definitions. Definition 3 (Capacity). A set function v : B(X )→ [0, 1] is a capacity if it satisfies the following conditions: (1) v(∅) = 0 and v(X ) = 1; (2) For A,B ∈ B(X ), A ⊆ B =⇒ v(A) ≤ v(B); (3) An ↑ A =⇒ v(An) ↑ v(A); and (4) Fn ↓ F , Fn closed =⇒ v(Fn) ↓ v(F ). Definition 4 (2-Alternating Capacity). A capacity v defined on the measure space (X ,B(X )) is called 2-alternating if v(A ∪B) + v(A ∩B) ≤ v(A) + v(B) for all A,B ∈ B(X ).
For any compact set of probability measures Ξ ⊆ P(X ), the upper probability v(A) = supµ∈Ξ µ(A) is a capacity [19]. The upper probability of -neighborhoods of a µ ∈ P(X ) defined using either the total variation metric or the Levy-Prokhorov metric can be shown to be a 2-alternating capacity [19]. The following lemma shows that A 7→ µ(A⊕ ) is a 2-alternating capacity under some conditions. Lemma 5.2. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ) and let ≥ 0. Define a set function v on X such that for any A ∈ L(X ), v(A) := µ(A⊕ ). Then v is a 2-alternating capacity.
Now we relate the capacity defined in Lemma 5.2 to the W∞ metric. Since the -neighborhood of a µ ∈ P(X ) in W∞ metric is a compact set of probability measures [46], the upper probability over this W∞ -ball is a capacity. The following lemma shows that it is a 2-alternating capacity, and identifies it with the capacity defined in Lemma 5.2. Lemma 5.3. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ). Then for any A ∈ L(X ), supW∞(µ,µ′)≤ µ ′(A) = µ(A⊕ ). Moreover, the supremum in the previous equation is attained.
Lemma 5.3 plays a similar role to Lemma 5.1 in proving the following equivalence between adversarial robustness and W∞ robustness. Theorem 5.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) = RΓ (`0/1, A), and the supremum over p ′ 0 and p ′ 1 in RΓ (`0/1, A) is attained.
The proof follows by converting the expression for RΓ into one for R⊕ using Lemma 5.3. Unlike Theorem 5.1, Theorem 5.2 does not show the equivalence of RF (`0/1, A) with the other definitions under the relaxed assumption of A ∈ L(X ). This is because Lemma 5.3 does not provide a pushforward map φ such that µ∗ = φ]µ with µ∗ attaining the supremum over the W∞ ball.
6 Optimal Adversarial Risk via Generalized Strassen’s Theorem
In section 5, we analyzed adversarial risk for a specific decision region A ∈ B(X ). In this section, we analyze infimum of adversarial risk over all possible decision regions; i.e., the optimal adversarial risk. We show that optimal adversarial risk in binary classification with unequal priors is characterized by an unbalanced optimal transport cost between data-generating distributions. Our main technical lemma generalizes Strassen’s theorem to unbalanced optimal transport. We present this result in subsection 6.1 and present our characterization of optimal adversarial risk in subsection 6.2.
6.1 Unbalanced Optimal Transport & Generalized Strassen’s Theorem
Recall from Section 3 that the optimal transport cost D characterizes the optimal adversarial risk in binary classification for equal priors. The following result gives an alternative characterization of D . Proposition 6.1 (Strassen’s theorem). [Corollary 1.28 in [42]] Let µ, ν ∈ P(X ). Let ≥ 0. Then
$$\sup_{A\in\mathcal{B}(\mathcal{X})}\ \mu(A) - \nu(A^{2\varepsilon}) = D_{\varepsilon}(\mu, \nu). \qquad (7)$$
Proposition 6.1 is a special case of Kantorovich-Rubinstein duality [42] applied to {0, 1}-valued cost functions. We now generalize this result to measures with unequal masses. We begin with some definitions that generalize the concepts we introduced in subsection 2.2.
Let µ, ν ∈ M(X) be such that µ(X) ≤ ν(X). A coupling between µ and ν is a measure π ∈ M(X²) such that for any A ∈ B(X), π(A × X) = µ(A) and π(X × A) ≤ ν(A). The set Π(µ, ν) is defined to be the set of all couplings between µ and ν. For a cost function c : X² → [0,∞), the optimal transport cost between µ and ν under c is defined as T_c(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X²} c(x, x′) dπ(x, x′).
Theorem 6.1 (Generalized Strassen's theorem). Let µ, ν ∈ M(X) be such that 0 < M = µ(X) ≤ ν(X). Let ε > 0. Define c_ε : X² → {0, 1} as c_ε(x, x′) = 1{(x, x′) ∈ X² : d(x, x′) > 2ε}. Then
sup_{A∈B(X)} µ(A) − ν(A^{2ε}) = T_{c_ε}(µ, ν) = M · inf_{ν′∈P(X) : ν′ ≼ ν/M} D_ε(µ/M, ν′). (8)
Moreover, the infimum on the right hand side is attained. (Equivalently, there is a coupling π ∈ Π(µ, ν) that attains the unbalanced optimal transport cost T_{c_ε}(µ, ν).)
Our proof of Theorem 6.1 leverages strong duality in linear programming. We first establish (8) for discrete measures on a finite support. We then apply the discrete result on a sequence of measures supported on a countable dense subset of the Polish space X . Using the tightness of finite measures on X , we construct an optimal coupling that achieves the cost Tc (µ, ν) in (8). We then show that the constructed coupling satisfies (8). This proof strategy is adapted from the works of [12] and [32].
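For intuition, here is a minimal numerical sketch (ours, with synthetic data; all names and parameter values are made up) of the discrete, finite-support case that the proof starts from: T_{c_ε}(µ, ν) is a small linear program over couplings whose first marginal is exactly µ and whose second marginal is dominated by ν, and for discrete µ it can be compared against a brute-force evaluation of sup_A µ(A) − ν(A^{2ε}) over subsets of the support of µ.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
eps = 0.3
m, n = 5, 6
x, y = rng.uniform(0, 3, m), rng.uniform(0, 3, n)       # supports on R
mu = rng.dirichlet(np.ones(m)) * 0.7                     # total mass 0.7
nu = rng.dirichlet(np.ones(n)) * 1.0                     # total mass 1.0 (>= mass of mu)

cost = (np.abs(x[:, None] - y[None, :]) > 2 * eps).astype(float)   # c_eps

# LP over couplings pi: row sums equal mu exactly, column sums at most nu.
A_eq = np.zeros((m, m * n)); A_ub = np.zeros((n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_ub[j, j::n] = 1.0
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=mu, A_ub=A_ub, b_ub=nu,
              bounds=(0, None), method="highs")
T_c = res.fun

# Brute force: sup_A mu(A) - nu(A^{2 eps}) over subsets A of supp(mu).
best = 0.0
for r in range(m + 1):
    for S in itertools.combinations(range(m), r):
        muA = mu[list(S)].sum() if S else 0.0
        if S:
            close = (np.abs(y[None, :] - x[list(S), None]) <= 2 * eps).any(axis=0)
        else:
            close = np.zeros(n, dtype=bool)
        best = max(best, muA - nu[close].sum())

print(f"T_c_eps(mu, nu) = {T_c:.4f},  sup_A mu(A) - nu(A^2eps) = {best:.4f}")
```

Enlarging A beyond the support of µ can only increase ν(A^{2ε}), which is why the brute-force search over subsets of supp(µ) suffices in the discrete case.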
6.2 Optimal Adversarial Risk for Unequal Priors
Generalized Strassen's theorem involves closed set expansions. The following lemma allows us to switch to Minkowski set expansions.
Lemma 6.1. Let µ, ν ∈ M(X) and let ε ≥ 0. Then sup_{A∈B(X)} µ(A) − ν(A^{2ε}) = sup_{A∈B(X)} µ(A^{⊖ε}) − ν(A⊕ε), where A^{⊖ε} := ((A^c)⊕ε)^c. Moreover, the supremum on the right hand side of the above equality can be replaced by a supremum over closed sets.
Using the above lemma and the generalized Strassen's theorem, we show the following result on optimal adversarial risk for unequal priors, generalizing the result of [30, 2].
Theorem 6.2. Let p0, p1 ∈ P(X) and let ε ≥ 0. Then,
inf_{A∈B(X)} R⊕ε(ℓ0/1, A) = (1/(T + 1)) [ 1 − inf_{q∈P(X) : q ≼ T p0} D_ε(q, p1) ]. (9)
Moreover, the infimum on the left hand side can be replaced by an infimum over closed sets.
The proof follows by using Lemma 6.1 to convert the expression with Minkowski expansion to one with closed expansions, followed by an application of Theorem 6.1 to arrive at the final optimal transport-based expression. Theorem 6.2 extends the result of [31] in two ways: (1) the infimum is
taken over all sets for which R⊕ (`0/1, A) is well-defined, instead of restricting to closed sets, and (2) the priors on both labels can be unequal. We also note that for (X , d) = (Rd, ‖ · ‖), (9) holds with the infimum on the left hand side taken over all A ∈ L(X ).
7 Minimax Theorems and Nash Equilibria
In this section, we revisit the zero-sum game between the adversary and the algorithm introduced in section 3. Recall that for A ∈ B(X ) and p′0, p′1 ∈ P(X ), the payoff function is given by
r(A, p′0, p′1) = (T/(T + 1)) p′0(A) + (1/(T + 1)) p′1(A^c). (10)
The max-min inequality gives us
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X)} r(A, p′0, p′1) ≤ inf_{A∈B(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (11)
If the inequality in (11) is an equality, we say that the game has zero duality gap, and admits a value equal to either expression in (11). Then there is no advantage to a player making the first move. Our minimax theorems establish such an equality. If in addition to having an equality in (11), there exist p∗0, p ∗ 1 ∈ P(X ) that achieve the supremum on the left-hand side and A∗ ∈ B(X ) that achieves the infimum on the right-hand side, we say that ((p∗0, p ∗ 1), A ∗) is a pure Nash equilibrium of the game.
In Section 7.1, we prove the minimax theorem and the existence of a pure Nash equilibrium in Rd using the theory of 2-alternating capacities [19] and the relation to adversarial risk from Section 5.2. Section 7.2 extends these results to more general Polish spaces with a “midpoint property."
7.1 Minimax Theorem in Rd via 2-Alternating Capacities
The following theorem proves the minimax equality and the existence of a Nash equilibrium for the adversarial robustness game in Rd.
Theorem 7.1 (Minimax theorem in Rd). Let (X, d) = (Rd, ‖·‖). Let p0, p1 ∈ P(X) and let ε ≥ 0. Define r as in (10). Then,
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈L(X)} r(A, p′0, p′1) = inf_{A∈L(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (12)
Moreover, there exist p∗0, p∗1 ∈ P(X) and A∗ ∈ L(X) that achieve the supremum and infimum on the left and right hand sides of the above equation.
Crucial to the proof of Theorem 7.1 is Lemma 5.2, which shows that the set functions A ↦ p0(A⊕ε) and A^c ↦ p1((A^c)⊕ε) are 2-alternating capacities. The same proof technique is not applicable in general Polish spaces, because the map A ↦ µ(A⊕ε) is not a capacity for a general µ ∈ P(X): A⊕ε is not measurable for all A ∈ B(X).
7.2 Minimax Theorem in Polish Spaces via Optimal Transport
We now extend the minimax theorem from Rd to general Polish spaces with the following property. Definition 5 (Midpoint property). A metric space (X , d) is said to have the midpoint property if for every x1, x2 ∈ X , there exists x ∈ X such that, d(x1, x) = d(x, x2) = d(x1, x2)/2.
Any normed vector space with distance defined as d(x, x′) = ‖x − x′‖ satisfies the midpoint property. An example of a metric space without this property is the discrete metric space where d(x, x′) = 1{x ≠ x′}. The midpoint property plays a crucial role in proving the following theorem, which shows that the D_ε transport cost between two distributions is the shortest total variation distance between their ε-neighborhoods in the W∞ metric. A similar result was also presented in [11].
Theorem 7.2 (D_ε as shortest D_TV between W∞ balls). Let (X, d) have the midpoint property. Let µ, ν ∈ P(X) and let ε ≥ 0. Then D_ε(µ, ν) = inf_{W∞(µ,µ′)≤ε, W∞(ν,ν′)≤ε} D_TV(µ′, ν′). Moreover, the infimum over D_TV in the above equation is attained.
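The following sketch (our own illustration, not from the paper; the empirical measures and the matching-based formula for D_ε between uniform n-point empirical measures are assumptions made for the example) mimics the construction behind Theorem 7.2 on R: matched atoms are moved to their midpoints, which is a W∞ perturbation of size at most ε on each side, and the total variation distance of the perturbed pair then equals D_ε.

```python
import numpy as np
from collections import Counter
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

rng = np.random.default_rng(2)
eps, n = 0.25, 8
xs = np.sort(rng.uniform(0, 2, n))          # atoms of mu (mass 1/n each)
ys = np.sort(rng.uniform(0, 2, n))          # atoms of nu (mass 1/n each)

# D_eps between two n-point empirical measures: 1 - (max matching)/n,
# where x_i may be matched to y_j only if |x_i - y_j| <= 2*eps.
adj = csr_matrix((np.abs(xs[:, None] - ys[None, :]) <= 2 * eps).astype(int))
match = maximum_bipartite_matching(adj, perm_type="column")   # match[i] = j or -1
D_eps = 1.0 - np.count_nonzero(match >= 0) / n

# Constructive witness for Theorem 7.2: move each matched pair to its midpoint
# (each point moves by at most eps); leave unmatched atoms in place.
mu_atoms, nu_atoms = list(xs), list(ys)
for i in range(n):
    j = match[i]
    if j >= 0:
        mid = 0.5 * (xs[i] + ys[j])
        mu_atoms[i] = mid
        nu_atoms[j] = mid
tv = 1.0 - sum((Counter(np.round(mu_atoms, 9)) &
                Counter(np.round(nu_atoms, 9))).values()) / n
print(f"D_eps = {D_eps:.3f},  TV between perturbed pair = {tv:.3f}")
```

The midpoint property is exactly what makes this construction possible: each matched pair of atoms at distance at most 2ε has a common point within ε of both.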
The following theorem uses Theorem 7.2 to prove the minimax equality and the existence of a Nash equilibrium for any Polish space with the midpoint property for the case of equal priors.
Theorem 7.3 (Minimax theorem for equal priors). Let (X, d) have the midpoint property. Let p0, p1 ∈ P(X) and let ε ≥ 0. Define r as in (10) with T = 1. Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X)} r(A, p′0, p′1) = inf_{A∈B(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (13)
Moreover, there exist p∗0, p∗1 ∈ P(X) that achieve the supremum on the left hand side.
Proof. For µ ∈ P(X), let WB_ε(µ) denote the set of all µ′ ∈ P(X) such that W∞(µ, µ′) ≤ ε. Then
inf_{A∈B(X)} sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} r(A, p′0, p′1) = inf_{A∈B(X)} RΓε(ℓ0/1, A) (i)= inf_{A∈B(X)} R⊕ε(ℓ0/1, A) (ii)= (1/2)[1 − D_ε(p0, p1)],
sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} inf_{A∈B(X)} r(A, p′0, p′1) (iii)= sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} (1/2)[1 − D_TV(p′0, p′1)] = (1/2)[1 − inf_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} D_TV(p′0, p′1)],
where (i) follows from Theorem 5.1, (ii) from Theorem 6.2, and (iii) again from Theorem 6.2 with ε = 0. The expressions at the right extremes of the two displays above are equal by Theorem 7.2, which proves (13). The existence of p∗0, p∗1 ∈ P(X) follows from Theorem 7.2.
To prove the minimax theorem for unequal priors, we need the following generalization of Theorem 7.2 to finite measures of unequal mass.
Lemma 7.1. Let p0, p1 ∈ P(X) and let ε ≥ 0. Then for T ≥ 1,
inf_{q∈P(X) : q ≼ T p0} D_ε(q, p1) = inf_{q∈P(X) : q ≼ T p0} inf_{W∞(q,q′)≤ε, W∞(p1,p′1)≤ε} D_TV(q′, p′1)
 = inf_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{q′∈P(X) : q′ ≼ T p′0} D_TV(q′, p′1). (14)
Now, we prove the minimax equality for unequal priors.
Theorem 7.4 (Minimax theorem for unequal priors). Let (X, d) have the midpoint property. Let p0, p1 ∈ P(X) and let ε ≥ 0. For T > 0, define r as in (10). Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X)} r(A, p′0, p′1) = inf_{A∈B(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (15)
The proof uses: (1) the characterization of inf-sup payoff in terms of unbalanced optimal transport using Theorem 5.1; (2) Lemma 7.1; and (3) the minimax equality of Theorem 7.3 for equal priors. Remark 2. Unlike Theorem 7.1, Theorems 7.3 and 7.4 do not guarantee the existence of an optimal decision region A∗. While Theorem 7.3 guarantees the existence of worst-case pair of perturbed distributions p∗0, p ∗ 1, Theorem 7.4 does not do so. Nevertheless, an approximate pure Nash equilibrium exists in all the cases. This is in sharp contrast with the non-existence of Nash equilibrium proven in [29] (which considers a different notion of adversarial perturbations). Remark 3. A recent work [26] shows the existence of mixed Nash equilibrium for randomized classifiers parametrized by points in a Polish space (see also [29, 3]). Fan’s minimax theorem used in this result is inapplicable in our setting of non-parametric, decision region-based classifiers. Instead, we applied the theory of Choquet capacities (in Rd) and generalized Strassen’s duality theorem (in Polish spaces), which is novel to the best of our knowledge.
8 Discussion
We examined different notions of adversarial risk in a binary classification setting with 0-1 loss function and laid down the conditions under which these definitions are equivalent. By verifying the conditions in Sections 4 and 5, researchers may use different definitions interchangeably. Several definitions have also been proposed for adversarial risk under general loss functions [31, 26] using analogous constructions like transport maps, couplings and suprema over -neighborhoods. Extending our equivalence results to more general loss functions is left for future work.
(Figure caption) Diagrams A–E summarize the results of Section 6 and Section 7. For equal priors (T = 1), A and B denote two ways of obtaining the optimal adversarial risk R∗⊕ε: (1) A, the D_ε cost between the true label distributions p0 and p1, and (2) B, the shortest total variation distance between ∞-Wasserstein balls of radius ε around p0 and p1. For unequal priors (T > 1), C, D and E denote three equivalent ways of obtaining R∗⊕ε. The black dotted balls denote ∞-Wasserstein balls and the blue dashed balls denote sets defined using stochastic domination. The order in which the two types of balls appear around p0 is reversed between D and E.
We analyzed optimal adversarial risk for (non-parametric) decision region-based classifiers. Using a formulation of optimal transport between finite measures of unequal mass, we extended the optimal transport based characterization of adversarial risk of [30, 2] to unequal priors by generalizing Strassen's theorem. This may find applications in the study of excess cost optimal transport [45, 44]. A recent work [39] obtains a different characterization of optimal adversarial risk using optimal transport on the product space X × Y, where Y is the label space. Further, they show the evolution of the optimal classifier A∗ as ε grows, in terms of a mean curvature flow. This raises an interesting question on the evolution of the optimal adversarial distributions p∗0, p∗1 ∈ P(X) with ε.
We proved a minimax theorem for adversarial robustness game and the existence of a Nash equilibrium. We constructed the worst-case pair of distributions p∗0, p ∗ 1 ∈ P(X ) in terms of true data distributions and showed that their total variation distance gives the optimal adversarial risk. Identifying worst case distributions could lead to a new approach to developing robust algorithms.
We used Choquet capacities for results in Rd and measurable selections in Polish spaces. Specifically, we showed that the measure of the ε-Minkowski expansion is a 2-alternating capacity. This connection could help generalize our results to total variation and Prokhorov distance based contaminations.
Limitations: We largely focused on the binary classification setup with the 0-1 loss function. While it may be possible to extend our results on measurability and the relation to ∞-Wasserstein distributional robustness to more general loss functions and a multi-class setup, it is unclear how our results on generalized Strassen's theorem and Nash equilibria can be extended further. Our results on various equivalent formulations of optimal adversarial risk are specific to adversarial perturbations (or equivalently, ∞-Wasserstein distributional perturbations), and we did not investigate more general perturbation models. | 1. What is the focus of the paper regarding adversarial risk?
2. What are the strengths of the proposed approach, particularly in its theoretical contributions?
3. Do you have any questions or concerns about the paper's methodology or findings?
4. How does the reviewer assess the clarity, significance, originality, and quality of the paper's content?
5. Are there any potential limitations or areas for improvement in the paper's approach or results? | Summary Of The Paper
Review | Summary Of The Paper
This paper formalizes different definitions of adversarial risk for binary classification and studies the equivalence between those definitions. Moreover, the authors analyze optimal adversarial risk through the lens of optimal transport, which leads to the existence of a Nash equilibrium of the adversarial robustness game. The contribution of this paper is theoretical.
Review
Originality
This paper is original and novel as far as I can see. It seems that one of the closest works to this paper is M. S. Pydi and V. Jog, "Adversarial risk via optimal transport and optimal couplings." The authors discuss their advances over this one.
Quality
I did not notice wrong statements in the main paper. However, I only read the proofs in the main paper.
My questions are the following.
This paper defines the adversarial risk as various set functions, i.e., loss functions w.r.t. the decision region A. However, it is more general to define the adversarial risk w.r.t. the model f itself. For example, given the input space X, output space Y, the data-generating distribution p ∈ P(X × Y), and a cost function c : Y × Y → [0, +∞), the adversarial loss function can be modeled by R(f) = ∫ sup_{x′ ∈ B_ε(x)} c(f(x′), y) dp(x, y). For binary classification, any function f : X → R defines a decision region by A = f^{−1}((0, +∞)). By defining c(·, ·) to be the ℓ0/1 loss based on the decision region, it seems that this recovers the R⊕ε loss. Moreover, the above R(f) can be applied to other settings, e.g., multi-class classification and regression.
Denoting ℓ_ε(x, y) = sup_{x′ ∈ B_ε(x)} c(f(x′), y), studying the well-definedness of R(f) reduces to studying the measurability of ℓ_ε and the integrability of ℓ_ε w.r.t. the data-generating measure p. Therefore, does studying the well-definedness of R(f) give the well-definedness of the set-based risk function? Moreover, what are the benefits of studying the set-based risks over the function-based risk R(f)?
For Theorem 6.1, it is mentioned that the infimum on the right hand side is attained. I'm curious whether the supremum on the left hand side is also attained? It seems that this determines whether the optimal A in Theorem 6.2 is attained.
For non-theory people, the contribution of this paper may be limited. For example, an empirical person may think that the measurability is never a problem in real-world machine learning. Since NeurIPS is a big conference where attenders have diverse background, is there any point of this paper that may intrigue an empirical person?
Clarity
This paper is rigorous and well-written as far as I can see.
Significance
It is important in the sense that this paper rigorously defines several adversarial risk functions, filling the gap of previous work and building the ground for future work. |
NIPS | Title
The Many Faces of Adversarial Risk
Abstract
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data. Numerous definitions of adversarial risk—not all mathematically rigorous and differing subtly in the details—have appeared in the literature. In this paper, we revisit these definitions, make them rigorous, and critically examine their similarities and differences. Our technical tools derive from optimal transport, robust statistics, functional analysis, and game theory. Our contributions include the following: generalizing Strassen’s theorem to the unbalanced optimal transport setting with applications to adversarial classification with unequal priors; showing an equivalence between adversarial robustness and robust hypothesis testing with∞-Wasserstein uncertainty sets; proving the existence of a pure Nash equilibrium in the two-player game between the adversary and the algorithm; and characterizing adversarial risk by the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets. Our results generalize and deepen recently discovered connections between optimal transport and adversarial robustness and reveal new connections to Choquet capacities and game theory.
1 Introduction
Neural networks are known to be vulnerable to adversarial attacks, which are imperceptible perturbations to input data that maximize loss [38, 15, 5]. Developing algorithms resistant to such attacks has received considerable attention in recent years [8, 28, 24, 20], motivated by safety-critical applications such as autonomous driving [18, 27], medical imaging [17, 23, 22] and law [21, 6].
A classification algorithm with high accuracy (low risk) in the absence of an adversary may have poor accuracy (high risk) when an adversary is present. Thus, a modified notion known as adversarial risk is used to quantify the adversarial robustness of algorithms. Algorithms that minimize adversarial risk are deemed robust. Procedures for finding them have been effective in practice [24, 41, 28], spurring numerous theoretical investigations into adversarial risk and its minimizers.
There is no universally agreed upon definition of adversarial risk. Even the simplest setting of binary classification in Rd with an `2 adversary admits various definitions involving set expansions [9, 16], transport maps [29], Markov kernels [31], and couplings [26]. These works broadly interpret adversarial risk as a measure of robustness to small perturbations, but their definitions differ in subtle details such as the class of adversaries and algorithms considered, budget constraints placed on the adversary, assumptions on the loss function, and the geometry of decision boundaries.
Optimal adversarial risk is most commonly defined as the minimax risk under adversarial contamination [24, 33]. Other notable characterizations include an optimal transport cost between data generating distributions in [30, 2, 10, 11], the optimal value of a distributionally robust optimization problem [36, 35, 40], and the value of a two-player zero-sum game [26, 29, 3, 4].
The diversity of definitions for adversarial risk makes it challenging to compare approaches. Moreover, not all approaches are rigorous. For instance, the classes of adversarial strategies and classifier
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
algorithms are often unclear, and issues of measurability are ignored. Although this may be harmless for applied research, it has led to incorrect proofs and insufficient assumptions in some theoretical works; a mathematically rigorous foundation for adversarial risk is essential for future research.
In this paper, we examine various notions of adversarial risk for binary classification in a nonparametric setting, where the decision boundary (or decision region) of a classifier is an arbitrary subset of the input space. We present rigorous definitions of adversarial risk and identify conditions under which these definitions are equivalent. We consider the general setting of Polish spaces (complete, separable metric spaces), and present stronger results for the Euclidean space (Rd). Our contributions are as follows:
• We examine the definition of adversarial risk based on set expansions. For Polish spaces, we observe that adversarial risk is not Borel measurable, and hence, not well-defined when the decision region is an arbitrary set. We show that the problem can be resolved by considering a Polish space equipped with the universal completion of the Borel σ-algebra and restricting the decision regions to Borel sets. For the Euclidean space with the Lebesgue σ-algebra, we show that adversarial risk is well-defined for any Lebesgue measurable decision region. Our key lemma (Lemma 4.3) shows that the Lebesgue σ-algebra is preferred over the Borel σ-algebra because set expansions are Lebesgue measurable but not necessarily Borel measurable. These results are contained in Section 4.
• We show that the definition of adversarial risk using set expansions is identical to a notion of risk that appears in robust hypothesis testing with∞-Wasserstein uncertainty sets. We prove this result in Polish spaces using the theory of measurable selections [1, 43]. In Rd, we are able to use the powerful theory of Choquet capacities [7] (in particular, Huber and Strassen’s 2-alternating capacities [19]) to establish results of a similar nature. These results are contained in Section 5.
• We consider the binary classification setup with unequal priors and show (under suitable assumptions) that the optimal adversarial risk from the above definitions is characterized by an unbalanced optimal transport cost between data-generating distributions. For both Polish spaces and Rd, the main tool we use is Theorem 6.1 in which we generalize a classical result of Strassen on excess-cost optimal transport [37, 42] from probability measures to finite measures with possibly unequal mass. This generalizes results of [31, 2] on binary classification, which were only for equal priors. These results are contained in Section 6.
• We consider the setup of a zero-sum game between the adversary and the algorithm. We show that the value of this game (adversarial risk) is equal to the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets centered around true data-generating distributions. We prove the existence of a pure Nash equilibrium in this game for Rd and for Polish spaces with a midpoint property. This extends/strengthens the results of [26, 29, 3] to non-parametric classifiers. These results are contained in Section 7.
The paper is organized as follows: In Section 2, we present preliminary definitions from optimal transport and metric space topology. In Section 3, we discuss various definitions of adversarial risk and present more related work. Sections 4, 5, 6 and 7 contain our main contributions summarized above. We conclude the paper in Section 8 and discuss future research directions.
We emphasize that rectifying measure theoretic issues with existing formulations of adversarial risk is one of our contributions, but not the main focus of our paper. We start our presentation with fixing measurability and well-definedness (in Section 4) because otherwise we will not be able to rigorously present our main results in the subsequent sections, namely: relation to robust hypothesis testing and Choquet capacities in section 5, generalizing the results of [2, 30] in section, 6 proving minimax theorems and existence of Nash equilibria and extending the results of [26, 3, 29] in section 7.
Notation: Throughout the paper, we use X to denote a Polish space (a complete, separable metric space) with metric d and Borel σ-algebra B(X ). For x ∈ X and r ≥ 0, let Br(x) denote the closed ball of radius r centered at x. We use P(X ) andM(X ) to denote the space of probability measures and finite measures defined on the measure space (X ,B(X )), respectively. Let B(X ) denote the universal completion of B(X ). Let P(X ) andM(X ) denote the space of probability measures and finite measures defined on the complete measure space (X ,B(X )). For µ, ν ∈ M(X ), we say ν dominates µ if µ(A) ≤ ν(A) for all A ∈ B(X ) and write µ ν. When X is Rd, we use L(X ) to
denote the Lebesgue σ-algebra and λ to denote the d-dimensional Lebesgue measure. Note that L(X ) = B(X ) for X = Rd. For a positive integer n, we use [n] to denote the finite set {1, . . . , n}.
2 Preliminaries
2.1 Metric Space Topology
We introduce three different notions of set expansions. For ε ≥ 0 and A ∈ B(X), the ε-Minkowski expansion of A is given by A⊕ε := ∪_{a∈A} B_ε(a). The ε-closed expansion of A is defined as A^ε := {x ∈ X : d(x, A) ≤ ε}, where d(x, A) = inf_{a∈A} d(x, a). The ε-open expansion of A is defined as A^{ε)} := {x ∈ X : d(x, A) < ε}. We use the notation A^{−ε} to denote ((A^c)^ε)^c. Similarly, A^{⊖ε} := ((A^c)⊕ε)^c. For example, consider the set A = (0, 1] in the space (X, d) = (R, |·|) and ε > 0. Then A⊕ε = (−ε, 1 + ε], A^ε = [−ε, 1 + ε] and A^{ε)} = (−ε, 1 + ε). For any A ∈ B(X), A^ε is closed and A^{ε)} is open. Hence, A^ε, A^{ε)} ∈ B(X). Moreover, A^{ε)} ⊆ A⊕ε ⊆ A^ε. However, A⊕ε may not be in B(X) (see Appendix for an example). In general, the Minkowski sum of two Borel sets need not be Borel [13], and that of two Lebesgue measurable sets need not be Lebesgue measurable [34].
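As a quick illustration (ours, not from the paper; the grid resolution and the interval are made-up choices), the three expansions of the interval A = (0, 1] can be compared on a grid, and the inclusions A^{ε)} ⊆ A⊕ε ⊆ A^ε are visible directly.

```python
import numpy as np

eps = 0.25
grid = np.linspace(-1.0, 2.0, 60001)               # fine grid on [-1, 2]; A = (0, 1]

# d(x, A) on the grid (distance to the closure [0, 1] of A)
dist = np.maximum.reduce([np.zeros_like(grid), -grid, grid - 1.0])

minkowski = (grid > -eps) & (grid <= 1.0 + eps)    # A^{+eps}  = (-eps, 1+eps]
closed    = dist <= eps                            # A^{eps}   = [-eps, 1+eps]
open_     = dist < eps                             # A^{eps)}  = (-eps, 1+eps)

# open expansion  subset of  Minkowski expansion  subset of  closed expansion
print(np.all(open_ <= minkowski), np.all(minkowski <= closed))
```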
2.2 Optimal Transport
Let µ, ν ∈ P(X). A coupling between µ and ν is a joint probability measure π ∈ P(X²) with marginals µ and ν. The set Π(µ, ν) ⊆ P(X²) denotes the set of all couplings between µ and ν. The optimal transport cost between µ and ν under a cost function c : X × X → [0,∞) is defined as T_c(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X²} c(x, x′) dπ(x, x′). For a positive integer p, the p-Wasserstein distance between µ and ν is defined as W_p(µ, ν) = (T_{d^p}(µ, ν))^{1/p}. The ∞-Wasserstein metric is defined as W∞(µ, ν) = lim_{p→∞} W_p(µ, ν). It can also be expressed in the following ways:
W∞(µ, ν) = inf_{π∈Π(µ,ν)} ess sup_{(x,x′)∼π} d(x, x′) = inf{δ > 0 : µ(A) ≤ ν(A^δ) for all A ∈ B(X)}. (1)
Given a µ ∈ P(X) and a measurable function f : X → X, the push-forward of µ by f is defined as the probability measure f♯µ ∈ P(X) given by f♯µ(A) = µ(f^{−1}(A)) for all A ∈ B(X).
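A small sketch of how W∞ can be computed in practice (our own example; it is restricted to uniform empirical measures on R with equally many atoms, where W∞ reduces to a bottleneck assignment solved here by bisection over candidate distances):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def w_inf(xs, ys):
    """W_inf between two uniform empirical measures with the same number of
    atoms on R: the smallest t such that the atoms can be perfectly matched
    within distance t."""
    n = len(xs)
    dists = np.abs(xs[:, None] - ys[None, :])
    thresholds = np.unique(dists)

    def feasible(t):
        adj = csr_matrix((dists <= t).astype(int))
        m = maximum_bipartite_matching(adj, perm_type="column")
        return np.count_nonzero(m >= 0) == n

    lo, hi = 0, len(thresholds) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(thresholds[mid]):
            hi = mid
        else:
            lo = mid + 1
    return thresholds[lo]

rng = np.random.default_rng(3)
xs, ys = rng.normal(size=6), rng.normal(size=6)
print("W_inf(mu, nu) =", w_inf(xs, ys))
```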
3 Adversarial Risk: Definitions and Related Work
We consider a binary classification setting on feature space X. Let p0, p1 ∈ P(X) be the data-generating distributions for labels 0 and 1, respectively. Let the prior probabilities for labels 0 and 1 be in the ratio T : 1, where we assume T ≥ 1 without loss of generality. For a space of classifiers parametrized by w ∈ W and a loss function ℓ : (X × Y) × W → [0,∞), the adversarial risk of a classifier w ∈ W under an adversarial budget of ε ≥ 0 is defined as [24, 33],
R⊕ε(ℓ, w) = E_{(x,y)} [ sup_{d(x,x′)≤ε} ℓ((x′, y), w) ]. (2)
If the loss function ℓ(·, w) is measurable, upper semi-continuous and bounded above for all w ∈ W, [26] show that R⊕ε(ℓ, w) is well-defined. But in general, it may not be so. A case of special interest is the 0-1 loss function with non-parametric classifiers of the form f_A(x) := 1{x ∈ A} where A ∈ B(X). In this case, ℓ0/1((x, y), A) = 1{x ∈ A, y = 0} + 1{x ∈ A^c, y = 1}. Hence,
R⊕ε(ℓ0/1, A) = (T/(T + 1)) E_{p0}[ sup_{d(x,x′)≤ε} 1{x′ ∈ A} ] + (1/(T + 1)) E_{p1}[ sup_{d(x,x′)≤ε} 1{x′ ∈ A^c} ]
 = (T/(T + 1)) p0(A⊕ε) + (1/(T + 1)) p1((A^c)⊕ε). (3)
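To make (3) concrete, here is a Monte Carlo sketch (ours; the Gaussian class-conditionals, the half-space decision region, and all parameter values are made-up choices for illustration). For a half-space A = {x : w·x > b} with ‖w‖ = 1 and the Euclidean norm, membership in A⊕ε and (A^c)⊕ε has a closed form, so the adversarial risk can be estimated by sampling.

```python
import numpy as np

rng = np.random.default_rng(4)
d, eps, T, n = 2, 0.3, 2.0, 200_000
w = np.array([1.0, 0.0]); b = 1.0                     # decision region A = {w.x > b}
x0 = rng.normal(size=(n, d))                          # samples from p0 (label 0)
x1 = rng.normal(size=(n, d)) + np.array([2.0, 0.0])   # samples from p1 (label 1)

# For a half-space and the Euclidean norm:
#   x in A^{+eps}       iff  w.x > b - eps
#   x in (A^c)^{+eps}   iff  w.x <= b + eps
risk_std = (T * np.mean(x0 @ w > b) + np.mean(x1 @ w <= b)) / (T + 1)
risk_adv = (T * np.mean(x0 @ w > b - eps) + np.mean(x1 @ w <= b + eps)) / (T + 1)
print(f"standard risk ~ {risk_std:.4f},  adversarial risk ~ {risk_adv:.4f}")
```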
A problem with the formulation in equation 3 is the ambiguity over the measurability of the sets A⊕ and (Ac)⊕ . Even when A ∈ B(X ), it is not guaranteed that A⊕ , (Ac)⊕ ∈ B(X ) (see Appendix for an example). Hence, R⊕ (`0/1, A) is not well-defined for all A ∈ B(X ). It is shown in [31] that R⊕ (`0/1, A) is well-defined when A is either closed or open, but its validity beyond that is unknown.
A simple fix to this measurability problem is to use the closed set expansion instead of the Minkowski set expansion, as done in [25]. This leads to the following formulation.
R_ε(ℓ0/1, A) = (T/(T + 1)) p0(A^ε) + (1/(T + 1)) p1((A^c)^ε). (4)
The above definition is well-defined for any A ∈ B(X ) because A and (Ac) are both closed and hence, measurable. However, under the above definition, a point x ∈ A may be perturbed to x′ ∈ A such that d(x, x′) > . For example, when A = (0, 1), we have A = [− , ] and an adversary may transport x = δ > 0 to x′ = − , violating the budget constraint at x. Remark 1. The formulations in equations (2), (3) and (4) can give a strictly positive adversarial risk even for a “perfect” (i.e. Bayes optimal) classifier. This is consistent with the literature on adversarial examples where even a perfect classifier is forced to make errors in the presence of evasion attacks. These formulations of adversarial risk correspond to “constant-in-the-ball” risk of [16] and “corrupted-instance” risk in [9, 25]. Here, an adversarial risk of zero is only possible if the supports of p0 and p1 are non-overlapping and separated by at least 2 . This is not the case with other formulations of adversarial risk such as “exact-in-the-ball” risk [16], “prediction-change risk and “error-region” risk [9, 25]. We focus on the “corrupted-instance” family of risks in this work.
Another approach to defining adversarial risk is by explicitly defining the class of adversaries of budget ε as measurable transport maps f : X → X that push forward the true data distribution such that no point is transported by more than a distance of ε; i.e., d(x, f(x)) ≤ ε. The transport map-based adversarial risk [29] is formally defined as follows:
RFε(ℓ0/1, A) = sup_{f0,f1 : X→X, d(x,f_i(x))≤ε ∀x∈X} (T/(T + 1)) f0♯p0(A) + (1/(T + 1)) f1♯p1(A^c). (5)
Yet another definition uses the robust hypothesis testing framework with W∞ uncertainty sets. In this approach, an adversary perturbs the true distribution p_i to a corrupted distribution p′_i such that W∞(p_i, p′_i) ≤ ε. From (1), this is equivalent to the existence of a coupling π ∈ Π(p_i, p′_i) such that ess sup_{(x,x′)∼π} d(x, x′) ≤ ε. The adversarial risk with such an adversary is given by
RΓε(ℓ0/1, A) = sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} (T/(T + 1)) p′0(A) + (1/(T + 1)) p′1(A^c). (6)
Clearly, RF (`0/1, A) ≤ RΓ (`0/1, A), but conditions for equality were not studied in prior work. Moreover, their relation to set expansion-based definitions in (3) and (4) was also unknown.
Next we discuss some characterizations of optimal adversarial risk, defined as R∗⊕ε := inf_{A∈B(X)} R⊕ε(ℓ0/1, A). In [30, 2], it is shown that R∗⊕ε = (1/2)[1 − D_ε(p0, p1)] for equal priors (T = 1), where D_ε is an optimal transport cost defined as follows.
Definition 1 (D_ε cost). Let µ, ν ∈ P(X) and let ε ≥ 0. Let c_ε : X² → {0, 1} be such that c_ε(x, x′) = 1{(x, x′) ∈ X × X : d(x, x′) > 2ε}. Then D_ε(µ, ν) = T_{c_ε}(µ, ν).
For ε = 0, D_ε reduces to the total variation distance. While D_0 is a metric on P(X), D_ε (for ε > 0) is neither a metric nor a pseudometric [31].
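For two empirical measures with n atoms each, D_ε reduces to an optimal assignment under the 0-1 cost c_ε, which gives a quick way to compute the equal-prior optimal adversarial risk (1/2)[1 − D_ε(p0, p1)] numerically. The sketch below is our own (the Gaussian samples and parameter values are made up for illustration).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)
eps, n = 0.2, 50
x0 = rng.normal(0.0, 1.0, n)                 # atoms of an empirical p0
x1 = rng.normal(1.0, 1.0, n)                 # atoms of an empirical p1

# c_eps cost matrix; D_eps = minimum average 0-1 cost over permutation couplings.
cost = (np.abs(x0[:, None] - x1[None, :]) > 2 * eps).astype(float)
rows, cols = linear_sum_assignment(cost)
D_eps = cost[rows, cols].mean()

print(f"D_eps(p0, p1) ~ {D_eps:.3f}; "
      f"optimal adversarial risk (equal priors) ~ {(1 - D_eps) / 2:.3f}")
```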
Other formulations of optimal adversarial risk are inspired from game theory [29, 26, 3]. Consider a game between two players: (1) the adversary, whose action space is pairs of distributions p′0, p′1 ∈ P(X), and (2) the algorithm, whose action space is the space of decision regions of the form A ∈ B(X). For T > 0, define r : B(X) × P(X) × P(X) → [0, 1] as r(A, µ, ν) = (T/(T + 1)) µ(A) + (1/(T + 1)) ν(A^c). The payoff when the algorithm plays first is given by inf_{A∈B(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1), and this quantity is interpreted as the optimal adversarial risk in this setup.
4 Well-Definedness of Adversarial Risk
As stated in Section 3, R⊕ε(ℓ0/1, A) may not be well-defined for some decision regions A ∈ B(X) because of the non-measurability of the sets A⊕ε and (A^c)⊕ε. Specifically, we have the following lemma.
Lemma 4.1. For any ε > 0, there exists A ∈ B(X) such that A⊕ε ∉ B(X).
In this section, we lay down the conditions under which the ambiguity can be resolved. We begin by presenting a lemma that shows that A⊕ε is an analytic set (i.e., a continuous image of a Borel set) whenever A is Borel. It is known that analytic sets are universally measurable, i.e., they belong to B̄(X), the universal completion of the Borel σ-algebra B(X), and are measurable with respect to any finite measure defined on the complete measure space (X, B̄(X)).
Lemma 4.2. Let A ∈ B(X). Then A⊕ε is an analytic set. Consequently, A⊕ε ∈ B̄(X).
By virtue of the previous lemma, we have the following. Theorem 4.1. Let p0, p1 ∈ P(X ). Then for any A ∈ B(X ), R⊕ (`0/1, A) is well-defined.
For the special case of X = Rd, we can further strengthen Theorem 4.1 to include all Lebesgue measurable sets L(X ) instead of just Borel sets B(X ). For this, we use the concept of porous sets. Definition 2 (Porous set). A set E ⊆ X is said to be porous if there exists α ∈ (0, 1) and r0 > 0 such that for every r ∈ (0, r0] and every x ∈ X , there is an x′ ∈ X such that Bαr(x′) ⊆ Br(x)\E.
Porous sets are a subclass of nowhere dense sets. Importantly, λ(E) = 0 for any porous set E ⊆ Rd [47]. By the following lemma, the set difference between the closed and open set expansions is porous.
Lemma 4.3. Let (X, d) = (Rd, ‖·‖) and A ∈ L(X). Then E = A^ε \ A^{ε)} is porous.
Lemma 4.3 plays a crucial role in proving that A⊕ ∈ L(X ) whenever A ∈ L(X ). We recall that A⊕ is the Minkowski sum of A with the closed -ball. In general, the Minkowski sum of two Lebesgue measurable sets is not always Lebesgue measurable [34, 14]. So the fact that one of them is a closed ball in case of A⊕ is important. In the following theorem, we use Lemma 4.3 to prove the measurability of A⊕ and in turn prove that R⊕ (`0/1, A) is well-defined for any A ∈ L(X ). Theorem 4.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) is well-defined. If, in addition, p0 and p1 are absolutely continuous with respect to the Lebesgue measure, then R⊕ (`0/1, A) = R (`0/1, A).
5 Equivalence with∞-Wasserstein Robustness
In this section, we show the conditions under which R⊕ (`0/1, A) is equivalent to other notions of adversarial risk based on transport maps and W∞ robustness.
5.1 W∞ Robustness in Polish Spaces via Measurable Selections
We begin by presenting a lemma that links the measure of the ε-Minkowski set expansion to the worst-case measure over a W∞ probability ball of radius ε.
Lemma 5.1. Let µ ∈ P(X) and A ∈ B(X). Then sup_{W∞(µ,µ′)≤ε} µ′(A) = µ(A⊕ε). Moreover, the supremum in the previous equation is achieved by a µ∗ ∈ P(X) that is induced from µ via a measurable transport map φ : X → X (i.e., µ∗ = φ♯µ) satisfying d(x, φ(x)) ≤ ε for all x ∈ X.
A crucial step in the proof of Lemma 5.1 is finding a measurable transport map φ such that φ^{−1}(A) = A⊕ε and d(x, φ(x)) ≤ ε for all x ∈ X. In the following theorem, we use Lemma 5.1 to establish the equivalence between three different notions of adversarial risk introduced in Section 3.
Theorem 5.1. Let p0, p1 ∈ P(X) and A ∈ B(X). Then R⊕ε(ℓ0/1, A) = RFε(ℓ0/1, A) = RΓε(ℓ0/1, A). In addition, the supremum over f0 and f1 in RFε(ℓ0/1, A) is attained. Similarly, the supremum over p′0 and p′1 in RΓε(ℓ0/1, A) is attained.
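The transport map in Lemma 5.1 has a simple form in one dimension when A is an interval: move every point whose distance to A is at most ε to its projection onto A, and leave every other point in place. The sketch below (our own, illustrative only; the Gaussian samples and the interval are made up) checks that the pushed-forward measure indeed puts mass µ(A⊕ε) on A.

```python
import numpy as np

rng = np.random.default_rng(6)
eps = 0.2
samples = rng.normal(size=100_000)             # empirical mu on R
a, b = 0.5, 1.0                                # decision region A = [a, b]

def phi(x):
    # move x to the closest point of A whenever that costs at most eps
    proj = np.clip(x, a, b)
    close = np.abs(x - proj) <= eps
    return np.where(close, proj, x)

pushed = phi(samples)                          # samples from mu' = phi # mu
lhs = np.mean((pushed >= a) & (pushed <= b))   # mu'(A)
rhs = np.mean((samples >= a - eps) & (samples <= b + eps))   # mu(A expanded by eps)
print(f"mu'(A) = {lhs:.4f}  vs  mu(A^(+eps)) = {rhs:.4f}")
```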
5.2 W∞ Robustness in Rd via 2-Alternating Capacities
In this subsection, we establish a connection between adversarial risk and Choquet capacities [7] in Rd. This connection allows us to extend Theorem 5.1 from Borel sets to the broader class of Lebesgue measurable sets. We will again use this connection for proving minimax theorems and existence of Nash equilibria in Section 7.1. We begin with the following definitions. Definition 3 (Capacity). A set function v : B(X )→ [0, 1] is a capacity if it satisfies the following conditions: (1) v(∅) = 0 and v(X ) = 1; (2) For A,B ∈ B(X ), A ⊆ B =⇒ v(A) ≤ v(B); (3) An ↑ A =⇒ v(An) ↑ v(A); and (4) Fn ↓ F , Fn closed =⇒ v(Fn) ↓ v(F ). Definition 4 (2-Alternating Capacity). A capacity v defined on the measure space (X ,B(X )) is called 2-alternating if v(A ∪B) + v(A ∩B) ≤ v(A) + v(B) for all A,B ∈ B(X ).
For any compact set of probability measures Ξ ⊆ P(X ), the upper probability v(A) = supµ∈Ξ µ(A) is a capacity [19]. The upper probability of -neighborhoods of a µ ∈ P(X ) defined using either the total variation metric or the Levy-Prokhorov metric can be shown to be a 2-alternating capacity [19]. The following lemma shows that A 7→ µ(A⊕ ) is a 2-alternating capacity under some conditions. Lemma 5.2. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ) and let ≥ 0. Define a set function v on X such that for any A ∈ L(X ), v(A) := µ(A⊕ ). Then v is a 2-alternating capacity.
Now we relate the capacity defined in Lemma 5.2 to the W∞ metric. Since the -neighborhood of a µ ∈ P(X ) in W∞ metric is a compact set of probability measures [46], the upper probability over this W∞ -ball is a capacity. The following lemma shows that it is a 2-alternating capacity, and identifies it with the capacity defined in Lemma 5.2. Lemma 5.3. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ). Then for any A ∈ L(X ), supW∞(µ,µ′)≤ µ ′(A) = µ(A⊕ ). Moreover, the supremum in the previous equation is attained.
Lemma 5.3 plays a similar role to Lemma 5.1 in proving the following equivalence between adversarial robustness and W∞ robustness. Theorem 5.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) = RΓ (`0/1, A), and the supremum over p ′ 0 and p ′ 1 in RΓ (`0/1, A) is attained.
The proof follows by converting the expression for RΓ into one for R⊕ using Lemma 5.3. Unlike Theorem 5.1, Theorem 5.2 does not show the equivalence of RF (`0/1, A) with the other definitions under the relaxed assumption of A ∈ L(X ). This is because Lemma 5.3 does not provide a pushforward map φ such that µ∗ = φ]µ with µ∗ attaining the supremum over the W∞ ball.
6 Optimal Adversarial Risk via Generalized Strassen’s Theorem
In section 5, we analyzed adversarial risk for a specific decision region A ∈ B(X ). In this section, we analyze infimum of adversarial risk over all possible decision regions; i.e., the optimal adversarial risk. We show that optimal adversarial risk in binary classification with unequal priors is characterized by an unbalanced optimal transport cost between data-generating distributions. Our main technical lemma generalizes Strassen’s theorem to unbalanced optimal transport. We present this result in subsection 6.1 and present our characterization of optimal adversarial risk in subsection 6.2.
6.1 Unbalanced Optimal Transport & Generalized Strassen’s Theorem
Recall from Section 3 that the optimal transport cost D_ε characterizes the optimal adversarial risk in binary classification for equal priors. The following result gives an alternative characterization of D_ε.
Proposition 6.1 (Strassen's theorem; Corollary 1.28 in [42]). Let µ, ν ∈ P(X) and let ε ≥ 0. Then
sup_{A∈B(X)} µ(A) − ν(A^{2ε}) = D_ε(µ, ν). (7)
Proposition 6.1 is a special case of Kantorovich-Rubinstein duality [42] applied to {0, 1}-valued cost functions. We now generalize this result to measures with unequal masses. We begin with some definitions that generalize the concepts we introduced in subsection 2.2.
Let µ, ν ∈ M(X) be such that µ(X) ≤ ν(X). A coupling between µ and ν is a measure π ∈ M(X²) such that for any A ∈ B(X), π(A × X) = µ(A) and π(X × A) ≤ ν(A). The set Π(µ, ν) is defined to be the set of all couplings between µ and ν. For a cost function c : X² → [0,∞), the optimal transport cost between µ and ν under c is defined as T_c(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X²} c(x, x′) dπ(x, x′).
Theorem 6.1 (Generalized Strassen's theorem). Let µ, ν ∈ M(X) be such that 0 < M = µ(X) ≤ ν(X). Let ε > 0. Define c_ε : X² → {0, 1} as c_ε(x, x′) = 1{(x, x′) ∈ X² : d(x, x′) > 2ε}. Then
sup_{A∈B(X)} µ(A) − ν(A^{2ε}) = T_{c_ε}(µ, ν) = M · inf_{ν′∈P(X) : ν′ ≼ ν/M} D_ε(µ/M, ν′). (8)
Moreover, the infimum on the right hand side is attained. (Equivalently, there is a coupling π ∈ Π(µ, ν) that attains the unbalanced optimal transport cost T_{c_ε}(µ, ν).)
Our proof of Theorem 6.1 leverages strong duality in linear programming. We first establish (8) for discrete measures on a finite support. We then apply the discrete result on a sequence of measures supported on a countable dense subset of the Polish space X . Using the tightness of finite measures on X , we construct an optimal coupling that achieves the cost Tc (µ, ν) in (8). We then show that the constructed coupling satisfies (8). This proof strategy is adapted from the works of [12] and [32].
6.2 Optimal Adversarial Risk for Unequal Priors
Generalized Strassen's theorem involves closed set expansions. The following lemma allows us to switch to Minkowski set expansions.
Lemma 6.1. Let µ, ν ∈ M(X) and let ε ≥ 0. Then sup_{A∈B(X)} µ(A) − ν(A^{2ε}) = sup_{A∈B(X)} µ(A^{⊖ε}) − ν(A⊕ε), where A^{⊖ε} := ((A^c)⊕ε)^c. Moreover, the supremum on the right hand side of the above equality can be replaced by a supremum over closed sets.
Using the above lemma and the generalized Strassen's theorem, we show the following result on optimal adversarial risk for unequal priors, generalizing the result of [30, 2].
Theorem 6.2. Let p0, p1 ∈ P(X) and let ε ≥ 0. Then,
inf_{A∈B(X)} R⊕ε(ℓ0/1, A) = (1/(T + 1)) [ 1 − inf_{q∈P(X) : q ≼ T p0} D_ε(q, p1) ]. (9)
Moreover, the infimum on the left hand side can be replaced by an infimum over closed sets.
The proof follows by using Lemma 6.1 to convert the expression with Minkowski expansion to one with closed expansions, followed by an application of Theorem 6.1 to arrive at the final optimal transport-based expression. Theorem 6.2 extends the result of [31] in two ways: (1) the infimum is
taken over all sets for which R⊕ (`0/1, A) is well-defined, instead of restricting to closed sets, and (2) the priors on both labels can be unequal. We also note that for (X , d) = (Rd, ‖ · ‖), (9) holds with the infimum on the left hand side taken over all A ∈ L(X ).
7 Minimax Theorems and Nash Equilibria
In this section, we revisit the zero-sum game between the adversary and the algorithm introduced in section 3. Recall that for A ∈ B(X ) and p′0, p′1 ∈ P(X ), the payoff function is given by
r(A, p′0, p′1) = (T/(T + 1)) p′0(A) + (1/(T + 1)) p′1(A^c). (10)
The max-min inequality gives us
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X)} r(A, p′0, p′1) ≤ inf_{A∈B(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (11)
If the inequality in (11) is an equality, we say that the game has zero duality gap, and admits a value equal to either expression in (11). Then there is no advantage to a player making the first move. Our minimax theorems establish such an equality. If in addition to having an equality in (11), there exist p∗0, p ∗ 1 ∈ P(X ) that achieve the supremum on the left-hand side and A∗ ∈ B(X ) that achieves the infimum on the right-hand side, we say that ((p∗0, p ∗ 1), A ∗) is a pure Nash equilibrium of the game.
In Section 7.1, we prove the minimax theorem and the existence of a pure Nash equilibrium in Rd using the theory of 2-alternating capacities [19] and the relation to adversarial risk from Section 5.2. Section 7.2 extends these results to more general Polish spaces with a “midpoint property."
7.1 Minimax Theorem in Rd via 2-Alternating Capacities
The following theorem proves the minimax equality and the existence of a Nash equilibrium for the adversarial robustness game in Rd.
Theorem 7.1 (Minimax theorem in Rd). Let (X, d) = (Rd, ‖·‖). Let p0, p1 ∈ P(X) and let ε ≥ 0. Define r as in (10). Then,
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈L(X)} r(A, p′0, p′1) = inf_{A∈L(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (12)
Moreover, there exist p∗0, p∗1 ∈ P(X) and A∗ ∈ L(X) that achieve the supremum and infimum on the left and right hand sides of the above equation.
Crucial to the proof of Theorem 7.1 is Lemma 5.2, which shows that the set functions A ↦ p0(A⊕ε) and A^c ↦ p1((A^c)⊕ε) are 2-alternating capacities. The same proof technique is not applicable in general Polish spaces, because the map A ↦ µ(A⊕ε) is not a capacity for a general µ ∈ P(X): A⊕ε is not measurable for all A ∈ B(X).
7.2 Minimax Theorem in Polish Spaces via Optimal Transport
We now extend the minimax theorem from Rd to general Polish spaces with the following property. Definition 5 (Midpoint property). A metric space (X , d) is said to have the midpoint property if for every x1, x2 ∈ X , there exists x ∈ X such that, d(x1, x) = d(x, x2) = d(x1, x2)/2.
Any normed vector space with distance defined as d(x, x′) = ‖x − x′‖ satisfies the midpoint property. An example of a metric space without this property is the discrete metric space where d(x, x′) = 1{x 6= x′}. The midpoint property plays a crucial role in proving the following theorem, which shows that the D transport cost between two distributions is the shortest total variation distance between their -neighborhoods in W∞ metric. A similar result was also presented in [11]. Theorem 7.2 (D as shortest DTV between W∞ balls). Let (X , d) have the midpoint property. Let µ, ν ∈ P(X ) and let ≥ 0. Then D (µ, ν) = infW∞(µ,µ′),W∞(ν,ν′)≤ DTV (µ′, ν′). Moreover, the infimum over DTV in the above equation is attained.
The following theorem uses Theorem 7.2 to prove the minimax equality and the existence of a Nash equilibrium for any Polish space with the midpoint property for the case of equal priors.
Theorem 7.3 (Minimax theorem for equal priors). Let (X, d) have the midpoint property. Let p0, p1 ∈ P(X) and let ε ≥ 0. Define r as in (10) with T = 1. Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X)} r(A, p′0, p′1) = inf_{A∈B(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (13)
Moreover, there exist p∗0, p∗1 ∈ P(X) that achieve the supremum on the left hand side.
Proof. For µ ∈ P(X), let WB_ε(µ) denote the set of all µ′ ∈ P(X) such that W∞(µ, µ′) ≤ ε. Then
inf_{A∈B(X)} sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} r(A, p′0, p′1) = inf_{A∈B(X)} RΓε(ℓ0/1, A) (i)= inf_{A∈B(X)} R⊕ε(ℓ0/1, A) (ii)= (1/2)[1 − D_ε(p0, p1)],
sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} inf_{A∈B(X)} r(A, p′0, p′1) (iii)= sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} (1/2)[1 − D_TV(p′0, p′1)] = (1/2)[1 − inf_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} D_TV(p′0, p′1)],
where (i) follows from Theorem 5.1, (ii) from Theorem 6.2, and (iii) again from Theorem 6.2 with ε = 0. The expressions at the right extremes of the two displays above are equal by Theorem 7.2, which proves (13). The existence of p∗0, p∗1 ∈ P(X) follows from Theorem 7.2.
To prove the minimax theorem for unequal priors, we need the following generalization of Theorem 7.2 to finite measures of unequal mass.
Lemma 7.1. Let p0, p1 ∈ P(X) and let ε ≥ 0. Then for T ≥ 1,
inf_{q∈P(X) : q ≼ T p0} D_ε(q, p1) = inf_{q∈P(X) : q ≼ T p0} inf_{W∞(q,q′)≤ε, W∞(p1,p′1)≤ε} D_TV(q′, p′1)
 = inf_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{q′∈P(X) : q′ ≼ T p′0} D_TV(q′, p′1). (14)
Now, we prove the minimax equality for unequal priors.
Theorem 7.4 (Minimax theorem for unequal priors). Let (X, d) have the midpoint property. Let p0, p1 ∈ P(X) and let ε ≥ 0. For T > 0, define r as in (10). Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X)} r(A, p′0, p′1) = inf_{A∈B(X)} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (15)
The proof uses: (1) the characterization of inf-sup payoff in terms of unbalanced optimal transport using Theorem 5.1; (2) Lemma 7.1; and (3) the minimax equality of Theorem 7.3 for equal priors. Remark 2. Unlike Theorem 7.1, Theorems 7.3 and 7.4 do not guarantee the existence of an optimal decision region A∗. While Theorem 7.3 guarantees the existence of worst-case pair of perturbed distributions p∗0, p ∗ 1, Theorem 7.4 does not do so. Nevertheless, an approximate pure Nash equilibrium exists in all the cases. This is in sharp contrast with the non-existence of Nash equilibrium proven in [29] (which considers a different notion of adversarial perturbations). Remark 3. A recent work [26] shows the existence of mixed Nash equilibrium for randomized classifiers parametrized by points in a Polish space (see also [29, 3]). Fan’s minimax theorem used in this result is inapplicable in our setting of non-parametric, decision region-based classifiers. Instead, we applied the theory of Choquet capacities (in Rd) and generalized Strassen’s duality theorem (in Polish spaces), which is novel to the best of our knowledge.
8 Discussion
We examined different notions of adversarial risk in a binary classification setting with 0-1 loss function and laid down the conditions under which these definitions are equivalent. By verifying the conditions in Sections 4 and 5, researchers may use different definitions interchangeably. Several definitions have also been proposed for adversarial risk under general loss functions [31, 26] using analogous constructions like transport maps, couplings and suprema over -neighborhoods. Extending our equivalence results to more general loss functions is left for future work.
(Figure caption) Diagrams A–E summarize the results of Section 6 and Section 7. For equal priors (T = 1), A and B denote two ways of obtaining the optimal adversarial risk R∗⊕ε: (1) A, the D_ε cost between the true label distributions p0 and p1, and (2) B, the shortest total variation distance between ∞-Wasserstein balls of radius ε around p0 and p1. For unequal priors (T > 1), C, D and E denote three equivalent ways of obtaining R∗⊕ε. The black dotted balls denote ∞-Wasserstein balls and the blue dashed balls denote sets defined using stochastic domination. The order in which the two types of balls appear around p0 is reversed between D and E.
We analyzed optimal adversarial risk for (non-parametric) decision region-based classifiers. Using a formulation of optimal transport between finite measures of unequal mass, we extended the optimal transport based characterization of adversarial risk of [30, 2] to unequal priors by generalizing Strassen’s theorem. This may find applications in the study of excess cost optimal transport [45, 44]. A recent work [39] obtains a different characterization of optimal adversarial risk using optimal transport on the product space X × Y where Y is the label space. Further, they show the evolution of the optimal classifier A∗ as grows, in terms of a mean curvature flow. This raises an interesting question on the evolution of the optimal adversarial distributions p∗0, p ∗ 1 ∈ P(X ) with .
We proved a minimax theorem for adversarial robustness game and the existence of a Nash equilibrium. We constructed the worst-case pair of distributions p∗0, p ∗ 1 ∈ P(X ) in terms of true data distributions and showed that their total variation distance gives the optimal adversarial risk. Identifying worst case distributions could lead to a new approach to developing robust algorithms.
We used Choquet capacities for results in Rd and measurable selections in Polish spaces. Specifically, we showed that the measure of -Minkowski expansion is a 2-alternating capacity. This connection could help generalize our results to total variation and Prokhorov distance based contaminations.
Limitations: We largely focused on the binary classification setup with the 0-1 loss function. While it may be possible to extend our results on measurability and the relation to ∞-Wasserstein distributional robustness to more general loss functions and a multi-class setup, it is unclear how our results on generalized Strassen's theorem and Nash equilibria can be extended further. Our results on various equivalent formulations of optimal adversarial risk are specific to adversarial perturbations (or equivalently, ∞-Wasserstein distributional perturbations), and we did not investigate more general perturbation models. | 1. What is the main contribution of the paper regarding adversarial risk?
2. What are the strengths of the proposed approach, particularly in its ability to handle subtle measure-theoretic issues?
3. Are there any limitations or potential drawbacks to the methodology that could impact its practical applicability?
4. How might the paper's findings contribute to future research or applications in areas beyond theoretical computer science?
5. Can you provide additional explanations or clarifications regarding specific points raised in the review, such as the use of "decision regions," formulation of adversarial risk, and compatibility with related works? | Summary Of The Paper
Review | Summary Of The Paper
The paper focuses on formalizing the definition of "adversarial risk" which is risk under test-time perturbation attacks. The focus is on subtle measure theoretic issues that arise when the distributions are continuous.
Review
Several definitions are given for adversarial risk in the literature. Most are not formal, and some are formal. The paper’s starting point is the formal ones. Yet, it revisits those definitions in the general case where distributions are not discrete, and hence subtle points arise about measurability of events and in particular whether or not certain probabilities associated with adversarial risk are always well defined. In particular, it is argued that some definitions use a union of Minkowski balls, which in general is not a measurable set. Other subtle points arise depending on what measure is used (e.g., Lebesgue or not). The paper also gives conditions under which: (1) various definitions become equivalent, (2) certain objects of interest exist and are characterized by intuitive formulations (e.g., optimal classifier under attack and Nash equilibrium in adversary’s game against algorithm designers).
As far as I understand, these points only arise when we pay attention to subtle aspects of formalism in a measure theoretic sense for general measures. So, this makes the work quite theoretical, as “in practice” (e.g., when we deal with algorithmic attacks and distributions), everything is discrete and none of the issues studied here arise. So, I don’t think the paper has a “practical” effect, as basically all the definitions that paper revisits are fine in the discrete setting and all the subtle points the paper mentions only show up when we go beyond the discrete case and start to care about measurability of sets in the continuous regime.
In fact, typically, by just assuming the space to be Polish these issues go away (e.g., see the last paragraph of page 11 here http://cgm.cs.mcgill.ca/~breed/conc/Talagrand.pdf) but evidently, in the context of adversarial risk and optimal adversarial strategies certain issues still exist, and the paper shows how to resolve them by carefully finding sufficient conditions.
The writing is very clear and precise and readable, and that is a big plus for a paper that has such a goal of formalizing notions.
Overall, I think the paper fills a gap in an important direction, even though this gap is purely theoretical and has no immediate impact on the “practical” side. Still it is important to have a general and precise theory for this phenomenon, and that could be important to practice also in the long run. Considering the quality and clarity of the writing and its content, I think the paper will be useful for the program and recommend acceptance.
Comments:
What is the main reason for focusing on binary classification? Any major limitations for larger label sets?
The term “decision region” is used in line 46 which is not clear. It is clarified later in line 109, but it is confusing the first time around.
The paper's formulation of adversarial risk does not pay any attention to whether or not the perturbed point is in fact misclassified. Of course, if the perturbed point is "close" to the original point to the degree that the "truth" does not change, this would happen and this is how adversarial examples are perceived ("imperceptible" changes in a setting where humans are the judge for true labels). The issue is discussed here: http://proceedings.mlr.press/v89/suggala19a/suggala19a.pdf and the cited paper [9]. This aspect does not negatively affect this paper, because the paper's focus is mostly on "well-definedness of events", yet it would be good to know how this can fit into this paper's formal studies.
The letter “T” is used for both the probability weight and the transportation notation.
Line 156 : “It generalizes a similar result of [32] for closed and open sets. ” What is the key new insight? In general, it would be good to point out the new part of the generalized arguments.
Line 290: “Our minimax theorems establish such an equality.” But in the cited paper [31] the authors say "Then we demonstrate, in Section 4, the non-existence of a Nash equilibrium in the deterministic setting of this game". How is their result compatible with yours? Please clarify.
Line 362: “transport characterization adversarial risk” I think you need to add “of” after “characterization” |
NIPS | Title
The Many Faces of Adversarial Risk
Abstract
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data. Numerous definitions of adversarial risk—not all mathematically rigorous and differing subtly in the details—have appeared in the literature. In this paper, we revisit these definitions, make them rigorous, and critically examine their similarities and differences. Our technical tools derive from optimal transport, robust statistics, functional analysis, and game theory. Our contributions include the following: generalizing Strassen’s theorem to the unbalanced optimal transport setting with applications to adversarial classification with unequal priors; showing an equivalence between adversarial robustness and robust hypothesis testing with∞-Wasserstein uncertainty sets; proving the existence of a pure Nash equilibrium in the two-player game between the adversary and the algorithm; and characterizing adversarial risk by the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets. Our results generalize and deepen recently discovered connections between optimal transport and adversarial robustness and reveal new connections to Choquet capacities and game theory.
1 Introduction
Neural networks are known to be vulnerable to adversarial attacks, which are imperceptible perturbations to input data that maximize loss [38, 15, 5]. Developing algorithms resistant to such attacks has received considerable attention in recent years [8, 28, 24, 20], motivated by safety-critical applications such as autonomous driving [18, 27], medical imaging [17, 23, 22] and law [21, 6].
A classification algorithm with high accuracy (low risk) in the absence of an adversary may have poor accuracy (high risk) when an adversary is present. Thus, a modified notion known as adversarial risk is used to quantify the adversarial robustness of algorithms. Algorithms that minimize adversarial risk are deemed robust. Procedures for finding them have been effective in practice [24, 41, 28], spurring numerous theoretical investigations into adversarial risk and its minimizers.
There is no universally agreed upon definition of adversarial risk. Even the simplest setting of binary classification in Rd with an `2 adversary admits various definitions involving set expansions [9, 16], transport maps [29], Markov kernels [31], and couplings [26]. These works broadly interpret adversarial risk as a measure of robustness to small perturbations, but their definitions differ in subtle details such as the class of adversaries and algorithms considered, budget constraints placed on the adversary, assumptions on the loss function, and the geometry of decision boundaries.
Optimal adversarial risk is most commonly defined as the minimax risk under adversarial contamination [24, 33]. Other notable characterizations include an optimal transport cost between data generating distributions in [30, 2, 10, 11], the optimal value of a distributionally robust optimization problem [36, 35, 40], and the value of a two-player zero-sum game [26, 29, 3, 4].
The diversity of definitions for adversarial risk makes it challenging to compare approaches. Moreover, not all approaches are rigorous. For instance, the classes of adversarial strategies and classifier
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
algorithms are often unclear, and issues of measurability are ignored. Although this may be harmless for applied research, it has led to incorrect proofs and insufficient assumptions in some theoretical works; a mathematically rigorous foundation for adversarial risk is essential for future research.
In this paper, we examine various notions of adversarial risk for binary classification in a nonparametric setting, where the decision boundary (or decision region) of a classifier is an arbitrary subset of the input space. We present rigorous definitions of adversarial risk and identify conditions under which these definitions are equivalent. We consider the general setting of Polish spaces (complete, separable metric spaces), and present stronger results for the Euclidean space (Rd). Our contributions are as follows:
• We examine the definition of adversarial risk based on set expansions. For Polish spaces, we observe that adversarial risk is not Borel measurable, and hence, not well-defined when the decision region is an arbitrary set. We show that the problem can be resolved by considering a Polish space equipped with the universal completion of the Borel σ-algebra and restricting the decision regions to Borel sets. For the Euclidean space with the Lebesgue σ-algebra, we show that adversarial risk is well-defined for any Lebesgue measurable decision region. Our key lemma (Lemma 4.3) shows that the Lebesgue σ-algebra is preferred over the Borel σ-algebra because set expansions are Lebesgue measurable but not necessarily Borel measurable. These results are contained in Section 4.
• We show that the definition of adversarial risk using set expansions is identical to a notion of risk that appears in robust hypothesis testing with∞-Wasserstein uncertainty sets. We prove this result in Polish spaces using the theory of measurable selections [1, 43]. In Rd, we are able to use the powerful theory of Choquet capacities [7] (in particular, Huber and Strassen’s 2-alternating capacities [19]) to establish results of a similar nature. These results are contained in Section 5.
• We consider the binary classification setup with unequal priors and show (under suitable assumptions) that the optimal adversarial risk from the above definitions is characterized by an unbalanced optimal transport cost between data-generating distributions. For both Polish spaces and Rd, the main tool we use is Theorem 6.1 in which we generalize a classical result of Strassen on excess-cost optimal transport [37, 42] from probability measures to finite measures with possibly unequal mass. This generalizes results of [31, 2] on binary classification, which were only for equal priors. These results are contained in Section 6.
• We consider the setup of a zero-sum game between the adversary and the algorithm. We show that the value of this game (adversarial risk) is equal to the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets centered around true data-generating distributions. We prove the existence of a pure Nash equilibrium in this game for Rd and for Polish spaces with a midpoint property. This extends/strengthens the results of [26, 29, 3] to non-parametric classifiers. These results are contained in Section 7.
The paper is organized as follows: In Section 2, we present preliminary definitions from optimal transport and metric space topology. In Section 3, we discuss various definitions of adversarial risk and present more related work. Sections 4, 5, 6 and 7 contain our main contributions summarized above. We conclude the paper in Section 8 and discuss future research directions.
We emphasize that rectifying measure theoretic issues with existing formulations of adversarial risk is one of our contributions, but not the main focus of our paper. We start our presentation with fixing measurability and well-definedness (in Section 4) because otherwise we will not be able to rigorously present our main results in the subsequent sections, namely: relation to robust hypothesis testing and Choquet capacities in section 5, generalizing the results of [2, 30] in section 6, proving minimax theorems and existence of Nash equilibria and extending the results of [26, 3, 29] in section 7.
Notation: Throughout the paper, we use X to denote a Polish space (a complete, separable metric space) with metric d and Borel σ-algebra B(X ). For x ∈ X and r ≥ 0, let B_r(x) denote the closed ball of radius r centered at x. We use P(X ) and M(X ) to denote the space of probability measures and finite measures defined on the measure space (X , B(X )), respectively. Let B̄(X ) denote the universal completion of B(X ). Let P̄(X ) and M̄(X ) denote the spaces of probability measures and finite measures defined on the complete measure space (X , B̄(X )). For µ, ν ∈ M(X ), we say ν dominates µ if µ(A) ≤ ν(A) for all A ∈ B(X ) and write µ ⪯ ν. When X is Rd, we use L(X ) to
denote the Lebesgue σ-algebra and λ to denote the d-dimensional Lebesgue measure. Note that L(X ) = B(X ) for X = Rd. For a positive integer n, we use [n] to denote the finite set {1, . . . , n}.
2 Preliminaries
2.1 Metric Space Topology
We introduce three different notions of set expansions. For ε ≥ 0 and A ∈ B(X ), the ε-Minkowski expansion of A is given by A^{⊕ε} := ∪_{a∈A} B_ε(a). The ε-closed expansion of A is defined as A^{ε} := {x ∈ X : d(x, A) ≤ ε}, where d(x, A) = inf_{a∈A} d(x, a). The ε-open expansion of A is defined as A^{ε)} := {x ∈ X : d(x, A) < ε}. We use the notation A^{−ε} to denote ((A^c)^{ε)})^c. Similarly, A^{⊖ε} := ((A^c)^{⊕ε})^c. For example, consider the set A = (0, 1] in the space (X , d) = (R, |·|) and ε > 0. Then A^{⊕ε} = (−ε, 1 + ε], A^{ε} = [−ε, 1 + ε] and A^{ε)} = (−ε, 1 + ε). For any A ∈ B(X ), A^{ε} is closed and A^{ε)} is open. Hence, A^{ε}, A^{ε)} ∈ B(X ). Moreover, A^{ε)} ⊆ A^{⊕ε} ⊆ A^{ε}. However, A^{⊕ε} may not be in B(X ) (see Appendix for an example). In general, the Minkowski sum of two Borel sets need not be Borel [13], and that of two Lebesgue measurable sets need not be Lebesgue measurable [34].
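To make the distinctions concrete, here is a small numerical sketch (ours, not part of the paper; it assumes numpy is available) that evaluates the three expansions of A = (0, 1] on a finite grid. On a grid the three notions can only differ at boundary points, but the inclusions A^{ε)} ⊆ A^{⊕ε} ⊆ A^{ε} are visible directly.

```python
# Minimal sketch (not from the paper): the three set expansions of Section 2.1
# evaluated on a 1-D grid for A = (0, 1] and eps = 0.25.
import numpy as np

eps = 0.25
grid = np.linspace(-1.0, 2.0, 3001)            # a discretized copy of [-1, 2]
A_mask = (grid > 0.0) & (grid <= 1.0)          # A = (0, 1]
A_pts = grid[A_mask]

# d(x, A) = inf_{a in A} |x - a|, computed pointwise on the grid
dist_to_A = np.min(np.abs(grid[:, None] - A_pts[None, :]), axis=1)

closed_exp = dist_to_A <= eps                  # A^{eps}:  closed expansion
open_exp = dist_to_A < eps                     # A^{eps)}: open expansion
# Minkowski expansion A^{+eps}: union of closed eps-balls around the points of A
minkowski_exp = np.any(np.abs(grid[:, None] - A_pts[None, :]) <= eps, axis=1)

# The inclusions open ⊆ Minkowski ⊆ closed hold pointwise on the grid
assert np.all(open_exp <= minkowski_exp) and np.all(minkowski_exp <= closed_exp)
print(grid[closed_exp].min(), grid[closed_exp].max())   # approximately -0.25 and 1.25
```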
2.2 Optimal Transport
Let µ, ν ∈ P(X ). A coupling between µ and ν is a joint probability measure π ∈ P(X²) with marginals µ and ν. The set Π(µ, ν) ⊆ P(X²) denotes the set of all couplings between µ and ν. The optimal transport cost between µ and ν under a cost function c : X × X → [0,∞) is defined as T_c(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X²} c(x, x′) dπ(x, x′). For a positive integer p, the p-Wasserstein distance between µ and ν is defined as W_p(µ, ν) = (T_{d^p}(µ, ν))^{1/p}. The ∞-Wasserstein metric is defined as W∞(µ, ν) = lim_{p→∞} W_p(µ, ν). It can also be expressed in the following ways:
W∞(µ, ν) = inf_{π∈Π(µ,ν)} ess sup_{(x,x′)∼π} d(x, x′) = inf{δ > 0 : µ(A) ≤ ν(A^δ) ∀A ∈ B(X )}. (1)
Given a µ ∈ P(X ) and a measurable function f : X → X , the push-forward of µ by f is defined as the probability measure f♯µ ∈ P(X ) given by f♯µ(A) = µ(f⁻¹(A)) for all A ∈ B(X ).
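For intuition, the following sketch (ours, not from the paper; it assumes numpy and scipy) computes T_c(µ, ν) for discrete measures by linear programming and recovers W∞ through the observation that, for discrete measures, W∞(µ, ν) is the smallest threshold t at which all mass can be moved within distance t, i.e., the smallest t with zero transport cost under the cost 1{d(x, x′) > t}.

```python
# Sketch (not from the paper; assumes numpy + scipy): optimal transport cost and
# W_infinity between two discrete measures on the real line.
import numpy as np
from scipy.optimize import linprog

def transport_cost(a, b, C):
    """min <C, pi> over couplings pi with row marginals a and column marginals b."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0       # row sums equal a
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                # column sums equal b
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

def w_infinity(a, x, b, y):
    D = np.abs(x[:, None] - y[None, :])        # pairwise distances
    # smallest threshold t at which all mass can be moved within distance t
    return min(t for t in np.unique(D)
               if transport_cost(a, b, (D > t).astype(float)) < 1e-9)

a = np.array([0.5, 0.5]); x = np.array([0.0, 1.0])   # mu = (delta_0 + delta_1) / 2
b = np.array([0.3, 0.7]); y = np.array([0.1, 1.2])
print(w_infinity(a, x, b, y))   # 1.2: the 0.2 of mass at x=0 that cannot fit at
                                # y=0.1 has to travel all the way to y=1.2
```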
3 Adversarial Risk: Definitions and Related Work
We consider a binary classification setting on feature space X . Let p0, p1 ∈ P(X ) be the datagenerating distributions for labels 0 and 1, respectively. Let the prior probabilities for labels 0 and 1 be in the ratio T : 1 where we assume T ≥ 1 without loss of generality. For a space of classifiers parametrized by w ∈ W and a loss function ` : (X × Y)×W → [0,∞), the adversarial risk of a classifier w ∈ W under an adversarial budget of ≥ 0 is defined as [24, 33],
R^⊕_ε(ℓ, w) = E_{(x,y)}[ sup_{d(x,x′) ≤ ε} ℓ((x′, y), w) ]. (2)
If the loss function `(·, w) is measurable, upper semi-continuous and bounded above for all w ∈ W , [26] show that R (`, w) is well-defined. But in general, it may not be so. A case of special interest is the 0-1 loss function with non-parametric classifiers of the form fA(x) := 1{x ∈ A} where A ∈ B(X ). In this case, `0/1((x, y), A) = 1{x ∈ A, y = 0}+ 1{x ∈ Ac, y = 1}. Hence,
R^⊕_ε(ℓ_{0/1}, A) = (T/(T+1)) · E_{p0}[ sup_{d(x,x′)≤ε} 1{x′ ∈ A} ] + (1/(T+1)) · E_{p1}[ sup_{d(x,x′)≤ε} 1{x′ ∈ A^c} ]
                 = (T/(T+1)) · p0(A^{⊕ε}) + (1/(T+1)) · p1((A^c)^{⊕ε}). (3)
A problem with the formulation in equation 3 is the ambiguity over the measurability of the sets A⊕ and (Ac)⊕ . Even when A ∈ B(X ), it is not guaranteed that A⊕ , (Ac)⊕ ∈ B(X ) (see Appendix for an example). Hence, R⊕ (`0/1, A) is not well-defined for all A ∈ B(X ). It is shown in [31] that R⊕ (`0/1, A) is well-defined when A is either closed or open, but its validity beyond that is unknown.
A simple fix to this measurability problem is to use closed set expansion instead of the Minkowski set expansion, as done in [25]. This leads to the following formulation.
R_ε(ℓ_{0/1}, A) = (T/(T+1)) · p0(A^{ε}) + (1/(T+1)) · p1((A^c)^{ε}). (4)
The above definition is well-defined for any A ∈ B(X ) because A and (Ac) are both closed and hence, measurable. However, under the above definition, a point x ∈ A may be perturbed to x′ ∈ A such that d(x, x′) > . For example, when A = (0, 1), we have A = [− , ] and an adversary may transport x = δ > 0 to x′ = − , violating the budget constraint at x. Remark 1. The formulations in equations (2), (3) and (4) can give a strictly positive adversarial risk even for a “perfect” (i.e. Bayes optimal) classifier. This is consistent with the literature on adversarial examples where even a perfect classifier is forced to make errors in the presence of evasion attacks. These formulations of adversarial risk correspond to “constant-in-the-ball” risk of [16] and “corrupted-instance” risk in [9, 25]. Here, an adversarial risk of zero is only possible if the supports of p0 and p1 are non-overlapping and separated by at least 2 . This is not the case with other formulations of adversarial risk such as “exact-in-the-ball” risk [16], “prediction-change risk and “error-region” risk [9, 25]. We focus on the “corrupted-instance” family of risks in this work.
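As a toy illustration (ours, not from the paper; it assumes numpy and scipy, and the specific Gaussians and threshold are arbitrary choices), the quantity in (3) can be evaluated both in closed form and by Monte Carlo for a 1-D threshold classifier, since the set expansions of half-lines are just shifted half-lines.

```python
# Sketch (not from the paper; assumes numpy + scipy): the adversarial 0-1 risk of
# equation (3) for the threshold classifier A = [theta, inf) in 1-D, with
# p0 = N(-1, 1), p1 = N(1, 1) and priors in ratio T : 1.  For half-lines the set
# expansions are shifted half-lines, so the risk has a closed form.
import numpy as np
from scipy.stats import norm

theta, eps, T = 0.0, 0.3, 1.0
rng = np.random.default_rng(0)
x0 = rng.normal(-1.0, 1.0, 200_000)            # samples from p0 (label 0)
x1 = rng.normal(+1.0, 1.0, 200_000)            # samples from p1 (label 1)

# sup_{|x'-x| <= eps} 1{x' in A}   = 1{x >= theta - eps}
# sup_{|x'-x| <= eps} 1{x' in A^c} = 1{x <= theta + eps}
mc = (T * np.mean(x0 >= theta - eps) + np.mean(x1 <= theta + eps)) / (T + 1)
exact = (T * norm.sf(theta - eps, loc=-1.0) + norm.cdf(theta + eps, loc=1.0)) / (T + 1)
print(mc, exact)                               # should agree to about 1e-3
```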
Another approach to defining adversarial risk is by explicitly defining the class of adversaries of budget as measurable transport maps f : X → X that push-forward the true data distribution such that no point is transported by more than a distance of ; i.e., d(x, f(x)) ≤ . The transport map-based adversarial risk [29] is formally defined as follows:
R^F_ε(ℓ_{0/1}, A) = sup_{f0,f1 : X→X, ∀x∈X d(x,f_i(x))≤ε} (T/(T+1)) · f0♯p0(A) + (1/(T+1)) · f1♯p1(A^c). (5)
Yet another definition uses the robust hypothesis testing framework with W∞ uncertainty sets. In this approach, an adversary perturbs the true distribution pi to a corrupted distribution p′i such that W∞(pi, p ′ i) ≤ . From (1), this is equivalent to the existence of a coupling π ∈ Π(pi, p′i) such that ess sup(x,x′)∼π d(x, x ′) ≤ . The adversarial risk with such an adversary is given by
R^Γ_ε(ℓ_{0/1}, A) = sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} (T/(T+1)) · p′0(A) + (1/(T+1)) · p′1(A^c). (6)
Clearly, RF (`0/1, A) ≤ RΓ (`0/1, A), but conditions for equality were not studied in prior work. Moreover, their relation to set expansion-based definitions in (3) and (4) was also unknown.
Next we discuss some characterizations of optimal adversarial risk, defined as R^∗_{⊕ε} := inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A). In [30, 2], it is shown that R^∗_{⊕ε} = (1/2)[1 − D_ε(p0, p1)] for equal priors (T = 1), where D_ε is an optimal transport cost defined as follows. Definition 1 (D_ε cost). Let µ, ν ∈ P(X ) and let ε ≥ 0. Let c_ε : X² → {0, 1} be such that c_ε(x, x′) = 1{(x, x′) ∈ X × X : d(x, x′) > 2ε}. Then for µ, ν ∈ P(X ) and ε ≥ 0, D_ε(µ, ν) = T_{c_ε}(µ, ν).
For ε = 0, D_ε reduces to the total variation distance. While D_0 is a metric on P(X ), D_ε (for ε > 0) is neither a metric nor a pseudometric [31].
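As a quick worked example (ours, not from the paper), D_ε can be evaluated in closed form for two Dirac masses, which also makes the ε = 0 case explicit.

```latex
% Worked example (ours): D_epsilon between two Dirac masses.  The only coupling
% of \delta_a and \delta_b is \pi = \delta_{(a,b)}, hence
D_\epsilon(\delta_a, \delta_b)
   \;=\; \int_{\mathcal{X}^2} c_\epsilon \, d\pi
   \;=\; \mathbf{1}\{\, d(a,b) > 2\epsilon \,\},
% and setting \epsilon = 0 gives D_0(\delta_a, \delta_b) = \mathbf{1}\{a \neq b\}
% = D_{TV}(\delta_a, \delta_b), consistent with D_0 being the total variation distance.
```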
Other formulations of optimal adversarial risk are inspired from game theory [29, 26, 3]. Consider a game between two players: (1) the adversary, whose action space is pairs of distributions p′0, p′1 ∈ P(X ), and (2) the algorithm, whose action space is the space of decision regions A ∈ B(X ). For T > 0, define r : B(X ) × P(X ) × P(X ) → [0, 1] as r(A, µ, ν) = (T/(T+1)) · µ(A) + (1/(T+1)) · ν(A^c). The payoff when the algorithm plays first is given by inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1), and this quantity is interpreted as the optimal adversarial risk in this setup.
4 Well-Definedness of Adversarial Risk
As stated in Section 3, R⊕ (`0/1, A) may not be well-defined for some decision regions A ∈ B(X ) because of the non-measurability of the sets A⊕ and (Ac)⊕ . Specifically, we have the following lemma.
Lemma 4.1. For any > 0, there exists A ∈ B(X ) such that A⊕ /∈ B(X ).
In this section, we lay down the conditions under which the ambiguity can be resolved. We begin by presenting a lemma that shows that A^{⊕ε} is an analytic set (i.e., a continuous image of a Borel set) whenever A is Borel. It is known that analytic sets are universally measurable, i.e., they belong to the universal completion B̄(X ) of the Borel σ-algebra B(X ), and are measurable with respect to any finite measure defined on the complete measure space (X , B̄(X )). Lemma 4.2. Let A ∈ B(X ). Then, A^{⊕ε} is an analytic set. Consequently, A^{⊕ε} ∈ B̄(X ).
By virtue of the previous lemma, we have the following. Theorem 4.1. Let p0, p1 ∈ P(X ). Then for any A ∈ B(X ), R⊕ (`0/1, A) is well-defined.
For the special case of X = Rd, we can further strengthen Theorem 4.1 to include all Lebesgue measurable sets L(X ) instead of just Borel sets B(X ). For this, we use the concept of porous sets. Definition 2 (Porous set). A set E ⊆ X is said to be porous if there exists α ∈ (0, 1) and r0 > 0 such that for every r ∈ (0, r0] and every x ∈ X , there is an x′ ∈ X such that Bαr(x′) ⊆ Br(x)\E.
Porous sets are a subclass of nowhere dense sets. Importantly, λ(E) = 0 for any porous set E ⊆ Rd [47]. By the following lemma, the set difference between the closed/open set expansions is porous. Lemma 4.3. Let (X , d) = (Rd, ‖ · ‖) and A ∈ L(X ). Then E = A \A ) is porous.
Lemma 4.3 plays a crucial role in proving that A⊕ ∈ L(X ) whenever A ∈ L(X ). We recall that A⊕ is the Minkowski sum of A with the closed -ball. In general, the Minkowski sum of two Lebesgue measurable sets is not always Lebesgue measurable [34, 14]. So the fact that one of them is a closed ball in case of A⊕ is important. In the following theorem, we use Lemma 4.3 to prove the measurability of A⊕ and in turn prove that R⊕ (`0/1, A) is well-defined for any A ∈ L(X ). Theorem 4.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) is well-defined. If, in addition, p0 and p1 are absolutely continuous with respect to the Lebesgue measure, then R⊕ (`0/1, A) = R (`0/1, A).
5 Equivalence with∞-Wasserstein Robustness
In this section, we show the conditions under which R⊕ (`0/1, A) is equivalent to other notions of adversarial risk based on transport maps and W∞ robustness.
5.1 W∞ Robustness in Polish Spaces via Measurable Selections
We begin by presenting a lemma that links the measure of the ε-Minkowski set expansion to the worst-case measure over a W∞ probability ball of radius ε. Lemma 5.1. Let µ ∈ P(X ) and A ∈ B(X ). Then sup_{W∞(µ,µ′)≤ε} µ′(A) = µ(A^{⊕ε}). Moreover, the supremum in the previous equation is achieved by a µ∗ ∈ P(X ) that is induced from µ via a measurable transport map φ : X → X (i.e., µ∗ = φ♯µ) satisfying d(x, φ(x)) ≤ ε for all x ∈ X .
A crucial step in the proof of Lemma 5.1 is finding a measurable transport map φ such that φ−1(A) = A⊕ and d(x, φ(x)) ≤ for all x ∈ X . In the following theorem, we use Lemma 5.1 to establish the equivalence between three different notions of adversarial risk introduced in section 3. Theorem 5.1. Let p0, p1 ∈ P(X ) and A ∈ B(X ). Then R⊕ (`0/1, A) = RF (`0/1, A) = RΓ (`0/1, A). In addition, the supremum over f0 and f1 in RF (`0/1, A) is attained. Similarly, the supremum over p′0 and p ′ 1 in RΓ (`0/1, A) is attained.
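A toy 1-D check of the transport-map construction behind Lemma 5.1 (ours, not from the paper; it assumes numpy, and takes A = [0, 1] so that the nearest-point projection is simply clipping): pushing every point within distance ε of A onto its nearest point of A is a feasible adversarial map φ, and by construction φ♯µ(A) = µ(A^{⊕ε}).

```python
# Toy check (not from the paper; assumes numpy) of the transport map in Lemma 5.1
# for A = [0, 1] in 1-D: phi moves every point within eps of A to its nearest
# point of A and leaves everything else fixed, so d(x, phi(x)) <= eps everywhere
# and (phi # mu)(A) = mu(A^{+eps}).
import numpy as np

eps = 0.2
rng = np.random.default_rng(1)
x = rng.normal(0.5, 1.0, 500_000)                      # samples from mu

dist_to_A = np.maximum(0.0, np.maximum(-x, x - 1.0))   # d(x, [0, 1])
phi = np.where(dist_to_A <= eps, np.clip(x, 0.0, 1.0), x)

lhs = np.mean((phi >= 0.0) & (phi <= 1.0))             # (phi # mu)(A)
rhs = np.mean(dist_to_A <= eps)                        # mu(A^{+eps}), = mu(A^{eps}) here
assert np.max(np.abs(phi - x)) <= eps + 1e-12          # the budget is respected
print(lhs, rhs)                                        # equal by construction
```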
5.2 W∞ Robustness in Rd via 2-Alternating Capacities
In this subsection, we establish a connection between adversarial risk and Choquet capacities [7] in Rd. This connection allows us to extend Theorem 5.1 from Borel sets to the broader class of Lebesgue measurable sets. We will again use this connection for proving minimax theorems and existence of Nash equilibria in Section 7.1. We begin with the following definitions. Definition 3 (Capacity). A set function v : B(X )→ [0, 1] is a capacity if it satisfies the following conditions: (1) v(∅) = 0 and v(X ) = 1; (2) For A,B ∈ B(X ), A ⊆ B =⇒ v(A) ≤ v(B); (3) An ↑ A =⇒ v(An) ↑ v(A); and (4) Fn ↓ F , Fn closed =⇒ v(Fn) ↓ v(F ). Definition 4 (2-Alternating Capacity). A capacity v defined on the measure space (X ,B(X )) is called 2-alternating if v(A ∪B) + v(A ∩B) ≤ v(A) + v(B) for all A,B ∈ B(X ).
For any compact set of probability measures Ξ ⊆ P(X ), the upper probability v(A) = supµ∈Ξ µ(A) is a capacity [19]. The upper probability of -neighborhoods of a µ ∈ P(X ) defined using either the total variation metric or the Levy-Prokhorov metric can be shown to be a 2-alternating capacity [19]. The following lemma shows that A 7→ µ(A⊕ ) is a 2-alternating capacity under some conditions. Lemma 5.2. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ) and let ≥ 0. Define a set function v on X such that for any A ∈ L(X ), v(A) := µ(A⊕ ). Then v is a 2-alternating capacity.
Now we relate the capacity defined in Lemma 5.2 to the W∞ metric. Since the -neighborhood of a µ ∈ P(X ) in W∞ metric is a compact set of probability measures [46], the upper probability over this W∞ -ball is a capacity. The following lemma shows that it is a 2-alternating capacity, and identifies it with the capacity defined in Lemma 5.2. Lemma 5.3. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ). Then for any A ∈ L(X ), supW∞(µ,µ′)≤ µ ′(A) = µ(A⊕ ). Moreover, the supremum in the previous equation is attained.
Lemma 5.3 plays a similar role to Lemma 5.1 in proving the following equivalence between adversarial robustness and W∞ robustness. Theorem 5.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) = RΓ (`0/1, A), and the supremum over p ′ 0 and p ′ 1 in RΓ (`0/1, A) is attained.
The proof follows by converting the expression for RΓ into one for R⊕ using Lemma 5.3. Unlike Theorem 5.1, Theorem 5.2 does not show the equivalence of RF (`0/1, A) with the other definitions under the relaxed assumption of A ∈ L(X ). This is because Lemma 5.3 does not provide a pushforward map φ such that µ∗ = φ]µ with µ∗ attaining the supremum over the W∞ ball.
6 Optimal Adversarial Risk via Generalized Strassen’s Theorem
In section 5, we analyzed adversarial risk for a specific decision region A ∈ B(X ). In this section, we analyze infimum of adversarial risk over all possible decision regions; i.e., the optimal adversarial risk. We show that optimal adversarial risk in binary classification with unequal priors is characterized by an unbalanced optimal transport cost between data-generating distributions. Our main technical lemma generalizes Strassen’s theorem to unbalanced optimal transport. We present this result in subsection 6.1 and present our characterization of optimal adversarial risk in subsection 6.2.
6.1 Unbalanced Optimal Transport & Generalized Strassen’s Theorem
Recall from Section 3 that the optimal transport cost D_ε characterizes the optimal adversarial risk in binary classification for equal priors. The following result gives an alternative characterization of D_ε. Proposition 6.1 (Strassen’s theorem). [Corollary 1.28 in [42]] Let µ, ν ∈ P(X ) and let ε ≥ 0. Then
sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = D_ε(µ, ν). (7)
Proposition 6.1 is a special case of Kantorovich-Rubinstein duality [42] applied to {0, 1}-valued cost functions. We now generalize this result to measures with unequal masses. We begin with some definitions that generalize the concepts we introduced in subsection 2.2.
Let µ, ν ∈ M(X ) be such that µ(X ) ≤ ν(X ). A coupling between µ and ν is a measure π ∈ M(X²) such that for any A ∈ B(X ), π(A × X ) = µ(A) and π(X × A) ≤ ν(A). The set Π(µ, ν) is defined to be the set of all couplings between µ and ν. For a cost function c : X² → [0,∞), the optimal transport cost between µ and ν under c is defined as T_c(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X²} c(x, x′) dπ(x, x′).
Theorem 6.1 (Generalized Strassen’s theorem). Let µ, ν ∈ M(X ) be such that 0 < M = µ(X ) ≤ ν(X ). Let ε > 0. Define c_ε : X² → {0, 1} as c_ε(x, x′) = 1{(x, x′) ∈ X² : d(x, x′) > 2ε}. Then
sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = T_{c_ε}(µ, ν) = M · inf_{ν′∈P(X ): ν′ ⪯ ν/M} D_ε(µ/M, ν′). (8)
Moreover, the infimum on the right hand side is attained. (Equivalently, there is a coupling π ∈ Π(µ, ν) that attains the unbalanced optimal transport cost Tc (µ, ν).)
Our proof of Theorem 6.1 leverages strong duality in linear programming. We first establish (8) for discrete measures on a finite support. We then apply the discrete result on a sequence of measures supported on a countable dense subset of the Polish space X . Using the tightness of finite measures on X , we construct an optimal coupling that achieves the cost Tc (µ, ν) in (8). We then show that the constructed coupling satisfies (8). This proof strategy is adapted from the works of [12] and [32].
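The identity (8) can be checked numerically on small discrete examples. The sketch below (ours, not from the paper; it assumes numpy and scipy, and the support points and weights are arbitrary) enumerates all subsets of supp(µ) for the left side, which suffices because shrinking A to A ∩ supp(µ) never decreases µ(A) − ν(A^{2ε}), and solves the unbalanced transport LP for the middle expression.

```python
# Numerical check (not from the paper; assumes numpy + scipy) of identity (8) for
# small discrete measures with unequal total mass.  The left side enumerates all
# subsets A of supp(mu); the middle expression is the unbalanced transport LP with
# cost c_eps(x, x') = 1{|x - x'| > 2 eps}.
import itertools
import numpy as np
from scipy.optimize import linprog

eps = 0.25
x = np.array([0.0, 1.0, 2.0]); mu = np.array([0.3, 0.3, 0.2])   # total mass 0.8
y = np.array([0.1, 0.9, 3.0]); nu = np.array([0.5, 0.4, 0.3])   # total mass 1.2
C = (np.abs(x[:, None] - y[None, :]) > 2 * eps).astype(float)

# Left side: sup_A mu(A) - nu(A^{2 eps}); subsets of supp(mu) suffice.
lhs = 0.0                                                        # A = empty set
for r in range(1, len(x) + 1):
    for S in itertools.combinations(range(len(x)), r):
        near = np.min(np.abs(y[:, None] - x[list(S)][None, :]), axis=1) <= 2 * eps
        lhs = max(lhs, mu[list(S)].sum() - nu[near].sum())

# Middle expression: ship mu exactly (equalities), receive at most nu (inequalities).
n, m = C.shape
A_eq = np.zeros((n, n * m)); A_ub = np.zeros((m, n * m))
for i in range(n): A_eq[i, i * m:(i + 1) * m] = 1.0
for j in range(m): A_ub[j, j::m] = 1.0
res = linprog(C.ravel(), A_eq=A_eq, b_eq=mu, A_ub=A_ub, b_ub=nu,
              bounds=(0, None), method="highs")
print(lhs, res.fun)                                              # both equal 0.2 here
```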
6.2 Optimal Adversarial Risk for Unequal Priors
Generalized Strassen’s theorem involves closed set expansions. The following lemma allows us to switch to Minkowski set expansions. Lemma 6.1. Let µ, ν ∈ M(X ) and let ε ≥ 0. Then sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = sup_{A∈B(X )} µ(A^{⊖ε}) − ν(A^{⊕ε}). Moreover, the supremum in the right hand side of the above equality can be replaced by a supremum over closed sets.
Using the above lemma and the generalized Strassen’s theorem, we show the following result on optimal adversarial risk for unequal priors, generalizing the result of [30, 2]. Theorem 6.2. Let p0, p1 ∈ P(X ) and let ε ≥ 0. Then,
inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A) = (1/(T+1)) [ 1 − inf_{q∈P(X ): q ⪯ T p0} D_ε(q, p1) ]. (9)
Moreover, the infimum on the left hand side can be replaced by an infimum over closed sets.
The proof follows by using Lemma 6.1 to convert the expression with Minkowski expansion to one with closed expansions, followed by an application of Theorem 6.1 to arrive at the final optimal transport-based expression. Theorem 6.2 extends the result of [31] in two ways: (1) the infimum is
taken over all sets for which R⊕ (`0/1, A) is well-defined, instead of restricting to closed sets, and (2) the priors on both labels can be unequal. We also note that for (X , d) = (Rd, ‖ · ‖), (9) holds with the infimum on the left hand side taken over all A ∈ L(X ).
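For readers who want the chain of reductions spelled out, here is our paraphrase (not a verbatim excerpt of the proof) of how Lemma 6.1 and Theorem 6.1 combine to give (9), writing A^{⊖ε} = ((A^c)^{⊕ε})^c as in Section 2.1 and using the symmetry of D_ε (the cost c_ε is symmetric):

```latex
% Our paraphrase (not a verbatim excerpt) of the reduction behind Theorem 6.2,
% with A^{\ominus\epsilon} := ((A^c)^{\oplus\epsilon})^c as in Section 2.1:
\begin{aligned}
(T+1)\,\inf_{A} R^{\oplus}_{\epsilon}(\ell_{0/1}, A)
  &= \inf_{A}\bigl[\, T\, p_0(A^{\oplus\epsilon}) + 1 - p_1(A^{\ominus\epsilon}) \,\bigr]
     && \text{since } p_1\bigl((A^c)^{\oplus\epsilon}\bigr) = 1 - p_1(A^{\ominus\epsilon}) \\
  &= 1 - \sup_{A}\bigl[\, p_1(A^{\ominus\epsilon}) - T\, p_0(A^{\oplus\epsilon}) \,\bigr] \\
  &= 1 - \sup_{A}\bigl[\, p_1(A) - T\, p_0(A^{2\epsilon}) \,\bigr]
     && \text{by Lemma 6.1 with } \mu = p_1,\ \nu = T p_0 \\
  &= 1 - \inf_{q \in \mathcal{P}(\mathcal{X}) :\, q \preceq T p_0} D_\epsilon(p_1, q)
     && \text{by Theorem 6.1 with } M = 1,
\end{aligned}
% and D_\epsilon(p_1, q) = D_\epsilon(q, p_1) because the cost c_\epsilon is symmetric.
```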
7 Minimax Theorems and Nash Equilibria
In this section, we revisit the zero-sum game between the adversary and the algorithm introduced in section 3. Recall that for A ∈ B(X ) and p′0, p′1 ∈ P(X ), the payoff function is given by
r(A, p′0, p′1) = (T/(T+1)) · p′0(A) + (1/(T+1)) · p′1(A^c). (10)
The max-min inequality gives us
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) ≤ inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (11)
If the inequality in (11) is an equality, we say that the game has zero duality gap, and admits a value equal to either expression in (11). Then there is no advantage to a player making the first move. Our minimax theorems establish such an equality. If in addition to having an equality in (11), there exist p∗0, p ∗ 1 ∈ P(X ) that achieve the supremum on the left-hand side and A∗ ∈ B(X ) that achieves the infimum on the right-hand side, we say that ((p∗0, p ∗ 1), A ∗) is a pure Nash equilibrium of the game.
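The inequality (11) is just the max-min inequality, which holds for any payoff. The following generic finite sketch (ours, not from the paper; it assumes numpy, and the random matrix merely stands in for r) makes the direction of the inequality concrete.

```python
# Generic finite illustration (not from the paper; assumes numpy) of the max-min
# inequality (11): for any payoff matrix r[a, b] with rows indexed by the
# algorithm's actions and columns by the adversary's actions,
#     max_b min_a r[a, b]  <=  min_a max_b r[a, b].
import numpy as np

rng = np.random.default_rng(3)
r = rng.random((5, 7))                     # an arbitrary payoff matrix
maxmin = r.min(axis=0).max()               # adversary commits first
minmax = r.max(axis=1).min()               # algorithm commits first
assert maxmin <= minmax + 1e-12
print(maxmin, minmax)
```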
In Section 7.1, we prove the minimax theorem and the existence of a pure Nash equilibrium in Rd using the theory of 2-alternating capacities [19] and the relation to adversarial risk from Section 5.2. Section 7.2 extends these results to more general Polish spaces with a “midpoint property."
7.1 Minimax Theorem in Rd via 2-Alternating Capacities
The following theorem proves the minimax equality and the existence of a Nash equilibrium for the adversarial robustness game in Rd. Theorem 7.1 (Minimax theorem in Rd). Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Define r as in (10). Then,
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈L(X )} r(A, p′0, p′1) = inf_{A∈L(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (12)
Moreover, there exist p∗0, p ∗ 1 ∈ P(X ) and A∗ ∈ L(X ) that achieve the supremum and infimum on the left and right hand sides of the above equation.
Crucial to the proof of Theorem 7.1 is Lemma 5.2, which shows that the set functions A ↦ p0(A^{⊕ε}) and A^c ↦ p1((A^c)^{⊕ε}) are 2-alternating capacities. The same proof technique is not applicable in general Polish spaces because the map A ↦ µ(A^{⊕ε}) is not a capacity for a general µ ∈ P(X ). This is because A^{⊕ε} is not measurable for all A ∈ B(X ).
7.2 Minimax Theorem in Polish Spaces via Optimal Transport
We now extend the minimax theorem from Rd to general Polish spaces with the following property. Definition 5 (Midpoint property). A metric space (X , d) is said to have the midpoint property if for every x1, x2 ∈ X , there exists x ∈ X such that, d(x1, x) = d(x, x2) = d(x1, x2)/2.
Any normed vector space with distance defined as d(x, x′) = ‖x − x′‖ satisfies the midpoint property. An example of a metric space without this property is the discrete metric space where d(x, x′) = 1{x 6= x′}. The midpoint property plays a crucial role in proving the following theorem, which shows that the D transport cost between two distributions is the shortest total variation distance between their -neighborhoods in W∞ metric. A similar result was also presented in [11]. Theorem 7.2 (D as shortest DTV between W∞ balls). Let (X , d) have the midpoint property. Let µ, ν ∈ P(X ) and let ≥ 0. Then D (µ, ν) = infW∞(µ,µ′),W∞(ν,ν′)≤ DTV (µ′, ν′). Moreover, the infimum over DTV in the above equation is attained.
The following theorem uses Theorem 7.2 to prove the minimax equality and the existence of a Nash equilibrium for any Polish space with the midpoint property for the case of equal priors.
Theorem 7.3 (Minimax theorem for equal priors). Let (X , d) have the midpoint property. Let p0, p1 ∈ P(X ) and let ≥ 0. Define r as in (10) with T = 1. Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) = inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (13)
Moreover, there exist p∗0, p ∗ 1 ∈ P(X ) that achieve the supremum on the left hand side.
Proof. For µ ∈ P(X ), let WB_ε(µ) denote the set of all µ′ ∈ P(X ) such that W∞(µ, µ′) ≤ ε. For the right-hand side of (13),
inf_{A∈B(X )} sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} r(A, p′0, p′1) = inf_{A∈B(X )} R^Γ_ε(ℓ_{0/1}, A) = inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A) = (1/2)[1 − D_ε(p0, p1)],
where the second equality follows from Theorem 5.1 and the third from Theorem 6.2. For the left-hand side,
sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} inf_{A∈B(X )} r(A, p′0, p′1) = sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} (1/2)[1 − D_TV(p′0, p′1)] = (1/2)[ 1 − inf_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} D_TV(p′0, p′1) ],
where the first equality follows from Theorem 6.2 applied with ε = 0. The two right-most expressions are equal by Theorem 7.2, which proves (13). The existence of p∗0, p∗1 ∈ P(X ) also follows from Theorem 7.2.
To prove the minimax theorem for unequal priors, we need the following generalization of Theorem 7.2 to finite measures of unequal mass. Lemma 7.1. Let p0, p1 ∈ P(X ) and let ε ≥ 0. Then for T ≥ 1,
inf_{q∈P(X ): q ⪯ Tp0} D_ε(q, p1) = inf_{q∈P(X ): q ⪯ Tp0} inf_{W∞(q,q′)≤ε, W∞(p1,p′1)≤ε} D_TV(q′, p′1) = inf_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{q′∈P(X ): q′ ⪯ Tp′0} D_TV(q′, p′1). (14)
Now, we prove the minimax equality for unequal priors. Theorem 7.4 (Minimax theorem for unequal priors). Let (X , d) have the midpoint property. Let p0, p1 ∈ P(X ) and let ≥ 0. For T > 0, define r as in (10). Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) = inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (15)
The proof uses: (1) the characterization of inf-sup payoff in terms of unbalanced optimal transport using Theorem 5.1; (2) Lemma 7.1; and (3) the minimax equality of Theorem 7.3 for equal priors. Remark 2. Unlike Theorem 7.1, Theorems 7.3 and 7.4 do not guarantee the existence of an optimal decision region A∗. While Theorem 7.3 guarantees the existence of worst-case pair of perturbed distributions p∗0, p ∗ 1, Theorem 7.4 does not do so. Nevertheless, an approximate pure Nash equilibrium exists in all the cases. This is in sharp contrast with the non-existence of Nash equilibrium proven in [29] (which considers a different notion of adversarial perturbations). Remark 3. A recent work [26] shows the existence of mixed Nash equilibrium for randomized classifiers parametrized by points in a Polish space (see also [29, 3]). Fan’s minimax theorem used in this result is inapplicable in our setting of non-parametric, decision region-based classifiers. Instead, we applied the theory of Choquet capacities (in Rd) and generalized Strassen’s duality theorem (in Polish spaces), which is novel to the best of our knowledge.
8 Discussion
We examined different notions of adversarial risk in a binary classification setting with 0-1 loss function and laid down the conditions under which these definitions are equivalent. By verifying the conditions in Sections 4 and 5, researchers may use different definitions interchangeably. Several definitions have also been proposed for adversarial risk under general loss functions [31, 26] using analogous constructions like transport maps, couplings and suprema over -neighborhoods. Extending our equivalence results to more general loss functions is left for future work.
[Figure caption; the figure itself is not reproduced here.] The panels summarize the results of Section 6 and Section 7. For equal priors (T = 1), A and B denote two ways of obtaining the optimal adversarial risk R^∗_{⊕ε}: 1) A, the D_ε cost between the true label distributions p0 and p1, and 2) B, the shortest total variation distance between ∞-Wasserstein balls of radius ε around p0 and p1. For unequal priors (T > 1), C, D and E denote three equivalent ways of obtaining R^∗_{⊕ε}. The black dotted balls denote ∞-Wasserstein balls and the blue dashed balls denote sets defined using stochastic domination. The order in which the two types of balls appear around p0 is reversed between D and E.
We analyzed optimal adversarial risk for (non-parametric) decision region-based classifiers. Using a formulation of optimal transport between finite measures of unequal mass, we extended the optimal transport based characterization of adversarial risk of [30, 2] to unequal priors by generalizing Strassen’s theorem. This may find applications in the study of excess cost optimal transport [45, 44]. A recent work [39] obtains a different characterization of optimal adversarial risk using optimal transport on the product space X × Y where Y is the label space. Further, they show the evolution of the optimal classifier A∗ as ε grows, in terms of a mean curvature flow. This raises an interesting question on the evolution of the optimal adversarial distributions p∗0, p∗1 ∈ P(X ) with ε.
We proved a minimax theorem for adversarial robustness game and the existence of a Nash equilibrium. We constructed the worst-case pair of distributions p∗0, p ∗ 1 ∈ P(X ) in terms of true data distributions and showed that their total variation distance gives the optimal adversarial risk. Identifying worst case distributions could lead to a new approach to developing robust algorithms.
We used Choquet capacities for results in Rd and measurable selections in Polish spaces. Specifically, we showed that the measure of -Minkowski expansion is a 2-alternating capacity. This connection could help generalize our results to total variation and Prokhorov distance based contaminations.
Limitations: We largely focused on the binary classification setup with 0-1 loss function. While it may be possible to extend our results on measurability and relation to ∞-Wasserstein distributional robustness to more general loss functions and a multi-class setup, it is unclear how our results on generalized Strassen’s theorem and Nash equilibria can be extended further. Our results on various equivalent formulations of optimal adversarial risk are specific to adversarial perturbations (or equivalently, ∞-Wasserstein distributional perturbations), and we did not investigate more general perturbation models. | 1. What is the focus of the paper regarding adversarial risk, and how does it extend previous works?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. Do you have any concerns or questions regarding the definition of adversarial risk and its measurability?
4. How does the paper's approach differ from prior works in terms of existence of Pure Nash equilibria, and what are the implications of this result?
5. What are your suggestions for improving the paper's readability and clarity, especially for non-experts?
6. How does the paper compare to other recent works in this area, and what are the main differences and contributions?
7. Are there any specific points in the proof or arguments that you would like the authors to clarify or expand upon? | Summary Of The Paper
Review | Summary Of The Paper
The paper extends previous work on the study of adversarial risk. The authors are interested in the well-definedness of adversarial risk. They then show a minimax theorem for the game between the classifier and the adversary.
Review
Overall I enjoyed reading the paper. The paper brings insights on adversarial risk, extending previous results from the literature. The paper is well written and easy to follow. The current trend of theoretical papers on adversarial attacks can only be beneficial for the understanding of the topic. I tried to make a thorough reading of the proofs, but I still might have missed some points. I would like to discuss the following points with the authors:
On the definition of the adversarial risk. I believe it is provable that $A^{\oplus\epsilon}$ is analytic (cf. [1] for more details) and hence universally measurable. Idea: write $1_{x\in A^{\oplus\epsilon}}$ as $\sup_{a\in A} 1_{x\in\bar{B}_\epsilon(a)}$. Then you can use the results from Bertsekas and Shreve (2004), Prop 7.39 and Corollary 7.42.1. Maybe the authors can use these ideas to prove a more general result. I think all further results can be extended to all Borel sets instead of only $G_\delta$ and $F_\sigma$ sets. I am happy to discuss more about this during the rebuttal phase. I might be mistaken, but I think it is worth discussing. It would simplify some results in the paper, I'd say.
The authors said they prove the existence of Pure Nash equilibria. Although the classifier is deterministic, the attacker is still « randomized ». Maybe they should mention that the equilibria are hybrid. This result seems right and reasonable.
I would also suggest that the authors give some more intuition for the different notions of risk they introduce. For instance, draw some figures to explain them. It would be more readable for non-expert readers.
Finally, although I enjoyed reading the paper, I also find that the paper lacks a bit of novelty with regards to existing literature (Pydi et al., Dohmatob, Bhagoji et al.). These papers have proved the majority of the results exposed in the paper ($W_\infty$, $c_\epsilon$ costs, etc.). The authors managed to treat unbalanced probabilities. This is my major concern about accepting/rejecting the paper. I am willing to discuss with the authors and the other reviewers on this.
For now, I rate this paper as borderline, but I am willing to change my rating if the authors explain to me more clearly the novelties of their paper. Moreover, I hope authors can get rid of their hypothesis on $F_\sigma$ and $G_\delta$.
NIPS | Title
The Many Faces of Adversarial Risk
Abstract
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data. Numerous definitions of adversarial risk—not all mathematically rigorous and differing subtly in the details—have appeared in the literature. In this paper, we revisit these definitions, make them rigorous, and critically examine their similarities and differences. Our technical tools derive from optimal transport, robust statistics, functional analysis, and game theory. Our contributions include the following: generalizing Strassen’s theorem to the unbalanced optimal transport setting with applications to adversarial classification with unequal priors; showing an equivalence between adversarial robustness and robust hypothesis testing with∞-Wasserstein uncertainty sets; proving the existence of a pure Nash equilibrium in the two-player game between the adversary and the algorithm; and characterizing adversarial risk by the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets. Our results generalize and deepen recently discovered connections between optimal transport and adversarial robustness and reveal new connections to Choquet capacities and game theory.
1 Introduction
Neural networks are known to be vulnerable to adversarial attacks, which are imperceptible perturbations to input data that maximize loss [38, 15, 5]. Developing algorithms resistant to such attacks has received considerable attention in recent years [8, 28, 24, 20], motivated by safety-critical applications such as autonomous driving [18, 27], medical imaging [17, 23, 22] and law [21, 6].
A classification algorithm with high accuracy (low risk) in the absence of an adversary may have poor accuracy (high risk) when an adversary is present. Thus, a modified notion known as adversarial risk is used to quantify the adversarial robustness of algorithms. Algorithms that minimize adversarial risk are deemed robust. Procedures for finding them have been effective in practice [24, 41, 28], spurring numerous theoretical investigations into adversarial risk and its minimizers.
There is no universally agreed upon definition of adversarial risk. Even the simplest setting of binary classification in Rd with an `2 adversary admits various definitions involving set expansions [9, 16], transport maps [29], Markov kernels [31], and couplings [26]. These works broadly interpret adversarial risk as a measure of robustness to small perturbations, but their definitions differ in subtle details such as the class of adversaries and algorithms considered, budget constraints placed on the adversary, assumptions on the loss function, and the geometry of decision boundaries.
Optimal adversarial risk is most commonly defined as the minimax risk under adversarial contamination [24, 33]. Other notable characterizations include an optimal transport cost between data generating distributions in [30, 2, 10, 11], the optimal value of a distributionally robust optimization problem [36, 35, 40], and the value of a two-player zero-sum game [26, 29, 3, 4].
The diversity of definitions for adversarial risk makes it challenging to compare approaches. Moreover, not all approaches are rigorous. For instance, the classes of adversarial strategies and classifier
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
algorithms are often unclear, and issues of measurability are ignored. Although this may be harmless for applied research, it has led to incorrect proofs and insufficient assumptions in some theoretical works; a mathematically rigorous foundation for adversarial risk is essential for future research.
In this paper, we examine various notions of adversarial risk for binary classification in a nonparametric setting, where the decision boundary (or decision region) of a classifier is an arbitrary subset of the input space. We present rigorous definitions of adversarial risk and identify conditions under which these definitions are equivalent. We consider the general setting of Polish spaces (complete, separable metric spaces), and present stronger results for the Euclidean space (Rd). Our contributions are as follows:
• We examine the definition of adversarial risk based on set expansions. For Polish spaces, we observe that adversarial risk is not Borel measurable, and hence, not well-defined when the decision region is an arbitrary set. We show that the problem can be resolved by considering a Polish space equipped with the universal completion of the Borel σ-algebra and restricting the decision regions to Borel sets. For the Euclidean space with the Lebesgue σ-algebra, we show that adversarial risk is well-defined for any Lebesgue measurable decision region. Our key lemma (Lemma 4.3) shows that the Lebesgue σ-algebra is preferred over the Borel σ-algebra because set expansions are Lebesgue measurable but not necessarily Borel measurable. These results are contained in Section 4.
• We show that the definition of adversarial risk using set expansions is identical to a notion of risk that appears in robust hypothesis testing with∞-Wasserstein uncertainty sets. We prove this result in Polish spaces using the theory of measurable selections [1, 43]. In Rd, we are able to use the powerful theory of Choquet capacities [7] (in particular, Huber and Strassen’s 2-alternating capacities [19]) to establish results of a similar nature. These results are contained in Section 5.
• We consider the binary classification setup with unequal priors and show (under suitable assumptions) that the optimal adversarial risk from the above definitions is characterized by an unbalanced optimal transport cost between data-generating distributions. For both Polish spaces and Rd, the main tool we use is Theorem 6.1 in which we generalize a classical result of Strassen on excess-cost optimal transport [37, 42] from probability measures to finite measures with possibly unequal mass. This generalizes results of [31, 2] on binary classification, which were only for equal priors. These results are contained in Section 6.
• We consider the setup of a zero-sum game between the adversary and the algorithm. We show that the value of this game (adversarial risk) is equal to the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets centered around true data-generating distributions. We prove the existence of a pure Nash equilibrium in this game for Rd and for Polish spaces with a midpoint property. This extends/strengthens the results of [26, 29, 3] to non-parametric classifiers. These results are contained in Section 7.
The paper is organized as follows: In Section 2, we present preliminary definitions from optimal transport and metric space topology. In Section 3, we discuss various definitions of adversarial risk and present more related work. Sections 4, 5, 6 and 7 contain our main contributions summarized above. We conclude the paper in Section 8 and discuss future research directions.
We emphasize that rectifying measure theoretic issues with existing formulations of adversarial risk is one of our contributions, but not the main focus of our paper. We start our presentation with fixing measurability and well-definedness (in Section 4) because otherwise we will not be able to rigorously present our main results in the subsequent sections, namely: relation to robust hypothesis testing and Choquet capacities in section 5, generalizing the results of [2, 30] in section, 6 proving minimax theorems and existence of Nash equilibria and extending the results of [26, 3, 29] in section 7.
Notation: Throughout the paper, we use X to denote a Polish space (a complete, separable metric space) with metric d and Borel σ-algebra B(X ). For x ∈ X and r ≥ 0, let Br(x) denote the closed ball of radius r centered at x. We use P(X ) andM(X ) to denote the space of probability measures and finite measures defined on the measure space (X ,B(X )), respectively. Let B(X ) denote the universal completion of B(X ). Let P(X ) andM(X ) denote the space of probability measures and finite measures defined on the complete measure space (X ,B(X )). For µ, ν ∈ M(X ), we say ν dominates µ if µ(A) ≤ ν(A) for all A ∈ B(X ) and write µ ν. When X is Rd, we use L(X ) to
denote the Lebesgue σ-algebra and λ to denote the d-dimensional Lebesgue measure. Note that L(X ) = B(X ) for X = Rd. For a positive integer n, we use [n] to denote the finite set {1, . . . , n}.
2 Preliminaries
2.1 Metric Space Topology
We introduce three different notions of set expansions. For ≥ 0 and A ∈ B(X ), the -Minkowski expansion of A is given by A⊕ := ∪a∈AB (a). The -closed expansion of A is defined as A := {x ∈ X : d(x,A) ≤ }, where d(x,A) = infa∈A d(x, a). The -open expansion of A is defined as A ) := {x ∈ X : d(x,A) < }. We use the notation A− to denote ((Ac) )c. Similarly, A := ((Ac)⊕ )c. For example, consider the set A = (0, 1] in the space (X , d) = (R, | · |) and > 0. Then A⊕ = (− , 1 + ], A = [− , 1 + ] and A ) = (− , 1 + ). For any A ∈ B(X ), A is closed and A ) is open. Hence, A , A ) ∈ B(X ). Moreover, A ) ⊆ A⊕ ⊆ A . However, A⊕ may not be in B(X ) (see Appendix for an example). In general, the Minkowski sum of two Borel sets need not be Borel [13], and that of two Lebesgue measurable sets need not be Lebesgue measurable [34].
2.2 Optimal Transport
Let µ, ν ∈ P(X ). A coupling between µ and ν is a joint probability measure π ∈ P(X 2) with marginals µ and ν. The set Π(µ, ν) ⊆ P(X 2) denotes the set of all couplings between µ and ν. The optimal transport cost between µ and ν under a cost function c : X × X → [0,∞) is defined as Tc(µ, ν) = infπ∈Π(µ,ν) ∫ X 2 c(x, x ′)dπ(x, x′). For a positive integer p, the p-Wasserstein distance between µ and ν is defined as, Wp(µ, ν) = (Tdp(µ, ν)) 1 p . The∞-Wasserstein metric is defined as W∞(µ, ν) = limp→∞Wp(µ, ν). It can also be expressed in the following ways:
W∞(µ, ν) = inf π∈Π(µ,ν) ess sup (x,x′)∼π
d(x, x′) = inf{δ > 0 : µ(A) ≤ ν(Aδ)∀A ∈ B(X )}. (1)
Given a µ ∈ P(X ) and a measurable function f : X → X , the push-forward of µ by f is defined as a probability measure f]µ ∈ P(X ) given by f]µ = µ(f−1(A)) for all A ∈ B(X ).
3 Adversarial Risk: Definitions and Related Work
We consider a binary classification setting on feature space X . Let p0, p1 ∈ P(X ) be the datagenerating distributions for labels 0 and 1, respectively. Let the prior probabilities for labels 0 and 1 be in the ratio T : 1 where we assume T ≥ 1 without loss of generality. For a space of classifiers parametrized by w ∈ W and a loss function ` : (X × Y)×W → [0,∞), the adversarial risk of a classifier w ∈ W under an adversarial budget of ≥ 0 is defined as [24, 33],
R⊕ (`, w) = E(x,y)
[ sup
d(x,x′)≤ `((x′, y), w)
] . (2)
If the loss function `(·, w) is measurable, upper semi-continuous and bounded above for all w ∈ W , [26] show that R (`, w) is well-defined. But in general, it may not be so. A case of special interest is the 0-1 loss function with non-parametric classifiers of the form fA(x) := 1{x ∈ A} where A ∈ B(X ). In this case, `0/1((x, y), A) = 1{x ∈ A, y = 0}+ 1{x ∈ Ac, y = 1}. Hence,
R⊕ (`0/1, A) = T
T + 1 Ep0
[ sup
d(x,x′)≤ 1{x′ ∈ A}
] + 1
T + 1 Ep1
[ sup
d(x,x′)≤ 1{x′ ∈ Ac}
]
= T
T + 1 p0(A
⊕ ) + 1
T + 1 p1((A
c)⊕ ). (3)
A problem with the formulation in equation 3 is the ambiguity over the measurability of the sets A⊕ and (Ac)⊕ . Even when A ∈ B(X ), it is not guaranteed that A⊕ , (Ac)⊕ ∈ B(X ) (see Appendix for an example). Hence, R⊕ (`0/1, A) is not well-defined for all A ∈ B(X ). It is shown in [31] that R⊕ (`0/1, A) is well-defined when A is either closed or open, but its validity beyond that is unknown.
A simple fix to this measurability problem is to use closed set expansion instead of the Minkowski set expansion, as done in [25]. This leads to the following formulation.
R (`0/1, A) = T
T + 1 p0(A
) + 1
T + 1 p1((A
c) ). (4)
The above definition is well-defined for any A ∈ B(X ) because A and (Ac) are both closed and hence, measurable. However, under the above definition, a point x ∈ A may be perturbed to x′ ∈ A such that d(x, x′) > . For example, when A = (0, 1), we have A = [− , ] and an adversary may transport x = δ > 0 to x′ = − , violating the budget constraint at x. Remark 1. The formulations in equations (2), (3) and (4) can give a strictly positive adversarial risk even for a “perfect” (i.e. Bayes optimal) classifier. This is consistent with the literature on adversarial examples where even a perfect classifier is forced to make errors in the presence of evasion attacks. These formulations of adversarial risk correspond to “constant-in-the-ball” risk of [16] and “corrupted-instance” risk in [9, 25]. Here, an adversarial risk of zero is only possible if the supports of p0 and p1 are non-overlapping and separated by at least 2 . This is not the case with other formulations of adversarial risk such as “exact-in-the-ball” risk [16], “prediction-change risk and “error-region” risk [9, 25]. We focus on the “corrupted-instance” family of risks in this work.
Another approach to defining adversarial risk is by explicitly defining the class of adversaries of budget as measurable transport maps f : X → X that push-forward the true data distribution such that no point is transported by more than a distance of ; i.e., d(x, f(x)) ≤ . The transport map-based adversarial risk [29] is formally defined as follows:
RF (`0/1, A) = sup f0,f1:X→X
∀x∈X ,d(x,fi(x))≤
T
T + 1 f0]p0(A) +
1
T + 1 f1]p1((A
c)). (5)
Yet another definition uses the robust hypothesis testing framework with W∞ uncertainty sets. In this approach, an adversary perturbs the true distribution pi to a corrupted distribution p′i such that W∞(pi, p ′ i) ≤ . From (1), this is equivalent to the existence of a coupling π ∈ Π(pi, p′i) such that ess sup(x,x′)∼π d(x, x ′) ≤ . The adversarial risk with such an adversary is given by
RΓ (`0/1, A) = sup W∞(p1,p′1),W∞(p0,p ′ 0)≤
T
T + 1 p′0(A) +
1
T + 1 p′1((A c)). (6)
Clearly, RF (`0/1, A) ≤ RΓ (`0/1, A), but conditions for equality were not studied in prior work. Moreover, their relation to set expansion-based definitions in (3) and (4) was also unknown.
Next we discuss some characterizations of optimal adversarial risk, defined as R∗⊕ := infA∈B(X )R⊕ (`0/1, A). In [30, 2], it is shown that R∗ = 1 2 [1 − D (p0, p1)] for equal priors (T = 1), where D is an optimal transport cost defined as follows. Definition 1 (D cost). Let µ, ν ∈ P(X ). Let ≥ 0. Let c : X 2 → {0, 1} be such that c (x, x′) = 1{(x, x′) ∈ X × X : d(x, x′) > 2 }. Then for µ, ν ∈ P(X ) and ≥ 0, D (µ, ν) = Tc (µ, ν).
For = 0, D reduces to the total variation distance. While D0 is a metric on P(X ), D (for > 0) is neither a metric nor a pseudometric [31].
Other formulations of optimal adversarial risk are inspired from game theory [29, 26, 3]. Consider a game between two players: (1) The adversary whose action space is pairs of distributions p′0, p ′ 1 ∈ P(X ), and (2) The algorithm whose action space is the space of decision regions of the form A ∈ B(X )}. For T > 0, define r : B(X ) × P(X ) × P(X ) → [0, 1] as, r(A,µ, ν) = TT+1µ(A) + 1 T+1ν((A
c)). The payoff when the algorithm plays first is given by infA∈B(X ) supW∞(p0,p′0),W∞(p1,p′1)≤ r(A, p ′ 0, p ′ 1), and this quantity is interpreted as the optimal adversarial risk in this setup.
4 Well-Definedness of Adversarial Risk
As stated in Section 3, R⊕ (`0/1, A) may not be well-defined for some decision regions A ∈ B(X ) because of the non-measurability of the sets A⊕ and (Ac)⊕ . Specifically, we have the following lemma.
Lemma 4.1. For any > 0, there exists A ∈ B(X ) such that A⊕ /∈ B(X ).
In this section, we lay down the conditions under which the ambiguity can be resolved. We begin by presenting a Lemma that shows that A⊕ is an analytic set (i.e. a continuous image of a Borel set) whenever A is Borel. It is known that an analytic sets are universally measurable, i.e. they belong in B(X ), the universal completion of the Borel σ-algebra B(X ), and are measurable with respect to any finite measure defined on the complete measure space, (X ,B(X )). Lemma 4.2. Let A ∈ B(X ). Then, A⊕ is an analytic set. Consequently, A⊕ ∈ B(X ).
By virtue of the previous lemma, we have the following. Theorem 4.1. Let p0, p1 ∈ P(X ). Then for any A ∈ B(X ), R⊕ (`0/1, A) is well-defined.
For the special case of X = Rd, we can further strengthen Theorem 4.1 to include all Lebesgue measurable sets L(X ) instead of just Borel sets B(X ). For this, we use the concept of porous sets. Definition 2 (Porous set). A set E ⊆ X is said to be porous if there exists α ∈ (0, 1) and r0 > 0 such that for every r ∈ (0, r0] and every x ∈ X , there is an x′ ∈ X such that Bαr(x′) ⊆ Br(x)\E.
Porous sets are a subclass of nowhere dense sets. Importantly, λ(E) = 0 for any porous set E ⊆ Rd [47]. By the following lemma, the set difference between the closed/open set expansions is porous. Lemma 4.3. Let (X , d) = (Rd, ‖ · ‖) and A ∈ L(X ). Then E = A \A ) is porous.
Lemma 4.3 plays a crucial role in proving that A⊕ ∈ L(X ) whenever A ∈ L(X ). We recall that A⊕ is the Minkowski sum of A with the closed -ball. In general, the Minkowski sum of two Lebesgue measurable sets is not always Lebesgue measurable [34, 14]. So the fact that one of them is a closed ball in case of A⊕ is important. In the following theorem, we use Lemma 4.3 to prove the measurability of A⊕ and in turn prove that R⊕ (`0/1, A) is well-defined for any A ∈ L(X ). Theorem 4.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) is well-defined. If, in addition, p0 and p1 are absolutely continuous with respect to the Lebesgue measure, then R⊕ (`0/1, A) = R (`0/1, A).
5 Equivalence with∞-Wasserstein Robustness
In this section, we show the conditions under which R⊕ (`0/1, A) is equivalent to other notions of adversarial risk based on transport maps and W∞ robustness.
5.1 W∞ Robustness in Polish Spaces via Measurable Selections
We begin by presenting a lemma that links the measure of -Minkowsi set expansion to the worst case measure over a W∞ probability ball of radius . Lemma 5.1. Let µ ∈ P(X ) and A ∈ B(X ). Then supW∞(µ,µ′)≤ µ
′(A) = µ(A⊕ ). Moreover, the supremum in the previous equation is achieved by a µ∗ ∈ P(X ) that is induced from µ via a measurable transport map φ : X → X (i.e. µ∗ = φ]µ) satisfying d(x, φ(x)) ≤ for all x ∈ X .
A crucial step in the proof of Lemma 5.1 is finding a measurable transport map φ such that φ−1(A) = A⊕ and d(x, φ(x)) ≤ for all x ∈ X . In the following theorem, we use Lemma 5.1 to establish the equivalence between three different notions of adversarial risk introduced in section 3. Theorem 5.1. Let p0, p1 ∈ P(X ) and A ∈ B(X ). Then R⊕ (`0/1, A) = RF (`0/1, A) = RΓ (`0/1, A). In addition, the supremum over f0 and f1 in RF (`0/1, A) is attained. Similarly, the supremum over p′0 and p ′ 1 in RΓ (`0/1, A) is attained.
5.2 W∞ Robustness in Rd via 2-Alternating Capacities
In this subsection, we establish a connection between adversarial risk and Choquet capacities [7] in Rd. This connection allows us to extend Theorem 5.1 from Borel sets to the broader class of Lebesgue measurable sets. We will again use this connection for proving minimax theorems and existence of Nash equilibria in Section 7.1. We begin with the following definitions. Definition 3 (Capacity). A set function v : B(X )→ [0, 1] is a capacity if it satisfies the following conditions: (1) v(∅) = 0 and v(X ) = 1; (2) For A,B ∈ B(X ), A ⊆ B =⇒ v(A) ≤ v(B); (3) An ↑ A =⇒ v(An) ↑ v(A); and (4) Fn ↓ F , Fn closed =⇒ v(Fn) ↓ v(F ). Definition 4 (2-Alternating Capacity). A capacity v defined on the measure space (X ,B(X )) is called 2-alternating if v(A ∪B) + v(A ∩B) ≤ v(A) + v(B) for all A,B ∈ B(X ).
For any compact set of probability measures Ξ ⊆ P(X ), the upper probability v(A) = supµ∈Ξ µ(A) is a capacity [19]. The upper probability of -neighborhoods of a µ ∈ P(X ) defined using either the total variation metric or the Levy-Prokhorov metric can be shown to be a 2-alternating capacity [19]. The following lemma shows that A 7→ µ(A⊕ ) is a 2-alternating capacity under some conditions. Lemma 5.2. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ) and let ≥ 0. Define a set function v on X such that for any A ∈ L(X ), v(A) := µ(A⊕ ). Then v is a 2-alternating capacity.
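As a quick numerical sanity check of the 2-alternating property (this discretized experiment is our own illustration and is not part of the paper's argument), one can place a probability vector µ on a finite grid, model the ε-Minkowski expansion as a dilation by a few grid cells, and verify v(A ∪ B) + v(A ∩ B) ≤ v(A) + v(B) on randomly drawn sets; the grid size, measure, and radius below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, radius = 200, 3                          # grid size and expansion radius in cells (illustrative)
mu = rng.random(n); mu /= mu.sum()          # a probability vector on the grid

def dilate(mask, r):
    """Grid analogue of the Minkowski expansion: flag every cell within r cells of the set."""
    out = mask.copy()
    for s in range(1, r + 1):
        out[:-s] |= mask[s:]
        out[s:] |= mask[:-s]
    return out

def v(mask):                                # v(A) = mu(A^{+eps})
    return mu[dilate(mask, radius)].sum()

for _ in range(1000):
    A = rng.random(n) < 0.2
    B = rng.random(n) < 0.2
    assert v(A | B) + v(A & B) <= v(A) + v(B) + 1e-12
print("2-alternating inequality held on all sampled pairs")
```

The inequality holds in this toy model because dilation commutes with unions and only shrinks under intersections, which mirrors the mechanism behind Lemma 5.2.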
Now we relate the capacity defined in Lemma 5.2 to the W∞ metric. Since the ε-neighborhood of a µ ∈ P(X ) in the W∞ metric is a compact set of probability measures [46], the upper probability over this W∞ ε-ball is a capacity. The following lemma shows that it is a 2-alternating capacity, and identifies it with the capacity defined in Lemma 5.2. Lemma 5.3. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ). Then for any A ∈ L(X ), sup_{W∞(µ,µ′)≤ε} µ′(A) = µ(A^{⊕ε}). Moreover, the supremum in the previous equation is attained.
Lemma 5.3 plays a similar role to Lemma 5.1 in proving the following equivalence between adversarial robustness and W∞ robustness. Theorem 5.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) = RΓ (`0/1, A), and the supremum over p ′ 0 and p ′ 1 in RΓ (`0/1, A) is attained.
The proof follows by converting the expression for RΓ into one for R⊕ using Lemma 5.3. Unlike Theorem 5.1, Theorem 5.2 does not show the equivalence of RF (`0/1, A) with the other definitions under the relaxed assumption of A ∈ L(X ). This is because Lemma 5.3 does not provide a pushforward map φ such that µ∗ = φ]µ with µ∗ attaining the supremum over the W∞ ball.
6 Optimal Adversarial Risk via Generalized Strassen’s Theorem
In section 5, we analyzed adversarial risk for a specific decision region A ∈ B(X ). In this section, we analyze infimum of adversarial risk over all possible decision regions; i.e., the optimal adversarial risk. We show that optimal adversarial risk in binary classification with unequal priors is characterized by an unbalanced optimal transport cost between data-generating distributions. Our main technical lemma generalizes Strassen’s theorem to unbalanced optimal transport. We present this result in subsection 6.1 and present our characterization of optimal adversarial risk in subsection 6.2.
6.1 Unbalanced Optimal Transport & Generalized Strassen’s Theorem
Recall from Section 3 that the optimal transport cost D characterizes the optimal adversarial risk in binary classification for equal priors. The following result gives an alternative characterization of D . Proposition 6.1 (Strassen’s theorem). [Corollary 1.28 in [42]] Let µ, ν ∈ P(X ). Let ≥ 0. Then
sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = D_ε(µ, ν). (7)
Proposition 6.1 is a special case of Kantorovich-Rubinstein duality [42] applied to {0, 1}-valued cost functions. We now generalize this result to measures with unequal masses. We begin with some definitions that generalize the concepts we introduced in subsection 2.2.
Let µ, ν ∈ M(X ) be such that µ(X ) ≤ ν(X ). A coupling between µ and ν is a measure π ∈ M(X 2) such that for any A ∈ B(X ), π(A × X ) = µ(A) and π(X × A) ≤ ν(A). The set Π(µ, ν) is defined to be the set of all couplings between µ and ν. For a cost function c : X 2 → [0,∞), the optimal transport cost between µ and ν under c is defined as Tc(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X 2} c(x, x′) dπ(x, x′). Theorem 6.1 (Generalized Strassen’s theorem). Let µ, ν ∈ M(X ) be such that 0 < M = µ(X ) ≤ ν(X ). Let ε > 0. Define c_ε : X 2 → {0, 1} as c_ε(x, x′) = 1{(x, x′) ∈ X 2 : d(x, x′) > 2ε}. Then
sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = T_{c_ε}(µ, ν) = M inf_{ν′∈P(X ): ν′ ⪯ ν/M} D_ε(µ/M, ν′). (8)
Moreover, the infimum on the right hand side is attained. (Equivalently, there is a coupling π ∈ Π(µ, ν) that attains the unbalanced optimal transport cost T_{c_ε}(µ, ν).)
Our proof of Theorem 6.1 leverages strong duality in linear programming. We first establish (8) for discrete measures on a finite support. We then apply the discrete result on a sequence of measures supported on a countable dense subset of the Polish space X . Using the tightness of finite measures on X , we construct an optimal coupling that achieves the cost Tc (µ, ν) in (8). We then show that the constructed coupling satisfies (8). This proof strategy is adapted from the works of [12] and [32].
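The first equality in (8) can also be checked directly on tiny discrete instances. The snippet below is an illustrative sanity check of ours (not part of the proof): it solves the unbalanced transport linear program for two small measures with µ(X ) ≤ ν(X ) and compares the optimum with a brute-force maximization of µ(A) − ν(A^{2ε}) over subsets of the support of µ; the supports, masses, and ε are arbitrary choices.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

eps = 0.5                                           # illustrative budget
x = np.array([0.0, 1.0, 2.5, 4.0])                  # support of mu
y = np.array([0.2, 1.8, 3.9, 6.0])                  # support of nu
mu = np.array([0.2, 0.3, 0.1, 0.2])                 # total mass 0.8
nu = np.array([0.1, 0.3, 0.2, 0.4])                 # total mass 1.0 >= mu(X)

cost = (np.abs(x[:, None] - y[None, :]) > 2 * eps).astype(float)   # c_eps(x, x') = 1{d > 2 eps}

# Linear program over couplings pi >= 0 with row sums equal to mu and column sums at most nu.
m, n = len(x), len(y)
A_eq = np.zeros((m, m * n)); A_ub = np.zeros((n, m * n))
for i in range(m): A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n): A_ub[j, j::n] = 1.0
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=mu, A_ub=A_ub, b_ub=nu, bounds=(0, None))

# Brute force over subsets A of the support of mu (adding points outside supp(mu) never helps).
best = 0.0
for r in range(1, m + 1):
    for S in itertools.combinations(range(m), r):
        expanded = np.any(np.abs(y[None, :] - x[list(S), None]) <= 2 * eps, axis=0)
        best = max(best, mu[list(S)].sum() - nu[expanded].sum())
print(res.fun, best)   # both equal 0.2 for this instance, matching the first equality in (8)
```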
6.2 Optimal Adversarial Risk for Unequal Priors
Generalized Strassen’s theorem involves closed set expansions. The following lemma allows us to switch to Minkowski set expansions. Lemma 6.1. Let µ, ν ∈ M(X ) and let ε ≥ 0. Then sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = sup_{A∈B(X )} µ(((A^c)^{⊕ε})^c) − ν(A^{⊕ε}). Moreover, the supremum on the right hand side of the above equality can be replaced by a supremum over closed sets.
Using the above lemma and the generalized Strassen’s theorem, we show the following result on optimal adversarial risk for unequal priors, generalizing the result of [30, 2]. Theorem 6.2. Let p0, p1 ∈ P(X ) and let ≥ 0. Then,
inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A) = (1/(T + 1)) [1 − inf_{q∈P(X ): q ⪯ T p0} D_ε(q, p1)]. (9)
Moreover, the infimum on the left hand side can be replaced by an infimum over closed sets.
The proof follows by using Lemma 6.1 to convert the expression with Minkowski expansion to one with closed expansions, followed by an application of Theorem 6.1 to arrive at the final optimal transport-based expression. Theorem 6.2 extends the result of [31] in two ways: (1) the infimum is
taken over all sets for which R⊕ (`0/1, A) is well-defined, instead of restricting to closed sets, and (2) the priors on both labels can be unequal. We also note that for (X , d) = (Rd, ‖ · ‖), (9) holds with the infimum on the left hand side taken over all A ∈ L(X ).
7 Minimax Theorems and Nash Equilibria
In this section, we revisit the zero-sum game between the adversary and the algorithm introduced in section 3. Recall that for A ∈ B(X ) and p′0, p′1 ∈ P(X ), the payoff function is given by
r(A, p′0, p′1) = (T/(T + 1)) p′0(A) + (1/(T + 1)) p′1(A^c). (10)
The max-min inequality gives us
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) ≤ inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (11)
If the inequality in (11) is an equality, we say that the game has zero duality gap, and admits a value equal to either expression in (11). Then there is no advantage to a player making the first move. Our minimax theorems establish such an equality. If in addition to having an equality in (11), there exist p∗0, p ∗ 1 ∈ P(X ) that achieve the supremum on the left-hand side and A∗ ∈ B(X ) that achieves the infimum on the right-hand side, we say that ((p∗0, p ∗ 1), A ∗) is a pure Nash equilibrium of the game.
In Section 7.1, we prove the minimax theorem and the existence of a pure Nash equilibrium in Rd using the theory of 2-alternating capacities [19] and the relation to adversarial risk from Section 5.2. Section 7.2 extends these results to more general Polish spaces with a “midpoint property."
7.1 Minimax Theorem in Rd via 2-Alternating Capacities
The following theorem proves the minimax equality and the existence of a Nash equilibrium for the adversarial robustness game in Rd. Theorem 7.1 (Minimax theorem in Rd). Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Define r as in (10). Then,
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈L(X )} r(A, p′0, p′1) = inf_{A∈L(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (12)
Moreover, there exist p∗0, p∗1 ∈ P(X ) and A∗ ∈ L(X ) that achieve the supremum and infimum on the left and right hand sides of the above equation.
Crucial to the proof of Theorem 7.1 is Lemma 5.2, which shows that the set-valued maps A 7→ p0(A^{⊕ε}) and A^c 7→ p1((A^c)^{⊕ε}) are 2-alternating capacities. The same proof technique is not applicable in general Polish spaces because the map A 7→ µ(A^{⊕ε}) is not a capacity for a general µ ∈ P(X ). This is because A^{⊕ε} is not measurable for all A ∈ B(X ).
7.2 Minimax Theorem in Polish Spaces via Optimal Transport
We now extend the minimax theorem from Rd to general Polish spaces with the following property. Definition 5 (Midpoint property). A metric space (X , d) is said to have the midpoint property if for every x1, x2 ∈ X , there exists x ∈ X such that, d(x1, x) = d(x, x2) = d(x1, x2)/2.
Any normed vector space with distance defined as d(x, x′) = ‖x − x′‖ satisfies the midpoint property. An example of a metric space without this property is the discrete metric space where d(x, x′) = 1{x ≠ x′}. The midpoint property plays a crucial role in proving the following theorem, which shows that the D_ε transport cost between two distributions is the shortest total variation distance between their ε-neighborhoods in the W∞ metric. A similar result was also presented in [11]. Theorem 7.2 (D_ε as shortest DTV between W∞ balls). Let (X , d) have the midpoint property. Let µ, ν ∈ P(X ) and let ε ≥ 0. Then D_ε(µ, ν) = inf_{W∞(µ,µ′)≤ε, W∞(ν,ν′)≤ε} DTV(µ′, ν′). Moreover, the infimum over DTV in the above equation is attained.
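A worked special case may help: for point masses µ = δ_a and ν = δ_b, any distribution within W∞ distance ε of δ_a must be supported in [a − ε, a + ε] (and similarly for δ_b), so the two ε-balls either admit a common distribution or only contain pairs with disjoint supports. The toy computation below is our own illustration with arbitrary points and budget, and it matches both sides of Theorem 7.2 in this case.

```python
def d_eps_point_masses(a, b, eps):
    """D_eps between delta_a and delta_b: the unit cost is paid iff the mass must move more than 2*eps."""
    return float(abs(a - b) > 2 * eps)

def min_tv_between_winf_balls(a, b, eps):
    """Infimum of D_TV over the W_inf eps-balls around delta_a and delta_b.
    If [a-eps, a+eps] and [b-eps, b+eps] overlap, both masses can sit at a common midpoint (TV = 0);
    otherwise every admissible pair has disjoint supports (TV = 1)."""
    return 0.0 if abs(a - b) <= 2 * eps else 1.0

for a, b, eps in [(0.0, 0.5, 0.3), (0.0, 2.0, 0.3)]:
    print(d_eps_point_masses(a, b, eps), min_tv_between_winf_balls(a, b, eps))  # the pairs agree
```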
The following theorem uses Theorem 7.2 to prove the minimax equality and the existence of a Nash equilibrium for any Polish space with the midpoint property for the case of equal priors.
Theorem 7.3 (Minimax theorem for equal priors). Let (X , d) have the midpoint property. Let p0, p1 ∈ P(X ) and let ≥ 0. Define r as in (10) with T = 1. Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) = inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (13)
Moreover, there exist p∗0, p∗1 ∈ P(X ) that achieve the supremum on the left hand side.
Proof. For µ ∈ P(X ), let WB_ε(µ) denote the set of all µ′ ∈ P(X ) such that W∞(µ, µ′) ≤ ε. Then
inf_{A∈B(X )} sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} r(A, p′0, p′1) = inf_{A∈B(X )} R^Γ_ε(ℓ_{0/1}, A) (i)= inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A) (ii)= (1/2)[1 − D_ε(p0, p1)],
sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} inf_{A∈B(X )} r(A, p′0, p′1) (iii)= sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} (1/2)[1 − DTV(p′0, p′1)] = (1/2)[1 − inf_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} DTV(p′0, p′1)],
where (i) follows from Theorem 5.1, (ii) from Theorem 6.2, and (iii) again from Theorem 6.2 with ε = 0. The expressions on the right extremes of the above equations are equal by Theorem 7.2. The existence of p∗0, p∗1 ∈ P(X ) follows from Theorem 7.2.
To prove the minimax theorem for unequal priors, we need the following generalization of Theorem 7.2 to finite measures of unequal mass. Lemma 7.1. Let p0, p1 ∈ P(X ) and let ε ≥ 0. Then for T ≥ 1,
inf_{q∈P(X ): q ⪯ T p0} D_ε(q, p1) = inf_{q∈P(X ): q ⪯ T p0} inf_{W∞(q,q′)≤ε, W∞(p1,p′1)≤ε} DTV(q′, p′1) = inf_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{q′∈P(X ): q′ ⪯ T p′0} DTV(q′, p′1). (14)
Now, we prove the minimax equality for unequal priors. Theorem 7.4 (Minimax theorem for unequal priors). Let (X , d) have the midpoint property. Let p0, p1 ∈ P(X ) and let ε ≥ 0. For T > 0, define r as in (10). Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) = inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (15)
The proof uses: (1) the characterization of inf-sup payoff in terms of unbalanced optimal transport using Theorem 5.1; (2) Lemma 7.1; and (3) the minimax equality of Theorem 7.3 for equal priors. Remark 2. Unlike Theorem 7.1, Theorems 7.3 and 7.4 do not guarantee the existence of an optimal decision region A∗. While Theorem 7.3 guarantees the existence of worst-case pair of perturbed distributions p∗0, p ∗ 1, Theorem 7.4 does not do so. Nevertheless, an approximate pure Nash equilibrium exists in all the cases. This is in sharp contrast with the non-existence of Nash equilibrium proven in [29] (which considers a different notion of adversarial perturbations). Remark 3. A recent work [26] shows the existence of mixed Nash equilibrium for randomized classifiers parametrized by points in a Polish space (see also [29, 3]). Fan’s minimax theorem used in this result is inapplicable in our setting of non-parametric, decision region-based classifiers. Instead, we applied the theory of Choquet capacities (in Rd) and generalized Strassen’s duality theorem (in Polish spaces), which is novel to the best of our knowledge.
8 Discussion
We examined different notions of adversarial risk in a binary classification setting with 0-1 loss function and laid down the conditions under which these definitions are equivalent. By verifying the conditions in Sections 4 and 5, researchers may use different definitions interchangeably. Several definitions have also been proposed for adversarial risk under general loss functions [31, 26] using analogous constructions like transport maps, couplings and suprema over -neighborhoods. Extending our equivalence results to more general loss functions is left for future work.
Figure 1: We summarize the results of Section 6 and Section 7. For equal priors (T = 1), (A) and (B) denote two ways of obtaining the optimal adversarial risk R∗_⊕: 1) (A), which denotes the D_ε cost between the true label distributions p0 and p1, and 2) (B), which denotes the shortest total variation distance between ∞-Wasserstein balls of radius ε around p0 and p1. For unequal priors (T > 1), (C), (D) and (E) denote three equivalent ways of obtaining R∗_⊕. The black dotted balls denote ∞-Wasserstein balls and the blue dashed balls denote sets defined using stochastic domination. The order in which the two types of balls appear around p0 is reversed between (D) and (E).
We analyzed optimal adversarial risk for (non-parametric) decision region-based classifiers. Using a formulation of optimal transport between finite measures of unequal mass, we extended the optimal transport based characterization of adversarial risk of [30, 2] to unequal priors by generalizing Strassen’s theorem. This may find applications in the study of excess cost optimal transport [45, 44]. A recent work [39] obtains a different characterization of optimal adversarial risk using optimal transport on the product space X × Y where Y is the label space. Further, they show the evolution of the optimal classifier A∗ as grows, in terms of a mean curvature flow. This raises an interesting question on the evolution of the optimal adversarial distributions p∗0, p ∗ 1 ∈ P(X ) with .
We proved a minimax theorem for adversarial robustness game and the existence of a Nash equilibrium. We constructed the worst-case pair of distributions p∗0, p ∗ 1 ∈ P(X ) in terms of true data distributions and showed that their total variation distance gives the optimal adversarial risk. Identifying worst case distributions could lead to a new approach to developing robust algorithms.
We used Choquet capacities for results in Rd and measurable selections in Polish spaces. Specifically, we showed that the measure of -Minkowski expansion is a 2-alternating capacity. This connection could help generalize our results to total variation and Prokhorov distance based contaminations.
Limitations: We largely focused on the binary classification setup with 0-1 loss function. While it may be possible to extend our results on measurability and relation to ∞-Wasserstein distributional robustness to more general loss functions and a multi-class setup, it is unclear how our results on generalized Strassen’s theorem and Nash equilibria can be extended further. Our results on various equivalent formulations of optimal adversarial risk are specific to adversarial perturbations (or equivalently, ∞-Wasserstein distributional perturbations), and we did not investigate more general perturbation models. | 1. What is the focus of the paper regarding theoretical insights on adversarial risk?
2. What are the strengths of the proposed analysis, particularly in utilizing various mathematical tools?
3. What are the weaknesses of the paper regarding its organization and notations?
4. How can the authors improve the clarity and structure of the paper for better understanding?
5. What are the significant contributions of the paper, and how can they be illustrated with examples or simulations?
6. Are there any minor issues or missing references that need to be addressed? | Summary Of The Paper
Review | Summary Of The Paper
This paper provides a lot of theoretical insights to connect the different definitions of adversarial risk. It also discusses some other properties such as well-definedness and minimax theorem.
Review
Originality: this paper utilizes a lot of different mathematical tools and provides rigorous analysis. Such an analysis is complicated and is novel based on my knowledge.
Quality, clarity and significance: my biggest concern is that the paper piles up a lot of different definitions and mathematical symbols, which may prevent readers from related areas to understand it (e.g. people who work on empirical study of adversarial training). The authors are strongly encouraged to reorganize this paper and simplify some notations so that it has a more clear structure and is easier to understand. Two examples:
(1) Although this paper uses the definition of adversarial risk in [9], the definition in [9] itself is much easier to understand than the one in this paper. Please consider using simple definitions first (even informal ones), and try to provide their more formal definitions later.
(2) The four contributions listed in Section 1 need to have a clear connection with the following five sections in this paper. I would suggest first writing down the corresponding sections when listing the contributions, and reminding the readers about this when starting each section.
The authors also need to emphasize the significance of this paper, e.g., provide some concrete, easy-to-understand examples where some definitions of adversarial risk may fail while others work well, and provide some experiments (or even a simple simulation) to help with the illustration. It is hard to justify the significance of this paper from its current writing.
A minor issue: A missing related reference: Xu, Q., Bello, K., & Honorio, J. (2020). A Le Cam Type Bound for Adversarial Learning and Applications. arXiv preprint arXiv:2007.00289.
====================================================
After author feedback:
The authors provided useful materials for the readability based on my major concern. Raised my score from 5 to 6. |
NIPS | Title
The Many Faces of Adversarial Risk
Abstract
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data. Numerous definitions of adversarial risk—not all mathematically rigorous and differing subtly in the details—have appeared in the literature. In this paper, we revisit these definitions, make them rigorous, and critically examine their similarities and differences. Our technical tools derive from optimal transport, robust statistics, functional analysis, and game theory. Our contributions include the following: generalizing Strassen’s theorem to the unbalanced optimal transport setting with applications to adversarial classification with unequal priors; showing an equivalence between adversarial robustness and robust hypothesis testing with∞-Wasserstein uncertainty sets; proving the existence of a pure Nash equilibrium in the two-player game between the adversary and the algorithm; and characterizing adversarial risk by the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets. Our results generalize and deepen recently discovered connections between optimal transport and adversarial robustness and reveal new connections to Choquet capacities and game theory.
1 Introduction
Neural networks are known to be vulnerable to adversarial attacks, which are imperceptible perturbations to input data that maximize loss [38, 15, 5]. Developing algorithms resistant to such attacks has received considerable attention in recent years [8, 28, 24, 20], motivated by safety-critical applications such as autonomous driving [18, 27], medical imaging [17, 23, 22] and law [21, 6].
A classification algorithm with high accuracy (low risk) in the absence of an adversary may have poor accuracy (high risk) when an adversary is present. Thus, a modified notion known as adversarial risk is used to quantify the adversarial robustness of algorithms. Algorithms that minimize adversarial risk are deemed robust. Procedures for finding them have been effective in practice [24, 41, 28], spurring numerous theoretical investigations into adversarial risk and its minimizers.
There is no universally agreed upon definition of adversarial risk. Even the simplest setting of binary classification in Rd with an `2 adversary admits various definitions involving set expansions [9, 16], transport maps [29], Markov kernels [31], and couplings [26]. These works broadly interpret adversarial risk as a measure of robustness to small perturbations, but their definitions differ in subtle details such as the class of adversaries and algorithms considered, budget constraints placed on the adversary, assumptions on the loss function, and the geometry of decision boundaries.
Optimal adversarial risk is most commonly defined as the minimax risk under adversarial contamination [24, 33]. Other notable characterizations include an optimal transport cost between data generating distributions in [30, 2, 10, 11], the optimal value of a distributionally robust optimization problem [36, 35, 40], and the value of a two-player zero-sum game [26, 29, 3, 4].
The diversity of definitions for adversarial risk makes it challenging to compare approaches. Moreover, not all approaches are rigorous. For instance, the classes of adversarial strategies and classifier
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
algorithms are often unclear, and issues of measurability are ignored. Although this may be harmless for applied research, it has led to incorrect proofs and insufficient assumptions in some theoretical works; a mathematically rigorous foundation for adversarial risk is essential for future research.
In this paper, we examine various notions of adversarial risk for binary classification in a nonparametric setting, where the decision boundary (or decision region) of a classifier is an arbitrary subset of the input space. We present rigorous definitions of adversarial risk and identify conditions under which these definitions are equivalent. We consider the general setting of Polish spaces (complete, separable metric spaces), and present stronger results for the Euclidean space (Rd). Our contributions are as follows:
• We examine the definition of adversarial risk based on set expansions. For Polish spaces, we observe that adversarial risk is not Borel measurable, and hence, not well-defined when the decision region is an arbitrary set. We show that the problem can be resolved by considering a Polish space equipped with the universal completion of the Borel σ-algebra and restricting the decision regions to Borel sets. For the Euclidean space with the Lebesgue σ-algebra, we show that adversarial risk is well-defined for any Lebesgue measurable decision region. Our key lemma (Lemma 4.3) shows that the Lebesgue σ-algebra is preferred over the Borel σ-algebra because set expansions are Lebesgue measurable but not necessarily Borel measurable. These results are contained in Section 4.
• We show that the definition of adversarial risk using set expansions is identical to a notion of risk that appears in robust hypothesis testing with∞-Wasserstein uncertainty sets. We prove this result in Polish spaces using the theory of measurable selections [1, 43]. In Rd, we are able to use the powerful theory of Choquet capacities [7] (in particular, Huber and Strassen’s 2-alternating capacities [19]) to establish results of a similar nature. These results are contained in Section 5.
• We consider the binary classification setup with unequal priors and show (under suitable assumptions) that the optimal adversarial risk from the above definitions is characterized by an unbalanced optimal transport cost between data-generating distributions. For both Polish spaces and Rd, the main tool we use is Theorem 6.1 in which we generalize a classical result of Strassen on excess-cost optimal transport [37, 42] from probability measures to finite measures with possibly unequal mass. This generalizes results of [31, 2] on binary classification, which were only for equal priors. These results are contained in Section 6.
• We consider the setup of a zero-sum game between the adversary and the algorithm. We show that the value of this game (adversarial risk) is equal to the minimum Bayes error between a pair of distributions belonging to the∞-Wasserstein uncertainty sets centered around true data-generating distributions. We prove the existence of a pure Nash equilibrium in this game for Rd and for Polish spaces with a midpoint property. This extends/strengthens the results of [26, 29, 3] to non-parametric classifiers. These results are contained in Section 7.
The paper is organized as follows: In Section 2, we present preliminary definitions from optimal transport and metric space topology. In Section 3, we discuss various definitions of adversarial risk and present more related work. Sections 4, 5, 6 and 7 contain our main contributions summarized above. We conclude the paper in Section 8 and discuss future research directions.
We emphasize that rectifying measure theoretic issues with existing formulations of adversarial risk is one of our contributions, but not the main focus of our paper. We start our presentation with fixing measurability and well-definedness (in Section 4) because otherwise we will not be able to rigorously present our main results in the subsequent sections, namely: relation to robust hypothesis testing and Choquet capacities in Section 5, generalizing the results of [2, 30] in Section 6, and proving minimax theorems and existence of Nash equilibria, extending the results of [26, 3, 29], in Section 7.
Notation: Throughout the paper, we use X to denote a Polish space (a complete, separable metric space) with metric d and Borel σ-algebra B(X ). For x ∈ X and r ≥ 0, let Br(x) denote the closed ball of radius r centered at x. We use P(X ) andM(X ) to denote the space of probability measures and finite measures defined on the measure space (X ,B(X )), respectively. Let B(X ) denote the universal completion of B(X ). Let P(X ) andM(X ) denote the space of probability measures and finite measures defined on the complete measure space (X ,B(X )). For µ, ν ∈ M(X ), we say ν dominates µ if µ(A) ≤ ν(A) for all A ∈ B(X ) and write µ ν. When X is Rd, we use L(X ) to
denote the Lebesgue σ-algebra and λ to denote the d-dimensional Lebesgue measure. Note that L(X ) = B(X ) for X = Rd. For a positive integer n, we use [n] to denote the finite set {1, . . . , n}.
2 Preliminaries
2.1 Metric Space Topology
We introduce three different notions of set expansions. For ε ≥ 0 and A ∈ B(X ), the ε-Minkowski expansion of A is given by A^{⊕ε} := ∪_{a∈A} B_ε(a). The ε-closed expansion of A is defined as A^{ε} := {x ∈ X : d(x,A) ≤ ε}, where d(x,A) = inf_{a∈A} d(x, a). The ε-open expansion of A is defined as A^{ε)} := {x ∈ X : d(x,A) < ε}. We use the notation A^{−ε} to denote ((A^c)^{ε})^c. Similarly, A^{⊖ε} := ((A^c)^{⊕ε})^c. For example, consider the set A = (0, 1] in the space (X , d) = (R, | · |) and ε > 0. Then A^{⊕ε} = (−ε, 1 + ε], A^{ε} = [−ε, 1 + ε] and A^{ε)} = (−ε, 1 + ε). For any A ∈ B(X ), A^{ε} is closed and A^{ε)} is open. Hence, A^{ε}, A^{ε)} ∈ B(X ). Moreover, A^{ε)} ⊆ A^{⊕ε} ⊆ A^{ε}. However, A^{⊕ε} may not be in B(X ) (see Appendix for an example). In general, the Minkowski sum of two Borel sets need not be Borel [13], and that of two Lebesgue measurable sets need not be Lebesgue measurable [34].
2.2 Optimal Transport
Let µ, ν ∈ P(X ). A coupling between µ and ν is a joint probability measure π ∈ P(X 2) with marginals µ and ν. The set Π(µ, ν) ⊆ P(X 2) denotes the set of all couplings between µ and ν. The optimal transport cost between µ and ν under a cost function c : X × X → [0,∞) is defined as Tc(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X 2} c(x, x′) dπ(x, x′). For a positive integer p, the p-Wasserstein distance between µ and ν is defined as Wp(µ, ν) = (T_{d^p}(µ, ν))^{1/p}. The ∞-Wasserstein metric is defined as W∞(µ, ν) = lim_{p→∞} Wp(µ, ν). It can also be expressed in the following ways:
W∞(µ, ν) = inf_{π∈Π(µ,ν)} ess sup_{(x,x′)∼π} d(x, x′) = inf{δ > 0 : µ(A) ≤ ν(A^{δ}) ∀A ∈ B(X )}. (1)
Given a µ ∈ P(X ) and a measurable function f : X → X , the push-forward of µ by f is defined as a probability measure f]µ ∈ P(X ) given by f]µ(A) = µ(f−1(A)) for all A ∈ B(X ).
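To make the W∞ metric concrete, here is a small illustrative computation of ours (not from the paper): for two uniform empirical measures on the same number of points, an optimal coupling in the first expression of (1) can be taken to be a permutation matching, so W∞ reduces to a bottleneck assignment value that can be brute-forced for tiny supports. The point locations below are arbitrary.

```python
import itertools
import numpy as np

x = np.array([0.0, 1.0, 3.0])          # support of mu, uniform weights 1/3
y = np.array([0.4, 2.2, 3.1])          # support of nu, uniform weights 1/3

# W_inf(mu, nu) = min over permutations of the largest matched distance.
w_inf = min(max(abs(x[i] - y[sigma[i]]) for i in range(len(x)))
            for sigma in itertools.permutations(range(len(y))))
print(w_inf)   # 1.2 here: match 0.0-0.4, 1.0-2.2 and 3.0-3.1
```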
3 Adversarial Risk: Definitions and Related Work
We consider a binary classification setting on feature space X . Let p0, p1 ∈ P(X ) be the data-generating distributions for labels 0 and 1, respectively. Let the prior probabilities for labels 0 and 1 be in the ratio T : 1, where we assume T ≥ 1 without loss of generality. For a space of classifiers parametrized by w ∈ W and a loss function ℓ : (X × Y) × W → [0,∞), the adversarial risk of a classifier w ∈ W under an adversarial budget of ε ≥ 0 is defined as [24, 33],
R^⊕_ε(ℓ, w) = E_{(x,y)} [ sup_{d(x,x′)≤ε} ℓ((x′, y), w) ]. (2)
If the loss function ℓ(·, w) is measurable, upper semi-continuous and bounded above for all w ∈ W , [26] show that R^⊕_ε(ℓ, w) is well-defined. But in general, it may not be so. A case of special interest is the 0-1 loss function with non-parametric classifiers of the form f_A(x) := 1{x ∈ A} where A ∈ B(X ). In this case, ℓ_{0/1}((x, y), A) = 1{x ∈ A, y = 0} + 1{x ∈ A^c, y = 1}. Hence,
R^⊕_ε(ℓ_{0/1}, A) = (T/(T + 1)) E_{p0} [ sup_{d(x,x′)≤ε} 1{x′ ∈ A} ] + (1/(T + 1)) E_{p1} [ sup_{d(x,x′)≤ε} 1{x′ ∈ A^c} ]
= (T/(T + 1)) p0(A^{⊕ε}) + (1/(T + 1)) p1((A^c)^{⊕ε}). (3)
A problem with the formulation in equation 3 is the ambiguity over the measurability of the sets A⊕ and (Ac)⊕ . Even when A ∈ B(X ), it is not guaranteed that A⊕ , (Ac)⊕ ∈ B(X ) (see Appendix for an example). Hence, R⊕ (`0/1, A) is not well-defined for all A ∈ B(X ). It is shown in [31] that R⊕ (`0/1, A) is well-defined when A is either closed or open, but its validity beyond that is unknown.
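For a concrete instance where (3) is well-defined, take a closed, one-dimensional threshold region A = [t, ∞), for which A^{⊕ε} = [t − ε, ∞) and (A^c)^{⊕ε} = (−∞, t + ε). The short sketch below evaluates (3) in closed form under the assumption of Gaussian class-conditionals; the means, threshold, prior ratio, and budget are illustrative choices of ours, not values from the paper.

```python
from scipy.stats import norm

T, eps, t = 2.0, 0.3, 1.0                  # prior ratio, budget and threshold (illustrative)
p0, p1 = norm(0.0, 1.0), norm(2.0, 1.0)    # assumed class-conditionals for labels 0 and 1

# Equation (3) with A = [t, inf): A^{+eps} = [t - eps, inf) and (A^c)^{+eps} = (-inf, t + eps).
risk_adv = (T / (T + 1)) * p0.sf(t - eps) + (1 / (T + 1)) * p1.cdf(t + eps)
risk_std = (T / (T + 1)) * p0.sf(t) + (1 / (T + 1)) * p1.cdf(t)
print(risk_std, risk_adv)                  # the adversarial risk dominates the standard risk
```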
A simple fix to this measurability problem is to use closed set expansion instead of the Minkowski set expansion, as done in [25]. This leads to the following formulation.
R_ε(ℓ_{0/1}, A) = (T/(T + 1)) p0(A^{ε}) + (1/(T + 1)) p1((A^c)^{ε}). (4)
The above definition is well-defined for any A ∈ B(X ) because A and (Ac) are both closed and hence, measurable. However, under the above definition, a point x ∈ A may be perturbed to x′ ∈ A such that d(x, x′) > . For example, when A = (0, 1), we have A = [− , ] and an adversary may transport x = δ > 0 to x′ = − , violating the budget constraint at x. Remark 1. The formulations in equations (2), (3) and (4) can give a strictly positive adversarial risk even for a “perfect” (i.e. Bayes optimal) classifier. This is consistent with the literature on adversarial examples where even a perfect classifier is forced to make errors in the presence of evasion attacks. These formulations of adversarial risk correspond to “constant-in-the-ball” risk of [16] and “corrupted-instance” risk in [9, 25]. Here, an adversarial risk of zero is only possible if the supports of p0 and p1 are non-overlapping and separated by at least 2 . This is not the case with other formulations of adversarial risk such as “exact-in-the-ball” risk [16], “prediction-change risk and “error-region” risk [9, 25]. We focus on the “corrupted-instance” family of risks in this work.
Another approach to defining adversarial risk is by explicitly defining the class of adversaries of budget ε as measurable transport maps f : X → X that push-forward the true data distribution such that no point is transported by more than a distance of ε; i.e., d(x, f(x)) ≤ ε. The transport map-based adversarial risk [29] is formally defined as follows:
R^F_ε(ℓ_{0/1}, A) = sup_{f0,f1: X→X , ∀x∈X : d(x,fi(x))≤ε} (T/(T + 1)) f0]p0(A) + (1/(T + 1)) f1]p1(A^c). (5)
Yet another definition uses the robust hypothesis testing framework with W∞ uncertainty sets. In this approach, an adversary perturbs the true distribution p_i to a corrupted distribution p′_i such that W∞(p_i, p′_i) ≤ ε. From (1), this is equivalent to the existence of a coupling π ∈ Π(p_i, p′_i) such that ess sup_{(x,x′)∼π} d(x, x′) ≤ ε. The adversarial risk with such an adversary is given by
R^Γ_ε(ℓ_{0/1}, A) = sup_{W∞(p1,p′1)≤ε, W∞(p0,p′0)≤ε} (T/(T + 1)) p′0(A) + (1/(T + 1)) p′1(A^c). (6)
Clearly, RF (`0/1, A) ≤ RΓ (`0/1, A), but conditions for equality were not studied in prior work. Moreover, their relation to set expansion-based definitions in (3) and (4) was also unknown.
Next we discuss some characterizations of optimal adversarial risk, defined as R∗_⊕ := inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A). In [30, 2], it is shown that R∗_⊕ = (1/2)[1 − D_ε(p0, p1)] for equal priors (T = 1), where D_ε is an optimal transport cost defined as follows. Definition 1 (D_ε cost). Let µ, ν ∈ P(X ). Let ε ≥ 0. Let c_ε : X 2 → {0, 1} be such that c_ε(x, x′) = 1{(x, x′) ∈ X × X : d(x, x′) > 2ε}. Then for µ, ν ∈ P(X ) and ε ≥ 0, D_ε(µ, ν) = T_{c_ε}(µ, ν).
For ε = 0, D_ε reduces to the total variation distance. While D_0 is a metric on P(X ), D_ε (for ε > 0) is neither a metric nor a pseudometric [31].
Other formulations of optimal adversarial risk are inspired from game theory [29, 26, 3]. Consider a game between two players: (1) the adversary, whose action space is pairs of distributions p′0, p′1 ∈ P(X ), and (2) the algorithm, whose action space is the space of decision regions of the form A ∈ B(X ). For T > 0, define r : B(X ) × P(X ) × P(X ) → [0, 1] as r(A, µ, ν) = (T/(T + 1)) µ(A) + (1/(T + 1)) ν(A^c). The payoff when the algorithm plays first is given by inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1), and this quantity is interpreted as the optimal adversarial risk in this setup.
4 Well-Definedness of Adversarial Risk
As stated in Section 3, R⊕ (`0/1, A) may not be well-defined for some decision regions A ∈ B(X ) because of the non-measurability of the sets A⊕ and (Ac)⊕ . Specifically, we have the following lemma.
Lemma 4.1. For any ε > 0, there exists A ∈ B(X ) such that A^{⊕ε} ∉ B(X ).
In this section, we lay down the conditions under which the ambiguity can be resolved. We begin by presenting a lemma that shows that A^{⊕ε} is an analytic set (i.e. a continuous image of a Borel set) whenever A is Borel. It is known that analytic sets are universally measurable, i.e. they belong to the universal completion of the Borel σ-algebra B(X ), and are measurable with respect to any finite measure defined on the corresponding complete measure space. Lemma 4.2. Let A ∈ B(X ). Then A^{⊕ε} is an analytic set. Consequently, A^{⊕ε} belongs to the universal completion of B(X ).
By virtue of the previous lemma, we have the following. Theorem 4.1. Let p0, p1 ∈ P(X ). Then for any A ∈ B(X ), R⊕ (`0/1, A) is well-defined.
For the special case of X = Rd, we can further strengthen Theorem 4.1 to include all Lebesgue measurable sets L(X ) instead of just Borel sets B(X ). For this, we use the concept of porous sets. Definition 2 (Porous set). A set E ⊆ X is said to be porous if there exists α ∈ (0, 1) and r0 > 0 such that for every r ∈ (0, r0] and every x ∈ X , there is an x′ ∈ X such that Bαr(x′) ⊆ Br(x)\E.
Porous sets are a subclass of nowhere dense sets. Importantly, λ(E) = 0 for any porous set E ⊆ Rd [47]. By the following lemma, the set difference between the closed and open set expansions is porous. Lemma 4.3. Let (X , d) = (Rd, ‖ · ‖) and A ∈ L(X ). Then E = A^{ε} \ A^{ε)} is porous.
Lemma 4.3 plays a crucial role in proving that A⊕ ∈ L(X ) whenever A ∈ L(X ). We recall that A⊕ is the Minkowski sum of A with the closed -ball. In general, the Minkowski sum of two Lebesgue measurable sets is not always Lebesgue measurable [34, 14]. So the fact that one of them is a closed ball in case of A⊕ is important. In the following theorem, we use Lemma 4.3 to prove the measurability of A⊕ and in turn prove that R⊕ (`0/1, A) is well-defined for any A ∈ L(X ). Theorem 4.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) is well-defined. If, in addition, p0 and p1 are absolutely continuous with respect to the Lebesgue measure, then R⊕ (`0/1, A) = R (`0/1, A).
5 Equivalence with∞-Wasserstein Robustness
In this section, we show the conditions under which R⊕ (`0/1, A) is equivalent to other notions of adversarial risk based on transport maps and W∞ robustness.
5.1 W∞ Robustness in Polish Spaces via Measurable Selections
We begin by presenting a lemma that links the measure of the ε-Minkowski set expansion to the worst-case measure over a W∞ probability ball of radius ε. Lemma 5.1. Let µ ∈ P(X ) and A ∈ B(X ). Then sup_{W∞(µ,µ′)≤ε} µ′(A) = µ(A^{⊕ε}). Moreover, the supremum in the previous equation is achieved by a µ∗ ∈ P(X ) that is induced from µ via a measurable transport map φ : X → X (i.e. µ∗ = φ]µ) satisfying d(x, φ(x)) ≤ ε for all x ∈ X .
A crucial step in the proof of Lemma 5.1 is finding a measurable transport map φ such that φ−1(A) = A⊕ and d(x, φ(x)) ≤ for all x ∈ X . In the following theorem, we use Lemma 5.1 to establish the equivalence between three different notions of adversarial risk introduced in section 3. Theorem 5.1. Let p0, p1 ∈ P(X ) and A ∈ B(X ). Then R⊕ (`0/1, A) = RF (`0/1, A) = RΓ (`0/1, A). In addition, the supremum over f0 and f1 in RF (`0/1, A) is attained. Similarly, the supremum over p′0 and p ′ 1 in RΓ (`0/1, A) is attained.
5.2 W∞ Robustness in Rd via 2-Alternating Capacities
In this subsection, we establish a connection between adversarial risk and Choquet capacities [7] in Rd. This connection allows us to extend Theorem 5.1 from Borel sets to the broader class of Lebesgue measurable sets. We will again use this connection for proving minimax theorems and existence of Nash equilibria in Section 7.1. We begin with the following definitions. Definition 3 (Capacity). A set function v : B(X )→ [0, 1] is a capacity if it satisfies the following conditions: (1) v(∅) = 0 and v(X ) = 1; (2) For A,B ∈ B(X ), A ⊆ B =⇒ v(A) ≤ v(B); (3) An ↑ A =⇒ v(An) ↑ v(A); and (4) Fn ↓ F , Fn closed =⇒ v(Fn) ↓ v(F ). Definition 4 (2-Alternating Capacity). A capacity v defined on the measure space (X ,B(X )) is called 2-alternating if v(A ∪B) + v(A ∩B) ≤ v(A) + v(B) for all A,B ∈ B(X ).
For any compact set of probability measures Ξ ⊆ P(X ), the upper probability v(A) = supµ∈Ξ µ(A) is a capacity [19]. The upper probability of -neighborhoods of a µ ∈ P(X ) defined using either the total variation metric or the Levy-Prokhorov metric can be shown to be a 2-alternating capacity [19]. The following lemma shows that A 7→ µ(A⊕ ) is a 2-alternating capacity under some conditions. Lemma 5.2. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ) and let ≥ 0. Define a set function v on X such that for any A ∈ L(X ), v(A) := µ(A⊕ ). Then v is a 2-alternating capacity.
Now we relate the capacity defined in Lemma 5.2 to the W∞ metric. Since the ε-neighborhood of a µ ∈ P(X ) in the W∞ metric is a compact set of probability measures [46], the upper probability over this W∞ ε-ball is a capacity. The following lemma shows that it is a 2-alternating capacity, and identifies it with the capacity defined in Lemma 5.2. Lemma 5.3. Let (X , d) = (Rd, ‖ · ‖). Let µ ∈ P(X ). Then for any A ∈ L(X ), sup_{W∞(µ,µ′)≤ε} µ′(A) = µ(A^{⊕ε}). Moreover, the supremum in the previous equation is attained.
Lemma 5.3 plays a similar role to Lemma 5.1 in proving the following equivalence between adversarial robustness and W∞ robustness. Theorem 5.2. Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Then for any A ∈ L(X ), R⊕ (`0/1, A) = RΓ (`0/1, A), and the supremum over p ′ 0 and p ′ 1 in RΓ (`0/1, A) is attained.
The proof follows by converting the expression for RΓ into one for R⊕ using Lemma 5.3. Unlike Theorem 5.1, Theorem 5.2 does not show the equivalence of RF (`0/1, A) with the other definitions under the relaxed assumption of A ∈ L(X ). This is because Lemma 5.3 does not provide a pushforward map φ such that µ∗ = φ]µ with µ∗ attaining the supremum over the W∞ ball.
6 Optimal Adversarial Risk via Generalized Strassen’s Theorem
In section 5, we analyzed adversarial risk for a specific decision region A ∈ B(X ). In this section, we analyze infimum of adversarial risk over all possible decision regions; i.e., the optimal adversarial risk. We show that optimal adversarial risk in binary classification with unequal priors is characterized by an unbalanced optimal transport cost between data-generating distributions. Our main technical lemma generalizes Strassen’s theorem to unbalanced optimal transport. We present this result in subsection 6.1 and present our characterization of optimal adversarial risk in subsection 6.2.
6.1 Unbalanced Optimal Transport & Generalized Strassen’s Theorem
Recall from Section 3 that the optimal transport cost D characterizes the optimal adversarial risk in binary classification for equal priors. The following result gives an alternative characterization of D . Proposition 6.1 (Strassen’s theorem). [Corollary 1.28 in [42]] Let µ, ν ∈ P(X ). Let ≥ 0. Then
sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = D_ε(µ, ν). (7)
Proposition 6.1 is a special case of Kantorovich-Rubinstein duality [42] applied to {0, 1}-valued cost functions. We now generalize this result to measures with unequal masses. We begin with some definitions that generalize the concepts we introduced in subsection 2.2.
Let µ, ν ∈ M(X ) be such that µ(X ) ≤ ν(X ). A coupling between µ and ν is a measure π ∈ M(X 2) such that for any A ∈ B(X ), π(A × X ) = µ(A) and π(X × A) ≤ ν(A). The set Π(µ, ν) is defined to be the set of all couplings between µ and ν. For a cost function c : X 2 → [0,∞), the optimal transport cost between µ and ν under c is defined as Tc(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{X 2} c(x, x′) dπ(x, x′). Theorem 6.1 (Generalized Strassen’s theorem). Let µ, ν ∈ M(X ) be such that 0 < M = µ(X ) ≤ ν(X ). Let ε > 0. Define c_ε : X 2 → {0, 1} as c_ε(x, x′) = 1{(x, x′) ∈ X 2 : d(x, x′) > 2ε}. Then
sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = T_{c_ε}(µ, ν) = M inf_{ν′∈P(X ): ν′ ⪯ ν/M} D_ε(µ/M, ν′). (8)
Moreover, the infimum on the right hand side is attained. (Equivalently, there is a coupling π ∈ Π(µ, ν) that attains the unbalanced optimal transport cost T_{c_ε}(µ, ν).)
Our proof of Theorem 6.1 leverages strong duality in linear programming. We first establish (8) for discrete measures on a finite support. We then apply the discrete result on a sequence of measures supported on a countable dense subset of the Polish space X . Using the tightness of finite measures on X , we construct an optimal coupling that achieves the cost Tc (µ, ν) in (8). We then show that the constructed coupling satisfies (8). This proof strategy is adapted from the works of [12] and [32].
6.2 Optimal Adversarial Risk for Unequal Priors
Generalized Strassen’s theorem involves closed set expansions. The following lemma allows us to switch to Minkowski set expansions. Lemma 6.1. Let µ, ν ∈ M(X ) and let ε ≥ 0. Then sup_{A∈B(X )} µ(A) − ν(A^{2ε}) = sup_{A∈B(X )} µ(((A^c)^{⊕ε})^c) − ν(A^{⊕ε}). Moreover, the supremum on the right hand side of the above equality can be replaced by a supremum over closed sets.
Using the above lemma and the generalized Strassen’s theorem, we show the following result on optimal adversarial risk for unequal priors, generalizing the result of [30, 2]. Theorem 6.2. Let p0, p1 ∈ P(X ) and let ≥ 0. Then,
inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A) = (1/(T + 1)) [1 − inf_{q∈P(X ): q ⪯ T p0} D_ε(q, p1)]. (9)
Moreover, the infimum on the left hand side can be replaced by an infimum over closed sets.
The proof follows by using Lemma 6.1 to convert the expression with Minkowski expansion to one with closed expansions, followed by an application of Theorem 6.1 to arrive at the final optimal transport-based expression. Theorem 6.2 extends the result of [31] in two ways: (1) the infimum is
taken over all sets for which R⊕ (`0/1, A) is well-defined, instead of restricting to closed sets, and (2) the priors on both labels can be unequal. We also note that for (X , d) = (Rd, ‖ · ‖), (9) holds with the infimum on the left hand side taken over all A ∈ L(X ).
7 Minimax Theorems and Nash Equilibria
In this section, we revisit the zero-sum game between the adversary and the algorithm introduced in section 3. Recall that for A ∈ B(X ) and p′0, p′1 ∈ P(X ), the payoff function is given by
r(A, p′0, p′1) = (T/(T + 1)) p′0(A) + (1/(T + 1)) p′1(A^c). (10)
The max-min inequality gives us
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) ≤ inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (11)
If the inequality in (11) is an equality, we say that the game has zero duality gap, and admits a value equal to either expression in (11). Then there is no advantage to a player making the first move. Our minimax theorems establish such an equality. If in addition to having an equality in (11), there exist p∗0, p ∗ 1 ∈ P(X ) that achieve the supremum on the left-hand side and A∗ ∈ B(X ) that achieves the infimum on the right-hand side, we say that ((p∗0, p ∗ 1), A ∗) is a pure Nash equilibrium of the game.
In Section 7.1, we prove the minimax theorem and the existence of a pure Nash equilibrium in Rd using the theory of 2-alternating capacities [19] and the relation to adversarial risk from Section 5.2. Section 7.2 extends these results to more general Polish spaces with a “midpoint property."
7.1 Minimax Theorem in Rd via 2-Alternating Capacities
The following theorem proves the minimax equality and the existence of a Nash equilibrium for the adversarial robustness game in Rd. Theorem 7.1 (Minimax theorem in Rd). Let (X , d) = (Rd, ‖ · ‖). Let p0, p1 ∈ P(X ) and let ≥ 0. Define r as in (10). Then,
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈L(X )} r(A, p′0, p′1) = inf_{A∈L(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (12)
Moreover, there exist p∗0, p∗1 ∈ P(X ) and A∗ ∈ L(X ) that achieve the supremum and infimum on the left and right hand sides of the above equation.
Crucial to the proof of Theorem 7.1 is Lemma 5.2, which shows that the set-valued maps A 7→ p0(A^{⊕ε}) and A^c 7→ p1((A^c)^{⊕ε}) are 2-alternating capacities. The same proof technique is not applicable in general Polish spaces because the map A 7→ µ(A^{⊕ε}) is not a capacity for a general µ ∈ P(X ). This is because A^{⊕ε} is not measurable for all A ∈ B(X ).
7.2 Minimax Theorem in Polish Spaces via Optimal Transport
We now extend the minimax theorem from Rd to general Polish spaces with the following property. Definition 5 (Midpoint property). A metric space (X , d) is said to have the midpoint property if for every x1, x2 ∈ X , there exists x ∈ X such that, d(x1, x) = d(x, x2) = d(x1, x2)/2.
Any normed vector space with distance defined as d(x, x′) = ‖x − x′‖ satisfies the midpoint property. An example of a metric space without this property is the discrete metric space where d(x, x′) = 1{x ≠ x′}. The midpoint property plays a crucial role in proving the following theorem, which shows that the D_ε transport cost between two distributions is the shortest total variation distance between their ε-neighborhoods in the W∞ metric. A similar result was also presented in [11]. Theorem 7.2 (D_ε as shortest DTV between W∞ balls). Let (X , d) have the midpoint property. Let µ, ν ∈ P(X ) and let ε ≥ 0. Then D_ε(µ, ν) = inf_{W∞(µ,µ′)≤ε, W∞(ν,ν′)≤ε} DTV(µ′, ν′). Moreover, the infimum over DTV in the above equation is attained.
The following theorem uses Theorem 7.2 to prove the minimax equality and the existence of a Nash equilibrium for any Polish space with the midpoint property for the case of equal priors.
Theorem 7.3 (Minimax theorem for equal priors). Let (X , d) have the midpoint property. Let p0, p1 ∈ P(X ) and let ≥ 0. Define r as in (10) with T = 1. Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) = inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (13)
Moreover, there exist p∗0, p∗1 ∈ P(X ) that achieve the supremum on the left hand side.
Proof. For µ ∈ P(X ), let WB_ε(µ) denote the set of all µ′ ∈ P(X ) such that W∞(µ, µ′) ≤ ε. Then
inf_{A∈B(X )} sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} r(A, p′0, p′1) = inf_{A∈B(X )} R^Γ_ε(ℓ_{0/1}, A) (i)= inf_{A∈B(X )} R^⊕_ε(ℓ_{0/1}, A) (ii)= (1/2)[1 − D_ε(p0, p1)],
sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} inf_{A∈B(X )} r(A, p′0, p′1) (iii)= sup_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} (1/2)[1 − DTV(p′0, p′1)] = (1/2)[1 − inf_{p′0∈WB_ε(p0), p′1∈WB_ε(p1)} DTV(p′0, p′1)],
where (i) follows from Theorem 5.1, (ii) from Theorem 6.2, and (iii) again from Theorem 6.2 with ε = 0. The expressions on the right extremes of the above equations are equal by Theorem 7.2. The existence of p∗0, p∗1 ∈ P(X ) follows from Theorem 7.2.
To prove the minimax theorem for unequal priors, we need the following generalization of Theorem 7.2 to finite measures of unequal mass. Lemma 7.1. Let p0, p1 ∈ P(X ) and let ε ≥ 0. Then for T ≥ 1,
inf_{q∈P(X ): q ⪯ T p0} D_ε(q, p1) = inf_{q∈P(X ): q ⪯ T p0} inf_{W∞(q,q′)≤ε, W∞(p1,p′1)≤ε} DTV(q′, p′1) = inf_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{q′∈P(X ): q′ ⪯ T p′0} DTV(q′, p′1). (14)
Now, we prove the minimax equality for unequal priors. Theorem 7.4 (Minimax theorem for unequal priors). Let (X , d) have the midpoint property. Let p0, p1 ∈ P(X ) and let ε ≥ 0. For T > 0, define r as in (10). Then
sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} inf_{A∈B(X )} r(A, p′0, p′1) = inf_{A∈B(X )} sup_{W∞(p0,p′0)≤ε, W∞(p1,p′1)≤ε} r(A, p′0, p′1). (15)
The proof uses: (1) the characterization of inf-sup payoff in terms of unbalanced optimal transport using Theorem 5.1; (2) Lemma 7.1; and (3) the minimax equality of Theorem 7.3 for equal priors. Remark 2. Unlike Theorem 7.1, Theorems 7.3 and 7.4 do not guarantee the existence of an optimal decision region A∗. While Theorem 7.3 guarantees the existence of worst-case pair of perturbed distributions p∗0, p ∗ 1, Theorem 7.4 does not do so. Nevertheless, an approximate pure Nash equilibrium exists in all the cases. This is in sharp contrast with the non-existence of Nash equilibrium proven in [29] (which considers a different notion of adversarial perturbations). Remark 3. A recent work [26] shows the existence of mixed Nash equilibrium for randomized classifiers parametrized by points in a Polish space (see also [29, 3]). Fan’s minimax theorem used in this result is inapplicable in our setting of non-parametric, decision region-based classifiers. Instead, we applied the theory of Choquet capacities (in Rd) and generalized Strassen’s duality theorem (in Polish spaces), which is novel to the best of our knowledge.
8 Discussion
We examined different notions of adversarial risk in a binary classification setting with 0-1 loss function and laid down the conditions under which these definitions are equivalent. By verifying the conditions in Sections 4 and 5, researchers may use different definitions interchangeably. Several definitions have also been proposed for adversarial risk under general loss functions [31, 26] using analogous constructions like transport maps, couplings and suprema over -neighborhoods. Extending our equivalence results to more general loss functions is left for future work.
Figure 1: We summarize the results of Section 6 and Section 7. For equal priors (T = 1), (A) and (B) denote two ways of obtaining the optimal adversarial risk R∗_⊕: 1) (A), which denotes the D_ε cost between the true label distributions p0 and p1, and 2) (B), which denotes the shortest total variation distance between ∞-Wasserstein balls of radius ε around p0 and p1. For unequal priors (T > 1), (C), (D) and (E) denote three equivalent ways of obtaining R∗_⊕. The black dotted balls denote ∞-Wasserstein balls and the blue dashed balls denote sets defined using stochastic domination. The order in which the two types of balls appear around p0 is reversed between (D) and (E).
We analyzed optimal adversarial risk for (non-parametric) decision region-based classifiers. Using a formulation of optimal transport between finite measures of unequal mass, we extended the optimal transport based characterization of adversarial risk of [30, 2] to unequal priors by generalizing Strassen’s theorem. This may find applications in the study of excess cost optimal transport [45, 44]. A recent work [39] obtains a different characterization of optimal adversarial risk using optimal transport on the product space X × Y where Y is the label space. Further, they show the evolution of the optimal classifier A∗ as grows, in terms of a mean curvature flow. This raises an interesting question on the evolution of the optimal adversarial distributions p∗0, p ∗ 1 ∈ P(X ) with .
We proved a minimax theorem for adversarial robustness game and the existence of a Nash equilibrium. We constructed the worst-case pair of distributions p∗0, p ∗ 1 ∈ P(X ) in terms of true data distributions and showed that their total variation distance gives the optimal adversarial risk. Identifying worst case distributions could lead to a new approach to developing robust algorithms.
We used Choquet capacities for results in Rd and measurable selections in Polish spaces. Specifically, we showed that the measure of -Minkowski expansion is a 2-alternating capacity. This connection could help generalize our results to total variation and Prokhorov distance based contaminations.
Limitations: We largely focused on the binary classification setup with 0-1 loss function. While it may be possible to extend our results on measurability and relation to ∞-Wasserstein distributional robustness to more general loss functions and a multi-class setup, it is unclear how our results on generalized Strassen’s theorem and Nash equilibria can be extended further. Our results on various equivalent formulations of optimal adversarial risk are specific to adversarial perturbations (or equivalently, ∞-Wasserstein distributional perturbations), and we did not investigate more general perturbation models. | 1. What are the contributions and strengths of the paper regarding the unified approach to defining adversarial risk?
2. Are there any weaknesses or limitations in the paper's discussion of different definitions of adversarial risk?
3. How does the reviewer assess the paper's novelty and significance in connecting different definitions of adversarial risk?
4. Do the authors provide sufficient explanations and examples to help readers understand the variations of adversarial risk discussed in the paper?
5. Does the paper adequately address the relationship between the different definitions of adversarial risk and their applications in various settings? | Summary Of The Paper
Review | Summary Of The Paper
The paper discusses different definitions of adversarial risk that have been used in various papers throughout the years, where, by using tools from optimal transport theory, robust statistics, functional analysis, and game theory, these definitions are viewed under a different lens, and in several ways new connections (and separations) between some of these definitions are established.
Review
The paper first gathers and discusses different notions of adversarial risk and then takes a unified approach on looking at these definitions in different settings. Though I have to admit that I have not carefully looked into the proofs, I think that for the largest part the paper is ok and provides the reader with new insights connecting different definitions of adversarial risk. Having said that, there are some claims in the related literature that are a little bit confusing to the reader and indeed not all the definitions are discussed.
Section 3 is very important as the authors discuss 4 different definitions of adversarial risk. However, the way some definitions are presented is confusing for the reader.
For example, the authors claim that the formulation of adversarial risk studied in [9, 16] is a special case of the approach followed in [25, 34]. This is only partially true as [9] and [16] spend a significant amount explaining the differences between different definitions and in particular those that are used in [25] and [34]. So, while the papers [9] and [16] indeed discuss, among other things, a special case of the definitions used in [25] and [34], they nevertheless take a different route and explain that when misclassification is the goal for the generation of adversarial examples, then the definitions used in [25] and [34] may in turn compute completely wrong values. As an extreme case, for example, when PAC learning Boolean functions, it is often-times the case that we learn the target function completely. Then, in such situations [9] and [16] claim that indeed the learnt model is infinitely robust since it cannot be fooled by any adversarial example. However, both [25] and [34] can somehow claim that they can find adversarial examples in these situations, by making the model flip the prediction label. However, when the ground truth also changes, then there is no misclassification -- which is really the whole point behind adversarial examples. This is also true in equations (2) and (3) in this paper where we can see that the adversarial risk would be positive, even if the model was entirely correct against a ground function that does the labeling.
And to go one step further, equation (3) in the current paper still makes this assumption, and I do not believe it is even studied in [26] which is cited as the paper where this definition has its origin. [26] uses the same definition of "error-region adversarial risk" that is used in [9] (or what is called "exact in the ball adversarial risk" in [16]).
The above points need to be clarified so that everything is crystal clear to the reader. The current paper studies variations of adversarial risk that follow along the lines of "corrupted-inputs adversarial risk" [9] or "constant in the ball adversarial risk" [16]. Such versions of risk are indeed special cases of [25] and [34]. But in general, and that is the main point of [9] and [16], these definitions do not compute, adversarial examples -- again, assuming that misclassification is the goal of an adversarial example, and I believe we all agree that this is indeed the goal.
Hence, as far as I understand, the current paper indeed tries to formulate variations of the "corrupted-inputs / constant in the ball" adversarial risk. This has indeed its own merit, but the presentation in section 3 needs to be more careful and really explain things in a better way. The current paper does not touch upon the different definitions that are out there -- in fact, in some situations I believe that the presentation is entirely misleading.
Moving on, I must admit that I really appreciate all the effort and work that the authors have put in, and I do find the different connections that are established to be interesting. The authors try to be clear and make interesting comments on their results. Nevertheless, it is also a little unclear to me to what extent we can accept some of the consequences in general, as the assumption of maintaining the ground-truth label in some region around a sampled point does not necessarily hold.
Post-rebuttal:
Thank you for the response to all the reviews. I am looking forward to seeing the improvements that have been suggested on clarifications by myself as well as from the other reviewers. Assuming these clarifications will be seen in the final manuscript, I am increasing my score from 5 to 6. Thank you for a very interesting paper! |
NIPS | Title
Bandit Learning in Many-to-one Matching Markets with Uniqueness Conditions
Abstract
An emerging line of research is dedicated to the problem of one-to-one matching markets with bandits, where the preference of one side is unknown and thus we need to match while learning the preference through multiple rounds of interaction. However, in many real-world applications, such as online recruitment platforms for short-term workers, one side of the market can select more than one participant from the other side, which motivates the study of the many-to-one matching problem. Moreover, the existence of a unique stable matching is crucial to the competitive equilibrium of the market. In this paper, we first introduce a more general new α̃-condition to guarantee the uniqueness of stable matching in many-to-one matching problems, which generalizes some established uniqueness conditions such as SPC and Serial Dictatorship, and recovers the known α-condition if the problem is reduced to one-to-one matching. Under this new condition, we design an MO-UCB-D4 algorithm with an O(NK log(T)/∆²) regret bound, where T is the time horizon, N is the number of agents, K is the number of arms, and ∆ is the minimum reward gap. Extensive experiments show that our algorithm achieves uniformly good performance under different uniqueness conditions.
1 Introduction
The rise of platforms for the online matching market has led to an emergence of opportunities for18 companies to participate in personalized decision-making [14, 18]. Companies (like Thumbtack19 and Taskrabbit and Upwork platforms) use online platforms to address short-term needs or seasonal20 spikes in production demands, accommodate workers who are voluntarily looking for more flexible21 work arrangements or probation period before permanent employment. The supply and demand22 sides in two-sided markets make policies on the basis of their diversified needs, which is abstracted23 as a matching market with agent side and arm side, and each side has a preference profile over the24 opposite side. They choose from the other side according to preference and perform a matching. The25 stability of the matching result is a key property of the market [32, 1, 27].26
The preferences in the online labor market may be unknown to one side in advance, thus matching27 while learning the preferences is necessary. The multi-armed bandit (MAB) [36, 13, 4] is an important28 tool for N independent agents in matching market simultaneously selecting arms adaptively from29 received rewards at each round. The idea of applying MAB to one-to-one matching problems,30 introduced by [21], assumes that there is a central platform to make decisions for all agents. Following31 this, other works [22, 34, 7] consider a more general decentralized setting where there is no central32 platform to arrange matchings, and our work is also based on this setting.33
However, it is not enough to just study the one-to-one setting. Take online short-term worker34 employment as an example, it is an online platform design with an iterative matching, where35
employers have numerous similar short-term tasks or internships to be recruited. Workers can only36 choose one task according to the company’s needs at a time while one company can accept more37 than one employee. Each company makes a fixed ranking for candidates according to its own38 requirements but workers have no knowledge of companies’ preferences. The reward for workers39 is a comprehensive consideration of salary and job environment. Since tasks are short-term, each40 candidate can try many times in different companies to choose the most suitable job. We abstract41 companies as arms and workers as agents. Each arm has a capacity q which is the maximum number42 of agents this arm can accommodate. When an arm faces multiple choices, it accepts its most q43 preferred agents. Agents thus compete for arms and may receive zero reward if losing the conflict. It44 is worth mentioning that arms with capacity q in the many-to-one matching can not just be replaced45 by q independent individuals with the same preference since there would be implicit competition46 among different replicates of this arm, not equal treatment. In addition, when multiple agents select47 one arm at a time, there may be no collision, which will hinder the communication among different48 agents under the decentralized assumption. They cannot distinguish who is more preferred by this49 arm in one round as it can accept more than one agent while this can be done in one-to-one case.50 Communication here lets each agent learn more about the preferences of arms and other agents, so as51 to formulate better policies to reduce collisions and learn fast about their stable results.52
This work focuses on a many-to-one market under uniqueness conditions. Previous work [10, 15]53 emphasize the importance of constructing a unique stable matching for the equilibrium of matching54 problems and some existing uniqueness conditions are studied in many-to-one matching, such as55 Sequential Preference Condition (SPC) and Acyclicity [26, 2]. Our work is motivated by [7], but the56 unique one-to-one mapping between arms and agents in their study which gives a surrogate threshold57 for arm elimination does not work in the many-to-one setting. And the uniqueness conditions in58 many-to-one matching are not well-studied, which also brings a challenge to identify and leverage59 the relationship between the resulting stable matching and preferences of two sides in the design60 of bandit algorithms. We propose an α̃-condition that can guarantee a unique stable matching and61 recover α-condition [19] if reduced to the one-to-one setting. We establish the relationships between62 our new α̃-condition and existing uniqueness conditions in many-to-one setting.63
In this paper, we study the bandit algorithm for a decentralized many-to-one matching market with uniqueness conditions. Under our newly introduced α̃-condition, we design an MO-UCB-D4 algorithm with arm elimination whose regret can be upper bounded by O(NK log(T)/∆²), where N is the number of agents, K is the number of arms, and ∆ is the minimum reward gap. Finally, we conduct a series of experiments to simulate our algorithm under the various conditions of Serial Dictatorship, SPC, and the α̃-condition to study the stability and regret of the algorithm.
Related Work The study of matching markets has a long history in economics and operation70 research [8, 6, 32] with real applications like school enrollment, labor employment, hospital resource71 allocation, and so on [1, 23, 31, 17]. A salient feature of market matching is making decisions for72 competing players on both sides [36, 12]. MAB is an important tool to study matching problems under73 uncertainty to obtain a maximum reward, and upper confidence bound algorithm (UCB) [4] is a typical74 algorithm, which sets a confidence interval to represent uncertainty. Matching market with MAB is75 studied in both centralized and decentralized setting [21, 22]. Following these, Abishek Sankararaman76 et al. [34] propose a phased UCB algorithm under a uniqueness condition, Serial Dictatorship, to77 manage collisions. They solve the problem of the decentralized market without knowing arm-gaps78 or time horizon, and reduce the probability of linear regret through non-monotonic arm elimination.79 The introduction of the uniqueness condition plays an important role in the equilibrium of matching80 results [15, 7]. Under a stronger and robust condition, Uniqueness Consistency [19], Soumya Basu81 et.al [7] apply MAB to online matching and obtain robust results that the subset of stable matchings82 being separated from the system does not affect other stable matchings.83
We discuss many-to-one problems such as online short-term employment and MOOC [14, 24, 18] as84 the one-to-one setting has limitations in practice. Somouaoga Bonkoungo [9] runs a student-proposing85 deferred acceptance algorithm (DA) [12] to study decentralized college admission. Ahmet Altinok86 [3] considers dynamic matching in many-to-one that can be solved as if it is static many-to-one or87 dynamic one-to-one under certain assumptions. As the existence and uniqueness of competitive88 equilibrium and core are important to allocations, the unique stable results need to be considered [27].89 Similar to conditions for unique stable matching in one-to-one, some uniqueness conditions of stable90 results in the many-to-one setting also are studied [16, 28, 15, 2, 27].91
2 Setting
This paper considers a many-to-one matching market M = (K, J, P), where K = [K] and J = [N] are a finite arm set and a finite agent set, respectively, and each arm k has a capacity q_k ≥ 1. To guarantee that no agent will be unmatched, we focus on the market with N ≤ Σ_{i=1}^{K} q_i. P is the fixed preference order of agents and arms, which is ranked by the mean reward. We assume that arm preferences for agents are unknown and need to be learned. If agent j prefers arm k over k′, which also means that µ_{j,k} > µ_{j,k′}, we denote this by k ≻_j k′. The preference is strict in that µ_{j,k} ≠ µ_{j,k′} if k ≠ k′. Similarly, each arm k has a fixed and known preference ≻_k over all agents; specifically, j ≻_k j′ means that arm k prefers agent j over j′. Throughout, we focus on the market where all agent-arm pairs are mutually acceptable, that is, j ≻_k ∅ and k ≻_j ∅ for all k ∈ [K] and j ∈ [N]. Let the mapping m be the matching result: m_t(j) is the matched arm for agent j at time t, and γ_t(k) is the set of agents matched with arm k.¹ At each time, agent j selects an arm I_t(j), and we use M_t(j) to denote whether j is successfully matched with its selected arm: M_t(j) = 1 if agent j is matched with I_t(j), and M_t(j) = 0 otherwise. If multiple agents select arm k at the same time, only the top q_k agents can successfully match. The agent j matched with arm k observes the reward X_{j,m_t(j)}(t), where the random reward X_{j,k}(t) ∈ [0, 1] is independently drawn from a fixed distribution with mean µ_{j,k}, while the unmatched agents experience collisions and receive zero reward. Generally, the reward obtained by agent j is X_{j,I_t(j)}(t) M_t(j).
An agent j and an arm k form a blocking pair for a matching m if they are not matched but prefer each other over their assignments, i.e., k ≻_j m(j) and ∃ j′ ∈ γ(k) with j ≻_k j′. We say a matching satisfies individual rationality (IR) if k ≻_j ∅ and j ≻_k ∅ for all j ∈ [N] and k ∈ [K], that is, every worker prefers to find a job rather than do nothing, and every company also wants to recruit workers rather than not recruit anyone. Under the IR condition, a matching in the many-to-one setting is stable if there does not exist a blocking pair [33, 35].
This paper considers matching markets under the uniqueness condition. Thus the overall goal is to find the unique stable matching between the agent side and the arm side through iterations. Let m*(j) be the stable matched arm for agent j under the stable matching m*. The reward obtained by agent j is compared against the reward received by matching with m*(j) at each time. We aim to minimize the expected stable regret for agent j over time horizon T, which is defined as
R_j(T) = T · µ_{j, m*(j)} − E[ Σ_{t=1}^{T} M_t(j) X_{j, I_t(j)}(t) ].
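To make the interaction protocol above concrete, here is a minimal simulation sketch of a single round of the many-to-one market. It is an illustrative reconstruction under the assumptions of this section (Bernoulli rewards, complete strict rankings), not code from the paper, and all function and variable names are invented: each agent proposes to one arm, each arm keeps its q_k most preferred proposers, and only matched agents observe rewards.

```python
import numpy as np

def play_round(choices, arm_prefs, capacity, mu, rng):
    """Simulate one round of the decentralized many-to-one market (sketch).

    choices:   choices[j] = arm selected by agent j at this round
    arm_prefs: arm_prefs[k] = list of agents, most preferred first (known to arms)
    capacity:  capacity[k] = q_k, the number of agents arm k can accept
    mu:        mu[j, k] = mean reward of agent j when matched with arm k
    Returns (matched, rewards): matched[j] is True iff agent j won its arm;
    rewards[j] is the observed Bernoulli reward (0 on collision).
    """
    N = len(choices)
    matched = np.zeros(N, dtype=bool)
    rewards = np.zeros(N)
    for k in set(choices):
        proposers = [j for j in range(N) if choices[j] == k]
        # Arm k keeps its q_k most preferred proposers; the rest collide.
        ranked = sorted(proposers, key=lambda j: arm_prefs[k].index(j))
        for j in ranked[:capacity[k]]:
            matched[j] = True
            rewards[j] = rng.binomial(1, mu[j, k])   # illustrative Bernoulli reward
    return matched, rewards

def per_round_stable_regret(mu, stable_arm, matched, rewards, j):
    """Gap between agent j's stable reward mu[j, m*(j)] and the observed reward."""
    return mu[j, stable_arm[j]] - (rewards[j] if matched[j] else 0.0)
```

Summing this per-round gap over t = 1, …, T recovers the stable regret R_j(T) defined above.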
3 Algorithm
In this section, we introduce our MO-UCB-D4 Algorithm (Many-to-one UCB with Decentralized122 Dominated arms Deletion and Local Deletion Algorithm) (Algorithm 1) for the decentralized many-123 to-one market, where there is no platform to arrange actions for agents, which leads to conflicts124 among agents. The MO-UCB-D4 algorithm for each agent j first takes agent set J and arm set K as125 input and chooses a parameter θ ∈ (0, 1/K) (discussed in Section C). It sets multiple phases, and126 each phase i mainly includes regret minimization block (line 6 - 12) and communication block (line127 13 - 16) with duration 2i−1, i = 1, 2, · · · .128 For each agent j in phase i, the algorithm adds arm deletion to reduce potential conflicts, which129 mainly contains global deletion and local deletion. The former eliminates the arms most preferred130 by agents who rank higher than agent j and obtain active set Chj [i] (line 4), and the latter deletes131 the arms that still have many conflicts with agent j after global deletion (line 6). We set a collision132 counter Cj,k[i] to record the number of collisions for agent j pulling arm k.133
In the regret minimization block of phase i, we use L_j[i] = {k : C_{j,k}[i] ≥ ⌈θ2^i⌉} to represent the arms that collide more times than the threshold ⌈θ2^i⌉ when matching with agent j. Arms in L_j[i] are first locally deleted to reduce potential collisions for agent j (line 6). After that, agent j selects an optimal action I_t(j) from the remaining arms in Ch_j[i]\L_j[i] in phase i according to the UCB index, computed as µ̂_{j,k}(t−1) + √(2α log(t) / N_{j,k}(t−1)) (line 7), where N_{j,k}(t−1) is the number of times that agent j and arm k have been matched up to time t−1.
¹ The mapping m is not reversible as it is not injective; thus we do not use m_t^{−1}(k).
Algorithm 1 MO-UCB-D4 algorithm (for agent j)
Input: θ ∈ (0, 1/K), α > 1.
1: Set global dominated set G_j[0] = ∅
2: for phase i = 1, 2, ... do
3:   Reset the collision counters C_{j,k}[i] = 0, ∀k ∈ [K];
4:   Reset the active arm set Ch_j[i] = [K]\G_j[i−1];
5:   if t < 2^i + NK(i−1) then
6:     Local deletion L_j[i] = {k : C_{j,k}[i] ≥ ⌈θ2^i⌉};
7:     Play arm I_t(j) ∈ argmax_{k ∈ Ch_j[i]\L_j[i]} ( µ̂_{j,k}(t−1) + √(2α log(t) / N_{j,k}(t−1)) );
8:     if k = I_t(j) is successfully matched with agent j, i.e. m_t(j) = k, then
9:       Update estimate µ̂_{j,k}(t) and matching count N_{j,k}(t);
10:    else
11:      C_{j,k}[i] = C_{j,k}[i] + 1;
12:    end if
13:  else if t = 2^i + NK(i−1) then
14:    O_j[i] ← most matched arm in phase i;
15:    G_j[i] ← COMMUNICATION(i, O_j[i]);
16:  end if
17: end for
If the selected arm is successfully matched with agent j, then the algorithm updates the estimated reward µ̂_{j,k}(t) = (1/N_{j,k}(t)) Σ_{s=1}^{t} 1{I_s(j) = k and M_s(j) = 1} X_{j,k}(s) and the count N_{j,k}(t) (line 9). Otherwise, a collision happens (line 11) and j receives zero reward. The regret minimization block identifies the most played arm O_j[i] for agent j in each phase i, which is estimated as the best arm for j, thus producing an optimal policy that minimizes the expected regret.
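As a rough illustration of the regret minimization block just described, the sketch below shows how a single agent could combine local deletion with the UCB index over the surviving arms. It is a hedged reading of Algorithm 1 rather than the authors' implementation; the fallback when local deletion empties the candidate set, the default values of θ and α, and all identifiers are illustrative assumptions.

```python
import math

def select_arm(active, mu_hat, n_pulls, collisions, t, phase, theta=0.05, alpha=1.1):
    """One regret-minimization decision for a single agent (sketch).

    active:     arms that survived global deletion for this phase
    mu_hat:     mu_hat[k] = empirical mean reward of arm k for this agent
    n_pulls:    n_pulls[k] = number of successful matches with arm k so far
    collisions: collisions[k] = collisions with arm k in the current phase
    theta must lie in (0, 1/K) and alpha > 1, as in Algorithm 1.
    """
    # Local deletion: drop arms that collided at least ceil(theta * 2^phase) times.
    threshold = math.ceil(theta * 2 ** phase)
    candidates = [k for k in active if collisions[k] < threshold]
    if not candidates:                    # illustrative safeguard, not specified in the paper
        candidates = list(active)

    def ucb(k):
        if n_pulls[k] == 0:               # force exploration of arms never matched yet
            return float("inf")
        return mu_hat[k] + math.sqrt(2 * alpha * math.log(t) / n_pulls[k])

    return max(candidates, key=ucb)

def update(mu_hat, n_pulls, collisions, arm, matched, reward):
    """Post-round update, mirroring lines 8-12 of Algorithm 1."""
    if matched:
        n_pulls[arm] += 1
        mu_hat[arm] += (reward - mu_hat[arm]) / n_pulls[arm]   # running mean
    else:
        collisions[arm] += 1
```

In the full algorithm these reward statistics persist across phases, while the collision counters are reset at the start of every phase (line 3 of Algorithm 1).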
Algorithm 2 COMMUNICATION
Input: Phase number i, and most played arms O_j[i] for agent j, ∀j ∈ [N].
1: Set C = ∅;
2: for t = 1, 2, · · · , NK − 1 do
3:   if K(j−1) ≤ t ≤ Kj − 1 then
4:     Agent j plays arm I_t(j) = (t mod K) + 1;
5:     if a collision occurs then
6:       C = C ∪ {I_t(j)};
7:     end if
8:   else
9:     Play arm I_t(j) = O_j[i];
10:  end if
11: end for
12: RETURN C;
In the communication block (Algorithm 2), there are N sub-blocks, each of duration K. In the ℓ-th sub-block, only agent ℓ pulls arm 1, arm 2, · · · , arm K in round-robin while the other agents select their most preferred arms, estimated as the most played ones (line 4). This block aims to detect the globally dominated arms for agent j: G_j[i] ⊂ {O_{j′}[i] : j′ ≻_{O_{j′}[i]} j}. Under the stable matching m*, the globally dominated arm set for agent j is denoted as G*_j. After the communication block in phase i, each agent j updates its active arm set Ch_j[i+1] for phase i+1 by globally deleting the arm set G_j[i], and enters the next phase (line 4 in Algorithm 1).
Hence, the multi-phase setting guarantees that the active sets in different phases have no inclusion relationship, so that if an agent deletes an arm in a certain phase, this arm can still be selected in later rounds. This ensures that each agent will not permanently eliminate its stable matched arm, and when an agent mistakenly deletes an arm, this does not lead to linear regret.
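To illustrate the communication block, the following sketch replays Algorithm 2 from one agent's point of view: during its own sub-block the agent sweeps all K arms, and a collision on an arm signals that a more preferred agent is currently settled there, so the arm is marked as globally dominated. This is an illustrative reconstruction, not the authors' code; the environment callback try_match and the indexing convention are assumptions.

```python
def communication_block(j, N, K, most_played, try_match):
    """Sketch of Algorithm 2 for agent j (agents and arms 1-indexed).

    most_played: most_played[j'] = O_{j'}[i], the arm agent j' played most in phase i
    try_match:   try_match(j, arm) -> True iff agent j wins `arm` while every other
                 agent keeps playing its own most played arm (environment callback)
    Returns the set of arms detected as globally dominated for agent j.
    """
    dominated = set()
    for t in range(1, N * K):
        if K * (j - 1) <= t <= K * j - 1:      # agent j's own sub-block
            arm = (t % K) + 1                  # sweep arms 1..K in round-robin
            if not try_match(j, arm):          # collision: a more preferred agent holds this arm
                dominated.add(arm)
        else:
            try_match(j, most_played[j])       # otherwise keep playing the most played arm
    return dominated
```

The returned set plays the role of G_j[i], which is globally deleted at the start of the next phase.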
4 Results
4.1 Uniqueness Conditions
4.1.1 α̃-condition
Constructing a unique stable matching plays an important role in market equilibrium and fairness158 [10, 15]. With uniqueness, there would be no dispute about adopting stable matching preferred159 by which side, thus it is more fair. When the preferences of agents and arms are given by some160 utility functions instead of random preferences, like the payments for workers in the labor markets,161 the stable matching is usually unique. Thus the assumption of the unique stable matching is quite162 common in real applications. In this section, we propose a new uniqueness condition, α̃-condition.163 First, we introduce uniqueness consistency (Unqc) [19], which guarantees robustness and uniqueness164 of markets.165 Definition 1. A preference profile satisfies uniqueness consistency if and only if166
(i) there exists a unique stable matching m∗;167
(ii) for any subset of arms or agents, the restriction of the preference profile on this subset with their168 stable-matched pair has a unique stable matching.169
It guarantees that even if an arbitrary subset of agents are deleted out of the system with their170 respective stable matched arms, there still exists a unique stable matching among the remaining171 agents and arms. This condition allows any algorithm to identify at least one stable pair in a unique172 stable matching system and guides the system to a global unique stable matching in an iterative173 manner. To obtain consistent stable results in the many-to-one market, we propose a new α̃-condition,174 which is a sufficient and necessary condition for Unqc (proved in Appendix B).175
We considers a finite set of arms [K] = {1, 2, · · · ,K} and a finite set of agents [N ] = {1, 2, · · · , N}176 with preference profile P . Assume that [N ]r={A1, A2, · · · , AN} is a permutation of {1, 2, · · · , N}177 and [K]r={c1, c2, · · · , cK} is a permutation of {1, 2, · · · ,K}. Denote [N ], [K] as the left order and178 [N ]r, [K]r as the right order. The k-th arm in the right order set [K]r has the index ck in the left179 order set [K] and the j-th agent in the right order set [N ]r has the index Aj in the left order set [N ].180 Considering arm capacity, we denote γ∗(ck) (right order) as the stable matched agents set for arm ck.181 Definition 2. A many-to-one matching market satisfies the α̃-condition if,182
(i) The left order of agents and arms satisfies
∀j ∈ [N ],∀k > j, k ∈ [K], µj,m∗(j) > µj,k , where m∗(j) is agent j’s stable matched arm;183
(ii) The right order of agents and arms satisfies
∀k < k′ ≤ K, ck ∈ [K]r, Ak′ ⊂ [N ]r, γ∗(ck) ≻ck A∑k′−1 i=1 qci+1 ,
where the set γ∗(ck) is more preferred than A∑k′−1 i=1 qci+1 means that the least preferred agent in184
γ∗(ck) for ck is better than A∑k′−1 i=1 qci+1 for ck.185
Under our α̃-condition, the left order and the right order satisfy the following rule. The left order186 gives rankings according to agents’ preferences. The first agent in the left order set [N ] prefers arm 1187 in [K] most and has it as the stable matched arm. Similar properties for the agent 2 to q1 since arm 1188 has q1 capacity. Then the (q1 + 1)-th agent in the left order set [N ] has arm 2 in [K] as her stable189 matched arm and prefers arm 2 most except arm 1. The remaining agents follow similarly. Similarly,190 the right order gives rankings according to arms’ preferences. The first arm 1 in the right order set191 [K]r most prefers first qc1 agents in the right order set [N ]r and takes them as its stable matched192 agents. The remaining arms follow similarly.193
This condition is more general than existing uniqueness conditions like SPC [28] and can recover194 the known α-condition in one-to-one matching market [19]. The relationship between the existing195 uniqueness conditions and our proposed conditions will be analyzed in detail later in Section 4.1.2.196
The main idea from one-to-one to many-to-one analysis is to replace individuals with sets. In197 general, under α̃-condition, the left order satisfies that when arm 1 to arm k − 1 are removed, agents198
(∑k−1 i=1 qi + 1 ) to (∑k i=1 qi )
prefer k most, and the right order means that when A1 to agents199 A∑k−1 i=1 qi are removed, arm k prefers agents Ak = {A∑k−1 i=1 qci+1 , A∑k−1 i=1 qci+2 , · · · , A∑k i=1 qci },200 where Ak is the agent set that are most qk preferred by arm k among those who have not been201 matched by arm 1, 2, · · · , k − 1. Te next theorem give a summary.202
Theorem 1. If a market M = (K,J ,P) satisfies α̃-condition, then m∗( ∑j−1
i=1 qi + 1) =203 m∗( ∑j−1 i=1 qi + 2) = · · · = m∗( ∑j i=1 qi) = j (the left order), γ ∗(ck) = Ak and m∗(Aj) = cj (the204 right order) under stable matching.205
Under α̃-condition, the stable matched arm may not be the most preferred one for each agent j,206 j ∈ [N ], thus (i) we do not have m∗(j) to be dominated only by the agent 1 to agent j − 1, i.e. there207 may exist j′ > j, s.t. j′ ≻m∗(j) j; (ii) the left order may not be identical to the right order, we208 define a mapping lr to match the index of an agent in the left order with the index in the right order,209 i.e. Alr(j) = j. From Theorem 1, the stable matched set for arm k is its first qk preferred agents210 γ∗(ck) = Ak. We define lr as lr(i) = max{j : Aj ∈ γ∗(m∗(i)), j ∈ [N ]}, that is, in the right211 order, the mapping for arm k ∈ [K] is the least preferred one among its most qk preferred agents.212 Note that this mapping is not an injective, i.e. ∃j, j′, s.t. agent j = Alr(j) = Alr(j′). An intuitive213 representation can be seen in Figure 4 in Appendix A.1.214
4.1.2 Unique Stable Conditions in Many-to-one Matching215
Uniqueness consistency (Unqc) leads the stable matching to a robust one which is a desirable property216 in large dynamic markets with constant individual departure [7]. A precondition of Unqc is to ensure217 global unique stability, hence finding uniqueness conditions is essential.218
The existing unique stable conditions are well established in one-to-one setting (analysis can be219 found in Appendix B), and in this section, we focus on uniqueness conditions in many-to-one market,220 such as SPC, [28], Aligned Preference, Serial Dictatorship Top-top match and Acyclicity [26, 2, 28]221 (Definition 9, 7, 8, 10 in Appendix B.2). Takashi Akahoshi [2] proposes a necessary and sufficient222 condition for uniqueness of stable matching in many-to-one matching where unacceptable agents223 and arms may exist on both sides. We denote their condition as Acyclicity∗. Under our setting, both224 two sides are acceptable, and we first give the proof of that Acyclicity∗ is a necessary and sufficient225 condition for uniqueness in this setting (see Section B.2.4 in Appendix B). We then give relationships226 between our newly α̃-condition and other existing uniqueness conditions, intuitively expressed in227 Figure 1, and we give proof for this section in Appendix B.2.228
Lemma 1. In a many-to-one matching marketM = (K,J ,P), both Serial Dictatorship and Aligned229 Preference can produce a unique stable matching and they are equivalent.230
Theorem 2. In a many-to-one matching marketM = (K,J ,P), our α̃-condition satisfies:231 (i) SPC is a sufficient condition to α̃-condition;232
(ii) α̃-condition is a necessary and sufficient condition to Unqc;233
(iii) α̃-condition is a sufficient condition to Acyclicity∗.234
4.2 Theoretical Results of Regret235
We then provide theoretical results of MO-UCB-D4 algorithm under our α̃-condition. Recall that G∗j236 is the globally dominated arms for agent j under stable matching m∗. For each arm k /∈ G∗j , we give237 the definition of the blocking agents for arm k and agent j: Bjk = {j′ : j′ ≻k j, k /∈ G∗j}, which238 contains agents more preferred by arm k than j. The hidden arms for agent j is Hj = {k : k /∈239 G∗j} ∩ {k : Bjk ̸= ∅}. The reward gap for agent j and arm k is defined as ∆jk = |µj,m∗(j) − µj,k|240 and the minimum reward gap across all arms and agents is ∆ = minj∈[N ]{mink∈[K] ∆j,k}. We241 assume that the reward is different for each agent, thus ∆j,k > 0 for every agent j and arm k.242
Theorem 3. (Regret upper bound) Let Jmax(j) = max {j + 1, {j′ : ∃k ∈ Hj , j′ ∈ Bjk}} be the243 max blocking agent for agent j and fα̃(j) = j + lrmax(j) is a fixed factor depends on both the left244 order and the right order for agent j. Following MO-UCB-D4 algorithm with horizon T , the expected245 regret of a stable matching under α̃-condition (Definition 2) for agent j ∈ [N ] is upper bounded by246
E [Rj(T )] ≤ ∑
k/∈G∗j∪m∗(j)
8α
∆jk
( log(T ) + √ π
α log(T ) ) + ∑ k/∈G∗j ∑ j′∈Bjk:k/∈G∗j′ 8αµj,m∗(j) ∆2j′k ( log(T ) + √ π α log(T ) )
+ cj log2(T ) +O
( N2K2
∆2 + ( min(1, θ|Hj |)fα(Jmax(j) ) + fα̃(j)− 1)2i ∗ +N2Ki∗ ) ,
where i∗ = max{8, i1, i2} (then i∗ ≤ 8 and i1, i2 are defined in equation (3)), and lrmax(j) =247 max{lr(j′) : 1 ≤ j′ ≤ j}, is the maximum right order mapping for agent j′ who ranks higher than248 j.249
From Theorem 3, the scale of the regret upper bound under α̃-condition is O (
NK log(T ) ∆2
) and the250
proof is in Section 3.251
Proof Sketch of Theorem 3. Under α̃-condition, we only need to discuss the regret of the unique252 result. We construct a good phase (in Appendix A.2) and denote that the time point of agent j253 reaching its good phase by τj . After τj , agent j could identify its best arm and matches with his254 stable pair. Thus, from phase τj on-wards, agent j + 1 will find the set of globally dominated arms255 G∗j+1 and will eliminate arm m
∗(j) if m∗(j) brings collisions in communication block according256 to Algorithm 1. Global deletion here follows the left order. Then when agent j enters into regret257 minimization block next phase, the times it plays a sub-optimal arm is small which leads to a small258 total number of collisions experienced by agent j + 1. Then the process of each agent after good259 phase is divided into two stages: before τj and after τj . After τj , according to the causes of regret, it260 is divided into four blocks: collision, local deletion, communication, and sub-optimal play. Phases261 before τj can be bounded by induction. The regret decomposition is bound by the following.262
Lemma 2 (Regret Decomposition). For a stable matching under the α̃-condition, the regret of agent j ∈ [N] under our algorithm can be decomposed and upper bounded as:

E[R_j(T)] ≤ E[S_{F^α_j}]  (regret before phase F^α_j)
 + min(θ|H_j|, 1) · E[S_{V^α_j}]  (local deletion)
 + ( (K − 1 + |B_{j,m*(j)}|) log₂(T) + NK · E[V^α_j] )  (communication)
 + Σ_{k ∉ G*_j} Σ_{j′ ∈ B_{j,k} : k ∉ G*_{j′}} (8α µ_{j,m*(j)}/∆²_{j′,k}) ( log(T) + √(π/α · log(T)) )  (collision)
 + Σ_{k ∉ G*_j ∪ {m*(j)}} (8α/∆_{j,k}) ( log(T) + √(π/α · log(T)) )  (sub-optimal play)
 + NK ( 1 + (ϕ(α) + 1) · 8α/∆² ),

where F^α_j and V^α_j are the time points at which agent j enters the α̃-Good phase and the α̃-Low Collision phase respectively (the "good phases" mentioned above), defined in Appendix A.2.
5 Difficulties and Solutions
While putting forward our α̃-condition in the many-to-one setting, many new problems need to be taken into account.
From one-to-one setting to many-to-one setting First, although we assume that arm preference is270 over individuals rather than combination of agents, the agents matched by one arm are not independent.271 Specially, arms with capacity q can not just be replaced by q independent individuals with the same272 preference. Since there would be implicit competition among different replicates of this arm, and it273 can reject the previously accepted agents when it faces a more preferred agent. Secondly, collisions274 among agents is one of main causes of regret in decentralized setting, while capacity will hinder the275 collision-reducing process. In communication block, when two agents select one arm at a time, as276 an arm can accept more than one agent, these two cannot distinguish who is more preferred by this277 arm, while it can be done in one-to-one markets. Thus it is more difficult to identify arm preferences278 for each agent. The lr in [7] is a one-to-one mapping that corresponds the agent index in the left279 order and the agent index in the right order, which is related to regret bound (Theorem 3 in [7] and280 Theorem 3 in our work). While it does not hold in our setting. To give a descriptive range of matched281 result for each arm under α̃-condition, we need to define a new mapping.282
In order to solve these problems, we explain as follows: First, since capacity influence the com-283 munication among agents, we add communication block and introduce an arm set G∗j , which will284 be deleted before each phase to reduce collisions, where G∗j contains arms that will block agent j285 globally under stable matching m∗. Second, the idea from one-to-one to many-to-one is a transition286 from individual to set. It is natural to split sets into individuals or design a bridge to correspond sets287 to individuals. We construct a new mapping lr (Figure 4 in Appendix A) from agent j in the left order288 to agents in the right order under α̃-condition. lr maps each arm k to the least preferred one of its289 stable matched agents in the right order, thus giving a matching between individuals and individuals290 and constructing the range of the stable matched agents set (Theorem 1). Except lr, capacity also291 influences regret mainly in communication block, as mentioned in the first paragraph.292
From α-condition to α̃-condition To extend α-condition to the many-to-one setting, it needs293 to define preferences among sets. However, there might be exponential number of sets due to the294 combinatorial structure and simply constraining preferences over all possible sets will lead to high295 complexity. Motivated by α-condition which characterizes properties of matched pairs in one-to-one296 setting, we come up with a possible constraint by regarding the arm and its least preferred agent in the297 matched set as the matched pair and define preferences according to this grouping. It turns out that298 we only need to define the preferences of arms over disjoint sets of agents to complete the extension299 as α-condition is defined under the stable matching, which can also fit the regret analysis well. As a300 summary, there might be other possible ways to extend the α-condition but we present a successful301 trial to not only give a good extension with similar inclusion relationships but also guarantee good302 regret bound.303
6 Experiments
In this section, we verify the experimental results of our MO-UCB-D4 algorithm (Algorithm 1) for decentralized many-to-one matching markets. For all experiments, the rankings of all agents and arms are sampled uniformly. For each agent, we set the reward value of the least preferred arm to 1/N and that of the most preferred one to 1, so the reward gap between any adjacently ranked arms is ∆ = 1/N. The reward X_{j,k}(t) when agent j matches with arm k at time t is sampled from Ber(µ_{j,k}). The capacity is equally set as q = N/K. We investigate how the cumulative regret and the cumulative market instability depend on the size of the market and the number of arms under three different unique stability conditions: Serial Dictatorship, SPC, and the α̃-condition. The former, cumulative regret, is the total mean reward gap between the stable matching result and the simulated result, and the latter, cumulative instability, is defined as the number of unstable matchings up to round t. In our experiments, all results are averaged over 10 independent runs, hence the error bars are calculated as standard deviations divided by √10.
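For concreteness, the following sketch shows how one experimental instance could be generated under the stated conventions (uniformly random rankings, linearly spaced mean rewards with gap 1/N, Bernoulli rewards, and equal capacities q = N/K). It is an illustrative reconstruction rather than the authors' released code; anchoring the most preferred arm at mean reward 1 is an assumption.

```python
import numpy as np

def make_instance(N, K, seed=0):
    """Build one random many-to-one market instance as described in Section 6 (sketch)."""
    rng = np.random.default_rng(seed)
    q = N // K                                                   # equal capacities q = N/K
    agent_pref = [list(rng.permutation(K)) for _ in range(N)]    # uniform random rankings over arms
    arm_pref = [list(rng.permutation(N)) for _ in range(K)]      # uniform random rankings over agents
    # Mean rewards: linearly spaced with gap 1/N, most preferred arm anchored at 1 (illustrative).
    mu = np.zeros((N, K))
    for j in range(N):
        for rank, k in enumerate(agent_pref[j]):
            mu[j, k] = 1.0 - rank / N
    capacity = [q] * K
    return agent_pref, arm_pref, mu, capacity, rng

def draw_reward(mu, j, k, rng):
    """Bernoulli reward Ber(mu[j, k]) observed on a successful match."""
    return rng.binomial(1, mu[j, k])
```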
Varying the market size To test effects on two indicators, cumulative regret and cumulative317 unstability, we first varying N with fixed K with market size of N ∈ {10, 20, 30, 40} agents318
and K = 5 arms. The number of rounds is set to be 100, 000. The cumulative regret in Figure319 2(a)(c)(e) show an increasing trend with convergence as the number of agents increases under these320 three conditions. When the number of agents increases, there is a high probability of collisions321 among different agents, resulting in the increase of cumulative regret. Similar results for cumulative322 unstability are shown in Figure 2(b)(d)(f). When N is larger, the number of unstable pairs becomes323 more. With the increase of the number of rounds, both two indicators increase first and then tend to324 be stable. The jumping points are caused by multi-phases setting of MO-UCB-D4 algorithm.325
Varying arm capacity The number of arms K is chosen by K ∈ {2, 5, 10, 20}, with N = 20 and326 q = N/K. The number of rounds we set is 400, 000. With the increase of K, both the cumulative327 regret in Figure 3(a)(c)(e) and the cumulative unstability in Figure 3(b)(d)(f) increase monotonously.328 When K increases, the capacity qk for each arm k decreases, and then the number of collisions329 will increase, which leads to an increase of cumulative regret. And it also leads to more unstable330 pairs, which needs more communication blocks to converge to a stable matching. Under these three331 conditions, the performances of the algorithm are similar.332
7 Conclusion
We are the first to study a bandit algorithm for the many-to-one matching market under a unique stable matching. This work focuses on a decentralized market. A new α̃-condition is proposed to guarantee a unique stable outcome in the many-to-one market; it is more general than existing uniqueness conditions like SPC and Serial Dictatorship and recovers the usual α-condition in the one-to-one setting. We propose a phase-based algorithm, MO-UCB-D4, with arm elimination, which obtains O(NK log(T)/∆²) stable regret under the α̃-condition. By carefully defining a mapping from each arm to the least preferred agent in its stable matched set, we can effectively put arms and agents in an individual-to-individual correspondence. A series of experiments under two environments, varying the market size and varying the arm capacity, are conducted. The results show that our algorithm performs well under Serial Dictatorship, SPC, and the α̃-condition, respectively.
References
[1] Azar Abizada. Stability and incentives for college admissions with budget constraints. Theoretical Economics, 11(2):735–756, 2016.
[2] Takashi Akahoshi. Singleton core in many-to-one matching problems. Mathematical Social347 Sciences, 72:7–13, 2014.348
[3] Ahmet Altinok. Dynamic many-to-one matching. Available at SSRN 3526522, 2019.349
[4] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed350 bandit problem. Machine learning, 47(2):235–256, 2002.351
[5] Orly Avner and Shie Mannor. Concurrent bandits and cognitive radio networks. In Joint352 European Conference on Machine Learning and Knowledge Discovery in Databases, pages353 66–81. Springer, 2014.354
[6] Sophie Bade. Random serial dictatorship: the one and only. Mathematics of Operations355 Research, 45(1):353–368, 2020.356
[7] Soumya Basu, Karthik Abinav Sankararaman, and Abishek Sankararaman. Beyond log2(t)357 regret for decentralized bandits in matching markets. In International Conference on Machine358 Learning, pages 705–715, 2021.359
[8] Anna Bogomolnaia and Hervé Moulin. A new solution to the random assignment problem.360 Journal of Economic theory, 100(2):295–328, 2001.361
[9] Somouaoga Bonkoungou. Decentralized college admissions under single application. Review362 of Economic Design, 25(1):65–91, 2021.363
[10] Simon Clark. The uniqueness of stable matchings. Contributions in Theoretical Economics,364 6(1), 2006.365
[11] Jan Eeckhout. On the uniqueness of stable marriage matchings. Economics Letters, 69(1):1–8,366 2000.367
[12] David Gale and Lloyd S Shapley. College admissions and the stability of marriage. The368 American Mathematical Monthly, 69(1):9–15, 1962.369
[13] Aurélien Garivier, Tor Lattimore, and Emilie Kaufmann. On explore-then-commit strategies.370 Advances in Neural Information Processing Systems, 29:784–792, 2016.371
[14] Virginia Gunn, Bertina Kreshpaj, Nuria Matilla-Santander, Emilia F Vignola, David H Weg-372 man, Christer Hogstedt, Emily Q Ahonen, Theo Bodin, Cecilia Orellana, Sherry Baron, et al.373 Initiatives addressing precarious employment and its effects on workers’ health and well-being:374 A systematic review. International Journal of Environmental Research and Public Health,375 19(4):2232, 2022.376
[15] Gregory Z Gutin, Philip R Neary, and Anders Yeo. Unique stable matchings. arXiv preprint377 arXiv:2106.12977, 2021.378
[16] Guillaume Haeringer and Flip Klijn. Constrained school choice. Journal of Economic theory,379 144(5):1921–1947, 2009.380
[17] John William Hatfield, Fuhito Kojima, and Scott Duke Kominers. Investment incentives in381 labor market matching. American Economic Review, 104(5):436–41, 2014.382
[18] Ramesh Johari, Vijay Kamble, and Yash Kanoria. Matching while learning. Operations383 Research, 69(2):655–681, 2021.384
[19] Alexander Karpov. A necessary and sufficient condition for uniqueness consistency in the stable385 marriage matching problem. Economics Letters, 178:63–65, 2019.386
[20] Bettina Klaus and Flip Klijn. Local and global consistency properties for student placement.387 Journal of Mathematical Economics, 49(3):222–229, 2013.388
[21] Lydia T Liu, Horia Mania, and Michael Jordan. Competing bandits in matching markets. In389 International Conference on Artificial Intelligence and Statistics, pages 1618–1628. PMLR,390 2020.391
[22] Lydia T Liu, Feng Ruan, Horia Mania, and Michael I Jordan. Bandit learning in decentralized392 matching markets. arXiv preprint arXiv:2012.07348, 2020.393
[23] Jinpeng Ma. The singleton core in the college admissions problem and its application to the394 national resident matching program (nrmp). Games and Economic Behavior, 69(1):150–164,395 2010.396
[24] Onkar Malgonde, He Zhang, Balaji Padmanabhan, and Moez Limayem. Taming complexity in397 search matching: Two-sided recommender systems on digital platforms. Mis Quarterly, 44(1),398 2020.399
[25] Hai Nguyen, Thành Nguyen, and Alexander Teytelboym. Stability in matching markets with400 complex constraints. Management Science, 67(12):7438–7454, 2021.401
[26] Muriel Niederle and Leeat Yariv. Decentralized matching with aligned preferences. Technical402 report, National Bureau of Economic Research, 2009.403
[27] Jaeok Park. Competitive equilibrium and singleton cores in generalized matching problems.404 International Journal of Game Theory, 46(2):487–509, 2017.405
[28] Philip J Reny. A simple sufficient condition for a unique and student-efficient stable matching406 in the college admissions problem. Economic Theory Bulletin, 9(1):7–9, 2021.407
[29] Antonio Romero-Medina and Matteo Triossi. Acyclicity and singleton cores in matching408 markets. Economics Letters, 118(1):237–239, 2013.409
[30] Jonathan Rosenski, Ohad Shamir, and Liran Szlak. Multi-player bandits–a musical chairs410 approach. In International Conference on Machine Learning, pages 155–163. PMLR, 2016.411
[31] Alvin E Roth. On the allocation of residents to rural hospitals: a general property of two-sided412 matching markets. Econometrica: Journal of the Econometric Society, pages 425–427, 1986.413
[32] Alvin E Roth and Marilda Sotomayor. Two-sided matching. Handbook of game theory with414 economic applications, 1:485–541, 1992.415
[33] Hannu Salonen and Mikko AA Salonen. Mutually best matches. Mathematical Social Sciences,416 91:42–50, 2018.417
[34] Abishek Sankararaman, Soumya Basu, and Karthik Abinav Sankararaman. Dominate or delete:418 Decentralized competing bandits in serial dictatorship. In International Conference on Artificial419 Intelligence and Statistics, pages 1252–1260. PMLR, 2021.420
[35] Jay Sethuraman, Chung-Piaw Teo, Liwen Qian, et al. Many-to-one stable matching: Geometry421 and fairness. Mathematics of Operations Research, 31(3):581–596, 2006.422
[36] William R Thompson. On the likelihood that one unknown probability exceeds another in view423 of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.424
Checklist425
1. For all authors...426
(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s427 contributions and scope? [Yes] Please see Abstract and Section 1.428
(b) Did you describe the limitations of your work? [Yes] Please see Section C.4.429 (c) Did you discuss any potential negative societal impacts of your work? [N/A] This430 work mainly focuses on the online learning theory, which does not have any potential431 negative societal impacts.432
(d) Have you read the ethics review guidelines and ensured that your paper conforms to433 them? [Yes]434
2. If you are including theoretical results...435 (a) Did you state the full set of assumptions of all theoretical results? [Yes] Please see436 Section 2.437 (b) Did you include complete proofs of all theoretical results? [Yes] Please see Appendix.438
3. If you ran experiments...439 (a) Did you include the code, data, and instructions needed to reproduce the main exper-440 imental results (either in the supplemental material or as a URL)? [Yes] Please see441 supplemental material.442
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they443 were chosen)? [Yes] Please see Section 6 and supplemental material.444
(c) Did you report error bars (e.g., with respect to the random seed after running experi-445 ments multiple times)? [Yes] Please see Section 6.446
(d) Did you include the total amount of compute and the type of resources used (e.g., type447 of GPUs, internal cluster, or cloud provider)? [N/A]448
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...449 (a) If your work uses existing assets, did you cite the creators? [N/A]450 (b) Did you mention the license of the assets? [N/A]451 (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]452
(d) Did you discuss whether and how consent was obtained from people whose data you’re454 using/curating? [N/A]455
(e) Did you discuss whether the data you are using/curating contains personally identifiable456 information or offensive content? [N/A]457
5. If you used crowdsourcing or conducted research with human subjects...458 (a) Did you include the full text of instructions given to participants and screenshots, if459 applicable? [N/A]460 (b) Did you describe any potential participant risks, with links to Institutional Review461 Board (IRB) approvals, if applicable? [N/A]462 (c) Did you include the estimated hourly wage paid to participants and the total amount463 spent on participant compensation? [N/A]464 | 1. What is the focus and contribution of the paper regarding many-to-one matching markets?
2. What are the strengths of the proposed algorithm, particularly in its ability to generalize to one-to-one matching markets?
3. What are the weaknesses of the paper, especially regarding competition mechanisms and potential extensions to markets with multiple equilibria?
4. Do you have any concerns or suggestions for improving the paper's theoretical analysis and algorithmic approach?
5. What are the limitations of the paper, and how might they be addressed in future research? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper studies many-to-one matching markets - which is a generalization of the one-to-one matching markets studied in recent years in the online learning community. The paper identifies a combinatorial condition that guarantees an unique Nash equilibrium to the underlying game if all participants knew of their preferences. This condition is a multi-way generalization of the recently used alpha condition for one-to-one matching. The paper shows that if the underlying market satisfies this unique Nash Equilibria criteria, then the decentralized learning algorithm yields logarithmic regret. The proposed algorithm is a natural variant of the UCB-D4 which was shown to be a good method for the one-to-one matching case.
Strengths And Weaknesses
Strengths
Comprehensive treatment of the problem
Builds and contributes to an emerging theory on decentralized bandit learning in matching markets
Their algorithms and results recover the known results in one-to-one matching market when the system is one-to-one. In this sense the proposed algorithm is a strict generalization of prior algorithms
Weakness
The paper does not address issues such as competition mechanism while learning.
However, I believe the weakness to be minor and can be easily addressed in the discussion sections of the revision of the paper.
Questions
What do you think are the additional challenges in extending this to markets with multiple equilibria ?
Can this algorithm be generalized if both sides of the market need to perform learning ?
Limitations
Yes. This is primarily a theory paper. |
NIPS | Title
Bandit Learning in Many-to-one Matching Markets with Uniqueness Conditions
Abstract
An emerging line of research is dedicated to the problem of one-to-one matching markets with bandits, where the preference of one side is unknown and thus we need to match while learning the preference through multiple rounds of interaction. However, in many real-world applications, such as online recruitment platforms for short-term workers, one side of the market can select more than one participant from the other side, which motivates the study of the many-to-one matching problem. Moreover, the existence of a unique stable matching is crucial to the competitive equilibrium of the market. In this paper, we first introduce a more general new α̃-condition to guarantee the uniqueness of stable matching in many-to-one matching problems, which generalizes some established uniqueness conditions such as SPC and Serial Dictatorship, and recovers the known α-condition if the problem is reduced to one-to-one matching. Under this new condition, we design an MO-UCB-D4 algorithm with an O(NK log(T)/∆²) regret bound, where T is the time horizon, N is the number of agents, K is the number of arms, and ∆ is the minimum reward gap. Extensive experiments show that our algorithm achieves uniformly good performance under different uniqueness conditions.
1 Introduction
The rise of platforms for the online matching market has led to an emergence of opportunities for18 companies to participate in personalized decision-making [14, 18]. Companies (like Thumbtack19 and Taskrabbit and Upwork platforms) use online platforms to address short-term needs or seasonal20 spikes in production demands, accommodate workers who are voluntarily looking for more flexible21 work arrangements or probation period before permanent employment. The supply and demand22 sides in two-sided markets make policies on the basis of their diversified needs, which is abstracted23 as a matching market with agent side and arm side, and each side has a preference profile over the24 opposite side. They choose from the other side according to preference and perform a matching. The25 stability of the matching result is a key property of the market [32, 1, 27].26
The preferences in the online labor market may be unknown to one side in advance, thus matching27 while learning the preferences is necessary. The multi-armed bandit (MAB) [36, 13, 4] is an important28 tool for N independent agents in matching market simultaneously selecting arms adaptively from29 received rewards at each round. The idea of applying MAB to one-to-one matching problems,30 introduced by [21], assumes that there is a central platform to make decisions for all agents. Following31 this, other works [22, 34, 7] consider a more general decentralized setting where there is no central32 platform to arrange matchings, and our work is also based on this setting.33
However, it is not enough to just study the one-to-one setting. Take online short-term worker34 employment as an example, it is an online platform design with an iterative matching, where35
employers have numerous similar short-term tasks or internships to be recruited. Workers can only36 choose one task according to the company’s needs at a time while one company can accept more37 than one employee. Each company makes a fixed ranking for candidates according to its own38 requirements but workers have no knowledge of companies’ preferences. The reward for workers39 is a comprehensive consideration of salary and job environment. Since tasks are short-term, each40 candidate can try many times in different companies to choose the most suitable job. We abstract41 companies as arms and workers as agents. Each arm has a capacity q which is the maximum number42 of agents this arm can accommodate. When an arm faces multiple choices, it accepts its most q43 preferred agents. Agents thus compete for arms and may receive zero reward if losing the conflict. It44 is worth mentioning that arms with capacity q in the many-to-one matching can not just be replaced45 by q independent individuals with the same preference since there would be implicit competition46 among different replicates of this arm, not equal treatment. In addition, when multiple agents select47 one arm at a time, there may be no collision, which will hinder the communication among different48 agents under the decentralized assumption. They cannot distinguish who is more preferred by this49 arm in one round as it can accept more than one agent while this can be done in one-to-one case.50 Communication here lets each agent learn more about the preferences of arms and other agents, so as51 to formulate better policies to reduce collisions and learn fast about their stable results.52
This work focuses on a many-to-one market under uniqueness conditions. Previous work [10, 15]53 emphasize the importance of constructing a unique stable matching for the equilibrium of matching54 problems and some existing uniqueness conditions are studied in many-to-one matching, such as55 Sequential Preference Condition (SPC) and Acyclicity [26, 2]. Our work is motivated by [7], but the56 unique one-to-one mapping between arms and agents in their study which gives a surrogate threshold57 for arm elimination does not work in the many-to-one setting. And the uniqueness conditions in58 many-to-one matching are not well-studied, which also brings a challenge to identify and leverage59 the relationship between the resulting stable matching and preferences of two sides in the design60 of bandit algorithms. We propose an α̃-condition that can guarantee a unique stable matching and61 recover α-condition [19] if reduced to the one-to-one setting. We establish the relationships between62 our new α̃-condition and existing uniqueness conditions in many-to-one setting.63
In this paper, we study the bandit algorithm for a decentralized many-to-one matching market with uniqueness conditions. Under our newly introduced α̃-condition, we design an MO-UCB-D4 algorithm with arm elimination whose regret can be upper bounded by O(NK log(T)/∆²), where N is the number of agents, K is the number of arms, and ∆ is the minimum reward gap. Finally, we conduct a series of experiments to simulate our algorithm under the various conditions of Serial Dictatorship, SPC, and the α̃-condition to study the stability and regret of the algorithm.
Related Work The study of matching markets has a long history in economics and operation70 research [8, 6, 32] with real applications like school enrollment, labor employment, hospital resource71 allocation, and so on [1, 23, 31, 17]. A salient feature of market matching is making decisions for72 competing players on both sides [36, 12]. MAB is an important tool to study matching problems under73 uncertainty to obtain a maximum reward, and upper confidence bound algorithm (UCB) [4] is a typical74 algorithm, which sets a confidence interval to represent uncertainty. Matching market with MAB is75 studied in both centralized and decentralized setting [21, 22]. Following these, Abishek Sankararaman76 et al. [34] propose a phased UCB algorithm under a uniqueness condition, Serial Dictatorship, to77 manage collisions. They solve the problem of the decentralized market without knowing arm-gaps78 or time horizon, and reduce the probability of linear regret through non-monotonic arm elimination.79 The introduction of the uniqueness condition plays an important role in the equilibrium of matching80 results [15, 7]. Under a stronger and robust condition, Uniqueness Consistency [19], Soumya Basu81 et.al [7] apply MAB to online matching and obtain robust results that the subset of stable matchings82 being separated from the system does not affect other stable matchings.83
We discuss many-to-one problems such as online short-term employment and MOOCs [14, 24, 18], as the one-to-one setting has limitations in practice. Bonkoungou [9] runs a student-proposing deferred acceptance algorithm (DA) [12] to study decentralized college admission. Altinok [3] considers dynamic matching in the many-to-one setting, which can be solved as if it were static many-to-one or dynamic one-to-one under certain assumptions. As the existence and uniqueness of competitive equilibrium and the core are important for allocations, unique stable results need to be considered [27]. Similar to the conditions for a unique stable matching in one-to-one markets, some uniqueness conditions for stable results in the many-to-one setting have also been studied [16, 28, 15, 2, 27].
2 Setting
This paper considers a many-to-one matching market M = (K, J, P), where K = [K] and J = [N] are a finite arm set and a finite agent set, respectively, and each arm k has a capacity q_k ≥ 1. To guarantee that no agents will be unmatched, we focus on markets with $N \le \sum_{i=1}^{K} q_i$. P is the fixed preference profile of agents and arms, which is ranked by mean reward. We assume that agents' preferences over arms are unknown and need to be learned. If agent j prefers arm k over k′, which means that µ_{j,k} > µ_{j,k′}, we write k ≻_j k′. Preferences are strict, i.e., µ_{j,k} ≠ µ_{j,k′} if k ≠ k′. Similarly, each arm k has a fixed and known preference ≻_k over all agents; specifically, j ≻_k j′ means that arm k prefers agent j over j′. Throughout, we focus on markets where all agent-arm pairs are mutually acceptable, that is, j ≻_k ∅ and k ≻_j ∅ for all k ∈ [K] and j ∈ [N]. Let the mapping m be the matching result: m_t(j) is the matched arm of agent j at time t, and γ_t(k) is the set of agents matched with arm k¹. At each time, agent j selects an arm I_t(j), and we use M_t(j) to denote whether j is successfully matched with its selected arm: M_t(j) = 1 if agent j is matched with I_t(j), and M_t(j) = 0 otherwise. If multiple agents select arm k at the same time, only the top q_k agents can successfully match. An agent j matched with arm k observes the reward X_{j,m_t(j)}(t), where the random reward X_{j,k}(t) ∈ [0, 1] is drawn independently from a fixed distribution with mean µ_{j,k}. Unmatched agents experience collisions and receive zero reward. In general, the reward obtained by agent j is X_{j,I_t(j)}(t) M_t(j).
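To make the matching mechanics concrete, the following is a minimal sketch (not part of the paper's algorithm; the function name and data layout are our own assumptions) of one round of the arm-side acceptance rule: every agent proposes to one arm, and each arm keeps its q_k most preferred proposers while the rest collide and receive zero reward.

def match_one_round(choices, arm_prefs, capacities):
    # choices[j]: arm selected by agent j; arm_prefs[k]: agents ordered from most to least preferred;
    # capacities[k]: q_k. Returns the set of successfully matched (agent, arm) pairs.
    proposals = {}
    for j, k in enumerate(choices):
        proposals.setdefault(k, []).append(j)
    matched = set()
    for k, agents in proposals.items():
        rank = {j: arm_prefs[k].index(j) for j in agents}
        accepted = sorted(agents, key=lambda j: rank[j])[:capacities[k]]
        matched.update((j, k) for j in accepted)
    return matched

An agent j with (j, choices[j]) not in the returned set has collided and observes reward zero in that round, i.e., M_t(j) = 0.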
An agent j and an arm k form a blocking pair for a matching m if they are not matched but prefer each other over their assignments, i.e., k ≻_j m(j) and ∃ j′ ∈ γ(k) with j ≻_k j′. We say a matching satisfies individual rationality (IR) if every participant prefers its assignment to being unmatched, that is, every worker prefers to find a job rather than do nothing, and every company also wants to recruit workers rather than not recruit anyone; in our setting this holds since j ≻_k ∅ and k ≻_j ∅ for all j ∈ [N] and k ∈ [K]. Under the IR condition, a matching in the many-to-one setting is stable if there does not exist a blocking pair [33, 35].
This paper considers matching markets under uniqueness conditions. The overall goal is thus to find the unique stable matching between the agent side and the arm side through repeated interaction. Let m*(j) be the stable matched arm of agent j under the stable matching m*. The reward obtained by agent j is compared at each time against the reward received by matching with m*(j). We aim to minimize the expected stable regret of agent j over the time horizon T, defined as
$$R_j(T) = T\mu_{j,m^*(j)} - \mathbb{E}\left[\sum_{t=1}^{T} M_t(j)\, X_{j,I_t(j)}(t)\right].$$
3 Algorithm
In this section, we introduce our MO-UCB-D4 algorithm (Many-to-one UCB with Decentralized Dominated arms Deletion and Local Deletion, Algorithm 1) for the decentralized many-to-one market, where there is no platform to arrange actions for agents, which leads to conflicts among agents. For each agent j, MO-UCB-D4 first takes the agent set J and the arm set K as input and chooses a parameter θ ∈ (0, 1/K) (discussed in Appendix C). It runs in multiple phases, and each phase i mainly includes a regret minimization block (lines 6-12) and a communication block (lines 13-16) with duration $2^{i-1}$, i = 1, 2, ....

For each agent j in phase i, the algorithm performs arm deletion to reduce potential conflicts, which mainly consists of global deletion and local deletion. The former eliminates the arms most preferred by agents who rank higher than agent j, yielding the active set Ch_j[i] (line 4), and the latter deletes arms that still collide often with agent j after global deletion (line 6). We keep a collision counter C_{j,k}[i] that records the number of collisions agent j experiences when pulling arm k.

In the regret minimization block of phase i, we use $L_j[i] = \{k : C_{j,k}[i] \ge \lceil \theta 2^i \rceil\}$ to denote the arms that collide with agent j more often than the threshold $\lceil \theta 2^i \rceil$. Arms in L_j[i] are first locally deleted to reduce potential collisions for agent j (line 6). After that, agent j selects an action I_t(j) from the remaining arms in Ch_j[i]\L_j[i] according to the UCB index $\hat{\mu}_{j,k}(t-1) + \sqrt{\frac{2\alpha\log(t)}{N_{j,k}(t-1)}}$ (line 7), where N_{j,k}(t−1) is the number of times that agent j and arm
¹The mapping m is not invertible as it is not injective; thus we do not use $m_t^{-1}(k)$.
Algorithm 1 MO-UCB-D4 algorithm (for agent j)
Input: θ ∈ (0, 1/K), α > 1.
1: Set global dominated set G_j[0] = ∅
2: for phase i = 1, 2, ... do
3:   Reset the collision counters C_{j,k}[i] = 0, ∀k ∈ [K];
4:   Reset the active arm set Ch_j[i] = [K]\G_j[i−1];
5:   if t < 2^i + NK(i−1) then
6:     Local deletion L_j[i] = {k : C_{j,k}[i] ≥ ⌈θ2^i⌉};
7:     Play arm I_t(j) ∈ argmax_{k ∈ Ch_j[i]\L_j[i]} ( µ̂_{j,k}(t−1) + sqrt( 2α log(t) / N_{j,k}(t−1) ) );
8:     if k = I_t(j) is successfully matched with agent j, i.e., m_t(j) = k then
9:       Update the estimate µ̂_{j,k}(t) and the matching count N_{j,k}(t);
10:    else
11:      C_{j,k}[i] = C_{j,k}[i] + 1;
12:    end if
13:  else if t = 2^i + NK(i−1) then
14:    O_j[i] ← most matched arm in phase i;
15:    G_j[i] ← COMMUNICATION(i, O_j[i]);
16:  end if
17: end for
k have been matched up to time t−1. If the selected arm is successfully matched with agent j, then the algorithm updates the estimated reward $\hat{\mu}_{j,k}(t) = \frac{1}{N_{j,k}(t)}\sum_{s=1}^{t}\mathbb{1}\{I_s(j)=k,\ M_s(j)=1\}\, X_{j,k}(s)$ and the count N_{j,k}(t) (line 9). Otherwise, a collision happens (line 11) and j receives zero reward. The regret minimization block identifies the most played arm O_j[i] of agent j in each phase i, which serves as the estimate of the best arm for j and drives the policy that minimizes expected regret.
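As a hedged illustration of lines 7-11 for a single agent, the UCB index computation and the post-round update can be sketched as follows (the array names and the handling of unplayed arms are our own assumptions, not part of the paper's specification):

import numpy as np

def ucb_select(active_arms, mu_hat, n_match, t, alpha):
    # Pick the arm in active_arms maximizing mu_hat[k] + sqrt(2*alpha*log(t) / N_{j,k}(t-1)).
    def index(k):
        if n_match[k] == 0:
            return float("inf")  # assumption: force at least one successful match per arm
        return mu_hat[k] + np.sqrt(2 * alpha * np.log(t) / n_match[k])
    return max(active_arms, key=index)

def update_after_round(k, matched, reward, mu_hat, n_match, collisions):
    if matched:
        # successful match: update the running mean estimate and the matching count
        n_match[k] += 1
        mu_hat[k] += (reward - mu_hat[k]) / n_match[k]
    else:
        # collision: only the collision counter used for local deletion is updated
        collisions[k] += 1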
Algorithm 2 COMMUNICATION
Input: Phase number i, and the most played arm O_j[i] of each agent j, ∀j ∈ [N].
1: Set C = ∅;
2: for t = 1, 2, ..., NK − 1 do
3:   if K(j−1) ≤ t ≤ Kj − 1 then
4:     Agent j plays arm I_t(j) = (t mod K) + 1;
5:     if a collision occurs then
6:       C = C ∪ {I_t(j)};
7:     end if
8:   else
9:     Play arm I_t(j) = O_j[i];
10:  end if
11: end for
12: RETURN C;
In the communication block (Algorithm 2), there are N sub-blocks, each of duration K. In the ℓ-th sub-block, only agent ℓ pulls arm 1, arm 2, ..., arm K in round robin, while the other agents select their most preferred arms, estimated as their most played ones (line 4). This block aims to detect the globally dominated arms for agent j: G_j[i] ⊂ {O_{j′}[i] : j′ ≻_{O_{j′}[i]} j}. Under the stable matching m*, the set of globally dominated arms for agent j is denoted G*_j. After the communication block of phase i, each agent j updates its active arm set Ch_j[i+1] for phase i+1 by globally deleting the arm set G_j[i], and enters the next phase (line 4 in Algorithm 1).
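A minimal sketch of the communication block from agent j's point of view is given below (names are illustrative and the sub-block indexing follows Algorithm 2 only up to minor bookkeeping). During its own sub-block the agent sweeps all K arms; any collision observed then must be caused by a more preferred agent sitting on its most played arm, so that arm is recorded as globally dominated.

def communication_block(j, most_played, K, play_and_observe_collision):
    # most_played[j'] = O_{j'}[i]; play_and_observe_collision(arm) plays the arm and
    # returns True if a collision occurred. Returns the detected dominated set G_j[i].
    dominated = set()
    for sub_block in range(len(most_played)):  # N sub-blocks, each of length K
        for step in range(K):
            if sub_block == j:                 # my turn: sweep arms 0..K-1 in round robin
                if play_and_observe_collision(step):
                    dominated.add(step)
            else:                              # otherwise keep playing my own most played arm
                play_and_observe_collision(most_played[j])
    return dominated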
Hence, the multi-phase design guarantees that the active sets in different phases have no inclusion relationship, so that if an agent deletes an arm in a certain phase, this arm can still be selected in later rounds. This ensures that no agent permanently eliminates its stable matched arm, and a mistaken deletion does not lead to linear regret.
4 Results

4.1 Uniqueness Conditions

4.1.1 α̃-condition
Constructing a unique stable matching plays an important role in market equilibrium and fairness [10, 15]. With uniqueness, there is no dispute over which side's preferred stable matching should be adopted, which makes the outcome fairer. When the preferences of agents and arms are given by utility functions instead of random preferences, like the payments for workers in labor markets, the stable matching is usually unique. Thus the assumption of a unique stable matching is quite common in real applications. In this section, we propose a new uniqueness condition, the α̃-condition. First, we introduce uniqueness consistency (Unqc) [19], which guarantees robustness and uniqueness of markets.

Definition 1. A preference profile satisfies uniqueness consistency if and only if
(i) there exists a unique stable matching m*;

(ii) for any subset of arms or agents, the restriction of the preference profile to this subset with their stable-matched pairs has a unique stable matching.

It guarantees that even if an arbitrary subset of agents is deleted from the system together with their respective stable matched arms, there still exists a unique stable matching among the remaining agents and arms. This condition allows any algorithm to identify at least one stable pair in a unique stable matching system and to guide the system to the globally unique stable matching in an iterative manner. To obtain consistent stable results in the many-to-one market, we propose a new α̃-condition, which is a sufficient and necessary condition for Unqc (proved in Appendix B).
We consider a finite set of arms [K] = {1, 2, ..., K} and a finite set of agents [N] = {1, 2, ..., N} with preference profile P. Assume that [N]_r = {A_1, A_2, ..., A_N} is a permutation of {1, 2, ..., N} and [K]_r = {c_1, c_2, ..., c_K} is a permutation of {1, 2, ..., K}. We refer to [N], [K] as the left order and to [N]_r, [K]_r as the right order. The k-th arm in the right order set [K]_r has index c_k in the left order set [K], and the j-th agent in the right order set [N]_r has index A_j in the left order set [N]. Accounting for arm capacities, we denote by γ*(c_k) (right order) the stable matched agent set of arm c_k.

Definition 2. A many-to-one matching market satisfies the α̃-condition if:

(i) the left order of agents and arms satisfies
$$\forall j \in [N],\ \forall k > j,\ k \in [K]: \quad \mu_{j,m^*(j)} > \mu_{j,k},$$
where m*(j) is agent j's stable matched arm;

(ii) the right order of agents and arms satisfies
$$\forall k < k' \le K,\ c_k \in [K]_r,\ A_{k'} \subset [N]_r: \quad \gamma^*(c_k) \succ_{c_k} A_{\sum_{i=1}^{k'-1} q_{c_i}+1},$$
where "the set γ*(c_k) is more preferred than $A_{\sum_{i=1}^{k'-1} q_{c_i}+1}$" means that the least preferred agent in γ*(c_k) for c_k is better than $A_{\sum_{i=1}^{k'-1} q_{c_i}+1}$ for c_k.
Under our α̃-condition, the left order and the right order satisfy the following rule. The left order ranks according to agents' preferences: the first agent in the left order set [N] prefers arm 1 in [K] most and has it as the stable matched arm, and the same holds for agents 2 through q_1 since arm 1 has capacity q_1. Then the (q_1 + 1)-th agent in the left order set [N] has arm 2 in [K] as her stable matched arm and prefers arm 2 most except for arm 1. The remaining agents follow similarly. Analogously, the right order ranks according to arms' preferences: the first arm in the right order set [K]_r most prefers the first q_{c_1} agents in the right order set [N]_r and takes them as its stable matched agents. The remaining arms follow similarly.
This condition is more general than existing uniqueness conditions like SPC [28] and recovers the known α-condition in the one-to-one matching market [19]. The relationship between existing uniqueness conditions and our proposed condition is analyzed in detail in Section 4.1.2.
The main idea in moving from the one-to-one to the many-to-one analysis is to replace individuals with sets. In general, under the α̃-condition, the left order satisfies that when arms 1 to k−1 are removed, agents $\sum_{i=1}^{k-1} q_i + 1$ through $\sum_{i=1}^{k} q_i$ prefer arm k most; and the right order means that when agents A_1 to $A_{\sum_{i=1}^{k-1} q_i}$ are removed, arm k prefers the agents $A_k = \{A_{\sum_{i=1}^{k-1} q_{c_i}+1}, A_{\sum_{i=1}^{k-1} q_{c_i}+2}, \cdots, A_{\sum_{i=1}^{k} q_{c_i}}\}$, where A_k is the set of the q_k agents most preferred by arm k among those who have not been matched by arms 1, 2, ..., k−1. The next theorem gives a summary.
Theorem 1. If a market M = (K, J, P) satisfies the α̃-condition, then $m^*(\sum_{i=1}^{j-1} q_i + 1) = m^*(\sum_{i=1}^{j-1} q_i + 2) = \cdots = m^*(\sum_{i=1}^{j} q_i) = j$ (the left order), and $\gamma^*(c_k) = A_k$ and $m^*(A_j) = c_j$ (the right order) under the stable matching.
Under the α̃-condition, the stable matched arm may not be the most preferred one for each agent j, j ∈ [N]. Thus (i) m*(j) need not be dominated only by agents 1 to j−1, i.e., there may exist j′ > j such that j′ ≻_{m*(j)} j; and (ii) the left order may not be identical to the right order, so we define a mapping lr to match the index of an agent in the left order with its index in the right order, i.e., A_{lr(j)} = j. From Theorem 1, the stable matched set of arm k is its q_k most preferred agents, γ*(c_k) = A_k. We define lr by lr(i) = max{j : A_j ∈ γ*(m*(i)), j ∈ [N]}; that is, in the right order, the mapping for arm k ∈ [K] points to the least preferred one among its q_k most preferred agents. Note that this mapping is not injective, i.e., there exist j ≠ j′ with A_{lr(j)} = A_{lr(j′)}. An intuitive illustration is given in Figure 4 in Appendix A.1.
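Theorem 1 pins the stable matching down by capacity blocks of the left order. As a small illustrative helper (our own code, valid only when the α̃-condition holds and N equals the total capacity), the left-order statement can be realized as:

def stable_blocks(capacities_left_order):
    # capacities_left_order[k-1] = q_k in the left order.
    # Returns m*(j) for every agent j = 1..N, assigning agents
    # sum(q_1..q_{k-1})+1, ..., sum(q_1..q_k) to arm k, as in Theorem 1.
    m_star = {}
    agent = 1
    for k, q in enumerate(capacities_left_order, start=1):
        for _ in range(q):
            m_star[agent] = k
            agent += 1
    return m_star

The right-order sets γ*(c_k) = A_k follow analogously by slicing [N]_r into consecutive blocks of size q_{c_k}, and lr(i) is then the largest right-order index inside the block containing agent i.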
4.1.2 Unique Stable Conditions in Many-to-one Matching
Uniqueness consistency (Unqc) leads to a robust stable matching, which is a desirable property in large dynamic markets with constant individual departures [7]. A precondition of Unqc is global unique stability, hence finding uniqueness conditions is essential.
Existing unique stability conditions are well established in the one-to-one setting (see the analysis in Appendix B). In this section, we focus on uniqueness conditions in the many-to-one market, such as SPC [28], Aligned Preference, Serial Dictatorship, Top-top match, and Acyclicity [26, 2, 28] (Definitions 9, 7, 8, 10 in Appendix B.2). Akahoshi [2] proposes a necessary and sufficient condition for uniqueness of stable matching in many-to-one matching where unacceptable agents and arms may exist on both sides; we denote this condition by Acyclicity*. In our setting, both sides are mutually acceptable, and we first prove that Acyclicity* is a necessary and sufficient condition for uniqueness in this setting (Section B.2.4 in Appendix B). We then establish the relationships between our new α̃-condition and the other existing uniqueness conditions, illustrated in Figure 1, with the proofs for this section given in Appendix B.2.
Lemma 1. In a many-to-one matching market M = (K, J, P), both Serial Dictatorship and Aligned Preference produce a unique stable matching, and they are equivalent.

Theorem 2. In a many-to-one matching market M = (K, J, P), our α̃-condition satisfies:

(i) SPC is a sufficient condition for the α̃-condition;

(ii) the α̃-condition is a necessary and sufficient condition for Unqc;

(iii) the α̃-condition is a sufficient condition for Acyclicity*.
4.2 Theoretical Results of Regret
We now provide theoretical results for the MO-UCB-D4 algorithm under our α̃-condition. Recall that G*_j is the set of globally dominated arms for agent j under the stable matching m*. For each arm k ∉ G*_j, we define the blocking agents of arm k with respect to agent j as B_{jk} = {j′ : j′ ≻_k j, k ∉ G*_j}, the set of agents more preferred by arm k than j. The hidden arms for agent j are H_j = {k : k ∉ G*_j} ∩ {k : B_{jk} ≠ ∅}. The reward gap for agent j and arm k is defined as ∆_{jk} = |µ_{j,m*(j)} − µ_{j,k}|, and the minimum reward gap across all arms and agents is ∆ = min_{j∈[N]} min_{k∈[K]} ∆_{j,k}. We assume that the rewards are distinct for each agent, so ∆_{j,k} > 0 for every agent j and arm k.
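These quantities are deterministic functions of the preference profile and m*, so they can be computed directly as a sanity check. A hedged sketch (our own helper, with zero-indexed arms and agents) is:

def blocking_structure(j, mu, arm_prefs, m_star, G_star_j):
    # mu[j][k]: mean reward of agent j on arm k; arm_prefs[k]: agents from most to least preferred;
    # m_star[j]: stable arm of agent j; G_star_j: globally dominated arms of agent j.
    B = {}  # B[k] = B_{jk}: agents preferred over j by arm k, for non-dominated arms
    for k in range(len(arm_prefs)):
        if k in G_star_j:
            continue
        better = arm_prefs[k][:arm_prefs[k].index(j)]
        if better:
            B[k] = set(better)
    H_j = set(B)  # hidden arms: not dominated but with a nonempty blocking set
    gaps = {k: abs(mu[j][m_star[j]] - mu[j][k])
            for k in range(len(arm_prefs)) if k != m_star[j]}  # the gaps Delta_{jk}
    return B, H_j, gaps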
Theorem 3. (Regret upper bound) Let J_max(j) = max{j + 1, {j′ : ∃k ∈ H_j, j′ ∈ B_{jk}}} be the maximum blocking agent for agent j, and let f_α̃(j) = j + lr_max(j) be a fixed factor depending on both the left order and the right order for agent j. Following the MO-UCB-D4 algorithm over horizon T, the expected regret of agent j ∈ [N] under a stable matching satisfying the α̃-condition (Definition 2) is upper bounded by
$$\mathbb{E}[R_j(T)] \le \sum_{k\notin G^*_j\cup m^*(j)} \frac{8\alpha}{\Delta_{jk}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right) + \sum_{k\notin G^*_j}\ \sum_{j'\in B_{jk}:\, k\notin G^*_{j'}} \frac{8\alpha\,\mu_{j,m^*(j)}}{\Delta_{j'k}^2}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right)$$
$$+\, c_j \log_2(T) + O\!\left(\frac{N^2K^2}{\Delta^2} + \Big(\min(1, \theta|H_j|)\, f_{\tilde\alpha}(J_{\max}(j)) + f_{\tilde\alpha}(j) - 1\Big) 2^{i^*} + N^2 K i^*\right),$$
where i* = max{8, i_1, i_2} (so that i* ≥ 8, with i_1 and i_2 defined in equation (3)), and lr_max(j) = max{lr(j′) : 1 ≤ j′ ≤ j} is the maximum right-order mapping over the agents j′ who rank no lower than j.
From Theorem 3, the regret upper bound under the α̃-condition scales as $O\left(\frac{NK\log(T)}{\Delta^2}\right)$; the full proof is deferred to the appendix.
Proof Sketch of Theorem 3. Under the α̃-condition, we only need to discuss the regret with respect to the unique stable result. We construct a good phase (defined in Appendix A.2) and denote by τ_j the time point at which agent j reaches its good phase. After τ_j, agent j can identify its best arm and matches with its stable pair. Thus, from phase τ_j onwards, agent j + 1 will find the set of globally dominated arms G*_{j+1} and will eliminate arm m*(j) if m*(j) causes collisions in the communication block, according to Algorithm 1. Global deletion here follows the left order. Then, when agent j enters the regret minimization block of the next phase, the number of times it plays a sub-optimal arm is small, which leads to a small total number of collisions experienced by agent j + 1. The process of each agent is divided into two stages: before τ_j and after τ_j. After τ_j, according to the causes of regret, the analysis is divided into four blocks: collision, local deletion, communication, and sub-optimal play. Phases before τ_j are bounded by induction. The regret decomposition is bounded as follows.
Lemma 2. (Regret Decomposition) For a stable matching under the α̃-condition, the regret of agent j ∈ [N] under our algorithm can be decomposed and bounded as
$$\mathbb{E}[R_j(T)] \le \underbrace{\mathbb{E}\big[S_{F^{\alpha}_j}\big]}_{\text{regret before phase } F^{\alpha}_j} + \underbrace{\min(\theta|H_j|, 1)\,\mathbb{E}\big[S_{V^{\alpha}_j}\big]}_{\text{local deletion}} + \underbrace{\big(K - 1 + |B_{j,m^*(j)}|\big)\log_2(T) + NK\,\mathbb{E}\big[V^{\alpha}_j\big]}_{\text{communication}}$$
$$+ \underbrace{\sum_{k\notin G^*_j}\ \sum_{j'\in B_{j,k}:\, k\notin G^*_{j'}} \frac{8\alpha\,\mu_{j,m^*(j)}}{\Delta_{j',k}^2}\Big(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\Big)}_{\text{collision}} + \underbrace{\sum_{k\notin G^*_j\cup m^*(j)} \frac{8\alpha}{\Delta_{j,k}}\Big(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\Big)}_{\text{sub-optimal play}} + NK\Big(1 + (\phi(\alpha)+1)\frac{8\alpha}{\Delta^2}\Big),$$
where $F^{\alpha}_j$ and $V^{\alpha}_j$ are the time points at which agent j enters the α̃-Good phase and the α̃-Low Collision phase, respectively (the "good phase" mentioned above), as defined in Appendix A.2.
5 Difficulties and Solutions
While putting forward our α̃-condition in the many-to-one setting, several new problems need to be taken into account.

From the one-to-one setting to the many-to-one setting First, although we assume that each arm's preference is over individuals rather than over combinations of agents, the agents matched to one arm are not independent. In particular, an arm with capacity q cannot simply be replaced by q independent individuals with the same preference, since there would be implicit competition among the different replicates of this arm, and the arm can reject previously accepted agents when it faces a more preferred agent. Secondly, collisions among agents are one of the main causes of regret in the decentralized setting, and capacity hinders the collision-reducing process. In the communication block, when two agents select one arm at the same time, the arm can accept more than one agent, so these two agents cannot distinguish who is more preferred by this arm, while this can be done in one-to-one markets. It is therefore more difficult for each agent to identify the arms' preferences. The mapping lr in [7] is a one-to-one mapping that relates the agent index in the left order to the agent index in the right order and is related to the regret bound (Theorem 3 in [7] and Theorem 3 in our work); it does not carry over to our setting. To give a descriptive range of the matched result of each arm under the α̃-condition, we need to define a new mapping.

To address these problems, we proceed as follows. First, since capacity influences the communication among agents, we add a communication block and introduce an arm set G*_j, which is deleted before each phase to reduce collisions, where G*_j contains the arms that block agent j globally under the stable matching m*. Second, the idea in moving from one-to-one to many-to-one is a transition from individuals to sets. It is natural either to split sets into individuals or to design a bridge between sets and individuals. We construct a new mapping lr (Figure 4 in Appendix A) from agent j in the left order to agents in the right order under the α̃-condition. lr maps each arm k to the least preferred one among its stable matched agents in the right order, thus giving a matching between individuals and characterizing the range of the stable matched agent set (Theorem 1). Besides lr, capacity also influences the regret mainly through the communication block, as mentioned above.
From the α-condition to the α̃-condition To extend the α-condition to the many-to-one setting, one needs to define preferences among sets. However, there may be an exponential number of sets due to the combinatorial structure, and simply constraining preferences over all possible sets would lead to high complexity. Motivated by the α-condition, which characterizes properties of matched pairs in the one-to-one setting, we come up with a tractable constraint by regarding the arm and its least preferred agent in the matched set as the matched pair, and we define preferences according to this grouping. It turns out that we only need to define the preferences of arms over disjoint sets of agents to complete the extension, since the α-condition is defined under the stable matching, and this also fits the regret analysis well. In summary, there might be other possible ways to extend the α-condition, but we present a successful attempt that not only gives a natural extension with similar inclusion relationships but also guarantees a good regret bound.
6 Experiments
In this section, we evaluate our MO-UCB-D4 algorithm (Algorithm 1) in decentralized many-to-one matching markets. For all experiments, the rankings of all agents and arms are sampled uniformly at random. For each agent, we set the mean reward of the least preferred arm to 1/N and that of the most preferred arm to 1, so that the reward gap between adjacently ranked arms is ∆ = 1/N. The reward X_{j,k}(t) when agent j matches with arm k at time t is sampled from Ber(µ_{j,k}). The capacities are set equal, q = N/K. We investigate how the cumulative regret and the cumulative market instability depend on the market size and the number of arms under three different unique stability conditions: Serial Dictatorship, SPC, and the α̃-condition. The cumulative regret is the total mean reward gap between the stable matching result and the simulated result, and the cumulative instability is defined as the number of unstable matchings up to round t. All results are averaged over 10 independent runs, and the error bars are standard deviations divided by $\sqrt{10}$.
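As a reproducibility aid, a minimal sketch of this simulation environment (uniformly random rankings, mean rewards evenly spaced between 1/N and 1, Bernoulli rewards, equal capacities) could look as follows; the exact constants and the random seed are illustrative assumptions rather than the settings used to produce the figures.

import numpy as np

def make_market(N, K, rng):
    q = N // K  # equal capacities q = N/K
    mu = np.empty((N, K))
    for j in range(N):
        ranking = rng.permutation(K)                   # uniformly random agent preference
        mu[j, ranking] = np.linspace(1.0, 1.0 / N, K)  # most preferred arm has mean 1, least 1/N
    arm_prefs = [list(rng.permutation(N)) for _ in range(K)]  # uniformly random arm rankings
    return mu, arm_prefs, [q] * K

def draw_reward(mu, j, k, rng):
    return float(rng.random() < mu[j, k])  # Bernoulli(mu_{j,k}) reward

rng = np.random.default_rng(0)
mu, arm_prefs, capacities = make_market(N=20, K=5, rng=rng)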
Varying the market size To test the effects on the two indicators, cumulative regret and cumulative instability, we first vary N with K fixed, using market sizes of N ∈ {10, 20, 30, 40} agents and K = 5 arms. The number of rounds is set to 100,000. The cumulative regret in Figure 2(a)(c)(e) shows an increasing but converging trend as the number of agents increases under these three conditions. When the number of agents increases, collisions among different agents become more likely, resulting in larger cumulative regret. Similar results for cumulative instability are shown in Figure 2(b)(d)(f): when N is larger, the number of unstable pairs grows. As the number of rounds increases, both indicators first increase and then stabilize. The jump points are caused by the multi-phase structure of the MO-UCB-D4 algorithm.
Varying arm capacity The number of arms K is chosen from K ∈ {2, 5, 10, 20}, with N = 20 and q = N/K. The number of rounds is set to 400,000. As K increases, both the cumulative regret in Figure 3(a)(c)(e) and the cumulative instability in Figure 3(b)(d)(f) increase monotonically. When K increases, the capacity q_k of each arm decreases, so the number of collisions increases, which leads to larger cumulative regret. It also produces more unstable pairs, which require more communication blocks to converge to a stable matching. Under the three conditions, the performance of the algorithm is similar.
7 Conclusion

We are the first to study bandit algorithms for the many-to-one matching market under a unique stable matching, focusing on a decentralized market. A new α̃-condition is proposed to guarantee a unique stable outcome in many-to-one markets; it is more general than existing uniqueness conditions like SPC and Serial Dictatorship and recovers the usual α-condition in the one-to-one setting. We propose the phase-based MO-UCB-D4 algorithm with arm elimination, which attains $O\left(\frac{NK\log(T)}{\Delta^2}\right)$ stable regret under the α̃-condition. By carefully defining a mapping from each arm to the least preferred agent in its stable matched set, we can effectively relate arms and agents individual-to-individual. A series of experiments varying the market size and the arm capacity are conducted, and the results show that our algorithm performs well under Serial Dictatorship, SPC, and the α̃-condition.
References

[1] Azar Abizada. Stability and incentives for college admissions with budget constraints. Theoretical Economics, 11(2):735–756, 2016.

[2] Takashi Akahoshi. Singleton core in many-to-one matching problems. Mathematical Social Sciences, 72:7–13, 2014.

[3] Ahmet Altinok. Dynamic many-to-one matching. Available at SSRN 3526522, 2019.

[4] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.

[5] Orly Avner and Shie Mannor. Concurrent bandits and cognitive radio networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 66–81. Springer, 2014.

[6] Sophie Bade. Random serial dictatorship: the one and only. Mathematics of Operations Research, 45(1):353–368, 2020.

[7] Soumya Basu, Karthik Abinav Sankararaman, and Abishek Sankararaman. Beyond log2(t) regret for decentralized bandits in matching markets. In International Conference on Machine Learning, pages 705–715, 2021.

[8] Anna Bogomolnaia and Hervé Moulin. A new solution to the random assignment problem. Journal of Economic Theory, 100(2):295–328, 2001.

[9] Somouaoga Bonkoungou. Decentralized college admissions under single application. Review of Economic Design, 25(1):65–91, 2021.

[10] Simon Clark. The uniqueness of stable matchings. Contributions in Theoretical Economics, 6(1), 2006.

[11] Jan Eeckhout. On the uniqueness of stable marriage matchings. Economics Letters, 69(1):1–8, 2000.

[12] David Gale and Lloyd S Shapley. College admissions and the stability of marriage. The American Mathematical Monthly, 69(1):9–15, 1962.

[13] Aurélien Garivier, Tor Lattimore, and Emilie Kaufmann. On explore-then-commit strategies. Advances in Neural Information Processing Systems, 29:784–792, 2016.

[14] Virginia Gunn, Bertina Kreshpaj, Nuria Matilla-Santander, Emilia F Vignola, David H Wegman, Christer Hogstedt, Emily Q Ahonen, Theo Bodin, Cecilia Orellana, Sherry Baron, et al. Initiatives addressing precarious employment and its effects on workers' health and well-being: A systematic review. International Journal of Environmental Research and Public Health, 19(4):2232, 2022.

[15] Gregory Z Gutin, Philip R Neary, and Anders Yeo. Unique stable matchings. arXiv preprint arXiv:2106.12977, 2021.

[16] Guillaume Haeringer and Flip Klijn. Constrained school choice. Journal of Economic Theory, 144(5):1921–1947, 2009.

[17] John William Hatfield, Fuhito Kojima, and Scott Duke Kominers. Investment incentives in labor market matching. American Economic Review, 104(5):436–41, 2014.

[18] Ramesh Johari, Vijay Kamble, and Yash Kanoria. Matching while learning. Operations Research, 69(2):655–681, 2021.

[19] Alexander Karpov. A necessary and sufficient condition for uniqueness consistency in the stable marriage matching problem. Economics Letters, 178:63–65, 2019.

[20] Bettina Klaus and Flip Klijn. Local and global consistency properties for student placement. Journal of Mathematical Economics, 49(3):222–229, 2013.

[21] Lydia T Liu, Horia Mania, and Michael Jordan. Competing bandits in matching markets. In International Conference on Artificial Intelligence and Statistics, pages 1618–1628. PMLR, 2020.

[22] Lydia T Liu, Feng Ruan, Horia Mania, and Michael I Jordan. Bandit learning in decentralized matching markets. arXiv preprint arXiv:2012.07348, 2020.

[23] Jinpeng Ma. The singleton core in the college admissions problem and its application to the national resident matching program (NRMP). Games and Economic Behavior, 69(1):150–164, 2010.

[24] Onkar Malgonde, He Zhang, Balaji Padmanabhan, and Moez Limayem. Taming complexity in search matching: Two-sided recommender systems on digital platforms. MIS Quarterly, 44(1), 2020.

[25] Hai Nguyen, Thành Nguyen, and Alexander Teytelboym. Stability in matching markets with complex constraints. Management Science, 67(12):7438–7454, 2021.

[26] Muriel Niederle and Leeat Yariv. Decentralized matching with aligned preferences. Technical report, National Bureau of Economic Research, 2009.

[27] Jaeok Park. Competitive equilibrium and singleton cores in generalized matching problems. International Journal of Game Theory, 46(2):487–509, 2017.

[28] Philip J Reny. A simple sufficient condition for a unique and student-efficient stable matching in the college admissions problem. Economic Theory Bulletin, 9(1):7–9, 2021.

[29] Antonio Romero-Medina and Matteo Triossi. Acyclicity and singleton cores in matching markets. Economics Letters, 118(1):237–239, 2013.

[30] Jonathan Rosenski, Ohad Shamir, and Liran Szlak. Multi-player bandits: a musical chairs approach. In International Conference on Machine Learning, pages 155–163. PMLR, 2016.

[31] Alvin E Roth. On the allocation of residents to rural hospitals: a general property of two-sided matching markets. Econometrica: Journal of the Econometric Society, pages 425–427, 1986.

[32] Alvin E Roth and Marilda Sotomayor. Two-sided matching. Handbook of Game Theory with Economic Applications, 1:485–541, 1992.

[33] Hannu Salonen and Mikko AA Salonen. Mutually best matches. Mathematical Social Sciences, 91:42–50, 2018.

[34] Abishek Sankararaman, Soumya Basu, and Karthik Abinav Sankararaman. Dominate or delete: Decentralized competing bandits in serial dictatorship. In International Conference on Artificial Intelligence and Statistics, pages 1252–1260. PMLR, 2021.

[35] Jay Sethuraman, Chung-Piaw Teo, Liwen Qian, et al. Many-to-one stable matching: Geometry and fairness. Mathematics of Operations Research, 31(3):581–596, 2006.

[36] William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] Please see Abstract and Section 1.
(b) Did you describe the limitations of your work? [Yes] Please see Section C.4.
(c) Did you discuss any potential negative societal impacts of your work? [N/A] This work mainly focuses on online learning theory, which does not have any potential negative societal impacts.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] Please see Section 2.
(b) Did you include complete proofs of all theoretical results? [Yes] Please see Appendix.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Please see supplemental material.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Please see Section 6 and supplemental material.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Please see Section 6.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

1. What is the focus and contribution of the paper regarding bandit learning in many-to-one matching markets?
2. What are the strengths and weaknesses of the proposed UCB-based arm elimination algorithm?
3. Do you have any concerns or questions regarding the presentation and organization of the paper's content?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can the authors provide more information on the optimality and lower bounds of the regret bound, as well as potential improvements in future work?
6. Are there any limitations or shortcomings in the paper that the authors should address?

Summary Of The Paper
This paper studies the problem of bandit learning in many-to-one matching markets. They generalize the α-condition in a marriage problem to a proposed α̃-condition that is a matching guarantee for uniqueness consistency in many-to-one matching markets. Under the proposed condition the authors present a UCB-based arm elimination algorithm that obtains logarithmic regret. The authors verify their proposed algorithm by simulations under several different optimality conditions.
Strengths And Weaknesses
Strengths:
The paper presents a novel algorithm with a competitive regret bound.
The new α̃-condition proposed appears to generalize the α-condition from the stable marriage problem, and the discussion for the condition puts it in comparison with prior work.
Weaknesses:
The paper has sections that are not very well-written. While the authors do a good job of explaining their new stability condition in Section 4.1, the discussion with prior conditions (deferred to 4.3) should essentially be put together. In Section 5, the authors present the technical challenges in extending bandit matching to the many-to-one setting, however it is unclear / difficult to follow how each of the steps translate into the final regret bound which is presented earlier in Section 4.2.
Regret Bound: It is not clear which contributions are novel and which proof techniques are reused from prior work. All the proofs are deferred to the appendix, and the proof-sketch provided in Section 4.2 is not very informative.
There are no discussions on which parts of the proof reuse prior work and which parts are novel.
There are no discussions on optimality and lower bounds beyond the mentioning of the logarithmic factor.
There are no discussions on which aspects of the algorithm can be improved in future work, and what the authors conjecture to be the optimal rate.
Experiments: While the authors indeed experiment with several different criteria and different number of agents, they do not have any baseline comparisons, making it difficult to compare their experimental results in a relative sense. It would be worthwhile if the authors could come up with some meaningful albeit sub-optimal baselines to highlight the importance of each component in their algorithm design.
Questions
Can the authors address some of the queries from the previous section? Specifically, can they contrast their work from prior work in terms of (i) theory and regret bound, (ii) novel components in the regret analysis, (iii) specific shortcomings in prior work, (iv) baselines for experiments, (v) optimality of regret bound?
Limitations
The authors do not discuss limitations. |
employers have numerous similar short-term tasks or internships to be recruited. Workers can only36 choose one task according to the company’s needs at a time while one company can accept more37 than one employee. Each company makes a fixed ranking for candidates according to its own38 requirements but workers have no knowledge of companies’ preferences. The reward for workers39 is a comprehensive consideration of salary and job environment. Since tasks are short-term, each40 candidate can try many times in different companies to choose the most suitable job. We abstract41 companies as arms and workers as agents. Each arm has a capacity q which is the maximum number42 of agents this arm can accommodate. When an arm faces multiple choices, it accepts its most q43 preferred agents. Agents thus compete for arms and may receive zero reward if losing the conflict. It44 is worth mentioning that arms with capacity q in the many-to-one matching can not just be replaced45 by q independent individuals with the same preference since there would be implicit competition46 among different replicates of this arm, not equal treatment. In addition, when multiple agents select47 one arm at a time, there may be no collision, which will hinder the communication among different48 agents under the decentralized assumption. They cannot distinguish who is more preferred by this49 arm in one round as it can accept more than one agent while this can be done in one-to-one case.50 Communication here lets each agent learn more about the preferences of arms and other agents, so as51 to formulate better policies to reduce collisions and learn fast about their stable results.52
This work focuses on a many-to-one market under uniqueness conditions. Previous work [10, 15]53 emphasize the importance of constructing a unique stable matching for the equilibrium of matching54 problems and some existing uniqueness conditions are studied in many-to-one matching, such as55 Sequential Preference Condition (SPC) and Acyclicity [26, 2]. Our work is motivated by [7], but the56 unique one-to-one mapping between arms and agents in their study which gives a surrogate threshold57 for arm elimination does not work in the many-to-one setting. And the uniqueness conditions in58 many-to-one matching are not well-studied, which also brings a challenge to identify and leverage59 the relationship between the resulting stable matching and preferences of two sides in the design60 of bandit algorithms. We propose an α̃-condition that can guarantee a unique stable matching and61 recover α-condition [19] if reduced to the one-to-one setting. We establish the relationships between62 our new α̃-condition and existing uniqueness conditions in many-to-one setting.63
In this paper, we study the bandit algorithm for a decentralized many-to-one matching market64 with uniqueness conditions. Under our newly introduced α̃-condition, we design an MO-UCB-D465 algorithm with arm elimination and the regret can be upper bounded by O ( NK log(T )
∆2
) , where N66
is the number of agents, K is the number of arms, and ∆ is the minimum reward gap. Finally,67 we conduct a series of experiments to simulate our algorithm under various conditions of Serial68 dictatorship, SPC and α̃-condition to study the stability and regret of the algorithm.69
Related Work The study of matching markets has a long history in economics and operation70 research [8, 6, 32] with real applications like school enrollment, labor employment, hospital resource71 allocation, and so on [1, 23, 31, 17]. A salient feature of market matching is making decisions for72 competing players on both sides [36, 12]. MAB is an important tool to study matching problems under73 uncertainty to obtain a maximum reward, and upper confidence bound algorithm (UCB) [4] is a typical74 algorithm, which sets a confidence interval to represent uncertainty. Matching market with MAB is75 studied in both centralized and decentralized setting [21, 22]. Following these, Abishek Sankararaman76 et al. [34] propose a phased UCB algorithm under a uniqueness condition, Serial Dictatorship, to77 manage collisions. They solve the problem of the decentralized market without knowing arm-gaps78 or time horizon, and reduce the probability of linear regret through non-monotonic arm elimination.79 The introduction of the uniqueness condition plays an important role in the equilibrium of matching80 results [15, 7]. Under a stronger and robust condition, Uniqueness Consistency [19], Soumya Basu81 et.al [7] apply MAB to online matching and obtain robust results that the subset of stable matchings82 being separated from the system does not affect other stable matchings.83
We discuss many-to-one problems such as online short-term employment and MOOC [14, 24, 18] as84 the one-to-one setting has limitations in practice. Somouaoga Bonkoungo [9] runs a student-proposing85 deferred acceptance algorithm (DA) [12] to study decentralized college admission. Ahmet Altinok86 [3] considers dynamic matching in many-to-one that can be solved as if it is static many-to-one or87 dynamic one-to-one under certain assumptions. As the existence and uniqueness of competitive88 equilibrium and core are important to allocations, the unique stable results need to be considered [27].89 Similar to conditions for unique stable matching in one-to-one, some uniqueness conditions of stable90 results in the many-to-one setting also are studied [16, 28, 15, 2, 27].91
2 Setting92
This paper considers a many-to-one matching marketM = (K,J ,P), where K = [K],J = [N ]93 are a finite arm set and a finite agent set, respectively. And each arm k has a capacity qk ≥ 1. To94 guarantee that no agents will be unmatched, we focus on the market with N ≤ ∑K i=1 qi. P is the95
fixed preference order of agents and arms, which is ranked by the mean reward. We assume that arm96 preferences for agents are unknown and needed to be learned. If agent j prefers arm k over k′, which97 also means that µj,k > µj,k′ , we denote by k ≻j k′. And the preference is strict that µj,k ̸= µj,k′ if98 k ̸= k′. Similarly, each arm k has a fixed and known preference ≻k over all agents, and specially,99 j ≻k j′ means that arm k prefers agent j over j′. Throughout, we focus on the market where all100 agent-arm pairs are mutually acceptable, that is, j ≻k ∅ and k ≻j ∅ for all k ∈ [K] and j ∈ [N ].101 Let mapping m be the matching result. mt(j) is the matched arm for agent j at time t, and γt(k) is102 the agents set matched with arm k1. Every time agent j selects an arm It(j), and we use Mt(j) to103 denote whether j is successfully matched with its selected arm. Mt(j) = 1 if agent j is matched with104 It(j), and Mt(j) = 0, otherwise. If multiple agents select arm k at the same time, only top qk agents105 can successfully match. The agent j matched with arm k can observe the reward Xj,mt(j)(t), where106 the random reward Xj,k(t) ∈ [0, 1] is independently drawn from a fixed distribution with mean µj,k.107 While the unmatched ones have collisions and receive zero reward. Generally, the reward obtained by108 agent j is Xj,It(j)(t)Mt(j).109
An agent j and an arm k form a blocking pair for a matching m if they are not matched but prefer110 each other over their assignments, i.e. k ≻j m(j) and ∃j′ ∈ γ(k), j ≻k j′. We say a matching111 satisfies individually rationality (IR), if aj ≻pi ∅ and pi ≻aj ∅ for all i ∈ [N ] and j ∈ [K], that is,112 every worker prefers to find a job rather than do nothing, and every company also wants to recruit113 workers rather than not recruit anyone. Under the IR condition, a matching in the many-to-one setting114 is stable if there does not exist a blocking pair [33, 35].115
This paper considers the matching markets under the uniqueness condition. Thus the overall goal is116 to find the unique stable matching between the agent side and arm side through iterations. Let m∗(j)117 be the stable matched arm for agent j under the stable matching m∗. The reward obtained by agent j118 is compared against the reward received by matching with m∗(j) at each time. We aim to minimize119 the expected stable regret for agent j over time horizon T , which is defined as120
Rj(T ) = Tµj,m∗(j) − E
[ T∑
t=1
Mt(j)Xj,It(j)(t)
] .
3 Algorithm121
In this section, we introduce our MO-UCB-D4 Algorithm (Many-to-one UCB with Decentralized122 Dominated arms Deletion and Local Deletion Algorithm) (Algorithm 1) for the decentralized many-123 to-one market, where there is no platform to arrange actions for agents, which leads to conflicts124 among agents. The MO-UCB-D4 algorithm for each agent j first takes agent set J and arm set K as125 input and chooses a parameter θ ∈ (0, 1/K) (discussed in Section C). It sets multiple phases, and126 each phase i mainly includes regret minimization block (line 6 - 12) and communication block (line127 13 - 16) with duration 2i−1, i = 1, 2, · · · .128 For each agent j in phase i, the algorithm adds arm deletion to reduce potential conflicts, which129 mainly contains global deletion and local deletion. The former eliminates the arms most preferred130 by agents who rank higher than agent j and obtain active set Chj [i] (line 4), and the latter deletes131 the arms that still have many conflicts with agent j after global deletion (line 6). We set a collision132 counter Cj,k[i] to record the number of collisions for agent j pulling arm k.133
In regret minimization block of phase i, we use Lj [i] = {k : Cj,k[i] ≥ ⌈θ2i⌉} to represent the134 arms that collide more times than a threshold ⌈θ2i⌉ when matching with agent j. Arms in Lj [i] are135 first locally deleted to reduce potential collisions for agent j (line 6). After that, agent j selects an136 optimal action It(j) from remaining arms in Chj [i]\Lj [i] in phase i according to UCB index, which is137 computed by µ̂j,k(t−1)+ √ 2α log(t) Nj,k(t−1) (line 7), where Nj,k(t−1) is the number that agent j and arm138
1The mapping m is not reversible as it is not a injective, thus we do not use m−1t (k).
Algorithm 1 MO-UCB-D4 algorithm (for agent j) Input:
θ ∈ (0, 1/K), α > 1. 1: Set global dominated set Gj [0] = ϕ 2: for phase i = 1, 2, ... do 3: Reset the collision set Cj,k[i] = 0, ∀k ∈ [K]; 4: Reset active arms set Chj [i] = [K]\Gj [i− 1]; 5: if t < 2i +NK(i− 1) then 6: Local deletion Lj [i] = {k : Cjk[i] ≥ ⌈θ2i⌉}; 7: Play arm It(j) ∈ argmax
k∈Chj [i]\Lj [i]
( µ̂j,k(t− 1) + √ 2α log(t) Nj,k(t−1) ) ;
8: if k = It(j) is successfully matched with agent j, i.e. mt(j) = k then 9: Update estimate µ̂j,k(t) and matching count Nj,k(t);
10: else 11: Cj,k[i] = Cj,k[i] + 1; 12: end if 13: else if t = 2i +NK(i− 1) then 14: Oj [i]← most matched arm in phase i; 15: Gj [i]← COMMUNICATION(i,Oj [i]); 16: end if 17: end for
k have been matched at time t− 1. If the selected arm is successfully matched with agent j, then the139 algorithm updates estimated reward µ̂j,k(t) = 1Nj,k(t) ∑t s=1 1{Is(j) = k and Ms(j) = 1} Xj,k(t)140 and Nj,k(t) (line 9). Otherwise, the collision happens (line 11) and j receives zero reward. The141 regret minimization block identifies the most played arm Oj [i] for agent j in each phase i, which is142 estimated as the best arm for j, thus making optimal policy to minimize expected regret.143
Algorithm 2 COMMUNICATION Input:
Phase number i, and most played arms Oj [i] for agent j, ∀j ∈ [N ] . 1: Set C = ∅; 2: for t = 1, 2, · · · , NK − 1 do 3: if K(j − 1) ≤ t ≤ Kj − 1 then 4: Agent j plays arm It(j) = (t mod K) + 1; 5: if Collision Occurs then 6: C = C ∪ {It(j)}; 7: end if 8: else 9: Play arm It(j) = Oj [i];
10: end if 11: end for 12: RETURN C;
In the communication block (Algorithm 2), there are N sub-blocks, each with duration K. In the144 ℓ− th sub-block, only agent ℓ pulls arm 1, arm 2, · · · , arm K in round-robin while the other agents145 select their most preferred arms estimated as the most played ones (line 4). This block aims to detect146 globally dominated arms for agent j: Gj [i] ⊂ {Oj′ [i] : j′ ≻Oj′ [i] j}. Under stable matching m
∗, the147 globally dominated arms set for agent j is denoted as G∗j . After the communication block in phase148 i, each agent j updates its active arms set Chj [i+ 1] for phase i+ 1, by globally deleting arms set149 Gj [i], and enters into the next phase (line 4 in Algorithm 1).150
Hence, multi-phases setting can guarantee that the active set in different phases has no inclusion151 relationship so that if an agent deletes an arm in a certain phase, this arm can still be selected in the152 later rounds. This ensures that each agent will not permanently eliminate its stable matched arm, and153 when the agent mistakenly deletes an arm, it will not lead to linear regret.154
4 Results155
4.1 Uniqueness Conditions156
4.1.1 α̃-condition157
Constructing a unique stable matching plays an important role in market equilibrium and fairness158 [10, 15]. With uniqueness, there would be no dispute about adopting stable matching preferred159 by which side, thus it is more fair. When the preferences of agents and arms are given by some160 utility functions instead of random preferences, like the payments for workers in the labor markets,161 the stable matching is usually unique. Thus the assumption of the unique stable matching is quite162 common in real applications. In this section, we propose a new uniqueness condition, α̃-condition.163 First, we introduce uniqueness consistency (Unqc) [19], which guarantees robustness and uniqueness164 of markets.165 Definition 1. A preference profile satisfies uniqueness consistency if and only if166
(i) there exists a unique stable matching m∗;167
(ii) for any subset of arms or agents, the restriction of the preference profile on this subset with their168 stable-matched pair has a unique stable matching.169
It guarantees that even if an arbitrary subset of agents are deleted out of the system with their170 respective stable matched arms, there still exists a unique stable matching among the remaining171 agents and arms. This condition allows any algorithm to identify at least one stable pair in a unique172 stable matching system and guides the system to a global unique stable matching in an iterative173 manner. To obtain consistent stable results in the many-to-one market, we propose a new α̃-condition,174 which is a sufficient and necessary condition for Unqc (proved in Appendix B).175
We consider a finite set of arms [K] = {1, 2, · · · , K} and a finite set of agents [N] = {1, 2, · · · , N} with preference profile P. Assume that [N]_r = {A_1, A_2, · · · , A_N} is a permutation of {1, 2, · · · , N} and [K]_r = {c_1, c_2, · · · , c_K} is a permutation of {1, 2, · · · , K}. Denote [N], [K] as the left order and [N]_r, [K]_r as the right order. The k-th arm in the right order set [K]_r has the index c_k in the left order set [K], and the j-th agent in the right order set [N]_r has the index A_j in the left order set [N]. Considering arm capacity, we denote γ∗(c_k) (right order) as the stable matched agent set for arm c_k.
Definition 2. A many-to-one matching market satisfies the α̃-condition if,
(i) The left order of agents and arms satisfies
∀j ∈ [N], ∀k ∈ [K] with k > j: µ_{j,m∗(j)} > µ_{j,k},
where m∗(j) is agent j's stable matched arm;
(ii) The right order of agents and arms satisfies
∀k < k′ ≤ K, c_k ∈ [K]_r, A_{k′} ⊂ [N]_r: γ∗(c_k) ≻_{c_k} A_{∑_{i=1}^{k′−1} q_{c_i}+1},
where "the set γ∗(c_k) is more preferred than A_{∑_{i=1}^{k′−1} q_{c_i}+1}" means that the least preferred agent in γ∗(c_k) for c_k is better than A_{∑_{i=1}^{k′−1} q_{c_i}+1} for c_k.
Under our α̃-condition, the left order and the right order satisfy the following rule. The left order gives rankings according to agents' preferences. The first agent in the left order set [N] prefers arm 1 in [K] most and has it as the stable matched arm. Similar properties hold for agents 2 to q_1, since arm 1 has capacity q_1. Then the (q_1 + 1)-th agent in the left order set [N] has arm 2 in [K] as her stable matched arm and prefers arm 2 most except for arm 1. The remaining agents follow similarly. Likewise, the right order gives rankings according to arms' preferences. The first arm in the right order set [K]_r, c_1, most prefers the first q_{c_1} agents in the right order set [N]_r and takes them as its stable matched agents. The remaining arms follow similarly.
This condition is more general than existing uniqueness conditions like SPC [28] and recovers the known α-condition in the one-to-one matching market [19]. The relationship between the existing uniqueness conditions and our proposed condition will be analyzed in detail in Section 4.1.2.
The main idea in moving from the one-to-one to the many-to-one analysis is to replace individuals with sets. In general, under the α̃-condition, the left order satisfies that when arm 1 to arm k − 1 are removed, agents (∑_{i=1}^{k−1} q_i + 1) to (∑_{i=1}^{k} q_i) prefer arm k most; and the right order means that when agents A_1 to A_{∑_{i=1}^{k−1} q_i} are removed, arm k prefers the agents A_k = {A_{∑_{i=1}^{k−1} q_{c_i}+1}, A_{∑_{i=1}^{k−1} q_{c_i}+2}, · · · , A_{∑_{i=1}^{k} q_{c_i}}}, where A_k is the set of the q_k agents most preferred by arm k among those who have not been matched by arms 1, 2, · · · , k − 1. The next theorem gives a summary.
Theorem 1. If a market M = (K, J, P) satisfies the α̃-condition, then m∗(∑_{i=1}^{j−1} q_i + 1) = m∗(∑_{i=1}^{j−1} q_i + 2) = · · · = m∗(∑_{i=1}^{j} q_i) = j (the left order), and γ∗(c_k) = A_k and m∗(A_j) = c_j (the right order) under the stable matching.
Under the α̃-condition, the stable matched arm may not be the most preferred one for each agent j, j ∈ [N]. Thus (i) m∗(j) need not be dominated only by agents 1 to j − 1, i.e. there may exist j′ > j such that j′ ≻_{m∗(j)} j; and (ii) the left order may not be identical to the right order, so we define a mapping lr to match the index of an agent in the left order with its index in the right order, i.e. A_{lr(j)} = j. From Theorem 1, the stable matched set for arm k is its first q_k preferred agents, γ∗(c_k) = A_k. We define lr as lr(i) = max{j : A_j ∈ γ∗(m∗(i)), j ∈ [N]}, that is, in the right order, the mapping for arm k ∈ [K] gives the least preferred one among its q_k most preferred agents. Note that this mapping is not injective, i.e. there may exist j ̸= j′ such that A_{lr(j)} = A_{lr(j′)}. An intuitive representation can be seen in Figure 4 in Appendix A.1.
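The mapping lr can be computed directly once the stable matching and the right order are known; a minimal sketch, with hypothetical inputs:

def compute_lr(m_star, gamma_star, right_order_agents):
    """lr(i) = max{ j : A_j in gamma*(m*(i)) } (sketch).

    m_star[i]           : stable matched arm of agent i (left order index)
    gamma_star[k]       : set of agents stably matched with arm k
    right_order_agents  : list [A_1, A_2, ...] giving the right order of the agents
    """
    # pos[agent] = j such that A_j == agent (1-indexed position in the right order)
    pos = {a: j + 1 for j, a in enumerate(right_order_agents)}
    return {i: max(pos[a] for a in gamma_star[m_star[i]]) for i in m_star}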
4.1.2 Unique Stable Conditions in Many-to-one Matching
Uniqueness consistency (Unqc) makes the stable matching robust, which is a desirable property in large dynamic markets with constant individual departures [7]. A precondition of Unqc is to ensure global unique stability, hence finding uniqueness conditions is essential.
The existing unique stability conditions are well established in the one-to-one setting (analysis can be found in Appendix B), and in this section we focus on uniqueness conditions in the many-to-one market, such as SPC [28], Aligned Preference, Serial Dictatorship, Top-top match, and Acyclicity [26, 2, 28] (Definitions 9, 7, 8, 10 in Appendix B.2). Takashi Akahoshi [2] proposes a necessary and sufficient condition for uniqueness of stable matching in many-to-one matching where unacceptable agents and arms may exist on both sides. We denote their condition as Acyclicity∗. Under our setting, both sides are acceptable, and we first prove that Acyclicity∗ is a necessary and sufficient condition for uniqueness in this setting (see Section B.2.4 in Appendix B). We then give the relationships between our new α̃-condition and other existing uniqueness conditions, intuitively expressed in Figure 1; the proofs for this section are given in Appendix B.2.
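For small markets, whether the stable matching is unique can be verified by brute force, which is a convenient sanity check when experimenting with the conditions above. The enumeration below is only a sketch (exponential in the number of agents) and uses our own data layout.

from itertools import product

def stable_matchings(agent_pref, arm_pref_rank, capacity, N, K):
    """Enumerate all stable matchings of a small many-to-one market (sketch).

    agent_pref[j][k]     : mean reward mu_{j,k} (higher = more preferred)
    arm_pref_rank[k][j]  : rank of agent j in arm k's list (0 = most preferred)
    capacity[k]          : q_k
    """
    stable = []
    for assign in product(range(K), repeat=N):          # assign[j] = arm of agent j
        loads = [sum(1 for j in range(N) if assign[j] == k) for k in range(K)]
        if any(loads[k] > capacity[k] for k in range(K)):
            continue
        # A pair (j, k) blocks if j prefers k to its current arm and k either has a
        # free seat or prefers j to its least preferred current partner.
        def blocks(j, k):
            if agent_pref[j][k] <= agent_pref[j][assign[j]]:
                return False
            matched = [j2 for j2 in range(N) if assign[j2] == k]
            if len(matched) < capacity[k]:
                return True
            worst = max(matched, key=lambda j2: arm_pref_rank[k][j2])
            return arm_pref_rank[k][j] < arm_pref_rank[k][worst]
        if not any(blocks(j, k) for j in range(N) for k in range(K) if k != assign[j]):
            stable.append(assign)
    return stable

# The market has a unique stable matching iff len(stable_matchings(...)) == 1.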
Lemma 1. In a many-to-one matching market M = (K, J, P), both Serial Dictatorship and Aligned Preference produce a unique stable matching, and they are equivalent.
Theorem 2. In a many-to-one matching market M = (K, J, P), our α̃-condition satisfies:
(i) SPC is a sufficient condition for the α̃-condition;
(ii) the α̃-condition is a necessary and sufficient condition for Unqc;
(iii) the α̃-condition is a sufficient condition for Acyclicity∗.
4.2 Theoretical Results of Regret
We now provide theoretical results for the MO-UCB-D4 algorithm under our α̃-condition. Recall that G∗_j is the set of globally dominated arms for agent j under the stable matching m∗. For each arm k /∈ G∗_j, we define the set of blocking agents for arm k and agent j as B_{jk} = {j′ : j′ ≻_k j, k /∈ G∗_j}, which contains the agents more preferred by arm k than j. The set of hidden arms for agent j is H_j = {k : k /∈ G∗_j} ∩ {k : B_{jk} ̸= ∅}. The reward gap for agent j and arm k is defined as ∆_{jk} = |µ_{j,m∗(j)} − µ_{j,k}|, and the minimum reward gap across all arms and agents is ∆ = min_{j∈[N]} min_{k∈[K]} ∆_{j,k}. We assume that the mean rewards are distinct for each agent, so ∆_{j,k} > 0 for every agent j and every arm k ̸= m∗(j).
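These instance-dependent quantities can be computed directly from the mean rewards and the stable matching. The helper below is a sketch; it takes the globally dominated sets G∗_j as an input rather than re-deriving them, and excludes k = m∗(j) (whose gap is zero) when taking the minimum gap.

def instance_quantities(mu, m_star, arm_pref_rank, G_star, N, K):
    """Compute Delta_{jk}, Delta, B_{jk} and H_j from the preferences (sketch).

    mu[j][k]            : mean reward of agent j for arm k
    m_star[j]           : stable matched arm of agent j
    arm_pref_rank[k][j] : rank of agent j in arm k's list (0 = most preferred)
    G_star[j]           : globally dominated arms of agent j (assumed given)
    """
    delta = {(j, k): abs(mu[j][m_star[j]] - mu[j][k]) for j in range(N) for k in range(K)}
    # Minimum gap, excluding k = m*(j) whose gap is zero by definition.
    delta_min = min(d for (j, k), d in delta.items() if k != m_star[j])
    B = {(j, k): {jp for jp in range(N) if arm_pref_rank[k][jp] < arm_pref_rank[k][j]}
         for j in range(N) for k in range(K) if k not in G_star[j]}
    H = {j: {k for k in range(K) if k not in G_star[j] and B.get((j, k))} for j in range(N)}
    return delta, delta_min, B, H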
Theorem 3. (Regret upper bound) Let J_max(j) = max{j + 1, {j′ : ∃k ∈ H_j, j′ ∈ B_{jk}}} be the maximum blocking agent for agent j, and let f_α̃(j) = j + lr_max(j) be a fixed factor that depends on both the left order and the right order for agent j. Following the MO-UCB-D4 algorithm with horizon T, the expected regret of a stable matching under the α̃-condition (Definition 2) for agent j ∈ [N] is upper bounded by
\[
\mathbb{E}[R_j(T)] \le \sum_{k \notin G^*_j \cup \{m^*(j)\}} \frac{8\alpha}{\Delta_{jk}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right) + \sum_{k \notin G^*_j}\;\sum_{j' \in B_{jk}:\, k \notin G^*_{j'}} \frac{8\alpha\,\mu_{j,m^*(j)}}{\Delta_{j'k}^{2}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right)
\]
\[
+\; c_j \log_2(T) + O\!\left(\frac{N^2K^2}{\Delta^2} + \left(\min(1,\theta|H_j|)\, f_{\tilde{\alpha}}(J_{\max}(j)) + f_{\tilde{\alpha}}(j) - 1\right)2^{i^*} + N^2 K i^*\right),
\]
where i∗ = max{8, i_1, i_2} (so i∗ ≥ 8, and i_1, i_2 are defined in equation (3)), and lr_max(j) = max{lr(j′) : 1 ≤ j′ ≤ j} is the maximum right order mapping over agents j′ who rank no lower than j.
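As a sanity check on the scaling, the two logarithmic terms of the bound can be evaluated numerically for a given instance; the helper below assumes the gaps, blocking sets and dominated sets have already been computed (e.g., as in the previous sketch).

import math

def log_terms_of_bound(j, T, mu, m_star, delta, G_star, B, K, alpha=1.1):
    """Evaluate the two log(T) terms of the Theorem 3 bound for agent j (sketch)."""
    log_factor = math.log(T) + math.sqrt(math.pi / alpha * math.log(T))
    suboptimal = sum(8 * alpha / delta[(j, k)]
                     for k in range(K) if k not in G_star[j] and k != m_star[j])
    collision = sum(8 * alpha * mu[j][m_star[j]] / delta[(jp, k)] ** 2
                    for k in range(K) if k not in G_star[j]
                    for jp in B.get((j, k), set()) if k not in G_star[jp])
    return (suboptimal + collision) * log_factor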
From Theorem 3, the scale of the regret upper bound under the α̃-condition is O(NK log(T)/∆²); the proof is sketched below and given in full in the Appendix.
Proof Sketch of Theorem 3. Under the α̃-condition, we only need to discuss the regret of the unique stable matching. We construct a good phase (in Appendix A.2) and denote the time point at which agent j reaches its good phase by τ_j. After τ_j, agent j can identify its best arm and matches with its stable pair. Thus, from phase τ_j onwards, agent j + 1 will find the set of globally dominated arms G∗_{j+1} and will eliminate arm m∗(j) if m∗(j) brings collisions in the communication block, according to Algorithm 1. Global deletion here follows the left order. Then, when agent j enters the regret minimization block in the next phase, the number of times it plays a sub-optimal arm is small, which leads to a small total number of collisions experienced by agent j + 1. The process of each agent is thus divided into two stages: before τ_j and after τ_j. After τ_j, according to the causes of regret, the regret is divided into four blocks: collision, local deletion, communication, and sub-optimal play. Phases before τ_j can be bounded by induction. The regret decomposition is bounded as follows.
Lemma 2. (Regret Decomposition) For a stable matching under the α̃-condition, the regret of agent j ∈ [N] under our algorithm can be decomposed and upper bounded as:
\[
\mathbb{E}[R_j(T)] \le \underbrace{\mathbb{E}\!\left[S_{F^{\alpha}_j}\right]}_{\text{Regret before phase } F^{\alpha}_j} + \underbrace{\min(\theta|H_j|, 1)\,\mathbb{E}\!\left[S_{V^{\alpha}_j}\right]}_{\text{Local deletion}} + \underbrace{\left(K-1+|B_{j,m^*(j)}|\right)\log_2(T) + NK\,\mathbb{E}\!\left[V^{\alpha}_j\right]}_{\text{Communication}}
\]
\[
+\; \underbrace{\sum_{k \notin G^*_j}\;\sum_{j' \in B_{j,k}:\, k \notin G^*_{j'}} \frac{8\alpha\,\mu_{j,m^*(j)}}{\Delta_{j',k}^{2}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right)}_{\text{Collision}} + \underbrace{\sum_{k \notin G^*_j \cup \{m^*(j)\}} \frac{8\alpha}{\Delta_{j,k}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right)}_{\text{Sub-optimal play}} + NK\left(1 + (\phi(\alpha)+1)\frac{8\alpha}{\Delta^{2}}\right),
\]
where F^α_j and V^α_j, the time points at which agent j enters the α̃-Good phase and the α̃-Low Collision phase respectively (referred to as the "good phase" above), are defined in Appendix A.2.
5 Difficulties and Solutions
While putting forward our α̃-condition in the many-to-one setting, several new problems need to be taken into account.
From one-to-one setting to many-to-one setting First, although we assume that arm preferences are over individuals rather than combinations of agents, the agents matched by one arm are not independent. Specifically, an arm with capacity q cannot simply be replaced by q independent individuals with the same preference, since there would be implicit competition among different replicates of this arm, and it can reject previously accepted agents when it faces a more preferred agent. Secondly, collisions among agents are one of the main causes of regret in the decentralized setting, while capacity hinders the collision-reducing process. In the communication block, when two agents select one arm at a time, these two cannot distinguish who is more preferred by this arm because the arm can accept more than one agent, whereas this can be done in one-to-one markets. Thus it is more difficult to identify arm preferences for each agent. The lr in [7] is a one-to-one mapping that matches the agent index in the left order with the agent index in the right order, which is related to the regret bound (Theorem 3 in [7] and Theorem 3 in our work); however, it does not hold in our setting. To give a descriptive range of the matched result for each arm under the α̃-condition, we need to define a new mapping.
To solve these problems, we proceed as follows. First, since capacity influences the communication among agents, we add a communication block and introduce an arm set G∗_j, which is deleted before each phase to reduce collisions, where G∗_j contains the arms that block agent j globally under the stable matching m∗. Second, the idea from one-to-one to many-to-one is a transition from individuals to sets. It is natural to split sets into individuals or to design a bridge that corresponds sets to individuals. We construct a new mapping lr (Figure 4 in Appendix A) from agent j in the left order to agents in the right order under the α̃-condition. lr maps each arm k to the least preferred one of its stable matched agents in the right order, thus giving a matching between individuals and constructing the range of the stable matched agent set (Theorem 1). Besides lr, capacity also influences regret mainly through the communication block, as mentioned in the first paragraph.
From α-condition to α̃-condition To extend the α-condition to the many-to-one setting, one needs to define preferences among sets. However, there might be an exponential number of sets due to the combinatorial structure, and simply constraining preferences over all possible sets would lead to high complexity. Motivated by the α-condition, which characterizes properties of matched pairs in the one-to-one setting, we come up with a possible constraint by regarding the arm and its least preferred agent in the matched set as the matched pair and defining preferences according to this grouping. It turns out that we only need to define the preferences of arms over disjoint sets of agents to complete the extension, as the α-condition is defined under the stable matching, which also fits the regret analysis well. In summary, there might be other possible ways to extend the α-condition, but we present a successful trial that not only gives a good extension with similar inclusion relationships but also guarantees a good regret bound.
6 Experiments
In this section, we evaluate our MO-UCB-D4 algorithm (Algorithm 1) experimentally in decentralized many-to-one matching markets. For all experiments, the rankings of all agents and arms are sampled uniformly at random. We set the reward value of the least preferred arm to 1/N and that of the most preferred one to 1 for each agent, so the reward gap between any adjacently ranked arms is ∆ = 1/N. The reward X_{j,k}(t) when agent j matches with arm k at time t is sampled from Ber(µ_{j,k}). The capacity is equally set as q = N/K. We investigate how the cumulative regret and the cumulative market instability depend on the size of the market and the number of arms under three different unique stability conditions: Serial Dictatorship, SPC, and the α̃-condition. The cumulative regret is the total mean reward gap between the stable matching result and the simulated result, and the cumulative instability is the number of unstable matchings up to round t. All results are averaged over 10 independent runs, hence the error bars are calculated as standard deviations divided by √10.
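The environment described above can be generated in a few lines. The sketch below follows the stated construction (uniform random rankings, Bernoulli rewards, equal capacities q = N/K); the exact reward spacing when K < N is our reading of the text, since evenly spaced means with gap 1/N reach 1/N at the bottom only when K = N.

import numpy as np

def make_market(N, K, seed=0):
    """Sample a random many-to-one market instance as in Section 6 (sketch)."""
    rng = np.random.default_rng(seed)
    # Each agent ranks the K arms uniformly at random; means are spaced by 1/N
    # from the top, so the most preferred arm has mean reward 1.
    mu = np.empty((N, K))
    for j in range(N):
        ranking = rng.permutation(K)          # ranking[r] = arm with rank r for agent j
        for r, k in enumerate(ranking):
            mu[j, k] = 1.0 - r / N
    # Each arm ranks the N agents uniformly at random; equal capacities q = N / K.
    arm_pref = [list(rng.permutation(N)) for _ in range(K)]
    capacity = [N // K] * K
    return mu, arm_pref, capacity

def sample_reward(mu, j, k, rng):
    """Bernoulli reward with mean mu[j, k]."""
    return rng.binomial(1, mu[j, k])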
Varying the market size To test the effects on the two indicators, cumulative regret and cumulative instability, we first vary N with fixed K, using market sizes of N ∈ {10, 20, 30, 40} agents and K = 5 arms. The number of rounds is set to 100,000. The cumulative regret in Figure 2(a)(c)(e) shows an increasing trend with convergence as the number of agents increases under all three conditions. When the number of agents increases, there is a higher probability of collisions among different agents, resulting in an increase of cumulative regret. Similar results for cumulative instability are shown in Figure 2(b)(d)(f): when N is larger, the number of unstable pairs becomes larger. As the number of rounds increases, both indicators increase at first and then stabilize. The jumping points are caused by the multi-phase structure of the MO-UCB-D4 algorithm.
Varying arm capacity The number of arms K is chosen from K ∈ {2, 5, 10, 20}, with N = 20 and q = N/K. The number of rounds is set to 400,000. As K increases, both the cumulative regret in Figure 3(a)(c)(e) and the cumulative instability in Figure 3(b)(d)(f) increase monotonically. When K increases, the capacity q_k of each arm k decreases, so the number of collisions increases, which leads to an increase of cumulative regret. It also leads to more unstable pairs, which requires more communication blocks to converge to a stable matching. Under these three conditions, the performances of the algorithm are similar.
7 Conclusion
We are the first to study bandit algorithms for the many-to-one matching market under a unique stable matching. This work focuses on a decentralized market. A new α̃-condition is proposed to guarantee a unique stable outcome in the many-to-one market; it is more general than existing uniqueness conditions like SPC and Serial Dictatorship, and recovers the usual α-condition in the one-to-one setting. We propose a phase-based algorithm, MO-UCB-D4, with arm elimination, which obtains O(NK log(T)/∆²) stable regret under the α̃-condition. By carefully defining a mapping from each arm to the least preferred agent in its stable matched set, we can effectively correspond arms and agents individual-to-individual. A series of experiments under two environments, varying the market size and varying the arm capacity, are conducted. The results show that our algorithm performs well under Serial Dictatorship, SPC, and the α̃-condition, respectively.
References344 [1] Azar Abizada. Stability and incentives for college admissions with budget constraints. Theoreti-345 cal Economics, 11(2):735–756, 2016.346
[2] Takashi Akahoshi. Singleton core in many-to-one matching problems. Mathematical Social347 Sciences, 72:7–13, 2014.348
[3] Ahmet Altinok. Dynamic many-to-one matching. Available at SSRN 3526522, 2019.349
[4] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed350 bandit problem. Machine learning, 47(2):235–256, 2002.351
[5] Orly Avner and Shie Mannor. Concurrent bandits and cognitive radio networks. In Joint352 European Conference on Machine Learning and Knowledge Discovery in Databases, pages353 66–81. Springer, 2014.354
[6] Sophie Bade. Random serial dictatorship: the one and only. Mathematics of Operations355 Research, 45(1):353–368, 2020.356
[7] Soumya Basu, Karthik Abinav Sankararaman, and Abishek Sankararaman. Beyond log2(t)357 regret for decentralized bandits in matching markets. In International Conference on Machine358 Learning, pages 705–715, 2021.359
[8] Anna Bogomolnaia and Hervé Moulin. A new solution to the random assignment problem.360 Journal of Economic theory, 100(2):295–328, 2001.361
[9] Somouaoga Bonkoungou. Decentralized college admissions under single application. Review362 of Economic Design, 25(1):65–91, 2021.363
[10] Simon Clark. The uniqueness of stable matchings. Contributions in Theoretical Economics,364 6(1), 2006.365
[11] Jan Eeckhout. On the uniqueness of stable marriage matchings. Economics Letters, 69(1):1–8,366 2000.367
[12] David Gale and Lloyd S Shapley. College admissions and the stability of marriage. The368 American Mathematical Monthly, 69(1):9–15, 1962.369
[13] Aurélien Garivier, Tor Lattimore, and Emilie Kaufmann. On explore-then-commit strategies.370 Advances in Neural Information Processing Systems, 29:784–792, 2016.371
[14] Virginia Gunn, Bertina Kreshpaj, Nuria Matilla-Santander, Emilia F Vignola, David H Weg-372 man, Christer Hogstedt, Emily Q Ahonen, Theo Bodin, Cecilia Orellana, Sherry Baron, et al.373 Initiatives addressing precarious employment and its effects on workers’ health and well-being:374 A systematic review. International Journal of Environmental Research and Public Health,375 19(4):2232, 2022.376
[15] Gregory Z Gutin, Philip R Neary, and Anders Yeo. Unique stable matchings. arXiv preprint377 arXiv:2106.12977, 2021.378
[16] Guillaume Haeringer and Flip Klijn. Constrained school choice. Journal of Economic theory,379 144(5):1921–1947, 2009.380
[17] John William Hatfield, Fuhito Kojima, and Scott Duke Kominers. Investment incentives in381 labor market matching. American Economic Review, 104(5):436–41, 2014.382
[18] Ramesh Johari, Vijay Kamble, and Yash Kanoria. Matching while learning. Operations383 Research, 69(2):655–681, 2021.384
[19] Alexander Karpov. A necessary and sufficient condition for uniqueness consistency in the stable385 marriage matching problem. Economics Letters, 178:63–65, 2019.386
[20] Bettina Klaus and Flip Klijn. Local and global consistency properties for student placement.387 Journal of Mathematical Economics, 49(3):222–229, 2013.388
[21] Lydia T Liu, Horia Mania, and Michael Jordan. Competing bandits in matching markets. In389 International Conference on Artificial Intelligence and Statistics, pages 1618–1628. PMLR,390 2020.391
[22] Lydia T Liu, Feng Ruan, Horia Mania, and Michael I Jordan. Bandit learning in decentralized392 matching markets. arXiv preprint arXiv:2012.07348, 2020.393
[23] Jinpeng Ma. The singleton core in the college admissions problem and its application to the394 national resident matching program (nrmp). Games and Economic Behavior, 69(1):150–164,395 2010.396
[24] Onkar Malgonde, He Zhang, Balaji Padmanabhan, and Moez Limayem. Taming complexity in397 search matching: Two-sided recommender systems on digital platforms. Mis Quarterly, 44(1),398 2020.399
[25] Hai Nguyen, Thành Nguyen, and Alexander Teytelboym. Stability in matching markets with400 complex constraints. Management Science, 67(12):7438–7454, 2021.401
[26] Muriel Niederle and Leeat Yariv. Decentralized matching with aligned preferences. Technical402 report, National Bureau of Economic Research, 2009.403
[27] Jaeok Park. Competitive equilibrium and singleton cores in generalized matching problems.404 International Journal of Game Theory, 46(2):487–509, 2017.405
[28] Philip J Reny. A simple sufficient condition for a unique and student-efficient stable matching406 in the college admissions problem. Economic Theory Bulletin, 9(1):7–9, 2021.407
[29] Antonio Romero-Medina and Matteo Triossi. Acyclicity and singleton cores in matching408 markets. Economics Letters, 118(1):237–239, 2013.409
[30] Jonathan Rosenski, Ohad Shamir, and Liran Szlak. Multi-player bandits–a musical chairs410 approach. In International Conference on Machine Learning, pages 155–163. PMLR, 2016.411
[31] Alvin E Roth. On the allocation of residents to rural hospitals: a general property of two-sided412 matching markets. Econometrica: Journal of the Econometric Society, pages 425–427, 1986.413
[32] Alvin E Roth and Marilda Sotomayor. Two-sided matching. Handbook of game theory with414 economic applications, 1:485–541, 1992.415
[33] Hannu Salonen and Mikko AA Salonen. Mutually best matches. Mathematical Social Sciences,416 91:42–50, 2018.417
[34] Abishek Sankararaman, Soumya Basu, and Karthik Abinav Sankararaman. Dominate or delete:418 Decentralized competing bandits in serial dictatorship. In International Conference on Artificial419 Intelligence and Statistics, pages 1252–1260. PMLR, 2021.420
[35] Jay Sethuraman, Chung-Piaw Teo, Liwen Qian, et al. Many-to-one stable matching: Geometry421 and fairness. Mathematics of Operations Research, 31(3):581–596, 2006.422
[36] William R Thompson. On the likelihood that one unknown probability exceeds another in view423 of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.424
Checklist425
1. For all authors...426
(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s427 contributions and scope? [Yes] Please see Abstract and Section 1.428
(b) Did you describe the limitations of your work? [Yes] Please see Section C.4.429 (c) Did you discuss any potential negative societal impacts of your work? [N/A] This430 work mainly focuses on the online learning theory, which does not have any potential431 negative societal impacts.432
(d) Have you read the ethics review guidelines and ensured that your paper conforms to433 them? [Yes]434
2. If you are including theoretical results...435 (a) Did you state the full set of assumptions of all theoretical results? [Yes] Please see436 Section 2.437 (b) Did you include complete proofs of all theoretical results? [Yes] Please see Appendix.438
3. If you ran experiments...439 (a) Did you include the code, data, and instructions needed to reproduce the main exper-440 imental results (either in the supplemental material or as a URL)? [Yes] Please see441 supplemental material.442
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they443 were chosen)? [Yes] Please see Section 6 and supplemental material.444
(c) Did you report error bars (e.g., with respect to the random seed after running experi-445 ments multiple times)? [Yes] Please see Section 6.446
(d) Did you include the total amount of compute and the type of resources used (e.g., type447 of GPUs, internal cluster, or cloud provider)? [N/A]448
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...449 (a) If your work uses existing assets, did you cite the creators? [N/A]450 (b) Did you mention the license of the assets? [N/A]451 (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]452
(d) Did you discuss whether and how consent was obtained from people whose data you’re454 using/curating? [N/A]455
(e) Did you discuss whether the data you are using/curating contains personally identifiable456 information or offensive content? [N/A]457
5. If you used crowdsourcing or conducted research with human subjects...458 (a) Did you include the full text of instructions given to participants and screenshots, if459 applicable? [N/A]460 (b) Did you describe any potential participant risks, with links to Institutional Review461 Board (IRB) approvals, if applicable? [N/A]462 (c) Did you include the estimated hourly wage paid to participants and the total amount463 spent on participant compensation? [N/A]464 | 1. What is the focus of the paper regarding decentralized learning and matching markets?
2. What are the strengths of the proposed model and algorithm?
3. What are the weaknesses of the writing and the limitations of the results?
4. Do you have any questions regarding the definitions and assumptions used in the paper?
5. How does the reviewer assess the novelty and scalability of the approach? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies decentralized learning and matching in many-to-one matching markets. They introduce a model for this setting, develop a bandit learning algorithm under a uniqueness assumption about the set of stable matchings, and obtain a logarithmic instance-dependent bound on its regret.
Strengths And Weaknesses
Strengths:
The problem of many-to-one matching that the authors study is well-motivated, as it is a salient aspect of many real-world matching markets (e.g., schools, employment) in which learning and adaptivity could play a useful role.
It is nice to see simulations of the algorithm to validate the performance. (However, it would also be nice to see comparisons against other methods.)
Weaknesses:
The writing was difficult to follow, with several crucial definitions and assumptions unclear (see below).
The focus on unique stable matching seems like a strong limitation of the results, as in many markets where the preferences are not in a sense “acyclic” or “aligned”, multiple stable matchings exist. The proposed approach does not appear to scale gracefully to this setting.
While the consideration of many-to-one matching is new, this work only considers a limited form of many-to-one matching where each “arm” has a total preference ordering over agents.
The regret bounds depend on the minimum reward gap across all agents and arms; this assumption does not permit agents to be indifferent (or nearly indifferent), even for arms far down in their ranking list.
Questions
I was not able to understand the definitions of “left order” and “right order”; it seems like there is a missing clause in the first sentence of the paragraph starting at line 173. Would it be possible to elaborate further on the definition?
In Definition 2, there appears to be a missing term in the summations. What should that term be?
I am also somewhat confused about the comparisons between assumptions on the preferences under “Conditions for Unique Stable matching”. For instance, the condition of Akahoshi is a property of the preference profiles of a single side (under which there exists a unique stable matching). On the other hand, uniqueness consistency is a property of the full preference profiles (for both sides). Could you clarify what the precise comparison being made is?
Limitations
The authors describe the limitations of their results in Section C.3. It may be useful to move this discussion into the main body. |
NIPS | Title
Bandit Learning in Many-to-one Matching Markets with Uniqueness Conditions
Abstract
An emerging line of research is dedicated to the problem of one-to-one matching markets with bandits, where the preference of one side is unknown and thus we need to match while learning the preference through multiple rounds of interaction. However, in many real-world applications such as online recruitment platforms for short-term workers, one side of the market can select more than one participant from the other side, which motivates the study of the many-to-one matching problem. Moreover, the existence of a unique stable matching is crucial to the competitive equilibrium of the market. In this paper, we first introduce a more general new α̃-condition to guarantee the uniqueness of stable matching in many-to-one matching problems, which generalizes some established uniqueness conditions such as SPC and Serial Dictatorship, and recovers the known α-condition if the problem is reduced to one-to-one matching. Under this new condition, we design an MO-UCB-D4 algorithm with O(NK log(T)/∆²) regret bound, where T is the time horizon, N is the number of agents, K is the number of arms, and ∆ is the minimum reward gap. Extensive experiments show that our algorithm achieves uniformly good performance under different uniqueness conditions.
1 Introduction17
The rise of platforms for the online matching market has led to an emergence of opportunities for18 companies to participate in personalized decision-making [14, 18]. Companies (like Thumbtack19 and Taskrabbit and Upwork platforms) use online platforms to address short-term needs or seasonal20 spikes in production demands, accommodate workers who are voluntarily looking for more flexible21 work arrangements or probation period before permanent employment. The supply and demand22 sides in two-sided markets make policies on the basis of their diversified needs, which is abstracted23 as a matching market with agent side and arm side, and each side has a preference profile over the24 opposite side. They choose from the other side according to preference and perform a matching. The25 stability of the matching result is a key property of the market [32, 1, 27].26
The preferences in the online labor market may be unknown to one side in advance, thus matching27 while learning the preferences is necessary. The multi-armed bandit (MAB) [36, 13, 4] is an important28 tool for N independent agents in matching market simultaneously selecting arms adaptively from29 received rewards at each round. The idea of applying MAB to one-to-one matching problems,30 introduced by [21], assumes that there is a central platform to make decisions for all agents. Following31 this, other works [22, 34, 7] consider a more general decentralized setting where there is no central32 platform to arrange matchings, and our work is also based on this setting.33
However, it is not enough to just study the one-to-one setting. Take online short-term worker34 employment as an example, it is an online platform design with an iterative matching, where35
employers have numerous similar short-term tasks or internships to be recruited. Workers can only36 choose one task according to the company’s needs at a time while one company can accept more37 than one employee. Each company makes a fixed ranking for candidates according to its own38 requirements but workers have no knowledge of companies’ preferences. The reward for workers39 is a comprehensive consideration of salary and job environment. Since tasks are short-term, each40 candidate can try many times in different companies to choose the most suitable job. We abstract41 companies as arms and workers as agents. Each arm has a capacity q which is the maximum number42 of agents this arm can accommodate. When an arm faces multiple choices, it accepts its most q43 preferred agents. Agents thus compete for arms and may receive zero reward if losing the conflict. It44 is worth mentioning that arms with capacity q in the many-to-one matching can not just be replaced45 by q independent individuals with the same preference since there would be implicit competition46 among different replicates of this arm, not equal treatment. In addition, when multiple agents select47 one arm at a time, there may be no collision, which will hinder the communication among different48 agents under the decentralized assumption. They cannot distinguish who is more preferred by this49 arm in one round as it can accept more than one agent while this can be done in one-to-one case.50 Communication here lets each agent learn more about the preferences of arms and other agents, so as51 to formulate better policies to reduce collisions and learn fast about their stable results.52
This work focuses on a many-to-one market under uniqueness conditions. Previous work [10, 15]53 emphasize the importance of constructing a unique stable matching for the equilibrium of matching54 problems and some existing uniqueness conditions are studied in many-to-one matching, such as55 Sequential Preference Condition (SPC) and Acyclicity [26, 2]. Our work is motivated by [7], but the56 unique one-to-one mapping between arms and agents in their study which gives a surrogate threshold57 for arm elimination does not work in the many-to-one setting. And the uniqueness conditions in58 many-to-one matching are not well-studied, which also brings a challenge to identify and leverage59 the relationship between the resulting stable matching and preferences of two sides in the design60 of bandit algorithms. We propose an α̃-condition that can guarantee a unique stable matching and61 recover α-condition [19] if reduced to the one-to-one setting. We establish the relationships between62 our new α̃-condition and existing uniqueness conditions in many-to-one setting.63
In this paper, we study the bandit algorithm for a decentralized many-to-one matching market with uniqueness conditions. Under our newly introduced α̃-condition, we design an MO-UCB-D4 algorithm with arm elimination whose regret can be upper bounded by O(NK log(T)/∆²), where N is the number of agents, K is the number of arms, and ∆ is the minimum reward gap. Finally, we conduct a series of experiments to simulate our algorithm under the conditions of Serial Dictatorship, SPC and the α̃-condition, to study the stability and regret of the algorithm.
Related Work The study of matching markets has a long history in economics and operation70 research [8, 6, 32] with real applications like school enrollment, labor employment, hospital resource71 allocation, and so on [1, 23, 31, 17]. A salient feature of market matching is making decisions for72 competing players on both sides [36, 12]. MAB is an important tool to study matching problems under73 uncertainty to obtain a maximum reward, and upper confidence bound algorithm (UCB) [4] is a typical74 algorithm, which sets a confidence interval to represent uncertainty. Matching market with MAB is75 studied in both centralized and decentralized setting [21, 22]. Following these, Abishek Sankararaman76 et al. [34] propose a phased UCB algorithm under a uniqueness condition, Serial Dictatorship, to77 manage collisions. They solve the problem of the decentralized market without knowing arm-gaps78 or time horizon, and reduce the probability of linear regret through non-monotonic arm elimination.79 The introduction of the uniqueness condition plays an important role in the equilibrium of matching80 results [15, 7]. Under a stronger and robust condition, Uniqueness Consistency [19], Soumya Basu81 et.al [7] apply MAB to online matching and obtain robust results that the subset of stable matchings82 being separated from the system does not affect other stable matchings.83
We discuss many-to-one problems such as online short-term employment and MOOC [14, 24, 18] as84 the one-to-one setting has limitations in practice. Somouaoga Bonkoungo [9] runs a student-proposing85 deferred acceptance algorithm (DA) [12] to study decentralized college admission. Ahmet Altinok86 [3] considers dynamic matching in many-to-one that can be solved as if it is static many-to-one or87 dynamic one-to-one under certain assumptions. As the existence and uniqueness of competitive88 equilibrium and core are important to allocations, the unique stable results need to be considered [27].89 Similar to conditions for unique stable matching in one-to-one, some uniqueness conditions of stable90 results in the many-to-one setting also are studied [16, 28, 15, 2, 27].91
2 Setting92
This paper considers a many-to-one matching marketM = (K,J ,P), where K = [K],J = [N ]93 are a finite arm set and a finite agent set, respectively. And each arm k has a capacity qk ≥ 1. To94 guarantee that no agents will be unmatched, we focus on the market with N ≤ ∑K i=1 qi. P is the95
fixed preference order of agents and arms, which is ranked by the mean reward. We assume that arm96 preferences for agents are unknown and needed to be learned. If agent j prefers arm k over k′, which97 also means that µj,k > µj,k′ , we denote by k ≻j k′. And the preference is strict that µj,k ̸= µj,k′ if98 k ̸= k′. Similarly, each arm k has a fixed and known preference ≻k over all agents, and specially,99 j ≻k j′ means that arm k prefers agent j over j′. Throughout, we focus on the market where all100 agent-arm pairs are mutually acceptable, that is, j ≻k ∅ and k ≻j ∅ for all k ∈ [K] and j ∈ [N ].101 Let mapping m be the matching result. mt(j) is the matched arm for agent j at time t, and γt(k) is102 the agents set matched with arm k1. Every time agent j selects an arm It(j), and we use Mt(j) to103 denote whether j is successfully matched with its selected arm. Mt(j) = 1 if agent j is matched with104 It(j), and Mt(j) = 0, otherwise. If multiple agents select arm k at the same time, only top qk agents105 can successfully match. The agent j matched with arm k can observe the reward Xj,mt(j)(t), where106 the random reward Xj,k(t) ∈ [0, 1] is independently drawn from a fixed distribution with mean µj,k.107 While the unmatched ones have collisions and receive zero reward. Generally, the reward obtained by108 agent j is Xj,It(j)(t)Mt(j).109
An agent j and an arm k form a blocking pair for a matching m if they are not matched but prefer each other over their assignments, i.e. k ≻_j m(j) and ∃j′ ∈ γ(k), j ≻_k j′. We say a matching satisfies individual rationality (IR) if j ≻_k ∅ and k ≻_j ∅ for all j ∈ [N] and k ∈ [K], that is, every worker prefers to find a job rather than do nothing, and every company also wants to recruit workers rather than not recruit anyone. Under the IR condition, a matching in the many-to-one setting is stable if there does not exist a blocking pair [33, 35].
This paper considers the matching markets under the uniqueness condition. Thus the overall goal is116 to find the unique stable matching between the agent side and arm side through iterations. Let m∗(j)117 be the stable matched arm for agent j under the stable matching m∗. The reward obtained by agent j118 is compared against the reward received by matching with m∗(j) at each time. We aim to minimize119 the expected stable regret for agent j over time horizon T , which is defined as120
\[
R_j(T) = T\,\mu_{j,m^*(j)} - \mathbb{E}\left[\sum_{t=1}^{T} M_t(j)\, X_{j,I_t(j)}(t)\right].
\]
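In simulation, this quantity can be tracked by accumulating the realized matched rewards and comparing against the stable benchmark; a minimal sketch:

def stable_regret(T, mu_stable, matched_rewards):
    """R_j(T) = T * mu_{j,m*(j)} - sum_t M_t(j) X_{j,I_t(j)}(t)  (sketch).

    mu_stable       : mean reward of agent j's stable matched arm
    matched_rewards : realized rewards per round, 0 whenever the agent collided
    """
    return T * mu_stable - sum(matched_rewards[:T])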
3 Algorithm121
In this section, we introduce our MO-UCB-D4 Algorithm (Many-to-one UCB with Decentralized122 Dominated arms Deletion and Local Deletion Algorithm) (Algorithm 1) for the decentralized many-123 to-one market, where there is no platform to arrange actions for agents, which leads to conflicts124 among agents. The MO-UCB-D4 algorithm for each agent j first takes agent set J and arm set K as125 input and chooses a parameter θ ∈ (0, 1/K) (discussed in Section C). It sets multiple phases, and126 each phase i mainly includes regret minimization block (line 6 - 12) and communication block (line127 13 - 16) with duration 2i−1, i = 1, 2, · · · .128 For each agent j in phase i, the algorithm adds arm deletion to reduce potential conflicts, which129 mainly contains global deletion and local deletion. The former eliminates the arms most preferred130 by agents who rank higher than agent j and obtain active set Chj [i] (line 4), and the latter deletes131 the arms that still have many conflicts with agent j after global deletion (line 6). We set a collision132 counter Cj,k[i] to record the number of collisions for agent j pulling arm k.133
In regret minimization block of phase i, we use Lj [i] = {k : Cj,k[i] ≥ ⌈θ2i⌉} to represent the134 arms that collide more times than a threshold ⌈θ2i⌉ when matching with agent j. Arms in Lj [i] are135 first locally deleted to reduce potential collisions for agent j (line 6). After that, agent j selects an136 optimal action It(j) from remaining arms in Chj [i]\Lj [i] in phase i according to UCB index, which is137 computed by µ̂j,k(t−1)+ √ 2α log(t) Nj,k(t−1) (line 7), where Nj,k(t−1) is the number that agent j and arm138
1The mapping m is not reversible as it is not a injective, thus we do not use m−1t (k).
Algorithm 1 MO-UCB-D4 algorithm (for agent j)
Input: θ ∈ (0, 1/K), α > 1.
1: Set global dominated set G_j[0] = ∅
2: for phase i = 1, 2, ... do
3:   Reset the collision set C_{j,k}[i] = 0, ∀k ∈ [K];
4:   Reset active arms set Ch_j[i] = [K]\G_j[i−1];
5:   if t < 2^i + NK(i−1) then
6:     Local deletion L_j[i] = {k : C_{j,k}[i] ≥ ⌈θ2^i⌉};
7:     Play arm I_t(j) ∈ argmax_{k ∈ Ch_j[i]\L_j[i]} ( µ̂_{j,k}(t−1) + √(2α log(t)/N_{j,k}(t−1)) );
8:     if k = I_t(j) is successfully matched with agent j, i.e. m_t(j) = k then
9:       Update estimate µ̂_{j,k}(t) and matching count N_{j,k}(t);
10:    else
11:      C_{j,k}[i] = C_{j,k}[i] + 1;
12:    end if
13:  else if t = 2^i + NK(i−1) then
14:    O_j[i] ← most matched arm in phase i;
15:    G_j[i] ← COMMUNICATION(i, O_j[i]);
16:  end if
17: end for
k have been matched at time t− 1. If the selected arm is successfully matched with agent j, then the139 algorithm updates estimated reward µ̂j,k(t) = 1Nj,k(t) ∑t s=1 1{Is(j) = k and Ms(j) = 1} Xj,k(t)140 and Nj,k(t) (line 9). Otherwise, the collision happens (line 11) and j receives zero reward. The141 regret minimization block identifies the most played arm Oj [i] for agent j in each phase i, which is142 estimated as the best arm for j, thus making optimal policy to minimize expected regret.143
Algorithm 2 COMMUNICATION Input:
Phase number i, and most played arms Oj [i] for agent j, ∀j ∈ [N ] . 1: Set C = ∅; 2: for t = 1, 2, · · · , NK − 1 do 3: if K(j − 1) ≤ t ≤ Kj − 1 then 4: Agent j plays arm It(j) = (t mod K) + 1; 5: if Collision Occurs then 6: C = C ∪ {It(j)}; 7: end if 8: else 9: Play arm It(j) = Oj [i];
10: end if 11: end for 12: RETURN C;
In the communication block (Algorithm 2), there are N sub-blocks, each with duration K. In the ℓ-th sub-block, only agent ℓ pulls arm 1, arm 2, · · · , arm K in round-robin, while the other agents select their most preferred arms, estimated as the most played ones (line 4). This block aims to detect globally dominated arms for agent j: G_j[i] ⊂ {O_{j′}[i] : j′ ≻_{O_{j′}[i]} j}. Under the stable matching m∗, the set of globally dominated arms for agent j is denoted as G∗_j. After the communication block in phase i, each agent j updates its active arm set Ch_j[i+1] for phase i+1 by globally deleting the arm set G_j[i], and enters the next phase (line 4 in Algorithm 1).
Hence, multi-phases setting can guarantee that the active set in different phases has no inclusion151 relationship so that if an agent deletes an arm in a certain phase, this arm can still be selected in the152 later rounds. This ensures that each agent will not permanently eliminate its stable matched arm, and153 when the agent mistakenly deletes an arm, it will not lead to linear regret.154
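The phase structure used in lines 5 and 13 of Algorithm 1 can be made explicit by pre-computing, for each phase i, the time point at which the regret-minimization block ends and the communication block begins; a small helper under that reading:

def phase_boundaries(num_phases, N, K):
    """Time point at which each phase i switches to its communication block (sketch).

    Following line 13 of Algorithm 1, phase i switches at t = 2^i + N*K*(i - 1);
    the communication block itself then occupies the next N*K rounds.
    """
    return [(i, 2 ** i + N * K * (i - 1)) for i in range(1, num_phases + 1)]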
4 Results155
4.1 Uniqueness Conditions156
4.1.1 α̃-condition157
Constructing a unique stable matching plays an important role in market equilibrium and fairness158 [10, 15]. With uniqueness, there would be no dispute about adopting stable matching preferred159 by which side, thus it is more fair. When the preferences of agents and arms are given by some160 utility functions instead of random preferences, like the payments for workers in the labor markets,161 the stable matching is usually unique. Thus the assumption of the unique stable matching is quite162 common in real applications. In this section, we propose a new uniqueness condition, α̃-condition.163 First, we introduce uniqueness consistency (Unqc) [19], which guarantees robustness and uniqueness164 of markets.165 Definition 1. A preference profile satisfies uniqueness consistency if and only if166
(i) there exists a unique stable matching m∗;167
(ii) for any subset of arms or agents, the restriction of the preference profile on this subset with their168 stable-matched pair has a unique stable matching.169
It guarantees that even if an arbitrary subset of agents are deleted out of the system with their170 respective stable matched arms, there still exists a unique stable matching among the remaining171 agents and arms. This condition allows any algorithm to identify at least one stable pair in a unique172 stable matching system and guides the system to a global unique stable matching in an iterative173 manner. To obtain consistent stable results in the many-to-one market, we propose a new α̃-condition,174 which is a sufficient and necessary condition for Unqc (proved in Appendix B).175
We consider a finite set of arms [K] = {1, 2, · · · , K} and a finite set of agents [N] = {1, 2, · · · , N} with preference profile P. Assume that [N]_r = {A_1, A_2, · · · , A_N} is a permutation of {1, 2, · · · , N} and [K]_r = {c_1, c_2, · · · , c_K} is a permutation of {1, 2, · · · , K}. Denote [N], [K] as the left order and [N]_r, [K]_r as the right order. The k-th arm in the right order set [K]_r has the index c_k in the left order set [K], and the j-th agent in the right order set [N]_r has the index A_j in the left order set [N]. Considering arm capacity, we denote γ∗(c_k) (right order) as the stable matched agent set for arm c_k.
Definition 2. A many-to-one matching market satisfies the α̃-condition if,
(i) The left order of agents and arms satisfies
∀j ∈ [N], ∀k ∈ [K] with k > j: µ_{j,m∗(j)} > µ_{j,k},
where m∗(j) is agent j's stable matched arm;
(ii) The right order of agents and arms satisfies
∀k < k′ ≤ K, c_k ∈ [K]_r, A_{k′} ⊂ [N]_r: γ∗(c_k) ≻_{c_k} A_{∑_{i=1}^{k′−1} q_{c_i}+1},
where "the set γ∗(c_k) is more preferred than A_{∑_{i=1}^{k′−1} q_{c_i}+1}" means that the least preferred agent in γ∗(c_k) for c_k is better than A_{∑_{i=1}^{k′−1} q_{c_i}+1} for c_k.
Under our α̃-condition, the left order and the right order satisfy the following rule. The left order186 gives rankings according to agents’ preferences. The first agent in the left order set [N ] prefers arm 1187 in [K] most and has it as the stable matched arm. Similar properties for the agent 2 to q1 since arm 1188 has q1 capacity. Then the (q1 + 1)-th agent in the left order set [N ] has arm 2 in [K] as her stable189 matched arm and prefers arm 2 most except arm 1. The remaining agents follow similarly. Similarly,190 the right order gives rankings according to arms’ preferences. The first arm 1 in the right order set191 [K]r most prefers first qc1 agents in the right order set [N ]r and takes them as its stable matched192 agents. The remaining arms follow similarly.193
This condition is more general than existing uniqueness conditions like SPC [28] and can recover194 the known α-condition in one-to-one matching market [19]. The relationship between the existing195 uniqueness conditions and our proposed conditions will be analyzed in detail later in Section 4.1.2.196
The main idea in moving from the one-to-one to the many-to-one analysis is to replace individuals with sets. In general, under the α̃-condition, the left order satisfies that when arm 1 to arm k − 1 are removed, agents (∑_{i=1}^{k−1} q_i + 1) to (∑_{i=1}^{k} q_i) prefer arm k most; and the right order means that when agents A_1 to A_{∑_{i=1}^{k−1} q_i} are removed, arm k prefers the agents A_k = {A_{∑_{i=1}^{k−1} q_{c_i}+1}, A_{∑_{i=1}^{k−1} q_{c_i}+2}, · · · , A_{∑_{i=1}^{k} q_{c_i}}}, where A_k is the set of the q_k agents most preferred by arm k among those who have not been matched by arms 1, 2, · · · , k − 1. The next theorem gives a summary.
Theorem 1. If a market M = (K, J, P) satisfies the α̃-condition, then m∗(∑_{i=1}^{j−1} q_i + 1) = m∗(∑_{i=1}^{j−1} q_i + 2) = · · · = m∗(∑_{i=1}^{j} q_i) = j (the left order), and γ∗(c_k) = A_k and m∗(A_j) = c_j (the right order) under the stable matching.
Under α̃-condition, the stable matched arm may not be the most preferred one for each agent j,206 j ∈ [N ], thus (i) we do not have m∗(j) to be dominated only by the agent 1 to agent j − 1, i.e. there207 may exist j′ > j, s.t. j′ ≻m∗(j) j; (ii) the left order may not be identical to the right order, we208 define a mapping lr to match the index of an agent in the left order with the index in the right order,209 i.e. Alr(j) = j. From Theorem 1, the stable matched set for arm k is its first qk preferred agents210 γ∗(ck) = Ak. We define lr as lr(i) = max{j : Aj ∈ γ∗(m∗(i)), j ∈ [N ]}, that is, in the right211 order, the mapping for arm k ∈ [K] is the least preferred one among its most qk preferred agents.212 Note that this mapping is not an injective, i.e. ∃j, j′, s.t. agent j = Alr(j) = Alr(j′). An intuitive213 representation can be seen in Figure 4 in Appendix A.1.214
4.1.2 Unique Stable Conditions in Many-to-one Matching215
Uniqueness consistency (Unqc) leads the stable matching to a robust one which is a desirable property216 in large dynamic markets with constant individual departure [7]. A precondition of Unqc is to ensure217 global unique stability, hence finding uniqueness conditions is essential.218
The existing unique stable conditions are well established in one-to-one setting (analysis can be219 found in Appendix B), and in this section, we focus on uniqueness conditions in many-to-one market,220 such as SPC, [28], Aligned Preference, Serial Dictatorship Top-top match and Acyclicity [26, 2, 28]221 (Definition 9, 7, 8, 10 in Appendix B.2). Takashi Akahoshi [2] proposes a necessary and sufficient222 condition for uniqueness of stable matching in many-to-one matching where unacceptable agents223 and arms may exist on both sides. We denote their condition as Acyclicity∗. Under our setting, both224 two sides are acceptable, and we first give the proof of that Acyclicity∗ is a necessary and sufficient225 condition for uniqueness in this setting (see Section B.2.4 in Appendix B). We then give relationships226 between our newly α̃-condition and other existing uniqueness conditions, intuitively expressed in227 Figure 1, and we give proof for this section in Appendix B.2.228
Lemma 1. In a many-to-one matching marketM = (K,J ,P), both Serial Dictatorship and Aligned229 Preference can produce a unique stable matching and they are equivalent.230
Theorem 2. In a many-to-one matching marketM = (K,J ,P), our α̃-condition satisfies:231 (i) SPC is a sufficient condition to α̃-condition;232
(ii) α̃-condition is a necessary and sufficient condition to Unqc;233
(iii) α̃-condition is a sufficient condition to Acyclicity∗.234
4.2 Theoretical Results of Regret235
We then provide theoretical results of MO-UCB-D4 algorithm under our α̃-condition. Recall that G∗j236 is the globally dominated arms for agent j under stable matching m∗. For each arm k /∈ G∗j , we give237 the definition of the blocking agents for arm k and agent j: Bjk = {j′ : j′ ≻k j, k /∈ G∗j}, which238 contains agents more preferred by arm k than j. The hidden arms for agent j is Hj = {k : k /∈239 G∗j} ∩ {k : Bjk ̸= ∅}. The reward gap for agent j and arm k is defined as ∆jk = |µj,m∗(j) − µj,k|240 and the minimum reward gap across all arms and agents is ∆ = minj∈[N ]{mink∈[K] ∆j,k}. We241 assume that the reward is different for each agent, thus ∆j,k > 0 for every agent j and arm k.242
Theorem 3. (Regret upper bound) Let J_max(j) = max{j + 1, {j′ : ∃k ∈ H_j, j′ ∈ B_{jk}}} be the maximum blocking agent for agent j, and let f_α̃(j) = j + lr_max(j) be a fixed factor that depends on both the left order and the right order for agent j. Following the MO-UCB-D4 algorithm with horizon T, the expected regret of a stable matching under the α̃-condition (Definition 2) for agent j ∈ [N] is upper bounded by
\[
\mathbb{E}[R_j(T)] \le \sum_{k \notin G^*_j \cup \{m^*(j)\}} \frac{8\alpha}{\Delta_{jk}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right) + \sum_{k \notin G^*_j}\;\sum_{j' \in B_{jk}:\, k \notin G^*_{j'}} \frac{8\alpha\,\mu_{j,m^*(j)}}{\Delta_{j'k}^{2}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right)
\]
\[
+\; c_j \log_2(T) + O\!\left(\frac{N^2K^2}{\Delta^2} + \left(\min(1,\theta|H_j|)\, f_{\tilde{\alpha}}(J_{\max}(j)) + f_{\tilde{\alpha}}(j) - 1\right)2^{i^*} + N^2 K i^*\right),
\]
where i∗ = max{8, i_1, i_2} (so i∗ ≥ 8, and i_1, i_2 are defined in equation (3)), and lr_max(j) = max{lr(j′) : 1 ≤ j′ ≤ j} is the maximum right order mapping over agents j′ who rank no lower than j.
From Theorem 3, the scale of the regret upper bound under the α̃-condition is O(NK log(T)/∆²); the proof is sketched below and given in full in the Appendix.
Proof Sketch of Theorem 3. Under the α̃-condition, we only need to discuss the regret of the unique stable matching. We construct a good phase (in Appendix A.2) and denote the time point at which agent j reaches its good phase by τ_j. After τ_j, agent j can identify its best arm and matches with its stable pair. Thus, from phase τ_j onwards, agent j + 1 will find the set of globally dominated arms G∗_{j+1} and will eliminate arm m∗(j) if m∗(j) brings collisions in the communication block, according to Algorithm 1. Global deletion here follows the left order. Then, when agent j enters the regret minimization block in the next phase, the number of times it plays a sub-optimal arm is small, which leads to a small total number of collisions experienced by agent j + 1. The process of each agent is thus divided into two stages: before τ_j and after τ_j. After τ_j, according to the causes of regret, the regret is divided into four blocks: collision, local deletion, communication, and sub-optimal play. Phases before τ_j can be bounded by induction. The regret decomposition is bounded as follows.
Lemma 2. (Regret Decomposition) For a stable matching under the α̃-condition, the regret of agent j ∈ [N] under our algorithm can be decomposed and upper bounded as:
\[
\mathbb{E}[R_j(T)] \le \underbrace{\mathbb{E}\!\left[S_{F^{\alpha}_j}\right]}_{\text{Regret before phase } F^{\alpha}_j} + \underbrace{\min(\theta|H_j|, 1)\,\mathbb{E}\!\left[S_{V^{\alpha}_j}\right]}_{\text{Local deletion}} + \underbrace{\left(K-1+|B_{j,m^*(j)}|\right)\log_2(T) + NK\,\mathbb{E}\!\left[V^{\alpha}_j\right]}_{\text{Communication}}
\]
\[
+\; \underbrace{\sum_{k \notin G^*_j}\;\sum_{j' \in B_{j,k}:\, k \notin G^*_{j'}} \frac{8\alpha\,\mu_{j,m^*(j)}}{\Delta_{j',k}^{2}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right)}_{\text{Collision}} + \underbrace{\sum_{k \notin G^*_j \cup \{m^*(j)\}} \frac{8\alpha}{\Delta_{j,k}}\left(\log(T) + \sqrt{\tfrac{\pi}{\alpha}\log(T)}\right)}_{\text{Sub-optimal play}} + NK\left(1 + (\phi(\alpha)+1)\frac{8\alpha}{\Delta^{2}}\right),
\]
where F^α_j and V^α_j, the time points at which agent j enters the α̃-Good phase and the α̃-Low Collision phase respectively (referred to as the "good phase" above), are defined in Appendix A.2.
5 Difficulties and Solutions267
While putting forward our α̃-condition in the many-to-one setting, many new problems need to be268 taken into account.269
From one-to-one setting to many-to-one setting First, although we assume that arm preference is270 over individuals rather than combination of agents, the agents matched by one arm are not independent.271 Specially, arms with capacity q can not just be replaced by q independent individuals with the same272 preference. Since there would be implicit competition among different replicates of this arm, and it273 can reject the previously accepted agents when it faces a more preferred agent. Secondly, collisions274 among agents is one of main causes of regret in decentralized setting, while capacity will hinder the275 collision-reducing process. In communication block, when two agents select one arm at a time, as276 an arm can accept more than one agent, these two cannot distinguish who is more preferred by this277 arm, while it can be done in one-to-one markets. Thus it is more difficult to identify arm preferences278 for each agent. The lr in [7] is a one-to-one mapping that corresponds the agent index in the left279 order and the agent index in the right order, which is related to regret bound (Theorem 3 in [7] and280 Theorem 3 in our work). While it does not hold in our setting. To give a descriptive range of matched281 result for each arm under α̃-condition, we need to define a new mapping.282
We address these problems as follows. First, since capacity influences the communication among agents, we add a communication block and introduce an arm set G∗j , which is deleted before each phase to reduce collisions; G∗j contains the arms that block agent j globally under the stable matching m∗. Second, moving from one-to-one to many-to-one is a transition from individuals to sets. It is natural to either split sets into individuals or design a bridge that maps sets to individuals. We construct a new mapping lr (Figure 4 in Appendix A) from agent j in the left order to agents in the right order under the α̃-condition: lr maps each arm k to the least preferred of its stable matched agents in the right order, thereby giving an individual-to-individual matching and characterizing the range of the stable matched agent set (Theorem 1). Besides lr, capacity also influences regret mainly through the communication block, as mentioned in the first paragraph.
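As a concrete illustration, the following Python sketch computes an lr-style mapping from a given stable matching; the data structures (preference lists, matched sets) and function names are hypothetical stand-ins for the quantities defined in the paper, not released code.

```python
# Illustrative sketch (not the paper's code): given a stable matching that assigns
# each arm a set of agents, map every arm to the least preferred agent in its
# matched set, mirroring the lr mapping described above.

def lr_mapping(arm_prefs, stable_match):
    """arm_prefs[k]   : list of agent indices, most preferred first (assumed input).
    stable_match[k]   : set of agents matched to arm k under the stable matching m*.
    Returns lr[k]     : the least preferred agent among arm k's matched agents."""
    lr = {}
    for k, matched_agents in stable_match.items():
        rank = {j: r for r, j in enumerate(arm_prefs[k])}    # smaller rank = more preferred
        lr[k] = max(matched_agents, key=lambda j: rank[j])   # least preferred matched agent
    return lr

# Toy example with 2 arms (capacity 2) and 4 agents.
arm_prefs = {0: [0, 1, 2, 3], 1: [3, 2, 1, 0]}
stable_match = {0: {0, 2}, 1: {1, 3}}
print(lr_mapping(arm_prefs, stable_match))  # {0: 2, 1: 1}
```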
From the α-condition to the α̃-condition To extend the α-condition to the many-to-one setting, one needs to define preferences among sets of agents. However, there may be an exponential number of such sets due to the combinatorial structure, and simply constraining preferences over all possible sets would lead to high complexity. Motivated by the α-condition, which characterizes properties of matched pairs in the one-to-one setting, we come up with a workable constraint by regarding an arm and the least preferred agent in its matched set as the matched pair, and defining preferences according to this grouping. It turns out that we only need to define the preferences of arms over disjoint sets of agents to complete the extension, since the α-condition is defined under the stable matching, and this also fits the regret analysis well. In summary, there might be other ways to extend the α-condition, but we present a successful attempt that not only gives a natural extension with similar inclusion relationships but also guarantees a good regret bound.
6 Experiments
In this section, we evaluate our MO-UCB-D4 algorithm (Algorithm 1) experimentally in decentralized many-to-one matching markets. For all experiments, the rankings of all agents and arms are sampled uniformly at random. For each agent, we set the mean reward of its least preferred arm to 1/N and that of its most preferred arm to 1, so the reward gap between any two adjacently ranked arms is ∆ = 1/N . The reward Xj,k(t) obtained when agent j matches with arm k at time t is sampled from Ber(µj,k). The capacity of each arm is set equally to q = N/K. We investigate how the cumulative regret and the cumulative market unstability depend on the size of the market and the number of arms under three different uniqueness conditions: Serial Dictatorship, SPC, and the α̃-condition. The cumulative regret is the total mean-reward gap between the stable matching result and the simulated result, and the cumulative unstability is the number of unstable matchings up to round t. In our experiments, all results are averaged over 10 independent runs, hence the error bars are calculated as standard deviations divided by √10.
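For concreteness, a minimal sketch of how such a simulated market can be instantiated is shown below; the helper names and the exact spacing of the mean rewards are illustrative assumptions, not the released experiment code.

```python
# Illustrative simulation setup (assumed, not the paper's released code):
# uniform random preferences, mean rewards spaced between 1/N and 1, Bernoulli feedback.
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 10, 5, 100_000           # agents, arms, rounds
q = N // K                         # equal arm capacity

# Each agent ranks the K arms uniformly at random; mean rewards decrease with rank.
agent_prefs = np.array([rng.permutation(K) for _ in range(N)])  # agent_prefs[j, r] = r-th favourite arm of agent j
reward_levels = np.linspace(1.0, 1.0 / N, K)                    # rank 0 -> 1, last rank -> 1/N
mu = np.empty((N, K))
for j in range(N):
    mu[j, agent_prefs[j]] = reward_levels                        # mu[j, k] = mean reward of arm k for agent j

# Each arm ranks the N agents uniformly at random (preferences over individuals).
arm_prefs = np.array([rng.permutation(N) for _ in range(K)])

def pull(j, k):
    """Bernoulli reward X_{j,k}(t) ~ Ber(mu[j, k]) when agent j is accepted by arm k."""
    return float(rng.random() < mu[j, k])
```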
Varying the market size To test the effect on the two indicators, cumulative regret and cumulative unstability, we first vary N with K fixed, using market sizes of N ∈ {10, 20, 30, 40} agents and K = 5 arms. The number of rounds is set to 100,000. The cumulative regret in Figure 2(a)(c)(e) shows an increasing trend, with convergence, as the number of agents increases under all three conditions. When the number of agents increases, collisions among different agents become more likely, resulting in higher cumulative regret. Similar results for the cumulative unstability are shown in Figure 2(b)(d)(f): when N is larger, the number of unstable pairs increases. As the number of rounds grows, both indicators first increase and then level off. The jump points are caused by the multi-phase structure of the MO-UCB-D4 algorithm.
Varying arm capacity The number of arms K is chosen from K ∈ {2, 5, 10, 20}, with N = 20 and q = N/K. The number of rounds is set to 400,000. As K increases, both the cumulative regret in Figure 3(a)(c)(e) and the cumulative unstability in Figure 3(b)(d)(f) increase monotonically. When K increases, the capacity qk of each arm k decreases, and the number of collisions then increases, which leads to higher cumulative regret. It also leads to more unstable pairs, which requires more communication blocks to converge to a stable matching. Under all three conditions, the performance of the algorithm is similar.
7 Conclusion
We are the first to study bandit algorithms for the many-to-one matching market under a unique stable matching. This work focuses on a decentralized market. A new α̃-condition is proposed to guarantee a unique stable outcome in many-to-one markets; it is more general than existing uniqueness conditions such as SPC and Serial Dictatorship, and it recovers the usual α-condition in the one-to-one setting. We propose a phase-based, arm-elimination algorithm, MO-UCB-D4, which obtains O(NK log(T )/∆2) stable regret under the α̃-condition. By carefully defining a mapping from each arm to the least preferred agent in its stable matched set, we can effectively put arms and agents into an individual-to-individual correspondence. A series of experiments under two settings, varying the market size and varying the arm capacity, are conducted. The results show that our algorithm performs well under Serial Dictatorship, SPC, and the α̃-condition, respectively.
References
[1] Azar Abizada. Stability and incentives for college admissions with budget constraints. Theoretical Economics, 11(2):735–756, 2016.
[2] Takashi Akahoshi. Singleton core in many-to-one matching problems. Mathematical Social Sciences, 72:7–13, 2014.
[3] Ahmet Altinok. Dynamic many-to-one matching. Available at SSRN 3526522, 2019.
[4] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[5] Orly Avner and Shie Mannor. Concurrent bandits and cognitive radio networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 66–81. Springer, 2014.
[6] Sophie Bade. Random serial dictatorship: the one and only. Mathematics of Operations Research, 45(1):353–368, 2020.
[7] Soumya Basu, Karthik Abinav Sankararaman, and Abishek Sankararaman. Beyond log^2(T) regret for decentralized bandits in matching markets. In International Conference on Machine Learning, pages 705–715, 2021.
[8] Anna Bogomolnaia and Hervé Moulin. A new solution to the random assignment problem. Journal of Economic Theory, 100(2):295–328, 2001.
[9] Somouaoga Bonkoungou. Decentralized college admissions under single application. Review of Economic Design, 25(1):65–91, 2021.
[10] Simon Clark. The uniqueness of stable matchings. Contributions in Theoretical Economics, 6(1), 2006.
[11] Jan Eeckhout. On the uniqueness of stable marriage matchings. Economics Letters, 69(1):1–8, 2000.
[12] David Gale and Lloyd S. Shapley. College admissions and the stability of marriage. The American Mathematical Monthly, 69(1):9–15, 1962.
[13] Aurélien Garivier, Tor Lattimore, and Emilie Kaufmann. On explore-then-commit strategies. Advances in Neural Information Processing Systems, 29:784–792, 2016.
[14] Virginia Gunn, Bertina Kreshpaj, Nuria Matilla-Santander, Emilia F. Vignola, David H. Wegman, Christer Hogstedt, Emily Q. Ahonen, Theo Bodin, Cecilia Orellana, Sherry Baron, et al. Initiatives addressing precarious employment and its effects on workers' health and well-being: A systematic review. International Journal of Environmental Research and Public Health, 19(4):2232, 2022.
[15] Gregory Z. Gutin, Philip R. Neary, and Anders Yeo. Unique stable matchings. arXiv preprint arXiv:2106.12977, 2021.
[16] Guillaume Haeringer and Flip Klijn. Constrained school choice. Journal of Economic Theory, 144(5):1921–1947, 2009.
[17] John William Hatfield, Fuhito Kojima, and Scott Duke Kominers. Investment incentives in labor market matching. American Economic Review, 104(5):436–441, 2014.
[18] Ramesh Johari, Vijay Kamble, and Yash Kanoria. Matching while learning. Operations Research, 69(2):655–681, 2021.
[19] Alexander Karpov. A necessary and sufficient condition for uniqueness consistency in the stable marriage matching problem. Economics Letters, 178:63–65, 2019.
[20] Bettina Klaus and Flip Klijn. Local and global consistency properties for student placement. Journal of Mathematical Economics, 49(3):222–229, 2013.
[21] Lydia T. Liu, Horia Mania, and Michael Jordan. Competing bandits in matching markets. In International Conference on Artificial Intelligence and Statistics, pages 1618–1628. PMLR, 2020.
[22] Lydia T. Liu, Feng Ruan, Horia Mania, and Michael I. Jordan. Bandit learning in decentralized matching markets. arXiv preprint arXiv:2012.07348, 2020.
[23] Jinpeng Ma. The singleton core in the college admissions problem and its application to the National Resident Matching Program (NRMP). Games and Economic Behavior, 69(1):150–164, 2010.
[24] Onkar Malgonde, He Zhang, Balaji Padmanabhan, and Moez Limayem. Taming complexity in search matching: Two-sided recommender systems on digital platforms. MIS Quarterly, 44(1), 2020.
[25] Hai Nguyen, Thành Nguyen, and Alexander Teytelboym. Stability in matching markets with complex constraints. Management Science, 67(12):7438–7454, 2021.
[26] Muriel Niederle and Leeat Yariv. Decentralized matching with aligned preferences. Technical report, National Bureau of Economic Research, 2009.
[27] Jaeok Park. Competitive equilibrium and singleton cores in generalized matching problems. International Journal of Game Theory, 46(2):487–509, 2017.
[28] Philip J. Reny. A simple sufficient condition for a unique and student-efficient stable matching in the college admissions problem. Economic Theory Bulletin, 9(1):7–9, 2021.
[29] Antonio Romero-Medina and Matteo Triossi. Acyclicity and singleton cores in matching markets. Economics Letters, 118(1):237–239, 2013.
[30] Jonathan Rosenski, Ohad Shamir, and Liran Szlak. Multi-player bandits – a musical chairs approach. In International Conference on Machine Learning, pages 155–163. PMLR, 2016.
[31] Alvin E. Roth. On the allocation of residents to rural hospitals: a general property of two-sided matching markets. Econometrica: Journal of the Econometric Society, pages 425–427, 1986.
[32] Alvin E. Roth and Marilda Sotomayor. Two-sided matching. Handbook of Game Theory with Economic Applications, 1:485–541, 1992.
[33] Hannu Salonen and Mikko A. A. Salonen. Mutually best matches. Mathematical Social Sciences, 91:42–50, 2018.
[34] Abishek Sankararaman, Soumya Basu, and Karthik Abinav Sankararaman. Dominate or delete: Decentralized competing bandits in serial dictatorship. In International Conference on Artificial Intelligence and Statistics, pages 1252–1260. PMLR, 2021.
[35] Jay Sethuraman, Chung-Piaw Teo, Liwen Qian, et al. Many-to-one stable matching: Geometry and fairness. Mathematics of Operations Research, 31(3):581–596, 2006.
[36] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] Please see Abstract and Section 1.
(b) Did you describe the limitations of your work? [Yes] Please see Section C.4.
(c) Did you discuss any potential negative societal impacts of your work? [N/A] This work mainly focuses on the online learning theory, which does not have any potential negative societal impacts.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] Please see Section 2.
(b) Did you include complete proofs of all theoretical results? [Yes] Please see Appendix.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Please see supplemental material.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Please see Section 6 and supplemental material.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Please see Section 6.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] | 1. What is the focus and contribution of the paper regarding many-to-one matching markets?
2. What are the strengths of the proposed approach, particularly in terms of the \tilde{\alpha}-condition and logarithmic regret?
3. What are the weaknesses of the paper, especially regarding the relationship between capacity and consistency?
4. Do you have any concerns or questions about the proof or the extension of the \tilde{\alpha}-condition?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors study the problem of minimizing regret in many-to-one matching markets with bandit feedback. In a many-to-one matching market, each right-side agent (aka arm) can match with up to a maximum number of left-side agents (aka agents). The authors develop the \tilde{\alpha}-condition as a sufficient condition for uniqueness consistency in many-to-one matching markets. Under this condition they prove logarithmic regret for the bandit learning problem for the MO-UCB-D4 algorithm.
Strengths And Weaknesses
Strengths:
This work is an interesting addition to learning in matching markets with bandit feedback, as it studies the many-to-one matching markets for the first time.
It develops the \tilde{\alpha}-condition (adapted from one-to-one matching markets) as a sufficient and necessary condition for uniqueness consistency.
Weaknesses:
The relationship between the capacity q_k and lr_{max}(j) is not discussed properly; elaboration is needed (see questions).
A counterexample in which Acyclicity* holds but the \tilde{\alpha}-condition does not is missing. Adding one would improve the understanding in Section 4.3.
The \tilde{\alpha}-condition is also very closely related to Karpov et al. The difficulty in extending it to the many-to-one setting is unclear.
The discussion of why deriving the regret bound for the many-to-one case is more difficult than in Basu et al. is not stated clearly. Specifically, the role of moving from an ordering of individuals to an ordering of sets needs to be elaborated (probably in the derivation of the \tilde{\alpha}-condition as well).
Questions
The authors' comment that an increase in q_k decreases lr_{max}(j) seems a bit informal. Is there a more rigorous statement, e.g., does a partial ordering of q_k induce a partial order on lr_{max}(j)?
Please discuss the other points mentioned in the weaknesses.
Limitations
This work seems theoretical in nature, and negative societal impact, if any, is not immediate. |
NIPS | Title
Maximal Sparsity with Deep Networks?
Abstract
The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
1 Introduction
Our launching point is the optimization problem
min x ‖x‖0 s.t. y = Φx, (1)
where y ∈ Rn is an observed vector, Φ ∈ Rn×m is some known, overcomplete dictionary of feature/basis vectors withm > n, and ‖·‖0 denotes the `0 norm of a vector, or a count of the number of nonzero elements. Consequently, (1) can be viewed as the search for a maximally sparse feasible vector x∗ (or approximately feasible if the constraint is relaxed). Unfortunately however, direct assault on (1) involves an intractable, combinatorial optimization process, and therefore efficient alternatives that return a maximally sparse x∗ with high probability in restricted regimes are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as `1-norm minimization [2, 5, 21], greedy approaches like orthogonal matching pursuit (OMP) [18, 22], and many flavors of iterative hard-thresholding (IHT) [3, 4].
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection [7, 8], outlier removal [6, 13], compressive sensing [5], and source localization [1, 16]. However, a fundamental weakness underlies them all: If the Gram matrix Φ>Φ has significant offdiagonal energy, indicative of strong coherence between columns of Φ, then estimation of x∗ may be extremely poor. Loosely speaking this occurs because, as higher correlation levels are present, the null-space of Φ is more likely to include large numbers of approximately sparse vectors that tend to distract existing algorithms in the feasible region, an unavoidable nuisance in many practical applications.
In this paper we consider recent developments in the field of deep learning as an entry point for improving the performance of sparse recovery algorithms. Although seemingly unrelated at first
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
glance, the layers of a deep neural network (DNN) can be viewed as iterations of some algorithm that have been unfolded into a network structure [9, 11]. In particular, iterative thresholding approaches such as IHT mentioned above typically involve an update rule comprised of a fixed, linear filter followed by a non-linear activation function that promotes sparsity. Consequently, algorithm execution can be interpreted as passing an input through an extremely deep network with constant weights (dependent on Φ) at every layer. This ‘unfolding’ viewpoint immediately suggests that we consider substituting discriminatively learned weights in place of those inspired by the original sparse recovery algorithm. For example, it has been argued that, given access to a sufficient number of {x∗,y} pairs, a trained network may be capable of producing quality sparse estimates with a few number of layers. This in turn can lead to a dramatically reduced computational burden relative to purely optimization-based approaches [9, 19, 23] or to enhanced non-linearities for use with traditional iterative algorithms [15].
While existing empirical results are promising, especially in terms of the reduction in computational footprint, there is as of yet no empirical demonstration of a learned deep network that can unequivocally recover maximally sparse vectors x∗ with greater accuracy than conventional, state-of-the-art optimization-based algorithms, especially with a highly coherent Φ. Nor is there supporting theoretical evidence elucidating the exact mechanism whereby learning may be expected to improve the estimation accuracy, especially in the presence of coherent dictionaries. This paper attempts to fill in some of these gaps, and our contributions can be distilled to the following points:
Quantifiable Benefits of Unfolding: We rigorously dissect the benefits of unfolding conventional sparse estimation algorithms to produce trainable deep networks. This includes a precise characterization of exactly how different architecture choices can affect the ability to improve so-called restrictive isometry property (RIP) constants, which measure the degree of disruptive correlation in Φ. This helps to quantify the limits of shared layer weights, which are the standard template of existing methods [9, 19, 23], and motivates more flexible network constructions reminiscent of LSTM cells [12] that account for multi-resolution structure in Φ in a previously unexplored fashion. Note that we defer all proofs, as well as many additional analyses and problem details, to a longer companion paper [26].
Isolation of Important Factors: Based on these theoretical insights, and a better understanding of the essential factors governing performance, we establish the degree to which it is favorable to diverge from strict conformity to any particular unfolded algorithmic script. In particular, we argue that layer-wise independent weights and/or activations are essential, while retainment of original thresholding non-linearities and squared-error loss implicit to many sparse algorithms is not. We also recast the the core problem as deep multi-label classification given that optimal support pattern recovery is the primary concern. This allows us to adopt a novel training paradigm that is less sensitive to the specific distribution encountered during testing. Ultimately, we development the first, ultra-fast sparse estimation algorithm (or more precisely a learning procedure that produces such an algorithm) that can effectively deal with coherent dictionaries and adversarial RIP constants.
State-of-the-Art Empirical Performance: We apply the proposed system to a practical photometric stereo computer vision problem, where the goal is to estimate the 3D geometry of an object using only 2D photos taken from a single camera under different lighting conditions. In this context, shadows and specularities represent sparse outliers that must be simultaneously removed from ∼ 104 − 106 surface points. We achieve state-of-the-art performance using only weak supervision despite a minuscule computational budget appropriate for real-time mobile environments.
2 From Iterative Hard Thesholding (IHT) to Deep Neural Networks
Although any number of iterative algorithms could be adopted as our starting point, here we examine IHT because it is representative of many other sparse estimation paradigms and is amenable to theoretical analysis. With knowledge of an upper bound on the true cardinality, solving (1) can be replaced by the equivalent problem
min x
1 2‖y −Φx‖ 2 2 s.t. ‖x‖0 ≤ k. (2)
IHT attempts to minimize (2) using what can be viewed as computationally-efficient projected gradient iterations [3]. Let x(t) denote the estimate of some maximally sparse x∗ after t iterations. The aggregate IHT update computes
x(t+1) = Hk [ x(t) − µΦ> ( Φx(t) − y )] , (3)
where µ is a step-size parameter and Hk[·] is a hard-thresholding operator that sets all but the k largest values (in magnitude) of a vector to zero. For the vanilla version of IHT, the step-size µ = 1 leads to a number of recovery guarantees whereby iterating (3), starting from x(0) = 0 is guaranteed to reduce (2) at each step before eventually converging to the globally optimal solution. These results hinge on properties of Φ which relate to the coherence structure of dictionary columns as encapsulated by the following definition.
Definition 1 (Restricted Isometry Property) A dictionary Φ satisfies the Restricted Isometry Property (RIP) with constant δk[Φ] < 1 if
(1− δk[Φ])‖x‖22 ≤ ‖Φx‖22 ≤ (1 + δk[Φ])‖x‖22 (4)
holds for all {x : ‖x‖0 ≤ k}.
In brief, the smaller the value of the RIP constant δk[Φ], the closer any sub-matrix of Φ with k columns is to being orthogonal (i.e., it has less correlation structure). It is now well-established that dictionaries with smaller values of δk[Φ] lead to sparse recovery problems that are inherently easier to solve. For example, in the context of IHT, it has been shown [3] that if y = Φx∗, with ‖x∗‖0 ≤ k and δ3k[Φ] < 1/ √ 32, then at iteration t of (3) we will have ‖x(t) − x∗‖2 ≤ 2−t‖x∗‖2. It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Moreover, it can be shown that this x∗ is also the unique, optimal solution to (1) [5].
The success of IHT in recovering maximally sparse solutions crucially depends on the RIP-based condition that δ3k[Φ] < 1/ √ 32, which heavily constrains the degree of correlation structure in Φ that can be tolerated. While dictionaries with columns drawn independently and uniformly from the surface of a unit hypersphere (or with elements drawn iid fromN (0, 1/n) ) will satisfy this condition with high probability provided k is small enough [6], for many/most practical problems of interest we cannot rely on this type of IHT recovery guarantee. In fact, except for randomized dictionaries in high dimensions where tight bounds exist, we cannot even compute the value of δ3k[Φ], which requires calculating the spectral norm of ( m 3k ) subsets of dictionary columns.
There are many ways nature might structure a dictionary such that IHT (or most any other existing sparse estimation algorithm) will fail. Here we examine one of the most straightforward forms of dictionary coherence that can easily disrupt performance. Consider the situation where Φ =[ A+ uv> ] N , where columns of A ∈ Rn×m and u ∈ Rn are drawn iid from the surface of a unit hypersphere, while v ∈ Rm is arbitrary. Additionally, > 0 is a scalar and N is a diagonal normalization matrix that scales each column of Φ to have unit `2 norm. It then follows that if is sufficiently small, the rank-one component begins to dominate, and there is no value of 3k such that δ3k[Φ] < 1/ √ 32. In this type of problem we hypothesize that DNNs provide a potential avenue for improvement to the extent that they might be able to compensate for disruptive correlations in Φ.
For example, at the most basic level we might consider general networks with the layer t defined by x(t+1) = f [ Ψx(t) + Γy ] , (5)
where f : Rm → Rm is a non-linear activation function, and Ψ ∈ Rm×m and Γ ∈ Rm×n are arbitrary. Moreover, given access to training pairs {x∗,y}, where x∗ is a sparse vector such that y = Φx∗, we can optimize Ψ and Γ using traditional stochastic gradient descent just like any other DNN structure. We will first precisely characterize the extent to which this adaptation affords any benefit over IHT where f(·) = Hk[·]. Later we will consider flexible, layer-specific non-linearities f (t) and parameters {Ψ(t),Γ(t)}.
3 Analysis of Adaptable Weights and Activations
For simplicity in this section we restrict ourselves to the fixed hard-threshold operator Hk[·] across all layers; however, many of the conclusions borne out of our analysis nonetheless carry over to a much wider range of activation functions f . In general it is difficult to analyze how arbitrary Ψ and Γ may improve upon the fixed parameterization from (3) where Ψ = I − Φ>Φ and Γ = Φ> (assuming µ = 1). Fortunately though, we can significantly collapse the space of potential weight matrices by including the natural requirement that if x∗ represents the true, maximally sparse solution, then it must be a fixed-point of (5). Indeed, without this stipulation the iterations could
diverge away from the globally optimal value of x, something IHT itself will never do. These considerations lead to the following:
Proposition 1 Consider a generalized IHT-based network layer given by (5) with f(·) = Hk[·] and let x∗ denote any unique, maximally sparse feasible solution to y = Φx with ‖x‖0 ≤ k. Then to ensure that any such x∗ is a fixed point it must be that Ψ = I − ΓΦ.
Although Γ remains unconstrained, this result has restricted Ψ to be a rank-n factor, parameterized by Γ, subtracted from an identity matrix. Certainly this represents a significant contraction of the space of ‘reasonable’ parameterizations for a general IHT layer. In light of Proposition 1, we may then further consider whether the added generality of Γ (as opposed to the original fixed assignment Γ = Φ>) affords any further benefit to the revised IHT update
x(t+1) = Hk [ (I − ΓΦ)x(t) + Γy ] . (6)
For this purpose we note that (6) can be interpreted as a projected gradient descent step for solving
min x
1 2x >ΓΦx− x>Γy s.t. ‖x‖0 ≤ k. (7)
However, if ΓΦ is not positive semi-definite, then this objective is no longer even convex, and combined with the non-convex constraint is likely to produce an even wider constellation of troublesome local minima with no clear affiliation with the global optimum of our original problem from (2). Consequently it does not immediately appear that Γ 6= Φ> is likely to provide any tangible benefit. However, there do exist important exceptions. The first indication of how learning a general Γ might help comes from the following result:
Proposition 2 Suppose that Γ = DΦ>WW>, where W is an arbitrary matrix of appropriate dimension and D is a full-rank diagonal that jointly solve
δ∗3k [Φ] , inf W ,D δ3k [WΦD] . (8)
Moreover, assume that Φ is substituted with ΦD in (6), meaning we have simply replaced Φ with a new dictionary that has scaled columns. Given these qualifications, if y = Φx∗, with ‖x∗‖0 ≤ k and δ∗3k [Φ] < 1/ √ 32, then at iteration t of (6)
‖D−1x(t) −D−1x∗‖2 ≤ 2−t‖D−1x∗‖2. (9)
It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Additionally, it can be guaranteed that after a finite number of iterations, the correct support pattern will be discovered. And it should be emphasized that rescaling Φ by some known diagonal D is a common prescription for sparse estimation (e.g., column normalization) that does not alter the optimal `0-norm support pattern.1
But the real advantage over regular IHT comes from the fact that δ∗3k [Φ] ≤ δk [Φ], and in many practical cases, δ∗3k [Φ] δ3k [Φ], which implies success can be guaranteed across a much wider range of RIP conditions. For example, if we revisit the dictionary Φ = [ A+ uv> ] N , an immediate benefit can be observed. More concretely, for sufficiently small we argued that δ3k [Φ] > 1/ √ 32 for all k, and consequently convergence to the optimal solution may fail. In contrast, it can be shown that δ∗3k [Φ] will remain quite small, satisfying δ ∗ 3k [Φ] ≈ δ3k [A], implying that performance will nearly match that of an equivalent recovery problem using A (and as we discussed above, δ3k [A] is likely to be relatively small per its unique, randomized design). The following result generalizes a sufficient regime whereby this is possible:
Corollary 1 Suppose Φ = [ A+ ∆r]N , where elements of A are drawn iid fromN (0, 1/n), ∆r is any arbitrary matrix with rank[∆r] = r < n, and N is a diagonal matrix (e.g, one that enforces unit `2 column norms). Then
E (δ∗3k [Φ]) ≤ E ( δ3k [ Ã ]) , (10)
where à denotes the matrix A with any r rows removed. 1Inclusion of this diagonal factor D can be equivalently viewed as relaxing Proposition 1 to hold under some fixed rescaling of Φ, i.e., an operation that preserves the optimal support pattern.
Additionally, as the size of Φ grows proportionally larger, it can be shown that with overwhelming probability δ∗3k [Φ] ≤ δ3k [ Ã ] . Overall, these results suggest that we can essentially annihilate
any potentially disruptive rank-r component ∆r at the cost of implicitly losing r measurements (linearly independent rows of A, and implicitly the corresponding elements of y). Therefore, at least provided that r is sufficiently small such that δ3k [ Ã ] ≈ δ3k [A], we can indeed be confident
that a modified form of IHT can perform much like a system with an ideal RIP constant. And of course in practice we may not know how Φ decomposes as some Φ ≈ [ A+ ∆r]N ; however, to the extent that this approximation can possibly hold, the RIP constant can be improved nonetheless.
It should be noted that globally solving (8) is non-differentiable and intractable, but this is the whole point of incorporating a DNN network to begin with. If we have access to a large number of training pairs {x∗,y} generated using the true Φ, then during the course of the learning process a useful W and D can be implicitly estimated such that a maximal number of sparse vectors can be successfully recovered. Of course we will experience diminishing marginal returns as more non-ideal components enter the picture. In fact, it is not difficult to describe a slightly more sophisticated scenario such that use of layer-wise constant weights and activations are no longer capable of lowering δ3k[Φ] significantly at all, portending failure when it comes to accurate sparse recovery.
One such example is a clustered dictionary model (which we describe in detail in [26]), whereby columns of Φ are grouped into a number of tight clusters with minimal angular dispersion. While the clusters themselves may be well-separated, the correlation within clusters can be arbitrarily large. In some sense this model represents the simplest partitioning of dictionary column correlation structure into two scales: the inter- and intra-cluster structures. Assuming the number of such clusters is larger than n, then layer-wise constant weights and activations are unlikely to provide adequate relief, since the implicit ∆r factor described above will be full rank.
Fortunately, simple adaptations of IHT, which are reflective of many generic DNN structures, can remedy the problem. The core principle is to design a network such that earlier layers/iterations are tasked with exposing the correct support at the cluster level, without concern for accuracy within each cluster. Once the correct cluster support has been obtained, later layers can then be charged with estimating the fine-grain details of within-cluster support. We believe this type of multi-resolution sparse estimation is essential when dealing with highly coherent dictionaries. This can be accomplished with the following adaptations to IHT:
1. The hard-thresholding operator is generalized to ‘remember’ previously learned clusterlevel sparsity patterns, in much the same way that LSTM gates allow long term dependencies to propagate [12] or highway networks [20] facilitate information flow unfettered to deeper layers. Practically speaking this adaptation can be computed by passing the prior layer’s activations x(t) through linear filters followed by indicator functions, again reminiscent of how DNN gating functions are typically implemented.
2. We allow the layer weights {Ψ(t),Γ(t)} to vary from iteration to iteration t sequencing through a fixed set akin to layers of a DNN.
In [26] we show that hand-crafted versions of these changes allow IHT to provably recovery maximally sparse vectors x∗ in situations where existing algorithms fail.
4 Discriminative Multi-Resolution Sparse Estimation
As implied previously, guaranteed success for most existing sparse estimation strategies hinges on the dictionary Φ having columns drawn (approximately) from a uniform distribution on the surface of a unit hypersphere, or some similar condition to ensure that subsets of columns behave approximately like an orthogonal basis. Essentially this confines the structure of the dictionary to operate on a single universal scale. The clustered dictionary model described in the previous section considers a dictionary built on two different scales, with a cluster-level distribution (coarse) and tightly-packed within-cluster details (fine). But practical dictionaries may display structure operating across a variety of scales that interleave with one another, forming a continuum among multiple levels.
When the scales are clearly demarcated, we have argued that it is possible to manually define a multi-resolution IHT-inspired algorithm that guarantees success in recovering the optimal support pattern; and indeed, IHT could be extended to handle a clustered dictionary model with nested
structures across more than two scales. However, without clearly partitioned scales it is much less obvious how one would devise an optimal IHT modification. It is in this context that learning flexible algorithm iterations is likely to be most advantageous. In fact, the situation is not at all unlike many computer vision scenarios whereby handcrafted features such as SIFT may work optimally in confined, idealized domains, while learned CNN-based features are often more effective otherwise.
Given a sufficient corpus of {x∗,y} pairs linked via some fixed Φ, we can replace manual filter construction with a learning-based approach. On this point, although we view our results from Section 3 as a convincing proof of concept, it is unlikely that there is anything intrinsically special about the specific hard-threshold operator and layer-wise construction we employed per se, as long as we allow for deep, adaptable layers that can account for structure at multiple scales. For example, we expect that it is more important to establish a robust training pipeline that avoids stalling at the hand of vanishing gradients in a deep network, than to preserve the original IHT template analogous to existing learning-based methods. It is here that we propose several deviations:
Multi-Label Classification Loss: We exploit the fact that in producing a maximally sparse vector x∗, the main challenge is estimating supp[x∗]. Once the support is obtained, computing the actual nonzero coefficients just boils down to solving a least squares problem. But any learning system will be unaware of this and could easily expend undue effort in attempting to match coefficient magnitudes at the expense of support recovery. Certainly the use of a data fit penalty of the form ‖y − Φx‖22, as is adopted by nearly all sparse recovery algorithms, will expose us to this issue. Therefore we instead formulate sparse recovery as a multi-label classification problem. More specifically, instead of directly estimating x∗, we attempt to learn s∗ = [s∗1, . . . , s ∗ m] >, where s∗i equals the indicator function I[x∗i 6= 0]. For this purpose we may then incorporate a traditional multi-label classification loss function via a final softmax output layer, which forces the network to only concern itself with learning support patterns. This substitution is further justified by the fact that even with traditional IHT, the support pattern will be accurately recovered before the iterations converge exactly to x∗. Therefore we may expect that fewer layers (as well as training data) are required if all we seek is a support estimate, opening the door for weaker forms of supervision.
Instruments for Avoiding Bad Local Solutions: Given that IHT can take many iterations to converge on challenging problems, we may expect that a relatively deep network structure will be needed to obtain exact support recovery. We must therefore take care to avoid premature convergence to local minima or areas with vanishing gradient by incorporating several recent countermeasures proposed in the DNN community. For example, the adaptive variant of IHT described previously is reminiscent of highway networks or LSTM cells, which have been proposed to allow longer range flow of gradient information to improve convergence through the use of gating functions. An even simpler version of this concept involves direct, un-gated connections that allow much deeper ‘residual’ networks to be trained [10] (which is even suggestive of the residual factor embedded in the original IHT iterations). We deploy this tool, along with batch-normalization [14] to aid convergence, for our basic feedforward pipeline, along with an alternative structure based on recurrent LSTM cells. Note that unfolded LSTM networks frequently receive a novel input for every time step, whereas here y is applied unaltered at every layer (more on this in [26]). We also replace the non-integrable hard-threshold operator with simple rectilinear (ReLu) units [17], which are functionally equivalent to one-sided soft-thresholding; this convex selection likely reduces the constellation of sub-optimal local minima during the training process.
5 Experiments and Applications
Synthetic Tests with Correlated Dictionaries: We generate a dictionary matrix Φ ∈ Rn×m using Φ = ∑n i=1 1 i2uiv > i , where ui ∈ Rn and vi ∈ Rm have iid elements drawn from N (0, 1). We also rescale each column of Φ to have unit `2 norm. Φ generated in this way has super-linear decaying singular values (indicating correlation between the columns) but is not constrained to any specific structure. Many dictionaries in real applications have such a property. As a basic experiment, we generateN = 700000 ground truth samples x∗ ∈ Rm by randomly selecting d nonzero entries, with nonzero amplitudes drawn iid from the uniform distribution U [−0.5, 0.5], excluding the interval [−0.1, 0.1] to avoid small, relatively inconsequential contributions to the support pattern. We then create y ∈ Rn via y = Φx∗. As d increases, the estimation problem becomes more difficult. In fact, to guarantee success with such correlated data (and high RIP constant) requires evaluating on the order of ( m n ) linear systems of size n×n, which is infeasible even for small values, indicative of how challenging it can be to solve sparse inverse problems of any size. We set n=20 and m=100.
We used N1 = 600000 samples for training and the remaining N2 = 100000 for testing. Echoing our arguments in Section 4, we explored both a feedforward network with residual connections [10] and a recurrent network with vanilla LSTM cells [12]. To evaluate the performance, we check whether the d ground truth nonzeros are aligned with the predicted top-d values produced by our network, a common all-or-nothing metric in the compressive sensing literature. Detailed network design, optimization setup, and alternative metrics can be found in [26].
Figure 1(left) shows comparisons against a battery of existing algorithms, both learning- and optimization-based. These include standard `1 minimization via ISTA iterations [2], IHT [3] (supplied with the ground truth number of nonzeros), an ISTA-based network [9], and an IHT-inspired network [23]. For both the ISTA- and IHT-based networks, we used the exact same training data described above. Note that given the correlated Φ matrix, the recovery performance of IHT, and to a lesser degree `l minimization using ISTA, is rather modest as expected given that the associated RIP constant will be quite large by construction. In contrast our two methods achieve uniformly higher accuracy, including over other learning-based methods trained with the same data. This improvement is likely the result of three significant factors: (i) Existing learning methods initialize using weights derived from the original sparse estimation algorithms, but such an initialization will be associated with locally optimal solutions in most cases with correlated dictionaries. (ii) As described in Section 3, constant weights across layers have limited capacity to unravel multi-resolution dictionary structure, especially one that is not confined to only possess some low rank correlating component. (iii) The quadratic loss function used by existing methods does not adequately focus resources on the crux of the problem, which is accurate support recovery. In contrast we adopt an initialization motivated by DNN-based training considerations, unique layer weights to handle a multi-resolution dictionary, and a multi-label classification output layer to focus on support recovery.
To further isolate essential factors affecting performance, we next consider the following changes: (1) We remove the residual connections from Res-Net. (2) We replace ReLU with hard-threshold activations. In particular, we utilize the so-called HELUσ function introduced in [23], which is a continuous and piecewise linear approximation of the scalar hard-threshold operator. (3) We use a quadratic penalty layer instead of a multi-label classification loss layer, i.e., the loss function is changed to ∑N1 i=1 ‖a(i) − y(i)‖22 (where a is the output of the last fully-connected layer) during training. Figure 1(middle) displays the associated recovery percentages, where we observe that in each case performance degrades. Without the residual design, and also with the inclusion of a rigid, non-convex hard-threshold operator, local minima during training appear to be a likely culprit, consistent with observations from [10]. Likewise, use of a least-squares loss function is likely to over-emphasize the estimation of coefficient amplitudes rather than focusing on support recovery.
Finally, from a practical standpoint we may expect that the true amplitude distribution may deviate at times from the original training set. To explore robustness to such mismatch, as well as different amplitude distributions, we consider two sets of candidate data: the original data, and similarlygenerated data but with the uniform distribution of nonzero elements replaced with the Gaussians N (±0.3, 0.1), where the mean is selected with equal probability as either−0.3 or 0.3, thus avoiding tiny magnitudes with high probability. Figure 1(right) reports accuracies under different distributions for both training and testing, including mismatched cases. (The results are obtained using LSTM-Net, but the Res-net showed similar pattern.) The label ‘U2U’ refers to training and testing with the uniformly distributed amplitudes, while ‘U2N’ uses uniform training set and a Gaussian test set. Analogous definitions apply for ‘N2N’ and ‘N2U’. In all cases we note that the performance is
quite stable across training and testing conditions. We would argue that our recasting of the problem as multi-label classification contributes, at least in part, to this robustness. The application example described next demonstrates further tolerance of training-testing set mismatches.
Practical Application - Photometric Stereo: Suppose we have q observations of a given surface point from a Lambertian scene under different lighting directions. Then the resulting measurements from a standard calibrated photometric stereo design (linear camera response function, an orthographic camera projection, and known directional light sources), denoted o ∈ Rq , can be expressed as o = ρLn, where n ∈ R3 denotes the true 3D surface normal, each row of L ∈ Rq×3 defines a lighting direction, and ρ is the diffuse albedo, acting here as a scalar multiplier [24]. If specular highlights, shadows, or other gross outliers are present, then the observations are more realistically modeled as o = ρLn + e, where e is an an unknown sparse vector [13, 25]. It is apparent that, since n is unconstrained, e need not compensate for any component of o in the range of L. Given that null[L>] is the orthogonal complement to range[L], we may consider the following problem
min e ‖e‖0 s.t. Projnull[L>](o) = Projnull[L>](e) (11)
which ultimately collapses to our canonical sparse estimation problem from (1), where lightinghardware-dependent correlations may be unavoidable in the implicit dictionary.
Following [13], we use 32-bit HDR gray-scale images of the object Bunny (256×256) with foreground masks under different lighting conditions whose directions, or rows of L, are randomly selected from a hemisphere with the object placed at the center. To apply our method, we first compute Φ using the appropriate projection operator derived from the lighting matrix L. As real-world training data is expensive to acquire, we instead use weak supervision by synthetically generating a training set as follows. First, we draw a support pattern for e randomly with cardinality d sampled uniformly from the range [d1, d2]. The values of d1 and d2 can be tuned in practice. Nonzero values of e are assigned iid random values from a Gaussian distribution whose mean and variance are also tunable. Beyond this, no attempt was made to match the true outlier distributions encountered in applications of photometric stereo. Finally, for each e we can naturally compute observations via the linear constraint in (11), which serve as candidate network inputs.
Given synthetic training data acquired in this way, we learn a network with the exact same structure and optimization parameters as in Section 5; no application-specific tuning was introduced. We then deploy the resulting network on the gray-scale Bunny images. For each surface point, we use our DNN model to approximately solve (11). Since the network output will be a probability map for the outlier support set instead of the actual values of e, we choose the 4 indices with the least probability as inliers and use them to compute n via least squares.
We compare our method against the baseline least squares estimate from [24] and `1 norm minimization. We defer more quantitative comparisons to [26]. In Figure 2, we illustrate the recovered surface normal error maps of the hardest case (fewest lighting directions). Here we observe that our DNN estimates lead to far fewer regions of significant error and the runtime is orders of magnitude faster. Overall though, this application example illustrates that weak supervision with mismatched synthetic training data can, at least for some problem domains, be sufficient to learn a quite useful sparse estimation DNN; here one that facilitates real-time 3D modeling in mobile environments.
Discussion: In this paper we have shown that deep networks with hand-crafted, multi-resolution structure can provably solve certain specific classes of sparse recovery problems where existing algorithms fail. However, much like CNN-based features can often outperform SIFT on many computer vision tasks, we argue that a discriminative approach can outperform manual structuring of layers/iterations and compensate for dictionary coherence under more general conditions.
Acknowledgements: This work was done while the first author was an intern at Microsoft Research, Beijing. It is also funded by 973-2015CB351800, NSFC-61231010, NSFC-61527804, NSFC-61421062, NSFC-61210005 and MOEMicrosoft Key Laboratory, Peking University. | 1. What is the focus of the paper regarding deep networks and sparse recovery problems?
2. What are the strengths and weaknesses of the proposed approach compared to traditional methods?
3. How does the reviewer assess the technical quality and novelty of the paper's content?
4. What are the potential impacts of the paper's findings on the field, and how might they influence other iterative sparse recovery algorithms?
5. What are some minor issues with the paper's presentation that the reviewer notes? | Review | Review
The authors show that deep networks with hand-crafted structure inspired from IHT can solve sparse recovery problems, in particular with coherent dictionaries and adversarial RIP constants. Notes on technical quality: The arguments and claim are very broad while the practical experiments are highly limited. The work deserves more rigorous real data experimental analysis, specially studying multiple contributing factors in isolation. Notes on novelty: The paper has interesting theoretical contributions and implications for some applications that off-the-shelf sparse recovery algorithms may fail. Notes on potential impact: Unfolding conventional sparse estimation algorithms to produce deep networks is interesting. In particular, developing a modified IHT that can perform like a system with an ideal RIP constant can potentially impact derivation of other traditional iterative sparse recovery algorithms so that they can be adapted to the adversarial condition, and better exploit abundance of training data. Notes on qualitative assessment: This is an interesting study, and the authors address many aspects in reasonable depth. However, the paper as a single document is not self-contained; many arguments are only speculatively asserted. This is evidently a small subset of a comprehensive work, but the amount of squeezed information and lack of supporting evidence makes it hard to follow at various parts. More specific remarks: - L4: â... hand-crafted layer weightsâ: The argument holds in scenarios that the forward model for (natural) dictionaries is presumed or known. How does this generalize to data-driven dictionary learning for sparse representation? - L206: The adaptation to IHT would benefit from precise formalism or block diagrams describing and contrasting multiple network structures. - L241: Similar to above argument, the multi-label classification loss function is not clear. Minor typos: L69: â...the the...â L72: â...we development the first...â L102 & 158: â...we recovery the true...â L155: D should be boldface. Equation 10: define E. |
NIPS | Title
Maximal Sparsity with Deep Networks?
Abstract
The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
1 Introduction
Our launching point is the optimization problem
min x ‖x‖0 s.t. y = Φx, (1)
where y ∈ Rn is an observed vector, Φ ∈ Rn×m is some known, overcomplete dictionary of feature/basis vectors withm > n, and ‖·‖0 denotes the `0 norm of a vector, or a count of the number of nonzero elements. Consequently, (1) can be viewed as the search for a maximally sparse feasible vector x∗ (or approximately feasible if the constraint is relaxed). Unfortunately however, direct assault on (1) involves an intractable, combinatorial optimization process, and therefore efficient alternatives that return a maximally sparse x∗ with high probability in restricted regimes are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as `1-norm minimization [2, 5, 21], greedy approaches like orthogonal matching pursuit (OMP) [18, 22], and many flavors of iterative hard-thresholding (IHT) [3, 4].
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection [7, 8], outlier removal [6, 13], compressive sensing [5], and source localization [1, 16]. However, a fundamental weakness underlies them all: If the Gram matrix Φ>Φ has significant offdiagonal energy, indicative of strong coherence between columns of Φ, then estimation of x∗ may be extremely poor. Loosely speaking this occurs because, as higher correlation levels are present, the null-space of Φ is more likely to include large numbers of approximately sparse vectors that tend to distract existing algorithms in the feasible region, an unavoidable nuisance in many practical applications.
In this paper we consider recent developments in the field of deep learning as an entry point for improving the performance of sparse recovery algorithms. Although seemingly unrelated at first
glance, the layers of a deep neural network (DNN) can be viewed as iterations of some algorithm that have been unfolded into a network structure [9, 11]. In particular, iterative thresholding approaches such as IHT mentioned above typically involve an update rule comprised of a fixed, linear filter followed by a non-linear activation function that promotes sparsity. Consequently, algorithm execution can be interpreted as passing an input through an extremely deep network with constant weights (dependent on Φ) at every layer. This ‘unfolding’ viewpoint immediately suggests that we consider substituting discriminatively learned weights in place of those inspired by the original sparse recovery algorithm. For example, it has been argued that, given access to a sufficient number of {x∗,y} pairs, a trained network may be capable of producing quality sparse estimates with a few number of layers. This in turn can lead to a dramatically reduced computational burden relative to purely optimization-based approaches [9, 19, 23] or to enhanced non-linearities for use with traditional iterative algorithms [15].
While existing empirical results are promising, especially in terms of the reduction in computational footprint, there is as of yet no empirical demonstration of a learned deep network that can unequivocally recover maximally sparse vectors x∗ with greater accuracy than conventional, state-of-the-art optimization-based algorithms, especially with a highly coherent Φ. Nor is there supporting theoretical evidence elucidating the exact mechanism whereby learning may be expected to improve the estimation accuracy, especially in the presence of coherent dictionaries. This paper attempts to fill in some of these gaps, and our contributions can be distilled to the following points:
Quantifiable Benefits of Unfolding: We rigorously dissect the benefits of unfolding conventional sparse estimation algorithms to produce trainable deep networks. This includes a precise characterization of exactly how different architecture choices can affect the ability to improve so-called restricted isometry property (RIP) constants, which measure the degree of disruptive correlation in Φ. This helps to quantify the limits of shared layer weights, which are the standard template of existing methods [9, 19, 23], and motivates more flexible network constructions reminiscent of LSTM cells [12] that account for multi-resolution structure in Φ in a previously unexplored fashion. Note that we defer all proofs, as well as many additional analyses and problem details, to a longer companion paper [26].
Isolation of Important Factors: Based on these theoretical insights, and a better understanding of the essential factors governing performance, we establish the degree to which it is favorable to diverge from strict conformity to any particular unfolded algorithmic script. In particular, we argue that layer-wise independent weights and/or activations are essential, while retention of the original thresholding non-linearities and squared-error loss implicit to many sparse algorithms is not. We also recast the core problem as deep multi-label classification given that optimal support pattern recovery is the primary concern. This allows us to adopt a novel training paradigm that is less sensitive to the specific distribution encountered during testing. Ultimately, we develop the first ultra-fast sparse estimation algorithm (or more precisely a learning procedure that produces such an algorithm) that can effectively deal with coherent dictionaries and adversarial RIP constants.
State-of-the-Art Empirical Performance: We apply the proposed system to a practical photometric stereo computer vision problem, where the goal is to estimate the 3D geometry of an object using only 2D photos taken from a single camera under different lighting conditions. In this context, shadows and specularities represent sparse outliers that must be simultaneously removed from ∼10^4–10^6 surface points. We achieve state-of-the-art performance using only weak supervision despite a minuscule computational budget appropriate for real-time mobile environments.
2 From Iterative Hard Thresholding (IHT) to Deep Neural Networks
Although any number of iterative algorithms could be adopted as our starting point, here we examine IHT because it is representative of many other sparse estimation paradigms and is amenable to theoretical analysis. With knowledge of an upper bound on the true cardinality, solving (1) can be replaced by the equivalent problem
min_x  (1/2)‖y − Φx‖_2^2   s.t.   ‖x‖_0 ≤ k.   (2)
IHT attempts to minimize (2) using what can be viewed as computationally-efficient projected gradient iterations [3]. Let x(t) denote the estimate of some maximally sparse x∗ after t iterations. The aggregate IHT update computes
x^(t+1) = H_k[ x^(t) − µΦ^T( Φx^(t) − y ) ],   (3)
where µ is a step-size parameter and Hk[·] is a hard-thresholding operator that sets all but the k largest values (in magnitude) of a vector to zero. For the vanilla version of IHT, the step-size µ = 1 leads to a number of recovery guarantees whereby iterating (3), starting from x(0) = 0 is guaranteed to reduce (2) at each step before eventually converging to the globally optimal solution. These results hinge on properties of Φ which relate to the coherence structure of dictionary columns as encapsulated by the following definition.
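To make the update concrete, the following is a minimal NumPy sketch of the basic IHT iteration (3); it is our own illustration rather than code from the paper, and the step size and iteration count are arbitrary choices.

import numpy as np

def hard_threshold(z, k):
    # H_k: keep the k largest-magnitude entries of z and zero out the rest.
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

def iht(Phi, y, k, mu=1.0, n_iters=200):
    # Projected gradient descent on (1/2)||y - Phi x||^2 subject to ||x||_0 <= k.
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        x = hard_threshold(x + mu * Phi.T @ (y - Phi @ x), k)
    return x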
Definition 1 (Restricted Isometry Property) A dictionary Φ satisfies the Restricted Isometry Property (RIP) with constant δk[Φ] < 1 if
(1 − δ_k[Φ])‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_k[Φ])‖x‖_2^2   (4)
holds for all {x : ‖x‖0 ≤ k}.
In brief, the smaller the value of the RIP constant δk[Φ], the closer any sub-matrix of Φ with k columns is to being orthogonal (i.e., it has less correlation structure). It is now well-established that dictionaries with smaller values of δk[Φ] lead to sparse recovery problems that are inherently easier to solve. For example, in the context of IHT, it has been shown [3] that if y = Φx∗, with ‖x∗‖0 ≤ k and δ3k[Φ] < 1/√32, then at iteration t of (3) we will have ‖x(t) − x∗‖2 ≤ 2^−t ‖x∗‖2. It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Moreover, it can be shown that this x∗ is also the unique, optimal solution to (1) [5].
The success of IHT in recovering maximally sparse solutions crucially depends on the RIP-based condition that δ3k[Φ] < 1/√32, which heavily constrains the degree of correlation structure in Φ that can be tolerated. While dictionaries with columns drawn independently and uniformly from the surface of a unit hypersphere (or with elements drawn iid from N(0, 1/n)) will satisfy this condition with high probability provided k is small enough [6], for many/most practical problems of interest we cannot rely on this type of IHT recovery guarantee. In fact, except for randomized dictionaries in high dimensions where tight bounds exist, we cannot even compute the value of δ3k[Φ], which requires calculating the spectral norm of (m choose 3k) subsets of dictionary columns.
There are many ways nature might structure a dictionary such that IHT (or most any other existing sparse estimation algorithm) will fail. Here we examine one of the most straightforward forms of dictionary coherence that can easily disrupt performance. Consider the situation where Φ = [εA + uv^T]N, where columns of A ∈ Rn×m and u ∈ Rn are drawn iid from the surface of a unit hypersphere, while v ∈ Rm is arbitrary. Additionally, ε > 0 is a scalar and N is a diagonal normalization matrix that scales each column of Φ to have unit `2 norm. It then follows that if ε is sufficiently small, the rank-one component begins to dominate, and there is no value of 3k such that δ3k[Φ] < 1/√32. In this type of problem we hypothesize that DNNs provide a potential avenue for improvement to the extent that they might be able to compensate for disruptive correlations in Φ.
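For illustration, here is a small sketch (our own, with arbitrary sizes and seed) of this rank-one-perturbed dictionary; as ε shrinks, the normalized columns become increasingly dominated by the single direction u.

import numpy as np

def coherent_dictionary(n=20, m=100, eps=0.01, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)            # columns of A on the unit hypersphere
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    v = rng.standard_normal(m)                # v is arbitrary
    Phi = eps * A + np.outer(u, v)            # small eps -> rank-one term dominates
    return Phi / np.linalg.norm(Phi, axis=0)  # N: rescale to unit l2 column norms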
For example, at the most basic level we might consider general networks with the layer t defined by x(t+1) = f [ Ψx(t) + Γy ] , (5)
where f : Rm → Rm is a non-linear activation function, and Ψ ∈ Rm×m and Γ ∈ Rm×n are arbitrary. Moreover, given access to training pairs {x∗,y}, where x∗ is a sparse vector such that y = Φx∗, we can optimize Ψ and Γ using traditional stochastic gradient descent just like any other DNN structure. We will first precisely characterize the extent to which this adaptation affords any benefit over IHT where f(·) = Hk[·]. Later we will consider flexible, layer-specific non-linearities f (t) and parameters {Ψ(t),Γ(t)}.
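As a rough sketch of one such layer (our own, using the hard-threshold operator as f purely for illustration):

import numpy as np

def generic_layer(x, y, Psi, Gamma, k):
    # One layer of (5): x(t+1) = f[Psi x(t) + Gamma y], here with f = H_k.
    z = Psi @ x + Gamma @ y
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

# The IHT update (3) is the special case Psi = I - Phi.T @ Phi and Gamma = Phi.T;
# a learned network instead treats Psi and Gamma as trainable parameters.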
3 Analysis of Adaptable Weights and Activations
For simplicity in this section we restrict ourselves to the fixed hard-threshold operator Hk[·] across all layers; however, many of the conclusions borne out of our analysis nonetheless carry over to a much wider range of activation functions f . In general it is difficult to analyze how arbitrary Ψ and Γ may improve upon the fixed parameterization from (3) where Ψ = I − Φ>Φ and Γ = Φ> (assuming µ = 1). Fortunately though, we can significantly collapse the space of potential weight matrices by including the natural requirement that if x∗ represents the true, maximally sparse solution, then it must be a fixed-point of (5). Indeed, without this stipulation the iterations could
diverge away from the globally optimal value of x, something IHT itself will never do. These considerations lead to the following:
Proposition 1 Consider a generalized IHT-based network layer given by (5) with f(·) = Hk[·] and let x∗ denote any unique, maximally sparse feasible solution to y = Φx with ‖x‖0 ≤ k. Then to ensure that any such x∗ is a fixed point it must be that Ψ = I − ΓΦ.
Although Γ remains unconstrained, this result has restricted Ψ to be a rank-n factor, parameterized by Γ, subtracted from an identity matrix. Certainly this represents a significant contraction of the space of ‘reasonable’ parameterizations for a general IHT layer. In light of Proposition 1, we may then further consider whether the added generality of Γ (as opposed to the original fixed assignment Γ = Φ>) affords any further benefit to the revised IHT update
x^(t+1) = H_k[ (I − ΓΦ)x^(t) + Γy ].   (6)
For this purpose we note that (6) can be interpreted as a projected gradient descent step for solving
min_x  (1/2)x^TΓΦx − x^TΓy   s.t.   ‖x‖_0 ≤ k.   (7)
However, if ΓΦ is not positive semi-definite, then this objective is no longer even convex, and combined with the non-convex constraint is likely to produce an even wider constellation of troublesome local minima with no clear affiliation with the global optimum of our original problem from (2). Consequently it does not immediately appear that Γ ≠ Φ^T is likely to provide any tangible benefit. However, there do exist important exceptions. The first indication of how learning a general Γ might help comes from the following result:
Proposition 2 Suppose that Γ = DΦ^T W W^T, where W is an arbitrary matrix of appropriate dimension and D is a full-rank diagonal that jointly solve
δ∗3k[Φ] ≜ inf_{W,D} δ3k[WΦD].   (8)
Moreover, assume that Φ is substituted with ΦD in (6), meaning we have simply replaced Φ with a new dictionary that has scaled columns. Given these qualifications, if y = Φx∗, with ‖x∗‖0 ≤ k and δ∗3k[Φ] < 1/√32, then at iteration t of (6)
‖D^−1 x^(t) − D^−1 x∗‖_2 ≤ 2^−t ‖D^−1 x∗‖_2.   (9)
It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Additionally, it can be guaranteed that after a finite number of iterations, the correct support pattern will be discovered. And it should be emphasized that rescaling Φ by some known diagonal D is a common prescription for sparse estimation (e.g., column normalization) that does not alter the optimal `0-norm support pattern.1
But the real advantage over regular IHT comes from the fact that δ∗3k[Φ] ≤ δ3k[Φ], and in many practical cases, δ∗3k[Φ] ≪ δ3k[Φ], which implies success can be guaranteed across a much wider range of RIP conditions. For example, if we revisit the dictionary Φ = [εA + uv^T]N, an immediate benefit can be observed. More concretely, for sufficiently small ε we argued that δ3k[Φ] > 1/√32 for all k, and consequently convergence to the optimal solution may fail. In contrast, it can be shown that δ∗3k[Φ] will remain quite small, satisfying δ∗3k[Φ] ≈ δ3k[A], implying that performance will nearly match that of an equivalent recovery problem using A (and as we discussed above, δ3k[A] is likely to be relatively small per its unique, randomized design). The following result generalizes a sufficient regime whereby this is possible:
Corollary 1 Suppose Φ = [A + ∆r]N, where elements of A are drawn iid from N(0, 1/n), ∆r is any arbitrary matrix with rank[∆r] = r < n, and N is a diagonal matrix (e.g., one that enforces unit `2 column norms). Then
E(δ∗3k[Φ]) ≤ E(δ3k[Ã]),   (10)
where E denotes expectation (with respect to the random draw of A) and à denotes the matrix A with any r rows removed.
Footnote 1: Inclusion of this diagonal factor D can be equivalently viewed as relaxing Proposition 1 to hold under some fixed rescaling of Φ, i.e., an operation that preserves the optimal support pattern.
Additionally, as the size of Φ grows proportionally larger, it can be shown that with overwhelming probability δ∗3k [Φ] ≤ δ3k [ Ã ] . Overall, these results suggest that we can essentially annihilate
any potentially disruptive rank-r component ∆r at the cost of implicitly losing r measurements (linearly independent rows of A, and implicitly the corresponding elements of y). Therefore, at least provided that r is sufficiently small such that δ3k [ Ã ] ≈ δ3k [A], we can indeed be confident
that a modified form of IHT can perform much like a system with an ideal RIP constant. And of course in practice we may not know how Φ decomposes as some Φ ≈ [ A+ ∆r]N ; however, to the extent that this approximation can possibly hold, the RIP constant can be improved nonetheless.
It should be noted that globally solving (8) is non-differentiable and intractable, but this is the whole point of incorporating a DNN network to begin with. If we have access to a large number of training pairs {x∗,y} generated using the true Φ, then during the course of the learning process a useful W and D can be implicitly estimated such that a maximal number of sparse vectors can be successfully recovered. Of course we will experience diminishing marginal returns as more non-ideal components enter the picture. In fact, it is not difficult to describe a slightly more sophisticated scenario such that use of layer-wise constant weights and activations are no longer capable of lowering δ3k[Φ] significantly at all, portending failure when it comes to accurate sparse recovery.
One such example is a clustered dictionary model (which we describe in detail in [26]), whereby columns of Φ are grouped into a number of tight clusters with minimal angular dispersion. While the clusters themselves may be well-separated, the correlation within clusters can be arbitrarily large. In some sense this model represents the simplest partitioning of dictionary column correlation structure into two scales: the inter- and intra-cluster structures. Assuming the number of such clusters is larger than n, then layer-wise constant weights and activations are unlikely to provide adequate relief, since the implicit ∆r factor described above will be full rank.
Fortunately, simple adaptations of IHT, which are reflective of many generic DNN structures, can remedy the problem. The core principle is to design a network such that earlier layers/iterations are tasked with exposing the correct support at the cluster level, without concern for accuracy within each cluster. Once the correct cluster support has been obtained, later layers can then be charged with estimating the fine-grain details of within-cluster support. We believe this type of multi-resolution sparse estimation is essential when dealing with highly coherent dictionaries. This can be accomplished with the following adaptations to IHT:
1. The hard-thresholding operator is generalized to ‘remember’ previously learned cluster-level sparsity patterns, in much the same way that LSTM gates allow long-term dependencies to propagate [12] or highway networks [20] facilitate information flow unfettered to deeper layers. Practically speaking, this adaptation can be computed by passing the prior layer’s activations x(t) through linear filters followed by indicator functions, again reminiscent of how DNN gating functions are typically implemented.
2. We allow the layer weights {Ψ(t),Γ(t)} to vary from iteration to iteration t sequencing through a fixed set akin to layers of a DNN.
In [26] we show that hand-crafted versions of these changes allow IHT to provably recover maximally sparse vectors x∗ in situations where existing algorithms fail.
4 Discriminative Multi-Resolution Sparse Estimation
As implied previously, guaranteed success for most existing sparse estimation strategies hinges on the dictionary Φ having columns drawn (approximately) from a uniform distribution on the surface of a unit hypersphere, or some similar condition to ensure that subsets of columns behave approximately like an orthogonal basis. Essentially this confines the structure of the dictionary to operate on a single universal scale. The clustered dictionary model described in the previous section considers a dictionary built on two different scales, with a cluster-level distribution (coarse) and tightly-packed within-cluster details (fine). But practical dictionaries may display structure operating across a variety of scales that interleave with one another, forming a continuum among multiple levels.
When the scales are clearly demarcated, we have argued that it is possible to manually define a multi-resolution IHT-inspired algorithm that guarantees success in recovering the optimal support pattern; and indeed, IHT could be extended to handle a clustered dictionary model with nested
structures across more than two scales. However, without clearly partitioned scales it is much less obvious how one would devise an optimal IHT modification. It is in this context that learning flexible algorithm iterations is likely to be most advantageous. In fact, the situation is not at all unlike many computer vision scenarios whereby handcrafted features such as SIFT may work optimally in confined, idealized domains, while learned CNN-based features are often more effective otherwise.
Given a sufficient corpus of {x∗,y} pairs linked via some fixed Φ, we can replace manual filter construction with a learning-based approach. On this point, although we view our results from Section 3 as a convincing proof of concept, it is unlikely that there is anything intrinsically special about the specific hard-threshold operator and layer-wise construction we employed per se, as long as we allow for deep, adaptable layers that can account for structure at multiple scales. For example, we expect that it is more important to establish a robust training pipeline that avoids stalling at the hand of vanishing gradients in a deep network, than to preserve the original IHT template analogous to existing learning-based methods. It is here that we propose several deviations:
Multi-Label Classification Loss: We exploit the fact that in producing a maximally sparse vector x∗, the main challenge is estimating supp[x∗]. Once the support is obtained, computing the actual nonzero coefficients just boils down to solving a least squares problem. But any learning system will be unaware of this and could easily expend undue effort in attempting to match coefficient magnitudes at the expense of support recovery. Certainly the use of a data fit penalty of the form ‖y − Φx‖_2^2, as is adopted by nearly all sparse recovery algorithms, will expose us to this issue. Therefore we instead formulate sparse recovery as a multi-label classification problem. More specifically, instead of directly estimating x∗, we attempt to learn s∗ = [s∗1, . . . , s∗m]^T, where s∗i equals the indicator function I[x∗i ≠ 0]. For this purpose we may then incorporate a traditional multi-label classification loss function via a final softmax output layer, which forces the network to only concern itself with learning support patterns. This substitution is further justified by the fact that even with traditional IHT, the support pattern will be accurately recovered before the iterations converge exactly to x∗. Therefore we may expect that fewer layers (as well as training data) are required if all we seek is a support estimate, opening the door for weaker forms of supervision.
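As an illustration of the target construction, here is a sketch of the support labels s∗ and one plausible multi-label loss (our own instantiation using elementwise binary cross-entropy; the paper's exact output layer may differ):

import numpy as np

def support_labels(x_true):
    # s*_i = I[x*_i != 0]: the targets are the support indicators, not amplitudes.
    return (x_true != 0).astype(np.float64)

def multilabel_loss(logits, s_true, eps=1e-12):
    # Element-wise binary cross-entropy on the predicted support probabilities.
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(s_true * np.log(p + eps) + (1.0 - s_true) * np.log(1.0 - p + eps))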
Instruments for Avoiding Bad Local Solutions: Given that IHT can take many iterations to converge on challenging problems, we may expect that a relatively deep network structure will be needed to obtain exact support recovery. We must therefore take care to avoid premature convergence to local minima or areas with vanishing gradient by incorporating several recent countermeasures proposed in the DNN community. For example, the adaptive variant of IHT described previously is reminiscent of highway networks or LSTM cells, which have been proposed to allow longer range flow of gradient information to improve convergence through the use of gating functions. An even simpler version of this concept involves direct, un-gated connections that allow much deeper ‘residual’ networks to be trained [10] (which is even suggestive of the residual factor embedded in the original IHT iterations). We deploy this tool, along with batch-normalization [14] to aid convergence, for our basic feedforward pipeline, along with an alternative structure based on recurrent LSTM cells. Note that unfolded LSTM networks frequently receive a novel input for every time step, whereas here y is applied unaltered at every layer (more on this in [26]). We also replace the non-integrable hard-threshold operator with simple rectilinear (ReLu) units [17], which are functionally equivalent to one-sided soft-thresholding; this convex selection likely reduces the constellation of sub-optimal local minima during the training process.
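A heavily simplified sketch of such a feedforward block (our own; it omits batch normalization and the LSTM variant, and all shapes are illustrative): layer-specific weights, a ReLU non-linearity, an un-gated skip connection, and y re-injected at every layer.

import numpy as np

def residual_block(x, y, Psi, Gamma):
    # x and the ReLU branch are summed so information can flow to deeper layers.
    return x + np.maximum(Psi @ x + Gamma @ y, 0.0)

def forward(y, params):
    # params: a list of per-layer (Psi, Gamma) pairs; weights are not shared.
    x = np.zeros(params[0][0].shape[1])
    for Psi, Gamma in params:
        x = residual_block(x, y, Psi, Gamma)
    return x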
5 Experiments and Applications
Synthetic Tests with Correlated Dictionaries: We generate a dictionary matrix Φ ∈ Rn×m using Φ = Σ_{i=1}^{n} (1/i^2) ui vi^T, where ui ∈ Rn and vi ∈ Rm have iid elements drawn from N(0, 1). We also rescale each column of Φ to have unit `2 norm. Φ generated in this way has super-linearly decaying singular values (indicating correlation between the columns) but is not constrained to any specific structure. Many dictionaries in real applications have such a property. As a basic experiment, we generate N = 700000 ground truth samples x∗ ∈ Rm by randomly selecting d nonzero entries, with nonzero amplitudes drawn iid from the uniform distribution U[−0.5, 0.5], excluding the interval [−0.1, 0.1] to avoid small, relatively inconsequential contributions to the support pattern. We then create y ∈ Rn via y = Φx∗. As d increases, the estimation problem becomes more difficult. In fact, to guarantee success with such correlated data (and high RIP constant) requires evaluating on the order of (m choose n) linear systems of size n×n, which is infeasible even for small values, indicative of how challenging it can be to solve sparse inverse problems of any size. We set n=20 and m=100.
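A sketch of this data-generation recipe (our own reading of the description; the excluded interval is implemented by drawing a magnitude in [0.1, 0.5] and a random sign, which is equivalent):

import numpy as np

def make_dictionary(n=20, m=100, seed=0):
    rng = np.random.default_rng(seed)
    # Phi = sum_i (1/i^2) u_i v_i^T yields a rapidly decaying singular spectrum.
    Phi = sum((1.0 / (i + 1) ** 2) *
              np.outer(rng.standard_normal(n), rng.standard_normal(m))
              for i in range(n))
    return Phi / np.linalg.norm(Phi, axis=0)   # unit l2 column norms

def make_sample(Phi, d, rng):
    m = Phi.shape[1]
    x = np.zeros(m)
    support = rng.choice(m, size=d, replace=False)
    x[support] = rng.uniform(0.1, 0.5, size=d) * rng.choice([-1.0, 1.0], size=d)
    return x, Phi @ x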
We used N1 = 600000 samples for training and the remaining N2 = 100000 for testing. Echoing our arguments in Section 4, we explored both a feedforward network with residual connections [10] and a recurrent network with vanilla LSTM cells [12]. To evaluate the performance, we check whether the d ground truth nonzeros are aligned with the predicted top-d values produced by our network, a common all-or-nothing metric in the compressive sensing literature. Detailed network design, optimization setup, and alternative metrics can be found in [26].
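The all-or-nothing metric above can be written as a small helper (our own):

import numpy as np

def exact_support_match(scores, x_true):
    # Success iff the top-d entries of the network output coincide with supp(x_true).
    d = int(np.count_nonzero(x_true))
    top_d = np.argsort(np.abs(scores))[-d:]
    return set(top_d) == set(np.flatnonzero(x_true))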
Figure 1(left) shows comparisons against a battery of existing algorithms, both learning- and optimization-based. These include standard `1 minimization via ISTA iterations [2], IHT [3] (supplied with the ground truth number of nonzeros), an ISTA-based network [9], and an IHT-inspired network [23]. For both the ISTA- and IHT-based networks, we used the exact same training data described above. Note that given the correlated Φ matrix, the recovery performance of IHT, and to a lesser degree `l minimization using ISTA, is rather modest as expected given that the associated RIP constant will be quite large by construction. In contrast our two methods achieve uniformly higher accuracy, including over other learning-based methods trained with the same data. This improvement is likely the result of three significant factors: (i) Existing learning methods initialize using weights derived from the original sparse estimation algorithms, but such an initialization will be associated with locally optimal solutions in most cases with correlated dictionaries. (ii) As described in Section 3, constant weights across layers have limited capacity to unravel multi-resolution dictionary structure, especially one that is not confined to only possess some low rank correlating component. (iii) The quadratic loss function used by existing methods does not adequately focus resources on the crux of the problem, which is accurate support recovery. In contrast we adopt an initialization motivated by DNN-based training considerations, unique layer weights to handle a multi-resolution dictionary, and a multi-label classification output layer to focus on support recovery.
To further isolate essential factors affecting performance, we next consider the following changes: (1) We remove the residual connections from Res-Net. (2) We replace ReLU with hard-threshold activations. In particular, we utilize the so-called HELUσ function introduced in [23], which is a continuous and piecewise linear approximation of the scalar hard-threshold operator. (3) We use a quadratic penalty layer instead of a multi-label classification loss layer, i.e., the loss function is changed to Σ_{i=1}^{N1} ‖a(i) − y(i)‖_2^2 (where a is the output of the last fully-connected layer) during training. Figure 1(middle) displays the associated recovery percentages, where we observe that in each case performance degrades. Without the residual design, and also with the inclusion of a rigid, non-convex hard-threshold operator, local minima during training appear to be a likely culprit, consistent with observations from [10]. Likewise, use of a least-squares loss function is likely to over-emphasize the estimation of coefficient amplitudes rather than focusing on support recovery.
Finally, from a practical standpoint we may expect that the true amplitude distribution may deviate at times from the original training set. To explore robustness to such mismatch, as well as different amplitude distributions, we consider two sets of candidate data: the original data, and similarly generated data but with the uniform distribution of nonzero elements replaced with the Gaussians N(±0.3, 0.1), where the mean is selected with equal probability as either −0.3 or 0.3, thus avoiding tiny magnitudes with high probability. Figure 1(right) reports accuracies under different distributions for both training and testing, including mismatched cases. (The results are obtained using LSTM-Net, but the Res-Net showed a similar pattern.) The label ‘U2U’ refers to training and testing with the uniformly distributed amplitudes, while ‘U2N’ uses a uniform training set and a Gaussian test set. Analogous definitions apply for ‘N2N’ and ‘N2U’. In all cases we note that the performance is
quite stable across training and testing conditions. We would argue that our recasting of the problem as multi-label classification contributes, at least in part, to this robustness. The application example described next demonstrates further tolerance of training-testing set mismatches.
Practical Application - Photometric Stereo: Suppose we have q observations of a given surface point from a Lambertian scene under different lighting directions. Then the resulting measurements from a standard calibrated photometric stereo design (linear camera response function, an orthographic camera projection, and known directional light sources), denoted o ∈ Rq, can be expressed as o = ρLn, where n ∈ R3 denotes the true 3D surface normal, each row of L ∈ Rq×3 defines a lighting direction, and ρ is the diffuse albedo, acting here as a scalar multiplier [24]. If specular highlights, shadows, or other gross outliers are present, then the observations are more realistically modeled as o = ρLn + e, where e is an unknown sparse vector [13, 25]. It is apparent that, since n is unconstrained, e need not compensate for any component of o in the range of L. Given that null[L^T] is the orthogonal complement to range[L], we may consider the following problem
min_e ‖e‖_0   s.t.   Proj_{null[L^T]}(o) = Proj_{null[L^T]}(e)   (11)
which ultimately collapses to our canonical sparse estimation problem from (1), where lightinghardware-dependent correlations may be unavoidable in the implicit dictionary.
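To make the reduction explicit, a sketch (our own construction; variable names are illustrative and L is assumed to have full column rank) of how the implicit dictionary and measurements for (11) can be formed from the lighting matrix:

import numpy as np

def outlier_problem(L, o):
    # Columns of U beyond rank(L) = 3 form an orthonormal basis B of null[L^T].
    U, _, _ = np.linalg.svd(L, full_matrices=True)
    B = U[:, 3:]                       # q x (q - 3)
    # Proj_null[L^T](o) = Proj_null[L^T](e)  <=>  B^T o = B^T e,
    # so the sparse outlier e solves y = Phi e with the dictionary below.
    Phi = B.T                          # (q - 3) x q, overcomplete
    y = B.T @ o
    return Phi, y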
Following [13], we use 32-bit HDR gray-scale images of the object Bunny (256×256) with foreground masks under different lighting conditions whose directions, or rows of L, are randomly selected from a hemisphere with the object placed at the center. To apply our method, we first compute Φ using the appropriate projection operator derived from the lighting matrix L. As real-world training data is expensive to acquire, we instead use weak supervision by synthetically generating a training set as follows. First, we draw a support pattern for e randomly with cardinality d sampled uniformly from the range [d1, d2]. The values of d1 and d2 can be tuned in practice. Nonzero values of e are assigned iid random values from a Gaussian distribution whose mean and variance are also tunable. Beyond this, no attempt was made to match the true outlier distributions encountered in applications of photometric stereo. Finally, for each e we can naturally compute observations via the linear constraint in (11), which serve as candidate network inputs.
Given synthetic training data acquired in this way, we learn a network with the exact same structure and optimization parameters as in Section 5; no application-specific tuning was introduced. We then deploy the resulting network on the gray-scale Bunny images. For each surface point, we use our DNN model to approximately solve (11). Since the network output will be a probability map for the outlier support set instead of the actual values of e, we choose the 4 indices with the least probability as inliers and use them to compute n via least squares.
We compare our method against the baseline least squares estimate from [24] and `1 norm minimization. We defer more quantitative comparisons to [26]. In Figure 2, we illustrate the recovered surface normal error maps of the hardest case (fewest lighting directions). Here we observe that our DNN estimates lead to far fewer regions of significant error and the runtime is orders of magnitude faster. Overall though, this application example illustrates that weak supervision with mismatched synthetic training data can, at least for some problem domains, be sufficient to learn a quite useful sparse estimation DNN; here one that facilitates real-time 3D modeling in mobile environments.
Discussion: In this paper we have shown that deep networks with hand-crafted, multi-resolution structure can provably solve certain specific classes of sparse recovery problems where existing algorithms fail. However, much like CNN-based features can often outperform SIFT on many computer vision tasks, we argue that a discriminative approach can outperform manual structuring of layers/iterations and compensate for dictionary coherence under more general conditions.
Acknowledgements: This work was done while the first author was an intern at Microsoft Research, Beijing. It is also funded by 973-2015CB351800, NSFC-61231010, NSFC-61527804, NSFC-61421062, NSFC-61210005 and the MOE-Microsoft Key Laboratory, Peking University. | 1. What is the focus of the paper regarding sparse linear estimation?
2. What are the main contributions and novel aspects of the proposed approach?
3. How does the reviewer assess the significance and relevance of the paper's content?
4. Are there any concerns or suggestions regarding the proposed method, particularly its architecture and variations?
5. Do you have any questions regarding the clarity and quality of the paper's writing? | Review | Review
This paper considers the problem of sparse linear estimation with a structured dictionary. It shows that a trained deep neural network provides better estimates than traditional sparse estimation algorithms (including those using deep learning, [11]) that assume incoherence of the dictionary. They illustrate this for dictionaries having low-rank (or approximately low-rank) structure. The main claims are empirical, though some side theoretical insights are provided. The authors stress the importance of layer-wise independent weights. On the one hand, sparse linear estimation with structured dictionaries is an important open problem. Attacking it with trained deep neural nets is a nice idea that should be of interest to the NIPS community. On the other hand, for low-rank dictionaries it is not very surprising that multilayer neural networks work well, because being low-rank is a kind of two-layer structure. For this reason, could maybe even a two-layer architecture be sufficient for this task? Although the authors consider some variants of the neural network (changing the non-linearity of the loss function), they do not consider changing the number of layers or their width. I am also a little reluctant about the authors' focus on the RIP property, which is sufficient but not necessary for success. Spell checking is still needed. |
NIPS | Title
Maximal Sparsity with Deep Networks?
Abstract
The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
1 Introduction
Our launching point is the optimization problem
min_x ‖x‖_0   s.t.   y = Φx,   (1)
where y ∈ Rn is an observed vector, Φ ∈ Rn×m is some known, overcomplete dictionary of feature/basis vectors withm > n, and ‖·‖0 denotes the `0 norm of a vector, or a count of the number of nonzero elements. Consequently, (1) can be viewed as the search for a maximally sparse feasible vector x∗ (or approximately feasible if the constraint is relaxed). Unfortunately however, direct assault on (1) involves an intractable, combinatorial optimization process, and therefore efficient alternatives that return a maximally sparse x∗ with high probability in restricted regimes are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as `1-norm minimization [2, 5, 21], greedy approaches like orthogonal matching pursuit (OMP) [18, 22], and many flavors of iterative hard-thresholding (IHT) [3, 4].
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection [7, 8], outlier removal [6, 13], compressive sensing [5], and source localization [1, 16]. However, a fundamental weakness underlies them all: If the Gram matrix Φ>Φ has significant offdiagonal energy, indicative of strong coherence between columns of Φ, then estimation of x∗ may be extremely poor. Loosely speaking this occurs because, as higher correlation levels are present, the null-space of Φ is more likely to include large numbers of approximately sparse vectors that tend to distract existing algorithms in the feasible region, an unavoidable nuisance in many practical applications.
In this paper we consider recent developments in the field of deep learning as an entry point for improving the performance of sparse recovery algorithms. Although seemingly unrelated at first
glance, the layers of a deep neural network (DNN) can be viewed as iterations of some algorithm that have been unfolded into a network structure [9, 11]. In particular, iterative thresholding approaches such as IHT mentioned above typically involve an update rule comprised of a fixed, linear filter followed by a non-linear activation function that promotes sparsity. Consequently, algorithm execution can be interpreted as passing an input through an extremely deep network with constant weights (dependent on Φ) at every layer. This ‘unfolding’ viewpoint immediately suggests that we consider substituting discriminatively learned weights in place of those inspired by the original sparse recovery algorithm. For example, it has been argued that, given access to a sufficient number of {x∗,y} pairs, a trained network may be capable of producing quality sparse estimates with a few number of layers. This in turn can lead to a dramatically reduced computational burden relative to purely optimization-based approaches [9, 19, 23] or to enhanced non-linearities for use with traditional iterative algorithms [15].
While existing empirical results are promising, especially in terms of the reduction in computational footprint, there is as of yet no empirical demonstration of a learned deep network that can unequivocally recover maximally sparse vectors x∗ with greater accuracy than conventional, state-of-the-art optimization-based algorithms, especially with a highly coherent Φ. Nor is there supporting theoretical evidence elucidating the exact mechanism whereby learning may be expected to improve the estimation accuracy, especially in the presence of coherent dictionaries. This paper attempts to fill in some of these gaps, and our contributions can be distilled to the following points:
Quantifiable Benefits of Unfolding: We rigorously dissect the benefits of unfolding conventional sparse estimation algorithms to produce trainable deep networks. This includes a precise characterization of exactly how different architecture choices can affect the ability to improve so-called restrictive isometry property (RIP) constants, which measure the degree of disruptive correlation in Φ. This helps to quantify the limits of shared layer weights, which are the standard template of existing methods [9, 19, 23], and motivates more flexible network constructions reminiscent of LSTM cells [12] that account for multi-resolution structure in Φ in a previously unexplored fashion. Note that we defer all proofs, as well as many additional analyses and problem details, to a longer companion paper [26].
Isolation of Important Factors: Based on these theoretical insights, and a better understanding of the essential factors governing performance, we establish the degree to which it is favorable to diverge from strict conformity to any particular unfolded algorithmic script. In particular, we argue that layer-wise independent weights and/or activations are essential, while retention of the original thresholding non-linearities and squared-error loss implicit to many sparse algorithms is not. We also recast the core problem as deep multi-label classification given that optimal support pattern recovery is the primary concern. This allows us to adopt a novel training paradigm that is less sensitive to the specific distribution encountered during testing. Ultimately, we develop the first ultra-fast sparse estimation algorithm (or more precisely a learning procedure that produces such an algorithm) that can effectively deal with coherent dictionaries and adversarial RIP constants.
State-of-the-Art Empirical Performance: We apply the proposed system to a practical photometric stereo computer vision problem, where the goal is to estimate the 3D geometry of an object using only 2D photos taken from a single camera under different lighting conditions. In this context, shadows and specularities represent sparse outliers that must be simultaneously removed from ∼ 104 − 106 surface points. We achieve state-of-the-art performance using only weak supervision despite a minuscule computational budget appropriate for real-time mobile environments.
2 From Iterative Hard Thresholding (IHT) to Deep Neural Networks
Although any number of iterative algorithms could be adopted as our starting point, here we examine IHT because it is representative of many other sparse estimation paradigms and is amenable to theoretical analysis. With knowledge of an upper bound on the true cardinality, solving (1) can be replaced by the equivalent problem
min_x  (1/2)‖y − Φx‖_2^2   s.t.   ‖x‖_0 ≤ k.   (2)
IHT attempts to minimize (2) using what can be viewed as computationally-efficient projected gradient iterations [3]. Let x(t) denote the estimate of some maximally sparse x∗ after t iterations. The aggregate IHT update computes
x^(t+1) = H_k[ x^(t) − µΦ^T( Φx^(t) − y ) ],   (3)
where µ is a step-size parameter and Hk[·] is a hard-thresholding operator that sets all but the k largest values (in magnitude) of a vector to zero. For the vanilla version of IHT, the step-size µ = 1 leads to a number of recovery guarantees whereby iterating (3), starting from x(0) = 0 is guaranteed to reduce (2) at each step before eventually converging to the globally optimal solution. These results hinge on properties of Φ which relate to the coherence structure of dictionary columns as encapsulated by the following definition.
Definition 1 (Restricted Isometry Property) A dictionary Φ satisfies the Restricted Isometry Property (RIP) with constant δk[Φ] < 1 if
(1 − δ_k[Φ])‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_k[Φ])‖x‖_2^2   (4)
holds for all {x : ‖x‖0 ≤ k}.
In brief, the smaller the value of the RIP constant δk[Φ], the closer any sub-matrix of Φ with k columns is to being orthogonal (i.e., it has less correlation structure). It is now well-established that dictionaries with smaller values of δk[Φ] lead to sparse recovery problems that are inherently easier to solve. For example, in the context of IHT, it has been shown [3] that if y = Φx∗, with ‖x∗‖0 ≤ k and δ3k[Φ] < 1/ √ 32, then at iteration t of (3) we will have ‖x(t) − x∗‖2 ≤ 2−t‖x∗‖2. It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Moreover, it can be shown that this x∗ is also the unique, optimal solution to (1) [5].
The success of IHT in recovering maximally sparse solutions crucially depends on the RIP-based condition that δ3k[Φ] < 1/ √ 32, which heavily constrains the degree of correlation structure in Φ that can be tolerated. While dictionaries with columns drawn independently and uniformly from the surface of a unit hypersphere (or with elements drawn iid fromN (0, 1/n) ) will satisfy this condition with high probability provided k is small enough [6], for many/most practical problems of interest we cannot rely on this type of IHT recovery guarantee. In fact, except for randomized dictionaries in high dimensions where tight bounds exist, we cannot even compute the value of δ3k[Φ], which requires calculating the spectral norm of ( m 3k ) subsets of dictionary columns.
There are many ways nature might structure a dictionary such that IHT (or most any other existing sparse estimation algorithm) will fail. Here we examine one of the most straightforward forms of dictionary coherence that can easily disrupt performance. Consider the situation where Φ = [εA + uv^T]N, where columns of A ∈ Rn×m and u ∈ Rn are drawn iid from the surface of a unit hypersphere, while v ∈ Rm is arbitrary. Additionally, ε > 0 is a scalar and N is a diagonal normalization matrix that scales each column of Φ to have unit `2 norm. It then follows that if ε is sufficiently small, the rank-one component begins to dominate, and there is no value of 3k such that δ3k[Φ] < 1/√32. In this type of problem we hypothesize that DNNs provide a potential avenue for improvement to the extent that they might be able to compensate for disruptive correlations in Φ.
For example, at the most basic level we might consider general networks with the layer t defined by x(t+1) = f [ Ψx(t) + Γy ] , (5)
where f : Rm → Rm is a non-linear activation function, and Ψ ∈ Rm×m and Γ ∈ Rm×n are arbitrary. Moreover, given access to training pairs {x∗,y}, where x∗ is a sparse vector such that y = Φx∗, we can optimize Ψ and Γ using traditional stochastic gradient descent just like any other DNN structure. We will first precisely characterize the extent to which this adaptation affords any benefit over IHT where f(·) = Hk[·]. Later we will consider flexible, layer-specific non-linearities f (t) and parameters {Ψ(t),Γ(t)}.
3 Analysis of Adaptable Weights and Activations
For simplicity in this section we restrict ourselves to the fixed hard-threshold operator Hk[·] across all layers; however, many of the conclusions borne out of our analysis nonetheless carry over to a much wider range of activation functions f . In general it is difficult to analyze how arbitrary Ψ and Γ may improve upon the fixed parameterization from (3) where Ψ = I − Φ>Φ and Γ = Φ> (assuming µ = 1). Fortunately though, we can significantly collapse the space of potential weight matrices by including the natural requirement that if x∗ represents the true, maximally sparse solution, then it must be a fixed-point of (5). Indeed, without this stipulation the iterations could
diverge away from the globally optimal value of x, something IHT itself will never do. These considerations lead to the following:
Proposition 1 Consider a generalized IHT-based network layer given by (5) with f(·) = Hk[·] and let x∗ denote any unique, maximally sparse feasible solution to y = Φx with ‖x‖0 ≤ k. Then to ensure that any such x∗ is a fixed point it must be that Ψ = I − ΓΦ.
Although Γ remains unconstrained, this result has restricted Ψ to be a rank-n factor, parameterized by Γ, subtracted from an identity matrix. Certainly this represents a significant contraction of the space of ‘reasonable’ parameterizations for a general IHT layer. In light of Proposition 1, we may then further consider whether the added generality of Γ (as opposed to the original fixed assignment Γ = Φ>) affords any further benefit to the revised IHT update
x^(t+1) = H_k[ (I − ΓΦ)x^(t) + Γy ].   (6)
For this purpose we note that (6) can be interpreted as a projected gradient descent step for solving
min_x  (1/2)x^TΓΦx − x^TΓy   s.t.   ‖x‖_0 ≤ k.   (7)
However, if ΓΦ is not positive semi-definite, then this objective is no longer even convex, and combined with the non-convex constraint is likely to produce an even wider constellation of troublesome local minima with no clear affiliation with the global optimum of our original problem from (2). Consequently it does not immediately appear that Γ ≠ Φ^T is likely to provide any tangible benefit. However, there do exist important exceptions. The first indication of how learning a general Γ might help comes from the following result:
Proposition 2 Suppose that Γ = DΦ^T W W^T, where W is an arbitrary matrix of appropriate dimension and D is a full-rank diagonal that jointly solve
δ∗3k[Φ] ≜ inf_{W,D} δ3k[WΦD].   (8)
Moreover, assume that Φ is substituted with ΦD in (6), meaning we have simply replaced Φ with a new dictionary that has scaled columns. Given these qualifications, if y = Φx∗, with ‖x∗‖0 ≤ k and δ∗3k[Φ] < 1/√32, then at iteration t of (6)
‖D^−1 x^(t) − D^−1 x∗‖_2 ≤ 2^−t ‖D^−1 x∗‖_2.   (9)
It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Additionally, it can be guaranteed that after a finite number of iterations, the correct support pattern will be discovered. And it should be emphasized that rescaling Φ by some known diagonal D is a common prescription for sparse estimation (e.g., column normalization) that does not alter the optimal `0-norm support pattern.1
But the real advantage over regular IHT comes from the fact that δ∗3k[Φ] ≤ δ3k[Φ], and in many practical cases, δ∗3k[Φ] ≪ δ3k[Φ], which implies success can be guaranteed across a much wider range of RIP conditions. For example, if we revisit the dictionary Φ = [εA + uv^T]N, an immediate benefit can be observed. More concretely, for sufficiently small ε we argued that δ3k[Φ] > 1/√32 for all k, and consequently convergence to the optimal solution may fail. In contrast, it can be shown that δ∗3k[Φ] will remain quite small, satisfying δ∗3k[Φ] ≈ δ3k[A], implying that performance will nearly match that of an equivalent recovery problem using A (and as we discussed above, δ3k[A] is likely to be relatively small per its unique, randomized design). The following result generalizes a sufficient regime whereby this is possible:
Corollary 1 Suppose Φ = [ A+ ∆r]N , where elements of A are drawn iid fromN (0, 1/n), ∆r is any arbitrary matrix with rank[∆r] = r < n, and N is a diagonal matrix (e.g, one that enforces unit `2 column norms). Then
E(δ∗3k[Φ]) ≤ E(δ3k[Ã]),   (10)
where E denotes expectation (with respect to the random draw of A) and à denotes the matrix A with any r rows removed.
Footnote 1: Inclusion of this diagonal factor D can be equivalently viewed as relaxing Proposition 1 to hold under some fixed rescaling of Φ, i.e., an operation that preserves the optimal support pattern.
Additionally, as the size of Φ grows proportionally larger, it can be shown that with overwhelming probability δ∗3k [Φ] ≤ δ3k [ Ã ] . Overall, these results suggest that we can essentially annihilate
any potentially disruptive rank-r component ∆r at the cost of implicitly losing r measurements (linearly independent rows of A, and implicitly the corresponding elements of y). Therefore, at least provided that r is sufficiently small such that δ3k [ Ã ] ≈ δ3k [A], we can indeed be confident
that a modified form of IHT can perform much like a system with an ideal RIP constant. And of course in practice we may not know how Φ decomposes as some Φ ≈ [ A+ ∆r]N ; however, to the extent that this approximation can possibly hold, the RIP constant can be improved nonetheless.
It should be noted that globally solving (8) is non-differentiable and intractable, but this is the whole point of incorporating a DNN network to begin with. If we have access to a large number of training pairs {x∗,y} generated using the true Φ, then during the course of the learning process a useful W and D can be implicitly estimated such that a maximal number of sparse vectors can be successfully recovered. Of course we will experience diminishing marginal returns as more non-ideal components enter the picture. In fact, it is not difficult to describe a slightly more sophisticated scenario such that use of layer-wise constant weights and activations are no longer capable of lowering δ3k[Φ] significantly at all, portending failure when it comes to accurate sparse recovery.
One such example is a clustered dictionary model (which we describe in detail in [26]), whereby columns of Φ are grouped into a number of tight clusters with minimal angular dispersion. While the clusters themselves may be well-separated, the correlation within clusters can be arbitrarily large. In some sense this model represents the simplest partitioning of dictionary column correlation structure into two scales: the inter- and intra-cluster structures. Assuming the number of such clusters is larger than n, then layer-wise constant weights and activations are unlikely to provide adequate relief, since the implicit ∆r factor described above will be full rank.
Fortunately, simple adaptations of IHT, which are reflective of many generic DNN structures, can remedy the problem. The core principle is to design a network such that earlier layers/iterations are tasked with exposing the correct support at the cluster level, without concern for accuracy within each cluster. Once the correct cluster support has been obtained, later layers can then be charged with estimating the fine-grain details of within-cluster support. We believe this type of multi-resolution sparse estimation is essential when dealing with highly coherent dictionaries. This can be accomplished with the following adaptations to IHT:
1. The hard-thresholding operator is generalized to ‘remember’ previously learned cluster-level sparsity patterns, in much the same way that LSTM gates allow long-term dependencies to propagate [12] or highway networks [20] facilitate information flow unfettered to deeper layers. Practically speaking, this adaptation can be computed by passing the prior layer’s activations x(t) through linear filters followed by indicator functions, again reminiscent of how DNN gating functions are typically implemented.
2. We allow the layer weights {Ψ(t),Γ(t)} to vary from iteration to iteration t sequencing through a fixed set akin to layers of a DNN.
In [26] we show that hand-crafted versions of these changes allow IHT to provably recover maximally sparse vectors x∗ in situations where existing algorithms fail.
4 Discriminative Multi-Resolution Sparse Estimation
As implied previously, guaranteed success for most existing sparse estimation strategies hinges on the dictionary Φ having columns drawn (approximately) from a uniform distribution on the surface of a unit hypersphere, or some similar condition to ensure that subsets of columns behave approximately like an orthogonal basis. Essentially this confines the structure of the dictionary to operate on a single universal scale. The clustered dictionary model described in the previous section considers a dictionary built on two different scales, with a cluster-level distribution (coarse) and tightly-packed within-cluster details (fine). But practical dictionaries may display structure operating across a variety of scales that interleave with one another, forming a continuum among multiple levels.
When the scales are clearly demarcated, we have argued that it is possible to manually define a multi-resolution IHT-inspired algorithm that guarantees success in recovering the optimal support pattern; and indeed, IHT could be extended to handle a clustered dictionary model with nested
structures across more than two scales. However, without clearly partitioned scales it is much less obvious how one would devise an optimal IHT modification. It is in this context that learning flexible algorithm iterations is likely to be most advantageous. In fact, the situation is not at all unlike many computer vision scenarios whereby handcrafted features such as SIFT may work optimally in confined, idealized domains, while learned CNN-based features are often more effective otherwise.
Given a sufficient corpus of {x∗,y} pairs linked via some fixed Φ, we can replace manual filter construction with a learning-based approach. On this point, although we view our results from Section 3 as a convincing proof of concept, it is unlikely that there is anything intrinsically special about the specific hard-threshold operator and layer-wise construction we employed per se, as long as we allow for deep, adaptable layers that can account for structure at multiple scales. For example, we expect that it is more important to establish a robust training pipeline that avoids stalling at the hand of vanishing gradients in a deep network, than to preserve the original IHT template analogous to existing learning-based methods. It is here that we propose several deviations:
Multi-Label Classification Loss: We exploit the fact that in producing a maximally sparse vector x∗, the main challenge is estimating supp[x∗]. Once the support is obtained, computing the actual nonzero coefficients just boils down to solving a least squares problem. But any learning system will be unaware of this and could easily expend undue effort in attempting to match coefficient magnitudes at the expense of support recovery. Certainly the use of a data fit penalty of the form ‖y − Φx‖₂², as is adopted by nearly all sparse recovery algorithms, will expose us to this issue. Therefore we instead formulate sparse recovery as a multi-label classification problem. More specifically, instead of directly estimating x∗, we attempt to learn s∗ = [s∗1, . . . , s∗m]⊤, where s∗i equals the indicator function I[x∗i ≠ 0]. For this purpose we may then incorporate a traditional multi-label classification loss function via a final softmax output layer, which forces the network to only concern itself with learning support patterns. This substitution is further justified by the fact that even with traditional IHT, the support pattern will be accurately recovered before the iterations converge exactly to x∗. Therefore we may expect that fewer layers (as well as training data) are required if all we seek is a support estimate, opening the door for weaker forms of supervision.
Instruments for Avoiding Bad Local Solutions: Given that IHT can take many iterations to converge on challenging problems, we may expect that a relatively deep network structure will be needed to obtain exact support recovery. We must therefore take care to avoid premature convergence to local minima or areas with vanishing gradient by incorporating several recent countermeasures proposed in the DNN community. For example, the adaptive variant of IHT described previously is reminiscent of highway networks or LSTM cells, which have been proposed to allow longer range flow of gradient information to improve convergence through the use of gating functions. An even simpler version of this concept involves direct, un-gated connections that allow much deeper ‘residual’ networks to be trained [10] (which is even suggestive of the residual factor embedded in the original IHT iterations). We deploy this tool, along with batch-normalization [14] to aid convergence, for our basic feedforward pipeline, along with an alternative structure based on recurrent LSTM cells. Note that unfolded LSTM networks frequently receive a novel input for every time step, whereas here y is applied unaltered at every layer (more on this in [26]). We also replace the non-integrable hard-threshold operator with simple rectified linear (ReLU) units [17], which are functionally equivalent to one-sided soft-thresholding; this convex selection likely reduces the constellation of sub-optimal local minima during the training process.
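As a rough illustration of the kind of feedforward pipeline described above, the sketch below stacks fully-connected residual blocks with batch normalization and ReLU and trains against the support indicators s∗. The hidden width, depth, and the sigmoid/BCE multi-label head are illustrative choices (the text above mentions a softmax output layer), not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Fully-connected residual block with batch normalization and ReLU."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.bn = nn.BatchNorm1d(dim)

    def forward(self, h):
        return torch.relu(h + self.bn(self.fc(h)))   # un-gated skip connection

class SupportNet(nn.Module):
    """Maps an observation y in R^n to logits over the m support indicators s*."""
    def __init__(self, n, m, hidden=256, depth=10):
        super().__init__()
        self.inp = nn.Linear(n, hidden)
        self.blocks = nn.Sequential(*[ResBlock(hidden) for _ in range(depth)])
        self.out = nn.Linear(hidden, m)

    def forward(self, y):
        return self.out(self.blocks(torch.relu(self.inp(y))))

# One standard multi-label objective on the support indicators:
# net = SupportNet(n=20, m=100); loss = nn.BCEWithLogitsLoss()(net(y_batch), s_batch)
```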
5 Experiments and Applications
Synthetic Tests with Correlated Dictionaries: We generate a dictionary matrix Φ ∈ Rn×m using Φ = ∑_{i=1}^{n} (1/i²) u_i v_i^⊤, where ui ∈ Rn and vi ∈ Rm have iid elements drawn from N (0, 1). We also rescale each column of Φ to have unit `2 norm. Φ generated in this way has super-linear decaying singular values (indicating correlation between the columns) but is not constrained to any specific structure. Many dictionaries in real applications have such a property. As a basic experiment, we generate N = 700000 ground truth samples x∗ ∈ Rm by randomly selecting d nonzero entries, with nonzero amplitudes drawn iid from the uniform distribution U [−0.5, 0.5], excluding the interval [−0.1, 0.1] to avoid small, relatively inconsequential contributions to the support pattern. We then create y ∈ Rn via y = Φx∗. As d increases, the estimation problem becomes more difficult. In fact, to guarantee success with such correlated data (and high RIP constant) requires evaluating on the order of (m choose n) linear systems of size n×n, which is infeasible even for small values, indicative of how challenging it can be to solve sparse inverse problems of any size. We set n = 20 and m = 100.
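A minimal NumPy sketch of this data-generation protocol might look as follows; the sparsity level d is fixed to an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 20, 100, 5                                   # d is an illustrative sparsity level

# Correlated dictionary: Phi = sum_{i=1}^{n} (1/i^2) u_i v_i^T, then unit-norm columns.
Phi = sum((1.0 / (i + 1) ** 2) * np.outer(rng.standard_normal(n), rng.standard_normal(m))
          for i in range(n))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)

def sample_pair():
    """One (x*, y) pair with d nonzeros whose magnitudes lie in [0.1, 0.5]."""
    x = np.zeros(m)
    support = rng.choice(m, size=d, replace=False)
    x[support] = rng.choice([-1.0, 1.0], size=d) * rng.uniform(0.1, 0.5, size=d)
    return x, Phi @ x
```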
We used N1 = 600000 samples for training and the remaining N2 = 100000 for testing. Echoing our arguments in Section 4, we explored both a feedforward network with residual connections [10] and a recurrent network with vanilla LSTM cells [12]. To evaluate the performance, we check whether the d ground truth nonzeros are aligned with the predicted top-d values produced by our network, a common all-or-nothing metric in the compressive sensing literature. Detailed network design, optimization setup, and alternative metrics can be found in [26].
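The all-or-nothing metric can be computed as below, where `scores` stands for either the network's per-coordinate support probabilities or the coefficient magnitudes produced by a baseline algorithm.

```python
import numpy as np

def exact_support_recovery(x_true, scores, d):
    """All-or-nothing metric: do the d largest |scores| coincide with supp(x_true)?"""
    true_support = set(np.flatnonzero(x_true).tolist())
    top_d = set(np.argsort(np.abs(scores))[-d:].tolist())
    return true_support == top_d
```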
Figure 1(left) shows comparisons against a battery of existing algorithms, both learning- and optimization-based. These include standard `1 minimization via ISTA iterations [2], IHT [3] (supplied with the ground truth number of nonzeros), an ISTA-based network [9], and an IHT-inspired network [23]. For both the ISTA- and IHT-based networks, we used the exact same training data described above. Note that given the correlated Φ matrix, the recovery performance of IHT, and to a lesser degree `1 minimization using ISTA, is rather modest as expected given that the associated RIP constant will be quite large by construction. In contrast our two methods achieve uniformly higher accuracy, including over other learning-based methods trained with the same data. This improvement is likely the result of three significant factors: (i) Existing learning methods initialize using weights derived from the original sparse estimation algorithms, but such an initialization will be associated with locally optimal solutions in most cases with correlated dictionaries. (ii) As described in Section 3, constant weights across layers have limited capacity to unravel multi-resolution dictionary structure, especially one that is not confined to only possess some low rank correlating component. (iii) The quadratic loss function used by existing methods does not adequately focus resources on the crux of the problem, which is accurate support recovery. In contrast we adopt an initialization motivated by DNN-based training considerations, unique layer weights to handle a multi-resolution dictionary, and a multi-label classification output layer to focus on support recovery.
To further isolate essential factors affecting performance, we next consider the following changes: (1) We remove the residual connections from Res-Net. (2) We replace ReLU with hard-threshold activations. In particular, we utilize the so-called HELUσ function introduced in [23], which is a continuous and piecewise linear approximation of the scalar hard-threshold operator. (3) We use a quadratic penalty layer instead of a multi-label classification loss layer, i.e., the loss function is changed to ∑_{i=1}^{N1} ‖a(i) − y(i)‖₂² (where a is the output of the last fully-connected layer) during training. Figure 1(middle) displays the associated recovery percentages, where we observe that in each case performance degrades. Without the residual design, and also with the inclusion of a rigid, non-convex hard-threshold operator, local minima during training appear to be a likely culprit, consistent with observations from [10]. Likewise, use of a least-squares loss function is likely to over-emphasize the estimation of coefficient amplitudes rather than focusing on support recovery.
Finally, from a practical standpoint we may expect that the true amplitude distribution may deviate at times from the original training set. To explore robustness to such mismatch, as well as different amplitude distributions, we consider two sets of candidate data: the original data, and similarly generated data but with the uniform distribution of nonzero elements replaced with the Gaussians N (±0.3, 0.1), where the mean is selected with equal probability as either −0.3 or 0.3, thus avoiding tiny magnitudes with high probability. Figure 1(right) reports accuracies under different distributions for both training and testing, including mismatched cases. (The results are obtained using LSTM-Net, but the Res-Net showed a similar pattern.) The label ‘U2U’ refers to training and testing with the uniformly distributed amplitudes, while ‘U2N’ uses a uniform training set and a Gaussian test set. Analogous definitions apply for ‘N2N’ and ‘N2U’. In all cases we note that the performance is
quite stable across training and testing conditions. We would argue that our recasting of the problem as multi-label classification contributes, at least in part, to this robustness. The application example described next demonstrates further tolerance of training-testing set mismatches.
Practical Application - Photometric Stereo: Suppose we have q observations of a given surface point from a Lambertian scene under different lighting directions. Then the resulting measurements from a standard calibrated photometric stereo design (linear camera response function, an orthographic camera projection, and known directional light sources), denoted o ∈ Rq, can be expressed as o = ρLn, where n ∈ R3 denotes the true 3D surface normal, each row of L ∈ Rq×3 defines a lighting direction, and ρ is the diffuse albedo, acting here as a scalar multiplier [24]. If specular highlights, shadows, or other gross outliers are present, then the observations are more realistically modeled as o = ρLn + e, where e is an unknown sparse vector [13, 25]. It is apparent that, since n is unconstrained, e need not compensate for any component of o in the range of L. Given that null[L⊤] is the orthogonal complement to range[L], we may consider the following problem
min_e ‖e‖₀   s.t.   Proj_null[L⊤](o) = Proj_null[L⊤](e)    (11)
which ultimately collapses to our canonical sparse estimation problem from (1), where lighting-hardware-dependent correlations may be unavoidable in the implicit dictionary.
Following [13], we use 32-bit HDR gray-scale images of the object Bunny (256×256) with foreground masks under different lighting conditions whose directions, or rows of L, are randomly selected from a hemisphere with the object placed at the center. To apply our method, we first compute Φ using the appropriate projection operator derived from the lighting matrix L. As real-world training data is expensive to acquire, we instead use weak supervision by synthetically generating a training set as follows. First, we draw a support pattern for e randomly with cardinality d sampled uniformly from the range [d1, d2]. The values of d1 and d2 can be tuned in practice. Nonzero values of e are assigned iid random values from a Gaussian distribution whose mean and variance are also tunable. Beyond this, no attempt was made to match the true outlier distributions encountered in applications of photometric stereo. Finally, for each e we can naturally compute observations via the linear constraint in (11), which serve as candidate network inputs.
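The following sketch shows one way to assemble such weakly supervised training pairs: the implicit dictionary is taken as an orthonormal basis of null[L⊤] applied to the observations, and the number of lights, cardinality range, and Gaussian parameters are illustrative placeholders for the tunable quantities mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 12                                              # number of lighting directions (illustrative)

# Random lighting directions drawn from the upper hemisphere.
L = rng.standard_normal((q, 3))
L[:, 2] = np.abs(L[:, 2])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Orthonormal basis of null[L^T]; the implicit dictionary acting on e is Phi = B^T.
U, _, _ = np.linalg.svd(L, full_matrices=True)
Phi = U[:, 3:].T                                    # (q - 3) x q

def synth_training_pair(d_range=(1, 4), mean=0.3, std=0.2):
    """Weakly supervised sample: sparse outlier vector e, its projected observation,
    and the support labels the network is trained to predict."""
    e = np.zeros(q)
    d = rng.integers(d_range[0], d_range[1] + 1)
    idx = rng.choice(q, size=d, replace=False)
    e[idx] = rng.choice([-1.0, 1.0], size=d) * rng.normal(mean, std, size=d)
    return Phi @ e, (e != 0).astype(float)
```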
Given synthetic training data acquired in this way, we learn a network with the exact same structure and optimization parameters as in Section 5; no application-specific tuning was introduced. We then deploy the resulting network on the gray-scale Bunny images. For each surface point, we use our DNN model to approximately solve (11). Since the network output will be a probability map for the outlier support set instead of the actual values of e, we choose the 4 indices with the least probability as inliers and use them to compute n via least squares.
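A compact sketch of that inference step, assuming the network's per-observation outlier probabilities are already available:

```python
import numpy as np

def estimate_normal(o, L, outlier_prob, n_inliers=4):
    """Use the observations deemed least likely to be outliers, then recover the
    (albedo-scaled) surface normal by least squares and normalize it."""
    inliers = np.argsort(outlier_prob)[:n_inliers]     # smallest outlier probability
    rho_n, *_ = np.linalg.lstsq(L[inliers], o[inliers], rcond=None)
    return rho_n / np.linalg.norm(rho_n)
```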
We compare our method against the baseline least squares estimate from [24] and `1 norm minimization. We defer more quantitative comparisons to [26]. In Figure 2, we illustrate the recovered surface normal error maps of the hardest case (fewest lighting directions). Here we observe that our DNN estimates lead to far fewer regions of significant error and the runtime is orders of magnitude faster. Overall though, this application example illustrates that weak supervision with mismatched synthetic training data can, at least for some problem domains, be sufficient to learn a quite useful sparse estimation DNN; here one that facilitates real-time 3D modeling in mobile environments.
Discussion: In this paper we have shown that deep networks with hand-crafted, multi-resolution structure can provably solve certain specific classes of sparse recovery problems where existing algorithms fail. However, much like CNN-based features can often outperform SIFT on many computer vision tasks, we argue that a discriminative approach can outperform manual structuring of layers/iterations and compensate for dictionary coherence under more general conditions.
Acknowledgements: This work was done while the first author was an intern at Microsoft Research, Beijing. It is also funded by 973-2015CB351800, NSFC-61231010, NSFC-61527804, NSFC-61421062, NSFC-61210005 and the MOE-Microsoft Key Laboratory, Peking University. | 1. What is the focus of the paper regarding deep neural networks?
2. How does the proposed approach compare to other existing models, both hand-crafted and learning-based?
3. Can you provide examples or explanations of how the proposed framework can be applied to solve problems that are difficult for shallow models to handle?
4. Are there any suggestions for improving the presentation of the paper, such as adding figures or clarifying certain points?
5. Are there any minor errors or typos in the review that should be addressed? | Review | Review
This manuscript presents a generic deep neural network (DNN) solution for the l_0 norm sparse coding problem. The authors use Iterative Hard Thresholding (IHT) as an example to justify the soundness of an alternative DNN approach, including both hand-crafted and learning-based models. Computational experiments on synthetic data and real data show better results than existing models such as IHT and l_1 norm sparse coding. Overall, this paper proposes a theoretically sound solution for the recovery of l_0 norm sparsity. Since the derivation of this framework is quite general, more potential models can be designed with the inspiration of this work. This work will be quite interesting for reminding machine learning scientists to think of deep learning methods for problems hardly solvable by shallow models. The presentation of this manuscript could be improved by providing a figure in the main manuscript showing the structures of the derived DNNs (one is provided in the supplementary file). Minors: 1. Line 69: the the -> the 2. Line 80: Thesholding -> Thresholding 3. Line 102: recovery -> recover 4. Could not understand line 112. 5. Line 155: matrix D should be in bold face. 6. Line 338: an an -> an 7. Line 367: Discussion/Conclusion should be in a level-1 section.
NIPS | Title
Maximal Sparsity with Deep Networks?
Abstract
The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
1 Introduction
Our launching point is the optimization problem
min_x ‖x‖₀   s.t.   y = Φx,    (1)
where y ∈ Rn is an observed vector, Φ ∈ Rn×m is some known, overcomplete dictionary of feature/basis vectors with m > n, and ‖·‖0 denotes the `0 norm of a vector, or a count of the number of nonzero elements. Consequently, (1) can be viewed as the search for a maximally sparse feasible vector x∗ (or approximately feasible if the constraint is relaxed). Unfortunately however, direct assault on (1) involves an intractable, combinatorial optimization process, and therefore efficient alternatives that return a maximally sparse x∗ with high probability in restricted regimes are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as `1-norm minimization [2, 5, 21], greedy approaches like orthogonal matching pursuit (OMP) [18, 22], and many flavors of iterative hard-thresholding (IHT) [3, 4].
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection [7, 8], outlier removal [6, 13], compressive sensing [5], and source localization [1, 16]. However, a fundamental weakness underlies them all: If the Gram matrix Φ>Φ has significant off-diagonal energy, indicative of strong coherence between columns of Φ, then estimation of x∗ may be extremely poor. Loosely speaking this occurs because, as higher correlation levels are present, the null-space of Φ is more likely to include large numbers of approximately sparse vectors that tend to distract existing algorithms in the feasible region, an unavoidable nuisance in many practical applications.
In this paper we consider recent developments in the field of deep learning as an entry point for improving the performance of sparse recovery algorithms. Although seemingly unrelated at first
glance, the layers of a deep neural network (DNN) can be viewed as iterations of some algorithm that have been unfolded into a network structure [9, 11]. In particular, iterative thresholding approaches such as IHT mentioned above typically involve an update rule comprised of a fixed, linear filter followed by a non-linear activation function that promotes sparsity. Consequently, algorithm execution can be interpreted as passing an input through an extremely deep network with constant weights (dependent on Φ) at every layer. This ‘unfolding’ viewpoint immediately suggests that we consider substituting discriminatively learned weights in place of those inspired by the original sparse recovery algorithm. For example, it has been argued that, given access to a sufficient number of {x∗,y} pairs, a trained network may be capable of producing quality sparse estimates with a small number of layers. This in turn can lead to a dramatically reduced computational burden relative to purely optimization-based approaches [9, 19, 23] or to enhanced non-linearities for use with traditional iterative algorithms [15].
While existing empirical results are promising, especially in terms of the reduction in computational footprint, there is as of yet no empirical demonstration of a learned deep network that can unequivocally recover maximally sparse vectors x∗ with greater accuracy than conventional, state-of-the-art optimization-based algorithms, especially with a highly coherent Φ. Nor is there supporting theoretical evidence elucidating the exact mechanism whereby learning may be expected to improve the estimation accuracy, especially in the presence of coherent dictionaries. This paper attempts to fill in some of these gaps, and our contributions can be distilled to the following points:
Quantifiable Benefits of Unfolding: We rigorously dissect the benefits of unfolding conventional sparse estimation algorithms to produce trainable deep networks. This includes a precise characterization of exactly how different architecture choices can affect the ability to improve so-called restricted isometry property (RIP) constants, which measure the degree of disruptive correlation in Φ. This helps to quantify the limits of shared layer weights, which are the standard template of existing methods [9, 19, 23], and motivates more flexible network constructions reminiscent of LSTM cells [12] that account for multi-resolution structure in Φ in a previously unexplored fashion. Note that we defer all proofs, as well as many additional analyses and problem details, to a longer companion paper [26].
Isolation of Important Factors: Based on these theoretical insights, and a better understanding of the essential factors governing performance, we establish the degree to which it is favorable to diverge from strict conformity to any particular unfolded algorithmic script. In particular, we argue that layer-wise independent weights and/or activations are essential, while retention of the original thresholding non-linearities and squared-error loss implicit to many sparse algorithms is not. We also recast the core problem as deep multi-label classification given that optimal support pattern recovery is the primary concern. This allows us to adopt a novel training paradigm that is less sensitive to the specific distribution encountered during testing. Ultimately, we develop the first, ultra-fast sparse estimation algorithm (or more precisely a learning procedure that produces such an algorithm) that can effectively deal with coherent dictionaries and adversarial RIP constants.
State-of-the-Art Empirical Performance: We apply the proposed system to a practical photometric stereo computer vision problem, where the goal is to estimate the 3D geometry of an object using only 2D photos taken from a single camera under different lighting conditions. In this context, shadows and specularities represent sparse outliers that must be simultaneously removed from ∼10⁴–10⁶ surface points. We achieve state-of-the-art performance using only weak supervision despite a minuscule computational budget appropriate for real-time mobile environments.
2 From Iterative Hard Thresholding (IHT) to Deep Neural Networks
Although any number of iterative algorithms could be adopted as our starting point, here we examine IHT because it is representative of many other sparse estimation paradigms and is amenable to theoretical analysis. With knowledge of an upper bound on the true cardinality, solving (1) can be replaced by the equivalent problem
min_x ½‖y − Φx‖₂²   s.t.   ‖x‖₀ ≤ k.    (2)
IHT attempts to minimize (2) using what can be viewed as computationally-efficient projected gradient iterations [3]. Let x(t) denote the estimate of some maximally sparse x∗ after t iterations. The aggregate IHT update computes
x(t+1) = Hk [ x(t) − µΦ> ( Φx(t) − y )] , (3)
where µ is a step-size parameter and Hk[·] is a hard-thresholding operator that sets all but the k largest values (in magnitude) of a vector to zero. For the vanilla version of IHT, the step-size µ = 1 leads to a number of recovery guarantees whereby iterating (3), starting from x(0) = 0 is guaranteed to reduce (2) at each step before eventually converging to the globally optimal solution. These results hinge on properties of Φ which relate to the coherence structure of dictionary columns as encapsulated by the following definition.
Definition 1 (Restricted Isometry Property) A dictionary Φ satisfies the Restricted Isometry Property (RIP) with constant δk[Φ] < 1 if
(1 − δk[Φ])‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δk[Φ])‖x‖₂²    (4)
holds for all {x : ‖x‖0 ≤ k}.
In brief, the smaller the value of the RIP constant δk[Φ], the closer any sub-matrix of Φ with k columns is to being orthogonal (i.e., it has less correlation structure). It is now well-established that dictionaries with smaller values of δk[Φ] lead to sparse recovery problems that are inherently easier to solve. For example, in the context of IHT, it has been shown [3] that if y = Φx∗, with ‖x∗‖0 ≤ k and δ3k[Φ] < 1/√32, then at iteration t of (3) we will have ‖x(t) − x∗‖2 ≤ 2⁻ᵗ‖x∗‖2. It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Moreover, it can be shown that this x∗ is also the unique, optimal solution to (1) [5].
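For reference, the update (3) amounts to only a few lines of NumPy; the iteration count below is an arbitrary illustrative choice.

```python
import numpy as np

def iht(y, Phi, k, num_iters=100, mu=1.0):
    """Vanilla IHT: gradient step on the quadratic data fit, then keep only the
    k largest-magnitude coordinates (the operator H_k)."""
    x = np.zeros(Phi.shape[1])
    for _ in range(num_iters):
        z = x - mu * Phi.T @ (Phi @ x - y)
        x = np.zeros_like(z)
        idx = np.argsort(np.abs(z))[-k:]
        x[idx] = z[idx]
    return x
```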
The success of IHT in recovering maximally sparse solutions crucially depends on the RIP-based condition that δ3k[Φ] < 1/√32, which heavily constrains the degree of correlation structure in Φ that can be tolerated. While dictionaries with columns drawn independently and uniformly from the surface of a unit hypersphere (or with elements drawn iid from N(0, 1/n)) will satisfy this condition with high probability provided k is small enough [6], for many/most practical problems of interest we cannot rely on this type of IHT recovery guarantee. In fact, except for randomized dictionaries in high dimensions where tight bounds exist, we cannot even compute the value of δ3k[Φ], which requires calculating the spectral norm of (m choose 3k) subsets of dictionary columns.
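To make the combinatorial nature of this quantity explicit, here is a brute-force evaluation of δk that is only usable for toy-sized dictionaries, which is precisely the point being made above.

```python
from itertools import combinations
import numpy as np

def rip_constant(Phi, k):
    """Exact delta_k by enumerating every k-column submatrix; only feasible for
    toy-sized dictionaries because the number of subsets grows combinatorially."""
    m = Phi.shape[1]
    delta = 0.0
    for cols in combinations(range(m), k):
        eig = np.linalg.eigvalsh(Phi[:, cols].T @ Phi[:, cols])
        delta = max(delta, eig[-1] - 1.0, 1.0 - eig[0])
    return delta
```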
There are many ways nature might structure a dictionary such that IHT (or most any other existing sparse estimation algorithm) will fail. Here we examine one of the most straightforward forms of dictionary coherence that can easily disrupt performance. Consider the situation where Φ = [εA + uv⊤]N, where columns of A ∈ Rn×m and u ∈ Rn are drawn iid from the surface of a unit hypersphere, while v ∈ Rm is arbitrary. Additionally, ε > 0 is a scalar and N is a diagonal normalization matrix that scales each column of Φ to have unit `2 norm. It then follows that if ε is sufficiently small, the rank-one component begins to dominate, and there is no value of 3k such that δ3k[Φ] < 1/√32. In this type of problem we hypothesize that DNNs provide a potential avenue for improvement to the extent that they might be able to compensate for disruptive correlations in Φ.
For example, at the most basic level we might consider general networks with the layer t defined by x(t+1) = f [ Ψx(t) + Γy ] , (5)
where f : Rm → Rm is a non-linear activation function, and Ψ ∈ Rm×m and Γ ∈ Rm×n are arbitrary. Moreover, given access to training pairs {x∗,y}, where x∗ is a sparse vector such that y = Φx∗, we can optimize Ψ and Γ using traditional stochastic gradient descent just like any other DNN structure. We will first precisely characterize the extent to which this adaptation affords any benefit over IHT where f(·) = Hk[·]. Later we will consider flexible, layer-specific non-linearities f (t) and parameters {Ψ(t),Γ(t)}.
3 Analysis of Adaptable Weights and Activations
For simplicity in this section we restrict ourselves to the fixed hard-threshold operator Hk[·] across all layers; however, many of the conclusions borne out of our analysis nonetheless carry over to a much wider range of activation functions f . In general it is difficult to analyze how arbitrary Ψ and Γ may improve upon the fixed parameterization from (3) where Ψ = I − Φ>Φ and Γ = Φ> (assuming µ = 1). Fortunately though, we can significantly collapse the space of potential weight matrices by including the natural requirement that if x∗ represents the true, maximally sparse solution, then it must be a fixed-point of (5). Indeed, without this stipulation the iterations could
diverge away from the globally optimal value of x, something IHT itself will never do. These considerations lead to the following:
Proposition 1 Consider a generalized IHT-based network layer given by (5) with f(·) = Hk[·] and let x∗ denote any unique, maximally sparse feasible solution to y = Φx with ‖x‖0 ≤ k. Then to ensure that any such x∗ is a fixed point it must be that Ψ = I − ΓΦ.
Although Γ remains unconstrained, this result has restricted Ψ to be a rank-n factor, parameterized by Γ, subtracted from an identity matrix. Certainly this represents a significant contraction of the space of ‘reasonable’ parameterizations for a general IHT layer. In light of Proposition 1, we may then further consider whether the added generality of Γ (as opposed to the original fixed assignment Γ = Φ>) affords any further benefit to the revised IHT update
x(t+1) = Hk [ (I − ΓΦ)x(t) + Γy ] . (6)
For this purpose we note that (6) can be interpreted as a projected gradient descent step for solving
min_x ½ x⊤ΓΦx − x⊤Γy   s.t.   ‖x‖₀ ≤ k.    (7)
However, if ΓΦ is not positive semi-definite, then this objective is no longer even convex, and combined with the non-convex constraint is likely to produce an even wider constellation of troublesome local minima with no clear affiliation with the global optimum of our original problem from (2). Consequently it does not immediately appear that Γ ≠ Φ⊤ is likely to provide any tangible benefit. However, there do exist important exceptions. The first indication of how learning a general Γ might help comes from the following result:
Proposition 2 Suppose that Γ = DΦ>WW>, where W is an arbitrary matrix of appropriate dimension and D is a full-rank diagonal that jointly solve
δ∗3k[Φ] ≜ inf_{W,D} δ3k[WΦD].    (8)
Moreover, assume that Φ is substituted with ΦD in (6), meaning we have simply replaced Φ with a new dictionary that has scaled columns. Given these qualifications, if y = Φx∗, with ‖x∗‖0 ≤ k and δ∗3k [Φ] < 1/ √ 32, then at iteration t of (6)
‖D⁻¹x(t) − D⁻¹x∗‖₂ ≤ 2⁻ᵗ‖D⁻¹x∗‖₂.    (9)
It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Additionally, it can be guaranteed that after a finite number of iterations, the correct support pattern will be discovered. And it should be emphasized that rescaling Φ by some known diagonal D is a common prescription for sparse estimation (e.g., column normalization) that does not alter the optimal `0-norm support pattern.1
But the real advantage over regular IHT comes from the fact that δ∗3k[Φ] ≤ δ3k[Φ], and in many practical cases, δ∗3k[Φ] ≪ δ3k[Φ], which implies success can be guaranteed across a much wider range of RIP conditions. For example, if we revisit the dictionary Φ = [εA + uv⊤]N, an immediate benefit can be observed. More concretely, for sufficiently small ε we argued that δ3k[Φ] > 1/√32 for all k, and consequently convergence to the optimal solution may fail. In contrast, it can be shown that δ∗3k[Φ] will remain quite small, satisfying δ∗3k[Φ] ≈ δ3k[A], implying that performance will nearly match that of an equivalent recovery problem using A (and as we discussed above, δ3k[A] is likely to be relatively small per its unique, randomized design). The following result generalizes a sufficient regime whereby this is possible:
Corollary 1 Suppose Φ = [A + ∆r]N, where elements of A are drawn iid from N(0, 1/n), ∆r is any arbitrary matrix with rank[∆r] = r < n, and N is a diagonal matrix (e.g., one that enforces unit `2 column norms). Then
E (δ∗3k [Φ]) ≤ E ( δ3k [ Ã ]) , (10)
where à denotes the matrix A with any r rows removed.

1 Inclusion of this diagonal factor D can be equivalently viewed as relaxing Proposition 1 to hold under some fixed rescaling of Φ, i.e., an operation that preserves the optimal support pattern.
Additionally, as the size of Φ grows proportionally larger, it can be shown that with overwhelming probability δ∗3k [Φ] ≤ δ3k [ Ã ] . Overall, these results suggest that we can essentially annihilate
any potentially disruptive rank-r component ∆r at the cost of implicitly losing r measurements (linearly independent rows of A, and implicitly the corresponding elements of y). Therefore, at least provided that r is sufficiently small such that δ3k [ Ã ] ≈ δ3k [A], we can indeed be confident
that a modified form of IHT can perform much like a system with an ideal RIP constant. And of course in practice we may not know how Φ decomposes as some Φ ≈ [ A+ ∆r]N ; however, to the extent that this approximation can possibly hold, the RIP constant can be improved nonetheless.
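The rank-one case can be checked numerically: projecting out u with W = I − uu⊤ and renormalizing columns removes most of the disruptive correlation. The sketch below uses mutual coherence as a cheap proxy for the RIP constant, and the problem sizes and ε are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, eps = 8, 16, 1e-2                              # illustrative sizes and scaling

A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0, keepdims=True)        # roughly unit-hypersphere columns
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
v = rng.standard_normal(m)

Phi = eps * A + np.outer(u, v)                       # rank-one term dominates for small eps
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)    # the normalization N

W = np.eye(n) - np.outer(u, u)                       # annihilate the rank-one component
WPhi = W @ Phi
WPhiD = WPhi / np.linalg.norm(WPhi, axis=0, keepdims=True)   # diagonal rescaling D

def coherence(M):
    G = np.abs(M.T @ M)
    return np.max(G - np.diag(np.diag(G)))

print("coherence of Phi     :", coherence(Phi))      # close to 1 (highly correlated)
print("coherence of W Phi D :", coherence(WPhiD))    # back to roughly that of A
```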
It should be noted that the objective in (8) is non-differentiable and globally solving it is intractable, but this is the whole point of incorporating a DNN to begin with. If we have access to a large number of training pairs {x∗,y} generated using the true Φ, then during the course of the learning process a useful W and D can be implicitly estimated such that a maximal number of sparse vectors can be successfully recovered. Of course we will experience diminishing marginal returns as more non-ideal components enter the picture. In fact, it is not difficult to describe a slightly more sophisticated scenario such that the use of layer-wise constant weights and activations is no longer capable of lowering δ3k[Φ] significantly at all, portending failure when it comes to accurate sparse recovery.
One such example is a clustered dictionary model (which we describe in detail in [26]), whereby columns of Φ are grouped into a number of tight clusters with minimal angular dispersion. While the clusters themselves may be well-separated, the correlation within clusters can be arbitrarily large. In some sense this model represents the simplest partitioning of dictionary column correlation structure into two scales: the inter- and intra-cluster structures. Assuming the number of such clusters is larger than n, then layer-wise constant weights and activations are unlikely to provide adequate relief, since the implicit ∆r factor described above will be full rank.
1. What is the focus of the paper regarding deep neural networks?
2. What are the novel aspects introduced by the paper compared to prior works like [25]?
3. How does the reviewer assess the quality and effectiveness of the proposed approach?
4. Are there any concerns or limitations regarding the method's performance or applicability? | Review | Review
The paper describes an interpretation of unfolded iterative hard thresholding as deep neural networks. The novelty lies in the use of a multiclass loss instead of the squared loss normally used, and the use of batch normalization along with LSTM-like cells to avoid bad local minima. In my opinion, the paper is well written. The novelties introduced, although not groundbreaking, are effective and interesting improvements over [25]. The improvements in running time and error are notable.
NIPS | Title
Maximal Sparsity with Deep Networks?
Abstract
The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
1 Introduction
Our launching point is the optimization problem
min x ‖x‖0 s.t. y = Φx, (1)
where y ∈ Rn is an observed vector, Φ ∈ Rn×m is some known, overcomplete dictionary of feature/basis vectors withm > n, and ‖·‖0 denotes the `0 norm of a vector, or a count of the number of nonzero elements. Consequently, (1) can be viewed as the search for a maximally sparse feasible vector x∗ (or approximately feasible if the constraint is relaxed). Unfortunately however, direct assault on (1) involves an intractable, combinatorial optimization process, and therefore efficient alternatives that return a maximally sparse x∗ with high probability in restricted regimes are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as `1-norm minimization [2, 5, 21], greedy approaches like orthogonal matching pursuit (OMP) [18, 22], and many flavors of iterative hard-thresholding (IHT) [3, 4].
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection [7, 8], outlier removal [6, 13], compressive sensing [5], and source localization [1, 16]. However, a fundamental weakness underlies them all: If the Gram matrix Φ>Φ has significant offdiagonal energy, indicative of strong coherence between columns of Φ, then estimation of x∗ may be extremely poor. Loosely speaking this occurs because, as higher correlation levels are present, the null-space of Φ is more likely to include large numbers of approximately sparse vectors that tend to distract existing algorithms in the feasible region, an unavoidable nuisance in many practical applications.
In this paper we consider recent developments in the field of deep learning as an entry point for improving the performance of sparse recovery algorithms. Although seemingly unrelated at first
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
glance, the layers of a deep neural network (DNN) can be viewed as iterations of some algorithm that have been unfolded into a network structure [9, 11]. In particular, iterative thresholding approaches such as IHT mentioned above typically involve an update rule comprised of a fixed, linear filter followed by a non-linear activation function that promotes sparsity. Consequently, algorithm execution can be interpreted as passing an input through an extremely deep network with constant weights (dependent on Φ) at every layer. This ‘unfolding’ viewpoint immediately suggests that we consider substituting discriminatively learned weights in place of those inspired by the original sparse recovery algorithm. For example, it has been argued that, given access to a sufficient number of {x∗,y} pairs, a trained network may be capable of producing quality sparse estimates with a few number of layers. This in turn can lead to a dramatically reduced computational burden relative to purely optimization-based approaches [9, 19, 23] or to enhanced non-linearities for use with traditional iterative algorithms [15].
While existing empirical results are promising, especially in terms of the reduction in computational footprint, there is as of yet no empirical demonstration of a learned deep network that can unequivocally recover maximally sparse vectors x∗ with greater accuracy than conventional, state-of-the-art optimization-based algorithms, especially with a highly coherent Φ. Nor is there supporting theoretical evidence elucidating the exact mechanism whereby learning may be expected to improve the estimation accuracy, especially in the presence of coherent dictionaries. This paper attempts to fill in some of these gaps, and our contributions can be distilled to the following points:
Quantifiable Benefits of Unfolding: We rigorously dissect the benefits of unfolding conventional sparse estimation algorithms to produce trainable deep networks. This includes a precise characterization of exactly how different architecture choices can affect the ability to improve so-called restrictive isometry property (RIP) constants, which measure the degree of disruptive correlation in Φ. This helps to quantify the limits of shared layer weights, which are the standard template of existing methods [9, 19, 23], and motivates more flexible network constructions reminiscent of LSTM cells [12] that account for multi-resolution structure in Φ in a previously unexplored fashion. Note that we defer all proofs, as well as many additional analyses and problem details, to a longer companion paper [26].
Isolation of Important Factors: Based on these theoretical insights, and a better understanding of the essential factors governing performance, we establish the degree to which it is favorable to diverge from strict conformity to any particular unfolded algorithmic script. In particular, we argue that layer-wise independent weights and/or activations are essential, while retention of the original thresholding non-linearities and squared-error loss implicit to many sparse algorithms is not. We also recast the core problem as deep multi-label classification given that optimal support pattern recovery is the primary concern. This allows us to adopt a novel training paradigm that is less sensitive to the specific distribution encountered during testing. Ultimately, we develop the first ultra-fast sparse estimation algorithm (or more precisely a learning procedure that produces such an algorithm) that can effectively deal with coherent dictionaries and adversarial RIP constants.
State-of-the-Art Empirical Performance: We apply the proposed system to a practical photometric stereo computer vision problem, where the goal is to estimate the 3D geometry of an object using only 2D photos taken from a single camera under different lighting conditions. In this context, shadows and specularities represent sparse outliers that must be simultaneously removed from ∼ 10^4–10^6 surface points. We achieve state-of-the-art performance using only weak supervision despite a minuscule computational budget appropriate for real-time mobile environments.
2 From Iterative Hard Thresholding (IHT) to Deep Neural Networks
Although any number of iterative algorithms could be adopted as our starting point, here we examine IHT because it is representative of many other sparse estimation paradigms and is amenable to theoretical analysis. With knowledge of an upper bound on the true cardinality, solving (1) can be replaced by the equivalent problem
\min_{x} \ \tfrac{1}{2}\|y - \Phi x\|_2^2 \quad \text{s.t.} \quad \|x\|_0 \le k. \qquad (2)
IHT attempts to minimize (2) using what can be viewed as computationally-efficient projected gradient iterations [3]. Let x(t) denote the estimate of some maximally sparse x∗ after t iterations. The aggregate IHT update computes
x^{(t+1)} = H_k\left[ x^{(t)} - \mu \Phi^\top \left( \Phi x^{(t)} - y \right) \right], \qquad (3)
where µ is a step-size parameter and Hk[·] is a hard-thresholding operator that sets all but the k largest values (in magnitude) of a vector to zero. For the vanilla version of IHT, the step-size µ = 1 leads to a number of recovery guarantees whereby iterating (3), starting from x(0) = 0 is guaranteed to reduce (2) at each step before eventually converging to the globally optimal solution. These results hinge on properties of Φ which relate to the coherence structure of dictionary columns as encapsulated by the following definition.
Definition 1 (Restricted Isometry Property) A dictionary Φ satisfies the Restricted Isometry Property (RIP) with constant δk[Φ] < 1 if
(1 - \delta_k[\Phi])\,\|x\|_2^2 \ \le\ \|\Phi x\|_2^2 \ \le\ (1 + \delta_k[\Phi])\,\|x\|_2^2 \qquad (4)
holds for all {x : ‖x‖0 ≤ k}.
In brief, the smaller the value of the RIP constant δk[Φ], the closer any sub-matrix of Φ with k columns is to being orthogonal (i.e., it has less correlation structure). It is now well-established that dictionaries with smaller values of δk[Φ] lead to sparse recovery problems that are inherently easier to solve. For example, in the context of IHT, it has been shown [3] that if y = Φx∗, with ‖x∗‖0 ≤ k and δ3k[Φ] < 1/ √ 32, then at iteration t of (3) we will have ‖x(t) − x∗‖2 ≤ 2−t‖x∗‖2. It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Moreover, it can be shown that this x∗ is also the unique, optimal solution to (1) [5].
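To make the update (3) concrete, the following is a minimal NumPy sketch of vanilla IHT with µ = 1; the function names and constants are illustrative only and do not correspond to any released implementation.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest (H_k)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]          # indices of the k largest magnitudes
    out[idx] = x[idx]
    return out

def iht(y, Phi, k, mu=1.0, num_iters=200):
    """Iterative hard thresholding for  min 0.5*||y - Phi x||^2  s.t.  ||x||_0 <= k."""
    x = np.zeros(Phi.shape[1])
    for _ in range(num_iters):
        x = hard_threshold(x - mu * Phi.T @ (Phi @ x - y), k)
    return x
```

Each iteration is simply a gradient step on the quadratic loss followed by projection onto the set of k-sparse vectors.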
The success of IHT in recovering maximally sparse solutions crucially depends on the RIP-based condition that δ3k[Φ] < 1/√32, which heavily constrains the degree of correlation structure in Φ that can be tolerated. While dictionaries with columns drawn independently and uniformly from the surface of a unit hypersphere (or with elements drawn iid from N(0, 1/n)) will satisfy this condition with high probability provided k is small enough [6], for many/most practical problems of interest we cannot rely on this type of IHT recovery guarantee. In fact, except for randomized dictionaries in high dimensions where tight bounds exist, we cannot even compute the value of δ3k[Φ], which requires calculating the spectral norms of \binom{m}{3k} subsets of dictionary columns.
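To appreciate why computing δ3k[Φ] is hopeless at realistic sizes, the brute-force routine below (our own naming) evaluates δk[Φ] exactly by enumerating every size-k column subset; it is only usable for toy dimensions.

```python
import numpy as np
from itertools import combinations

def rip_constant(Phi, k):
    """Exact delta_k[Phi] by brute force: max over all |S| = k of ||Phi_S^T Phi_S - I||_2.
    The number of subsets is C(m, k), so this is only feasible for tiny problems."""
    m = Phi.shape[1]
    delta = 0.0
    for S in combinations(range(m), k):
        G = Phi[:, list(S)].T @ Phi[:, list(S)]
        eigs = np.linalg.eigvalsh(G)
        delta = max(delta, float(np.max(np.abs(eigs - 1.0))))
    return delta

# Example: already slow for modest sizes, hence delta_{3k} is rarely computable in practice.
Phi = np.random.randn(20, 30) / np.sqrt(20)
print(rip_constant(Phi, 3))
```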
There are many ways nature might structure a dictionary such that IHT (or most any other existing sparse estimation algorithm) will fail. Here we examine one of the most straightforward forms of dictionary coherence that can easily disrupt performance. Consider the situation where Φ = [εA + uv⊤]N, where columns of A ∈ Rn×m and u ∈ Rn are drawn iid from the surface of a unit hypersphere, while v ∈ Rm is arbitrary. Additionally, ε > 0 is a scalar and N is a diagonal normalization matrix that scales each column of Φ to have unit `2 norm. It then follows that if ε is sufficiently small, the rank-one component begins to dominate, and there is no value of 3k such that δ3k[Φ] < 1/√32. In this type of problem we hypothesize that DNNs provide a potential avenue for improvement to the extent that they might be able to compensate for disruptive correlations in Φ.
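For reference, the disruptive dictionary just described can be generated in a few lines; here eps plays the role of the scalar ε above and all names are ours.

```python
import numpy as np

def coherent_dictionary(n, m, eps=1e-2, seed=0):
    """Phi = normalize(eps * A + u v^T): as eps -> 0 the rank-one term dominates,
    driving the RIP constant toward 1 for every sparsity level."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)            # columns drawn from the unit hypersphere
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    v = rng.standard_normal(m)                # v is arbitrary
    Phi = eps * A + np.outer(u, v)
    return Phi / np.linalg.norm(Phi, axis=0)  # the diagonal normalization N
```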
For example, at the most basic level we might consider general networks with layer t defined by
x^{(t+1)} = f\left[ \Psi x^{(t)} + \Gamma y \right], \qquad (5)
where f : Rm → Rm is a non-linear activation function, and Ψ ∈ Rm×m and Γ ∈ Rm×n are arbitrary. Moreover, given access to training pairs {x∗,y}, where x∗ is a sparse vector such that y = Φx∗, we can optimize Ψ and Γ using traditional stochastic gradient descent just like any other DNN structure. We will first precisely characterize the extent to which this adaptation affords any benefit over IHT where f(·) = Hk[·]. Later we will consider flexible, layer-specific non-linearities f (t) and parameters {Ψ(t),Γ(t)}.
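One way to realize (5) as a trainable model is sketched below in PyTorch; the module, its initialization, and the ReLU stand-in for Hk[·] are our own illustrative choices, with weights shared across layers by default and the layer-specific variant anticipating the discussion that follows.

```python
import torch
import torch.nn as nn

class UnfoldedSparseNet(nn.Module):
    """T unfolded iterations of x <- f(Psi x + Gamma y), cf. (5).
    tied=True shares one (Psi, Gamma) pair across all layers, as in a plain unfolded
    algorithm; tied=False gives the layer-specific weights considered later."""
    def __init__(self, n, m, T=20, tied=True):
        super().__init__()
        num = 1 if tied else T
        self.T, self.m = T, m
        self.Psi = nn.ParameterList([nn.Parameter(torch.eye(m)) for _ in range(num)])
        self.Gamma = nn.ParameterList([nn.Parameter(0.01 * torch.randn(m, n)) for _ in range(num)])
        self.f = nn.ReLU()                        # trainable-friendly stand-in for H_k

    def forward(self, y):                         # y: (batch, n)
        x = y.new_zeros(y.shape[0], self.m)
        for t in range(self.T):
            Psi = self.Psi[t % len(self.Psi)]
            Gamma = self.Gamma[t % len(self.Gamma)]
            x = self.f(x @ Psi.T + y @ Gamma.T)
        return x                                  # (batch, m) sparse-code estimate
```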
3 Analysis of Adaptable Weights and Activations
For simplicity in this section we restrict ourselves to the fixed hard-threshold operator Hk[·] across all layers; however, many of the conclusions borne out of our analysis nonetheless carry over to a much wider range of activation functions f . In general it is difficult to analyze how arbitrary Ψ and Γ may improve upon the fixed parameterization from (3) where Ψ = I − Φ>Φ and Γ = Φ> (assuming µ = 1). Fortunately though, we can significantly collapse the space of potential weight matrices by including the natural requirement that if x∗ represents the true, maximally sparse solution, then it must be a fixed-point of (5). Indeed, without this stipulation the iterations could
diverge away from the globally optimal value of x, something IHT itself will never do. These considerations lead to the following:
Proposition 1 Consider a generalized IHT-based network layer given by (5) with f(·) = Hk[·] and let x∗ denote any unique, maximally sparse feasible solution to y = Φx with ‖x‖0 ≤ k. Then to ensure that any such x∗ is a fixed point it must be that Ψ = I − ΓΦ.
Although Γ remains unconstrained, this result has restricted Ψ to be a rank-n factor, parameterized by Γ, subtracted from an identity matrix. Certainly this represents a significant contraction of the space of ‘reasonable’ parameterizations for a general IHT layer. In light of Proposition 1, we may then further consider whether the added generality of Γ (as opposed to the original fixed assignment Γ = Φ>) affords any further benefit to the revised IHT update
x^{(t+1)} = H_k\left[ (I - \Gamma\Phi)\, x^{(t)} + \Gamma y \right]. \qquad (6)
For this purpose we note that (6) can be interpreted as a projected gradient descent step for solving
\min_{x} \ \tfrac{1}{2}\, x^\top \Gamma\Phi x - x^\top \Gamma y \quad \text{s.t.} \quad \|x\|_0 \le k. \qquad (7)
However, if ΓΦ is not positive semi-definite, then this objective is no longer even convex, and, combined with the non-convex constraint, is likely to produce an even wider constellation of troublesome local minima with no clear affiliation with the global optimum of our original problem from (2). Consequently it does not immediately appear that Γ ≠ Φ⊤ is likely to provide any tangible benefit. However, there do exist important exceptions. The first indication of how learning a general Γ might help comes from the following result:
Proposition 2 Suppose that Γ = DΦ>WW>, where W is an arbitrary matrix of appropriate dimension and D is a full-rank diagonal that jointly solve
\delta^{*}_{3k}[\Phi] \;\triangleq\; \inf_{W,\,D}\ \delta_{3k}[W\Phi D]. \qquad (8)
Moreover, assume that Φ is substituted with ΦD in (6), meaning we have simply replaced Φ with a new dictionary that has scaled columns. Given these qualifications, if y = Φx∗, with ‖x∗‖0 ≤ k and δ∗3k [Φ] < 1/ √ 32, then at iteration t of (6)
\|D^{-1}x^{(t)} - D^{-1}x^{*}\|_2 \ \le\ 2^{-t}\,\|D^{-1}x^{*}\|_2. \qquad (9)
It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Additionally, it can be guaranteed that after a finite number of iterations, the correct support pattern will be discovered. And it should be emphasized that rescaling Φ by some known diagonal D is a common prescription for sparse estimation (e.g., column normalization) that does not alter the optimal `0-norm support pattern.1
But the real advantage over regular IHT comes from the fact that δ∗3k[Φ] ≤ δ3k[Φ], and in many practical cases, δ∗3k[Φ] ≪ δ3k[Φ], which implies success can be guaranteed across a much wider range of RIP conditions. For example, if we revisit the dictionary Φ = [εA + uv⊤]N, an immediate benefit can be observed. More concretely, for sufficiently small ε we argued that δ3k[Φ] > 1/√32 for all k, and consequently convergence to the optimal solution may fail. In contrast, it can be shown that δ∗3k[Φ] will remain quite small, satisfying δ∗3k[Φ] ≈ δ3k[A], implying that performance will nearly match that of an equivalent recovery problem using A (and as we discussed above, δ3k[A] is likely to be relatively small per its unique, randomized design). The following result generalizes a sufficient regime whereby this is possible:
Corollary 1 Suppose Φ = [A + ∆r]N, where elements of A are drawn iid from N(0, 1/n), ∆r is any arbitrary matrix with rank[∆r] = r < n, and N is a diagonal matrix (e.g., one that enforces unit `2 column norms). Then

\mathbb{E}\left(\delta^{*}_{3k}[\Phi]\right) \ \le\ \mathbb{E}\left(\delta_{3k}[\tilde{A}]\right), \qquad (10)

where Ã denotes the matrix A with any r rows removed.

¹ Inclusion of this diagonal factor D can be equivalently viewed as relaxing Proposition 1 to hold under some fixed rescaling of Φ, i.e., an operation that preserves the optimal support pattern.
Additionally, as the size of Φ grows proportionally larger, it can be shown that with overwhelming probability δ∗3k [Φ] ≤ δ3k [ Ã ] . Overall, these results suggest that we can essentially annihilate
any potentially disruptive rank-r component ∆r at the cost of implicitly losing r measurements (linearly independent rows of A, and implicitly the corresponding elements of y). Therefore, at least provided that r is sufficiently small such that δ3k [ Ã ] ≈ δ3k [A], we can indeed be confident
that a modified form of IHT can perform much like a system with an ideal RIP constant. And of course in practice we may not know how Φ decomposes as some Φ ≈ [ A+ ∆r]N ; however, to the extent that this approximation can possibly hold, the RIP constant can be improved nonetheless.
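The mechanism behind Corollary 1 can also be mimicked by hand when an estimate of the disruptive subspace is available, e.g., from a truncated SVD of Φ in cases where the low-rank term dominates. The sketch below is our own construction (not code from [26]): it builds a W whose rows are orthogonal to that estimated subspace, which is one non-learned way of realizing the improvement that training is meant to discover implicitly.

```python
import numpy as np

def annihilating_transform(Phi, r):
    """W in R^{(n-r) x n} whose rows span the orthogonal complement of the top-r
    left singular directions of Phi; multiplying by W suppresses a dominant rank-r
    component at the implicit cost of r measurements."""
    U, _, _ = np.linalg.svd(Phi, full_matrices=True)
    return U[:, r:].T

# Usage sketch (a column rescaling by a diagonal D would typically follow):
#   W = annihilating_transform(Phi, r)
#   x_hat = iht(W @ y, W @ Phi, k)   # run IHT, or any other solver, on the reduced problem
```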
It should be noted that the objective in (8) is non-differentiable and globally solving it is intractable, but this is the whole point of incorporating a DNN to begin with. If we have access to a large number of training pairs {x∗,y} generated using the true Φ, then during the course of the learning process a useful W and D can be implicitly estimated such that a maximal number of sparse vectors can be successfully recovered. Of course we will experience diminishing marginal returns as more non-ideal components enter the picture. In fact, it is not difficult to describe a slightly more sophisticated scenario such that layer-wise constant weights and activations are no longer capable of lowering δ3k[Φ] significantly at all, portending failure when it comes to accurate sparse recovery.
One such example is a clustered dictionary model (which we describe in detail in [26]), whereby columns of Φ are grouped into a number of tight clusters with minimal angular dispersion. While the clusters themselves may be well-separated, the correlation within clusters can be arbitrarily large. In some sense this model represents the simplest partitioning of dictionary column correlation structure into two scales: the inter- and intra-cluster structures. Assuming the number of such clusters is larger than n, then layer-wise constant weights and activations are unlikely to provide adequate relief, since the implicit ∆r factor described above will be full rank.
Fortunately, simple adaptations of IHT, which are reflective of many generic DNN structures, can remedy the problem. The core principle is to design a network such that earlier layers/iterations are tasked with exposing the correct support at the cluster level, without concern for accuracy within each cluster. Once the correct cluster support has been obtained, later layers can then be charged with estimating the fine-grain details of within-cluster support. We believe this type of multi-resolution sparse estimation is essential when dealing with highly coherent dictionaries. This can be accomplished with the following adaptations to IHT:
1. The hard-thresholding operator is generalized to ‘remember’ previously learned cluster-level sparsity patterns, in much the same way that LSTM gates allow long-term dependencies to propagate [12] or highway networks [20] facilitate information flow unfettered to deeper layers. Practically speaking, this adaptation can be computed by passing the prior layer’s activations x(t) through linear filters followed by indicator functions, again reminiscent of how DNN gating functions are typically implemented.
2. We allow the layer weights {Ψ(t),Γ(t)} to vary from iteration to iteration t sequencing through a fixed set akin to layers of a DNN.
In [26] we show that hand-crafted versions of these changes allow IHT to provably recover maximally sparse vectors x∗ in situations where existing algorithms fail.
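Purely for illustration, the two adaptations above might be rendered in PyTorch roughly as follows; the specific gating mechanism, dimensions, and initialization are our own guesses rather than the construction analyzed in [26].

```python
import torch
import torch.nn as nn

class GatedIHTLayer(nn.Module):
    """One layer with its own (Psi, Gamma) plus a learned gate that can retain coarse
    (cluster-level) support decisions from the previous layer, loosely in the spirit
    of LSTM/highway gating."""
    def __init__(self, n, m):
        super().__init__()
        self.Gamma = nn.Linear(n, m, bias=False)      # plays the role of Gamma^(t)
        self.Psi = nn.Linear(m, m, bias=False)        # plays the role of Psi^(t)
        self.gate = nn.Sequential(nn.Linear(m, m), nn.Sigmoid())
        self.act = nn.ReLU()

    def forward(self, x, y):
        g = self.gate(x)                              # in (0, 1): how much of x to keep
        x_new = self.act(self.Psi(x) + self.Gamma(y))
        return g * x + (1.0 - g) * x_new

class GatedIHTNet(nn.Module):
    def __init__(self, n, m, T=10):
        super().__init__()
        self.m = m
        self.layers = nn.ModuleList([GatedIHTLayer(n, m) for _ in range(T)])

    def forward(self, y):
        x = y.new_zeros(y.shape[0], self.m)
        for layer in self.layers:                     # layer-specific weights, cf. item 2
            x = layer(x, y)
        return x
```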
4 Discriminative Multi-Resolution Sparse Estimation
As implied previously, guaranteed success for most existing sparse estimation strategies hinges on the dictionary Φ having columns drawn (approximately) from a uniform distribution on the surface of a unit hypersphere, or some similar condition to ensure that subsets of columns behave approximately like an orthogonal basis. Essentially this confines the structure of the dictionary to operate on a single universal scale. The clustered dictionary model described in the previous section considers a dictionary built on two different scales, with a cluster-level distribution (coarse) and tightly-packed within-cluster details (fine). But practical dictionaries may display structure operating across a variety of scales that interleave with one another, forming a continuum among multiple levels.
When the scales are clearly demarcated, we have argued that it is possible to manually define a multi-resolution IHT-inspired algorithm that guarantees success in recovering the optimal support pattern; and indeed, IHT could be extended to handle a clustered dictionary model with nested
structures across more than two scales. However, without clearly partitioned scales it is much less obvious how one would devise an optimal IHT modification. It is in this context that learning flexible algorithm iterations is likely to be most advantageous. In fact, the situation is not at all unlike many computer vision scenarios whereby handcrafted features such as SIFT may work optimally in confined, idealized domains, while learned CNN-based features are often more effective otherwise.
Given a sufficient corpus of {x∗,y} pairs linked via some fixed Φ, we can replace manual filter construction with a learning-based approach. On this point, although we view our results from Section 3 as a convincing proof of concept, it is unlikely that there is anything intrinsically special about the specific hard-threshold operator and layer-wise construction we employed per se, as long as we allow for deep, adaptable layers that can account for structure at multiple scales. For example, we expect that it is more important to establish a robust training pipeline that avoids stalling at the hand of vanishing gradients in a deep network, than to preserve the original IHT template analogous to existing learning-based methods. It is here that we propose several deviations:
Multi-Label Classification Loss: We exploit the fact that in producing a maximally sparse vector x∗, the main challenge is estimating supp[x∗]. Once the support is obtained, computing the actual nonzero coefficients just boils down to solving a least squares problem. But any learning system will be unaware of this and could easily expend undue effort in attempting to match coefficient magnitudes at the expense of support recovery. Certainly the use of a data fit penalty of the form ‖y − Φx‖_2^2, as is adopted by nearly all sparse recovery algorithms, will expose us to this issue. Therefore we instead formulate sparse recovery as a multi-label classification problem. More specifically, instead of directly estimating x∗, we attempt to learn s∗ = [s∗1, . . . , s∗m]⊤, where s∗i equals the indicator function I[x∗i ≠ 0]. For this purpose we may then incorporate a traditional multi-label classification loss function via a final softmax output layer, which forces the network to only concern itself with learning support patterns. This substitution is further justified by the fact that even with traditional IHT, the support pattern will be accurately recovered before the iterations converge exactly to x∗. Therefore we may expect that fewer layers (as well as training data) are required if all we seek is a support estimate, opening the door for weaker forms of supervision.
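In code, the support targets and a multi-label loss amount to something like the snippet below; we use an element-wise sigmoid with binary cross-entropy, which is one standard realization of a multi-label objective (the exact output layer used in the experiments of Section 5 is detailed in [26]).

```python
import torch
import torch.nn as nn

def support_labels(x_true, tol=0.0):
    """s*_i = 1 if |x*_i| > tol else 0: the multi-label classification target."""
    return (x_true.abs() > tol).float()

criterion = nn.BCEWithLogitsLoss()          # element-wise sigmoid + cross-entropy

# Illustrative batch: logits from the last fully-connected layer, x_true from {x*, y} pairs
logits = torch.randn(8, 100)
x_true = torch.zeros(8, 100)
x_true[:, :5] = 0.4 * torch.randn(8, 5)     # a few nonzeros per sample
loss = criterion(logits, support_labels(x_true))
```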
Instruments for Avoiding Bad Local Solutions: Given that IHT can take many iterations to converge on challenging problems, we may expect that a relatively deep network structure will be needed to obtain exact support recovery. We must therefore take care to avoid premature convergence to local minima or areas with vanishing gradient by incorporating several recent countermeasures proposed in the DNN community. For example, the adaptive variant of IHT described previously is reminiscent of highway networks or LSTM cells, which have been proposed to allow longer range flow of gradient information to improve convergence through the use of gating functions. An even simpler version of this concept involves direct, un-gated connections that allow much deeper ‘residual’ networks to be trained [10] (which is even suggestive of the residual factor embedded in the original IHT iterations). We deploy this tool, along with batch-normalization [14] to aid convergence, for our basic feedforward pipeline, along with an alternative structure based on recurrent LSTM cells. Note that unfolded LSTM networks frequently receive a novel input for every time step, whereas here y is applied unaltered at every layer (more on this in [26]). We also replace the non-integrable hard-threshold operator with simple rectilinear (ReLu) units [17], which are functionally equivalent to one-sided soft-thresholding; this convex selection likely reduces the constellation of sub-optimal local minima during the training process.
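A residual block of the kind referenced above might look as follows; the widths and depth are placeholders rather than the exact architecture evaluated in Section 5.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully-connected residual block with batch normalization and ReLU, used to keep
    gradients flowing through a deep unfolded network."""
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU(),
            nn.Linear(width, width), nn.BatchNorm1d(width),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # un-gated skip connection
```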
5 Experiments and Applications
Synthetic Tests with Correlated Dictionaries: We generate a dictionary matrix Φ ∈ Rn×m using Φ = \sum_{i=1}^{n} (1/i^2)\, u_i v_i^\top, where u_i ∈ Rn and v_i ∈ Rm have iid elements drawn from N(0, 1). We also rescale each column of Φ to have unit `2 norm. Φ generated in this way has super-linear decaying singular values (indicating correlation between the columns) but is not constrained to any specific structure. Many dictionaries in real applications have such a property. As a basic experiment, we generate N = 700000 ground truth samples x∗ ∈ Rm by randomly selecting d nonzero entries, with nonzero amplitudes drawn iid from the uniform distribution U[−0.5, 0.5], excluding the interval [−0.1, 0.1] to avoid small, relatively inconsequential contributions to the support pattern. We then create y ∈ Rn via y = Φx∗. As d increases, the estimation problem becomes more difficult. In fact, to guarantee success with such correlated data (and high RIP constant) requires evaluating on the order of \binom{m}{n} linear systems of size n×n, which is infeasible even for small values, indicative of how challenging it can be to solve sparse inverse problems of any size. We set n=20 and m=100.
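The generation procedure just described corresponds to the following sketch; the constants mirror the text and the helper names are ours.

```python
import numpy as np

def make_dictionary(n=20, m=100, seed=0):
    """Phi = sum_{i=1}^{n} (1/i^2) u_i v_i^T with unit-norm columns: correlated
    (super-linearly decaying singular values) but otherwise unstructured."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((n, m))
    for i in range(1, n + 1):
        Phi += (1.0 / i ** 2) * np.outer(rng.standard_normal(n), rng.standard_normal(m))
    return Phi / np.linalg.norm(Phi, axis=0)

def make_sample(Phi, d, rng):
    """x* with d nonzeros, magnitudes uniform on [0.1, 0.5] with random signs
    (i.e., U[-0.5, 0.5] excluding [-0.1, 0.1]); returns (x*, y = Phi x*)."""
    m = Phi.shape[1]
    x = np.zeros(m)
    support = rng.choice(m, size=d, replace=False)
    x[support] = rng.uniform(0.1, 0.5, size=d) * rng.choice([-1.0, 1.0], size=d)
    return x, Phi @ x
```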
We used N1 = 600000 samples for training and the remaining N2 = 100000 for testing. Echoing our arguments in Section 4, we explored both a feedforward network with residual connections [10] and a recurrent network with vanilla LSTM cells [12]. To evaluate the performance, we check whether the d ground truth nonzeros are aligned with the predicted top-d values produced by our network, a common all-or-nothing metric in the compressive sensing literature. Detailed network design, optimization setup, and alternative metrics can be found in [26].
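The all-or-nothing evaluation criterion reduces to the check below (names are ours):

```python
import numpy as np

def exact_support_recovery(x_true, scores, d):
    """1 if the d ground-truth nonzeros coincide with the top-d entries of `scores`
    (network outputs or estimated coefficient magnitudes), else 0."""
    true_support = set(np.flatnonzero(x_true))
    top_d = set(np.argsort(np.abs(scores))[-d:])
    return int(true_support == top_d)
```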
Figure 1(left) shows comparisons against a battery of existing algorithms, both learning- and optimization-based. These include standard `1 minimization via ISTA iterations [2], IHT [3] (supplied with the ground truth number of nonzeros), an ISTA-based network [9], and an IHT-inspired network [23]. For both the ISTA- and IHT-based networks, we used the exact same training data described above. Note that given the correlated Φ matrix, the recovery performance of IHT, and to a lesser degree `1 minimization using ISTA, is rather modest as expected given that the associated RIP constant will be quite large by construction. In contrast our two methods achieve uniformly higher accuracy, including over other learning-based methods trained with the same data. This improvement is likely the result of three significant factors: (i) Existing learning methods initialize using weights derived from the original sparse estimation algorithms, but such an initialization will be associated with locally optimal solutions in most cases with correlated dictionaries. (ii) As described in Section 3, constant weights across layers have limited capacity to unravel multi-resolution dictionary structure, especially one that is not confined to only possess some low rank correlating component. (iii) The quadratic loss function used by existing methods does not adequately focus resources on the crux of the problem, which is accurate support recovery. In contrast we adopt an initialization motivated by DNN-based training considerations, unique layer weights to handle a multi-resolution dictionary, and a multi-label classification output layer to focus on support recovery.
To further isolate essential factors affecting performance, we next consider the following changes: (1) We remove the residual connections from Res-Net. (2) We replace ReLU with hard-threshold activations. In particular, we utilize the so-called HELUσ function introduced in [23], which is a continuous and piecewise linear approximation of the scalar hard-threshold operator. (3) We use a quadratic penalty layer instead of a multi-label classification loss layer, i.e., the loss function is changed to \sum_{i=1}^{N_1} ‖a^{(i)} − y^{(i)}‖_2^2 (where a is the output of the last fully-connected layer) during training. Figure 1(middle) displays the associated recovery percentages, where we observe that in each case performance degrades. Without the residual design, and also with the inclusion of a rigid, non-convex hard-threshold operator, local minima during training appear to be a likely culprit, consistent with observations from [10]. Likewise, use of a least-squares loss function is likely to over-emphasize the estimation of coefficient amplitudes rather than focusing on support recovery.
Finally, from a practical standpoint we may expect that the true amplitude distribution may deviate at times from the original training set. To explore robustness to such mismatch, as well as different amplitude distributions, we consider two sets of candidate data: the original data, and similarly generated data but with the uniform distribution of nonzero elements replaced with the Gaussians N(±0.3, 0.1), where the mean is selected with equal probability as either −0.3 or 0.3, thus avoiding tiny magnitudes with high probability. Figure 1(right) reports accuracies under different distributions for both training and testing, including mismatched cases. (The results are obtained using LSTM-Net, but the Res-net showed a similar pattern.) The label ‘U2U’ refers to training and testing with the uniformly distributed amplitudes, while ‘U2N’ uses a uniform training set and a Gaussian test set. Analogous definitions apply for ‘N2N’ and ‘N2U’. In all cases we note that the performance is
quite stable across training and testing conditions. We would argue that our recasting of the problem as multi-label classification contributes, at least in part, to this robustness. The application example described next demonstrates further tolerance of training-testing set mismatches.
Practical Application - Photometric Stereo: Suppose we have q observations of a given surface point from a Lambertian scene under different lighting directions. Then the resulting measurements from a standard calibrated photometric stereo design (linear camera response function, an orthographic camera projection, and known directional light sources), denoted o ∈ Rq, can be expressed as o = ρLn, where n ∈ R3 denotes the true 3D surface normal, each row of L ∈ Rq×3 defines a lighting direction, and ρ is the diffuse albedo, acting here as a scalar multiplier [24]. If specular highlights, shadows, or other gross outliers are present, then the observations are more realistically modeled as o = ρLn + e, where e is an unknown sparse vector [13, 25]. It is apparent that, since n is unconstrained, e need not compensate for any component of o in the range of L. Given that null[L⊤] is the orthogonal complement to range[L], we may consider the following problem
\min_{e}\ \|e\|_0 \quad \text{s.t.} \quad \mathrm{Proj}_{\mathrm{null}[L^\top]}(o) = \mathrm{Proj}_{\mathrm{null}[L^\top]}(e) \qquad (11)
which ultimately collapses to our canonical sparse estimation problem from (1), where lighting-hardware-dependent correlations may be unavoidable in the implicit dictionary.
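Forming this projected problem is straightforward given a calibrated lighting matrix L; the helper below (our naming) returns the projector and the projected observation that together instantiate (11).

```python
import numpy as np

def outlier_problem(L, o):
    """Project o onto null[L^T] so that only the sparse outlier term e remains,
    yielding a problem of the form (1)/(11): find a sparse e with P e = P o."""
    q = L.shape[0]
    P = np.eye(q) - L @ np.linalg.solve(L.T @ L, L.T)   # orthogonal projector onto null[L^T]
    return P, P @ o
```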
Following [13], we use 32-bit HDR gray-scale images of the object Bunny (256×256) with foreground masks under different lighting conditions whose directions, or rows of L, are randomly selected from a hemisphere with the object placed at the center. To apply our method, we first compute Φ using the appropriate projection operator derived from the lighting matrix L. As real-world training data is expensive to acquire, we instead use weak supervision by synthetically generating a training set as follows. First, we draw a support pattern for e randomly with cardinality d sampled uniformly from the range [d1, d2]. The values of d1 and d2 can be tuned in practice. Nonzero values of e are assigned iid random values from a Gaussian distribution whose mean and variance are also tunable. Beyond this, no attempt was made to match the true outlier distributions encountered in applications of photometric stereo. Finally, for each e we can naturally compute observations via the linear constraint in (11), which serve as candidate network inputs.
Given synthetic training data acquired in this way, we learn a network with the exact same structure and optimization parameters as in Section 5; no application-specific tuning was introduced. We then deploy the resulting network on the gray-scale Bunny images. For each surface point, we use our DNN model to approximately solve (11). Since the network output will be a probability map for the outlier support set instead of the actual values of e, we choose the 4 indices with the least probability as inliers and use them to compute n via least squares.
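The final estimation step amounts to a few lines (again with our own naming); probs denotes the per-observation outlier probabilities produced by the network.

```python
import numpy as np

def estimate_normal(L, o, probs, num_inliers=4):
    """Keep the observations least likely to be outliers and recover the surface
    normal (up to the albedo scale rho) by least squares."""
    inliers = np.argsort(probs)[:num_inliers]             # smallest outlier probability
    n_scaled, *_ = np.linalg.lstsq(L[inliers], o[inliers], rcond=None)
    return n_scaled / np.linalg.norm(n_scaled)            # unit-norm surface normal
```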
We compare our method against the baseline least squares estimate from [24] and `1 norm minimization. We defer more quantitative comparisons to [26]. In Figure 2, we illustrate the recovered surface normal error maps of the hardest case (fewest lighting directions). Here we observe that our DNN estimates lead to far fewer regions of significant error and the runtime is orders of magnitude faster. Overall though, this application example illustrates that weak supervision with mismatched synthetic training data can, at least for some problem domains, be sufficient to learn a quite useful sparse estimation DNN; here one that facilitates real-time 3D modeling in mobile environments.
Discussion: In this paper we have shown that deep networks with hand-crafted, multi-resolution structure can provably solve certain specific classes of sparse recovery problems where existing algorithms fail. However, much like CNN-based features can often outperform SIFT on many computer vision tasks, we argue that a discriminative approach can outperform manual structuring of layers/iterations and compensate for dictionary coherence under more general conditions.
Acknowledgements: This work was done while the first author was an intern at Microsoft Research, Beijing. It is also funded by 973-2015CB351800, NSFC-61231010, NSFC-61527804, NSFC-61421062, NSFC-61210005 and MOEMicrosoft Key Laboratory, Peking University. | 1. What is the main contribution of the paper in terms of deep neural networks for sparse approximation?
2. How does the proposed network compare to existing algorithms, particularly regarding its ability to handle dictionaries with high RIP constants?
3. What are some potential limitations or concerns regarding the applicability of the proposed approach in real-world scenarios?
4. Are there any questions about the relationship between the theoretical analysis and the empirical success of the algorithm? | Review | Review
The paper describes a deep neural network for sparse approximation that deals with dictionary with high RIP constant. The network is inspired by unfolding the operations in the Iterative Hard Thresholding (IHT) algorithm. It is further generalized with a parameterization scheme such that the optimal solution is a fixed point. Compared to the IHT algorithm, it is shown that a network with learnable parameter could potentially find the optimal solution for some dictionaries with high RIP constant. Some structural variation of the neural net are discussed, e.g. softmax loss on the support of the solution and other techniques to avoid local minima. Experiments show that the proposed network outperform the existing algorithms. The authors also discuss the influence of various components of the network on the performance. The algorithm is applied on the task of photometric stereo problem and achieved state-of-the-art result.The paper presents an example of a dictionary with high RIP constant, i.e. a matrix with columns randomly draw from a unit ball plus a low rank matrix, and show in proposition 2 that the optimal solution of such dictionary can be found if the parameter in the network is set to a certain value. My concerns are: 1. does the dictionary in the real world application has the same structure as the example used to justified the algorithm? 2. is the parameter learnt by gradient descent the same as or close to the theoretical result in proposition 2? I believe more analysis can be done to strengthen the connection between the empirical success of the algorithm and the theory that motivates it. |
NIPS | Title
Maximal Sparsity with Deep Networks?
Abstract
The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system, which can effectively learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.
1 Introduction
Our launching point is the optimization problem
min x ‖x‖0 s.t. y = Φx, (1)
where y ∈ Rn is an observed vector, Φ ∈ Rn×m is some known, overcomplete dictionary of feature/basis vectors withm > n, and ‖·‖0 denotes the `0 norm of a vector, or a count of the number of nonzero elements. Consequently, (1) can be viewed as the search for a maximally sparse feasible vector x∗ (or approximately feasible if the constraint is relaxed). Unfortunately however, direct assault on (1) involves an intractable, combinatorial optimization process, and therefore efficient alternatives that return a maximally sparse x∗ with high probability in restricted regimes are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as `1-norm minimization [2, 5, 21], greedy approaches like orthogonal matching pursuit (OMP) [18, 22], and many flavors of iterative hard-thresholding (IHT) [3, 4].
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection [7, 8], outlier removal [6, 13], compressive sensing [5], and source localization [1, 16]. However, a fundamental weakness underlies them all: if the Gram matrix Φ⊤Φ has significant off-diagonal energy, indicative of strong coherence between columns of Φ, then estimation of x∗ may be extremely poor. Loosely speaking, this occurs because, as correlation levels rise, the null-space of Φ is more likely to include large numbers of approximately sparse vectors that tend to distract existing algorithms in the feasible region, an unavoidable nuisance in many practical applications.
In this paper we consider recent developments in the field of deep learning as an entry point for improving the performance of sparse recovery algorithms. Although seemingly unrelated at first
glance, the layers of a deep neural network (DNN) can be viewed as iterations of some algorithm that have been unfolded into a network structure [9, 11]. In particular, iterative thresholding approaches such as IHT mentioned above typically involve an update rule comprised of a fixed, linear filter followed by a non-linear activation function that promotes sparsity. Consequently, algorithm execution can be interpreted as passing an input through an extremely deep network with constant weights (dependent on Φ) at every layer. This ‘unfolding’ viewpoint immediately suggests that we consider substituting discriminatively learned weights in place of those inspired by the original sparse recovery algorithm. For example, it has been argued that, given access to a sufficient number of {x∗,y} pairs, a trained network may be capable of producing quality sparse estimates with a small number of layers. This in turn can lead to a dramatically reduced computational burden relative to purely optimization-based approaches [9, 19, 23] or to enhanced non-linearities for use with traditional iterative algorithms [15].
While existing empirical results are promising, especially in terms of the reduction in computational footprint, there is as of yet no empirical demonstration of a learned deep network that can unequivocally recover maximally sparse vectors x∗ with greater accuracy than conventional, state-of-the-art optimization-based algorithms, especially with a highly coherent Φ. Nor is there supporting theoretical evidence elucidating the exact mechanism whereby learning may be expected to improve the estimation accuracy, especially in the presence of coherent dictionaries. This paper attempts to fill in some of these gaps, and our contributions can be distilled to the following points:
Quantifiable Benefits of Unfolding: We rigorously dissect the benefits of unfolding conventional sparse estimation algorithms to produce trainable deep networks. This includes a precise characterization of exactly how different architecture choices can affect the ability to improve so-called restricted isometry property (RIP) constants, which measure the degree of disruptive correlation in Φ. This helps to quantify the limits of shared layer weights, which are the standard template of existing methods [9, 19, 23], and motivates more flexible network constructions reminiscent of LSTM cells [12] that account for multi-resolution structure in Φ in a previously unexplored fashion. Note that we defer all proofs, as well as many additional analyses and problem details, to a longer companion paper [26].
Isolation of Important Factors: Based on these theoretical insights, and a better understanding of the essential factors governing performance, we establish the degree to which it is favorable to diverge from strict conformity to any particular unfolded algorithmic script. In particular, we argue that layer-wise independent weights and/or activations are essential, while retention of the original thresholding non-linearities and squared-error loss implicit to many sparse algorithms is not. We also recast the core problem as deep multi-label classification given that optimal support pattern recovery is the primary concern. This allows us to adopt a novel training paradigm that is less sensitive to the specific distribution encountered during testing. Ultimately, we develop the first ultra-fast sparse estimation algorithm (or more precisely a learning procedure that produces such an algorithm) that can effectively deal with coherent dictionaries and adversarial RIP constants.
State-of-the-Art Empirical Performance: We apply the proposed system to a practical photometric stereo computer vision problem, where the goal is to estimate the 3D geometry of an object using only 2D photos taken from a single camera under different lighting conditions. In this context, shadows and specularities represent sparse outliers that must be simultaneously removed from ∼ 10^4–10^6 surface points. We achieve state-of-the-art performance using only weak supervision despite a minuscule computational budget appropriate for real-time mobile environments.
2 From Iterative Hard Thresholding (IHT) to Deep Neural Networks
Although any number of iterative algorithms could be adopted as our starting point, here we examine IHT because it is representative of many other sparse estimation paradigms and is amenable to theoretical analysis. With knowledge of an upper bound on the true cardinality, solving (1) can be replaced by the equivalent problem
\min_{x} \ \tfrac{1}{2}\|y - \Phi x\|_2^2 \quad \text{s.t.} \quad \|x\|_0 \le k. \qquad (2)
IHT attempts to minimize (2) using what can be viewed as computationally-efficient projected gradient iterations [3]. Let x(t) denote the estimate of some maximally sparse x∗ after t iterations. The aggregate IHT update computes
x^{(t+1)} = H_k\left[ x^{(t)} - \mu \Phi^\top \left( \Phi x^{(t)} - y \right) \right], \qquad (3)
where µ is a step-size parameter and Hk[·] is a hard-thresholding operator that sets all but the k largest values (in magnitude) of a vector to zero. For the vanilla version of IHT, the step-size µ = 1 leads to a number of recovery guarantees whereby iterating (3), starting from x(0) = 0 is guaranteed to reduce (2) at each step before eventually converging to the globally optimal solution. These results hinge on properties of Φ which relate to the coherence structure of dictionary columns as encapsulated by the following definition.
Definition 1 (Restricted Isometry Property) A dictionary Φ satisfies the Restricted Isometry Property (RIP) with constant δk[Φ] < 1 if
(1 - \delta_k[\Phi])\,\|x\|_2^2 \ \le\ \|\Phi x\|_2^2 \ \le\ (1 + \delta_k[\Phi])\,\|x\|_2^2 \qquad (4)
holds for all {x : ‖x‖0 ≤ k}.
In brief, the smaller the value of the RIP constant δk[Φ], the closer any sub-matrix of Φ with k columns is to being orthogonal (i.e., it has less correlation structure). It is now well-established that dictionaries with smaller values of δk[Φ] lead to sparse recovery problems that are inherently easier to solve. For example, in the context of IHT, it has been shown [3] that if y = Φx∗, with ‖x∗‖0 ≤ k and δ3k[Φ] < 1/ √ 32, then at iteration t of (3) we will have ‖x(t) − x∗‖2 ≤ 2−t‖x∗‖2. It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Moreover, it can be shown that this x∗ is also the unique, optimal solution to (1) [5].
The success of IHT in recovering maximally sparse solutions crucially depends on the RIP-based condition that δ3k[Φ] < 1/√32, which heavily constrains the degree of correlation structure in Φ that can be tolerated. While dictionaries with columns drawn independently and uniformly from the surface of a unit hypersphere (or with elements drawn iid from N(0, 1/n)) will satisfy this condition with high probability provided k is small enough [6], for many/most practical problems of interest we cannot rely on this type of IHT recovery guarantee. In fact, except for randomized dictionaries in high dimensions where tight bounds exist, we cannot even compute the value of δ3k[Φ], which requires calculating the spectral norms of \binom{m}{3k} subsets of dictionary columns.
There are many ways nature might structure a dictionary such that IHT (or most any other existing sparse estimation algorithm) will fail. Here we examine one of the most straightforward forms of dictionary coherence that can easily disrupt performance. Consider the situation where Φ = [εA + uv⊤]N, where columns of A ∈ Rn×m and u ∈ Rn are drawn iid from the surface of a unit hypersphere, while v ∈ Rm is arbitrary. Additionally, ε > 0 is a scalar and N is a diagonal normalization matrix that scales each column of Φ to have unit `2 norm. It then follows that if ε is sufficiently small, the rank-one component begins to dominate, and there is no value of 3k such that δ3k[Φ] < 1/√32. In this type of problem we hypothesize that DNNs provide a potential avenue for improvement to the extent that they might be able to compensate for disruptive correlations in Φ.
For example, at the most basic level we might consider general networks with layer t defined by
x^{(t+1)} = f\left[ \Psi x^{(t)} + \Gamma y \right], \qquad (5)
where f : Rm → Rm is a non-linear activation function, and Ψ ∈ Rm×m and Γ ∈ Rm×n are arbitrary. Moreover, given access to training pairs {x∗,y}, where x∗ is a sparse vector such that y = Φx∗, we can optimize Ψ and Γ using traditional stochastic gradient descent just like any other DNN structure. We will first precisely characterize the extent to which this adaptation affords any benefit over IHT where f(·) = Hk[·]. Later we will consider flexible, layer-specific non-linearities f (t) and parameters {Ψ(t),Γ(t)}.
3 Analysis of Adaptable Weights and Activations
For simplicity in this section we restrict ourselves to the fixed hard-threshold operator Hk[·] across all layers; however, many of the conclusions borne out of our analysis nonetheless carry over to a much wider range of activation functions f . In general it is difficult to analyze how arbitrary Ψ and Γ may improve upon the fixed parameterization from (3) where Ψ = I − Φ>Φ and Γ = Φ> (assuming µ = 1). Fortunately though, we can significantly collapse the space of potential weight matrices by including the natural requirement that if x∗ represents the true, maximally sparse solution, then it must be a fixed-point of (5). Indeed, without this stipulation the iterations could
diverge away from the globally optimal value of x, something IHT itself will never do. These considerations lead to the following:
Proposition 1 Consider a generalized IHT-based network layer given by (5) with f(·) = Hk[·] and let x∗ denote any unique, maximally sparse feasible solution to y = Φx with ‖x‖0 ≤ k. Then to ensure that any such x∗ is a fixed point it must be that Ψ = I − ΓΦ.
Although Γ remains unconstrained, this result has restricted Ψ to be a rank-n factor, parameterized by Γ, subtracted from an identity matrix. Certainly this represents a significant contraction of the space of ‘reasonable’ parameterizations for a general IHT layer. In light of Proposition 1, we may then further consider whether the added generality of Γ (as opposed to the original fixed assignment Γ = Φ>) affords any further benefit to the revised IHT update
x^{(t+1)} = H_k\left[ (I - \Gamma\Phi)\, x^{(t)} + \Gamma y \right]. \qquad (6)
For this purpose we note that (6) can be interpreted as a projected gradient descent step for solving
\min_{x} \ \tfrac{1}{2}\, x^\top \Gamma\Phi x - x^\top \Gamma y \quad \text{s.t.} \quad \|x\|_0 \le k. \qquad (7)
However, if ΓΦ is not positive semi-definite, then this objective is no longer even convex, and, combined with the non-convex constraint, is likely to produce an even wider constellation of troublesome local minima with no clear affiliation with the global optimum of our original problem from (2). Consequently it does not immediately appear that Γ ≠ Φ⊤ is likely to provide any tangible benefit. However, there do exist important exceptions. The first indication of how learning a general Γ might help comes from the following result:
Proposition 2 Suppose that Γ = DΦ>WW>, where W is an arbitrary matrix of appropriate dimension and D is a full-rank diagonal that jointly solve
\delta^{*}_{3k}[\Phi] \;\triangleq\; \inf_{W,\,D}\ \delta_{3k}[W\Phi D]. \qquad (8)
Moreover, assume that Φ is substituted with ΦD in (6), meaning we have simply replaced Φ with a new dictionary that has scaled columns. Given these qualifications, if y = Φx∗, with ‖x∗‖0 ≤ k and δ∗3k [Φ] < 1/ √ 32, then at iteration t of (6)
\|D^{-1}x^{(t)} - D^{-1}x^{*}\|_2 \ \le\ 2^{-t}\,\|D^{-1}x^{*}\|_2. \qquad (9)
It follows that as t → ∞, x(t) → x∗, meaning that we recover the true, generating x∗. Additionally, it can be guaranteed that after a finite number of iterations, the correct support pattern will be discovered. And it should be emphasized that rescaling Φ by some known diagonal D is a common prescription for sparse estimation (e.g., column normalization) that does not alter the optimal `0-norm support pattern.1
But the real advantage over regular IHT comes from the fact that δ∗3k[Φ] ≤ δ3k[Φ], and in many practical cases, δ∗3k[Φ] ≪ δ3k[Φ], which implies success can be guaranteed across a much wider range of RIP conditions. For example, if we revisit the dictionary Φ = [εA + uv⊤]N, an immediate benefit can be observed. More concretely, for sufficiently small ε we argued that δ3k[Φ] > 1/√32 for all k, and consequently convergence to the optimal solution may fail. In contrast, it can be shown that δ∗3k[Φ] will remain quite small, satisfying δ∗3k[Φ] ≈ δ3k[A], implying that performance will nearly match that of an equivalent recovery problem using A (and as we discussed above, δ3k[A] is likely to be relatively small per its unique, randomized design). The following result generalizes a sufficient regime whereby this is possible:
Corollary 1 Suppose Φ = [A + ∆r]N, where elements of A are drawn iid from N(0, 1/n), ∆r is any arbitrary matrix with rank[∆r] = r < n, and N is a diagonal matrix (e.g., one that enforces unit `2 column norms). Then

\mathbb{E}\left(\delta^{*}_{3k}[\Phi]\right) \ \le\ \mathbb{E}\left(\delta_{3k}[\tilde{A}]\right), \qquad (10)

where Ã denotes the matrix A with any r rows removed.

¹ Inclusion of this diagonal factor D can be equivalently viewed as relaxing Proposition 1 to hold under some fixed rescaling of Φ, i.e., an operation that preserves the optimal support pattern.
Additionally, as the size of Φ grows proportionally larger, it can be shown that with overwhelming probability δ∗3k [Φ] ≤ δ3k [ Ã ] . Overall, these results suggest that we can essentially annihilate
any potentially disruptive rank-r component ∆r at the cost of implicitly losing r measurements (linearly independent rows of A, and implicitly the corresponding elements of y). Therefore, at least provided that r is sufficiently small such that δ3k [ Ã ] ≈ δ3k [A], we can indeed be confident
that a modified form of IHT can perform much like a system with an ideal RIP constant. And of course in practice we may not know how Φ decomposes as some Φ ≈ [ A+ ∆r]N ; however, to the extent that this approximation can possibly hold, the RIP constant can be improved nonetheless.
It should be noted that the objective in (8) is non-differentiable and globally solving it is intractable, but this is the whole point of incorporating a DNN to begin with. If we have access to a large number of training pairs {x∗,y} generated using the true Φ, then during the course of the learning process a useful W and D can be implicitly estimated such that a maximal number of sparse vectors can be successfully recovered. Of course we will experience diminishing marginal returns as more non-ideal components enter the picture. In fact, it is not difficult to describe a slightly more sophisticated scenario such that layer-wise constant weights and activations are no longer capable of lowering δ3k[Φ] significantly at all, portending failure when it comes to accurate sparse recovery.
One such example is a clustered dictionary model (which we describe in detail in [26]), whereby columns of Φ are grouped into a number of tight clusters with minimal angular dispersion. While the clusters themselves may be well-separated, the correlation within clusters can be arbitrarily large. In some sense this model represents the simplest partitioning of dictionary column correlation structure into two scales: the inter- and intra-cluster structures. Assuming the number of such clusters is larger than n, then layer-wise constant weights and activations are unlikely to provide adequate relief, since the implicit ∆r factor described above will be full rank.
Fortunately, simple adaptations of IHT, which are reflective of many generic DNN structures, can remedy the problem. The core principle is to design a network such that earlier layers/iterations are tasked with exposing the correct support at the cluster level, without concern for accuracy within each cluster. Once the correct cluster support has been obtained, later layers can then be charged with estimating the fine-grain details of within-cluster support. We believe this type of multi-resolution sparse estimation is essential when dealing with highly coherent dictionaries. This can be accomplished with the following adaptations to IHT:
1. The hard-thresholding operator is generalized to ‘remember’ previously learned cluster-level sparsity patterns, in much the same way that LSTM gates allow long-term dependencies to propagate [12] or highway networks [20] facilitate information flow unfettered to deeper layers. Practically speaking, this adaptation can be computed by passing the prior layer’s activations x(t) through linear filters followed by indicator functions, again reminiscent of how DNN gating functions are typically implemented.
2. We allow the layer weights {Ψ(t),Γ(t)} to vary from iteration to iteration t sequencing through a fixed set akin to layers of a DNN.
In [26] we show that hand-crafted versions of these changes allow IHT to provably recover maximally sparse vectors x∗ in situations where existing algorithms fail.
4 Discriminative Multi-Resolution Sparse Estimation
As implied previously, guaranteed success for most existing sparse estimation strategies hinges on the dictionary Φ having columns drawn (approximately) from a uniform distribution on the surface of a unit hypersphere, or some similar condition to ensure that subsets of columns behave approximately like an orthogonal basis. Essentially this confines the structure of the dictionary to operate on a single universal scale. The clustered dictionary model described in the previous section considers a dictionary built on two different scales, with a cluster-level distribution (coarse) and tightly-packed within-cluster details (fine). But practical dictionaries may display structure operating across a variety of scales that interleave with one another, forming a continuum among multiple levels.
When the scales are clearly demarcated, we have argued that it is possible to manually define a multi-resolution IHT-inspired algorithm that guarantees success in recovering the optimal support pattern; and indeed, IHT could be extended to handle a clustered dictionary model with nested
structures across more than two scales. However, without clearly partitioned scales it is much less obvious how one would devise an optimal IHT modification. It is in this context that learning flexible algorithm iterations is likely to be most advantageous. In fact, the situation is not at all unlike many computer vision scenarios whereby handcrafted features such as SIFT may work optimally in confined, idealized domains, while learned CNN-based features are often more effective otherwise.
Given a sufficient corpus of {x∗,y} pairs linked via some fixed Φ, we can replace manual filter construction with a learning-based approach. On this point, although we view our results from Section 3 as a convincing proof of concept, it is unlikely that there is anything intrinsically special about the specific hard-threshold operator and layer-wise construction we employed per se, as long as we allow for deep, adaptable layers that can account for structure at multiple scales. For example, we expect that it is more important to establish a robust training pipeline that avoids stalling at the hand of vanishing gradients in a deep network, than to preserve the original IHT template analogous to existing learning-based methods. It is here that we propose several deviations:
Multi-Label Classification Loss: We exploit the fact that in producing a maximally sparse vector x∗, the main challenge is estimating supp[x∗]. Once the support is obtained, computing the actual nonzero coefficients just boils down to solving a least squares problem. But any learning system will be unaware of this and could easily expend undue effort in attempting to match coefficient magnitudes at the expense of support recovery. Certainly the use of a data fit penalty of the form ‖y − Φx‖_2^2, as is adopted by nearly all sparse recovery algorithms, will expose us to this issue. Therefore we instead formulate sparse recovery as a multi-label classification problem. More specifically, instead of directly estimating x∗, we attempt to learn s∗ = [s∗1, . . . , s∗m]⊤, where s∗i equals the indicator function I[x∗i ≠ 0]. For this purpose we may then incorporate a traditional multi-label classification loss function via a final softmax output layer, which forces the network to only concern itself with learning support patterns. This substitution is further justified by the fact that even with traditional IHT, the support pattern will be accurately recovered before the iterations converge exactly to x∗. Therefore we may expect that fewer layers (as well as training data) are required if all we seek is a support estimate, opening the door for weaker forms of supervision.
Instruments for Avoiding Bad Local Solutions: Given that IHT can take many iterations to converge on challenging problems, we may expect that a relatively deep network structure will be needed to obtain exact support recovery. We must therefore take care to avoid premature convergence to local minima or areas with vanishing gradient by incorporating several recent countermeasures proposed in the DNN community. For example, the adaptive variant of IHT described previously is reminiscent of highway networks or LSTM cells, which have been proposed to allow longer range flow of gradient information to improve convergence through the use of gating functions. An even simpler version of this concept involves direct, un-gated connections that allow much deeper ‘residual’ networks to be trained [10] (which is even suggestive of the residual factor embedded in the original IHT iterations). We deploy this tool, along with batch-normalization [14] to aid convergence, for our basic feedforward pipeline, along with an alternative structure based on recurrent LSTM cells. Note that unfolded LSTM networks frequently receive a novel input for every time step, whereas here y is applied unaltered at every layer (more on this in [26]). We also replace the non-integrable hard-threshold operator with simple rectilinear (ReLu) units [17], which are functionally equivalent to one-sided soft-thresholding; this convex selection likely reduces the constellation of sub-optimal local minima during the training process.
5 Experiments and Applications
Synthetic Tests with Correlated Dictionaries: We generate a dictionary matrix Φ ∈ Rn×m using Φ = Σ_{i=1}^{n} (1/i²) u_i v_i^⊤, where u_i ∈ Rn and v_i ∈ Rm have iid elements drawn from N(0, 1). We also rescale each column of Φ to have unit ℓ2 norm. Φ generated in this way has super-linearly decaying singular values (indicating correlation between the columns) but is not constrained to any specific structure. Many dictionaries in real applications have such a property. As a basic experiment, we generate N = 700000 ground truth samples x∗ ∈ Rm by randomly selecting d nonzero entries, with nonzero amplitudes drawn iid from the uniform distribution U[−0.5, 0.5], excluding the interval [−0.1, 0.1] to avoid small, relatively inconsequential contributions to the support pattern. We then create y ∈ Rn via y = Φx∗. As d increases, the estimation problem becomes more difficult. In fact, guaranteeing success with such correlated data (and a high RIP constant) requires evaluating on the order of (m choose n) linear systems of size n×n, which is infeasible even for small values of m and n, indicative of how challenging it can be to solve sparse inverse problems of any size. We set n = 20 and m = 100.
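For concreteness, the data-generation recipe above can be reproduced with a short NumPy sketch like the following (the random seed and per-sample nonzero count d are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 20, 100, 5
# Correlated dictionary: Phi = sum_i (1/i^2) u_i v_i^T, then unit-norm columns.
Phi = sum((1.0 / i**2) * np.outer(rng.standard_normal(n), rng.standard_normal(m))
          for i in range(1, n + 1))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)

# One ground-truth pair (x*, y): d nonzeros with uniform amplitudes bounded away from zero.
support = rng.choice(m, size=d, replace=False)
x_star = np.zeros(m)
x_star[support] = rng.uniform(0.1, 0.5, size=d) * rng.choice([-1.0, 1.0], size=d)
y = Phi @ x_star
```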
We used N1 = 600000 samples for training and the remaining N2 = 100000 for testing. Echoing our arguments in Section 4, we explored both a feedforward network with residual connections [10] and a recurrent network with vanilla LSTM cells [12]. To evaluate the performance, we check whether the d ground truth nonzeros are aligned with the predicted top-d values produced by our network, a common all-or-nothing metric in the compressive sensing literature. Detailed network design, optimization setup, and alternative metrics can be found in [26].
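The evaluation metric can be stated precisely as the following check (our paraphrase of the all-or-nothing criterion):

```python
import numpy as np

def support_recovered(scores, true_support):
    """All-or-nothing check: do the top-d predicted indices equal the d true nonzero locations?"""
    d = len(true_support)
    top_d = np.argsort(-scores)[:d]
    return set(top_d.tolist()) == set(true_support)

scores = np.array([0.9, 0.1, 0.8, 0.2, 0.05])    # toy per-index scores from a network
print(support_recovered(scores, [0, 2]))          # True: indices {0, 2} receive the two largest scores
```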
Figure 1(left) shows comparisons against a battery of existing algorithms, both learning- and optimization-based. These include standard ℓ1 minimization via ISTA iterations [2], IHT [3] (supplied with the ground truth number of nonzeros), an ISTA-based network [9], and an IHT-inspired network [23]. For both the ISTA- and IHT-based networks, we used the exact same training data described above. Note that given the correlated Φ matrix, the recovery performance of IHT, and to a lesser degree ℓ1 minimization using ISTA, is rather modest as expected given that the associated RIP constant will be quite large by construction. In contrast, our two methods achieve uniformly higher accuracy, including over other learning-based methods trained with the same data. This improvement is likely the result of three significant factors: (i) Existing learning methods initialize using weights derived from the original sparse estimation algorithms, but such an initialization will be associated with locally optimal solutions in most cases with correlated dictionaries. (ii) As described in Section 3, constant weights across layers have limited capacity to unravel multi-resolution dictionary structure, especially one that is not confined to only possess some low-rank correlating component. (iii) The quadratic loss function used by existing methods does not adequately focus resources on the crux of the problem, which is accurate support recovery. In contrast, we adopt an initialization motivated by DNN-based training considerations, unique layer weights to handle a multi-resolution dictionary, and a multi-label classification output layer to focus on support recovery.
To further isolate essential factors affecting performance, we next consider the following changes: (1) We remove the residual connections from Res-Net. (2) We replace ReLU with hard-threshold activations. In particular, we utilize the so-called HELUσ function introduced in [23], which is a continuous and piecewise linear approximation of the scalar hard-threshold operator. (3) We use a quadratic penalty layer instead of a multi-label classification loss layer, i.e., the loss function is changed to Σ_{i=1}^{N1} ‖a^(i) − y^(i)‖_2^2 (where a is the output of the last fully-connected layer) during training. Figure 1(middle) displays the associated recovery percentages, where we observe that in each case performance degrades. Without the residual design, and also with the inclusion of a rigid, non-convex hard-threshold operator, local minima during training appear to be a likely culprit, consistent with observations from [10]. Likewise, use of a least-squares loss function is likely to over-emphasize the estimation of coefficient amplitudes rather than focusing on support recovery.
Finally, from a practical standpoint we may expect that the true amplitude distribution may deviate at times from the original training set. To explore robustness to such mismatch, as well as different amplitude distributions, we consider two sets of candidate data: the original data, and similarly generated data but with the uniform distribution of nonzero elements replaced with the Gaussians N(±0.3, 0.1), where the mean is selected with equal probability as either −0.3 or 0.3, thus avoiding tiny magnitudes with high probability. Figure 1(right) reports accuracies under different distributions for both training and testing, including mismatched cases. (The results are obtained using LSTM-Net, but Res-Net showed a similar pattern.) The label ‘U2U’ refers to training and testing with the uniformly distributed amplitudes, while ‘U2N’ uses a uniform training set and a Gaussian test set. Analogous definitions apply for ‘N2N’ and ‘N2U’. In all cases we note that the performance is
quite stable across training and testing conditions. We would argue that our recasting of the problem as multi-label classification contributes, at least in part, to this robustness. The application example described next demonstrates further tolerance of training-testing set mismatches.
Practical Application - Photometric Stereo: Suppose we have q observations of a given surface point from a Lambertian scene under different lighting directions. Then the resulting measurements from a standard calibrated photometric stereo design (linear camera response function, an orthographic camera projection, and known directional light sources), denoted o ∈ Rq, can be expressed as o = ρLn, where n ∈ R3 denotes the true 3D surface normal, each row of L ∈ Rq×3 defines a lighting direction, and ρ is the diffuse albedo, acting here as a scalar multiplier [24]. If specular highlights, shadows, or other gross outliers are present, then the observations are more realistically modeled as o = ρLn + e, where e is an unknown sparse vector [13, 25]. It is apparent that, since n is unconstrained, e need not compensate for any component of o in the range of L. Given that null[L^⊤] is the orthogonal complement to range[L], we may consider the following problem
min_e ‖e‖_0   s.t.   Proj_{null[L^⊤]}(o) = Proj_{null[L^⊤]}(e)                (11)
which ultimately collapses to our canonical sparse estimation problem from (1), where lighting-hardware-dependent correlations may be unavoidable in the implicit dictionary.
Following [13], we use 32-bit HDR gray-scale images of the object Bunny (256×256) with foreground masks under different lighting conditions whose directions, or rows of L, are randomly selected from a hemisphere with the object placed at the center. To apply our method, we first compute Φ using the appropriate projection operator derived from the lighting matrix L. As real-world training data is expensive to acquire, we instead use weak supervision by synthetically generating a training set as follows. First, we draw a support pattern for e randomly with cardinality d sampled uniformly from the range [d1, d2]. The values of d1 and d2 can be tuned in practice. Nonzero values of e are assigned iid random values from a Gaussian distribution whose mean and variance are also tunable. Beyond this, no attempt was made to match the true outlier distributions encountered in applications of photometric stereo. Finally, for each e we can naturally compute observations via the linear constraint in (11), which serve as candidate network inputs.
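A sketch of this weak-supervision pipeline is below; the projector construction and the ranges d1, d2 and Gaussian parameters are illustrative assumptions, not the exact values used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 32                                             # number of lighting directions (placeholder)
L = rng.standard_normal((q, 3))                    # rows = lighting directions

# Orthogonal projector onto null[L^T]: P = I - L (L^T L)^{-1} L^T.
P = np.eye(q) - L @ np.linalg.solve(L.T @ L, L.T)

def synthetic_pair(d1=2, d2=6, mu=0.3, sigma=0.1):
    """One weakly supervised training pair: projected observation and outlier-support labels."""
    d = int(rng.integers(d1, d2 + 1))
    e = np.zeros(q)
    idx = rng.choice(q, size=d, replace=False)
    e[idx] = rng.normal(mu, sigma, size=d) * rng.choice([-1.0, 1.0], size=d)
    return P @ e, (e != 0).astype(np.float64)      # network input, support targets

obs, labels = synthetic_pair()
```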
Given synthetic training data acquired in this way, we learn a network with the exact same structure and optimization parameters as in Section 5; no application-specific tuning was introduced. We then deploy the resulting network on the gray-scale Bunny images. For each surface point, we use our DNN model to approximately solve (11). Since the network output will be a probability map for the outlier support set instead of the actual values of e, we choose the 4 indices with the lowest outlier probability as inliers and use them to compute n via least squares.
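The per-pixel recovery step then amounts to a small least-squares solve; a sketch (with the inlier count of 4 from above, toy inputs, and hypothetical variable names) might look like:

```python
import numpy as np

def estimate_normal(o, L, outlier_probs, n_inliers=4):
    """Keep the observations least likely to be outliers, then solve o ≈ rho * L n by least squares."""
    inliers = np.argsort(outlier_probs)[:n_inliers]
    rho_n, *_ = np.linalg.lstsq(L[inliers], o[inliers], rcond=None)
    rho = np.linalg.norm(rho_n)
    return rho_n / max(rho, 1e-12), rho            # unit surface normal n and albedo rho

rng = np.random.default_rng(1)
L = rng.standard_normal((32, 3))
o = rng.standard_normal(32)
n_hat, albedo = estimate_normal(o, L, rng.random(32))
```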
We compare our method against the baseline least squares estimate from [24] and ℓ1 norm minimization. We defer more quantitative comparisons to [26]. In Figure 2, we illustrate the recovered surface normal error maps for the hardest case (fewest lighting directions). Here we observe that our DNN estimates lead to far fewer regions of significant error, and the runtime is orders of magnitude faster. Overall though, this application example illustrates that weak supervision with mismatched synthetic training data can, at least for some problem domains, be sufficient to learn quite a useful sparse estimation DNN; here one that facilitates real-time 3D modeling in mobile environments.
Discussion: In this paper we have shown that deep networks with hand-crafted, multi-resolution structure can provably solve certain specific classes of sparse recovery problems where existing algorithms fail. However, much like CNN-based features can often outperform SIFT on many computer vision tasks, we argue that a discriminative approach can outperform manual structuring of layers/iterations and compensate for dictionary coherence under more general conditions.
Acknowledgements: This work was done while the first author was an intern at Microsoft Research, Beijing. It is also funded by 973-2015CB351800, NSFC-61231010, NSFC-61527804, NSFC-61421062, NSFC-61210005 and the MOE-Microsoft Key Laboratory, Peking University. | 1. What is the focus of the paper regarding the problem of sparse recovery?
2. What is the novel aspect introduced by the authors in using deep neural networks for sparse recovery?
3. What are the concerns regarding the motivation behind the approach proposed in the paper?
4. How does the reviewer assess the readability of the content and the clarity of the described network structure and optimization algorithm?
5. Are there any gaps between the theoretical analysis and practical applications discussed in the paper? | Review | Review
This paper investigates the problem of sparse recovery of a linear system with an unknown dictionary. Given a set of training signal-response pairs generated from an unknown linear system, the authors introduce a predictive sparse recovery model based on deep neural network (DNN) training. The main idea is to assume that the sparse signal can be encoded by a deep network built over the observation. Inspired by iterative hard-thresholding (IHT), the hard-thresholding operation is applied as the activation function on the hidden layers to induce the nonlinearity. Numerical results on synthetic data and photometric stereo applications are reported to show the actual performance of the proposed model. The idea of using a DNN to approximately unfold the IHT iteration procedure is interesting and novel as far as I am aware. I have, however, the following major concerns about its content: (1) Motivation is unconvincing: The training pairs (x, y) are assumed to obey a linear system y=Ax with unknown measurement matrix A. Given ample training data, why should we bother to use DNN-type models if A can simply be estimated via, e.g., least squares regression? Once A is recovered, we may then apply IHT on a testing response vector with the estimated measurement matrix to approximately recover the sparse signal. Actually, I note the simulation study uses a training set of size 600,000 and the measurement matrix is only of size 20x100 << 600,000. In such a setting, it should not be difficult to accurately estimate A from the data. (2) Another major concern is its readability. The somewhat overcomplicated writing style in fact underscores the limited innovation of the model and algorithmic contributions. Throughout the paper, I cannot find a clear description of the network structure and its optimization algorithm. (3) There is a clear gap between theory and applications. The main sparse recovery result is about parameter estimation error in a noiseless linear system. The presented application of multi-resolution sparse estimation is defined as a multi-label classification problem, and it remains unclear whether the developed theory applies to it.
NIPS | Title
FLEX: Unifying Evaluation for Few-Shot NLP
Abstract
Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark,2 which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew,3 a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
1 Introduction
Few-shot learning, the challenge of learning from a small number of examples, is critical for developing efficient, robust NLP techniques [71, 76]. In recent years, separate threads of few-shot NLP research have pursued goals like generalization to new classes [e.g., 5, 25], adaptation to new domains and tasks [e.g., 3, 4, 21], and direct application of pretrained language models (LMs) [e.g., 10, 24, 55, 56]. Unfortunately, despite the shared goal of advancing few-shot NLP techniques, the community does not know which techniques work best or even if they perform better than simple baselines. Evaluation suites across these research threads are disjoint, lack challenging-yet-realistic testing setups (e.g., class imbalance, variable training set sizes, etc.), and do not employ careful experimental design to ensure accurate and precise evaluation estimates and minimal computational burden. Prior work in few-shot learning outside of NLP serves as a stark warning of the consequences of improper measurement: Dhillon et al. [19] showed that techniques from several years of prior work did not make clear progress due to large overlapping accuracy distributions and, moreover, do not outperform a simple, carefully-tuned baseline.
Need for systematic benchmark design As such, a high-quality benchmark is urgently needed to enable rigorous comparison of techniques across disjoint, highly-active threads of few-shot NLP research. But what should such an evaluation suite look like? Some best practices for evaluation of few-shot methods have been introduced in the computer vision (CV) literature [19, 67] and should
∗Equal contribution 2Benchmark, leaderboard, and benchmark creation toolkit: https://github.com/allenai/flex.
Apache License 2.0 3Few-shot model: https://github.com/allenai/unifew. Apache License 2.0
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
be applied to NLP. However, unifying few-shot NLP work introduces new challenges. For example, the benchmark needs to test all types of transfer studied in separate research threads to measure progress on new techniques that make gains in each of these important generalization settings (§2). Also, given the importance of zero-shot learning and learning from task descriptions [29, 73], the benchmark needs to include zero-shot episodes and textual labels to enable measuring progress for models that do not use conventional supervised training, including methods that leverage the latent knowledge in pretrained LMs [10, 24, 78]. Further, the benchmark must accommodate new, computationally-expensive approaches, without overly reducing the number of evaluation episodes at the expense of statistical accuracy [3, 24, 75].
Need for a robust few-shot model Recent prompt-based models [10] have shown strong results in few-shot learning. These models leverage the power of (often large) pretrained language models and adapt the format of downstream tasks to the underlying pretraining objective (e.g., Masked Language Modeling). This way, given the right natural language prompt (and sometimes verbalizers [55] and additional demonstrative examples), the model can quickly fine-tune on the downstream task [24, 43, 44, 55, 56]. However, adapting task formats to the underlying (masked) language modeling objectives is not straightforward; such models have been shown to be sensitive to varying choices of the prompt/demonstrations, training settings, hyperparameters, and learning algorithms [33, 50, 78], often requiring large held-out sets and/or complex methods to overcome such challenges. Can models eschew complex prompt engineering by unifying pretraining and downstream task formats?
In this paper, we tackle these key issues by introducing FLEX—Few-shot Language Evaluation across (X) many transfer types—and contributing the following:
• FLEX Principles (§3), a set of requirements and best practices for few-shot NLP evaluation that enables unified, rigorous, valid, and cost-sensitive measurements.
– Sample Size Design: In support of valid, cost-sensitive measurement, we introduce a novel approach to few-shot sample size design (§5) that optimizes for a benchmark’s statistical accuracy and precision while keeping computational costs accessible to a broad range of researchers.
• FLEX benchmark (§4), an implementation of the FLEX Principles. It tests across four few-shot transfer settings,7 and includes a public leaderboard for few-shot NLP that covers 20 datasets across diverse NLP tasks (e.g., NLI, relation classification, entity typing). Table 1 summarizes key differences between FLEX and other few-shot NLP evaluation suites.
4The total number of training shots in each episode, not number of shots per class per episode. 5Most users use unlabeled examples, though recently, Tam et al. [65] do not. 6Average (avg), confidence interval (CI), standard deviation (SD), individual episode metrics 7Prior work evaluated at most two settings.
• UniFew (§6), a prompt-based model for few-shot learning in NLP. While most existing methods leverage pre-trained LMs for few-shot learning, LM pre-training tasks do not closely match natural downstream task formats, requiring complex methods (e.g., extensive prompt-engineering, use of verbalizers, episodic hyperparameter tuning, custom learning algorithms) to make these models work in few-shot setting. Instead, the key idea of our model, UniFew, is to close the gap between pre-training and fine-tuning formats by posing tasks as multiple-choice QA and using an underlying model that is pre-trained on a similar natural QA task format. This eliminates the need for complexities of adapting downstream tasks to the LM objectives, while resulting in competitive performance with both recent few-shot and meta-learning methods.
To aid similar efforts, our release of FLEX includes a toolkit for benchmark creation and few-shot NLP model development, which we used to create the FLEX benchmark and train UniFew.
2 Background and Related Work
We first provide background and notation for few-shot learning and evaluation, then discuss related work in NLP and outside NLP that motivated us to create the FLEX Principles and benchmark.
Few-shot background and notation Broadly, modern approaches to few-shot learning are evaluated in a three-phase procedure [68]. In the first phase, a general-purpose pretrained model is obtained. In the subsequent “meta-training” phase,8 techniques aim to adapt the model to be well-suited for few-shot generalization. Finally, a “meta-testing” phase evaluates the adapted model in new few-shot prediction settings.
Let D be a dataset of (x, y) examples with full label set YD. From it, we construct three sets of episodes, corresponding to meta-training, meta-validation, and meta-testing and denoted by Etrain, Eval, and Etest, respectively. Each episode in each of these sets is a few-shot problem with its own test set and other attributes. Formally, each episode E is a tuple (D^E_train, D^E_val, D^E_test, Y^E_D), where Y^E_D is a sampled subset of labels in YD and D^E_train, D^E_val, D^E_test are disjoint sets of examples from D with labels in Y^E_D.9 For each episode, the model’s objective is to correctly predict labels for examples in D^E_test. To accomplish this, models make use of labeled examples in D^E_train, which is typically configured such that each label i in Y^E_D has K^E_i provided examples; K^E_i is known as the shot, and the setting when a class has no examples in D^E_train (i.e., K^E_i = 0) is called zero-shot.
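In code, an episode is just this tuple; the sketch below (types and field names are ours, not the toolkit's API) makes the notation concrete.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Episode:
    """One few-shot problem E = (D^E_train, D^E_val, D^E_test, Y^E_D)."""
    labels: List[str]                    # Y^E_D: sampled label subset (textual labels)
    train: List[Tuple[str, str]]         # D^E_train: (text, label) pairs; empty list => zero-shot
    val: List[Tuple[str, str]]           # D^E_val: empty for meta-testing episodes in FLEX
    test: List[Tuple[str, str]]          # D^E_test: examples the model must label

    def shots(self) -> Dict[str, int]:
        """K^E_i: number of provided training examples for each label i."""
        return {y: sum(1 for _, lbl in self.train if lbl == y) for y in self.labels}
```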
Few-shot evaluation in NLP Research in few-shot NLP has proceeded in several parallel threads, each focused on a different type of transfer ability [76]. Each thread has separate evaluation practices, and the vast majority of few-shot NLP research has limited evaluation to a single transfer type (see Table 1). Here, we describe these types of transfer and their evaluation practices.
Following the CV literature [67, 68], one thread of few-shot NLP focuses on class transfer, the problem of generalizing from a supervised set of classes at meta-train time to a different set of classes from the same dataset at meta-test time. Evaluation typically involves splitting classes YD into disjoint subsets Y^train_D, Y^val_D, and Y^test_D. Class transfer has been studied on many text classification tasks [5], including relation classification [25, 28, 64], intent classification [37, 64], inter alia. In contrast, domain transfer keeps the same classes between meta-training and meta-testing but changes the textual domain (e.g., generalizing from MNLI to science-focused SciTail [4, 21]). Evaluation then requires identifying pairs of datasets with the same classes YD, where one dataset’s episodes are assigned to Etrain and the other’s to Etest. Domain transfer has also been studied on many tasks [3, 4], including dialogue intent detection & slot tagging [31], sentiment classification [77], NLI [21], and machine translation [27, 58].
Researchers have also begun to study task transfer, the problem of generalizing from a set of tasks at meta-train time to unseen tasks at meta-test time. Evaluation requires tasks (e.g., NLI) appearing in Etest not to appear in Etrain or Eval. Prior work has used GLUE tasks [70] for meta-training before meta-testing on tasks such as entity typing [3, 4], while other work instead used GLUE for
8Meta-training may include a “meta-validation” component, for validating generalization. 9In the few-shot literature, D^E_train and D^E_test are also called the support and query sets, and |Y^E_D| the way.
meta-testing [21]. Very recent work has studied task transfer over a large set of datasets [75, 80]. A limited amount of work evaluates both domain and task transfer [3, 4, 21]. An important emerging line of work (not noted by Yin [76]) is pretraining transfer, the problem of whether pretrained language models can perform well at meta-test time without any meta-training. Evaluation in this setting requires Etrain, Eval = ∅. Prior work has shown that pretrained language models are capable of surprising performance on many few-shot tasks, even without fine-tuning [10]. More recent work, mainly focusing on text classification, has reported further gains with cloze-style formats [55, 56, 65], prompt engineering [24], or calibration [78]. FLEX is designed to exercise all four of these transfer types from previous work.
Few-shot evaluation outside NLP The few-shot learning literature has largely focused on image classification, with the introduction of increasingly complex meta-learning algorithms [e.g., 23, 39, 54, 61, 68]. However, more recent work has shown that simple fine-tuning baselines are in fact competitive, and attribute this delayed discovery to problematic evaluation methodology [15, 19]. FLEX adopts recommended methodology [19, 67], and we introduce an analogous baseline (UniFew) to provide a strong measurement foundation for few-shot NLP.
3 FLEX Principles for Few-Shot NLP Evaluation
We now enumerate key desiderata for a few-shot NLP benchmark capable of solving the urgent problems with few-shot NLP evaluation, including separate evaluations for each transfer type and failure to incorporate best measurement practices from other domains (§2).
Diversity of transfer types To make NLP models broadly useful, few-shot NLP techniques must be capable of class, domain, and task transfer. Moreover, techniques should make use of the relevant supervision provided during meta-training to increase performance compared to the pretraining transfer setting. The benchmark should measure all four transfer settings to ensure that the community develops techniques that improve on strong pretraining transfer baselines, and enable comparison across these currently separate threads of research.
Variable number of shots and classes To better simulate a variety of real-world scenarios, the benchmark should include a variety of training set sizes and numbers of classes [67]. Testing robustness to these factors is crucial; few-shot techniques are often sensitive to changes in these factors [12], yet all prior few-shot NLP evaluations we are aware of used a fixed number of training shots and classes, known in advance during meta-training.
Unbalanced training sets The benchmark should also include unbalanced training sets with different training shots per class, another realistic setting adopted by CV benchmarks [67]. Class imbalance has also been observed to degrade performance [11, 47], yet prior few-shot NLP evaluations do not include this setting either.
Textual labels While numerical label values are often used in classification tasks, descriptive textual labels are also present for many tasks. Making these textual labels available for use by few-shot techniques enables the development of techniques that can leverage the class name, like in-context learning [10], template generation [24], and meta-learning [45]. Textual labels are crucial in particular for zero-shot evaluation.
Zero-shot evaluation We believe zero-shot evaluation is integral to the goals of few-shot evaluation. Similar to the motivation for measuring pretraining transfer, zero-shot evaluation is an important use case and also provides a strong baseline for some tasks. In the absence of training examples, textual class labels or richer task descriptions [73] must be provided. Some recent few-shot NLP work [e.g., 10, 24] evaluated with zero training shots, but most [e.g., 3, 5, 75] did not.
No extra meta-testing data We believe the benchmark should not provide validation data (D^E_val = ∅, ∀E ∈ Etest) or unlabeled data for meta-testing tasks, since few-shot learning seeks to enable high performance in environments where collecting additional data is costly.10 Variation in these dimensions in prior NLP work makes comparison of results extremely difficult because it is often under-reported and gives unfair advantage to approaches that leverage such data [50]. For example, per-episode hyperparameter tuning on extra data has been shown to greatly inflate evaluation scores [24]. A few researchers [5, 65] follow our suggested approach, but others have used many
10Unlabeled data collection can be costly too, e.g. due to manual filtering [16].
different settings, from validation sets of various sizes [10, 24, 79] to no validation set but a large set of unlabeled examples [55, 56].
Principled sample size design Promising few-shot techniques can incur significant computational cost per episode, e.g., due to fine-tuning model parameters [4], searching for prompts [24], inter alia. To alleviate these costs, related works often evaluate with a limited number of episodes, which precludes statistically accurate or precise performance estimates. We believe the benchmark’s test sample size should be optimized to enable proper performance evaluation for such techniques, while ensuring the computational burden is inclusive toward researchers without large compute resources.
Proper reporting of CIs, SDs, and individual results The benchmark should report confidence intervals (CIs) of performance estimates and follow recent guidelines [19] to report standard deviations (SDs) for understanding variability. Moreover, we newly advocate for controlling for the same sampled few-shot episodes across all methods and reporting individual episode results, so that researchers can run higher-powered paired statistical tests when comparing results [22], crucial when the benchmark has been optimized for low evaluation budgets.
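Concretely, given per-episode accuracies from two methods evaluated on the same episodes, the reporting and the paired comparison look like the following sketch (the toy numbers are made up; SciPy and the normal-approximation CI are assumptions of this illustration):

```python
import numpy as np
from scipy import stats

# Per-episode accuracies of two methods on the SAME sampled episodes (toy values).
acc_a = np.array([0.62, 0.58, 0.71, 0.66, 0.60])
acc_b = np.array([0.59, 0.55, 0.70, 0.61, 0.57])

mean, sd = acc_a.mean(), acc_a.std(ddof=1)
ci_half = 1.96 * sd / np.sqrt(len(acc_a))          # 95% CI half-width on the mean
t_stat, p_value = stats.ttest_rel(acc_a, acc_b)    # paired test, enabled by shared episodes
print(f"A: {mean:.3f} ± {ci_half:.3f} (SD {sd:.3f}); paired p = {p_value:.3f}")
```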
4 FLEX Benchmark
The FLEX benchmark is a unifying, rigorous evaluation suite for few-shot learning in NLP, which implements the desiderata outlined in the previous section. In this section, we describe detailed design decisions and our accompanying few-shot NLP toolkit (§4.4), which we are releasing to facilitate easily adding NLP datasets and advanced sampling options to future benchmarks. We also describe the FLEX leaderboard (§4.5).
4.1 Task and Dataset Selection
Following GLUE [70] and other prior work [3, 5, 24, 78], we focus on tasks formatted as classification. Despite recent advances, NLP state-of-the-art models remain significantly worse than human performance on many text classification tasks, particularly in the few-shot setting. Automatic scoring of classification tasks is also more reliable than text generation tasks.
We selected datasets across three recent few-shot NLP evaluation suites, which separately studied class transfer [5], domain and task transfer [3, 4], and pretraining transfer [24]. Our benchmark includes a broad mix of tasks (NLI, question classification, entity typing, relation classification, and sentiment analysis) and formats (document, sentence, sentence pair). More complete dataset and license details are available in the following subsection and Appendix A.
4.2 Meta-Evaluation Protocols
As discussed earlier, FLEX evaluates four different types of transfer: Class, Domain, Task, and Pretraining Transfer. To support all types, we report results on the FLEX benchmark both without meta-training (pretraining-only) and with meta-training. This reporting scheme evaluates the performance of the basic pretrained model and the benefit (or lack thereof) of meta-training. A similar reporting scheme was proposed by Triantafillou et al. [67] for CV.
Pretraining-Only In this setting, the pretrained model is directly meta-tested on our benchmark without any additional training. This is the Pretraining Transfer setting, and it is the most difficult, but given the recent success of pretrained models in NLP for few-shot learning [10, 24], we believe that comparison to models without any meta-training is important for NLP tasks.
Meta-Trained In this setting, the model is meta-trained then meta-tested on our benchmark. We carefully selected and split datasets across meta-train/validation/test in order to enable testing of Class, Domain, and Task transfer with a single meta-training phase (to reduce computational burden). Datasets involved in each transfer setting (detailed split information in Table 4 in Appendix A):
• Class Transfer: FewRel [28], HuffPost [46], Amazon [30], 20News [38], and Reuters [41] take part in meta-training and meta-testing but with different classes.
• Domain Transfer: MR [49], CR [32], SNLI [9], and SciTail [35] are only in the meta-testing phase, but the corresponding sentiment and NLI datasets exist in the meta-training phase (MNLI [74], QNLI [52], and SST-2 [62]).
• Task Transfer: Subj [48], TREC [69], and CoNLL [66] are also for meta-testing only, and they represent tasks that the model does not encounter during meta-training.
Instead of per-episode hyperparameter tuning, we provide meta-validation episodes Eval for learning (during meta-training) global hyperparameters that work across all episodes. Specifically, the metavalidation dataset splits (see Table 4) consist of CoLa [72] for task transfer, WNLI [40] for domain transfer, and the validation splits used by Bao et al. [5] for all class transfer datasets. Following [3], we also include meta-training datasets MRPC [20], RTE [6, 8, 17, 26], and QQP [70].
4.3 Episode Sampling
We describe how our benchmark samples meta-testing episodes Etest. For meta-training, we allow users to sample from Etrain, Eval in any way, or directly use the underlying dataset splits. Number of classes For Class Transfer datasets, FLEX evaluates model robustness to a variable number of new classes. When constructing episode E from one of these datasets D, our benchmark samples an episode-specific number of classes from dataset D: the sampler picks a random number |Y^E_D| ∼ Unif(5, min(|YD|, 10)).11 For Domain and Task Transfer, the number of classes is fixed to the maximum number of classes in each dataset because Class Transfer is not being evaluated.
Number of shots Following prior work outside NLP [47, 67], our benchmark samples the training shot independently for each episode E and class i, as K^E_i ∼ Unif(K_min, K_max), where K_min = 1. Given strong performance of NLP models with few or even zero examples [10, 73] and following prior work [5], we set the limit K_max = 5. Separately, we allocate an equal number of episodes as zero-shot, where we instead set D^E_train = ∅ (equivalently, K^E_i = 0, ∀i). In each episode, examples are sampled uniformly at random without replacement (but can be reused across episodes).12 Following Triantafillou et al. [67], we select a testing shot that is balanced across classes and leaves roughly half of examples for sampling the training examples. The total number of episodes for each reported configuration (pair of dataset and either zero- or few-shot) is set to 90 using Sample Size Design (§5).
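The sampling procedure can be summarized with the sketch below; function and variable names are ours, and the real toolkit additionally balances the test shot exactly across classes and checksums the sampled episodes.

```python
import random

def sample_episode(examples_by_label, class_transfer=True, k_max=5, zero_shot=False, seed=0):
    """Rough sketch of FLEX-style sampling: variable way, variable (unbalanced) shot."""
    rng = random.Random(seed)
    labels = list(examples_by_label)
    if class_transfer:
        way = rng.randint(5, min(len(labels), 10))     # |Y^E_D| ~ Unif(5, min(|Y_D|, 10))
        labels = rng.sample(labels, way)
    train, test = [], []
    for y in labels:
        pool = rng.sample(examples_by_label[y], len(examples_by_label[y]))
        n_test = len(pool) // 2                        # roughly half reserved for testing
        test += [(x, y) for x in pool[:n_test]]
        k = 0 if zero_shot else rng.randint(1, k_max)  # K^E_i ~ Unif(1, 5), or 0 for zero-shot
        train += [(x, y) for x in pool[n_test:n_test + k]]
    return train, test
```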
4.4 Extensible Toolkit for Benchmark Creation and Model Training & Evaluation
Alongside the FLEX benchmark, we release an extensible, highly-configurable Python toolkit, which we used to generate the benchmark, and train and evaluate our models. Unlike existing meta-learning frameworks (e.g., Torchmeta [18], learn2learn [2]), our framework makes available a wide range of community-contributed NLP datasets and utilities via HuggingFace Datasets [42].13 Our code also provides advanced sampling utilities (e.g., for class imbalance), ensures reproducibility by checksumming generated episodes, and reports all recommended statistics.
4.5 Public Leaderboard
We provide public leaderboards for each of the meta-evaluation protocols: Pretraining-Only14 and Meta-Trained.15 Submissions take the form of a text label predictions file, which is produced by our toolkit. Results are reported with confidence intervals, standard deviations, and individual predictions on request. See Appendix G for a screenshot of the results interface.
5 Sample Size Design: Balancing Statistical Measurement & Compute Cost
We demonstrate a principled approach to determining the optimal sample size configuration in our few-shot benchmark. A proper benchmark should produce performance estimates that are accurate, close to the true value, and precise, low variance. A large (test) sample size can achieve this, yet must be considered alongside computational cost so that a broad community of researchers with differing amounts of compute resources can participate. This decision is further complicated in the few-shot
11We limit to 10 classes to avoid undue burden on in-context approaches that fit examples in memory [10], and use a lower bound of 5 classes to match prior work [5].
12These samples represent an unbiased performance estimate, but do not eliminate underlying dataset biases. 13Apache License 2.0. Full license details for all software dependencies available in Appendix F. 14https://leaderboard.allenai.org/flex/ 15https://leaderboard.allenai.org/flex_meta/
setting, where sample size refers to both the number of test episodes |Etest| and the number of test examples |D^E_test| per episode E ∈ Etest. For practicality, we consider |Dtest|, the mean |D^E_test| across all episodes, rather than every |D^E_test|. It remains unknown how one should best distribute test examples between |Etest| and |Dtest|: More episodes each with fewer examples, or fewer episodes each with many examples? Prior work has been inconsistent in this regard. For example, Gao et al. [24] used |Etest| = 5 and large |Dtest|, while Bao et al. [5] used |Etest| = 1000 and much smaller |Dtest|. Inspired by simulation techniques for informing statistically-powered experimental design [13], we study how different configurations of |Etest| and |Dtest| across different compute budgets C impact the accuracy and precision of our estimated CIs, specifically with respect to coverage probability [53] and width. First, we estimate per-episode and per-test-example costs of our few-shot model (§6) to obtain valid (C, |Etest|, |Dtest|) configurations s.t. the full benchmark completes within given C (GPU-hours).16 Then, for each (C, |Etest|, |Dtest|), we perform 1000 simulation runs, in which each run samples predictions under a true model accuracy µacc and computes a single 95% CI, its width, and whether it correctly covers µacc. Averaging over simulation runs gives us estimates for the coverage probability and width of our benchmark’s CI for a single (C, |Etest|, |Dtest|). We repeat this whole procedure for different µacc ∈ {0.3, 0.35, . . . , 0.95} to cover a wide range of possible model performances observed across many datasets (see Table 3).
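A simplified version of one such simulation is sketched below; it treats test predictions as independent draws at accuracy µacc and uses a normal-approximation CI, which is a simplification of the benchmark's actual CI computation.

```python
import numpy as np

def simulate_ci(mu_acc, n_episodes, n_test, n_runs=1000, seed=0):
    """Estimate coverage probability and mean width of a 95% CI on mean episode accuracy."""
    rng = np.random.default_rng(seed)
    covered, widths = 0, []
    for _ in range(n_runs):
        accs = rng.binomial(n_test, mu_acc, size=n_episodes) / n_test   # per-episode accuracies
        half = 1.96 * accs.std(ddof=1) / np.sqrt(n_episodes)
        covered += abs(accs.mean() - mu_acc) <= half
        widths.append(2 * half)
    return covered / n_runs, float(np.mean(widths))

print(simulate_ci(0.6, n_episodes=90, n_test=470))    # roughly the chosen FLEX configuration
```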
Figure 1 shows CI coverage probability and width for many (C, |Etest|, |Dtest|) configurations. First, we find in Figure 1a that sufficiently-many test episodes (i.e., |Etest| > 60) is needed to guarantee coverage probability of our CIs is within one percentage point of the target 95%, a trend that holds regardless of compute budget. Small |Etest| also corresponds to large CI widths across all considered budgets in Figure 1b. This suggests that the choices of |Etest| = 1, 5, 10 in prior work [4, 24, 56, 75] can mean inaccurate and wide CIs, while choices of |Etest| = 1000 [5] can be prohibitively costly for methods with high training cost.
Next, Figure 1b reveals (i) diminishing returns in CI width (decrease in y-axis) as compute increases, and (ii) existence of an optimal balance between |Etest| and |Dtest| for each budget. Restricting our consideration to budgets with optima satisfying sufficient coverage probability (|Etest| > 60), the minimum viable budget is 36 GPU-hours. Then, assessing the marginal benefit of each 12 GPU-hour budget increase in terms of marginal reduction in CI width between optima, we arrive at our FLEX
16Costs estimated using a Quadro RTX-8000 GPU with 48Gb memory. For few-shot settings, model was trained with 300 steps. Per-episode and per-test-example costs were approx. 95–98 and 0.7–0.11 GPU-sec, respectively. Using a model with high per-episode cost for this analysis allows us to define a lower-bound sample size requirement; we can always test inexpensive or zero-shot models on more |Etest| or Dtest within budget.
configuration of |Etest| = 90 and |Dtest| ≈ 470 under a budget of C = 48 GPU-hours.17 Further details are in Appendix B.
6 UniFew: A Few-Shot Learning Model by Unifying Pre-training and Downstream Task Formats
Despite their encouraging results, existing works on few-shot learning in NLP are based on either customized and often complex meta-learning algorithms [3, 4, 5, 60], heavy manual/automated engineering of textual descriptions or prompts [24, 55, 59, 78], ordering of training examples [44, 56], extensive hyperparameter tuning on held-out sets [24, 44, 55], or custom learning algorithms [55, 65]. We present UniFew, a strong few-shot learning model across all transfer settings and datasets tested, that eschews the need for incorporating the above-mentioned complexities and challenges.
UniFew is a prompt-based model [56], a class of models that tailor the input/output format of their data to match the format used during pretraining. While this technique allows them to perform a task without the need for additional classification layers, prompt-based models are typically sensitive to the choice of the prompts, which can require extensive search, trial-and-error, and even additional models to get right [24, 78]. To avoid this issue while still leveraging the strong capabilities of pretrained models, UniFew (1) converts examples into multiple-choice question-answer (QA) format, and (2) uses UnifiedQA [34], a T5 [51] model further pretrained on a large collection of QA pairs.18,19
Compared to other prompt-based models, UniFew has two main strengths. First, the prompt design problem is much simpler because UnifiedQA questions had well-defined formats. For example, we only need four general prompt templates which cover all 20 datasets in the FLEX benchmark, while prior works have needed specialized prompts for each dataset. Second, UnifiedQA’s multiple-choice format ensures the model outputs a valid class label, without the need for learned or manually-defined mappings or verbalizers required for other prompt-based methods [24, 55].20 In concurrent work, Zhong et al. [80] also show the benefit of performing meta-tuning on a variety of datasets; while their task setup as Q/A is similar to UniFew, they focus exclusively on binary zero-shot classification tasks and, unlike UniFew, do not handle multi-class or few-shot problems.
We experiment with UniFew both without and with meta-training on the FLEX benchmark’s meta-training data, following the FLEX protocol (§4.2). We call the meta-trained variant UniFewmeta. We use simple prompts in the format of a question followed by choices followed by the answer (according to the UnifiedQA original format). The exact prompts used are provided in Appendix C.
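For illustration, a single classification example might be cast into a multiple-choice QA input roughly as follows; the exact templates UniFew uses are the ones listed in Appendix C, so the wording and separators below are only an approximation.

```python
def to_multiple_choice_prompt(text, label_names, question="Which label best describes the text?"):
    """Cast one classification example as a multiple-choice QA input (approximate format)."""
    choices = " ".join(f"({chr(ord('a') + i)}) {name}" for i, name in enumerate(label_names))
    return f"{question} \\n {choices} \\n {text}"

prompt = to_multiple_choice_prompt("The movie was a delight from start to finish.",
                                   ["positive", "negative"])
# The model is then expected to generate the chosen label text, e.g. "positive".
```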
Training details For meta-training and meta-validation of UniFew, we sampled Etrain and Eval with 5-class, 5-training-shot sampling with the same number of shots per class.21 We trained the model for total number of 30K steps, using a linear learning rate scheduler with peak rate of 3e−5, 200 warmup steps, and batch size of 4; we selected the best checkpoint based on Eval performance. At meta-test time, for each episode, we trained the model on the episode’s training examples (if they exist) and predicted the outputs on test examples. For training at meta-test time, we used constant learning rate of 3e−5 and batch size of 4, and trained the model for 400 steps.22 We used NVidia RTX8000 GPUs, which take about 7 GPU-hours for meta-training and 48 GPU-hours for meta-testing. For meta-testing we split the episodes among 8 GPUs to speed up evaluations.
7 Experiments
Comparing UniFew with prior work To demonstrate the efficacy of UniFew, we evaluate it against state-of-the-art approaches for few-shot and meta-learning in NLP: LM-BFF [24], a language
17Consider budget increases 36→ 48, 48→ 60, 60→ 72 and 72→ 80. The first reduces CI width by 13%. Further increases reduce CI width by an additional 9%, 7%, and 5%, respectively. We choose C = 48 based on these diminishing returns.
18UnifiedQA and T5 both use Apache License 2.0. We use publicly-released large-size model weights. 19None of the supervised datasets in the pretraining of UnifiedQA or T5 are in FLEX. 20In rare cases, especially for zero-shot, UnifiedQA may generate an invalid answer (e.g., “Yes, Yes, No”
instead of “Yes”). We use simple heuristics to normalize the answer in such cases. 21Users of FLEX can specify the sampling configuration of Etrain and Eval as desired. 22For comparison with [24] we trained the model for 600 steps.
model prompt-based fine-tuning method, as well as Distributional Signatures (DS) [5] and H-SMLMT [4], two state-of-the-art meta-learning techniques. Refer to Appendix D for details on these methods.
We compare to these methods using the datasets in the FLEX benchmark to establish the quality of our model. Since we constructed our benchmark from disjoint subsets of datasets evaluated in each of these prior works (§4.1), we compare each method with its corresponding subset of datasets. Each of these prior works evaluates their methods using different experimental setups (classes, number of episodes, shots) than our benchmark and was not designed to handle FLEX’s challenging episode characteristics like class imbalance. To enable fair comparison, we test UniFew on the exact data splits released by the authors when available (H-SMLMT and LM-BFF). For DS, we sample (balanced) episodes using our framework after matching their test settings (number of shots and classes, class splits, etc.) and reproduce their reported results to within 1% absolute difference using their model code; we use these episodes for our experiments. The results in Table 2 show that UniFewmeta outperforms both H-SMLMT and DS meta-learning approaches by relatively large margins, while achieving competitive results compared with LM-BFF. Note that UniFew’s strong results are without meta-learning approaches, extensive prompt-engineering, or per-episode hyperparameter search.
Evaluating UniFew on the FLEX benchmark Having established UniFew as a strong model comparable to recent, state-of-the art techniques, we present its results on the final version of our benchmark (with class imbalance, etc.). From Table 3, we observe three findings. First, pretraining is an effective technique for infusing an NLP model with the ability to perform few-shot generalization even without any meta-training, as UniFew is able to score Δfew = +12.8 higher when provided
23Gao et al. [24]’s automatic prompt search and in-context learning are not available in the zero-shot setting, so they instead use manually-designed prompts.
24Zero-shot results from Gao et al. [24] are on the entire test set, so there is no reported standard deviation. 25 16/16 denotes 16 shots for training plus 16 more for validation, which we only use for early stopping while
Gao et al. [24] use for grid-search hyperparameter tuning.
few rather than zero examples. Second, by comparing UniFewmeta and UniFew, we see that meta-training has a substantial impact on zero-shot performance (Δmeta = +14.5), but its benefit, while still substantial, is less in the few-shot setting (Δmeta = +8.6). Third, while meta-training adds roughly the same benefit to zero- and few-shot performance for both domain and task transfer settings, meta-training disproportionately benefits zero-shot class transfer (Δmeta = +16.2) over few-shot class transfer (Δmeta = +4.3). Such observations are made possible through unified evaluation and comparison across different transfer types. The full FLEX benchmark results broken down by individual datasets are in Appendix E.
8 Limitations and Future Work
While the initial FLEX benchmark is focused on classification tasks, we aim to use our benchmark creation toolkit (§4.4) to incorporate additional task formats like span selection or text generation. Furthermore, the benchmark currently only supports English language tasks; to study language transfer, we aim to incorporate new datasets using our toolkit. Adding diverse datasets has its own challenges; while we’ve selected datasets for our benchmark based on prior work adoption and have attempted to verify their licensing for research use, we were unable to find license details for some datasets (Appendix A). We believe it is crucial to continually evolve the suite of datasets to remain challenging for the best models [36] and to tackle real-world challenges [1].
In addition, Sample Size Design (§5) simulations currently rely on our own available training estimates. We plan to gather a more representative sample from community leaderboard submissions.
Our public leaderboard could benefit from extended support for detailed comparisons between submissions based on properties of techniques. For example, approaches may vary in terms of model characteristics (e.g., number of parameters), data and supervision used during pretraining, amount of compute, etc. We encourage reporting all these factors to enable the community to analyze and make progress on important sub-spaces in the overall few-shot technique design space.
Finally, we believe the benefits of improving few-shot NLP techniques outweigh potential risks, but we acknowledge potential harms associated with language models [7, 14, 57, 63]. Few-shot models learn a task from a few examples but rely heavily on knowledge encoded in the pretrained model. Thus, few-shot models are more likely to inherit the biases of the pretrained models, compared to more fully supervised models; as the community focuses more on few-shot learning, it is more important than ever for future pretrained models to be careful about biases in the underlying pretraining corpora.
9 Conclusion
In this work, we unify and bring rigor to few-shot NLP evaluation. We formulate the FLEX Principles, a set of requirements and best practices that enables unified, rigorous, valid, and cost-sensitive measurement. We advance the principles with new Sample Size Design methodology for optimizing statistical accuracy and precision while keeping costs low. The FLEX benchmark is our instantiation of the FLEX Principles; it employs Sample Size Design and includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard with diverse NLP tasks. We present UniFew, a promptbased model that aligns pretraining and downstream task formats, achieving results competitive with recent few-shot methods despite using trivial prompt engineering. Finally, we release an extensible, open-source toolkit (used to train UniFew and generate the FLEX benchmark) to support future benchmark creation and few-shot NLP model training.
Acknowledgments and Disclosure of Funding
We would like to thank Chandra Bhagavatula, Matt Gardner, Matt Peters, Doug Downey, Dan Weld, and the four anonymous reviewers for helpful comments, suggestions and feedback. We would also like to acknowledge the large community effort involved in the creation of the datasets and open-source tools we utilize. | 1. What is the focus of the paper regarding few-shot text classification?
2. What are the strengths of the proposed benchmark and leaderboard?
3. Are there any weaknesses or limitations in the paper's approach or contributions?
4. How does the reviewer assess the novelty and significance of the combined evaluation framework?
5. What additional information would the reviewer like to see in the paper regarding the prompt-based approach? | Summary Of The Paper
Review | Summary Of The Paper
FLEX is a new benchmark and leaderboard that combines previous work on few-shot text classification. While there are no new tasks, the new benchmark adds value by combining complementary lines of work in a well-designed evaluation framework.
Review
This paper presents a new benchmark and associated leaderboard that aggregates text classification tasks from three previous works on few-shot learning.
The tasks are chosen to cover a range of domain, task, and learning transfer. There is a detailed investigation of the trade-off between large test sets with few test episodes, and smaller test sets with many test episodes. The test setting is chosen to maximise precision of the benchmark given a moderate compute budget.
The paper presents a new prompt-based approach to few-shot learning that converts each of the classification tasks into a multi-choice QA task. The details of this conversion are not given in the paper and I would like to see them listed. Comparisons are also made to previous models that were applied to some subset of the tasks included in FLEX.
While this paper does not introduce any new tasks or evaluation methodologies, it does add value by combining several lines of complementary work on few-shot learning into a single evaluation framework that has been well designed. The prompt-based UniFew baseline also seems both simple and effective, although I would like to see more details of the prompt template design in the main paper.
NIPS | Title
FLEX: Unifying Evaluation for Few-Shot NLP
Abstract
Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark,2 which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew,3 a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
1 Introduction
Few-shot learning, the challenge of learning from a small number of examples, is critical for developing efficient, robust NLP techniques [71, 76]. In recent years, separate threads of few-shot NLP research have pursued goals like generalization to new classes [e.g., 5, 25], adaptation to new domains and tasks [e.g., 3, 4, 21], and direct application of pretrained language models (LMs) [e.g., 10, 24, 55, 56]. Unfortunately, despite the shared goal of advancing few-shot NLP techniques, the community does not know which techniques work best or even if they perform better than simple baselines. Evaluation suites across these research threads are disjoint, lack challenging-yet-realistic testing setups (e.g., class imbalance, variable training set sizes, etc.), and do not employ careful experimental design to ensure accurate and precise evaluation estimates and minimal computational burden. Prior work in few-shot learning outside of NLP serves as a stark warning of the consequences of improper measurement: Dhillon et al. [19] showed that techniques from several years of prior work did not make clear progress due to large overlapping accuracy distributions and, moreover, do not outperform a simple, carefully-tuned baseline.
Need for systematic benchmark design As such, a high-quality benchmark is urgently needed to enable rigorous comparison of techniques across disjoint, highly-active threads of few-shot NLP research. But what should such an evaluation suite look like? Some best practices for evaluation of few-shot methods have been introduced in the computer vision (CV) literature [19, 67] and should
∗Equal contribution 2Benchmark, leaderboard, and benchmark creation toolkit: https://github.com/allenai/flex.
Apache License 2.0 3Few-shot model: https://github.com/allenai/unifew. Apache License 2.0
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
be applied to NLP. However, unifying few-shot NLP work introduces new challenges. For example, the benchmark needs to test all types of transfer studied in separate research threads to measure progress on new techniques that make gains in each of these important generalization settings (§2). Also, given the importance of zero-shot learning and learning from task descriptions [29, 73], the benchmark needs to include zero-shot episodes and textual labels to enable measuring progress for models that do not use conventional supervised training, including methods that leverage the latent knowledge in pretrained LMs [10, 24, 78]. Further, the benchmark must accommodate new, computationally-expensive approaches, without overly reducing the number of evaluation episodes at the expense of statistical accuracy [3, 24, 75].
Need for a robust few-shot model Recent prompt-based models [10] have shown strong results in few-shot learning. These models leverage the power of (often large) pretrained language models and adapt the format of downstream tasks to the underlying pretraining objective (e.g., Masked Language Modeling). This way, given the right natural language prompt (and sometimes verbalizers [55] and additional demonstrative examples), the model can quickly fine-tune on the downstream task [24, 43, 44, 55, 56]. However, adapting task formats to the underlying (masked) language modeling objectives is not straightforward; such models have been shown to be sensitive to varying choices of the prompt/demonstrations, training settings, hyperparameters, and learning algorithms [33, 50, 78], often requiring large held-out sets and/or complex methods to overcome such challenges. Can models eschew complex prompt engineering by unifying pretraining and downstream task formats?
In this paper, we tackle these key issues by introducing FLEX—Few-shot Language Evaluation across (X) many transfer types—and contributing the following:
• FLEX Principles (§3), a set of requirements and best practices for few-shot NLP evaluation that enables unified, rigorous, valid, and cost-sensitive measurements.
– Sample Size Design: In support of valid, cost-sensitive measurement, we introduce a novel approach to few-shot sample size design (§5) that optimizes for a benchmark’s statistical accuracy and precision while keeping computational costs accessible to a broad range of researchers.
• FLEX benchmark (§4), an implementation of the FLEX Principles. It tests across four few-shot transfer settings,7 and includes a public leaderboard for few-shot NLP that covers 20 datasets across diverse NLP tasks (e.g., NLI, relation classification, entity typing). Table 1 summarizes key differences between FLEX and other few-shot NLP evaluation suites.
4 The total number of training shots in each episode, not number of shots per class per episode.
5 Most users use unlabeled examples, though recently, Tam et al. [65] do not.
6 Average (avg), confidence interval (CI), standard deviation (SD), individual episode metrics.
7 Prior work evaluated at most two settings.
• UniFew (§6), a prompt-based model for few-shot learning in NLP. While most existing methods leverage pre-trained LMs for few-shot learning, LM pre-training tasks do not closely match natural downstream task formats, requiring complex methods (e.g., extensive prompt-engineering, use of verbalizers, episodic hyperparameter tuning, custom learning algorithms) to make these models work in the few-shot setting. Instead, the key idea of our model, UniFew, is to close the gap between pre-training and fine-tuning formats by posing tasks as multiple-choice QA and using an underlying model that is pre-trained on a similar natural QA task format. This eliminates the need for the complexities of adapting downstream tasks to the LM objectives, while resulting in competitive performance with both recent few-shot and meta-learning methods.
To aid similar efforts, our release of FLEX includes a toolkit for benchmark creation and few-shot NLP model development, which we used to create the FLEX benchmark and train UniFew.
2 Background and Related Work
We first provide background and notation for few-shot learning and evaluation, then discuss related work in NLP and outside NLP that motivated us to create the FLEX Principles and benchmark.
Few-shot background and notation Broadly, modern approaches to few-shot learning are evaluated in a three-phase procedure [68]. In the first phase, a general-purpose pretrained model is obtained. In the subsequent “meta-training” phase,8 techniques aim to adapt the model to be well-suited for few-shot generalization. Finally, a “meta-testing” phase evaluates the adapted model in new few-shot prediction settings.
Let D be a dataset of (x, y) examples with full label set Y_D. From it, we construct three sets of episodes, corresponding to meta-training, meta-validation, and meta-testing and denoted by Etrain, Eval, and Etest, respectively. Each episode in each of these sets is a few-shot problem with its own test set and other attributes. Formally, each episode E is a tuple (D^E_train, D^E_val, D^E_test, Y^E_D), where Y^E_D is a sampled subset of labels in Y_D and D^E_train, D^E_val, D^E_test are disjoint sets of examples from D with labels in Y^E_D.9 For each episode, the model’s objective is to correctly predict labels for examples in D^E_test. To accomplish this, models make use of labeled examples in D^E_train, which is typically configured such that each label i in Y^E_D has K^E_i provided examples; K^E_i is known as the shot, and the setting when a class has no examples in D^E_train (i.e., K^E_i = 0) is called zero-shot.
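To make this notation concrete, the following is a minimal sketch (an illustration only, not the benchmark toolkit's actual data model; the field names are our own) of how an episode could be represented in code:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Example = Tuple[str, str]  # (text x, label y)

@dataclass
class Episode:
    """One few-shot problem: a sampled label subset plus disjoint train/val/test example sets."""
    labels: List[str]                                        # Y^E_D, sampled subset of the dataset's labels
    train: List[Example] = field(default_factory=list)       # D^E_train (empty in the zero-shot setting)
    val: List[Example] = field(default_factory=list)         # D^E_val
    test: List[Example] = field(default_factory=list)        # D^E_test, the examples to predict

    def shots(self) -> Dict[str, int]:
        """K^E_i: number of training examples provided for each label i."""
        counts = {label: 0 for label in self.labels}
        for _, y in self.train:
            counts[y] += 1
        return counts

    @property
    def is_zero_shot(self) -> bool:
        return len(self.train) == 0
```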
Few-shot evaluation in NLP Research in few-shot NLP has proceeded in several parallel threads, each focused on a different type of transfer ability [76]. Each thread has separate evaluation practices, and the vast majority of few-shot NLP research has limited evaluation to a single transfer type (see Table 1). Here, we describe these types of transfer and their evaluation practices.
Following the CV literature [67, 68], one thread of few-shot NLP focuses on class transfer, the problem of generalizing from a supervised set of classes at meta-train time to a different set of classes from the same dataset at meta-test time. Evaluation typically involves splitting classes Y_D into disjoint subsets Y_D^train, Y_D^val, and Y_D^test. Class transfer has been studied on many text classification tasks [5], including relation classification [25, 28, 64], intent classification [37, 64], inter alia. In contrast, domain transfer keeps the same classes between meta-training and meta-testing but changes the textual domain (e.g., generalizing from MNLI to science-focused SciTail [4, 21]). Evaluation then requires identifying pairs of datasets with the same classes Y_D, where one dataset’s episodes are assigned to Etrain and the other’s to Etest. Domain transfer has also been studied on many tasks [3, 4], including dialogue intent detection & slot tagging [31], sentiment classification [77], NLI [21], and machine translation [27, 58].
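For concreteness, the class-transfer split described above can be produced with a few lines of code; the sketch below is our own illustration (the split fractions are assumptions, not values used by any particular benchmark):

```python
import random

def split_labels_for_class_transfer(labels, fractions=(0.6, 0.2, 0.2), seed=0):
    """Partition a dataset's label set Y_D into disjoint Y_D^train / Y_D^val / Y_D^test subsets."""
    rng = random.Random(seed)
    shuffled = rng.sample(list(labels), k=len(labels))   # shuffled copy of the label set
    n_train = int(fractions[0] * len(shuffled))
    n_val = int(fractions[1] * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# e.g., 10 relation types -> 6 for meta-training, 2 for meta-validation, 2 held out for meta-testing
train_labels, val_labels, test_labels = split_labels_for_class_transfer(
    [f"rel_{i}" for i in range(10)])
```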
Researchers have also begun to study task transfer, the problem of generalizing from a set of tasks at meta-train time to unseen tasks at meta-test time. Evaluation requires tasks (e.g., NLI) appearing in Etest not to appear in Etrain or Eval. Prior work has used GLUE tasks [70] for meta-training before meta-testing on tasks such as entity typing [3, 4], while other work instead used GLUE for meta-testing [21]. Very recent work has studied task transfer over a large set of datasets [75, 80]. A limited amount of work evaluates both domain and task transfer [3, 4, 21]. An important emerging line of work (not noted by Yin [76]) is pretraining transfer, the problem of whether pretrained language models can perform well at meta-test time without any meta-training. Evaluation in this setting requires Etrain, Eval = ∅. Prior work has shown that pretrained language models are capable of surprising performance on many few-shot tasks, even without fine-tuning [10]. More recent work, mainly focusing on text classification, has reported further gains with cloze-style formats [55, 56, 65], prompt engineering [24], or calibration [78]. FLEX is designed to exercise all four of these transfer types from previous work.
8 Meta-training may include a “meta-validation” component, for validating generalization.
9 In the few-shot literature, D^E_train and D^E_test are also called the support and query sets, and |Y^E_D| the way.
Few-shot evaluation outside NLP The few-shot learning literature has largely focused on image classification, with the introduction of increasingly complex meta-learning algorithms [e.g., 23, 39, 54, 61, 68]. However, more recent work has shown that simple fine-tuning baselines are in fact competitive, and attributes this delayed discovery to problematic evaluation methodology [15, 19]. FLEX adopts recommended methodology [19, 67], and we introduce an analogous baseline (UniFew) to provide a strong measurement foundation for few-shot NLP.
3 FLEX Principles for Few-Shot NLP Evaluation
We now enumerate key desiderata for a few-shot NLP benchmark capable of solving the urgent problems with few-shot NLP evaluation, including separate evaluations for each transfer type and failure to incorporate best measurement practices from other domains (§2).
Diversity of transfer types To make NLP models broadly useful, few-shot NLP techniques must be capable of class, domain, and task transfer. Moreover, techniques should make use of the relevant supervision provided during meta-training to increase performance compared to the pretraining transfer setting. The benchmark should measure all four transfer settings to ensure that the community develops techniques that improve on strong pretraining transfer baselines, and enable comparison across these currently separate threads of research.
Variable number of shots and classes To better simulate a variety of real-world scenarios, the benchmark should include a variety of training set sizes and numbers of classes [67]. Testing robustness to these factors is crucial; few-shot techniques are often sensitive to changes in these factors [12], yet all prior few-shot NLP evaluations we are aware of used a fixed number of training shots and classes, known in advance during meta-training.
Unbalanced training sets The benchmark should also include unbalanced training sets with different training shots per class, another realistic setting adopted by CV benchmarks [67]. Class imbalance has also been observed to degrade performance [11, 47], yet prior few-shot NLP evaluations do not include this setting either.
Textual labels While numerical label values are often used in classification tasks, descriptive textual labels are also present for many tasks. Making these textual labels available for use by few-shot techniques enables the development of techniques that can leverage the class name, like in-context learning [10], template generation [24], and meta-learning [45]. Textual labels are crucial in particular for zero-shot evaluation.
Zero-shot evaluation We believe zero-shot evaluation is integral to the goals of few-shot evaluation. Similar to the motivation for measuring pretraining transfer, zero-shot evaluation is an important use case and also provides a strong baseline for some tasks. In the absence of training examples, textual class labels or richer task descriptions [73] must be provided. Some recent few-shot NLP work [e.g., 10, 24] evaluated with zero training shots, but most [e.g., 3, 5, 75] did not.
No extra meta-testing data We believe the benchmark should not provide validation data (D^E_val = ∅, ∀E ∈ Etest) or unlabeled data for meta-testing tasks, since few-shot learning seeks to enable high performance in environments where collecting additional data is costly.10 Variation in these dimensions in prior NLP work makes comparison of results extremely difficult because it is often under-reported and gives unfair advantage to approaches that leverage such data [50]. For example, per-episode hyperparameter tuning on extra data has been shown to greatly inflate evaluation scores [24]. A few researchers [5, 65] follow our suggested approach, but others have used many different settings, from validation sets of various sizes [10, 24, 79] to no validation set but a large set of unlabeled examples [55, 56].
10 Unlabeled data collection can be costly too, e.g. due to manual filtering [16].
Principled sample size design Promising few-shot techniques can incur significant computational cost per episode, e.g., due to fine-tuning model parameters [4], searching for prompts [24], inter alia. To alleviate these costs, related works often evaluate with a limited number of episodes, which precludes statistically accurate or precise performance estimates. We believe the benchmark’s test sample size should be optimized to enable proper performance evaluation for such techniques, while ensuring the computational burden is inclusive toward researchers without large compute resources.
Proper reporting of CIs, SDs, and individual results The benchmark should report confidence intervals (CIs) of performance estimates and follow recent guidelines [19] to report standard deviations (SDs) for understanding variability. Moreover, we newly advocate for controlling for the same sampled few-shot episodes across all methods and reporting individual episode results, so that researchers can run higher-powered paired statistical tests when comparing results [22], crucial when the benchmark has been optimized for low evaluation budgets.
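As a concrete illustration of this reporting style (a sketch of our own, not the benchmark's scoring code), the snippet below computes the average, standard deviation, and a normal-approximation 95% CI over per-episode accuracies, and runs a paired t-test between two methods scored on the same sampled episodes:

```python
import numpy as np
from scipy import stats

def summarize(per_episode_acc):
    """Mean, standard deviation, and normal-approximation 95% CI over per-episode accuracies."""
    acc = np.asarray(per_episode_acc, dtype=float)
    mean, sd = acc.mean(), acc.std(ddof=1)
    half_width = 1.96 * sd / np.sqrt(len(acc))
    return {"avg": mean, "sd": sd, "ci95": (mean - half_width, mean + half_width)}

def paired_comparison(acc_a, acc_b):
    """Paired t-test; only valid if both methods are scored on the same sampled episodes."""
    t_stat, p_value = stats.ttest_rel(acc_a, acc_b)
    return t_stat, p_value

# Hypothetical per-episode accuracies for two methods on the same 90 test episodes.
rng = np.random.default_rng(0)
method_a = rng.normal(0.70, 0.05, size=90).clip(0, 1)
method_b = rng.normal(0.68, 0.05, size=90).clip(0, 1)
print(summarize(method_a))
print(paired_comparison(method_a, method_b))
```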
4 FLEX Benchmark
The FLEX benchmark is a unifying, rigorous evaluation suite for few-shot learning in NLP, which implements the desiderata outlined in the previous section. In this section, we describe detailed design decisions and our accompanying few-shot NLP toolkit (§4.4), which we are releasing to facilitate easily adding NLP datasets and advanced sampling options to future benchmarks. We also describe the FLEX leaderboard (§4.5).
4.1 Task and Dataset Selection
Following GLUE [70] and other prior work [3, 5, 24, 78], we focus on tasks formatted as classification. Despite recent advances, state-of-the-art NLP models remain significantly worse than human performance on many text classification tasks, particularly in the few-shot setting. Automatic scoring of classification tasks is also more reliable than for text generation tasks.
We selected datasets across three recent few-shot NLP evaluation suites, which separately studied class transfer [5], domain and task transfer [3, 4], and pretraining transfer [24]. Our benchmark includes a broad mix of tasks (NLI, question classification, entity typing, relation classification, and sentiment analysis) and formats (document, sentence, sentence pair). More complete dataset and license details are available in the following subsection and Appendix A.
4.2 Meta-Evaluation Protocols
As discussed earlier, FLEX evaluates four different types of transfer: Class, Domain, Task, and Pretraining Transfer. To support all types, we report results to the FLEX benchmark both without meta-training (pretraining-only) and with meta-training. This reporting scheme evaluates the performance of the basic pretrained model and the benefit (or lack thereof) of meta-training. A similar reporting scheme was proposed by Triantafillou et al. [67] for CV.
Pretraining-Only In this setting, the pretrained model is directly meta-tested on our benchmark without any additional training. This is the Pretraining Transfer setting, and it is the most difficult, but given the recent success of pretrained models in NLP for few-shot learning [10, 24], we believe that comparison to models without any meta-training is important for NLP tasks.
Meta-Trained In this setting, the model is meta-trained then meta-tested on our benchmark. We carefully selected and split datasets across meta-train/validation/test in order to enable testing of Class, Domain, and Task transfer with a single meta-training phase (to reduce computational burden). Datasets involved in each transfer setting (detailed split information in Table 4 in Appendix A):
• Class Transfer: FewRel [28], HuffPost [46], Amazon [30], 20News [38], and Reuters [41] take part in meta-training and meta-testing but with different classes.
• Domain Transfer: MR [49], CR [32], SNLI [9], and SciTail [35] are only in the meta-testing phase, but the corresponding sentiment and NLI datasets exist in the meta-training phase (MNLI [74], QNLI [52], and SST-2 [62]).
• Task Transfer: Subj [48], TREC [69], and CoNLL [66] are also for meta-testing only, and they represent tasks that the model does not encounter during meta-training.
Instead of per-episode hyperparameter tuning, we provide meta-validation episodes Eval for learning (during meta-training) global hyperparameters that work across all episodes. Specifically, the meta-validation dataset splits (see Table 4) consist of CoLA [72] for task transfer, WNLI [40] for domain transfer, and the validation splits used by Bao et al. [5] for all class transfer datasets. Following [3], we also include meta-training datasets MRPC [20], RTE [6, 8, 17, 26], and QQP [70].
4.3 Episode Sampling
We describe how our benchmark samples meta-testing episodes Etest. For meta-training, we allow users to sample from Etrain, Eval in any way, or directly use the underlying dataset splits.

Number of classes For Class Transfer datasets, FLEX evaluates model robustness to a variable number of new classes. When constructing episode E from one of these datasets D, our benchmark samples an episode-specific number of classes from dataset D; the sampler picks a random number from the range |Y^E_D| ∼ Unif(5, min(|Y_D|, 10)).11 For Domain and Task Transfer, the number of classes is fixed to the maximum number of classes in each dataset because Class Transfer is not being evaluated.
Number of shots Following prior work outside NLP [47, 67], our benchmark samples the training shot independently for each episode E and class i, as K^E_i ∼ Unif(K_min, K_max), where K_min = 1. Given strong performance of NLP models with few or even zero examples [10, 73] and following prior work [5], we set the limit K_max = 5. Separately, we allocate an equal number of episodes as zero-shot, where we instead set D^E_train = ∅ (equivalently, K^E_i = 0, ∀i). In each episode, examples are sampled uniformly at random without replacement (but can be reused across episodes).12 Following Triantafillou et al. [67], we select a testing shot that is balanced across classes and leaves roughly half of examples for sampling the training examples. The total number of episodes for each reported configuration (pair of dataset and either zero- or few-shot) is set to 90 using Sample Size Design (§5).
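A condensed sketch of this sampling procedure follows (our own illustration rather than the released toolkit; it approximates the balanced test shot by holding out half of each class's examples, and omits zero-shot allocation bookkeeping and reproducibility checks):

```python
import random

def sample_episode(dataset, class_transfer=True, k_min=1, k_max=5, zero_shot=False, seed=0):
    """dataset: dict mapping label name -> list of example texts."""
    rng = random.Random(seed)
    labels = list(dataset)
    if class_transfer:
        n_classes = rng.randint(5, min(len(labels), 10))        # |Y^E_D| ~ Unif(5, min(|Y_D|, 10))
        labels = rng.sample(labels, n_classes)
    train, test = [], []
    for label in labels:
        pool = rng.sample(dataset[label], len(dataset[label]))  # shuffled copy, no replacement
        n_test = len(pool) // 2                                 # roughly half reserved for testing
        test += [(x, label) for x in pool[:n_test]]
        if not zero_shot:
            k_i = rng.randint(k_min, k_max)                     # K^E_i ~ Unif(1, 5), per class
            train += [(x, label) for x in pool[n_test:n_test + k_i]]
    return {"labels": labels, "train": train, "test": test}
```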
4.4 Extensible Toolkit for Benchmark Creation and Model Training & Evaluation
Alongside the FLEX benchmark, we release an extensible, highly-configurable Python toolkit, which we used to generate the benchmark, and train and evaluate our models. Unlike existing meta-learning frameworks (e.g., Torchmeta [18], learn2learn [2]), our framework makes available a wide range of community-contributed NLP datasets and utilities via HuggingFace Datasets [42].13 Our code also provides advanced sampling utilities (e.g., for class imbalance), ensures reproducibility by checksumming generated episodes, and reports all recommended statistics.
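The episode checksumming mentioned above can be illustrated in a few lines (a sketch under the assumption that an episode is serialized as JSON; the actual toolkit may differ):

```python
import hashlib
import json

def episode_checksum(episode: dict) -> str:
    """Deterministic SHA-256 digest of an episode's sampled contents."""
    canonical = json.dumps(episode, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Regenerating the benchmark should reproduce identical digests;
# a mismatch signals sampling nondeterminism or a data-version drift.
```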
4.5 Public Leaderboard
We provide public leaderboards for each of the meta-evaluation protocols: Pretraining-Only14 and Meta-Trained.15 Submissions take the form of a text label predictions file, which is produced by our toolkit. Results are reported with confidence intervals, standard deviations, and individual predictions on request. See Appendix G for a screenshot of the results interface.
5 Sample Size Design: Balancing Statistical Measurement & Compute Cost
We demonstrate a principled approach to determining the optimal sample size configuration in our few-shot benchmark. A proper benchmark should produce performance estimates that are accurate (close to the true value) and precise (low variance). A large (test) sample size can achieve this, yet must be considered alongside computational cost so that a broad community of researchers with differing amounts of compute resources can participate. This decision is further complicated in the few-shot setting, where sample size refers to both the number of test episodes |Etest| and the number of test examples |D^E_test| per episode E ∈ Etest. For practicality, we consider |Dtest|, the mean |D^E_test| across all episodes, rather than every |D^E_test|. It remains unknown how one should best distribute test examples between |Etest| and |Dtest|: more episodes each with fewer examples, or fewer episodes each with many examples? Prior work has been inconsistent in this regard. For example, Gao et al. [24] used |Etest| = 5 and large |Dtest|, while Bao et al. [5] used |Etest| = 1000 and much smaller |Dtest|.

Inspired by simulation techniques for informing statistically-powered experimental design [13], we study how different configurations of |Etest| and |Dtest| across different compute budgets C impact the accuracy and precision of our estimated CIs, specifically with respect to coverage probability [53] and width. First, we estimate per-episode and per-test-example costs of our few-shot model (§6) to obtain valid (C, |Etest|, |Dtest|) configurations s.t. the full benchmark completes within a given budget C (GPU-hours).16 Then, for each (C, |Etest|, |Dtest|), we perform 1000 simulation runs, in which each run samples predictions under a true model accuracy µacc and computes a single 95% CI, its width, and whether it correctly covers µacc. Averaging over simulation runs gives us estimates for the coverage probability and width of our benchmark’s CI for a single (C, |Etest|, |Dtest|). We repeat this whole procedure for different µacc ∈ {0.3, 0.35, . . . , 0.95} to cover a wide range of possible model performances observed across many datasets (see Table 3).

11 We limit to 10 classes to avoid undue burden on in-context approaches that fit examples in memory [10], and use a lower bound of 5 classes to match prior work [5].
12 These samples represent an unbiased performance estimate, but do not eliminate underlying dataset biases.
13 Apache License 2.0. Full license details for all software dependencies available in Appendix F.
14 https://leaderboard.allenai.org/flex/
15 https://leaderboard.allenai.org/flex_meta/
Figure 1 shows CI coverage probability and width for many (C, |Etest|, |Dtest|) configurations. First, we find in Figure 1a that sufficiently many test episodes (i.e., |Etest| > 60) are needed to guarantee that the coverage probability of our CIs is within one percentage point of the target 95%, a trend that holds regardless of compute budget. Small |Etest| also corresponds to large CI widths across all considered budgets in Figure 1b. This suggests that the choices of |Etest| = 1, 5, 10 in prior work [4, 24, 56, 75] can mean inaccurate and wide CIs, while choices of |Etest| = 1000 [5] can be prohibitively costly for methods with high training cost.
Next, Figure 1b reveals (i) diminishing returns in CI width (decrease in y-axis) as compute increases, and (ii) existence of an optimal balance between |Etest| and |Dtest| for each budget. Restricting our consideration to budgets with optima satisfying sufficient coverage probability (|Etest| > 60), the minimum viable budget is 36 GPU-hours. Then, assessing the marginal benefit of each 12 GPU-hour budget increase in terms of marginal reduction in CI width between optima, we arrive at our FLEX configuration of |Etest| = 90 and |Dtest| ≈ 470 under a budget of C = 48 GPU-hours.17 Further details are in Appendix B.

16 Costs estimated using a Quadro RTX-8000 GPU with 48Gb memory. For few-shot settings, model was trained with 300 steps. Per-episode and per-test-example costs were approx. 95–98 and 0.7–0.11 GPU-sec, respectively. Using a model with high per-episode cost for this analysis allows us to define a lower-bound sample size requirement; we can always test inexpensive or zero-shot models on more |Etest| or Dtest within budget.
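A minimal version of the coverage simulation described above is sketched below (our own illustration: it assumes each test prediction is correct independently with probability µacc and uses a normal-approximation CI over per-episode accuracies; the swept configurations are illustrative, and the actual analysis additionally accounts for estimated GPU costs):

```python
import numpy as np

def simulate_ci(n_episodes, n_test_per_episode, true_acc, n_runs=1000, seed=0):
    """Estimate coverage probability and mean width of a 95% CI on mean episode accuracy."""
    rng = np.random.default_rng(seed)
    covered, widths = 0, []
    for _ in range(n_runs):
        correct = rng.binomial(n_test_per_episode, true_acc, size=n_episodes)
        acc = correct / n_test_per_episode                    # per-episode accuracies
        half = 1.96 * acc.std(ddof=1) / np.sqrt(n_episodes)   # normal-approximation 95% CI
        lo, hi = acc.mean() - half, acc.mean() + half
        covered += int(lo <= true_acc <= hi)
        widths.append(2 * half)
    return covered / n_runs, float(np.mean(widths))

# Illustrative (|Etest|, |Dtest|) configurations at a fixed total number of test examples.
for n_episodes, n_test in [(10, 4230), (90, 470), (1000, 42)]:
    cov, width = simulate_ci(n_episodes, n_test, true_acc=0.6)
    print(n_episodes, n_test, round(cov, 3), round(width, 3))
```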
6 UniFew: A Few-Shot Learning Model by Unifying Pre-training and Downstream Task Formats
Despite their encouraging results, existing works on few-shot learning in NLP are based on either customized and often complex meta-learning algorithms [3, 4, 5, 60], heavy manual/automated engineering of textual descriptions or prompts [24, 55, 59, 78], ordering of training examples [44, 56], extensive hyperparameter tuning on held-out sets [24, 44, 55], or custom learning algorithms [55, 65]. We present UniFew, a strong few-shot learning model across all transfer settings and datasets tested, that eschews the need for incorporating the above-mentioned complexities and challenges.
UniFew is a prompt-based model [56], a class of models that tailor the input/output format of their data to match the format used during pretraining. While this technique allows them to perform a task without the need for additional classification layers, prompt-based models are typically sensitive to the choice of the prompts, which can require extensive search, trial-and-error, and even additional models to get right [24, 78]. To avoid this issue while still leveraging the strong capabilities of pretrained models, UniFew (1) converts examples into multiple-choice question-answer (QA) format, and (2) uses UnifiedQA [34], a T5 [51] model further pretrained on a large collection of QA pairs.18,19
Compared to other prompt-based models, UniFew has two main strengths. First, the prompt design problem is much simpler because UnifiedQA questions had well-defined formats. For example, we only need four general prompt templates which cover all 20 datasets in the FLEX benchmark, while prior works have needed specialized prompts for each dataset. Second, UnifiedQA’s multiple-choice format ensures the model outputs a valid class label, without the need for learned or manually-defined mappings or verbalizers required for other prompt-based methods [24, 55].20 In concurrent work, Zhong et al. [80] also show the benefit of performing meta-tuning on a variety of datasets; while their task setup as Q/A is similar to UniFew, they focus exclusively on binary zero-shot classification tasks and, unlike UniFew, do not handle multi-class or few-shot problems.
We experiment with UniFew both without and with meta-training on the FLEX benchmark’s meta-training data, following the FLEX protocol (§4.2). We call the meta-trained variant UniFewmeta. We use simple prompts in the format of a question followed by choices followed by the answer (according to the original UnifiedQA format). The exact prompts used are provided in Appendix C.
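For illustration, converting a classification example into this multiple-choice QA format might look like the sketch below (the template and example are our own illustration; the exact prompts are listed in Appendix C):

```python
import string

def to_multiple_choice_prompt(text: str, label_names: list, question: str) -> str:
    """Render a classification example as a UnifiedQA-style multiple-choice question:
    the question, then lettered answer choices (the textual labels), then the input text."""
    letters = string.ascii_uppercase
    choices = " ".join(f"({letters[i]}) {name}" for i, name in enumerate(label_names))
    return f"{question}\n{choices}\n{text}"

prompt = to_multiple_choice_prompt(
    text="The movie was a complete waste of two hours.",
    label_names=["positive", "negative"],
    question="What is the sentiment of the text?",
)
print(prompt)
# The target output is simply the textual label (e.g., "negative"); a generated answer that
# does not exactly match one of the choices can be normalized to the closest valid label.
```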
Training details For meta-training and meta-validation of UniFew, we sampled Etrain and Eval with 5-class, 5-training-shot sampling with the same number of shots per class.21 We trained the model for a total of 30K steps, using a linear learning rate scheduler with a peak rate of 3e−5, 200 warmup steps, and batch size of 4; we selected the best checkpoint based on Eval performance. At meta-test time, for each episode, we trained the model on the episode’s training examples (if they exist) and predicted the outputs on test examples. For training at meta-test time, we used a constant learning rate of 3e−5 and batch size of 4, and trained the model for 400 steps.22 We used NVidia RTX8000 GPUs, which take about 7 GPU-hours for meta-training and 48 GPU-hours for meta-testing. For meta-testing we split the episodes among 8 GPUs to speed up evaluations.
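For reference, these hyperparameters can be summarized in a single configuration sketch (the dictionary structure is our own; the values are those reported above):

```python
# Hyperparameters reported in the text, gathered for reference (illustrative structure only).
unifew_config = {
    "meta_training": {
        "episodes": "5-way, 5-shot (equal shots per class)",
        "total_steps": 30_000,
        "lr_schedule": "linear",
        "peak_lr": 3e-5,
        "warmup_steps": 200,
        "batch_size": 4,
        "model_selection": "best checkpoint on meta-validation performance",
    },
    "meta_testing": {
        "lr_schedule": "constant",
        "lr": 3e-5,
        "batch_size": 4,
        "steps": 400,  # 600 when comparing against LM-BFF [24]
    },
}
```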
7 Experiments
Comparing UniFew with prior work To demonstrate the efficacy of UniFew, we evaluate it against state-of-the-art approaches for few-shot and meta-learning in NLP: LM-BFF [24], a language model prompt-based fine-tuning method, as well as Distributional Signatures (DS) [5] and H-SMLMT [4], two state-of-the-art meta-learning techniques. Refer to Appendix D for details on these methods.

17 Consider budget increases 36→ 48, 48→ 60, 60→ 72 and 72→ 80. The first reduces CI width by 13%. Further increases reduce CI width by an additional 9%, 7%, and 5%, respectively. We choose C = 48 based on these diminishing returns.
18 UnifiedQA and T5 both use Apache License 2.0. We use publicly-released large-size model weights.
19 None of the supervised datasets in the pretraining of UnifiedQA or T5 are in FLEX.
20 In rare cases, especially for zero-shot, UnifiedQA may generate an invalid answer (e.g., “Yes, Yes, No” instead of “Yes”). We use simple heuristics to normalize the answer in such cases.
21 Users of FLEX can specify the sampling configuration of Etrain and Eval as desired.
22 For comparison with [24] we trained the model for 600 steps.

[Table 2: comparison of UniFew and UniFewmeta with H-SMLMT, DS, and LM-BFF.]
We compare to these methods using the datasets in the FLEX benchmark to establish the quality of our model. Since we constructed our benchmark from disjoint subsets of datasets evaluated in each of these prior works (§4.1), we compare each method with its corresponding subset of datasets. Each of these prior works evaluates their methods using different experimental setups (classes, number of episodes, shots) than our benchmark and was not designed to handle FLEX’s challenging episode characteristics like class imbalance. To enable fair comparison, we test UniFew on the exact data splits released by the authors when available (H-SMLMT and LM-BFF). For DS, we sample (balanced) episodes using our framework after matching their test settings (number of shots and classes, class splits, etc.) and reproduce their reported results to within 1% absolute difference using their model code; we use these episodes for our experiments. The results in Table 2 show that UniFewmeta outperforms both H-SMLMT and DS meta-learning approaches by relatively large margins, while achieving competitive results compared with LM-BFF. Note that UniFew’s strong results are without meta-learning approaches, extensive prompt-engineering, or per-episode hyperparameter search.
Evaluating UniFew on the FLEX benchmark Having established UniFew as a strong model comparable to recent, state-of-the-art techniques, we present its results on the final version of our benchmark (with class imbalance, etc.). From Table 3, we observe three findings. First, pretraining is an effective technique for infusing an NLP model with the ability to perform few-shot generalization even without any meta-training, as UniFew is able to score Δfew = +12.8 higher when provided few rather than zero examples. Second, by comparing UniFewmeta and UniFew, we see that meta-training has a substantial impact on zero-shot performance (Δmeta = +14.5), but its benefit, while still substantial, is less in the few-shot setting (Δmeta = +8.6). Third, while meta-training adds roughly the same benefit to zero- and few-shot performance for both domain and task transfer settings, meta-training disproportionately benefits zero-shot class transfer (Δmeta = +16.2) over few-shot class transfer (Δmeta = +4.3). Such observations are made possible through unified evaluation and comparison across different transfer types. The full FLEX benchmark results broken down by individual datasets are in Appendix E.

23 Gao et al. [24]’s automatic prompt search and in-context learning are not available in the zero-shot setting, so they instead use manually-designed prompts.
24 Zero-shot results from Gao et al. [24] are on the entire test set, so there is no reported standard deviation.
25 16/16 denotes 16 shots for training plus 16 more for validation, which we only use for early stopping while Gao et al. [24] use for grid-search hyperparameter tuning.
8 Limitations and Future Work
While the initial FLEX benchmark is focused on classification tasks, we aim to use our benchmark creation toolkit (§4.4) to incorporate additional task formats like span selection or text generation. Furthermore, the benchmark currently only supports English language tasks; to study language transfer, we aim to incorporate new datasets using our toolkit. Adding diverse datasets has its own challenges; while we’ve selected datasets for our benchmark based on prior work adoption and have attempted to verify their licensing for research use, we were unable to find license details for some datasets (Appendix A). We believe it is crucial to continually evolve the suite of datasets to remain challenging for the best models [36] and to tackle real-world challenges [1].
In addition, Sample Size Design (§5) simulations currently rely on our own available training estimates. We plan to gather a more representative sample from community leaderboard submissions.
Our public leaderboard could benefit from extended support for detailed comparisons between submissions based on properties of techniques. For example, approaches may vary in terms of model characteristics (e.g., number of parameters), data and supervision used during pretraining, amount of compute, etc. We encourage reporting all these factors to enable the community to analyze and make progress on important sub-spaces in the overall few-shot technique design space.
Finally, we believe the benefits of improving few-shot NLP techniques outweigh potential risks, but we acknowledge potential harms associated with language models [7, 14, 57, 63]. Few-shot models learn a task from a few examples but rely heavily on knowledge encoded in the pretrained model. Thus, few-shot models are more likely to inherit the biases of the pretrained models, compared to more fully supervised models; as the community focuses more on few-shot learning, it is more important than ever for future pretrained models to be careful about biases in the underlying pretraining corpora.
9 Conclusion
In this work, we unify and bring rigor to few-shot NLP evaluation. We formulate the FLEX Principles, a set of requirements and best practices that enables unified, rigorous, valid, and cost-sensitive measurement. We advance the principles with new Sample Size Design methodology for optimizing statistical accuracy and precision while keeping costs low. The FLEX benchmark is our instantiation of the FLEX Principles; it employs Sample Size Design and includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard with diverse NLP tasks. We present UniFew, a prompt-based model that aligns pretraining and downstream task formats, achieving results competitive with recent few-shot methods despite using trivial prompt engineering. Finally, we release an extensible, open-source toolkit (used to train UniFew and generate the FLEX benchmark) to support future benchmark creation and few-shot NLP model training.
Acknowledgments and Disclosure of Funding
We would like to thank Chandra Bhagavatula, Matt Gardner, Matt Peters, Doug Downey, Dan Weld, and the four anonymous reviewers for helpful comments, suggestions and feedback. We would also like to acknowledge the large community effort involved in the creation of the datasets and open-source tools we utilize.

1. What is the focus and contribution of the paper regarding few-shot learning?
2. What are the strengths of the proposed benchmark, particularly in its coverage and evaluation?
3. What are the weaknesses of the paper, especially regarding its limitations in task types and language?
4. How does the reviewer suggest improving the benchmark, and what additional information would they like to see included?
5. Are there any minor comments or suggestions for improvement in the review?

Summary Of The Paper
The authors introduce FLEX, a new benchmark for evaluating few-shot learning. It's focused on classification tasks in NLP. The features of this new benchmark are:
evaluation of 4 different types of transfer
textual labels for zero-shot classification
experimentally determined sample size
In addition, they propose a new strong baseline called UniFew, which is a prompt-based model built on top of UnifiedQA. They show that this method is competitive with other few-shot systems. Finally, they release the code and framework for further extension.
Review
Overall, this work is going to be useful for the few-shot learning community.
Positives:
Coverage across 20 different classification tasks
Evaluation of different transfer types like class transfer, domain transfer, etc.
Incorporating learnings from other fields like computer vision.
Proposal of a strong baseline which is evaluated on the benchmark and compared against other SOTA techniques.
Negatives:
The benchmark only includes classification tasks, and only in English. More content on how this can be expanded to multilingual settings and other task types would be useful.
More details on how the benchmark will look and how it's been designed would have been helpful. Screenshots of the baselines and the metrics reported would be good.
Questions for the authors:
Can existing benchmarks like GLUE or SuperGLUE be converted to few-shot benchmarks by creating few-shot versions of them? Wouldn't that be better since we will know both few-shot and fully supervised performance on the same set of tasks? It's a bit suboptimal for everyone to create a different set of tasks as a benchmark.
Minor comments:
Include references about multilingual benchmarks: XTREME, XGLUE, XTREME-R
Number of shots and classes: the numbers chosen here are a bit arbitrary and more concrete experiments could have been better.
The list of tasks is in the Appendix. This is important and should have been included in the main paper.
NIPS | Title
FLEX: Unifying Evaluation for Few-Shot NLP
Abstract
Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark,2 which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew,3 a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
1 Introduction
Few-shot learning, the challenge of learning from a small number of examples, is critical for developing efficient, robust NLP techniques [71, 76]. In recent years, separate threads of few-shot NLP research have pursued goals like generalization to new classes [e.g., 5, 25], adaptation to new domains and tasks [e.g., 3, 4, 21], and direct application of pretrained language models (LMs) [e.g., 10, 24, 55, 56]. Unfortunately, despite the shared goal of advancing few-shot NLP techniques, the community does not know which techniques work best or even if they perform better than simple baselines. Evaluation suites across these research threads are disjoint, lack challenging-yet-realistic testing setups (e.g., class imbalance, variable training set sizes, etc.), and do not employ careful experimental design to ensure accurate and precise evaluation estimates and minimal computational burden. Prior work in few-shot learning outside of NLP serves as a stark warning of the consequences of improper measurement: Dhillon et al. [19] showed that techniques from several years of prior work did not make clear progress due to large overlapping accuracy distributions and, moreover, do not outperform a simple, carefully-tuned baseline.
Need for systematic benchmark design As such, a high-quality benchmark is urgently needed to enable rigorous comparison of techniques across disjoint, highly-active threads of few-shot NLP research. But what should such an evaluation suite look like? Some best practices for evaluation of few-shot methods have been introduced in the computer vision (CV) literature [19, 67] and should
∗Equal contribution 2Benchmark, leaderboard, and benchmark creation toolkit: https://github.com/allenai/flex.
Apache License 2.0 3Few-shot model: https://github.com/allenai/unifew. Apache License 2.0
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
be applied to NLP. However, unifying few-shot NLP work introduces new challenges. For example, the benchmark needs to test all types of transfer studied in separate research threads to measure progress on new techniques that make gains in each of these important generalization settings (§2). Also, given the importance of zero-shot learning and learning from task descriptions [29, 73], the benchmark needs to include zero-shot episodes and textual labels to enable measuring progress for models that do not use conventional supervised training, including methods that leverage the latent knowledge in pretrained LMs [10, 24, 78]. Further, the benchmark must accommodate new, computationally-expensive approaches, without overly reducing the number of evaluation episodes at the expense of statistical accuracy [3, 24, 75].
Need for a robust few-shot model Recent prompt-based models [10] have shown strong results in few-shot learning. These models leverage the power of (often large) pretrained language models and adapt the format of downstream tasks to the underlying pretraining objective (e.g., Masked Language Modeling). This way, given the right natural language prompt (and sometimes verbalizers [55] and additional demonstrative examples), the model can quickly fine-tune on the downstream task [24, 43, 44, 55, 56]. However, adapting task formats to the underlying (masked) language modeling objectives is not straightforward; such models have been shown to be sensitive to varying choices of the prompt/demonstrations, training settings, hyperparameters, and learning algorithms [33, 50, 78], often requiring large held out sets and/or complex methods to overcomes such challenges. Can models eschew complex prompt engineering by unifying pretraining and downstream task formats?
In this paper, we tackle these key issues by introducing FLEX—Few-shot Language Evaluation across (X) many transfer types—and contributing the following:
• FLEX Principles (§3), a set of requirements and best practices for few-shot NLP evaluation that enables unified, rigorous, valid, and cost-sensitive measurements.
– Sample Size Design: In support of valid, cost-sensitive measurement, we introduce a novel approach to few-shot sample size design (§5) that optimizes for a benchmark’s statistical accuracy and precision while keeping computational costs accessible to a broad range of researchers.
• FLEX benchmark (§4), an implementation of the FLEX Principles. It tests across four few-shot transfer settings,7 and includes a public leaderboard for few-shot NLP that covers 20 datasets across diverse NLP tasks (e.g., NLI, relation classification, entity typing). Table 1 summarizes key differences between FLEX and other few-shot NLP evaluation suites.
4The total number of training shots in each episode, not number of shots per class per episode. 5Most users use unlabeled examples, though recently, Tam et al. [65] do not. 6Average (avg), confidence interval (CI), standard deviation (SD), individual episode metrics 7Prior work evaluated at most two settings.
• UniFew (§6), a prompt-based model for few-shot learning in NLP. While most existing methods leverage pre-trained LMs for few-shot learning, LM pre-training tasks do not closely match natural downstream task formats, requiring complex methods (e.g., extensive prompt-engineering, use of verbalizers, episodic hyperparameter tuning, custom learning algorithms) to make these models work in few-shot setting. Instead, the key idea of our model, UniFew, is to close the gap between pre-training and fine-tuning formats by posing tasks as multiple-choice QA and using an underlying model that is pre-trained on a similar natural QA task format. This eliminates the need for complexities of adapting downstream tasks to the LM objectives, while resulting in competitive performance with both recent few-shot and meta-learning methods.
To aid similar efforts, our release of FLEX includes a toolkit for benchmark creation and few-shot NLP model development, which we used to create the FLEX benchmark and train UniFew.
2 Background and Related Work
We first provide background and notation for few-shot learning and evaluation, then discuss related work in NLP and outside NLP that motivated us to create the FLEX Principles and benchmark.
Few-shot background and notation Broadly, modern approaches to few-shot learning are evaluated in a three-phase procedure [68]. In the first phase, a general-purpose pretrained model is obtained. In the subsequent “meta-training” phase,8 techniques aim to adapt the model to be well-suited for few-shot generalization. Finally, a “meta-testing” phase evaluates the adapted model in new few-shot prediction settings.
Let D be a dataset of (x, y) examples with full label set YD. From it, we construct three sets of episodes, corresponding to meta-training, meta-validation, and meta-testing and denoted by Etrain, Eval, and Etest, respectively. Each episode in each of these sets is a few-shot problem with its own test Eset and other attributes. Formally, each episode E is a tuple (DE ), where YE is a train,DE test,YDval,DE D sampled subset of labels in YD and DE are disjoint sets of examples from D with labels in train|val|test EYD . 9 For each episode, the model’s objective is to correctly predict labels for examples DEtest. To accomplish this, models make use of labeled examples in DEtrain, which is typically configured such that each label i in YE has KE provided examples; KE is known as the shot, and the setting when aD i i class has no examples in DE = 0) is called zero-shot.train (i.e., KEi
Few-shot evaluation in NLP Research in few-shot NLP has proceeded in several parallel threads, each focused on a different type of transfer ability [76]. Each thread has separate evaluation practices, and the vast majority of few-shot NLP research has limited evaluation to a single transfer type (see Table 1). Here, we describe these types of transfer and their evaluation practices.
Following the CV literature [67, 68], one thread of few-shot NLP focuses on class transfer, the problem of generalizing from a supervised set of classes at meta-train time to a different set of classes from the same dataset at meta-test time. Evaluation typically involves splitting classes YD into YDtrain, YDval and YD disjoint subsets. Class transfer has been studied on many text classification tasks [5],test including relation classification [25, 28, 64], intent classification [37, 64], inter alia. In contrast, domain transfer keeps the same classes between meta-training and meta-testing but changes the textual domain (e.g., generalizing from MNLI to science-focused SciTail [4, 21]). Evaluation then requires identifying pairs of datasets with the same classes YD, where one dataset’s episodes are assigned to Etrain and the other’s to Etest. Domain transfer has also been studied on many tasks [3, 4], including dialogue intent detection & slot tagging [31], sentiment classification [77], NLI [21], and machine translation [27, 58].
Researchers have also begun to study task transfer, the problem of generalizing from a set of tasks at meta-train time to unseen tasks at meta-test time. Evaluation requires tasks (e.g., NLI) appearing in Etest not to appear in Etrain or Eval. Prior work has used GLUE tasks [70] for meta-training before meta-testing on tasks such as entity typing [3, 4], while other work instead used GLUE for
8Meta-training may include a “meta-validation" component, for validating generalization. 9In the few-shot literature, DE test are also called the support and query sets, and |YDE | the way.train and DE
meta-testing [21]. Very recent work has studied task transfer over a large set of datasets [75, 80]. A limited amount of work evaluates both domain and task transfer [3, 4, 21]. An important emerging line of work (not noted by Yin [76]) is pretraining transfer, the problem of whether pretrained language models can perform well at meta-test time without any meta-training. Evaluation in this setting requires Etrain, Eval = ∅. Prior work has shown that pretrained language models are capable of surprising performance on many few-shot tasks, even without fine-tuning [10]. More recent work, mainly focusing on text classification, has reported further gains with cloze-style formats [55, 56, 65], prompt engineering [24], or calibration [78]. FLEX is designed to exercise all four of these transfer types from previous work.
Few-shot evaluation outside NLP The few-shot learning literature has largely focused on image classification, with the introduction of increasingly complex meta-learning algorithms [e.g., 23, 39, 54, 61, 68]. However, more recent work has shown that simple fine-tuning baselines are in fact competitive, and attribute this delayed discovery to problematic evaluation methodology [15, 19]. FLEX adopts recommended methodology [19, 67], and we introduce an analogous baseline (UniFew) to provide a strong measurement foundation for few-shot NLP.
3 FLEX Principles for Few-Shot NLP Evaluation
We now enumerate key desiderata for a few-shot NLP benchmark capable of solving the urgent problems with few-shot NLP evaluation, including separate evaluations for each transfer type and failure to incorporate best measurement practices from other domains (§2).
Diversity of transfer types To make NLP models broadly useful, few-shot NLP techniques must be capable of class, domain, and task transfer. Moreover, techniques should make use of the relevant supervision provided during meta-training to increase performance compared to the pretraining transfer setting. The benchmark should measure all four transfer settings to ensure that the community develops techniques that improve on strong pretraining transfer baselines, and enable comparison across these currently separate threads of research.
Variable number of shots and classes To better simulate a variety of real-world scenarios, the benchmark should include a variety of training set sizes and numbers of classes [67]. Testing robustness to these factors is crucial; few-shot techniques are often sensitive to changes in these factors [12], yet all prior few-shot NLP evaluations we are aware of used a fixed number of training shots and classes, known in advance during meta-training.
Unbalanced training sets The benchmark should also include unbalanced training sets with different training shots per class, another realistic setting adopted by CV benchmarks [67]. Class imbalance has also been observed to degrade performance [11, 47], yet prior few-shot NLP evaluations do not include this setting either.
Textual labels While numerical label values are often used in classification tasks, descriptive textual labels are also present for many tasks. Making these textual labels available for use by few-shot techniques enables the development of techniques that can leverage the class name, like in-context learning [10], template generation [24], and meta-learning [45]. Textual labels are crucial in particular for zero-shot evaluation.
Zero-shot evaluation We believe zero-shot evaluation is integral to the goals of few-shot evaluation. Similar to the motivation for measuring pretraining transfer, zero-shot evaluation is an important use case and also provides a strong baseline for some tasks. In the absence of training examples, textual class labels or richer task descriptions [73] must be provided. Some recent few-shot NLP work [e.g., 10, 24] evaluated with zero training shots, but most [e.g., 3, 5, 75] did not.
No extra meta-testing data We believe the benchmark should not provide validation data (DE =val ∅,∀E ∈ Etest) or unlabeled data for meta-testing tasks, since few-shot learning seeks to enable high performance in environments where collecting additional data is costly.10 Variation in these dimensions in prior NLP work makes comparison of results extremely difficult because it is often under-reported and gives unfair advantage to approaches that leverage such data [50]. For example, per-episode hyperparameter tuning on extra data has been shown to greatly inflate evaluation scores [24]. A few researchers [5, 65] follow our suggested approach, but others have used many
10Unlabeled data collection can be costly too, e.g. due to manual filtering [16].
different settings, from validation sets of various sizes [10, 24, 79] to no validation set but a large set of unlabeled examples [55, 56].
Principled sample size design Promising few-shot techniques can incur significant computational cost per episode, e.g., due to fine-tuning model parameters [4], searching for prompts [24], inter alia. To alleviate these costs, related works often evaluate with a limited number of episodes, which precludes statistically accurate or precise performance estimates. We believe the benchmark’s test sample size should be optimized to enable proper performance evaluation for such techniques, while ensuring the computational burden is inclusive toward researchers without large compute resources.
Proper reporting of CIs, SDs, and individual results The benchmark should report confidence intervals (CIs) of performance estimates and follow recent guidelines [19] to report standard deviations (SDs) for understanding variability. Moreover, we newly advocate for controlling for the same sampled few-shot episodes across all methods and reporting individual episode results, so that researchers can run higher-powered paired statistical tests when comparing results [22], crucial when the benchmark has been optimized for low evaluation budgets.
4 FLEX Benchmark
The FLEX benchmark is a unifying, rigorous evaluation suite for few-shot learning in NLP, which implements the desiderata outlined in the previous section. In this section, we describe detailed design decisions and our accompanying few-shot NLP toolkit (§4.4), which we are releasing to facilitate easily adding NLP datasets and advanced sampling options to future benchmarks. We also describe the FLEX leaderboard (§4.5).
4.1 Task and Dataset Selection
Following GLUE [70] and other prior work [3, 5, 24, 78], we focus on tasks formatted as classification. Despite recent advances, NLP state-of-the-art models remain significantly worse than human performance on many text classification tasks, particularly in the few-shot setting. Automatic scoring of classification tasks is also more reliable than text generation tasks.
We selected datasets across three recent few-shot NLP evaluation suites, which separately studied class transfer [5], domain and task transfer [3, 4], and pretraining transfer [24]. Our benchmark includes a broad mix of tasks (NLI, question classification, entity typing, relation classification, and sentiment analysis) and formats (document, sentence, sentence pair). More complete dataset and license details are available in the following subsection and Appendix A.
4.2 Meta-Evaluation Protocols
As discussed earlier, FLEX evaluates four different types of transfer: Class, Domain, Task, and Pretraining Transfer. To support all types, we report results to the FLEX benchmark both without metatraining (pretraining-only) and with meta-training. This reporting scheme evaluates the performance of the basic pretrained model and the benefit (or lack thereof) of meta-training. A similar reporting scheme was proposed by Triantafillou et al. [67] for CV.
Pretraining-Only In this setting, the pretrained model is directly meta-tested on our benchmark without any additional training. This is the Pretraining Transfer setting, and it is the most difficult, but given the recent success of pretrained models in NLP for few-shot learning [10, 24], we believe that comparison to models without any meta-training is important for NLP tasks.
Meta-Trained In this setting, the model is meta-trained then meta-tested on our benchmark. We carefully selected and split datasets across meta-train/validation/test in order to enable testing of Class, Domain, and Task transfer with a single meta-training phase (to reduce computational burden). Datasets involved in each transfer setting (detailed split information in Table 4 in Appendix A):
• Class Transfer: FewRel [28], HuffPost [46], Amazon [30], 20News [38], and Reuters [41] take part in meta-training and meta-testing but with different classes.
• Domain Transfer: MR [49], CR [32], SNLI [9], and SciTail [35] are only in the meta-testing phase, but the corresponding sentiment and NLI datasets exist in the meta-training phase (MNLI [74], QNLI [52], and SST-2 [62]).
• Task Transfer: Subj [48], TREC [69], and CoNLL [66] are also for meta-testing only, and they represent tasks that the model does not encounter during meta-training.
Instead of per-episode hyperparameter tuning, we provide meta-validation episodes Eval for learning (during meta-training) global hyperparameters that work across all episodes. Specifically, the metavalidation dataset splits (see Table 4) consist of CoLa [72] for task transfer, WNLI [40] for domain transfer, and the validation splits used by Bao et al. [5] for all class transfer datasets. Following [3], we also include meta-training datasets MRPC [20], RTE [6, 8, 17, 26], and QQP [70].
4.3 Episode Sampling
We describe how our benchmark samples meta-testing episodes Etest. For meta-training, we allow users to sample from Etrain and Eval in any way, or directly use the underlying dataset splits.
Number of classes For Class Transfer datasets, FLEX evaluates model robustness to a variable number of new classes. When constructing an episode E from one of these datasets D, the sampler picks an episode-specific number of classes uniformly at random, |Y^E_D| ∼ Unif(5, min(|YD|, 10)).11 For Domain and Task Transfer, the number of classes is fixed to the maximum number of classes in each dataset because Class Transfer is not being evaluated.
Number of shots Following prior work outside NLP [47, 67], our benchmark samples the training shot independently for each episode E and class i, as K^E_i ∼ Unif(Kmin, Kmax), where Kmin = 1. Given the strong performance of NLP models with few or even zero examples [10, 73] and following prior work [5], we set the limit Kmax = 5. Separately, we allocate an equal number of episodes as zero-shot, where we instead set D^E_train = ∅ (equivalently, K^E_i = 0 for all i). In each episode, examples are sampled uniformly at random without replacement (but can be reused across episodes).12 Following Triantafillou et al. [67], we select a testing shot that is balanced across classes and leaves roughly half of the examples for sampling the training examples. The total number of episodes for each reported configuration (pair of dataset and either zero- or few-shot) is set to 90 using Sample Size Design (§5).
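A minimal sketch of this episode-sampling procedure is shown below. It is illustrative only: the helper name and data layout are ours, it assumes each dataset exposes at least five classes, and the released toolkit implements the official sampler (including checksumming and the exact test-shot balancing).

```python
import random

def sample_episode(dataset, class_transfer=True, zero_shot=False, seed=0):
    """Illustrative sampler; `dataset` maps each class label to its list of examples."""
    rng = random.Random(seed)
    classes = sorted(dataset)

    # Number of classes: |Y^E_D| ~ Unif(5, min(|Y_D|, 10)) for Class Transfer, else all classes.
    if class_transfer:
        n_way = rng.randint(5, min(len(classes), 10))
        episode_classes = rng.sample(classes, n_way)
    else:
        episode_classes = classes

    train, test = [], []
    for label in episode_classes:
        pool = rng.sample(dataset[label], len(dataset[label]))   # shuffled copy, no replacement
        split = len(pool) // 2                                   # ~half reserved for training shots
        k = 0 if zero_shot else rng.randint(1, 5)                # K^E_i ~ Unif(1, 5), or 0 in zero-shot episodes
        train += [(x, label) for x in pool[:split][:k]]
        test += [(x, label) for x in pool[split:]]
    return train, test
```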
4.4 Extensible Toolkit for Benchmark Creation and Model Training & Evaluation
Alongside the FLEX benchmark, we release an extensible, highly-configurable Python toolkit, which we used to generate the benchmark, and train and evaluate our models. Unlike existing meta-learning frameworks (e.g., Torchmeta [18], learn2learn [2]), our framework makes available a wide range of community-contributed NLP datasets and utilities via HuggingFace Datasets [42].13 Our code also provides advanced sampling utilities (e.g., for class imbalance), ensures reproducibility by checksumming generated episodes, and reports all recommended statistics.
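As an illustration of the reproducibility mechanism (not the toolkit's actual API), each generated episode can be fingerprinted by hashing a canonical serialization, so any accidental change in sampling or data is detected when the benchmark is regenerated:

```python
import hashlib
import json

def episode_checksum(episode):
    """Hash a canonical JSON serialization of an episode's example ids and labels."""
    payload = json.dumps(episode, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical episode record; store the checksum alongside it and verify at load time.
episode = {"dataset": "TREC", "train": [[12, "LOC"], [57, "NUM"]], "test_ids": [3, 8, 21]}
print(episode_checksum(episode)[:16])
```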
4.5 Public Leaderboard
We provide public leaderboards for each of the meta-evaluation protocols: Pretraining-Only14 and Meta-Trained.15 Submissions take the form of a text label predictions file, which is produced by our toolkit. Results are reported with confidence intervals, standard deviations, and individual predictions on request. See Appendix G for a screenshot of the results interface.
5 Sample Size Design: Balancing Statistical Measurement & Compute Cost
We demonstrate a principled approach to determining the optimal sample size configuration in our few-shot benchmark. A proper benchmark should produce performance estimates that are accurate (close to the true value) and precise (low variance). A large (test) sample size can achieve this, yet must be considered alongside computational cost so that a broad community of researchers with differing amounts of compute resources can participate. This decision is further complicated in the few-shot
11 We limit to 10 classes to avoid undue burden on in-context approaches that fit examples in memory [10], and use a lower bound of 5 classes to match prior work [5].
12 These samples represent an unbiased performance estimate, but do not eliminate underlying dataset biases.
13 Apache License 2.0. Full license details for all software dependencies available in Appendix F.
14 https://leaderboard.allenai.org/flex/
15 https://leaderboard.allenai.org/flex_meta/
setting, where sample size refers to both the number of test episodes |Etest| and the number of test examples |D^E_test| per episode E ∈ Etest. For practicality, we consider |Dtest|, the mean |D^E_test| across all episodes, rather than every |D^E_test|. It remains unknown how one should best distribute test examples between |Etest| and |Dtest|: more episodes each with fewer examples, or fewer episodes each with many examples? Prior work has been inconsistent in this regard. For example, Gao et al. [24] used |Etest| = 5 and large |Dtest|, while Bao et al. [5] used |Etest| = 1000 and much smaller |Dtest|.
Inspired by simulation techniques for informing statistically-powered experimental design [13], we study how different configurations of |Etest| and |Dtest| across different compute budgets C impact the accuracy and precision of our estimated CIs, specifically with respect to coverage probability [53] and width. First, we estimate per-episode and per-test-example costs of our few-shot model (§6) to obtain valid (C, |Etest|, |Dtest|) configurations s.t. the full benchmark completes within a given budget C (GPU-hours).16 Then, for each (C, |Etest|, |Dtest|), we perform 1000 simulation runs, in which each run samples predictions under a true model accuracy µacc and computes a single 95% CI, its width, and whether it correctly covers µacc. Averaging over simulation runs gives us estimates for the coverage probability and width of our benchmark’s CI for a single (C, |Etest|, |Dtest|). We repeat this whole procedure for different µacc ∈ {0.3, 0.35, . . . , 0.95} to cover a wide range of possible model performances observed across many datasets (see Table 3).
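The sketch below illustrates one way such a simulation run can be implemented. It assumes, for illustration only, that per-episode accuracies arise from binomial sampling around the true accuracy µacc and that the CI is a normal-approximation interval over episode means; the exact simulation details are in Appendix B.

```python
import numpy as np

def simulate_ci(mu_acc, n_episodes, n_test_per_episode, n_runs=1000, seed=0):
    """Estimate coverage probability and mean width of a 95% CI on mean episode accuracy."""
    rng = np.random.default_rng(seed)
    covered, widths = 0, []
    for _ in range(n_runs):
        correct = rng.binomial(n_test_per_episode, mu_acc, size=n_episodes)
        acc = correct / n_test_per_episode
        mean, sem = acc.mean(), acc.std(ddof=1) / np.sqrt(n_episodes)
        lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
        covered += int(lo <= mu_acc <= hi)
        widths.append(hi - lo)
    return covered / n_runs, float(np.mean(widths))

# Same total number of test examples, distributed differently between episodes and per-episode tests:
print(simulate_ci(0.6, n_episodes=90, n_test_per_episode=470))
print(simulate_ci(0.6, n_episodes=10, n_test_per_episode=4230))
```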
Figure 1 shows CI coverage probability and width for many (C, |Etest|, |Dtest|) configurations. First, we find in Figure 1a that sufficiently many test episodes (i.e., |Etest| > 60) are needed to guarantee that the coverage probability of our CIs is within one percentage point of the target 95%, a trend that holds regardless of compute budget. Small |Etest| also corresponds to large CI widths across all considered budgets in Figure 1b. This suggests that the choices of |Etest| = 1, 5, 10 in prior work [4, 24, 56, 75] can mean inaccurate and wide CIs, while choices of |Etest| = 1000 [5] can be prohibitively costly for methods with high training cost.
Next, Figure 1b reveals (i) diminishing returns in CI width (decrease in y-axis) as compute increases, and (ii) existence of an optimal balance between |Etest| and |Dtest| for each budget. Restricting our consideration to budgets with optima satisfying sufficient coverage probability (|Etest| > 60), the minimum viable budget is 36 GPU-hours. Then, assessing the marginal benefit of each 12 GPU-hour budget increase in terms of marginal reduction in CI width between optima, we arrive at our FLEX
16Costs estimated using a Quadro RTX-8000 GPU with 48Gb memory. For few-shot settings, model was trained with 300 steps. Per-episode and per-test-example costs were approx. 95–98 and 0.7–0.11 GPU-sec, respectively. Using a model with high per-episode cost for this analysis allows us to define a lower-bound sample size requirement; we can always test inexpensive or zero-shot models on more |Etest| or Dtest within budget.
configuration of |Etest| = 90 and |Dtest| ≈ 470 under a budget of C = 48 GPU-hours.17 Further details are in Appendix B.
6 UniFew: A Few-Shot Learning Model by Unifying Pre-training and Downstream Task Formats
Despite their encouraging results, existing works on few-shot learning in NLP are based on either customized and often complex meta-learning algorithms [3, 4, 5, 60], heavy manual/automated engineering of textual descriptions or prompts [24, 55, 59, 78], ordering of training examples [44, 56], extensive hyperparameter tuning on held-out sets [24, 44, 55], or custom learning algorithms [55, 65]. We present UniFew, a few-shot learning model that is strong across all transfer settings and datasets tested and that eschews the above-mentioned complexities and challenges.
UniFew is a prompt-based model [56], a class of models that tailor the input/output format of their data to match the format used during pretraining. While this technique allows them to perform a task without the need for additional classification layers, prompt-based models are typically sensitive to the choice of the prompts, which can require extensive search, trial-and-error, and even additional models to get right [24, 78]. To avoid this issue while still leveraging the strong capabilities of pretrained models, UniFew (1) converts examples into multiple-choice question-answer (QA) format, and (2) uses UnifiedQA [34], a T5 [51] model further pretrained on a large collection of QA pairs.18,19
Compared to other prompt-based models, UniFew has two main strengths. First, the prompt design problem is much simpler because UnifiedQA questions have a well-defined format. For example, we only need four general prompt templates that cover all 20 datasets in the FLEX benchmark, while prior works have needed specialized prompts for each dataset. Second, UnifiedQA’s multiple-choice format ensures the model outputs a valid class label, without the need for the learned or manually-defined mappings or verbalizers required by other prompt-based methods [24, 55].20 In concurrent work, Zhong et al. [80] also show the benefit of performing meta-tuning on a variety of datasets; while their QA task setup is similar to UniFew’s, they focus exclusively on binary zero-shot classification tasks and, unlike UniFew, do not handle multi-class or few-shot problems.
We experiment with UniFew both without and with meta-training on the FLEX benchmark’s meta-training data, following the FLEX protocol (§4.2). We call the meta-trained variant UniFewmeta. We use simple prompts in the format of question followed by choices followed by the answer (according to the original UnifiedQA format). The exact prompts used are provided in Appendix C.
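For illustration, the snippet below renders a classification example in a UnifiedQA-style multiple-choice prompt and decodes an answer with the publicly released UnifiedQA weights. The template, model identifier, and decoding settings here are our assumptions for the sketch; the exact per-format prompts used by UniFew are listed in Appendix C.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def to_multiple_choice_prompt(question, choices, context=""):
    # UnifiedQA-style input: question, choices, and context joined by literal " \n " separators.
    letters = "abcdefghij"
    choice_str = " ".join(f"({letters[i]}) {c}" for i, c in enumerate(choices))
    return f"{question} \\n {choice_str} \\n {context}".strip()

model_name = "allenai/unifiedqa-t5-large"  # assumed public checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = to_multiple_choice_prompt(
    "Is the sentiment of this review positive or negative?",
    ["positive", "negative"],
    "A charming and often affecting journey.",
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0], skip_special_tokens=True))  # ideally one of the textual choices
```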
Training details For meta-training and meta-validation of UniFew, we sampled Etrain and Eval with 5-class, 5-training-shot sampling with the same number of shots per class.21 We trained the model for a total of 30K steps, using a linear learning rate schedule with a peak rate of 3e−5, 200 warmup steps, and a batch size of 4; we selected the best checkpoint based on Eval performance. At meta-test time, for each episode, we trained the model on the episode’s training examples (if they exist) and predicted the outputs on its test examples. For training at meta-test time, we used a constant learning rate of 3e−5 and a batch size of 4, and trained the model for 400 steps.22 We used NVIDIA RTX 8000 GPUs; meta-training takes about 7 GPU-hours and meta-testing about 48 GPU-hours. For meta-testing, we split the episodes among 8 GPUs to speed up evaluation.
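A hedged sketch of the corresponding optimizer/scheduler configuration, using the standard HuggingFace TrainingArguments, is shown below; it mirrors the hyperparameters stated above but is not the released training script.

```python
from transformers import TrainingArguments

# Meta-training configuration (linear schedule, peak LR 3e-5, 200 warmup steps, batch size 4).
meta_train_args = TrainingArguments(
    output_dir="unifew-meta-train",
    max_steps=30_000,
    learning_rate=3e-5,
    lr_scheduler_type="linear",
    warmup_steps=200,
    per_device_train_batch_size=4,
)

# Per-episode fine-tuning at meta-test time (constant LR, 400 steps).
episode_args = TrainingArguments(
    output_dir="unifew-episode",
    max_steps=400,
    learning_rate=3e-5,
    lr_scheduler_type="constant",
    per_device_train_batch_size=4,
)
```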
7 Experiments
Comparing UniFew with prior work To demonstrate the efficacy of UniFew, we evaluate it against state-of-the-art approaches for few-shot and meta-learning in NLP: LM-BFF [24], a language
17Consider budget increases 36→ 48, 48→ 60, 60→ 72 and 72→ 80. The first reduces CI width by 13%. Further increases reduce CI width by an additional 9%, 7%, and 5%, respectively. We choose C = 48 based on these diminishing returns.
18 UnifiedQA and T5 both use Apache License 2.0. We use publicly-released large-size model weights.
19 None of the supervised datasets in the pretraining of UnifiedQA or T5 are in FLEX.
20 In rare cases, especially for zero-shot, UnifiedQA may generate an invalid answer (e.g., “Yes, Yes, No” instead of “Yes”). We use simple heuristics to normalize the answer in such cases.
21 Users of FLEX can specify the sampling configuration of Etrain and Eval as desired.
22 For comparison with [24] we trained the model for 600 steps.
model prompt-based fine-tuning method, as well as Distributional Signatures (DS) [5] and H-SMLMT [4], two state-of-the-art meta-learning techniques. Refer to Appendix D for details on these methods.
We compare to these methods using the datasets in the FLEX benchmark to establish the quality of our model. Since we constructed our benchmark from disjoint subsets of datasets evaluated in each of these prior works (§4.1), we compare each method with its corresponding subset of datasets. Each of these prior works evaluates their methods using different experimental setups (classes, number of episodes, shots) than our benchmark and was not designed to handle FLEX’s challenging episode characteristics like class imbalance. To enable fair comparison, we test UniFew on the exact data splits released by the authors when available (H-SMLMT and LM-BFF). For DS, we sample (balanced) episodes using our framework after matching their test settings (number of shots and classes, class splits, etc.) and reproduce their reported results to within 1% absolute difference using their model code; we use these episodes for our experiments. The results in Table 2 show that UniFewmeta outperforms both H-SMLMT and DS meta-learning approaches by relatively large margins, while achieving competitive results compared with LM-BFF. Note that UniFew’s strong results are without meta-learning approaches, extensive prompt-engineering, or per-episode hyperparameter search.
Evaluating UniFew on the FLEX benchmark Having established UniFew as a strong model comparable to recent, state-of-the-art techniques, we present its results on the final version of our benchmark (with class imbalance, etc.). From Table 3, we observe three findings. First, pretraining is an effective technique for infusing an NLP model with the ability to perform few-shot generalization even without any meta-training, as UniFew is able to score Δfew = +12.8 higher when provided
23 Gao et al. [24]’s automatic prompt search and in-context learning are not available in the zero-shot setting, so they instead use manually-designed prompts.
24 Zero-shot results from Gao et al. [24] are on the entire test set, so there is no reported standard deviation.
25 16/16 denotes 16 shots for training plus 16 more for validation, which we only use for early stopping while Gao et al. [24] use them for grid-search hyperparameter tuning.
few rather than zero examples. Second, by comparing UniFewmeta and UniFew, we see that meta-training has a substantial impact on zero-shot performance (Δmeta = +14.5), but its benefit, while still substantial, is less in the few-shot setting (Δmeta = +8.6). Third, while meta-training adds roughly the same benefit to zero- and few-shot performance for both domain and task transfer settings, meta-training disproportionately benefits zero-shot class transfer (Δmeta = +16.2) over few-shot class transfer (Δmeta = +4.3). Such observations are made possible through unified evaluation and comparison across different transfer types. The full FLEX benchmark results broken down by individual datasets are in Appendix E.
8 Limitations and Future Work
While the initial FLEX benchmark is focused on classification tasks, we aim to use our benchmark creation toolkit (§4.4) to incorporate additional task formats like span selection or text generation. Furthermore, the benchmark currently only supports English language tasks; to study language transfer, we aim to incorporate new datasets using our toolkit. Adding diverse datasets has its own challenges; while we’ve selected datasets for our benchmark based on prior work adoption and have attempted to verify their licensing for research use, we were unable to find license details for some datasets (Appendix A). We believe it is crucial to continually evolve the suite of datasets to remain challenging for the best models [36] and to tackle real-world challenges [1].
In addition, Sample Size Design (§5) simulations currently rely on our own available training estimates. We plan to gather a more representative sample from community leaderboard submissions.
Our public leaderboard could benefit from extended support for detailed comparisons between submissions based on properties of techniques. For example, approaches may vary in terms of model characteristics (e.g., number of parameters), data and supervision used during pretraining, amount of compute, etc. We encourage reporting all these factors to enable the community to analyze and make progress on important sub-spaces in the overall few-shot technique design space.
Finally, we believe the benefits of improving few-shot NLP techniques outweigh potential risks, but we acknowledge potential harms associated with language models [7, 14, 57, 63]. Few-shot models learn a task from a few examples but rely heavily on knowledge encoded in the pretrained model. Thus, few-shot models are more likely to inherit the biases of the pretrained models, compared to more fully supervised models; as the community focuses more on few-shot learning, it is more important than ever for future pretrained models to be careful about biases in the underlying pretraining corpora.
9 Conclusion
In this work, we unify and bring rigor to few-shot NLP evaluation. We formulate the FLEX Principles, a set of requirements and best practices that enables unified, rigorous, valid, and cost-sensitive measurement. We advance the principles with new Sample Size Design methodology for optimizing statistical accuracy and precision while keeping costs low. The FLEX benchmark is our instantiation of the FLEX Principles; it employs Sample Size Design and includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard with diverse NLP tasks. We present UniFew, a promptbased model that aligns pretraining and downstream task formats, achieving results competitive with recent few-shot methods despite using trivial prompt engineering. Finally, we release an extensible, open-source toolkit (used to train UniFew and generate the FLEX benchmark) to support future benchmark creation and few-shot NLP model training.
Acknowledgments and Disclosure of Funding
We would like to thank Chandra Bhagavatula, Matt Gardner, Matt Peters, Doug Downey, Dan Weld, and the four anonymous reviewers for helpful comments, suggestions and feedback. We would also like to acknowledge the large community effort involved in the creation of the datasets and open-source tools we utilize. | 1. What is the focus and contribution of the paper on few-shot NLP?
2. What are the strengths of the proposed FLEET benchmark, particularly in addressing issues in previous benchmarks?
3. What are the weaknesses and limitations of the paper, especially regarding the number of entity typing datasets included?
4. How does the reviewer assess the effectiveness and simplicity of the prompt-based learning baseline provided by the authors? | Summary Of The Paper
Review | Summary Of The Paper
The authors of this paper present a few-shot NLP benchmark called FLEET. FLEET consists of a group of NLP datasets and evaluates four transfer settings: class transfer, domain transfer, task transfer, and pre-train transfer. The authors also provide a simple yet effective prompt-based learning baseline along with the FLEET benchmark.
Review
Strength:
A new few-shot benchmark with a variety of datasets and tasks. It provides separate evaluations for each transfer type and should be able to provide a comprehensive evaluation proxy for few-shot NLP frameworks.
Addresses several issues existing in previous few-shot NLP benchmarks, e.g. class imbalance and inconsistent problem settings.
A simple prompt-based learning framework with a decent performance on FLEET and other test sets that were used in previous work.
Weakness & Questions:
Currently, FLEET only contains one entity typing dataset (CoNLL). This dataset is only used for task & pre-train transfer. One could a) include more entity typing datasets and b) offer class transfer or domain transfer on the entity typing task. |
NIPS | Title
FLEX: Unifying Evaluation for Few-Shot NLP
Abstract
Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark,2 which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew,3 a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
1 Introduction
Few-shot learning, the challenge of learning from a small number of examples, is critical for developing efficient, robust NLP techniques [71, 76]. In recent years, separate threads of few-shot NLP research have pursued goals like generalization to new classes [e.g., 5, 25], adaptation to new domains and tasks [e.g., 3, 4, 21], and direct application of pretrained language models (LMs) [e.g., 10, 24, 55, 56]. Unfortunately, despite the shared goal of advancing few-shot NLP techniques, the community does not know which techniques work best or even if they perform better than simple baselines. Evaluation suites across these research threads are disjoint, lack challenging-yet-realistic testing setups (e.g., class imbalance, variable training set sizes, etc.), and do not employ careful experimental design to ensure accurate and precise evaluation estimates and minimal computational burden. Prior work in few-shot learning outside of NLP serves as a stark warning of the consequences of improper measurement: Dhillon et al. [19] showed that techniques from several years of prior work did not make clear progress due to large overlapping accuracy distributions and, moreover, do not outperform a simple, carefully-tuned baseline.
Need for systematic benchmark design As such, a high-quality benchmark is urgently needed to enable rigorous comparison of techniques across disjoint, highly-active threads of few-shot NLP research. But what should such an evaluation suite look like? Some best practices for evaluation of few-shot methods have been introduced in the computer vision (CV) literature [19, 67] and should
∗ Equal contribution
2 Benchmark, leaderboard, and benchmark creation toolkit: https://github.com/allenai/flex. Apache License 2.0
3 Few-shot model: https://github.com/allenai/unifew. Apache License 2.0
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
be applied to NLP. However, unifying few-shot NLP work introduces new challenges. For example, the benchmark needs to test all types of transfer studied in separate research threads to measure progress on new techniques that make gains in each of these important generalization settings (§2). Also, given the importance of zero-shot learning and learning from task descriptions [29, 73], the benchmark needs to include zero-shot episodes and textual labels to enable measuring progress for models that do not use conventional supervised training, including methods that leverage the latent knowledge in pretrained LMs [10, 24, 78]. Further, the benchmark must accommodate new, computationally-expensive approaches, without overly reducing the number of evaluation episodes at the expense of statistical accuracy [3, 24, 75].
Need for a robust few-shot model Recent prompt-based models [10] have shown strong results in few-shot learning. These models leverage the power of (often large) pretrained language models and adapt the format of downstream tasks to the underlying pretraining objective (e.g., Masked Language Modeling). This way, given the right natural language prompt (and sometimes verbalizers [55] and additional demonstrative examples), the model can quickly fine-tune on the downstream task [24, 43, 44, 55, 56]. However, adapting task formats to the underlying (masked) language modeling objectives is not straightforward; such models have been shown to be sensitive to varying choices of the prompt/demonstrations, training settings, hyperparameters, and learning algorithms [33, 50, 78], often requiring large held out sets and/or complex methods to overcomes such challenges. Can models eschew complex prompt engineering by unifying pretraining and downstream task formats?
In this paper, we tackle these key issues by introducing FLEX—Few-shot Language Evaluation across (X) many transfer types—and contributing the following:
• FLEX Principles (§3), a set of requirements and best practices for few-shot NLP evaluation that enables unified, rigorous, valid, and cost-sensitive measurements.
– Sample Size Design: In support of valid, cost-sensitive measurement, we introduce a novel approach to few-shot sample size design (§5) that optimizes for a benchmark’s statistical accuracy and precision while keeping computational costs accessible to a broad range of researchers.
• FLEX benchmark (§4), an implementation of the FLEX Principles. It tests across four few-shot transfer settings,7 and includes a public leaderboard for few-shot NLP that covers 20 datasets across diverse NLP tasks (e.g., NLI, relation classification, entity typing). Table 1 summarizes key differences between FLEX and other few-shot NLP evaluation suites.
4 The total number of training shots in each episode, not the number of shots per class per episode.
5 Most users use unlabeled examples, though recently, Tam et al. [65] do not.
6 Average (avg), confidence interval (CI), standard deviation (SD), individual episode metrics.
7 Prior work evaluated at most two settings.
• UniFew (§6), a prompt-based model for few-shot learning in NLP. While most existing methods leverage pre-trained LMs for few-shot learning, LM pre-training tasks do not closely match natural downstream task formats, requiring complex methods (e.g., extensive prompt-engineering, use of verbalizers, episodic hyperparameter tuning, custom learning algorithms) to make these models work in few-shot setting. Instead, the key idea of our model, UniFew, is to close the gap between pre-training and fine-tuning formats by posing tasks as multiple-choice QA and using an underlying model that is pre-trained on a similar natural QA task format. This eliminates the need for complexities of adapting downstream tasks to the LM objectives, while resulting in competitive performance with both recent few-shot and meta-learning methods.
To aid similar efforts, our release of FLEX includes a toolkit for benchmark creation and few-shot NLP model development, which we used to create the FLEX benchmark and train UniFew.
2 Background and Related Work
We first provide background and notation for few-shot learning and evaluation, then discuss related work in NLP and outside NLP that motivated us to create the FLEX Principles and benchmark.
Few-shot background and notation Broadly, modern approaches to few-shot learning are evaluated in a three-phase procedure [68]. In the first phase, a general-purpose pretrained model is obtained. In the subsequent “meta-training” phase,8 techniques aim to adapt the model to be well-suited for few-shot generalization. Finally, a “meta-testing” phase evaluates the adapted model in new few-shot prediction settings.
Let D be a dataset of (x, y) examples with full label set Y_D. From it, we construct three sets of episodes, corresponding to meta-training, meta-validation, and meta-testing and denoted by Etrain, Eval, and Etest, respectively. Each episode in each of these sets is a few-shot problem with its own test set and other attributes. Formally, each episode E is a tuple (D^E_train, D^E_val, D^E_test, Y^E_D), where Y^E_D is a sampled subset of labels in Y_D and D^E_train|val|test are disjoint sets of examples from D with labels in Y^E_D.9 For each episode, the model’s objective is to correctly predict labels for the examples in D^E_test. To accomplish this, models make use of the labeled examples in D^E_train, which is typically configured such that each label i in Y^E_D has K^E_i provided examples; K^E_i is known as the shot, and the setting where a class has no examples in D^E_train (i.e., K^E_i = 0) is called zero-shot.
Few-shot evaluation in NLP Research in few-shot NLP has proceeded in several parallel threads, each focused on a different type of transfer ability [76]. Each thread has separate evaluation practices, and the vast majority of few-shot NLP research has limited evaluation to a single transfer type (see Table 1). Here, we describe these types of transfer and their evaluation practices.
Following the CV literature [67, 68], one thread of few-shot NLP focuses on class transfer, the problem of generalizing from a supervised set of classes at meta-train time to a different set of classes from the same dataset at meta-test time. Evaluation typically involves splitting the classes Y_D into disjoint subsets Y_D^train, Y_D^val, and Y_D^test. Class transfer has been studied on many text classification tasks [5], including relation classification [25, 28, 64], intent classification [37, 64], inter alia. In contrast, domain transfer keeps the same classes between meta-training and meta-testing but changes the textual domain (e.g., generalizing from MNLI to science-focused SciTail [4, 21]). Evaluation then requires identifying pairs of datasets with the same classes Y_D, where one dataset’s episodes are assigned to Etrain and the other’s to Etest. Domain transfer has also been studied on many tasks [3, 4], including dialogue intent detection & slot tagging [31], sentiment classification [77], NLI [21], and machine translation [27, 58].
Researchers have also begun to study task transfer, the problem of generalizing from a set of tasks at meta-train time to unseen tasks at meta-test time. Evaluation requires tasks (e.g., NLI) appearing in Etest not to appear in Etrain or Eval. Prior work has used GLUE tasks [70] for meta-training before meta-testing on tasks such as entity typing [3, 4], while other work instead used GLUE for
8 Meta-training may include a “meta-validation” component, for validating generalization.
9 In the few-shot literature, D^E_train and D^E_test are also called the support and query sets, and |Y^E_D| the way.
meta-testing [21]. Very recent work has studied task transfer over a large set of datasets [75, 80]. A limited amount of work evaluates both domain and task transfer [3, 4, 21]. An important emerging line of work (not noted by Yin [76]) is pretraining transfer, the problem of whether pretrained language models can perform well at meta-test time without any meta-training. Evaluation in this setting requires Etrain, Eval = ∅. Prior work has shown that pretrained language models are capable of surprising performance on many few-shot tasks, even without fine-tuning [10]. More recent work, mainly focusing on text classification, has reported further gains with cloze-style formats [55, 56, 65], prompt engineering [24], or calibration [78]. FLEX is designed to exercise all four of these transfer types from previous work.
Few-shot evaluation outside NLP The few-shot learning literature has largely focused on image classification, with the introduction of increasingly complex meta-learning algorithms [e.g., 23, 39, 54, 61, 68]. However, more recent work has shown that simple fine-tuning baselines are in fact competitive, and attribute this delayed discovery to problematic evaluation methodology [15, 19]. FLEX adopts recommended methodology [19, 67], and we introduce an analogous baseline (UniFew) to provide a strong measurement foundation for few-shot NLP.
3 FLEX Principles for Few-Shot NLP Evaluation
We now enumerate key desiderata for a few-shot NLP benchmark capable of solving the urgent problems with few-shot NLP evaluation, including separate evaluations for each transfer type and failure to incorporate best measurement practices from other domains (§2).
Diversity of transfer types To make NLP models broadly useful, few-shot NLP techniques must be capable of class, domain, and task transfer. Moreover, techniques should make use of the relevant supervision provided during meta-training to increase performance compared to the pretraining transfer setting. The benchmark should measure all four transfer settings to ensure that the community develops techniques that improve on strong pretraining transfer baselines, and enable comparison across these currently separate threads of research.
Variable number of shots and classes To better simulate a variety of real-world scenarios, the benchmark should include a variety of training set sizes and numbers of classes [67]. Testing robustness to these factors is crucial; few-shot techniques are often sensitive to changes in these factors [12], yet all prior few-shot NLP evaluations we are aware of used a fixed number of training shots and classes, known in advance during meta-training.
Unbalanced training sets The benchmark should also include unbalanced training sets with different training shots per class, another realistic setting adopted by CV benchmarks [67]. Class imbalance has also been observed to degrade performance [11, 47], yet prior few-shot NLP evaluations do not include this setting either.
Textual labels While numerical label values are often used in classification tasks, descriptive textual labels are also present for many tasks. Making these textual labels available for use by few-shot techniques enables the development of techniques that can leverage the class name, like in-context learning [10], template generation [24], and meta-learning [45]. Textual labels are crucial in particular for zero-shot evaluation.
Zero-shot evaluation We believe zero-shot evaluation is integral to the goals of few-shot evaluation. Similar to the motivation for measuring pretraining transfer, zero-shot evaluation is an important use case and also provides a strong baseline for some tasks. In the absence of training examples, textual class labels or richer task descriptions [73] must be provided. Some recent few-shot NLP work [e.g., 10, 24] evaluated with zero training shots, but most [e.g., 3, 5, 75] did not.
No extra meta-testing data We believe the benchmark should not provide validation data (D^E_val = ∅, ∀E ∈ Etest) or unlabeled data for meta-testing tasks, since few-shot learning seeks to enable high performance in environments where collecting additional data is costly.10 Variation in these dimensions in prior NLP work makes comparison of results extremely difficult because it is often under-reported and gives unfair advantage to approaches that leverage such data [50]. For example, per-episode hyperparameter tuning on extra data has been shown to greatly inflate evaluation scores [24]. A few researchers [5, 65] follow our suggested approach, but others have used many
10Unlabeled data collection can be costly too, e.g. due to manual filtering [16].
different settings, from validation sets of various sizes [10, 24, 79] to no validation set but a large set of unlabeled examples [55, 56].
Principled sample size design Promising few-shot techniques can incur significant computational cost per episode, e.g., due to fine-tuning model parameters [4], searching for prompts [24], inter alia. To alleviate these costs, related works often evaluate with a limited number of episodes, which precludes statistically accurate or precise performance estimates. We believe the benchmark’s test sample size should be optimized to enable proper performance evaluation for such techniques, while ensuring the computational burden is inclusive toward researchers without large compute resources.
Proper reporting of CIs, SDs, and individual results The benchmark should report confidence intervals (CIs) of performance estimates and follow recent guidelines [19] to report standard deviations (SDs) for understanding variability. Moreover, we newly advocate for controlling for the same sampled few-shot episodes across all methods and reporting individual episode results, so that researchers can run higher-powered paired statistical tests when comparing results [22], crucial when the benchmark has been optimized for low evaluation budgets.
Acknowledgments and Disclosure of Funding
We would like to thank Chandra Bhagavatula, Matt Gardner, Matt Peters, Doug Downey, Dan Weld, and the four anonymous reviewers for helpful comments, suggestions and feedback. We would also like to acknowledge the large community effort involved in the creation of the datasets and open-source tools we utilize. | 1. What is the focus and contribution of the paper regarding few-shot learning in NLP?
2. How does the reviewer assess the significance and usefulness of the proposed benchmark, FLEET?
3. What are the strengths and weaknesses of the paper in terms of its design and construction?
4. Do you have any concerns about the selection of tasks and datasets in FLEET?
5. How does the reviewer evaluate the similarity between the proposed method and a concurrent work? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes FLEET, a benchmark for few-shot learning in NLP. FLEET is comprised of different few-shot settings (new tasks, label sets, or domains at test time), different amounts of training examples ("shots"), and meta-learning strategies (w/ and w/o meta learning). The paper goes through the design decisions behind FLEET and details its construction. Finally, they present a baseline based on UnifiedQA, wherein they reformulate downstream tasks as QA and then finetune the model on downstream tasks.
Review
First, the benchmark contains a lot of different settings. On one hand, this is a positive as it allows many different types of research to be conducted with the data. On the other hand, it practically makes understanding, reporting, and using the benchmark a bit messy. For example, people will report very specific settings such as FLEET with {4-shot, with meta learning, task transfer}. More generally, there is often a tradeoff when constructing datasets of "do we focus on specific problems/settings we think are interesting" versus "do we build datasets that many people could use in different settings". This paper is at the extreme of trying to satisfy the second question: rather than trying to take a stand on what the community should work on, it tries to do a bit of everything.
Second, a common criticism of GLUE and other benchmarks which applies to FLEET is that the benchmarks (1) simply repackage and resell other datasets as a new benchmark without adding substantial new value and (2) contain a relatively ad-hoc selection of individual datasets. FLEET is similar in that it sort of throws together a collection of tasks (sentiment, NLI, etc.) mainly guided by low-level reasons (e.g., the class imbalances) rather than higher-level ones (e.g., based on practical few-shot scenarios, different types of reasoning, tests of spurious correlations, etc.).
Third, a lower-level point is that I think few-shot benchmarks should contain multiple prompts / label descriptions for each dataset. There can be high variance across different prompts (Zhao et al. ICML 2021, Gao et al. ACL 2021, etc.) and so I think the reaction to this should be to report accuracy across multiple different prompts.
Finally, the proposed method is extremely similar to this concurrent work https://arxiv.org/abs/2104.04670. This is not a negative or a positive, I am simply pointing this out as an FYI to the authors, as they should probably cite/discuss this in a future version of their draft. |
NIPS | Title
Object-aware Contrastive Learning for Debiased Scene Representation
Abstract
Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations. However, the learned representations are often contextually biased to the spurious scene correlations of different objects or object and background, which may harm their generalization on the downstream tasks. To tackle the issue, we develop a novel object-aware contrastive learning framework that first (a) localizes objects in a self-supervised manner and then (b) debiases scene correlations via appropriate data augmentations considering the inferred object locations. For (a), we propose the contrastive class activation map (ContraCAM), which finds the most discriminative regions (e.g., objects) in the image compared to the other images using the contrastively trained models. We further improve the ContraCAM to detect multiple objects and entire shapes via an iterative refinement procedure. For (b), we introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning, respectively. Our experiments demonstrate the effectiveness of our representation learning framework, particularly when trained under multi-object images or evaluated under the background (and distribution) shifted images.1
1 Introduction
Self-supervised learning of visual representations from unlabeled images is a fundamental task of machine learning, which establishes various applications including object recognition [1, 2], reinforcement learning [3, 4], out-of-distribution detection [5, 6], and multimodal learning [7, 8]. Recently, contrastive learning [1, 2, 9–15] has shown remarkable advances along this line. The idea is to learn invariant representations by attracting the different views (e.g., augmentations) of the same instance (i.e., positives) while contrasting different instances (i.e., negatives).2
Despite the success of contrastive learning on various downstream tasks [16], they still suffer from the generalization issue due to the unique features of the training datasets [17–19] or the choice of data augmentations [19–21]. In particular, the co-occurrence of different objects and background in randomly cropped patches (i.e., positives) leads the model to suffer from the scene bias. For example, Figure 1a presents two types of the scene bias: the positive pairs contain different objects (e.g., giraffe and zebra), and the patches contain adjacent object and background (e.g., zebra and safari). Specifically, the co-occurrence of different objects is called contextual bias [22], and that of object and background is called background bias [23]. Attracting the patches in contrastive learning makes the features of correlated objects and background indistinguishable, which may harm their generalization (Figure 1b) because of being prone to biases (Figure 1c).
∗Equal contribution
1Code is available at https://github.com/alinlab/object-aware-contrastive.
2Some recent works (e.g., [14, 15]) attract the positives without contrasting the negatives. While we mainly focus on contrastive learning with negatives, our method is also applicable to the positive-only methods.
Contribution. We develop a novel object-aware contrastive learning framework that mitigates the scene bias and improves the generalization of learned representation. The key to success is the proposed contrastive class activation map (ContraCAM), a simple yet effective self-supervised object localization method that contrasts against other images to find the most discriminative regions in the image. We leverage the ContraCAM to create new types of positives and negatives. First, we introduce two data augmentations for constructing the positive sample-pairs of contrastive learning: object-aware random crop and background mixup, which reduce contextual and background biases, respectively. Second, by equipping ContraCAM with an iterative refinement procedure, we extend it to detect multiple objects and entire shapes, which allows us to generate masked images as effective negatives.
We demonstrate that the proposed method can improve two representative contrastive (or positive-only) representation learning schemes, MoCov2 [28] and BYOL [14], by reducing contextual and background biases as well as learning object-centric representation. In particular, we improve:
• The representation learning under multi-object images, evaluated on the COCO [25] dataset, boosting the performance on the downstream tasks, e.g., classification and detection.
• The generalizability of the learned representation on the background shifts, i.e., objects appear in the unusual background (e.g., fish on the ground), evaluated on the Background Challenge [23].
• The generalizability of the learned representation on the distribution shifts, particularly for the shape-biased, e.g., ImageNet-Sketch [29], and corrupted, e.g., ImageNet-C [30] datasets.
Furthermore, ContraCAM shows comparable results with the state-of-the-art unsupervised localization method (and also with the supervised classifier CAM) while being simple.
2 Object-aware Contrastive Learning
We first briefly review contrastive learning in Section 2.1. We then introduce our object localization and debiased contrastive learning methods in Section 2.2 and Section 2.3, respectively.
2.1 Contrastive learning
Contrastive self-supervised learning aims to learn an encoder f(·) that extracts a useful representation from an unlabeled image x by attracting similar sample x+ (i.e., positives) and dispelling dissimilar samples {x−i } (i.e., negatives). In particular, instance discrimination [10] defines the same samples of different data augmentations (e.g., random crop) as the positives and different samples as negatives.
Formally, contrastive learning maximizes the contrastive score:
s_{\mathrm{con}}(x;\, x^{+}, \{x^{-}_{n}\}) := \log \frac{\exp\big(\mathrm{sim}(z(x), \bar{z}(x^{+}))/\tau\big)}{\exp\big(\mathrm{sim}(z(x), \bar{z}(x^{+}))/\tau\big) + \sum_{x^{-}_{n}} \exp\big(\mathrm{sim}(z(x), \bar{z}(x^{-}_{n}))/\tau\big)}, \quad (1)
where z(·) and z̄(·) are the output and target functions wrapping the representation f(x) for use, sim(·, ·) denotes the cosine similarity, and τ is a temperature hyperparameter. The specific form of z(·), z̄(·) depends on the method. For example, MoCov2 [28] sets z(·) = g(f(·)), z̄(·) = gm(fm(·)) where g(·) is a projector network to indirectly match the feature f(x) and fm(·), gm(·) are the momentum version of the encoder and projectors. On the other hand, BYOL [14] sets z(·) = h(g(f(·))), z̄(·) = gm(fm(·)), where h(·) is an additional predictor network to avoid collapse of the features because it only maximizes the similarity score ssim(x;x+) := sim(z(x), z̄(x+)) [14, 15].
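As a concrete reference for Eq. (1), the following is a minimal PyTorch sketch of the contrastive score for a single anchor embedding; the function name, tensor shapes, and the temperature value are our own illustration (not the authors' released code), and the inputs are assumed to already be outputs of the z(·) and z̄(·) heads described above:

import torch
import torch.nn.functional as F

def contrastive_score(z, z_pos, z_neg, tau=0.2):
    # z: (d,) anchor embedding z(x); z_pos: (d,) positive target z_bar(x+)
    # z_neg: (n, d) negative targets z_bar(x_n^-); tau: temperature
    z = F.normalize(z, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    z_neg = F.normalize(z_neg, dim=-1)
    pos = torch.exp(torch.dot(z, z_pos) / tau)   # exp(sim(z(x), z_bar(x+)) / tau)
    neg = torch.exp(z_neg @ z / tau).sum()       # sum of exp-similarities to negatives
    return torch.log(pos / (pos + neg))          # Eq. (1)

Maximizing this score over the positives of each anchor recovers the usual InfoNCE-style objective.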
Scene bias in contrastive learning. Despite the success of contrastive learning, they often suffer from the scene bias: entangling representations of co-occurring (but different) objects, i.e., contextual bias [22], or adjacent object and background, i.e., background bias [23], by attracting the randomly cropped patches reflecting the correlations (Figure 1a). The scene bias harms the performance (Figure 1b) and generalization of the learned representations on distribution shifts (Figure 1c). To tackle the issue, we propose object-aware data augmentations for debiased contrastive learning (Section 2.3) utilizing the object locations inferred from the contrastively trained models (Section 2.2).
2.2 ContraCAM: Unsupervised object localization via contrastive learning
We aim to find the most discriminative region in an image, such as objects for scene images, compared to the other images. To this end, we extend the (gradient-based) class activation map (CAM) [31, 32], originally used to find the salient regions for the prediction of classifiers. Our proposed method, contrastive class activation map (ContraCAM), has two differences from the classifier CAM. First, we use the contrastive score instead of the softmax probability. Second, we discard the negative signals from the similar objects in the negative batch since they cancel out the positive signals and hinder the localization, which is crucial (as shown in Table 1 and Appendix C.1).
Following the classifier CAM, we define the saliency map as the weighted sum of spatial activations (e.g., the penultimate feature before pooling), where the weight of each activation is given by its importance, the sum of gradients, for the score function. Formally, let A := [A^k_{ij}] be a spatial activation of an image x, where 1 ≤ i ≤ H, 1 ≤ j ≤ W, 1 ≤ k ≤ K denote the row, column, and channel indices, and H, W, K denote the height, width, and channel size of the activation. Given a batch of samples B, we define the score function of the sample x as the contrastive score s_con in Eq. (1), using the sample x itself as a positive3 and the remaining samples B \ x as negatives. Then, the weight of the k-th activation α_k and the CAM mask CAM := [CAM_{ij}] ∈ [0, 1]^{H×W} are:
\mathrm{CAM}_{ij} = \mathrm{Normalize}\Big(\mathrm{ReLU}\Big(\sum_{k} \alpha_{k} A^{k}_{ij}\Big)\Big), \qquad \alpha_{k} = \mathrm{ReLU}\Big(\frac{1}{HW} \sum_{i,j} \frac{\partial s_{\mathrm{con}}(x;\, x,\, B \setminus x)}{\partial A^{k}_{i,j}}\Big), \quad (2)
where \mathrm{Normalize}(x) := \frac{x - \min x}{\max x - \min x} is a normalization function that maps the elements to [0, 1]. We highlight the differences from the classifier CAM in red. Note that the ReLU used to compute α_k in Eq. (2) discards the negative signals. The negative signal removal trick also slightly improves the classifier CAM [33] but is much more effective for the ContraCAM.
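To make Eq. (2) concrete, below is a minimal PyTorch sketch of a single ContraCAM computation; it assumes the caller has run the encoder with gradients enabled, kept the spatial activation A, and evaluated the contrastive score s_con(x; x, B \ x) from it, and all names are hypothetical rather than the authors' implementation:

import torch
import torch.nn.functional as F

def contra_cam(activation, score):
    # activation: (K, H, W) spatial activation A of the image (part of the autograd graph)
    # score: scalar contrastive score s_con(x; x, B \ x) computed from this activation
    grads = torch.autograd.grad(score, activation, retain_graph=True)[0]  # (K, H, W)
    alpha = F.relu(grads.mean(dim=(1, 2)))                     # ReLU'd per-channel weights
    cam = F.relu((alpha[:, None, None] * activation).sum(0))   # weighted sum over channels
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # Normalize(.) to [0, 1]
    return cam.detach()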
We further improve the ContraCAM to detect multiple objects and entire shapes with an iterative refinement procedure [34]: cover the salient regions of the image with the (reverse of) current CAM, predict a new CAM from the masked image, and aggregate them (see Figure 2). It expands the CAM regions since the new CAM from the masked image detects the unmasked regions. Here, we additionally provide the masked images in the batch (computed in parallel) as the negatives: they are better negatives by removing the possibly existing similar objects. Also, we use the original image x as the positive to highlight the undetected objects. Formally, let \mathrm{CAM}^{t} be the CAM of iteration t and let \overline{\mathrm{CAM}}^{t} := [\overline{\mathrm{CAM}}^{t}_{ij}] = [\max_{l \le t} \mathrm{CAM}^{l}_{ij}] be the aggregated CAM mask. Also, let x^{t} be the image softly masked by the (reverse of) current aggregated mask, i.e., x^{t} := (1 - \overline{\mathrm{CAM}}^{t-1}) \odot x for t \ge 2 and x^{1} = x, where \odot denotes an element-wise product, and let B^{t} := \{x^{t}_{n}\} be the batch of the masked images. Then, we define the score function for iteration t as:
s^{t}_{\mathrm{con}}(x) := s_{\mathrm{con}}\big(x^{t};\, x,\, \cup_{l \le t}(B^{l} \setminus x^{l})\big). \quad (3)
3It does not affect the score but is defined for notation consistency with the iterative extension.
We substitute the contrastive score s_con in Eq. (2) with s^t_con in Eq. (3) to compute the CAM of iteration t, and use the final aggregated mask after T iterations. We remark that the CAM results are not sensitive to the number of iterations T if it is large enough; the CAM converges to a stationary value since the soft masking of x^t regularizes the CAM not to keep expanding (see Appendix C.2). We provide the pseudo-code of the entire Iterative ContraCAM procedure in Appendix A.
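The control flow of the iterative refinement can be sketched as follows; this is an outline under our own naming (not the Appendix A pseudo-code), and contra_cam_step is an assumed caller-supplied callable that applies Eq. (2) to every image in inputs with the given positives and negatives and returns per-image CAMs in [0, 1]:

import torch
import torch.nn.functional as F

def iterative_contra_cam(batch, contra_cam_step, num_iters=10):
    # batch: (N, C, H, W) images; each image uses the other images as negatives
    agg = None                           # aggregated CAMs (element-wise max over iterations)
    neg_history = []                     # masked batches from earlier iterations
    for _ in range(num_iters):
        if agg is None:
            masked = batch               # x^1 = x
        else:
            up = F.interpolate(agg, size=batch.shape[-2:], mode="bilinear", align_corners=False)
            masked = (1.0 - up) * batch  # x^t: softly mask the already-detected regions
        cams = contra_cam_step(inputs=masked, positives=batch,
                               negatives=neg_history + [masked])
        agg = cams if agg is None else torch.maximum(agg, cams)
        neg_history.append(masked)
    return agg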
Note that contrastive learning was known to be ineffective at localizing objects [35] with standard saliency methods (using a classifier on top of the learned representation) since attracting the randomly cropped patches makes the model look at the entire scene. To our best knowledge, we are the first to extend the CAM for the self-supervised setting, relaxing the assumption of class labels. Selvaraju et al. [36] considered CAM for contrastive learning, but their purpose was to regularize CAM to be similar to the ground-truth masks (or predicted by pre-trained models) and used the similarity of the image and the masked image (by ground-truth masks) as the score function of CAM.
2.3 Object-aware augmentations for debiased contrastive learning
We propose two data augmentations for contrastive learning that reduce contextual and background biases, respectively, utilizing the object locations inferred by ContraCAM. Both augmentations are applied to the positive samples before other augmentations; thus, it is applicable for both contrastive learning (e.g., MoCov2 [28]) and positive-only methods (e.g., BYOL [14]).
Reducing contextual bias. We first tackle the contextual bias of contrastive learning, i.e., entangling the features of different objects. To tackle the issue, we propose a data augmentation named object-aware random crop, which restricts the random crop around a single object and avoids the attraction of different objects. To this end, we first extract the (possibly multiple or none) bounding boxes of the image from the binarized mask4 of the ContraCAM. We then crop the image around the box, randomly chosen from the boxes, before applying other augmentations (e.g., random crop). Here, we apply augmentations (to produce positives) to the same cropped box; thus, the patches are restricted to the same box. Technically, it only requires a few additional lines of code:
if len(boxes) > 0:  # can be empty
    box = random.choice(boxes)
    image = image.crop(box)
# apply other augmentations (e.g., random crop)
4Threshold the mask or apply a post-processing method, e.g., conditional random field (CRF) [37].
Purushwalkam and Gupta [19] considered a similar approach using ground-truth bounding boxes applied on MoCov2. However, we found that cropping around the ground-truth boxes often harms contrastive learning (see Table 4). This is because some objects (e.g., small ones) in ground-truth boxes are hard to discriminate (as negatives), making contrastive learning hard to optimize. In contrast, the ContraCAM produces more discriminative boxes, often outperforming the ground-truth boxes (see Appendix D.1). Note that the positive-only methods do not suffer from the issue: both groundtruth and ContraCAM boxes work well. On the other hand, Selvaraju et al. [36] used a pre-trained segmentation model to constrain the patches to contain objects. It partly resolves the false positive issue by avoiding the attraction of background-only patches but does not prevent the patches with different objects; in contrast, the object-aware random crop avoids both cases.
Reducing background bias. We then tackle the background bias of contrastive learning, i.e., entangling the features of adjacent object and background. To this end, we propose a data augmentation named background mixup, which substitutes the background of an image with other backgrounds. Intuitively, the positive samples share the objects but have different backgrounds, thus reducing the background bias. Formally, background mixup blends an image x_1 and a background-only image x^{\mathrm{bg}}_{2} (generated from an image x_2) using the ContraCAM of image x_1 as a weight, i.e.,
x^{\mathrm{bg\text{-}mix}}_{1} := \mathrm{CAM}(x_{1}) \odot x_{1} + (1 - \mathrm{CAM}(x_{1})) \odot x^{\mathrm{bg}}_{2}, \quad (4)
where \odot denotes an element-wise product. Here, the background-only image x^{\mathrm{bg}}_{2} is generated by tiling the background patch of the image x_2 inferred by the ContraCAM. Precisely, we choose the largest rectangle in the zeros of the binarized CAM mask for the region of the background patch. The overall procedure of the background mixup is illustrated in Figure 3.
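A short sketch of Eq. (4) and the tiling step is given below; the helper names, the (top, left, height, width) box convention, and the assumption that the CAM has already been upsampled to image resolution are ours:

import torch

def tile_background_patch(x_other, box, out_hw):
    # Tile the background rectangle box = (top, left, h, w) of another image to full size.
    top, left, h, w = box
    patch = x_other[:, top:top + h, left:left + w]
    reps_h = -(-out_hw[0] // h)          # ceil(H / h)
    reps_w = -(-out_hw[1] // w)          # ceil(W / w)
    return patch.repeat(1, reps_h, reps_w)[:, :out_hw[0], :out_hw[1]]

def background_mixup(x, cam, x_bg):
    # Eq. (4): keep the soft foreground of x and fill the rest with the other background.
    # cam: (1, H, W) ContraCAM of x with values in [0, 1]; x, x_bg: (C, H, W) images.
    return cam * x + (1.0 - cam) * x_bg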
Prior works considered the background bias for contrastive learning [35, 38] but used a pre-trained segmentation model and copy-and-pasted the objects to the background-only images using binary masks. We also tested the copy-and-paste version with the binarized CAM, but the soft version in Eq. (4) performed better (see Appendix E.1); one should consider the confidence of the soft masks since they are inaccurate. Furthermore, the background mixup improves the generalization on distribution shifts, e.g., shape-biased [29, 39, 40] and corrupted [30] datasets (see Table 8). Remark that the background mixup often outperforms the Mixup [41] and CutMix [42] applied for contrastive learning [43]. Intuitively, the background mixup can be viewed as a saliency-guided extension [44, 45] of mixup but not mixing the targets (positives), since the mixed patch should be only considered as the positive of the patch sharing foreground, not the one sharing background.
3 Experiments
We first verify the localization performance of ContraCAM in Section 3.1. We then demonstrate the efficacy of our debiased contrastive learning: object-aware random crop improves the training under multi-object images by reducing contextual bias in Section 3.2, and background mixup improves generalization on background and distribution shifts by reducing background bias in Section 3.3.
Common setup. We apply our method on two representative contrastive (or positive-only) learning models: MoCov2 [28] and BYOL [14], under the ResNet-18 and ResNet-50 architectures [27]. We train the models for 800 epochs on COCO [25] and ImageNet-9 [23], and 2,000 epochs on CUB [46] and Flowers [26] datasets with batch size 256. For object localization experiments, we train the vanilla MoCov2 and BYOL on each dataset and compute the CAM masks. For representation learning experiments, we first train the vanilla MoCov2 and BYOL to pre-compute the CAM masks (and corresponding bounding boxes); then, we retrain MoCov2 and BYOL, applying our proposed augmentations using the fixed pre-computed masks (and boxes). Here, we retrain the models from scratch to make the training budgets fair. We also retrained (i.e., third iteration) the model using the CAM masks from our debiased models but did not see the gain (see Appendix D.6). We follow the default hyperparameters of MoCov2 and BYOL, except the smaller minimum random crop scale of 0.08 (instead of the original 0.2) since it performed better, especially for the multi-object images. We run a single trial for contextual bias and three trials for background bias experiments.
We use the penultimate spatial activations to compute the CAM results. At inference, we follow the protocol of [48] that doubly expands the resolution of the activations to detect the smaller objects through decreasing the stride of the convolutional layer in the final residual block. Since it produces the smaller masks, we use more iterations (e.g., 10) for the Iterative ContraCAM. Here, we apply the conditional random field (CRF) using the default hyperparameters from the pydensecrf library [49] to produce segmentation masks and use the opencv [50] library to extract bounding boxes. We use a single iteration of the ContraCAM without the expansion trick for background bias results; it is sufficient for single instance images. Here, we binarize the masks with a threshold of 0.2 to produce background-only images. We provide the further implementation details in Appendix B.
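As an illustration of the mask-to-box step feeding the object-aware random crop above, the sketch below extracts PIL-style (left, upper, right, lower) boxes from a CAM mask with a plain threshold and connected components; it deliberately skips the CRF post-processing used in the paper, and the threshold and minimum-area values are illustrative:

import cv2
import numpy as np

def cam_to_boxes(cam, threshold=0.2, min_area=32):
    # cam: (H, W) numpy array with values in [0, 1]
    binary = (cam >= threshold).astype(np.uint8)
    num, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = []
    for i in range(1, num):                      # label 0 is the background component
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, x + w, y + h))   # PIL-style box for image.crop(box)
    return boxes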
Computation time. The training of the baseline models on the COCO (∼100,000 samples) dataset takes ∼1.5 days on 4 GPUs and ∼3 days on 8 GPUs for ResNet-18 and ResNet-50 architectures, respectively, using a single machine with 8 GeForce RTX 2080 Ti GPUs; proportional to the number of samples and training epochs for other cases. The inference of ContraCAM takes a few minutes for the entire training dataset, and generating the boxes using CRF takes dozens of minutes. Using the pre-computed masks and boxes, our method only slightly increases the training time.
3.1 Unsupervised object localization
We check the performance of our proposed self-supervised object localization method, ContraCAM. Figure 4 shows the examples of the ContraCAM on various image datasets, including CUB, Flowers,
COCO, and ImageNet-9 datasets. ContraCAM even detects multiple objects in the image. We also quantitatively compare ContraCAM with the state-of-the-art unsupervised object localization method, ReDO [47]. Table 1 shows that the ContraCAM is comparable with ReDO in terms of the mask mean intersection-over-unions (mIoUs). One can also see that the negative signal removal, i.e., the ReLU in Eq. (2), is critical to the performance (see Appendix C.1 for the visual examples).
We also compare the localization performance of ContraCAM (using MoCov2) and classifier CAM (using a supervised model). Table 2 shows the results where all models are solely trained from the target dataset and evaluated on the same dataset. Interestingly, ContraCAM outperforms the classifier CAM on CUB and Flowers. We conjecture this is because CUB and Flowers have few training samples; the supervised classifier is prone to overfitting. On the other hand, Table 3 shows the results on the transfer setting, i.e., the models are trained on the ImageNet [51] using the ResNet-50 architecture. We use the publicly available supervised classifier [52] and MoCov2, and follow the MaxBoxAccV2 evaluation protocol [48]. The ContraCAM often outperforms the classifier CAM, especially for the unseen images (e.g., CUB). This is because the classifiers project out the features unrelated to the target classes, losing their generalizability on the out-of-class samples.
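For reference, a minimal sketch of the per-image mask IoU behind the reported mIoU numbers; the binarization threshold is an illustrative assumption, and benchmark-specific protocols such as MaxBoxAccV2 involve further details not shown here:

import torch

def mask_iou(pred_cam, gt_mask, threshold=0.5):
    # pred_cam: [0, 1] CAM mask; gt_mask: {0, 1} ground-truth mask of the same shape
    pred = pred_cam >= threshold
    gt = gt_mask.bool()
    inter = (pred & gt).sum().float()
    union = (pred | gt).sum().float()
    return (inter / union.clamp(min=1.0)).item()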
We provide additional analysis and results in Appendix C. Appendix C.2 shows the ablation study on the number of iterations of ContraCAM. One needs a sufficient number of iterations since too few iterations often detect subregions. Since ContraCAM converges to the stationary values for more iterations, we simply choose 10 for all datasets. Appendix C.3 shows the effects of the negative batch of ContraCAM. Since ContraCAM finds the most discriminative regions compared to the negative batch, one needs to choose the negative batch different from the target image. Using a few randomly sampled images is sufficient. Appendix C.4 provides additional comparison of ContraCAM and classifier CAM. Finally, Appendix C.5 provides a comparison with the gradient-based saliency methods [53, 54] using the same contrastive score. CAM gives better localization results.
3.2 Reducing contextual bias: Representation learning from multi-object images
We demonstrate the effectiveness of the object-aware random crop (OA-Crop) for representation learning under multi-object images by reducing contextual bias. To this end, we train MoCov2 and BYOL on the COCO dataset, comparing them with the models that applied the OA-Crop using the ground-truth (GT) bounding boxes or inferred ones from the ContraCAM.
We first compare the linear evaluation [24], test accuracy of a linear classifier trained on top of the learned representation, in Table 4. We report the results on the COCO-Crop, i.e., the objects in the COCO dataset cropped by the GT boxes, CIFAR-10 and CIFAR-100 [55], CUB, Flowers, Food [56], and Pets [57] datasets. OA-Crop significantly improves the linear evaluation of MoCov2 and BYOL for all tested cases. Somewhat interestingly, OA-Crop using the ContraCAM boxes even outperforms the GT boxes for MoCov2 under the ResNet-50 architecture. This is because the GT boxes often contain objects hard to discriminate (e.g., small objects), making contrastive learning hard to optimize; in contrast, ContraCAM finds more distinct objects. Note that BYOL does not suffer from this issue and performs well with both boxes. See Appendix D.1 for the detailed discussion.
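For readers unfamiliar with the protocol, a minimal sketch of linear evaluation is given below: freeze the pretrained encoder and fit a linear classifier on its features. The optimizer, schedule, and the feature_dim argument are illustrative assumptions rather than the exact recipe of [24]:

import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_evaluation(encoder, feature_dim, num_classes, train_loader, epochs=100, lr=0.1):
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad = False              # freeze the self-supervised encoder
    clf = nn.Linear(feature_dim, num_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = encoder(images)      # frozen features
            loss = F.cross_entropy(clf(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf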
We also compare the detection (and segmentation) performance measured by mean average precision (AP), an area under the precision-recall curve of the bounding boxes (or segmentation masks),
on the COCO detection and segmentation tasks in Table 5. Here, we fine-tune the MoCov2 and BYOL models using the ResNet-50 architecture. Remark that OA-Crop using the ContraCAM boxes outperforms the baselines, while the GT boxes are on par or worse. This is because the GT boxes solely focus on the objects while ContraCAM also catches the salient scene information.
In addition, we present the generalization performance of learned representations under the distribution shifts in Table 6. To this end, we evaluate the models trained on the COCO dataset on various 9 superclass (370 classes) subsets of ImageNet, whose details will be elaborated in the next section. ImageNet-9 contains natural images like COCO, but other datasets contain distribution-shifted (e.g., shape-biased or corrupted) images. Note that OA-Crop performs on par with the vanilla MoCov2 and BYOL on the original ImageNet-9 but performs better on the distribution-shifted datasets. It verifies that the OA-Crop improves the generalizability of the learned representation.
We provide additional analysis and results in Appendix D. Appendix D.2 provides an additional analysis that OA-Crop indeed reduces the contextual bias. Specifically, the representation learned from OA-Crop shows better separation between the co-occurring objects, giraffe and zebra. Appendix D.3 provides the comparison with the supervised representation, learned by Faster R-CNN [58] and Mask R-CNN [59], using ground-truth bounding boxes or segmentation masks. OA-Crop significantly reduces the gap between self-supervised and supervised representation. Appendix D.4 presents the class-wise accuracy on CIFAR10 that OA-Crop consistently improves the accuracy over all classes. Appendix D.5 presents the linear evaluation performance of MoCov2 and BYOL trained on a 10% subset of ImageNet for readers comparing with the results with the ImageNet-trained models.
3.3 Reducing background bias: Generalization on background and distribution shifts
We demonstrate the effectiveness of the background mixup (BG-Mixup) for the generalization of the learned representations on background and distribution shifts by reducing background bias and learning object-centric representation. To this end, we train MoCov2 and BYOL (and BG-Mixup upon them) on the ORIGINAL dataset from the Background Challenge [23], a 9 superclass (370 classes) subset of the ImageNet [51]. We then train a linear classifier on top of the learned representation using the ORIGINAL dataset. Here, we evaluate the classifier on the Background Challenge datasets for the background shift results, and the corresponding 9 superclasses of the ImageNet-Sketch [29], Stylized-ImageNet [39], ImageNet-R [40], and ImageNet-C [30] datasets, denoted by putting ‘-9’ at the suffix of the dataset names, for the distribution shift results (see Appendix B.3 for details).
We additionally compare BG-Mixup with the hard background mixing (i.e., copy-and-paste) using ground-truth masks (BG-HardMix (GT)) for the background shift experiments, and Mixup [41] and CutMix [42] (following the training procedure of [43]) for the distribution shift experiments. We also tested the BG-HardMix using the binarized CAM, but it did not work well (see Appendix E.1). On the other hand, the BG-Mixup often makes contrastive learning hard to optimize by producing hard positives; thus, we apply BG-Mix with probability p_mix < 1, independently on each patch. We tested p_mix ∈ {0.2, 0.3, 0.4, 0.5} and chose p_mix = 0.4 for MoCov2 and p_mix = 0.3 for BYOL. Note that MoCov2 permits the higher p_mix, since finding the closest sample from the (finite) batch is easier than clustering infinitely many samples (see Appendix E.2 for details).
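A small sketch of how the mixing probability can be applied per view is shown below; it reuses the background_mixup helper sketched earlier (an assumption of this illustration), with p_mix = 0.4 for MoCov2 and 0.3 for BYOL as reported above:

import random

def maybe_background_mixup(x, cam, bg_pool, p_mix=0.4):
    # bg_pool: list of pre-generated background-only images; applied independently per view
    if bg_pool and random.random() < p_mix:
        return background_mixup(x, cam, random.choice(bg_pool))
    return x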
Table 7 presents the results on background shifts: BG-Mixup improves the predictions on the object-focused datasets (e.g., MIXED-RAND) while regularizing the background-focused datasets (e.g., ONLY-BG-T). Table 8 presents the results on distribution shifts: BG-Mixup mostly outperforms the Mixup and the CutMix. We also provide the BG-HardMix (GT) results on distribution shifts in Appendix E.3 and the mixup results on background shifts in Appendix E.4. The superiority of BG-Mix on both background and distribution shifts shows that its merits come from both object-centric learning via reducing background bias and the saliency-guided input interpolation. In addition, we provide the corruption-wise classification results on ImageNet-9-C in Appendix E.5, and additional distribution shift results on ObjectNet [60] and SI-Score [61] in Appendix E.6.
4 Related work
Contrastive learning. Contrastive learning (or positive-only method) [1, 2, 14] is the state-of-the-art method for visual representation learning, which incorporates the prior knowledge of invariance over the data augmentations. However, they suffer from an inherent problem of matching false positives from random crop augmentation. We tackle this scene bias issue and improve the quality of learned representation. Note that prior work considering the scene bias for contrastive learning [19, 35, 36, 38] assumed the ground-truth object annotations or pre-trained segmentation models, undermining the motivation of self-supervised learning to reduce such supervision. In contrast, we propose a fully self-supervised framework of object localization and debiased contrastive learning. Several works [62, 63] consider an object-aware approach for video representation learning, but their motivation was to attract the objects of different temporal views and require a pretrained object detector.
Bias in visual representation. The bias (or shortcut) in neural networks [64] has received significant attention recently, pointing out the unintended over-reliance on texture [39], background [23], adversarial features [65], or conspicuous inputs [66]. Numerous works have thus attempted to remove such biases, particularly in an unsupervised manner [29, 67, 68]. Our work also lies on this line: we evoke the scene bias issue of self-supervised representation learning and propose an unsupervised debiasing method. Our work would be a step towards an unbiased, robust visual representation.
Unsupervised object localization. The deep-learning-based unsupervised object localization methods can be categorized as follows. (a) The generative-based [47, 69, 70] approaches train a generative model that disentangles the objects and background by enforcing the object-perturbed image to be considered as real. (b) The noisy-ensemble [71–73] approaches train a model using handcrafted predictions as noisy targets. Although the training is unsupervised, they initialize the weights with the supervised model. (c) Voynov et al. [74] manually find the ‘salient direction’ from the noise (latent) of the ImageNet-trained BigGAN [75]. Besides, scene decomposition (e.g., [76]) aims at a more ambitious goal: fully decomposing the objects and background, but it currently does not scale to complex images. To our best knowledge, the generative-based approach is the state-of-the-art method for fully unsupervised scenarios. Our proposed ContraCAM could be an alternative in this direction.
Class activation map. Class activation map [31, 32] has been used for the weakly-supervised object localization (WSOL), inferring the pixel- (or object-) level annotations using class labels. Specifically, classifier CAM finds the regions that are most salient for the classifier score. ContraCAM further expands its applicability from weakly-supervised to unsupervised object localization by utilizing the contrastive score instead of the classifier score. We think ContraCAM will raise new interesting research questions, e.g., one could adopt the techniques from CAM to the ContraCAM.
5 Conclusion and Discussion
We proposed the ContraCAM, a simple and effective self-supervised object localization method using the contrastively trained models. We then introduced two data augmentations upon the ContraCAM that reduce scene bias and improve the quality of the learned representations for contrastive learning. We remark that the scene bias is more severe for the uncurated images; our work would be a step towards strong self-supervised learning under real-world scenarios [77, 78].
Limitations. Since the ContraCAM finds the most salient regions, it can differ from the desiderata of the users, e.g., the ContraCAM detects both the birds and branches in the CUB [46] dataset, but one may only want to detect the birds. Also, though the ContraCAM identifies the disjoint objects, it is hard to separate the occluded objects. Incorporating the prior knowledge of the objects and designing a more careful method to disentangle objects would be an interesting future direction.
Potential negative impacts. Our proposed framework enforces the model to focus on the “objects”, or the salient regions, to disentangle the relations of the objects and background. However, ContraCAM may over-rely on the conspicuous objects and the derived data augmentation strategy by ContraCAM could potentially incur imbalanced performance across different objects. We remark that the biases in datasets and models cannot be entirely eliminated without carefully designed guidelines. While we empirically observe our proposed learning strategies mitigate contextual and background biases on certain object types, we still need a closer look at the models, interactively correcting them.
Acknowledgements
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST); No. 2019-0-01396, Development of framework for analyzing, detecting, mitigating of bias in AI model and training data; No.2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence), and partly by the Defense Challengeable Future Technology Program of the Agency for Defense Development, Republic of Korea. We thank Jihoon Tack, Jongjin Park, and Sihyun Yu for their valuable comments. | 1. What is the main contribution of the paper in the field of contrastive learning?
2. How does the proposed approach, ContraCAM, extend Class Activation Maps to the contrastive learning framework?
3. Can you explain how ContraCAM can detect multiple discriminative objects in an image using an iterative method?
4. What are the strengths of the paper regarding its novelty and performance improvements compared to existing contrastive learning methods?
5. Do you have any concerns or suggestions for improving the paper's content or research methodology? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes an object-aware contrastive learning framework which can localize the objects in a self-supervised manner (ContraCAM) and then uses these localized objects to learn de-biased representations. The authors motivate the proposed approach through spurious scene correlations between different objects or between objects and the background. The proposed ContraCAM approach is an intuitive extension of CAM to contrastively trained models. ContraCAM can detect multiple discriminative objects in an image using an iterative method.
Review
The paper proposes an interesting and novel approach which extends Class Activation Maps to the contrastive learning framework. The proposed approach, called ContraCAM, can be used to obtain activation maps for discriminative objects in an image. The authors also propose an iterative extension of ContraCAM which enables localization of multiple objects. The paper further proposes some object-aware augmentations for reducing the contextual biases in contrastive learning methods. The authors show that the proposed approach achieves good performance improvements on an existing contrastive learning method. I believe that the novelty of the proposed approach and the strong results and detailed ablation studies presented by the authors make this a strong paper.
Edit: After reading the other reviews and the authors responses, I will maintain my rating and recommend this paper for acceptance. |
NIPS | Title
Object-aware Contrastive Learning for Debiased Scene Representation
Abstract
Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations. However, the learned representations are often contextually biased to the spurious scene correlations of different objects or object and background, which may harm their generalization on the downstream tasks. To tackle the issue, we develop a novel object-aware contrastive learning framework that first (a) localizes objects in a self-supervised manner and then (b) debias scene correlations via appropriate data augmentations considering the inferred object locations. For (a), we propose the contrastive class activation map (ContraCAM), which finds the most discriminative regions (e.g., objects) in the image compared to the other images using the contrastively trained models. We further improve the ContraCAM to detect multiple objects and entire shapes via an iterative refinement procedure. For (b), we introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning, respectively. Our experiments demonstrate the effectiveness of our representation learning framework, particularly when trained under multi-object images or evaluated under the background (and distribution) shifted images.1
1 Introduction
Self-supervised learning of visual representations from unlabeled images is a fundamental task of machine learning, which establishes various applications including object recognition [1, 2], reinforcement learning [3, 4], out-of-distribution detection [5, 6], and multimodal learning [7, 8]. Recently, contrastive learning [1, 2, 9–15] has shown remarkable advances along this line. The idea is to learn invariant representations by attracting the different views (e.g., augmentations) of the same instance (i.e., positives) while contrasting different instances (i.e., negatives).2
Despite the success of contrastive learning on various downstream tasks [16], they still suffer from the generalization issue due to the unique features of the training datasets [17–19] or the choice of data augmentations [19–21]. In particular, the co-occurrence of different objects and background in randomly cropped patches (i.e., positives) leads the model to suffer from the scene bias. For example, Figure 1a presents two types of the scene bias: the positive pairs contain different objects (e.g., giraffe and zebra), and the patches contain adjacent object and background (e.g., zebra and safari). Specifically, the co-occurrence of different objects is called contextual bias [22], and that of object and background is called background bias [23]. Attracting the patches in contrastive learning
∗Equal contribution 1Code is available at https://github.com/alinlab/object-aware-contrastive. 2Some recent works (e.g., [14, 15]) attract the positives without contrasting the negatives. While we mainly
focus on contrastive learning with negatives, our method is also applicable to the positive-only methods.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
makes the features of correlated objects and background indistinguishable, which may harm their generalization (Figure 1b) because of being prone to biases (Figure 1c).
Contribution. We develop a novel object-aware contrastive learning framework that mitigates the scene bias and improves the generalization of learned representation. The key to success is the proposed contrastive class activation map (ContraCAM), a simple yet effective self-supervised object localization method by contrasting other images to find the most discriminate regions in the image. We leverage the ContraCAM to create new types of positives and negatives. First, we introduce two data augmentations for constructing the positive sample-pairs of contrastive learning: object-aware random crop and background mixup that reduce contextual and background biases, respectively. Second, by equipping ContraCAM with an iterative refinement procedure, we extend it to detect multiple objects and entire shapes, which allows us to generate masked images as effective negatives.
We demonstrate that the proposed method can improve two representative contrastive (or positiveonly) representation learning schemes, MoCov2 [28] and BYOL [14], by reducing contextual and background biases as well as learning object-centric representation. In particular, we improve:
• The representation learning under multi-object images, evaluated on the COCO [25] dataset, boosting the performance on the downstream tasks, e.g., classification and detection.
• The generalizability of the learned representation on the background shifts, i.e., objects appear in the unusual background (e.g., fish on the ground), evaluated on the Background Challenge [23].
• The generalizability of the learned representation on the distribution shifts, particularly for the shape-biased, e.g., ImageNet-Sketch [29], and corrupted, e.g., ImageNet-C [30] datasets.
Furthermore, ContraCAM shows comparable results with the state-of-the-art unsupervised localization method (and also with the supervised classifier CAM) while being simple.
2 Object-aware Contrastive Learning
We first briefly review contrastive learning in Section 2.1. We then introduce our object localization and debiased contrastive learning methods in Section 2.2 and Section 2.3, respectively.
2.1 Contrastive learning
Contrastive self-supervised learning aims to learn an encoder f(·) that extracts a useful representation from an unlabeled image x by attracting similar sample x+ (i.e., positives) and dispelling dissimilar samples {x−i } (i.e., negatives). In particular, instance discrimination [10] defines the same samples of different data augmentations (e.g., random crop) as the positives and different samples as negatives.
Formally, contrastive learning maximizes the contrastive score:
scon(x;x +, {x−n }) := log
exp(sim(z(x), z̄(x+))/τ) exp(sim(z(x), z̄(x+))/τ) + ∑
x−n exp(sim(z(x), z̄(x−n ))/τ)
, (1)
where z(·) and z̄(·) are the output and target functions wrapping the representation f(x) for use, sim(·, ·) denotes the cosine similarity, and τ is a temperature hyperparameter. The specific form of z(·), z̄(·) depends on the method. For example, MoCov2 [28] sets z(·) = g(f(·)), z̄(·) = gm(fm(·)) where g(·) is a projector network to indirectly match the feature f(x) and fm(·), gm(·) are the momentum version of the encoder and projectors. On the other hand, BYOL [14] sets z(·) = h(g(f(·))), z̄(·) = gm(fm(·)), where h(·) is an additional predictor network to avoid collapse of the features because it only maximizes the similarity score ssim(x;x+) := sim(z(x), z̄(x+)) [14, 15].
Scene bias in contrastive learning. Despite the success of contrastive learning, they often suffer from the scene bias: entangling representations of co-occurring (but different) objects, i.e., contextual bias [22], or adjacent object and background, i.e., background bias [23], by attracting the randomly cropped patches reflecting the correlations (Figure 1a). The scene bias harms the performance (Figure 1b) and generalization of the learned representations on distribution shifts (Figure 1c). To tackle the issue, we propose object-aware data augmentations for debiased contrastive learning (Section 2.3) utilizing the object locations inferred from the contrastively trained models (Section 2.2).
2.2 ContraCAM: Unsupervised object localization via contrastive learning
We aim to find the most discriminative region in an image, such as objects for scene images, compared to the other images. To this end, we extend the (gradient-based) class activation map (CAM) [31, 32], originally used to find the salient regions for the prediction of classifiers. Our proposed method, contrastive class activation map (ContraCAM), has two differences from the classifier CAM. First, we use the contrastive score instead of the softmax probability. Second, we discard the negative signals from the similar objects in the negative batch since they cancel out the positive signals and hinder the localization, which is crucial as shown in Table 1 and Appendix C.1).
Following the classifier CAM, we define the saliency map as the weighted sum of spatial activations (e.g., penultimate feature before pooling), where the weight of each activation is given by the importance, the sum of gradients, of the activation for the score function. Formally, let A := [Akij ] be a spatial activation of an image x where 1 ≤ i ≤ H, 1 ≤ j ≤ W, 1 ≤ k ≤ K denote the index of row, column, and channel, and H,W,K denote the height, width, and channel size of the activation. Given a batch of samples B, we define the score function of the sample x as the contrastive score scon in Eq. (1) using the sample x itself as a positive3 and the remaining samples B \ x as negatives. Then, the weight of the k-th activation αk and the CAM mask CAM := [CAMij ] ∈ [0, 1]H×W are:
CAMij = Normalize
( ReLU (∑ k αkA k ij )) , αk = ReLU 1 HW ∑ i,j ∂scon(x;x,B \ x) ∂Aki,j , (2) where Normalize(x) := x−min xmax x−min x is a normalization function that maps the elements to [0, 1]. We highlight the differences from the classifier CAM with the red color. Note that the ReLU used to compute αk in Eq. (2) discards the negative signals. The negative signal removal trick also slightly improves the classifier CAM [33] but much effective for the ContraCAM.
We further improve the ContraCAM to detect multiple objects and entire shapes with an iterative refinement procedure [34]: cover the salient regions of the image with the (reverse of) current CAM, predict new CAM from the masked image, and aggregate them (see Figure 2). It expands the CAM regions since the new CAM from the masked image detects the unmasked regions. Here, we additionally provide the masked images in the batch (parellely computed) as the negatives: they are better negatives by removing the possibly existing similar objects. Also, we use the original image x as the positive to highlight the undetected objects. Formally, let CAMt be the CAM of iteration t and CAMt := [CAMtij ] = [maxl≤t CAM l ij ] be the aggregated CAM mask. Also, let x t be the image softly masked by the (reverse of) current aggregated mask, i.e., xt := (1− CAMt−1) x for t ≥ 2
3It does not affect the score but is defined for the notation consistency with the iterative extension.
and x1 = x where denotes an element-wise product, and Bt := {xtn} be the batch of the masked images. Then, we define the score function for iteration t as:
stcon(x) := scon(x t;x,∪l≤t(Bl \ xl)). (3)
We substitute the contrastive score scon in Eq. (2) with the stcon in Eq. (3) to compute the CAM of iteration t, and use the final aggregated mask after T iterations. We remark that the CAM results are not sensitive to the number of iterations T if it is large enough; CAM converges to the stationary value since soft masking xt regularizes the CAM not to be keep expanded (see Appendix C.2). We provide the pseudo-code of the entire Iterative ContraCAM procedure in Appendix A.
Note that contrastive learning was known to be ineffective at localizing objects [35] with standard saliency methods (using a classifier on top of the learned representation) since attracting the randomly cropped patches makes the model look at the entire scene. To our best knowledge, we are the first to extend the CAM for the self-supervised setting, relaxing the assumption of class labels. Selvaraju et al. [36] considered CAM for contrastive learning, but their purpose was to regularize CAM to be similar to the ground-truth masks (or predicted by pre-trained models) and used the similarity of the image and the masked image (by ground-truth masks) as the score function of CAM.
2.3 Object-aware augmentations for debiased contrastive learning
We propose two data augmentations for contrastive learning that reduce contextual and background biases, respectively, utilizing the object locations inferred by ContraCAM. Both augmentations are applied to the positive samples before other augmentations; thus, it is applicable for both contrastive learning (e.g., MoCov2 [28]) and positive-only methods (e.g., BYOL [14]).
Reducing contextual bias. We first tackle the contextual bias of contrastive learning, i.e., entangling the features of different objects. To tackle the issue, we propose a data augmentation named objectaware random crop, which restricts the random crop around a single object and avoids the attraction of different objects. To this end, we first extract the (possibly multiple or none) bounding boxes of the image from the binarized mask4 of the ContraCAM. We then crop the image around the box, randomly chosen from the boxes, before applying other augmentations (e.g., random crop). Here, we apply augmentations (to produce positives) to the same cropped box; thus, the patches are restricted in the same box. Technically, it only requires a few line addition of code:
if len(boxes) > 0: # can be empty box = random.choice(boxes) image = image.crop(box) # apply other augmentations (e.g., random crop)
4Threshold the mask or apply a post-processing method, e.g., conditional random field (CRF) [37].
Purushwalkam and Gupta [19] considered a similar approach using ground-truth bounding boxes applied on MoCov2. However, we found that cropping around the ground-truth boxes often harms contrastive learning (see Table 4). This is because some objects (e.g., small ones) in ground-truth boxes are hard to discriminate (as negatives), making contrastive learning hard to optimize. In contrast, the ContraCAM produces more discriminative boxes, often outperforming the ground-truth boxes (see Appendix D.1). Note that the positive-only methods do not suffer from the issue: both groundtruth and ContraCAM boxes work well. On the other hand, Selvaraju et al. [36] used a pre-trained segmentation model to constrain the patches to contain objects. It partly resolves the false positive issue by avoiding the attraction of background-only patches but does not prevent the patches with different objects; in contrast, the object-aware random crop avoids both cases.
Reducing background bias. We then tackle the background bias of contrastive learning, i.e., entangling the features of adjacent object and background. To this end, we propose a data augmentation named background mixup, which substitutes the background of an image with other backgrounds. Intuitively, the positive samples share the objects but have different backgrounds, thus reducing the background bias. Formally, background mixup blends an image x1 and a background-only image x bg 2 (generated from an image x2) using the ContraCAM of image x1 as a weight, i.e.,
xbg-mix1 := CAM(x1) x1 + (1− CAM(x1)) x bg 2 , (4)
where denotes an element-wise product. Here, the background-only image xbg2 is generated by tiling the background patch of the image x2 inferred by the ContraCAM. Precisely, we choose the largest rectangle in the zeros of the binarized CAM mask for the region of the background patch. The overall procedure of the background mixup is illustrated in Figure 3.
Prior works considered the background bias for contrastive learning [35, 38] but used a pre-trained segmentation model and copy-and-pasted the objects to the background-only images using binary masks. We also tested the copy-and-paste version with the binarized CAM, but the soft version in Eq. (4) performed better (see Appendix E.1); one should consider the confidence of the soft masks since they are inaccurate. Furthermore, the background mixup improves the generalization on distribution shifts, e.g., shape-biased [29, 39, 40] and corrupted [30] datasets (see Table 8). Remark that the background mixup often outperforms the Mixup [41] and CutMix [42] applied for contrastive learning [43]. Intuitively, the background mixup can be viewed as a saliency-guided extension [44, 45] of mixup but not mixing the targets (positives), since the mixed patch should be only considered as the positive of the patch sharing foreground, not the one sharing background.
3 Experiments
We first verify the localization performance of ContraCAM in Section 3.1. We then demonstrate the efficacy of our debiased contrastive learning: object-aware random crop improves the training under multi-object images by reducing contextual bias in Section 3.2, and background mixup improves generalization on background and distribution shifts by reducing background bias in Section 3.3.
Common setup. We apply our method on two representative contrastive (or positive-only) learning models: MoCov2 [28] and BYOL [14], under the ResNet-18 and ResNet-50 architectures [27]. We train the models for 800 epochs on COCO [25] and ImageNet-9 [23], and 2,000 epochs on CUB [46] and Flowers [26] datasets with batch size 256. For object localization experiments, we train the vanilla MoCov2 and BYOL on each dataset and compute the CAM masks. For representation learning experiments, we first train the vanilla MoCov2 and BYOL to pre-compute the CAM masks (and corresponding bounding boxes); then, we retrain MoCov2 and BYOL, applying our proposed augmentations using the fixed pre-computed masks (and boxes). Here, we retrain the models from scratch to make the training budgets fair. We also retrained (i.e., third iteration) the model using the CAM masks from our debiased models but did not see the gain (see Appendix D.6). We follow the default hyperparameters of MoCov2 and BYOL, except the smaller minimum random crop scale of 0.08 (instead of the original 0.2) since it performed better, especially for the multi-object images. We run a single trial for contextual bias and three trials for background bias experiments.
We use the penultimate spatial activations to compute the CAM results. At inference, we follow the protocol of [48] that doubly expands the resolution of the activations to detect the smaller objects through decreasing the stride of the convolutional layer in the final residual block. Since it produces the smaller masks, we use more iterations (e.g., 10) for the Iterative ContraCAM. Here, we apply the conditional random field (CRF) using the default hyperparameters from the pydensecrf library [49] to produce segmentation masks and use the opencv [50] library to extract bounding boxes. We use a single iteration of the ContraCAM without the expansion trick for background bias results; it is sufficient for single instance images. Here, we binarize the masks with a threshold of 0.2 to produce background-only images. We provide the further implementation details in Appendix B.
Computation time. The training of the baseline models on the COCO (∼100,000 samples) dataset takes ∼1.5 days on 4 GPUs and ∼3 days on 8 GPUs for ResNet-18 and ResNet-50 architectures, respectively, using a single machine with 8 GeForce RTX 2080 Ti GPUs; proportional to the number of samples and training epochs for other cases. The inference of ContraCAM takes a few minutes for the entire training dataset, and generating the boxes using CRF takes dozens of minutes. Using the pre-computed masks and boxes, our method only slightly increases the training time.
3.1 Unsupervised object localization
We check the performance of our proposed self-supervised object localization method, ContraCAM. Figure 4 shows the examples of the ContraCAM on various image datasets, including CUB, Flowers,
COCO, and ImageNet-9 datasets. ContraCAM even detects multiple objects in the image. We also quantitatively compare ContraCAM with the state-of-the-art unsupervised object localization method, ReDO [47]. Table 1 shows that ContraCAM is comparable with ReDO in terms of the mask mean intersection-over-union (mIoU). One can also see that the negative signal removal, i.e., the ReLU in Eq. (2), is critical to the performance (see Appendix C.1 for visual examples).
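For reference, the mask mIoU reported in Table 1 can be computed as follows; this is a generic sketch, not code from the paper.

import numpy as np

def mask_iou(pred, gt):
    # Intersection-over-union of two binary masks of shape (H, W).
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union

def mean_iou(pairs):
    # pairs: iterable of (predicted_mask, ground_truth_mask) tuples.
    return float(np.mean([mask_iou(p, g) for p, g in pairs]))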
We also compare the localization performance of ContraCAM (using MoCov2) and the classifier CAM (using a supervised model). Table 2 shows the results where all models are trained solely on the target dataset and evaluated on the same dataset. Interestingly, ContraCAM outperforms the classifier CAM on CUB and Flowers. We conjecture this is because CUB and Flowers have few training samples, so the supervised classifier is prone to overfitting. On the other hand, Table 3 shows the results in the transfer setting, i.e., the models are trained on ImageNet [51] using the ResNet-50 architecture. We use the publicly available supervised classifier [52] and MoCov2, and follow the MaxBoxAccV2 evaluation protocol [48]. ContraCAM often outperforms the classifier CAM, especially for unseen images (e.g., CUB). This is because the classifiers project out the features unrelated to the target classes, losing their generalizability on out-of-class samples.
We provide additional analysis and results in Appendix C. Appendix C.2 shows the ablation study on the number of iterations of ContraCAM. One needs a sufficient number of iterations since too few iterations often detect subregions. Since ContraCAM converges to the stationary values for more iterations, we simply choose 10 for all datasets. Appendix C.3 shows the effects of the negative batch of ContraCAM. Since ContraCAM finds the most discriminative regions compared to the negative batch, one needs to choose the negative batch different from the target image. Using a few randomly sampled images is sufficient. Appendix C.4 provides additional comparison of ContraCAM and classifier CAM. Finally, Appendix C.5 provides a comparison with the gradient-based saliency methods [53, 54] using the same contrastive score. CAM gives better localization results.
3.2 Reducing contextual bias: Representation learning from multi-object images
We demonstrate the effectiveness of the object-aware random crop (OA-Crop) for representation learning under multi-object images by reducing contextual bias. To this end, we train MoCov2 and BYOL on the COCO dataset, comparing them with the models that applied the OA-Crop using the ground-truth (GT) bounding boxes or inferred ones from the ContraCAM.
We first compare the linear evaluation [24], i.e., the test accuracy of a linear classifier trained on top of the learned representation, in Table 4. We report results on COCO-Crop (i.e., the objects in the COCO dataset cropped by the GT boxes), CIFAR-10 and CIFAR-100 [55], CUB, Flowers, Food [56], and Pets [57]. OA-Crop significantly improves the linear evaluation of MoCov2 and BYOL in all tested cases. Somewhat interestingly, OA-Crop using the ContraCAM boxes even outperforms the GT boxes for MoCov2 under the ResNet-50 architecture. This is because the GT boxes often contain objects that are hard to discriminate (e.g., small objects), making contrastive learning hard to optimize; in contrast, ContraCAM finds more distinct objects. Note that BYOL does not suffer from this issue and performs well with both types of boxes. See Appendix D.1 for a detailed discussion.
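The linear evaluation protocol trains a linear classifier on frozen features; a minimal PyTorch sketch is given below, with the feature dimension, optimizer settings, and schedule as illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

def linear_evaluation(encoder, loader, num_classes, feat_dim=2048,
                      epochs=100, lr=0.1, device="cuda"):
    # Train a linear probe on top of the frozen encoder (no gradients to the backbone).
    encoder.eval()
    clf = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(clf.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images.to(device))
            loss = loss_fn(clf(feats), labels.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf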
We also compare the detection (and segmentation) performance measured by mean average precision (AP), an area under the precision-recall curve of the bounding boxes (or segmentation masks),
on the COCO detection and segmentation tasks in Table 5. Here, we fine-tune the MoCov2 and BYOL models using the ResNet-50 architecture. Note that OA-Crop using the ContraCAM boxes outperforms the baselines, while OA-Crop with the GT boxes is on par or worse. This is because the GT boxes focus solely on the objects, while ContraCAM also captures salient scene context.
In addition, we present the generalization performance of the learned representations under distribution shifts in Table 6. To this end, we evaluate the models trained on the COCO dataset on various 9-superclass (370-class) subsets of ImageNet, whose details are elaborated in the next section. ImageNet-9 contains natural images like COCO, while the other datasets contain distribution-shifted (e.g., shape-biased or corrupted) images. Note that OA-Crop performs on par with the vanilla MoCov2 and BYOL on the original ImageNet-9 but performs better on the distribution-shifted datasets. This verifies that OA-Crop improves the generalizability of the learned representation.
We provide additional analysis and results in Appendix D. Appendix D.2 provides an analysis showing that OA-Crop indeed reduces contextual bias; specifically, the representation learned with OA-Crop shows better separation between the co-occurring objects giraffe and zebra. Appendix D.3 provides a comparison with the supervised representations learned by Faster R-CNN [58] and Mask R-CNN [59] using ground-truth bounding boxes or segmentation masks; OA-Crop significantly reduces the gap between self-supervised and supervised representations. Appendix D.4 presents the class-wise accuracy on CIFAR-10, showing that OA-Crop consistently improves the accuracy over all classes. Appendix D.5 presents the linear evaluation performance of MoCov2 and BYOL trained on a 10% subset of ImageNet, for readers who wish to compare with ImageNet-trained models.
3.3 Reducing background bias: Generalization on background and distribution shifts
We demonstrate the effectiveness of the background mixup (BG-Mixup) for the generalization of the learned representations on background and distribution shifts by reducing background bias and learning object-centric representation. To this end, we train MoCov2 and BYOL (and BG-Mixup upon them) on the ORIGINAL dataset from the Background Challenge [23], a 9 superclass (370 classes) subset of the ImageNet [51]. We then train a linear classifier on top of the learned representation using the ORIGINAL dataset. Here, we evaluate the classifier on the Background Challenge datasets for the background shift results, and the corresponding 9 superclasses of the ImageNet-Sketch [29], Stylized-ImageNet [39], ImageNet-R [40], and ImageNet-C [30] datasets, denoted by putting ‘-9’ at the suffix of the dataset names, for the distribution shift results (see Appendix B.3 for details).
We additionally compare BG-Mixup with hard background mixing (i.e., copy-and-paste) using ground-truth masks (BG-HardMix (GT)) for the background shift experiments, and with Mixup [41] and CutMix [42] (following the training procedure of [43]) for the distribution shift experiments. We also tested BG-HardMix using the binarized CAM, but it did not work well (see Appendix E.1). On the other hand, BG-Mixup often makes contrastive learning hard to optimize by producing hard positives; thus, we apply BG-Mixup with probability pmix < 1, applied independently to each patch. We tested pmix ∈ {0.2, 0.3, 0.4, 0.5} and chose pmix = 0.4 for MoCov2 and pmix = 0.3 for BYOL. Note that MoCov2 permits a higher pmix, since finding the closest sample in a (finite) batch is easier than clustering infinitely many samples (see Appendix E.2 for details).
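The probabilistic application of BG-Mixup can be sketched as follows; this is a self-contained, illustrative snippet where p_mix corresponds to the pmix hyperparameter above.

import random

def maybe_bg_mix(img, cam, bg, p_mix=0.4):
    # With probability p_mix, blend the background of `img` with `bg` using the
    # soft CAM mask; otherwise return the patch unchanged. Applied independently
    # to each augmented view.
    if random.random() >= p_mix:
        return img
    w = cam[..., None]
    return w * img + (1.0 - w) * bg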
Table 7 presents the results on background shifts: BG-Mixup improves the predictions on the object-focused datasets (e.g., MIXED-RAND) while regularizing those on the background-focused datasets (e.g., ONLY-BG-T). Table 8 presents the results on distribution shifts: BG-Mixup mostly outperforms Mixup and CutMix. We also provide the BG-HardMix (GT) results on distribution shifts in Appendix E.3 and the mixup results on background shifts in Appendix E.4. The superiority of BG-Mixup on both background and distribution shifts shows that its merits come from both object-centric learning via reduced background bias and the saliency-guided input interpolation. In addition, we provide the corruption-wise classification results on ImageNet-9-C in Appendix E.5, and additional distribution shift results on ObjectNet [60] and SI-Score [61] in Appendix E.6.
4 Related work
Contrastive learning. Contrastive learning (and the positive-only methods) [1, 2, 14] is the state-of-the-art approach for visual representation learning, incorporating the prior knowledge of invariance over data augmentations. However, these methods suffer from an inherent problem of matching false positives produced by random crop augmentation. We tackle this scene bias issue and improve the quality of the learned representation. Note that prior work considering the scene bias for contrastive learning [19, 35, 36, 38] assumed ground-truth object annotations or pre-trained segmentation models, undermining the motivation of self-supervised learning to reduce such supervision. In contrast, we propose a fully self-supervised framework of object localization and debiased contrastive learning. Several works [62, 63] consider an object-aware approach for video representation learning, but their motivation is to attract the objects of different temporal views, and they require a pretrained object detector.
Bias in visual representation. Biases (or shortcuts) in neural networks [64] have received significant attention recently, with works pointing out the unintended over-reliance on texture [39], background [23], adversarial features [65], or conspicuous inputs [66]. Numerous works have thus attempted to remove such biases, particularly in an unsupervised manner [29, 67, 68]. Our work also lies along this line: we raise the scene bias issue of self-supervised representation learning and propose an unsupervised debiasing method. Our work would be a step towards an unbiased, robust visual representation.
Unsupervised object localization. Deep-learning-based unsupervised object localization methods can be categorized as follows. (a) Generative approaches [47, 69, 70] train a generative model that disentangles the objects and background by enforcing the object-perturbed image to be considered real. (b) Noisy-ensemble approaches [71–73] train a model using handcrafted predictions as noisy targets; although the training is unsupervised, they initialize the weights with a supervised model. (c) Voynov et al. [74] manually find the 'salient direction' in the noise (latent) space of the ImageNet-trained BigGAN [75]. Besides, scene decomposition (e.g., [76]) aims at a more ambitious goal, fully decomposing the objects and background, but it currently does not scale to complex images. To our best knowledge, the generative approach is the state-of-the-art method for fully unsupervised scenarios. Our proposed ContraCAM could be an alternative in this direction.
Class activation map. Class activation map [31, 32] has been used for the weakly-supervised object localization (WSOL), inferring the pixel- (or object-) level annotations using class labels. Specifically, classifier CAM finds the regions that are most salient for the classifier score. ContraCAM further expands its applicability from weakly-supervised to unsupervised object localization by utilizing the contrastive score instead of the classifier score. We think ContraCAM will raise new interesting research questions, e.g., one could adopt the techniques from CAM to the ContraCAM.
5 Conclusion and Discussion
We proposed the ContraCAM, a simple and effective self-supervised object localization method using the contrastively trained models. We then introduced two data augmentations upon the ContraCAM that reduce scene bias and improve the quality of the learned representations for contrastive learning. We remark that the scene bias is more severe for the uncurated images; our work would be a step towards strong self-supervised learning under real-world scenarios [77, 78].
Limitations. Since the ContraCAM finds the most salient regions, it can differ from the desiderata of the users, e.g., the ContraCAM detects both the birds and branches in the CUB [46] dataset, but one may only want to detect the birds. Also, though the ContraCAM identifies the disjoint objects, it is hard to separate the occluded objects. Incorporating the prior knowledge of the objects and designing a more careful method to disentangle objects would be an interesting future direction.
Potential negative impacts. Our proposed framework enforces the model to focus on the “objects”, or the salient regions, to disentangle the relations of the objects and background. However, ContraCAM may over-rely on the conspicuous objects and the derived data augmentation strategy by ContraCAM could potentially incur imbalanced performance across different objects. We remark that the biases in datasets and models cannot be entirely eliminated without carefully designed guidelines. While we empirically observe our proposed learning strategies mitigate contextual and background biases on certain object types, we still need a closer look at the models, interactively correcting them.
Acknowledgements
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST); No. 2019-0-01396, Development of framework for analyzing, detecting, mitigating of bias in AI model and training data; No.2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence), and partly by the Defense Challengeable Future Technology Program of the Agency for Defense Development, Republic of Korea. We thank Jihoon Tack, Jongjin Park, and Sihyun Yu for their valuable comments. | 1. What is the main contribution of the paper regarding representation learning?
2. What are the strengths of the proposed method, particularly in using object-aware signals?
3. What are the weaknesses of the paper, especially regarding the reliance on a good initialized model?
4. How does the proposed method compare to supervised models, and how would the results change if compared to such models?
5. Could you provide more explanations or clarifications regarding some parts of the paper, such as Figure 1B, Line 51, Line 74, Line 100, and Line 274?
6. Are there any suggestions for future work related to object-aware contrastive learning? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new learning scheme for representation learning. Using class activation maps, augmentations are obtained for contrastive learning that are aware of the objects in the image. Results are reported on multiple classification, segmentation, and detection datasets.
Note on 10 sept 2021: rebuttal has been read and discussed with colleague reviewers and AC
Review
Strengths:
Uses object-aware signals for representation learning.
Figure 3 clearly explains the different steps in the generation of training data.
Tables 1, 2, and 3 show results on multiple datasets.
Table 4 studies a case where crops obtained from the proposed method might even work better than crops based on the ground truth boxes. This conclusion could have a strong impact on future object-aware contrastive learning.
Weaknesses:
How much does the proposed method rely on a well-initialized model? The class activation maps in equation (2) will only point to an object of interest whenever the model achieves “good enough” discriminability. In this light, line 181 confuses me. Do I understand correctly that a model is fully trained (on a labelled dataset) and then provides CAM masks from which a from-scratch model is learned? If so, the results should be compared to a supervised model that uses the same labelled data.
Table 1 & table 3: given that this method requires labels (line 179-180), it would be fair to compare with a supervised model.
Table 4: Comparison with a supervised model would be useful in this table. If I understand correctly from line 178, the ContraCAM maps were obtained from a model that trained 800 epochs on the COCO dataset.
Comments:
Figure 1B: the message of this diagram is unclear. The blue line might as well be overfitting and possible to mitigate via regularisation. Is the suggestion that the proposed method adds additional learning signals and so is less amenable to overfitting?
Line 51: ObjectNet [3] also evaluates this kind of co-occurrence.
Line 74: effect of image background was also studied in [4].
Line 100: how does the iterative procedure influence results? How good are results obtained with just one iteration?
Line 274: Object-aware self supervised learning was also studied in [1] and [2]
[1] Pirk et al. "Online object representations with contrastive learning." arXiv preprint arXiv:1906.04312 (2019).
[2] Romijnders et al. "Representation learning from videos in-the-wild: An object-centric approach." WACV 2021.
[3] Barbu et al. "Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models." (2019).
[4] Yung et al. "SI-Score: An image dataset for fine-grained analysis of robustness to object location, rotation and size." arXiv (2021)
Edits after rebuttal: Authors clarified that models are pre-trained with self-supervised learning. (I erroneously interpreted lines 176-181 as supervised pre-training.) This clarification puts the results in a better light and I will increase review score. I strongly encourage authors to clarify this paragraph in a subsequent version of the paper. |
NIPS | Title
Object-aware Contrastive Learning for Debiased Scene Representation
Abstract
Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations. However, the learned representations are often contextually biased to the spurious scene correlations of different objects or of object and background, which may harm their generalization on the downstream tasks. To tackle the issue, we develop a novel object-aware contrastive learning framework that first (a) localizes objects in a self-supervised manner and then (b) debiases scene correlations via appropriate data augmentations considering the inferred object locations. For (a), we propose the contrastive class activation map (ContraCAM), which finds the most discriminative regions (e.g., objects) in the image compared to the other images using the contrastively trained models. We further improve the ContraCAM to detect multiple objects and entire shapes via an iterative refinement procedure. For (b), we introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning, respectively. Our experiments demonstrate the effectiveness of our representation learning framework, particularly when trained under multi-object images or evaluated under the background (and distribution) shifted images.1
1 Introduction
Self-supervised learning of visual representations from unlabeled images is a fundamental task of machine learning, which establishes various applications including object recognition [1, 2], reinforcement learning [3, 4], out-of-distribution detection [5, 6], and multimodal learning [7, 8]. Recently, contrastive learning [1, 2, 9–15] has shown remarkable advances along this line. The idea is to learn invariant representations by attracting the different views (e.g., augmentations) of the same instance (i.e., positives) while contrasting different instances (i.e., negatives).2
Despite the success of contrastive learning on various downstream tasks [16], they still suffer from the generalization issue due to the unique features of the training datasets [17–19] or the choice of data augmentations [19–21]. In particular, the co-occurrence of different objects and background in randomly cropped patches (i.e., positives) leads the model to suffer from the scene bias. For example, Figure 1a presents two types of the scene bias: the positive pairs contain different objects (e.g., giraffe and zebra), and the patches contain adjacent object and background (e.g., zebra and safari). Specifically, the co-occurrence of different objects is called contextual bias [22], and that of object and background is called background bias [23]. Attracting the patches in contrastive learning
∗Equal contribution 1Code is available at https://github.com/alinlab/object-aware-contrastive. 2Some recent works (e.g., [14, 15]) attract the positives without contrasting the negatives. While we mainly
focus on contrastive learning with negatives, our method is also applicable to the positive-only methods.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
makes the features of correlated objects and background indistinguishable, which may harm their generalization (Figure 1b) because of being prone to biases (Figure 1c).
Contribution. We develop a novel object-aware contrastive learning framework that mitigates the scene bias and improves the generalization of learned representation. The key to success is the proposed contrastive class activation map (ContraCAM), a simple yet effective self-supervised object localization method by contrasting other images to find the most discriminate regions in the image. We leverage the ContraCAM to create new types of positives and negatives. First, we introduce two data augmentations for constructing the positive sample-pairs of contrastive learning: object-aware random crop and background mixup that reduce contextual and background biases, respectively. Second, by equipping ContraCAM with an iterative refinement procedure, we extend it to detect multiple objects and entire shapes, which allows us to generate masked images as effective negatives.
We demonstrate that the proposed method can improve two representative contrastive (or positive-only) representation learning schemes, MoCov2 [28] and BYOL [14], by reducing contextual and background biases as well as learning object-centric representations. In particular, we improve:
• The representation learning under multi-object images, evaluated on the COCO [25] dataset, boosting the performance on the downstream tasks, e.g., classification and detection.
• The generalizability of the learned representation on the background shifts, i.e., objects appear in the unusual background (e.g., fish on the ground), evaluated on the Background Challenge [23].
• The generalizability of the learned representation on the distribution shifts, particularly for the shape-biased, e.g., ImageNet-Sketch [29], and corrupted, e.g., ImageNet-C [30] datasets.
Furthermore, ContraCAM shows comparable results with the state-of-the-art unsupervised localization method (and also with the supervised classifier CAM) while being simple.
2 Object-aware Contrastive Learning
We first briefly review contrastive learning in Section 2.1. We then introduce our object localization and debiased contrastive learning methods in Section 2.2 and Section 2.3, respectively.
2.1 Contrastive learning
Contrastive self-supervised learning aims to learn an encoder f(·) that extracts a useful representation from an unlabeled image x by attracting similar sample x+ (i.e., positives) and dispelling dissimilar samples {x−i } (i.e., negatives). In particular, instance discrimination [10] defines the same samples of different data augmentations (e.g., random crop) as the positives and different samples as negatives.
Formally, contrastive learning maximizes the contrastive score:
scon(x;x +, {x−n }) := log
exp(sim(z(x), z̄(x+))/τ) exp(sim(z(x), z̄(x+))/τ) + ∑
x−n exp(sim(z(x), z̄(x−n ))/τ)
, (1)
where z(·) and z̄(·) are the output and target functions wrapping the representation f(x) for use, sim(·, ·) denotes the cosine similarity, and τ is a temperature hyperparameter. The specific form of z(·), z̄(·) depends on the method. For example, MoCov2 [28] sets z(·) = g(f(·)), z̄(·) = gm(fm(·)) where g(·) is a projector network to indirectly match the feature f(x) and fm(·), gm(·) are the momentum version of the encoder and projectors. On the other hand, BYOL [14] sets z(·) = h(g(f(·))), z̄(·) = gm(fm(·)), where h(·) is an additional predictor network to avoid collapse of the features because it only maximizes the similarity score ssim(x;x+) := sim(z(x), z̄(x+)) [14, 15].
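For concreteness, the contrastive score of Eq. (1) for a single sample can be computed as in the following PyTorch sketch; the temperature value and tensor shapes are illustrative assumptions, not the exact training configuration.

import torch
import torch.nn.functional as F

def contrastive_score(z, z_pos, z_negs, tau=0.2):
    # s_con of Eq. (1): log-softmax of the positive similarity against the negatives.
    # z: (D,) query embedding, z_pos: (D,) positive target, z_negs: (N, D) negative targets.
    z = F.normalize(z, dim=0)
    z_pos = F.normalize(z_pos, dim=0)
    z_negs = F.normalize(z_negs, dim=1)
    pos = torch.dot(z, z_pos) / tau                  # cosine similarity / temperature
    negs = (z_negs @ z) / tau
    return pos - torch.logsumexp(torch.cat([pos.view(1), negs]), dim=0)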
Scene bias in contrastive learning. Despite the success of contrastive learning, they often suffer from the scene bias: entangling representations of co-occurring (but different) objects, i.e., contextual bias [22], or adjacent object and background, i.e., background bias [23], by attracting the randomly cropped patches reflecting the correlations (Figure 1a). The scene bias harms the performance (Figure 1b) and generalization of the learned representations on distribution shifts (Figure 1c). To tackle the issue, we propose object-aware data augmentations for debiased contrastive learning (Section 2.3) utilizing the object locations inferred from the contrastively trained models (Section 2.2).
2.2 ContraCAM: Unsupervised object localization via contrastive learning
We aim to find the most discriminative region in an image, such as objects for scene images, compared to the other images. To this end, we extend the (gradient-based) class activation map (CAM) [31, 32], originally used to find the salient regions for the prediction of classifiers. Our proposed method, contrastive class activation map (ContraCAM), has two differences from the classifier CAM. First, we use the contrastive score instead of the softmax probability. Second, we discard the negative signals from the similar objects in the negative batch since they cancel out the positive signals and hinder the localization, which is crucial as shown in Table 1 and Appendix C.1).
Following the classifier CAM, we define the saliency map as the weighted sum of spatial activations (e.g., penultimate feature before pooling), where the weight of each activation is given by the importance, the sum of gradients, of the activation for the score function. Formally, let A := [Akij ] be a spatial activation of an image x where 1 ≤ i ≤ H, 1 ≤ j ≤ W, 1 ≤ k ≤ K denote the index of row, column, and channel, and H,W,K denote the height, width, and channel size of the activation. Given a batch of samples B, we define the score function of the sample x as the contrastive score scon in Eq. (1) using the sample x itself as a positive3 and the remaining samples B \ x as negatives. Then, the weight of the k-th activation αk and the CAM mask CAM := [CAMij ] ∈ [0, 1]H×W are:
CAMij = Normalize
( ReLU (∑ k αkA k ij )) , αk = ReLU 1 HW ∑ i,j ∂scon(x;x,B \ x) ∂Aki,j , (2) where Normalize(x) := x−min xmax x−min x is a normalization function that maps the elements to [0, 1]. We highlight the differences from the classifier CAM with the red color. Note that the ReLU used to compute αk in Eq. (2) discards the negative signals. The negative signal removal trick also slightly improves the classifier CAM [33] but much effective for the ContraCAM.
We further improve the ContraCAM to detect multiple objects and entire shapes with an iterative refinement procedure [34]: cover the salient regions of the image with the (reverse of) current CAM, predict new CAM from the masked image, and aggregate them (see Figure 2). It expands the CAM regions since the new CAM from the masked image detects the unmasked regions. Here, we additionally provide the masked images in the batch (parellely computed) as the negatives: they are better negatives by removing the possibly existing similar objects. Also, we use the original image x as the positive to highlight the undetected objects. Formally, let CAMt be the CAM of iteration t and CAMt := [CAMtij ] = [maxl≤t CAM l ij ] be the aggregated CAM mask. Also, let x t be the image softly masked by the (reverse of) current aggregated mask, i.e., xt := (1− CAMt−1) x for t ≥ 2
3It does not affect the score but is defined for the notation consistency with the iterative extension.
and x^1 = x, where ⊙ denotes the element-wise product, and let B^t := {x^t_n} be the batch of the masked images. Then, we define the score function for iteration t as:
stcon(x) := scon(x t;x,∪l≤t(Bl \ xl)). (3)
We substitute the contrastive score s_con in Eq. (2) with s^t_con in Eq. (3) to compute the CAM of iteration t, and use the final aggregated mask after T iterations. We remark that the CAM results are not sensitive to the number of iterations T if it is large enough; the CAM converges to a stationary value since the soft masking of x^t keeps the CAM from expanding indefinitely (see Appendix C.2). We provide the pseudo-code of the entire Iterative ContraCAM procedure in Appendix A.
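A minimal single-iteration sketch of the ContraCAM computation of Eq. (2) is shown below, assuming the spatial activations have been retained with gradients enabled and the contrastive score has already been evaluated; the full method iterates this step with masked images as additional negatives (Appendix A). Names are illustrative, not from the official code.

import torch
import torch.nn.functional as F

def contra_cam_step(acts, score):
    # acts:  (K, H, W) spatial activations of the image, with requires_grad=True
    # score: scalar contrastive score s_con computed from the pooled activations
    grads = torch.autograd.grad(score, acts, retain_graph=True)[0]   # (K, H, W)
    alpha = F.relu(grads.mean(dim=(1, 2)))        # channel weights; negative signals removed
    cam = F.relu((alpha[:, None, None] * acts).sum(dim=0))
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)               # normalize to [0, 1]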
Note that contrastive learning was known to be ineffective at localizing objects [35] with standard saliency methods (using a classifier on top of the learned representation) since attracting the randomly cropped patches makes the model look at the entire scene. To our best knowledge, we are the first to extend the CAM for the self-supervised setting, relaxing the assumption of class labels. Selvaraju et al. [36] considered CAM for contrastive learning, but their purpose was to regularize CAM to be similar to the ground-truth masks (or predicted by pre-trained models) and used the similarity of the image and the masked image (by ground-truth masks) as the score function of CAM.
2.3 Object-aware augmentations for debiased contrastive learning
We propose two data augmentations for contrastive learning that reduce contextual and background biases, respectively, utilizing the object locations inferred by ContraCAM. Both augmentations are applied to the positive samples before other augmentations; thus, it is applicable for both contrastive learning (e.g., MoCov2 [28]) and positive-only methods (e.g., BYOL [14]).
Reducing contextual bias. We first tackle the contextual bias of contrastive learning, i.e., entangling the features of different objects. To tackle the issue, we propose a data augmentation named objectaware random crop, which restricts the random crop around a single object and avoids the attraction of different objects. To this end, we first extract the (possibly multiple or none) bounding boxes of the image from the binarized mask4 of the ContraCAM. We then crop the image around the box, randomly chosen from the boxes, before applying other augmentations (e.g., random crop). Here, we apply augmentations (to produce positives) to the same cropped box; thus, the patches are restricted in the same box. Technically, it only requires a few line addition of code:
if len(boxes) > 0:  # can be empty
    box = random.choice(boxes)
    image = image.crop(box)
# apply other augmentations (e.g., random crop)
4Threshold the mask or apply a post-processing method, e.g., conditional random field (CRF) [37].
Purushwalkam and Gupta [19] considered a similar approach using ground-truth bounding boxes applied on MoCov2. However, we found that cropping around the ground-truth boxes often harms contrastive learning (see Table 4). This is because some objects (e.g., small ones) in ground-truth boxes are hard to discriminate (as negatives), making contrastive learning hard to optimize. In contrast, the ContraCAM produces more discriminative boxes, often outperforming the ground-truth boxes (see Appendix D.1). Note that the positive-only methods do not suffer from the issue: both groundtruth and ContraCAM boxes work well. On the other hand, Selvaraju et al. [36] used a pre-trained segmentation model to constrain the patches to contain objects. It partly resolves the false positive issue by avoiding the attraction of background-only patches but does not prevent the patches with different objects; in contrast, the object-aware random crop avoids both cases.
Reducing background bias. We then tackle the background bias of contrastive learning, i.e., entangling the features of adjacent object and background. To this end, we propose a data augmentation named background mixup, which substitutes the background of an image with other backgrounds. Intuitively, the positive samples share the objects but have different backgrounds, thus reducing the background bias. Formally, background mixup blends an image x1 and a background-only image x bg 2 (generated from an image x2) using the ContraCAM of image x1 as a weight, i.e.,
x^bg-mix_1 := CAM(x_1) ⊙ x_1 + (1 − CAM(x_1)) ⊙ x^bg_2, (4)
where ⊙ denotes the element-wise product. Here, the background-only image x^bg_2 is generated by tiling the background patch of the image x_2 inferred by the ContraCAM; precisely, we choose the largest rectangle inside the zeros of the binarized CAM mask as the background patch. The overall procedure of the background mixup is illustrated in Figure 3.
Prior works considered the background bias in contrastive learning [35, 38], but they used a pre-trained segmentation model and copy-and-pasted the objects onto background-only images using binary masks. We also tested the copy-and-paste version with the binarized CAM, but the soft version in Eq. (4) performed better (see Appendix E.1); since the inferred masks are imprecise, their confidence should be taken into account via soft weighting. Furthermore, the background mixup improves generalization under distribution shifts, e.g., on shape-biased [29, 39, 40] and corrupted [30] datasets (see Table 8). Note that the background mixup often outperforms Mixup [41] and CutMix [42] applied to contrastive learning [43]. Intuitively, the background mixup can be viewed as a saliency-guided extension [44, 45] of mixup that does not mix the targets (positives): the mixed patch should be treated as a positive only of the patch sharing its foreground, not of the one sharing its background.
3 Experiments
We first verify the localization performance of ContraCAM in Section 3.1. We then demonstrate the efficacy of our debiased contrastive learning: object-aware random crop improves the training under multi-object images by reducing contextual bias in Section 3.2, and background mixup improves generalization on background and distribution shifts by reducing background bias in Section 3.3.
Common setup. We apply our method on two representative contrastive (or positive-only) learning models: MoCov2 [28] and BYOL [14], under the ResNet-18 and ResNet-50 architectures [27]. We train the models for 800 epochs on COCO [25] and ImageNet-9 [23], and 2,000 epochs on CUB [46] and Flowers [26] datasets with batch size 256. For object localization experiments, we train the vanilla MoCov2 and BYOL on each dataset and compute the CAM masks. For representation learning experiments, we first train the vanilla MoCov2 and BYOL to pre-compute the CAM masks (and corresponding bounding boxes); then, we retrain MoCov2 and BYOL, applying our proposed augmentations using the fixed pre-computed masks (and boxes). Here, we retrain the models from scratch to make the training budgets fair. We also retrained (i.e., third iteration) the model using the CAM masks from our debiased models but did not see the gain (see Appendix D.6). We follow the default hyperparameters of MoCov2 and BYOL, except the smaller minimum random crop scale of 0.08 (instead of the original 0.2) since it performed better, especially for the multi-object images. We run a single trial for contextual bias and three trials for background bias experiments.
We use the penultimate spatial activations to compute the CAM results. At inference, we follow the protocol of [48] that doubly expands the resolution of the activations to detect the smaller objects through decreasing the stride of the convolutional layer in the final residual block. Since it produces the smaller masks, we use more iterations (e.g., 10) for the Iterative ContraCAM. Here, we apply the conditional random field (CRF) using the default hyperparameters from the pydensecrf library [49] to produce segmentation masks and use the opencv [50] library to extract bounding boxes. We use a single iteration of the ContraCAM without the expansion trick for background bias results; it is sufficient for single instance images. Here, we binarize the masks with a threshold of 0.2 to produce background-only images. We provide the further implementation details in Appendix B.
Computation time. The training of the baseline models on the COCO (∼100,000 samples) dataset takes ∼1.5 days on 4 GPUs and ∼3 days on 8 GPUs for ResNet-18 and ResNet-50 architectures, respectively, using a single machine with 8 GeForce RTX 2080 Ti GPUs; proportional to the number of samples and training epochs for other cases. The inference of ContraCAM takes a few minutes for the entire training dataset, and generating the boxes using CRF takes dozens of minutes. Using the pre-computed masks and boxes, our method only slightly increases the training time.
3.1 Unsupervised object localization
We check the performance of our proposed self-supervised object localization method, ContraCAM. Figure 4 shows the examples of the ContraCAM on various image datasets, including CUB, Flowers,
COCO, and ImageNet-9 datasets. ContraCAM even detects multiple objects in the image. We also quantitatively compare ContraCAM with the state-of-the-art unsupervised object localization method, ReDO [47]. Table 1 shows that ContraCAM is comparable with ReDO in terms of the mask mean intersection-over-union (mIoU). One can also see that the negative signal removal, i.e., the ReLU in Eq. (2), is critical to the performance (see Appendix C.1 for visual examples).
We also compare the localization performance of ContraCAM (using MoCov2) and classifier CAM (using a supervised model). Table 2 shows the results where all models are solely trained from the target dataset and evaluated on the same dataset. Interestingly, ContraCAM outperforms the classifier CAM on CUB and Flowers. We conjecture this is because CUB and Flowers have few training samples; the supervised classifier is prone to overfitting. On the other hand, Table 3 shows the results on the transfer setting, i.e., the models are trained on the ImageNet [51] using the ResNet-50 architecture. We use the publicly available supervised classifier [52] and MoCov2, and follow the MaxBoxAccV2 evaluation protocol [48]. The ContraCAM often outperforms the classifier CAM, especially for the unseen images (e.g., CUB). This is because the classifiers project out the features unrelated to the target classes, losing their generalizability on the out-of-class samples.
We provide additional analysis and results in Appendix C. Appendix C.2 shows the ablation study on the number of iterations of ContraCAM. One needs a sufficient number of iterations since too few iterations often detect subregions. Since ContraCAM converges to the stationary values for more iterations, we simply choose 10 for all datasets. Appendix C.3 shows the effects of the negative batch of ContraCAM. Since ContraCAM finds the most discriminative regions compared to the negative batch, one needs to choose the negative batch different from the target image. Using a few randomly sampled images is sufficient. Appendix C.4 provides additional comparison of ContraCAM and classifier CAM. Finally, Appendix C.5 provides a comparison with the gradient-based saliency methods [53, 54] using the same contrastive score. CAM gives better localization results.
3.2 Reducing contextual bias: Representation learning from multi-object images
We demonstrate the effectiveness of the object-aware random crop (OA-Crop) for representation learning under multi-object images by reducing contextual bias. To this end, we train MoCov2 and BYOL on the COCO dataset, comparing them with the models that applied the OA-Crop using the ground-truth (GT) bounding boxes or inferred ones from the ContraCAM.
We first compare the linear evaluation [24], test accuracy of a linear classifier trained on top of the learned representation, in Table 4. We report the results on the COCO-Crop, i.e., the objects in the COCO dataset cropped by the GT boxes, CIFAR-10 and CIFAR-100 [55], CUB, Flowers, Food [56], and Pets [57] datasets. OA-Crop significantly improves the linear evaluation of MoCov2 and BYOL for all tested cases. Somewhat interestingly, OA-Crop using the ContraCAM boxes even outperforms the GT boxes for MoCov2 under the ResNet-50 architecture. This is because the GT boxes often contain objects hard to discriminate (e.g., small objects), making contrastive learning hard to optimize; in contrast, ContraCAM finds more distinct objects. Note that BYOL does not suffer from this issue and performs well with both boxes. See Appendix D.1 for the detailed discussion.
We also compare the detection (and segmentation) performance measured by mean average precision (AP), an area under the precision-recall curve of the bounding boxes (or segmentation masks),
on the COCO detection and segmentation tasks in Table 5. Here, we fine-tune the MoCov2 and BYOL models using the ResNet-50 architecture. Remark that OA-Crop using the ContraCAM boxes outperforms the baselines, while the GT boxes are on par or worse. This is because the GT boxes solely focus on the objects while ContraCAM also catches the salient scene information.
In addition, we present the generalization performance of learned representations under the distribution shifts in Table 6. To this end, we evaluate the models trained on the COCO dataset to various 9 superclass (370 classes) subsets of ImageNet, whose details will be elaborated in the next section. ImageNet-9 contains natural images like COCO, but other datasets contain distribution-shifted (e.g., shape-biased or corrupted) images. Note that OA-Crop performs on par with the vanilla MoCov2 and BYOL on the original ImageNet-9 but performs better on the distribution-shifted dataset. It verifies that the OA-Crop improves the generalizability of the learned representation.
We provide additional analysis and results in Appendix D. Appendix D.2 provides an additional analysis that OA-Crop indeed reduces the contextual bias. Specifically, the representation learned from OA-Crop shows better separation between the co-occurring objects, giraffe and zebra. Appendix D.3 provides the comparison with the supervised representation, learned by Faster R-CNN [58] and Mask R-CNN [59], using ground-truth bounding boxes or segmentation masks. OA-Crop significantly reduces the gap between self-supervised and supervised representation. Appendix D.4 presents the class-wise accuracy on CIFAR10 that OA-Crop consistently improves the accuracy over all classes. Appendix D.5 presents the linear evaluation performance of MoCov2 and BYOL trained on a 10% subset of ImageNet for readers comparing with the results with the ImageNet-trained models.
3.3 Reducing background bias: Generalization on background and distribution shifts
We demonstrate the effectiveness of the background mixup (BG-Mixup) for the generalization of the learned representations on background and distribution shifts by reducing background bias and learning object-centric representation. To this end, we train MoCov2 and BYOL (and BG-Mixup upon them) on the ORIGINAL dataset from the Background Challenge [23], a 9 superclass (370 classes) subset of the ImageNet [51]. We then train a linear classifier on top of the learned representation using the ORIGINAL dataset. Here, we evaluate the classifier on the Background Challenge datasets for the background shift results, and the corresponding 9 superclasses of the ImageNet-Sketch [29], Stylized-ImageNet [39], ImageNet-R [40], and ImageNet-C [30] datasets, denoted by putting ‘-9’ at the suffix of the dataset names, for the distribution shift results (see Appendix B.3 for details).
We additionally compare BG-Mixup with hard background mixing (i.e., copy-and-paste) using ground-truth masks (BG-HardMix (GT)) for the background shift experiments, and with Mixup [41] and CutMix [42] (following the training procedure of [43]) for the distribution shift experiments. We also tested BG-HardMix using the binarized CAM, but it did not work well (see Appendix E.1). On the other hand, BG-Mixup often makes contrastive learning hard to optimize by producing hard positives; thus, we apply BG-Mixup with probability pmix < 1, applied independently to each patch. We tested pmix ∈ {0.2, 0.3, 0.4, 0.5} and chose pmix = 0.4 for MoCov2 and pmix = 0.3 for BYOL. Note that MoCov2 permits a higher pmix, since finding the closest sample in a (finite) batch is easier than clustering infinitely many samples (see Appendix E.2 for details).
Table 7 presents the results on background shifts: BG-Mixup improves the predictions on the object-focused datasets (e.g., MIXED-RAND) while regularizing those on the background-focused datasets (e.g., ONLY-BG-T). Table 8 presents the results on distribution shifts: BG-Mixup mostly outperforms Mixup and CutMix. We also provide the BG-HardMix (GT) results on distribution shifts in Appendix E.3 and the mixup results on background shifts in Appendix E.4. The superiority of BG-Mixup on both background and distribution shifts shows that its merits come from both object-centric learning via reduced background bias and the saliency-guided input interpolation. In addition, we provide the corruption-wise classification results on ImageNet-9-C in Appendix E.5, and additional distribution shift results on ObjectNet [60] and SI-Score [61] in Appendix E.6.
4 Related work
Contrastive learning. Contrastive learning (or positive-only method) [1, 2, 14] is the state-of-the-art method for visual representation learning, which incorporates the prior knowledge of invariance over the data augmentations. However, they suffer from an inherent problem of matching false positives from random crop augmentation. We tackle this scene bias issue and improve the quality of learned representation. Note that prior work considering the scene bias for contrastive learning [19, 35, 36, 38] assumed the ground-truth object annotations or pre-trained segmentation models, undermining the motivation of self-supervised learning to reduce such supervision. In contrast, we propose a fully self-supervised framework of object localization and debiased contrastive learning. Several works [62, 63] consider an object-aware approach for video representation learning, but their motivation was to attract the objects of different temporal views and require a pretrained object detector.
Bias in visual representation. The bias (or shortcut) in neural networks [64] have got significant attention recently, pointing out the unintended over-reliance on texture [39], background [23], adversarial features [65], or conspicuous inputs [66]. Numerous works have thus attempted to remove such biases, particularly in an unsupervised manner [29, 67, 68]. Our work also lies on this line: we evoke the scene bias issue of self-supervised representation learning and propose an unsupervised debiasing method. Our work would be a step towards an unbiased, robust visual representation.
Unsupervised object localization. Deep-learning-based unsupervised object localization methods can be categorized as follows. (a) Generative approaches [47, 69, 70] train a generative model that disentangles the objects and background by enforcing the object-perturbed image to be considered real. (b) Noisy-ensemble approaches [71–73] train a model using handcrafted predictions as noisy targets; although the training is unsupervised, they initialize the weights with a supervised model. (c) Voynov et al. [74] manually find the 'salient direction' in the noise (latent) space of the ImageNet-trained BigGAN [75]. Besides, scene decomposition (e.g., [76]) aims at a more ambitious goal, fully decomposing the objects and background, but it currently does not scale to complex images. To our best knowledge, the generative approach is the state-of-the-art method for fully unsupervised scenarios. Our proposed ContraCAM could be an alternative in this direction.
Class activation map. Class activation map [31, 32] has been used for the weakly-supervised object localization (WSOL), inferring the pixel- (or object-) level annotations using class labels. Specifically, classifier CAM finds the regions that are most salient for the classifier score. ContraCAM further expands its applicability from weakly-supervised to unsupervised object localization by utilizing the contrastive score instead of the classifier score. We think ContraCAM will raise new interesting research questions, e.g., one could adopt the techniques from CAM to the ContraCAM.
5 Conclusion and Discussion
We proposed the ContraCAM, a simple and effective self-supervised object localization method using the contrastively trained models. We then introduced two data augmentations upon the ContraCAM that reduce scene bias and improve the quality of the learned representations for contrastive learning. We remark that the scene bias is more severe for the uncurated images; our work would be a step towards strong self-supervised learning under real-world scenarios [77, 78].
Limitations. Since the ContraCAM finds the most salient regions, it can differ from the desiderata of the users, e.g., the ContraCAM detects both the birds and branches in the CUB [46] dataset, but one may only want to detect the birds. Also, though the ContraCAM identifies the disjoint objects, it is hard to separate the occluded objects. Incorporating the prior knowledge of the objects and designing a more careful method to disentangle objects would be an interesting future direction.
Potential negative impacts. Our proposed framework enforces the model to focus on the “objects”, or the salient regions, to disentangle the relations of the objects and background. However, ContraCAM may over-rely on the conspicuous objects and the derived data augmentation strategy by ContraCAM could potentially incur imbalanced performance across different objects. We remark that the biases in datasets and models cannot be entirely eliminated without carefully designed guidelines. While we empirically observe our proposed learning strategies mitigate contextual and background biases on certain object types, we still need a closer look at the models, interactively correcting them.
Acknowledgements
This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST); No. 2019-0-01396, Development of framework for analyzing, detecting, mitigating of bias in AI model and training data; No.2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence), and partly by the Defense Challengeable Future Technology Program of the Agency for Defense Development, Republic of Korea. We thank Jihoon Tack, Jongjin Park, and Sihyun Yu for their valuable comments. | 1. What is the focus of the paper regarding visual representation learning?
2. What are the claimed issues with traditional methods that the paper addresses?
3. What is the proposed solution, and how does it differ from prior works?
4. How effective are the proposed approach and its advantages, according to the experiments conducted?
5. Are there any limitations or concerns regarding the ContraCAM method? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes object-aware contrastive learning to learn the visual representation from unlabeled images by debiasing the spurious scene correlations. The authors claim there are two types of spurious scene correlations, contextual bias and background bias, which harm the generalization of the model. A novel self-supervised object localization method ContraCAM is proposed to find the unbiased pairs of positives and negatives. Lots of experiments are well done and show the advantages of the proposed ContraCAM.
Review
The main contribution of this paper is the ContraCAM, which is developed upon the CAM algorithm and to find the most discriminative regions. As the name of ContraCAM, the authors introduce contrastive scores into the CAM algorithm to find the salient regions or objects iteratively. The proposed solution is novel and effective. Combining the unsupervised object localization and unsupervised representation learning with the self-supervised manner may be effective in scene representation. |
NIPS | Title
Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning
Abstract
Large natural language models (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks. Here, we show that the knowledge embedded in such models provides a useful inductive bias, not just on traditional NLP tasks, but also in the nontraditional task of training a symbolic reasoning engine. We observe that these engines learn quickly and generalize in a natural way that reflects human intuition. For example, training such a system to model blockstacking might naturally generalize to stacking other types of objects because of structure in the real world that has been partially captured by the language describing it. We study several abstract textual reasoning tasks, such as object manipulation and navigation, and demonstrate multiple types of generalization to novel scenarios and the symbols that comprise them. We also demonstrate the surprising utility of compositional learning, where a learner dedicated to mastering a complicated task gains an advantage by training on relevant simpler tasks instead of jumping straight to the complicated task.
1 Introduction
Natural language processing (NLP) has seen major progress thanks to probabilistic language models (LMs) like GPT-2 [1], BERT [2], and T5 [3]. These are pre-trained in a general, task-agnostic way on large corpora of unstructured text and then fine-tuned on specific tasks. This method has achieved state of the art performance on popular NLP tasks like question-answering, textual entailment, text summarization, neural machine translation, and more [4].
These models are not just impressive for the high scores they achieve on quantitative language tasks; the text they generate often reflects patterns of “real-world” structure, suggesting that embedded in the weights of the LM is implicit knowledge of physics, object persistence, containment, spatial relationships, causal mechanisms, material properties, and other common-sense knowledge central to human reasoning and intuition. If that is so, then they ought to provide a useful inductive bias in learning to perform symbolic reasoning tasks that mirror real-world tasks.
In this paper, we attempt to leverage and characterize this inductive bias by training reasoning engines that demonstrate some of the hallmarks of human reasoning ability: learning (a) rules (b) that generalize well (c) from few examples. Concretely, we fine-tune T5 on a suite of symbolic reasoning tasks and study generalization along multiple different axes beyond the normal test/train split: we examine cardinality generalization, object generalization, part-of-speech generalization, and show (in the spirit of curriculum learning) that LMs can leverage combinations of learned subskills to master complicated composite tasks with both better sample efficiency and higher terminal performance than learning directly on the complicated tasks.
35th Conference on Neural Information Processing Systems (NeurIPS 2021)
We see our contributions as four-fold. First, we demonstrate a high level of performance by connectionist models on tasks resembling symbolic classical AI tasks, demonstrating some symbolic ability of such models in light of recent calls to unify the principles of both symbolism and connectionism. Secondly, we demonstrate the breakdown of reasoning ability by manufacturing our own reasoning datasets that can be tweaked in systematic ways, instead of simply split into training/validation/test sets. This means that we can assess our models’ ability to both interpolate and extrapolate in systematic, symbolic, and grammatical ways, and otherwise flex with changing distributions. Thirdly, we demonstrate the ability of large LMs to reason compositionally, that is, to learn two kinds of reasoning separately and then to combine those different kinds of reasoning on a novel composite task, to which they are both relevant. Lastly, we demonstrate the inductive bias we hypothesize is present in large LMs and can be leveraged to assist in the formation of reasoning engines.
2 Related Work
Transformer-based LMs [5] are the dominant LM architecture, including the original GPT models [1, 6], BERT [2, 3] and the recent 175 billion parameter GPT-3 [4]. They are trained via generic maximum likelihood learning on vast, unstructured text corpora, but often exhibit zero/few-shot learning abilities on a variety of NLP tasks, having grasped key mechanics of natural language by learning to simply predict a missing word; it is this inductive bias we hope to harness.
These models implicitly house a rich world model with concepts and relations. Petroni et al. query a language model (instead of a traditional, symbolic knowledge base) for relational data explicitly expressed in natural language [7]. Bosselut et al. expand this scope, attempting to generate explicit commonsense knowledge graphs using pre-trained LMs [8], and Bouraoui et al. perform relation extraction with BERT, which can be construed as finding and specifying edges [9]. That line of work treats the formation of explicit knowledge graphs using LMs, whereas ours treats the leveraging of the implicit knowledge graphs within LMs.
Our tasks are inspired by classic examples of good old-fashioned AI, including Blocks World [10] and STRIPS planners [11], where a system’s state, composed of symbols such as toy blocks, must be manipulated to achieve a desired end given a rule set. In contrast to these original problems, we learn the symbolic rules governing a similar system from examples. Our work is therefore similar in spirit to program induction, where a connectionist model must learn a structured program from examples; examples include the Neural Turing Machine [12] and the Differentiable Neural Computer [13]. Some work has attempted to teach a language model rules by explicit teaching through specialized datasets [14] or knowledge graphs [15]. We count on these rules being learned implicitly, as a side effect of the general learning objective of LMs. Weber et al. combine neural networks with logic programming to solve multi-hop reasoning tasks in natural language [16]. We don’t insist on expressing explicit representations of symbolic rules learned, instead reasoning about rules learned by examining out-of-distribution performance.
Our tasks are similar to the bAbI dataset [17]. However, we focus on a systematic study of specific dimensions of generalization that is not possible with bAbI. bAbI questions evaluate whether certain skills are possessed; our questions evaluate to what extent these skills are possessed, how they generalize, and when they break down.
Finally, the idea of compositional learning draws inspiration from multi-task learning, which was explored in [18] and more recently in a deep learning approach [19]. Supplementary Training on Intermediate Labeled-data Tasks (STILTs) was introduced in [20], which uses two stages of pretraining: the first stage is unsupervised, as with other pre-trained models, but the second is on a data-rich intermediate supervised task.
3 Data Generation, Evaluation, and Training Protocol
The central questions we ask in this paper revolve around whether large LMs possess some world model or inductive bias that helps them to learn reasoning rules from few examples. These rules should support successful extrapolation not just to novel instances, but to novel tasks. Upon learning to track objects across containers, for example, a language model should be able to generalize to new types of objects and to marginally more complicated scenarios than it has seen before, and should leverage skills already learned to progress more quickly than it would have otherwise.
We show how LMs can generalize in these ways and others by systematically fine-tuning them on classes of scenarios where they can learn rules. Across tasks, our reasoners deduce the final state of an environment given natural language descriptions of an initial state and actions taken on it.
3.1 Task Types
Containers. In the first task, which we call containers, we manipulate objects in various containers and ask the reasoner to track the state of the environment. The initial state of the environment is a random allotment of n_objects objects into n_containers containers. The names of these objects and containers are sampled uniformly and without replacement from lists of candidates. Examples of such candidates are given in the appendix. Once sampled and organized, the objects and containers are converted into a plain English expression describing their organization. Then, there is a random manipulation of this initial state, where an object is randomly taken from a container and placed into another container. The task of the reasoning engine is to describe the final state of the environment.
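To make the generation procedure concrete, the following is a minimal sketch of how a single containers scenario could be produced. The template phrasing and helper names here are illustrative assumptions, not the generation code actually used.

```python
import random

def make_container_scenario(object_names, container_names,
                            n_objects=4, n_containers=2, rng=random):
    # Sample object and container names without replacement.
    objects = rng.sample(object_names, n_objects)
    containers = rng.sample(container_names, n_containers)

    # Initial state: assign each object to a random container.
    state = {c: [] for c in containers}
    for obj in objects:
        state[rng.choice(containers)].append(obj)

    def describe(s):
        parts = []
        for c, objs in s.items():
            if objs:
                parts.append(f"The {c} contains a " + " and a ".join(objs) + ".")
            else:
                parts.append(f"The {c} is empty.")
        return " ".join(parts)

    initial = describe(state)

    # Manipulation: move one random object to a different container.
    src = rng.choice([c for c in containers if state[c]])
    obj = rng.choice(state[src])
    dst = rng.choice([c for c in containers if c != src])
    state[src].remove(obj)
    state[dst].append(obj)
    action = f"The {obj} is moved from the {src} to the {dst}."

    return {"input": f"{initial} {action}", "target": describe(state)}
```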
For training, each scenario uses 2-8 objects and 2-3 containers, sampled uniformly. We construct sentences by sampling without replacement from a uniform distribution of container names and object names and filling a template accordingly. Our training set of object and container names comes from a proprietary linguistic dataset, which has data on commonness, part of speech, etc. of each word. We divide this dataset into nouns and verbs and subtract their intersection from both. We split the unique nouns into a train set (n=36566) and a validation set (n=12189) and a container set (n=9), the former two of which are to be used as object names and the latter of which is to be used for container names during training. For each set (train and val), we find the 2000 most common and join the dataset on concreteness ratings from Brysbaert et al. [21] to take the 2000 most concrete nouns from each set. We also generate a list of random strings by sampling uniformly from single digit integers and lowercase English characters, ranging in length randomly from 5 to 10.
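The noun/verb splits above come from a proprietary dataset and cannot be reproduced here, but the random-string object names are fully specified; a small sketch of that piece, under the stated assumptions (uniform choice over lowercase letters and single digits, lengths 5 to 10), might look like:

```python
import random
import string

def random_object_name(rng=random):
    alphabet = string.ascii_lowercase + string.digits  # lowercase letters and single digits
    length = rng.randint(5, 10)                        # length chosen uniformly from 5 to 10
    return "".join(rng.choice(alphabet) for _ in range(length))

random_names = [random_object_name() for _ in range(2000)]
```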
Navigation. In both versions of navigation, a natural-language description of a map of the environment is provided to the reasoning engine. It is generated by sampling from a list of common locations including “kitchen", “garden", and others. These locations are then composed into a grid where they are in north-south-west-east orientation to each other. In the first task, called navigation route, the reasoner is additionally given a starting point and a destination. The reasoner must provide a valid route from origin to destination. In the second task, called navigation result, we require that the reasoner, given a starting point and a route, determine the location where they would find themselves in the map. For training, the maximum number of locations is 8 and the minimum is 3.
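A rough sketch of the two navigation tasks is below. How rooms are laid out on the grid, and the choice of a shortest route as the "valid route", are assumptions; only the input/output structure mirrors the task description above.

```python
from collections import deque

DIRS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def build_map(room_positions):
    # room_positions: dict mapping room name -> (x, y) grid coordinate.
    pos_to_room = {p: r for r, p in room_positions.items()}
    edges = {}
    for room, (x, y) in room_positions.items():
        edges[room] = {d: pos_to_room[(x + dx, y + dy)]
                       for d, (dx, dy) in DIRS.items()
                       if (x + dx, y + dy) in pos_to_room}
    return edges

def navigation_route(edges, start, goal):
    # Navigation route task: return one valid route (here, a shortest one via BFS).
    queue, seen = deque([(start, [])]), {start}
    while queue:
        room, path = queue.popleft()
        if room == goal:
            return path
        for direction, nxt in edges[room].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [direction]))
    return None

def navigation_result(edges, start, route):
    # Navigation result task: follow a given route and report the final room.
    room = start
    for direction in route:
        room = edges[room][direction]
    return room
```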
Composite task. We are interested in assessing the ability of a reasoning engine to perform a task which combines multiple elemental types of knowledge. Specifically, can a learner master two skills separately, and then leverage knowledge from both to perform the composite task better and more quickly than it would have by learning directly on the composite task? This task, called hard object,
is intended to be a composition of navigation and containers. The reasoner is given a verbal map of the world and an action taken. In this action, an object is taken from its container, carried on a route indicated by successive moves in cardinal directions, and placed. The reasoner must then describe the new state of the containers.
3.2 Generalization Types
Given our overarching interest in large LMs’ ability to yield reasoners that generalize well, we craft several types of experiments to probe the bounds of different kinds of generalization, namely:
Cardinality generalization. This tests a model’s structural understanding of the domain. If a reasoner is trained on scenarios with at most k objects, navigation steps, rooms, etc., can it generalize to more than k objects, navigation steps, rooms, etc.?
Object generalization. This is a semantic test of whether or not the reasoner can leverage prior knowledge of English to generalize to new, never-before-seen objects. For example, if several different container training scenarios are composed of objects from different distributions of words (e.g. 2000 concrete nouns vs. 2000 most common nouns vs. 2000 randomly sampled nouns), which distribution of training scenarios results in the best model generalization to scenarios composed with new, previously unseen nouns?
Part-of-speech generalization. This is another semantic test based on the idea that a reasoner that understands language will perform better on “right” words than on “wrong” words. For example, on the container task, the model should perform well when nouns are objects and poorly when verbs or random strings are. Intuitively, a scenario such as “The bin contains a dethrone and a transpose” makes less sense, and is statistically less likely, than a scenario such as “The bin contains a ball and a snake”; the naturalness of the scenario descriptions should positively correlate with generalization. We hypothesize that natural nouns will work best, since nouns are often sensible things to move from one container to the other, and that arbitrary strings and verbs will both decrease performance. On the other hand, random strings have probably never been encountered by the model, and it might learn to simply copy whatever tokens are in specific places, treating strings as opaque IDs, symbols without meaning. Verbs should actively confuse the model, since a verb’s linguistic role is structurally and semantically different from a noun’s; it is a symbol not without meaning, but with the wrong meaning.
Reasonable phrasing generalization. Finally, what if we replace the templates themselves? Instead of replacing the objects in a scenario, we replace the English scaffolding we insert them into by mapping deterministically from each English word to a gibberish word composed of English morphemes but possessing no meaning (See 4.5). A reasoner for which language is meaningful will have a harder time adjusting to this task, while a reasoner for which language is simply a sequence of meaningless strings will struggle no more with this task than with its original English version.
3.3 Base Model and Training Details
In all experiments, we use T5’s 3-billion-parameter architecture, either fine-tuning a pretrained model or training from scratch. We do so on Nvidia Tesla V100 32GB GPUs with a batch size of 1 and a learning rate of 0.003. We train a separate reasoning engine for each of the three elemental tasks for 1000 total steps. After evaluating them on their respective tasks, we train them on the other base task for 1000 steps and eventually on the composite task for 1000 steps (3000 total steps).
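As a point of reference, a minimal sketch of what this fine-tuning loop could look like with the Hugging Face implementation of T5 is shown below; the authors' actual training code and optimizer settings may differ, and the from-scratch baseline would initialize the same architecture from its configuration rather than loading pretrained weights.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.optimization import Adafactor

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b").cuda()

# Fixed learning rate of 0.003 and batch size 1, as described above.
optimizer = Adafactor(model.parameters(), lr=3e-3,
                      scale_parameter=False, relative_step=False)

def train_step(scenario_text, target_text):
    inputs = tokenizer(scenario_text, return_tensors="pt").to(model.device)
    labels = tokenizer(target_text, return_tensors="pt").input_ids.to(model.device)
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```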
3.4 Metrics
For each experiment, we gauge performance using three metrics. The first is exact equality with the true final state. This is obviously the highest standard, as it mimics perfectly the reasoning we would expect a human to carry out after having learned the rules and format of the dynamical system. The second metric is substring equality. We want to see how many individual statements from the predicted final state are contained in the true final state. Sometimes the reasoning is flawed, but on target (e.g. if the reasoner predicts, after moving a hammer from a box to a bin, that it is in both the box and the bin, it should be given partial credit). The third metric is the standard BLEU score, to see how similar the sentences are at the individual word level.
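A sketch of the three metrics as we interpret them is below; the exact tokenization and how a prediction is split into individual statements are assumptions.

```python
from nltk.translate.bleu_score import sentence_bleu

def exact_match(pred, target):
    return float(pred.strip() == target.strip())

def substring_score(pred, target):
    # Fraction of predicted statements that appear in the true final state.
    statements = [s.strip() for s in pred.split(".") if s.strip()]
    if not statements:
        return 0.0
    return sum(s in target for s in statements) / len(statements)

def bleu(pred, target):
    # Word-level BLEU of the prediction against the single reference.
    return sentence_bleu([target.split()], pred.split())
```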
Interpolation and Extrapolation. Throughout our experiments, we assess different kinds of generalization. The first, which we term interpolation, refers to testing a LM on new instances that are
drawn from the same distribution as the training set. For example, in the container task, we might test on scenarios that use already-seen objects and containers, but arranged in novel scenarios. The second, which we term extrapolation, refers to testing a LM on new instances that are drawn from a different distribution than the training distribution. For example, in the container task, we might test on scenarios that involve new, never-before-seen objects, containers, or cardinalities.
Structural and Semantic Generalization. Finally, we distinguish between structural generalization, where we test things like cardinality generalization, and semantic generalization, where we explore new words, or new types of words.
4 Results
We now systematically test interpolation and extrapolation, assessing both semantic and structural generalization. We begin with several baselines, and then explore increasingly difficult tasks.
4.1 Comparison with Baseline Methods
Since our claims have to do with the inductive bias of large, pre-trained LMs, we first establish a wide baseline gulf between pre-trained models and those trained from scratch (tabula rasa). We train two such versions of T5-3B on the same training set (1000 scenarios with maximums of 8 objects and 3 containers and minimums of 2 objects and 2 containers) and then measure two kinds of both interpolative and extrapolative performance: systematic, meaning the number of objects and containers, and semantic, meaning the words used for objects in reasoning scenarios (See 3.4 and 3.2 for more info). Thus, the two models’ performance is compared on 4 total experiments with 7200 examples each: Interp - same number of containers and objects and same words used for object names (as training set); SemExtrap - same number of containers and objects (as training set), different words used for object names; SysExtrap - minimums of 4 containers and 10 objects and maximums of 5 containers and 19 objects, same words used for object names (as training set); SemSysExtrap - minimums of 4 containers and 10 objects and maximums of 5 containers and 19 objects, different words used for object names (than training set). The average BLEU scores for each model checkpoint (taken every 100 steps during fine-tuning/training) are displayed in Figure 1. These baselines make clear that pre-trained language models that have been fine-tuned wildly outperform language models trained from scratch.
4.2 Containers
We now begin our primary experiments, starting by testing cardinality generalization. We train on scenarios with a maximum of 8 objects and 3 containers, and validate structural generalization on scenarios with up to 19 objects and 5 containers. The results are shown in Figure 2. The model is able to correctly predict the exact string a majority of the time even well outside the training domain. This kind of generalization could only be reached by grasping, to some extent, rules like those used when enumerating long English lists. This is one way in which the inductive bias of LMs aids in the reasoner’s good performance.
We now test object generalization and part-of-speech generalization. Here, we train models with scenarios generated by parameterizing our templates with various sets of nouns, and then testing by parameterizing with a different set of words. The results are shown in Table 2. Each row represents a different training set; columns represent performance on different test sets. We tested four different training sets: "all" represents nouns sampled uniformly from our set of all 36,566 nouns; "2k common" represents the 2000 most common nouns; "2k concrete" represents the 2000 most concrete nouns; "2k random" represents 2000 randomly sampled nouns. There are 3 groups of columns, representing different validation conditions. "Training words" represents a validation condition where the reasoner was tested on new scenarios that used the same set of nouns (e.g., a different mix of
containers and objects, or a different number of them). "Validation words" represents a condition with new scenarios using never-before-seen words of a specific type. "New POS" represents words that are an entirely different part of speech, or random words. The general conclusion is that, regardless of the metric used, it is best to train on the largest non-specific set of nouns possible. Doing so gives good generalization across a wide variety of alternative words, including verbs and random strings.
We now explore the distributions of the training nouns. Many of the training nouns, like "year", are not things one would plausibly pass between containers, and so we conducted an experiment where we trained on the 2000 most concrete nouns, a random subsample of 200 of those nouns, and 20 of the most sensible nouns to be moved between containers, like "marble" or "mouse". As can be seen in Figure 3, the model trained on sensible nouns is most confused by verbs and random strings, relative to its peers. This suggests that the distribution of very concrete nouns is farther from the distribution of verbs than the larger distribution of concrete nouns is.
4.3 Navigation
We now turn our attention to the navigation route and navigation result tasks. In this section, we use exact string accuracy as our metric, as both substring and BLEU scores ignore the (critical) sequential nature of the predictions.
4.3.1 Interpolation and Extrapolation
Our primary results are in Figure 4. For both the navigation route and navigation result tasks, we trained on distributions of maps that contained between 3 and 8 total rooms; this resulted in total path plans that were 1 to 5 steps long.
In the middle, we see results for the navigation result task (where the model must predict the result of a specific sequence of actions). Here, we see several interesting phenomena. Overall, the model does a good job of making accurate predictions, with a 79% marginal accuracy over the training regime. There are two situations where the model performs especially well: the first column represents tasks where there is only a single step in the route; the model shows a strong ability to extrapolate to any number of rooms. The second case is the "diagonal" in the upper-right. This represents situations where the map is a linear chain, with very few junctions in the map. In this case, the model also shows a strong ability to generalize to new situations involving a new number of rooms, and a new number of steps. It is unclear why the model has such trouble with two-step planning problems, although we hypothesize it may have something to do with the training distribution, as discussed later.
In contrast, on the left of Figure 4, we see results for the navigation route task. Here, we see similarly strong performance in the training regime, with an 82% marginal accuracy. However, extrapolating to new situations shows mixed results: the model is easily able to extrapolate to any number of rooms, and like the navigation result task, it performs especially well in the case of single-step planning. However, it seems to be completely unable to generalize to new numbers of steps needed in the planning process.
What accounts for the difference? The biggest difference between the navigation route and navigation result tasks is the format of the output: in the navigation result task, the model always outputs a single word (the room name), regardless of the complexity of the map or the plan. In contrast, on the navigation route task, longer plans require a longer output. The model seems to struggle with having to generate sentences that are longer than any it has seen before. While we have used exact string equality in all of the tests reported here, we note that on longer routes, sometimes the model’s output would be a substring of the correct output, as in the example below, which is correct, but missing the final step:
Target: to the west, then to the west, then to the north, then to the north, then to the north, then to the north
Prediction: to the west, then to the west, then to the north, then to the north, then to the north
4.4 Compositional Reasoning on the Composite Task
Finally, we tested the hard object task, where the reasoner must consider objects, containers, and maps. To explore this, we test four conditions. We train a learner exclusively on the task (HardObj). We also train a learner first on the navigation task for 1000 steps, then on the container task for 1000 steps, then on the hard object task for 1000 steps (Nav-Cont-HardObj). We also reverse the order of navigation and container pre-training (Cont-Nav-HardObj), and we test a random mixture of pretraining, with 2000 steps of a 50/50 mixture of sentences drawn from both tasks (ContNav5050HardObj). For each regime, we trained 10 models and averaged their performance on a test set of 5000 held-out examples.
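For readability, the four regimes can be summarized as (task, fine-tuning steps) curricula, roughly as follows; the step count listed for the HardObj-only condition is an assumption, since the text does not state it explicitly.

```python
CURRICULA = {
    "HardObj":            [("hard_object", 1000)],                      # step count assumed
    "Nav-Cont-HardObj":   [("navigation", 1000), ("containers", 1000), ("hard_object", 1000)],
    "Cont-Nav-HardObj":   [("containers", 1000), ("navigation", 1000), ("hard_object", 1000)],
    "ContNav5050HardObj": [("containers/navigation 50-50 mix", 2000), ("hard_object", 1000)],
}
```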
The results in Figure 5 are surprising: the models that learned first on other kinds of tasks learned the composite task more quickly and achieved better ultimate performance. Like curriculum learning, this seems to indicate subskills can persist and be ported to superskills for improved sample efficiency. Instead of fine-tuning on a single hard task (and thereby gaining proficiency in only that task), a better strategy may be to fine-tune on a wide variety of elemental tasks, and focus more on combining them, thereby potentially solving not just one, but a combinatorially large number of complex tasks.
4.4.1 Importance of Training Set Distribution
The T5 model seems unusually sensitive to the training set distribution. Figure 6 explores this. In our first attempt at creating a training distribution for the navigation tasks, we randomly created a map with a uniformly selected number of rooms, and then randomly selected source and destination rooms. This resulted in the performance shown in the right-hand panels (C and D) of Figure 6. Panel (D) shows most of the training data is concentrated in the upper-left corner, meaning short paths in small maps. As a result (panel (C)), the model was almost completely unable to model length 4 or length 5 plans, because they rarely appeared in the training set.
An alternative is to sample a desired path length uniformly, and then generate a map, source, and destination with that length. This results in the training set distribution shown in panel (B), which concentrates many more examples on longer paths in larger maps, but with fewer examples of two-step plans. The resulting accuracy is shown in Panel (A). Here, we see that accuracy is greatly improved for four- and five-step plans, although two-step accuracy suffers. However, even the
carefully constructed training distribution with uniformly sampled path lengths does not induce generalization to paths longer than 5 steps, as shown in Figure 4. Understanding this phenomenon is an important direction for future research.
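A sketch of the path-length-balanced sampler behind panel (B), assuming simple rejection sampling, is shown below; generate_map() and sample_rooms() are hypothetical helpers standing in for the map-generation utilities, and navigation_route() is the route finder sketched earlier.

```python
import random

def sample_balanced_scenario(min_len=1, max_len=5, rng=random):
    desired = rng.randint(min_len, max_len)               # path length sampled uniformly
    while True:
        edges = generate_map(n_rooms=rng.randint(3, 8))   # hypothetical helper
        start, goal = sample_rooms(edges, k=2)            # hypothetical helper
        route = navigation_route(edges, start, goal)
        if route is not None and len(route) == desired:
            return edges, start, goal, route
```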
4.5 Does all that reading really help?
At the heart of our paper is the idea that natural language provides a useful inductive bias. To explore this directly, we modify our object-container task in the following way: we devise a one-to-one mapping from each word in our sensible English templates to words in an invented gibberish. Since the mapping is one-to-one, the English templates and the gibberish templates are identically structured but with substituted words. For example, the word "The" is mapped to the gibberish word "Xrq", the word "contains" to "sixnqkxb", etc. Noun slots in both templates are still filled with the original English nouns (i.e. not substituted with gibberish). Figure 7 shows the results: it is harder for T5 to master the gibberish domain. If there were no inductive bias of English-trained models aiding our learners on English tasks, then the mastery of these two grammars would grow at the same pace, since they are just as structured as one another and thus presumably as predictable and learnable as one another. This is a bit of evidence in favor of the notion that the inductive bias of large LMs does, in fact, aid in learning quickly and generalizing well in several different senses.
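A toy sketch of the English-to-gibberish substitution is below. Only the "The" -> "Xrq" and "contains" -> "sixnqkxb" entries come from the text; the remaining mapping entries are illustrative.

```python
# Illustrative one-to-one lexicon; only the first two entries are from the paper.
GIBBERISH = {"The": "Xrq", "contains": "sixnqkxb", "a": "fo", "and": "blem"}

def to_gibberish(template_sentence):
    # Template words are substituted one-to-one; noun slots (marked with braces
    # here) are left untouched and later filled with ordinary English nouns.
    return " ".join(GIBBERISH.get(w, w) for w in template_sentence.split())

print(to_gibberish("The {container} contains a {obj1} and a {obj2} ."))
# -> Xrq {container} sixnqkxb fo {obj1} blem fo {obj2} .
```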
5 Ethical and Risk Considerations
Since this paper studies what are essentially toy tasks, we don’t consider the risks for our methods to be particularly high. However, we believe in the eventual application of these models in real-world domains (e.g. robotics and planning), and this work contributes to a much-needed preliminary understanding of these models’ behavior, since they are difficult to control and predict. We urge those applying these models to take into account this general risk, along with work such as ours that helps to characterize it. It is dangerous to use language models in high-risk domains (consider a surgical robot or a planning agent for defusing explosives) without extensive understanding of their interpolative and extrapolative generalization.
6 Conclusions and Future Work
The central goal of this paper has been to investigate if connectionist LMs can learn something akin to symbolic rules that generalize in “natural” ways. We have shown strong performance on a suite of reasoning tasks, including tasks that we consider to be simple (an object manipulation task and a navigation task) and more difficult (a composite task that combines the other two). On the container task, we show near-perfect performance on the training domain and considerably beyond it, with an eventual marked drop-off in performance. On the navigation task, we show differing patterns of generalization on different subtasks, with slightly lower performance than on the container task. We have also demonstrated some of the boundaries of generalization by identifying several different kinds of generalization and measuring them. We do not claim to have explored all the axes of generalization possible in the study of these language models, but rather shown how the generalization ability of
LMs can be probed. But these results are exciting because the models perform so well outside of the training domain that it seems as if some general, symbolic rules have been learned.
Beyond generalizing just to new examples on learned tasks, we have demonstrated that these models are aided in the performance of complex tasks by learning first on elemental tasks. This hints at the prospect of learning difficult tasks more quickly and better by learning more simple tasks first.
We have additionally argued that natural language provides a powerful inductive bias for symbolic tasks by comparing performance on scenarios expressed in different but equivalent grammars, one in natural language and one in an invented language. Intuitively, learning the distribution of language that describes the world helps a learner to understand the distribution of that world itself. Thus, it is plausible that LMs might be effective ways to provide autonomous agents with priors and world models.
Implicitly, we have demonstrated that connectionist architectures are capable of strong performance on some classically symbolic tasks. This is done in light of recent calls to incorporate guiding principles of symbolic AI into connectionist models; it is exciting that language models seem to possess at least some of the rule-learning and generalization that humans possess, as opposed to the mere ability to recognize patterns and interpolate over a well-explored training domain.

1. What is the main contribution of the paper regarding language models and reasoning tasks?
2. How does the paper differentiate itself from prior works such as RuleTaker and Banerjee et al.?
3. What are the strengths and weaknesses of the experiments conducted in the paper?
4. Do you have any concerns or suggestions regarding the presentation of results, such as the use of bar plots and bolding the best numbers?
5. Are there any potential confounding factors that need to be considered when interpreting the results, such as superficial features learned from container and navigation tasks?
Summary Of The Paper
This paper takes a closer look at the ability of language models to perform reasoning-based tasks such as tracking states of entities and answering navigation-based questions. The main question is: can these models generalize outside of the training distribution, thus exhibiting the ability to learn underlying rules instead of learning superficial correlations that only work on in-distribution data? This question is studied through several different generalization splits. From the results, we see some evidence that pre-trained models are able to generalize outside of the training distribution, though in some cases it’s inconclusive (I elaborate on why below).
Review
Originality: While I think that the exact settings in this work are somewhat original, Clark et al. 2020 (RuleTaker) also consider reasoning abilities of transformers on synthetic tasks and look at various generalization abilities. Similarly, Banerjee et al. 2020 also look at how well transformers can be trained to do various reasoning tasks in blocks worlds. In the authors’ opinion, what are some major differences between these works and this paper?
Experiments: While I really like this direction, I think some of the experiments are inconclusive in determining whether the model has truly learnt the underlying rule.
For example, in the containers experiment the model just needs to track the state of 2 containers (which can be easily detected based on the language context) to correctly answer. So, even if the model can generalize to more objects and containers, it is not really impressive since the model can learn to ignore all other containers except the two containers whose states were altered. Indeed, if we look at the navigation results, we see poorer extrapolation since the task is harder.
In Section 4.4, we cannot immediately conclude that the model is leveraging previous knowledge. The model could just as well be learning superficial features from the container and navigation tasks that help on the hard object tasks. One way to check for this confounder would be to just do some steps of pre-training on the Nav and Cont tasks and see if that by itself explains the fast learning on HardObj. If not, we can then conclude that perhaps the model is leveraging previously acquired knowledge to do well on a composition.
I really liked the experiment in Section 4.5. Although, I wonder if the decrease in performance has a simpler explanation: gibberish words like "sixnqkxb" contain more sub-words and are hence harder to reason about / track.
Quality: I think the paper is decently written. Some style suggestions:
Table-2 could be converted into a barplot (moving numbers into appendix) since it’s really hard to parse so many numbers. And it’s generally good to bold the best numbers for a quick read.
In Figure-7, it is unclear what the x axis refers to. |
NIPS | Title
Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning
Abstract
Large natural language models (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks. Here, we show that the knowledge embedded in such models provides a useful inductive bias, not just on traditional NLP tasks, but also in the nontraditional task of training a symbolic reasoning engine. We observe that these engines learn quickly and generalize in a natural way that reflects human intuition. For example, training such a system to model blockstacking might naturally generalize to stacking other types of objects because of structure in the real world that has been partially captured by the language describing it. We study several abstract textual reasoning tasks, such as object manipulation and navigation, and demonstrate multiple types of generalization to novel scenarios and the symbols that comprise them. We also demonstrate the surprising utility of compositional learning, where a learner dedicated to mastering a complicated task gains an advantage by training on relevant simpler tasks instead of jumping straight to the complicated task.
1 Introduction
Natural language processing (NLP) has seen major progress thanks to probabilistic language models (LMs) like GPT-2 [1], BERT [2], and T5 [3]. These are pre-trained in a general, task-agnostic way on large corpora of unstructured text and then fine-tuned on specific tasks. This method has achieved state of the art performance on popular NLP tasks like question-answering, textual entailment, text summarization, neural machine translation, and more [4].
These models are not just impressive for the high scores they achieve on quantitative language tasks; the text they generate often reflects patterns of “real-world” structure, suggesting that embedded in the weights of the LM is implicit knowledge of physics, object persistence, containment, spatial relationships, causal mechanisms, material properties, and other common-sense knowledge central to human reasoning and intuition. If that is so, then they ought to provide a useful inductive bias in learning to perform symbolic reasoning tasks that mirror real-world tasks.
In this paper, we attempt to leverage and characterize this inductive bias by training reasoning engines that demonstrate some of the hallmarks of human reasoning ability: learning (a) rules (b) that generalize well (c) from few examples. Concretely, we fine-tune T5 on a suite of symbolic reasoning tasks and study generalization along multiple different axes beyond the normal test/train split: we examine cardinality generalization, object generalization, part-of-speech generalization, and show (in the spirit of curriculum learning) that LMs can leverage combinations of learned subskills to master complicated composite tasks with both better sample efficiency and higher terminal performance than learning directly on the complicated tasks.
35th Conference on Neural Information Processing Systems (NeurIPS 2021)
We see our contributions as four-fold. First, we demonstrate a high level of performance by connectionist models on tasks resembling symbolic classical AI tasks, demonstrating some symbolic ability of such models in light of recent calls to unify the principles of both symbolism and connectionism. Secondly, we demonstrate the breakdown of reasoning ability by manufacturing our own reasoning datasets that can be tweaked in systematic ways, instead of simply split into training/validation/test sets. This means that we can assess our models’ ability to both interpolate and extrapolate in systematic, symbolic, and grammatical ways, and otherwise flex with changing distributions. Thirdly, we demonstrate the ability of large LMs to reason compositionally, that is, to learn two kinds of reasoning separately and then to combine those different kinds of reasoning on a novel composite task, to which they are both relevant. Lastly, we demonstrate the inductive bias we hypothesize is present in large LMs and can be leveraged to assist in the formation of reasoning engines.
2 Related Work
Transformer-based LMs [5] are the dominant LM architecture, including the original GPT models [1, 6], BERT [2, 3] and the recent 175 billion parameter GPT-3 [4]. They are trained via generic maximum likelihood learning on vast, unstructured text corpora, but often exhibit zero/few-shot learning abilities on a variety of NLP tasks, having grasped key mechanics of natural language by learning to simply predict a missing word; it is this inductive bias we hope to harness.
These models implicitly house a rich world model with concepts and relations. Petroni et al. query a language model (instead of a traditional, symbolic knowledge base) for relational data explicitly expressed in natural langauge [7]. Bosselut et al. expand this scope, attempting to generate explicit commonsense knowledge graphs using pre-trained LMs [8], and Bouraoui et al. perform relation extraction with BERT, which can be construed as finding and specifying edges [9]. This work treats the formation of explicit knowledge graphs using LMs, whereas ours treats the leveraging of implicit knowledge graphs using LMs.
Our tasks are inspired by classic examples of good old-fashioned AI, including Blocks World [10] and STRIPS planners [11], where a system’s state, composed of symbols such as toy blocks, must be manipulated to achieve a desired end given a rule set. In contrast to these original problems, we learn the symbolic rules governing a similar system from examples. Our work is therefore similar in spirit to program induction, where a connectionist model must learn a structured program from examples; examples include the Neural Turing Machine [12] and the Differentiable Neural Computer [13]. Some work has attempted to teach a language model rules by explicit teaching through specialized datasets [14] or knowledge graphs [15]. We count on these rules being learned implicitly, as a side effect of the general learning objective of LMs. Weber et al. combine neural networks with logic programming to solve multi-hop reasoning tasks in natural language [16]. We don’t insist on expressing explicit representations of symbolic rules learned, instead reasoning about rules learned by examining out-of-distribution performance.
Our tasks are similar to the bAbI dataset [17]. However, we focus on systematic study of specific dimensions of generalization that is impossible with bAbI. bAbI questions evaluate whether certain skills are possessed; our questions evaluate to what extent these skills are possessed, how they generalize, and when they break down.
Finally, the idea of compositional learning draws inspiration from multi-task learning, which was explored in [18] and more recently in a deep learning approach [19]. Supplementary Training on Intermediate Labeled-data Tasks (STILTs) was introduced in [20], which uses two stages of pretraining, where the first stage is unsupervised like with other pre-trained models, but where the second is on some data-rich intermediate supervised task.
3 Data Generation, Evaluation, and Training Protocol
The central questions we ask in this paper revolve around whether large LMs possess some world model or inductive bias that helps them to learn reasoning rules from few examples. These rules should be used to successfully extrapolate not just to novel instances, but to novel tasks. Upon learning to track objects across containers, for example, a language model should be able to generalize to new types of objects, marginally more complicated scenarios than it has seen before, and leverage skills already learned to progress more quickly than if it hadn’t.
We show how LMs can generalize in these ways and others by systematically fine-tuning them on classes of scenarios where they can learn rules. Across tasks, our reasoners deduce the final state of an environment given natural language descriptions of an initial state and actions taken on it.
3.1 Task Types
Containers. In the first task, which we call containers, we manipulate objects in various containers and ask the reasoner to track the state of the environment. The initial state of the environment is a random allotment of n_objects objects into n_containers containers. The names of these objects and containers are sampled uniformly and without replacement from lists of candidates. Examples of such candidates are given in the appendix. Once sampled and organized, the objects and containers are converted into a plain English expression describing their organization. Then, there is a random manipulation of this initial state, where an object is randomly taken from a container and placed into another container. The task of the reasoning engine is to describe the final state of the environment.
For training, each scenario uses 2-8 objects and 2-3 containers, sampled uniformly. We construct sentences by sampling without replacement from a uniform distribution of container names and object names and filling a template accordingly. Our training set of object and container names comes from a proprietary linguistic dataset, which has data on commonness, part of speech, etc. of each word. We divide this dataset into nouns and verbs and subtract their intersection from both. We split the unique nouns into a train set (n=36566) and a validation set (n=12189) and a container set (n=9), the former two of which are to be used as object names and the latter of which is to be used for container names during training. For each set (train and val), we find the 2000 most common and join the dataset on concreteness ratings from Brysbaert et al. [21] to take the 2000 most concrete nouns from each set. We also generate a list of random strings by sampling uniformly from single digit integers and lowercase English characters, ranging in length randomly from 5 to 10.
Navigation. In both versions of navigation, a natural-language description of a map of the environment is provided to the reasoning engine. It is generated by sampling from a list of common locations including “kitchen", “garden", and others. These locations are then composed into a grid where they are in north-south-west-east orientation to each other. In the first task, called navigation route, the reasoner is additionally given a starting point and a destination. The reasoner must provide a valid route from origin to destination. In the second task, called navigation result, we require that the reasoner, given a starting point and a route, determine the location where they would find themselves in the map. For training, the maximum number of locations is 8 and the minimum is 3.
Composite task. We are interested in assessing the ability of a reasoning engine to perform a task which combines multiple elemental types of knowledge. Specifically, can a learner master two skills separately, and then leverage knowledge from both to perform the composite task better and more quickly than it would have by learning directly on the composite task? This task, called hard object,
is intended to be a composition of navigation and containers. The reasoner is given a verbal map of the world and an action taken. In this action, an object is taken from its container, carried on a route indicated by successive moves in cardinal directions, and placed. The reasoner must then describe the new state of the containers.
3.2 Generalization Types
Given our overarching interest in large LMs’ ability to yield reasoners that generalize well, we craft several types of experiments to probe the bounds of different kinds of generalization, namely:
Cardinality generalization. This tests a model’s structural understanding of the domain. If a reasoner is trained on scenarios with k objects, steps of navigations, rooms, etc. can it generalize to more than k objects, steps of navigations, rooms, etc.?
Object generalization. This is a semantic test of whether or not the reasoner can leverage prior knowledge of English to generalize to new, never-before-seen objects. For example, if several different container training scenarios are composed of objects from different distributions of words (e.g. 2000 concrete nouns vs. 2000 most common nouns vs. 2000 randomly sampled nouns), which distribution of training scenarios results in the best model generalization to scenarios composed with new, previously unseen nouns?
Part-of-speech generalization. This is another semantic test based on the idea that a reasoner that understands language will perform better on “right" words than on “wrong" words. For example, on the container task, the model should perform well when nouns are objects and poorly when verbs or random strings are. Intuitively, a scenario such as “The bin contains a dethrone and a transpose" makes less sense, and is statistically less likely, than a scenario such as “The bin contains a ball and a snake"; the naturalness of the scenario descriptions should positively correlate with generalization. We hypothesize that natural nouns will work best, since nouns are often sensible things to move from one container to the other, and that arbitrary strings and verbs will both decrease performance.On the other hand, random strings have probably never been encountered by the model, and it might learn to simply copy whatever tokens are in specific places, treating strings as opaque IDs, symbols without meaning. Verbs should actively confuse the model, since a verb’s linguistic role is structurally and semantically different than a noun’s; it is a symbol not without meaning, but with the wrong meaning.
Reasonable phrasing generalization. Finally, what if we replace the templates themselves? Instead of replacing the objects in a scenario, we replace the English scaffolding we insert them into by mapping deterministically from each English word to a gibberish word composed of English morphemes but possessing no meaning (See 4.5). A reasoner for which language is meaningful will have a harder time adjusting to this task, while a reasoner for which language is simply a sequence of meaningless strings will struggle no more with this task than with its original English version.
3.3 Base Model and Training Details
On all experiments, we use T5’s 3 billion parameter architecture, either fine-tuning a pretrained model or training from scratch. We do so on Nvidia Tesla V100 32GB GPUs with a batch size of 1 and a learning rate of 0.003. We train a different reasoning engine for all three elemental tasks for 1000 total steps. After evaluating them on their respective tasks, we train them on the other base task for 1000 steps and eventually on the composite task for 1000 steps (3000 total steps).
3.4 Metrics
For each experiment, we gauge performance using three metrics. The first is exact equality of true final state. This is obviously the highest standard, as it mimics perfectly the reasoning we would expect a human to carry out after having learned the rules and format of the dynamical system. The second metric is substring equality. We want to see how many individual statements from the predicted final state are contained in the true final state. Sometimes the reasoning is flawed, but on target (e.g. if the reasoner predicts, after moving a hammer from a box to a bin, that it is in both the box and the bin, it should be given partial credit). The third metric is the standard BLEU score, to see how similar the sentences are at the individual word level.
Interpolation and Extrapolation. Throughout our experiments, we assess different kinds of generalization. The first, which we term interpolation, refers to testing a LM on new instances that are
drawn from the same distribution as the training set. For example, in the container task, we might test on scenarios that use already-seen objects and containers, but arranged in novel scenarios. The second, which we term extrapolation, refers to testing a LM on new instances that are drawn from a different distribution than the training distribution. For example, in the container task, we might test on scenarios that involve new, never-before-seen objects, containers, or cardinalities.
Structural and Semantic Generalization. Finally, we distinguish between structural generalization, where we test things like cardinality generalization, and semantic generalization, where we explore new words, or new types of words.
4 Results
We now systematically test interpolation and extrapolation, assessing both semantic and structural generalization. We begin with several baselines, and then explore increasingly difficult tasks.
4.1 Comparison with Baseline Methods
Since our claims have to do with the inductive bias of large, pre-trained LMs, we first establish a wide baseline gulf between pre-trained models and those trained from scratch (tabula rasa). We train two such versions of T5-3B on the same training set (1000 scenarios with maximums of 8 objects and 3 containers and minimums of 2 objects and 2 containers) and then measure two kinds of both interpolative and extrapolative performance: systematic, meaning the number of objects and containers, and seman-
tic, meaning the words used for objects in reasoning scenarios (See 3.4 and 3.2 for more info). Thus, the two models’ performance is compared on 4 total experiments with 7200 examples each: Interp - same number of containers and objects and same words used for object names (as training set); SemExtrap - same number of containers and objects (as training set), different words used for object names; SysExtrap - minimums of 4 containers and 10 objects and maximums of 5 containers and 19 objects, same words used for object names (as training set); SemSysExtrap - minimums of 4 containers and 10 objects and maximums of 5 containers and 19 objects, different words used for object names (than training set); The average BLEU scores for each model checkpoint (taken every 100 steps during fine-tuning/training) are displayed in Figure 1. These baselines make clear that pre-trained language models that have been fine-tuned wildly outperform language models trained from scratch.
4.2 Containers
We now begin our primary experiments, starting by testing cardinality generalization. We train on scenarios with a maximum of 8 objects and 3 containers, and validate structural generalization on scenarios with up to 19 objects and 5 containers. The results are shown in Figure 2. The model is able to correctly predict the exact string a majority of the time even well outside the training domain. This kind of generalization could only be reached by grasping, to some extent, rules like those used when enumerating long English lists. This is one way in which the inductive bias of LMs aids in the reasoner’s good performance.
We now test object generalization and part-of-speech generalization. Here, we train models with scenarios generated by parameterizing our templates with various sets of nouns, and then testing by parameterizing with a different set of words. The results are shown in Table 2. Each row represents a different training set; columns represent performance on different test sets. We tested four different training sets: "all" represents nouns sampled uniformly from our set of all 36,566 nouns. "2k common" represents the 2000 most common nouns, "2k concrete" represents the 2000 most concrete nouns; "2k random" represents 2000 randomly sampled nouns. There are 3 groups of columns, representing different validation conditions. "Training words" represents a validation condition where the reasoner was tested on new scenarios that used the same set of nouns (eg, a different mix of
containers and objects, or a different number of them). "Validation words" represents a condition with new scenarios using never-before-seen words of a specific type. "New POS" represents words that are an entirely different part of speech, or random words. The general conclusion is that, regardless of the metric used, it is best to train on the largest non-specific set of nouns possible. Doing so gives good generalization across a wide variety of alternative words, including verbs and random strings.
We now explore the distributions of the training nouns. Many of the words passed between containers were nouns that would not be passed in between containers, like "year", and so we conducted an experiment where we trained on the 2000 most concrete nouns, a random subsample of 200 of those nouns, and 20 of the most sensible nouns to be moved between containers, like "marble" or "mouse". As can be seen in Figure 3, the model trained on sensible nouns gets most confused at verbs and random strings, relative to its peers. This suggests that the distribution of very concrete nouns is farther from the distribution of verbs than the larger distribution of concrete nouns.
4.3 Navigation
We now turn our attention to the navigation and navigation route tasks. In this section, we use exact string accuracy as our metric, as both substring and BLEU scores ignore the (critical) sequential nature of the predictions.
4.3.1 Interpolation and Extrapolation
Our primary results are in Figure 4. For both the navigation route and navigation result tasks, we trained on distributions of maps that contained between 3 to 8 total rooms; this resulted in total path plans that were 1 to 5 steps long.
In the middle, we see results for the navigation result task (where the model must predict the result of a specific sequence of actions). Here, we see several interesting phenomena. Overall, the model does a good job of making accurate predictions, with a 79% marginal accuracy over the training regime. There are two situations where the model performs especially well: the first column represents tasks where there is only a single step in the route; the model shows a strong ability to extrapolate to any number of rooms. The second case is the "diagonal" in the upper-right. This represents situations where the map is a linear chain, with very few junctions in the map. In this case, the model also shows a strong ability to generalize to new situations involving a new number of rooms, and a new number of steps. It is unclear why the model has such trouble with two-step planning problems, although we hypothesize it may have something to do with the training distribution, as discussed later.
In contrast, on the left of Figure 4, we see results for the navigation route task. Here, we see similarly strong performance in the training regime, with a 82% marginal accuracy. However, extrapolating to new situations shows mixed results: the model is easily able to extrapolate to any number of rooms, and like the navigation result task, it performs especially well in the case of single-step planning. However, it seems to be completely unable to generalize to new numbers of steps needed in the planning process.
What accounts for the difference? The biggest difference between the navigation route and navigation result tasks is the format of the output: in the navigation result task, the model always outputs a single word (the room name), regardless of the complexity of the map or the plan. In contrast, on the navigation route task, longer plans require a longer output. The model seems to struggle with having to generate sentences that are longer than any it has seen before. While we have used exact string equality in all of the tests reported here, we note that on longer routes, sometimes the model’s output would be a substring of the correct output, as in the example below, which is correct, but missing the final step:
Target: to the west, then to the west, then to the north, then to the north, then to the north, then to the north
Prediction: to the west, then to the west, then to the north, then to the north, then to the north
4.4 Compositional Reasoning on the Composite Task
Finally, we tested the hard object task, where the reasoner must consider objects, containers, and maps. To explore this, we test four conditions. We train a learner exclusively on the task (HardObj). We also train a learner first on the navigation task for 1000 steps, then on the container task for 1000 steps, then on the hard object task for 1000 steps (Nav-Cont-HardObj). We also reverse the order of navigation and container pre-training (Cont-Nav-HardObj), and we test a random mixture of pretraining, with 2000 steps of a 50/50 mixture of sentences drawn from both tasks (ContNav5050HardObj). For each regime, we trained 10 models and averaged their performance on a test set of 5000 held-out examples.
The results in Figure 5 are surprising: the models that learned first on other kinds of tasks learned the composite task more quickly and achieved better ultimate performance. Like curriculum learning, this seems to indicate subskills can persist and be ported to superskills for improved sample efficiency. Instead of fine-tuning on a single hard task (and thereby gaining proficiency in only that task), a better strategy may be to fine-tune on a wide variety of elemental tasks, and focus more on combining them, thereby potentially solving not just one, but a combinatorially large number of complex tasks.
4.4.1 Importance of Training Set Distribution
The T5 model seems unusually sensitive to the training set distribution. Figure 6 explores this. In our first attempt at creating a training distribution for the navigation tasks, we randomly created a map with a uniformly selected number of rooms, and then randomly selected source and destination rooms. This resulted in the performance shown in the right-hand panels (C and D) of Figure 6. Panel (D) shows most of the training data is concentrated in the upper-left corner, meaning short paths in small maps. As a result (panel (C)), the model was almost completely unable to model length 4 or length 5 plans, because they rarely appeared in the training set.
An alternative is to sample a desired path length uniformly, and then generate a map, source, and destination with that length. This results in the training set distribution shown in panel (B), which concentrates many more examples on longer paths in larger maps, but with fewer examples of two-step plans. The resulting accuracy is shown in Panel (A). Here, we see that accuracy is greatly improved for four- and five-step plans, although two-step accuracy suffers. However, even the
carefully constructed training distribution with uniformly sampled path lengths does not induce generalization to paths longer than 5 steps, as shown in Figure 4. Understanding this phenomenon is an important direction for future research.
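One simple way to implement the length-balanced training distribution described above is rejection sampling: draw the desired path length first, then keep regenerating scenarios until one of that length appears. The sketch below is illustrative only; the toy generator stands in for the real map/source/destination generator and is not the authors' code.

```python
# Illustrative sketch of length-balanced sampling via rejection: draw the target path
# length uniformly, then keep generating scenarios until one with that length appears.
import random

def sample_with_uniform_path_length(generate, plan_length, min_len=1, max_len=5,
                                    max_tries=10_000):
    target = random.randint(min_len, max_len)
    for _ in range(max_tries):
        scenario = generate()
        if plan_length(scenario) == target:
            return scenario
    raise RuntimeError(f"no scenario of path length {target} found")

# Toy stand-in: scenarios are just integers equal to their path length, drawn with the
# short-path-heavy bias produced by the "uniform number of rooms" scheme.
skewed = lambda: random.choices(range(1, 6), weights=[40, 30, 15, 10, 5])[0]
balanced_sample = [sample_with_uniform_path_length(skewed, lambda s: s)
                   for _ in range(1000)]     # roughly uniform over lengths 1..5
```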
4.5 Does all that reading really help?
At the heart of our paper is the idea that natural language provides a useful inductive bias. To explore this directly, we modify our object-container task in the following way: We devise a one-to-one mapping from each word in our sensible English templates to words in an invented gibberish. Since the mapping is one-to-one, the English templates and the gibberish templates are identically structured but with substituted words. For example, the word "The" is mapped to the gibberish word "Xrq", the word "contains" to "sixnqkxb", etc. Noun slots in both templates are still filled with the original English nouns (i.e. not substituted with gibberish). Figure 7 shows the results: it is harder for T5 to master the gibberish domain. If there were no inductive bias of English-trained models aiding our learners on English tasks, then the mastery of these two grammars would grow at the same pace, since they are just as structured as one another and thus presumably as predictable and learnable as one another. This is a bit of evidence in favor of the notion that the inductive bias of large LMs does, in fact, aid in learning quickly and generalizing well in several different senses.
5 Ethical and Risk Considerations
Since this paper studies what are essentially toy tasks, we don’t consider the risks for our methods to be particularly high. However, we believe in the eventual application of these models in real-world domains (e.g. robotics and planning), and this work contributes to a needed preliminary understanding of these models’ behavior, since they are difficult to control and predict. We urge applications of these models to take into account this general risk along with work such as ours which helps to characterize it. It is dangerous to use language models in high-risk domains (consider a surgical robot or a planning agent for defusing explosives) without extensive understanding of their interpolative and extrapolative generalization.
6 Conclusions and Future Work
The central goal of this paper has been to investigate if connectionist LMs can learn something akin to symbolic rules that generalize in “natural” ways. We have shown strong performance on a suite of reasoning tasks, including tasks that we consider to be simple (an object manipulation task and a navigation task) and more difficult (a composite task that combines the other two). On the container task, we show near-perfect performance on the training domain and considerably far outside of it, with an eventual marked drop-off in performance. On the navigation task, we show differing patterns of generalization on different subtasks, with slightly lower performance than on the container task. We have also demonstrated some of the boundaries of generalization by identifying several different kinds of generalization and measuring them. We do not claim to have explored all the axes of generalization possible in the study of these language models, but rather shown how the generalization ability of
LMs can be probed. But these results are exciting because the models perform so well outside of the training domain that it seems as if some general, symbolic rules have been learned.
Beyond generalizing just to new examples on learned tasks, we have demonstrated that these models are aided in the performance of complex tasks by learning first on elemental tasks. This hints at the prospect of learning difficult tasks more quickly and better by learning more simple tasks first.
We have additionally argued that natural language provides a powerful inductive bias for symbolic tasks by comparing performance on scenarios across two different but equivalent grammars, one in natural language and one in an invented language. Intuitively, learning the distribution of language that describes the world helps a learner to understand the distribution of that world itself. Thus, it is plausible that LMs might be effective ways to provide autonomous agents with priors and world models.
Implicitly, we have demonstrated that connectionist architectures are capable of strong performance on some classically symbolic tasks. This is done in light of recent calls to incorporate guiding principles of symbolic AI into connectionist models; it is exciting that language models seem to possess at least some of the rule-learning and generalization that humans possess, as opposed to the mere ability to recognize patterns and interpolate over a well-explored training domain. | 1. What is the main contribution of the paper regarding symbolic reasoning tasks?
2. How does the reviewer assess the value of synthetically generated data in NLP research?
3. What are the strengths of the paper in terms of experimental design and presentation?
4. Are there any typos or minor issues in the review that need correction? | Summary Of The Paper
Review | Summary Of The Paper
In this work, the authors investigate if large scale pre-trained language models (such as T5) can provide inductive biases that are useful for solving language-based symbolic reasoning tasks, and furthermore, generalize to unseen settings.
First, the authors define four types of tasks. While all the four tasks are formulated as sequence generation given prompts (prefixes), they require different kinds of reasoning:
Container. The prefix text describes an initial state of the world and a sequence of transitions; the target sequence needs to describe the ending state.
Navigation Route. The prefix text describes a map as well as two positions in the map, the target sequence is the navigation path from one position to the other.
Navigation Result. The prefix text describes a map, a starting position, and a navigation path, the target is the ending position.
Composite Task. A mixture of container and navigation tasks.
Then, the authors define a set of experimental settings targeting the generalizability of models along a variety of aspects, such as map size, the number of objects/containers, the number of reasoning steps required, as well as some linguistic properties (e.g., replacing nouns with words of other POS tags, or even made-up words).
The authors provide plenty of experiments, suggesting that pre-trained language models indeed have the capability of learning and, to some extent, generalizing symbolic rules via language-based tasks.
Review
Synthetically generated data
Just to be transparent: I was one of the reviewers of this paper at ICML 2021. The authors have described the lack of baselines in the "submission history" field. In addition to that, some ICML reviewers devalued this work because the tasks used are synthetically generated (and thus not natural language).
I want to emphasize my point that synthetically generated tasks are equally valuable.
I understand and agree that the ultimate goal is to model and perform textual reasoning on real-world data. However, before stepping into natural language data, I believe it is meaningful to have synthetic environments that are controllable, helping researchers to better understand the problem. Although the tasks introduced in this work use templated language, it is clearly shown in the figures that they are not oversimplified (there are areas in the heatmaps where the models fail to generalize). It might make less sense to use natural language data when strong pre-trained LMs such as T5 still have trouble solving/generalizing on toy-ish tasks. Otherwise, it might be difficult to analyze, for example, which part of the task the model is struggling on. Furthermore, there are many examples in the NLP community that use synthetic datasets (templated language) as a starting point for certain research directions, such as bAbI (Weston et al., 2016), TextWorld (Côté et al., 2018) and many more.
One potentially interesting thing to try might be back-translating (e.g., EN -> DE -> EN) the templated descriptions, and investigating to what degree the language models generalize on this dimension of "naturalness".
Overall comments
This work provides a set of well-designed experiments with convincing results that explore an interesting and important direction.
Since the flourishing of large-scale pre-trained language models, SOTA scores of neural models on various NLP tasks have increased substantially. On some tasks, some researchers believe neural models have achieved human-level performance, while others argue that neural models may be exploiting biases and trivial cues unconsciously injected into the data or model, rather than really doing reasoning.
To that end, the community does need work that helps us understand what such language models can learn, why, and how they generalize. This work falls into this category.
Among the experiment designs and presentations, I especially like the heatmaps, which clearly show how the model gradually loses its generalizability when a certain dimension of the task becomes more difficult.
Typos and minor things
L217: There are 3 groups of columns
References
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. Weston et al., 2016.
TextWorld: A Learning Environment for Text-based Games. Côté et al., 2018. |
NIPS | Title
Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning
Abstract
Large natural language models (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks. Here, we show that the knowledge embedded in such models provides a useful inductive bias, not just on traditional NLP tasks, but also in the nontraditional task of training a symbolic reasoning engine. We observe that these engines learn quickly and generalize in a natural way that reflects human intuition. For example, training such a system to model blockstacking might naturally generalize to stacking other types of objects because of structure in the real world that has been partially captured by the language describing it. We study several abstract textual reasoning tasks, such as object manipulation and navigation, and demonstrate multiple types of generalization to novel scenarios and the symbols that comprise them. We also demonstrate the surprising utility of compositional learning, where a learner dedicated to mastering a complicated task gains an advantage by training on relevant simpler tasks instead of jumping straight to the complicated task.
1 Introduction
Natural language processing (NLP) has seen major progress thanks to probabilistic language models (LMs) like GPT-2 [1], BERT [2], and T5 [3]. These are pre-trained in a general, task-agnostic way on large corpora of unstructured text and then fine-tuned on specific tasks. This method has achieved state of the art performance on popular NLP tasks like question-answering, textual entailment, text summarization, neural machine translation, and more [4].
These models are not just impressive for the high scores they achieve on quantitative language tasks; the text they generate often reflects patterns of “real-world” structure, suggesting that embedded in the weights of the LM is implicit knowledge of physics, object persistence, containment, spatial relationships, causal mechanisms, material properties, and other common-sense knowledge central to human reasoning and intuition. If that is so, then they ought to provide a useful inductive bias in learning to perform symbolic reasoning tasks that mirror real-world tasks.
In this paper, we attempt to leverage and characterize this inductive bias by training reasoning engines that demonstrate some of the hallmarks of human reasoning ability: learning (a) rules (b) that generalize well (c) from few examples. Concretely, we fine-tune T5 on a suite of symbolic reasoning tasks and study generalization along multiple different axes beyond the normal test/train split: we examine cardinality generalization, object generalization, part-of-speech generalization, and show (in the spirit of curriculum learning) that LMs can leverage combinations of learned subskills to master complicated composite tasks with both better sample efficiency and higher terminal performance than learning directly on the complicated tasks.
We see our contributions as four-fold. First, we demonstrate a high level of performance by connectionist models on tasks resembling symbolic classical AI tasks, demonstrating some symbolic ability of such models in light of recent calls to unify the principles of both symbolism and connectionism. Secondly, we demonstrate the breakdown of reasoning ability by manufacturing our own reasoning datasets that can be tweaked in systematic ways, instead of simply split into training/validation/test sets. This means that we can assess our models’ ability to both interpolate and extrapolate in systematic, symbolic, and grammatical ways, and otherwise flex with changing distributions. Thirdly, we demonstrate the ability of large LMs to reason compositionally, that is, to learn two kinds of reasoning separately and then to combine those different kinds of reasoning on a novel composite task, to which they are both relevant. Lastly, we demonstrate the inductive bias we hypothesize is present in large LMs and can be leveraged to assist in the formation of reasoning engines.
2 Related Work
Transformer-based LMs [5] are the dominant LM architecture, including the original GPT models [1, 6], BERT [2, 3] and the recent 175 billion parameter GPT-3 [4]. They are trained via generic maximum likelihood learning on vast, unstructured text corpora, but often exhibit zero/few-shot learning abilities on a variety of NLP tasks, having grasped key mechanics of natural language by learning to simply predict a missing word; it is this inductive bias we hope to harness.
These models implicitly house a rich world model with concepts and relations. Petroni et al. query a language model (instead of a traditional, symbolic knowledge base) for relational data explicitly expressed in natural language [7]. Bosselut et al. expand this scope, attempting to generate explicit commonsense knowledge graphs using pre-trained LMs [8], and Bouraoui et al. perform relation extraction with BERT, which can be construed as finding and specifying edges [9]. This work treats the formation of explicit knowledge graphs using LMs, whereas ours treats the leveraging of implicit knowledge graphs using LMs.
Our tasks are inspired by classic examples of good old-fashioned AI, including Blocks World [10] and STRIPS planners [11], where a system’s state, composed of symbols such as toy blocks, must be manipulated to achieve a desired end given a rule set. In contrast to these original problems, we learn the symbolic rules governing a similar system from examples. Our work is therefore similar in spirit to program induction, where a connectionist model must learn a structured program from examples; examples include the Neural Turing Machine [12] and the Differentiable Neural Computer [13]. Some work has attempted to teach a language model rules by explicit teaching through specialized datasets [14] or knowledge graphs [15]. We count on these rules being learned implicitly, as a side effect of the general learning objective of LMs. Weber et al. combine neural networks with logic programming to solve multi-hop reasoning tasks in natural language [16]. We don’t insist on expressing explicit representations of symbolic rules learned, instead reasoning about rules learned by examining out-of-distribution performance.
Our tasks are similar to the bAbI dataset [17]. However, we focus on systematic study of specific dimensions of generalization that is impossible with bAbI. bAbI questions evaluate whether certain skills are possessed; our questions evaluate to what extent these skills are possessed, how they generalize, and when they break down.
Finally, the idea of compositional learning draws inspiration from multi-task learning, which was explored in [18] and more recently in a deep learning approach [19]. Supplementary Training on Intermediate Labeled-data Tasks (STILTs) was introduced in [20], which uses two stages of pretraining, where the first stage is unsupervised like with other pre-trained models, but where the second is on some data-rich intermediate supervised task.
3 Data Generation, Evaluation, and Training Protocol
The central questions we ask in this paper revolve around whether large LMs possess some world model or inductive bias that helps them to learn reasoning rules from few examples. These rules should be used to successfully extrapolate not just to novel instances, but to novel tasks. Upon learning to track objects across containers, for example, a language model should be able to generalize to new types of objects, marginally more complicated scenarios than it has seen before, and leverage skills already learned to progress more quickly than if it hadn’t.
We show how LMs can generalize in these ways and others by systematically fine-tuning them on classes of scenarios where they can learn rules. Across tasks, our reasoners deduce the final state of an environment given natural language descriptions of an initial state and actions taken on it.
3.1 Task Types
Containers. In the first task, which we call containers, we manipulate objects in various containers and ask the reasoner to track the state of the environment. The initial state of the environment is a random allotment of n_objects objects into n_containers containers. The names of these objects and containers are sampled uniformly and without replacement from lists of candidates. Examples of such candidates are given in the appendix. Once sampled and organized, the objects and containers are converted into a plain English expression describing their organization. Then, there is a random manipulation of this initial state, where an object is randomly taken from a container and placed into another container. The task of the reasoning engine is to describe the final state of the environment.
For training, each scenario uses 2-8 objects and 2-3 containers, sampled uniformly. We construct sentences by sampling without replacement from a uniform distribution of container names and object names and filling a template accordingly. Our training set of object and container names comes from a proprietary linguistic dataset, which has data on commonness, part of speech, etc. of each word. We divide this dataset into nouns and verbs and subtract their intersection from both. We split the unique nouns into a train set (n=36566) and a validation set (n=12189) and a container set (n=9), the former two of which are to be used as object names and the latter of which is to be used for container names during training. For each set (train and val), we find the 2000 most common and join the dataset on concreteness ratings from Brysbaert et al. [21] to take the 2000 most concrete nouns from each set. We also generate a list of random strings by sampling uniformly from single digit integers and lowercase English characters, ranging in length randomly from 5 to 10.
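To make the scenario-generation procedure above concrete, here is a minimal sketch of a container-scenario generator. The template wording and the small word lists are illustrative stand-ins for the authors' templates and proprietary noun data, not the exact ones used.

```python
# Minimal sketch of a container-scenario generator (illustrative wording and word lists,
# not the authors' exact templates or noun data).
import random

CONTAINERS = ["box", "bin", "basket", "crate"]
OBJECTS = ["ball", "hammer", "apple", "snake", "marble", "book", "cup", "shoe"]

def describe(state):
    return " ".join(f"The {c} contains {', '.join(objs) if objs else 'nothing'}."
                    for c, objs in state.items())

def make_scenario(n_objects=4, n_containers=2):
    containers = random.sample(CONTAINERS, n_containers)
    objects = random.sample(OBJECTS, n_objects)
    state = {c: [] for c in containers}
    for o in objects:
        state[random.choice(containers)].append(o)
    # one random manipulation: move an object from one container to another
    src = random.choice([c for c in containers if state[c]])
    dst = random.choice([c for c in containers if c != src])
    obj = random.choice(state[src])
    prefix = (describe(state)
              + f" The {obj} is moved from the {src} to the {dst}. Describe the final state.")
    state[src].remove(obj)
    state[dst].append(obj)
    return prefix, describe(state)

prefix, target = make_scenario()
```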
Navigation. In both versions of navigation, a natural-language description of a map of the environment is provided to the reasoning engine. It is generated by sampling from a list of common locations including “kitchen", “garden", and others. These locations are then composed into a grid where they are in north-south-west-east orientation to each other. In the first task, called navigation route, the reasoner is additionally given a starting point and a destination. The reasoner must provide a valid route from origin to destination. In the second task, called navigation result, we require that the reasoner, given a starting point and a route, determine the location where they would find themselves in the map. For training, the maximum number of locations is 8 and the minimum is 3.
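A rough sketch of how such navigation instances can be generated and solved is shown below; the grid-placement strategy, room list and phrasing are assumptions, and a real generator would additionally ensure the sampled rooms form a connected map.

```python
# Rough sketch of the navigation tasks' underlying structure: rooms placed on a grid, a
# textual map of the cardinal relations between adjacent rooms, a BFS route (the
# navigation-route target) and the room reached by following a route (the
# navigation-result target).
import random
from collections import deque

ROOMS = ["kitchen", "garden", "office", "bedroom", "attic", "cellar", "hall", "porch"]
DIRS = {(-1, 0): "north", (1, 0): "south", (0, -1): "west", (0, 1): "east"}

def make_map(n_rooms=5, size=3):
    cells = random.sample([(r, c) for r in range(size) for c in range(size)], n_rooms)
    names = dict(zip(cells, random.sample(ROOMS, n_rooms)))
    edges, facts = {}, []
    for (r, c), name in names.items():
        for (dr, dc), d in DIRS.items():
            nb = names.get((r + dr, c + dc))
            if nb:
                edges.setdefault(name, {})[d] = nb
                facts.append(f"The {nb} is to the {d} of the {name}.")
    return edges, " ".join(facts)

def route(edges, src, dst):
    queue, seen = deque([(src, [])]), {src}
    while queue:
        room, path = queue.popleft()
        if room == dst:
            return path                      # e.g. ["west", "north"]
        for d, nb in edges.get(room, {}).items():
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, path + [d]))
    return None                              # unreachable (disconnected sample)

def follow(edges, src, steps):
    room = src
    for d in steps:
        room = edges[room][d]
    return room
```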
Composite task. We are interested in assessing the ability of a reasoning engine to perform a task which combines multiple elemental types of knowledge. Specifically, can a learner master two skills separately, and then leverage knowledge from both to perform the composite task better and more quickly than it would have by learning directly on the composite task? This task, called hard object,
is intended to be a composition of navigation and containers. The reasoner is given a verbal map of the world and an action taken. In this action, an object is taken from its container, carried on a route indicated by successive moves in cardinal directions, and placed. The reasoner must then describe the new state of the containers.
3.2 Generalization Types
Given our overarching interest in large LMs’ ability to yield reasoners that generalize well, we craft several types of experiments to probe the bounds of different kinds of generalization, namely:
Cardinality generalization. This tests a model’s structural understanding of the domain. If a reasoner is trained on scenarios with k objects, steps of navigations, rooms, etc. can it generalize to more than k objects, steps of navigations, rooms, etc.?
Object generalization. This is a semantic test of whether or not the reasoner can leverage prior knowledge of English to generalize to new, never-before-seen objects. For example, if several different container training scenarios are composed of objects from different distributions of words (e.g. 2000 concrete nouns vs. 2000 most common nouns vs. 2000 randomly sampled nouns), which distribution of training scenarios results in the best model generalization to scenarios composed with new, previously unseen nouns?
Part-of-speech generalization. This is another semantic test based on the idea that a reasoner that understands language will perform better on “right" words than on “wrong" words. For example, on the container task, the model should perform well when nouns are objects and poorly when verbs or random strings are. Intuitively, a scenario such as “The bin contains a dethrone and a transpose" makes less sense, and is statistically less likely, than a scenario such as “The bin contains a ball and a snake"; the naturalness of the scenario descriptions should positively correlate with generalization. We hypothesize that natural nouns will work best, since nouns are often sensible things to move from one container to another, and that arbitrary strings and verbs will both decrease performance. On the other hand, random strings have probably never been encountered by the model, and it might learn to simply copy whatever tokens are in specific places, treating strings as opaque IDs, symbols without meaning. Verbs should actively confuse the model, since a verb’s linguistic role is structurally and semantically different than a noun’s; it is a symbol not without meaning, but with the wrong meaning.
Reasonable phrasing generalization. Finally, what if we replace the templates themselves? Instead of replacing the objects in a scenario, we replace the English scaffolding we insert them into by mapping deterministically from each English word to a gibberish word composed of English morphemes but possessing no meaning (See 4.5). A reasoner for which language is meaningful will have a harder time adjusting to this task, while a reasoner for which language is simply a sequence of meaningless strings will struggle no more with this task than with its original English version.
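As a concrete illustration of this manipulation, the sketch below builds a one-to-one substitution over the template vocabulary and leaves any word outside that vocabulary (the noun fillers) untouched. The random-letter gibberish here is an assumption; the paper's gibberish words are built from English morphemes (e.g. "Xrq", "sixnqkxb").

```python
# Sketch of the template substitution used for the "reasonable phrasing" test: a fixed
# one-to-one map over the template vocabulary; words outside the mapping (the noun
# fillers) pass through unchanged. Random-letter gibberish is an illustrative assumption.
import random
import string

def gibberish_word(rng, lo=3, hi=9):
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(rng.randint(lo, hi)))

def build_mapping(template_vocab, seed=0):
    rng = random.Random(seed)
    mapping = {}
    for w in sorted(template_vocab):
        g = gibberish_word(rng)
        while g in mapping.values():          # keep the map one-to-one
            g = gibberish_word(rng)
        mapping[w] = g
    return mapping

def translate(sentence, mapping):
    return " ".join(mapping.get(tok, tok) for tok in sentence.split())

mapping = build_mapping({"The", "the", "contains", "a", "and", "is", "moved", "from", "to"})
print(translate("The bin contains a ball and a snake", mapping))
```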
3.3 Base Model and Training Details
On all experiments, we use T5’s 3 billion parameter architecture, either fine-tuning a pretrained model or training from scratch. We do so on Nvidia Tesla V100 32GB GPUs with a batch size of 1 and a learning rate of 0.003. We train a different reasoning engine for all three elemental tasks for 1000 total steps. After evaluating them on their respective tasks, we train them on the other base task for 1000 steps and eventually on the composite task for 1000 steps (3000 total steps).
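A minimal sketch of this fine-tuning setup (teacher forcing on (prefix, target) pairs) is shown below. The batch size of 1 and the 0.003 learning rate follow the text; the optimizer choice, checkpoint name and loop details are assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of fine-tuning a pretrained T5 checkpoint on (prefix, target) pairs.
# The "t5-3b" checkpoint name and AdamW optimizer are assumptions made for illustration.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-3)

def training_step(prefix: str, target: str) -> float:
    batch = tokenizer(prefix, return_tensors="pt").to(device)
    labels = tokenizer(target, return_tensors="pt").input_ids.to(device)
    loss = model(input_ids=batch.input_ids,
                 attention_mask=batch.attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```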
3.4 Metrics
For each experiment, we gauge performance using three metrics. The first is exact equality of true final state. This is obviously the highest standard, as it mimics perfectly the reasoning we would expect a human to carry out after having learned the rules and format of the dynamical system. The second metric is substring equality. We want to see how many individual statements from the predicted final state are contained in the true final state. Sometimes the reasoning is flawed, but on target (e.g. if the reasoner predicts, after moving a hammer from a box to a bin, that it is in both the box and the bin, it should be given partial credit). The third metric is the standard BLEU score, to see how similar the sentences are at the individual word level.
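A rough sketch of the three metrics follows. How exactly predictions are split into individual statements for the substring metric is not specified in the text, so the period-based split below is an assumption.

```python
# Sketch of the three evaluation metrics: exact string equality, a substring score over
# individual statements (split on periods here; an assumption), and sentence-level BLEU.
from nltk.translate.bleu_score import sentence_bleu

def exact_match(pred: str, target: str) -> float:
    return float(pred.strip() == target.strip())

def substring_score(pred: str, target: str) -> float:
    statements = [s.strip() for s in pred.split(".") if s.strip()]
    if not statements:
        return 0.0
    return sum(s in target for s in statements) / len(statements)

def bleu(pred: str, target: str) -> float:
    return sentence_bleu([target.split()], pred.split())
```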
Interpolation and Extrapolation. Throughout our experiments, we assess different kinds of generalization. The first, which we term interpolation, refers to testing a LM on new instances that are
drawn from the same distribution as the training set. For example, in the container task, we might test on scenarios that use already-seen objects and containers, but arranged in novel scenarios. The second, which we term extrapolation, refers to testing a LM on new instances that are drawn from a different distribution than the training distribution. For example, in the container task, we might test on scenarios that involve new, never-before-seen objects, containers, or cardinalities.
Structural and Semantic Generalization. Finally, we distinguish between structural generalization, where we test things like cardinality generalization, and semantic generalization, where we explore new words, or new types of words.
4 Results
We now systematically test interpolation and extrapolation, assessing both semantic and structural generalization. We begin with several baselines, and then explore increasingly difficult tasks.
4.1 Comparison with Baseline Methods
Since our claims have to do with the inductive bias of large, pre-trained LMs, we first establish a wide baseline gulf between pre-trained models and those trained from scratch (tabula rasa). We train two such versions of T5-3B on the same training set (1000 scenarios with maximums of 8 objects and 3 containers and minimums of 2 objects and 2 containers) and then measure two kinds of both interpolative and extrapolative performance: systematic, meaning the number of objects and containers, and semantic, meaning the words used for objects in reasoning scenarios (see 3.4 and 3.2 for more info). Thus, the two models’ performance is compared on 4 total experiments with 7200 examples each:
• Interp: same number of containers and objects, and same words used for object names, as the training set;
• SemExtrap: same number of containers and objects as the training set, different words used for object names;
• SysExtrap: minimums of 4 containers and 10 objects and maximums of 5 containers and 19 objects, same words used for object names as the training set;
• SemSysExtrap: minimums of 4 containers and 10 objects and maximums of 5 containers and 19 objects, different words used for object names than the training set.
The average BLEU scores for each model checkpoint (taken every 100 steps during fine-tuning/training) are displayed in Figure 1. These baselines make clear that pre-trained language models that have been fine-tuned wildly outperform language models trained from scratch.
4.2 Containers
We now begin our primary experiments, starting by testing cardinality generalization. We train on scenarios with a maximum of 8 objects and 3 containers, and validate structural generalization on scenarios with up to 19 objects and 5 containers. The results are shown in Figure 2. The model is able to correctly predict the exact string a majority of the time even well outside the training domain. This kind of generalization could only be reached by grasping, to some extent, rules like those used when enumerating long English lists. This is one way in which the inductive bias of LMs aids in the reasoner’s good performance.
We now test object generalization and part-of-speech generalization. Here, we train models with scenarios generated by parameterizing our templates with various sets of nouns, and then testing by parameterizing with a different set of words. The results are shown in Table 2. Each row represents a different training set; columns represent performance on different test sets. We tested four different training sets: "all" represents nouns sampled uniformly from our set of all 36,566 nouns. "2k common" represents the 2000 most common nouns, "2k concrete" represents the 2000 most concrete nouns; "2k random" represents 2000 randomly sampled nouns. There are 3 groups of columns, representing different validation conditions. "Training words" represents a validation condition where the reasoner was tested on new scenarios that used the same set of nouns (eg, a different mix of
containers and objects, or a different number of them). "Validation words" represents a condition with new scenarios using never-before-seen words of a specific type. "New POS" represents words that are an entirely different part of speech, or random words. The general conclusion is that, regardless of the metric used, it is best to train on the largest non-specific set of nouns possible. Doing so gives good generalization across a wide variety of alternative words, including verbs and random strings.
We now explore the distributions of the training nouns. Many of the words passed between containers were nouns that would not normally be passed between containers, like "year", and so we conducted an experiment where we trained on the 2000 most concrete nouns, a random subsample of 200 of those nouns, and 20 of the most sensible nouns to be moved between containers, like "marble" or "mouse". As can be seen in Figure 3, the model trained on sensible nouns gets most confused by verbs and random strings, relative to its peers. This suggests that the distribution of very concrete nouns is farther from the distribution of verbs than the larger distribution of concrete nouns is.
4.3 Navigation
We now turn our attention to the navigation and navigation route tasks. In this section, we use exact string accuracy as our metric, as both substring and BLEU scores ignore the (critical) sequential nature of the predictions.
4.3.1 Interpolation and Extrapolation
Our primary results are in Figure 4. For both the navigation route and navigation result tasks, we trained on distributions of maps that contained between 3 and 8 total rooms; this resulted in total path plans that were 1 to 5 steps long.
In the middle, we see results for the navigation result task (where the model must predict the result of a specific sequence of actions). Here, we see several interesting phenomena. Overall, the model does a good job of making accurate predictions, with a 79% marginal accuracy over the training regime. There are two situations where the model performs especially well: the first column represents tasks where there is only a single step in the route; the model shows a strong ability to extrapolate to any number of rooms. The second case is the "diagonal" in the upper-right. This represents situations where the map is a linear chain, with very few junctions in the map. In this case, the model also shows a strong ability to generalize to new situations involving a new number of rooms, and a new number of steps. It is unclear why the model has such trouble with two-step planning problems, although we hypothesize it may have something to do with the training distribution, as discussed later.
In contrast, on the left of Figure 4, we see results for the navigation route task. Here, we see similarly strong performance in the training regime, with an 82% marginal accuracy. However, extrapolating to new situations shows mixed results: the model is easily able to extrapolate to any number of rooms, and like the navigation result task, it performs especially well in the case of single-step planning. However, it seems to be completely unable to generalize to new numbers of steps needed in the planning process.
What accounts for the difference? The biggest difference between the navigation route and navigation result tasks is the format of the output: in the navigation result task, the model always outputs a single word (the room name), regardless of the complexity of the map or the plan. In contrast, on the navigation route task, longer plans require a longer output. The model seems to struggle with having to generate sentences that are longer than any it has seen before. While we have used exact string equality in all of the tests reported here, we note that on longer routes, sometimes the model’s output would be a substring of the correct output, as in the example below, which is correct, but missing the final step:
Target: to the west, then to the west, then to the north, then to the north, then to the north, then to the north
Prediction: to the west, then to the west, then to the north, then to the north, then to the north
4.4 Compositional Reasoning on the Composite Task
Finally, we tested the hard object task, where the reasoner must consider objects, containers, and maps. To explore this, we test four conditions. We train a learner exclusively on the task (HardObj). We also train a learner first on the navigation task for 1000 steps, then on the container task for 1000 steps, then on the hard object task for 1000 steps (Nav-Cont-HardObj). We also reverse the order of navigation and container pre-training (Cont-Nav-HardObj), and we test a random mixture of pretraining, with 2000 steps of a 50/50 mixture of sentences drawn from both tasks (ContNav5050HardObj). For each regime, we trained 10 models and averaged their performance on a test set of 5000 held-out examples.
The results in Figure 5 are surprising: the models that learned first on other kinds of tasks learned the composite task more quickly and achieved better ultimate performance. Like curriculum learning, this seems to indicate subskills can persist and be ported to superskills for improved sample efficiency. Instead of fine-tuning on a single hard task (and thereby gaining proficiency in only that task), a better strategy may be to fine-tune on a wide variety of elemental tasks, and focus more on combining them, thereby potentially solving not just one, but a combinatorially large number of complex tasks.
4.4.1 Importance of Training Set Distribution
The T5 model seems unusually sensitive to the training set distribution. Figure 6 explores this. In our first attempt at creating a training distribution for the navigation tasks, we randomly created a map with a uniformly selected number of rooms, and then randomly selected source and destination rooms. This resulted in the performance shown in the right-hand panels (C and D) of Figure 6. Panel (D) shows most of the training data is concentrated in the upper-left corner, meaning short paths in small maps. As a result (panel (C)), the model was almost completely unable to model length 4 or length 5 plans, because they rarely appeared in the training set.
An alternative is to sample a desired path length uniformly, and then generate a map, source, and destination with that length. This results in the training set distribution shown in panel (B), which concentrates many more examples on longer paths in larger maps, but with fewer examples of two-step plans. The resulting accuracy is shown in Panel (A). Here, we see that accuracy is greatly improved for four- and five-step plans, although two-step accuracy suffers. However, even the
carefully constructed training distribution with uniformly sampled path lengths does not induce generalization to paths longer than 5 steps, as shown in Figure 4. Understanding this phenomenon is an important direction for future research.
4.5 Does all that reading really help?
At the heart of our paper is the idea that natural language provides a useful inductive bias. To explore this directly, we modify our object-container task in the following way: We devise a one-to-one mapping from each word in our sensible English templates to words in an invented gibberish. Since the mapping is one-to-one, the English templates and the gibberish templates are identically structured but with substituted words. For example, the word "The" is mapped to the gibberish word "Xrq", the word "contains" to "sixnqkxb", etc. Noun slots in both templates are still filled with the original English nouns (i.e. not substituted with gibberish). Figure 7 shows the results: it is harder for T5 to master the gibberish domain. If there were no inductive bias of English-trained models aiding our learners on English tasks, then the mastery of these two grammars would grow at the same pace, since they are just as structured as one another and thus presumably as predictable and learnable as one another. This is a bit of evidence in favor of the notion that the inductive bias of large LMs does, in fact, aid in learning quickly and generalizing well in several different senses.
5 Ethical and Risk Considerations
Since this paper studies what are essentially toy tasks, we don’t consider the risks for our methods to be particularly high. However, we believe in the eventual application of these models in real-world domains (e.g. robotics and planning), and this work contributes to a needed preliminary understanding of these models’ behavior, since they are difficult to control and predict. We urge applications of these models to take into account this general risk along with work such as ours which helps to characterize it. It is dangerous to use language models in high-risk domains (consider a surgical robot or a planning agent for defusing explosives) without extensive understanding of their interpolative and extrapolative generalization.
6 Conclusions and Future Work
The central goal of this paper has been to investigate if connectionist LMs can learn something akin to symbolic rules that generalize in “natural” ways. We have shown strong performance on a suite of reasoning tasks, including tasks that we consider to be simple (an object manipulation task and a navigation task) and more difficult (a composite task that combines the other two). On the container task, we show near-perfect performance on the training domain and considerably far outside of it, with an eventual marked drop-off in performance. On the navigation task, we show differing patterns of generalization on different subtasks, with slightly lower performance than on the container task. We have also demonstrated some of the boundaries of generalization by identifying several different kinds of generalization and measuring them. We do not claim to have explored all the axes of generalization possible in the study of these language models, but rather shown how the generalization ability of
LMs can be probed. But these results are exciting because the models perform so well outside of the training domain that it seems as if some general, symbolic rules have been learned.
Beyond generalizing just to new examples on learned tasks, we have demonstrated that these models are aided in the performance of complex tasks by learning first on elemental tasks. This hints at the prospect of learning difficult tasks more quickly and better by learning more simple tasks first.
We have additionally argued that natural language provides a powerful inductive bias for symbolic tasks by comparing performance on scenarios across two different but equivalent grammars, one in natural language and one in an invented language. Intuitively, learning the distribution of language that describes the world helps a learner to understand the distribution of that world itself. Thus, it is plausible that LMs might be effective ways to provide autonomous agents with priors and world models.
Implicitly, we have demonstrated that connectionist architectures are capable of strong performance on some classically symbolic tasks. This is done in light of recent calls to incorporate guiding principles of symbolic AI into connectionist models; it is exciting that language models seem to possess at least some of the rule-learning and generalization that humans possess, as opposed to the mere ability to recognize patterns and interpolate over a well-explored training domain. | 1. What is the main objective of the paper regarding symbolic reasoning tasks?
2. What are the three types of tasks designed to evaluate the inductive bias of pre-trained language models?
3. What are the different kinds of generalizations used to probe the model's abilities?
4. How does the reviewer assess the contributions and findings of the paper?
5. What are the limitations of the paper regarding the datasets used? | Summary Of The Paper
Review | Summary Of The Paper
This paper aims to explore whether the inductive bias of pre-trained language models can support symbolic reasoning tasks. Three different types of tasks are designed for the above objective, including a container-based task, a navigation task and a composite task. In order to verify the generalization ability of the model, the paper also designs different kinds of generalization probes, including cardinality generalization, object generalization, POS generalization and reasonable phrasing generalization. T5 is fine-tuned and evaluated on these tasks, with some interesting findings observed, which I think are the key contribution of this work: (1) for the container-based task, T5 shows good generalization and prediction capability; (2) for the navigation task, T5 can also do a good job, but when the number of inference steps changes, its performance drops; (3) for the composite task, T5 can do a good job by learning in a curriculum-learning way.
Review
The objective of the paper is clearly clarified. The writing is easy to follow. The findings in the experiments are interesting. It is a good paper for studying how well pre-trained LMs can perform on reasoning-required tasks and their generalization capabilities along different aspects. I appreciate this work but think the datasets built within this paper are limited to specific domains and scenarios. It would be better if the paper could also cover results on other open-domain datasets, such as commonsense QA, mathematical reasoning, etc.
NIPS | Title
GCOMB: Learning Budget-constrained Combinatorial Algorithms over Billion-sized Graphs
Abstract
There has been an increased interest in discovering heuristics for combinatorial problems on graphs through machine learning. While existing techniques have primarily focused on obtaining high-quality solutions, scalability to billion-sized graphs has not been adequately addressed. In addition, the impact of budget constraints, which are necessary for many practical scenarios, remains to be studied. In this paper, we propose a framework called GCOMB to bridge these gaps. GCOMB trains a Graph Convolutional Network (GCN) using a novel probabilistic greedy mechanism to predict the quality of a node. To further facilitate the combinatorial nature of the problem, GCOMB utilizes a Q-learning framework, which is made efficient through importance sampling. We perform extensive experiments on real graphs to benchmark the efficiency and efficacy of GCOMB. Our results establish that GCOMB is 100 times faster and marginally better in quality than state-of-the-art algorithms for learning combinatorial algorithms. Additionally, a case-study on the practical combinatorial problem of Influence Maximization (IM) shows GCOMB is 150 times faster than the specialized IM algorithm IMM with similar quality.
1 Introduction and Related Work
Combinatorial optimization problems on graphs appear routinely in various applications such as viral marketing in social networks [14, 4], computational sustainability [8], health-care [33], and infrastructure deployment [20, 23, 24, 22]. In these set combinatorial problems, the goal is to identify the set of nodes that optimizes a given objective function. These optimization problems are often NP-hard. Therefore, designing an exact algorithm is infeasible and polynomial-time algorithms, with or without approximation guarantees, are often desired and used in practice [13, 31]. Furthermore, these graphs are often dynamic in nature and the approximation algorithms need to be run repeatedly at regular intervals. Since real-world graphs may contain millions of nodes and edges, this entire process becomes tedious and time-consuming.
To provide a concrete example, consider the problem of viral marketing on social networks through Influence Maximization [2, 14]. Given a budget b, the goal is to select b nodes (users) such that their endorsement of a certain product (ex: through a tweet) is expected to initiate a cascade that reaches the largest number of nodes in the graph. This problem is NP-hard [14]. Advertising through social networks is a common practice today and needs to be solved repeatedly due to the graphs being dynamic
in nature. Furthermore, even the greedy approximation algorithm does not scale to large graphs [2] resulting in a large body of research work [31, 13, 16, 26, 14, 5, 32, 6].
At this juncture, we highlight two key observations. First, although the graph is changing, the underlying model generating the graph is likely to remain the same. Second, the nodes that get selected in the answer set of the approximation algorithm may have certain properties in common. Motivated by these observations, we ask the following question [7]: Given a set combinatorial problem P on graph G and its corresponding solution set S, can we learn an approximation algorithm for problem P and solve it on an unseen graph that is similar to G?
1.1 Limitations of Existing Work
The above observations were first highlighted by S2V-DQN [7], where they show that it is indeed possible to learn combinatorial algorithms on graphs. Subsequently, an improved approach was proposed in GCN-TREESEARCH [19]. Despite these efforts, there is scope for further improvement.
• Scalability: The primary focus of both GCN-TREESEARCH and S2V-DQN has been on obtaining quality that is as close to the optimal as possible. Efficiency studies, however, are limited to graphs containing only hundreds of thousands of nodes. To provide a concrete case study, we apply GCN-TREESEARCH to the Influence Maximization problem on the YouTube social network. We observe that GCN-TREESEARCH takes one hour on a graph containing a million edges (Fig. 3a; we will revisit this experiment in § 4.3). Real-life graphs may contain billions of edges (see Table 1a).
• Generalizability to real-life combinatorial problems: GCN-TREESEARCH proposes a learningbased heuristic for the Maximal Independent Set problem (MIS). When the combinatorial problem is not MIS, GCN-TREESEARCH suggests that we map that problem to MIS. Consequently, for problems that are not easily mappable to MIS, the efficacy is compromised (ex: Influence Maximization).
• Budget constraints: Both GCN-TREESEARCH and S2V-DQN solve the decision versions of combinatorial problems (Ex. set cover, vertex cover). In real life, we often encounter their budget-constrained versions, such as max-cover and Influence Maximization [14].
Among other related work, Gasse et al. [9] used GCN for learning branch-and-bound variable selection policies, whereas Prates et al. [27] focused on solving Travelling Salesman Problem. However, the proposed techniques in these papers do not directly apply to our setting of set combinatorial problems.
1.2 Contributions
At the core of our study lies the observation that although the graph may be large, only a small percentage of the nodes are likely to contribute to the solution set. Thus, pruning the search space is as important as prediction of the solution set. Both S2V-DQN [7] and GCN-TREESEARCH [19] have primarily focused on the prediction component. In particular, S2V-DQN learns an end-to-end neural model on the entire graph through reinforcement learning. The neural model integrates node embedding and Q-learning into a single integrated framework. Consequently, the model is bogged down by a large number of parameters, which need to be learned on the entire node set. As a result, we will show in § 4 that S2V-DQN fails to scale to graphs beyond 20,000 nodes.
On the other hand, GCN-TREESEARCH employs a two-component framework: (1) a graph convolutional network (GCN) to learn and predict the individual value of each node, and (2) a tree-search component to analyze the dependence among nodes and identify the solution set that collectively works well. Following tree-search, GCN is repeated on a reduced graph and this process continues iteratively. This approach is not scalable to large graphs due to the repeated iterations of GCN and TreeSearch, where each iteration of tree-search has O(|E|) complexity (E is the set of edges). Our method GCOMB builds on the observation that computationally expensive predictions should be attempted only for promising nodes. Towards that end, GCOMB has two separate components: (1) a GCN to prune poor nodes and learn embeddings of good nodes in a supervised manner, and (2) a Q-learning component that focuses only on the good nodes to predict the solution set. Thus, unlike S2V-DQN, GCOMB uses a mixture of supervised and reinforcement learning, and does not employ an end-to-end architecture. Consequently, the prediction framework is lightweight with a significantly reduced number of parameters.
When compared to GCN-TREESEARCH, although both techniques use a GCN, in GCOMB, we train using a novel probabilistic greedy mechanism. Furthermore, instead of an iterative procedure of repeated GCN and TreeSearch calls, GCOMB performs a single forward pass through GCN
during inference. In addition, unlike TreeSearch, which is specifically tailored for the MIS problem, GCOMB is problem-agnostic 2. Finally, unlike both S2V-DQN and GCN-TREESEARCH, GCOMB uses lightweight operations to prune poor nodes and focus expensive computations only on nodes with a high potential of being part of the solution set. The pruning of the search space not only enhances scalability but also removes noise from the search space leading to improved prediction quality. Owing to these design choices, (1) GCOMB is scalable to billion-sized graphs and up to 100 times faster, (2) on average, computes higher quality solution sets than S2V-DQN and GCN-TREESEARCH, and (3) improves upon the state-of-the-art algorithm for Influence Maximization on social networks.
2 Problem Formulation
Objective: Given a budget-constrained set combinatorial problem P over graphs drawn from distribution D, learn a heuristic to solve problem P on an unseen graph G generated from D.
Next, we describe three instances of budget-constrained set combinatorial problems on graphs.
Maximum Coverage Problem on a bipartite graph (MCP): Given a bipartite graph G = (V,E), where V = A ∪ B, and a budget b, find a set S∗ ⊆ A of b nodes such that coverage is maximized. The coverage of set S∗ is defined as f(S∗) = |X| / |B|, where X = {j | (i, j) ∈ E, i ∈ S∗, j ∈ B}. Budget-constrained Maximum Vertex Cover (MVC): Given a graph G = (V,E) and a budget b, find a set S∗ of b nodes such that the coverage f(S∗) of S∗ is maximized, where f(S∗) = |X| / |E| and X = {(i, j) | (i, j) ∈ E, i ∈ S∗, j ∈ V }. Influence Maximization (IM) [2]: Given a budget b, a social network G, and an information diffusion model M, select a set S∗ of b nodes such that the expected diffusion spread f(S∗) = E[Γ(S∗)] is maximized. (See App. A in the supplementary for more details.)
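To make the first objective concrete, a small sketch of the MCP coverage function on a bipartite graph stored as an adjacency map from A to B is shown below (the tiny instance is hypothetical, not from the paper).

```python
# Sketch of the MCP objective f(S) = |X| / |B|: the fraction of B covered by the
# neighbours of the chosen nodes S ⊆ A.
def mcp_coverage(adj_A_to_B, B, S):
    covered = set()
    for a in S:
        covered.update(adj_A_to_B.get(a, ()))
    return len(covered) / len(B)

adj = {"a1": {"b1", "b2"}, "a2": {"b2", "b3"}, "a3": {"b4"}}
print(mcp_coverage(adj, {"b1", "b2", "b3", "b4"}, {"a1", "a3"}))   # 0.75
```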
3 GCOMB
The input to the training phase is a set of graphs and the optimization function f(·) corresponding to the combinatorial problem at hand. The output is a sequence of two separate neural networks, a GCN [10] and a Q-learning network, with their corresponding learned parameters Θ_G and Θ_Q respectively. In the testing phase, the inputs include a graph G = (V,E), the optimization function f(·) and the budget b. The output of the testing phase is the solution set of nodes constructed using the learned neural networks. Fig. 1 presents the training pipeline. We will now discuss each of the phases.
3.1 Generating Training Data for GCN
Our goal is to learn node embeddings that can predict “quality”, and thereby, identify those nodes that are likely to be part of the answer set. We could adopt a classification-based method, where, given a training graph G = (V,E), budget b and its solution set S, a node v is called positive if v ∈ S; otherwise it is negative. This approach, however, assumes all nodes that are not a part of S to be equally bad. In reality, this may not be the case. Consider the case where f({v1})=f({v2}), but the marginal gain of node v2 given S = {v1}, i.e., f({v1, v2}) − f({v1}), is 0 and vice versa. In this scenario, only one of v1 and v2 would be selected in the answer set although both are of equal quality on their own.
2We are, however, limited to set combinatorial problems only.
Probabilistic greedy: To address the above issue, we sample from the solution space in a greedy manner and learn embeddings that reflect the marginal gain f(S ∪{v})− f(S) provided by a node v towards the solution set S (Alg. 2 in Appendix). To sample from the solution space, in each iteration, instead of selecting the node with the highest marginal gain, we choose a node with probability proportional to its marginal gain. The probabilistic greedy algorithm runs m times to construct m different solution sets S = {S1, · · · , Sm} and the score of node v ∈ V is set to:
score(v) = (∑_{i=1}^{m} gain_i(v)) / (∑_{i=1}^{m} f(S_i))    (1)
Here, gain_i(v) denotes the marginal gain contribution of v to S_i. Specifically, assume v is added to S_i in the (j + 1)-th iteration and let S_i^j be the set of nodes that were added in the first j iterations while constructing S_i. Then, gain_i(v) = f(S_i^j ∪ {v}) − f(S_i^j). In our experiments, m is set to 30 for all three problems of MCP, MVC and IM.
Termination condition of probabilistic greedy: Probabilistic greedy runs till convergence of the marginal gains, i.e., gain_i(v) ≤ ∆, where ∆ is a small value. The goal here is to identify all nodes that could potentially be part of the solution set for any given budget. ∆ in our experiments is set to 0.01 for all three problems of MCP, MVC and IM.
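A minimal sketch of this sampler is given below; f can be any monotone set function of the chosen nodes (for instance, the MCP coverage from Section 2 with the graph fixed), and accumulating each node's gains over m independent runs, normalized by the total of f(S_i), gives the training score of Eq. (1).

```python
# Sketch of one probabilistic-greedy run: at each step a node is drawn with probability
# proportional to its marginal gain, and the run stops once no node improves f by more
# than delta. Repeating this m times and normalizing each node's accumulated gain by
# the total sum of f(S_i) yields score(v) in Eq. (1).
import random

def probabilistic_greedy(nodes, f, delta=0.01):
    S, gains, current = [], {}, f(set())
    while True:
        marginal = {v: f(set(S) | {v}) - current for v in nodes if v not in S}
        marginal = {v: g for v, g in marginal.items() if g > delta}
        if not marginal:
            break
        candidates, weights = zip(*marginal.items())
        v = random.choices(candidates, weights=weights)[0]
        S.append(v)
        gains[v] = marginal[v]
        current += marginal[v]
    return S, gains
```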
3.2 Training the GCN
Our goal in this phase is two-fold: (1) Identify nodes that are unlikely to be part of the solution set and are therefore noise in the context of our problem; (2) Learn a predictive model for node quality.
Noise predictor: The noise predictor should be lightweight so that expensive computations are reserved only for the good nodes. With this goal, we exploit the first-layer information of the GCN and learn a classifier to predict, for a given budget b, whether a node can be safely pruned without affecting the quality of the solution set. Typically, the first layer of a GCN contains the raw features of nodes that are relevant for the problem being solved. In GCOMB, we use the summation of the outgoing edge weights as node features. Let x_v denote the total outgoing edge weight of node v. To learn the noise predictor, given a set of training graphs {G_1, · · · , G_t}, we first sort all nodes based on x_v. Let rank(v, G_i) denote the position of v in the sorted sequence based on x_v in G_i. Furthermore, let S_i^j denote the j-th solution set constructed by probabilistic greedy on G_i. Given a budget b, S_{G_i,b}^j ⊆ S_i^j denotes the subset containing the first b nodes added to S_i^j by probabilistic greedy. Therefore, r_{G_i}^b = max_{j=0..m} { max_{v ∈ S_{G_i,b}^j} rank(v, G_i) } represents the lowest rank of any node in a solution set of budget b in G_i. This measure is further generalized to all training graphs in the form of r_max^b = max_{G_i} { r_{G_i}^b }, which represents the lowest rank of any node that has a realistic chance of being included in an answer set of budget b. To generalize across budgets, we compute r_max^{b_i} for a series of budgets {b_1, · · · , b_max}, where b_max = max_{G_i} { max_{j=0..m} |S_i^j| }. On this data, we can perform curve fitting [1] to predict r_max^b for any (unseen) budget b. In our experiments, we use linear interpolation. To generalize across graph sizes, all of the above computations are performed on normalized budgets, where b is expressed as the proportion of nodes with respect to the node set size of the graph. Similarly, the rank rank(v, G_i) is expressed as a percentile.
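A sketch of how these rank cutoffs can be computed from the probabilistic-greedy solution sets is shown below; the data layout (dictionaries of outgoing edge weights, ordered solution lists, normalized budgets) is an assumption made for illustration.

```python
# Sketch of the noise-predictor statistics: rank nodes by total outgoing edge weight,
# record for every budget the worst percentile rank of any node among the first b picks
# of any probabilistic-greedy run, and interpolate over budgets for unseen b.
# Budgets are normalized (fractions of |V|) and must be passed in increasing order.
import numpy as np

def rank_percentiles(out_weight):                     # out_weight: {node: x_v}
    order = sorted(out_weight, key=out_weight.get, reverse=True)
    return {v: 100.0 * i / len(order) for i, v in enumerate(order)}

def budget_cutoff_fn(graphs, budgets):
    # graphs: list of (out_weight, solutions); each solution is an ordered node list
    cutoffs = []
    for b in budgets:
        worst = 0.0
        for out_weight, solutions in graphs:
            ranks = rank_percentiles(out_weight)
            k = max(1, int(b * len(out_weight)))      # normalized budget -> node count
            for sol in solutions:
                worst = max(worst, max(ranks[v] for v in sol[:k]))
        cutoffs.append(worst)
    return lambda b: float(np.interp(b, budgets, cutoffs))   # predicts r_max^b
```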
Node quality predictor: To train the GCN, we sample a training graph G_i = (V_i, E_i) and a (normalized) budget b from the range (0, b^i_max], where b^i_max = \max_{j=1}^{m} \{ |S_i^j| / |V_i| \}. This tuple is sent to the noise predictor to obtain the good (non-noisy) nodes. The GCN parameters (Θ_G) are next learned by minimizing the loss function only on the good nodes. Specifically, for each good node v, we want to learn embeddings that can predict score(v) through a surrogate function score′(v). Towards that end, we draw multiple samples of training graphs and budgets, and the parameters are learned by minimizing the mean squared error loss (see Alg. 3 in the Supplementary for detailed pseudocode).

J(Θ_G) = \sum_{⟨G_i, b⟩} \frac{1}{|V_i^g|} \sum_{v ∈ V_i^g} \left( score(v) − score′(v) \right)^2 \quad (2)

In the above equation, V_i^g denotes the set of good nodes for budget b in graph G_i. Since GCNs are trained through message passing, in a GCN with K hidden layers, the computation graph is limited to the induced subgraph formed by the K-hop neighbors of V_i^g, instead of the entire graph.
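The following PyTorch-style sketch illustrates Eq. (2); the two-layer mean-aggregation network is only a stand-in for GCOMB's actual GCN (whose architecture is given in the paper's supplementary), and the sample layout is an assumption for illustration.

```python
import torch
import torch.nn as nn

class NodeScoreGCN(nn.Module):
    """Tiny stand-in GCN: two rounds of neighbor aggregation over the K-hop subgraph of the
    good nodes, followed by a scalar head that predicts score'(v)."""
    def __init__(self, in_dim=1, hid=32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid)
        self.lin2 = nn.Linear(hid, hid)
        self.head = nn.Linear(hid, 1)

    def forward(self, x, adj):
        # adj: row-normalized adjacency of the induced subgraph (dense here for brevity).
        h = torch.relu(self.lin1(adj @ x))
        h = torch.relu(self.lin2(adj @ h))
        return self.head(h).squeeze(-1)

def gcn_loss(model, samples):
    """Eq. (2): mean squared error between score(v) and score'(v), averaged over the good nodes
    of each sampled (graph, budget) pair. Each sample carries node features 'x', adjacency 'adj',
    indices of good nodes 'good_idx', and their probabilistic-greedy scores 'score'."""
    loss = 0.0
    for s in samples:
        pred = model(s["x"], s["adj"])[s["good_idx"]]
        loss = loss + ((pred - s["score"]) ** 2).mean()
    return loss / len(samples)
```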
3.3 Learning Q-function

While the GCN captures the individual importance of a node, Q-learning [29] learns the combinatorial aspect in a budget-independent manner. Given a set of nodes S and a node v ∉ S, we predict the n-step reward, Q_n(S, v), for adding v to set S (action) via the surrogate function Q′_n(S, v; Θ_Q).
Defining the framework: We define the Q-learning task in terms of state space, action, reward, policy, and termination, with the input being a set of nodes and their predicted scores.
• State space: The state space characterizes the state of the system at any time step t in terms of the candidate nodes being considered, i.e., C_t = V^g \ S_t, with respect to the partially computed solution set S_t; V^g represents the set of good nodes from a training graph. In a combinatorial problem over nodes, two factors have a strong influence: (1) the individual quality of a node, and (2) its locality. The quality of a node v is captured through score′(v). Locality is an important factor since two high-quality nodes from the same neighborhood may not be good collectively. The locality of a node v ∈ C_t is defined as: loc(v, S_t) = |N(v) \ ∪_{u∈S_t} N(u)| (3), where N(v) = {v′ ∈ V | (v, v′) ∈ E} are the neighbors of v (a short code sketch of this locality measure appears after this list). Note that N(v) may contain noisy nodes since they contribute to the locality of v ∈ V^g; however, locality (and Q-learning in general) is computed only on good nodes. The initial representation µ_v of each node v ∈ C_t is therefore the 2-dimensional vector [score′(v), loc(v, S_t)]. The representation of the set of nodes C_t is defined as µ_{C_t} = MAXPOOL{µ_v | v ∈ C_t}; µ_{S_t} is defined analogously. We use MAXPOOL since it captures the best available candidate node better than alternatives such as MEANPOOL, and it also yields better results empirically.
• Action and Reward: An action corresponds to adding a node v ∈ C_t to the solution set S_t. The immediate (0-step) reward of the action is its marginal gain, i.e., r(S_t, v) = f(S_t ∪ {v}) − f(S_t).
• Policy and Termination: The policy π(v | S_t) selects the node with the highest predicted n-step reward, i.e., arg max_{v∈C_t} Q′_n(S_t, v; Θ_Q). We terminate after training the model for T samples.
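Below is a small sketch of the locality measure in Eq. (3) and of the 2-dimensional node representation; representing the adjacency as a plain dict of neighbor sets is an assumption made for illustration.

```python
def locality(v, solution, neighbors):
    """Eq. (3): number of v's neighbors not already covered by neighbors of the partial solution."""
    covered = set()
    for u in solution:
        covered |= neighbors[u]
    return len(neighbors[v] - covered)

def node_representation(v, solution, neighbors, score_prime):
    """Initial 2-d representation mu_v = [score'(v), loc(v, S_t)]."""
    return [score_prime[v], locality(v, solution, neighbors)]

def set_representation(members, solution, neighbors, score_prime):
    """MAXPOOL over member representations, as used for mu_{C_t} and mu_{S_t}."""
    reps = [node_representation(v, solution, neighbors, score_prime) for v in members]
    return [max(col) for col in zip(*reps)] if reps else [0.0, 0.0]
```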
Learning the parameter set Θ_Q: We partition Θ_Q into three weight matrices Θ_1, Θ_2, Θ_3 and one weight vector Θ_4 such that Q′_n(S_t, v; Θ_Q) = Θ_4 · µ_{C_t,S_t,v}, where µ_{C_t,S_t,v} = CONCAT(Θ_1 · µ_{C_t}, Θ_2 · µ_{S_t}, Θ_3 · µ_v). If we want to encode the state space in a d-dimensional layer, the dimensions of the weight matrices are as follows: Θ_4 ∈ R^{1×3d}; Θ_1, Θ_2, Θ_3 ∈ R^{d×2}. Q-learning updates the parameters in a single episode via the Adam optimizer [15] to minimize the squared loss.
J(Θ_Q) = \left( y − Q′_n(S_t, v_t; Θ_Q) \right)^2, \quad \text{where } y = γ · \max_{v∈V^g} Q′_n(S_{t+n}, v; Θ_Q) + \sum_{i=0}^{n−1} r(S_{t+i}, v_{t+i})
γ is the discount factor and balances the importance of immediate reward with the predicted n-step future reward [29]. The pseudocode with more details is provided in the Supplementary (App. C).
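A PyTorch-style sketch of Q′_n and the n-step target is given below; the dimensions follow the text (Θ_1, Θ_2, Θ_3 ∈ R^{d×2}, Θ_4 ∈ R^{1×3d}), while the choice of d and the training-loop details are assumptions.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Q'_n(S_t, v) = Theta4 · concat(Theta1·mu_{C_t}, Theta2·mu_{S_t}, Theta3·mu_v)."""
    def __init__(self, d=16):
        super().__init__()
        self.theta1 = nn.Linear(2, d, bias=False)   # embeds the 2-d mu_{C_t}
        self.theta2 = nn.Linear(2, d, bias=False)   # embeds mu_{S_t}
        self.theta3 = nn.Linear(2, d, bias=False)   # embeds mu_v
        self.theta4 = nn.Linear(3 * d, 1, bias=False)

    def forward(self, mu_c, mu_s, mu_v):
        z = torch.cat([self.theta1(mu_c), self.theta2(mu_s), self.theta3(mu_v)], dim=-1)
        return self.theta4(z).squeeze(-1)

def n_step_target(rewards, q_next_max, gamma):
    """y = gamma * max_v Q'_n(S_{t+n}, v) + sum_{i=0}^{n-1} r(S_{t+i}, v_{t+i})."""
    return gamma * q_next_max + sum(rewards)

# One squared-loss update with Adam, given tensors mu_c, mu_s, mu_v and a scalar target y:
#   loss = (y - qnet(mu_c, mu_s, mu_v)) ** 2
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
```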
3.3.1 Importance Sampling for Fast Locality Computation
Since the degrees of nodes in real graphs may be very high, computing locality (Eq. 3) is expensive. Furthermore, locality is re-computed in each iteration. We remove this computational bottleneck through importance sampling. Let N(V^g) = {(v, u) ∈ E | v ∈ V^g} be the neighbors of all nodes in V^g. Given a sample size z, we extract a subset N_z(V^g) ⊆ N(V^g) of size z and compute locality only based on the nodes in N_z(V^g). Importance sampling samples elements proportionally to their importance. The importance of a node in N(V^g) is defined as I(v) = score′(v) / \sum_{v′∈N(V^g)} score′(v′).
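A sketch of this sampling step is shown below; the constant c in the sample-size formula is illustrative (the paper only gives the asymptotic order), and the data structures are assumptions.

```python
import numpy as np

def sample_neighborhood(neighborhood, score_prime, eps=0.05, c=1.0,
                        rng=np.random.default_rng(0)):
    """Importance-sample z nodes from N(V^g), each drawn with probability
    I(v) = score'(v) / sum_{v'} score'(v'), with z = O(log|N(V^g)| / eps^2).
    Locality (Eq. 3) is then estimated using only the sampled nodes."""
    nodes = list(neighborhood)
    scores = np.array([score_prime[v] for v in nodes], dtype=float)
    probs = scores / scores.sum()
    z = min(len(nodes), int(np.ceil(c * np.log(len(nodes)) / eps ** 2)))
    idx = rng.choice(len(nodes), size=z, replace=True, p=probs)
    return [nodes[i] for i in idx]
```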
Determining sample size: Let µ_{N(V^g)} be the mean importance of all nodes in N(V^g) and µ̂_{N_z(V^g)} the mean importance of the sampled nodes. The sampling is accurate if µ_{N(V^g)} ≈ µ̂_{N_z(V^g)}.

Theorem 1. Given an error bound ε, if the sample size z is O\left( \frac{\log |N(V^g)|}{ε^2} \right), then P\left[ |µ̂_{N_z(V^g)} − µ_{N(V^g)}| < ε \right] > 1 − \frac{1}{|N(V^g)|^2}.

Remarks: (1) The sample size grows only logarithmically with the neighborhood size |N(V^g)| and is thus scalable to large graphs. (2) z is inversely proportional to the square of the error bound ε.
3.4 Test Phase
Given an unseen graph G and budget b, we (1) identify and prune the noisy nodes, (2) embed the good nodes through a single forward pass through the GCN, and (3) feed the GCN output to Q-learning to compute the final solution set.
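A minimal sketch of this three-step roll-out is shown below; the good nodes and score′ values are assumed to come from the noise predictor and the GCN forward pass, and the q_value callable is a placeholder wrapping the trained Q-network.

```python
def gcomb_inference(good_nodes, score_prime, q_value, budget):
    """Test-phase roll-out after noise pruning and the single GCN forward pass.
    q_value(candidates, solution, v) -> float is supplied by the caller and wraps the Q-network;
    score_prime is kept only to emphasize that the GCN is evaluated exactly once."""
    solution = []
    for _ in range(budget):
        candidates = [v for v in good_nodes if v not in solution]
        if not candidates:
            break
        best = max(candidates, key=lambda v: q_value(candidates, solution, v))
        solution.append(best)
    return solution
```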
Complexity analysis: The time complexity of the test phase in GCOMB is O(|V| + |V^{g,K}| (d·m_G + m_G^2) + |V^g| · b · (d + m_Q)), where d is the average degree of a node, m_G and m_Q are the dimensions of the embeddings in the GCN and Q-learning respectively, K is the number of layers in the GCN, and V^{g,K} represents the set of nodes within the K-hop neighborhood of V^g. The space complexity is O(|V| + |E| + K·m_G^2 + m_Q). The derivations are provided in App. D.
4 Empirical Evaluation
In this section, we benchmark GCOMB against GCN-TREESEARCH and S2V-DQN, and establish that GCOMB produces marginally improved quality while being orders of magnitude faster. The source code can be found at https://github.com/idea-iitd/GCOMB .
4.1 Experimental Setup
All experiments are performed on a machine running Intel Xeon E5-2698v4 processor with 64 cores, having 1 Nvidia 1080 Ti GPU card with 12GB GPU memory, and 256 GB RAM with Ubuntu 16.04. All experiments are repeated 5 times and we report the average of the metric being measured.
Datasets: Table 1a lists the real datasets used for our experiments. Random Bipartite Graphs (BP): We also use the synthetic random bipartite graphs from S2V-DQN [7]. In this model, given the number of nodes, they are partitioned into two sets, with 20% of the nodes on one side and the rest on the other. An edge between any pair of nodes from different partitions is generated with probability 0.1. We use BP-X to denote a generated bipartite graph with X nodes.
Problem Instances: The performance of GCOMB is benchmarked on Influence Maximization (IM), Maximum Vertex Cover (MVC), and Maximum Coverage Problem (MCP) (§ 2). Since MVC can be mapped to MCP, empirical results on MVC are included in App. M.
Baselines: The performance of GCOMB is primarily compared with (1) GCN-TREESEARCH [19], which is the state-of-the-art technique for learning combinatorial algorithms. In addition, for MCP, we also compare the performance with (2) Greedy (Alg. 1 in App. B), (3) S2V-DQN [7], (4) CELF [17], and (5) the Optimal solution set (obtained using CPLEX [12] on small datasets). Greedy and CELF guarantee a 1 − 1/e approximation for all three problems. We also compare with (6) Stochastic Greedy (SG) [21] in App. L. For the problem of IM, we also compare with the state-of-the-art algorithm (7) IMM [31]. Additionally, we also compare GCOMB with (8) OPIM [30]. For S2V-DQN, GCN-TREESEARCH, IMM, and OPIM, we use the code shared by the authors.
Training: In all our experiments, for a fair comparison of GCOMB with S2V-DQN and GCN-TREESEARCH, we train all models for 12 hours and the best-performing model on the validation set is used for inference. Nonetheless, we precisely measure the impact of training time in Fig. 2a. The break-up of the time spent in each of the three training phases is shown in App. G in the Supplementary.
Parameters: The parameters used for GCOMB are outlined in App. H and their impact on performance is analyzed in App. N. For S2V-DQN and GCN-TREESEARCH, the best-performing parameter values are identified using grid search. In IMM, we set ε = 0.5 as suggested by the authors. In OPIM, ε is recommended to be kept in the range [0.01, 0.1]; thus, we set ε = 0.05.
4.2 Performance on Max Cover (MCP)
We evaluate the methods on both synthetic random bipartite (BP) graphs and real networks. Train-Validation-Test split: While testing on any synthetic BP graph, we train and validate on five BP-1k graphs each. For real graphs, we train and validate on BrightKite (BK) (50:50 split for training and validation) and test on the other real networks. Since our real graphs are not bipartite, we convert each of them into a bipartite graph by making two copies of V: V_1 and V_2. We add an edge from u ∈ V_1 to u′ ∈ V_2 if (u, u′) ∈ E (a small sketch of this conversion follows the comparison below). Comparison with Greedy and Optimal: Table 1b presents the achieved coverage (recall § 2 for the definition of coverage). We note that Greedy provides an empirical approximation ratio of at least 99% when compared to the optimal. This indicates that on larger datasets, where we are unable to compute the optimal, Greedy can be assumed to be sufficiently close to the optimal. Second, GCOMB is sometimes able to perform even better than Greedy. This indicates that Q-learning is able to learn a more generalized policy through delayed rewards and avoid a myopic view of the solution space.
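The bipartite conversion described above can be sketched as follows; handling of undirected inputs (adding both orientations) is an assumption not stated in the text.

```python
def to_bipartite(edges):
    """Turn a general graph into the bipartite MCP instance used above: each node u gets a
    left copy ('L', u) and a right copy ('R', u), and every edge (u, v) becomes a
    left-to-right edge from ('L', u) to ('R', v)."""
    return [(("L", u), ("R", v)) for (u, v) in edges]
```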
Synthetic Datasets: Table 2a presents the results. GCOMB and Greedy achieve the highest coverage consistently. While S2V-DQN performs marginally better than GCN-TREESEARCH, S2V-DQN is the least scalable among all techniques; it runs out of memory on graphs containing more than 20,000 nodes. As discussed in detail in § 1.2, the non-scalability of S2V-DQN stems from relying on an architecture with a significantly larger parameter set than GCOMB or GCN-TREESEARCH. In contrast, GCOMB avoids noisy nodes and focuses the search operation only on the good nodes.
Impact of training time: A complex model with a larger number of parameters learns more slowly. In Fig. 2a, we measure the coverage against the training time. While GCOMB's performance saturates within 10 minutes, S2V-DQN and GCN-TREESEARCH need 9 and 5 hours of training respectively to obtain their best performance.
Real Datasets: Figs. 2b and 2c present the achieved coverage as the budget is varied. GCOMB achieves similar quality to Greedy, while GCN-TREESEARCH is marginally inferior. The real impact of GCOMB is highlighted in Figs. 2d and 2e, which show that GCOMB is up to 2 orders of magnitude faster than GCN-TREESEARCH and 10 times faster than Greedy. A similar conclusion can be drawn from the results on the Gowalla dataset in App. K in the Supplementary.
Comparison with CELF: Table 2b presents the speed-up achieved by GCOMB against CELF. The first pass of CELF involves sorting the nodes, which has complexity O(|V| log |V|). On the other hand, no such sorting is required in GCOMB. Thus, the speed-up achieved is higher for smaller budgets.
4.3 Performance on Influence Maximization
Influence Maximization (IM) is the hardest of the three combinatorial problems since estimating the spread of a node is #P-hard [14].
Edge weights: We assign edge weights that denote the influence of a connection using two popular models [2]: (1) Constant (CO): all edge weights are set to 0.1; (2) Tri-valency (TV): edge weights are sampled randomly from the set {0.1, 0.01, 0.001}. In addition, we also employ a third, (3) Learned (LND), model, where we learn the influence probabilities from the action logs of users. This is only applicable to the Stack data, which contain action logs from 8/2008 to 3/2016. We define the influence of u on v as the probability of v interacting with u's content at least once in a month.
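A small sketch of the CO and TV weight assignments follows; the LND model is data-driven (estimated from action logs) and is not reproduced here.

```python
import random

def assign_edge_weights(edges, model="CO", rng=random.Random(0)):
    """Constant (CO): every edge weight is 0.1.
    Tri-valency (TV): each weight is sampled uniformly from {0.1, 0.01, 0.001}."""
    if model == "CO":
        return {e: 0.1 for e in edges}
    if model == "TV":
        return {e: rng.choice([0.1, 0.01, 0.001]) for e in edges}
    raise ValueError("unknown edge-weight model: " + model)
```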
Train-Validation-Test split: In all of the subsequent experiments, for CO and TV edge weight models, we train and validate on a subgraph sampled out of YT by randomly selecting 30% of the edges (50% of this subset is used for training and 50% is used for validation). For LND edge weight models, we train and validate on the subgraph induced by the 30% of the earliest edges from Stack in terms of temporal order. While testing, on YT and Stack, we use the graph formed by the remaining 70% of the edges that are not used for training. On other datasets, we use the entire graph for testing since neither those datasets nor their subsets are used for training purposes.
GCOMB vs. GCN-TREESEARCH: Fig. 3a compares the running time in IM on progressively larger subgraphs extracted from YT. While GCN-TREESEARCH consumes ≈ 3 hours on the 70% sub-graph, GCOMB finishes in 5 seconds.
GCOMB vs. NOISEPRUNER+CELF: NOISEPRUNER+CELF, i.e., running CELF only on the non-noisy nodes, is orders of magnitude slower than GCOMB in IM (see Fig. 3d). Pruning noisy nodes does not reduce the graph size; it only reduces the number of candidate nodes. To compute the expected spread in IM, we still require the entire graph, resulting in non-scalability.
Billion-sized graphs: IMM crashes on both of the billion-sized datasets, TW and FS, as well as on Orkut. Unsurprisingly, similar results have been reported in [2]. IMM strategically samples a subgraph of the entire graph based on the edge weights. On this sampled subgraph, it estimates the influence of a node using reverse-reachability sets. On large graphs, the sample size exceeds the RAM capacity of 256 GB; hence, it crashes. In contrast, GCOMB finishes within minutes for smaller budgets (b < 30) and within 100 minutes for the larger budgets of 100 and 200 (Figs. 3g-3h). This massive scalability of GCOMB is a result of its low storage overhead (only the graph and the GCN and Q-learning parameters; the detailed space complexity is provided in App. D in the Supplementary) and of relying on just forward passes through the GCN and Q-learning. The speed-up with respect to OPIM on billion-sized graphs can be seen in App. J.
Performance on YT and Stack: Since IMM crashes on Orkut, TW, and FS, we compare the quality of GCOMB with IMM on YT and Stack. Table 3a reports the results in terms of spread difference, where Spread Difference = (f(S_IMM) − f(S_GCOMB)) / f(S_IMM) × 100. S_IMM and S_GCOMB are the answer sets computed by IMM and GCOMB respectively. A negative spread difference indicates better performance by GCOMB. The expected spread of a given set of nodes S, i.e., f(S), is computed by taking the average spread across 10,000 Monte Carlo simulations.
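For concreteness, a Monte Carlo spread estimator of this kind is sketched below, assuming the independent-cascade (IC) diffusion model; the paper's diffusion model M is specified in App. A, so the IC choice here is an assumption.

```python
import random

def ic_spread(seeds, out_edges, weights, simulations=10_000, rng=random.Random(0)):
    """Estimate f(S) by averaging, over many simulations, the number of nodes reached when
    each edge (u, v) independently activates v with probability weights[(u, v)]."""
    total = 0
    for _ in range(simulations):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in out_edges.get(u, ()):
                if v not in active and rng.random() < weights[(u, v)]:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / simulations
```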
Table 3a shows that the expected spreads obtained by the two techniques are extremely close. The true impact of GCOMB is realized when Table 3a is considered in conjunction with Figs. 3b-3c, which show that GCOMB is 30 to 160 times faster than IMM. In this plot, speed-up is measured as time_IMM / time_GCOMB, where time_IMM and time_GCOMB are the running times of IMM and GCOMB respectively.
Similar behavior is observed when compared against OPIM, as seen in Table 3b and Figs. 3e-3f.
4.4 Design Choices
Impact of Q-learning: Since the GCN predicts the expected marginal gain of a node, why not simply select the top-b nodes with the highest predicted marginal gains for the given budget b? This is a pertinent question since, as visible in Fig. 3i, the majority of the time in GCOMB is spent on Q-learning. Fig. 3j shows that Q-learning imparts an additional coverage of up to 10%. Improvement (%) is quantified as (Coverage_GCOMB − Coverage_GCN) / Coverage_GCN × 100.
Impact of Noise Predictor: Fig. 3k presents the impact of the noise predictor, which amounts to close to a two-orders-of-magnitude reduction in running time. This improvement, however, does not come at the cost of efficacy (Fig. 3l). In fact, the quality improves slightly due to the removal of noisy nodes.
5 Conclusion
S2V-DQN [7] initiated the promising direction of learning combinatorial algorithms on graphs. GCN-TREESEARCH [19] pursued the same line of work and enhanced scalability to larger graphs. However, the barrier to million and billion-sized graphs remained. GCOMB removes this barrier with a new lightweight architecture. In particular, GCOMB uses a phase-wise mixture of supervised and reinforcement learning. While the supervised component predicts individual node qualities and prunes those that are unlikely to be part of the solution set, the Q-learning architecture carefully analyzes the remaining high-quality nodes to identify those that collectively form a good solution set. This architecture allows GCOMB to generalize to unseen graphs of significantly larger sizes and convincingly outperform the state of the art in efficiency and efficacy. Nonetheless, there is scope for improvement. GCOMB is limited to set combinatorial problems on graphs. In future, we will explore a bigger class of combinatorial algorithms such as sequential and capacity constrained problems.
Broader Impact
The need to solve NP-hard combinatorial problems on graphs routinely arises in several real-world settings. Examples include facility-location problems on road networks [20], strategies to combat rumor propagation in online social networks [3], computational sustainability [8], and health-care [33]. Each of these problems plays an important role in our society. Consequently, designing effective and efficient solutions is important, and our current work is a step in that direction. The major impact of this paper is that good heuristics for NP-hard problems can be learned for large-scale data. While we are not the first to observe that heuristics for combinatorial algorithms can be learned, we are the first to make them scale to billion-sized graphs, thereby bringing an algorithmic idea to practical use cases.
Acknowledgments and Disclosure of Funding
The project was partially supported by the National Science Foundation under award IIS-1817046. Further, Sahil Manchanda acknowledges the financial support from the Ministry of Human Resource Development (MHRD) of India and the Department of Computer Science and Engineering, IIT Delhi. | 1. What is the focus and contribution of the paper on combinatorial optimization algorithms?
2. What are the strengths of the proposed approach, particularly in terms of its empirical performance?
3. What are the weaknesses of the paper, especially regarding its lack of theoretical guarantees and novelty? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper introduces a new algorithm to learn combinatorial optimization algorithms on graphs. The method applies to the problems of maximum coverage and influence maximization. The new method is inspired by, and improves on, recent work on learning combinatorial optimization algorithms on graphs. It relies on techniques of graph embedding and reinforcement learning. The method is empirically compared with previous learning algorithms for combinatorial problems on graphs, as well as with state-of-the-art combinatorial algorithms. The results show that the new method achieves solutions of the same quality, but it is much faster.
Strengths
1. The problems studied in the paper are interesting, and the techniques are relevant to the NeurIPS community. 2. The empirical results are quite convincing. In particular, there has been a lot of work on combinatorial algorithms for influence maximization, and improving over the state-of-the-art methods is a very good result.
Weaknesses
1. The method is mainly heuristic; there is no guarantee on the performance of the new method. Accordingly, the quality of the method can be judged only empirically on the datasets that have been tested. 2. I am not an expert in the area, but my impression is that the novelty of the work is somewhat limited. In particular, the novelty is mainly in refining the framework of Dai et al. [4] for the particular problem, and in introducing the components of noise predictor and importance sampling for scalability.
NIPS | Title
GCOMB: Learning Budget-constrained Combinatorial Algorithms over Billion-sized Graphs
Abstract
There has been an increased interest in discovering heuristics for combinatorial problems on graphs through machine learning. While existing techniques have primarily focused on obtaining high-quality solutions, scalability to billion-sized graphs has not been adequately addressed. In addition, the impact of budget constraints, which are necessary in many practical scenarios, remains to be studied. In this paper, we propose a framework called GCOMB to bridge these gaps. GCOMB trains a Graph Convolutional Network (GCN) using a novel probabilistic greedy mechanism to predict the quality of a node. To further facilitate the combinatorial nature of the problem, GCOMB utilizes a Q-learning framework, which is made efficient through importance sampling. We perform extensive experiments on real graphs to benchmark the efficiency and efficacy of GCOMB. Our results establish that GCOMB is 100 times faster and marginally better in quality than state-of-the-art algorithms for learning combinatorial algorithms. Additionally, a case study on the practical combinatorial problem of Influence Maximization (IM) shows GCOMB is 150 times faster than the specialized IM algorithm IMM with similar quality.
1 Introduction and Related Work
Combinatorial optimization problems on graphs appear routinely in various applications such as viral marketing in social networks [14, 4], computational sustainability [8], health-care [33], and infrastructure deployment [20, 23, 24, 22]. In these set combinatorial problems, the goal is to identify the set of nodes that optimizes a given objective function. These optimization problems are often NP-hard. Therefore, designing an exact algorithm is infeasible and polynomial-time algorithms, with or without approximation guarantees, are often desired and used in practice [13, 31]. Furthermore, these graphs are often dynamic in nature and the approximation algorithms need to be run repeatedly at regular intervals. Since real-world graphs may contain millions of nodes and edges, this entire process becomes tedious and time-consuming.
To provide a concrete example, consider the problem of viral marketing on social networks through Influence Maximization [2, 14]. Given a budget b, the goal is to select b nodes (users) such that their endorsement of a certain product (e.g., through a tweet) is expected to initiate a cascade that reaches the largest number of nodes in the graph. This problem is NP-hard [14]. Advertising through social networks is a common practice today and needs to be solved repeatedly due to the graphs being dynamic in nature. Furthermore, even the greedy approximation algorithm does not scale to large graphs [2], resulting in a large body of research work [31, 13, 16, 26, 14, 5, 32, 6].

∗ denotes equal contribution

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
At this juncture, we highlight two key observations. First, although the graph is changing, the underlying model generating the graph is likely to remain the same. Second, the nodes that get selected in the answer set of the approximation algorithm may have certain properties in common. Motivated by these observations, we ask the following question [7]: Given a set combinatorial problem P on graph G and its corresponding solution set S, can we learn an approximation algorithm for problem P and solve it on an unseen graph that is similar to G?
1.1 Limitations of Existing Work
The above observations were first highlighted by S2V-DQN [7], where they show that it is indeed possible to learn combinatorial algorithms on graphs. Subsequently, an improved approach was proposed in GCN-TREESEARCH [19]. Despite these efforts, there is scope for further improvement.
• Scalability: The primary focus of both GCN-TREESEARCH and S2V-DQN has been on obtaining quality that is as close to the optimal as possible. Efficiency studies, however, are limited to graphs containing only hundreds of thousands of nodes. To provide a concrete case study, we apply GCN-TREESEARCH to the Influence Maximization problem on the YouTube social network. We observe that GCN-TREESEARCH takes one hour on a graph containing a million edges (Fig. 3a; we will revisit this experiment in § 4.3). Real-life graphs may contain billions of edges (see Table 1a).
• Generalizability to real-life combinatorial problems: GCN-TREESEARCH proposes a learning-based heuristic for the Maximal Independent Set (MIS) problem. When the combinatorial problem is not MIS, GCN-TREESEARCH suggests mapping that problem to MIS. Consequently, for problems that are not easily mappable to MIS, the efficacy is compromised (e.g., Influence Maximization).
• Budget constraints: Both GCN-TREESEARCH and S2V-DQN solve the decision versions of combinatorial problems (e.g., set cover, vertex cover). In real life, we often encounter their budget-constrained versions, such as max-cover and Influence Maximization [14].
Among other related work, Gasse et al. [9] used GCNs for learning branch-and-bound variable-selection policies, whereas Prates et al. [27] focused on solving the Travelling Salesman Problem. However, the techniques proposed in these papers do not directly apply to our setting of set combinatorial problems.
1.2 Contributions
At the core of our study lies the observation that although the graph may be large, only a small percentage of the nodes are likely to contribute to the solution set. Thus, pruning the search space is as important as predicting the solution set. Both S2V-DQN [7] and GCN-TREESEARCH [19] have primarily focused on the prediction component. In particular, S2V-DQN learns an end-to-end neural model on the entire graph through reinforcement learning. The neural model integrates node embedding and Q-learning into a single integrated framework. Consequently, the model is bogged down by a large number of parameters, which need to be learned over the entire node set. As a result, we will show in § 4 that S2V-DQN fails to scale to graphs beyond 20,000 nodes.
On the other hand, GCN-TREESEARCH employs a two-component framework: (1) a graph convolutional network (GCN) to learn and predict the individual value of each node, and (2) a tree-search component to analyze the dependence among nodes and identify the solution set that collectively works well. Following tree-search, the GCN is repeated on a reduced graph, and this process continues iteratively. This approach is not scalable to large graphs due to the repeated iterations of GCN and tree-search, where each tree-search iteration has O(|E|) complexity; E is the set of edges. Our method GCOMB builds on the observation that computationally expensive predictions should be attempted only for promising nodes. Towards that end, GCOMB has two separate components: (1) a GCN to prune poor nodes and learn embeddings of good nodes in a supervised manner, and (2) a Q-learning component that focuses only on the good nodes to predict the solution set. Thus, unlike S2V-DQN, GCOMB uses a mixture of supervised and reinforcement learning, and does not employ an end-to-end architecture. Consequently, the prediction framework is lightweight with a significantly reduced number of parameters.
When compared to GCN-TREESEARCH, although both techniques use a GCN, in GCOMB, we train using a novel probabilistic greedy mechanism. Furthermore, instead of an iterative procedure of repeated GCN and TreeSearch calls, GCOMB performs a single forward pass through GCN
during inference. In addition, unlike TreeSearch, which is specifically tailored for the MIS problem, GCOMB is problem-agnostic 2. Finally, unlike both S2V-DQN and GCN-TREESEARCH, GCOMB uses lightweight operations to prune poor nodes and focus expensive computations only on nodes with a high potential of being part of the solution set. The pruning of the search space not only enhances scalability but also removes noise from the search space leading to improved prediction quality. Owing to these design choices, (1) GCOMB is scalable to billion-sized graphs and up to 100 times faster, (2) on average, computes higher quality solution sets than S2V-DQN and GCN-TREESEARCH, and (3) improves upon the state-of-the-art algorithm for Influence Maximization on social networks.
2 Problem Formulation
Objective: Given a budget-constrained set combinatorial problem P over graphs drawn from distribution D, learn a heuristic to solve problem P on an unseen graph G generated from D.
Next, we describe three instances of budget-constrained set combinatorial problems on graphs.
Maximum Coverage Problem on a bipartite graph (MCP): Given a bipartite graph G = (V, E), where V = A ∪ B, and a budget b, find a set S* ⊆ A of b nodes such that coverage is maximized. The coverage of set S* is defined as f(S*) = |X| / |B|, where X = {j | (i, j) ∈ E, i ∈ S*, j ∈ B}.
Budget-constrained Maximum Vertex Cover (MVC): Given a graph G = (V, E) and a budget b, find a set S* of b nodes such that the coverage f(S*) of S* is maximized, where f(S*) = |X| / |E| and X = {(i, j) | (i, j) ∈ E, i ∈ S*, j ∈ V}.
Influence Maximization (IM) [2]: Given a budget b, a social network G, and an information diffusion model M, select a set S* of b nodes such that the expected diffusion spread f(S*) = E[Γ(S*)] is maximized (see App. A in the supplementary for more details).
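The two coverage objectives can be computed directly from their definitions, as in the short sketch below (IM's spread requires simulation and is handled separately in § 4.3).

```python
def mcp_coverage(S, edges, B):
    """MCP objective: fraction of right-side nodes in B covered by the chosen left-side set S."""
    covered = {j for (i, j) in edges if i in S}
    return len(covered) / len(B)

def mvc_coverage(S, edges):
    """Budget-constrained MVC objective: fraction of edges with at least one endpoint in S."""
    covered = [(i, j) for (i, j) in edges if i in S or j in S]
    return len(covered) / len(edges)
```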
3 GCOMB
The input to the training phase is a set of graphs and the optimization function f(·) corresponding to the combinatorial problem at hand. The output is a sequence of two separate neural networks, a GCN [10] and a Q-learning network, with their corresponding learned parameters ΘG and ΘQ respectively. In the testing phase, the inputs include a graph G = (V,E), the optimization function f(·) and the budget b. The output of the testing phase is the solution set of nodes constructed using the learned neural networks. Fig. 1 presents the training pipeline. We will now discuss each of the phases.
3.1 Generating Training Data for GCN
Our goal is to learn node embeddings that can predict “quality”, and thereby, identify those nodes that are likely to be part of the answer set. We could adopt a classification-based method, where, given a training graph G = (V,E), budget b and its solution set S, a node v is called positive if v ∈ S; otherwise it is negative. This approach, however, assumes all nodes that are not a part of S to be equally bad. In reality, this may not be the case. Consider the case where f({v1})=f({v2}), but the marginal gain of node v2 given S = {v1}, i.e., f({v1, v2}) − f({v1}), is 0 and vice versa. In this scenario, only one of v1 and v2 would be selected in the answer set although both are of equal quality on their own.
2We are, however, limited to set combinatorial problems only.
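A tiny max-cover instance illustrating the scenario described above (two nodes with equal individual value but zero joint marginal gain); the instance itself is invented purely for illustration.

```python
# v1 and v2 cover exactly the same right-side nodes: individually identical,
# but the second one adds nothing once the first is chosen.
edges = [("v1", "b1"), ("v1", "b2"), ("v2", "b1"), ("v2", "b2")]
B = {"b1", "b2"}
f = lambda S: len({j for (i, j) in edges if i in S}) / len(B)

assert f({"v1"}) == f({"v2"}) == 1.0          # equal individual quality
assert f({"v1", "v2"}) - f({"v1"}) == 0.0     # zero marginal gain for v2 given {v1}
```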
1. What is the focus and contribution of the paper regarding solving combinatorial set problems on graphs?
2. What are the strengths of the proposed approach compared to previous learning-based methods?
3. What are the weaknesses of the paper, particularly regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the scope of applicability and performance improvements of the proposed approach? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors present a scalable learning-based heuristic approach for a set of hard cardinality-constrained combinatorial set problems on graphs, studying the empirical performance on cardinality constrained submodular maximization settings using real-world and synthetic data, and giving time and space complexities for the system’s various components. The approach seems to have better scalability than previous approaches for learning combinatorial algorithms on graphs. They achieve this scalability by having three sequential modules: - Pruning nodes in the input graph based on incident edge weight - A GCN that predicts a score related to an average of marginal gains from adding that vertex during collected probabilistic greedy solves - Q learning to iteratively add nodes to a solution that predicts the discounted value of adding that vertex given the current set of nodes in the solution as the state. Here the authors further improve runtime by computing a “locality” node feature based on sampling nodes according to their predicted scores from the GCN. The authors evaluate runtime and solution quality to compare their approach to the two learning-based approaches applicable for these problems, and a naïve greedy algorithm, demonstrating improved solution quality and faster runtimes compared to learning approaches and faster runtimes compared to the naïve greedy algorithm, sometimes with marginal solution quality improvement over greedy. The authors present experiments on maximum coverage on bipartite graphs, and influence maximization in the main text, and budget constrained vertex cover in the supplementary information. Additionally, the authors provide a small ablation study to understand the impact of using the Q learning module over just the GCN-based scoring module, as well as the impact on the pipeline from node pruning.
Strengths
The main strengths are in outperforming previous learning-based approaches, providing computational and space complexity, as well as providing a broad set of experiments in the specified domains with diverse settings of large real world and synthetic problems. Additionally, the authors provide a good motivation for their work and situate it well with respect to the relevant literature on learning for combinatorial optimization on graphs. Empirically, the approach seems to outperform previous learning-based approaches to solving combinatorial problems on graphs for cases where greedy algorithms yield a 1-1/e approximation. Additionally, even though there are several components to the proposed complex system, the different components are well motivated and the ablation study hints at all components being necessary for good performance. The method itself is well described and the authors provide relevant code and pseudocode in the appendix. The approach is relatively novel and adds supervised learning components, importance sampling, and node preprocessing to existing work in reinforcement learning for combinatorial optimization. Finally, the approach is relevant to NeurIPS as it approaches combinatorial optimization with a novel learning-based approach that incorporates domain-knowledge, and specialized methods to improve scalability and performance over existing learning-based approaches.
Weaknesses
The main weaknesses of the paper are that the work only uses a naïve version of the greedy algorithm rather than the faster lazy greedy algorithm, and that it seems to claim more than the results suggest without further investigation in terms of the scope of applicability, and performance improvements over the greedy algorithm. The approach seems to be specialized to selecting a set of elements for coverage-like problems and specifically submodular maximization problems which admit greedy approximation algorithms, not necessarily general set combinatorial problems as claimed (it is important to clearly and fairly articulate the claimed scope of the proposed algorithms superior performance). Additionally, the greedy algorithm empirically gives near-optimal performance in the experiments, so it would be useful to know whether this approach performs well for more difficult problems, where greedy is not almost optimal. It would be good to see performance on other more combinatorial problems or nonsubmodular set graph problems, e.g. picking a subset of nodes in a graph to allow spread for IC maximization instead of selecting seed nodes (Sheldon et al 2010) , which may not yield as easily to greedy algorithms. The score supervision used to train the GCN is highly related to the marginal return that greedy would use to score nodes. In addition, the locality metric seems to directly consider the percent of neighbors of a node which are not currently covered by a partial solution, which is directly related to the coverage problems considered in this work. The locality measure and marginal improvement scoring are both related to coverage-like problems but may be potentially less impactful for more combinatorial problems. All three domains are cardinality constrained, and not more generally budget-constrained problems with node weights, hence again it will be important to articulate that distinction or add experiments in weighted budgeted settings. It seems the main benefit for the overall goal of a high-quality fast heuristic is runtime improvement as performance improvements over greedy seem very marginal when they occur. However, the authors don’t compare against CELF (lazy greedy) which will have the same quality guarantees as greedy, but will have faster runtime as it will compute marginal gains for “noisy” nodes once then realistically never update them again. It remains to be seen whether the approach will perform well against this standard scalability method for cardinality constrained submodular maximization. CELF was introduced in Cost-effective Outbreak Detection in Networks, Leskovec et al KDD 2007. 3 domains, max vertex cover (MVC), influence maximization, and maximum coverage, are described but results are only given for influence maximization and maximum coverage in the main text with smaller solution quality improvement results on MVC reported in supplementary as GCOMB doesn’t improve as substantially over GCN-TreeSearch. It would be helpful to include all results in the main text to clearly state the performance improvement in the considered settings. |
NIPS | Title
GCOMB: Learning Budget-constrained Combinatorial Algorithms over Billion-sized Graphs
Abstract
There has been an increased interest in discovering heuristics for combinatorial problems on graphs through machine learning. While existing techniques have primarily focused on obtaining high-quality solutions, scalability to billion-sized graphs has not been adequately addressed. In addition, the impact of budget constraints, which are necessary in many practical scenarios, remains to be studied. In this paper, we propose a framework called GCOMB to bridge these gaps. GCOMB trains a Graph Convolutional Network (GCN) using a novel probabilistic greedy mechanism to predict the quality of a node. To further facilitate the combinatorial nature of the problem, GCOMB utilizes a Q-learning framework, which is made efficient through importance sampling. We perform extensive experiments on real graphs to benchmark the efficiency and efficacy of GCOMB. Our results establish that GCOMB is 100 times faster and marginally better in quality than state-of-the-art algorithms for learning combinatorial algorithms. Additionally, a case-study on the practical combinatorial problem of Influence Maximization (IM) shows GCOMB is 150 times faster than the specialized IM algorithm IMM with similar quality.
1 Introduction and Related Work
Combinatorial optimization problems on graphs appear routinely in various applications such as viral marketing in social networks [14, 4], computational sustainability [8], health-care [33], and infrastructure deployment [20, 23, 24, 22]. In these set combinatorial problems, the goal is to identify the set of nodes that optimizes a given objective function. These optimization problems are often NP-hard. Therefore, designing an exact algorithm is infeasible and polynomial-time algorithms, with or without approximation guarantees, are often desired and used in practice [13, 31]. Furthermore, these graphs are often dynamic in nature and the approximation algorithms need to be run repeatedly at regular intervals. Since real-world graphs may contain millions of nodes and edges, this entire process becomes tedious and time-consuming.
To provide a concrete example, consider the problem of viral marketing on social networks through Influence Maximization [2, 14]. Given a budget b, the goal is to select b nodes (users) such that their endorsement of a certain product (ex: through a tweet) is expected to initiate a cascade that reaches the largest number of nodes in the graph. This problem is NP-hard [14]. Advertising through social networks is a common practice today and needs to be solved repeatedly due to the graphs being dynamic
∗denotes equal contribution
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
in nature. Furthermore, even the greedy approximation algorithm does not scale to large graphs [2] resulting in a large body of research work [31, 13, 16, 26, 14, 5, 32, 6].
At this juncture, we highlight two key observations. First, although the graph is changing, the underlying model generating the graph is likely to remain the same. Second, the nodes that get selected in the answer set of the approximation algorithm may have certain properties common in them. Motivated by these observations, we ask the following question [7]: Given a set combinatorial problem P on graph G and its corresponding solution set S, can we learn an approximation algorithm for problem P and solve it on an unseen graph that is similar to G?
1.1 Limitations of Existing Work
The above observations were first highlighted by S2V-DQN [7], where they show that it is indeed possible to learn combinatorial algorithms on graphs. Subsequently, an improved approach was proposed in GCN-TREESEARCH [19]. Despite these efforts, there is scope for further improvement.
• Scalability: The primary focus of both GCN-TREESEARCH and S2V-DQN has been on obtaining quality that is as close to the optimal as possible. Efficiency studies, however, are limited to graphs containing only hundreds of thousands of nodes. To provide a concrete case study, we apply GCN-TREESEARCH to the Influence Maximization problem on the YouTube social network. We observe that GCN-TREESEARCH takes one hour on a graph containing a million edges (Fig. 3a; we will revisit this experiment in § 4.3). Real-life graphs may contain billions of edges (see Table 1a).
• Generalizability to real-life combinatorial problems: GCN-TREESEARCH proposes a learning-based heuristic for the Maximal Independent Set problem (MIS). When the combinatorial problem is not MIS, GCN-TREESEARCH suggests that we map that problem to MIS. Consequently, for problems that are not easily mappable to MIS, the efficacy is compromised (ex: Influence Maximization).
• Budget constraints: Both GCN-TREESEARCH and S2V-DQN solve the decision versions of combinatorial problems (Ex. set cover, vertex cover). In real life, we often encounter their budget-constrained versions, such as max-cover and Influence Maximization [14].
Among other related work, Gasse et al. [9] used GCN for learning branch-and-bound variable selection policies, whereas Prates et al. [27] focused on solving the Travelling Salesman Problem. However, the proposed techniques in these papers do not directly apply to our setting of set combinatorial problems.
1.2 Contributions
At the core of our study lies the observation that although the graph may be large, only a small percentage of the nodes are likely to contribute to the solution set. Thus, pruning the search space is as important as prediction of the solution set. Both S2V-DQN [7] and GCN-TREESEARCH [19] have primarily focused on the prediction component. In particular, S2V-DQN learns an end-to-end neural model on the entire graph through reinforcement learning. The neural model integrates node embedding and Q-learning into a single integrated framework. Consequently, the model is bogged down by a large number of parameters, which need to be learned on the entire node set. As a result, we will show in § 4 that S2V-DQN fails to scale to graphs beyond 20,000 nodes.
On the other hand, GCN-TREESEARCH employs a two-component framework: (1) a graph convolutional network (GCN) to learn and predict the individual value of each node, and (2) a tree-search component to analyze the dependence among nodes and identify the solution set that collectively works well. Following tree-search, GCN is repeated on a reduced graph and this process continues iteratively. This approach is not scalable to large graphs due to the repeated iterations of GCN and TreeSearch, where each iteration of tree-search has O(|E|) complexity; E is the set of edges. Our method GCOMB builds on the observation that computationally expensive predictions should be attempted only for promising nodes. Towards that end, GCOMB has two separate components: (1) a GCN to prune poor nodes and learn embeddings of good nodes in a supervised manner, and (2) a Q-learning component that focuses only on the good nodes to predict the solution set. Thus, unlike S2V-DQN, GCOMB uses a mixture of supervised and reinforcement learning, and does not employ an end-to-end architecture. Consequently, the prediction framework is lightweight with a significantly reduced number of parameters.
When compared to GCN-TREESEARCH, although both techniques use a GCN, in GCOMB, we train using a novel probabilistic greedy mechanism. Furthermore, instead of an iterative procedure of repeated GCN and TreeSearch calls, GCOMB performs a single forward pass through GCN
during inference. In addition, unlike TreeSearch, which is specifically tailored for the MIS problem, GCOMB is problem-agnostic 2. Finally, unlike both S2V-DQN and GCN-TREESEARCH, GCOMB uses lightweight operations to prune poor nodes and focus expensive computations only on nodes with a high potential of being part of the solution set. The pruning of the search space not only enhances scalability but also removes noise from the search space leading to improved prediction quality. Owing to these design choices, (1) GCOMB is scalable to billion-sized graphs and up to 100 times faster, (2) on average, computes higher quality solution sets than S2V-DQN and GCN-TREESEARCH, and (3) improves upon the state-of-the-art algorithm for Influence Maximization on social networks.
2 Problem Formulation
Objective: Given a budget-constrained set combinatorial problem P over graphs drawn from distribution D, learn a heuristic to solve problem P on an unseen graph G generated from D.
Next, we describe three instances of budget-constrained set combinatorial problems on graphs.
Maximum Coverage Problem on bipartite graph (MCP): Given a bipartite graph G = (V,E), where V = A ∪ B, and a budget b, find a set S* ⊆ A of b nodes such that coverage is maximized. The coverage of set S* is defined as f(S*) = |X| / |B|, where X = {j | (i, j) ∈ E, i ∈ S*, j ∈ B}. Budget-constrained Maximum Vertex Cover (MVC): Given a graph G = (V,E) and a budget b, find a set S* of b nodes such that the coverage f(S*) of S* is maximized, where f(S*) = |X| / |E| and X = {(i, j) | (i, j) ∈ E, i ∈ S*, j ∈ V}. Influence Maximization (IM) [2]: Given a budget b, a social network G, and an information diffusion model M, select a set S* of b nodes such that the expected diffusion spread f(S*) = E[Γ(S*)] is maximized. (See App. A in the supplementary for more details.)
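To make the objective concrete, the following minimal Python sketch computes the MCP coverage f(S); it is not from the paper, and the adjacency-dictionary representation and all names (coverage, adj, num_B) are illustrative assumptions.

```python
# Sketch of the MCP coverage objective f(S) on a bipartite graph with V = A ∪ B.
# `adj` maps each node in A to the set of its neighbors in B (assumed representation).

def coverage(adj, num_B, S):
    """f(S) = |X| / |B|, where X is the set of B-nodes adjacent to some node in S."""
    covered = set()
    for v in S:
        covered |= adj.get(v, set())
    return len(covered) / num_B

# tiny example: A = {0, 1}, B = {10, 11, 12}
adj = {0: {10, 11}, 1: {11, 12}}
print(coverage(adj, 3, {0}))     # 0.666...
print(coverage(adj, 3, {0, 1}))  # 1.0
```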
3 GCOMB
The input to the training phase is a set of graphs and the optimization function f(·) corresponding to the combinatorial problem at hand. The output is a sequence of two separate neural networks, a GCN [10] and a Q-learning network, with their corresponding learned parameters ΘG and ΘQ respectively. In the testing phase, the inputs include a graph G = (V,E), the optimization function f(·) and the budget b. The output of the testing phase is the solution set of nodes constructed using the learned neural networks. Fig. 1 presents the training pipeline. We will now discuss each of the phases.
3.1 Generating Training Data for GCN
Our goal is to learn node embeddings that can predict “quality”, and thereby, identify those nodes that are likely to be part of the answer set. We could adopt a classification-based method, where, given a training graph G = (V,E), budget b and its solution set S, a node v is called positive if v ∈ S; otherwise it is negative. This approach, however, assumes all nodes that are not a part of S to be equally bad. In reality, this may not be the case. Consider the case where f({v1})=f({v2}), but the marginal gain of node v2 given S = {v1}, i.e., f({v1, v2}) − f({v1}), is 0 and vice versa. In this scenario, only one of v1 and v2 would be selected in the answer set although both are of equal quality on their own.
2 We are, however, limited to set combinatorial problems only.
Probabilistic greedy: To address the above issue, we sample from the solution space in a greedy manner and learn embeddings that reflect the marginal gain f(S ∪ {v}) − f(S) provided by a node v towards the solution set S (Alg. 2 in Appendix). To sample from the solution space, in each iteration, instead of selecting the node with the highest marginal gain, we choose a node with probability proportional to its marginal gain. The probabilistic greedy algorithm runs m times to construct m different solution sets S = {S_1, ..., S_m}, and the score of node v ∈ V is set to:

score(v) = ( ∑_{i=1}^{m} gain_i(v) ) / ( ∑_{i=1}^{m} f(S_i) )    (1)

Here, gain_i(v) denotes the marginal gain contribution of v to S_i. Specifically, assume v is added to S_i in the (j+1)-th iteration and let S_i^j be the set of nodes that were added in the first j iterations while constructing S_i. Then, gain_i(v) = f(S_i^j ∪ {v}) − f(S_i^j). In our experiments, m is set to 30 for all three problems of MCP, MVC and IM.
Termination condition of probabilistic greedy: Probabilistic greedy runs till convergence of the marginal gains, i.e., gain_i(v) ≤ ∆, where ∆ is a small value. The goal here is to identify all nodes that could potentially be part of the solution set for any given budget. ∆ in our experiments is set to 0.01 for all three problems of MCP, MVC and IM.
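The probabilistic-greedy scoring above (Eq. 1 and Alg. 2 in the Appendix) can be sketched as follows. This is a simplified illustration under assumptions, not the authors' implementation: the set function f, the node container, and all helper names are placeholders, and efficiency details such as incremental gain updates are omitted.

```python
import random

def probabilistic_greedy_scores(nodes, f, m=30, delta=0.01):
    """Sketch of Eq. 1: run probabilistic greedy m times and return score(v)."""
    gains = {v: 0.0 for v in nodes}   # accumulated marginal gains over the m runs
    total_f = 0.0                      # accumulated f(S_i) over the m runs
    for _ in range(m):
        S, f_S = set(), 0.0
        while True:
            # marginal gain of every remaining candidate node
            mg = {v: f(S | {v}) - f_S for v in nodes if v not in S}
            mg = {v: g for v, g in mg.items() if g > delta}
            if not mg:                 # termination: marginal gains have converged
                break
            # pick a node with probability proportional to its marginal gain
            cand, weights = zip(*mg.items())
            v = random.choices(cand, weights=weights, k=1)[0]
            gains[v] += mg[v]
            S.add(v)
            f_S = f(S)
        total_f += f_S
    return {v: (gains[v] / total_f if total_f > 0 else 0.0) for v in nodes}
```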
3.2 Training the GCN
Our goal in this phase is two-fold: (1) Identify nodes that are unlikely to be part of the solution set and are therefore noise in the context of our problem; (2) Learn a predictive model for node quality.
Noise predictor: The noise predictor should be lightweight so that expensive computations are reserved only for the good nodes. With this goal, we exploit the first layer information of the GCN and learn a classifier to predict, for a given budget b, whether a node can be safely pruned without affecting the quality of the solution set. Typically, the first layer of a GCN contains the raw features of nodes that are relevant for the problem being solved. In GCOMB, we use the summation of the outgoing edge weights as node features. Let x_v denote the total outgoing edge weight of node v. To learn the noise predictor, given a set of training graphs {G_1, ..., G_t}, we first sort all nodes based on x_v. Let rank(v, G_i) denote the position of v in the sorted sequence based on x_v in G_i. Furthermore, let S_i^j denote the j-th solution set constructed by probabilistic greedy on G_i. Given a budget b, S_{G_i,b}^j ⊆ S_i^j denotes the subset containing the first b nodes added to S_i^j by probabilistic greedy. Therefore, r_{G_i}^b = max_{j=1}^{m} { max_{v ∈ S_{G_i,b}^j} rank(v, G_i) } represents the lowest rank of any node in a solution set of budget b in G_i. This measure is further generalized to all training graphs in the form of r_max^b = max_{G_i} { r_{G_i}^b }, which represents the lowest rank of any node that has a realistic chance of being included in an answer set of budget b. To generalize across budgets, we compute r_max^{b_i} for a series of budgets {b_1, ..., b_max}, where b_max = max_{G_i} { max_{j=1}^{m} |S_i^j| }. On this data, we can perform curve fitting [1] to predict r_max^b for any (unseen) budget b. In our experiments, we use linear interpolation. To generalize across graph sizes, all of the above computations are performed on normalized budgets, where b is expressed in terms of the proportion of nodes with respect to the node set size of the graph. Similarly, the rank rank(v, G_i) is expressed as a percentile.
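A minimal sketch of the budget-to-rank-threshold step is given below, assuming linear interpolation as stated in the text; numpy.interp, the function name, and the toy numbers are illustrative choices, not the paper's code.

```python
import numpy as np

def fit_noise_threshold(budgets, r_max_values):
    """Sketch of the noise predictor: given the lowest percentile rank r^b_max observed
    for a series of normalized budgets, interpolate the threshold for unseen budgets."""
    order = np.argsort(budgets)
    b = np.asarray(budgets, dtype=float)[order]
    r = np.asarray(r_max_values, dtype=float)[order]
    return lambda budget: float(np.interp(budget, b, r))

# usage: nodes whose percentile rank (by total outgoing edge weight x_v) exceeds the
# predicted threshold are pruned as noise for the given normalized budget.
predict_r_max = fit_noise_threshold([0.001, 0.005, 0.01], [2.0, 6.5, 11.0])
print(predict_r_max(0.007))  # interpolated percentile threshold (toy values)
```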
Node quality predictor: To train the GCN, we sample a training graph G_i = (V_i, E_i) and a (normalized) budget b from the range (0, b_max^i], where b_max^i = max_{j=1}^{m} { |S_i^j| / |V_i| }. This tuple is sent to the noise predictor to obtain the good (non-noisy) nodes. The GCN parameters (ΘG) are next learned by minimizing the loss function only on the good nodes. Specifically, for each good node v, we want to learn embeddings that can predict score(v) through a surrogate function score′(v). Towards that end, we draw multiple samples of training graphs and budgets, and the parameters are learned by minimizing the mean squared error loss (see Alg. 3 in the Supplementary for detailed pseudocode).
J(Θ_G) = ∑_{∼⟨G_i, b⟩} (1 / |V_i^g|) ∑_{v ∈ V_i^g} (score(v) − score′(v))²    (2)
In the above equation, V_i^g denotes the set of good nodes for budget b in graph G_i. Since GCNs are trained through message passing, in a GCN with K hidden layers, the computation graph is limited to the induced subgraph formed by the K-hop neighbors of V_i^g, instead of the entire graph.
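A hedged sketch of the loss in Eq. 2 for one sampled (graph, budget) pair is shown below in PyTorch-style Python; the GCN forward pass, data preparation, and all names are assumed and omitted here.

```python
import torch

def gcn_loss(score_pred, score_true, good_mask):
    """Sketch of Eq. 2 for one sampled (G_i, b): mean squared error between predicted
    and probabilistic-greedy scores, restricted to the good (non-noisy) nodes."""
    diff = (score_true[good_mask] - score_pred[good_mask]) ** 2
    return diff.mean()

# score_pred would come from a forward pass of the GCN over the K-hop induced
# subgraph of the good nodes; both tensors hold one entry per node of the graph.
```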
3.3 Learning Q-function
While GCN captures the individual importance of a node, Q-learning [29] learns the combinatorial aspect in a budget-independent manner. Given a set of nodes S and a node v ∉ S, we predict the n-step reward, Q_n(S, v), for adding v to set S (action) via the surrogate function Q′_n(S, v; Θ_Q).
Defining the framework: We define the Q-learning task in terms of state space, action, reward, policy and termination with the input as a set of nodes and their predicted scores.
• State space: The state space characterizes the state of the system at any time step t in terms of the candidate nodes being considered, i.e., C_t = V^g \ S_t, with respect to the partially computed solution set S_t; V^g represents the set of good nodes from a training graph. In a combinatorial problem over nodes, two factors have a strong influence: (1) the individual quality of a node, and (2) its locality. The quality of a node v is captured through score′(v). Locality is an important factor since two high-quality nodes from the same neighborhood may not be good collectively. The locality of a node v ∈ C_t (C_t = V^g \ S_t) is defined as: loc(v, S_t) = |N(v) \ ∪_{u ∈ S_t} N(u)|    (3), where N(v) = {v′ ∈ V | (v, v′) ∈ E} are the neighbors of v (a small illustrative sketch of this computation follows this list). Note that N(v) may contain noisy nodes since they contribute to the locality of v ∈ V^g. However, locality (and Q-learning in general) is computed only on good nodes. The initial representation µ_v of each node v ∈ C_t is therefore the 2-dimensional vector [score′(v), loc(v, S_t)]. The representation of the set of nodes C_t is defined as µ_{C_t} = MAXPOOL {µ_v | v ∈ C_t}; µ_{S_t} is defined analogously. We use MAXPOOL since it captures the best available candidate node better than alternatives such as MEANPOOL; empirically, we obtain better results as well.
• Action and Reward: An action corresponds to adding a node v ∈ C_t to the solution set S_t. The immediate (0-step) reward of the action is its marginal gain, i.e., r(S_t, v) = f(S_t ∪ {v}) − f(S_t).
• Policy and Termination: The policy π(v | S_t) selects the node with the highest predicted n-step reward, i.e., argmax_{v ∈ C_t} Q′_n(S_t, v; Θ_Q). We terminate after training the model for T samples.
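A small illustrative sketch of the locality term (Eq. 3) and of the resulting 2-dimensional node representation follows; the dictionary-based graph representation and all names are assumptions, not the paper's implementation.

```python
def locality(v, S, neighbors):
    """Sketch of Eq. 3: number of v's neighbors not already covered by the
    neighborhoods of the partial solution S. `neighbors` maps a node to its
    neighbor set (assumed representation)."""
    covered = set()
    for u in S:
        covered |= neighbors.get(u, set())
    return len(neighbors.get(v, set()) - covered)

def node_representation(v, S, neighbors, score_pred):
    # initial 2-d representation [score'(v), loc(v, S)] fed to the Q-learning network
    return [score_pred[v], locality(v, S, neighbors)]
```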
Learning the parameter set Θ_Q: We partition Θ_Q into three weight matrices Θ_1, Θ_2, Θ_3, and one weight vector Θ_4 such that Q′_n(S_t, v; Θ_Q) = Θ_4 · µ_{C_t,S_t,v}, where µ_{C_t,S_t,v} = CONCAT(Θ_1 · µ_{C_t}, Θ_2 · µ_{S_t}, Θ_3 · µ_v). If we want to encode the state space in a d-dimensional layer, the dimensions of the weights are as follows: Θ_4 ∈ R^{1×3d}; Θ_1, Θ_2, Θ_3 ∈ R^{d×2}. Q-learning updates parameters in a single episode via the Adam optimizer [15] to minimize the squared loss.
J(Θ_Q) = (y − Q′_n(S_t, v_t; Θ_Q))², where y = γ · max_{v ∈ V^g} { Q′_n(S_{t+n}, v; Θ_Q) } + ∑_{i=0}^{n−1} r(S_{t+i}, v_{t+i})
γ is the discount factor and balances the importance of immediate reward with the predicted n-step future reward [29]. The pseudocode with more details is provided in the Supplementary (App. C).
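A minimal sketch of the scoring function defined above is given below with NumPy; the parameter shapes follow the text (Θ_1, Θ_2, Θ_3 ∈ R^{d×2}, Θ_4 ∈ R^{1×3d}), while the function and variable names are illustrative assumptions.

```python
import numpy as np

def q_value(mu_C, mu_S, mu_v, theta1, theta2, theta3, theta4):
    """Sketch of the Q-network forward pass: theta1/2/3 (shape (d, 2)) embed the
    max-pooled candidate-set, solution-set and node representations (each 2-d);
    theta4 (shape (1, 3d)) scores their concatenation."""
    z = np.concatenate([theta1 @ mu_C, theta2 @ mu_S, theta3 @ mu_v])  # shape (3d,)
    return (theta4 @ z).item()                                          # scalar Q'_n(S, v)

# the policy then picks the candidate with the highest predicted n-step reward, e.g.:
# best = max(candidates, key=lambda v: q_value(mu_C, mu_S, mu[v], t1, t2, t3, t4))
```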
3.3.1 Importance Sampling for Fast Locality Computation
Since degrees of nodes in real graphs may be very high, computing locality (Eq. 3) is expensive. Furthermore, locality is re-computed in each iteration. We negate this computational bottleneck through importance sampling. Let N(V^g) = {(v, u) ∈ E | v ∈ V^g} be the neighbors of all nodes in V^g. Given a sample size z, we extract a subset N_z(V^g) ⊆ N(V^g) of size z and compute locality only based on the nodes in N_z(V^g). Importance sampling samples elements proportional to their importance. The importance of a node in N(V^g) is defined as I(v) = score′(v) / ∑_{v′ ∈ N(V^g)} score′(v′).
Determining sample size: Let µ_{N(V^g)} be the mean importance of all nodes in N(V^g) and µ̂_{N_z(V^g)} the mean importance of the sampled nodes. The sampling is accurate if µ_{N(V^g)} ≈ µ̂_{N_z(V^g)}.
Theorem 1 Given an error bound ε, if the sample size z is O(log |N(V^g)| / ε²), then P[ |µ̂_{N_z(V^g)} − µ_{N(V^g)}| < ε ] > 1 − 1/|N(V^g)|².
Remarks: (1) The sample size grows logarithmically with the neighborhood size, i.e., |N(V^g)|, and is thus scalable to large graphs. (2) z is an inversely proportional function of the error bound ε.
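A hedged sketch of the neighbor-sampling step is shown below; the constant hidden in the big-O of Theorem 1 and all names are assumptions, and sampling is done with replacement for simplicity.

```python
import math
import random

def sample_neighbors(neighbor_nodes, importance, eps=0.1, c=1.0):
    """Sketch of the importance-sampling step: draw z = O(log|N(V^g)| / eps^2) neighbor
    nodes with probability proportional to their importance I(v). The constant `c`
    hidden in the big-O is an assumption, not a value from the paper."""
    n = len(neighbor_nodes)
    z = min(n, max(1, int(c * math.log(n) / eps ** 2)))
    weights = [importance[v] for v in neighbor_nodes]
    return random.choices(neighbor_nodes, weights=weights, k=z)

# locality (Eq. 3) is then estimated only against the sampled neighbor set.
```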
3.4 Test Phase
Given an unseen graph G and budget b, we (1) identify the noisy nodes, (2) embed good nodes through a single forward pass through GCN, and (3) use GCN output to embed them and perform Q-learning to compute the final solution set.
Complexity analysis: The time complexity of the test phase in GCOMB is O(|V| + |V^{g,K}| (d·m_G + m_G²) + |V^g|·b·(d + m_Q)), where d is the average degree of a node, m_G and m_Q are the dimensions of the embeddings in GCN and Q-learning respectively, K is the number of layers in GCN, and V^{g,K} represents the set of nodes within the K-hop neighborhood of V^g. The space complexity is O(|V| + |E| + K·m_G² + m_Q). The derivations are provided in App. D.
4 Empirical Evaluation
In this section, we benchmark GCOMB against GCN-TREESEARCH and S2V-DQN, and establish that GCOMB produces marginally improved quality while being orders of magnitude faster. The source code can be found at https://github.com/idea-iitd/GCOMB .
4.1 Experimental Setup
All experiments are performed on a machine running Intel Xeon E5-2698v4 processor with 64 cores, having 1 Nvidia 1080 Ti GPU card with 12GB GPU memory, and 256 GB RAM with Ubuntu 16.04. All experiments are repeated 5 times and we report the average of the metric being measured.
Datasets: Table 1a lists the real datasets used for our experiments. Random Bipartite Graphs (BP): We also use the synthetic random bipartite graphs from S2V-DQN [7]. In this model, given the number of nodes, they are partitioned into two sets, with 20% of the nodes on one side and the rest on the other. The edge between any pair of nodes from different partitions is generated with probability 0.1. We use BP-X to denote a generated bipartite graph of X nodes.
Problem Instances: The performance of GCOMB is benchmarked on Influence Maximization (IM), Maximum Vertex Cover (MVC), and Maximum Coverage Problem (MCP) (§ 2). Since MVC can be mapped to MCP, empirical results on MVC are included in App. M.
Baselines: The performance of GCOMB is primarily compared with (1) GCN-TREESEARCH [19], which is the state-of-the-art technique to learn combinatorial algorithms. In addition, for MCP, we also compare the performance with (2) Greedy (Alg. 1 in App. B), (3) S2V-DQN [7], (4) CELF [17] and (5) the Optimal solution set (obtained using CPLEX [12] on small datasets). Greedy and CELF guarantee a 1 − 1/e approximation for all three problems. We also compare with (6) Stochastic Greedy (SG) [21] in App. L. For the problem of IM, we also compare with the state-of-the-art algorithm (7) IMM [31]. Additionally, we also compare GCOMB with (8) OPIM [30]. For S2V-DQN, GCN-TREESEARCH, IMM, and OPIM we use the code shared by the authors.
Training: In all our experiments, for a fair comparison of GCOMB with S2V-DQN and GCNTREESEARCH, we train all models for 12 hours and the best performing model on the validation set is used for inference. Nonetheless, we precisely measure the impact of training time in Fig. 2a. The break-up of time spent in each of the three training phases is shown in App. G in the Supplementary.
Parameters: The parameters used for GCOMB are outlined in App. H and their impact on performance is analyzed in App. N. For S2V-DQN and GCN-TREESEARCH, the best performing parameter values are identified using grid-search. In IMM, we set ε = 0.5 as suggested by the authors. In OPIM, ε is recommended to be kept in the range [0.01, 0.1]; thus, we set ε = 0.05.
4.2 Performance on Max Cover (MCP)
We evaluate the methods on both synthetic random bipartite (BP) graphs as well as real networks. Train-Validation-Test split: While testing on any synthetic BP graph, we train and validate on five
BP-1k graphs each. For real graphs, we train and validate on BrightKite (BK) (50 : 50 split for train and validate) and test on other real networks. Since our real graphs are not bipartite, we convert them to bipartite graphs by making two copies of V : V_1 and V_2. We add an edge from u ∈ V_1 to u′ ∈ V_2 if (u, u′) ∈ E. Comparison with Greedy and Optimal: Table 1b presents the achieved coverage (recall § 2 for the definition of coverage). First, we note that Greedy provides an empirical approximation ratio of at least 99% when compared to the optimal. This indicates that in larger datasets where we are unable to compute the optimal, Greedy can be assumed to be sufficiently close to the optimal. Second, GCOMB is sometimes able to perform even better than greedy. This indicates that Q-learning is able to learn a more generalized policy through delayed rewards and avoid a myopic view of the solution space.
Synthetic Datasets: Table 2a presents the results. GCOMB and Greedy achieve the highest coverage consistently. While S2V-DQN performs marginally better than GCN-TREESEARCH, S2V-DQN is the least scalable among all techniques; it runs out of memory on graphs containing more than 20,000 nodes. As discussed in detail in § 1.2, the non-scalability of S2V-DQN stems from relying on an architecture with a significantly larger parameter set than GCOMB or GCN-TREESEARCH. In contrast, GCOMB avoids noisy nodes, and focuses the search operation only on the good nodes.
Impact of training time: A complex model with a larger number of parameters results in slower learning. In Fig. 2a, we measure the coverage against the training time. While GCOMB's performance saturates within 10 minutes, S2V-DQN and GCN-TREESEARCH need 9 and 5 hours of training respectively to obtain their best performance.
Real Datasets: Figs. 2b and 2c present the achieved Coverage as the budget is varied. GCOMB achieves similar quality to Greedy, while GCN-TREESEARCH is marginally inferior. The real impact of GCOMB is highlighted in Figs. 2d and 2e, which show that GCOMB is up to 2 orders of magnitude faster than GCN-TREESEARCH and 10 times faster than Greedy. A similar conclusion can also be drawn from the results on the Gowalla dataset in App. K in the Supplementary.
Comparison with CELF: Table 2b presents the speed-up achieved by GCOMB against CELF. The first pass of CELF involves sorting the nodes, which has complexity O(|V| log |V|). On the other hand, no such sorting is required in GCOMB. Thus, the speed-up achieved is higher for smaller budgets.
4.3 Performance on Influence Maximization
Influence Maximization (IM) is the hardest of the three combinatorial problems since estimating the spread of a node is #P-hard [14].
Edge weights: We assign edge weights that denote the influence of a connection using the two popular models [2]: (1) Constant (CO): all edge weights are set to 0.1; (2) Tri-valency (TV): edge weights are sampled randomly from the set {0.1, 0.01, 0.001}. In addition, we also employ a third, (3) Learned (LND), model, where we learn the influence probabilities from the action logs of users. This is only applicable to the Stack data, which contains action logs from 8/2008 to 3/2016. We define the influence of u on v as the probability of v interacting with u's content at least once in a month.
Train-Validation-Test split: In all of the subsequent experiments, for CO and TV edge weight models, we train and validate on a subgraph sampled out of YT by randomly selecting 30% of the edges (50% of this subset is used for training and 50% is used for validation). For LND edge weight models, we train and validate on the subgraph induced by the 30% of the earliest edges from Stack in terms of temporal order. While testing, on YT and Stack, we use the graph formed by the remaining 70% of the edges that are not used for training. On other datasets, we use the entire graph for testing since neither those datasets nor their subsets are used for training purposes.
GCOMB vs. GCN-TREESEARCH: Fig. 3a compares the running time in IM on progressively larger subgraphs extracted from YT. While GCN-TREESEARCH consumes ≈ 3 hours on the 70% sub-graph, GCOMB finishes in 5 seconds.
GCOMB vs. NOISEPRUNER+CELF: NOISEPRUNER+CELF, i.e., running CELF only on non-noisy nodes, is orders of magnitude slower than GCOMB in IM (see Fig. 3d). Pruning noisy nodes does not reduce the graph size; it only reduces the number of candidate nodes. To compute the expected spread in IM, we still require the entire graph, resulting in non-scalability.
Billion-sized graphs: IMM crashes on both the billion-sized datasets of TW and FS, as well as Orkut. Unsurprisingly, similar results have been reported in [2]. IMM strategically samples a subgraph of the entire graph based on the edge weights. On this sampled subgraph, it estimates the influence of a node using reverse reachability sets. On large graphs, the sample size exceeds the RAM capacity of 256GB. Hence, it crashes. In contrast, GCOMB finishes within minutes for smaller budgets (b < 30) and within 100 minutes on larger budgets of 100 and 200 (Figs. 3g-3h). This massive scalability of GCOMB is a result of low storage overhead (only the graph and the GCN and Q-learning parameters; the detailed space complexity is provided in App. D in the Supplementary) and relying on just forward passes through GCN and Q-learning. The speed-up with respect to OPIM on billion-sized graphs can be seen in App. J.
Performance on YT and Stack: Since IMM crashes on Orkut, TW, and FS, we compare the quality of GCOMB with IMM on YT and Stack. Table 3a reports the results in terms of spread difference, where Spread Difference = (f(S_IMM) − f(S_GCOMB)) / f(S_IMM) × 100. S_IMM and S_GCOMB are answer sets computed by IMM and GCOMB respectively. A negative spread difference indicates better performance by GCOMB. The expected spread of a given set of nodes S, i.e., f(S), is computed by taking the average spread across 10,000 Monte Carlo simulations.
Table 3a shows that the expected spreads obtained by the two techniques are extremely close. The true impact of GCOMB is realized when Table 3a is considered in conjunction with Figs. 3b-3c, which show that GCOMB is 30 to 160 times faster than IMM. In this plot, speed-up is measured as time_IMM / time_GCOMB, where time_IMM and time_GCOMB are the running times of IMM and GCOMB respectively.
Similar behavior is observed when compared against OPIM as seen in Table 3b and Figs. 3e- 3f.
4.4 Design Choices
Impact of Q-learning: Since GCN predicts the expected marginal gain of a node, why not simply select the top-b nodes with the highest predicted marginal gains for the given budget b? This is a pertinent question since, as visible in Fig. 3i, the majority of the time in GCOMB is spent on Q-learning. Fig. 3j shows that Q-learning imparts an additional coverage of up to 10%. Improvement (%) is quantified as (Coverage_GCOMB − Coverage_GCN) / Coverage_GCN × 100. Impact of Noise Predictor: Fig. 3k presents the impact of the noise predictor, which is close to two orders of magnitude reduction in running time. This improvement, however, does not come at the cost of efficacy (Fig. 3l). In fact, the quality improves slightly due to the removal of noisy nodes.
5 Conclusion
S2V-DQN [7] initiated the promising direction of learning combinatorial algorithms on graphs. GCN-TREESEARCH [19] pursued the same line of work and enhanced scalability to larger graphs. However, the barrier to million and billion-sized graphs remained. GCOMB removes this barrier with a new lightweight architecture. In particular, GCOMB uses a phase-wise mixture of supervised and reinforcement learning. While the supervised component predicts individual node qualities and prunes those that are unlikely to be part of the solution set, the Q-learning architecture carefully analyzes the remaining high-quality nodes to identify those that collectively form a good solution set. This architecture allows GCOMB to generalize to unseen graphs of significantly larger sizes and convincingly outperform the state of the art in efficiency and efficacy. Nonetheless, there is scope for improvement. GCOMB is limited to set combinatorial problems on graphs. In the future, we will explore a broader class of combinatorial problems, such as sequential and capacity-constrained problems.
Broader Impact
The need to solve NP-hard combinatorial problems on graphs routinely arises in several real-world problems. Examples include facility location problems on road networks [20], strategies to combat rumor propagation in online social networks [3], computational sustainability [8] and health-care [33]. Each of these problems plays an important role in our society. Consequently, designing effective and efficient solutions is important, and our current work is a step in that direction. The major impact of this paper is that good heuristics for NP-hard problems can be learned for large-scale data. While we are not the first to observe that heuristics for combinatorial algorithms can be learned, we are the first to make them scale to billion-size graphs, thereby bringing an algorithmic idea to practical use-cases.
Acknowledgments and Disclosure of Funding
The project was partially supported by the National Science Foundation under award IIS-1817046. Further, Sahil Manchanda acknowledges the financial support from the Ministry of Human Resource Development (MHRD) of India and the Department of Computer Science and Engineering, IIT Delhi. | 1. What are the main contributions and strengths of the paper regarding its proposed framework for combinatorial optimization?
2. What are the weaknesses of the paper, particularly in terms of experimental design and comparison with baseline algorithms?
3. Do you have any concerns about the implementation and choice of specific algorithms for each problem in the paper?
4. How does the reviewer assess the scalability and solution quality of the proposed method compared to existing works?
5. Are there any suggestions or recommendations for improving the experimental design and comparisons in the paper? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
Contributions of this paper are summarized below: * Propose a new framework for combinatorial optimization (Gcomb), based on Graph Convolutional Networks, aimed at scaling up to billion-scale graphs and managing budget constraints. * Conduct an experimental evaluation on standard combinatorial optimization problems (i.e., Vertex Cover, Maximum Coverage, and Influence Maximization) demonstrating that Gcomb outperforms S2V-DQN and Gcn-TreeSearch in terms of scalability and solution quality, and that it is often competitive with and faster than baselines.
Strengths
<disclaimer: I am a novice in Deep Neural Networks, and so I was not able to judge of the details on existing DNN-style heuristics such as Gcn-TreeSearch and S2V-DQN.> * The motivation of this paper is clear: Scaling-up is a crucial issue given that existing approaches are limited to small-scale graphs, generalizability to various problems is important, and imposing (budget) constraints is crucial. * Superiority over Gcn-TreeSearch & S2V-DQN is quite convincing: Gcomb's scalability against massive-scale data and solution accuracy over these existing works were impressive and convincing for me, through three standard combinatorial optimization problems, i.e., Vertex Cover, Maximum Coverage, and Influence Maximization (though I was not able to judge of many experimental details regarding the comparison to S2V-DQN & Gcn-TreeSearch due to my lack of expertise in GCN).
Weaknesses
* I have several concerns that the experimental design is not convincing enough for demonstrating the superiority over baselines: The authors compare Gcomb to baseline algorithms experimentally to demonstrate Gcomb's scalability and accuracy. However, the implementation of the greedy algorithm used in this paper seems to be too naive. First, there are several generic techniques for scaling-up the greedy algorithm. One is LazyGreedy, which can detect and prune elements whose marginal gain is never significant, which does not affect the resulting solution quality; LazyGreedy can be >100 times faster than the naive greedy in practice (e.g., [Leskovec-Krause-Guestrin-Faloutsos-VanBriesen-Glance. KDD'07. Cost-effective Outbreak Detection in Networks]), and hence the claim that Gcomb is ~10 times faster than the greedy (e.g., in Lines 276, 586, and 598) is not convincing. Also, StochasticGreedy [Mirzasoleiman-Badanidiyuru-Karbasi-Vondrak-Krause. AAAI'15. Lazier Than Lazy Greedy] is proven to evaluate the objective function at most O(n log ε^{-1}) times (for some parameter ε), which is much faster than LazyGreedy, with a slight decrease in objective value. I also have concerns about the choice/implementation of specific algorithms for each problem. ** Maximum Coverage: In Appendix B, the authors claim that greedy's time complexity is O(bd|V|), where b is budget, d is the average degree, and V is the ground set. However, it is well known that a slightly-modified greedy algorithm on Maximum Coverage runs in nearly-linear time (e.g., ~ O(d|V|) time); see, e.g., [Borgs-Brautbar-Chayes-Lucier. SODA'14. Maximizing Social Influence in Nearly Optimal Time]. Simply using such algorithms (without LazyGreedy) would result in 100x speed-up for the case of b=100. ** Influence Maximization: IMM is a state-of-the-art algorithm for Influence Maximization (in 2015) in the sense that it samples the smallest number of RR samples with the *worst-case theoretical* guarantee on approximation accuracy. In practice, other existing memory-saving and time-efficient algorithms give reasonable-quality solutions similar to IMM; e.g., SKIM [Cohen-Delling-Pajor-Werneck. CIKM'14. Sketch-based Influence Maximization and Computation: Scaling up with Guarantees] can easily scale to billion-edge scale networks, and is a reasonable choice. Also, OPIM in [Tang-Tang-Xiao-Yuan. SIGMOD'18. Online Processing Algorithms for Influence Maximization], which is an improvement over IMM, has been shown to be up to 3 orders of magnitude faster than IMM. Therefore, the conclusion that Gcomb "improves upon the state-of-the-art algorithm for Influence Maximization" is not convincing.
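For reference, a minimal sketch of the LazyGreedy/CELF idea the review refers to is given below; it assumes a monotone submodular set function f and is illustrative only, not code from the paper or from the cited works.

```python
import heapq

def lazy_greedy(f, ground_set, budget):
    """Sketch of LazyGreedy/CELF: marginal gains are kept in a max-heap and only
    re-evaluated when a stale entry reaches the top; for monotone submodular f this
    returns the same solution as plain greedy with far fewer function evaluations."""
    S, f_S = [], 0.0
    # heap entries: (-gain, node, solution size at which the gain was computed)
    heap = [(-(f([v]) - 0.0), v, 0) for v in ground_set]
    heapq.heapify(heap)
    while len(S) < budget and heap:
        neg_gain, v, stamp = heapq.heappop(heap)
        if stamp == len(S):            # gain is fresh w.r.t. the current solution
            S.append(v)
            f_S = f(S)
        else:                          # stale: recompute the gain and push back
            heapq.heappush(heap, (-(f(S + [v]) - f_S), v, len(S)))
    return S
```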
NIPS | Title
GCOMB: Learning Budget-constrained Combinatorial Algorithms over Billion-sized Graphs
Abstract
There has been an increased interest in discovering heuristics for combinatorial problems on graphs through machine learning. While existing techniques have primarily focused on obtaining high-quality solutions, scalability to billion-sized graphs has not been adequately addressed. In addition, the impact of budgetconstraint, which is necessary for many practical scenarios, remains to be studied. In this paper, we propose a framework called GCOMB to bridge these gaps. GCOMB trains a Graph Convolutional Network (GCN) using a novel probabilistic greedy mechanism to predict the quality of a node. To further facilitate the combinatorial nature of the problem, GCOMB utilizes a Q-learning framework, which is made efficient through importance sampling. We perform extensive experiments on real graphs to benchmark the efficiency and efficacy of GCOMB. Our results establish that GCOMB is 100 times faster and marginally better in quality than state-of-the-art algorithms for learning combinatorial algorithms. Additionally, a case-study on the practical combinatorial problem of Influence Maximization (IM) shows GCOMB is 150 times faster than the specialized IM algorithm IMM with similar quality.
1 Introduction and Related Work
Combinatorial optimization problems on graphs appear routinely in various applications such as viral marketing in social networks [14, 4], computational sustainability [8], health-care [33], and infrastructure deployment [20, 23, 24, 22]. In these set combinatorial problems, the goal is to identify the set of nodes that optimizes a given objective function. These optimization problems are often NP-hard. Therefore, designing an exact algorithm is infeasible and polynomial-time algorithms, with or without approximation guarantees, are often desired and used in practice [13, 31]. Furthermore, these graphs are often dynamic in nature and the approximation algorithms need to be run repeatedly at regular intervals. Since real-world graphs may contain millions of nodes and edges, this entire process becomes tedious and time-consuming.
To provide a concrete example, consider the problem of viral marketing on social networks through Influence Maximization [2, 14]. Given a budget b, the goal is to select b nodes (users) such that their endorsement of a certain product (ex: through a tweet) is expected to initiate a cascade that reaches the largest number of nodes in the graph. This problem is NP-hard [14]. Advertising through social networks is a common practice today and needs to solved repeatedly due to the graphs being dynamic
∗denotes equal contribution
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
in nature. Furthermore, even the greedy approximation algorithm does not scale to large graphs [2] resulting in a large body of research work [31, 13, 16, 26, 14, 5, 32, 6].
At this juncture, we highlight two key observations. First, although the graph is changing, the underlying model generating the graph is likely to remain the same. Second, the nodes that get selected in the answer set of the approximation algorithm may have certain properties common in them. Motivated by these observations, we ask the following question [7]: Given a set combinatorial problem P on graphG and its corresponding solution set S, can we learn an approximation algorithm for problem P and solve it on an unseen graph that is similar to G?
1.1 Limitations of Existing Work
The above observations were first highlighted by S2V-DQN [7], where they show that it is indeed possible to learn combinatorial algorithms on graphs. Subsequently, an improved approach was proposed in GCN-TREESEARCH [19]. Despite these efforts, there is scope for further improvement.
• Scalability: The primary focus of both GCN-TREESEARCH and S2V-DQN have been on obtaining quality that is as close to the optimal as possible. Efficiency studies, however, are limited to graphs containing only hundreds of thousands nodes. To provide a concrete case study, we apply GCNTREESEARCH for the Influence Maximization problem on the YouTube social network. We observe that GCN-TREESEARCH takes one hour on a graph containing a million edges (Fig. 3a; we will revisit this experiment in § 4.3). Real-life graphs may contain billions of edges (See. Table 1a).
• Generalizability to real-life combinatorial problems: GCN-TREESEARCH proposes a learningbased heuristic for the Maximal Independent Set problem (MIS). When the combinatorial problem is not MIS, GCN-TREESEARCH suggests that we map that problem to MIS. Consequently, for problems that are not easily mappable to MIS, the efficacy is compromised (ex: Influence Maximization).
• Budget constraints: Both GCN-TREESEARCH and S2V-DQN solve the decision versions of combinatorial problems (Ex. set cover, vertex cover). In real life, we often encounter their budgetconstrained versions, such as max-cover and Influence Maximization [14].
Among other related work, Gasse et al. [9] used GCN for learning branch-and-bound variable selection policies, whereas Prates et al. [27] focused on solving Travelling Salesman Problem. However, the proposed techniques in these papers do not directly apply to our setting of set combinatorial problems.
1.2 Contributions
At the core of our study lies the observation that although the graph may be large, only a small percentage of the nodes are likely to contribute to the solution set. Thus, pruning the search space is as important as prediction of the solution set. Both S2V-DQN [7] and GCN-TREESEARCH [19] have primarily focused on the prediction component. In particular, S2V-DQN learns an end-to-end neural model on the entire graph through reinforcement learning. The neural model integrates node embedding and Q-learning into a single integrated framework. Consequently, the model is bogged down by a large number of parameters, which needs to be learned on the entire node set. As a result, we will show in §. 4 that S2V-DQN fails to scale to graphs beyond 20, 000 nodes.
On the other hand, GCN-TREESEARCH employs a two-component framework: (1) a graph convolutional network (GCN) to learn and predict the individual value of each node, and (2) a tree-search component to analyze the dependence among nodes and identify the solution set that collectively works well. Following tree-search, GCN is repeated on a reduced graph and this process continues iteratively. This approach is not scalable to large graphs since due to repeated iterations of GCN and TreeSearch where each iteration of tree-search has O(|E|) complexity; E is the set of edges. Our method GCOMB builds on the observation that computationally expensive predictions should be attempted only for promising nodes. Towards that end, GCOMB has two separate components: (1) a GCN to prune poor nodes and learn embeddings of good nodes in a supervised manner, and (2) a Q-learning component that focuses only on the good nodes to predict the solution set. Thus, unlike S2V-DQN, GCOMB uses a mixture of supervised and reinforcement learning, and does not employ an end-to-end architecture. Consequently, the prediction framework is lightweight with a significantly reduced number of parameters.
When compared to GCN-TREESEARCH, although both techniques use a GCN, in GCOMB, we train using a novel probabilistic greedy mechanism. Furthermore, instead of an iterative procedure of repeated GCN and TreeSearch calls, GCOMB performs a single forward pass through GCN
during inference. In addition, unlike TreeSearch, which is specifically tailored for the MIS problem, GCOMB is problem-agnostic 2. Finally, unlike both S2V-DQN and GCN-TREESEARCH, GCOMB uses lightweight operations to prune poor nodes and focus expensive computations only on nodes with a high potential of being part of the solution set. The pruning of the search space not only enhances scalability but also removes noise from the search space leading to improved prediction quality. Owing to these design choices, (1) GCOMB is scalable to billion-sized graphs and up to 100 times faster, (2) on average, computes higher quality solution sets than S2V-DQN and GCN-TREESEARCH, and (3) improves upon the state-of-the-art algorithm for Influence Maximization on social networks.
2 Problem Formulation
Objective: Given a budget-constrained set combinatorial problem P over graphs drawn from distribution D, learn a heuristic to solve problem P on an unseen graph G generated from D.
Next, we describe three instances of budget-constrained set combinatorial problems on graphs.
Maximum Coverage Problem on bipartite graph (MCP): Given a bipartite graph G = (V,E), where V = A ∪ B, and a budget b, find a set S∗ ⊆ A of b nodes such that coverage is maximized. The coverage of set S∗ is defined as f(S∗) = |X||B| , where X = {j|(i, j) ∈ E, i ∈ S∗, j ∈ B}. Budget-constrained Maximum Vertex Cover (MVC): Given a graph G = (V,E) and a budget b, find a set S∗ of b nodes such that the coverage f(S∗) of S∗ is maximized. f(S∗) = |X||E| , where X = {(i, j)|(i, j) ∈ E, i ∈ S∗, j ∈ V }. Influence Maximization (IM) [2]: Given a budget b, a social networkG, and a information diffusion modelM, select a set S∗ of b nodes such that the expected diffusion spread f(S∗) = E[Γ(S∗)] is maximized. (See App. A in supplementary for more details).
3 GCOMB
The input to the training phase is a set of graphs and the optimization function f(·) corresponding to the combinatorial problem in hand. The output is a sequence of two separate neural graphs, GCN [10] and Q-learning network, with their corresponding learned parameters ΘG and ΘQ respectively. In the testing phase, the inputs include a graph G = (V,E), the optimization function f(·) and the budget b. The output of the testing part is the solution set of nodes constructed using the learned neural networks. Fig. 1 presents the training pipeline. We will now discuss each of the phases.
3.1 Generating Training Data for GCN
Our goal is to learn node embeddings that can predict “quality”, and thereby, identify those nodes that are likely to be part of the answer set. We could adopt a classification-based method, where, given a training graph G = (V,E), budget b and its solution set S, a node v is called positive if v ∈ S; otherwise it is negative. This approach, however, assumes all nodes that are not a part of S to be equally bad. In reality, this may not be the case. Consider the case where f({v1})=f({v2}), but the marginal gain of node v2 given S = {v1}, i.e., f({v1, v2}) − f({v1}), is 0 and vice versa. In this scenario, only one of v1 and v2 would be selected in the answer set although both are of equal quality on their own.
2We are, however, limited to set combinatorial problems only.
Probabilistic greedy: To address the above issue, we sample from the solution space in a greedy manner and learn embeddings that reflect the marginal gain f(S ∪{v})− f(S) provided by a node v towards the solution set S (Alg. 2 in Appendix). To sample from the solution space, in each iteration, instead of selecting the node with the highest marginal gain, we choose a node with probability proportional to its marginal gain. The probabilistic greedy algorithm runs m times to construct m different solution sets S = {S1, · · · , Sm} and the score of node v ∈ V is set to:
score(v) = ∑m i gaini(v)∑m i f(Si)
(1)
Here, gaini(v) denotes the marginal gain contribution of v to Si. Specifically, assume v is added to Si in the (j + 1)th iteration and let S j i be the set of nodes that were added in the first j iterations
while constructing Si. Then, gaini(v) = f ( Sji ∪ {v} ) − f ( Sji ) . In our experiments, m is set to
30 for all three problems of MCP, MVC and IM.
Termination condition of probabilistic greedy: Probabilistic greedy runs till convergence of the marginal gains, i.e., gaini(v) ≤ ∆, where ∆ is a small value. The goal here is to identify all nodes that could potentially be part of the solution set for any given budget. ∆ in our experiments is set to 0.01 for all three problems of MCP, MVC and IM.
3.2 Training the GCN
Our goal in this phase is two-fold: (1) Identify nodes that are unlikely to be part of the solution set and are therefore noise in the context of our problem; (2) Learn a predictive model for node quality.
Noise predictor: The noise predictor should be lightweight so that expensive computations are reserved only for the good nodes. With this goal, we exploit the first layer information of the GCN and learn a classifier to predict for a given budget b, whether a node can be safely pruned without affecting the quality of the solution set. Typically, the first layer of a GCN contains the raw features of nodes that are relevant for the problem being solved. In GCOMB, we use the summation of the outgoing edge weights as node features. Let xv denote the total outgoing edge weight of node v. To learn the noise predictor, given a set of training graphs {G1, · · · , Gt}, we first sort all nodes based on xv . Let rank(v,Gi) denote the position of v in the sorted sequence based on xv in Gi. Furthermore, let Sij denote the j
th solution set constructed by probabilistic greedy on Gi. Given a budget b, SjGi,b ⊆ Sij denotes the subset containing the first b nodes added to Sij by probabilistic greedy. Therefore, rbGi = max m j=0 { max∀v∈SjGi,b {rank(v,Gi)} } represents the lowest rank of any node in a solution set of budget b in Gi. This measure is further generalized to all training graphs in the form of rbmax = max∀Gi { rbGi }
, which represents the lowest rank of any node that has a realistic chance of being included in an answer set of budget b. To generalize across budgets, we compute rbimax for a series of budgets {b1, · · · , bmax}, where bmax = max∀Gi { maxmj=0 { |Sij | }} . On this data, we can perform curve fitting [1] to predict rbmax for any (unseen) budget b. In our experiments, we use linear interpolation. To generalize across graph sizes, all of the above computations are performed on normalized budgets, where b is expressed in terms of the proportion of nodes with respect to the node set size of the graph. Similarly, rank rank(v,Gi) is expressed in terms of percentile.
Node quality predictor: To train the GCN, we sample a training graph Gi = (Vi, Ei) and a (normalized) budget b from the range (0, bimax], where b i max = max m j=0 { |Sij | |Vi| } . This tuple is sent to the noise predictor to obtain the good (non-noisy) nodes. The GCN parameters (ΘG) are next learned by minimizing the loss function only on the good nodes. Specifically, for each good node v, we want to learn embeddings that can predict score(v) through a surrogate function score′(v). Towards that end, we draw multiple samples of training graphs and budgets, and the parameters are learned by minimizing the mean squared error loss (See Alg.3 for detailed pseudocode in the Supplementary).
J(ΘG) = ∑ ∼〈Gi,b〉 1 |V gi | ∑ ∀v∈V gi (score(v)− score′(v))2 (2)
In the above equation, V gi denotes the set of good nodes for budget b in graph Gi. Since GCNs are trained through message passing, in a GCN with K hidden layers, the computation graph is limited to the induced subgraph formed by the K-hop neighbors of V gi , instead of the entire graph.
3.3 Learning Q-function While GCN captures the individual importance of a node, Q-learning [29] learns the combinatorial aspect in a budget-independent manner. Given a set of nodes S and a node v 6∈ S, we predict the n-step reward, Qn(S, v), for adding v to set S (action) via the surrogate function Q′n(S, v; ΘQ).
Defining the framework: We define the Q-learning task in terms of state space, action, reward, policy and termination with the input as a set of nodes and their predicted scores.
• State space: The state space characterizes the state of the system at any time step t in terms of the candidate nodes being considered, i.e., Ct = V g \ St, with respect to the partially computed solution set St; V g represents the set of good nodes from a training graph. In a combinatorial problem over nodes, two factors have a strong influence: (1) the individual quality of a node, and (2) its locality. The quality of a node v is captured through score′(v). Locality is an important factor since two high-quality nodes from the same neighborhood may not be good collectively. The locality of a node v ∈ Ct (Ct = V g \ St) is defined as: loc(v, St) = |N(v) \ ∪∀u∈StN(u)| (3) where N(v) = {v′ ∈ V | (v, v′) ∈ E} are the neighbors of v. Note that N(v) may contain noisy nodes since they contribute to the locality of v ∈ V g . However, locality (and q-learning in general) is computed only on good nodes. The initial representation µv of each node v ∈ Ct is therefore the 2-dimensional vector [score′(v), loc(v, St)]. The representation of the set of nodes Ct is defined as µCt = MAXPOOL {µv | v ∈ Ct}. µSt is defined analogously as well. We use MAXPOOL since it captures the best available candidate node better than alternatives such as MEANPOOL. Empirically, we obtain better results as well.
• Action and Reward: An action corresponds to adding a node v ∈ Ct to the solution set St. The immediate (0-step) reward of the action is its marginal gain, i.e. r(St, v) = f(St ∪ {v})− f(St). • Policy and Termination: The policy π(v | St) selects the node with the highest predicted n-step reward, i.e., arg maxv∈Ct Q ′ n(St, v; ΘQ). We terminate after training the model for T samples.
Learning the parameter set ΘQ: We partition ΘQ into three weight matrices Θ1, Θ2, Θ3, and one weight vector Θ4 such that, Q′n(St, v; ΘQ) = Θ4 · µCt,St,v, where µCt,St,v = CONCAT ( Θ1 · µCt ,Θ2 · µSt ,Θ3 · µv ) . If we want to encode the state space in a d-dimensional layer, the dimensions of the weight vectors are as follows: Θ4 ∈ R1×3d; Θ1,Θ2,Θ3 ∈ Rd×2. Qlearning updates parameters in a single episode via Adam optimizer[15] to minimize the squared loss.
J(ΘQ) = (y −Q′n(St, vt; ΘQ))2, where y = γ · max v∈V g {Q′n(St+n, v; ΘQ)}+ n−1∑ i=0 r(St+i, vt+i)
γ is the discount factor and balances the importance of immediate reward with the predicted n-step future reward [29]. The pseudocode with more details is provided in the Supplementary (App. C).
3.3.1 Importance Sampling for Fast Locality Computation
Since degrees of nodes in real graphs may be very high, computing locality (Eq. 3) is expensive. Furthermore, locality is re-computed in each iteration. We negate this computational bottleneck through importance sampling. Let N(V g) = {(v, u) ∈ E | v ∈ V g} be the neighbors of all nodes in V g. Given a sample size z, we extract a subset Nz(V g) ⊆ N(V g) of size z and compute locality only based on the nodes in Nz(V g). Importance sampling samples elements proportional to their importance. The importance of a node in N(V g) is defined as I(v) = score
′(v)∑ ∀v′∈N(V g) score ′(v′) .
Determining sample size: Let µN(V g) be the mean importance of all nodes in N(V g) and µ̂Nz(V g) the mean importance of sampled nodes. The sampling is accurate if µN(V g) ≈ µ̂Nz(V g). Theorem 1 Given an error bound , if sample size z is O ( log |N(V g)|
2
) , then
P [ |µ̂Nz(V g) − µN(V g)| < ] > 1− 1|N(V g)|2 . Remarks: (1) The sample size grows logarithmically with the neighborhood size, i.e., |N(V g)| and thus scalable to large graphs. (2) z is an inversely proportional function of the error bound .
3.4 Test Phase
Given an unseen graph G and budget b, we (1) identify the noisy nodes, (2) embed good nodes through a single forward pass through GCN, and (3) use GCN output to embed them and perform Q-learning to compute the final solution set.
Complexity analysis: The time complexity of the test phase in GCOMB is O(|V | + |V g,K |(d·mG + mG²) + |V g|·b·(d + mQ)), where d is the average degree of a node, mG and mQ are the dimensions of the embeddings in the GCN and Q-learning respectively, K is the number of layers in the GCN, and V g,K represents the set of nodes within the K-hop neighborhood of V g. The space complexity is O(|V | + |E| + K·mG² + mQ). The derivations are provided in App. D.
4 Empirical Evaluation
In this section, we benchmark GCOMB against GCN-TREESEARCH and S2V-DQN, and establish that GCOMB produces marginally improved quality while being orders of magnitude faster. The source code can be found at https://github.com/idea-iitd/GCOMB .
4.1 Experimental Setup
All experiments are performed on a machine with an Intel Xeon E5-2698v4 processor (64 cores), one Nvidia 1080 Ti GPU with 12GB of GPU memory, and 256GB RAM, running Ubuntu 16.04. All experiments are repeated 5 times and we report the average of the metric being measured.
Datasets: Table 1a lists the real datasets used for our experiments. Random Bipartite Graphs (BP): We also use the synthetic random bipartite graphs from S2V-DQN [7]. In this model, given the number of nodes, they are partitioned into two sets, with 20% of the nodes on one side and the rest on the other. An edge between any pair of nodes from different partitions is generated with probability 0.1. We use BP-X to denote a generated bipartite graph with X nodes.
Problem Instances: The performance of GCOMB is benchmarked on Influence Maximization (IM), Maximum Vertex Cover (MVC), and Maximum Coverage Problem (MCP) (§ 2). Since MVC can be mapped to MCP, empirical results on MVC are included in App. M.
Baselines: The performance of GCOMB is primarily compared with (1) GCN-TREESEARCH [19], the state-of-the-art technique to learn combinatorial algorithms. In addition, for MCP, we also compare the performance with (2) Greedy (Alg. 1 in App. B), (3) S2V-DQN [7], (4) CELF [17], and (5) the Optimal solution set (obtained using CPLEX [12] on small datasets). Greedy and CELF guarantee a 1 − 1/e approximation for all three problems. We also compare with (6) Stochastic Greedy (SG) [21] in App. L. For the problem of IM, we also compare with the state-of-the-art algorithm (7) IMM [31]. Additionally, we compare GCOMB with (8) OPIM [30]. For S2V-DQN, GCN-TREESEARCH, IMM, and OPIM, we use the code shared by the authors.
Training: In all our experiments, for a fair comparison of GCOMB with S2V-DQN and GCNTREESEARCH, we train all models for 12 hours and the best performing model on the validation set is used for inference. Nonetheless, we precisely measure the impact of training time in Fig. 2a. The break-up of time spent in each of the three training phases is shown in App. G in the Supplementary.
Parameters: The parameters used for GCOMB are outlined in App. H and their impact on performance is analyzed in App. N. For S2V-DQN and GCN-TREESEARCH, the best performing parameter values are identified using grid search. In IMM, we set ϵ = 0.5 as suggested by the authors. In OPIM, ϵ is recommended to be kept in the range [0.01, 0.1]; thus, we set ϵ = 0.05.
4.2 Performance on Max Cover (MCP)
We evaluate the methods on both synthetic random bipartite (BP) graphs and real networks. Train-Validation-Test split: While testing on any synthetic BP graph, we train and validate on five BP-1k graphs each. For real graphs, we train and validate on BrightKite (BK) (50:50 split between training and validation) and test on the other real networks. Since our real graphs are not bipartite, we convert each into a bipartite graph by making two copies of V, namely V1 and V2, and adding an edge from u ∈ V1 to u′ ∈ V2 if (u, u′) ∈ E. Comparison with Greedy and Optimal: Table 1b presents the achieved coverage (recall § 2 for the definition of coverage). We note that Greedy provides an empirical approximation ratio of at least 99% when compared to the optimal. This indicates that on larger datasets, where we are unable to compute the optimal, Greedy can be assumed to be sufficiently close to the optimal. Second, GCOMB is sometimes able to perform even better than Greedy. This indicates that Q-learning is able to learn a more generalized policy through delayed rewards and avoid a myopic view of the solution space.
Synthetic Datasets: Table 2a presents the results. GCOMB and Greedy achieve the highest coverage consistently. While S2V-DQN performs marginally better than GCN-TREESEARCH, S2V-DQN is the least scalable among all techniques; it runs out of memory on graphs containing more than 20,000 nodes. As discussed in detail in § 1.2, the non-scalability of S2V-DQN stems from relying on an architecture with a significantly larger parameter set than GCOMB or GCN-TREESEARCH. In contrast, GCOMB avoids noisy nodes and focuses the search operation only on the good nodes.
Impact of training time: A complex model with a larger number of parameters results in slower learning. In Fig. 2a, we measure coverage against training time. While GCOMB's performance saturates within 10 minutes, S2V-DQN and GCN-TREESEARCH need 9 and 5 hours of training respectively to reach their best performance.
Real Datasets: Figs. 2b and 2c present the achieved coverage as the budget is varied. GCOMB achieves similar quality to Greedy, while GCN-TREESEARCH is marginally inferior. The real impact of GCOMB is highlighted in Figs. 2d and 2e, which show that GCOMB is up to 2 orders of magnitude faster than GCN-TREESEARCH and 10 times faster than Greedy. Similar conclusions can be drawn from the results on the Gowalla dataset in App. K in the Supplementary.
Comparison with CELF: Table 2b presents the speed-up achieved by GCOMB over CELF. The first pass of CELF involves sorting the nodes, which has complexity O(|V | log |V |). In contrast, no such sorting is required in GCOMB; thus, the speed-up achieved is higher for smaller budgets.
4.3 Performance on Influence Maximization
Influence Maximization (IM) is the hardest of the three combinatorial problems since estimating the spread of a node is #P-hard [14].
Edge weights: We assign edge weights that denote the influence of a connection using two popular models [2]: (1) Constant (CO): all edge weights are set to 0.1; (2) Tri-valency (TV): edge weights are sampled randomly from the set {0.1, 0.01, 0.001}. In addition, we also employ a third, (3) Learned (LND), model, where we learn the influence probabilities from the action logs of users. This is only applicable to the Stack data, which contains action logs from 8/2008 to 3/2016. We define the influence of u on v as the probability of v interacting with u's content at least once in a month.
Train-Validation-Test split: In all of the subsequent experiments, for CO and TV edge weight models, we train and validate on a subgraph sampled out of YT by randomly selecting 30% of the edges (50% of this subset is used for training and 50% is used for validation). For LND edge weight models, we train and validate on the subgraph induced by the 30% of the earliest edges from Stack in terms of temporal order. While testing, on YT and Stack, we use the graph formed by the remaining 70% of the edges that are not used for training. On other datasets, we use the entire graph for testing since neither those datasets nor their subsets are used for training purposes.
GCOMB vs. GCN-TREESEARCH: Fig. 3a compares the running time for IM on progressively larger subgraphs extracted from YT. While GCN-TREESEARCH consumes ≈ 3 hours on the 70% subgraph, GCOMB finishes in 5 seconds.
GCOMB vs. NOISEPRUNER+CELF: NOISEPRUNER+CELF, i.e., running CELF only on non-noisy nodes, is orders of magnitude slower than GCOMB for IM (see Fig. 3d). Pruning noisy nodes does not reduce the graph size; it only reduces the number of candidate nodes. To compute the expected spread in IM, we still require the entire graph, resulting in non-scalability.
Billion-sized graphs: IMM crashes on both billion-sized datasets, TW and FS, as well as on Orkut. Unsurprisingly, similar results have been reported in [2]. IMM strategically samples a subgraph of the entire graph based on the edge weights. On this sampled subgraph, it estimates the influence of a node using reverse reachability sets. On large graphs, the sample size exceeds the RAM capacity of 256GB; hence, it crashes. In contrast, GCOMB finishes within minutes for smaller budgets (b < 30) and within 100 minutes for larger budgets of 100 and 200 (Figs. 3g-3h). This massive scalability of GCOMB is a result of its low storage overhead (only the graph and the GCN and Q-learning parameters; the detailed space complexity is provided in App. D in the Supplementary) and of relying on just forward passes through the GCN and Q-learning. The speed-up with respect to OPIM on billion-sized graphs can be seen in App. J.
Performance on YT and Stack: Since IMM crashes on Orkut, TW, and FS, we compare the quality of GCOMB with IMM on YT and Stack. Table 3a reports the results in terms of spread difference, where Spread Difference = (f(S_IMM) − f(S_GCOMB)) / f(S_IMM) × 100. S_IMM and S_GCOMB are the answer sets computed by IMM and GCOMB respectively. A negative spread difference indicates better performance by GCOMB. The expected spread of a given set of nodes S, i.e., f(S), is computed by taking the average spread across 10,000 Monte Carlo simulations.
Table 3a shows that the expected spreads obtained by the two techniques are extremely close. The true impact of GCOMB is realized when Table 3a is considered in conjunction with Figs. 3b-3c, which show that GCOMB is 30 to 160 times faster than IMM. In this plot, speed-up is measured as time_IMM / time_GCOMB, where time_IMM and time_GCOMB are the running times of IMM and GCOMB respectively.
Similar behavior is observed when comparing against OPIM, as seen in Table 3b and Figs. 3e-3f.
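For reference, the spread-difference metric above can be estimated with a simple Monte Carlo routine such as the sketch below; simulate_spread stands in for a single diffusion-simulation run (e.g., under the IC model) and is an assumed helper, not part of GCOMB.

```python
def expected_spread(graph, seeds, simulate_spread, n_sims=10000):
    """Monte Carlo estimate of f(S): average spread over n_sims simulations.
    `simulate_spread(graph, seeds)` is an assumed single-run simulator."""
    return sum(simulate_spread(graph, seeds) for _ in range(n_sims)) / n_sims

def spread_difference(graph, s_imm, s_gcomb, simulate_spread):
    """(f(S_IMM) - f(S_GCOMB)) / f(S_IMM) * 100; negative values favor GCOMB."""
    f_imm = expected_spread(graph, s_imm, simulate_spread)
    f_gcomb = expected_spread(graph, s_gcomb, simulate_spread)
    return (f_imm - f_gcomb) / f_imm * 100
```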
4.4 Design Choices
Impact of Q-learning: Since the GCN predicts the expected marginal gain of a node, why not simply select the top-b nodes with the highest predicted marginal gains for the given budget b? This is a pertinent question since, as visible in Fig. 3i, the majority of the time in GCOMB is spent on Q-learning. Fig. 3j shows that Q-learning imparts an additional coverage of up to 10%. Improvement (%) is quantified as (Coverage_GCOMB − Coverage_GCN) / Coverage_GCN × 100. Impact of Noise Predictor: Fig. 3k presents the impact of the noise predictor, which is close to a two orders of magnitude reduction in running time. This improvement, however, does not come at the cost of efficacy (Fig. 3l). In fact, the quality improves slightly due to the removal of noisy nodes.
5 Conclusion
S2V-DQN [7] initiated the promising direction of learning combinatorial algorithms on graphs. GCN-TREESEARCH [19] pursued the same line of work and enhanced scalability to larger graphs. However, the barrier to million- and billion-sized graphs remained. GCOMB removes this barrier with a new lightweight architecture. In particular, GCOMB uses a phase-wise mixture of supervised and reinforcement learning. While the supervised component predicts individual node qualities and prunes those that are unlikely to be part of the solution set, the Q-learning architecture carefully analyzes the remaining high-quality nodes to identify those that collectively form a good solution set. This architecture allows GCOMB to generalize to unseen graphs of significantly larger sizes and convincingly outperform the state of the art in efficiency and efficacy. Nonetheless, there is scope for improvement. GCOMB is limited to set combinatorial problems on graphs. In the future, we will explore a bigger class of combinatorial problems, such as sequential and capacity-constrained problems.
Broader Impact
The need to solve NP-hard combinatorial problems on graphs routinely arises in several real-world settings. Examples include facility location problems on road networks [20], strategies to combat rumor propagation in online social networks [3], computational sustainability [8], and health-care [33]. Each of these problems plays an important role in our society. Consequently, designing effective and efficient solutions is important, and our current work is a step in that direction. The major impact of this paper is that good heuristics for NP-hard problems can be learned for large-scale data. While we are not the first to observe that heuristics for combinatorial problems can be learned, we are the first to make them scale to billion-sized graphs, thereby bringing an algorithmic idea to practical use-cases.
Acknowledgments and Disclosure of Funding
The project was partially supported by the National Science Foundation under award IIS-1817046. Further, Sahil Manchanda acknowledges the financial support from the Ministry of Human Resource Development (MHRD) of India and the Department of Computer Science and Engineering, IIT Delhi. | 1. What is the focus and contribution of the paper regarding influence maximization?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and performance improvement?
3. What are the weaknesses of the paper, especially regarding its clarity and consistency?
4. Are there any concerns or questions regarding the heuristics used to improve running time?
5. How does the reviewer assess the overall quality and impact of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper develops a deep reinforcement learning algorithm for influence maximization and related coverage-type problems. The method is compared with greedy and other state-of-the-art methods, in terms of objective value and running time, and shows significant improvement.
Strengths
The approach is pretty interesting. The paper also identifies several heuristics to improve the running time of individual steps in the training and reinforcement learning. The empirical results are pretty impressive
Weaknesses
Several parts of the paper are hard to follow, and there are some inconsistencies in the notions used |
NIPS | Title
BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis
Abstract
Binaural audio plays a significant role in constructing immersive augmented and virtual realities. As it is expensive to record binaural audio in the real world, synthesizing it from mono audio has attracted increasing attention. This synthesis process involves not only the basic physical warping of the mono audio, but also room reverberations and head/ear related filtrations, which, however, are difficult to simulate accurately in traditional digital signal processing. In this paper, we formulate the synthesis process from a different perspective by decomposing the binaural audio into a common part that is shared by the left and right channels as well as a specific part that differs in each channel. Accordingly, we propose BinauralGrad, a novel two-stage framework equipped with diffusion models to synthesize them respectively. Specifically, in the first stage, the common information of the binaural audio is generated with a single-channel diffusion model conditioned on the mono audio, based on which the binaural audio is generated by a two-channel diffusion model in the second stage. Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples. Experiment results show that on a benchmark dataset, BinauralGrad outperforms the existing baselines by a large margin in terms of both objective and subjective evaluation metrics (Wave L2: 0.128 vs. 0.157, MOS: 3.80 vs. 3.61). The generated audio samples3 and code4 are available online.
1 Introduction
Human brains have the ability to decode spatial properties from real-world stereophonic sounds, which helps us to locate and interact with the environment. In artificial spaces such as augmented and virtual reality, generating accurate binaural audio from mono audio is therefore essential to provide immersive environments for listeners. Traditional Digital Signal Processing (DSP) systems generate binaural audio by warping the mono audio according to the time difference between the two ears and by modeling room reverberations (e.g., the room impulse response (RIR)) and head/ear related filtration (e.g., head-related transfer functions (HRTFs)) through a linear time-invariant system [32, 47, 35]. However, they fail to generate accurate and immersive binaural audio due to the non-linear nature of sound propagation and the simplifications made in DSP systems, such as simplified physical modeling of the RIR and the use of generic HRTFs instead of personalized ones. In addition, the exact physical information of the recording environment is not available most of the time.
∗Equal contribution. †This work was conducted at Microsoft. Corresponding author: Xu Tan, xuta@microsoft.com 3https://speechresearch.github.io/binauralgrad 4https://github.com/microsoft/NeuralSpeech/tree/master/BinauralGrad
Instead of following the complicated synthesis process of binaural audio in traditional digital signal processing, which is difficult to model, in this paper we formulate the process from a different and novel perspective. We first decompose the synthesis of binaural audio into two steps: 1) the original mono audio is emitted by the object and then diffuses to the listener; the audio that arrives at the left and right ears is similar, as the distance between the two ears is marginal compared with that between the object and the listener; 2) the binaural audio is then encoded by the human head depending on the marginal difference of the audio received at the two ears [2, 43]. This process is personalized, as the morphology of each listener is unique and changes the internal encoding results. Precisely modeling these two stages is difficult due to the complexity of the physical system. Therefore, as an alternative, given that the two channels of the binaural audio are similar but with marginal differences, we propose to divide the representation of the binaural audio y = (yl, yr) into two parts. One is the general part that contains the common information of the two channels, which is represented as the average ȳ; the other is the specific part that is distinct between the two channels and is ultimately shaped by the human head, denoted as δl and δr. We can then represent the binaural audio as yl = ȳ + δl and yr = ȳ + δr respectively (a small sketch of this decomposition is given below). The common part represents the fundamental information of the source mono audio, while the specific parts contain the differences between the left and right channels caused by room reverberance and the acoustical influence of the human head.
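The decomposition itself involves only simple bookkeeping; the following numpy sketch (on toy data) illustrates it.

```python
import numpy as np

def decompose_binaural(y_left, y_right):
    """Split binaural audio into the common part and per-channel residuals:
    y_l = y_bar + delta_l, y_r = y_bar + delta_r."""
    y_bar = 0.5 * (y_left + y_right)
    return y_bar, y_left - y_bar, y_right - y_bar

# The decomposition is exact: adding the residual back recovers each channel.
y_l, y_r = np.random.randn(2, 48000)      # one second of toy audio at 48 kHz
y_bar, d_l, d_r = decompose_binaural(y_l, y_r)
assert np.allclose(y_bar + d_l, y_l) and np.allclose(y_bar + d_r, y_r)
```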
Based on this representation, we propose a two-stage framework to model the two parts respectively. The common information of the two channels of binaural audio is generated in the first stage, and their differences are generated in the second stage; we therefore term them the first/common and second/specific stages. The common stage is encouraged to generate ȳ, while the specific stage tries to model the differences towards the left and right audio. Specifically, we build our framework on denoising diffusion probabilistic models (diffusion models for short) [12], which have been shown to be effective and efficient in generating high-fidelity audio for speech synthesis [15, 4, 17]. In the common stage, the mono audio is taken as the condition of a single-channel diffusion model, which is supervised by the average over the two channels of the golden binaural speech ȳ, encouraging the model to generate the common information of the two channels. In the specific stage, conditioned on the single-channel output of the first stage, we utilize a two-channel diffusion model to generate the audio for the left and right ears respectively. We provide an illustration of the proposed framework in Figure 1.
The main contributions of this work are summarized as follows:
• We formulate the representation of binaural audio as two parts, and propose a novel two-stage framework to respectively synthesize their common and specific information conditioned on the mono audio.
• Equipped with denoising diffusion probabilistic models, the proposed two-stage framework is able to generate accurate and high-fidelity binaural audio.
• We conduct experiments on the benchmark dataset released in [29], and the proposed framework outperforms existing baselines by a large margin, on both automatic (Wave L2: 0.128 vs. 0.157) and human evaluation (MOS: 3.80 vs. 3.61) metrics.
2 Related Works
2.1 Binaural Audio Synthesis
Different from text-to-speech synthesis, which mainly generates mono speech from text [37, 38], binaural audio synthesis aims to convert mono audio into its binaural version. Based on the physical process of sound rendering, human listening can be generally considered as a source-medium-receiver model [3]. The sound waves emitted by the object travel through the medium, i.e., free air, and are then received by the listener. In the medium, the sound is diffused, reverberated, and affected by physical objects such as walls during propagation. The room impulse response (RIR) [19, 36, 1, 30] describes the distortion effect caused by the surrounding environment using filters. Besides the interaction with the medium, the morphology of the listener (torso, head, and pinna) also changes the sound received by the eardrum. Head-related transfer functions (HRTFs) [3, 6] describe the transformation from the sound coming from a direction in free air to the sound that arrives at the eardrum [18]. Digital Signal Processing (DSP) methods synthesize the binaural speech by combining the RIR and HRTF [47, 35], both of which are expensive and tedious to measure and are missing from most open-source datasets. Therefore, DSP methods usually utilize generic functions instead, resulting in sub-optimal generation results, as the recording environments are highly specialized among different datasets.
Recently, neural rendering approaches have been proposed for binaural speech synthesis. Gebru et al. [10] utilize neural networks to learn HRTFs implicitly. Richard et al. [29] propose a convolutional neural network based model with an additional neural time warping module to learn the time shifts from mono to binaural audio. Some works incorporate extra modalities, such as visual information, to help the synthesis of binaural speech [46, 45, 26]. In this work, we consider the most basic scenario, where only the mono audio is available.
2.2 Denoising Diffusion Probabilistic Models
Denoising diffusion probabilistic models (diffusion models for short) have achieved state-of-the-art (SOTA) generation results in various tasks, including image generation [34, 22, 8, 7, 33, 39, 44], super-resolution image generation [13, 31, 41, 25], text-to-image generation [23, 11, 14, 28], text-to-speech synthesis [4, 15, 27, 17, 16, 5], and speech enhancement [20, 21, 42]. In particular, in audio synthesis, diffusion models have shown a strong ability to model both spectrogram features [27, 17] and raw waveforms [4, 15, 5].
Diffusion models require a large number of inference steps to achieve high sample quality. For high-resolution data generation, some works [8, 13, 28] even cascade multiple diffusion models together and achieve SOTA quality in text-to-image generation [28]. In this cascaded structure, multiple diffusion models are trained at different resolution scales; at inference time, low-resolution data samples are first generated with the base diffusion model, and stacked conditional diffusion models are then employed for super resolution, so that high-fidelity results can be achieved with moderate time cost.
In this paper, for generating binaural audio waveform with high resolution (48kHz sampling rate), we integrate diffusion models into the proposed two-stage framework, and generate the common and specific information of binaural audio with single- and two-channel diffusion models respectively.
3 Methods
We introduce the proposed framework BinauralGrad in this section. We start with the preliminary knowledge about binaural audio and geometric warping, and then introduce the proposed two-stage framework as well as the structure of the diffusion models.
3.1 Preliminary
Problem Definition Given the source mono audio x ∈ RN with length N and the relative position p between the source and the listener, we aim to generate the binaural audio y = (yl, yr), which contains two channels of audio for the left and right ears, through
(yl(n), yr(n)) = f(x(n), p), (1)
where yl and yr ∈ RN , and f indicates the transformation function parameterized by the proposed framework. We denote the average of the binaural audio as ȳ = mean(yl, yr).
Specifically, the relative position can be described by a tuple p = (ps, pα), where ps = (px, py, pz) is the spatial position indicated by the coordinates, and pα = (qx, qy, qz, qw) is a quaternion that indicates the head orientation from the listener to the source. The listener is stationary and placed at the origin of the coordinate system, so the (px, py, pz) axes indicate the front, right, and up directions respectively. Besides, we also denote the spatial positions of the left and right ears of the listener as p^l_lstn and p^r_lstn respectively.
A simple way to align the temporal differences between the left and right ears is geometric warping, a non-parametric method conditioned on the distance between the source and the listener:
ρ(n) = n− C · ∥psrc(n)− plstn(n)∥, (2)
where n indicates the current time stamp and C is a constant calculated as the ratio between the audio sampling rate and the speed of sound. As the predicted warpfield ρ(n) usually takes fractional values, the warped signals can be computed with linear interpolation:
xwarp(n) = (⌈ρ(n)⌉ − ρ(n)) · x⌊ρ(n)⌋ + (ρ(n)− ⌊ρ(n)⌋) · x⌈ρ(n)⌉, (3)
where ⌈·⌉ and ⌊·⌋ indicate the ceiling and floor functions respectively. The warping for the left and right ears can be obtained by changing the position plstn(n), and the warped binaural audio is denoted as (x^l_warp, x^r_warp), which is of low quality because it does not take the diffraction of audio into account.
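A numpy sketch of this geometric warping (Eqs. 2 and 3) is shown below; the sampling rate, speed of sound, and array layout are illustrative assumptions.

```python
import numpy as np

def geometric_warp(x, src_pos, ear_pos, sample_rate=48000, speed_of_sound=343.0):
    """Warp mono audio by the source-to-ear propagation delay (Eqs. 2-3).

    `x` is the mono signal of length N; `src_pos` and `ear_pos` are (N, 3)
    trajectories. C converts meters to samples; fractional indices are
    linearly interpolated."""
    n = np.arange(len(x), dtype=np.float64)
    c = sample_rate / speed_of_sound
    rho = n - c * np.linalg.norm(src_pos - ear_pos, axis=-1)     # Eq. 2
    rho = np.clip(rho, 0, len(x) - 1)
    lo, hi = np.floor(rho).astype(int), np.ceil(rho).astype(int)
    frac = rho - lo
    return (1.0 - frac) * x[lo] + frac * x[hi]                   # Eq. 3
```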
3.2 Overview of the Framework
The proposed framework contains two stages, and we train two distinct diffusion models, one for each stage, parameterized by θc and θs respectively. In the first stage, the model θc aims to synthesize the common information of the binaural audio supervised by ȳ, i.e., we treat the mono average of the two channels as the golden supervision of the common information. In the second stage, conditioned
on the synthesized audio yc of the first stage, a two-channel diffusion model θs tries to synthesize the golden binaural audio y = (yl, yr). We provide an illustration of the proposed framework in Figure 2. We first introduce the backbone diffusion model and then the details of the two stages below.
3.2.1 Conditional Diffusion Models
Diffusion models are score-based generative models [12, 34, 8]. They are composed of a forward process and a reverse process. In the training stage, the forward process gradually destroys the data samples into Gaussian noise with a large number of time steps. At each time step t, the model learns a score function that denotes the gradient information. Then, in the reverse process, with learned score functions at predefined inference noise schedules, the model can generate clean data samples from Gaussian noise in an iterative denoising process.
The forward process converts the data samples z0 into the isotropic Gaussian noise ϵ ∼ N (0, I) with a predefined variance schedule 0 < β1 < · · · < βt < · · · < βT < 1. The latent representations at time step t can be directly calculated with:
q(zt|z0) = N (zt; √ ᾱtz0, (1− ᾱt)ϵ), (4)
where αt := 1 − βt, and ᾱt := ∏_{s=1}^{t} αs denotes the corresponding noise level at time step t.
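For concreteness, the closed-form forward sampling of Eq. 4 can be sketched in PyTorch as follows; the variance-schedule values are illustrative and need not match the schedule used in the paper.

```python
import torch

T = 200                                        # diffusion steps used during training
betas = torch.linspace(1e-4, 0.05, T)          # illustrative variance schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)      # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(z0, t, noise):
    """Sample z_t ~ q(z_t | z_0) in closed form (Eq. 4)."""
    a_bar = alpha_bars[t]
    return a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise
```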
In sampling, the reverse process starts from the isotropic Gaussian noise p(zT ) ∼ N (0, I), and iteratively denoises the generated samples toward clean data z0 with the conditioning information c :
pθ(z0, · · · , zT−1 | zT , c) = ∏_{t=1}^{T} pθ(zt−1 | zt, c). (5)
Providing strong conditioning information c is usually helpful to reduce the number of inference steps and improve the generation quality. The Gaussian transition probability of each inference step is parameterized as:
pθ(zt−1|zt) = N (zt−1, µθ(zt, t, c), σ2θI), (6)
where the variance σ_θ² is usually predefined as (1 − ᾱ_{t−1})/(1 − ᾱ_t) · βt or βt. Following previous works [12, 15], we use the former in this work. The model is trained by maximizing the variational lower bound of the likelihood pθ(z0). When we parameterize the mean function as:
µθ(zt, t, c) = (1/√αt) · (zt − (βt/√(1 − ᾱt)) · ϵθ(zt, t, c)), (7)
a reweighted training objective is usually adopted in practice [12, 4, 15]: LD(θ) = E_{z0,ϵ,t} ∥ϵ − ϵθ(√ᾱt z0 + √(1 − ᾱt) ϵ, t, c)∥²₂. (8)
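Combining Eq. 4 with the objective above, one training step can be sketched as follows; eps_model denotes the conditional network ϵθ and its interface is an assumption made for illustration.

```python
import torch

def diffusion_loss(eps_model, z0, cond, alpha_bars):
    """Reweighted training objective of Eq. 8: predict the injected noise.

    `eps_model(z_t, t, cond)` is the conditional network epsilon_theta;
    `alpha_bars` holds \bar{alpha}_t for every step (see the sketch after Eq. 4)."""
    t = torch.randint(0, len(alpha_bars), (z0.shape[0],))
    noise = torch.randn_like(z0)
    a_bar = alpha_bars[t].view(-1, *([1] * (z0.dim() - 1)))
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise      # Eq. 4
    return ((noise - eps_model(z_t, t, cond)) ** 2).mean()
```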
Human speech signals usually have more energy in the low-frequency region, while Gaussian noise has equal intensity at all frequencies. Hence, in the forward process, the high-frequency information is destroyed first, and the low-frequency information is destroyed as the time step t increases. Correspondingly, in the reverse process, the low-frequency information is generated first, and details are then iteratively refined in the following inference steps, which is suitable for accurate and high-fidelity speech waveform generation.
3.2.2 Details of Two Stages
With the understanding of the iterative refinement mechanism of diffusion models, we integrate them into the proposed two-stage framework, in which a single-channel diffusion model generates the common information of binaural audio in the first stage, while a two-channel diffusion model is then used for modelling the specific information of both left ear and right ear.
In the first stage, the average ȳ of the golden binaural audio is considered as the clean data in the forward process. In this way, the diffusion model is encouraged to generate the common information of the two channels. The condition information is consistent between the forward and reverse processes. Concretely, the model takes the average of the warped binaural audio x̄ = mean(x^l_warp, x^r_warp) as the condition information (defined in Equation 3). In addition to the mono audio and position, the condition of the first stage can be denoted as c1 = (ps, pα, x̄, x). The position and audio information are processed separately by the model, which is introduced in detail in Section 3.3.
The second stage aims at synthesizing the binaural audio through a two-channel diffusion model; therefore, the golden audio y = (yl, yr) is taken as the clean data for the two channels respectively. Different from the first stage, the condition information differs between the forward and reverse processes, as it includes the generation result of the first stage. The average ȳ of the golden binaural audio is known in the forward process; therefore, along with the warped binaural audio, the condition information can be written as c2 = (ps, pα, x^l_warp, x^r_warp, x, ȳ). For the reverse process, the generation result yc of the first stage is used to replace the golden average, which is unknown at inference, i.e., c2 = (ps, pα, x^l_warp, x^r_warp, x, yc).
During training, the two stages are trained separately by minimizing the regression loss defined in Equation 8 with their respective clean data and conditions. At inference, the common stage first generates the mono audio yc, conditioned on which the specific stage synthesizes the binaural audio, which is the final output of the framework (a sketch of the condition construction is given below). There exists an inconsistency between the training and inference of the second stage due to the different conditions; fine-tuning the second stage conditioned on the output of the first stage may alleviate this problem, and we leave it for future work.
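The practical difference between the two stages lies mainly in how the condition is assembled; the sketch below (with assumed tensor shapes and names) summarizes this.

```python
import torch

def stage1_condition(pos, x_warp_l, x_warp_r, x_mono):
    """c1 = (p_s, p_alpha, x_bar, x): position plus the average of the warped
    channels and the mono audio (all audio tensors are 1-D waveforms)."""
    x_bar = 0.5 * (x_warp_l + x_warp_r)
    return {"position": pos, "audio": torch.stack([x_bar, x_mono], dim=0)}

def stage2_condition(pos, x_warp_l, x_warp_r, x_mono, common):
    """c2 = (p_s, p_alpha, x_warp^l, x_warp^r, x, common); `common` is the golden
    average y_bar during training and the stage-1 output y_c at inference."""
    return {"position": pos,
            "audio": torch.stack([x_warp_l, x_warp_r, x_mono, common], dim=0)}
```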
3.3 Model Architecture
The model architecture of our diffusion model is shown in Figure 3. The diffusion models utilized in the two stages have similar architectures, except that the numbers of input, output, and condition channels are one and two for the common and specific stages respectively. We introduce a Conditioner component to process the condition signals introduced in the previous sections. Specifically, we utilize two separate convolution blocks to process the position and the conditional audio respectively, and the output features are concatenated and then processed by a final convolution layer. The conditional audio is the concatenation of [x̄, x] and [x^l_warp, x^r_warp, x, yc] for the two stages respectively. The output of the Conditioner is taken as the representation of the condition signals.
The rest of our model generally follows the design in [15]. The main network is based on the bidirectional variant of the dilated convolution layer [24], given the non-autoregressive nature of the task. The architecture is constructed from M residual blocks, with m dilated convolution layers in each block. The diffusion step t is represented by a positional encoding [40] and encoded by two fully connected layers before being fed to the network; in each block, the diffusion step is further transformed by another fully connected layer that is unique to that block, and the resulting representation is added to the input of the dilated convolution layer. The conditional representation produced by the Conditioner is added to the output of the dilated layer to bring in the conditional information. Finally, through skip connections, the outputs of the residual blocks are gathered to produce the final output of the model.
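The following is a compressed PyTorch sketch of the Conditioner and one residual layer; channel sizes, kernel widths, and the omission of gating and skip-connection details are simplifying assumptions relative to the actual model.

```python
import torch
import torch.nn as nn

class Conditioner(nn.Module):
    """Encodes position and conditional audio separately, then fuses them.
    Assumes the position signal is upsampled to the audio length."""
    def __init__(self, pos_ch, audio_ch, hidden=128):
        super().__init__()
        self.pos_conv = nn.Conv1d(pos_ch, hidden, kernel_size=3, padding=1)
        self.audio_conv = nn.Conv1d(audio_ch, hidden, kernel_size=3, padding=1)
        self.fuse = nn.Conv1d(2 * hidden, hidden, kernel_size=1)

    def forward(self, pos, audio):
        return self.fuse(torch.cat([self.pos_conv(pos), self.audio_conv(audio)], dim=1))

class ResidualBlockLayer(nn.Module):
    """One dilated-convolution layer with diffusion-step and condition injection."""
    def __init__(self, hidden=128, dilation=1):
        super().__init__()
        self.step_proj = nn.Linear(hidden, hidden)
        self.dilated = nn.Conv1d(hidden, hidden, kernel_size=3,
                                 padding=dilation, dilation=dilation)

    def forward(self, h, step_emb, cond):
        h = h + self.step_proj(step_emb).unsqueeze(-1)   # add diffusion-step embedding
        return self.dilated(h) + cond                    # add Conditioner output
```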
4 Experiments
Dataset We utilize the dataset released in [29]5 as it is the largest binaural dataset captured in the wild up to now. It contains 2 hours of mono-binaural parallel audio data at 48kHz from eight different objects, recorded in a regular room. The position and orientation between the objects and the listener are tracked at 120Hz and aligned with the audio. We follow the original split of training/valid/test sets to make our results comparable.
Baselines The proposed framework is denoted as BinauralGrad, and we consider the following baselines. DSP is a binaural rendering approach that simulates the spatial acoustic effects of sound sources with dynamic virtual positions. DSP utilizes the RIR to model the room reverberance and the HRTF to model the acoustical influence of the human head. We perform RIR simulation with an open-sourced tool6 according to the room information provided in the original paper [29]. The HRTF data come from [9] and were measured at a distance of 1.4 meters using KEMAR, a manikin for acoustic testing. We follow the procedure in7 to simulate the binaural sound. As the exact HRTF and RIR are not provided in the dataset, the DSP results are worse than those reported in the original paper [29]. WaveNet [24], where the positions of the source and listener are used as condition signals and fed into a ConvNet to generate binaural audio from the mono one. WarpNet [29], which originally proposed the dataset, stacking a neural time warping module with a temporal ConvNet to generate binaural audio. For all baselines, we adjust their hyper-parameters to make the number of parameters comparable with our model to ensure fair comparisons.
Evaluation Metrics We use the following metrics to evaluate the quality of the synthesized binaural audio. Wave L2, the mean squared error between the synthesized binaural audio and the golden binaural recording. Amplitude L2 and Phase L2, the mean squared errors between the synthesized binaural speech and the binaural recording on the amplitude and phase respectively, after performing a Short-Time Fourier Transform (STFT) on the waveform. PESQ, the perceptual evaluation of speech quality8, widely used in speech enhancement. MRSTFT, the multi-resolution spectral loss9, which takes the spectral convergence, log magnitude loss, and linear magnitude loss into consideration. Except for PESQ, which is higher-is-better, the remaining metrics are lower-is-better. Besides these objective metrics, we also conduct a subjective evaluation (Mean Opinion Score, MOS) to assess the audio quality intuitively.
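A rough numpy sketch of the waveform- and STFT-domain L2 metrics is given below; the FFT size, hop length, and window are illustrative and need not match the paper's exact settings.

```python
import numpy as np

def wave_l2(pred, ref):
    """Mean squared error between synthesized and recorded binaural waveforms."""
    return float(np.mean((pred - ref) ** 2))

def stft_l2(pred, ref, n_fft=1024, hop=256):
    """Amplitude and phase L2 after an STFT (numpy FFT; parameters are illustrative)."""
    def stft(x):
        frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
        return np.fft.rfft(frames * np.hanning(n_fft), axis=-1)
    P, R = stft(pred), stft(ref)
    amp_l2 = float(np.mean((np.abs(P) - np.abs(R)) ** 2))
    phase_l2 = float(np.mean((np.angle(P) - np.angle(R)) ** 2))
    return amp_l2, phase_l2
```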
Training Configurations As described in Section 3.3, BinauralGrad consists of M = 3 residual blocks, each of which has m = 10 bidirectional dilated convolution layers. The hidden size and the dimension of the diffusion step encoding are both set to 128. We train BinauralGrad on 8 Nvidia V100 GPUs for 1M steps, and the diffusion steps are set to 200 and 6 during training and inference respectively, mainly following previous diffusion works [15].
We introduce the experimental results in the rest of this section. We first compare the proposed BinauralGrad with baseline systems in terms of objective metrics, and then verify the effectiveness of the proposed two-stage framework by comparing it with a single-stage diffusion model. In addition, a subjective MOS test is utilized to show the intuitive performance of the model. Finally, to better understand the two-stage model, we conduct a thorough analysis of the output of each stage.
5https://github.com/facebookresearch/BinauralSpeechSynthesis 6https://github.com/sunits/rir_simulator_python 7http://spatialaudio.net/ssr 8https://github.com/aliutkus/speechmetrics 9https://github.com/csteinmetz1/auraloss
4.1 Main Results
We report the objective metrics of BinauralGrad and other binaural speech synthesis baselines in Table 1. The proposed framework performs consistently better than the compared baselines across different metrics. Specifically, we have several observations:
• Our method outperforms the main baseline WarpNet on Wave L2 by a large margin, showing that BinauralGrad can synthesize better binaural waveforms that are closer to the golden recordings.
• Since there is some randomness in the waveform, metrics on the STFT of the waveform are important to evaluate the overall quality of binaural speech synthesis. Our method surpasses the baseline methods by a large margin in terms of Amplitude L2 and MRSTFT, and is slightly better than WarpNet on Phase L2.
• PESQ is a widely used metric on the perceptual evaluation of speech quality. The proposed BinauralGrad achieves 2.759 PESQ score, which is significantly better than other baseline systems. The naturalness of our method is further verified by the MOS results listed in Table 3.
In conclusion, the objective metrics show that the proposed two-stage framework is able to generate more accurate binaural audio than existing baselines.
4.2 Comparison with Single Stage Diffusion Model
We compare BinauralGrad with a single-stage diffusion model to verify the advantage of the two-stage framework. The architecture of the single-stage diffusion model is similar to the two-channel diffusion model in the second stage of BinauralGrad, except for two differences: 1) the single-stage model does not use any information from the common stage, i.e., its condition is c = (ps, pα, x^l_warp, x^r_warp, x); and 2) the single-stage model is two times deeper so as to make the number of parameters comparable.
The comparison of BinauralGrad with the single-stage diffusion model is shown in Table 2. The results show that the two-stage model performs better than the single-stage one on most metrics, including PESQ and the L2 losses of waveform, amplitude, and phase, while achieving comparable MRSTFT scores, demonstrating the effectiveness of the two-stage framework. It is worth noting that the single-stage model already outperforms the major baseline WarpNet on most metrics except Phase L2, as shown in Table 1, illustrating the stronger ability of diffusion models in synthesizing raw audio waveforms compared to purely convolution-based models.
4.3 MOS Test
We conduct three types of MOS tests to verify the quality of the synthesized audio with human evaluation: 1) MOS, where judges are asked to rate the overall naturalness and fluency of the synthesized audio; 2) Similarity MOS, where judges are asked to rate the similarity between the synthesized results and the golden binaural recordings; and 3) Spatial MOS, where judges are asked to rate the sense of direction contained in the synthesized audio. For all MOS tests, the judges give a discrete score from 1 to 5, and a higher score indicates better quality. The MOS results together with the 95% confidence intervals are shown in Table 3.
Firstly, the proposed BinauralGrad achieves a quite high MOS score of 3.80, which not only surpasses all baselines but also slightly outperforms the recording, illustrating the strong ability of the proposed framework in synthesizing natural audio waveforms. Secondly, the similarity MOS shows that BinauralGrad synthesizes the binaural audio closest to the golden binaural recording among all compared models, which is consistent with the conclusions from the objective metrics. Finally, the results on the spatial MOS show that DSP, WarpNet, and BinauralGrad achieve similar performance in conveying the spatial sense, which might be attributed to the fact that all three methods utilize the physical warping process, which brings in the interaural time difference (ITD) and results in the sense of direction [43].
4.4 Analysis
In this section, we conduct a thorough analysis to verify the effectiveness of each stage in our framework and try to identify the potential bottleneck, which may point to future work.
In each stage, the diffusion model synthesizes audio conditioned on different information. Therefore, to verify the effectiveness of the model in each stage, we evaluate the performance of the condition and the synthesized audio separately to illustrate the improvements brought by the model. Specifically, in addition to the synthesized audio from the two stages (ID 3 and 5), we also calculate the performance of the physical warping results (i.e., (x^l_warp, x^r_warp), ID 2) and the average ȳ of the golden binaural audio. Note that, as the output of the first stage and ȳ are both mono audio, we calculate their scores by duplicating them into binaural form, e.g., (ȳ, ȳ) for ID 4 and (yc, yc) for ID 3.
The results are listed in Table 4. Firstly, we find that the performance of the physical warping (ID 2) is poor, as it only brings gains on Wave L2 and remains consistent on the other metrics compared to the mono audio (ID 1). Secondly, the improvements from ID 2 to 3 and from 3 to 5 are brought by the first and second stages of the proposed model respectively, verifying the effectiveness and necessity of both stages.
In addition, we try to explore the boundary of the framework by directly feeding the perfect condition to the second stage while inference, i.e., instead of conditioning on yc which is synthesized by the first stage, we utilize the golden average ȳ as the condition which is unavailable in inference to test the best performance of the model. As a result, we can find that the second stage achieves an extremely good result and brings a large improvement over all metrics compared with the given condition,
showing that the model in the second stage has been well trained. On the contrary, there still exists a large margin between the synthesized result from the first stage and the label (ID 3 vs. ID 4). In conclusion, the results in Table 4 show that the first stage is the bottleneck of the framework, which points out potential future work directions such as the end-to-end optimization of the two stages.
5 Conclusion
In this paper, we propose BinauralGrad, a two-stage framework for binaural audio synthesis conditioned on mono audio. Specifically, we formulate the synthesis process from a novel perspective and divide the binaural audio into a common part that is shared by the two channels as well as channel-specific parts. Accordingly, a single-channel diffusion model is utilized to generate the common information in the first stage, conditioned on which a two-channel diffusion model synthesizes the binaural audio. The proposed framework is able to synthesize accurate and high-fidelity audio samples. On a benchmark dataset, BinauralGrad achieves state-of-the-art results in both objective and subjective evaluation metrics. In the future, we plan to improve the training of the two stages in an end-to-end way for better performance, or to speed up the inference of BinauralGrad, which is important for the online deployment of our method. A negative impact of our method might be its abuse, e.g., generating fake binaural audio to mislead users in VR-based human-computer interactive games, resulting in physical injury. | 1. What is the focus and contribution of the paper on sound generation?
2. What are the strengths of the proposed approach, particularly in terms of its design and experimental results?
3. Do you have any concerns or questions regarding the system's ability to directly generate binaural signals?
4. What are the limitations of the method, especially regarding computation complexity?
5. How does the reviewer assess the clarity and quality of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
In this paper, the authors propose a diffusion-based system that can directly generate binaural signals. Based on analysis of the way sound propagates and humans perceive spatial impressions of sound, the authors designed a two-step system. The proposed system is evaluated by both objective measures and subjective evaluations, both of which shows that the proposed system is trained successfully.
Strengths And Weaknesses
Strength: The proposed system is designed based on a nice intuition. It’s novel, and effective. The design and result of the experiment are solid. The writing is good, too.
Weakness: Directly synthesizing the binaural signals is not practical as of now given its high computation complexity. (But I do not think this is a critical weakness)
Questions
L162-163: Just curious, Why? White noise has equal energies all over the frequency. Is it bc the test signals have, due to our auditory perception, more energy on the low-frequency region? It could be useful to discuss this
In general: There are many symbols (x, y, with bars and hats etc). I found it quite confusing at the beginning and once misunderstood something was wrong.
Limitations
(Mentioned in the Weakness) |
NIPS | Title
BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis
Abstract
Binaural audio plays a significant role in constructing immersive augmented and virtual realities. As it is expensive to record binaural audio from the real world, synthesizing them from mono audio has attracted increasing attention. This synthesis process involves not only the basic physical warping of the mono audio, but also room reverberations and head/ear related filtrations, which, however, are difficult to accurately simulate in traditional digital signal processing. In this paper, we formulate the synthesis process from a different perspective by decomposing the binaural audio into a common part that shared by the left and right channels as well as a specific part that differs in each channel. Accordingly, we propose BinauralGrad, a novel two-stage framework equipped with diffusion models to synthesize them respectively. Specifically, in the first stage, the common information of the binaural audio is generated with a single-channel diffusion model conditioned on the mono audio, based on which the binaural audio is generated by a two-channel diffusion model in the second stage. Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., the diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples. Experiment results show that on a benchmark dataset, BinauralGrad outperforms the existing baselines by a large margin in terms of both object and subject evaluation metrics (Wave L2: 0.128 vs. 0.157, MOS: 3.80 vs. 3.61). The generated audio samples3 and code4 are available online.
1 Introduction
Human brains have the ability to decode spatial properties from real-world stereophonic sounds, which helps us to locate and interact with the environment. While in artificial spaces such as augmented and virtual reality, generating accurate binaural audio from the mono one is therefore essential to provide
∗Equal contribution. †This work was conducted at Microsoft. Corresponding author: Xu Tan, xuta@microsoft.com 3https://speechresearch.github.io/binauralgrad 4https://github.com/microsoft/NeuralSpeech/tree/master/BinauralGrad
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
immersive environments for listeners. Traditional Digital Signal Processing (DSP) systems generate binaural audio by warping the mono audio according to the time difference between two ears and modeling room reverberations (e.g., room impulse response (RIR)) and head/ear related filtration (e.g., head-related transfer functions (HRTF)) through a linear time-invariant system [32, 47, 35]. However, they fail to generate accurate and immersive binaural audio due to the non-linear nature of sound propagation and the simplification in DSP systems, such as simplifying the physical modeling of RIR and utilizing general HRTFs instead of personalized. In addition, the exact physical information of the recording environment is not available most of the time.
Instead of following the complicated synthesis process of binaural audio in traditional digital signal processing which is difficult to model, in this paper, we formulate the process from a different and novel perspective. We first decompose the synthesis of binaural audio into two steps: 1) the original mono audio is emitted by the object and then diffuses to the listener. The audio that arrives at the left and right ears are similar, as the distance between the two ears is marginal compared with that between the object and the listener; 2) the binaural audio is then encoded by human head depending on the marginal difference of audio received at two ears [2, 43]. This process is personalized as the morphology of each listener is unique, which will change the internal encoding results. Precisely processing the two stages is difficult due to the complexity of physical system. Therefore, as an alternative, given that the two channels of the binaural audio is similar but with marginal differences, we propose to divide the the representation of the binaural audio y = (yl, yr) as two parts. One is the general part that contains the common information of two channels, which are represented as the average ȳ; the other one is the specific part that is distinct between the two channels encoded by human head in the end, which are denoted as δl and δr. Then we can represent the binaural audio as yl = ȳ + δl and yr = ȳ + δr respectively. The common part represents the fundamental information of the source mono audio, while the specific parts contain the difference between the left and right channels caused by room reverberance and the acoustical influence of human head.
Based on the representation, we propose a two-stage framework to model the two parts respectively. The common information of the two channels of binaural audio is generated in the first stage, and their difference are generated in the second stage. We therefore term them as first/common and second/specific stages. The common stage is encouraged to generate ȳ while the specific stage tries to model the difference to the left and right audio. Specifically, we build our framework on denoising diffusion probabilistic models (diffusion models for short) [12], which have been shown effective and efficient to generate high-fidelity audio on the task of speech synthesis [15, 4, 17]. In the common stage, the mono audio is taken as the condition of a single-channel diffusion model, which is supervised by the average over the two channels of the golden binaural speech ȳ, encouraging the model to generate the common information of two channels. And in the specific stage, conditioned on the single-channel output of the first stage, we utilize a two-channel diffusion model to generate the audio for the left and right ears respectively. We provide an illustration of the proposed framework in Figure 1.
The main contributions of this work are summarized as follows:
• We formulate the representation of binaural audio as two parts, and propose a novel two-stage framework to respectively synthesize their common and specific information conditioned on the mono audio. • Equipped with denoising probabilistic diffusion models, the proposed two-stage framework is able to generate accurate and high-fidelity binaural audio. • We conduct experiments on the benchmark dataset released in [29], and the proposed framework outperforms existing baselines by a large margin, both on automatic (Wave L2: 0.128 vs. 0.157) and human evaluation (MOS: 3.80 vs. 3.61) metrics.
2 Related Works
2.1 Binaural Audio Synthesis
Different from text to speech synthesis that mainly generates mono speech from text [37, 38], binaural audio synthesis aims to convert mono audio into its binaural version. Based on the physical process of sound rendering, human listening can be generally considered as a source-medium-receiver model [3]. The sound waves emitted by the object will travel through the medium, i.e., free air and then be received by listener. In medium, the sound will be diffused, reverberated and effected by the physical objects such as walls during the propagation. Room impulse response (RIR) [19, 36, 1, 30] describes the distortion effect caused by surrounding environment using filters. Besides the interaction with medium, the morphology of listener (torso, head, and pinna) will also change the sound received by the eardrum. The head-related transfer functions (HRTF) [3, 6] are utilized to describe the transformation of the sound from a direction in free air to that it arrives at the eardrum [18]. Digital Signal Processing (DSP) methods synthesize the binaural speech by combining RIR and HRTF [47, 35], both of which are expensive and tedious to measure, as well as missed from most open-sourced datasets in addition. Therefore, DSP methods usually utilize generic functions instead, resulting in sub-optimal generation results as the recording environments are highly specialized among different datasets.
Recently, neural rendering approaches are proposed for binaural speech synthesis. Gebru et al. [10] utilizing neural networks to learn HRTFs implicitly. Richard et al. [29] propose a convolution neural network based model with an additional neural time warping module to learn the time shifts from the mono to binaural audio. Some works incorporate extra modalities such as the visual information to help the synthesis of binaural speech [46, 45, 26]. In this work, we consider the most basic scenario where only the mono audio is available.
2.2 Denoising Diffusion Probabilistic Models
Denoising diffusion probabilistic models (diffusion models for short) have achieved the state-of-theart (SOTA) generation results in various tasks, including image [34, 22, 8, 7, 33, 39, 44] and super resolution image generation [13, 31, 41, 25], text-to-image generation [23, 11, 14, 28], text-to-speech synthesis [4, 15, 27, 17, 16, 5] and speech enhancement [20, 21, 42]. Especially, in audio synthesis, diffusion models have shown strong ability in modelling both spectrogram features [27, 17] and raw waveforms [4, 15, 5].
Diffusion models require a large number of inference steps to achieve high sample quality. For highresolution data generation, some works [8, 13, 28] even cascade multiple diffusion models together and achieve the SOTA quality in text-to-image generation [28]. In this cascaded structure, multiple diffusion models are trained at different resolution scales, and while inference, low-resolution data samples are firstly generated with the base diffusion model, and then stacked conditional diffusion models are employed for super resolution, where high-fidelity results can be achieved with moderate time cost.
In this paper, for generating binaural audio waveform with high resolution (48kHz sampling rate), we integrate diffusion models into the proposed two-stage framework, and generate the common and specific information of binaural audio with single- and two-channel diffusion models respectively.
3 Methods
We introduce the proposed framework BinauralGrad in this section. We start with the preliminary knowledge about binaural audio and geometric warping, and then introduce the proposed two-stage framework as well as the structure of the diffusion models.
3.1 Preliminary
Problem Definition Given the source mono audio x ∈ R^N with length N and the relative position between the source and listener p, we aim to generate the binaural audio y = (yl, yr) that contains two channels of audio for the left and right ears through
(yl(n), yr(n)) = f(x(n), p), (1)
where yl and yr ∈ R^N, and f indicates the transformation function parameterized by the proposed framework. We denote the average of the binaural audio as ȳ = mean(yl, yr).
Specifically, the relative position can be described with a tuple p = (ps, pα), where ps = (px, py, pz) is the spatial position indicated by coordinates, and pα = (qx, qy, qz, qw) is a quaternion that indicates the head orientation from the listener to the source. The listener is stationary and placed at the origin of the coordinate system, and the (px, py, pz) axes therefore indicate the front, right and up directions respectively. Besides, we also denote the spatial positions of the left and right ears of the listener as p^l_lstn and p^r_lstn respectively.
A simple way to align the temporal differences between the left and right ears is geometric warping, a non-parametric method conditioned on the distance between the source and the listener:
$\rho(n) = n - C \cdot \lVert p_{src}(n) - p_{lstn}(n) \rVert$, (2)
where n indicates the current time-stamp and C is a constant calculated as the ratio between the audio sampling rate and the speed of sound. As the predicted warpfield ρ(n) usually takes non-integer values, the warped signal can be computed with linear interpolation:
$x_{warp}(n) = (\lceil \rho(n) \rceil - \rho(n)) \cdot x_{\lfloor \rho(n) \rfloor} + (\rho(n) - \lfloor \rho(n) \rfloor) \cdot x_{\lceil \rho(n) \rceil}$, (3)
where ⌈·⌉ and ⌊·⌋ indicate the ceiling and floor functions respectively. The warping for the left and right ears can be obtained by changing the position p_lstn(n), and the warped binaural audio is denoted as (x^l_warp, x^r_warp), which is of low quality since it does not consider the diffraction of audio.
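To make the warping step concrete, below is a minimal NumPy sketch of Equations 2 and 3. The function name, array layout, and the handling of array bounds are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def geometric_warp(x, src_pos, lstn_pos, sample_rate=48000, speed_of_sound=343.0):
    """Warp a mono signal x (shape [N]) toward a listener position.

    src_pos, lstn_pos: arrays of shape [N, 3] with per-sample positions.
    Implements rho(n) = n - C * ||p_src(n) - p_lstn(n)|| (Eq. 2) followed by
    linear interpolation between the two neighbouring samples (Eq. 3).
    """
    n = np.arange(len(x))
    C = sample_rate / speed_of_sound                # samples per metre of travel
    dist = np.linalg.norm(src_pos - lstn_pos, axis=-1)
    rho = np.clip(n - C * dist, 0, len(x) - 1)      # fractional read positions
    lo = np.floor(rho).astype(int)
    hi = np.ceil(rho).astype(int)
    w = rho - lo                                    # interpolation weight
    return (1.0 - w) * x[lo] + w * x[hi]

# Hypothetical usage: warp the same source once per ear position.
# x_warp_l = geometric_warp(x, src_pos, left_ear_pos)
# x_warp_r = geometric_warp(x, src_pos, right_ear_pos)
```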
3.2 Overview of the Framework
The proposed framework contains two stages, and we train a distinct diffusion model for each stage, parameterized by θc and θs respectively. In the first stage, the model θc aims to synthesize the common information of the binaural audio supervised by ȳ, i.e., we treat the mono average of the two channels as the golden supervision of the common information. And in the second stage, conditioned
on the synthesized audio yc of the first stage, a two-channel diffusion model θs tries to synthesize the golden binaural audio y = (yl, yr). We provide an illustration of the proposed framework in Figure 2. We will first introduce the backbone diffusion model and then the details of two stages as below.
3.2.1 Conditional Diffusion Models
Diffusion models are score-based generative models [12, 34, 8]. They are composed of a forward process and a reverse process. In the training stage, the forward process gradually corrupts the data samples into Gaussian noise over a large number of time steps. At each time step t, the model learns a score function that captures the gradient information. Then, in the reverse process, with the learned score functions and predefined inference noise schedules, the model can generate clean data samples from Gaussian noise in an iterative denoising process.
The forward process converts the data samples z0 into the isotropic Gaussian noise ϵ ∼ N (0, I) with a predefined variance schedule 0 < β1 < · · · < βt < · · · < βT < 1. The latent representations at time step t can be directly calculated with:
$q(z_t \mid z_0) = \mathcal{N}\big(z_t; \sqrt{\bar{\alpha}_t}\, z_0, (1-\bar{\alpha}_t)\mathbf{I}\big)$, (4)
where $\alpha_t := 1-\beta_t$, and $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$ denotes a corresponding noise level at time step t.
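As a quick illustration of Equation 4, a noisy latent at an arbitrary step t can be drawn in closed form. The linear beta schedule below is an assumed placeholder, not the schedule used by the authors.

```python
import torch

T = 200
betas = torch.linspace(1e-4, 0.05, T)          # assumed variance schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)       # \bar{alpha}_t for t = 1..T

def q_sample(z0, t, noise=None):
    """Draw z_t ~ q(z_t | z_0) = N(sqrt(abar_t) z0, (1 - abar_t) I)  (Eq. 4)."""
    if noise is None:
        noise = torch.randn_like(z0)
    a = alpha_bar[t]
    return a.sqrt() * z0 + (1.0 - a).sqrt() * noise, noise
```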
In sampling, the reverse process starts from the isotropic Gaussian noise p(zT ) ∼ N (0, I), and iteratively denoises the generated samples toward clean data z0 with the conditioning information c :
$p_\theta(z_0, \cdots, z_{T-1} \mid z_T, c) = \prod_{t=1}^{T} p_\theta(z_{t-1} \mid z_t, c)$. (5)
Providing strong conditioning information c is usually helpful to reduce the number of inference steps and improve the generation quality. The Gaussian transition probability of each inference step is parameterized as:
$p_\theta(z_{t-1} \mid z_t, c) = \mathcal{N}\big(z_{t-1}; \mu_\theta(z_t, t, c), \sigma_\theta^2 \mathbf{I}\big)$, (6)
where the variance $\sigma_\theta^2$ is usually predefined as $\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$ or $\beta_t$. Following previous works [12, 15], we use the former one in this work. The model is trained by maximizing the variational lower bound of the likelihood $p_\theta(z_0)$. When we parameterize the mean function as:
$\mu_\theta(z_t, t, c) = \frac{1}{\sqrt{\alpha_t}} \Big( z_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(z_t, t, c) \Big)$, (7)
a reweighted training objective is usually adopted in practice as [12, 4, 15]:
$L_D(\theta) = \mathbb{E}_{z_0, \epsilon, t} \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon, t, c\big) \big\|_2^2$. (8)
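The following is a minimal PyTorch sketch of one training step under the reweighted objective in Equation 8. Here `eps_model` is an assumed placeholder for the network ϵθ of Section 3.3 and `alpha_bar` is a precomputed noise schedule; this is a hedged illustration, not the official training code.

```python
import torch

def diffusion_loss(eps_model, z0, cond, alpha_bar):
    """One reweighted-objective step (Eq. 8): predict the injected noise."""
    B = z0.shape[0]
    t = torch.randint(0, len(alpha_bar), (B,), device=z0.device)    # random step per sample
    a = alpha_bar[t].view(B, *([1] * (z0.dim() - 1)))                # broadcast to z0's shape
    noise = torch.randn_like(z0)
    zt = a.sqrt() * z0 + (1.0 - a).sqrt() * noise                    # q(z_t | z_0), Eq. 4
    eps_hat = eps_model(zt, t, cond)                                 # epsilon_theta(z_t, t, c)
    return ((noise - eps_hat) ** 2).mean()
```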
Human speech signals usually have more energy in the low-frequency region, while Gaussian noise has equal intensity at all frequencies. Hence, in the forward process, the high-frequency information is destroyed first, and the low-frequency information is also destroyed as the time step t increases. Correspondingly, in the reverse process, the low-frequency information is gradually generated first, and details are then iteratively refined in the following inference steps, which is suitable for accurate and high-fidelity speech waveform generation.
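For completeness, a hedged sketch of the ancestral sampling loop implied by Equations 6 and 7 is given below. The step count, noise schedule, and model interface are placeholders; in particular, this is not the accelerated 6-step inference schedule used by BinauralGrad.

```python
import torch

@torch.no_grad()
def sample(eps_model, cond, shape, betas):
    """Iteratively denoise from Gaussian noise using Eqs. 6-7 (assumed interface)."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    z = torch.randn(shape)                                    # z_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps_hat = eps_model(z, torch.tensor([t]), cond)
        mean = (z - betas[t] / (1 - alpha_bar[t]).sqrt() * eps_hat) / alphas[t].sqrt()
        if t > 0:
            var = (1 - alpha_bar[t - 1]) / (1 - alpha_bar[t]) * betas[t]
            z = mean + var.sqrt() * torch.randn_like(z)       # Eq. 6 transition
        else:
            z = mean                                          # final step is noise-free
    return z
```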
3.2.2 Details of Two Stages
With the understanding of the iterative refinement mechanism of diffusion models, we integrate them into the proposed two-stage framework, in which a single-channel diffusion model generates the common information of binaural audio in the first stage, while a two-channel diffusion model is then used for modelling the specific information of both left ear and right ear.
In the first stage, the average ȳ of the golden binaural audio is considered as the clean data in the forward process. In this way, the diffusion model is encouraged to generate the common information of the two channels. The condition information is consistent for the forward and reverse processes. Concretely, the model takes the average of the warped binaural audio x̄ = mean(x^l_warp, x^r_warp) as the condition information (defined in Equation 3). In addition to the mono audio and position, the condition of the first stage can be denoted as c1 = (ps, pα, x̄, x). The position and audio information are separately processed by the model, which will be introduced in detail in Section 3.3.
The second stage aims at synthesizing the binaural audio through a two-channel diffusion model, therefore the golden audio y = (yl, yr) is taken as the clean data for the two channels respectively. Different from the first stage, the condition information is distinct in the forward and reverse processes, as it includes the generation results of the first stage. The average ȳ of the golden binaural audio is known in the forward process; therefore, along with the warped binaural audio, the condition information can be written as c2 = (ps, pα, x^l_warp, x^r_warp, x, ȳ). For the reverse process, the generation result yc of the first stage is utilized to replace the golden average, which is unknown during inference, i.e., c2 = (ps, pα, x^l_warp, x^r_warp, x, yc).
During training, the two stages are trained separately by minimizing the regression loss defined in Equation 8 with their respective clean data and conditions. In inference, the common stage first generates the mono audio yc, conditioned on which the specific stage synthesizes the binaural audio, which is the final output of the framework. There exists an inconsistency between the training and inference of the second stage due to the different conditions, and fine-tuning the second stage conditioned on the output of the first stage may alleviate this problem. We leave it for future work.
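The two-stage inference procedure described above can be summarized in a few lines of pseudocode. `sample_stage1` and `sample_stage2` are assumed wrappers around a diffusion sampler (such as the one sketched in Section 3.2.1) for the models θc and θs; they are hypothetical names, not part of the released code.

```python
def binaural_grad_inference(x, pos, x_warp_l, x_warp_r, sample_stage1, sample_stage2):
    """Sketch of the two-stage generation pipeline (not the official implementation).

    Stage 1: generate the common (mono) signal y_c from c1 = (pos, x_bar, x).
    Stage 2: generate (y_l, y_r) from c2 = (pos, x_warp_l, x_warp_r, x, y_c).
    """
    x_bar = 0.5 * (x_warp_l + x_warp_r)                    # average of the warped channels
    y_c = sample_stage1(cond=(pos, x_bar, x))              # common stage, single channel
    y_l, y_r = sample_stage2(cond=(pos, x_warp_l, x_warp_r, x, y_c))  # specific stage
    return y_l, y_r
```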
3.3 Model Architecture
The model architecture of our diffusion model is shown in Figure 3. The diffusion models utilized in the two stages have similar architectures, except that the channels of the input, output and condition are one and two for the common and specific stages respectively. We introduce a Conditioner component to deal with the condition signals introduced in the previous sections. Specifically, we utilize two separate convolution blocks to process the position and the conditional audio respectively, and the output features are concatenated and then processed by a final convolution layer. The conditional audio represents the concatenation of [x̄, x] and [x^l_warp, x^r_warp, x, yc] for the two stages. The output of the Conditioner is taken as the representation of the condition signals.
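A minimal PyTorch sketch of the Conditioner component described above is shown below. The kernel sizes, channel counts, and the assumption that the position/orientation sequence has been upsampled to the audio rate are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Conditioner(nn.Module):
    """Two separate conv blocks for position and conditional audio, then a fusing conv."""
    def __init__(self, pos_dim=7, audio_ch=2, hidden=128):
        super().__init__()
        # pos_dim = 3 coordinates + 4 quaternion components (assumption)
        self.pos_conv = nn.Sequential(
            nn.Conv1d(pos_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
        )
        # audio_ch = 2 for the common stage ([x_bar, x]) or 4 for the specific stage
        self.audio_conv = nn.Sequential(
            nn.Conv1d(audio_ch, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
        )
        self.fuse = nn.Conv1d(2 * hidden, hidden, kernel_size=1)

    def forward(self, pos, audio):
        # pos: [B, pos_dim, N] (upsampled to the audio rate); audio: [B, audio_ch, N]
        h = torch.cat([self.pos_conv(pos), self.audio_conv(audio)], dim=1)
        return self.fuse(h)                                   # [B, hidden, N]
```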
The rest of our model generally follows the design in [15]. The main network is based on the bidirectional variant of the dilated convolution layer [24], given the non-autoregressive nature of the task. The architecture is constructed from M residual blocks, with m dilated convolution layers in each block. The diffusion step t is represented by positional encoding [40] and encoded by two fully connected layers before being fed to the network; in each block, the diffusion step is further transformed by another fully connected layer that is unique to the block, and the output representation is added to the input of the dilated convolution layer. The conditional representation modeled by the Conditioner is added to the output of the dilated layer to bring in the conditional information. In the end, through skip connections, the outputs of the residual blocks are gathered to produce the final output of the model.
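A rough sketch of such a residual block, in the spirit of DiffWave [15], is given below. The gated activation, channel widths, and skip handling are simplified assumptions rather than the exact BinauralGrad block.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Non-causal dilated conv block with diffusion-step and conditioner inputs."""
    def __init__(self, channels=128, dilation=1, step_dim=128):
        super().__init__()
        self.step_proj = nn.Linear(step_dim, channels)        # per-block step projection
        self.dilated = nn.Conv1d(channels, 2 * channels, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        self.cond_proj = nn.Conv1d(channels, 2 * channels, kernel_size=1)
        self.out_proj = nn.Conv1d(channels, 2 * channels, kernel_size=1)

    def forward(self, h, step_emb, cond):
        # h, cond: [B, channels, N]; step_emb: [B, step_dim]
        y = h + self.step_proj(step_emb).unsqueeze(-1)        # add the step embedding
        y = self.dilated(y) + self.cond_proj(cond)            # add conditioner to dilated output
        gate, filt = torch.chunk(y, 2, dim=1)
        y = torch.sigmoid(gate) * torch.tanh(filt)            # gated activation
        residual, skip = torch.chunk(self.out_proj(y), 2, dim=1)
        return (h + residual) / (2 ** 0.5), skip              # residual and skip outputs
```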
4 Experiments
Dataset We utilize the dataset released in [29]5 as it is the largest binaural dataset captured in the wild up to now. It contains 2 hours of mono-binaural parallel audio data at 48kHz from eight different objects, recorded in a regular room. The position and orientation between the objects and the listener are tracked at 120Hz and aligned with the audio. We follow the original split of training/valid/test sets to make our results comparable.
Baselines The proposed framework is denoted as BinauralGrad, and we consider the following baselines. DSP is a binaural rendering approach that simulates the spatial acoustic effects of sound sources with dynamic virtual positions. DSP utilizes RIR to model the room reverberance and HRTF to model the acoustical influence of the human head. We perform RIR simulation with an open-sourced tool6 according to the room information provided in the original paper [29]. The HRTF data comes from [9], which was measured at a distance of 1.4 meters using the KEMAR, a manikin for acoustic testing, and we follow the procedure in footnote 7 to simulate the binaural sound. As the exact HRTF and RIR are not provided in the dataset, the DSP results are worse than those reported in the original paper [29]. WaveNet [24], where the positions of source and listener are used as condition signals and fed into a ConvNet to generate binaural audio from the mono one. WarpNet [29], which originally proposed the dataset and stacks a neural time warping module with a temporal ConvNet to generate binaural audio. For all baselines, we adjust their hyper-parameters to make the number of parameters comparable with our model to ensure fair comparisons.
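As a rough illustration of what the DSP baseline does, binaural rendering with measured impulse responses amounts to per-ear convolution. The impulse responses below are placeholders, and a real system would additionally interpolate HRTFs over time as the relative position changes; this is only a hedged sketch of the general idea.

```python
import numpy as np
from scipy.signal import fftconvolve

def dsp_binaural(mono, hrir_left, hrir_right, rir=None):
    """Render binaural audio by convolving a mono signal with (R)IRs (static source)."""
    if rir is not None:
        mono = fftconvolve(mono, rir, mode="full")[: len(mono)]        # room reverberation
    left = fftconvolve(mono, hrir_left, mode="full")[: len(mono)]       # left-ear HRIR
    right = fftconvolve(mono, hrir_right, mode="full")[: len(mono)]     # right-ear HRIR
    return np.stack([left, right])
```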
Evaluation Metrics We use the following metrics to evaluate the quality of the synthesized binaural audio. Wave L2, which is the mean squared error between the synthesized binaural audio and the golden binaural recording. Amplitude L2 and Phase L2, which are the mean squared errors between the synthesized binaural speech and the binaural recording on the amplitude and phase respectively, after performing the Short-Time Fourier Transform (STFT) on the waveform. PESQ, which is the perceptual evaluation of speech quality8 and widely used in speech enhancement. MRSTFT, which models the multi-resolution spectral loss9 by taking the spectral convergence, log magnitude loss and linear magnitude loss into consideration. Except for PESQ, for which higher is better, lower is better for the rest of the metrics. Besides objective metrics, we also conduct a subjective evaluation (Mean Opinion Score, MOS) to assess the audio quality intuitively.
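The STFT-based metrics can be computed along the following lines. The window length and hop size are arbitrary assumptions, the phase term ignores phase wrapping, and the published numbers may use different settings.

```python
import torch

def stft_metrics(pred, target, n_fft=1024, hop=256):
    """Wave L2, Amplitude L2 and Phase L2 between predicted and reference audio.

    pred, target: tensors of shape [channels, samples].
    """
    window = torch.hann_window(n_fft)
    P = torch.stft(pred, n_fft, hop_length=hop, window=window, return_complex=True)
    T = torch.stft(target, n_fft, hop_length=hop, window=window, return_complex=True)
    wave_l2 = torch.mean((pred - target) ** 2)
    amp_l2 = torch.mean((P.abs() - T.abs()) ** 2)
    phase_l2 = torch.mean((P.angle() - T.angle()) ** 2)   # naive phase difference
    return wave_l2.item(), amp_l2.item(), phase_l2.item()
```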
Training Configurations As described in Section 3.3, BinauralGrad consists of M = 3 residual blocks, each of which has m = 10 bidirectional dilated convolution layers. The hidden size and the dimension of the diffusion step encoding are both set to 128. We train BinauralGrad on 8 Nvidia V100 GPUs for 1M steps, and the diffusion steps are set to 200 and 6 during training and inference respectively, mainly following previous diffusion works [15].
We introduce the experimental results in the rest of this section. We first compare the proposed BinauralGrad with baseline systems in terms of objective metrics, then verify the effectiveness of the proposed two-stage framework by comparing with a single-stage diffusion model. In addition, a subjective MOS test is utilized to show the intuitive performance of the model. Finally, to better understand the two-stage model, we also conduct a thorough analysis on the output of each stage.
5 https://github.com/facebookresearch/BinauralSpeechSynthesis
6 https://github.com/sunits/rir_simulator_python
7 http://spatialaudio.net/ssr
8 https://github.com/aliutkus/speechmetrics
9 https://github.com/csteinmetz1/auraloss
4.1 Main Results
We report the objective metrics of BinauralGrad and other binaural speech synthesis baselines in Table 1. The proposed framework performs consistently better than the compared baselines over different metrics. Specifically, we have several observations:
• Our method outperforms the main baseline WarpNet on Wave L2 by a large margin, showing that BinauralGrad can synthesize better binaural waveforms that are closer to the golden recordings.
• Since there is some randomness in the waveform, the metrics on the STFT of the waveform are important for evaluating the overall quality of binaural speech synthesis. Our method surpasses the baseline methods by a large margin in terms of Amplitude L2 and MRSTFT, and is slightly better than WarpNet on Phase L2.
• PESQ is a widely used metric for the perceptual evaluation of speech quality. The proposed BinauralGrad achieves a PESQ score of 2.759, which is significantly better than the other baseline systems. The naturalness of our method is further verified by the MOS results listed in Table 3.
In conclusion, the objective metrics show that the proposed two-stage framework is able to generate more accurate binaural audio than the existing baselines.
4.2 Comparison with Single Stage Diffusion Model
We compare BinauralGrad with a single-stage diffusion model to verify the advantage of the two-stage framework. The architecture of the single-stage diffusion model is similar to the two-channel diffusion model in the second stage of BinauralGrad, except for two differences: 1) the single-stage model does not use any common information, i.e., its condition is c = (ps, pα, x^l_warp, x^r_warp, x); 2) the single-stage model is two times deeper so as to make the number of parameters comparable.
The comparison of BinauralGrad with the single-stage diffusion model is shown in Table 2. The results show that the two-stage model performs better than the single-stage one on most metrics, including PESQ and the L2 losses of waveform, amplitude and phase, while achieving comparable MRSTFT scores, demonstrating the effectiveness of the two-stage framework. It is worth noting that the single-stage model already outperforms the major baseline WarpNet on most metrics except Phase L2, as shown in Table 1, illustrating the stronger ability of diffusion models in synthesizing raw audio waveforms than purely convolution-based models.
4.3 MOS Test
We conduct three types of MOS to verify the quality of the synthesized audio with human evaluation, including: 1) MOS, where judges are asked to rate the overall naturalness and fluency of the synthesized audio; 2) Similarity MOS, where judges are asked to rate the similarity between the synthesized results and the golden binaural recordings; 3) Spatial MOS, where judges are asked to rate the sense of direction contained in the synthesized audio. For all MOS tests, the judges give a discrete score from 1 to 5, where a higher score indicates better quality. The MOS results together with the 95% confidence intervals are shown in Table 3.
Firstly, the proposed BinauralGrad achieves a high MOS score of 3.80, which not only surpasses all baselines but also slightly outperforms the recording, illustrating the strong ability of the proposed framework in synthesizing natural audio waveforms. Secondly, the similarity MOS shows that BinauralGrad synthesizes the binaural audio closest to the golden binaural recording among all compared models, which is consistent with the conclusions from the objective metrics. Finally, the results on the spatial MOS show that DSP, WarpNet and BinauralGrad achieve similar performance in conveying a sense of space, which might be attributed to the fact that these three methods all utilize the physical warping process, which brings in the interaural time difference (ITD) and results in the sense of direction [43].
4.4 Analysis
In this section, we conduct a thorough analysis to verify the effectiveness of each stage in our framework, and try to identify the potential bottleneck, which provides insights for future work.
In each stage, the diffusion model synthesizes audio conditioned on different information. Therefore, to verify the effectiveness of the model in each stage, we evaluate the performance of the condition and the synthesized audio separately to illustrate the improvements brought by the model. Specifically, in addition to the synthesized audio from the two stages (ID 3 and 5), we also calculate the performance of the physical warping results (i.e., (x^l_warp, x^r_warp), ID 2) and the average ȳ of the golden binaural audio. Note that since the output of the first stage and ȳ are both mono audio, we calculate their scores by duplicating them to binaural, e.g., (ȳ, ȳ) for ID 4 and (yc, yc) for ID 3.
The results are listed in Table 4. Firstly, we find that the performance of the physical warping (ID 2) is limited, as it only brings gains on Wave L2 and remains consistent on the other metrics compared to the mono audio (ID 1). Secondly, the improvements from ID 2 to 3 and from 3 to 5 are brought by the first and second stages of the proposed model respectively, verifying the effectiveness and necessity of both stages.
In addition, we try to explore the upper bound of the framework by directly feeding the perfect condition to the second stage during inference, i.e., instead of conditioning on yc synthesized by the first stage, we utilize the golden average ȳ, which is unavailable in inference, as the condition to test the best possible performance of the model. We find that the second stage then achieves an extremely good result and brings a large improvement over all metrics compared with the given condition, showing that the model in the second stage has been well trained. In contrast, there still exists a large margin between the synthesized result of the first stage and the label (ID 3 vs. ID 4). In conclusion, the results in Table 4 show that the first stage is the bottleneck of the framework, which points out potential future work directions such as the end-to-end optimization of the two stages.
5 Conclusion
In this paper, we propose BinauralGrad, a two-stage framework for binaural audio synthesis conditioned on mono audio. Specifically, we formulate the synthesis process from a novel perspective and divide the binaural audio into a common part that is shared by the two channels as well as channel-specific parts. Accordingly, a single-channel diffusion model is utilized to generate the common information in the first stage, conditioned on which a two-channel diffusion model synthesizes the binaural audio. The proposed framework is able to synthesize accurate and high-fidelity audio samples. On a benchmark dataset, BinauralGrad achieves state-of-the-art results in both objective and subjective evaluation metrics. In the future, we plan to improve the training of the two stages in an end-to-end way for better performance and to speed up the inference of BinauralGrad, which is important for the online deployment of our method. A potential negative impact of our method is abuse, e.g., generating fake binaural audio to mislead users in some VR-based human-computer interactive games, resulting in physical injury. | 1. What is the focus and contribution of the paper on binaural audio synthesis?
2. What are the strengths of the proposed approach, particularly in using diffusion models?
3. What are the weaknesses of the paper regarding the motivation and discussion of the proposed method?
4. Do you have any concerns about the suitability and limitations of diffusion models for binaural audio synthesis?
5. What are the potential societal impacts and limitations of the proposed method that the authors should have discussed? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies the problem of binaural audio synthesis, and proposes to decompose the binaural audio into a common part that is shared by the left and right channels as well as a specific part that differs in each channel. A two-stage framework based on diffusion models is proposed to synthesize the two channels, respectively. Experiments on a benchmark dataset demonstrate the effectiveness of the proposed framework.
Strengths And Weaknesses
Strengths: The idea to use diffusion models for this task is interesting, and the proposed two-stage framework is well-motivated. Generally, the paper is clearly written and easy to understand. Table 1 clearly shows that the proposed method based on diffusion models compares favorably to prior methods including the latest neural method WarpNet [29]. The main contribution/strength of this paper is introducing a technique---diffusion model, which is demonstrated to be powerful in many other areas, to this existing task of binaural sound synthesis, and achieves some gains.
Weakness: The motivation for dividing the binaural synthesis problem into two stages could be better discussed. For example, in Figure 1, it would be useful to explicitly illustrate what information is used in the first stage and what information is used in the second stage, instead of just showing the waveforms. Moreover, while diffusion models are powerful and lead to good gains in binaural sound synthesis results, it would be important to discuss why they are helpful and suitable for this specific task compared to prior methods, as well as their limitations. No limitations or failure cases are currently discussed in the paper, and only one sentence discusses potential societal impact.
Questions
Diffusion models are powerful and lead to good gains in terms of binaural sound synthesis results; it would be important to discuss why they are helpful and suitable for this specific task compared to prior methods, as well as their limitations.
Limitations
No limitations are discussed in the paper. There is only one sentence that discusses the societal impact: The negative impact of our method might be the abuse of binaural speech synthesis, which is very vague and unclear. |
NIPS | Title
BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis
Abstract
Binaural audio plays a significant role in constructing immersive augmented and virtual realities. As it is expensive to record binaural audio from the real world, synthesizing it from mono audio has attracted increasing attention. This synthesis process involves not only the basic physical warping of the mono audio, but also room reverberations and head/ear related filtrations, which, however, are difficult to accurately simulate in traditional digital signal processing. In this paper, we formulate the synthesis process from a different perspective by decomposing the binaural audio into a common part that is shared by the left and right channels as well as a specific part that differs in each channel. Accordingly, we propose BinauralGrad, a novel two-stage framework equipped with diffusion models to synthesize them respectively. Specifically, in the first stage, the common information of the binaural audio is generated with a single-channel diffusion model conditioned on the mono audio, based on which the binaural audio is generated by a two-channel diffusion model in the second stage. Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., the diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples. Experiment results show that on a benchmark dataset, BinauralGrad outperforms the existing baselines by a large margin in terms of both objective and subjective evaluation metrics (Wave L2: 0.128 vs. 0.157, MOS: 3.80 vs. 3.61). The generated audio samples3 and code4 are available online.
1 Introduction
Human brains have the ability to decode spatial properties from real-world stereophonic sounds, which helps us to locate and interact with the environment. In artificial spaces such as augmented and virtual reality, generating accurate binaural audio from the mono one is therefore essential to provide
∗Equal contribution. †This work was conducted at Microsoft. Corresponding author: Xu Tan, xuta@microsoft.com 3https://speechresearch.github.io/binauralgrad 4https://github.com/microsoft/NeuralSpeech/tree/master/BinauralGrad
immersive environments for listeners. Traditional Digital Signal Processing (DSP) systems generate binaural audio by warping the mono audio according to the time difference between the two ears and modeling room reverberations (e.g., the room impulse response (RIR)) and head/ear related filtration (e.g., head-related transfer functions (HRTF)) through a linear time-invariant system [32, 47, 35]. However, they fail to generate accurate and immersive binaural audio due to the non-linear nature of sound propagation and the simplifications in DSP systems, such as the simplified physical modeling of RIR and the use of generic HRTFs instead of personalized ones. In addition, the exact physical information of the recording environment is not available most of the time.
Instead of following the complicated synthesis process of binaural audio in traditional digital signal processing, which is difficult to model, in this paper we formulate the process from a different and novel perspective. We first decompose the synthesis of binaural audio into two steps: 1) the original mono audio is emitted by the object and then diffuses to the listener. The audio that arrives at the left and right ears is similar, as the distance between the two ears is marginal compared with that between the object and the listener; 2) the binaural audio is then encoded by the human head depending on the marginal difference of the audio received at the two ears [2, 43]. This process is personalized, as the morphology of each listener is unique, which changes the internal encoding results. Precisely modeling the two stages is difficult due to the complexity of the physical system. Therefore, as an alternative, given that the two channels of the binaural audio are similar but with marginal differences, we propose to divide the representation of the binaural audio y = (yl, yr) into two parts. One is the general part that contains the common information of the two channels, which is represented as the average ȳ; the other is the specific part that is distinct between the two channels encoded by the human head, which is denoted as δl and δr. Then we can represent the binaural audio as yl = ȳ + δl and yr = ȳ + δr respectively. The common part represents the fundamental information of the source mono audio, while the specific parts contain the difference between the left and right channels caused by room reverberance and the acoustical influence of the human head.
Based on this representation, we propose a two-stage framework to model the two parts respectively. The common information of the two channels of binaural audio is generated in the first stage, and their differences are generated in the second stage. We therefore term them the first/common and second/specific stages. The common stage is encouraged to generate ȳ, while the specific stage tries to model the differences for the left and right audio. Specifically, we build our framework on denoising diffusion probabilistic models (diffusion models for short) [12], which have been shown to be effective and efficient in generating high-fidelity audio for the task of speech synthesis [15, 4, 17]. In the common stage, the mono audio is taken as the condition of a single-channel diffusion model, which is supervised by the average over the two channels of the golden binaural speech ȳ, encouraging the model to generate the common information of the two channels. And in the specific stage, conditioned on the single-channel output of the first stage, we utilize a two-channel diffusion model to generate the audio for the left and right ears respectively. We provide an illustration of the proposed framework in Figure 1.
The main contributions of this work are summarized as follows:
• We formulate the representation of binaural audio as two parts, and propose a novel two-stage framework to respectively synthesize their common and specific information conditioned on the mono audio.
• Equipped with denoising probabilistic diffusion models, the proposed two-stage framework is able to generate accurate and high-fidelity binaural audio.
• We conduct experiments on the benchmark dataset released in [29], and the proposed framework outperforms existing baselines by a large margin, both on automatic (Wave L2: 0.128 vs. 0.157) and human evaluation (MOS: 3.80 vs. 3.61) metrics.
| 1. What is the focus and contribution of the paper on binaural audio generation?
2. What are the strengths of the proposed approach, particularly in terms of its architecture and experimental results?
3. What are the weaknesses of the paper, especially regarding the experimental results and potential information leakage?
4. Do you have any concerns regarding the use of warped binaural audio as conditional input?
5. How does the two-stage generation method affect the generation speed compared to single-stage models? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This is a paper that applies the diffusion model to a binaural audio generation task. The authors use a two-stage method to accomplish their task and use a modified wavenet-like structure as their model architecture. After thorough and detailed experiments, the authors achieve the best results among the listed models, which nicely support their conclusions. The contributions of the paper include:
A new two-stage framework is used to generate high-quality binaural audio, with the assistance of a diffusion model.
The experiment results show that the proposed method achieves excellent performance on this task.
Strengths And Weaknesses
Strengths:
This paper is easy to follow. The authors give a detailed background of the task, a clear definition of the problem, and a comprehensive explanation of the proposed method.
The experiments are adequate. The authors have verified the feasibility of their method through comprehensive experiments and subjective and objective evaluations, which support their conclusions very well.
Weaknesses:
The values of the standard deviations for all results in Table 3 seem to be a bit too large, which indicates that there may be a large disagreement among raters about the experimental results. Such results may be unconvincing.
Besides that, there are no significant weaknesses in this paper, but there are some issues that are difficult to explain and could be potential weaknesses, which will be mentioned in the next section.
Questions
The authors mention that they use warped binaural audio as conditional input, but never explain how this audio is warped. We would like to know whether these audios are warped using traditional DSP methods or by other additional methods. Does the use of such conditional inputs lead to some kind of information leakage, and have the authors performed ablation experiments on this?
The authors use the pattern of frequency destruction they observed in the forward process of the diffusion model to explain why they use a two-stage generation method. However, in Table 2, the frequency-related evaluation metric MRSTFT of the proposed method is not as good as that of the single-stage model. Can the authors explain this phenomenon?
Have the authors compared the generation speed of single-stage and two-stage models and is there a significant difference between the two?
Limitations
The authors make a good statement about the limitations of their work. |
NIPS | Title
Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data
Abstract
Continuous-time Bayesian Networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed-form. However, if data is incomplete, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem, especially for structure learning tasks, where this has to be done for each element of a super-exponentially growing set of possible structures. In order to circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via a gradient-based optimization of mixture weights. We combine this approach with a new variational method that allows for a closed-form calculation of this mixture marginal likelihood. We show the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.
1 Introduction
Learning correlative or causative dependencies in multivariate data is a fundamental problem in science and has application across many disciplines such as natural and social sciences, finance and engineering [1, 20]. Most statistical approaches consider the case of snapshot or static data, where one assumes that the data is drawn from an unknown probability distribution. For that case several methods for learning the directed or undirected dependency structure have been proposed, e.g., the PC algorithm [21, 13] or the graphical LASSO [8, 12], respectively. Causality for such models can only partially be recovered up to an equivalence class that relates to the preservation of v-structures [21] in the graphical model corresponding to the distribution. If longitudinal and especially temporal data is available, structure learning methods need to exploit the temporal ordering of cause and effect that is implicit in the data for determining the causal dependency structure. One assumes that the data are drawn from an unknown stochastic process. Classical approaches such as Granger causality or transfer entropy methods usually require large sample sizes [23]. Dynamic Bayesian networks offer an appealing framework to formulate structure learning for temporal data within the graphical model framework [10]. The fact that the time granularity of the data can often be very different from the actual granularity of the underlying process motivates the extension to continuous-time Bayesian networks (CTBN) [14], where no time granularity of the unknown process has to be assumed. Learning the structure within the CTBN framework involves a combinatorial search over structures and is hence generally limited to low-dimensional problems even if one considers variational approaches [11] and/or greedy hill-climbing strategies in structure space [15, 16]. Reminiscent of
optimization-based approaches such as graphical LASSO, where structure scoring is circumvented by performing gradient descent on the edge coefficients of the structure under a sparsity constraint, we here propose the first gradient-based scheme for learning the structure of CTBNs.
2 Background
2.1 Continuous-time Bayesian Networks
We consider continuous-time Markov chains (CTMCs) {X(t)}t≥0 taking values in a countable state space S. A time-homogeneous Markov chain evolves according to an intensity matrix R : S × S → R, whose elements are denoted by R(s, s′), where s, s′ ∈ S. A continuous-time Bayesian network [14] is defined as an N-component process over a factorized state space S = X1 × · · · × XN evolving jointly as a CTMC. For local states xi, x′i ∈ Xi, we will drop the states' component index i if it is evident from the context and no ambiguity arises. We impose a directed graph structure G = (V, E), encoding the relationship among the components V ≡ {V1, . . . , VN}, which we refer to as nodes. These are connected via an edge set E ⊆ V × V. This quantity is the structure, which we will later learn. The state of each component is denoted by Xi(t), assuming values in Xi, and depends only on the states of a subset of nodes, called the parent set parG(i) ≡ {j | (j, i) ∈ E}. Conversely, we define the child set chG(i) ≡ {j | (i, j) ∈ E}. The dynamics of a local state Xi(t) are described as a Markov process conditioned on the current state of all its parents Ui(t), taking values in Ui ≡ {Xj | j ∈ parG(i)}. They can then be expressed by means of the conditional intensity matrices (CIMs) Ri : Xi × Xi × Ui → R, where ui ≡ (u1, . . . , uL) ∈ Ui denotes the current state of the parents (L = |parG(i)|). The CIMs are the generators of the dynamics of a CTBN. Specifically, we can express the probability of finding node i in state x′ after some small time-step h, given that it was in state x at time t with x, x′ ∈ Xi, as
$$p(X_i(t+h) = x' \mid X_i(t) = x, U_i(t) = u) = \delta_{x,x'} + h\,R_i(x, x' \mid u) + o(h),$$
where Ri(x, x′ | u) is the rate of the transition x → x′ given the parents' state u ∈ Ui, and δx,x′ is the Kronecker delta. We further make use of the small-o(h) notation, which is defined via limh→0 o(h)/h = 0. It holds that $R_i(x, x \mid u) = -\sum_{x' \neq x} R_i(x, x' \mid u)$. The CIMs are connected to the joint intensity matrix R of the CTMC via amalgamation – see, for example, [14].
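For concreteness, the generative model described above can be simulated with a Gillespie-style scheme; the following is a minimal Python sketch (the function names, data layout and the all-zeros initial state are illustrative assumptions, not part of the paper):

import numpy as np

def simulate_ctbn(rate_fns, parents, n_states, T, rng=None):
    """Gillespie-style simulation of a CTBN on [0, T].
    rate_fns[i](x, x_new, u) returns R_i(x, x' | u) for node i given the joint
    parent state u (a tuple); parents[i] lists the parent indices of node i."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(rate_fns)
    state = [0] * n                                   # arbitrary initial state
    t, trajectory = 0.0, [(0.0, tuple(state))]
    while t < T:
        events, rates = [], []
        for i in range(n):                            # collect admissible transitions
            u = tuple(state[j] for j in parents[i])
            for x_new in range(n_states[i]):
                if x_new != state[i]:
                    events.append((i, x_new))
                    rates.append(rate_fns[i](state[i], x_new, u))
        total = sum(rates)
        if total <= 0:
            break
        t += rng.exponential(1.0 / total)             # exponential waiting time
        if t >= T:
            break
        i, x_new = events[rng.choice(len(events), p=np.array(rates) / total)]
        state[i] = x_new
        trajectory.append((t, tuple(state)))
    return trajectory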
2.2 Structure Learning for CTBNs
Complete data. The likelihood of a CTBN can be expressed in terms of its sufficient statistics [15]: Mi(x, x′ | u), which denotes the number of transitions of node i from state x to x′, and Ti(x | u), which denotes the amount of time node i spent in state x. In order to avoid clutter, we introduce the sets M ≡ {Mi(x, x′ | u) | i ∈ {1, . . . , N}, x, x′ ∈ X, u ∈ U} and T ≡ {Ti(x | u) | i ∈ {1, . . . , N}, x ∈ X, u ∈ U}. The likelihood then takes the form
$$p(\mathcal{M}, \mathcal{T} \mid \mathcal{G}, R) = \prod_{i=1}^{N} \exp\Bigg(\sum_{x,\, x' \neq x,\, u} M_i(x, x' \mid u) \ln R_i(x, x' \mid u) - T_i(x \mid u)\, R_i(x, x' \mid u)\Bigg). \quad (1)$$
In [15] and similarly in [22] it was shown that a marginal likelihood for the structure can be calculated in closed form, when assuming a gamma prior over the rates Ri(x, x′ | u) ∼ Gam(αi(x, x′ | u), βi(x ′ | u)). In this case, the marginal log-likelihood of a structure takes the form
$$\ln p(\mathcal{M}, \mathcal{T} \mid \mathcal{G}, \alpha, \beta) \propto \sum_{i=1}^{N} \sum_{u,\, x,\, x' \neq x} \Big\{ \ln \Gamma\big(\bar{\alpha}_i(x, x' \mid u)\big) - \bar{\alpha}_i(x, x' \mid u) \ln \bar{\beta}_i(x \mid u) \Big\}, \quad (2)$$
with ᾱi(x, x′ | u) ≡ Mi(x, x′ | u) + αi(x, x′ | u) and β̄i(x | u) ≡ Ti(x | u) + βi(x | u). Structure learning in previous works [16, 22, 11] is then performed by iterating over possible structures and scoring them using the marginal likelihood. The best-scoring structure is then the maximum a posteriori estimate of the structure.
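As a rough illustration of how the sufficient statistics and the score in (2) could be computed from a fully observed trajectory, consider the Python sketch below (the dictionary layout and the summation over observed configurations only are simplifying assumptions):

import numpy as np
from collections import defaultdict
from scipy.special import gammaln

def sufficient_statistics(trajectory, T_end, parents, node):
    """Transition counts M[(x, x', u)] and dwell times T[(x, u)] for one node,
    from a list of (time, joint state) pairs."""
    M, T = defaultdict(int), defaultdict(float)
    times = [t for t, _ in trajectory] + [T_end]
    states = [s for _, s in trajectory]
    for k, s in enumerate(states):
        u = tuple(s[j] for j in parents[node])
        T[(s[node], u)] += times[k + 1] - times[k]
        if k + 1 < len(states) and states[k + 1][node] != s[node]:
            M[(s[node], states[k + 1][node], u)] += 1
    return M, T

def marginal_log_score(M, T, alpha=5.0, beta=10.0):
    """Node-wise contribution to the marginal log-likelihood (2), up to terms
    that do not depend on the observed transitions."""
    score = 0.0
    for (x, x_new, u), m in M.items():
        a_bar = m + alpha
        b_bar = T[(x, u)] + beta
        score += gammaln(a_bar) - a_bar * np.log(b_bar)
    return score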
Incomplete data. In many cases, the sufficient statistics of a CTBN cannot be provided. Instead, data comes in the form of noisy state observations at some points in time. In the following, we will assume that data is provided in the form of Ns samples D ≡ {(tk, yk) | k ∈ {1, . . . , Ns}}, where yk is some, possibly noisy, measurement of the latent state generated by some observation model yk ∼ p(Y = yk | X(tk) = s) at time tk. This data is incomplete, as the sufficient statistics of the underlying latent process have to be estimated before model identification can be performed. In [16], an expectation-maximization algorithm for structure learning (SEM) was introduced, in which, given a proposal CTBN, sufficient statistics were first estimated by exact inference, the CTBN parameters were optimized given those expected sufficient statistics and, subsequently, structures were scored via (1). Similarly, in [11] expected sufficient statistics were estimated via variational inference under marginal (parameter-free) dynamics and structures were then scored via (2).
The problem of structure learning from incomplete data has two distinct bottlenecks: (i) latent state estimation, which scales exponentially in the number of nodes, and (ii) structure identification, which scales super-exponentially in the number of nodes. While bottleneck (i) has been tackled in many ways [4, 5, 19, 11], existing approaches [16, 11] employ a combinatorial search over structures, so an efficient solution for bottleneck (ii) is still outstanding.
Our approach. We employ a similar strategy in this manuscript. However, statistics are estimated under a marginal CTBN that no longer depends on rate parameters or a discrete structure. Instead, statistics are estimated given a mixture of different parent sets. Thus, instead of blindly iterating over possible structures in a hill-climbing procedure, we can update our distribution over structures by a gradient step. This allows us to converge directly into regions of high probability. Further, by combining this gradient-based approach with a higher-order variational method, we can estimate the expected sufficient statistics in large systems. These two features combined enable us to perform structure learning in large systems. An implementation of our method is available via Git¹.
3 Likelihood of CTBNs Under a Mixture of CIMs
Complete data. In the following, we consider a CTBN over some over-complete² graph G. In practice, this graph may be derived from data as prior knowledge. In the absence of prior knowledge, we will choose the full graph. We want to represent its CIMs Ri(x, x′ | u), here for node i, as a mixture of CIMs of smaller support and write, using the power set P(·) (the set of all possible subsets),
$$R_i(x, x' \mid u) = \sum_{m \in \mathcal{P}(\mathrm{par}_{\mathcal{G}}(i))} \pi_i(m)\, r_i(x, x' \mid u_m) \equiv \mathbb{E}_i^{\pi}\big[r_i(x, x' \mid u_m)\big], \quad (3)$$
where $u_m$ denotes the projection of the full parent state $u$ onto the subset $m$, i.e. $f(u_m) = \sum_{u / u_m} f(u)$, and the expectation is $\mathbb{E}_i^{\pi}[f(\theta_m)] = \sum_{m \in \mathcal{P}(\mathrm{par}_{\mathcal{G}}(i))} \pi_i(m) f(\theta_m)$. The mixture weights are given by a distribution πi ∈ ∆i, with ∆i being the |P(parG(i))|-dimensional probability simplex. Corresponding edge probabilities of the graph can be computed via marginalization. The probability that an edge eij ∈ E exists is then
$$p(e_{ij} = 1) = \sum_{m \in \mathcal{P}(\mathrm{par}_{\mathcal{G}}(j))} \pi_j(m)\, \mathbb{1}(i \in m), \quad (4)$$
with $\mathbb{1}(\cdot)$ being the indicator function. In order to arrive at a marginal score for the mixture, we insert (3) into (1) and apply Jensen's inequality $\mathbb{E}_i^{\pi}[\ln r] \leq \ln\big(\mathbb{E}_i^{\pi}[r]\big)$. This yields a lower bound on the mixture likelihood
$$p(\mathcal{M}, \mathcal{T} \mid \pi, r) \geq \prod_{i=1}^{N} \prod_{x,\, x' \neq x,\, u_m} e^{\mathbb{E}_i^{\pi}\left[M_i(x, x' \mid u_m) \ln r_i(x, x' \mid u_m) - T_i(x \mid u_m)\, r_i(x, x' \mid u_m)\right]}.$$
For details on this derivation, we refer to the supplementary material A.1. Note that Jensen's inequality, which only provides a poor approximation in general, improves with increasing concentration of probability mass and becomes exact for degenerate distributions. For the task of selecting a CTBN with a specific parent set, it is useful to marginalize over the rate parameters r of the CTBNs. This allows for a direct estimation of the parent set, without first estimating the rates. This
¹ https://git.rwth-aachen.de/bcs/ssl-ctbn
² An over-complete graph has more edges than the underlying true graph, which generated the data.
marginal likelihood can be computed under the assumption of independent gamma prior distributions $r_i(x, x' \mid u_m) \sim \mathrm{Gam}(\alpha_i(x, x' \mid u_m), \beta_i(x' \mid u_m))$ over the rates. The marginal likelihood lower bound can then be computed analytically. Under the assumption of independent Dirichlet priors $\pi_i \sim \mathrm{Dir}(\pi_i \mid c_i)$ with concentration parameters $c_i$, we arrive at a lower bound on the marginal log-posterior of the mixture weights π
$$\ln p(\pi \mid \mathcal{M}, \mathcal{T}, \alpha, \beta) \geq \sum_i F_i[\mathcal{M}, \mathcal{T}, \pi] + \ln Z, \quad (5)$$
$$F_i[\mathcal{M}, \mathcal{T}, \pi] \equiv \sum_{m,\, u_m,\, x,\, x' \neq x} \Big\{ \ln \Gamma\big(\bar{\alpha}_i(x, x' \mid u_m)\big) - \bar{\alpha}_i(x, x' \mid u_m) \ln \bar{\beta}_i(x \mid u_m) \Big\} + \ln \mathrm{Dir}(\pi_i \mid c_i),$$
with the updated posterior parameters ᾱi(x, x′ | um) ≡ πi(m) Mi(x, x′ | um) + αi(x, x′ | um) and β̄i(x | um) ≡ πi(m) Ti(x | um) + βi(x | um). For details, we refer to the supplementary material A.2. The constant log-partition function ln Z can be ignored in the following analysis. Because (5) decomposes into a sum of node-wise terms, the maximum a posteriori estimate of the mixture weights of node i can be calculated as the solution of the following optimization problem:
$$\pi_i^* = \arg\max_{\pi_i \in \Delta_i} \big\{ F_i[\mathcal{M}, \mathcal{T}, \pi] \big\}. \quad (6)$$
By construction, learning the mixture weights π of the CIMs corresponds to learning a distribution over parent sets for each node. We have thus re-expressed the problem of structure learning as the estimation of π. Further, we note that for any degenerate π, (5) coincides with the exact structure score (2).
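To make Eqs. (4)-(6) concrete, the sketch below evaluates the node-wise mixture score, maximizes it over the probability simplex with a generic constrained optimizer, and reads off edge probabilities; the optimizer and data layout are illustrative assumptions and not the authors' Matlab implementation:

import numpy as np
from itertools import chain, combinations
from scipy.special import gammaln
from scipy.optimize import minimize

def powerset(iterable):
    """All candidate parent sets, i.e. the power set P(par_G(i))."""
    s = list(iterable)
    return [tuple(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def mixture_score(pi, stats, alpha=5.0, beta=10.0):
    """F_i of Eq. (5) without the Dirichlet term; stats maps each candidate
    parent set m to a list of (M, T) pairs, one per (x, x', u_m) configuration."""
    score = 0.0
    for pi_m, (m, entries) in zip(pi, stats.items()):
        for M, T in entries:
            a_bar = pi_m * M + alpha
            b_bar = pi_m * T + beta
            score += gammaln(a_bar) - a_bar * np.log(b_bar)
    return score

def optimize_mixture(stats):
    """Maximize F_i over the probability simplex (Eq. (6))."""
    K = len(stats)
    simplex = {"type": "eq", "fun": lambda p: p.sum() - 1.0}
    res = minimize(lambda p: -mixture_score(p, stats), np.full(K, 1.0 / K),
                   bounds=[(0.0, 1.0)] * K, constraints=simplex)
    return res.x

def edge_probability(pi, parent_sets, i):
    """p(e_ij = 1) via Eq. (4): total weight of candidate sets containing node i."""
    return sum(p for p, m in zip(pi, parent_sets) if i in m)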
Incomplete data. In the case of incomplete noisy data D, the likelihood of the CTBN no longer decomposes into node-wise terms. Instead, the likelihood is that of the full amalgamated CTMC [16]. In order to tackle this problem, approximation methods based on sampling [7, 6, 19] or variational approaches [4, 5] have been investigated. These, however, either fail to treat high-dimensional spaces because of sample sparsity, are unsatisfactory in terms of accuracy, or provide only an uncontrolled approximation. Our method is based on a variational approximation, namely the weak-coupling expansion [11]. Under this approximation, we recover by the same calculation an approximate likelihood of the same form as (1), where the sufficient statistics Mi(x, x′ | u) and Ti(x | u) are, however, replaced by their expectations Eq[Mi(x, x′ | u)] and Eq[Ti(x | u)] under a variational distribution q – for details we refer to the supplementary B.1. Subsequently, our optimization objective Fi[M, T, π] also becomes dependent on the variational distribution, Fi[D, π, q]. In the following section, we develop an Expectation-Maximization (EM) algorithm that iteratively estimates the expected sufficient statistics given the mixture weights and subsequently optimizes those mixture weights given the expected sufficient statistics.
4 Incomplete data: Expected Sufficient Statistics Under a Mixture of CIMs
Short review of the foundational method. In [11], the exact posterior over paths of a CTBN given incomplete data D is approximated by a path measure q(X[0,T]) of a variational time-inhomogeneous Markov process via a higher-order variational inference method. For a CTBN, this path measure is fully described by its node-wise marginals qi(x′, x, u; t) ≡ qi(Xi(t + h) = x′, Xi(t) = x, Ui(t) = u; t). From it, one can compute the marginal probability qi(x; t) of node i being in state x, the marginal probability of the parents qi(Ui(t) = u; t) ≡ q_i^u(t), and the marginal transition probability $\tau_i(x, x', u; t) \equiv \lim_{h \to 0} q_i(x', x, u; t)/h$ for x ≠ x′. The exact form of the expected statistics was calculated to be
$$\mathbb{E}_q[T_i(x \mid u)] \equiv \int_0^T \! dt\; q_i(x; t)\, q_i^u(t), \qquad \mathbb{E}_q[M_i(x, x' \mid u)] \equiv \int_0^T \! dt\; \tau_i(x, x', u; t). \quad (7)$$
In the following, we will use the short-hand Eq[M] and Eq[T] to denote the sets of expected sufficient statistics. We note that the variational distribution q has the support of the full over-complete parent set parG(i). Via marginalization of qi(x′, x, u; t), the marginal probability and the marginal transition probability can be shown to be connected via the relation
$$\frac{d}{dt} q_i(x; t) = \sum_{x' \neq x,\, u} \big[ \tau_i(x', x, u; t) - \tau_i(x, x', u; t) \big]. \quad (8)$$
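In practice, the integrals in (7) can be approximated on a time grid once the variational marginals are available; a minimal sketch (array shapes are assumptions):

import numpy as np

def expected_statistics(q_x, q_u, tau, dt):
    """Trapezoidal approximation of Eq. (7).
    q_x: (n_grid, n_states)                              -- q_i(x; t)
    q_u: (n_grid, n_parent_states)                       -- q_i^u(t)
    tau: (n_grid, n_states, n_states, n_parent_states)   -- tau_i(x, x', u; t)"""
    E_T = np.trapz(q_x[:, :, None] * q_u[:, None, :], dx=dt, axis=0)
    E_M = np.trapz(tau, dx=dt, axis=0)
    return E_T, E_M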
Algorithm 1 Stationary points of the Euler–Lagrange equations
1: Input: Initial trajectories qi(x; t), boundary conditions q(x; 0) and ρ(x; T), mixture weights π and data D.
2: repeat
3:   repeat
4:     for all i ∈ {1, . . . , N} do
5:       for all (yk, tk) ∈ D do
6:         Update ρi(t) by backward propagation from tk to tk−1 using (10), fulfilling the jump conditions (12).
7:       end for
8:       Update qi(t) by forward propagation using (10) given ρi(t).
9:     end for
10:   until Convergence
11:   Compute expected sufficient statistics using (7) and (11) from qi(t) and ρi(t).
12: until Convergence of F[D, π, q]
13: Output: Set of expected sufficient statistics Eq[M] and Eq[T].
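The inner loop of Algorithm 1 amounts to a forward-backward sweep over the ODEs (10) with jump conditions (12); the sketch below uses a simple Euler discretization and treats the rate matrices as user-supplied callables, so it is a schematic rendering rather than the authors' implementation:

import numpy as np

def backward_pass(rho_T, obs, obs_lik, Omega_tilde, n_grid, dt):
    """Propagate rho_i backwards via d/dt rho = Omega_tilde rho, applying the
    jump condition (12) at observation grid indices; obs maps index -> observed value."""
    rho = np.zeros((n_grid, len(rho_T)))
    rho[-1] = rho_T
    for k in range(n_grid - 1, 0, -1):
        if k in obs:
            rho[k] = obs_lik(obs[k]) * rho[k]                 # jump condition (12)
        rho[k - 1] = rho[k] - dt * Omega_tilde(k) @ rho[k]    # Euler step backwards in time
    return rho

def forward_pass(q0, Omega, n_grid, dt):
    """Propagate the marginal q_i forwards via d/dt q = q Omega, given converged rho_i."""
    q = np.zeros((n_grid, len(q0)))
    q[0] = q0
    for k in range(n_grid - 1):
        q[k + 1] = q[k] + dt * q[k] @ Omega(k)
        q[k + 1] /= q[k + 1].sum()                            # numerical safeguard, not part of (10)
    return q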
Application to our setting. As discussed in the last section, the objective function in the incomplete data case has the same form as (5)
$$F_i[\mathcal{D}, \pi, q] \equiv \sum_{m,\, u_m,\, x,\, x' \neq x} \Big\{ \ln \Gamma\big(\bar{\alpha}_i^q(x, x' \mid u_m)\big) - \bar{\alpha}_i^q(x, x' \mid u_m) \ln \bar{\beta}_i^q(x \mid u_m) \Big\} + \ln \mathrm{Dir}(\pi_i \mid c_i), \quad (9)$$
however, now with $\bar{\alpha}_i^q(x, x' \mid u_m) \equiv \pi_i(m)\, \mathbb{E}_q[M_i(x, x' \mid u_m)] + \alpha_i(x, x' \mid u_m)$ and $\bar{\beta}_i^q(x \mid u_m) \equiv \pi_i(m)\, \mathbb{E}_q[T_i(x \mid u_m)] + \beta_i(x \mid u_m)$. In order to arrive at an approximation of the expected sufficient statistics in our case, we have to maximize (9) with respect to q while fulfilling the constraint (8). The corresponding Lagrangian becomes
$$\mathcal{L}[\mathcal{D}, \pi, q, \lambda] = \sum_{i=1}^{N} F_i[\mathcal{D}, \pi, q] - \sum_{x,\, x' \neq x,\, u} \int_0^T \! dt\; \lambda_i(x; t) \Big\{ \frac{d}{dt} q_i(x; t) - \big[ \tau_i(x', x, u; t) - \tau_i(x, x', u; t) \big] \Big\},$$
with Lagrange multipliers λi(x; t). In order to derive Euler–Lagrange equations, we employ Stirling's approximation for the gamma function, $\Gamma(z) = \sqrt{\tfrac{2\pi}{z}}\left(\tfrac{z}{e}\right)^{z} + \mathcal{O}\!\left(\tfrac{1}{z}\right)$, which becomes exact asymptotically. In our case, Stirling's approximation is valid if ᾱ ≫ 1. We thereby assume that either enough data has been recorded or that a sufficiently strong prior α is chosen. Finally, we recover the approximate forward and backward equations of the mixture CTBNs as the stationary point of the Lagrangian via the Euler–Lagrange equations
$$\frac{d}{dt} \rho_i(t) = \tilde{\Omega}_i^{\pi}(t)\, \rho_i(t), \qquad \frac{d}{dt} q_i(t) = q_i(t)\, \Omega_i^{\pi}(t), \quad (10)$$
with effective rate matrices
$$\Omega_i^{\pi}(x, x'; t) \equiv \mathbb{E}_i^{u}\big[\tilde{R}_i^{\pi}(x, x' \mid u)\big]\, \frac{\rho_i(x'; t)}{\rho_i(x; t)},$$
$$\tilde{\Omega}_i^{\pi}(x, x'; t) \equiv (1 - \delta_{x,x'})\, \mathbb{E}_i^{u}\big[\tilde{R}_i^{\pi}(x, x' \mid u)\big] + \delta_{x,x'} \Big\{ \mathbb{E}_i^{u}\big[R_i^{\pi}(x, x' \mid u)\big] + \Psi_i(x; t) \Big\},$$
with $\rho_i(x; t) \equiv \exp(-\lambda_i(x; t))$ and $\Psi_i(x; t)$ as given in the supplementary material B.2. Further, we have introduced the shorthand $\mathbb{E}_i^{u}[f(u)] = \sum_u f(u)\, q_i^u(t)$
and defined the posterior expected rates
$$R_i^{\pi}(x, x' \mid u) \equiv \mathbb{E}_i^{\pi}\!\left[\frac{\bar{\alpha}_i^q(x, x' \mid u_m)}{\bar{\beta}_i^q(x \mid u_m)}\right], \qquad \tilde{R}_i^{\pi}(x, x' \mid u) \equiv \prod_m \left(\frac{\bar{\alpha}_i^q(x, x' \mid u_m)}{\bar{\beta}_i^q(x \mid u_m)}\right)^{\pi_i(m)},$$
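The two posterior rate averages can be computed directly from the expected statistics and the mixture weights, e.g. (dictionary layout assumed):

import numpy as np

def posterior_rates(pi, E_M, E_T, alpha=5.0, beta=10.0):
    """Arithmetic (R) and geometric (R_tilde) posterior rate averages for one
    fixed transition x -> x'; pi, E_M and E_T are keyed by candidate parent set m."""
    ratios = {m: (pi[m] * E_M[m] + alpha) / (pi[m] * E_T[m] + beta) for m in pi}
    R = sum(pi[m] * ratios[m] for m in pi)                       # arithmetic mean
    R_tilde = float(np.prod([ratios[m] ** pi[m] for m in pi]))   # geometric mean
    return R, R_tilde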
Algorithm 2 Gradient-based Structure Learning
1: Input: Initial trajectories qi(x; t), boundary conditions qi(x; 0) and ρi(x; T), initial mixture weights π(0), data D and iterator n = 0.
2: repeat
3:   Compute expected sufficient statistics Eq[M] and Eq[T] given π(n) using Algorithm 1.
4:   for all i ∈ {1, . . . , N} do
5:     Maximize (6) with respect to πi, set the maximizer πi(n+1) = πi*, and set n → n + 1.
6:   end for
7: until Convergence of F[D, π, q]
8: Output: Maximum a posteriori mixture weights π(n)
which take the form of an arithmetic and a geometric mean, respectively. For the variational transition matrix, we find the algebraic relationship
$$\tau_i(x, x', u; t) = q_i(x; t)\, q_i^u(t)\, \tilde{R}_i^{\pi}(x, x' \mid u)\, \frac{\rho_i(x'; t)}{\rho_i(x; t)}. \quad (11)$$
Because the derivation is quite lengthy, we refer to supplementary B.2 for details. In order to incorporate noisy observations into the CTBN dynamics, we need to specify an observation model. In the following, we assume that the data likelihood factorizes as $p(Y = y^k \mid X(t_k) = s) = \prod_i p_i(Y_i = y_i^k \mid X_i(t_k) = x)$, allowing us to condition on the data by enforcing the jump conditions
$$\lim_{t \to t_k^-} \rho_i(x; t) = \lim_{t \to t_k^+} p_i(Y_i = y_i^k \mid X_i(t_k) = x)\, \rho_i(x; t). \quad (12)$$
The converged solutions of the ODE system can then be used to compute the sufficient statistics via (7). For a full derivation, we refer to the supplementary material B.2.
We note that in the limiting case of a degenerate mixture distribution π, this set of equations reduces to the marginal dynamics for CTBNs proposed in [11]. The set of ODEs can be solved iteratively as a forward-backward fixed-point procedure, in the same manner as in previous works [17, 4] (see Algorithm 1).
Exhaustive structure search. As we are now able to calculate expected sufficient statistics given mixture weights π, we can design an EM algorithm for structure learning. To this end, we iteratively optimize π given the expected sufficient statistics, which we subsequently re-calculate. The EM algorithm is summarized in Algorithm 2. In contrast to the exact EM procedure [16], we preserve structural modularity. We can thus optimize the parent set of each node independently. This already provides a huge boost in performance, as in our case the search space scales exponentially in the number of components, instead of super-exponentially. In the paragraph "Greedy structure search", we demonstrate how to further reduce the complexity to a polynomial scaling, while preserving most of the prediction accuracy.
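In code, the outer loop of Algorithm 2 reduces to alternating the two steps; estimate_statistics stands in for Algorithm 1 and optimize_mixture for the node-wise maximization of (6), both treated as black boxes in this sketch:

def structure_em(pi, data, estimate_statistics, optimize_mixture, max_iter=50, tol=1e-4):
    """Gradient-based structure learning (Algorithm 2): alternate between expected
    sufficient statistics (E-step) and node-wise mixture-weight updates (M-step)."""
    last_score = -float("inf")
    for _ in range(max_iter):
        E_M, E_T, score = estimate_statistics(pi, data)                 # Algorithm 1
        pi = [optimize_mixture(E_M[i], E_T[i]) for i in range(len(pi))]
        if abs(score - last_score) < tol:                               # convergence of F[D, pi, q]
            break
        last_score = score
    return pi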
Restricted exhaustive search. In many cases, especially for applications in molecular biology, comprehensive databases³ of putative interactions are available and can be used to construct over-complete yet not fully connected prior networks G0 of reported gene and protein interactions. In this case, we can restrict the search space by excluding non-reported parents for every node i, parG(i) = parG0(i), allowing for structure learning of large networks.
Greedy structure search. Although we have derived a gradient-based scheme for exhaustive search, the number of possible mixture components still equals the number of all possible parent sets. However, in many applications, it is reasonable to assume the number of parents to be limited, which corresponds to a sparsity assumption. For this reason, greedy schemes for structure learning have been proposed in previous works [16]. Here, candidate parent sets were limited to have at most K parents, in which case the number of candidate graphs only grows polynomially in the number of nodes. In order to incorporate a similar scheme in our method, we have to perform an additional approximation to the set of equations (10). The problem lies in the expectation step (Algorithm 1), as the expectation is performed with respect to the full over-complete graph. In order to calculate expectations of the geometric mean $\mathbb{E}_i^u[\tilde{R}_i^{\pi}(x, x' \mid u)]$, we have to consider the over-complete set of parent nodes q_i^u(t) for each node i. However, for the calculation of the arithmetic mean $\mathbb{E}_i^u[R_i^{\pi}(x, x' \mid u)]$, only parent sets restricted to the considered sub-graphs have to be considered, due to linearity. For this reason, we approximate the geometric mean by the arithmetic mean, $\tilde{R}_i^{\pi} \approx R_i^{\pi}$, corresponding to the first-order expansion $\mathbb{E}_i^{\pi}[\ln(x)] = \ln(\mathbb{E}_i^{\pi}[x]) + \mathcal{O}(\mathrm{Var}[x])$, which, as before, becomes more valid for more concentrated πi and is exact if πi is degenerate.
³ e.g. https://string-db.org/ or https://www.ebi.ac.uk/intact/
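For the greedy variant, the mixture components are restricted to parent sets with at most K elements; a possible enumeration helper (node indexing is illustrative):

from itertools import combinations

def candidate_parent_sets(node, n_nodes, K):
    """All parent sets of size <= K for `node`: a polynomially sized subset of
    the power set used in the exhaustive search."""
    others = [j for j in range(n_nodes) if j != node]
    sets = [()]
    for k in range(1, K + 1):
        sets.extend(combinations(others, k))
    return sets

For a five-node network with K = 2, this yields 11 candidate sets per node.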
5 Experiments
We demonstrate the effectiveness of our method on synthetic data and on two real-world data sets. For all experiments, we consider a fixed set of hyper-parameters. We set the Dirichlet concentration parameter ci = 0.9 for all i ∈ {1, . . . , N}. Further, we assume a prior for the generators which is uninformative on the structure: αi(x, x′ | u) = 5 and βi(x | u) = 10 for all x, x′ ∈ Xi, u ∈ Ui. For the optimization step in Algorithm 2, we use the standard Matlab implementation of the interior-point method with 100 random restarts. This is feasible, as the Jacobian of (9) can be calculated analytically.
5.1 Synthetic Data
In this experiment, we consider synthetic data generated by random graphs with a flat degree distribution, truncated at degree two, i.e. each node has at most two parents. We restrict the state space of each node to be binary, X = {−1, 1}. The generators of each node are chosen such that they undergo Glauber dynamics [9], $R_i(x, \bar{x} \mid u) = \tfrac{1}{2} + \tfrac{1}{2} \tanh\big(\gamma x \sum_{j \in \mathrm{par}_{\mathcal{G}}(i)} u_j\big)$, which is a popular model for benchmarking, also in the CTBN literature [4]. The parameter γ denotes the coupling strength of node j to i. With increasing γ, the dynamics of the network become increasingly deterministic, converging to a logical model for γ → ∞. In order to avoid violating the weak-coupling assumption [11] underlying our method, we choose γ = 0.6. We generated a varying number of trajectories, each containing 10 transitions. In order to have a fair evaluation, we generate data from thirty random graphs among five nodes, as described above. By computing the edge probabilities p(eij = 1) via (4), we can evaluate the performance of our method as an edge classifier by computing the receiver operating characteristic curve (ROC) and the precision-recall curve (PR) and their areas under the curve (AUROC and AUPR). For an unbiased classifier, both quantities have to approach 1 for increasing amounts of data.
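The synthetic benchmark can be mirrored with the rate function below and a standard ROC/PR evaluation of the edge probabilities; the use of scikit-learn (with average precision as the AUPR surrogate) is a choice made here for illustration, not the authors':

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def glauber_rate(x, x_new, u, gamma=0.6):
    """R_i(x, x_bar | u) = 1/2 + 1/2 tanh(gamma * x * sum_j u_j) for binary states
    {-1, 1}; for binary nodes the target state x_new is simply -x."""
    return 0.5 + 0.5 * np.tanh(gamma * x * sum(u))

def edge_scores(true_adj, edge_probs):
    """AUROC / AUPR of p(e_ij = 1) against the ground-truth adjacency (off-diagonal)."""
    mask = ~np.eye(true_adj.shape[0], dtype=bool)
    return (roc_auc_score(true_adj[mask], edge_probs[mask]),
            average_precision_score(true_adj[mask], edge_probs[mask]))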
Complete data. In this experiment, we investigate the viability of using the marginal mixture likelihood lower bound as in (5), given the complete data in the form of the sufficient statistics M and T. In Figure 1 we compare the AUROCs a) and AUPRs b) achieved in an edge classification task using exhaustive scoring of the exact marginal likelihood (2) as in [15] (blue) and gradient ascent in π of the mixture marginal likelihood lower bound (red, dashed) as in (5). In Figure 1 c) we show via numerical integration that the marginal mixture likelihood lower bound approaches the exact one (2) for decreasing entropy of π and an increasing number of trajectories. Small negative deviations are due to the limited accuracy of numerical integration. Additional synthetic experiments investigating the effect of different concentration parameters c can be found in the supplementary C.1.
Incomplete data. Next, we test our method for network inference from incomplete data. Noisy incomplete observations were generated by measuring the state at Ns = 10 uniformly random time points and adding Gaussian noise with zero mean and variance 0.2. Because the expectation step in Algorithm 1 is only approximate [11], we do not expect a perfect classifier in this experiment. We compare the exhaustive search with a K = 4 greedy search, such that both methods have the same search space. We initialized both methods with πi(0)(m) = 1 if m = parG(i) and 0 otherwise, as a heuristic. In Figure 2 a) and b), it can be seen that both methods approach AUROCs and AUPRs close to one for increasing amounts of data. However, due to the additional approximation in the greedy algorithm, it performs slightly worse. In Figure 2 c) and d) we plot the corresponding ROC and PR curves for 40 trajectories.
Scalability. We compare the scalability of our gradient-based greedy structure search with a greedy hill-climbing implementation of structure search (K = 2) with variational inference as in [11] (we limited this search to one sweep over families). We fixed all parameters as before and the number of trajectories to 40. Results are displayed in Figure 3.
Dependence on initial values. We investigate the performance of our method with respect to different initial values. For this, we draw the initial values of the mixture components uniformly at random and then project them onto the probability simplex via normalization, $\tilde{\pi}_i^{(0)}(m) \sim U(0, 1)$ and $\pi_i^{(0)}(m) = \tilde{\pi}_i^{(0)}(m) / \sum_n \tilde{\pi}_i^{(0)}(n)$. We fixed all parameters as before and the number of trajectories to 40. In Figure 2, we display the ROC e) and PR f) curves for our heuristic and for random initial values. We find that the heuristic performs almost consistently better.
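The random initialization described above is simply a normalized draw of uniform weights, e.g.:

import numpy as np

def random_mixture_init(n_components, rng=None):
    """pi_tilde ~ U(0, 1) componentwise, then normalized onto the probability simplex."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.uniform(size=n_components)
    return w / w.sum()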
5.2 Real-world data
British household dataset. To show scalability in a realistic setting, we applied our method to the British Household Panel Survey (ESRC Research Centre on Micro-social Change, 2003). This dataset has been collected yearly from 1991 to 2002, thus consisting of 11 time points. Each of the 1535 participants was questioned about several facts of their life. We picked 15 of these that we deemed interpretable; some of them, such as "health status", "job status" and "health issues", have non-binary state spaces. Because the participants had the option of not answering a question and changes in their lives are unlikely to happen during the questionnaire, this dataset is strongly incomplete. Out of the 1535 trajectories, we picked
600 at random and inferred the network presented in Figure 4. In supplementary C.2 we investigate the stability of this result. We performed inference with our greedy algorithm (K = 2). This dataset has been considered in [16], where a network among 4 variables was inferred. Inferring a large network at once is important, as latent variables can create spurious edges in the network [2].
IRMA gene-regulatory network. Finally, we investigate performance on realistic data. For this, we apply our method to the In vivo Reverse-engineering and Modeling Assessment (IRMA) network [3]. It is, to the best of our knowledge, the only molecular biological network with a ground truth. This gene regulatory network has been implemented on cultures of yeast as a benchmark for network reconstruction algorithms. Special care has been taken to isolate this network from crosstalk with other cellular components. The authors of [3] provide time course data from two perturbation experiments, referred to as "switch on" and "switch off", and attempted reconstruction using different methods. In Table 1, we compare to other methods tested in [18]. For more details on this experiment and on the other methods, we refer to the supplementary C.3.
6 Conclusion
We presented a novel scalable gradient-based approach for structure learning for CTBNs from complete and incomplete data, and demonstrated its usefulness on synthetic and real-world data. In the future we plan to apply our algorithm to new bio-molecular datasets. Further, we believe that the mixture likelihood may also be applicable to tasks different from structure learning.
Acknowledgements
We thank the anonymous reviewers for helpful comments on the previous version of this manuscript. Dominik Linzner and Michael Schmidt are funded by the European Union’s Horizon 2020 research and innovation programme (iPC–Pediatric Cure, No. 826121). Heinz Koeppl acknowledges support by the European Research Council (ERC) within the CONSYN project, No. 773196, and by the Hessian research priority programme LOEWE within the project CompuGene. | 1. What is the novel approach introduced by the paper in continuous-time structure estimation?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of scalability and comparisons with other works?
3. Do you have any concerns regarding the connection between the variational method used in the paper and previous works?
4. How does the reviewer assess the clarity and quality of the paper's content, including minor notes and references? | Review | Review
I like the approach. This paper describes something new for continuous-time structure estimation. While mixtures have been explored for structure estimation in other domains, they have not been applied here, and there are non-trivial hurdles that were overcome to do so. This method seems to be scalable. However, this is not tested in the paper. It would be good to see some experiments that would demonstrate this, particularly as the BHPS data was down-sampled to 200 trajectories (from 1535) and it isn't clear why. The variational method used seems very similar to that of [4]. This paper should make the connection clearer. Finally, structural EM (SEM) (see Section 19.4.3 of Koller & Friedman or Friedman's original 1997 paper) from BNs has been applied to CTBNs before (see [16], for example, and it seems to be implemented in CTBN-RLE). While exact inference is not scalable, this could be used with the variational inference of [4]. This would make a natural comparison, as it is a scalable existing alternative that also employs variational inference. Minor notes:
- I think Equation 2 needs a "+1" inside the Gamma function, as well.
- The last equation on page 2 of the supplementary material does not seem to account for the sqrt(2pi/z) part of Stirling's approximation (which has an apostrophe, please note). |
NIPS | Title
Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data
Abstract
Continuous-time Bayesian Networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed-form. However, if data is incomplete, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem, especially for structure learning tasks, where this has to be done for each element of a super-exponentially growing set of possible structures. In order to circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via a gradient-based optimization of mixture weights. We combine this approach with a new variational method that allows for a closed-form calculation of this mixture marginal likelihood. We show the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.
1 Introduction
Learning correlative or causative dependencies in multivariate data is a fundamental problem in science and has application across many disciplines such as natural and social sciences, finance and engineering [1, 20]. Most statistical approaches consider the case of snapshot or static data, where one assumes that the data is drawn from an unknown probability distribution. For that case several methods for learning the directed or undirected dependency structure have been proposed, e.g., the PC algorithm [21, 13] or the graphical LASSO [8, 12], respectively. Causality for such models can only partially be recovered up to an equivalence class that relates to the preservation of v-structures [21] in the graphical model corresponding to the distribution. If longitudinal and especially temporal data is available, structure learning methods need to exploit the temporal ordering of cause and effect that is implicit in the data for determining the causal dependency structure. One assumes that the data are drawn from an unknown stochastic process. Classical approaches such as Granger causality or transfer entropy methods usually require large sample sizes [23]. Dynamic Bayesian networks offer an appealing framework to formulate structure learning for temporal data within the graphical model framework [10]. The fact that the time granularity of the data can often be very different from the actual granularity of the underlying process motivates the extension to continuous-time Bayesian networks (CTBN) [14], where no time granularity of the unknown process has to be assumed. Learning the structure within the CTBN framework involves a combinatorial search over structures and is hence generally limited to low-dimensional problems even if one considers variational approaches [11] and/or greedy hill-climbing strategies in structure space [15, 16]. Reminiscent of
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
optimization-based approaches such as graphical LASSO, where structure scoring is circumvented by performing gradient descent on the edge coefficients of the structure under a sparsity constraint, we here propose the first gradient-based scheme for learning the structure of CTBNs.
2 Background
2.1 Continuous-time Bayesian Networks
We consider continuous-time Markov chains (CTMCs) {X(t)}t≥0 taking values in a countable statespace S . A time-homogeneous Markov chain evolves according to an intensity matrixR : S×S → R, whose elements are denoted by R(s, s′), where s, s′ ∈ S . A continuous-time Bayesian network [14] is defined as an N -component process over a factorized state-space S = X1 × · · · × XN evolving jointly as a CTMC. For local states xi, x′i ∈ Xi, we will drop the states’ component index i, if evident by the context and no ambiguity arises. We impose a directed graph structure G = (V,E), encoding the relationship among the components V ≡ {V1, . . . , VN}, which we refer to as nodes. These are connected via an edge set E ⊆ V × V . This quantity is the structure, which we will later learn. The state of each component is denoted by Xi(t) assuming values in Xi, which depends only on the states of a subset of nodes, called the parent set parG(i) ≡ {j | (j, i) ∈ E}. Conversely, we define the child set chG(i) ≡ {j | (i, j) ∈ E}. The dynamics of a local state Xi(t) are described as a Markov process conditioned on the current state of all its parents Un(t) taking values in Ui ≡ {Xj | j ∈ parG(i)}. They can then be expressed by means of the conditional intensity matrices (CIMs) Ri : Xi ×Xi × Ui → R, where ui ≡ (u1, . . . uL) ∈ Ui denotes the current state of the parents (L = |parG(i)|). The CIMs are the generators of the dynamics of a CTBN. Specifically, we can express the probability of finding node i in state x′ after some small time-step h, given that it was in state x at time t with x, x′ ∈ Xi as
p(Xi(t+ h) = x ′ | Xi(t) = x, Ui(t) = u) = δx,x′ + hRi(x, x′ | u) + o(h),
where Ri(x, x′ | u) is the rate the transition x → x′ given the parents’ state u ∈ Ui and δx,x′ being the Kronecker-delta. We further make use of the small o(h) notation, which is defined via limh→0 o(h)/h = 0. It holds that Ri(x, x | u) = − ∑ x′ 6=xRi(x, x
′ | u). The CIMs are connected to the joint intensity matrix R of the CTMC via amalgamation – see, for example [14].
2.2 Structure Learning for CTBNs
Complete data. The likelihood of a CTBN can be expressed in terms of its sufficient statistics [15], Mi(x, x′ | u), which denotes the number of transitions of node i from state x to x′ and Ti(x | u), which denotes the amount of time node i spend in state x. In order to avoid clutter, we introduce the setsM≡ {Mi(x, x′ | u) | i ∈ {1, . . . , N}, x, x′ ∈ X , u ∈ U} and T ≡ {Ti(x | u) | i ∈ {1, . . . , N}, x ∈ X , u ∈ U}. The likelihood then takes the form
p(M, T | G, R) = N∏ i=1 exp ∑ x,x′ 6=x,u Mi(x, x ′ | u) lnRi(x, x′ | u)− Ti(x | u)Ri(x, x′ | u) . (1)
In [15] and similarly in [22] it was shown that a marginal likelihood for the structure can be calculated in closed form, when assuming a gamma prior over the rates Ri(x, x′ | u) ∼ Gam(αi(x, x′ | u), βi(x ′ | u)). In this case, the marginal log-likelihood of a structure takes the form
ln p(M, T | G, α, β) ∝ N∑ i=1 ∑ u,x,x′ 6=x { ln Γ (ᾱi(x, x ′ | u))− ᾱi(x, x′ | u) ln β̄i(x | u) } , (2)
with ᾱi(x, x′ | u) ≡Mi(x, x′ | u) + αi(x, x′ | u) and β̄i(x | u) ≡ Ti(x | u) + βi(x | u). Structure learning in previous works [16, 22, 11] is then performed by iterating over possible structures and scoring them using the marginal likelihood. The best scoring structure is then the maximum-aposteriori estimate of the structure.
Incomplete data. In many cases, the sufficient statistics of a CTBN cannot be provided. Instead, data comes in the form of noisy state observations at some points in time. In the following, we
will assume data is provided in form of Ns samples D ≡ { (tk, yk) | k ∈ {1, . . . , Ns} }
, where yk is some, possibly noisy, measurement of the latent-state generated by some observation model yk ∼ p(Y = yk | X(tk) = s) at time tk. This data is incomplete, as the sufficient statistics of the underlying latent process have to be estimated before model identification can be performed. In [16], an expectation-maximization for structure learning (SEM) was introduced, in which, given a proposal CTBN, sufficient statistics were first estimated by exact inference, the CTBN parameters were optimized given those expected sufficient-statistics and, subsequently, structures where scored via (1). Similarly, in [11] expected sufficient-statistics were estimated via variational inference under marginal (parameter-free) dynamics and structures were then scored via (2).
The problem of structure learning from incomplete data has two distinct bottlenecks, (i) Latent state estimation (scales exponentially in the number of nodes) (ii) Structure identification (scales super-exponentially in the number of nodes). While bottleneck (i) has been tackled in many ways [4, 5, 19, 11], existing approaches [16, 11] employ a combinatorial search over structures, thus an efficient solution for bottleneck (ii) is still outstanding.
Our approach. We will employ a similar strategy as mentioned above in this manuscript. However, statistics are estimated under a marginal CTBN that no longer depends on rate parameters or a discrete structure. Instead, statistics are estimated given a mixture of different parent-sets. Thus, instead of blindly iterating over possible structures in a hill-climbing procedure, we can update our distribution over structures by a gradient step. This allows us to directly converge into regions of high-probability. Further, in combination of this gradient-based approach with a high-order variational method, we can perform estimation of the expected sufficient-statistics in large systems. These two features combined, enable us to perform structure learning in large systems. An implementation of our method is available via Git1.
3 Likelihood of CTBNs Under a Mixture of CIMs
Complete data. In the following, we consider a CTBN over some 2over-complete graph G. In practice, this graph may be derived from data as prior knowledge. In the absence of prior knowledge, we will choose the full graph. We want to represent its CIMsRi(x, x′ | u), here for node i, as mixture of CIMs of smaller support and write by using the power-set P(·) (set of all possible subsets)
Ri(x, x ′ | u) = ∑ m∈P(parG(i)) πi(m)ri(x, x ′ | um) ≡ Eπi [ri(x, x′ | um)], (3)
where um denotes the projection of the full parent-state u on the subsetm, i.e. f(um) = ∑ u/um
f(u), and the expectation Eπi [f(θm)] = ∑ m∈P(parG(i))
πi(m)f(θm). The mixture-weights are given by a distribution πi ∈ ∆i with ∆i being the |P(parG(i))|−dimensional probability simplex. Corresponding edge probabilities of the graph can be computed via marginalization. The probability that an edge eij ∈ E exists is then
p(eij = 1) = ∑
m∈P(parG(j))
πj(m)1(i ∈ m), (4)
with 1(·) being the indicator function. In order to arrive at a marginal score for the mixture we insert (3) into (1) and apply Jensen’s inequality Eπi [ln (r)] ≤ ln (Eπi [r]). This yields a lower-bound to the mixture likelihood
p(M, T | π, r) ≥ N∏ i=1 ∏ x,x′ 6=x,um eE π i [Mi(x,x ′|um) ln ri(x,x′|um)−Ti(x|um)ri(x,x′|um)].
For details on this derivation, we refer to the supplementary material A.1. Note that Jensens inequality, which only provides a poor approximation in general, improves with increasing concentration of probability mass and becomes exact for degenerate distributions. For the task of selecting a CTBN with a specific parent-set, it is useful to marginalize over the rate parameters r of the CTBNs. This allows for a direct estimation of the parent-set, without first estimating the rates. This
1https://git.rwth-aachen.de/bcs/ssl-ctbn 2An over-complete graph has more edges than the underlying true graph, which generated the data.
marginal likelihood can be computed under the assumption of independent gamma prior distributions ri(x, x
′ | um) ∼ Gam(αi(x, x′ | um), βi(x′ | um)) over the rates. The marginal likelihood lowerbound can then be computed analytically. Under the assumption of independent Dirichlet priors πi ∼ Dir(πi | ci), with concentration parameters ci we arrive at a lower-bound to the marginal log-posterior of the mixture weights π
ln p(π | M, T , α, β) ≥ ∑ i Fi[M, T , π] + lnZ, (5)
Fi[M, T , π] ≡ ∑
m,um,x,x′ 6=x
{ ln Γ (ᾱi(x, x ′ | um))− ᾱi(x, x′ | um) ln β̄i(x | um) } +ln Dir(πi | ci),
with the updated posterior parameters ᾱi(x, x′ | um) ≡ πi(m)Mi(x, x′ | um) + αi(x, x′ | um) and β̄i(x | um) ≡ πi(m)Ti(x | um) + βi(x | um). For details, we refer to the supplementary material A.2. The constant log-partition function lnZ can be ignored in the following analysis. Because (5) decomposes into a sum of node-wise terms, the maximum-a-posterior estimate of the mixture weights of node i can be calculated as solution of the following optimization problem:
π∗i = arg max πi∈∆i {Fi[M, T , π]} . (6)
By construction, learning the mixture weights π of the CIMs, corresponds to learning a distribution over parent-sets for each node. We thus re-expressed the problem of structure learning to an estimation of π. Further, we note that for any degenerate π, (5) coincides with the exact structure score (2).
Incomplete data. In the case of incomplete noisy data D, the likelihood of the CTBN does no longer decompose into node-wise terms. Instead, the likelihood is one of the full amalgamated CTMC [16]. In order to tackle this problem, approximation methods through sampling [7, 6, 19], or variational approaches [4, 5] have been investigated. These, however, either fail to treat highdimensional spaces because of sample sparsity, are unsatisfactory in terms of accuracy, or provide only an uncontrolled approximation. Our method is based on a variational approximation, e.g. weak coupling expansion [11]. Under this approximation, we recover by the same calculation an approximate likelihood of the same form as (1), where the sufficient statistics Mi(x, x′ | u) and Ti(x | u) are, however, replaced by their expectation Eq [Mi(x, x′ | u)] and Eq [Ti(x | u)] under a variational distribution q, – for details we refer to the supplementary B.1. Subsequently, also our optimization objective Fi[M, T , π] becomes dependent on the variational distribution Fi[D, π, q]. In the following chapter, we will develop an Expectation-Maximization (EM)-algorithm that iteratively estimates the expected sufficient-statistics given the mixture-weights and subsequently optimizes those mixture-weights given the expected sufficient-statistics.
4 Incomplete data: Expected Sufficient Statistics Under a Mixture of CIMs
Short review of the foundational method. In [11], the exact posterior over paths of a CTBN given incomplete dataD, is approximated by a path measure q(X[0,T ]) of a variational time-inhomogeneous Markov process via a higher order variational inference method. For a CTBN, this path measure is fully described by its node-wise marginals qi(x′, x, u; t) ≡ qi(Xi(t+ h) = x′, Xi(t) = x, Ui(t) = u; t). From it, one can compute the marginal probability qi(x; t) of node i to be in state x, the marginal probability of the parents qi(Ui(t) = u; t) ≡ qui (t) and the marginal transition probability τi(x, x
′, u; t) ≡ limh→0 qi(x′, x, u; t)/h for x 6= x′. The exact form of the expected statistics were calculated to be
Eq [Ti(x | u)] ≡ ∫ T
0
dt qi(x; t)q u i (t), Eq [Mi(x, x
′ | u)] ≡ ∫ T
0
dt τi(x, x ′, u; t). (7)
In the following, we will use the short-hand Eq [M] and Eq [T ] to denote the sets of expected sufficient-statistics. We note, that the variational distribution q has the support of the full overcomplete parent-set parG(i). Via marginalization of qi(x
′, x, u; t), the marginal probability and the marginal transition probability can be shown to be connected via the relation
d dt qi(x; t) = ∑ x′ 6=x,u [τi(x, ′ x, u; t)− τi(x, x′, u; t)] . (8)
Algorithm 1 Stationary points of Euler–Lagrange equation 1: Input: Initial trajectories qi(x; t), boundary conditions q(x; 0) and ρ(x;T ), mixture weights π
and data D. 2: repeat 3: repeat 4: for all i ∈ {1, . . . , N} do 5: for all (yk, tk) ∈ D do 6: Update ρi(t) by backward propagation from tk to tk−1 using (10) fulfilling the jump conditons (12). 7: end for 8: Update qi(t) by forward propagation using (10) given ρi(t). 9: end for
10: until Convergence 11: Compute expected sufficient statistics using (7) and (11) from qi(t) and ρi(t). 12: until Convergence of F [D, π, q] 13: Output: Set of expected sufficient statistics Eq[M] and Eq[T ].
Application to our setting. As discussed in the last section, the objective function in the incomplete data case has the same form as (5)
Fi[D, π, q] ≡ ∑
m,um,x,x′ 6=x
{ ln Γ (ᾱqi (x, x ′ | um))− ᾱqi (x, x ′ | um) ln β̄qi (x | um) } +ln Dir(πi | ci),
(9)
however, now with ᾱqi (x, x ′ | um) ≡ πi(m)Eq[Mi(x, x′ | um)] + αi(x, x′ | um) and β̄qi (x | um) ≡ πi(m)Eq[Ti(x | um)] + βi(x | um). In order to arrive at approximation to the expected sufficient statistics in our case, we have to maximize (9) with respect to q, while fulfilling the constraint (8). The corresponding Lagrangian becomes
L[D, π, q, λ] = N∑ i=1 Fi[D, π, q]− ∑ x,x′ 6=x,u ∫ T 0 dt λi(x; t) { d dt qi(x; t)− [τi(x,′ x, u; t)− τi(x, x′, u; t)] } , with Lagrange-multipliers λi(x; t). In order to derive Euler-Lagrange equations, we employ Stirlings-
approximation for the gamma function Γ(z) = √
2π z ( z e )z +O ( 1 z ) , which becomes exact asymp-
totically. In our case, Stirlings-approximation is valid if ᾱ 1. We thereby assumed that either enough data has been recorded, or a sufficiently strong prior α. Finally, we recover the approximate forward- and backward-equations of the mixture CTBNs as the stationary point of the Lagrangian Euler-Lagrange equations
d dt ρi(t) = Ω̃ π i (t)ρi(t), d dt qi(t) = qi(t)Ω π i (t), (10)
with effective rate matrices
Ωπi (x, x ′; t) ≡ Eui [ R̃πi (x, x ′ | u) ] ρi(x′; t) ρi(x; t)
Ω̃πi (x, x ′; t) ≡ (1− δx,x′)Eui [ R̃πi (x, x ′ | u) ] + δx,x′ {Eui [Rπi (x, x′ | u)] + Ψi(x; t)} ,
with ρi(x; t) ≡ exp(−λi(x; t)) and Ψi(x; t) as given in the supplementary material B.2. Further we have introduced the shorthand Eui [f(u)] = ∑ u f(u)q u i (t)
and defined the posterior expected rates
Rπi (x, x ′ | u) ≡ Eπi
[ ᾱqi (x, x
′ | um) β̄qi (x | um)
] , R̃πi (x, x ′ | u) ≡ ∏ m ( ᾱqi (x, x ′ | um) β̄qi (x | um) )πi(m) ,
Algorithm 2 Gradient-based Structure Learning 1: Input: Initial trajectories qi(x; t), boundary conditions qi(x; 0) and ρi(x;T ), initial mixture
weights π(0), data D and iterator n = 0 2: repeat 3: Compute expected sufficient statistics Eq[M] and Eq[T ] given π(n) using Algorithm 1. 4: for all i ∈ {1, . . . , N} do 5: Maximize (6) with respect to πi, set maximizer π (n+1) i = π ∗ i and n→ n+ 1. 6: end for 7: until Convergence of F [D, π, q] 8: Output: Maximum-a-posteriori mixture weights π(n)
which take the form of an arithmetic and geometric mean, respectively. For the variational transitionmatrix we find the algebraic relationship
τi(x, x ′, u; t) = qi(x; t)q u i (t)R̃ π i (x, x
′ | u)ρi(x ′; t)
ρi(x; t) . (11)
Because, the derivation is quite lengthy, we refer to supplementary B.2 for details. In order to incorporate noisy observations into the CTBN dynamics, we need to specify an observation model. In the following we assume that the data likelihood factorizes p(Y = yk | X(tk) = s) = ∏ i pi(Yi = y k i | Xi(tk) = x), allowing us to condition on the data by enforcing jump conditions
lim t→tk− ρi(x; t) = lim t→tk+
pi(Yi = y k i | Xi(tk) = x)ρi(x; t). (12)
The converged solutions of the ODE system can then be used to compute the sufficient statistics via (7). For a full derivation, we refer to the supplementary material B.2.
We note that in the limiting case of a degenerate mixture distribution π, this set of equations reduces to the marginal dynamics for CTBNs proposed in [11]. The set of ODEs can be solved iteratively as a fixed-point procedure in the same manner as in previous works [17, 4] (see Algorithm 1) in a forward-backward procedure.
Exhaustive structure search. As we are now able to calculate expected-sufficient statistics given mixture weights π, we can design an EM-algorithm for structure learning. For this iteratively optimize π given the expected sufficient statistics, which we subsequently re-calculate. The EM-algorithm is summarized in Algorithm 2. In contrast to the exact EM-procedure [16], we preserve structure modularity. We can thus optimize the parent-set of each node independently. This already provides a huge boost in performance, as in our case the search space scales exponentially in the components, instead of super-exponentially. In the paragraph "Greedy structure search", we will demonstrate how to further reduce complexity to a polynomial scaling, while preserving most prediction accuracy.
Restricted exhaustive search. In many cases, especially for applications in molecular biology, comprehensive 3databases of putative interactions are available and can be used to construct overcomplete yet not fully connected prior networks G0 of reported gene and protein interactions. In this case we can restrict the search space by excluding possible non-reported parents for every node i, parG(i) = parG0(i), allowing for structure learning of large networks.
Greedy structure search. Although we have derived a gradient-based scheme for exhaustive search, the number of possible mixture components still equals the number of all possible parent-sets. However, in many applications, it is reasonable to assume the number of parents to be limited, which corresponds to a sparsity assumption. For this reason, greedy schemes for structure learning have been proposed in previous works [16]. Here, candidate parent-sets were limited to have at most K parents, in which case, the number of candidate graphs only grows polynomially in the number of nodes. In order to incorporate a similar scheme in our method, we have to perform an additional approximation to the set of equations (10). The problem lies in the expectation step (Algorithm 1), as expectation is performed with respect to the full over-complete graph. In order to calculate expectations of the geometric mean Eui [R̃ π i (x, x
′ | u)], we have to consider the over-complete set of parenting nodes qui (t) for each node i. However, for the calculation of the arithmetic mean E u i [R π i (x, x
′ | u)] only 3e.g. https://string-db.org/ or https://www.ebi.ac.uk/intact/
parent-sets restricted to the considered sub-graphs have to be considered, due to linearity. For this reason, we approximate the geometric mean by the arithmetic mean R̃πi ≈ Rπi , corresponding to the first-order expansion Eπi [ln(x)] = ln(E π i [x]) +O(Var[x]), which, as before, becomes more valid for more concentrated πi and is exact if πi is degenerate.
5 Experiments
We demonstrate the effectiveness of our method on synthetic and two real-world data sets. For all experiments, we consider a fixed set of hyper-parameters. We set the Dirichlet concentration parameter ci = 0.9 for all i ∈ {1, . . . , N}. Further, we assume a prior for the generators, which is uninformative on the structure αi(x, x′ | u) = 5 and βi(x | u) = 10, for all x, x′ ∈ Xi, u ∈ Ui. For the optimization step in Algorithm 2, we use standard Matlab implementation of the interior-point method with 100 random restarts. This is feasible, as the Jacobian of (9) can be calculated analytically.
5.1 Synthetic Data
In this experiment, we consider synthetic data generated by random graphs with a flat degree distribution, truncated at degree two, i.e. each nodes has a maximal number of two parents. We restrict the state-space of each node to be binary X = {−1, 1}. The generators of each node are chosen such that they undergo Glauber-dynamics [9]Ri(x, x̄ | u) = 12 + 1 2 tanh ( γx ∑ j∈parG(i) uj ) , which is a popular model for benchmarking, also in CTBN literature [4]. The parameter γ denotes the coupling-strength of node j to i. With increasing γ the dynamics of the network become increasingly deterministic, converging to a logical-model for γ → ∞. In order to avoid violating the weakcoupling assumption [11], underlying our method, we choose γ = 0.6. We generated a varying number of trajectories with each containing 10 transitions. In order to have a fair evaluation, we generate data from thirty random graphs among five nodes, as described above. By computing the edge probabilities p(eij = 1) via (4), we can evaluate the performance of our method as an edgeclassifier by computing the receiver-operator characteristic curve (ROC) and the precision-recall curve (PR) and their area-under-curve (AUROC) and (AUPR). For an unbiased classifier, both quantities have to approach 1, for increasing amounts of data.
Complete data. In this experiment, we investigate the viability of using the marginal mixture likelihood lower-bound as in (5) given the complete data in the form of the sufficient statisticsM and T . In Figure 1 we compare the AUROCs a) and AUPRs b) achieved in an edge classification task using exhaustive scoring of the exact marginal likelihood (2) as in [15] (blue) and gradient ascend in π of the mixture marginal likelihood lower-bound (red-dashed) as in (5). In Figure 1 c) we show via numerical integration, that the marginal mixture likelihood lower-bound approaches the exact one (2) for decreasing entropy of π and increasing number of trajectories. Small negative deviations are due
to the limited accuracy of numerical integration. Additional synthetic experiments investigating the effect of different concentration parameters c can be found in the supplementary C.1
Incomplete data. Next, we test our method for network inference from incomplete data. Noisy incomplete observations were generated by measuring the state at Ns = 10 uniformly random timepoints and adding Gaussian noise with zero mean and variance 0.2. Because of the expectation-step in Algorithm 1, is only approximate [11], we do not expect a perfect classifier in this experiment. We compare the exhaustive search, with a K = 4 parents greedy search, such that both methods have the same search-space. We initialized both methods with π(0)i (m) = 1 if m = parG(i) and 0 else, as a heuristic. In Figure 2 a) and b), it can be seen that both methods approach AUROCs and AUPRs close to one, for increasing amounts of data. However, due to the additional approximation in the greedy algorithm, it performs slightly worse. In Figure 2 c) and d) we plot the corresponding ROC and PR curves for 40 trajectories.
Scalability. We compare the scalability of our gradient-based greedy structure search with a greedy hill-climbing implementation of structure search (K = 2) with variational inference as in [11] (we limited this search to one sweep over families). We fixed all parameters as before and the number of trajectories to 40. Results are displayed in Figure 3.
Dependence on initial values. We investigate the performance of our method with respect to different initial values. For this, we draw the initial values of the mixture components uniformly at random and then project them onto the probability simplex via normalization, π̃(0)_i(m) ∼ U(0, 1) and π(0)_i(m) = π̃(0)_i(m) / ∑_n π̃(0)_i(n). We fixed all parameters as before and the number of trajectories to 40. In Figure 2, we display the ROC e) and PR f) curves for our heuristic and for random initial values. We find that the heuristic performs almost consistently better.
5.2 Real-world data
British household dataset. To show scalability in a realistic setting, we applied our method to the British Household Panel Survey (ESRC Research Centre on Micro-social Change, 2003). This dataset has been collected yearly from 1991 to 2002, thus consisting of 11 time-points. Each of the 1535 participants was questioned about several facts of their life. We picked 15 of those that we deemed interpretable; some of them, "health status", "job status" and "health issues", have non-binary state-spaces. Because the participants had the option of not answering a question, and changes in their lives are unlikely to happen during the questionnaire, this dataset is strongly incomplete. Out of the 1535 trajectories, we picked
600 at random and inferred the network presented in Figure 4. In supplementary C.2 we investigate the stability of this result. We performed inference with our greedy algorithm (K = 2). This dataset has been considered in [16], where a network among 4 variables was inferred. Inferring a large network at once is important, as latent variables can create spurious edges in the network [2].
IRMA gene-regulatory network. Finally, we investigate performance on realistic data. For this, we apply our method to the In vivo Reverse-engineering and Modeling Assessment (IRMA) network [3]. It is, to the best of our knowledge, the only molecular biological network with a ground-truth. This gene regulatory network has been implemented on cultures of yeast as a benchmark for network reconstruction algorithms. Special care has been taken to isolate this network from crosstalk with other cellular components. The authors of [3] provide time course data from two perturbation experiments, referred to as “switch on” and “switch off”, and attempted reconstruction using different methods. In Table 1, we compare to other methods tested in [18]. For more details on this experiment and on the other methods, we refer to the supplementary C.3.
6 Conclusion
We presented a novel scalable gradient-based approach for structure learning for CTBNs from complete and incomplete data, and demonstrated its usefulness on synthetic and real-world data. In the future we plan to apply our algorithm to new bio-molecular datasets. Further, we believe that the mixture likelihood may also be applicable to tasks different from structure learning.
Acknowledgements
We thank the anonymous reviewers for helpful comments on the previous version of this manuscript. Dominik Linzner and Michael Schmidt are funded by the European Union’s Horizon 2020 research and innovation programme (iPC–Pediatric Cure, No. 826121). Heinz Koeppl acknowledges support by the European Research Council (ERC) within the CONSYN project, No. 773196, and by the Hessian research priority programme LOEWE within the project CompuGene. | 1. What is the focus and contribution of the paper regarding CTBN models?
2. What are the strengths of the proposed approach, particularly in managing big data?
3. What are the weaknesses of the paper, especially concerning the likelihood formula and the strong assumption about alpha?
4. How does the reviewer assess the clarity and significance of the paper's content?
5. Are there any questions or concerns regarding the applicability of the proposed method in certain domains? | Review | Review
Originality; the work sheds new light on CTBN models and how to learn them from data in the case when big data has to be managed. Furthermore, the formulation of the structural learning algorithm for complete and incomplete data is a relevant step to improve effectiveness of CTBNs. Quality; the submission is formal and sound. However, I have the following concerns: Pag. 2; formula (1), I would ask to explain why the likelihood misses the part related to permanence in any given state, Pag. 3; I read "The integral ca ..." which one ? I suggest it is better to clarify this point, even if I know it. Pag. 5; the assumption about the alpha>>1 is quite strong and I would kindly ask to better motivate, investigate and analyze its' impact on solutions. I think this could strongly limit the application of the proposed approach in case where you have few observations w.r.t the number of variables. I found some minor typos.. I also would like to know something about inference on the CTBN once you have learnt it from data, i.e. how are you computing filtering, smoothing, ...? Clarity; the paper is in general quite clear, even if some more details and examples to introduce the idea of mixtures could have helped the reader to better understand the proposed approach. Significance; the contribution, in my humble opinion, is relevant with specific reference to the research area of CTBNs. Furthermore, it can help improve results achieved in relevant application domains as finance, medicine and biology. |
NIPS | Title
Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data
Abstract
Continuous-time Bayesian Networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed-form. However, if data is incomplete, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem, especially for structure learning tasks, where this has to be done for each element of a super-exponentially growing set of possible structures. In order to circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via a gradient-based optimization of mixture weights. We combine this approach with a new variational method that allows for a closed-form calculation of this mixture marginal likelihood. We show the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.
1 Introduction
Learning correlative or causative dependencies in multivariate data is a fundamental problem in science and has application across many disciplines such as natural and social sciences, finance and engineering [1, 20]. Most statistical approaches consider the case of snapshot or static data, where one assumes that the data is drawn from an unknown probability distribution. For that case several methods for learning the directed or undirected dependency structure have been proposed, e.g., the PC algorithm [21, 13] or the graphical LASSO [8, 12], respectively. Causality for such models can only partially be recovered up to an equivalence class that relates to the preservation of v-structures [21] in the graphical model corresponding to the distribution. If longitudinal and especially temporal data is available, structure learning methods need to exploit the temporal ordering of cause and effect that is implicit in the data for determining the causal dependency structure. One assumes that the data are drawn from an unknown stochastic process. Classical approaches such as Granger causality or transfer entropy methods usually require large sample sizes [23]. Dynamic Bayesian networks offer an appealing framework to formulate structure learning for temporal data within the graphical model framework [10]. The fact that the time granularity of the data can often be very different from the actual granularity of the underlying process motivates the extension to continuous-time Bayesian networks (CTBN) [14], where no time granularity of the unknown process has to be assumed. Learning the structure within the CTBN framework involves a combinatorial search over structures and is hence generally limited to low-dimensional problems even if one considers variational approaches [11] and/or greedy hill-climbing strategies in structure space [15, 16]. Reminiscent of
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
optimization-based approaches such as graphical LASSO, where structure scoring is circumvented by performing gradient descent on the edge coefficients of the structure under a sparsity constraint, we here propose the first gradient-based scheme for learning the structure of CTBNs.
2 Background
2.1 Continuous-time Bayesian Networks
We consider continuous-time Markov chains (CTMCs) {X(t)}t≥0 taking values in a countable state-space S. A time-homogeneous Markov chain evolves according to an intensity matrix R : S×S → R, whose elements are denoted by R(s, s′), where s, s′ ∈ S. A continuous-time Bayesian network [14] is defined as an N-component process over a factorized state-space S = X1 × · · · × XN evolving jointly as a CTMC. For local states xi, x′i ∈ Xi, we will drop the states’ component index i, if evident by the context and no ambiguity arises. We impose a directed graph structure G = (V,E), encoding the relationship among the components V ≡ {V1, . . . , VN}, which we refer to as nodes. These are connected via an edge set E ⊆ V × V. This quantity is the structure, which we will later learn. The state of each component is denoted by Xi(t) assuming values in Xi, which depends only on the states of a subset of nodes, called the parent set parG(i) ≡ {j | (j, i) ∈ E}. Conversely, we define the child set chG(i) ≡ {j | (i, j) ∈ E}. The dynamics of a local state Xi(t) are described as a Markov process conditioned on the current state of all its parents Ui(t) taking values in Ui ≡ {Xj | j ∈ parG(i)}. They can then be expressed by means of the conditional intensity matrices (CIMs) Ri : Xi × Xi × Ui → R, where ui ≡ (u1, . . . , uL) ∈ Ui denotes the current state of the parents (L = |parG(i)|). The CIMs are the generators of the dynamics of a CTBN. Specifically, we can express the probability of finding node i in state x′ after some small time-step h, given that it was in state x at time t with x, x′ ∈ Xi, as
p(Xi(t+h) = x′ | Xi(t) = x, Ui(t) = u) = δx,x′ + hRi(x, x′ | u) + o(h),
where Ri(x, x′ | u) is the rate of the transition x → x′ given the parents’ state u ∈ Ui and δx,x′ is the Kronecker delta. We further make use of the small o(h) notation, which is defined via limh→0 o(h)/h = 0. It holds that Ri(x, x | u) = −∑_{x′≠x} Ri(x, x′ | u). The CIMs are connected to the joint intensity matrix R of the CTMC via amalgamation – see, for example, [14].
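As a concrete illustration of these definitions, the following small sketch (our own, not from the paper) builds a conditional intensity matrix for a binary node under one fixed parent configuration and evaluates the short-time transition probability above; all names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical CIM for a binary node i conditioned on a fixed parent state u.
# Rows index the current state x, columns the next state x'. Diagonal entries
# satisfy R_i(x, x | u) = -sum_{x' != x} R_i(x, x' | u), so rows sum to zero.
R_u = np.array([[-0.7, 0.7],
                [ 0.3, -0.3]])

def transition_prob(R_u, h):
    """First-order approximation p(X(t+h)=x' | X(t)=x, U(t)=u) = I + h*R + o(h)."""
    return np.eye(R_u.shape[0]) + h * R_u

print(transition_prob(R_u, h=0.01))  # each row sums to one up to o(h)
```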
2.2 Structure Learning for CTBNs
Complete data. The likelihood of a CTBN can be expressed in terms of its sufficient statistics [15]: Mi(x, x′ | u), which denotes the number of transitions of node i from state x to x′, and Ti(x | u), which denotes the amount of time node i spends in state x. In order to avoid clutter, we introduce the sets M ≡ {Mi(x, x′ | u) | i ∈ {1, . . . , N}, x, x′ ∈ X , u ∈ U} and T ≡ {Ti(x | u) | i ∈ {1, . . . , N}, x ∈ X , u ∈ U}. The likelihood then takes the form
p(M, T | G, R) = ∏_{i=1}^{N} exp( ∑_{x, x′≠x, u} [ Mi(x, x′ | u) ln Ri(x, x′ | u) − Ti(x | u) Ri(x, x′ | u) ] ).   (1)
In [15] and similarly in [22] it was shown that a marginal likelihood for the structure can be calculated in closed form when assuming a gamma prior over the rates, Ri(x, x′ | u) ∼ Gam(αi(x, x′ | u), βi(x′ | u)). In this case, the marginal log-likelihood of a structure takes the form
ln p(M, T | G, α, β) ∝ ∑_{i=1}^{N} ∑_{u, x, x′≠x} { ln Γ(ᾱi(x, x′ | u)) − ᾱi(x, x′ | u) ln β̄i(x | u) },   (2)
with ᾱi(x, x′ | u) ≡ Mi(x, x′ | u) + αi(x, x′ | u) and β̄i(x | u) ≡ Ti(x | u) + βi(x | u). Structure learning in previous works [16, 22, 11] is then performed by iterating over possible structures and scoring them using the marginal likelihood. The best scoring structure is then the maximum-a-posteriori estimate of the structure.
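To illustrate how the closed-form score (2) can be evaluated from sufficient statistics, the following is a minimal sketch under our own conventions; the dictionary-based data layout and the constant hyper-parameters are assumptions, not the authors' implementation.

```python
import math

def structure_score(M, T, alpha=5.0, beta=10.0):
    """Contribution of one node to the marginal log-likelihood score of Eq. (2).

    M: dict mapping (x, x_prime, u) -> transition counts M_i(x, x' | u)
    T: dict mapping (x, u) -> dwell times T_i(x | u)
    alpha, beta: gamma prior hyper-parameters (held constant here, as in Section 5).
    """
    score = 0.0
    for (x, x_prime, u), m in M.items():
        if x == x_prime:
            continue
        a_bar = m + alpha                 # posterior shape alpha-bar
        b_bar = T[(x, u)] + beta          # posterior rate beta-bar
        score += math.lgamma(a_bar) - a_bar * math.log(b_bar)
    return score
```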
Incomplete data. In many cases, the sufficient statistics of a CTBN cannot be provided. Instead, data comes in the form of noisy state observations at some points in time. In the following, we will assume data is provided in the form of Ns samples D ≡ {(tk, yk) | k ∈ {1, . . . , Ns}}, where yk is some, possibly noisy, measurement of the latent state generated by some observation model yk ∼ p(Y = yk | X(tk) = s) at time tk. This data is incomplete, as the sufficient statistics of the underlying latent process have to be estimated before model identification can be performed. In [16], an expectation-maximization scheme for structure learning (SEM) was introduced, in which, given a proposal CTBN, sufficient statistics were first estimated by exact inference, the CTBN parameters were optimized given those expected sufficient statistics and, subsequently, structures were scored via (1). Similarly, in [11] expected sufficient statistics were estimated via variational inference under marginal (parameter-free) dynamics and structures were then scored via (2).
The problem of structure learning from incomplete data has two distinct bottlenecks, (i) Latent state estimation (scales exponentially in the number of nodes) (ii) Structure identification (scales super-exponentially in the number of nodes). While bottleneck (i) has been tackled in many ways [4, 5, 19, 11], existing approaches [16, 11] employ a combinatorial search over structures, thus an efficient solution for bottleneck (ii) is still outstanding.
Our approach. We will employ a similar strategy in this manuscript. However, statistics are estimated under a marginal CTBN that no longer depends on rate parameters or a discrete structure. Instead, statistics are estimated given a mixture of different parent-sets. Thus, instead of blindly iterating over possible structures in a hill-climbing procedure, we can update our distribution over structures by a gradient step. This allows us to directly converge into regions of high probability. Further, by combining this gradient-based approach with a high-order variational method, we can perform estimation of the expected sufficient statistics in large systems. These two features combined enable us to perform structure learning in large systems. An implementation of our method is available via Git¹.
3 Likelihood of CTBNs Under a Mixture of CIMs
Complete data. In the following, we consider a CTBN over some over-complete² graph G. In practice, this graph may be derived from data as prior knowledge. In the absence of prior knowledge, we will choose the full graph. We want to represent its CIMs Ri(x, x′ | u), here for node i, as a mixture of CIMs of smaller support and write, using the power-set P(·) (the set of all possible subsets),
Ri(x, x′ | u) = ∑_{m∈P(parG(i))} πi(m) ri(x, x′ | um) ≡ E^π_i[ri(x, x′ | um)],   (3)
where um denotes the projection of the full parent-state u onto the subset m, i.e. f(um) = ∑_{u\um} f(u), and the expectation E^π_i[f(θm)] = ∑_{m∈P(parG(i))} πi(m) f(θm). The mixture weights are given by a distribution πi ∈ ∆i, with ∆i being the |P(parG(i))|-dimensional probability simplex. Corresponding edge probabilities of the graph can be computed via marginalization. The probability that an edge eij ∈ E exists is then
p(eij = 1) = ∑_{m∈P(parG(j))} πj(m) 1(i ∈ m),   (4)
with 1(·) being the indicator function. In order to arrive at a marginal score for the mixture we insert (3) into (1) and apply Jensen’s inequality Eπi [ln (r)] ≤ ln (Eπi [r]). This yields a lower-bound to the mixture likelihood
p(M, T | π, r) ≥ ∏_{i=1}^{N} ∏_{x, x′≠x, um} exp( E^π_i[ Mi(x, x′ | um) ln ri(x, x′ | um) − Ti(x | um) ri(x, x′ | um) ] ).
For details on this derivation, we refer to the supplementary material A.1. Note that Jensen's inequality, which only provides a poor approximation in general, improves with increasing concentration of probability mass and becomes exact for degenerate distributions. For the task of selecting a CTBN with a specific parent-set, it is useful to marginalize over the rate parameters r of the CTBNs. This allows for a direct estimation of the parent-set, without first estimating the rates.
¹ https://git.rwth-aachen.de/bcs/ssl-ctbn
² An over-complete graph has more edges than the underlying true graph, which generated the data.
This marginal likelihood can be computed under the assumption of independent gamma prior distributions ri(x, x′ | um) ∼ Gam(αi(x, x′ | um), βi(x′ | um)) over the rates. The marginal likelihood lower-bound can then be computed analytically. Under the assumption of independent Dirichlet priors πi ∼ Dir(πi | ci) with concentration parameters ci, we arrive at a lower-bound to the marginal log-posterior of the mixture weights π:
ln p(π | M, T , α, β) ≥ ∑_i Fi[M, T , π] + ln Z,   (5)
Fi[M, T , π] ≡ ∑_{m, um, x, x′≠x} { ln Γ(ᾱi(x, x′ | um)) − ᾱi(x, x′ | um) ln β̄i(x | um) } + ln Dir(πi | ci),
with the updated posterior parameters ᾱi(x, x′ | um) ≡ πi(m)Mi(x, x′ | um) + αi(x, x′ | um) and β̄i(x | um) ≡ πi(m)Ti(x | um) + βi(x | um). For details, we refer to the supplementary material A.2. The constant log-partition function ln Z can be ignored in the following analysis. Because (5) decomposes into a sum of node-wise terms, the maximum-a-posteriori estimate of the mixture weights of node i can be calculated as the solution of the following optimization problem:
π*_i = argmax_{πi∈∆i} Fi[M, T , π].   (6)
By construction, learning the mixture weights π of the CIMs corresponds to learning a distribution over parent-sets for each node. We have thus re-expressed the problem of structure learning as an estimation of π. Further, we note that for any degenerate π, (5) coincides with the exact structure score (2).
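To make (4)–(6) concrete, the following is a small sketch (our own, with hypothetical data structures) of the node-wise objective and of the edge probabilities obtained from a learned mixture; a real implementation would maximize this objective on the simplex, e.g. with an interior-point or projected-gradient routine.

```python
import math

def node_objective(pi, M, T, alpha=5.0, beta=10.0):
    """Node-wise mixture objective F_i of Eq. (5), up to the Dirichlet prior term.

    pi: dict mapping a candidate parent-set m (a frozenset) -> mixture weight pi_i(m)
    M:  dict mapping (m, x, x_prime, u_m) -> transition counts M_i(x, x' | u_m)
    T:  dict mapping (m, x, u_m) -> dwell times T_i(x | u_m)
    """
    score = 0.0
    for (m, x, x_prime, u_m), count in M.items():
        if x == x_prime:
            continue
        a_bar = pi[m] * count + alpha              # alpha-bar_i(x, x' | u_m)
        b_bar = pi[m] * T[(m, x, u_m)] + beta      # beta-bar_i(x | u_m)
        score += math.lgamma(a_bar) - a_bar * math.log(b_bar)
    return score

def edge_probability(pi_j, i):
    """Eq. (4): p(e_ij = 1) = sum over parent-sets m of pi_j(m) * 1(i in m)."""
    return sum(weight for m, weight in pi_j.items() if i in m)
```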
Incomplete data. In the case of incomplete noisy data D, the likelihood of the CTBN no longer decomposes into node-wise terms. Instead, the likelihood is that of the full amalgamated CTMC [16]. In order to tackle this problem, approximation methods through sampling [7, 6, 19] or variational approaches [4, 5] have been investigated. These, however, either fail to treat high-dimensional spaces because of sample sparsity, are unsatisfactory in terms of accuracy, or provide only an uncontrolled approximation. Our method is based on a variational approximation, namely the weak-coupling expansion [11]. Under this approximation, we recover by the same calculation an approximate likelihood of the same form as (1), where the sufficient statistics Mi(x, x′ | u) and Ti(x | u) are, however, replaced by their expectations Eq[Mi(x, x′ | u)] and Eq[Ti(x | u)] under a variational distribution q; for details we refer to the supplementary B.1. Subsequently, our optimization objective Fi[M, T , π] also becomes dependent on the variational distribution, and we write it as Fi[D, π, q]. In the following section, we will develop an Expectation-Maximization (EM) algorithm that iteratively estimates the expected sufficient statistics given the mixture weights and subsequently optimizes those mixture weights given the expected sufficient statistics.
4 Incomplete data: Expected Sufficient Statistics Under a Mixture of CIMs
Short review of the foundational method. In [11], the exact posterior over paths of a CTBN given incomplete data D is approximated by a path measure q(X[0,T]) of a variational time-inhomogeneous Markov process via a higher-order variational inference method. For a CTBN, this path measure is fully described by its node-wise marginals qi(x′, x, u; t) ≡ qi(Xi(t+h) = x′, Xi(t) = x, Ui(t) = u; t). From it, one can compute the marginal probability qi(x; t) of node i being in state x, the marginal probability of the parents qi(Ui(t) = u; t) ≡ q^u_i(t), and the marginal transition probability τi(x, x′, u; t) ≡ lim_{h→0} qi(x′, x, u; t)/h for x ≠ x′. The exact forms of the expected statistics were calculated to be
Eq[Ti(x | u)] ≡ ∫_0^T dt qi(x; t) q^u_i(t),    Eq[Mi(x, x′ | u)] ≡ ∫_0^T dt τi(x, x′, u; t).   (7)
In the following, we will use the short-hand Eq[M] and Eq[T ] to denote the sets of expected sufficient statistics. We note that the variational distribution q has the support of the full over-complete parent-set parG(i). Via marginalization of qi(x′, x, u; t), the marginal probability and the marginal transition probability can be shown to be connected via the relation
d/dt qi(x; t) = ∑_{x′≠x, u} [ τi(x′, x, u; t) − τi(x, x′, u; t) ].   (8)
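To illustrate how the integrals in (7) can be evaluated in practice, the following minimal numerical sketch (our own assumption, for fixed x, x' and u) approximates the expected dwell time and transition counts on a discrete time grid via the trapezoidal rule.

```python
import numpy as np

def expected_statistics(q_x, q_u, tau, t_grid):
    """Approximate Eq. (7) on a time grid.

    q_x:    shape (T,), marginal q_i(x; t) for a fixed state x
    q_u:    shape (T,), parent marginal q_i^u(t) for a fixed parent state u
    tau:    shape (T,), marginal transition probability tau_i(x, x', u; t)
    t_grid: shape (T,), the discretization of [0, T]
    """
    expected_T = np.trapz(q_x * q_u, t_grid)   # E_q[T_i(x | u)]
    expected_M = np.trapz(tau, t_grid)         # E_q[M_i(x, x' | u)]
    return expected_T, expected_M
```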
Algorithm 1 Stationary points of the Euler–Lagrange equation
1: Input: Initial trajectories qi(x; t), boundary conditions q(x; 0) and ρ(x; T), mixture weights π and data D.
2: repeat
3:   repeat
4:     for all i ∈ {1, . . . , N} do
5:       for all (yk, tk) ∈ D do
6:         Update ρi(t) by backward propagation from tk to tk−1 using (10), fulfilling the jump conditions (12).
7:       end for
8:       Update qi(t) by forward propagation using (10) given ρi(t).
9:     end for
10:   until Convergence
11:   Compute expected sufficient statistics using (7) and (11) from qi(t) and ρi(t).
12: until Convergence of F[D, π, q]
13: Output: Set of expected sufficient statistics Eq[M] and Eq[T ].
Application to our setting. As discussed in the last section, the objective function in the incomplete data case has the same form as (5)
Fi[D, π, q] ≡ ∑_{m, um, x, x′≠x} { ln Γ(ᾱ^q_i(x, x′ | um)) − ᾱ^q_i(x, x′ | um) ln β̄^q_i(x | um) } + ln Dir(πi | ci),   (9)
however, now with ᾱ^q_i(x, x′ | um) ≡ πi(m)Eq[Mi(x, x′ | um)] + αi(x, x′ | um) and β̄^q_i(x | um) ≡ πi(m)Eq[Ti(x | um)] + βi(x | um). In order to arrive at an approximation to the expected sufficient statistics in our case, we have to maximize (9) with respect to q, while fulfilling the constraint (8). The corresponding Lagrangian becomes
L[D, π, q, λ] = ∑_{i=1}^{N} Fi[D, π, q] − ∑_{x, x′≠x, u} ∫_0^T dt λi(x; t) { d/dt qi(x; t) − [ τi(x′, x, u; t) − τi(x, x′, u; t) ] },
with Lagrange multipliers λi(x; t). In order to derive Euler–Lagrange equations, we employ Stirling's approximation for the gamma function, Γ(z) = √(2π/z) (z/e)^z + O(1/z), which becomes exact asymptotically. In our case, Stirling's approximation is valid if ᾱ ≫ 1. We thereby assume that either enough data has been recorded or a sufficiently strong prior α has been chosen. Finally, we recover the approximate forward and backward equations of the mixture CTBNs as the stationary point of the Lagrangian via the Euler–Lagrange equations
d/dt ρi(t) = Ω̃^π_i(t) ρi(t),    d/dt qi(t) = qi(t) Ω^π_i(t),   (10)
with effective rate matrices
Ω^π_i(x, x′; t) ≡ E^u_i[ R̃^π_i(x, x′ | u) ] ρi(x′; t) / ρi(x; t),
Ω̃^π_i(x, x′; t) ≡ (1 − δx,x′) E^u_i[ R̃^π_i(x, x′ | u) ] + δx,x′ { E^u_i[ R^π_i(x, x′ | u) ] + Ψi(x; t) },
with ρi(x; t) ≡ exp(−λi(x; t)) and Ψi(x; t) as given in the supplementary material B.2. Further, we have introduced the shorthand E^u_i[f(u)] = ∑_u f(u) q^u_i(t) and defined the posterior expected rates
R^π_i(x, x′ | u) ≡ E^π_i[ ᾱ^q_i(x, x′ | um) / β̄^q_i(x | um) ],    R̃^π_i(x, x′ | u) ≡ ∏_m ( ᾱ^q_i(x, x′ | um) / β̄^q_i(x | um) )^{πi(m)},
Algorithm 2 Gradient-based Structure Learning
1: Input: Initial trajectories qi(x; t), boundary conditions qi(x; 0) and ρi(x; T), initial mixture weights π(0), data D and iterator n = 0
2: repeat
3:   Compute expected sufficient statistics Eq[M] and Eq[T ] given π(n) using Algorithm 1.
4:   for all i ∈ {1, . . . , N} do
5:     Maximize (6) with respect to πi, set the maximizer π(n+1)_i = π*_i and n → n + 1.
6:   end for
7: until Convergence of F[D, π, q]
8: Output: Maximum-a-posteriori mixture weights π(n)
which take the form of an arithmetic and a geometric mean, respectively. For the variational transition matrix we find the algebraic relationship
τi(x, x′, u; t) = qi(x; t) q^u_i(t) R̃^π_i(x, x′ | u) ρi(x′; t) / ρi(x; t).   (11)
Because the derivation is quite lengthy, we refer to supplementary B.2 for details. In order to incorporate noisy observations into the CTBN dynamics, we need to specify an observation model. In the following, we assume that the data likelihood factorizes, p(Y = yk | X(tk) = s) = ∏_i pi(Yi = y^k_i | Xi(tk) = xi), allowing us to condition on the data by enforcing the jump conditions
lim_{t→tk−} ρi(x; t) = lim_{t→tk+} pi(Yi = y^k_i | Xi(tk) = x) ρi(x; t).   (12)
The converged solutions of the ODE system can then be used to compute the sufficient statistics via (7). For a full derivation, we refer to the supplementary material B.2.
We note that in the limiting case of a degenerate mixture distribution π, this set of equations reduces to the marginal dynamics for CTBNs proposed in [11]. The set of ODEs can be solved iteratively in a forward-backward fixed-point procedure, in the same manner as in previous works [17, 4] (see Algorithm 1).
Exhaustive structure search. As we are now able to calculate expected sufficient statistics given mixture weights π, we can design an EM algorithm for structure learning. For this, we iteratively optimize π given the expected sufficient statistics, which we subsequently re-calculate. The EM algorithm is summarized in Algorithm 2, and a schematic sketch of the resulting loop is given below. In contrast to the exact EM procedure [16], we preserve structure modularity. We can thus optimize the parent-set of each node independently. This already provides a huge boost in performance, as in our case the search space scales exponentially in the number of components, instead of super-exponentially. In the paragraph "Greedy structure search", we will demonstrate how to further reduce complexity to a polynomial scaling, while preserving most prediction accuracy.
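The following schematic sketch (our own, in pseudocode-style Python) shows the structure of the EM loop of Algorithm 2; the callables e_step, m_step and objective are hypothetical stand-ins for Algorithm 1, the per-node maximization of Eq. (6), and the evaluation of F[D, π, q].

```python
def structure_em(data, pi_init, e_step, m_step, objective, max_iter=100, tol=1e-4):
    """Skeleton of Algorithm 2: alternate E-steps (expected sufficient statistics)
    and node-wise M-steps (mixture-weight optimization) until F converges."""
    pi = dict(pi_init)                      # node -> distribution over parent-sets
    prev = -float("inf")
    for _ in range(max_iter):
        exp_M, exp_T = e_step(data, pi)                  # E-step (Algorithm 1)
        pi = {i: m_step(i, exp_M, exp_T) for i in pi}    # M-step, Eq. (6) per node
        current = objective(pi, exp_M, exp_T)
        if abs(current - prev) < tol:                    # convergence of F[D, pi, q]
            break
        prev = current
    return pi
```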
Restricted exhaustive search. In many cases, especially for applications in molecular biology, comprehensive databases³ of putative interactions are available and can be used to construct over-complete yet not fully connected prior networks G0 of reported gene and protein interactions. In this case, we can restrict the search space by excluding possible non-reported parents for every node i, parG(i) = parG0(i), allowing for structure learning of large networks.
Greedy structure search. Although we have derived a gradient-based scheme for exhaustive search, the number of possible mixture components still equals the number of all possible parent-sets. However, in many applications, it is reasonable to assume the number of parents to be limited, which corresponds to a sparsity assumption. For this reason, greedy schemes for structure learning have been proposed in previous works [16]. Here, candidate parent-sets were limited to have at most K parents, in which case the number of candidate graphs only grows polynomially in the number of nodes. In order to incorporate a similar scheme in our method, we have to perform an additional approximation to the set of equations (10). The problem lies in the expectation step (Algorithm 1), as expectation is performed with respect to the full over-complete graph. In order to calculate expectations of the geometric mean E^u_i[R̃^π_i(x, x′ | u)], we have to consider the over-complete set of parenting nodes q^u_i(t) for each node i. However, for the calculation of the arithmetic mean E^u_i[R^π_i(x, x′ | u)], only parent-sets restricted to the considered sub-graphs have to be considered, due to linearity. For this reason, we approximate the geometric mean by the arithmetic mean, R̃^π_i ≈ R^π_i, corresponding to the first-order expansion E^π_i[ln(x)] = ln(E^π_i[x]) + O(Var[x]), which, as before, becomes more valid for more concentrated πi and is exact if πi is degenerate.
³ e.g. https://string-db.org/ or https://www.ebi.ac.uk/intact/
5 Experiments
We demonstrate the effectiveness of our method on synthetic and two real-world data sets. For all experiments, we consider a fixed set of hyper-parameters. We set the Dirichlet concentration parameter ci = 0.9 for all i ∈ {1, . . . , N}. Further, we assume a prior for the generators which is uninformative on the structure: αi(x, x′ | u) = 5 and βi(x | u) = 10, for all x, x′ ∈ Xi, u ∈ Ui. For the optimization step in Algorithm 2, we use the standard Matlab implementation of the interior-point method with 100 random restarts. This is feasible, as the Jacobian of (9) can be calculated analytically.
5.1 Synthetic Data
In this experiment, we consider synthetic data generated by random graphs with a flat degree distribution, truncated at degree two, i.e. each node has at most two parents. We restrict the state-space of each node to be binary X = {−1, 1}. The generators of each node are chosen such that they undergo Glauber dynamics [9], Ri(x, x̄ | u) = 1/2 + 1/2 tanh(γx ∑_{j∈parG(i)} uj), which is a popular model for benchmarking, also in the CTBN literature [4]. The parameter γ denotes the coupling strength of node j to i. With increasing γ the dynamics of the network become increasingly deterministic, converging to a logical model for γ → ∞. In order to avoid violating the weak-coupling assumption [11] underlying our method, we choose γ = 0.6. We generated a varying number of trajectories, each containing 10 transitions. In order to have a fair evaluation, we generate data from thirty random graphs among five nodes, as described above. By computing the edge probabilities p(eij = 1) via (4), we can evaluate the performance of our method as an edge classifier by computing the receiver-operating characteristic curve (ROC) and the precision-recall curve (PR) and their areas under the curve (AUROC and AUPR). For an unbiased classifier, both quantities have to approach 1 for increasing amounts of data.
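To make the data-generating process explicit, the following small sketch (our own, not the authors' code) computes the Glauber-dynamics flip rate and samples a single node's transitions in Gillespie fashion while its parents are held fixed; the helper names and that simplification are assumptions.

```python
import numpy as np

def glauber_rate(x, parents_state, gamma=0.6):
    """Flip rate of a binary node (states in {-1, +1}) given its parents' states,
    following the rate expression used for the synthetic benchmark above."""
    return 0.5 + 0.5 * np.tanh(gamma * x * np.sum(parents_state))

def sample_transitions(x0, parents_state, n_transitions=10, seed=0):
    """Gillespie-style sampling of one node's trajectory with fixed parent states."""
    rng = np.random.default_rng(seed)
    t, x, trajectory = 0.0, x0, []
    for _ in range(n_transitions):
        rate = glauber_rate(x, parents_state)    # total exit rate of the current state
        t += rng.exponential(1.0 / rate)         # exponential waiting time until the flip
        x = -x                                   # binary node: the only transition is a flip
        trajectory.append((t, x))
    return trajectory
```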
Complete data. In this experiment, we investigate the viability of using the marginal mixture likelihood lower-bound as in (5), given the complete data in the form of the sufficient statistics M and T. In Figure 1 we compare the AUROCs a) and AUPRs b) achieved in an edge classification task using exhaustive scoring of the exact marginal likelihood (2) as in [15] (blue) and gradient ascent in π of the mixture marginal likelihood lower-bound (red-dashed) as in (5). In Figure 1 c) we show via numerical integration that the marginal mixture likelihood lower-bound approaches the exact one (2) for decreasing entropy of π and increasing number of trajectories. Small negative deviations are due to the limited accuracy of numerical integration. Additional synthetic experiments investigating the effect of different concentration parameters c can be found in the supplementary C.1.
Incomplete data. Next, we test our method for network inference from incomplete data. Noisy incomplete observations were generated by measuring the state at Ns = 10 uniformly random time-points and adding Gaussian noise with zero mean and variance 0.2. Because the expectation step in Algorithm 1 is only approximate [11], we do not expect a perfect classifier in this experiment. We compare the exhaustive search with a greedy search allowing K = 4 parents, such that both methods have the same search-space. We initialized both methods with the heuristic π(0)_i(m) = 1 if m = parG(i) and 0 otherwise. In Figure 2 a) and b), it can be seen that both methods approach AUROCs and AUPRs close to one for increasing amounts of data. However, due to the additional approximation in the greedy algorithm, it performs slightly worse. In Figure 2 c) and d) we plot the corresponding ROC and PR curves for 40 trajectories.
Scalability. We compare the scalability of our gradient-based greedy structure search with a greedy hill-climbing implementation of structure search (K = 2) with variational inference as in [11] (we limited this search to one sweep over families). We fixed all parameters as before and the number of trajectories to 40. Results are displayed in Figure 3.
Dependence on initial values. We investigate the performance of our method with respect to different initial values. For this, we draw the initial values of the mixture components uniformly at random and then project them onto the probability simplex via normalization, π̃(0)_i(m) ∼ U(0, 1) and π(0)_i(m) = π̃(0)_i(m) / ∑_n π̃(0)_i(n). We fixed all parameters as before and the number of trajectories to 40. In Figure 2, we display the ROC e) and PR f) curves for our heuristic and for random initial values. We find that the heuristic performs almost consistently better.
5.2 Real-world data
British household dataset. To show scalability in a realistic setting, we applied our method to the British Household Panel Survey (ESRC Research Centre on Micro-social Change, 2003). This dataset has been collected yearly from 1991 to 2002, thus consisting of 11 time-points. Each of the 1535 participants was questioned about several facts of their life. We picked 15 of those that we deemed interpretable; some of them, "health status", "job status" and "health issues", have non-binary state-spaces. Because the participants had the option of not answering a question, and changes in their lives are unlikely to happen during the questionnaire, this dataset is strongly incomplete. Out of the 1535 trajectories, we picked
600 at random and inferred the network presented in Figure 4. In supplementary C.2 we investigate the stability of this result. We performed inference with our greedy algorithm (K = 2). This dataset has been considered in [16], where a network among 4 variables was inferred. Inferring a large network at once is important, as latent variables can create spurious edges in the network [2].
IRMA gene-regulatory network. Finally, we investigate performance on realistic data. For this, we apply our method to the In vivo Reverse-engineering and Modeling Assessment (IRMA) network [3]. It is, to the best of our knowledge, the only molecular biological network with a ground-truth. This gene regulatory network has been implemented on cultures of yeast as a benchmark for network reconstruction algorithms. Special care has been taken to isolate this network from crosstalk with other cellular components. The authors of [3] provide time course data from two perturbation experiments, referred to as “switch on” and “switch off”, and attempted reconstruction using different methods. In Table 1, we compare to other methods tested in [18]. For more details on this experiment and on the other methods, we refer to the supplementary C.3.
6 Conclusion
We presented a novel scalable gradient-based approach for structure learning for CTBNs from complete and incomplete data, and demonstrated its usefulness on synthetic and real-world data. In the future we plan to apply our algorithm to new bio-molecular datasets. Further, we believe that the mixture likelihood may also be applicable to tasks different from structure learning.
Acknowledgements
We thank the anonymous reviewers for helpful comments on the previous version of this manuscript. Dominik Linzner and Michael Schmidt are funded by the European Union’s Horizon 2020 research and innovation programme (iPC–Pediatric Cure, No. 826121). Heinz Koeppl acknowledges support by the European Research Council (ERC) within the CONSYN project, No. 773196, and by the Hessian research priority programme LOEWE within the project CompuGene. | 1. What is the main contribution of the paper regarding continuous time Bayesian networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to handle larger classes of conditional intensity matrices?
3. How does the reviewer assess the clarity and quality of the manuscript, and what specific areas of confusion do they identify?
4. Are there any minor errors or omissions in the paper that the reviewer notes?
5. How does the proposed method compare to other approaches in terms of accuracy and computational efficiency? | Review | Review
Summary: Within the manuscript, the authors extend the continuous time Bayesian Networks by incorporating a mixture prior over the conditional intensity matrices, thereby allowing for a larger class compared to a gamma prior usually employed over these. My main concerns are with clarity / quality as the manuscript is quite densely written with quite some material has either been omitted or shifted to the appendix. For a non-expert in continuous time bayesian networks, it is quite hard to read. Additionally, there are quite a few minor mistakes (see below) that make understanding of the manuscript harder. As it stands, Originality: The authors combine variational inference method from Linzner et al [11], with the new prior over the dependency structure (mixture). By replacing sufficient statistics with expected (according to the variational distribution) sufficient statistics the authors derive a gradient based scheme according to the approximation to the (marginal likelihood). Quality/Clarity: As said, my main concern is about clarity and to some degree therefore also quality. My main confusion arises from section 4 (partly also 3), as the overall scheme is opaque to me. This is mainly due to the fact that part of the derivation is shifted to the appendix. As a result, it is unclear to me, how the expected moments can be computed from \rho_i, q_i. It is said, that this can be done from 7, but there I need \tau_i, how do I get this, this is not explained. Also, the final solution to (9) in (10,11) does not depend on the observations Y anymore, how is this possible? Some minor things contributing to my confusion: - Line 75: "a posteriori estimate": This is not a posterior over structures, but a maximum marginal likelihood estimate. - Eq (5), line 114: I was wondering about the 'normalization constant'. First, I think, it should be mentioned, that it is constant wrt to \pi. Second, Z is not necessarily the normalization constant of the true posterior but the approximation to the normalization constant that one would obtain, if the lower bound of line 105 would be used as likelihood, correct? - Algorithm 1: is only mentioned two pages later and the references to equations don't make sense. Also this algorithm is not explained at all. - Line 127: ref [5] is actually EP not VI - Line 149: the shorthand is used later not there. - Line 161: psi (x,t): I guess this should depend on Y. As stated the overall inference scheme does not depend on the observations Y, that does not make sense. - line 168: why should constraint ensure that incorporate noisy observations. The whole section is opaque to me. - Figure 1: subfigure labeling is wrong - Experiment british household: the authors report ROC scores, but do not mention the classification problem they are trying to solve, what was the ground truth? Also, it seems odd to me, that childcare is not linked to children. Significance: The proposed method does improve the scaling of inferring the dependency structure (reported from 4 nodes to 11). However, other approaches as in were discarded as not being sufficiently accurate or being too data hungry. The quality of the likelihood approximation for example could be evaluated on a small toy-example and compared against sampling based approaches, or [11]. |
NIPS | Title
Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
Abstract
Existing long-tailed recognition methods, aiming to train class-balanced models from long-tailed data, generally assume the models would be evaluated on the uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being either long-tailed or even inversely long-tailed), which may lead existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is agnostic and not necessarily uniform. In addition to the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single and stationary long-tailed dataset to separately handle different class distributions; (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the learned multiple experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. The source code is available at https://github.com/Vanint/SADE-AgnosticLT.
1 Introduction
Real-world visual recognition datasets typically exhibit a long-tailed distribution, where a few classes contain numerous samples (called head classes), but the others are associated with only a few instances (called tail classes) [24, 33]. Due to the class imbalance, the trained model is easily biased towards head classes and performs poorly on tail classes [2, 58]. To tackle this issue, numerous studies have explored long-tailed recognition for learning well-performing models from imbalanced data [20, 56].
Most existing long-tailed studies [3, 9, 10, 48, 52] assume the test class distribution is uniform, i.e., each class has an equal amount of test data. Therefore, they develop various techniques, e.g., class resampling [13, 18, 25, 55], cost-sensitive learning [11, 36, 41, 47] or ensemble learning [2, 13, 27, 53], to re-balance the model performance on different classes for fitting the uniform class distribution. However, this assumption does not always hold in real applications, where actual test data may follow any kind of class distribution, being either uniform, long-tailed, or even inversely long-tailed to the training data (cf. Figure 1(a)). For example, one may train a recognition model for autonomous cars based on the training data collected from city areas, where pedestrians are majority classes and stone obstacles are minority classes. However, when the model is deployed to mountain areas, the pedestrians become the minority while the stones become the majority. In this case, the test class distribution is inverse to the training one, and existing methods may perform poorly.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
To address the issue of varying class distributions, as the first research attempt, LADE [17] assumes the test class distribution to be known and uses the knowledge to post-adjust model predictions. However, the actual test class distribution is usually unknown a priori, making LADE not applicable in practice. Therefore, we study a more realistic yet challenging problem, namely test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test distribution is agnostic. To tackle this problem, motivated by the idea of "divide and conquer", we propose to learn multiple experts with diverse skills that excel at handling different class distributions (cf. Figure 1(b)). As long as these skill-diverse experts can be aggregated suitably at test time, the multi-expert model would manage to handle the unknown test class distribution. Following this idea, we develop a novel approach, namely Self-supervised Aggregation of Diverse Experts (SADE).
The first challenge for SADE is how to learn multiple diverse experts from a single and stationary long-tailed training dataset. To handle this challenge, we empirically evaluate existing long-tailed methods in this task, and find that the models trained by existing methods have a simulation correlation between the learned class distribution and the training loss function. That is, the models learned by various losses are skilled in handling class distributions with different skewness. For example, the model trained with the conventional softmax loss simulates the long-tailed training class distribution, while the models obtained from existing long-tailed methods are good at the uniform class distribution. Inspired by this finding, SADE presents a simple but effective skill-diverse expert learning strategy to generate experts with different distribution preferences from a single long-tailed training distribution. Here, various experts are trained with different expertise-guided objective functions to deal with different class distributions, respectively. As a result, the learned experts are more diverse than previous multi-expert long-tailed methods [49, 63], leading to better ensemble performance, and in aggregate simulate a wide spectrum of possible class distributions.
The other challenge is how to aggregate these skill-diverse experts for handling test-agnostic class distributions based on only unlabeled test data. To tackle this challenge, we empirically investigate the property of different experts, and observe that there is a positive correlation between expertise and prediction stability, i.e., stronger experts have higher prediction consistency between different perturbed views of samples from their favorable classes. Motivated by this finding, we develop a novel self-supervised strategy, namely prediction stability maximization, to adaptively aggregate experts based on only unlabeled test data. We theoretically show that maximizing the prediction stability enables SADE to learn an aggregation weight that maximizes the mutual information between the predicted label distribution and the true class distribution. In this way, the resulting model is able to simulate unknown test class distributions.
We empirically verify the superiority of SADE on both vanilla and test-agnostic long-tailed recognition. Specifically, SADE achieves promising performance on vanilla long-tailed recognition under all benchmark datasets. For instance, SADE achieves 58.8% accuracy on ImageNet-LT with more than 2% accuracy gain over previous state-of-the-art ensemble long-tailed methods, i.e., RIDE [49] and ACE [2]. More importantly, SADE is the first long-tailed approach that is able to handle various test-agnostic class distributions without knowing the true class distribution of test data in advance. Note that SADE even outperforms LADE [17] that uses knowledge of the test class distribution.
Compared to previous long-tailed methods (e.g., LADE [17] and RIDE [49]), our method offers the following advantages: (i) SADE does not assume the test class distribution to be known, and provides the first practical approach to handling test-agnostic long-tailed recognition; (ii) SADE develops a simple diversity-promoting strategy to learn skill-diverse experts from a single and stationary long-tailed dataset; (iii) SADE presents a novel self-supervised strategy to aggregate skill-diverse experts at test time, by maximizing prediction consistency between unlabeled test samples’ perturbed views; (iv) the presented self-supervised strategy has a provable ability to simulate test-agnostic class distributions, which opens the opportunity for tackling unknown class distribution shifts at test time.
2 Related Work
Long-tailed recognition Existing long-tailed recognition methods, related to our study, can be categorized into three types: class re-balancing, logit adjustment and ensemble learning. Specifically, class re-balancing resorts to re-sampling [4, 13, 18, 25] or cost-sensitive learning [3, 10, 16, 61] to balance different classes during model training. Logit adjustment [17, 33, 37, 43] adjusts models’ output logits via the label frequencies of training data at inference time, for obtaining a large relative margin between head and tail classes. Ensemble-based methods [2, 13, 53, 63], e.g., RIDE [49], are based on multiple experts, which seek to capture heterogeneous knowledge, followed by ensemble aggregation. More discussions on the difference between our method and RIDE [49] are provided in Appendix D.3. Regarding test-agnostic long-tailed recognition, LADE [17] assumes the test class distribution is available and uses it to post-adjust model predictions. However, the true test class distribution is usually unknown a priori, making LADE inapplicable. In contrast, our method does not rely on the true test distribution for handling this problem, but presents a novel self-supervised strategy to aggregate skill-diverse experts at test time for test-agnostic class distributions. Moreover, some ensemble-based long-tailed methods [39] aggregate experts based on a labeled uniform validation set. However, as the test class distribution could be different from the validation one, simply aggregating experts on the validation set is unable to handle test-agnostic long-tailed recognition.
Test-time training Test-time training [23, 26, 30, 40, 46] is a transductive learning paradigm for handling distribution shifts [28, 32, 34, 38, 45, 59] between training and test data, and has been applied with success to out-of-domain generalization [19, 35] and dynamic scene deblurring [6]. In this study, we explore this paradigm to handle test-agnostic long-tailed recognition, where the issue of class distribution shifts is the main challenge. However, most existing test-time training methods seek to handle covariate distribution shifts instead of class distribution shifts, so simply leveraging them cannot resolve test-agnostic long-tailed recognition, as shown in our experiment (cf. Table 9).
3 Problem Formulation
Long-tailed recognition aims to learn a well-performing classification model from a training dataset with a long-tailed class distribution. Let Ds = {(xi, yi)}_{i=1}^{ns} denote the long-tailed training set, where yi is the class label of the sample xi. The total number of training data over C classes is ns = ∑_{k=1}^{C} nk, where nk denotes the number of samples in class k. Without loss of generality, we follow a common assumption [17, 25] that the classes are sorted by cardinality in decreasing order (i.e., if i1 < i2, then ni1 ≥ ni2), and n1 ≫ nC. The imbalance ratio is defined as max(nk)/min(nk) = n1/nC. The test data Dt = {(xj, yj)}_{j=1}^{nt} is defined in a similar way. Most existing long-tailed recognition methods assume the test class distribution is uniform (i.e., pt(y) = 1/C), and seek to train models from the long-tailed training distribution ps(y) to perform well on the uniform test distribution. However, such an assumption does not always hold in practice. The actual test class distribution in real-world applications may also be long-tailed (i.e., pt(y) = ps(y)), or even inversely long-tailed to the training data (i.e., pt(y) = inv(ps(y))). Here, inv(·) indicates that the order of the long tail on classes is flipped. As a result, the models learned by existing methods may fail when the actual test class distribution is different from the assumed one. To address this, we propose to study a more practical yet challenging long-tailed problem, i.e., Test-agnostic Long-tailed Recognition. This task aims to learn a recognition model from long-tailed training data, where the resulting model would be evaluated on multiple test sets that follow different class distributions. This task is challenging due to the integration of two challenges: (1) the severe class imbalance in the training data makes it difficult to train models; (2) unknown class distribution shifts between training and test data (i.e., pt(y) ≠ ps(y)) make models hard to generalize.
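For concreteness, the following small sketch (our own assumption, not from the paper) constructs forward long-tailed, uniform, and inversely long-tailed class priors over C classes from an imbalance ratio, which is useful when building the evaluation protocols described above; the exponential-decay parameterization is an assumption.

```python
import numpy as np

def class_priors(num_classes, imbalance_ratio, kind="forward"):
    """Class prior p(y) with max/min frequency ratio equal to imbalance_ratio."""
    decay = np.exp(np.linspace(0.0, -np.log(imbalance_ratio), num_classes))
    if kind == "uniform":
        prior = np.ones(num_classes)
    elif kind == "forward":        # same ordering as the training distribution
        prior = decay
    elif kind == "inverse":        # order of the long tail flipped, p_t(y) = inv(p_s(y))
        prior = decay[::-1]
    else:
        raise ValueError(kind)
    return prior / prior.sum()

print(class_priors(5, imbalance_ratio=100, kind="inverse"))
```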
4 Method
To tackle the above problem, inspired by the idea of "divide and conquer", we propose to learn multiple skill-diverse experts that excel at handling different class distributions. By reasonably fusing these experts at test time, the multi-expert model would manage to handle unknown class distribution shifts and resolve test-agnostic long-tailed recognition. Following this idea, we develop a novel Self-supervised Aggregation of Diverse Experts (SADE) approach. Specifically, SADE consists of two innovative strategies: (1) learning skill-diverse experts from a single long-tailed training dataset; (2) test-time aggregating experts with self-supervision to handle test-agnostic class distributions.
4.1 Skill-diverse Expert Learning
As shown in Figure 2, SADE builds a three-expert model that comprises two components: (1) an expert-shared backbone fθ; (2) independent expert networks E1, E2 and E3. When training the model, the key challenge is how to learn skill-diverse experts from a single and stationary long-tailed training dataset. Existing ensemble-based long-tailed methods [13, 49] seek to train experts for the uniform test class distribution, and hence the trained experts are not differentiated sufficiently for handling various class distributions (refer to Table 6 for an example). To tackle this challenge, we first empirically investigate existing long-tailed methods in this task. From Table 1, we find that there is a simulation correlation between the learned class distribution and the training loss function. That is, the models learned by different losses are good at dealing with class distributions with different skewness. For instance, the model trained with the softmax loss is good at the long-tailed distribution, while the models obtained from long-tailed methods are skilled in the uniform distribution.
Motivated by this finding, we develop a simple skill-diverse expert learning strategy to generate experts with different distribution preferences. To be specific, the forward expert E1 seeks to be good at the long-tailed class distribution and performs well on many-shot classes. The uniform expert E2 strives to be skilled in the uniform distribution. The backward expert E3 aims at the inversely long-tailed distribution and performs well on few-shot classes. Here, the forward and backward experts are necessary since they span a wide spectrum of possible class distributions, while the uniform expert ensures retaining high accuracy on the uniform distribution. To this end, we use three different expertise-guided losses to train the three experts, respectively.
The forward expert E1 We use the softmax cross-entropy loss to train this expert, so that it directly simulates the original long-tailed training class distribution:
Lce = (1/ns) ∑_{xi∈Ds} −yi log σ(v1(xi)),   (1)
where v1(·) is the output logits of the forward expert E1, and σ(·) is the softmax function.
The uniform expert E2 We aim to train this expert to simulate the uniform class distribution. Inspired by the effectiveness of logit adjusted losses for long-tailed recognition [33], we resort to the balanced softmax loss [21]. Specifically, let $\hat{y}^k = \frac{\exp(v^k)}{\sum_{c=1}^{C} \exp(v^c)}$ be the prediction probability. The balanced softmax adjusts the prediction probability by compensating for the long-tailed class distribution with the prior of training label frequencies: $\hat{y}^k = \frac{\pi_k \exp(v^k)}{\sum_{c=1}^{C} \pi_c \exp(v^c)} = \frac{\exp(v^k + \log \pi_k)}{\sum_{c=1}^{C} \exp(v^c + \log \pi_c)}$, where $\pi_k = n_k/n$ denotes the training label frequency of class $k$. Then, given $v_2(\cdot)$ as the output logits of the expert E2, the balanced softmax loss for the expert E2 is defined as:
$\mathcal{L}_{\mathrm{bal}} = \frac{1}{n_s} \sum_{x_i \in D_s} -y_i \log \sigma(v_2(x_i) + \log \pi).$  (2)
Intuitively, by adjusting logits to compensate for the long-tailed distribution with the prior π, this loss enables E2 to output class-balanced predictions that simulate the uniform distribution.
The backward expert E3 We seek to train this expert to simulate the inversely long-tailed class distribution. To this end, we propose a new inverse softmax loss, based on the same rationale of logit adjusted losses [21, 33]. Specifically, we adjust the prediction probability by: $\hat{y}^k = \frac{\exp(v^k + \log \pi_k - \log \bar{\pi}_k)}{\sum_{c=1}^{C} \exp(v^c + \log \pi_c - \log \bar{\pi}_c)}$, where the inverse training prior $\bar{\pi}$ is obtained by inverting the order of training label frequencies $\pi$. Then, the new inverse softmax loss for the expert E3 is defined as:
$\mathcal{L}_{\mathrm{inv}} = \frac{1}{n_s} \sum_{x_i \in D_s} -y_i \log \sigma(v_3(x_i) + \log \pi - \lambda \log \bar{\pi}),$  (3)
where v3(·) denotes the output logits of E3 and λ is a hyper-parameter. Intuitively, this loss adjusts logits to compensate for the long-tailed distribution with π, and further applies reverse adjustment with π̄. This enables E3 to simulate the inversely long-tailed distribution (cf. Table 6 for verification).
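A minimal sketch of the three expertise-guided losses (Eqs. (1)-(3)) is given below, assuming PyTorch and integer class-index targets; the helper name and the way the priors are passed in are illustrative choices, not the authors' code.

import torch
import torch.nn.functional as F

def expertise_adjusted_ce(logits, targets, log_prior=0.0, log_inv_prior=None, lam=0.0):
    """Cross-entropy on logit-adjusted scores.

    E1 (Eq. 1): log_prior = 0, lam = 0                                    -> plain softmax CE.
    E2 (Eq. 2): log_prior = log(pi), lam = 0                              -> balanced softmax.
    E3 (Eq. 3): log_prior = log(pi), log_inv_prior = log(pi_bar), lam > 0 -> inverse softmax.
    """
    adjusted = logits + log_prior
    if log_inv_prior is not None:
        adjusted = adjusted - lam * log_inv_prior
    return F.cross_entropy(adjusted, targets)

# Example priors from hypothetical per-class counts.
counts = torch.tensor([500.0, 200.0, 80.0, 30.0, 10.0])
pi = counts / counts.sum()
pi_bar = pi.flip(dims=[0])          # inverse prior: label frequencies in flipped order
log_pi, log_pi_bar = pi.log(), pi_bar.log()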
4.2 Test-time Self-supervised Aggregation
Based on the skill-diverse learning strategy, the three experts in SADE are skilled in different class distributions. The remaining challenge is how to fuse them to deal with unknown test class distributions. A basic principle for expert aggregation is that the experts should play a bigger role in situations where they have expertise. Nevertheless, how to detect strong experts for unknown test class distribution remains unknown. Our key insight is that strong experts should be more stable in predicting the samples from their skilled classes, even though these samples are perturbed.
Empirical observation To verify this hypothesis, we estimate the prediction stability of experts by comparing the cosine similarity between their predictions for a sample’s two augmented views. Here, the data views are generated by the data augmentation techniques in MoCo v2 [5]. From Table 2, we find that there is a positive correlation between expertise and prediction stability, i.e., stronger experts have higher prediction similarity between different views of samples from their favorable classes. Following this finding, we propose to explore the relative prediction stability to detect strong experts and weight experts for the unknown test class distribution. Consequently, we develop a novel self-supervised strategy, namely prediction stability maximization.
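For reference, the per-expert prediction stability used in this check could be estimated roughly as below (a sketch; it assumes the expert's logits for the two augmented views of a batch are already available).

import torch
import torch.nn.functional as F

@torch.no_grad()
def prediction_stability(logits_view1, logits_view2):
    """Mean cosine similarity between an expert's softmax predictions on two views of the same samples."""
    p1 = F.softmax(logits_view1, dim=1)
    p2 = F.softmax(logits_view2, dim=1)
    return F.cosine_similarity(p1, p2, dim=1).mean().item()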
Prediction stability maximization This strategy learns aggregation weights for experts (with frozen parameters) by maximizing model prediction stability for unlabeled test samples. As shown in Figure 3, the method comprises three major components as follows.
Data view generation For a given sample x, we conduct two stochastic data augmentations to generate the sample’s two views, i.e., x1 and x2. Here, we use the same augmentation techniques as the advanced contrastive learning method, i.e., MoCo v2 [5], which has been shown effective in self-supervised learning.
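A MoCo v2-style view-generation pipeline can be assembled with torchvision roughly as follows; the jitter, grayscale, and blur parameters below are the commonly used MoCo v2 defaults and may differ from the exact setup used here.

import random
from PIL import ImageFilter
from torchvision import transforms

class GaussianBlur:
    """Gaussian blur with a randomly sampled sigma, as in MoCo v2-style augmentation."""
    def __init__(self, sigma=(0.1, 2.0)):
        self.sigma = sigma
    def __call__(self, img):
        s = random.uniform(self.sigma[0], self.sigma[1])
        return img.filter(ImageFilter.GaussianBlur(radius=s))

view_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([GaussianBlur()], p=0.5),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Two stochastic views of the same image: x1, x2 = view_augment(img), view_augment(img)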
Learnable aggregation weight Given the output logits of three experts (v1, v2, v3) ∈ R3×C , we aggregate experts with a learnable aggregation weight w = [w1, w2, w3] ∈ R3 and obtain the final softmax prediction by ŷ = σ(w1·v1 + w2·v2 + w3·v3), where w is normalized before aggregation, i.e., w1 + w2 + w3=1.
Objective function Given the view predictions of unlabeled test data, we maximize the prediction stability based on the cosine similarity between the view predictions:
$\max_{w} S, \quad \text{where } S = \frac{1}{n_t} \sum_{x \in D_t} \hat{y}_1 \cdot \hat{y}_2.$  (4)
Here, ŷ1 and ŷ2 are normalized by the softmax function. In test-time training, only the aggregation weight w is updated. Since stronger experts have higher prediction similarity for their skilled classes, maximizing the prediction stability S would learn higher weights for stronger experts regarding the unknown test class distribution. Moreover, the self-supervised aggregation strategy can be conducted in an online manner for streaming test data. The pseudo-code of SADE is provided in Appendix B.
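Putting these pieces together, the test-time weight learning could be sketched as below. It assumes the multi-expert model returns a list of per-expert logits (as in the earlier sketch); the optimizer, the use of a softmax to keep the weights normalized, and the variable names are assumptions rather than the authors' exact implementation.

import torch
import torch.nn.functional as F

def learn_aggregation_weights(model, unlabeled_loader, epochs=5, lr=0.1):
    """Learn expert aggregation weights w by maximizing prediction stability S (Eq. (4))."""
    w = torch.nn.Parameter(torch.zeros(3))              # one weight per expert
    optimizer = torch.optim.SGD([w], lr=lr, momentum=0.9)

    model.eval()                                         # expert parameters stay frozen
    for _ in range(epochs):
        for x1, x2 in unlabeled_loader:                  # two augmented views per test sample
            with torch.no_grad():
                logits1 = model(x1)                      # list of three expert logit tensors
                logits2 = model(x2)
            weights = torch.softmax(w, dim=0)            # keep the weights normalized
            y1 = F.softmax(sum(wi * v for wi, v in zip(weights, logits1)), dim=1)
            y2 = F.softmax(sum(wi * v for wi, v in zip(weights, logits2)), dim=1)
            stability = (y1 * y2).sum(dim=1).mean()      # S in Eq. (4)
            loss = -stability                            # gradient ascent on S
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return torch.softmax(w, dim=0).detach()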
Theoretical Analysis We then theoretically analyze the prediction stability maximization strategy to conceptually understand why it works. To this end, we first define the random variables of predictions and labels as $\hat{Y} \sim p(\hat{y})$ and $Y \sim p_t(y)$. We have the following result: Theorem 1. The prediction stability S is positively proportional to the mutual information between the predicted label distribution and the test class distribution $I(\hat{Y}; Y)$, and negatively proportional to the prediction entropy $H(\hat{Y})$:
$S \propto I(\hat{Y}; Y) - H(\hat{Y}).$
Please refer to Appendix A for proofs. According to Theorem 1, maximizing the prediction stability S enables SADE to learn an aggregation weight that maximizes the mutual information between the predicted label distribution p(ŷ) and the test class distribution pt(y), as well as minimizing the prediction entropy. Since minimizing entropy helps to improve the confidence of the classifier output [12], the aggregation weight is learned to simulate the test class distribution pt(y) and increase the prediction confidence. This property intuitively explains why our method has the potential to tackle the challenging task of test-agnostic long-tailed recognition at test time.
5 Experiments
In this section, we first evaluate the superiority of SADE on both vanilla and test-agnostic long-tailed recognition. We then verify the effectiveness of SADE in terms of its two strategies, i.e., skill-diverse expert learning and test-time self-supervised aggregation. More ablation studies are reported in appendices. Here, we begin with the experimental settings.
5.1 Experimental Setups
Datasets We use four benchmark datasets (i.e., ImageNet-LT [31], CIFAR100-LT [3], Places-LT [31], and iNaturalist 2018 [44]) to simulate real-world long-tailed class distributions. Their data statistics and imbalance ratios are summarized in Appendix C.1. The imbalance ratio is defined as $\max_j n_j / \min_j n_j$, where $n_j$ denotes the number of samples in class $j$. Note that CIFAR100-LT has three variants with different imbalance ratios.
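As a side note, long-tailed variants of originally balanced datasets such as CIFAR-100 are typically built by sub-sampling each class along an exponential profile; the sketch below follows that common convention and is not necessarily the exact recipe used for the benchmarks above.

def long_tailed_counts(n_max, num_classes, imbalance_ratio):
    """Per-class sample counts decaying exponentially from n_max down to n_max / imbalance_ratio."""
    mu = (1.0 / imbalance_ratio) ** (1.0 / (num_classes - 1))
    return [int(n_max * (mu ** k)) for k in range(num_classes)]

# e.g. long_tailed_counts(500, 100, 100) gives 500 samples for the head class and 5 for the tail class.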
Baselines We compare SADE with state-of-the-art long-tailed methods, including two-stage methods (Decouple [25], MiSLAS [62]), logit-adjusted training (Balanced Softmax [21], LADE [17]), ensemble learning (BBN [63], ACE [2], RIDE [49]), classifier design (Causal [42]), and representation learning (PaCo [8]). Note that LADE uses the prior of test class distribution for post-adjustment (although it is unavailable in practice), while all other methods do not use this prior.
Evaluation protocols In test-agnostic long-tailed recognition, following LADE [17], the models are evaluated on multiple sets of test data that follow different class distributions, in terms of micro accuracy. Same as LADE [17], we construct three kinds of test class distributions, i.e., the uniform distribution, forward long-tailed distributions as training data, and backward long-tailed distributions. In the backward ones, the order of the long tail on classes is flipped. More details of test data construction are provided in Appendix C.2. Besides, we also evaluate methods on vanilla long-tailed recognition [25, 31], where the models are evaluated on the uniform test class distribution. Here, the accuracy on three class sub-groups is also reported, i.e., many-shot classes (more than 100 training images), medium-shot classes (20∼100 images) and few-shot classes (less than 20 images).

Implementation details We use the same setup for all the baselines and our method. Specifically, following [17, 49], we use ResNeXt-50 for ImageNet-LT, ResNet-32 for CIFAR100-LT, ResNet-152 for Places-LT and ResNet-50 for iNaturalist 2018 as backbones, respectively. Moreover, we adopt the cosine classifier for prediction on all datasets. If not specified, we use the SGD optimizer with the momentum of 0.9 for training 200 epochs and set the initial learning rate as 0.1 with linear decay. We set λ=2 for ImageNet-LT and CIFAR100-LT, and λ=1 for the remaining datasets. During test-time training, we train the aggregation weights for 5 epochs with the batch size 128, where we use the same optimizer and learning rate as the training phase. More implementation details and the hyper-parameter statistics are reported in Appendix C.3.
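For concreteness, the optimizer setup described above might be written as follows (a sketch; treating "linear decay" as a per-epoch linear schedule is an assumption, and weight decay is omitted).

import torch

def build_optimizer(model, epochs=200, lr=0.1, momentum=0.9):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    # Linearly decay the learning rate from lr towards 0 over the training epochs.
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda epoch: 1.0 - epoch / epochs
    )
    return optimizer, scheduler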
5.2 Superiority on Vanilla Long-tailed Recognition
This subsection compares SADE with state-of-the-art long-tailed methods on vanilla long-tailed recognition. Specifically, as shown in Tables 3-4, Softmax trains the model with only cross-entropy, so it simulates the long-tailed training distribution and performs well on many-shot classes. However, it performs poorly on medium-shot and few-shot classes, leading to worse overall performance. In contrast, existing long-tailed methods (e.g., Decouple, Causal) seek to simulate the uniform class distribution, so their performance is more class-balanced, leading to better overall performance. However, as these methods mainly seek balanced performance, they inevitably sacrifice the performance on many-shot classes. To address this, RIDE and ACE explore ensemble learning for long-tailed recognition and achieve better performance on tail classes without sacrificing the head-class performance. In comparison, based on the increasing expert diversity derived from skill-diverse expert learning, our method performs the best on all datasets, e.g., with more than 2% accuracy gain on ImageNet-LT compared to RIDE and ACE. These results demonstrate the superiority of SADE over the compared methods that are particularly designed for the uniform test class distribution. Note that SADE also outperforms baselines in experiments with stronger data augmentation (i.e., RandAugment [7]) and other architectures, as reported in Appendix D.1.
5.3 Superiority on Test-agnostic Long-tailed Recognition
In this subsection, we evaluate SADE on test-agnostic long-tailed recognition. The results on various test class distributions are reported in Table 5. Specifically, since Softmax seeks to simulate the long-tailed training distribution, it performs well on forward long-tailed test distributions. However, its performance on the uniform and backward long-tailed distributions is poor. In contrast, existing long-tailed methods show more balanced performance among classes, leading to better overall accuracy. However, the models produced by these methods suffer from a simulation bias, i.e., performing similarly among classes on various class distributions (cf. Table 1). As a result, they cannot adapt to diverse test class distributions well. To handle this task, LADE assumes the test class distribution to be known and uses this information to adjust its predictions, leading to better performance on various test class distributions. However, since obtaining the actual test class distribution is difficult in real applications, the methods requiring such knowledge may not be applicable in practice. Moreover, in some specific cases like Forward-LT-3 and Backward-LT-3 distributions of iNaturalist 2018, the number of test samples on some classes becomes zero. In such cases, the test prior cannot be used in LADE, since adjusting logits with log 0 results in biased predictions. In contrast, without relying on the knowledge of test class distributions, our SADE presents an innovative self-supervised strategy to deal with unknown class distributions, and obtains even better performance than LADE that uses the test class prior (cf. Table 5). The promising results demonstrate the effectiveness and practicality of our method on test-agnostic long-tailed recognition. Note that the performance advantages of SADE become larger as the test data get more imbalanced. Due to the page limitation, the results on more datasets are reported in Appendix D.2.
5.4 Effectiveness of Skill-diverse Expert Learning
We next examine our skill-diverse expert learning strategy. The results are reported in Table 6, where RIDE [49] is a state-of-the-art ensemble-based method. RIDE trains each expert with cross-entropy independently and uses KL-Divergence to improve expert diversity. However, simply maximizing the divergence of expert predictions cannot learn visibly diverse experts (cf. Table 6). In contrast, the three experts learned by our strategy have significantly diverse expertise, excelling at many-shot classes, the uniform distribution (with higher overall performance), and few-shot classes, respectively. As a result, the increasing expert diversity leads to a non-trivial gain for the ensemble performance of SADE compared to RIDE. Moreover, consistent results on more datasets are reported in Appendix D.3, while the ablation studies of the expert learning strategy are provided in Appendix E.
5.5 Effectiveness of Test-time Self-supervised Aggregation
This subsection evaluates our test-time self-supervised aggregation strategy.
Effectiveness in expert aggregation. As shown in Table 7, our self-supervised strategy learns suitable expert weights for various unknown test class distributions. For forward long-tailed distributions, the weight of the forward expert E1 is higher; while for backward long-tailed ones, the weight of the backward expert E3 is relatively high. This enables our multi-expert model to boost the performance on dominant classes for unknown test distributions, leading to better ensemble performance (cf. Table 8), particularly as test data get more skewed. The results on more datasets are reported in Appendix D.4, while more ablation studies of our strategy are shown in Appendix F.
Superiority over test-time training methods. We then verify the superiority of our self-supervised strategy over existing test-time training approaches on various test class distributions. Specifically, we adopt three non-trivial baselines: (i) Test-time pseudo-labeling uses the multi-expert model to iteratively generate pseudo labels for unlabeled test data and uses them to fine-tune the model; (ii) Test class distribution estimation leverages BBSE [29] to estimate the test class distribution and uses it to post-adjust model predictions; (iii) Tent [46] fine-tunes the batch normalization layers of models through entropy minimization on unlabeled test data. The results in Table 9 show that directly applying existing test-time training methods cannot handle the class distribution shifts well, particularly on the inversely long-tailed class distribution. In comparison, our self-supervised strategy is able to aggregate multiple experts appropriately for the unknown test class distribution (cf. Table 7), leading to promising performance gains on various test class distributions (cf. Table 9).
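As a reference point for the Tent baseline above, test-time entropy minimization can be sketched as follows. This is a simplified illustration, not the exact baseline implementation: a single-head classifier is assumed, and the helper restricts updates to batch-norm affine parameters in the Tent style.

import torch
import torch.nn.functional as F

def batchnorm_affine_params(model):
    """Collect only batch-norm affine parameters, which Tent-style adaptation updates."""
    params = []
    for m in model.modules():
        if isinstance(m, (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d)):
            params += [p for p in (m.weight, m.bias) if p is not None]
    return params

def entropy_minimization_step(model, x, optimizer):
    """One adaptation step: minimize the mean prediction entropy on an unlabeled test batch."""
    probs = F.softmax(model(x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()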
Effectiveness on partial class distributions. Real-world test data may follow any type of class distribution, including partial class distributions (i.e., not all of the classes appear in the test data). Motivated by this, we further evaluate SADE on three partial class distributions: only many-shot classes, only medium-shot classes, and only few-shot classes. The results in Table 10 demonstrate the effectiveness of SADE in tackling more complex test class distributions.
6 Conclusion
In this paper, we have explored a practical yet challenging task of test-agnostic long-tailed recognition, where the test class distribution is unknown and not necessarily uniform. To tackle this task, we present a novel approach, namely Self-supervised Aggregation of Diverse Experts (SADE), which consists of two innovative strategies, i.e., skill-diverse expert learning and test-time self-supervised aggregation. We theoretically analyze our proposed method and also empirically show that SADE achieves new state-of-the-art performance on both vanilla and test-agnostic long-tailed recognition.
Acknowledgments
This work was partially supported by NUS ODPRT Grant R252-000-A81-133 and NUS Advanced Research and Technology Innovation Centre (ARTIC) Project Reference (ECT-RP2). We also gratefully appreciate the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.

1. What is the focus and contribution of the paper regarding long-tailed recognition?
2. What are the strengths of the proposed approach, particularly in its ability to handle different class distributions?
3. What are the weaknesses of the paper, especially regarding the practicality of the test-time adaptation strategy and the comparison with other works?
4. Do you have any concerns about the fairness of the result comparisons?
5. Have the authors considered using alternative methods, such as MC-Dropout, for expert aggregation?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The aim of this paper is to develop a mixture-of-experts (MoE) model to solve the test-agnostic long-tailed recognition problem, where the test class distribution may follow a uniform, forward or backward long-tailed distribution. The method is developed on the basis of RIDE with three experts and consists of two strategies. At training time, SADE utilizes a skill-diverse expert learning strategy that requires each expert to handle a different class distribution in order to solve distribution-agnostic long-tailed recognition problems. At test time, SADE utilizes a test-time expert aggregation strategy, which is based on a self-supervised learning approach, to determine expert aggregation weights that handle unknown class distributions. Experiments were conducted against various test-time training strategies dealing with class distribution shift. SADE achieves state-of-the-art performance on multiple long-tailed datasets, including CIFAR100-LT, ImageNet-LT, Places-LT and iNaturalist 2018.
Strengths And Weaknesses
Strengths:
Evaluating on Forward-LT and Uniform test class distributions can help us better understand the performance of various long-tailed algorithms in different testing scenarios. It is shown that SADE can achieve SOTA performance on all these testing distributions.
This paper is well-written and easy to follow.
The authors conducted thorough experiments on various benchmarks and testing scenarios and achieved consistent performance improvements on all these benchmarks.
For weaknesses, my main concerns are twofold:
The biggest improvement comes from test data with Backward-LT distributions; however, I find it hard to believe that backward class distributions are common in real-world applications. The many-shot (few-shot) classes in the training data are often the many-shot (few-shot) classes in the testing data as well. Thus, Forward-LT and Uniform test class distributions make more sense; however, SADE achieves marginal improvements in these two testing cases.
The test-time self-supervised aggregation strategy requires the model to see all test data (unlabeled) before deployment; however, in real-world applications we more commonly see only one test image at a time. This is similar to my first concern, which is whether this setup is a practical one that can be used in real-world applications.
Questions
I have a few questions on the fairness of the result comparisons and the setting of the test-time adaptation:
When comparing with the current SOTA method RIDE [46], the results of RIDE (https://github.com/frank-xwang/RIDE-LongTailRecognition/blob/main/MODEL_ZOO.md) are actually achieved by training the model for 100 epochs. However, the results reported in the paper are achieved by training the model for 200 epochs. Therefore, I am concerned about the fairness of the comparisons.
For test-time adaptation, have you tried using MC-Dropout [1] as a metric for expert aggregation? You can get the uncertainty of each expert and decide the weight of each expert based on the uncertainty. Does it save you from using all test data for expert aggregation? I think it might help if MC-Dropout is used with a strong data augmentation to produce different inputs.
[1] Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning." International Conference on Machine Learning. PMLR, 2016.
Limitations
Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work. The long-tail recognition algorithm aims to alleviate the problem of ignoring underrepresented minorities, which is necessary to obtain an unbiased model and facilitate the development of CNN models for social justice. |
NIPS
Title
Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
Abstract
Existing long-tailed recognition methods, aiming to train class-balanced models from long-tailed data, generally assume the models would be evaluated on the uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being either long-tailed or even inversely long-tailed), which may lead existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is agnostic and not necessarily uniform. In addition to the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single and stationary long-tailed dataset to separately handle different class distributions; (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the learned multiple experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. The source code is available at https: //github.com/Vanint/SADE-AgnosticLT.
1 Introduction
Real-world visual recognition datasets typically exhibit a long-tailed distribution, where a few classes contain numerous samples (called head classes), but the others are associated with only a few instances (called tail classes) [24, 33]. Due to the class imbalance, the trained model is easily biased towards head classes and performs poorly on tail classes [2, 58]. To tackle this issue, numerous studies have explored long-tailed recognition for learning well-performing models from imbalanced data [20, 56].
Most existing long-tailed studies [3, 9, 10, 48, 52] assume the test class distribution is uniform, i.e., each class has an equal amount of test data. Therefore, they develop various techniques, e.g., class resampling [13, 18, 25, 55], cost-sensitive learning [11, 36, 41, 47] or ensemble learning [2, 13, 27, 53], to re-balance the model performance on different classes for fitting the uniform class distribution. However, this assumption does not always hold in real applications, where actual test data may follow any kind of class distribution, being either uniform, long-tailed, or even inversely long-tailed to the training data (cf. Figure 1(a)). For example, one may train a recognition model for autonomous cars based on the training data collected from city areas, where pedestrians are majority classes and stone obstacles are minority classes. However, when the model is deployed to mountain areas, the pedestrians become the minority while the stones become the majority. In this case, the test class distribution is inverse to the training one, and existing methods may perform poorly.
To address the issue of varying class distributions, as the first research attempt, LADE [17] assumes the test class distribution to be known and uses the knowledge to post-adjust model predictions. However, the actual test class distribution is usually unknown a priori, making LADE not applicable in practice. Therefore, we study a more realistic yet challenging problem, namely test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test distribution is agnostic. To tackle this problem, motivated by the idea of "divide and conquer", we propose to learn multiple experts with diverse skills that excel at handling different class distributions (cf. Figure 1(b)). As long as these skill-diverse experts can be aggregated suitably at test time, the multi-expert model would manage to handle the unknown test class distribution. Following this idea, we develop a novel approach, namely Self-supervised Aggregation of Diverse Experts (SADE).
The first challenge for SADE is how to learn multiple diverse experts from a single and stationary long-tailed training dataset. To handle this challenge, we empirically evaluate existing long-tailed methods in this task, and find that the models trained by existing methods have a simulation correlation between the learned class distribution and the training loss function. That is, the models learned by various losses are skilled in handling class distributions with different skewness. For example, the model trained with the conventional softmax loss simulates the long-tailed training class distribution, while the models obtained from existing long-tailed methods are good at the uniform class distribution. Inspired by this finding, SADE presents a simple but effective skill-diverse expert learning strategy to generate experts with different distribution preferences from a single long-tailed training distribution. Here, various experts are trained with different expertise-guided objective functions to deal with different class distributions, respectively. As a result, the learned experts are more diverse than previous multi-expert long-tailed methods [49, 63], leading to better ensemble performance, and in aggregate simulate a wide spectrum of possible class distributions.
The other challenge is how to aggregate these skill-diverse experts for handling test-agnostic class distributions based on only unlabeled test data. To tackle this challenge, we empirically investigate the property of different experts, and observe that there is a positive correlation between expertise and prediction stability, i.e., stronger experts have higher prediction consistency between different perturbed views of samples from their favorable classes. Motivated by this finding, we develop a novel self-supervised strategy, namely prediction stability maximization, to adaptively aggregate experts based on only unlabeled test data. We theoretically show that maximizing the prediction stability enables SADE to learn an aggregation weight that maximizes the mutual information between the predicted label distribution and the true class distribution. In this way, the resulting model is able to simulate unknown test class distributions.
We empirically verify the superiority of SADE on both vanilla and test-agnostic long-tailed recognition. Specifically, SADE achieves promising performance on vanilla long-tailed recognition under all benchmark datasets. For instance, SADE achieves 58.8% accuracy on ImageNet-LT with more than 2% accuracy gain over previous state-of-the-art ensemble long-tailed methods, i.e., RIDE [49] and ACE [2]. More importantly, SADE is the first long-tailed approach that is able to handle various test-agnostic class distributions without knowing the true class distribution of test data in advance. Note that SADE even outperforms LADE [17] that uses knowledge of the test class distribution.
Compared to previous long-tailed methods (e.g., LADE [17] and RIDE [49]), our method offers the following advantages: (i) SADE does not assume the test class distribution to be known, and provides the first practical approach to handling test-agnostic long-tailed recognition; (ii) SADE develops a simple diversity-promoting strategy to learn skill-diverse experts from a single and stationary long-tailed dataset; (iii) SADE presents a novel self-supervised strategy to aggregate skill-diverse experts at test time, by maximizing prediction consistency between unlabeled test samples’ perturbed views; (iv) the presented self-supervised strategy has a provable ability to simulate test-agnostic class distributions, which opens the opportunity for tackling unknown class distribution shifts at test time.
2 Related Work
Long-tailed recognition Existing long-tailed recognition methods, related to our study, can be categorized into three types: class re-balancing, logit adjustment and ensemble learning. Specifically, class re-balancing resorts to re-sampling [4, 13, 18, 25] or cost-sensitive learning [3, 10, 16, 61] to balance different classes during model training. Logit adjustment [17, 33, 37, 43] adjusts models’ output logits via the label frequencies of training data at inference time, for obtaining a large relative margin between head and tail classes. Ensemble-based methods [2, 13, 53, 63], e.g., RIDE [49], are based on multiple experts, which seek to capture heterogeneous knowledge, followed by ensemble aggregation. More discussions on the difference between our method and RIDE [49] are provided in Appendix D.3. Regarding test-agnostic long-tailed recognition, LADE [17] assumes the test class distribution is available and uses it to post-adjust model predictions. However, the true test class distribution is usually unknown a priori, making LADE inapplicable. In contrast, our method does not rely on the true test distribution for handling this problem, but presents a novel self-supervised strategy to aggregate skill-diverse experts at test time for test-agnostic class distributions. Moreover, some ensemble-based long-tailed methods [39] aggregate experts based on a labeled uniform validation set. However, as the test class distribution could be different from the validation one, simply aggregating experts on the validation set is unable to handle test-agnostic long-tailed recognition.
Test-time training Test-time training [23, 26, 30, 40, 46] is a transductive learning paradigm for handling distribution shifts [28, 32, 34, 38, 45, 59] between training and test data, and has been applied with success to out-of-domain generalization [19, 35] and dynamic scene deblurring [6]. In this study, we explore this paradigm to handle test-agnostic long-tailed recognition, where the issue of class distribution shifts is the main challenge. However, most existing test-time training methods seek to handle covariate distribution shifts instead of class distribution shifts, so simply leveraging them cannot resolve test-agnostic long-tailed recognition, as shown in our experiment (cf. Table 9).
1. What is the focus and contribution of the paper regarding long-tailed learning?
2. What are the strengths of the proposed approach, particularly in its novel modules?
3. What are the weaknesses of the paper, especially regarding computation complexity?
4. Do you have any questions regarding the adaptation of the expertise-guided loss function for different distribution types?
5. What are the limitations of the proposed method in terms of task extensibility and model complexity?
Summary Of The Paper
This paper extends the conventional long-tailed learning to the "test-agnostic one", in which the model trained on a long-tailed class distribution should generalize to arbitrary testing distribution not necessarily being uniform. To handle such a problem, this paper proposes a novel method consisting of two modules: (1) diverse experts with different class expertise and (2) a self-supervised test-time weighting strategy that adaptively aggregates the experts to tackle unknown testing distribution. Extensive experiments have shown the efficacy of the proposed method on handling arbitrary class distributions in testing.
Strengths And Weaknesses
Strengths
The proposed test-agnostic setting is challenging and of great practical significance.
The paper is well-written and easy to follow.
The proposed test-time aggregation strategy is interesting and has proven to be useful.
The experiments and ablation studies are comprehensive and convincing.
Weaknesses
The weakness mainly concerns the computation complexity. As mentioned in the paper, the three experts are independent in ResNet blocks (later stages) and fully-connected layers. Though it seems tolerable since it is a quite challenging problem after all, have the authors explored the trade-off between accuracy and complexity? For instance, is there a near-linear relationship between higher accuracy and experts with fewer shared modules? I would like to see how far it can go at the two extreme points: (1) when nothing is shared between experts and (2) when everything is shared except the fully-connected layers.
Questions
Line 762: how exactly are the expertise-guided loss functions changed to suit different types of distributions?
Limitations
The limitations are carefully discussed in the paper, which mainly encompass the extensibility to different tasks and the model complexity of the proposed method. |
NIPS | Title
Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
Abstract
Existing long-tailed recognition methods, aiming to train class-balanced models from long-tailed data, generally assume the models would be evaluated on the uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being either long-tailed or even inversely long-tailed), which may lead existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is agnostic and not necessarily uniform. In addition to the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single and stationary long-tailed dataset to separately handle different class distributions; (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the learned multiple experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. The source code is available at https: //github.com/Vanint/SADE-AgnosticLT.
1 Introduction
Real-world visual recognition datasets typically exhibit a long-tailed distribution, where a few classes contain numerous samples (called head classes), but the others are associated with only a few instances (called tail classes) [24, 33]. Due to the class imbalance, the trained model is easily biased towards head classes and performs poorly on tail classes [2, 58]. To tackle this issue, numerous studies have explored long-tailed recognition for learning well-performing models from imbalanced data [20, 56].
Most existing long-tailed studies [3, 9, 10, 48, 52] assume the test class distribution is uniform, i.e., each class has an equal amount of test data. Therefore, they develop various techniques, e.g., class resampling [13, 18, 25, 55], cost-sensitive learning [11, 36, 41, 47] or ensemble learning [2, 13, 27, 53], to re-balance the model performance on different classes for fitting the uniform class distribution. However, this assumption does not always hold in real applications, where actual test data may follow any kind of class distribution, being either uniform, long-tailed, or even inversely long-tailed to the training data (cf. Figure 1(a)). For example, one may train a recognition model for autonomous cars based on the training data collected from city areas, where pedestrians are majority classes and stone obstacles are minority classes. However, when the model is deployed to mountain areas, the pedestrians become the minority while the stones become the majority. In this case, the test class distribution is inverse to the training one, and existing methods may perform poorly.
To address the issue of varying class distributions, as the first research attempt, LADE [17] assumes the test class distribution to be known and uses the knowledge to post-adjust model predictions. However, the actual test class distribution is usually unknown a priori, making LADE not applicable in practice. Therefore, we study a more realistic yet challenging problem, namely test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test distribution is agnostic. To tackle this problem, motivated by the idea of "divide and conquer", we propose to learn multiple experts with diverse skills that excel at handling different class distributions (cf. Figure 1(b)). As long as these skill-diverse experts can be aggregated suitably at test time, the multi-expert model would manage to handle the unknown test class distribution. Following this idea, we develop a novel approach, namely Self-supervised Aggregation of Diverse Experts (SADE).
The first challenge for SADE is how to learn multiple diverse experts from a single and stationary long-tailed training dataset. To handle this challenge, we empirically evaluate existing long-tailed methods in this task, and find that the models trained by existing methods have a simulation correlation between the learned class distribution and the training loss function. That is, the models learned by various losses are skilled in handling class distributions with different skewness. For example, the model trained with the conventional softmax loss simulates the long-tailed training class distribution, while the models obtained from existing long-tailed methods are good at the uniform class distribution. Inspired by this finding, SADE presents a simple but effective skill-diverse expert learning strategy to generate experts with different distribution preferences from a single long-tailed training distribution. Here, various experts are trained with different expertise-guided objective functions to deal with different class distributions, respectively. As a result, the learned experts are more diverse than previous multi-expert long-tailed methods [49, 63], leading to better ensemble performance, and in aggregate simulate a wide spectrum of possible class distributions.
The other challenge is how to aggregate these skill-diverse experts for handling test-agnostic class distributions based on only unlabeled test data. To tackle this challenge, we empirically investigate the property of different experts, and observe that there is a positive correlation between expertise and prediction stability, i.e., stronger experts have higher prediction consistency between different perturbed views of samples from their favorable classes. Motivated by this finding, we develop a novel self-supervised strategy, namely prediction stability maximization, to adaptively aggregate experts based on only unlabeled test data. We theoretically show that maximizing the prediction stability enables SADE to learn an aggregation weight that maximizes the mutual information between the predicted label distribution and the true class distribution. In this way, the resulting model is able to simulate unknown test class distributions.
We empirically verify the superiority of SADE on both vanilla and test-agnostic long-tailed recognition. Specifically, SADE achieves promising performance on vanilla long-tailed recognition under all benchmark datasets. For instance, SADE achieves 58.8% accuracy on ImageNet-LT with more than 2% accuracy gain over previous state-of-the-art ensemble long-tailed methods, i.e., RIDE [49] and ACE [2]. More importantly, SADE is the first long-tailed approach that is able to handle various test-agnostic class distributions without knowing the true class distribution of test data in advance. Note that SADE even outperforms LADE [17] that uses knowledge of the test class distribution.
Compared to previous long-tailed methods (e.g., LADE [17] and RIDE [49]), our method offers the following advantages: (i) SADE does not assume the test class distribution to be known, and provides the first practical approach to handling test-agnostic long-tailed recognition; (ii) SADE develops a simple diversity-promoting strategy to learn skill-diverse experts from a single and stationary long-tailed dataset; (iii) SADE presents a novel self-supervised strategy to aggregate skill-diverse experts at test time, by maximizing prediction consistency between unlabeled test samples’ perturbed views; (iv) the presented self-supervised strategy has a provable ability to simulate test-agnostic class distributions, which opens the opportunity for tackling unknown class distribution shifts at test time.
2 Related Work
Long-tailed recognition Existing long-tailed recognition methods, related to our study, can be categorized into three types: class re-balancing, logit adjustment and ensemble learning. Specifically, class re-balancing resorts to re-sampling [4, 13, 18, 25] or cost-sensitive learning [3, 10, 16, 61] to balance different classes during model training. Logit adjustment [17, 33, 37, 43] adjusts models’ output logits via the label frequencies of training data at inference time, for obtaining a large relative margin between head and tail classes. Ensemble-based methods [2, 13, 53, 63], e.g., RIDE [49], are based on multiple experts, which seek to capture heterogeneous knowledge, followed by ensemble aggregation. More discussions on the difference between our method and RIDE [49] are provided in Appendix D.3. Regarding test-agnostic long-tailed recognition, LADE [17] assumes the test class distribution is available and uses it to post-adjust model predictions. However, the true test class distribution is usually unknown a priori, making LADE inapplicable. In contrast, our method does not rely on the true test distribution for handling this problem, but presents a novel self-supervised strategy to aggregate skill-diverse experts at test time for test-agnostic class distributions. Moreover, some ensemble-based long-tailed methods [39] aggregate experts based on a labeled uniform validation set. However, as the test class distribution could be different from the validation one, simply aggregating experts on the validation set is unable to handle test-agnostic long-tailed recognition.
Test-time training Test-time training [23, 26, 30, 40, 46] is a transductive learning paradigm for handling distribution shifts [28, 32, 34, 38, 45, 59] between training and test data, and has been applied with success to out-of-domain generalization [19, 35] and dynamic scene deblurring [6]. In this study, we explore this paradigm to handle test-agnostic long-tailed recognition, where the issue of class distribution shifts is the main challenge. However, most existing test-time training methods seek to handle covariate distribution shifts instead of class distribution shifts, so simply leveraging them cannot resolve test-agnostic long-tailed recognition, as shown in our experiment (cf. Table 9).
3 Problem Formulation
Long-tailed recognition aims to learn a well-performing classification model from a training dataset with long-tailed class distribution. Let $D_s=\{x_i, y_i\}_{i=1}^{n_s}$ denote the long-tailed training set, where $y_i$ is the class label of the sample $x_i$. The total number of training data over C classes is $n_s=\sum_{k=1}^{C} n_k$, where $n_k$ denotes the number of samples in class k. Without loss of generality, we follow a common assumption [17, 25] that the classes are sorted by cardinality in decreasing order (i.e., if $i_1 < i_2$, then $n_{i_1} \geq n_{i_2}$), and $n_1 \gg n_C$. The imbalance ratio is defined as $\max(n_k)/\min(n_k) = n_1/n_C$. The test data $D_t = \{x_j, y_j\}_{j=1}^{n_t}$ is defined in a similar way. Most existing long-tailed recognition methods assume the test class distribution is uniform (i.e., $p_t(y) = 1/C$), and seek to train models from the long-tailed training distribution $p_s(y)$ to perform well on the uniform test distribution. However, such an assumption does not always hold in practice. The actual test class distribution in real-world applications may also be long-tailed (i.e., $p_t(y) = p_s(y)$), or even inversely long-tailed to the training data (i.e., $p_t(y) = \mathrm{inv}(p_s(y))$). Here, $\mathrm{inv}(\cdot)$ indicates that the order of the long tail on classes is flipped. As a result, the models learned by existing methods may fail when the actual test class distribution is different from the assumed one. To address this, we propose to study a more practical yet challenging long-tailed problem, i.e., Test-agnostic Long-tailed Recognition. This task aims to learn a recognition model from long-tailed training data, where the resulting model would be evaluated on multiple test sets that follow different class distributions. This task is challenging due to the integration of two challenges: (1) the severe class imbalance in the training data makes it difficult to train models; (2) unknown class distribution shifts between training and test data (i.e., $p_t(y) \neq p_s(y)$) make it hard for models to generalize.
4 Method
To tackle the above problem, inspired by the idea of "divide and conquer", we propose to learn multiple skill-diverse experts that excel at handling different class distributions. By reasonably fusing these experts at test time, the multi-expert model would manage to handle unknown class distribution shifts and resolve test-agnostic long-tailed recognition. Following this idea, we develop a novel Self-supervised Aggregation of Diverse Experts (SADE) approach. Specifically, SADE consists of two innovative strategies: (1) learning skill-diverse experts from a single long-tailed training dataset; (2) aggregating experts at test time with self-supervision to handle test-agnostic class distributions.
4.1 Skill-diverse Expert Learning
As shown in Figure 2, SADE builds a three-expert model that comprises two components: (1) an expert-shared backbone fθ; (2) independent expert networks E1, E2 and E3. When training the model, the key challenge is how to learn skill-diverse experts from a single and stationary long-tailed training dataset. Existing ensemble-based long-tailed methods [13, 49] seek to train experts for the uniform test class distribution, and hence the trained experts are not differentiated sufficiently for handling various class distributions (refer to Table 6 for an example). To tackle this challenge, we first empirically investigate existing long-tailed methods in this task. From Table 1, we find that there is a simulation correlation between the learned class distribution and the training loss function. That is, the models learned by different losses are good at dealing with class distributions with different skewness. For instance, the model trained with the softmax loss is good at the long-tailed distribution, while the models obtained from long-tailed methods are skilled in the uniform distribution.
Motivated by this finding, we develop a simple skill-diverse expert learning strategy to generate experts with different distribution preferences. To be specific, the forward expert E1 seeks to be good at the long-tailed class distribution and performs well on many-shot classes. The uniform expert E2 strives to be skilled in the uniform distribution. The backward expert E3 aims at the inversely long-tailed distribution and performs well on few-shot classes. Here, the forward and backward experts are necessary since they span a wide spectrum of possible class distributions, while the uniform expert ensures retaining high accuracy on the uniform distribution. To this end, we use three different expertise-guided losses to train the three experts, respectively.
The forward expert E1 We use the softmax cross-entropy loss to train this expert, so that it directly simulates the original long-tailed training class distribution:
$\mathcal{L}_{ce} = \frac{1}{n_s} \sum_{x_i \in D_s} -y_i \log \sigma(v_1(x_i)), \quad (1)$
where v1(·) is the output logits of the forward expert E1, and σ(·) is the softmax function.
The uniform expert E2 We aim to train this expert to simulate the uniform class distribution. Inspired by the effectiveness of logit adjusted losses for long-tailed recognition [33], we resort to the balanced softmax loss [21]. Specifically, let $\hat{y}^k = \frac{\exp(v^k)}{\sum_{c=1}^{C} \exp(v^c)}$ be the prediction probability. The balanced softmax adjusts the prediction probability by compensating for the long-tailed class distribution with the prior of training label frequencies: $\hat{y}^k = \frac{\pi^k \exp(v^k)}{\sum_{c=1}^{C} \pi^c \exp(v^c)} = \frac{\exp(v^k + \log \pi^k)}{\sum_{c=1}^{C} \exp(v^c + \log \pi^c)}$, where $\pi^k = \frac{n_k}{n}$ denotes the training label frequency of class k. Then, given $v_2(\cdot)$ as the output logits of the expert E2, the balanced softmax loss for the expert E2 is defined as:

$\mathcal{L}_{bal} = \frac{1}{n_s} \sum_{x_i \in D_s} -y_i \log \sigma(v_2(x_i) + \log \pi). \quad (2)$
Intuitively, by adjusting logits to compensate for the long-tailed distribution with the prior π, this loss enables E2 to output class-balanced predictions that simulate the uniform distribution.
The backward expert E3 We seek to train this expert to simulate the inversely long-tailed class distribution. To this end, we propose a new inverse softmax loss, based on the same rationale as logit adjusted losses [21, 33]. Specifically, we adjust the prediction probability by $\hat{y}^k = \frac{\exp(v^k + \log \pi^k - \log \bar{\pi}^k)}{\sum_{c=1}^{C} \exp(v^c + \log \pi^c - \log \bar{\pi}^c)}$, where the inverse training prior $\bar{\pi}$ is obtained by inverting the order of training label frequencies $\pi$. Then, the new inverse softmax loss for the expert E3 is defined as:

$\mathcal{L}_{inv} = \frac{1}{n_s} \sum_{x_i \in D_s} -y_i \log \sigma(v_3(x_i) + \log \pi - \lambda \log \bar{\pi}), \quad (3)$
where v3(·) denotes the output logits of E3 and λ is a hyper-parameter. Intuitively, this loss adjusts logits to compensate for the long-tailed distribution with π, and further applies reverse adjustment with π̄. This enables E3 to simulate the inversely long-tailed distribution (cf. Table 6 for verification).
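For concreteness, the three expertise-guided objectives can be sketched in a few lines of PyTorch-style code. This is a minimal illustration rather than the authors' released implementation: the function and variable names, the use of integer class labels, and the simple summation of the three losses are our assumptions.

```python
import torch
import torch.nn.functional as F

def expertise_guided_losses(logits1, logits2, logits3, labels, prior, lam=2.0):
    """Sketch of Eqs. (1)-(3). logits1/2/3: [B, C] outputs of the forward,
    uniform, and backward experts; labels: integer class indices; prior: [C]
    training label frequencies pi_k = n_k / n, with classes sorted by
    decreasing frequency (as assumed in Sec. 3); lam: the lambda of Eq. (3)."""
    log_prior = torch.log(prior)
    log_inv_prior = torch.log(prior.flip(dims=[0]))  # inverse prior: flip the class order

    loss_ce = F.cross_entropy(logits1, labels)                        # Eq. (1): forward expert E1
    loss_bal = F.cross_entropy(logits2 + log_prior, labels)           # Eq. (2): uniform expert E2
    loss_inv = F.cross_entropy(logits3 + log_prior - lam * log_inv_prior, labels)  # Eq. (3): E3
    return loss_ce + loss_bal + loss_inv  # how the three losses are combined is our assumption
```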
4.2 Test-time Self-supervised Aggregation
Based on the skill-diverse learning strategy, the three experts in SADE are skilled in different class distributions. The remaining challenge is how to fuse them to deal with unknown test class distributions. A basic principle for expert aggregation is that the experts should play a bigger role in situations where they have expertise. Nevertheless, how to detect strong experts for unknown test class distribution remains unknown. Our key insight is that strong experts should be more stable in predicting the samples from their skilled classes, even though these samples are perturbed.
Empirical observation To verify this hypothesis, we estimate the prediction stability of experts by comparing the cosine similarity between their predictions for a sample’s two augmented views. Here, the data views are generated by the data augmentation techniques in MoCo v2 [5]. From Table 2, we find that there is a positive correlation between expertise and prediction stability, i.e., stronger experts have higher prediction similarity between different views of samples from their favorable classes. Following this finding, we propose to explore the relative prediction stability to detect strong experts and weight experts for the unknown test class distribution. Consequently, we develop a novel self-supervised strategy, namely prediction stability maximization.
Prediction stability maximization This strategy learns aggregation weights for experts (with frozen parameters) by maximizing model prediction stability for unlabeled test samples. As shown in Figure 3, the method comprises three major components as follows.
Data view generation For a given sample x, we conduct two stochastic data augmentations to generate the sample’s two views, i.e., x1 and x2. Here, we use the same augmentation techniques as the advanced contrastive learning method, i.e., MoCo v2 [5], which has been shown effective in self-supervised learning.
Learnable aggregation weight Given the output logits of three experts $(v_1, v_2, v_3) \in \mathbb{R}^{3 \times C}$, we aggregate experts with a learnable aggregation weight $w = [w_1, w_2, w_3] \in \mathbb{R}^3$ and obtain the final softmax prediction by $\hat{y} = \sigma(w_1 \cdot v_1 + w_2 \cdot v_2 + w_3 \cdot v_3)$, where $w$ is normalized before aggregation, i.e., $w_1 + w_2 + w_3 = 1$.
Objective function Given the view predictions of unlabeled test data, we maximize the prediction stability based on the cosine similarity between the view predictions:
$\max_{w} S, \quad \text{where} \quad S = \frac{1}{n_t} \sum_{x \in D_t} \hat{y}_1 \cdot \hat{y}_2. \quad (4)$
Here, ŷ1 and ŷ2 are normalized by the softmax function. In test-time training, only the aggregation weight w is updated. Since stronger experts have higher prediction similarity for their skilled classes, maximizing the prediction stability S would learn higher weights for stronger experts regarding the unknown test class distribution. Moreover, the self-supervised aggregation strategy can be conducted in an online manner for streaming test data. The pseudo-code of SADE is provided in Appendix B.
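A minimal sketch of this test-time loop is given below, assuming frozen experts exposed through a function `expert_logits_fn` and a loader that yields two augmented views per unlabeled test batch. Softmax normalization of w is one possible way to keep the weights summing to one (the paper only states that w is normalized), and all names are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def aggregate_experts_at_test_time(expert_logits_fn, view_loader, epochs=5, lr=0.1):
    """Prediction stability maximization (Eq. 4): only the 3-dim weight w is learned;
    expert parameters stay frozen. view_loader yields (x_view1, x_view2) pairs of
    augmented views of unlabeled test images."""
    w = torch.nn.Parameter(torch.ones(3) / 3)
    optimizer = torch.optim.SGD([w], lr=lr, momentum=0.9)

    for _ in range(epochs):
        for x1, x2 in view_loader:
            weights = F.softmax(w, dim=0)  # keep the aggregation weights summing to 1
            y1 = F.softmax(sum(wi * vi for wi, vi in zip(weights, expert_logits_fn(x1))), dim=1)
            y2 = F.softmax(sum(wi * vi for wi, vi in zip(weights, expert_logits_fn(x2))), dim=1)
            stability = F.cosine_similarity(y1, y2, dim=1).mean()
            loss = -stability              # gradient ascent on prediction stability
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return F.softmax(w, dim=0).detach()
```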
Theoretical Analysis We then theoretically analyze the prediction stability maximization strategy to conceptually understand why it works. To this end, we first define the random variables of predictions and labels as $\hat{Y} \sim p(\hat{y})$ and $Y \sim p_t(y)$. We have the following result: Theorem 1. The prediction stability S is positively proportional to the mutual information between the predicted label distribution and the test class distribution $I(\hat{Y}; Y)$, and negatively proportional to the prediction entropy $H(\hat{Y})$:

$S \propto I(\hat{Y}; Y) - H(\hat{Y}).$
Please refer to Appendix A for proofs. According to Theorem 1, maximizing the prediction stability S enables SADE to learn an aggregation weight that maximizes the mutual information between the predicted label distribution p(ŷ) and the test class distribution pt(y), as well as minimizing the prediction entropy. Since minimizing entropy helps to improve the confidence of the classifier output [12], the aggregation weight is learned to simulate the test class distribution pt(y) and increase the prediction confidence. This property intuitively explains why our method has the potential to tackle the challenging task of test-agnostic long-tailed recognition at test time.
5 Experiments
In this section, we first evaluate the superiority of SADE on both vanilla and test-agnostic long-tailed recognition. We then verify the effectiveness of SADE in terms of its two strategies, i.e., skill-diverse expert learning and test-time self-supervised aggregation. More ablation studies are reported in appendices. Here, we begin with the experimental settings.
5.1 Experimental Setups
Datasets We use four benchmark datasets (i.e., ImageNet-LT [31], CIFAR100-LT [3], Places-LT [31], and iNaturalist 2018 [44]) to simulate real-world long-tailed class distributions. Their data statistics and imbalance ratios are summarized in Appendix C.1. The imbalance ratio is defined as $\max_j n_j / \min_j n_j$, where $n_j$ denotes the number of samples in class j. Note that CIFAR100-LT has three variants with different imbalance ratios.
Baselines We compare SADE with state-of-the-art long-tailed methods, including two-stage methods (Decouple [25], MiSLAS [62]), logit-adjusted training (Balanced Softmax [21], LADE [17]), ensemble learning (BBN [63], ACE [2], RIDE [49]), classifier design (Causal [42]), and representation learning (PaCo [8]). Note that LADE uses the prior of test class distribution for post-adjustment (although it is unavailable in practice), while all other methods do not use this prior.
Evaluation protocols In test-agnostic long-tailed recognition, following LADE [17], the models are evaluated on multiple sets of test data that follow different class distributions, in terms of micro accuracy. Same as LADE [17], we construct three kinds of test class distributions, i.e., the uniform distribution, forward long-tailed distributions as training data, and backward long-tailed distributions. In the backward ones, the order of the long tail on classes is flipped. More details of test data construction are provided in Appendix C.2. Besides, we also evaluate methods on vanilla long-tailed recognition [25, 31], where the models are evaluated on the uniform test class distribution. Here, the accuracy on three class sub-groups is also reported, i.e., many-shot classes (more than 100 training images), medium-shot classes (20∼100 images) and few-shot classes (less than 20 images).

Implementation details We use the same setup for all the baselines and our method. Specifically, following [17, 49], we use ResNeXt-50 for ImageNet-LT, ResNet-32 for CIFAR100-LT, ResNet-152 for Places-LT and ResNet-50 for iNaturalist 2018 as backbones, respectively. Moreover, we adopt the cosine classifier for prediction on all datasets. If not specified, we use the SGD optimizer with the momentum of 0.9 for training 200 epochs and set the initial learning rate as 0.1 with linear decay. We set λ=2 for ImageNet-LT and CIFAR100-LT, and λ=1 for the remaining datasets. During test-time training, we train the aggregation weights for 5 epochs with the batch size 128, where we use the same optimizer and learning rate as the training phase. More implementation details and the hyper-parameter statistics are reported in Appendix C.3.
5.2 Superiority on Vanilla Long-tailed Recognition
This subsection compares SADE with state-of-the-art long-tailed methods on vanilla long-tailed recognition. Specifically, as shown in Tables 3-4, Softmax trains the model with only cross-entropy, so it simulates the long-tailed training distribution and performs well on many-shot classes. However, it performs poorly on medium-shot and few-shot classes, leading to worse overall performance. In contrast, existing long-tailed methods (e.g., Decouple, Causal) seek to simulate the uniform class distribution, so their performance is more class-balanced, leading to better overall performance. However, as these methods mainly seek balanced performance, they inevitably sacrifice the performance on many-shot classes. To address this, RIDE and ACE explore ensemble learning for long-tailed recognition and achieve better performance on tail classes without sacrificing the head-class performance. In comparison, based on the increasing expert diversity derived from skill-diverse expert learning, our method performs the best on all datasets, e.g., with more than 2% accuracy gain on ImageNet-LT compared to RIDE and ACE. These results demonstrate the superiority of SADE over the compared methods that are particularly designed for the uniform test class distribution. Note that SADE also outperforms baselines in experiments with stronger data augmentation (i.e., RandAugment [7]) and other architectures, as reported in Appendix D.1.
5.3 Superiority on Test-agnostic Long-tailed Recognition
In this subsection, we evaluate SADE on test-agnostic long-tailed recognition. The results on various test class distributions are reported in Table 5. Specifically, since Softmax seeks to simulate the long-tailed training distribution, it performs well on forward long-tailed test distributions. However, its performance on the uniform and backward long-tailed distributions is poor. In contrast, existing long-tailed methods show more balanced performance among classes, leading to better overall accuracy. However, the models produced by these methods suffer from a simulation bias, i.e., performing similarly among classes on various class distributions (cf. Table 1). As a result, they cannot adapt to diverse test class distributions well. To handle this task, LADE assumes the test class distribution to be known and uses this information to adjust its predictions, leading to better performance on various test class distributions. However, since obtaining the actual test class distribution is difficult in real applications, methods requiring such knowledge may not be applicable in practice. Moreover, in some specific cases, like the Forward-LT-3 and Backward-LT-3 distributions of iNaturalist 2018, the number of test samples for some classes becomes zero. In such cases, the test prior cannot be used in LADE, since adjusting logits with log 0 results in biased predictions. In contrast, without relying on the knowledge of test class distributions, our SADE presents an innovative self-supervised strategy to deal with unknown class distributions, and obtains even better performance than LADE, which uses the test class prior (cf. Table 5). The promising results demonstrate the effectiveness and practicality of our method on test-agnostic long-tailed recognition. Note that the performance advantages of SADE become larger as the test data get more imbalanced. Due to the page limit, the results on more datasets are reported in Appendix D.2.
5.4 Effectiveness of Skill-diverse Expert Learning
We next examine our skill-diverse expert learning strategy. The results are reported in Table 6, where RIDE [49] is a state-of-the-art ensemble-based method. RIDE trains each expert with cross-entropy independently and uses KL-Divergence to improve expert diversity. However, simply maximizing the divergence of expert predictions cannot learn visibly diverse experts (cf. Table 6). In contrast, the three experts learned by our strategy have significantly diverse expertise, excelling at many-shot classes, the uniform distribution (with higher overall performance), and few-shot classes, respectively. As a result, the increasing expert diversity leads to a non-trivial gain for the ensemble performance of SADE compared to RIDE. Moreover, consistent results on more datasets are reported in Appendix D.3, while the ablation studies of the expert learning strategy are provided in Appendix E.
5.5 Effectiveness of Test-time Self-supervised Aggregation
This subsection evaluates our test-time self-supervised aggregation strategy.
Effectiveness in expert aggregation. As shown in Table 7, our self-supervised strategy learns suitable expert weights for various unknown test class distributions. For forward long-tailed distributions, the weight of the forward expert E1 is higher; while for backward long-tailed ones, the weight of the backward expert E3 is relatively high. This enables our multi-expert model to boost the performance on dominant classes for unknown test distributions, leading to better ensemble performance (cf. Table 8), particularly as test data get more skewed. The results on more datasets are reported in Appendix D.4, while more ablation studies of our strategy are shown in Appendix F.
Superiority over test-time training methods. We then verify the superiority of our self-supervised strategy over existing test-time training approaches on various test class distributions. Specifically, we adopt three non-trivial baselines: (i) Test-time pseudo-labeling uses the multi-expert model to iteratively generate pseudo labels for unlabeled test data and uses them to fine-tune the model; (ii) Test class distribution estimation leverages BBSE [29] to estimate the test class distribution and uses it to post-adjust model predictions; (iii) Tent [46] fine-tunes the batch normalization layers of models through entropy minimization on unlabeled test data. The results in Table 9 show that directly applying existing test-time training methods cannot handle the class distribution shifts well, particularly on the inversely long-tailed class distribution. In comparison, our self-supervised strategy is able to aggregate multiple experts appropriately for the unknown test class distribution (cf. Table 7), leading to promising performance gains on various test class distributions (cf. Table 9).
Effectiveness on partial class distributions. Real-world test data may follow any type of class distribution, including partial class distributions (i.e., not all of the classes appear in the test data). Motivated by this, we further evaluate SADE on three partial class distributions: only many-shot classes, only medium-shot classes, and only few-shot classes. The results in Table 10 demonstrate the effectiveness of SADE in tackling more complex test class distributions.
6 Conclusion
In this paper, we have explored a practical yet challenging task of test-agnostic long-tailed recognition, where the test class distribution is unknown and not necessarily uniform. To tackle this task, we present a novel approach, namely Self-supervised Aggregation of Diverse Experts (SADE), which consists of two innovative strategies, i.e., skill-diverse expert learning and test-time self-supervised aggregation. We theoretically analyze our proposed method and also empirically show that SADE achieves new state-of-the-art performance on both vanilla and test-agnostic long-tailed recognition.
Acknowledgments
This work was partially supported by NUS ODPRT Grant R252-000-A81-133 and NUS Advanced Research and Technology Innovation Centre (ARTIC) Project Reference (ECT-RP2). We also gratefully appreciate the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research. | 1. What is the focus and contribution of the paper on long-tailed recognition?
2. What are the strengths of the proposed approach, particularly in its effectiveness and extensiveness?
3. What are the weaknesses of the paper regarding its technical significance and comparisons with other works?
4. Do you have any concerns or questions about the chosen approaches, such as Balanced softmax and logit adjustment loss, the number of experts, and the classifier re-weighting strategy?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies an interesting problem in long-tailed recognition, i.e., the training class distribution is long-tailed while the test class distribution is agnostic rather than a uniform distribution as assumed in previous works. To deal with the problem, this paper proposes a new approach that outperforms existing methods in both vanilla and test-agnostic long-tailed recognition settings.
Strengths And Weaknesses
Strength
The studied problem is interesting and under-explored.
Extensive experiments are conducted to justify the effectiveness of the proposed method.
The writing is clear and easy to understand.
Weakness
The technical significance is not enough. Specifically, there are two aspects. First, the skill-diverse expert learning does not make a new contribution to the field because multiple experts have been used in much existing literature, e.g., [1-3]. Moreover, the idea of aggregating multiple diverse models was also explored in [3], though the studied problem is different. Second, the Test-time Self-supervised Aggregation is simply a re-weighting of three models. The key contribution might be the prediction stability maximization, but optimizing this objective does not guarantee obtaining the optimal weights.
This paper only considers the transductive setting where the entire test data are accessible at once. However, in many applications, the assumption is not satisfied.
This paper incurs more computational cost than previous methods. The Test-time Self-supervised Aggregation has to be performed at each test time.
nitpick: some bold numbers in Table 9 are not the best results.
[1] Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification
[2] Long-tailed recognition by routing diverse distribution-aware experts
[3] Cross-Domain Empirical Risk Minimization for Unbiased Long-tailed Classification
Questions
Why does this paper choose the Balanced Softmax loss and a variant of the logit adjustment loss?
How should we decide the number of experts?
Is the classifier re-weighting strategy optimal? And can the Test-time Self-supervised Aggregation learn optimal weights?
Why should we use the proposed method instead of existing long-tailed methods such as RIDE? The performance improvement is not significant and the proposed method incurs additional computational costs.
Limitations
Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work. |
NIPS | Title
Deep Anomaly Detection Using Geometric Transformations
Abstract
We consider the problem of anomaly detection in images, and present a new detection technique. Given a sample of images, all known to belong to a “normal” class (e.g., dogs), we show how to train a deep neural model that can detect out-of-distribution images (i.e., non-dog objects). The main idea behind our scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied on all the given images. The auxiliary expertise learned by the model generates feature detectors that effectively identify, at test time, anomalous images based on the softmax activation statistics of the model when applied on transformed images. We present extensive experiments using the proposed detector, which indicate that our technique consistently improves all known algorithms by a wide margin.
1 Introduction
Future machine learning applications such as self-driving cars or domestic robots will, inevitably, encounter various kinds of risks including statistical uncertainties. To be usable, these applications should be as robust as possible to such risks. One such risk is exposure to statistical errors or inconsistencies due to distributional divergences or noisy observations. The well-known problem of anomaly/novelty detection highlights some of these risks, and its resolution is of the utmost importance to mission critical machine learning applications. While anomaly detection has long been considered in the literature, conclusive understanding of this problem in the context of deep neural models is sorely lacking. For example, in machine vision applications, presently available novelty detection methods can suffer from poor performance in some problems, as demonstrated by our experiments.
In the basic anomaly detection problem, we have a sample from a “normal” class of instances, emerging from some distribution, and the goal is to construct a classifier capable of detecting out-of-distribution “abnormal” instances [5].1 There are quite a few variants of this basic anomaly detection problem. For example, in the positive and unlabeled version, we are given a sample from the “normal” class, as well as an unlabeled sample that is contaminated with abnormal instances. This contaminated-sample variant turns out to be easier than the pure version of the problem (in the sense that better performance can be achieved) [2]. In the present paper, we focus on the basic (and harder) version of anomaly detection, and consider only machine vision applications for which deep models (e.g., convolutional neural networks) are essential.
There are a few works that tackle the basic, pure-sample-anomaly detection problem in the context of images. The most successful results among these are reported for methods that rely on one of
1Unless otherwise mentioned, the use of the adjective “normal” is unrelated to the Gaussian distribution.
the following two general schemes. The first scheme consists of methods that analyze errors in reconstruction, which is based either on autoencoders or generative adversarial models (GANs) trained over the normal class. In the former case, reconstruction deficiency of a test point indicates abnormality. In the latter, the reconstruction error of a test instance is estimated using optimization to find the approximate inverse of the generator. The second class of methods utilizes an autoencoder trained over the normal class to generate a low-dimensional embedding. To identify anomalies, one uses classical methods over this embedding, such as low-density rejection [8, 9] or single-class SVM [29, 30]. A more advanced variant of this approach combines these two steps (encoding and then detection) using an appropriate cost function, which is used to train a single neural model that performs both procedures [27].
In this paper we consider a completely different approach that bypasses reconstruction (as in autoencoders or GANs) altogether. The proposed method is based on the observation that learning to discriminate between many types of geometric transformations applied to normal images encourages learning of features that are useful for detecting novelties. Thus, we train a multi-class neural classifier over a self-labeled dataset, which is created from the normal instances and their transformed versions, obtained by applying numerous geometric transformations. At test time, this discriminative model is applied on transformed instances of the test example, and the distribution of softmax response values of the “normal” train images is used for effective detection of novelties. The intuition behind our method is that by training the classifier to distinguish between transformed images, it must learn salient geometrical features, some of which are likely to be unique to the single class.
We present extensive experiments of the proposed method and compare it to several state-of-the-art methods for pure anomaly detection. We evaluate performance using a one-vs-all scheme over several image datasets such as CIFAR-100, which (to the best of our knowledge) have never been considered before in this setting. Our results overwhelmingly indicate that the proposed method achieves dramatic improvements over the best available methods. For example, on the CIFAR-10 dataset (10 different experiments), we improved the top performing baseline AUROC by 32% on average. In the CatsVsDogs dataset, we improve the top performing baseline AUROC by 67%.
2 Related Work
The literature related to anomaly detection is extensive and beyond the scope of this paper (see, e.g., [5, 42] for wider scope surveys). Our focus is on anomaly detection in the context of images and deep learning. In this scope, most published works rely, implicitly or explicitly, on some form of (unsupervised) reconstruction learning. These methods can be roughly categorized into two approaches.
Reconstruction-based anomaly score. These methods assume that anomalies possess different visual attributes than their non-anomalous counterparts, so it will be difficult to compress and reconstruct them based on a reconstruction scheme optimized for single-class data. Motivated by this assumption, the anomaly score for a new sample is given by the quality of the reconstructed image, which is usually measured by the $\ell_2$ distance between the original and reconstructed image. Classic methods belonging to this category include Principal Component Analysis (PCA) [18], and Robust-PCA [4]. In the context of deep learning, various forms of deep autoencoders are the main tool used for reconstruction-based anomaly scoring. Xia et al. [37] use a convolutional autoencoder with a regularizing term that encourages outlier samples to have a large reconstruction error. A variational autoencoder is used by An and Cho [1], where they estimate the reconstruction probability through Monte-Carlo sampling, from which they extract an anomaly score. Another related method, which scores an unseen sample based on the ability of the model to generate a similar one, uses Generative Adversarial Networks (GANs) [16]. Schlegl et al. [28] use this approach on optical coherence tomography images of the retina. Deecke et al. [7] employ a variation of this model called ADGAN, reporting slightly superior results on CIFAR-10 [21] and MNIST [22].
Reconstruction-based representation learning. Many conventional anomaly detection methods use a low-density rejection principle [8]. Given data, the density at each point is estimated, and new samples are deemed anomalous when they lie in a low-density region. Examples of such methods are kernel density estimation (KDE) [25], and Robust-KDE [19]. This approach is known to be problematic when handling high-dimensional data due to the curse of dimensionality. To mitigate
this problem, practitioners often use a two-step approach of learning a compact representation of the data, and then applying density estimation methods on the lower-dimensional representation [4]. More advanced techniques combine these two steps and aim to learn a representation that facilitates the density estimation task. Zhai et al. [41] utilize an energy-based model in the form of a regularized autoencoder in order to map each sample to an energy score, which is the estimated negative log-probability of the sample under the data distribution. Zong et al. [43] use the representation layer of an autoencoder in order to estimate parameters of a Gaussian mixture model.
There are a few approaches that have tackled the anomaly detection problem without resorting to some form of reconstruction. A recent example was published by Ruff et al. [27], who have developed a deep one-class SVM model. The model consists of a deep neural network whose weights are optimized using a loss function resembling the SVDD [30] objective.
3 Problem Statement
In this paper, we consider the problem of anomaly detection in images. Let $\mathcal{X}$ be the space of all “natural” images, and let $X \subseteq \mathcal{X}$ be the set of images defined as normal. Given a sample $S \subseteq X$, and a type-II error constraint (rate of normal samples that were classified as anomalies), we would like to learn the best possible (in terms of type-I error) classifier $h_S(x) : \mathcal{X} \to \{0, 1\}$, where $h_S(x) = 1 \Leftrightarrow x \in X$, which satisfies the constraint. Images that are not in $X$ are referred to as anomalies or novelties.
To control the trade-off between type-I and type-II errors when classifying, a common practice is to learn a scoring (ranking) function $n_S(x) : \mathcal{X} \to \mathbb{R}$, such that higher scores indicate that samples are more likely to be in $X$. Once such a scoring function has been learned, a classifier can be constructed from it by specifying an anomaly threshold $\lambda$:

$h_S^{\lambda}(x) = \begin{cases} 1 & n_S(x) \geq \lambda \\ 0 & n_S(x) < \lambda. \end{cases}$
As in many related works [28, 31, 17], in this paper we also focus only on learning the scoring function $n_S(x)$, and completely ignore the constrained binary decision problem. A useful (and common practice) performance metric to measure the quality of the trade-off of a given scoring function is the area under the Receiver Operating Characteristic (ROC) curve, which we denote here as AUROC. When prior knowledge on the proportion of anomalies is available, the area under the precision-recall curve (AUPR) metric might be preferred [6]. We also report on performance in terms of this metric in the supplementary material.
4 Discriminative Learning of an Anomaly Scoring Function Using Geometric Transformations
As noted above, we aim to learn a scoring function nS (as described in Section 3) in a discriminative fashion. To this end, we create a self-labeled dataset of images from our initial training set S, by using a class of geometric transformations T . The created dataset, denoted ST , is generated by applying each geometric transformation in T on all images in S, where we label each transformed image with the index of the transformation that was applied on it. This process creates a self-labeled multi-class dataset (with |T | classes) whose cardinality is |T ||S|. After the creation of ST , we train a multi-class image classifier whose objective is to predict, for each image, the index of its generating transformation in T . At inference time, given an unseen image x, we decide whether it belongs to the normal class by first applying each transformation on it, and then applying the classifier on each of the |T | transformed images. Each such application results in a softmax response vector of size |T |. The final normality score is defined using the combined log-likelihood of these vectors under an estimated distribution of “normal” softmax vectors (see details below).
4.1 Creating and Learning the Self-Labeled Dataset
Let $\mathcal{T} = \{T_0, T_1, \ldots, T_{k-1}\}$ be a set of geometric transformations, where for each $1 \leq i \leq k-1$, $T_i : \mathcal{X} \to \mathcal{X}$, and $T_0(x) = x$ is the identity transformation. The set $\mathcal{T}$ is a hyperparameter of our method, on which we elaborate in Section 6. The self-labeled set $S_{\mathcal{T}}$ is defined as

$S_{\mathcal{T}} \triangleq \{(T_j(x), j) : x \in S,\; T_j \in \mathcal{T}\}.$
Thus, for any x ∈ S, j is the label of Tj(x). We use this set to straightforwardly learn a deep k-class classification model, fθ, which we train over the self-labeled dataset ST using the standard cross-entropy loss function. To this end, any useful classification architecture and optimization method can be employed for this task.
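As an illustration of how the self-labeled set can be constructed, the sketch below uses a small set of flip/rotation compositions; the paper's actual set of 72 transformations is specified in its supplementary material, so the transformations and names here are assumptions for exposition only.

```python
import numpy as np
from itertools import product

def make_example_transformations():
    """A small illustrative set of geometric transformations (the paper's actual
    set of 72 transformations is listed in its supplementary material)."""
    def make(flip, k_rot):
        def T(img):                        # img: H x W x C array; flip=False, k_rot=0 is the identity
            out = np.fliplr(img) if flip else img
            return np.rot90(out, k=k_rot)  # rotate by k_rot * 90 degrees
        return T
    return [make(flip, k) for flip, k in product([False, True], range(4))]

def build_self_labeled_set(images, transformations):
    """S_T = {(T_j(x), j) : x in S, T_j in T}; the label of each transformed image
    is the index j of the transformation that generated it."""
    data, labels = [], []
    for x in images:
        for j, T in enumerate(transformations):
            data.append(T(x))
            labels.append(j)
    return np.stack(data), np.array(labels)
```

A k-class classifier (k = |T|) is then trained on the returned arrays with the standard cross-entropy loss, as described above.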
4.2 Dirichlet Normality Score
We now define our normality score function $n_S(x)$. Fix a set of geometric transformations $\mathcal{T} = \{T_0, T_1, \ldots, T_{k-1}\}$, and assume that a k-class classification model $f_{\theta}$ has been trained on the self-labeled set $S_{\mathcal{T}}$ (as described above). For any image x, let $\mathbf{y}(x) \triangleq \mathrm{softmax}(f_{\theta}(x))$, i.e., the vector of softmax responses of the classifier $f_{\theta}$ applied on x. To construct our normality score we define:

$n_S(x) \triangleq \sum_{i=0}^{k-1} \log p(\mathbf{y}(T_i(x)) \mid T_i),$

which is the combined log-likelihood of a transformed image conditioned on each of the applied transformations in $\mathcal{T}$, under a naïve (typically incorrect) assumption that all of these conditional distributions are independent. We approximate each conditional distribution to be $\mathbf{y}(T_i(x)) \mid T_i \sim \mathrm{Dir}(\alpha_i)$, where $\alpha_i \in \mathbb{R}^k_+$, $x \sim p_X(x)$, $i \sim \mathrm{Uni}(0, k-1)$, and $p_X(x)$ is the real data probability distribution of “normal” samples. Our choice of the Dirichlet distribution is motivated by two reasons. First, it is a common choice for distribution approximation when samples (i.e., $\mathbf{y}$) reside in the unit $k-1$ simplex. Second, there are efficient methods for numerically estimating the maximum likelihood parameters [24, 34]. We denote the estimation by $\tilde{\alpha}_i$. Using the estimated Dirichlet parameters, the normality score of an image x is:

$n_S(x) = \sum_{i=0}^{k-1} \left[ \log \Gamma\Big(\sum_{j=0}^{k-1} [\tilde{\alpha}_i]_j\Big) - \sum_{j=0}^{k-1} \log \Gamma([\tilde{\alpha}_i]_j) + \sum_{j=0}^{k-1} ([\tilde{\alpha}_i]_j - 1) \log \mathbf{y}(T_i(x))_j \right].$

Since all $\tilde{\alpha}_i$ are constant w.r.t. x, we can ignore the first two terms in the parentheses and redefine a simplified normality score, which is equivalent in its normality ordering:

$n_S(x) = \sum_{i=0}^{k-1} \sum_{j=0}^{k-1} ([\tilde{\alpha}_i]_j - 1) \log \mathbf{y}(T_i(x))_j = \sum_{i=0}^{k-1} (\tilde{\alpha}_i - 1) \cdot \log \mathbf{y}(T_i(x)).$
As demonstrated in our experiments, this score tightly captures normality in the sense that for two images $x_1$ and $x_2$, $n_S(x_1) > n_S(x_2)$ tends to imply that $x_1$ is “more normal” than $x_2$. For each $i \in \{0, \ldots, k-1\}$, we estimate $\tilde{\alpha}_i$ using the fixed point iteration method described in [24], combined with the initialization step proposed by Wicker et al. [34]. Each vector $\tilde{\alpha}_i$ is estimated based on the set $S_i = \{\mathbf{y}(T_i(x)) \mid x \in S\}$. We note that the use of an independent image set for estimating $\tilde{\alpha}_i$ may improve performance. A full and detailed algorithm is available in the supplementary material.
A simplified version of the proposed normality score was used during preliminary stages of this research: $\hat{n}_S(x) \triangleq \frac{1}{k} \sum_{j=0}^{k-1} [\mathbf{y}(T_j(x))]_j$. This simple score function eliminates the need for the Dirichlet parameter estimation, is easy to implement, and still achieves excellent results that are only slightly worse than the above Dirichlet score.
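Both scores are straightforward to compute once the classifier is trained; a minimal NumPy sketch follows, where `softmax_fn` stands for the trained classifier's softmax output, `transformations` for the set T, and `alphas` for the estimated Dirichlet parameters (the fixed-point estimation itself is not shown). All names are illustrative.

```python
import numpy as np

def simplified_score(softmax_fn, transformations, x):
    """n_hat_S(x) = (1/k) * sum_j [y(T_j(x))]_j : the average probability the
    classifier assigns to the correct transformation index."""
    return float(np.mean([softmax_fn(T(x))[j] for j, T in enumerate(transformations)]))

def dirichlet_score(softmax_fn, transformations, x, alphas):
    """Dirichlet normality score up to additive constants:
    n_S(x) = sum_i (alpha_i - 1) . log y(T_i(x)), with alphas[i] the estimated
    Dirichlet parameter vector for transformation i."""
    score = 0.0
    for i, T in enumerate(transformations):
        y = np.clip(softmax_fn(T(x)), 1e-12, 1.0)  # guard against log(0)
        score += float(np.dot(alphas[i] - 1.0, np.log(y)))
    return score
```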
5 Experimental Results
In this section, we describe our experimental setup and evaluation method, the baseline algorithms we use for comparison purposes, the datasets, and the implementation details of our technique (architecture used and geometric transformations applied). We then present extensive experiments on the described publicly available datasets, demonstrating the effectiveness of our scoring function. Finally, we show that our method is also effective at identifying out-of-distribution samples in labeled multi-class datasets.
5.1 Baseline Methods
We compare our method to state-of-the-art deep learning approaches as well as a few classic methods.
One-Class SVM. The one-class support vector machine (OC-SVM) is a classic and popular kernel-based method for novelty detection [29, 30]. It is typically employed with an RBF kernel, and learns a collection of closed sets in the input space, containing most of the training samples. Samples residing outside of these enclosures are deemed anomalous. Following [41, 7], we use this model on raw input (i.e., a flattened array of the pixels comprising an image), as well as on a low-dimensional representation obtained by taking the bottleneck layer of a trained convolutional autoencoder. We name these models RAW-OC-SVM and CAE-OC-SVM, respectively. It is very important to note that in both these variants of OC-SVM, we provide the OC-SVM with an unfair, significant advantage by optimizing its hyperparameters in hindsight; i.e., the OC-SVM hyperparameters ($\nu$ and $\gamma$) were optimized to maximize AUROC and taken to be the best performing values among those in the parameter grid: $\nu \in \{0.1, 0.2, \ldots, 0.9\}$, $\gamma \in \{2^{-7}, 2^{-6}, \ldots, 2^{2}\}$. Note that the hyperparameter optimization procedure has been provided with a two-class classification problem. There are, in fact, methods for optimizing these parameters without hindsight knowledge [33, 3]. These methods are likely to degrade the performance of the OC-SVM models. The convolutional autoencoder is chosen to have a similar architecture to that of DCGAN [26], where the encoder is adapted from the discriminator, and the decoder is adapted from the generator.
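For reference, the hindsight grid search described above can be sketched with scikit-learn; the feature extraction (raw pixels or CAE bottleneck) is assumed to have been done already, and the function name and data handling are illustrative rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score

def best_ocsvm_auroc(train_feats, test_feats, test_is_normal):
    """Hindsight hyperparameter selection for the OC-SVM baselines: pick the
    (nu, gamma) pair that maximizes AUROC on the labeled test set."""
    best = 0.0
    for nu in np.arange(0.1, 1.0, 0.1):
        for gamma in [2.0 ** e for e in range(-7, 3)]:
            model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(train_feats)
            scores = model.decision_function(test_feats)  # higher = more normal
            best = max(best, roc_auc_score(test_is_normal, scores))
    return best
```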
In addition, we compare our method to a recently published, end-to-end variant of OC-SVM called One-Class Deep SVDD [27]. This model, which we name E2E-OC-SVM, uses an objective similar to that of the classic SVDD [30] to optimize the weights of a deep architecture. However, there are constraints on the used architecture, such as lack of bias terms and unbounded activation functions. The experimental setup used by the authors is identical to ours, allowing us to report their published results as they are, on CIFAR-10.
Deep structured energy-based models. A deep structured energy-based model (DSEBM) is a state-of-the-art deep neural technique, whose output is the energy function (negative log probability) associated with an input sample [41]. Such models can be trained efficiently using score matching in a similar way to a denoising autoencoder [32]. Samples associated with high energy are considered anomalous. While the authors of [41] used a very shallow architecture in their model (which is ineffective in our problems), we selected a deeper one when using their method. The chosen architecture is the same as that of the encoder part in the convolutional autoencoder used by CAE-OC-SVM, with ReLU activations in the encoding layer.
Deep Autoencoding Gaussian Mixture Model. A deep autoencoding Gaussian mixture model (DAGMM) is another state-of-the-art deep autoencoder-based model, which generates a low-dimensional representation of the training data, and leverages a Gaussian mixture model to perform density estimation on the compact representation [43]. A DAGMM jointly and simultaneously optimizes the parameters of the autoencoder and the mixture model in an end-to-end fashion, thus leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The architecture of the autoencoder we used is similar to that of the convolutional autoencoder from the CAE-OC-SVM experiment, but with linear activation in the representation layer. The estimation network is inspired by the one in the original DAGMM paper.
Anomaly Detection with a Generative Adversarial Network. This network, given the acronym ADGAN, is a GAN-based model, which learns a one-way mapping from a low-dimensional multivariate Gaussian distribution to the distribution of the training set [7]. After training the GAN on the “normal” dataset, the discriminator is discarded. Given a sample, ADGAN uses gradient descent to estimate the inverse mapping from the image to the low-dimensional seed. The seed is then used to generate a sample, and the anomaly score is the $\ell_2$ distance between that image and the original one. In our experiments, for the generative model of the ADGAN we incorporated the same architecture used by the authors of the original paper, namely, the original DCGAN architecture [26]. As described, ADGAN requires only a trained generator.
5.2 Datasets
We consider four image datasets in our experiments: CIFAR-10, CIFAR-100 [21], CatsVsDogs [11], and fashion-MNIST [38], which are described below. We note that in all our experiments, pixel values of all images were scaled to reside in [−1, 1]. No other pre-processing was applied.
• CIFAR-10: consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images, divided equally across the classes.
• CIFAR-100: similar to CIFAR-10, but with 100 classes containing 600 images each. This set has a fixed train/test partition with 500 training images and 100 test images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses, which we use in our experiments.
• Fashion-MNIST: a relatively new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. In order to be compatible with the CIFAR-10 and CIFAR-100 classification architectures, we zero-pad the images so that they are of size 32x32.
• CatsVsDogs: extracted from the ASIRRA dataset, it contains 25,000 images of cats and dogs, 12,500 in each class. We split this dataset into a training set containing 10,000 images, and a test set of 2,500 images in each class. We also rescale each image to size 64x64. The average dimension size of the original images is roughly 360x400.
5.3 Experimental Protocol
We employ a one-vs-all evaluation scheme in each experiment. Consider a dataset with C classes, from which we create C different experiments. For each 1 ≤ c ≤ C, we designate class c to be the single class of normal images. We take S to be the set of images in the training set belonging to class c. The set S is considered to be the set of “normal” samples based on which the model must learn a normality score function. We emphasize that S contains only normal samples, and no additional samples are provided to the model during training. The normality score function is then applied on all images in the test set, containing both anomalies (not belonging to class c) and normal samples (belonging to class c), in order to evaluate the model’s performance. As stated in Section 3, we completely ignore the problem of choosing the appropriate anomaly threshold (λ) on the normality score, and quantify performance using the area under the ROC curve metric, which is commonly utilized as a performance measure for anomaly detection models. We are able to compute the ROC curve since we have full knowledge of the ground truth labels of the test set.2
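For concreteness, the protocol can be summarized by the following illustrative sketch (not the exact evaluation code); `fit_score_fn` is a hypothetical callable that trains on the normal images of class c and returns normality scores for all test images.

```python
# One-vs-all evaluation sketch: for each class c, train only on the "normal"
# training images of c, score the entire test set, and compute AUROC against
# the ground-truth labels (1 = normal, 0 = anomalous).
import numpy as np
from sklearn.metrics import roc_auc_score

def one_vs_all_aurocs(x_train, y_train, x_test, y_test, fit_score_fn, num_classes):
    aurocs = []
    for c in range(num_classes):
        normal_train = x_train[y_train == c]
        scores = fit_score_fn(normal_train, x_test)   # higher = more normal
        labels = (y_test == c).astype(int)
        aurocs.append(roc_auc_score(labels, scores))
    return aurocs
```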
Hyperparameters and Optimization Methods. For the self-labeled classification task, we use 72 geometric transformations. These transformations are specified in the supplementary material (see also Section 6 discussing the intuition behind the choice of these transformations). Our model is implemented using the state-of-the-art Wide Residual Network (WRN) model [40]. The parameters for the depth and width of the model for all 32x32 datasets were chosen to be 10 and 4, respectively, and for the CatsVsDogs dataset (64x64), 16 and 8, respectively. These hyperparameters were selected prior to conducting any experiment, and were fixed for all runs.3 We used the Adam [20] optimizer with default hyperparameters. Batch size for all methods was set to 128. The number of epochs was set to 200 on all benchmark models, except for training the GAN in ADGAN for which it was set to 100 and produced superior results. We trained the WRN for ⌈200/|T|⌉ epochs on the self-labeled set ST, to obtain approximately the same number of parameter updates as would have been performed had we trained on S for 200 epochs.
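The epoch bookkeeping is straightforward; the snippet below only illustrates the ⌈200/|T|⌉ calculation implied by the setup above and is not the authors' training script.

```python
# With |T| = 72 transformations, ceil(200 / 72) = 3 epochs over the self-labeled
# set S_T give roughly the same number of parameter updates as 200 epochs over S,
# since |S_T| = 72 * |S|.
import math

NUM_TRANSFORMATIONS = 72
EPOCHS_ON_S = 200
epochs_on_self_labeled = math.ceil(EPOCHS_ON_S / NUM_TRANSFORMATIONS)  # -> 3
```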
5.4 Results
In Table 1 we present our results. The table is composed of four blocks, with each block containing several anomaly detection problems derived from the same dataset (for lack of space we omit class names from the tables, and those can be found in the supplementary material). For example, the first row contains the results for an anomaly detection problem where the normal class is class 0 in CIFAR-10 (airplane), and the anomalous instances are images from all other classes in CIFAR-10 (classes 1-9). In this row (as in any other row), we see the average AUROC results over five runs and the corresponding standard error of the mean for all baseline methods. The results of our algorithm are shown in the rightmost column. OC-SVM variants and ADGAN were run once due to their
2A complete code of the proposed method’s implementation and the conducted experiments is available at https://github.com/izikgo/AnomalyDetectionTransformations.
3The parameters 16, 8 were used on CIFAR-10 by the authors. Due to the induced computational complexity, we chose smaller values. When testing the parameters 16, 8 with our method on the CIFAR-10 dataset, anomaly detection results improved.
time complexity. The best performing method in each row appears in bold. For example, in the CatsVsDogs experiments where dog (class 1) is the “normal” class, the best baseline (DSEBM) achieves 0.561 AUROC. Note that the trivial average AUROC is always 0.5, regardless of the proportion of normal vs. anomalous instances. Our method achieves an average AUROC of 0.888.
Several interesting observations can be made by inspecting the numbers in Table 1. Our relative advantage is most prominent when focusing on the larger images. All baseline methods, including OC-SVM variants, which enjoy hindsight information, only achieve performance that is slightly better than random guessing in the CatsVsDogs dataset. On the smaller-sized images, the baselines can perform much better. In most cases, however, our algorithm significantly outperformed the other methods. Interestingly, in many cases where the baseline methods struggled with separating normal samples from anomalies, our method excelled. See, for instance, the cases of automobile (class 1) and horse (class 7; see the CIFAR-10 section in the table). Inspecting the results on CIFAR-100 (where 20 super-classes defined the partition), we observe that our method was challenged by the diversity inside the normal class. In this case, there are a few normal classes on which our method did not perform well; see e.g., non-insect invertebrates (class 13), insects (class 7), and household electrical devices (class 5). In Section 6 we speculate why this might happen. We used the super-class partitioning of CIFAR-100 (instead of the 100 base classes) because labeled data for single base classes is scarce. On the fashion-MNIST dataset, all methods, excluding DAGMM, performed very well, with a slight advantage to our method. The fashion-MNIST dataset was designed as a drop-in replacement for the original MNIST dataset, which is slightly more challenging. Classic models, such as SVM with an RBF kernel, can perform well on this task, achieving almost 90% accuracy [38].
5.5 Identifying Out-of-distribution Samples in Labeled Multi-class Datasets
Although it is not the main focus of this work, we have also tackled the problem of identifying out-of-distribution samples in labeled multi-class datasets (i.e., identifying images that belong to a different distribution than that of the labeled dataset). To this end, we created a two-headed classification model based on the WRN architecture. The model has two separate softmax output layers: one for categories (e.g., cat, truck, airplane, etc.) and another for classifying transformations (our method). We use the categories softmax layer only during training. At test time, we only utilize the transformations softmax layer output as described in Section 4.2, but use the simplified normality score. When training on the CIFAR-10 dataset, and taking the tiny-imagenet (resized) dataset to be anomalies as done by Liang et al. [23] in their ODIN method, we improved ODIN’s AUROC/AUPR-In/AUPR-Out results from 92.1/89.0/93.6 to 95.7/96.1/95.4, respectively. It is important to note that in contrast to our method, ODIN is inapplicable in the pure single-class setting, where there are no class labels.
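A hedged Keras-style sketch of such a two-headed model is shown below; the shared backbone tensors (`inputs`, `features`) are assumed to come from a WRN builder defined elsewhere, so only the two heads are illustrated.

```python
# Two-headed classifier sketch: a shared backbone with one softmax head for the
# dataset's categories and one for the geometric transformations.
from tensorflow.keras import layers, Model

def add_two_heads(inputs, features, num_categories=10, num_transformations=72):
    categories = layers.Dense(num_categories, activation='softmax',
                              name='categories')(features)
    transformations = layers.Dense(num_transformations, activation='softmax',
                                   name='transformations')(features)
    model = Model(inputs=inputs, outputs=[categories, transformations])
    model.compile(optimizer='adam',
                  loss={'categories': 'sparse_categorical_crossentropy',
                        'transformations': 'sparse_categorical_crossentropy'})
    # The categories head is used only during training; at test time only the
    # transformations head feeds the (simplified) normality score.
    return model
```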
6 On the Intuition for Using Geometric Transformations
In this section we explain our intuition behind the choice of the set of transformations used in our method. Any bijection of a set (having some geometric structure) to itself is a geometric transformation. Among all geometric transformations, we only used compositions of horizontal flipping, translations, and rotations in our model, resulting in 72 distinct transformations (see supplementary material for the entire list). In the earlier stages of this work, we tried a few non-geometric transformations (e.g., Gaussian blur, sharpening, gamma correction), which degraded performance, so we abandoned them altogether. We hypothesize that non-geometric transformations perform worse since they can eliminate important features of the learned image set.
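One plausible way to enumerate 72 such compositions is sketched below (2 flips × 3 horizontal shifts × 3 vertical shifts × 4 rotations = 72); the specific shift offsets used here are illustrative assumptions, not necessarily the paper's exact list.

```python
# Illustrative enumeration of 72 transformations as compositions of horizontal
# flip, translation and rotation; index 0 is the identity transformation.
from itertools import product
import numpy as np

def make_transformations(shift=8):
    transforms = []
    for flip, tx, ty, k_rot in product([False, True],
                                       [0, -shift, shift],
                                       [0, -shift, shift],
                                       range(4)):
        def t(img, flip=flip, tx=tx, ty=ty, k_rot=k_rot):
            out = np.fliplr(img) if flip else img
            out = np.roll(out, (ty, tx), axis=(0, 1))  # translate with wrap-around
            return np.rot90(out, k_rot)                # rotate by k_rot * 90 degrees
        transforms.append(t)
    return transforms

assert len(make_transformations()) == 72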
We speculate that the effectiveness of the chosen transformation set is affected by their ability to preserve spatial information about the given “normal” images, as well as the ability of our classifier to predict which transformation was applied on a given transformed image. In addition, for a fixed type-II error rate, the type-I error rate of our method decreases the harder it gets for the trained classifier to correctly predict the identity of the transformations that were applied on anomalies.
We demonstrate this idea by conducting three experiments. Each experiment has the following structure. We train a neural classifier to discriminate between two transformations, where the normal class is taken to be images of a single digit from the MNIST [22] training set. We then evaluate our method using AUROC on a set of images comprising normal images and images of another digit from the MNIST test set. The three experiments are:
• Normal digit: ‘8’. Anomaly: ‘3’. Transformations: Identity and horizontal flip. It can be expected that due to the invariance of ‘8’ to horizontal flip, the classifier will have difficulties learning distinguishing features. Indeed, when presented with the test set containing ‘3’ as anomalies (which do not exhibit such invariance), our method did not perform well, achieving an AUROC of 0.646.
• Normal digit: ‘3’. Anomaly: ‘8’. Transformations: Identity and horizontal flip. In contrast to the previous experiment, the transformed variants of digit ‘3’ can easily be classified to the correct transformation. Indeed, our method, using the trained model for ‘3’, achieved 0.957 AUROC in this experiment.
• Normal digit: ‘8’. Anomaly: ‘3’. Transformations: Identity and translation by 7 pixels. In this experiment, the transformed images are distinguishable from each other. As can be expected, our method performs well in this case, achieving an AUROC of 0.919.
To convince ourselves that high scores given by our scoring function indicate membership in the normal class, we tested how an image would need to change in order to obtain a high normality score. This was implemented by optimizing an input image using gradient ascent to maximize the simplified variant of the normality score described in section 5.5 (see, e.g., [39]). Thus, we trained a classifier on the digit ‘3’ from the MNIST dataset, with a few geometric transformations. We then took an arbitrary image of the digit ‘0’ and optimized it. In Figure 1(a) we present two such images, where the left one is the original, and the right is the result after taking 200 gradient ascent steps that “optimize” the original image. It is evident that the ‘0’ digits have deformed, now resembling the digit ‘3’. This illustrates the fact that the classification model has learned features relevant to the “normal” class. To further strengthen our hypothesis, we conducted the same experiment using images from the normal class (i.e., images of the digit ‘3’). We expected these images to maintain their appearance during the optimization process, since they already contain the features that should contribute to a high normality score. Figure 1(b) contains two examples of the process, where in each row, the left image is the initial ‘3’, and the right is the result after taking 200 gradient ascent steps on it. As hypothesized, it is evident that the images remained roughly unchanged at the end of the optimization process (regardless of their different orientations).
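The input-optimization experiment above can be sketched as follows (a hedged TensorFlow sketch; `model` is a trained transformation classifier and `tf_transforms` are the geometric transformations implemented with differentiable TensorFlow ops, both placeholders).

```python
# Gradient ascent on the input pixels to maximize the simplified normality score
# under a trained transformation classifier.
import tensorflow as tf

def ascend_normality(model, tf_transforms, image, steps=200, lr=0.01):
    x = tf.Variable(image[None, ...], dtype=tf.float32)   # add a batch dimension
    for _ in range(steps):
        with tf.GradientTape() as tape:
            # simplified score: mean probability assigned to the correct
            # transformation index j, averaged over all transformations
            probs = [model(t(x))[:, j] for j, t in enumerate(tf_transforms)]
            score = tf.reduce_mean(tf.add_n(probs)) / len(tf_transforms)
        x.assign_add(lr * tape.gradient(score, x))         # gradient *ascent*
    return x.numpy()[0]
```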
7 Conclusion and Future Work
We presented a novel method for anomaly detection of images, which learns a meaningful representation of the learned training data in a fully discriminative fashion. The proposed method is computationally efficient, and as simple to implement as a multi-class classification task. Unlike best-known methods so far, our approach completely alleviates the need for a generative component (autoencoders/GANs). Most importantly, our method significantly advances the state-of-the-art by offering a dramatic improvement over the best available anomaly detection methods. Our results open many avenues for future research. First, it is important to develop a theory that grounds the use of geometric transformations. It would be interesting to study the possibility of selecting transformations that would best serve a given training set, possibly with prior knowledge on the anomalous samples. Another avenue is explicitly optimizing the set of transformations. Due to the effectiveness of our method, it is tempting to try adapting it to other settings or utilizing it in applications. Some examples are open-world classification, selective classification and regression [36, 35, 13], uncertainty estimation [15], and deep active learning [12, 14]. Finally, it would be interesting to consider using our techniques in settings where additional unlabeled “contaminated” data (consisting of both normal and novel instances) is provided, perhaps within a transductive learning framework [10].
Acknowledgements
This research was partially supported by the Israel Science Foundation (grant No. 710/18). | 1. What is the main contribution of the paper on image anomaly detection?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other baseline methods?
3. How does the reviewer assess the choice of datasets and evaluation metrics for testing anomaly detection methods?
4. What are some concerns regarding the use of one-class SVM and the selection of hyperparameters?
5. How would the proposed method perform in practical image classification problems with a large number of classes?
6. Are there any recommendations for selecting an appropriate architecture of the neural network for anomaly detection?
7. How does the reviewer evaluate the novelty and significance of the proposed approach in the context of existing works, such as the paper by Liang et al.? | Review | Review
The authors proposed an original approach to image anomaly detection. They propose to define a set of geometric transformations and then to construct a classifier that predicts which type of transformation a given image is most similar to. If the classifier score is low, the image is considered anomalous. As test sets the authors considered standard image datasets such as CIFAR, Fashion-MNIST and CatsVsDogs. However, there exist special datasets for evaluating anomaly detection methods on images, e.g., http://odds.cs.stonybrook.edu/#table5 The UCSD anomaly detection dataset is annotated: each video frame is annotated as anomalous/non-anomalous and sometimes it contains pixel-wise annotations. Such datasets are closer to real-life applications and should be used for testing.
As baselines the authors used:
- one-class SVM based either on pixels, or on a low-dimensional representation from the neural network
- energy-based models
- deep autoencoding GMM
- GANs
The authors claimed that in the case of one-class SVM they used the best hyperparameters, as there is no approach for tuning them. In fact, this is not true. There are approaches for tuning the hyperparameters of one-class SVM:
- Wang, S., Liu, Q., et al.: Hyperparameter selection of one-class support vector machine by self-adaptive data shifting. Pattern Recognition 74 (2018) 198–211
- Thomas, A., Feuillard, V., et al.: Calibration of one-class svm for mv set estimation. CoRR abs/1508.07535 (2015)
- E. Burnaev, P. Erofeev, D. Smolyakov. Model Selection for Anomaly Detection // Proc. SPIE 9875, Eighth International Conference on Machine Vision, 987525 (December 8, 2015); 5 P. doi:10.1117/12.2228794; http://dx.doi.org/10.1117/12.2228794
Moreover, an additional gain in accuracy when using one-class SVM can be obtained through end-to-end learning of embeddings specifically for one-class classification. There exists an approach for doing that, namely https://arxiv.org/pdf/1804.04888.pdf; also see https://arxiv.org/pdf/1801.05365.pdf
Of course, the one-class SVM is not the most powerful approach for anomaly detection. Experiments on numerous benchmark datasets with ground truth that compared popular anomaly detection algorithms find that one-class SVM ranks at the bottom [1]. We note that the top performer in [1] is the Isolation Forest algorithm [2], an ensemble of randomized trees. Thus it is important to include in the comparison the Isolation Forest algorithm based on the same features as one-class SVM!
[1] Emmott, A., Das, S., et al.: Systematic construction of anomaly detection benchmarks from real data. In: Proceedings of KDD ODD. (2013)
[2] Liu, F., Ting, K., Zhou, Z.H.: Isolation forest. In: Proceedings of ICDM. (2008)
- In general, features generated by neural networks are believed to provide an efficient characterization of image data. In particular, very often it is enough to use a linear classifier in the feature space produced by deep layers of the neural network. Thus, why did the authors use a kernel one-class SVM (CAE-OC-SVM)? It seems that it could be enough to use a linear one-class SVM with features produced by a usual deep network such as VGG.
- Another issue concerning Section 5.1: PCA should be used as a baseline! It is very often used in image processing engineering applications, including anomaly detection.
- Page 3, line 104. The authors proposed to use AUC. However, this may not be an appropriate measure to estimate the performance of an anomaly detector. In fact, in real datasets the number of anomalies is small compared to the size of the normal class. Thus the area under the precision-recall curve is better suited for such imbalanced cases.
- Displayed formula after line 100: after the first line of the displayed formula a comma sign should be used.
- The idea of using some class of geometric transformations T looks similar to the idea of image set augmentation. It is obvious that the "richness" of the used class of geometric transformations should significantly influence anomaly detection accuracy. However, the authors did not investigate how anomaly detection accuracy depends on the diversity of the used class of geometric transformations. To what extent are the obtained anomaly detection results sensitive to properties of this class?
- Practical image classification problems often contain a lot of classes (so-called extreme classification problems). Is the proposed method capable of dealing with such cases? To what extent is the obtained anomaly detection performance sensitive to the number of classes used to model anomalous observations?
- In Section 5.3 the authors consider issues about hyperparameters. Are there any recommendations for how to select a particular architecture of the neural network? Should we try to find an architecture which provides as accurate a classification of the types of image transformations as possible?
- In the paper by Liang et al., "Enhancing the reliability …", they proposed a method in which temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for effective detection of out-of-distribution images. In fact, in the current paper the authors propose to use a set of geometric transformations as such perturbations.
Conclusion:
- the topic of the paper is important
- the paper is well-written. However, there are still a number of open questions, see comments above.
- the results are novel. However, the novelty is not very big compared, e.g., to the results of the paper by Liang et al.
NIPS | Title
Deep Anomaly Detection Using Geometric Transformations
Abstract
We consider the problem of anomaly detection in images, and present a new detection technique. Given a sample of images, all known to belong to a “normal” class (e.g., dogs), we show how to train a deep neural model that can detect out-of-distribution images (i.e., non-dog objects). The main idea behind our scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied on all the given images. The auxiliary expertise learned by the model generates feature detectors that effectively identify, at test time, anomalous images based on the softmax activation statistics of the model when applied on transformed images. We present extensive experiments using the proposed detector, which indicate that our technique consistently improves all known algorithms by a wide margin.
1 Introduction
Future machine learning applications such as self-driving cars or domestic robots will, inevitably, encounter various kinds of risks including statistical uncertainties. To be usable, these applications should be as robust as possible to such risks. One such risk is exposure to statistical errors or inconsistencies due to distributional divergences or noisy observations. The well-known problem of anomaly/novelty detection highlights some of these risks, and its resolution is of the utmost importance to mission critical machine learning applications. While anomaly detection has long been considered in the literature, conclusive understanding of this problem in the context of deep neural models is sorely lacking. For example, in machine vision applications, presently available novelty detection methods can suffer from poor performance in some problems, as demonstrated by our experiments.
In the basic anomaly detection problem, we have a sample from a “normal” class of instances, emerging from some distribution, and the goal is to construct a classifier capable of detecting out-of-distribution “abnormal” instances [5].1 There are quite a few variants of this basic anomaly detection problem. For example, in the positive and unlabeled version, we are given a sample from the “normal” class, as well as an unlabeled sample that is contaminated with abnormal instances. This contaminated-sample variant turns out to be easier than the pure version of the problem (in the sense that better performance can be achieved) [2]. In the present paper, we focus on the basic (and harder) version of anomaly detection, and consider only machine vision applications for which deep models (e.g., convolutional neural networks) are essential.
There are a few works that tackle the basic, pure-sample-anomaly detection problem in the context of images. The most successful results among these are reported for methods that rely on one of
1Unless otherwise mentioned, the use of the adjective “normal” is unrelated to the Gaussian distribution.
the following two general schemes. The first scheme consists of methods that analyze errors in reconstruction, which is based either on autoencoders or generative adversarial models (GANs) trained over the normal class. In the former case, reconstruction deficiency of a test point indicates abnormality. In the latter, the reconstruction error of a test instance is estimated using optimization to find the approximate inverse of the generator. The second class of methods utilizes an autoencoder trained over the normal class to generate a low-dimensional embedding. To identify anomalies, one uses classical methods over this embedding, such as low-density rejection [8, 9] or single-class SVM [29, 30]. A more advanced variant of this approach combines these two steps (encoding and then detection) using an appropriate cost function, which is used to train a single neural model that performs both procedures [27].
In this paper we consider a completely different approach that bypasses reconstruction (as in autoencoders or GANs) altogether. The proposed method is based on the observation that learning to discriminate between many types of geometric transformations applied to normal images, encourages learning of features that are useful for detecting novelties. Thus, we train a multi-class neural classifier over a self-labeled dataset, which is created from the normal instances and their transformed versions, obtained by applying numerous geometric transformations. At test time, this discriminative model is applied on transformed instances of the test example, and the distribution of softmax response values of the “normal” train images is used for effective detection of novelties. The intuition behind our method is that by training the classifier to distinguish between transformed images, it must learn salient geometrical features, some of which are likely to be unique to the single class.
We present extensive experiments of the proposed method and compare it to several state-of-the-art methods for pure anomaly detection. We evaluate performance using a one-vs-all scheme over several image datasets such as CIFAR-100, which (to the best of our knowledge) have never been considered before in this setting. Our results overwhelmingly indicate that the proposed method achieves dramatic improvements over the best available methods. For example, on the CIFAR-10 dataset (10 different experiments), we improved the top performing baseline AUROC by 32% on average. In the CatsVsDogs dataset, we improve the top performing baseline AUROC by 67%.
2 Related Work
The literature related to anomaly detection is extensive and beyond the scope of this paper (see, e.g., [5, 42] for wider scope surveys). Our focus is on anomaly detection in the context of images and deep learning. In this scope, most published works rely, implicitly or explicitly, on some form of (unsupervised) reconstruction learning. These methods can be roughly categorized into two approaches.
Reconstruction-based anomaly score. These methods assume that anomalies possess different visual attributes than their non-anomalous counterparts, so it will be difficult to compress and reconstruct them based on a reconstruction scheme optimized for single-class data. Motivated by this assumption, the anomaly score for a new sample is given by the quality of the reconstructed image, which is usually measured by the ℓ2 distance between the original and reconstructed image. Classic methods belonging to this category include Principal Component Analysis (PCA) [18], and Robust-PCA [4]. In the context of deep learning, various forms of deep autoencoders are the main tool used for reconstruction-based anomaly scoring. Xia et al. [37] use a convolutional autoencoder with a regularizing term that encourages outlier samples to have a large reconstruction error. A variational autoencoder is used by An and Cho [1], where they estimate the reconstruction probability through Monte-Carlo sampling, from which they extract an anomaly score. Another related method, which scores an unseen sample based on the ability of the model to generate a similar one, uses Generative Adversarial Networks (GANs) [16]. Schlegl et al. [28] use this approach on optical coherence tomography images of the retina. Deecke et al. [7] employ a variation of this model called ADGAN, reporting slightly superior results on CIFAR-10 [21] and MNIST [22].
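As a minimal illustration of this family of scores (assuming a trained autoencoder with a Keras-style predict method; not tied to any particular paper's implementation):

```python
# Reconstruction-error anomaly score: the l2 distance between each image and its
# autoencoder reconstruction; higher error is interpreted as more anomalous.
import numpy as np

def reconstruction_anomaly_score(autoencoder, images):
    reconstructions = autoencoder.predict(images)          # Keras-style predict
    diff = reconstructions.reshape(len(images), -1) - images.reshape(len(images), -1)
    return np.linalg.norm(diff, axis=1)
```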
Reconstruction-based representation learning. Many conventional anomaly detection methods use a low-density rejection principle [8]. Given data, the density at each point is estimated, and new samples are deemed anomalous when they lie in a low-density region. Examples of such methods are kernel density estimation (KDE) [25], and Robust-KDE [19]. This approach is known to be problematic when handling high-dimensional data due to the curse of dimensionality. To mitigate
this problem, practitioners often use a two-step approach of learning a compact representation of the data, and then applying density estimation methods on the lower-dimensional representation [4]. More advanced techniques combine these two steps and aim to learn a representation that facilitates the density estimation task. Zhai et al. [41] utilize an energy-based model in the form of a regularized autoencoder in order to map each sample to an energy score, which is the estimated negative log-probability of the sample under the data distribution. Zong et al. [43] use the representation layer of an autoencoder in order to estimate parameters of a Gaussian mixture model.
There are few approaches that tackled the anomaly detection problem without resorting to some form of reconstruction. A recent example was published by Ruff et al. [27], who have developed a deep one-class SVM model. The model consists of a deep neural network whose weights are optimized using a loss function resembling the SVDD [30] objective.
3 Problem Statement
In this paper, we consider the problem of anomaly detection in images. Let 𝒳 be the space of all “natural” images, and let X ⊆ 𝒳 be the set of images defined as normal. Given a sample S ⊆ X, and a type-II error constraint (rate of normal samples that were classified as anomalies), we would like to learn the best possible (in terms of type-I error) classifier hS(x) : 𝒳 → {0, 1}, where hS(x) = 1 ⇔ x ∈ X, which satisfies the constraint. Images that are not in X are referred to as anomalies or novelties.
To control the trade-off between type-I and type-II errors when classifying, a common practice is to learn a scoring (ranking) function nS(x) : 𝒳 → ℝ, such that higher scores indicate that samples are more likely to be in X. Once such a scoring function has been learned, a classifier can be constructed from it by specifying an anomaly threshold (λ):
\[
h^{\lambda}_{S}(x) =
\begin{cases}
1, & n_S(x) \ge \lambda \\
0, & n_S(x) < \lambda.
\end{cases}
\]
As many related works [28, 31, 17], in this paper we also focus only on learning the scoring function nS(x), and completely ignore the constrained binary decision problem. A useful (and common practice) performance metric to measure the quality of the trade-off of a given scoring function is the area under the Receiver Operating Characteristic (ROC) curve, which we denote here as AUROC. When prior knowledge on the proportion of anomalies is available, the area under the precision-recall curve (AUPR) metric might be preferred [6]. We also report on performance in term of this metric in the supplementary material.
4 Discriminative Learning of an Anomaly Scoring Function Using Geometric Transformations
As noted above, we aim to learn a scoring function nS (as described in Section 3) in a discriminative fashion. To this end, we create a self-labeled dataset of images from our initial training set S, by using a class of geometric transformations T . The created dataset, denoted ST , is generated by applying each geometric transformation in T on all images in S, where we label each transformed image with the index of the transformation that was applied on it. This process creates a self-labeled multi-class dataset (with |T | classes) whose cardinality is |T ||S|. After the creation of ST , we train a multi-class image classifier whose objective is to predict, for each image, the index of its generating transformation in T . At inference time, given an unseen image x, we decide whether it belongs to the normal class by first applying each transformation on it, and then applying the classifier on each of the |T | transformed images. Each such application results in a softmax response vector of size |T |. The final normality score is defined using the combined log-likelihood of these vectors under an estimated distribution of “normal” softmax vectors (see details below).
4.1 Creating and Learning the Self-Labeled Dataset
Let T = {T0, T1, . . . , Tk−1} be a set of geometric transformations, where for each 1 ≤ i ≤ k−1, Ti : X → X , and T0(x) = x is the identity transformation. The set T is a hyperparameter of our method, on which we elaborate in Section 6. The self-labeled set ST is defined as
\[
S_{\mathcal{T}} \triangleq \{ (T_j(x), j) : x \in S,\ T_j \in \mathcal{T} \}.
\]
Thus, for any x ∈ S, j is the label of Tj(x). We use this set to straightforwardly learn a deep k-class classification model, fθ, which we train over the self-labeled dataset ST using the standard cross-entropy loss function. To this end, any useful classification architecture and optimization method can be employed for this task.
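A minimal sketch of these two steps is shown below (illustrative Keras-style code; `build_classifier` is a placeholder for whatever architecture is actually used, e.g. a WRN).

```python
# Build the self-labeled set S_T (label = index of the applied transformation)
# and train a k-class classifier on it with the standard cross-entropy loss.
import numpy as np
import tensorflow as tf

def build_self_labeled_set(images, transformations):
    transformed, labels = [], []
    for j, t in enumerate(transformations):       # j = 0 is the identity
        for x in images:
            transformed.append(t(x))
            labels.append(j)
    return np.stack(transformed), np.array(labels)   # |S_T| = |T| * |S|

def train_transformation_classifier(build_classifier, x_st, y_st, k,
                                    epochs=3, batch_size=128):
    model = build_classifier(num_classes=k)       # hypothetical model builder
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.fit(x_st, y_st, epochs=epochs, batch_size=batch_size)
    return model
```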
4.2 Dirichlet Normality Score
We now define our normality score function nS(x). Fix a set of geometric transformations T = {T0, T1, . . . , Tk−1}, and assume that a k-class classification model fθ has been trained on the self-labeled set ST (as described above). For any image x, let y(x) ≜ softmax(fθ(x)), i.e., the vector of softmax responses of the classifier fθ applied on x. To construct our normality score we define:
\[
n_S(x) \triangleq \sum_{i=0}^{k-1} \log p\left(\mathbf{y}(T_i(x)) \mid T_i\right),
\]
which is the combined log-likelihood of a transformed image conditioned on each of the applied transformations in T, under a naïve (typically incorrect) assumption that all of these conditional distributions are independent. We approximate each conditional distribution to be y(Ti(x)) | Ti ∼ Dir(αi), where αi ∈ ℝ^k_+, x ∼ pX(x), i ∼ Uni(0, k − 1), and pX(x) is the real data probability distribution of “normal” samples. Our choice of the Dirichlet distribution is motivated by two reasons. First, it is a common choice for distribution approximation when samples (i.e., y) reside in the unit k − 1 simplex. Second, there are efficient methods for numerically estimating the maximum likelihood parameters [24, 34]. We denote the estimation by α̃i. Using the estimated Dirichlet parameters, the normality score of an image x is:
\[
n_S(x) = \sum_{i=0}^{k-1} \Bigg( \log \Gamma\Big(\sum_{j=0}^{k-1} [\tilde{\alpha}_i]_j\Big) - \sum_{j=0}^{k-1} \log \Gamma\big([\tilde{\alpha}_i]_j\big) + \sum_{j=0}^{k-1} \big([\tilde{\alpha}_i]_j - 1\big) \log \mathbf{y}(T_i(x))_j \Bigg).
\]
Since all α̃i are constant w.r.t. x, we can ignore the first two terms in the parentheses and redefine a simplified normality score, which is equivalent in its normality ordering:
\[
n_S(x) = \sum_{i=0}^{k-1} \sum_{j=0}^{k-1} \big([\tilde{\alpha}_i]_j - 1\big) \log \mathbf{y}(T_i(x))_j = \sum_{i=0}^{k-1} (\tilde{\alpha}_i - 1) \cdot \log \mathbf{y}(T_i(x)).
\]
As demonstrated in our experiments, this score tightly captures normality in the sense that for two images x1 and x2, nS(x1) > nS(x2) tend to imply that x1 is “more normal” than x2. For each i ∈ {0, . . . , k−1}, we estimate α̃i using the fixed point iteration method described in [24], combined with the initialization step proposed by Wicker et al. [34]. Each vector α̃i is estimated based on the set Si = {y(Ti(x))|x ∈ S}. We note that the use of an independent image set for estimating α̃i may improve performance. A full and detailed algorithm is available in the supplementary material.
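Given estimated parameters α̃i (one vector per transformation, obtained for instance with a fixed-point Dirichlet maximum-likelihood routine applied to the softmax outputs of the normal training images), the ordering-equivalent score can be sketched as follows; the estimation step itself is assumed to have been done separately.

```python
# Sketch of the Dirichlet normality score in its simplified, ordering-equivalent
# form: sum_i (alpha_tilde_i - 1) . log y(T_i(x)).
import numpy as np

def dirichlet_normality_score(softmax_per_transform, alpha_tilde, eps=1e-12):
    """softmax_per_transform: (k, k) array, row i is y(T_i(x));
    alpha_tilde: (k, k) array, row i is the estimated Dirichlet parameter vector."""
    log_y = np.log(softmax_per_transform + eps)
    return float(np.sum((alpha_tilde - 1.0) * log_y))
```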
A simplified version of the proposed normality score was used during preliminary stages of this research: n̂S(x) ≜ (1/k) ∑_{j=0}^{k−1} [y(Tj(x))]j. This simple score function eliminates the need for the Dirichlet parameter estimation, is easy to implement, and still achieves excellent results that are only slightly worse than the above Dirichlet score.
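The simplified score is just the average of the diagonal entries of the transformation-probability matrix; an illustrative sketch:

```python
# Simplified normality score: the mean probability the classifier assigns to the
# correct transformation index j when scoring T_j(x), averaged over all j.
import numpy as np

def simplified_normality_score(softmax_per_transform):
    """softmax_per_transform: (k, k) array, row j is y(T_j(x))."""
    return float(np.mean(np.diag(softmax_per_transform)))
```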
5 Experimental Results
In this section, we describe our experimental setup and evaluation method, the baseline algorithms we use for comparison purposes, the datasets, and the implementation details of our technique (architecture used and geometric transformations applied). We then present extensive experiments on the described publicly available datasets, demonstrating the effectiveness of our scoring function. Finally, we show that our method is also effective at identifying out-of-distribution samples in labeled multi-class datasets.
5.1 Baseline Methods
We compare our method to state-of-the-art deep learning approaches as well as a few classic methods.
One-Class SVM. The one-class support vector machine (OC-SVM) is a classic and popular kernel-based method for novelty detection [29, 30]. It is typically employed with an RBF kernel, and learns a collection of closed sets in the input space, containing most of the training samples. Samples residing outside of these enclosures are deemed anomalous. Following [41, 7], we use this model on raw input (i.e., a flattened array of the pixels comprising an image), as well as on a low-dimensional representation obtained by taking the bottleneck layer of a trained convolutional autoencoder. We name these models RAW-OC-SVM and CAE-OC-SVM, respectively. It is very important to note that in both these variants of OC-SVM, we provide the OC-SVM with an unfair significant advantage by optimizing its hyperparameters in hindsight; i.e., the OC-SVM hyperparameters (ν and γ) were optimized to maximize AUROC and taken to be the best performing values among those in the parameter grid: ν ∈ {0.1, 0.2, . . . , 0.9}, γ ∈ {2^−7, 2^−6, . . . , 2^2}. Note that the hyperparameter optimization procedure has been provided with a two-class classification problem. There are, in fact, methods for optimizing these parameters without hindsight knowledge [33, 3]. These methods are likely to degrade the performance of the OC-SVM models. The convolutional autoencoder is chosen to have a similar architecture to that of DCGAN [26], where the encoder is adapted from the discriminator, and the decoder is adapted from the generator.
In addition, we compare our method to a recently published, end-to-end variant of OC-SVM called One-Class Deep SVDD [27]. This model, which we name E2E-OC-SVM, uses an objective similar to that of the classic SVDD [30] to optimize the weights of a deep architecture. However, there are constraints on the used architecture, such as lack of bias terms and unbounded activation functions. The experimental setup used by the authors is identical to ours, allowing us to report their published results as they are, on CIFAR-10.
Deep structured energy-based models. A deep structured energy-based model (DSEBM) is a state-of-the-art deep neural technique, whose output is the energy function (negative log probability) associated with an input sample [41]. Such models can be trained efficiently using score matching in a similar way to a denoising autoencoder [32]. Samples associated with high energy are considered anomalous. While the authors of [41] used a very shallow architecture in their model (which is ineffective in our problems), we selected a deeper one when using their method. The chosen architecture is the same as that of the encoder part in the convolutional autoencoder used by CAE-OC-SVM, with ReLU activations in the encoding layer.
Deep Autoencoding Gaussian Mixture Model. A deep autoencoding Gaussian mixture model (DAGMM) is another state-of-the-art deep autoencoder-based model, which generates a low-dimensional representation of the training data, and leverages a Gaussian mixture model to perform density estimation on the compact representation [43]. A DAGMM jointly and simultaneously optimizes the parameters of the autoencoder and the mixture model in an end-to-end fashion, thus leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The architecture of the autoencoder we used is similar to that of the convolutional autoencoder from the CAE-OC-SVM experiment, but with linear activation in the representation layer. The estimation network is inspired by the one in the original DAGMM paper.
Anomaly Detection with a Generative Adversarial Network. This network, given the acronym ADGAN, is a GAN-based model, which learns a one-way mapping from a low-dimensional multivariate Gaussian distribution to the distribution of the training set [7]. After training the GAN on the “normal” dataset, the discriminator is discarded. Given a sample, ADGAN uses gradient descent to estimate the inverse mapping from the image to the low-dimensional seed. The seed is then used to generate a sample, and the anomaly score is the ℓ2 distance between that image and the original one. In our experiments, for the generative model of the ADGAN we incorporated the same architecture used by the authors of the original paper, namely, the original DCGAN architecture [26]. As described, ADGAN requires only a trained generator.
5.2 Datasets
We consider four image datasets in our experiments: CIFAR-10, CIFAR-100 [21], CatsVsDogs [11], and fashion-MNIST [38], which are described below. We note that in all our experiments, pixel values of all images were scaled to reside in [−1, 1]. No other pre-processing was applied.
• CIFAR-10: consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images, divided equally across the classes.
• CIFAR-100: similar to CIFAR-10, but with 100 classes containing 600 images each. This set has a fixed train/test partition with 500 training images and 100 test images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses, which we use in our experiments.
• Fashion-MNIST: a relatively new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. In order to be compatible with the CIFAR-10 and CIFAR-100 classification architectures, we zero-pad the images so that they are of size 32x32.
• CatsVsDogs: extracted from the ASIRRA dataset, it contains 25,000 images of cats and dogs, 12,500 in each class. We split this dataset into a training set containing 10,000 images, and a test set of 2,500 images in each class. We also rescale each image to size 64x64. The average dimension size of the original images is roughly 360x400.
5.3 Experimental Protocol
We employ a one-vs-all evaluation scheme in each experiment. Consider a dataset with C classes, from which we create C different experiments. For each 1 ≤ c ≤ C, we designate class c to be the single class of normal images. We take S to be the set of images in the training set belonging to class c. The set S is considered to be the set of “normal” samples based on which the model must learn a normality score function. We emphasize that S contains only normal samples, and no additional samples are provided to the model during training. The normality score function is then applied on all images in the test set, containing both anomalies (not belonging to class c) and normal samples (belonging to class c), in order to evaluate the model’s performance. As stated in Section 3, we completely ignore the problem of choosing the appropriate anomaly threshold (λ) on the normality score, and quantify performance using the area under the ROC curve metric, which is commonly utilized as a performance measure for anomaly detection models. We are able to compute the ROC curve since we have full knowledge of the ground truth labels of the test set.2
Hyperparameters and Optimization Methods. For the self-labeled classification task, we use 72 geometric transformations. These transformations are specified in the supplementary material (see also Section 6 discussing the intuition behind the choice of these transformations). Our model is implemented using the state-of-the-art Wide Residual Network (WRN) model [40]. The parameters for the depth and width of the model for all 32x32 datasets were chosen to be 10 and 4, respectively, and for the CatsVsDogs dataset (64x64), 16 and 8, respectively. These hyperparameters were selected prior to conducting any experiment, and were fixed for all runs.3 We used the Adam [20] optimizer with default hyperparameters. Batch size for all methods was set to 128. The number of epochs was set to 200 on all benchmark models, except for training the GAN in ADGAN for which it was set to 100 and produced superior results. We trained the WRN for ⌈200/|T|⌉ epochs on the self-labeled set ST, to obtain approximately the same number of parameter updates as would have been performed had we trained on S for 200 epochs.
5.4 Results
In Table 1 we present our results. The table is composed of four blocks, with each block containing several anomaly detection problems derived from the same dataset (for lack of space we omit class names from the tables, and those can be found in the supplementary material). For example, the first row contains the results for an anomaly detection problem where the normal class is class 0 in CIFAR-10 (airplane), and the anomalous instances are images from all other classes in CIFAR-10 (classes 1-9). In this row (as in any other row), we see the average AUROC results over five runs and the corresponding standard error of the mean for all baseline methods. The results of our algorithm are shown in the rightmost column. OC-SVM variants and ADGAN were run once due to their
2A complete code of the proposed method’s implementation and the conducted experiments is available at https://github.com/izikgo/AnomalyDetectionTransformations.
3The parameters 16, 8 were used on CIFAR-10 by the authors. Due to the induced computational complexity, we chose smaller values. When testing the parameters 16, 8 with our method on the CIFAR-10 dataset, anomaly detection results improved.
time complexity. The best performing method in each row appears in bold. For example, in the CatsVsDogs experiments where dog (class 1) is the “normal” class, the best baseline (DSEBM) achieves 0.561 AUROC. Note that the trivial average AUROC is always 0.5, regardless of the proportion of normal vs. anomalous instances. Our method achieves an average AUROC of 0.888.
Several interesting observations can be made by inspecting the numbers in Table 1. Our relative advantage is most prominent when focusing on the larger images. All baseline methods, including OC-SVM variants, which enjoy hindsight information, only achieve performance that is slightly better than random guessing in the CatsVsDogs dataset. On the smaller-sized images, the baselines can perform much better. In most cases, however, our algorithm significantly outperformed the other methods. Interestingly, in many cases where the baseline methods struggled with separating normal samples from anomalies, our method excelled. See, for instance, the cases of automobile (class 1) and horse (class 7; see the CIFAR-10 section in the table). Inspecting the results on CIFAR-100 (where 20 super-classes defined the partition), we observe that our method was challenged by the diversity inside the normal class. In this case, there are a few normal classes on which our method did not perform well; see e.g., non-insect invertebrates (class 13), insects (class 7), and household electrical devices (class 5). In Section 6 we speculate why this might happen. We used the super-class partitioning of CIFAR-100 (instead of the 100 base classes) because labeled data for single base classes is scarce. On the fashion-MNIST dataset, all methods, excluding DAGMM, performed very well, with a slight advantage to our method. The fashion-MNIST dataset was designed as a drop-in replacement for the original MNIST dataset, which is slightly more challenging. Classic models, such as SVM with an RBF kernel, can perform well on this task, achieving almost 90% accuracy [38].
5.5 Identifying Out-of-distribution Samples in Labeled Multi-class Datasets
Although it is not the main focus of this work, we have also tackled the problem of identifying out-of-distribution samples in labeled multi-class datasets (i.e., identifying images that belong to a different distribution than that of the labeled dataset). To this end, we created a two-headed classification model based on the WRN architecture. The model has two separate softmax output layers: one for categories (e.g., cat, truck, airplane, etc.) and another for classifying transformations (our method). We use the categories softmax layer only during training. At test time, we only utilize the transformations softmax layer output as described in Section 4.2, but use the simplified normality score. When training on the CIFAR-10 dataset, and taking the tiny-imagenet (resized) dataset to be anomalies as done by Liang et al. [23] in their ODIN method, we improved ODIN’s AUROC/AUPR-In/AUPR-Out results from 92.1/89.0/93.6 to 95.7/96.1/95.4, respectively. It is important to note that in contrast to our method, ODIN is inapplicable in the pure single-class setting, where there are no class labels.
6 On the Intuition for Using Geometric Transformations
In this section we explain our intuition behind the choice of the set of transformations used in our method. Any bijection of a set (having some geometric structure) to itself is a geometric transformation. Among all geometric transformations, we only used compositions of horizontal flipping, translations, and rotations in our model, resulting in 72 distinct transformations (see supplementary material for the entire list). In the earlier stages of this work, we tried a few non-geometric transformations (e.g., Gaussian blur, sharpening, gamma correction), which degraded performance, so we abandoned them altogether. We hypothesize that non-geometric transformations perform worse since they can eliminate important features of the learned image set.
We speculate that the effectiveness of the chosen transformation set is affected by their ability to preserve spatial information about the given “normal” images, as well as the ability of our classifier to predict which transformation was applied on a given transformed image. In addition, for a fixed type-II error rate, the type-I error rate of our method decreases the harder it gets for the trained classifier to correctly predict the identity of the transformations that were applied on anomalies.
We demonstrate this idea by conducting three experiments. Each experiment has the following structure. We train a neural classifier to discriminate between two transformations, where the normal class is taken to be images of a single digit from the MNIST [22] training set. We then evaluate our method using AUROC on a set of images comprising normal images and images of another digit from the MNIST test set. The three experiments are:
• Normal digit: ‘8’. Anomaly: ‘3’. Transformations: Identity and horizontal flip. It can be expected that due to the invariance of ‘8’ to horizontal flip, the classifier will have difficulties learning distinguishing features. Indeed, when presented with the test set containing ‘3’ as anomalies (which do not exhibit such invariance), our method did not perform well, achieving an AUROC of 0.646.
• Normal digit: ‘3’. Anomaly: ‘8’. Transformations: Identity and horizontal flip. In contrast to the previous experiment, the transformed variants of digit ‘3’ can easily be classified to the correct transformation. Indeed, our method, using the trained model for ‘3’, achieved 0.957 AUROC in this experiment.
• Normal digit: ‘8’. Anomaly: ‘3’. Transformations: Identity and translation by 7 pixels. In this experiment, the transformed images are distinguishable from each other. As can be expected, our method performs well in this case, achieving an AUROC of 0.919.
To convince ourselves that high scores given by our scoring function indicate membership in the normal class, we tested how an image would need to change in order to obtain a high normality score. This was implemented by optimizing an input image using gradient ascent to maximize the simplified variant of the normality score described in section 5.5 (see, e.g., [39]). Thus, we trained a classifier on the digit ‘3’ from the MNIST dataset, with a few geometric transformations. We then took an arbitrary image of the digit ‘0’ and optimized it. In Figure 1(a) we present two such images, where the left one is the original, and the right is the result after taking 200 gradient ascent steps that “optimize” the original image. It is evident that the ‘0’ digits have deformed, now resembling the digit ‘3’. This illustrates the fact that the classification model has learned features relevant to the “normal” class. To further strengthen our hypothesis, we conducted the same experiment using images from the normal class (i.e., images of the digit ‘3’). We expected these images to maintain their appearance during the optimization process, since they already contain the features that should contribute to a high normality score. Figure 1(b) contains two examples of the process, where in each row, the left image is the initial ‘3’, and the right is the result after taking 200 gradient ascent steps on it. As hypothesized, it is evident that the images remained roughly unchanged at the end of the optimization process (regardless of their different orientations).
7 Conclusion and Future Work
We presented a novel method for anomaly detection of images, which learns a meaningful representation of the learned training data in a fully discriminative fashion. The proposed method is computationally efficient, and as simple to implement as a multi-class classification task. Unlike best-known methods so far, our approach completely alleviates the need for a generative component (autoencoders/GANs). Most importantly, our method significantly advances the state-of-the-art by offering a dramatic improvement over the best available anomaly detection methods. Our results open many avenues for future research. First, it is important to develop a theory that grounds the use of geometric transformations. It would be interesting to study the possibility of selecting transformations that would best serve a given training set, possibly with prior knowledge on the anomalous samples. Another avenue is explicitly optimizing the set of transformations. Due to the effectiveness of our method, it is tempting to try adapting it to other settings or utilizing it in applications. Some examples are open-world classification, selective classification and regression [36, 35, 13], uncertainty estimation [15], and deep active learning [12, 14]. Finally, it would be interesting to consider using our techniques in settings where additional unlabeled “contaminated” data (consisting of both normal and novel instances) is provided, perhaps within a transductive learning framework [10].
Acknowledgements
This research was partially supported by the Israel Science Foundation (grant No. 710/18). | 1. What is the focus and contribution of the paper on image anomaly detection?
2. What are the strengths of the proposed approach, particularly in its ability to learn a scoring function?
3. What are the weaknesses of the paper regarding its comparisons with other works?
4. How does the reviewer assess the effectiveness of the method in its performance on diverse databases?
5. Do you have any questions regarding the appropriateness of the comparison between the proposed method and other approaches? | Review | Review
The authors present a framework for image anomaly detection. This method seems to advance the state of the art by learning a scoring function n_S(x) from which a classifier can be constructed with an anomaly threshold. They also did experiments to demonstrate the idea about the causes of its effectiveness. The authors have verified the performance of their method on sufficient and diverse databases, but the approaches which they compare with are not very new to me; it may be more convincing if the experiments were conducted by comparing with the latest methods, e.g.,
#1 Hendrycks, Dan, and Kevin Gimpel. "A baseline for detecting misclassified and out-of-distribution examples in neural networks." International Conference on Learning Representations (ICLR), 2017.
#2 Liang, Shiyu, Yixuan Li, and R. Srikant. "Enhancing the reliability of out-of-distribution image detection in neural networks." International Conference on Learning Representations (ICLR), 2018.
The authors also analyze the reason why their method is not outstanding on the same datasets, but the method DAGMM they compare with is unsupervised, while their method in this paper seems supervised to me. In this sense, this comparison may not be very appropriate to me, or it should at least be further clarified. The rebuttal seems to have addressed my major concerns in the experiments.
NIPS | Title
Deep Anomaly Detection Using Geometric Transformations
Abstract
We consider the problem of anomaly detection in images, and present a new detection technique. Given a sample of images, all known to belong to a “normal” class (e.g., dogs), we show how to train a deep neural model that can detect out-of-distribution images (i.e., non-dog objects). The main idea behind our scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied on all the given images. The auxiliary expertise learned by the model generates feature detectors that effectively identify, at test time, anomalous images based on the softmax activation statistics of the model when applied on transformed images. We present extensive experiments using the proposed detector, which indicate that our technique consistently improves all known algorithms by a wide margin.
1 Introduction
Future machine learning applications such as self-driving cars or domestic robots will, inevitably, encounter various kinds of risks including statistical uncertainties. To be usable, these applications should be as robust as possible to such risks. One such risk is exposure to statistical errors or inconsistencies due to distributional divergences or noisy observations. The well-known problem of anomaly/novelty detection highlights some of these risks, and its resolution is of the utmost importance to mission critical machine learning applications. While anomaly detection has long been considered in the literature, conclusive understanding of this problem in the context of deep neural models is sorely lacking. For example, in machine vision applications, presently available novelty detection methods can suffer from poor performance in some problems, as demonstrated by our experiments.
In the basic anomaly detection problem, we have a sample from a “normal” class of instances, emerging from some distribution, and the goal is to construct a classifier capable of detecting outof-distribution “abnormal” instances [5].1 There are quite a few variants of this basic anomaly detection problem. For example, in the positive and unlabeled version, we are given a sample from the “normal” class, as well as an unlabeled sample that is contaminated with abnormal instances. This contaminated-sample variant turns out to be easier than the pure version of the problem (in the sense that better performance can be achieved) [2]. In the present paper, we focus on the basic (and harder) version of anomaly detection, and consider only machine vision applications for which deep models (e.g., convolutional neural networks) are essential.
There are a few works that tackle the basic, pure-sample-anomaly detection problem in the context of images. The most successful results among these are reported for methods that rely on one of
1Unless otherwise mentioned, the use of the adjective “normal” is unrelated to the Gaussian distribution.
the following two general schemes. The first scheme consists of methods that analyze errors in reconstruction, which is based either on autoencoders or generative adversarial models (GANs) trained over the normal class. In the former case, reconstruction deficiency of a test point indicates abnormality. In the latter, the reconstruction error of a test instance is estimated using optimization to find the approximate inverse of the generator. The second class of methods utilizes an autoencoder trained over the normal class to generate a low-dimensional embedding. To identify anomalies, one uses classical methods over this embedding, such as low-density rejection [8, 9] or single-class SVM [29, 30]. A more advanced variant of this approach combines these two steps (encoding and then detection) using an appropriate cost function, which is used to train a single neural model that performs both procedures [27].
In this paper we consider a completely different approach that bypasses reconstruction (as in autoencoders or GANs) altogether. The proposed method is based on the observation that learning to discriminate between many types of geometric transformations applied to normal images, encourages learning of features that are useful for detecting novelties. Thus, we train a multi-class neural classifier over a self-labeled dataset, which is created from the normal instances and their transformed versions, obtained by applying numerous geometric transformations. At test time, this discriminative model is applied on transformed instances of the test example, and the distribution of softmax response values of the “normal” train images is used for effective detection of novelties. The intuition behind our method is that by training the classifier to distinguish between transformed images, it must learn salient geometrical features, some of which are likely to be unique to the single class.
We present extensive experiments of the proposed method and compare it to several state-of-the-art methods for pure anomaly detection. We evaluate performance using a one-vs-all scheme over several image datasets such as CIFAR-100, which (to the best of our knowledge) have never been considered before in this setting. Our results overwhelmingly indicate that the proposed method achieves dramatic improvements over the best available methods. For example, on the CIFAR-10 dataset (10 different experiments), we improved the top performing baseline AUROC by 32% on average. In the CatsVsDogs dataset, we improve the top performing baseline AUROC by 67%.
2 Related Work
The literature related to anomaly detection is extensive and beyond the scope of this paper (see, e.g., [5, 42] for wider scope surveys). Our focus is on anomaly detection in the context of images and deep learning. In this scope, most published works rely, implicitly or explicitly, on some form of (unsupervised) reconstruction learning. These methods can be roughly categorized into two approaches.
Reconstruction-based anomaly score. These methods assume that anomalies possess different visual attributes than their non-anomalous counterparts, so it will be difficult to compress and reconstruct them based on a reconstruction scheme optimized for single-class data. Motivated by this assumption, the anomaly score for a new sample is given by the quality of the reconstructed image, which is usually measured by the `2 distance between the original and reconstructed image. Classic methods belonging to this category include Principal Component Analysis (PCA) [18], and Robust-PCA [4]. In the context of deep learning, various forms of deep autoencoders are the main tool used for reconstruction-based anomaly scoring. Xia et al. [37] use a convolutional autoencoder with a regularizing term that encourages outlier samples to have a large reconstruction error. Variational autoencoder is used by An and Cho [1], where they estimate the reconstruction probability through Monte-Carlo sampling, from which they extract an anomaly score. Another related method, which scores an unseen sample based on the ability of the model to generate a similar one, uses Generative Adversarial Networks (GANS) [16]. Schlegl et al. [28] use this approach on optical coherence tomography images of the retina. Deecke et al. [7] employ a variation of this model called ADGAN, reporting slightly superior results on CIFAR-10 [21] and MNIST [22].
Reconstruction-based representation learning. Many conventional anomaly detection methods use a low-density rejection principle [8]. Given data, the density at each point is estimated, and new samples are deemed anomalous when they lie in a low-density region. Examples of such methods are kernel density estimation (KDE) [25], and Robust-KDE [19]. This approach is known to be problematic when handling high-dimensional data due to the curse of dimensionality. To mitigate
this problem, practitioners often use a two-step approach of learning a compact representation of the data, and then applying density estimation methods on the lower-dimensional representation [4]. More advanced techniques combine these two steps and aim to learn a representation that facilitates the density estimation task. Zhai et al. [41] utilize an energy-based model in the form of a regularized autoencoder in order to map each sample to an energy score, which is the estimated negative logprobability of the sample under the data distribution. Zong et al. [43] uses the representation layer of an autoencoder in order to estimate parameters of a Gaussian mixture model.
There are few approaches that tackled the anomaly detection problem without resorting to some form of reconstruction. A recent example was published by Ruff et al. [27], who have developed a deep one-class SVM model. The model consists of a deep neural network whose weights are optimized using a loss function resembling the SVDD [30] objective.
3 Problem Statement
In this paper, we consider the problem of anomaly detection in images. Let 𝒳 be the space of all “natural” images, and let X ⊆ 𝒳 be the set of images defined as normal. Given a sample S ⊆ X, and a type-II error constraint (rate of normal samples that were classified as anomalies), we would like to learn the best possible (in terms of type-I error) classifier hS(x) : 𝒳 → {0, 1}, where hS(x) = 1 ⇔ x ∈ X, which satisfies the constraint. Images that are not in X are referred to as anomalies or novelties.
To control the trade-off between type-I and type-II errors when classifying, a common practice is to learn a scoring (ranking) function nS(x) : 𝒳 → R, such that higher scores indicate that samples are more likely to be in X. Once such a scoring function has been learned, a classifier can be constructed from it by specifying an anomaly threshold (λ):
$h^{\lambda}_{S}(x) = \begin{cases} 1 & n_S(x) \ge \lambda \\ 0 & n_S(x) < \lambda. \end{cases}$
As many related works [28, 31, 17], in this paper we also focus only on learning the scoring function nS(x), and completely ignore the constrained binary decision problem. A useful (and common practice) performance metric to measure the quality of the trade-off of a given scoring function is the area under the Receiver Operating Characteristic (ROC) curve, which we denote here as AUROC. When prior knowledge on the proportion of anomalies is available, the area under the precision-recall curve (AUPR) metric might be preferred [6]. We also report on performance in term of this metric in the supplementary material.
4 Discriminative Learning of an Anomaly Scoring Function Using Geometric Transformations
As noted above, we aim to learn a scoring function nS (as described in Section 3) in a discriminative fashion. To this end, we create a self-labeled dataset of images from our initial training set S, by using a class of geometric transformations T . The created dataset, denoted ST , is generated by applying each geometric transformation in T on all images in S, where we label each transformed image with the index of the transformation that was applied on it. This process creates a self-labeled multi-class dataset (with |T | classes) whose cardinality is |T ||S|. After the creation of ST , we train a multi-class image classifier whose objective is to predict, for each image, the index of its generating transformation in T . At inference time, given an unseen image x, we decide whether it belongs to the normal class by first applying each transformation on it, and then applying the classifier on each of the |T | transformed images. Each such application results in a softmax response vector of size |T |. The final normality score is defined using the combined log-likelihood of these vectors under an estimated distribution of “normal” softmax vectors (see details below).
4.1 Creating and Learning the Self-Labeled Dataset
Let T = {T0, T1, . . . , Tk−1} be a set of geometric transformations, where for each 1 ≤ i ≤ k−1, Ti : X → X , and T0(x) = x is the identity transformation. The set T is a hyperparameter of our method, on which we elaborate in Section 6. The self-labeled set ST is defined as
$S_T \triangleq \{(T_j(x),\, j) : x \in S,\ T_j \in \mathcal{T}\}.$
Thus, for any x ∈ S, j is the label of Tj(x). We use this set to straightforwardly learn a deep k-class classification model, fθ, which we train over the self-labeled dataset ST using the standard cross-entropy loss function. To this end, any useful classification architecture and optimization method can be employed for this task.
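To make the construction concrete, here is a minimal Python sketch of building the self-labeled set S_T. The four transformations listed are only an illustrative subset of the 72 compositions used in the paper, and the random array stands in for a real sample of normal images.

```python
import numpy as np

# Illustrative subset of T: identity, horizontal flip, and two rotations.
TRANSFORMS = [
    lambda img: img,                 # T_0: identity
    lambda img: np.fliplr(img),      # horizontal flip
    lambda img: np.rot90(img, k=1),  # rotation by 90 degrees
    lambda img: np.rot90(img, k=2),  # rotation by 180 degrees
]

def make_self_labeled_set(S):
    """Apply every transformation to every normal image; the label of a
    transformed image is the index of the transformation that produced it."""
    images, labels = [], []
    for x in S:
        for j, T in enumerate(TRANSFORMS):
            images.append(T(x))
            labels.append(j)
    return np.stack(images), np.array(labels)

# Toy usage: 32 random "images" standing in for the normal sample S.
S = np.random.rand(32, 16, 16, 3)
X_train, y_train = make_self_labeled_set(S)
print(X_train.shape, y_train.shape)  # (128, 16, 16, 3) (128,)
# X_train, y_train can now be fed to any k-class classifier (a WRN in the paper).
```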
4.2 Dirichlet Normality Score
We now define our normality score function nS(x). Fix a set of geometric transformations T = {T0, T1, . . . , Tk−1}, and assume that a k-class classification model fθ has been trained on the self-labeled set ST (as described above). For any image x, let y(x) ≜ softmax(fθ(x)), i.e., the vector of softmax responses of the classifier fθ applied on x. To construct our normality score we define:
$n_S(x) \triangleq \sum_{i=0}^{k-1} \log p\big(\mathbf{y}(T_i(x)) \mid T_i\big),$
which is the combined log-likelihood of a transformed image conditioned on each of the applied transformations in T , under a naïve (typically incorrect) assumption that all of these conditional distributions are independent. We approximate each conditional distribution to be y(Ti(x))|Ti ∼ Dir(αi), where αi ∈ Rk+, x ∼ pX(x), i ∼ Uni(0, k − 1), and pX(x) is the real data probability distribution of “normal” samples. Our choice of the Dirichlet distribution is motivated by two reasons. First, it is a common choice for distribution approximation when samples (i.e., y) reside in the unit k − 1 simplex. Second, there are efficient methods for numerically estimating the maximum likelihood parameters [24, 34]. We denote the estimation by α̃i. Using the estimated Dirichlet parameters, the normality score of an image x is:
$n_S(x) = \sum_{i=0}^{k-1} \left[ \log \Gamma\Big(\sum_{j=0}^{k-1} [\tilde{\alpha}_i]_j\Big) - \sum_{j=0}^{k-1} \log \Gamma([\tilde{\alpha}_i]_j) + \sum_{j=0}^{k-1} ([\tilde{\alpha}_i]_j - 1) \log \mathbf{y}(T_i(x))_j \right].$
Since all α̃i are constant w.r.t. x, we can ignore the first two terms inside the brackets and redefine a simplified normality score, which is equivalent in its normality ordering:
$n_S(x) = \sum_{i=0}^{k-1} \sum_{j=0}^{k-1} ([\tilde{\alpha}_i]_j - 1) \log \mathbf{y}(T_i(x))_j = \sum_{i=0}^{k-1} (\tilde{\alpha}_i - 1) \cdot \log \mathbf{y}(T_i(x)).$
As demonstrated in our experiments, this score tightly captures normality in the sense that for two images x1 and x2, nS(x1) > nS(x2) tend to imply that x1 is “more normal” than x2. For each i ∈ {0, . . . , k−1}, we estimate α̃i using the fixed point iteration method described in [24], combined with the initialization step proposed by Wicker et al. [34]. Each vector α̃i is estimated based on the set Si = {y(Ti(x))|x ∈ S}. We note that the use of an independent image set for estimating α̃i may improve performance. A full and detailed algorithm is available in the supplementary material.
A simplified version of the proposed normality score was used during preliminary stages of this research: $\hat{n}_S(x) \triangleq \frac{1}{k} \sum_{j=0}^{k-1} [\mathbf{y}(T_j(x))]_j$. This simple score function eliminates the need for the Dirichlet parameter estimation, is easy to implement, and still achieves excellent results that are only slightly worse than the above Dirichlet score.
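Both scores are straightforward to compute once the per-transformation softmax vectors and (for the full score) the estimated Dirichlet parameters are available. The sketch below assumes these are given as arrays; it is not the authors' reference implementation.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_normality_score(softmax_vectors, alphas):
    """softmax_vectors[i] = y(T_i(x)) and alphas[i] = estimated Dirichlet
    parameters (alpha_tilde_i) for transformation i; both are length-k arrays.
    Returns sum_i log Dir(y(T_i(x)) | alpha_tilde_i)."""
    score = 0.0
    for y, a in zip(softmax_vectors, alphas):
        score += gammaln(a.sum()) - gammaln(a).sum() + np.sum((a - 1.0) * np.log(y))
    return score

def simplified_normality_score(softmax_vectors):
    """(1/k) * sum_j [y(T_j(x))]_j: the probability assigned to the correct
    transformation index, averaged over transformations."""
    return float(np.mean([y[j] for j, y in enumerate(softmax_vectors)]))

# Toy usage with k = 3 transformations.
rng = np.random.default_rng(0)
y_vecs = rng.dirichlet(np.ones(3), size=3)   # stand-ins for y(T_0(x)), y(T_1(x)), y(T_2(x))
alpha_hat = [np.full(3, 2.0)] * 3            # stand-ins for the estimated alpha_tilde_i
print(dirichlet_normality_score(y_vecs, alpha_hat), simplified_normality_score(y_vecs))
```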
5 Experimental Results
In this section, we describe our experimental setup and evaluation method, the baseline algorithms we use for comparison purposes, the datasets, and the implementation details of our technique (architecture used and geometric transformations applied). We then present extensive experiments on the described publicly available datasets, demonstrating the effectiveness of our scoring function. Finally, we show that our method is also effective at identifying out-of-distribution samples in labeled multi-class datasets.
5.1 Baseline Methods
We compare our method to state-of-the-art deep learning approaches as well as a few classic methods.
One-Class SVM. The one-class support vector machine (OC-SVM) is a classic and popular kernel-based method for novelty detection [29, 30]. It is typically employed with an RBF kernel, and learns a collection of closed sets in the input space, containing most of the training samples. Samples residing outside of these enclosures are deemed anomalous. Following [41, 7], we use this model on raw input (i.e., a flattened array of the pixels comprising an image), as well as on a low-dimensional representation obtained by taking the bottleneck layer of a trained convolutional autoencoder. We name these models RAW-OC-SVM and CAE-OC-SVM, respectively. It is very important to note that in both these variants of OC-SVM, we provide the OC-SVM with an unfair significant advantage by optimizing its hyperparameters in hindsight; i.e., the OC-SVM hyperparameters (ν and γ) were optimized to maximize AUROC and taken to be the best performing values among those in the parameter grid: ν ∈ {0.1, 0.2, . . . , 0.9}, γ ∈ {2^−7, 2^−6, . . . , 2^2}. Note that the hyperparameter optimization procedure has been provided with a two-class classification problem. There are, in fact, methods for optimizing these parameters without hindsight knowledge [33, 3]. These methods are likely to degrade the performance of the OC-SVM models. The convolutional autoencoder is chosen to have a similar architecture to that of DCGAN [26], where the encoder is adapted from the discriminator, and the decoder is adapted from the generator.
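For reference, a hindsight grid search of this kind can be sketched with scikit-learn as below; the data here is synthetic and the grid mirrors the one stated above.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score

def best_ocsvm_auroc(X_train, X_test, y_test):
    """Fit OC-SVMs over the (nu, gamma) grid and return the best test AUROC,
    i.e. the hindsight-optimized baseline score (y_test: 1 = normal, 0 = anomaly)."""
    best = 0.0
    for nu in np.arange(0.1, 1.0, 0.1):
        for gamma in 2.0 ** np.arange(-7, 3):
            model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X_train)
            scores = model.decision_function(X_test)  # higher = more "normal"
            best = max(best, roc_auc_score(y_test, scores))
    return best

# Toy usage on random features standing in for raw pixels or CAE embeddings.
X_train = np.random.rand(200, 16)
X_test = np.random.rand(100, 16)
y_test = np.random.randint(0, 2, size=100)
print(best_ocsvm_auroc(X_train, X_test, y_test))
```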
In addition, we compare our method to a recently published, end-to-end variant of OC-SVM called One-Class Deep SVDD [27]. This model, which we name E2E-OC-SVM, uses an objective similar to that of the classic SVDD [30] to optimize the weights of a deep architecture. However, there are constraints on the used architecture, such as lack of bias terms and unbounded activation functions. The experimental setup used by the authors is identical to ours, allowing us to report their published results as they are, on CIFAR-10.
Deep structured energy-based models. A deep structured energy-based model (DSEBM) is a state-of-the-art deep neural technique, whose output is the energy function (negative log probability) associated with an input sample [41]. Such models can be trained efficiently using score matching in a similar way to a denoising autoencoder [32]. Samples associated with high energy are considered anomalous. While the authors of [41] used a very shallow architecture in their model (which is ineffective in our problems), we selected a deeper one when using their method. The chosen architecture is the same as that of the encoder part in the convolutional autoencoder used by CAEOC-SVM, with ReLU activations in the encoding layer.
Deep Autoencoding Gaussian Mixture Model. A deep autoencoding Gaussian mixture model (DAGMM) is another state-of-the-art deep autoencoder-based model, which generates a lowdimensional representation of the training data, and leverages a Gaussian mixture model to perform density estimation on the compact representation [43]. A DAGMM jointly and simultaneously optimizes the parameters of the autoencoder and the mixture model in an end-to-end fashion, thus leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The architecture of the autoencoder we used is similar to that of the convolutional autoencoder from the CAE-OC-SVM experiment, but with linear activation in the representation layer. The estimation network is inspired by the one in the original DAGMM paper.
Anomaly Detection with a Generative Adversarial Network. This network, given the acronym ADGAN, is a GAN based model, which learns a one-way mapping from a low-dimensional multivariate Gaussian distribution to the distribution of the training set [7]. After training the GAN on the “normal” dataset, the discriminator is discarded. Given a sample, the training of ADGAN uses gradient descent to estimate the inverse mapping from the image to the low-dimensional seed. The seed is then used to generate a sample, and the anomaly score is the `2 distance between that image and the original one. In our experiments, for the generative model of the ADGAN we incorporated the same architecture used by the authors of the original paper, namely, the original DCGAN architecture [26]. As described, ADGAN requires only a trained generator.
5.2 Datasets
We consider four image datasets in our experiments: CIFAR-10, CIFAR-100 [21], CatsVsDogs [11], and fashion-MNIST [38], which are described below. We note that in all our experiments, pixel values of all images were scaled to reside in [−1, 1]. No other pre-processing was applied.
• CIFAR-10: consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images, divided equally across the classes.
• CIFAR-100: similar to CIFAR-10, but with 100 classes containing 600 images each. This set has a fixed train/test partition with 500 training images and 100 test images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses, which we use in our experiments.
• Fashion-MNIST: a relatively new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. In order to be compatible with the CIFAR-10 and CIFAR-100 classification architectures, we zero-pad the images so that they are of size 32x32.
• CatsVsDogs: extracted from the ASIRRA dataset, it contains 25,000 images of cats and dogs, 12,500 in each class. We split this dataset into a training set containing 10,000 images, and a test set of 2,500 images in each class. We also rescale each image to size 64x64. The average dimension size of the original images is roughly 360x400.
5.3 Experimental Protocol
We employ a one-vs-all evaluation scheme in each experiment. Consider a dataset with C classes, from which we create C different experiments. For each 1 ≤ c ≤ C, we designate class c to be the single class of normal images. We take S to be the set of images in the training set belonging to class c. The set S is considered to be the set of “normal” samples based on which the model must learn a normality score function. We emphasize that S contains only normal samples, and no additional samples are provided to the model during training. The normality score function is then applied on all images in the test set, containing both anomalies (not belonging to class c) and normal samples (belonging to class c), in order to evaluate the model’s performance. As stated in Section 3, we completely ignore the problem of choosing the appropriate anomaly threshold (λ) on the normality score, and quantify performance using the area under the ROC curve metric, which is commonly utilized as a performance measure for anomaly detection models. We are able to compute the ROC curve since we have full knowledge of the ground truth labels of the test set.2
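The one-vs-all protocol can be summarized in a few lines. In the sketch below, `fit_scorer` is a hypothetical factory that trains the self-labeled classifier on one class and returns a normality scorer, so this is only an evaluation-loop sketch rather than the full training pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def one_vs_all_auroc(fit_scorer, train_x, train_y, test_x, test_y, classes):
    """For each class c: train only on the normal images of class c, score the
    entire test set, and measure AUROC with 'belongs to c' as the positive label."""
    results = {}
    for c in classes:
        scorer = fit_scorer(train_x[train_y == c])   # returns a callable x -> normality score
        scores = np.array([scorer(x) for x in test_x])
        results[c] = roc_auc_score((test_y == c).astype(int), scores)
    return results
```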
Hyperparameters and Optimization Methods. For the self-labeled classification task, we use 72 geometric transformations. These transformations are specified in the supplementary material (see also Section 6 discussing the intuition behind the choice of these transformations). Our model is implemented using the state-of-the-art Wide Residual Network (WRN) model [40]. The parameters for the depth and width of the model for all 32x32 datasets were chosen to be 10 and 4, respectively, and for the CatsVsDogs dataset (64x64), 16 and 8, respectively. These hyperparameters were selected prior to conducting any experiment, and were fixed for all runs.3 We used the Adam [20] optimizer with default hyperparameters. Batch size for all methods was set to 128. The number of epochs was set to 200 on all benchmark models, except for training the GAN in ADGAN for which it was set to 100 and produced superior results. We trained the WRN for ⌈200/|T|⌉ epochs on the self-labeled set ST, to obtain approximately the same number of parameter updates as would have been performed had we trained on S for 200 epochs.
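One way to obtain 72 transformations is to compose a horizontal flip (2 options), a translation along each axis (3 × 3 options), and a rotation by a multiple of 90 degrees (4 options), giving 2 · 3 · 3 · 4 = 72. The sketch below follows this recipe; the 8-pixel shift and the cyclic-roll translation are assumptions made for illustration, since the exact set is listed in the paper's supplementary material.

```python
from itertools import product
import numpy as np

def make_transform(flip, tx, ty, k):
    """Compose flip -> translate -> rotate into a single callable."""
    def T(img):
        out = np.fliplr(img) if flip else img
        out = np.roll(out, shift=(ty, tx), axis=(0, 1))  # cyclic shift as a simple stand-in
        return np.rot90(out, k=k)
    return T

TRANSFORMS = [make_transform(f, tx, ty, k)
              for f, tx, ty, k in product([False, True], [-8, 0, 8], [-8, 0, 8], range(4))]
print(len(TRANSFORMS))  # 72
```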
5.4 Results
In Table 1 we present our results. The table is composed of four blocks, with each block containing several anomaly detection problems derived from the same dataset (for lack of space we omit class names from the tables, and those can be found in the supplementary material). For example, the first row contains the results for an anomaly detection problem where the normal class is class 0 in CIFAR-10 (airplane), and the anomalous instances are images from all other classes in CIFAR-10 (classes 1-9). In this row (as in any other row), we see the average AUROC results over five runs and the corresponding standard error of the mean for all baseline methods. The results of our algorithm are shown in the rightmost column. OC-SVM variants and ADGAN were run once due to their
2A complete code of the proposed method’s implementation and the conducted experiments is available at https://github.com/izikgo/AnomalyDetectionTransformations.
3The parameters 16, 8 were used on CIFAR-10 by the authors. Due to the induced computational complexity, we chose smaller values. When testing the parameters 16, 8 with our method on the CIFAR-10 dataset, anomaly detection results improved.
time complexity. The best performing method in each row appears in bold. For example, in the CatsVsDogs experiments where dog (class 1) is the “normal” class, the best baseline (DSEBM) achieves 0.561 AUROC. Note that the trivial average AUROC is always 0.5, regardless of the proportion of normal vs. anomalous instances. Our method achieves an average AUROC of 0.888.
Several interesting observations can be made by inspecting the numbers in Table 1. Our relative advantage is most prominent when focusing on the larger images. All baseline methods, including OC-SVM variants, which enjoy hindsight information, only achieve performance that is slightly better than random guessing in the CatsVsDogs dataset. On the smaller-sized images, the baselines can perform much better. In most cases, however, our algorithm significantly outperformed the other methods. Interestingly, in many cases where the baseline methods struggled with separating normal samples from anomalies, our method excelled. See, for instance, the cases of automobile (class 1) and horse (class 7; see the CIFAR-10 section in the table). Inspecting the results on CIFAR-100 (where 20 super-classes defined the partition), we observe that our method was challenged by the diversity inside the normal class. In this case, there are a few normal classes on which our method did not perform well; see e.g., non-insect invertebrates (class 13), insects (class 7), and household electrical devices (class 5). In Section 6 we speculate why this might happen. We used the super-class partitioning of CIFAR-100 (instead of the 100 base classes) because labeled data for single base classes is scarce. On the fashion-MNIST dataset, all methods, excluding DAGMM, performed very well, with a slight advantage to our method. The fashion-MNIST dataset was designed as a drop-in replacement for the original MNIST dataset, which is slightly more challenging. Classic models, such as SVM with an RBF kernel, can perform well on this task, achieving almost 90% accuracy [38].
5.5 Identifying Out-of-distribution Samples in Labeled Multi-class Datasets
Although it is not the main focus of this work, we have also tackled the problem of identifying out-ofdistribution samples in labeled multi-class datasets (i.e., identify images that belong to a different distribution than that of the labeled dataset). To this end, we created a two-headed classification model based on the WRN architecture. The model has two separate softmax output layers. One for categories (e.g., cat, truck, airplane, etc.) and another for classifying transformations (our method). We use the categories softmax layer only during training. At test time, we only utilize the transformations softmax layer output as described in section 4.2, but use the simplified normality score. When training on the CIFAR-10 dataset, and taking the tiny-imagenet (resized) dataset to be anomalies as done by Liang et al. [23] in their ODIN method, we improved ODIN’s AUROC/AUPR-In/AUPR-Out results from 92.1/89.0/93.6 to 95.7/96.1/95.4, respectively. It is important to note that in contrast to our method, ODIN is inapplicable in the pure single class setting, where there are no class labels.
6 On the Intuition for Using Geometric Transformations
In this section we explain our intuition behind the choice of the set of transformations used in our method. Any bijection of a set (having some geometric structure) to itself is a geometric transformation. Among all geometric transformations, we only used compositions of horizontal flipping, translations, and rotations in our model, resulting in 72 distinct transformations (see supplementary material for the entire list). In the earlier stages of this work, we tried a few nongeometric transformations (e.g., Gaussian blur, sharpening, gamma correction), which degraded performance and we abandoned them altogether. We hypothesize that non-geometric transformations perform worse since they can eliminate important features of the learned image set.
We speculate that the effectiveness of the chosen transformation set is affected by their ability to preserve spatial information about the given “normal” images, as well as the ability of our classifier to predict which transformation was applied on a given transformed image. In addition, for a fixed type-II error rate, the type-I error rate of our method decreases the harder it gets for the trained classifier to correctly predict the identity of the transformations that were applied on anomalies.
We demonstrate this idea by conducting three experiments. Each experiment has the following structure. We train a neural classifier to discriminate between two transformations, where the normal class is taken to be images of a single digit from the MNIST [22] training set. We then evaluate our method using AUROC on a set of images comprising normal images and images of another digit from the MNIST test set. The three experiments are:
• Normal digit: ‘8’. Anomaly: ‘3’. Transformations: Identity and horizontal flip. It can be expected that due to the invariance of ‘8’ to horizontal flip, the classifier will have difficulties learning distinguishing features. Indeed, when presented with the test set containing ‘3’ as anomalies (which do not exhibit such invariance), our method did not perform well, achieving an AUROC of 0.646.
• Normal digit: ‘3’. Anomaly: ‘8’. Transformations: Identity and horizontal flip. In contrast to the previous experiment, the transformed variants of digit ‘3’ can easily be classified to the correct transformation. Indeed, our method, using the trained model for ‘3’, achieved 0.957 AUROC in this experiment.
• Normal digit: ‘8’. Anomaly: ‘3’. Transformations: Identity and translation by 7 pixels. In this experiment, the transformed images are distinguishable from each other. As can be expected, our method performs well in this case, achieving an AUROC of 0.919.
To convince ourselves that high scores given by our scoring function indicate membership in the normal class, we tested how an image would need to change in order to obtain a high normality score. This was implemented by optimizing an input image using gradient ascent to maximize the simplified variant of the normality score described in section 5.5 (see, e.g., [39]). Thus, we trained a classifier on the digit ‘3’ from the MNIST dataset, with a few geometric transformations. We then took an arbitrary image of the digit ‘0’ and optimized it. In Figure 1(a) we present two such images, where the left one is the original, and the right is the result after taking 200 gradient ascent steps that “optimize” the original image. It is evident that the ‘0’ digits have deformed, now resembling the digit ‘3’. This illustrates the fact that the classification model has learned features relevant to the “normal” class. To further strengthen our hypothesis, we conducted the same experiment using images from the normal class (i.e., images of the digit ‘3’). We expected these images to maintain their appearance during the optimization process, since they already contain the features that should contribute to a high normality score. Figure 1(b) contains two examples of the process, where in each row, the left image is the initial ‘3’, and the right is the result after taking 200 gradient ascent steps on it. As hypothesized, it is evident that the images remained roughly unchanged at the end of the optimization process (regardless of their different orientations).
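This input-optimization experiment can be sketched in a few lines of PyTorch. The tiny randomly initialized linear model below only stands in for the trained transformation classifier, and the transformation list is an illustrative subset; the sketch simply shows the gradient-ascent loop on the simplified normality score.

```python
import torch

k = 4
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, k))
transforms = [
    lambda x: x,                               # identity
    lambda x: torch.flip(x, dims=[3]),         # horizontal flip
    lambda x: torch.rot90(x, 1, dims=[2, 3]),  # 90-degree rotation
    lambda x: torch.rot90(x, 2, dims=[2, 3]),  # 180-degree rotation
]

def simplified_score(x):
    # Average probability assigned to the correct transformation index.
    probs = [torch.softmax(model(T(x)), dim=1)[0, j] for j, T in enumerate(transforms)]
    return torch.stack(probs).mean()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # an arbitrary starting image
opt = torch.optim.SGD([x], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    (-simplified_score(x)).backward()              # ascent = minimize the negative score
    opt.step()
print(float(simplified_score(x)))
```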
7 Conclusion and Future Work
We presented a novel method for anomaly detection of images, which learns a meaningful representation of the learned training data in a fully discriminative fashion. The proposed method is computationally efficient, and as simple to implement as a multi-class classification task. Unlike best-known methods so far, our approach completely alleviates the need for a generative component (autoencoders/GANs). Most importantly, our method significantly advances the state-of-the-art by offering a dramatic improvement over the best available anomaly detection methods. Our results open many avenues for future research. First, it is important to develop a theory that grounds the use of geometric transformations. It would be interesting to study the possibility of selecting transformations that would best serve a given training set, possibly with prior knowledge on the anomalous samples. Another avenue is explicitly optimizing the set of transformations. Due to the effectiveness of our method, it is tempting to try adapting it to other settings or utilizing it in applications. Some examples are open-world classification, selective classification and regression [36, 35, 13], uncertainty estimation [15], and deep active learning [12, 14]. Finally, it would be interesting to consider using our techniques in settings where additional unlabeled “contaminated” data (consisting of both normal and novel instances) is provided, perhaps within a transductive learning framework [10].
Acknowledgements
This research was partially supported by the Israel Science Foundation (grant No. 710/18). | 1. What is the main contribution of the paper regarding anomaly detection?
2. What are the strengths and weaknesses of the proposed approach, particularly in its classification and feature representation aspects?
3. How does the reviewer assess the effectiveness and efficiency of the method compared to other baselines?
4. What are the concerns regarding the choice of classes and transformations, and how would the authors address them?
5. Are there any limitations or potential improvements regarding the number of classes, transformations, and applicability to real-world data? | Review | Review
This work focuses on detecting anomalies in images by identifying out-of-distribution images. The approach is to train a multiclass model to classify different geometric transformations of training images. The authors claim that such a model generates a feature representation that is useful for detecting novelties. Given a set of images, they apply a set of geometric transformations to each of these images. Next, a model is trained to classify the transformed images. During testing, each of the transformations is applied to the test image and passed through the trained model to produce a softmax output. The score of the input image is the mean softmax output over all the transformations. They compare the performance of their approach against a variety of baselines on different datasets and show that their method can work better. It seemed a bit confusing when the work talks about classes in the main paper. There are 2 types of classes mentioned: 1) the set of geometric transformations, and 2) the classes inherently made available through supervised labeling of the dataset itself. It seems that if there are k1 types of transformations and k2 classes in the labeled dataset (e.g., k2 = 10 for CIFAR-10), then anomaly detection is performed separately for each of the k2 classes. This might give substantial overhead, as it seems to require training k2 different models. What are the authors' thoughts about directly training a model for k1 transformations using all the k2 classes? What was the reason for using the 20-class grouping and not the 100-class version of the CIFAR-100 data? Does the number of classes (k2) in the dataset have an influence on performance? What does that mean for large datasets with a large number of classes? What about datasets that do not have class labels (as might be the case for real-world data)? It would be interesting to know the influence of each geometric transformation on performance. Furthermore, it seems that a very small set of transformations is utilized (some of which are common transformations used in data augmentation). Is there a specific reason for choosing these? Can fewer or more diverse transformations be used?
NIPS | Title
Policy Regret in Repeated Games
Abstract
The notion of policy regret in online learning is a well defined performance measure for the common scenario of adaptive adversaries, which more traditional quantities such as external regret do not take into account. We revisit the notion of policy regret and first show that there are online learning settings in which policy regret and external regret are incompatible: any sequence of play that achieves a favorable regret with respect to one definition must do poorly with respect to the other. We then focus on the game-theoretic setting where the adversary is a self-interested agent. In that setting, we show that external regret and policy regret are not in conflict and, in fact, that a wide class of algorithms can ensure a favorable regret with respect to both definitions, so long as the adversary is also using such an algorithm. We also show that the sequence of play of no-policy regret algorithms converges to a policy equilibrium, a new notion of equilibrium that we introduce. Relating this back to external regret, we show that coarse correlated equilibria, which no-external regret players converge to, are a strict subset of policy equilibria. Thus, in game-theoretic settings, every sequence of play with no external regret also admits no policy regret, but the converse does not hold.
1 Introduction
Learning in dynamically evolving environments can be described as a repeated game between a player, an online learning algorithm, and an adversary. At each round of the game, the player selects an action, e.g. invests in a specific stock, the adversary, which may be the stock market, chooses a utility function, and the player gains the utility value of its action. The player observes the utility value and uses it to update its strategy for subsequent rounds. The player’s goal is to accumulate the largest possible utility over a finite number of rounds of play.1
The standard measure of the performance of a player is its regret, that is the difference between the utility achieved by the best offline solution from some restricted class and the utility obtained by the online player, when utilities are revealed incrementally. Formally, we can model learning as the following problem. Consider an action setA. The player selects an action at at round t, the adversary picks a utility function ut, and the player gains the utility value ut(at). While in a full observation setting the player observes the entire utility function ut, in a bandit setting the player only observes
1Such games can be equivalently described in terms of minimizing losses rather than maximizing utilities. All our results can be equivalently expressed in terms of losses instead of utilities.
the utility value of its own action, ut(at). We use the shorthand a1∶t to denote the player’s sequence of actions (a1, . . . , at) and denote by Ut = {u∶At → R} the family of utility functions ut. The objective of the player is to maximize its expected cumulative utility over T rounds, i.e., maximize E[∑Tt=1 ut(a1∶t)], where the expectation is over the player’s (possible) internal randomization. Since this is clearly impossible to maximize without knowledge of the future, the algorithm instead seeks to achieve a performance comparable to that of the best fixed action in hindsight. Formally, external regret is defined as
$R(T) = \mathbb{E}\left[\max_{a \in A} \sum_{t=1}^{T} u_t(a_{1:t-1}, a) - \sum_{t=1}^{T} u_t(a_{1:t})\right]. \qquad (1)$
A player is said to admit no external regret if the external regret is sublinear, that is R(T ) = o(T ). In contrast to statistical learning, online learning algorithms do not need to make stochastic assumptions about data generation: strong regret bounds are possible even if the utility functions are adversarial.
There are two main adversarial settings in online learning: the oblivious setting where the adversary ignores the player’s actions and where the utility functions can be thought of as determined before the game starts (for instance, in weather prediction); and the adaptive setting where the adversary can react to the player’s actions, thus seeking to throw the player off track (e.g., competing with other agents in the stock market). More generally, we define an m-memory bounded adversary as one that at any time t selects a utility function based on the player’s past m actions: ut(a′1, . . . , a′t−m−1, at−m, . . . , at) = ut(a1, . . . , at−m−1, at−m, . . . , at), for all a′1, . . . , a′t−m−1 and all a1, . . . , at. An oblivious adversary can therefore be equivalently viewed as a 0-memory bounded adversary and an adaptive adversary as an ∞-memory bounded adversary. For an oblivious adversary, external regret in Equation 1 reduces to R(T) = E[maxa∈A ∑Tt=1 ut(a) − ut(at)], since the utility functions do not depend upon past actions. Thus, external regret is meaningful when the adversary is oblivious, but it does not admit any natural interpretation when the adversary is adaptive. The problem stems from the fact that in the definition of external regret, the benchmark is a function of the player’s actions. Thus, if the adversary is adaptive, or even memory-bounded for some m > 0, then external regret does not take into account how the adversary would react had the player selected some other action.
To resolve this critical issue, Arora et al. (2012b) introduced an alternative measure of performance called policy regret for which the benchmark does not depend on the player’s actions. Policy regret is defined as follows
$P(T) = \max_{a \in A} \sum_{t=1}^{T} u_t(a, \ldots, a) - \mathbb{E}\left[\sum_{t=1}^{T} u_t(a_{1:t})\right]. \qquad (2)$
Arora et al. (2012b) further gave a reduction, using a mini-batch technique where the minibatch size is larger than the memory m of adversary, that turns any algorithm with a sublinear external regret against an oblivious adversary into an algorithm with a sublinear policy regret against an m-memory bounded adversary, albeit at the price of a somewhat worse regret bound, which is still sublinear.
In this paper, we revisit the problem of online learning against adaptive adversaries. Since Arora et al. (2012b) showed that there exists an adaptive adversary against which any online learning algorithm admits linear policy regret, even when the external regret may be sublinear, we ask if no policy regret implies no external regret. One could expect this to be the case since policy regret seems to be a stronger notion than external regret. However, our first main result (Theorem 3.2) shows that this in fact is not the case and that the two notions of regret are incompatible: there exist adversaries (or sequence of utilities) on which action sequences with sublinear external regret admit linear policy regret and action sequences with sublinear policy regret incur linear external regret.
We argue, however, that such sequences may not arise in practical settings, that is in settings where the adversary is a self-interested entity. In such settings, rather than considering a malicious opponent whose goal is to hurt the player by inflicting large regret, it seems more reasonable to consider an opponent whose goal is to maximize his own utility. In zero-sum games, maximizing one’s utility comes at the expense of the other player’s, but there is a subtle difference between an adversary who is seeking to maximize the player’s regret and an adversary who is seeking to minimize the player’s utility (or maximize his own utility). We show that in such strategic game settings there is indeed a strong relationship between external regret and policy regret. In particular, we show in Theorem 3.4 that a large class of stable online learning algorithms with sublinear external regret also benefit from sublinear policy regret.
Further, we consider a two-player game where each player is playing a no policy regret algorithm. It is known that no external regret play converges to a coarse correlated equilibrium (CCE) in such a game, but what happens when players are using no policy regret algorithms? We show in Theorem 4.8 that the average play in repeated games between no policy regret players converges to a policy equilibrium, a new notion of equilibrium that we introduce. Policy equilibria differ from more traditional notions of equilibria such as Nash or CCEs in a crucial way. Recall that a CCE is defined to be a recommended joint strategy for players in a game such that there is no incentive for any player to deviate unilaterally from the recommended strategy if other players do not deviate.
What happens if the other players react to one player’s deviation by deviating themselves? This type of reasoning is not captured by external regret, but is essentially what is captured by policy regret. Thus, our notion of policy equilibrium must take into account these counterfactuals, and so the definition is significantly more complex. But, by considering functions rather than just actions, we can define such equilibria and prove that they exactly characterize no policy regret play.
Finally, it becomes natural to determine the relationship between policy equilibria (which characterize no policy regret play) and CCEs (which characterize no external regret play). We show in Theorems 4.9 and 4.10 that the set of CCEs is a strict subset of policy equilibria. In other words, every CCE can be thought of as a policy regret equilibrium, but no policy regret play might not converge to a CCE.
2 Related work
The problem of minimizing policy regret in a fully adversarial setting was first studied by Merhav et al. (2002). Their work dealt specifically with the full observation setting and assumed that the utility (or loss) functions were m-memory bounded. They gave regret bounds in O(T^{2/3}). The follow-up work by Farias and Megiddo (2006) designed algorithms in a reactive bandit setting. However, their results were not in the form of regret bounds but rather introduced a new way to compare against acting according to a fixed expert strategy. Arora et al. (2012b) studied m-memory bounded adversaries both in the bandit and full information settings and provided extensions to more powerful competitor classes considered in swap regret and more general Φ-regret. Dekel et al. (2014) provided a lower bound in the bandit setting for switching cost adversaries, which also leads to a tight lower bound for policy regret of order Ω(T^{2/3}). Their results were later extended by Koren et al. (2017a) and Koren et al. (2017b). More recently, Heidari et al. (2016) considered the multi-armed bandit problem where each arm’s loss evolves with consecutive pulls. The process according to which the loss evolves was not assumed to be stochastic but it was not arbitrary either – in particular, the authors required either the losses to be concave, increasing and to satisfy a decreasing marginal returns property, or decreasing. The regret bounds given are in terms of the time required to distinguish the optimal arm from all others.
A large part of reinforcement learning is also aimed at studying sequential decision making problems. In particular, one can define a Markov Decision Process (MDP) by a set of states equipped with transition distributions, a set of actions and a set of reward or loss distributions associated with each state action pair. The transition and reward distributions are assumed unknown and the goal is to play according to a strategy that minimizes loss or maximizes reward. We refer the reader to (Sutton and Barto, 1998; Kakade et al., 2003; Szepesvári, 2010) for general results in RL. MDPs in the online setting with bandit feedback or arbitrary payoff processes have been studied by Even-Dar et al. (2009a); Yu et al. (2009); Neu et al. (2010) and Arora et al. (2012a).
The tight connection between no-regret algorithms and correlated equilibria was established and studied by Foster and Vohra (1997); Fudenberg and Levine (1999); Hart and Mas-Colell (2000); Blum and Mansour (2007). A general extension to games with compact, convex strategy sets was given by Stoltz and Lugosi (2007). No external regret dynamics were studied in the context of socially concave games (Even-Dar et al., 2009b). More recently, Hazan and Kale (2008) considered more general notions of regret and established an equivalence result between fixed-point computation, the existence of certain no-regret algorithms, and the convergence to the corresponding equilibria. In a follow-up work by Mohri and Yang (2014) and Mohri and Yang (2017), the authors considered a more powerful set of competitors and showed that the repeated play according to conditional swap-regret or transductive regret algorithms leads to a new set of equilibria.
3 Policy regret in reactive versus strategic environments
Often, distant actions in the past influence an adversary more than more recent ones. The definition of policy regret (2) models this influence decay by assuming that the adversary is m-memory bounded for some m ∈ N. This assumption is somewhat stringent, however, since ideally we could model the current move of the adversary as a function of the entire past, even if actions taken further in the past have less significance. Thus, we extend the definition of Arora et al. (2012b) as follows. Definition 3.1. The m-memory policy regret at time T of a sequence of actions (at)Tt=1 with respect to a fixed action a in the action set A and the sequence of utilities (ut)Tt=1, where ut ∶ At → R and m ∈ N{∞} is
$P(T, a) = \sum_{t=1}^{T} u_t(a_1, \ldots, a_{t-m}, a, \ldots, a) - \sum_{t=1}^{T} u_t(a_1, \ldots, a_t).$
We say that the sequence (at)Tt=1 has sublinear policy regret (or no policy regret) if P (T, a) < o(T ), for all actions a ∈ A. Let us emphasize that this definition is just an extension of the standard policy regret definition and that, when the utility functions are m-memory bounded, the two definitions exactly coincide.
While the motivation for policy regret suggests that this should be a stronger notion compared to external regret, we show not only that these notions are incomparable in the general adversarial setting, but also that they are incompatible in a strong sense. Theorem 3.2. There exists a sequence of m-memory bounded utility functions (ut)Tt=1, where ut ∶ A → R, such that for any constant m ≥ 2 (independent of T ), any action sequence with sublinear policy regret will have linear external regret and any action sequence with sublinear external regret will have linear policy regret.
The proof of the above theorem constructs a sequence for which no reasonable play can attain sublinear external regret. In particular, the only way the learner can have sublinear external regret is if they choose to have very small utility. To achieve this, the utility functions chosen by the adversary are the following. At time t, if the player chose to play the same action as their past 2 actions then they get utility 12 . If the player’s past two actions were equal but their current action is different, then they get utility 1, and if their past two actions differ then no matter what their current action is they receive utility 0. It is easy to see that the maximum utility play for this sequence (and the lowest 2-memory bounded policy regret strategy) is choosing the same action at every round. However, such an action sequence admits linear external regret. Moreover, every sublinear external regret strategy must then admit sublinear utility and thus linear policy regret.
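As a concrete illustration, the following Python snippet simulates this 2-memory utility and evaluates the constant-action strategy; it is only a sanity-check sketch of the construction described above (the first two rounds are ignored for simplicity).

```python
def u(prev2, prev1, curr):
    """2-memory bounded utility from the construction above: 1/2 for repeating the
    last two (equal) actions, 1 for deviating after two equal actions, 0 otherwise."""
    if prev2 == prev1:
        return 0.5 if curr == prev1 else 1.0
    return 0.0

T, actions = 1000, [0, 1]
play = [0] * T                                   # the constant-action strategy
total = sum(u(play[t - 2], play[t - 1], play[t]) for t in range(2, T))

# External regret: compare against a fixed action while keeping the history fixed.
ext = max(sum(u(play[t - 2], play[t - 1], a) for t in range(2, T)) for a in actions) - total

# Policy regret: compare against having played a fixed action on every round.
pol = max(sum(u(a, a, a) for _ in range(2, T)) for a in actions) - total

print(total, ext, pol)  # ~T/2 utility, linear external regret, zero policy regret
```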
As discussed in Section 1, in many realistic environments we can instead think of the adversary as a self-interested agent trying to maximize their own utility, rather than trying to maximize the regret of the player. This more strategic environment is better captured by the game theory setting, in particular a 2-player game where both players are trying to maximize their utility. Even though we have argued that external regret is not a good measure, our next result shows that minimizing policy regret in games can be done if both players choose their strategies according to certain no external regret algorithms. More generally, we adapt a classical notion of stability from the statistical machine learning setting and argue that if the players use no external regret algorithms that are stable, then the players will have no policy regret in expectation. To state the result formally we first need to introduce some notation.
Game definition: We consider a 2-player game G, with players 1 and 2. The action set of player i is denoted by Ai, which we think of as being embedded into RAi in the obvious way where each action corresponds to a standard basis vector. The corresponding simplex is Ai. The action of player 1 at time t is at and of player 2 is bt. The observed utility for player i at time t is ui(at, bt) and this is a bi-linear form with corresponding matrix Pi. We assume that the utilities are bounded in[0,1]. Algorithm of the player: When discussing algorithms, we take the view of player 1. Specifically, at time t, player 1 plays according to an algorithm which can be described as Algt ∶ (A1×A2)t → A1. We distinguish between two settings: full information, in which the player observes the full utility
function at time t (i.e., u1(⋅, bt)), and the bandit setting, in which the player only observes u1(at, bt). In the full information setting, algorithms like multiplicative weight updates (MWU Arora et al. (2012c)) depend only on the past t − 1 utility functions (u1(⋅, b`))t−1`=1, and thus we can think of Algt as a function ft ∶ At2 → A1. In the bandit setting, though, the output at time t of the algorithm depends both on the previous t− 1 actions (a`)t−1`=1 and on the utility functions (i.e., the actions picked by the other player).
But even in the bandit setting, we would like to think of the player’s algorithm as a function ft ∶ At2 → A1. We cannot quite do this; however, we can think of the player’s algorithm as a distribution over such functions. So how do we remove the dependence on At1? Intuitively, if we fix the sequence of actions played by player 2, we want to take the expectation of Algt over possible choices of the t actions played by player 1. In order to do this more formally, consider the distribution µ over At−11 ×At−12 generated by simulating the play of the players for t rounds. Then let µb0∶t be the distribution obtained by conditioning µ on the actions of player 2 being b0∶t. Now we let ft(b0∶t−1) be the distribution obtained by sampling a0∶t−1 from µb1∶t−1 and using Alg(a0∶t−1, b0∶t−1). When taking expectations over ft, the expectation is taken with respect to the above distribution. We also refer to the output pt = ft(b0∶t−1) as the strategy of the player at time t. Now that we can refer to algorithms simply as functions (or distributions over functions), we introduce the notion of a stable algorithm. Definition 3.3. Let ft ∶ At2 → A1 be a sample from Algt (as described above), mapping the past t actions in A2 to a distribution over the action set A1. Let the distribution returned at time t be p1t = ft(b1, . . . , bt). We call this algorithm on average (m, S(T)) stable with respect to the norm ‖⋅‖, if for any b′t−m+1, . . . , b′t ∈ A2 such that p̃1t = ft(b1, . . . , bt−m, b′t−m+1, . . . , b′t) ∈ A1, it holds that E[∑Tt=1 ‖p1t − p̃1t‖] ≤ S(T), where the expectation is taken with respect to the randomization in the algorithm.
Even though this definition of stability is given with respect to the game setting, it is not hard to see that it can be extended to the general online learning setting, and in fact this definition is similar in spirit to the one given in Saha et al. (2012). It turns out that most natural no external regret algorithms are stable. In particular we show, in the supplementary, that both Exp3 Auer et al. (2002) and MWU are on average (m,m√T ) stable with respect to `1 norm for any m < o(√T ). It is now possible to show that if each of the players are facing stable no external regret algorithms, they will also have bounded policy regret (so the incompatibility from Theorem 3.2 cannot occur in this case). Theorem 3.4. Let (at)Tt=1 and (bt)Tt=1 be the action sequences of player 1 and 2 and suppose that they are coming from no external regret algorithms modeled by functions ft and gt, with regrets R1(T ) and R2(T ) respectively. Assume that the algorithms are on average (m,S(T )) stable with respect to the `2 norm. Then
$\mathbb{E}[P(T,a)] \le \|P_1\|\, S(T) + R_1(T), \qquad \mathbb{E}[P(T,b)] \le \|P_2\|\, S(T) + R_2(T),$
where ut(a1∶t) in the definition of P (T, a) equals u1(at, gt(a0∶t−1)) and similarly in the definition of P (T, b), equals u2(bt, ft(b0∶t−1)). The above holds for any fixed actions b ∈ A2 and a ∈ A1. Here the matrix norm ‖⋅‖ is the spectral norm.
4 Policy equilibrium
Recall that unlike external regret, policy regret captures how other players in a game might react if a player decides to deviate from their strategy. The story is similar when considering different notions of equilibria. In particular, Nash equilibria, correlated equilibria, and CCEs can be interpreted in the following way: if player i deviates from the equilibrium play, their utility will not increase no matter how they decide to switch, provided that all other players continue to play according to the equilibrium. This sentiment is a reflection of what no external and no swap regret algorithms guarantee. Equipped with the knowledge that no policy regret sequences are obtainable in the game setting under reasonable play from all parties, it is natural to reason about how other players would react if player i deviated and what would be the cost of deviation when taking into account possible reactions.
Let us again consider the 2-player game setup through the view of player 1. The player believes their opponent might be m-memory bounded and decides to proceed by playing according to a no policy
regret algorithm. After many rounds of the game, player 1 has computed an empirical distribution of play σ̂ over A := A_1 × A_2. The player is familiar with the guarantees of the algorithm and knows that, if instead they changed to playing any fixed action a ∈ A_1, then the resulting empirical distribution of play σ̂_a, where player 2 has responded accordingly in a memory-bounded way, is such that E_{(a,b)∼σ̂}[u_1(a, b)] ≥ E_{(a,b)∼σ̂_a}[u_1(a, b)] − ε. This thought experiment suggests that if no policy regret play converges to an equilibrium, then the equilibrium is not only described by the deviations of player 1, but also through the change in player 2's behavior, which is encoded in the distribution σ̂_a. Thus, any equilibrium induced by no policy regret play can be described by tuples of distributions {(σ, σ_a, σ_b) : (a, b) ∈ A}, where σ_a is the distribution corresponding to player 1's deviation to the fixed action a ∈ A_1 and σ_b captures player 2's deviation to the fixed action b ∈ A_2. Clearly σ_a and σ_b are not arbitrary, but we still need a formal way to describe how they arise.
For convenience, let us restrict the memory of player 2 to be 1. Thus, what player 1 believes is that at each round t of the game, they play an action a_t and player 2 plays a function f_t : A_1 → A_2, mapping a_{t−1} to b_t = f_t(a_{t−1}). Finally, the observed utility is u_1(a_t, f_t(a_{t−1})). The empirical distribution of play, σ̂, from the perspective of player 1, is formed from the observed play (a_t, f_t(a_{t−1}))_{t=1}^T. Moreover, the distribution, σ̂_a, that would have occurred if player 1 chose to play action a on every round is formed from the play (a, f_t(a))_{t=1}^T. In the view of the world of player 1, the actions taken by player 2 are actually functions rather than actions in A_2. This suggests that the equilibrium induced by no-policy regret play is a distribution over the functional space defined below.
Definition 4.1. Let F_1 := {f : A_2^{m_1} → A_1} and F_2 := {g : A_1^{m_2} → A_2} denote the functional spaces of play of players 1 and 2, respectively. Denote the product space by F := F_1 × F_2.
Note that when m_1 = m_2 = 0, F is in a one-to-one correspondence with A, i.e. when players believe their opponents are oblivious, we recover the action set studied in standard equilibria. For simplicity, for the remainder of the paper we assume that m_1 = m_2 = 1. However, all of the definitions and results that follow can be extended to the fully general setting of arbitrary m_1 and m_2; see the supplementary for details. Let us now investigate how a distribution π over F can give rise to a tuple of distributions (σ, σ_a, σ_b). We begin by defining the utility of π such that it equals the utility of a distribution over A, i.e., we want E_{(f,g)∼π}[u_1(f, g)] = E_{(a,b)∼σ}[u_1(a, b)]. Since utilities are not defined for functions, we need an interpretation of E_{(f,g)∼π}[u_1(f, g)] which makes sense. We notice that π induces a Markov chain with state space A in the following way.
Definition 4.2. Let π be any distribution over F. Then π induces a Markov process with transition probabilities P[(a_2, b_2) | (a_1, b_1)] = ∑_{(f,g)∈F_1×F_2 : f(b_1)=a_2, g(a_1)=b_2} π(f, g). We associate with this Markov process the transition matrix M ∈ R^{A×A}, with M_{x_1,x_2} = P[x_2 | x_1], where x_i = (a_i, b_i).
Since every Markov chain with a finite state space has a stationary distribution, we think of the utility of π as the utility of a particular stationary distribution σ of M. How we choose among all stationary distributions is going to become clear later, but for now we can think about σ as the distribution which maximizes the utilities of both players. Next, we need to construct σ_a and σ_b, which capture the deviation in play when player 1 switches to action a and player 2 switches to action b. The no-policy regret guarantee can be interpreted as E_{(f,g)∼π}[u_1(f, g)] ≥ E_{(f,g)∼π}[u_1(a, g(a))], i.e., if player 1 chose to switch to a fixed action (or equivalently, the constant function which maps everything to the action a ∈ A_1), then their utility should not increase. Switching to a fixed action a changes π to a new distribution π_a over F. This turns out to be a product distribution which also induces a Markov chain.
Definition 4.3. Let π be any distribution over F. Let δ_a be the distribution over F_1 putting all mass on the constant function mapping all actions b ∈ A_2 to the fixed action a ∈ A_1. Let π_{F_2} be the marginal of π over F_2. The distribution resulting from player 1 switching to playing a fixed action a ∈ A_1 is denoted as π_a = δ_a × π_{F_2}.
This distribution induces a Markov chain with transition probabilities P[(a, b_2) | (a_1, b_1)] = ∑_{(f,g) : g(a_1)=b_2} π(f, g), and the transition matrix of this Markov process is denoted by M_a. The distribution π_b and matrix M_b are defined similarly for player 2.
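As a concrete illustration of Definitions 4.2 and 4.3, the following sketch enumerates the deterministic function spaces for two binary action sets with m_1 = m_2 = 1, draws an arbitrary distribution π over F_1 × F_2, and builds the induced transition matrices M and M_a. The action sets and the Dirichlet-sampled π are illustrative assumptions, not objects taken from the paper.

```python
import itertools
import numpy as np

# Sketch of Definitions 4.2 and 4.3: a distribution pi over pairs of functions
# (f: A2 -> A1, g: A1 -> A2) induces a Markov chain on A = A1 x A2.  The action
# sets and the randomly chosen pi are illustrative; memory is m1 = m2 = 1.

A1, A2 = [0, 1], [0, 1]
F1 = list(itertools.product(A1, repeat=len(A2)))   # f encoded as (f(b) for b in A2)
F2 = list(itertools.product(A2, repeat=len(A1)))   # g encoded as (g(a) for a in A1)
pairs = list(itertools.product(F1, F2))

rng = np.random.default_rng(1)
pi = rng.dirichlet(np.ones(len(pairs)))            # some distribution over F1 x F2

states = list(itertools.product(A1, A2))           # state space A = A1 x A2

def transition_matrix(dist):
    M = np.zeros((len(states), len(states)))
    for i, (a1, b1) in enumerate(states):
        for w, (f, g) in zip(dist, pairs):
            a2, b2 = f[b1], g[a1]                  # next joint action under (f, g)
            M[i, states.index((a2, b2))] += w
    return M

M = transition_matrix(pi)

# Deviation of player 1 to the fixed action a: replace the F1-marginal of pi by
# the point mass on the constant function b -> a (Definition 4.3).
a = 0
const_f = tuple(a for _ in A2)
pi_a = np.zeros(len(pairs))
for w, (f, g) in zip(pi, pairs):
    pi_a[pairs.index((const_f, g))] += w
M_a = transition_matrix(pi_a)

print(np.round(M, 3), "\n", np.round(M_a, 3))
```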
Since the no policy regret algorithms we work with do not directly induce distributions over the functional space F but rather only distributions over the action space A, we would like to state all of our utility inequalities in terms of distributions over A. Thus, we would like to check if there is a stationary distribution σ_a of M_a such that E_{(f,g)∼π}[u_1(a, g(a))] = E_{(a,b)∼σ_a}[u_1(a, b)]. This is indeed the case, as verified by the following theorem.
Theorem 4.4. Let π be a distribution over the product of function spaces F_1 × F_2. There exists a stationary distribution σ_a of the Markov chain M_a for any fixed a ∈ A_1 such that E_{(a,b)∼σ_a}[u_1(a, b)] = E_{(f,g)∼π}[u_1(a, g(a))]. Similarly, for every fixed action b ∈ A_2, there exists a stationary distribution σ_b of M_b such that E_{(a,b)∼σ_b}[u_2(a, b)] = E_{(f,g)∼π}[u_2(f(b), b)].
The proof of this theorem is constructive and can be found in the supplementary. With all of this notation we are ready to formally describe what no-policy regret play promises in the game setting in terms of an equilibrium.
Definition 4.5. A distribution π over F_1 × F_2 is a policy equilibrium if for all fixed actions a ∈ A_1 and b ∈ A_2, which generate Markov chains M_a and M_b respectively, with stationary distributions σ_a and σ_b from Theorem 4.4, there exists a stationary distribution σ of the Markov chain M induced by π such that:
E_{(a,b)∼σ}[u_1(a, b)] ≥ E_{(a,b)∼σ_a}[u_1(a, b)],   E_{(a,b)∼σ}[u_2(a, b)] ≥ E_{(a,b)∼σ_b}[u_2(a, b)].   (3)
In other words, π is a policy equilibrium if there exists a stationary distribution σ of the Markov chain corresponding to π such that, when actions are drawn according to σ, no player has an incentive to change their action. For a simple example of a policy equilibrium see Section E in the supplementary.
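A small sketch of how the condition in (3) could be checked numerically, given transition matrices such as M and M_a from a construction like the one above: compute approximate stationary distributions and compare expected utilities. The Cesàro-averaged power iteration and the random stand-in matrix below are implementation choices made here, not part of the paper.

```python
import numpy as np

# Sketch of checking the inequalities in (3): given transition matrices such as M and
# M_a, compute stationary distributions and compare expected utilities under sigma and
# sigma_a.  The random stochastic matrix below is only a stand-in for such an M.

def stationary(M, iters=20_000):
    """Cesaro-averaged power iteration; returns an approximate stationary distribution."""
    d = np.full(M.shape[0], 1.0 / M.shape[0])
    avg = np.zeros_like(d)
    for _ in range(iters):
        d = d @ M
        avg += d
    return avg / avg.sum()

def expected_utility(dist, U, states):
    """dist is over joint states (a, b); U[a, b] is the player's utility."""
    return sum(w * U[a, b] for w, (a, b) in zip(dist, states))

rng = np.random.default_rng(4)
M_demo = rng.uniform(size=(4, 4))
M_demo /= M_demo.sum(axis=1, keepdims=True)      # stand-in transition matrix on A1 x A2
print(np.round(stationary(M_demo), 3))
# With M, M_a, M_b and utility matrices U1, U2 in hand, inequality (3) for player 1 reads
#   expected_utility(stationary(M), U1, states) >= expected_utility(stationary(M_a), U1, states).
```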
4.1 Convergence to the set of policy equilibria
We have tried to formally capture the notion of equilibria in which player 1's deviation would lead to a reaction from player 2, and vice versa, in Definition 4.5. This definition is inspired by the counterfactual guarantees of no policy regret play, and we would like to check that if players' strategies yield sublinear policy regret then the play converges to a policy equilibrium. Since the definition of sublinear policy regret does not include a distribution over functional spaces but only works with empirical distributions of play, we would like to present our result in terms of distributions over the action space A. Thus we begin by defining the set of all product distributions σ × σ_a × σ_b induced by policy equilibria π as described in the previous subsection. Here σ_a and σ_b represent the deviation in strategy if player 1 changed to playing the fixed action a ∈ A_1 and player 2 changed to playing the fixed action b ∈ A_2, respectively, as constructed in Theorem 4.4.
Definition 4.6. For a policy equilibrium π, let S_π be the set of all stationary distributions which satisfy the equilibrium inequalities (3), S_π := {σ × σ_a × σ_b : (a, b) ∈ A}. Define S = ∪_{π∈Π} S_π, where Π is the set of all policy equilibria.
Our main result states that the sequence of empirical product distributions σ̂ × σ̂_a × σ̂_b formed after T rounds of the game is going to converge to S. Here σ̂_a and σ̂_b denote the distributions of deviation in play, when player 1 switches to the fixed action a ∈ A_1 and player 2 switches to the fixed action b ∈ A_2 respectively. We now define these distributions formally.
Definition 4.7. Suppose player 1 is playing an algorithm with output at time t given by f_t : A_2^t → Δ_{A_1}, i.e. p_t^1 = f_t(b_{0:t−1}). Similarly, suppose player 2 is playing an algorithm with output at time t given by p_t^2 = g_t(a_{0:t−1}). The empirical distribution at time T is σ̂ := (1/T) ∑_{t=1}^T p_t, where p_t = p_t^1 × p_t^2 is the product distribution over A at time t. Further let (p_a^2)_t = g_t(a_{0:t−m}, a, . . . , a) denote the distribution at time t, provided that player 1 switched their strategy to the constant action a ∈ A_1. Let δ_a denote the distribution over A_1 which puts all the probability mass on action a. Let (p_a)_t = δ_a × (p_a^2)_t be the product distribution over A, corresponding to the change of play at time t. Denote by σ̂_a = (1/T) ∑_{t=1}^T (p_a)_t the empirical distribution corresponding to the change of play. The distribution σ̂_b is defined similarly.
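The following sketch makes Definition 4.7 concrete for memory m = 1: it accumulates the empirical distribution σ̂ of the played strategies and the deviation distribution σ̂_a in which player 1 commits to a fixed action. The two strategy functions are toy stand-ins chosen only for illustration; they are not the no policy regret algorithms analyzed in the paper.

```python
import numpy as np

# Sketch of Definition 4.7: the empirical distribution sigma_hat of the played strategies,
# and sigma_hat_a for the counterfactual play where player 1 commits to the fixed action a
# (memory m = 1).  The two strategy functions are toy stand-ins, not the paper's algorithms.

rng = np.random.default_rng(2)
n1 = n2 = 2
T, a_fixed = 500, 0

def f_t(b_hist):
    """Player 1: put weight 0.8 on the opponent's last action, uniform at t = 0."""
    if not b_hist:
        return np.full(n1, 1.0 / n1)
    p = np.full(n1, 0.2 / (n1 - 1))
    p[b_hist[-1]] = 0.8
    return p

def g_t(a_hist):
    """Player 2: put weight 0.9 on player 1's last action, uniform at t = 0."""
    if not a_hist:
        return np.full(n2, 1.0 / n2)
    q = np.full(n2, 0.1 / (n2 - 1))
    q[a_hist[-1]] = 0.9
    return q

sigma_hat = np.zeros((n1, n2))
sigma_hat_a = np.zeros((n1, n2))
e_a = np.zeros(n1); e_a[a_fixed] = 1.0               # delta_a, the point mass on a
a_hist, b_hist = [], []
for t in range(T):
    p, q = f_t(b_hist), g_t(a_hist)
    sigma_hat += np.outer(p, q)                      # p_t = p_t^1 x p_t^2
    q_dev = g_t(a_hist[:-1] + [a_fixed])             # opponent's response if the last action were a
    sigma_hat_a += np.outer(e_a, q_dev)              # (p_a)_t = delta_a x (p_a^2)_t
    a_hist.append(int(rng.choice(n1, p=p)))
    b_hist.append(int(rng.choice(n2, p=q)))

sigma_hat /= T
sigma_hat_a /= T
print(np.round(sigma_hat, 3), "\n", np.round(sigma_hat_a, 3))
```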
Suppose that f_t and g_t are no-policy regret algorithms; then our main result states that the sequence (σ̂ × σ̂_a × σ̂_b)_T converges to the set S.
Theorem 4.8. If the algorithms played by player 1 in the form of f_t and player 2 in the form of g_t give sub-linear policy regret sequences, then the sequence of product distributions (σ̂ × σ̂_a × σ̂_b)_{T=1}^∞ converges weakly to the set S.
In particular, if both players are playing MWU or Exp3, we know that they will have sublinear policy regret. Not surprisingly, we can show something slightly stronger as well. Let σ̃, σ̃_a and σ̃_b denote the empirical distributions of observed play corresponding to σ̂, σ̂_a and σ̂_b, i.e. σ̃ = (1/T) ∑_{t=1}^T δ_t, where δ_t denotes the Dirac distribution, putting all weight on the played actions at time t. Then these empirical distributions also converge to S almost surely.
4.2 Sketch of proof of the main result
The proof of Theorem 4.8 has three main steps. The first step defines the natural empirical Markov chains M̂, M̂_a and M̂_b from the empirical play (p_t)_{t=1}^∞ (see Definition B.2) and shows that the empirical distributions σ̂, σ̂_a and σ̂_b are stationary distributions of the respective Markov chains. The latter is done in Lemma B.3. The next step is to show that the empirical Markov chains converge to Markov chains M, M_a and M_b induced by some distribution π over F. In particular, we construct an empirical distribution π̂ and distributions π̂_a and π̂_b corresponding to the players' deviations (see Definition B.5), and show that these induce the Markov chains M̂, M̂_a and M̂_b respectively (Lemma B.7). The distribution π we want is now the limit of the sequence (π̂)_T. The final step is to show that π is a policy equilibrium. The proof goes by contradiction: assume π is not a policy equilibrium; this implies that no stationary distribution of M and corresponding stationary distributions of M_a and M_b can satisfy inequalities (3). Since the empirical distributions σ̂, σ̂_a and σ̂_b of the play satisfy inequalities (3) up to an o(1) additive factor, we can show, in Theorem B.8, that in the limit the policy equilibrium inequalities are exactly satisfied. Combined with the convergence of M̂, M̂_a and M̂_b to M, M_a and M_b, respectively, this implies that there exist stationary distributions of M, M_a and M_b satisfying (3), giving a contradiction.
We would like to emphasize that the convergence guarantee of Theorem 4.8 does not rely on there being a unique stationary distribution of the empirical Markov chains M̂, M̂_a and M̂_b or their respective limits M, M_a, M_b. Indeed, Theorem 4.8 shows that any limit point of {(σ̂, σ̂_a, σ̂_b)_T}_{T=1}^∞ satisfies the conditions of Definition 4.5. The proof does not require that any of the respective Markov chains have a unique stationary distribution, but rather requires only that σ̂ has sublinear policy regret. We would also like to remark that {(σ̂, σ̂_a, σ̂_b)_T}_{T=1}^∞ need not have a unique limit and our convergence result only guarantees that the sequence is going to the set S. This is standard when showing that any type of no regret play converges to an equilibrium; see for example Stoltz and Lugosi (2007).
4.3 Relation of policy equilibria to CCEs
So far we have defined a new class of equilibria and shown that they correspond to no policy regret play. Furthermore, we know that if both players in a 2-player game play stable no external regret algorithms, then their play also has sublinear policy regret. It is natural to ask if every CCE is also a policy equilibrium: if σ is a CCE, is there a corresponding policy equilibrium π which induces a Markov chain M for which σ is a stationary distribution satisfying (3)? We show that the answer to this question is positive:
Theorem 4.9. For any CCE σ of a 2-player game G, there exists a policy equilibrium π which induces a Markov chain M with stationary distribution σ.
To prove this, we show that for any CCE we can construct stable no-external regret algorithms which converge to it; since stable no-external regret algorithms always converge to policy equilibria (Theorem 3.4), this implies the CCE is also a policy equilibrium.
However, we show the converse is not true: policy equilibria can give rise to behavior which is not a CCE. Our proof appeals to a utility sequence which is similar in spirit to the one in Theorem 3.2, but is adapted to the game setting.
Theorem 4.10. There exists a 2-player game G and product distributions σ × σ_a × σ_b ∈ S (where S is defined in Definition 4.6 as the possible distributions of play from policy equilibria), such that σ is not a CCE of G.
In Section E of the supplementary we give a simple example of a policy equilibrium which is not a CCE.
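For reference, the coarse correlated equilibrium condition that Theorems 4.9 and 4.10 compare against can be checked directly on a joint distribution over A_1 × A_2; the sketch below does this for a 2-player game with utility matrices P_1 and P_2, which here are random placeholders.

```python
import numpy as np

# Sketch of the CCE condition used in this comparison: a joint distribution sigma over
# A1 x A2 is an (approximate) CCE if no player gains by committing to a fixed action
# while the joint play is otherwise unchanged.  P1, P2 and sigma are placeholders.

def is_cce(sigma, P1, P2, tol=1e-9):
    # sigma has shape (|A1|, |A2|); P_i[a, b] is player i's utility.
    v1 = (sigma * P1).sum()                      # player 1's value under sigma
    v2 = (sigma * P2).sum()
    best1 = (P1 @ sigma.sum(axis=0)).max()       # best fixed action against sigma's A2-marginal
    best2 = (sigma.sum(axis=1) @ P2).max()       # best fixed action against sigma's A1-marginal
    return v1 >= best1 - tol and v2 >= best2 - tol

rng = np.random.default_rng(5)
P1, P2 = rng.uniform(size=(3, 3)), rng.uniform(size=(3, 3))
sigma = np.full((3, 3), 1.0 / 9.0)               # uniform joint play, rarely a CCE
print(is_cce(sigma, P1, P2))
```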
5 Discussion
In this work we gave a new twist on policy regret by examining it in the game setting, where we introduced the notion of policy equilibrium and showed that it captures the behavior of no policy
regret players. While our characterization is precise, we view this as only the first step towards truly understanding policy regret and its variants in the game setting. Many interesting open questions remain. Even with our current definitions, since we now have a broader class of equilibria to consider it is natural to go back to the extensive literature in algorithmic game theory on the price of anarchy and price of stability and reconsider it in the context of policy equilibria. For example Roughgarden (2015) showed that in “smooth games” the worst CCE is no worse than the worst Nash. Since policy equilibria contain all CCEs (Theorem 4.9), is the same true for policy equilibria?
Even more interesting questions remain if we change our definitions to be more general. For example, what happens with more than 2 players? With three or more players, definitions of “reaction” by necessity become more complicated. Or what happens when m is not a constant? No policy regret algorithms exist for superconstant m, but our notion of equilibrium requires m to be constant in order for the Markov chains to make sense. Finally, what if we compare against deviations that are more complicated than a single action, in the spirit of swap regret or Φ-regret?
From an online learning perspective, note that our notion of on average stable and the definition of m-memory boundedness are different notions of stability. Is there one unified definition of “stable” which would allow us to give no policy regret algorithms against stable adversaries even outside of the game setting?
Acknowledgments
This work was supported in part by NSF BIGDATA grant IIS-1546482, NSF BIGDATA grant IIS-1838139, NSF CCF-1535987, NSF IIS-1618662, NSF CCF-1464239, and NSF AITF CCF-1535887.

1. What is the focus of the paper regarding policy regret and its relationship with external regret?
2. What is the new equilibrium concept introduced by the authors, and how does it relate to coarse correlated equilibria?
3. How significant and surprising are the contributions of the paper, particularly the relationship between external and policy regrets?
4. How well-written is the paper, especially for readers who are not experts in bandit algorithms?
5. What are some suggested improvements for the paper, such as providing more intuition in certain sections or discussing the significance and applications of the work?

Review
This paper contributes a number of theoretical results regarding policy regret (previously introduced in Arora et al. 2012), and introduces and analyzes a new equilibrium concept called policy equilibrium. Specifically, the authors investigate the relationship between external regret and policy regret, and demonstrate that sublinear regret in one does not imply sublinear regret in the other. Next, the authors introduce the concept of a policy equilibrium, and demonstrate that coarse correlated equilibria are a subset of policy equilibria. The paper is purely theoretical and does not present any experimental results. The contributions of this paper seem to me to be significant and surprising. In particular, the fact that low external regret does not imply low policy regret, and vice versa, but that nevertheless coarse correlated equilibria are a subset of policy equilibria is an interesting result. The theoretical analysis is also highly non-trivial and seems to me to merit publication. The paper is well-written. As someone who is not an expert in bandit algorithms, I was able to follow both the intuition and the mathematics reasonably well, even though the paper is highly theoretical. I had some trouble around the definition of a stable no-regret algorithm, and in section 4.1. The paper would benefit from more intuition in those sections. One small suggestion I have is that since the authors are only minimizing policy regret for a fixed memory length m, it seems to me that the term for policy regret P(T,a) should instead be something like P_m(T,a) or P^m(T,a). Also, I think the paper would benefit from a greater discussion of the significance and applications of this work. The authors make a convincing argument that when dealing with adaptive adversaries, policy regret seems more appropriate than external regret. But external regret minimization is used in all sorts of practical applications, including some with adaptive adversaries. Are there concrete situations where policy regret minimization and policy equilibria would be more appropriate and lead to better results (for some definition of "better")? |
NIPS | Title
Policy Regret in Repeated Games
Abstract
The notion of policy regret in online learning is a well defined performance measure for the common scenario of adaptive adversaries, which more traditional quantities such as external regret do not take into account. We revisit the notion of policy regret and first show that there are online learning settings in which policy regret and external regret are incompatible: any sequence of play that achieves a favorable regret with respect to one definition must do poorly with respect to the other. We then focus on the game-theoretic setting where the adversary is a self-interested agent. In that setting, we show that external regret and policy regret are not in conflict and, in fact, that a wide class of algorithms can ensure a favorable regret with respect to both definitions, so long as the adversary is also using such an algorithm. We also show that the sequence of play of no-policy regret algorithms converges to a policy equilibrium, a new notion of equilibrium that we introduce. Relating this back to external regret, we show that coarse correlated equilibria, which no-external regret players converge to, are a strict subset of policy equilibria. Thus, in game-theoretic settings, every sequence of play with no external regret also admits no policy regret, but the converse does not hold.
1 Introduction
Learning in dynamically evolving environments can be described as a repeated game between a player, an online learning algorithm, and an adversary. At each round of the game, the player selects an action, e.g. invests in a specific stock, the adversary, which may be the stock market, chooses a utility function, and the player gains the utility value of its action. The player observes the utility value and uses it to update its strategy for subsequent rounds. The player’s goal is to accumulate the largest possible utility over a finite number of rounds of play.1
The standard measure of the performance of a player is its regret, that is, the difference between the utility achieved by the best offline solution from some restricted class and the utility obtained by the online player, when utilities are revealed incrementally. Formally, we can model learning as the following problem. Consider an action set A. The player selects an action a_t at round t, the adversary picks a utility function u_t, and the player gains the utility value u_t(a_t). While in a full observation setting the player observes the entire utility function u_t, in a bandit setting the player only observes
1Such games can be equivalently described in terms of minimizing losses rather than maximizing utilities. All our results can be equivalently expressed in terms of losses instead of utilities.
the utility value of its own action, u_t(a_t). We use the shorthand a_{1:t} to denote the player's sequence of actions (a_1, . . . , a_t) and denote by U_t = {u : A^t → R} the family of utility functions u_t. The objective of the player is to maximize its expected cumulative utility over T rounds, i.e. maximize E[∑_{t=1}^T u_t(a_{1:t})], where the expectation is over the player's (possible) internal randomization. Since this is clearly impossible to maximize without knowledge of the future, the algorithm instead seeks to achieve a performance comparable to that of the best fixed action in hindsight. Formally, external regret is defined as
R(T) = E[ max_{a∈A} ∑_{t=1}^T u_t(a_{1:t−1}, a) − ∑_{t=1}^T u_t(a_{1:t}) ].   (1)
A player is said to admit no external regret if the external regret is sublinear, that is R(T ) = o(T ). In contrast to statistical learning, online learning algorithms do not need to make stochastic assumptions about data generation: strong regret bounds are possible even if the utility functions are adversarial.
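As a quick illustration of Equation (1) in the oblivious case, where u_t depends only on the current action, external regret can be computed directly from a table of utilities; the random utilities and the uniformly random play below are placeholders.

```python
import numpy as np

# Sketch of external regret (Equation 1) in the oblivious case, where u_t depends only on
# the current action: R(T) = max_a sum_t u_t(a) - sum_t u_t(a_t).  Utilities and the
# played sequence are random placeholders.

rng = np.random.default_rng(0)
n_actions, T = 4, 1000
U = rng.uniform(0.0, 1.0, size=(T, n_actions))   # U[t, a] = u_t(a)
played = rng.integers(0, n_actions, size=T)      # the player's actions a_t

best_fixed = U.sum(axis=0).max()                 # utility of the best fixed action in hindsight
obtained = U[np.arange(T), played].sum()
print(f"external regret R(T) = {best_fixed - obtained:.1f}")
```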
There are two main adversarial settings in online learning: the oblivious setting, where the adversary ignores the player's actions and where the utility functions can be thought of as determined before the game starts (for instance, in weather prediction); and the adaptive setting, where the adversary can react to the player's actions, thus seeking to throw the player off track (e.g., competing with other agents in the stock market). More generally, we define an m-memory bounded adversary as one that at any time t selects a utility function based on the player's past m actions: u_t(a'_1, . . . , a'_{t−m−1}, a_{t−m}, . . . , a_t) = u_t(a_1, . . . , a_{t−m−1}, a_{t−m}, . . . , a_t), for all a'_1, . . . , a'_{t−m−1} and all a_1, . . . , a_t. An oblivious adversary can therefore be equivalently viewed as a 0-memory bounded adversary and an adaptive adversary as an ∞-memory bounded adversary. For an oblivious adversary, external regret in Equation 1 reduces to R(T) = E[max_{a∈A} ∑_{t=1}^T u_t(a) − u_t(a_t)], since the utility functions do not depend upon past actions. Thus, external regret is meaningful when the adversary is oblivious, but it does not admit any natural interpretation when the adversary is adaptive. The problem stems from the fact that in the definition of external regret, the benchmark is a function of the player's actions. Thus, if the adversary is adaptive, or even memory-bounded for some m > 0, then external regret does not take into account how the adversary would react had the player selected some other action.
To resolve this critical issue, Arora et al. (2012b) introduced an alternative measure of performance called policy regret for which the benchmark does not depend on the player’s actions. Policy regret is defined as follows
P(T) = max_{a∈A} ∑_{t=1}^T u_t(a, . . . , a) − E[ ∑_{t=1}^T u_t(a_{1:t}) ].   (2)
Arora et al. (2012b) further gave a reduction, using a mini-batch technique where the mini-batch size is larger than the memory m of the adversary, that turns any algorithm with sublinear external regret against an oblivious adversary into an algorithm with sublinear policy regret against an m-memory bounded adversary, albeit at the price of a somewhat worse regret bound, which is still sublinear.
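The mini-batching idea can be sketched as follows; this is a schematic rendering under the assumption of a bandit no-external-regret base learner exposed through a hypothetical select()/update(arm, reward) interface, and it is not the exact construction or analysis of Arora et al. (2012b).

```python
# Schematic version of the mini-batching reduction described above: group rounds into
# batches of size tau > m, play one action per batch, and feed the base no-external-regret
# learner the average utility of the batch.  `BaseLearner` is a hypothetical interface
# (select() returns an arm, update(arm, reward) records feedback), not a specific library.

def minibatched_play(base_learner, environment, T, tau):
    """environment(action) returns the utility of the action at the current round."""
    t = 0
    while t < T:
        arm = base_learner.select()
        batch_utils = []
        for _ in range(min(tau, T - t)):
            batch_utils.append(environment(arm))   # play the same action for the whole batch
            t += 1
        base_learner.update(arm, sum(batch_utils) / len(batch_utils))
```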
In this paper, we revisit the problem of online learning against adaptive adversaries. Since Arora et al. (2012b) showed that there exists an adaptive adversary against which any online learning algorithm admits linear policy regret, even when the external regret may be sublinear, we ask if no policy regret implies no external regret. One could expect this to be the case since policy regret seems to be a stronger notion than external regret. However, our first main result (Theorem 3.2) shows that this in fact is not the case and that the two notions of regret are incompatible: there exist adversaries (or sequence of utilities) on which action sequences with sublinear external regret admit linear policy regret and action sequences with sublinear policy regret incur linear external regret.
We argue, however, that such sequences may not arise in practical settings, that is in settings where the adversary is a self-interested entity. In such settings, rather than considering a malicious opponent whose goal is to hurt the player by inflicting large regret, it seems more reasonable to consider an opponent whose goal is to maximize his own utility. In zero-sum games, maximizing one’s utility comes at the expense of the other player’s, but there is a subtle difference between an adversary who is seeking to maximize the player’s regret and an adversary who is seeking to minimize the player’s utility (or maximize his own utility). We show that in such strategic game settings there is indeed a strong relationship between external regret and policy regret. In particular, we show in Theorem 3.4 that a large class of stable online learning algorithms with sublinear external regret also benefit from sublinear policy regret.
Further, we consider a two-player game where each player is playing a no policy regret algorithm. It is known that no external regret play converges to a coarse correlated equilibrium (CCE) in such a game, but what happens when players are using no policy regret algorithms? We show in Theorem 4.8 that the average play in repeated games between no policy regret players converges to a policy equilibrium, a new notion of equilibrium that we introduce. Policy equilibria differ from more traditional notions of equilibria such as Nash or CCEs in a crucial way. Recall that a CCE is defined to be a recommended joint strategy for players in a game such that there is no incentive for any player to deviate unilaterally from the recommended strategy if other players do not deviate.
What happens if the other players react to one player’s deviation by deviating themselves? This type of reasoning is not captured by external regret, but is essentially what is captured by policy regret. Thus, our notion of policy equilibrium must take into account these counterfactuals, and so the definition is significantly more complex. But, by considering functions rather than just actions, we can define such equilibria and prove that they exactly characterize no policy regret play.
Finally, it becomes natural to determine the relationship between policy equilibria (which characterize no policy regret play) and CCEs (which characterize no external regret play). We show in Theorems 4.9 and 4.10 that the set of CCEs is a strict subset of policy equilibria. In other words, every CCE can be thought of as a policy regret equilibrium, but no policy regret play might not converge to a CCE.
2 Related work
The problem of minimizing policy regret in a fully adversarial setting was first studied by Merhav et al. (2002). Their work dealt specifically with the full observation setting and assumed that the utility (or loss) functions were m-memory bounded. They gave regret bounds in O(T 23). The follow-up work by Farias and Megiddo (2006) designed algorithms in a reactive bandit setting. However, their results were not in the form of regret bounds but rather introduced a new way to compare against acting according to a fixed expert strategy. Arora et al. (2012b) studied m-memory bounded adversaries both in the bandit and full information settings and provided extensions to more powerful competitor classes considered in swap regret and more general -regret. Dekel et al. (2014) provided a lower bound in the bandit setting for switching cost adversaries, which also leads to a tight lower bound for policy regret in the order ⌦(T 23). Their results were later extended by Koren et al. (2017a) and Koren et al. (2017b). More recently, Heidari et al. (2016) considered the multi-armed bandit problem where each arm’s loss evolves with consecutive pulls. The process according to which the loss evolves was not assumed to be stochastic but it was not arbitrary either – in particular, the authors required either the losses to be concave, increasing and to satisfy a decreasing marginal returns property, or decreasing. The regret bounds given are in terms of the time required to distinguish the optimal arm from all others.
A large part of reinforcement learning is also aimed at studying sequential decision making problems. In particular, one can define a Markov Decision Process (MDP) by a set of states equipped with transition distributions, a set of actions and a set of reward or loss distributions associated with each state action pair. The transition and reward distributions are assumed unknown and the goal is to play according to a strategy that minimizes loss or maximizes reward. We refer the reader to (Sutton and Barto, 1998; Kakade et al., 2003; Szepesvári, 2010) for general results in RL. MDPs in the online setting with bandit feedback or arbitrary payoff processes have been studied by Even-Dar et al. (2009a); Yu et al. (2009); Neu et al. (2010) and Arora et al. (2012a).
The tight connection between no-regret algorithms and correlated equilibria was established and studied by Foster and Vohra (1997); Fudenberg and Levine (1999); Hart and Mas-Colell (2000); Blum and Mansour (2007). A general extension to games with compact, convex strategy sets was given by Stoltz and Lugosi (2007). No external regret dynamics were studied in the context of socially concave games (Even-Dar et al., 2009b). More recently, Hazan and Kale (2008) considered more general notions of regret and established an equivalence result between fixed-point computation, the existence of certain no-regret algorithms, and the convergence to the corresponding equilibria. In a follow-up work by Mohri and Yang (2014) and Mohri and Yang (2017), the authors considered a more powerful set of competitors and showed that the repeated play according to conditional swap-regret or transductive regret algorithms leads to a new set of equilibria.
3 Policy regret in reactive versus strategic environments
Often, distant actions in the past influence an adversary more than more recent ones. The definition of policy regret (2) models this influence decay by assuming that the adversary is m-memory bounded for some m ∈ N. This assumption is somewhat stringent, however, since ideally we could model the current move of the adversary as a function of the entire past, even if actions taken further in the past have less significance. Thus, we extend the definition of Arora et al. (2012b) as follows. Definition 3.1. The m-memory policy regret at time T of a sequence of actions (at)Tt=1 with respect to a fixed action a in the action set A and the sequence of utilities (ut)Tt=1, where ut ∶ At → R and m ∈ N{∞} is
P (T, a) = T t=1ut(a1,, at−m, a,, a) − T t=1ut(a1,, at).
We say that the sequence (at)Tt=1 has sublinear policy regret (or no policy regret) if P (T, a) < o(T ), for all actions a ∈ A. Let us emphasize that this definition is just an extension of the standard policy regret definition and that, when the utility functions are m-memory bounded, the two definitions exactly coincide.
While the motivation for policy regret suggests that this should be a stronger notion compared to external regret, we show that not only that these notions are incomparable in the general adversarial setting, but that they are also incompatible in a strong sense. Theorem 3.2. There exists a sequence of m-memory bounded utility functions (ut)Tt=1, where ut ∶ A → R, such that for any constant m ≥ 2 (independent of T ), any action sequence with sublinear policy regret will have linear external regret and any action sequence with sublinear external regret will have linear policy regret.
The proof of the above theorem constructs a sequence for which no reasonable play can attain sublinear external regret. In particular, the only way the learner can have sublinear external regret is if they choose to have very small utility. To achieve this, the utility functions chosen by the adversary are the following. At time t, if the player chose to play the same action as their past 2 actions, then they get utility 1/2. If the player's past two actions were equal but their current action is different, then they get utility 1, and if their past two actions differ then no matter what their current action is they receive utility 0. It is easy to see that the maximum utility play for this sequence (and the lowest 2-memory bounded policy regret strategy) is choosing the same action at every round. However, such an action sequence admits linear external regret. Moreover, every sublinear external regret strategy must then admit sublinear utility and thus linear policy regret.
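The utility sequence described in this proof sketch is easy to simulate; the following snippet implements it for m = 2 and contrasts the constant strategy, which earns utility roughly T/2, with a frequently switching strategy, which earns 0. The specific strategies compared are our own illustrative choices.

```python
# Simulation of the utility sequence described above (m = 2): u_t = 1/2 if the current
# action equals the previous two, 1 if the previous two were equal but the current one
# differs, and 0 otherwise.  We compare the constant strategy with one that switches often.

def utility(prev2, prev1, cur):
    if prev2 == prev1:
        return 0.5 if cur == prev1 else 1.0
    return 0.0

def total_utility(actions):
    return sum(utility(actions[t - 2], actions[t - 1], actions[t])
               for t in range(2, len(actions)))

T = 10_000
constant = [0] * T                          # repeat the same action forever
switching = [t % 2 for t in range(T)]       # alternate actions every round

print(total_utility(constant))              # roughly T/2
print(total_utility(switching))             # 0: the previous two actions never agree
```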
As discussed in Section 1, in many realistic environments we can instead think of the adversary as a self-interested agent trying to maximize their own utility, rather than trying to maximize the regret of the player. This more strategic environment is better captured by the game theory setting, in particular a 2-player game where both players are trying to maximize their utility. Even though we have argued that external regret is not a good measure, our next result shows that minimizing policy regret in games can be done if both players choose their strategies according to certain no external regret algorithms. More generally, we adapt a classical notion of stability from the statistical machine learning setting and argue that if the players use no external regret algorithms that are stable, then the players will have no policy regret in expectation. To state the result formally we first need to introduce some notation.
Game definition: We consider a 2-player game G, with players 1 and 2. The action set of player i is denoted by A_i, which we think of as being embedded into R^{A_i} in the obvious way, where each action corresponds to a standard basis vector. The corresponding simplex is Δ_{A_i}. The action of player 1 at time t is a_t and that of player 2 is b_t. The observed utility for player i at time t is u_i(a_t, b_t), and this is a bilinear form with corresponding matrix P_i. We assume that the utilities are bounded in [0, 1].
Algorithm of the player: When discussing algorithms, we take the view of player 1. Specifically, at time t, player 1 plays according to an algorithm which can be described as Alg_t : (A_1 × A_2)^t → Δ_{A_1}. We distinguish between two settings: full information, in which the player observes the full utility
function at time t (i.e., u_1(·, b_t)), and the bandit setting, in which the player only observes u_1(a_t, b_t). In the full information setting, algorithms like multiplicative weight updates (MWU; Arora et al. (2012c)) depend only on the past t − 1 utility functions (u_1(·, b_ℓ))_{ℓ=1}^{t−1}, and thus we can think of Alg_t as a function f_t : A_2^t → Δ_{A_1}. In the bandit setting, though, the output at time t of the algorithm depends both on the previous t − 1 actions (a_ℓ)_{ℓ=1}^{t−1} and on the utility functions (i.e., the actions picked by the other player).
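To ground the setup above, here is a minimal self-play sketch with bilinear utilities u_i(a, b) = P_i[a, b] and both players running full-information MWU; the payoff matrices, step size, and horizon are illustrative, and the printed quantity is player 1's realized external regret rather than any bound from the paper.

```python
import numpy as np

# Sketch of the 2-player setup above: utilities are bilinear, u_i(a, b) = P_i[a, b], and
# both players run full-information MWU on their own payoff matrix (self-play).  The
# matrices and the step size are illustrative.

rng = np.random.default_rng(3)
n1, n2, T, eta = 3, 3, 5000, 0.05
P1 = rng.uniform(0, 1, size=(n1, n2))
P2 = rng.uniform(0, 1, size=(n1, n2))

w1, w2 = np.zeros(n1), np.zeros(n2)
ext_regret_terms = np.zeros(n1)              # cumulative u_1(a, b_t) - u_1(a_t, b_t) per a
for t in range(T):
    p = np.exp(w1 - w1.max()); p /= p.sum()
    q = np.exp(w2 - w2.max()); q /= q.sum()
    a = int(rng.choice(n1, p=p)); b = int(rng.choice(n2, p=q))
    ext_regret_terms += P1[:, b] - P1[a, b]
    w1 += eta * P1[:, b]                     # full-information MWU updates
    w2 += eta * P2[a, :]
print(f"player 1 external regret: {ext_regret_terms.max():.1f} over T = {T}")
```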
1. What is the main contribution of the paper regarding policy regret in repeated 2-player games?
2. What are the strengths of the paper, particularly in its thoroughness, creativity, elegance, and non-triviality?
3. Do you have any minor comments or suggestions for improvements regarding related work, theorem statements, and definitions?

Review
The paper studies the notion of policy-regret, introduced by Arora et al. (2012), in the context of repeated 2-player games. Policy regret is an adaptation of the standard external regret that captures counterfactual interactions between the player and the adversary in an online learning setting, but so far has not been seriously studied in the natural setting of repeated interaction between two players/agents in a game, interested in maximizing their own utilities. The paper addresses various different aspects of policy regret minimization in repeated games. Most notably, the authors give a precise characterization of the set of equilibria (termed "policy equilibria") approached by two players in a repeated game when both follow policy-regret minimization algorithms, and show that this set strictly contains the set of all coarse-correlated equilibria (which are approached by classical external regret-minimizing algorithms). The authors' study of the topic is thorough and the paper feels complete. The questions considered are natural and compelling, and the results established are creative, elegant and non-trivial. The writing is perhaps a bit too verbose but overall excellent. The main text feels polished and the technical proofs are clear and detailed. My only complaint (besides a couple of minor comments listed below) is that, as the precise definitions of "policy equilibria" and related concepts turn out to be quite complex, it would have been helpful to include a few simple examples where these take a more explicit form and are easier to interpret. (This could be addressed in a future, longer version of the paper.) Overall, an excellent paper that has been a pleasure to read. I strongly support acceptance. Few minor comments:
* Some corrections/additions to related work: the MWU and Exp3 algorithms are mentioned without a proper citation; the tight policy-regret bound for switching cost adversaries was in fact proved by Dekel, Ding, Koren & Peres (STOC'14) (and was later extended to more general movement costs by Koren, Livni & Mansour (COLT'17, NIPS'17)); another paper perhaps worth mentioning is Even-Dar, Mansour & Nadav (STOC'09), which studies regret minimization dynamics in concave games.
* Theorem 3.2: the theorem statement is for m ≥ 2 but the proof (in the supplementary) seems to apply only for m ≥ 2 ...? Also, the quantifier "for any constant m ≥ 2" should be moved to the beginning of the theorem statement.
* Section 4: I couldn't find the definition of the action set \mathcal{A}.
* For completeness, it would be worthwhile to include the definition of the Prokhorov metric and state Prokhorov's theorem.
NIPS | Title
Policy Regret in Repeated Games
Abstract
The notion of policy regret in online learning is a well defined performance measure for the common scenario of adaptive adversaries, which more traditional quantities such as external regret do not take into account. We revisit the notion of policy regret and first show that there are online learning settings in which policy regret and external regret are incompatible: any sequence of play that achieves a favorable regret with respect to one definition must do poorly with respect to the other. We then focus on the game-theoretic setting where the adversary is a self-interested agent. In that setting, we show that external regret and policy regret are not in conflict and, in fact, that a wide class of algorithms can ensure a favorable regret with respect to both definitions, so long as the adversary is also using such an algorithm. We also show that the sequence of play of no-policy regret algorithms converges to a policy equilibrium, a new notion of equilibrium that we introduce. Relating this back to external regret, we show that coarse correlated equilibria, which no-external regret players converge to, are a strict subset of policy equilibria. Thus, in game-theoretic settings, every sequence of play with no external regret also admits no policy regret, but the converse does not hold.
1 Introduction
Learning in dynamically evolving environments can be described as a repeated game between a player, an online learning algorithm, and an adversary. At each round of the game, the player selects an action, e.g. invests in a specific stock, the adversary, which may be the stock market, chooses a utility function, and the player gains the utility value of its action. The player observes the utility value and uses it to update its strategy for subsequent rounds. The player’s goal is to accumulate the largest possible utility over a finite number of rounds of play.1
The standard measure of the performance of a player is its regret, that is the difference between the utility achieved by the best offline solution from some restricted class and the utility obtained by the online player, when utilities are revealed incrementally. Formally, we can model learning as the following problem. Consider an action set A. The player selects an action a_t at round t, the adversary picks a utility function u_t, and the player gains the utility value u_t(a_t). While in a full observation setting the player observes the entire utility function u_t, in a bandit setting the player only observes
1Such games can be equivalently described in terms of minimizing losses rather than maximizing utilities. All our results can be equivalently expressed in terms of losses instead of utilities.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
the utility value of its own action, u_t(a_t). We use the shorthand a_{1:t} to denote the player’s sequence of actions (a_1, . . . , a_t) and denote by U_t = {u : A^t → R} the family of utility functions u_t. The objective of the player is to maximize its expected cumulative utility over T rounds, i.e. maximize E[∑_{t=1}^{T} u_t(a_{1:t})], where the expectation is over the player’s (possible) internal randomization. Since this is clearly impossible to maximize without knowledge of the future, the algorithm instead seeks to achieve a performance comparable to that of the best fixed action in hindsight. Formally, external regret is defined as
R(T) = E[ max_{a∈A} ∑_{t=1}^{T} u_t(a_{1:t−1}, a) − ∑_{t=1}^{T} u_t(a_{1:t}) ].  (1)
A player is said to admit no external regret if the external regret is sublinear, that is R(T ) = o(T ). In contrast to statistical learning, online learning algorithms do not need to make stochastic assumptions about data generation: strong regret bounds are possible even if the utility functions are adversarial.
There are two main adversarial settings in online learning: the oblivious setting where the adversary ignores the player’s actions and where the utility functions can be thought of as determined before the game starts (for instance, in weather prediction); and the adaptive setting where the adversary can react to the player’s actions, thus seeking to throw the player off track (e.g., competing with other agents in the stock market). More generally, we define an m-memory bounded adversary as one that at any time t selects a utility function based on the player’s past m actions: u_t(a'_1, . . . , a'_{t−m−1}, a_{t−m}, . . . , a_t) = u_t(a_1, . . . , a_{t−m−1}, a_{t−m}, . . . , a_t), for all a'_1, . . . , a'_{t−m−1} and all a_1, . . . , a_t. An oblivious adversary can therefore be equivalently viewed as a 0-memory bounded adversary and an adaptive adversary as an ∞-memory bounded adversary. For an oblivious adversary, external regret in Equation 1 reduces to R(T) = E[ max_{a∈A} ∑_{t=1}^{T} u_t(a) − u_t(a_t) ], since the utility functions do not depend upon past actions. Thus, external regret is meaningful when the adversary is oblivious, but it does not admit any natural interpretation when the adversary is adaptive. The problem stems from the fact that in the definition of external regret, the benchmark is a function of the player’s actions. Thus, if the adversary is adaptive, or even memory-bounded for some m > 0, then external regret does not take into account how the adversary would react had the player selected some other action.
To resolve this critical issue, Arora et al. (2012b) introduced an alternative measure of performance called policy regret for which the benchmark does not depend on the player’s actions. Policy regret is defined as follows
P(T) = max_{a∈A} ∑_{t=1}^{T} u_t(a, . . . , a) − E[ ∑_{t=1}^{T} u_t(a_{1:t}) ].  (2)
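To make the contrast between (1) and (2) concrete (this illustration is ours, not part of the original text), the following Python sketch evaluates both quantities for a deterministic action sequence, where utility(prefix) plays the role of u_t(a_{1:t}) with t = len(prefix):

```python
def external_regret(actions, utility, n_actions):
    """External regret (Eq. 1): the comparator plays a fixed action at round t,
    but the counterfactual prefix keeps the realized history a_{1:t-1}."""
    actions = tuple(actions)
    realized = sum(utility(actions[:t + 1]) for t in range(len(actions)))
    best = max(
        sum(utility(actions[:t] + (a,)) for t in range(len(actions)))
        for a in range(n_actions)
    )
    return best - realized

def policy_regret(actions, utility, n_actions):
    """Policy regret (Eq. 2): the comparator plays the fixed action on every
    round, so the adversary's memory-bounded reaction is replayed as well."""
    actions = tuple(actions)
    realized = sum(utility(actions[:t + 1]) for t in range(len(actions)))
    best = max(
        sum(utility((a,) * (t + 1)) for t in range(len(actions)))
        for a in range(n_actions)
    )
    return best - realized

# For an oblivious utility (depends only on the current action) the two notions agree:
u_obl = lambda prefix: 1.0 if prefix[-1] == 0 else 0.0
print(external_regret((1, 1, 0), u_obl, 2), policy_regret((1, 1, 0), u_obl, 2))
```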
Arora et al. (2012b) further gave a reduction, using a mini-batch technique where the mini-batch size is larger than the memory m of the adversary, that turns any algorithm with a sublinear external regret against an oblivious adversary into an algorithm with a sublinear policy regret against an m-memory bounded adversary, albeit at the price of a somewhat worse regret bound, which is still sublinear.
In this paper, we revisit the problem of online learning against adaptive adversaries. Since Arora et al. (2012b) showed that there exists an adaptive adversary against which any online learning algorithm admits linear policy regret, even when the external regret may be sublinear, we ask if no policy regret implies no external regret. One could expect this to be the case since policy regret seems to be a stronger notion than external regret. However, our first main result (Theorem 3.2) shows that this in fact is not the case and that the two notions of regret are incompatible: there exist adversaries (or sequence of utilities) on which action sequences with sublinear external regret admit linear policy regret and action sequences with sublinear policy regret incur linear external regret.
We argue, however, that such sequences may not arise in practical settings, that is in settings where the adversary is a self-interested entity. In such settings, rather than considering a malicious opponent whose goal is to hurt the player by inflicting large regret, it seems more reasonable to consider an opponent whose goal is to maximize his own utility. In zero-sum games, maximizing one’s utility comes at the expense of the other player’s, but there is a subtle difference between an adversary who is seeking to maximize the player’s regret and an adversary who is seeking to minimize the player’s utility (or maximize his own utility). We show that in such strategic game settings there is indeed a strong relationship between external regret and policy regret. In particular, we show in Theorem 3.4 that a large class of stable online learning algorithms with sublinear external regret also benefit from sublinear policy regret.
Further, we consider a two-player game where each player is playing a no policy regret algorithm. It is known that no external regret play converges to a coarse correlated equilibrium (CCE) in such a game, but what happens when players are using no policy regret algorithms? We show in Theorem 4.8 that the average play in repeated games between no policy regret players converges to a policy equilibrium, a new notion of equilibrium that we introduce. Policy equilibria differ from more traditional notions of equilibria such as Nash or CCEs in a crucial way. Recall that a CCE is defined to be a recommended joint strategy for players in a game such that there is no incentive for any player to deviate unilaterally from the recommended strategy if other players do not deviate.
What happens if the other players react to one player’s deviation by deviating themselves? This type of reasoning is not captured by external regret, but is essentially what is captured by policy regret. Thus, our notion of policy equilibrium must take into account these counterfactuals, and so the definition is significantly more complex. But, by considering functions rather than just actions, we can define such equilibria and prove that they exactly characterize no policy regret play.
Finally, it becomes natural to determine the relationship between policy equilibria (which characterize no policy regret play) and CCEs (which characterize no external regret play). We show in Theorems 4.9 and 4.10 that the set of CCEs is a strict subset of policy equilibria. In other words, every CCE can be thought of as a policy regret equilibrium, but no policy regret play might not converge to a CCE.
2 Related work
The problem of minimizing policy regret in a fully adversarial setting was first studied by Merhav et al. (2002). Their work dealt specifically with the full observation setting and assumed that the utility (or loss) functions were m-memory bounded. They gave regret bounds in O(T^{2/3}). The follow-up work by Farias and Megiddo (2006) designed algorithms in a reactive bandit setting. However, their results were not in the form of regret bounds but rather introduced a new way to compare against acting according to a fixed expert strategy. Arora et al. (2012b) studied m-memory bounded adversaries both in the bandit and full information settings and provided extensions to more powerful competitor classes considered in swap regret and more general Φ-regret. Dekel et al. (2014) provided a lower bound in the bandit setting for switching cost adversaries, which also leads to a tight lower bound for policy regret in the order Ω(T^{2/3}). Their results were later extended by Koren et al. (2017a) and Koren et al. (2017b). More recently, Heidari et al. (2016) considered the multi-armed bandit problem where each arm’s loss evolves with consecutive pulls. The process according to which the loss evolves was not assumed to be stochastic but it was not arbitrary either – in particular, the authors required either the losses to be concave, increasing and to satisfy a decreasing marginal returns property, or decreasing. The regret bounds given are in terms of the time required to distinguish the optimal arm from all others.
A large part of reinforcement learning is also aimed at studying sequential decision making problems. In particular, one can define a Markov Decision Process (MDP) by a set of states equipped with transition distributions, a set of actions and a set of reward or loss distributions associated with each state action pair. The transition and reward distributions are assumed unknown and the goal is to play according to a strategy that minimizes loss or maximizes reward. We refer the reader to (Sutton and Barto, 1998; Kakade et al., 2003; Szepesvári, 2010) for general results in RL. MDPs in the online setting with bandit feedback or arbitrary payoff processes have been studied by Even-Dar et al. (2009a); Yu et al. (2009); Neu et al. (2010) and Arora et al. (2012a).
The tight connection between no-regret algorithms and correlated equilibria was established and studied by Foster and Vohra (1997); Fudenberg and Levine (1999); Hart and Mas-Colell (2000); Blum and Mansour (2007). A general extension to games with compact, convex strategy sets was given by Stoltz and Lugosi (2007). No external regret dynamics were studied in the context of socially concave games (Even-Dar et al., 2009b). More recently, Hazan and Kale (2008) considered more general notions of regret and established an equivalence result between fixed-point computation, the existence of certain no-regret algorithms, and the convergence to the corresponding equilibria. In a follow-up work by Mohri and Yang (2014) and Mohri and Yang (2017), the authors considered a more powerful set of competitors and showed that the repeated play according to conditional swap-regret or transductive regret algorithms leads to a new set of equilibria.
3 Policy regret in reactive versus strategic environments
Often, distant actions in the past influence an adversary less than more recent ones. The definition of policy regret (2) models this influence decay by assuming that the adversary is m-memory bounded for some m ∈ N. This assumption is somewhat stringent, however, since ideally we could model the current move of the adversary as a function of the entire past, even if actions taken further in the past have less significance. Thus, we extend the definition of Arora et al. (2012b) as follows. Definition 3.1. The m-memory policy regret at time T of a sequence of actions (a_t)_{t=1}^T with respect to a fixed action a in the action set A and the sequence of utilities (u_t)_{t=1}^T, where u_t : A^t → R and m ∈ N ∪ {∞}, is
P(T, a) = ∑_{t=1}^{T} u_t(a_1, . . . , a_{t−m}, a, . . . , a) − ∑_{t=1}^{T} u_t(a_1, . . . , a_t).
We say that the sequence (a_t)_{t=1}^T has sublinear policy regret (or no policy regret) if P(T, a) ≤ o(T), for all actions a ∈ A. Let us emphasize that this definition is just an extension of the standard policy regret definition and that, when the utility functions are m-memory bounded, the two definitions exactly coincide.
While the motivation for policy regret suggests that this should be a stronger notion compared to external regret, we show not only that these notions are incomparable in the general adversarial setting, but also that they are incompatible in a strong sense. Theorem 3.2. There exists a sequence of m-memory bounded utility functions (u_t)_{t=1}^T, where u_t : A^t → R, such that for any constant m ≥ 2 (independent of T), any action sequence with sublinear policy regret will have linear external regret and any action sequence with sublinear external regret will have linear policy regret.
The proof of the above theorem constructs a sequence for which no reasonable play can attain sublinear external regret. In particular, the only way the learner can have sublinear external regret is if they choose to have very small utility. To achieve this, the utility functions chosen by the adversary are the following. At time t, if the player chose to play the same action as their past 2 actions then they get utility 12 . If the player’s past two actions were equal but their current action is different, then they get utility 1, and if their past two actions differ then no matter what their current action is they receive utility 0. It is easy to see that the maximum utility play for this sequence (and the lowest 2-memory bounded policy regret strategy) is choosing the same action at every round. However, such an action sequence admits linear external regret. Moreover, every sublinear external regret strategy must then admit sublinear utility and thus linear policy regret.
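A small simulation of this construction (our own toy code; rounds with fewer than two past actions are assumed to contribute zero utility) makes the tension visible:

```python
def u(prefix):
    """2-memory utility from the construction: 1/2 if the current action equals
    the previous two, 1 if the previous two agree but the current action differs,
    and 0 whenever the previous two actions differ."""
    if len(prefix) < 3:
        return 0.0  # assumption: the first rounds contribute nothing
    prev2, prev1, cur = prefix[-3], prefix[-2], prefix[-1]
    if prev2 == prev1:
        return 0.5 if cur == prev1 else 1.0
    return 0.0

T = 1000
constant_play = (0,) * T                         # same action every round
switching_play = tuple(t % 2 for t in range(T))  # a different action every round

for name, play in [("constant", constant_play), ("switching", switching_play)]:
    total = sum(u(play[:t + 1]) for t in range(T))
    print(name, "cumulative utility:", total)
# The constant sequence earns about T/2 (near-optimal utility, hence near-zero policy
# regret, but linear external regret, since deviating for a single round always pays 1),
# whereas the switching sequence earns almost nothing: its external regret is small,
# yet its policy regret against the constant comparator grows linearly.
```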
As discussed in Section 1, in many realistic environments we can instead think of the adversary as a self-interested agent trying to maximize their own utility, rather than trying to maximize the regret of the player. This more strategic environment is better captured by the game theory setting, in particular a 2-player game where both players are trying to maximize their utility. Even though we have argued that external regret is not a good measure, our next result shows that minimizing policy regret in games can be done if both players choose their strategies according to certain no external regret algorithms. More generally, we adapt a classical notion of stability from the statistical machine learning setting and argue that if the players use no external regret algorithms that are stable, then the players will have no policy regret in expectation. To state the result formally we first need to introduce some notation.
Game definition: We consider a 2-player game G, with players 1 and 2. The action set of player i is denoted by A_i, which we think of as being embedded into R^{A_i} in the obvious way where each action corresponds to a standard basis vector. The corresponding simplex is ΔA_i. The action of player 1 at time t is a_t and of player 2 is b_t. The observed utility for player i at time t is u_i(a_t, b_t) and this is a bi-linear form with corresponding matrix P_i. We assume that the utilities are bounded in [0, 1]. Algorithm of the player: When discussing algorithms, we take the view of player 1. Specifically, at time t, player 1 plays according to an algorithm which can be described as Alg_t : (A_1 × A_2)^t → ΔA_1. We distinguish between two settings: full information, in which the player observes the full utility
function at time t (i.e., u_1(·, b_t)), and the bandit setting, in which the player only observes u_1(a_t, b_t). In the full information setting, algorithms like multiplicative weight updates (MWU, Arora et al. (2012c)) depend only on the past t − 1 utility functions (u_1(·, b_ℓ))_{ℓ=1}^{t−1}, and thus we can think of Alg_t as a function f_t : A_2^t → ΔA_1. In the bandit setting, though, the output at time t of the algorithm depends both on the previous t − 1 actions (a_ℓ)_{ℓ=1}^{t−1} and on the utility functions (i.e., the actions picked by the other player).
But even in the bandit setting, we would like to think of the player’s algorithm as a function f_t : A_2^t → ΔA_1. We cannot quite do this, however we can think of the player’s algorithm as a distribution over such functions. So how do we remove the dependence on A_1^t? Intuitively, if we fix the sequence of actions played by player 2, we want to take the expectation of Alg_t over possible choices of the t actions played by player 1. In order to do this more formally, consider the distribution µ over A_1^{t−1} × A_2^{t−1} generated by simulating the play of the players for t rounds. Then let µ_{b_{0:t}} be the distribution obtained by conditioning µ on the actions of player 2 being b_{0:t}. Now we let f_t(b_{0:t−1}) be the distribution obtained by sampling a_{0:t−1} from µ_{b_{1:t−1}} and using Alg(a_{0:t−1}, b_{0:t−1}). When taking expectations over f_t, the expectation is taken with respect to the above distribution. We also refer to the output p_t = f_t(b_{0:t−1}) as the strategy of the player at time t. Now that we can refer to algorithms simply as functions (or distributions over functions), we introduce the notion of a stable algorithm. Definition 3.3. Let f_t : A_2^t → ΔA_1 be a sample from Alg_t (as described above), mapping the past t actions in A_2 to a distribution over the action set A_1. Let the distribution returned at time t be p^1_t = f_t(b_1, . . . , b_t). We call this algorithm on average (m, S(T)) stable with respect to the norm ‖·‖, if for any b'_{t−m+1}, . . . , b'_t ∈ A_2 such that p̃^1_t = f_t(b_1, . . . , b_{t−m}, b'_{t−m+1}, . . . , b'_t) ∈ ΔA_1, it holds that E[∑_{t=1}^{T} ‖p^1_t − p̃^1_t‖] ≤ S(T), where the expectation is taken with respect to the randomization in the algorithm.
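To illustrate what Definition 3.3 measures, the sketch below recomputes a generic multiplicative-weights strategy (a simplified variant of MWU, not the exact algorithm analyzed in the paper) after re-randomizing only the last m opponent actions and reports the resulting ℓ1 change:

```python
import numpy as np

def mwu_strategy(opponent_actions, payoff, n_actions, eta):
    """Hedge/MWU distribution over A_1 after observing the opponent's actions;
    payoff[a, b] is player 1's utility for the pair (a, b)."""
    weights = np.ones(n_actions)
    for b in opponent_actions:
        weights = weights * np.exp(eta * payoff[:, b])
    return weights / weights.sum()

rng = np.random.default_rng(0)
n, T, m, eta = 3, 500, 2, 0.05
payoff = rng.random((n, n))
history = rng.integers(0, n, size=T)

perturbed = history.copy()
perturbed[-m:] = rng.integers(0, n, size=m)      # replace only b'_{t-m+1}, ..., b'_t
p = mwu_strategy(history, payoff, n, eta)
p_tilde = mwu_strategy(perturbed, payoff, n, eta)
print("l1 change from perturbing the last", m, "opponent actions:",
      np.abs(p - p_tilde).sum())
# On-average stability asks that such changes, summed over t = 1, ..., T, stay below
# S(T); for MWU each change is O(eta * m), which gives roughly m * sqrt(T) for the
# usual step size eta ~ 1/sqrt(T).
```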
Even though this definition of stability is given with respect to the game setting, it is not hard to see that it can be extended to the general online learning setting, and in fact this definition is similar in spirit to the one given in Saha et al. (2012). It turns out that most natural no external regret algorithms are stable. In particular we show, in the supplementary, that both Exp3 (Auer et al. (2002)) and MWU are on average (m, m√T) stable with respect to the ℓ_1 norm for any m ≤ o(√T). It is now possible to show that if each of the players is facing stable no external regret algorithms, they will also have bounded policy regret (so the incompatibility from Theorem 3.2 cannot occur in this case). Theorem 3.4. Let (a_t)_{t=1}^T and (b_t)_{t=1}^T be the action sequences of player 1 and 2 and suppose that they are coming from no external regret algorithms modeled by functions f_t and g_t, with regrets R_1(T) and R_2(T) respectively. Assume that the algorithms are on average (m, S(T)) stable with respect to the ℓ_2 norm. Then
E[P(T, a)] ≤ ‖P_1‖ S(T) + R_1(T),   E[P(T, b)] ≤ ‖P_2‖ S(T) + R_2(T),
where u_t(a_{1:t}) in the definition of P(T, a) equals u_1(a_t, g_t(a_{0:t−1})) and similarly in the definition of P(T, b), equals u_2(b_t, f_t(b_{0:t−1})). The above holds for any fixed actions b ∈ A_2 and a ∈ A_1. Here the matrix norm ‖·‖ is the spectral norm.
4 Policy equilibrium
Recall that unlike external regret, policy regret captures how other players in a game might react if a player decides to deviate from their strategy. The story is similar when considering different notions of equilibria. In particular Nash equilibria, correlated equilibria and CCEs can be interpreted in the following way: if player i deviates from the equilibrium play, their utility will not increase no matter how they decide to switch, provided that all other players continue to play according to the equilibrium. This sentiment is a reflection of what no external and no swap regret algorithms guarantee. Equipped with the knowledge that no policy regret sequences are obtainable in the game setting under reasonable play from all parties, it is natural to reason how other players would react if player i deviated and what would be the cost of deviation when taking into account possible reactions.
Let us again consider the 2-player game setup through the view of player 1. The player believes their opponent might be m-memory bounded and decides to proceed by playing according to a no policy
regret algorithm. After many rounds of the game, player 1 has computed an empirical distribution of play σ̂ over A := A_1 × A_2. The player is familiar with the guarantees of the algorithm and knows that, if instead, they changed to playing any fixed action a ∈ A_1, then the resulting empirical distribution of play σ̂_a, where player 2 has responded accordingly in a memory-bounded way, is such that E_{(a,b)∼σ̂}[u_1(a, b)] ≥ E_{(a,b)∼σ̂_a}[u_1(a, b)] − ε. This thought experiment suggests that if no policy regret play converges to an equilibrium, then the equilibrium is not only described by the deviations of player 1, but also through the change in player 2’s behavior, which is encoded in the distribution σ̂_a. Thus, any equilibrium induced by no policy regret play, can be described by tuples of distributions {(σ, σ_a, σ_b) : (a, b) ∈ A}, where σ_a is the distribution corresponding to player 1’s deviation to the fixed action a ∈ A_1 and σ_b captures player 2’s deviation to the fixed action b ∈ A_2. Clearly σ_a and σ_b are not arbitrary but we still need a formal way to describe how they arise.
For convenience, let’s restrict the memory of player 2 to be 1. Thus, what player 1 believes is that at each round t of the game, they play an action a_t and player 2 plays a function f_t : A_1 → A_2, mapping a_{t−1} to b_t = f_t(a_{t−1}). Finally, the observed utility is u_1(a_t, f_t(a_{t−1})). The empirical distribution of play, σ̂, from the perspective of player 1, is formed from the observed play (a_t, f_t(a_{t−1}))_{t=1}^T. Moreover, the distribution, σ̂_a, that would have occurred if player 1 chose to play action a on every round is formed from the play (a, f_t(a))_{t=1}^T. In the view of the world of player 1, the actions taken by player 2 are actually functions rather than actions in A_2. This suggests that the equilibrium induced by a no-policy regret play is a distribution over the functional space defined below. Definition 4.1. Let F_1 := {f : A_2^{m_1} → A_1} and F_2 := {g : A_1^{m_2} → A_2} denote the functional spaces of play of players 1 and 2, respectively. Denote the product space by F := F_1 × F_2. Note that when m_1 = m_2 = 0, F is in a one-to-one correspondence with A, i.e. when players believe their opponents are oblivious, we recover the action set studied in standard equilibria. For simplicity, for the remainder of the paper we assume that m_1 = m_2 = 1. However, all of the definitions and results that follow can be extended to the fully general setting of arbitrary m_1 and m_2; see the supplementary for details. Let us now investigate how a distribution π over F can give rise to a tuple of distributions (σ̂, σ̂_a, σ̂_b). We begin by defining the utility of π such that it equals the utility of a distribution over A, i.e., we want E_{(f,g)∼π}[u_1(f, g)] = E_{(a,b)∼σ}[u_1(a, b)]. Since utilities are not defined for functions, we need an interpretation of E_{(f,g)∼π}[u_1(f, g)] which makes sense. We notice that π induces a Markov chain with state space A in the following way. Definition 4.2. Let π be any distribution over F. Then π induces a Markov process with transition probabilities P[(a_2, b_2) | (a_1, b_1)] = ∑_{(f,g)∈F_1×F_2 : f(b_1)=a_2, g(a_1)=b_2} π(f, g). We associate with this Markov process the transition matrix M ∈ R^{A×A}, with M_{x_1,x_2} = P[x_2 | x_1] where x_i = (a_i, b_i). Since every Markov chain with a finite state space has a stationary distribution, we think of the utility of π as the utility of a particular stationary distribution σ of M. How we choose among all stationary distributions is going to become clear later, but for now we can think about σ as the distribution which maximizes the utilities of both players. Next, we need to construct σ_a and σ_b, which capture the deviation in play when player 1 switches to action a and player 2 switches to action b. The no-policy regret guarantee can be interpreted as E_{(f,g)∼π}[u_1(f, g)] ≥ E_{(f,g)∼π}[u_1(a, g(a))], i.e., if player 1 chose to switch to a fixed action (or equivalently, the constant function which maps everything to the action a ∈ A_1), then their utility should not increase. Switching to a fixed action a changes π to a new distribution π_a over F. This turns out to be a product distribution which also induces a Markov chain. Definition 4.3. Let π be any distribution over F. Let δ_a be the distribution over F_1 putting all mass on the constant function mapping all actions b ∈ A_2 to the fixed action a ∈ A_1. Let π_{F_2} be the marginal of π over F_2. The distribution resulting from player 1 switching to playing a fixed action a ∈ A_1 is denoted as π_a = δ_a × π_{F_2}.
This distribution induces a Markov chain with transition probabilities P[(a, b_2) | (a_1, b_1)] = ∑_{(f,g) : g(a_1)=b_2} π(f, g) and the transition matrix of this Markov process is denoted by M_a. The distribution π_b and matrix M_b are defined similarly for player 2.
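As a concrete illustration of Definitions 4.2 and 4.3 (the encoding of functions as tuples and the example distribution are ours), the following snippet builds the induced transition matrices for a toy two-action game:

```python
import itertools
import numpy as np

A1, A2 = [0, 1], [0, 1]                      # toy action sets
states = list(itertools.product(A1, A2))     # state space A = A1 x A2

# With memory 1, player 1 plays f: A2 -> A1 and player 2 plays g: A1 -> A2.
# Represent f as the tuple (f(b) for b in A2) and g as (g(a) for a in A1).
F1 = list(itertools.product(A1, repeat=len(A2)))
F2 = list(itertools.product(A2, repeat=len(A1)))

def transition_matrix(pi):
    """Markov chain over A induced by a distribution pi over F1 x F2 (Def. 4.2):
    from (a1, b1) move to (f(b1), g(a1)) with probability pi(f, g)."""
    M = np.zeros((len(states), len(states)))
    for (f, g), prob in pi.items():
        for i, (a1, b1) in enumerate(states):
            j = states.index((f[A2.index(b1)], g[A1.index(a1)]))
            M[i, j] += prob
    return M

def deviation_matrix(pi, a):
    """Chain M_a for player 1 deviating to the constant action a (Def. 4.3):
    only the marginal of pi over F2 matters, and the next first coordinate is a."""
    Ma = np.zeros((len(states), len(states)))
    for (_f, g), prob in pi.items():
        for i, (a1, b1) in enumerate(states):
            j = states.index((a, g[A1.index(a1)]))
            Ma[i, j] += prob
    return Ma

# Example: pi puts mass 1/2 on ("copy opponent", "copy opponent") and 1/2 on a pair
# of constant functions; every row of M and M_a sums to one.
pi = {((0, 1), (0, 1)): 0.5, ((1, 1), (0, 0)): 0.5}
assert all(f in F1 and g in F2 for (f, g) in pi)
print(transition_matrix(pi).round(2))
print(deviation_matrix(pi, a=1).round(2))
```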
Since the no policy regret algorithms we work with do not directly induce distributions over the functional space F but rather only distributions over the action space A, we would like to state all of our utility inequalities in terms of distributions over A. Thus, we would like to check if there is a stationary distribution σ_a of M_a such that E_{(f,g)∼π}[u_1(a, g(a))] = E_{(a,b)∼σ_a}[u_1(a, b)]. This is indeed the case as verified by the following theorem.
Theorem 4.4. Let π be a distribution over the product of function spaces F_1 × F_2. There exists a stationary distribution σ_a of the Markov chain M_a for any fixed a ∈ A_1 such that E_{(a,b)∼σ_a}[u_1(a, b)] = E_{(f,g)∼π}[u_1(a, g(a))]. Similarly, for every fixed action b ∈ A_2, there exists a stationary distribution σ_b of M_b such that E_{(a,b)∼σ_b}[u_2(a, b)] = E_{(f,g)∼π}[u_2(f(b), b)]. The proof of this theorem is constructive and can be found in the supplementary. With all of this notation we are ready to formally describe what no-policy regret play promises in the game setting in terms of an equilibrium. Definition 4.5. A distribution π over F_1 × F_2 is a policy equilibrium if for all fixed actions a ∈ A_1 and b ∈ A_2, which generate Markov chains M_a and M_b respectively, with stationary distributions σ_a and σ_b from Theorem 4.4, there exists a stationary distribution σ of the Markov chain M induced by π such that:
E_{(a,b)∼σ}[u_1(a, b)] ≥ E_{(a,b)∼σ_a}[u_1(a, b)],   E_{(a,b)∼σ}[u_2(a, b)] ≥ E_{(a,b)∼σ_b}[u_2(a, b)].  (3)
In other words, π is a policy equilibrium if there exists a stationary distribution σ of the Markov chain corresponding to π, such that, when actions are drawn according to σ, no player has incentive to change their action. For a simple example of a policy equilibrium see Section E in the supplementary.
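Verifying the two inequalities in (3) for a candidate π only requires a stationary distribution of each induced chain; a minimal numerical helper (ours, for illustration) is:

```python
import numpy as np

def stationary(M):
    """One stationary distribution of a row-stochastic matrix M, i.e. a left
    eigenvector for eigenvalue 1 (a finite chain always has at least one)."""
    vals, vecs = np.linalg.eig(M.T)
    s = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    s = np.abs(s)
    return s / s.sum()

# Toy 2-state chain; with sigma = stationary(M), sigma_a = stationary(M_a) and
# sigma_b = stationary(M_b), the two conditions in (3) reduce to dot products of
# these vectors with the players' utilities on the state space A.
M = np.array([[0.5, 0.5],
              [0.2, 0.8]])
print(stationary(M))   # approximately [0.286, 0.714]
```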
4.1 Convergence to the set of policy equilibria
We have tried to formally capture the notion of equilibria in which player 1’s deviation would lead to a reaction from player 2 and vice versa in Definition 4.5. This definition is inspired by the counterfactual guarantees of no policy regret play and we would like to check that if players’ strategies yield sublinear policy regret then the play converges to a policy equilibrium. Since the definition of sublinear policy regret does not include a distribution over functional spaces but only works with empirical distributions of play, we would like to present our result in terms of distributions over the action space A. Thus we begin by defining the set of all product distributions σ × σ_a × σ_b, induced by policy equilibria π as described in the previous subsection. Here σ_a and σ_b represent the deviation in strategy if player 1 changed to playing the fixed action a ∈ A_1 and player 2 changed to playing the fixed action b ∈ A_2 respectively, as constructed in Theorem 4.4. Definition 4.6. For a policy equilibrium π, let S_π be the set of all stationary distributions which satisfy the equilibrium inequalities (3), S_π := {σ × σ_a × σ_b : (a, b) ∈ A}. Define S = ∪_{π∈Π} S_π, where Π is the set of all policy equilibria.
Our main result states that the sequence of empirical product distributions formed after T rounds of the game, σ̂ × σ̂_a × σ̂_b, is going to converge to S. Here σ̂_a and σ̂_b denote the distributions of deviation in play, when player 1 switches to the fixed action a ∈ A_1 and player 2 switches to the fixed action b ∈ A_2 respectively. We now define these distributions formally. Definition 4.7. Suppose player 1 is playing an algorithm with output at time t given by f_t : A_2^t → ΔA_1, i.e. p^1_t = f_t(b_{0:t−1}). Similarly, suppose player 2 is playing an algorithm with output at time t given by p^2_t = g_t(a_{0:t−1}). The empirical distribution at time T is σ̂ := (1/T) ∑_{t=1}^{T} p_t, where p_t = p^1_t × p^2_t is the product distribution over A at time t. Further let (p^2_a)_t = g_t(a_{0:t−m}, a, . . . , a) denote the distribution at time t, provided that player 1 switched their strategy to the constant action a ∈ A_1. Let δ_a denote the distribution over A_1 which puts all the probability mass on action a. Let (p_a)_t = δ_a × (p^2_a)_t be the product distribution over A, corresponding to the change of play at time t. Denote by σ̂_a = (1/T) ∑_{t=1}^{T} (p_a)_t the empirical distribution corresponding to the change of play. The distribution σ̂_b is defined similarly.
Suppose that f_t and g_t are no-policy regret algorithms, then our main result states that the sequence (σ̂ × σ̂_a × σ̂_b)_T converges to the set S. Theorem 4.8. If the algorithms played by player 1 in the form of f_t and player 2 in the form of g_t give sub-linear policy regret sequences, then the sequence of product distributions (σ̂ × σ̂_a × σ̂_b)_{T=1}^{∞} converges weakly to the set S.
In particular if both players are playing MWU or Exp3, we know that they will have sublinear policy regret. Not surprisingly, we can show something slightly stronger as well. Let σ̃, σ̃_a and σ̃_b denote the empirical distributions of observed play corresponding to σ̂, σ̂_a and σ̂_b, i.e. σ̃ = (1/T) ∑_{t=1}^{T} δ_t, where δ_t denotes the Dirac distribution, putting all weight on the played actions at time t. Then these empirical distributions also converge to S almost surely.
4.2 Sketch of proof of the main result
The proof of Theorem 4.8 has three main steps. The first step defines the natural empirical Markov chains M̂, M̂_a and M̂_b from the empirical play (p_t)_{t=1}^{∞} (see Definition B.2) and shows that the empirical distributions σ̂, σ̂_a and σ̂_b are stationary distributions of the respective Markov chains. The latter is done in Lemma B.3. The next step is to show that the empirical Markov chains converge to Markov chains M, M_a and M_b induced by some distribution π over F. In particular, we construct an empirical distribution π̂ and distributions π̂_a and π̂_b corresponding to the players’ deviations (see Definition B.5), and show that these induce the Markov chains M̂, M̂_a and M̂_b respectively (Lemma B.7). The distribution π we want is now the limit of the sequence (π̂)_T. The final step is to show that π is a policy equilibrium. The proof goes by contradiction. Assume π is not a policy equilibrium; this implies that no stationary distribution of M and corresponding stationary distributions of M_a and M_b can satisfy inequalities (3). Since the empirical distributions σ̂, σ̂_a and σ̂_b of the play satisfy inequalities (3) up to an o(1) additive factor, we can show, in Theorem B.8, that in the limit, the policy equilibrium inequalities are exactly satisfied. Combined with the convergence of M̂, M̂_a and M̂_b to M, M_a and M_b, respectively, this implies that there exist stationary distributions of M, M_a and M_b satisfying (3), giving a contradiction.
We would like to emphasize that the convergence guarantee of Theorem 4.8 does not rely on there being a unique stationary distribution of the empirical Markov chains M̂, M̂_a and M̂_b or their respective limits M, M_a, M_b. Indeed, Theorem 4.8 shows that any limit point of {(σ̂, σ̂_a, σ̂_b)_T}_{T=1}^{∞} satisfies the conditions of Definition 4.5. The proof does not require that any of the respective Markov chains have a unique stationary distribution, but rather requires only that σ̂ has sublinear policy regret. We would also like to remark that {(σ̂, σ̂_a, σ̂_b)_T}_{T=1}^{∞} need not have a unique limit and our convergence result only guarantees that the sequence is going to the set S. This is standard when showing that any type of no regret play converges to an equilibrium, see for example Stoltz and Lugosi (2007).
4.3 Relation of policy equilibria to CCEs
So far we have defined a new class of equilibria and shown that they correspond to no policy regret play. Furthermore, we know that if both players in a 2-player game play stable no external regret algorithms, then their play also has sublinear policy regret. It is natural to ask if every CCE is also a policy equilibrium: if σ is a CCE, is there a corresponding policy equilibrium π which induces a Markov chain M for which σ is a stationary distribution satisfying (3)? We show that the answer to this question is positive: Theorem 4.9. For any CCE σ of a 2-player game G, there exists a policy equilibrium π which induces a Markov chain M with stationary distribution σ.
To prove this, we show that for any CCE we can construct stable no-external regret algorithms which converge to it, and so since stable no-external regret algorithms always converge to policy equilibria (Theorem 3.4), this implies the CCE is also a policy equilibrium.
However, we show the converse is not true: policy equilibria can give rise to behavior which is not a CCE. Our proof appeals to a utility sequence which is similar in spirit to the one in Theorem 3.2, but is adapted to the game setting. Theorem 4.10. There exists a 2-player game G and product distributions σ × σ_a × σ_b ∈ S (where S is defined in Definition 4.6 as the possible distributions of play from policy equilibria), such that σ is not a CCE of G. In Section E of the supplementary we give a simple example of a policy equilibrium which is not a CCE.
5 Discussion
In this work we gave a new twist on policy regret by examining it in the game setting, where we introduced the notion of policy equilibrium and showed that it captures the behavior of no policy
regret players. While our characterization is precise, we view this as only the first step towards truly understanding policy regret and its variants in the game setting. Many interesting open questions remain. Even with our current definitions, since we now have a broader class of equilibria to consider it is natural to go back to the extensive literature in algorithmic game theory on the price of anarchy and price of stability and reconsider it in the context of policy equilibria. For example Roughgarden (2015) showed that in “smooth games” the worst CCE is no worse than the worst Nash. Since policy equilibria contain all CCEs (Theorem 4.9), is the same true for policy equilibria?
Even more interesting questions remain if we change our definitions to be more general. For example, what happens with more than 2 players? With three or more players, definitions of “reaction” by necessity become more complicated. Or what happens when m is not a constant? No policy regret algorithms exist for superconstant m, but our notion of equilibrium requires m to be constant in order for the Markov chains to make sense. Finally, what if we compare against deviations that are more complicated than a single action, in the spirit of swap regret or Φ-regret?
From an online learning perspective, note that our notion of on average stable and the definition of m-memory boundedness are different notions of stability. Is there one unified definition of “stable” which would allow us to give no policy regret algorithms against stable adversaries even outside of the game setting?
Acknowledgments
This work was supported in part by NSF BIGDATA grant IIS-1546482, NSF BIGDATA grant IIS1838139, NSF CCF-1535987, NSF IIS-1618662, NSF CCF-1464239, and NSF AITF CCF-1535887. | 1. What is the main contribution of the paper in terms of defining bounded-memory policy regret and its relationship to coarse correlated equilibria?
2. What are the strengths and weaknesses of the paper's technical results, particularly in the context of learning in games?
3. How does the paper's definition of policy regret differ from existing definitions, and what are the implications of this modification?
4. What is the significance of the proposed policy equilibrium concept, and how does it compare to other equilibrium concepts in repeated games?
5. How does the paper address the issue of non-uniqueness of stationary distributions in its equilibrium concept?
6. Are there any presentation issues or ambiguities in the paper that need to be clarified or improved? | Review | Review
The paper proposes a definition of bounded-memory policy regret, then defines a corresponding equilibrium concept for two-player games, and proves results on (i) convergence (in a non-standard sense) of no-policy-regret strategies to policy equilibria, and (ii) relating policy equilibria to coarse correlated equilibria. The ideas presented in the paper are clearly motivated: a notion of regret which captures adaptive reaction of the adversary is useful to study. The technical results are also interesting in the context of learning in games. I found the paper hard to read, however, due to presentation issues. The definitions are often preceded by ambiguous statements. For example: - In the paragraph "Algorithm of the player" line 180: Alg_t is defined as a Alg_t: (A_1 \times A_2)^t \to \Delta A_1, but then redefined as a function from A_2^t \to \Delta A_1 in the full information case, and finally as a distribution over such functions in the bandit setting. In Definition 3.3, the second definition seems to be used. In the same definition, "a possibly random algorithm" is not clearly defined, since the randomness can refer either to randomization over the action set, or to randomization over functions f_t. Line 194: what is meant by "simulating the play of players"? It is possible the reader could guess a definition of \mu, but it would be much better to give a rigorous and unambiguous definition (e.g. in terms of a distribution of an appropriate Markov chain). This entire section should be replaced with an unambiguous mathematical description of the game setting. - In Section 4, the discussion leading to the main definition (Definition 4.5) often involves imprecise statements, e.g. "under reasonable play from all parties" (line 229), "The player believes their opponent might be" (line 231). \epsilon is undefined on line 237. - When switching between distributions over the action set A and the joint policy set F, the presentation can be improved. I believe it would be better to start by defining the equilibrium concept (Definition 4.5) then discuss its interpretation. - \tilde \sigma_a and \tilde \sigma_b (line 336) are not clearly defined. Besides presentation, several points require discussion and justification: 1) Modifying the definition of policy regret (Definition 3.1): overloading existing definitions has serious drawbacks. In this case it is even harder to justify, since the whole point of defining policy regret is to take into account the adaptivity of the adversary (a point that the paper highlighted several times). The sequence of plays in the proposed definition simply does not reflect a fixed action policy. 2) Justification of the new equilibrium concept (policy equilibrium). Policy equilibrium compares stationary distribution of Markov chains. This needs careful justification and interpretation. For example, do the players care about their stationary distributions because the game is played infinitely often? Why is this a better equilibrium concept than existing concepts for repeated games? How does it generalize to n players? The proposed concept only applies in a context of repeated play (unlike other concepts mentioned in the paper, e.g., Nash equilibria and correlated equilibria). It should be related and compared to other equilibrium concepts of repeated games. 3) The non-uniquess of stationary distributions also raises some issues (the ability of the players to select these distributions). 
The comment following Definition 4.5 ignores the issue by referring to "the stationary distribution", omitting that there may be more than one. ================= Thank you for your responses and clarifications, in particular regarding Definition 3.1, this addresses one of my concerns. I agree that Definition 4.5 does not rely on the uniqueness of the stationary distribution, but the point is rather the motivation of this new equilibrium concept: when there are multiple stationary distributions, the play could converge to a different stationary distribution than the one satisfying the guarantees, and this is problematic. As raised during discussion with other reviewers, a better justification could be that one can ensure uniqueness of the stationary distribution at a relatively small cost, by mixing in a uniform distribution. A more careful discussion should be included in the revision. |
NIPS | Title
Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification
Abstract
Imbalanced data pose challenges for deep learning based classification models. One of the most widely-used approaches for tackling imbalanced data is re-weighting, where training samples are associated with different weights in the loss function. Most of existing re-weighting approaches treat the example weights as the learnable parameter and optimize the weights on the meta set, entailing expensive bilevel optimization. In this paper, we propose a novel reweighting method based on optimal transport (OT) from a distributional point of view. Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set. The weights of the training samples are the probability mass of the imbalanced distribution and learned by minimizing the OT distance between the two distributions. Compared with existing methods, our proposed one disengages the dependence of the weight learning on the concerned classifier at each iteration. Experiments on image, text and point cloud datasets demonstrate that our proposed re-weighting method has excellent performance, achieving state-of-the-art results in many cases and providing a promising tool for addressing the imbalanced classification issue. The code has been made available at https://github.com/DandanGuo1993/reweight-imbalance-classification-with-OT.
1 Introduction
Deep neural networks (DNNs) have achieved remarkable success in various applications, which is undoubtedly inseparable from the high-quality large-scale datasets. Usually, the number of samples for each class in these datasets are manually selected resulting in balanced datasets. However, most real-world datasets are imbalanced, such as a few classes (a.k.a. head or majority class) occupy most of the data while most classes (a.k.a. tail or minority class) have a few samples. A model trained on the imbalanced training set but without considering such class imbalance would be significantly dominated by those majority classes, and thus underperform on a balanced test dataset. This can also be known as the long-tailed problem and exists in many domains, such as text classification [1, 2], object detection [3] and image classification [4–6].
There are rich research lines to solve the imbalance problem, including re-sampling [7–10], class-level or instance-level re-weighting [1, 2, 4, 11–18], meta-learning [4, 5, 15, 16, 19], two-stage methods
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
[4–6, 17] and post-hoc correction [20, 21]. Inspired by [2], re-weighting strategies can be roughly grouped into empirical re-weighting and automatic re-weighting. The former aims to design weights manually with the major insight that the minority class example will be assigned a larger weight value than that of the majority class [12–14]. However, manually setting weights can be less adaptive to different datasets [2]. The latter aims to assign adaptive weights to the examples through learning mechanisms [1, 2, 4, 15, 16]. As the representative automatic re-weighting method, L2RW [15] optimizes the weight vector as a learnable parameter with an unbiased meta set (i.e., validation set). Although L2RW and its followers have received widespread attention, most of them may be limited to optimizing the weights by the classification loss on the meta set: The gradient of weights is usually coupled with the to-be-learned classifier at each training iteration. Since classifier is the major concern in imbalanced issue [6], the dependence of weights on classifier at training stage may lead to inaccurate learning of the weights.
This paper develops a novel automatic re-weighting method for imbalanced classification based on optimal transport (OT). As discussed by Jamal et al. [4], the major challenge for imbalanced classification is essentially the mismatch between the imbalanced training dataset (seen by a machine learning model) and the balanced test set (used to test the learned model). To this end, we aim to view the learning of the weight vector as the distribution approximation problem. We adopt the two-stage learning manner motivated by [6], where stage 1 and stage 2 focus on learning the feature extractor with the standard cross-entropy loss and the classifier with our proposed method, respectively. Specifically, we represent the imbalanced training set as a discrete empirical distribution P over all samples within it and view the to-be-learned weight vector w as its probability measure. Then we represent the balanced meta set as a discrete empirical distribution Q over all samples within it (in the same space with P ), which has a uniform probability measure for being balanced. Therefore, the learning of a weight vector can be formulated as the process of learning the distribution P to be as close to the balanced distribution Q as possible, a process facilitated by leveraging the OT distance [22]. Notably, the cost function plays a paramount role when learning the transport plan for OT, where we use the features and ground-truth labels of samples to design it. Due to the flexibility of our method, we can also learn an explicit weight net directly from data like [16, 23] but with a different structure, optimized by OT loss instead of the classification loss on the meta set. Generally, at each training iteration at stage 2, we minimize the OT loss to learn the weight vector (or weight net) for the current mini-batch, which is further used to re-weight the training loss for optimizing the model. As we can see, the gradient of weights only relies on the OT loss and thus is independent of the classifier. More importantly, our proposed method is robust to the distribution Q. To save the memory consumption, we introduce the prototype-oriented OT loss by building a new distribution Q based on prototypes instead of samples (one prototype for each class). More importantly, our proposed method can achieve a reasonably good performance even if we randomly select a mini-batch from all prototypes to build Q, making our method applicable to datasets with a large number of classes.
We summarize our main contributions as follows: (1) We formulate the learning of weight vector or weight net as the distribution approximation problem by minimizing the statistical distance between to-be-learned distribution over samples from imbalanced training set and another balanced distribution over samples from the meta set. (2) We leverage the OT distance between the distributions to guide the learning of weight vector or weight net. (3) We apply our method to imbalanced classification tasks including image, text and point cloud. Experiments demonstrate that introducing the OT loss to learn the example weights can produce effective and efficient classification performance.
2 Related Work
Empirical Re-weighting A classic empirical re-weighting scheme is to provide the examples of each class with the same weight, such as inverse class frequency [11, 14]. It has been further improved by the class-balanced loss [13], which calculates the effective number of examples as class frequency. Focal Loss [12] uses the predicted probability to calculate higher weights for the hard examples and dynamically adjust the weights. LDAM-DRW [17] designs a label-distribution-aware loss function and adopts a deferred class-level re-weighting method (i.e., inverse class frequency).
Automatic Re-weighting The automatic re-weighting methods learn the weights with learning mechanisms. L2RW [15] adopts a meta-learning manner to learn the example weights, which are optimized by the classification loss on the balanced meta set. Hu et al. [1] further improve L2RW
by iteratively optimizing weights instead of re-estimation at each iteration. Meta-weight-net [16] aims to learn an explicit weight net directly from data and optimize it by a meta-learning manner. Meta-class-weight [4] defines the weight for each example as the combination of class-level weight (estimated by Cui et al. [13]) and instance-level weight, optimized with a meta-learning approach similar to L2RW. Influence-balanced loss (IB) is proposed to [18] re-weight samples by the magnitude of the gradient. Recently, Liu et al. [2] propose to update the weights and model under a constraint. Our method belongs to automatic re-weighting group, and the idea of building an explicit weight net is similar to Shu et al. [16]. However, the major difference is that we bypass the classification loss on the meta set and use OT to learn the weights from the view of distribution approximation, disengaging the dependence of the weight learning on the concerned classifier at each iteration.
Meta Learning and Two-stage Learning Recently, researchers have proposed to tackle the imbalance issue with meta-learning, which can be applied to build a Balanced Meta-Softmax (BALMS)[19], learn weights [4, 15, 16] or transformed semantic directions for augmenting the minority classes in MetaSAug [5]. Two-stage methods, where the first stage and second stage focus on representation learning and classifier learning, respectively, have been proved effective for solving the imbalanced issue [5, 6, 16, 24]. BBN [25] unifies two stages with a specific cumulative learning strategy.
Optimal Transport Recently, OT has been used to solve the regression problem under the covariate shift [26], unsupervised domain adaption [27, 28], including sample-level, class-level or domain-level weight vector. Although they also adopt the re-weighting strategy and OT distance, they are distinct form ours in terms of task and technical detail. Also, the dynamic importance weighting which adopts MMD to re-weight samples for label-noise and class-prior-shift tasks [29] is also different from ours, where we provide a more flexible way for learning the weights of samples and disengage the dependence of the weight learning on the concerned classifier at each iteration. To the best of our knowledge, the works that solve imbalanced classification problem with OT are still very limited. An oversampling method via OT (OTOS) [30] aims to make synthetic samples follow a similar distribution to that of minority class samples. However, ours is a novel re-weighting method based on OT, without augmenting samples. Another recent work is Optimal Transport via Linear Mapping (OTLM) [21], which performs the post-hoc correction from the OT perspective and proposes a linear mapping to replace the original exact cost matrix in OT problem. Different from OTLM that belongs to the post-hoc correction group and aims to learn refined prediction matrix, ours falls into the training-aware group and aims to re-weight the training classification loss.
3 Background
Imbalanced Classification Consider a training set D_train = {(x_i, y_i)}_{i=1}^{N}, where (x, y) is the input and target pair, x_i the i-th sample, y_i ∈ {0, 1}^K the associated one-hot label vector over K classes, and N the number of the entire training data. Besides, consider a small balanced meta set D_meta = {(x_j, y_j)}_{j=1}^{M}, where M is the amount of total samples and M ≪ N. Denote the model parameterized with θ as f(x, θ), where θ is usually optimized by empirical risk minimization over the training set, i.e., θ^* = argmin_θ (1/N) ∑_{i=1}^{N} ℓ(y_i, f(x_i; θ)). For notational convenience, we denote l_i^{train}(θ) = ℓ(y_i, f(x_i; θ)) to represent the training loss function of pair (x_i, y_i). However, the model trained by this method will prefer the majority class if the training dataset is imbalanced.
Learning to Re-Weight Examples To solve the imbalance issue, one kind of re-weighting method is to treat the weights as learnable parameters and learn a model that is fair to the minority and the majority classes by optimizing the weighted training loss. At each training iteration, the model is updated by
θ^*(w) = argmin_θ ∑_{i=1}^{N} w_i l_i^{train}(θ),  (1)
where w = (w_1, . . . , w_N)^T is the weight vector (usually with a simplex constraint) of all training examples. Then the optimal w is obtained by making the model parameter θ^*(w) from Eq. (1) minimize the classification loss on a balanced meta set, formulated as
w^* = argmin_w (1/M) ∑_{j=1}^{M} l_j^{meta}(θ^*(w)),  (2)
where l_j^{meta} is the loss function of pair (x_j, y_j) from the meta set and the updated w^* is used to ameliorate the model. Generally, the model θ consists of two key components, a feature extractor and a classifier, where
the classifier has been shown to be the component of primary concern in the imbalanced setting [6]. However, the gradient of the weights in Eq. (2) always depends on this classifier at each training iteration, which may result in inaccurate learning of the weights. Most automatic re-weighting methods learn the weight vectors or weight-related parameters (e.g., a weight net) following this line; see the previous works [4, 15, 16] for more details.
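For concreteness, the following PyTorch-style sketch (ours, assuming a recent PyTorch with torch.func; the toy linear model and synthetic tensors are placeholders) spells out the pseudo-update behind Eqs. (1)–(2) and shows how the gradient of w is obtained through the updated model, and hence through the classifier:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

model = nn.Linear(10, 3)                       # toy "classifier"; real models are larger
params = {k: v.detach().clone().requires_grad_(True) for k, v in model.named_parameters()}

x_tr, y_tr = torch.randn(32, 10), torch.randint(0, 3, (32,))   # imbalanced training batch
x_me, y_me = torch.randn(12, 10), torch.randint(0, 3, (12,))   # balanced meta batch
lr_inner = 0.1

w = torch.zeros(32, requires_grad=True)        # per-example weights (the learnable w)

# Pseudo-update theta*(w) with the weighted training loss, keeping the graph (Eq. 1).
logits = functional_call(model, params, (x_tr,))
per_example = F.cross_entropy(logits, y_tr, reduction="none")
grads = torch.autograd.grad((w * per_example).sum(), list(params.values()), create_graph=True)
pseudo = {k: v - lr_inner * g for (k, v), g in zip(params.items(), grads)}

# Meta loss at theta*(w); its gradient w.r.t. w flows through the updated classifier (Eq. 2).
meta_loss = F.cross_entropy(functional_call(model, pseudo, (x_me,)), y_me)
grad_w = torch.autograd.grad(meta_loss, w)[0]

# L2RW-style rule: clamp the negative gradient and normalize to obtain the weights.
w_new = torch.clamp(-grad_w, min=0.0)
w_new = w_new / w_new.sum().clamp_min(1e-8)
print(w_new)
```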
Optimal Transport Theory OT has been widely used to calculate the cost of transporting one probability measure to another in various machine learning problems, such as generative models [31], text analysis [32, 33], adversarial robustness [34], and meta learning [35, 36]. Among the rich theory of OT, this work presents a brief introduction to OT for discrete distributions; see Peyré and Cuturi [22] for more details. Consider p = ∑_{i=1}^{n} a_i δ_{x_i} and q = ∑_{j=1}^{m} b_j δ_{y_j} as two probability distributions, where x_i and y_j live in the same arbitrary space and δ is the Dirac function. Then, a ∈ Δ^n and b ∈ Δ^m denote elements of the probability simplexes of R^n and R^m, respectively. The OT distance between p and q can be expressed as:
OT(p, q) = min_{T∈Π(p,q)} ⟨T, C⟩,  (3)
where ⟨·, ·⟩ is the Frobenius dot-product and C ∈ R_{≥0}^{n×m} is the transport cost matrix constructed by C_{ij} = C(x_i, y_j). The transport probability matrix T ∈ R_{>0}^{n×m}, which satisfies Π(p, q) := {T | ∑_{i=1}^{n} T_{ij} = b_j, ∑_{j=1}^{m} T_{ij} = a_i}, is learned by minimizing OT(p, q). Directly optimizing Eq. (3) often comes at the cost of heavy computational demands, and OT with entropic regularization is introduced to allow the optimization at a small computational cost with sufficient smoothness [37].
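A standard way to solve the entropic-regularized version of Eq. (3) is Sinkhorn's algorithm; a compact sketch (not the authors' implementation) is:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.05, n_iters=200):
    """Entropic-regularized OT (a standard Sinkhorn sketch): returns a transport
    plan with row sums ~ a and column sums ~ b, together with the cost <T, C>."""
    K = np.exp(-C / reg)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]
    return T, float((T * C).sum())

rng = np.random.default_rng(0)
C = rng.random((4, 3))                    # 4 source points, 3 targets
a = np.array([0.1, 0.2, 0.3, 0.4])        # source weights (think: w)
b = np.ones(3) / 3                        # uniform target weights
T, cost = sinkhorn(a, b, C)
print(np.round(T, 3), cost)
```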
4 Re-weighting Method with Optimal Transport
This work views a training set as a to-be-learned distribution, whose probability measure is set as learnable weight vector w. We use OT distance to optimize w for re-weighting the training loss.
4.1 Main Objective
Given the imbalanced training set $\mathcal{D}_{\mathrm{train}}$, we can represent it as an empirical distribution over its $N$ pairs, where each pair $(x_i, y_i)_{\mathrm{train}}$ has sampling probability $w_i$ (i.e., its weight), defined as:
$$P(w) = \sum_{i=1}^{N} w_i\, \delta_{(x_i, y_i)_{\mathrm{train}}}, \qquad (4)$$
where $(x_i, y_i)_{\mathrm{train}}$ is the $i$-th pair from the training set and the learnable weight vector $w$ over all training examples lies in the probability simplex of $\mathbb{R}^{N}$. Since the meta set $\mathcal{D}_{\mathrm{meta}}$ is balanced across all classes and closely related to the training set, it is reasonable to assume that the meta set already follows the balanced data distribution that the training set aims to approximate. We therefore sample each pair from the meta set with equal probability and represent it with an empirical distribution Q:
$$Q = \sum_{j=1}^{M} \frac{1}{M}\, \delta_{(x_j, y_j)_{\mathrm{meta}}}, \qquad (5)$$
where $(x_j, y_j)_{\mathrm{meta}}$ is the $j$-th pair from the meta set. To learn $w$, unlike most automatic re-weighting methods, which minimize the classification loss on the meta set, we aim to enforce the to-be-learned distribution $P(w)$ to stay close to the balanced distribution $Q$. Specifically, we build the re-weighting method on the OT distance between $P(w)$ and $Q$:
$$\min_{w} \mathrm{OT}(P(w), Q) \overset{\mathrm{def.}}{=} \min_{w} \min_{T \in \Pi(P(w),Q)} \langle T, C\rangle, \qquad (6)$$
where the cost matrix $C \in \mathbb{R}_{\geq 0}^{N \times M}$ is described below and the transport probability matrix $T \in \mathbb{R}_{>0}^{N \times M}$ should satisfy $\Pi(P(w),Q) := \{T \mid \sum_{i=1}^{N} T_{ij} = 1/M, \; \sum_{j=1}^{M} T_{ij} = w_i\}$.
4.2 Cost Function
For notational convenience, we rewrite the model as $f(x,\theta)=f_2(f_1(x;\theta_1);\theta_2)$, where $f_1$, parameterized by $\theta_1$, denotes the representation-learning part before the classifier, and $f_2$, parameterized by $\theta_2$, denotes the classifier. Intuitively, the cost $C_{ij}$ measures the distance between pair $i$ in the training set and pair $j$ in the meta set, and it can be defined flexibly in different ways. We explore a few conceptually intuitive options for $C_{ij}$, although other reasonable choices can also be used.
Label-aware Cost As the first option, we can define $C_{ij}$ with the ground-truth labels of the two samples:
$$C_{ij} = d^{\mathrm{Lab}}\!\left(y_i^{\mathrm{train}}, y_j^{\mathrm{meta}}\right), \qquad (7)$$
where $d^{\mathrm{Lab}}(\cdot,\cdot)$ denotes a distance measure and $y_i^{\mathrm{train}}, y_j^{\mathrm{meta}}$ are the ground-truth label vectors of the two samples, respectively. Intuitively, if we use the Euclidean distance, then $C$ is a 0-1 matrix (we can rescale the non-zero constant to 1), i.e., $C_{ij}=0$ if $x_i^{\mathrm{train}}$ and $x_j^{\mathrm{meta}}$ are from the same class, and $C_{ij}=1$ otherwise. In this case the OT loss is influenced by neither the feature extractor $\theta_1$ nor the classifier $\theta_2$.
Feature-aware Cost Alternatively, we can define $C_{ij}$ purely based on the features of the samples:
$$C_{ij} = d^{\mathrm{Fea}}\!\left(z_i^{\mathrm{train}}, z_j^{\mathrm{meta}}\right), \qquad (8)$$
where $z_i^{\mathrm{train}} = f_1(x_i^{\mathrm{train}};\theta_1) \in \mathbb{R}^{E}$ and $z_j^{\mathrm{meta}} = f_1(x_j^{\mathrm{meta}};\theta_1) \in \mathbb{R}^{E}$ denote the $E$-dimensional representations of $x_i^{\mathrm{train}}$ and $x_j^{\mathrm{meta}}$, respectively, and $d^{\mathrm{Fea}}(\cdot,\cdot)$ denotes any commonly used distance measure; we empirically find the cosine distance to be a good choice. Clearly, if the features of $x_i^{\mathrm{train}}$ and $x_j^{\mathrm{meta}}$ are close, their cost is small. Here the OT loss is influenced by the feature extractor $\theta_1$.
Combined Cost Finally, we can use both features and labels to define $C_{ij}$:
$$C_{ij} = d^{\mathrm{Fea}}\!\left(z_i^{\mathrm{train}}, z_j^{\mathrm{meta}}\right) + d^{\mathrm{Lab}}\!\left(y_i^{\mathrm{train}}, y_j^{\mathrm{meta}}\right). \qquad (9)$$
Intuitively, $C_{ij}$ will be small if two samples have the same label and similar features. Empirically, we find that using $d^{\mathrm{Fea}} = 1-\mathrm{cosine}(\cdot,\cdot)$ and the Euclidean distance for $d^{\mathrm{Lab}}$ gives better performance. Interestingly, given the feature-aware cost (8) or the label-aware cost (7), the learned weight vector can be interpreted as an instance-level or a class-level re-weighting method, respectively. The weight vector learned from the combined cost can be interpreted as a combination of class-level and instance-level weights, even though, unlike prior work [4], no specialized design for two-component weights is required; see Fig. 1.
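The sketch below shows one way to assemble the combined cost of Eq. (9) from mini-batch features and class labels, assuming cosine distance for $d^{\mathrm{Fea}}$ and a 0-1 label distance; it is a schematic under these assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def combined_cost(z_train, y_train, z_meta, y_meta):
    """Cost matrix C of Eq. (9): cosine feature distance plus a 0-1 label distance.

    z_train: (B, E) training features,  y_train: (B,) class indices,
    z_meta:  (M, E) meta features,      y_meta:  (M,) class indices.
    """
    z_train = F.normalize(z_train, dim=1)
    z_meta = F.normalize(z_meta, dim=1)
    feat_cost = 1.0 - z_train @ z_meta.t()                      # d_Fea = 1 - cosine
    label_cost = (y_train[:, None] != y_meta[None, :]).float()  # d_Lab: 0 if same class, 1 otherwise
    return feat_cost + label_cost
```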
4.3 Learn the Weight Vector
Given the defined cost function, we adopt the entropy-regularized OT loss [37] to learn the weight vector. We thus rewrite (6) as the following optimization problem:
$$\min_{w} L_{\mathrm{OT}} = \left\langle C, T_{\lambda}^{*}(w)\right\rangle, \quad \text{subject to } T_{\lambda}^{*}(w) = \arg\min_{T \in \Pi(P(w),Q)} \langle T, C\rangle - \lambda H(T), \qquad (10)$$
where $\lambda > 0$ is a hyper-parameter for the entropic regularizer $H(T) = -\sum_{ij} T_{ij} \ln T_{ij}$. Note that (10) gives us a new perspective on the relationship between $w$ and $T$: $w$ is the parameter of the leader problem and $T$ is the parameter of the follower problem, which has lower priority. Accordingly, when we minimize (10) with respect to $w$ using gradient descent, we should differentiate through $T$. Below we investigate two ways to optimize the weight vector.
Optimizing w directly At each training iteration, we define $P(w)$ with the current $w$, use the Sinkhorn algorithm [37] to compute the OT loss, and then optimize $w$ by $w^{*} = \arg\min_{w} L_{\mathrm{OT}}$.
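A minimal sketch of this direct optimization is given below, using a differentiable PyTorch variant of the Sinkhorn sketch above; it assumes $w$ is kept on the simplex through a softmax over free logits and that gradients are propagated through a fixed number of unrolled Sinkhorn iterations. The tensor sizes, the random cost matrix, and the step size are placeholders, not the authors' exact code.

```python
import torch

def sinkhorn_plan(w, q, C, lam=0.1, n_iters=200):
    """Differentiable Sinkhorn: rows of T have marginal w, columns marginal q."""
    K = torch.exp(-C / lam)
    u = torch.ones_like(w)
    for _ in range(n_iters):
        v = q / (K.t() @ u)
        u = w / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)

B, K_cls, beta = 16, 10, 1e-3                 # mini-batch size, #prototypes, step size (placeholders)
C = torch.rand(B, K_cls)                      # stands in for the combined cost of Eq. (9)
logits = torch.zeros(B, requires_grad=True)   # w = softmax(logits) keeps w on the simplex
opt_w = torch.optim.SGD([logits], lr=beta)

w = torch.softmax(logits, dim=0)
q = torch.full((K_cls,), 1.0 / K_cls)         # uniform measure over the meta prototypes
T = sinkhorn_plan(w, q, C)
loss_ot = (C * T).sum()                       # L_OT = <C, T*_lambda(w)>
opt_w.zero_grad()
loss_ot.backward()                            # differentiates through the unrolled Sinkhorn iterations
opt_w.step()
```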
Amortizing the learning of w We also provide an alternative method by constructing an explicit weight net to output the example weights, whose structure can be designed flexibly. For example, we can build the following weight net and take the sample features as input:
$$w = \mathrm{softmax}(s), \qquad s_i = w_{\mathrm{att}} \tanh\!\left(W_{vz}\, z_i^{\mathrm{train}}\right), \qquad (11)$$
where $s_i$ is the $i$-th element of $s \in \mathbb{R}^{N}$, and $w_{\mathrm{att}} \in \mathbb{R}^{1 \times A}$ and $W_{vz} \in \mathbb{R}^{A \times E}$ are the learned parameters (biases are omitted for convenience), collectively denoted as $\Omega = \{w_{\mathrm{att}}, W_{vz}\}$. Denote by $S(z;\Omega)$ the weight net parameterized by $\Omega$, which can be optimized by $\Omega^{*} = \arg\min_{\Omega} L_{\mathrm{OT}}$.
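A possible PyTorch realization of the weight net in Eq. (11) is sketched below; the hidden size A and the bias-free linear layers follow the description above, while initialization and other details are unspecified here and would need to be chosen in practice.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Amortized weight net of Eq. (11): s_i = w_att * tanh(W_vz z_i), w = softmax(s)."""

    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.W_vz = nn.Linear(feat_dim, hidden_dim, bias=False)   # W_vz in R^{A x E}
        self.w_att = nn.Linear(hidden_dim, 1, bias=False)          # w_att in R^{1 x A}

    def forward(self, z):                          # z: (N, E) training features
        s = self.w_att(torch.tanh(self.W_vz(z)))   # (N, 1) scores
        return torch.softmax(s.squeeze(-1), dim=0) # weights over the batch, summing to 1
```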
5 Overall Algorithm and Implementations
To integrate our proposed method with deep learning frameworks, we adopt a stochastic setting, i.e., a mini-batch setting at each iteration. Following [4, 5], we adopt two-stage learning, where stage 1 trains the model $f(\theta)$ with the standard cross-entropy loss on the imbalanced training set, and stage 2 learns the weight vector $w$ while continuing to update the model $f(\theta)$. Generally, at stage 2, computing the optimal $\theta$ and $w$ requires two nested loops of optimization, which is expensive. Motivated by Hu et al. [1], we optimize $\theta$ and $w$ alternately, corresponding to (1) and (10) respectively, where $w$ is maintained and updated throughout training so that re-estimation from scratch is avoided at each iteration. The procedure of our method with $w$ optimized directly is shown in Algorithm 1, with the key steps highlighted as Steps (a), (b), and (c). Specifically, at each training iteration $t$: in Step (a), we compute $\hat{\theta}^{(t+1)}(w^{(t)}) = \{\hat{\theta}_1^{(t+1)}(w^{(t)}), \hat{\theta}_2^{(t+1)}(w^{(t)})\}$, where $\alpha$ is the step size for $\theta$; in Step (b), since the feature-based cost depends on $\hat{\theta}_1^{(t+1)}(w^{(t)})$, the OT loss relies on $\hat{\theta}_1^{(t+1)}(w^{(t)})$, and $\beta$ is the step size for $w$; in Step (c), we update the model parameters to $\theta^{(t+1)}$. We defer the corresponding procedure for the amortized learning of $w$, i.e., the updates of $\theta$ and $\Omega$.

Algorithm 1 Workflow of our re-weighting method for optimizing $\theta$ and $w$.
Require: datasets $\mathcal{D}_{\mathrm{train}}$ and $\mathcal{D}_{\mathrm{meta}}$, initial model parameters $\theta$ and weight vector $w$, hyper-parameters $\{\alpha, \beta, \lambda\}$
for $t = 1, 2, \dots, t_1$ do
    Sample a mini-batch $B$ from the training set $\mathcal{D}_{\mathrm{train}}$;
    Update $\theta^{(t+1)} \leftarrow \theta^{(t)} - \alpha \nabla_{\theta} L_B$, where $L_B = \frac{1}{|B|}\sum_{i \in B} \ell\big(y_i, f(x_i;\theta^{(t)})\big)$;
end for
for $t = t_1 + 1, \dots, t_1 + t_2$ do
    Sample a mini-batch $B$ from the training set $\mathcal{D}_{\mathrm{train}}$;
    Step (a): Update $\hat{\theta}^{(t+1)}(w^{(t)}) \leftarrow \theta^{(t)} - \alpha \nabla_{\theta} L_B$, where $L_B = \frac{1}{|B|}\sum_{i \in B} w_i^{(t)} \ell\big(y_i, f(x_i;\theta^{(t)})\big)$;
    Use $\mathcal{D}_{\mathrm{meta}}$ to build $Q$ in (12) and $B$ with $w^{(t)}$ to build $P(w^{(t)})$ in (4);
    Step (b): Compute $L_{\mathrm{OT}}\big(\hat{\theta}_1^{(t+1)}(w^{(t)}), w^{(t)}\big)$ with cost (9); update $w^{(t+1)} \leftarrow w^{(t)} - \beta \nabla_{w} L_{\mathrm{OT}}\big(\hat{\theta}_1^{(t+1)}(w^{(t)}), w^{(t)}\big)$;
    Step (c): Update $\theta^{(t+1)} \leftarrow \theta^{(t)} - \alpha \nabla_{\theta} L_B$, where $L_B = \frac{1}{|B|}\sum_{i \in B} w_i^{(t+1)} \ell\big(y_i, f(x_i;\theta^{(t)})\big)$;
end for
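To make Steps (a)-(c) concrete, the following schematic implements one stage-2 iteration, reusing the combined_cost and sinkhorn_plan helpers sketched earlier. The deep-copied lookahead model and the features(...) accessor are implementation assumptions rather than the authors' exact procedure, and for simplicity the gradient of $w$ is taken with the lookahead features detached.

```python
import copy
import torch
import torch.nn.functional as F

def stage2_iteration(model, opt_theta, x, y, logits_w, opt_w,
                     meta_feats, meta_labels, alpha=2e-5, beta=1e-3):
    """One iteration of Steps (a)-(c) in Algorithm 1 (schematic).

    model = feature extractor f1 followed by classifier f2; w = softmax(logits_w).
    meta_feats / meta_labels describe the K prototypes that build Q in Eq. (12).
    """
    # Step (a): tentative update theta_hat with the current weights.
    w = torch.softmax(logits_w, dim=0).detach()
    lookahead = copy.deepcopy(model)
    la_opt = torch.optim.SGD(lookahead.parameters(), lr=alpha)
    loss_a = (w * F.cross_entropy(lookahead(x), y, reduction='none')).mean()
    la_opt.zero_grad(); loss_a.backward(); la_opt.step()

    # Step (b): build the cost from the lookahead features, then update w with the OT loss.
    with torch.no_grad():
        z_train = lookahead.features(x)            # f1 of the lookahead model (assumed API)
    C = combined_cost(z_train, y, meta_feats, meta_labels)   # Eq. (9), sketched earlier
    w = torch.softmax(logits_w, dim=0)
    q = torch.full((meta_feats.size(0),), 1.0 / meta_feats.size(0))
    loss_ot = (C * sinkhorn_plan(w, q, C)).sum()
    opt_w.zero_grad(); loss_ot.backward(); opt_w.step()

    # Step (c): update theta with the refreshed weights.
    w_new = torch.softmax(logits_w, dim=0).detach()
    loss_c = (w_new * F.cross_entropy(model(x), y, reduction='none')).mean()
    opt_theta.zero_grad(); loss_c.backward(); opt_theta.step()
```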
Discussion From Step (b), we find that the gradient of $w$ is unrelated to the classifier $\theta_2$ regardless of which cost function is chosen. If we use the label-aware cost or freeze the feature extractor $\theta_1$ trained in the first stage, the OT loss in Step (b) further reduces to $L_{\mathrm{OT}}(w^{(t)})$, and only Steps (b)-(c) are needed at each iteration. This differs from most automatic re-weighting methods, where the gradient of $w$ is always coupled with the to-be-learned model $\{\theta_1, \theta_2\}$, or with the classifier $\theta_2$ when $\theta_1$ is frozen, through minimizing the classification loss on the meta set.
Prototype-oriented OT loss (POT) Recall that we represent the balanced meta set with $M$ samples as the distribution $Q$ in (5), where $M/K$ is the number of samples per class and is usually larger than 1. Computing the OT loss then requires learning a $B \times M$ transport matrix at each iteration. To improve the efficiency of the algorithm, we average all samples of each class in the meta set to obtain its prototype and propose a new distribution $Q$ over the $K$ prototypes:
$$Q = \sum_{k=1}^{K} \frac{1}{K}\, \delta_{(\hat{x}_k, y_k)_{\mathrm{meta}}}, \qquad \hat{x}_k = \frac{K}{M} \sum_{j=1}^{M/K} x_{kj}^{\mathrm{meta}}, \qquad (12)$$
so that the POT loss only requires a $B \times K$ transport matrix. Owing to the robustness of our method to $Q$, when dealing with a large number of classes we can randomly sample a mini-batch from the $K$ prototypes at each iteration to build $Q$.
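A small sketch of building the prototype distribution Q of Eq. (12) by class-averaging the meta set is given below; it assumes the meta samples (or their features) are stacked in a single tensor with integer class labels.

```python
import torch

def class_prototypes(x_meta, y_meta, num_classes):
    """Prototype per class (Eq. 12): mean of the meta-set samples of that class.

    x_meta: (M, ...) meta samples or their features, y_meta: (M,) class indices.
    Returns a (K, ...) tensor of prototypes; Q places mass 1/K on each prototype.
    """
    protos = torch.stack([x_meta[y_meta == k].mean(dim=0) for k in range(num_classes)])
    q = torch.full((num_classes,), 1.0 / num_classes)
    return protos, q
```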
6 Experiments
We conduct extensive experiments to validate the effectiveness of our proposed method on imbalanced text, image, and point cloud classification tasks. Notably, unlike the imbalanced image and point cloud cases, for text classification we find that optimizing the weight net works better than optimizing the weight vector directly. We therefore optimize the weight vector for the image and point cloud cases and build a weight net for the text case. Unless specified otherwise, we adopt the combined cost, set the hyper-parameter of the entropic regularizer to λ = 0.1, and set the maximum number of iterations in the Sinkhorn algorithm to 200. We define the imbalance factor (IF) of a dataset as the ratio between the numbers of samples in its largest and smallest classes.
6.1 Experiments on Imbalanced Image Classification
Datasets and Baselines We evaluate our method on CIFAR-LT-10, CIFAR-LT-100, ImageNet-LT and Places-LT. We create CIFAR-LT-10 (CIFAR-LT-100) from CIFAR-10 (CIFAR-100) [38] by downsampling the samples per class with IF ∈ {200, 100, 50, 20} [5, 13]. ImageNet-LT is built from the classic ImageNet with 1000 classes [39] and IF = 1280/5 [5, 24]. Places-LT is created from Places-2 [40] with 365 classes and IF = 4980/5 [4, 24]. We randomly select 10 training images per class as the meta set [5]; see more details in Appendix B. We consider the following baselines: (1) Cross-entropy (CE), the model trained on the imbalanced training set with the CE loss. (2) Empirical re-weighting methods, such as Focal loss [12], Class-balanced (CB) loss [13] and LDAM-DRW [17]. (3) Automatic re-weighting methods, including L2RW [15], IB [18], Meta-Weight-Net [16] and Meta-class-weight [4]. (4) Meta-learning methods, including MetaSAug [5] and the above methods of [4, 15, 16, 19]. (5) Two-stage methods, such as OLTR [24], cRT [6], LWS [6], BBN [25] and the methods of [4, 5].
Experimental details and results on CIFAR-LT For a fair comparison, we use ResNet-32 [41] as the backbone on CIFAR-LT-10 and CIFAR-LT-100. Following Li et al. [5], at stage 1 we train for 200 epochs and set the learning rate α of θ to 0.1, decayed by 1e−2 at the 160th and 180th epochs. At stage 2, we train for 40 epochs, set α to 2e−5 and the learning rate β of the weights to 1e−3. We use the SGD optimizer with momentum 0.9 and weight decay 5e−4, and set the batch size to 16. We list the recognition results of different methods on CIFAR-LT-10 and CIFAR-LT-100 with different imbalance factors in Table 1. We report the average result of 5 random runs without standard deviations, which are of small scale (e.g., 1e−2). We can see that our re-weighting method outperforms CE training by a large margin and performs better than the empirical and automatic re-weighting methods. Remarkably, our proposed method outperforms the competing MetaSAug, which conducts a meta semantic augmentation approach to learn appropriate class-wise covariance matrices, when IF is 200, 100 and 50. Importantly, as the training data becomes more imbalanced, our method becomes more advantageous. Even though our proposed method is inferior to MetaSAug when the dataset is less imbalanced (IF = 20), it still achieves competitive results and surpasses the related re-weighting methods. This suggests that our proposed method can be used to enhance imbalanced classification without designing complicated models or augmenting samples on purpose.
To understand our method more comprehensively, we provide a series of ablation studies on CIFAR-LT-100 with IF = 200 in Table 2. Firstly, to explore the impact of the cost function, we use different cost functions for the OT loss. We can see that the combined cost performs better than the label-aware cost and the feature-aware cost, confirming the validity of combining features and labels to define the cost. Besides, using either the label-aware or the feature-aware cost alone still achieves acceptable performance, indicating the usefulness of the OT loss in the imbalanced setting. Secondly, to explore the robustness to the meta distribution Q, we adopt three ways to build Q: (1) using the prototypes defined in Eq. (12) (K samples); (2) using all samples as defined in Eq. (5) (10*K samples); (3) randomly sampling one point from each class of the meta set (K samples). We find that the prototype-based meta set performs best, while the performance with the random-sample or whole meta set remains competitive, which demonstrates the robustness of our proposed method to the distribution Q and the benefit of using prototypes to build Q. Thirdly, we compare two ways of learning w at each iteration: re-estimating w from scratch versus maintaining and updating w throughout training (i.e., iteratively optimizing the weights). We find that iterative optimization performs better.
Since the cost function is essential in optimizing the OT loss, we are interested in examining the weight vectors learned with different cost functions. Here, we use CIFAR-LT-10 and randomly choose {10, 9, ..., 1} training samples from classes {1, 2, ..., 10}, obtaining 55 samples that are used to build the distribution P. Besides, the 10 prototypes from the meta set are used to build the distribution Q. Given the different cost functions, we show the learned weight vectors of the 55 training samples in Fig. 1, which exhibit very different properties. Specifically, the label-aware cost and the feature-aware cost lead to class-level and sample-level weights, respectively. This is reasonable: the label-aware cost only determines whether the two samples (from the meta set and the training set) belong to the same class, resulting in a class-level measure, whereas the feature-aware cost measures the distance between samples at the sample level, where each sample has its own feature. More interestingly, the weights learned with the combined cost possess the characteristics of class-level and sample-level weights simultaneously: example weights of different classes are well separated while example weights within the same class stay close. Coincidentally, using the combined cost to define the OT loss achieves the same goal as [4], which explicitly considers class-level and sample-level weights. Besides, we find that the learned example weights of the minority classes are usually more prominent than those of the majority classes.
To verify whether our method improves the performance on minority classes, we plot the confusion matrices of CE, MetaSAug, and our method on CIFAR-LT-10 with IF = 200 in Fig. 2. As expected, although CE training can almost perfectly classify the samples of the majority classes, it suffers severe performance degradation on the minority classes. MetaSAug improves the accuracies of the minority classes, but there is still a big gap between the performance on the minority classes and that on the majority classes. In contrast, our method does not show a clear preference for any particular class and outperforms this strong baseline in overall performance, which is the goal of an imbalanced classification task.
Experimental details and results on Places-LT and ImageNet-LT Following [6], we employ ResNet-152 pre-trained on the full ImageNet as the backbone on Places-LT. For stage 1, we set the initial learning rate to 0.01, decayed by 1e−1 every 10 epochs. In stage 2 of our method, we only fine-tune the last fully-connected layer for training efficiency and set α to 1e−4 and β to 1e−3 within 50 epochs. The mini-batch size is 32 and the optimizer is SGD with momentum 0.9 and weight decay 5e−3. As shown in Table 3, our method outperforms all baselines, which further suggests that it performs well in the extreme imbalance setting with IF = 4980/5. For a fair comparison, we implement our method on ImageNet-LT under the same experimental conditions as [5], from which we take the results of the other comparison methods. We use ResNet-50 [41] as the backbone on ImageNet-LT. In stage 1, we run 200 epochs and decay the learning rate by 0.1 at the 60th and 80th epochs. In stage 2, we run our method for 50 epochs, set the learning rate α to 2e−5 and β to 1e−2, and only fine-tune the last fully-connected layer for training efficiency. We use the SGD optimizer with momentum 0.9 and weight decay 5e−4, and set the batch size to 128. The results of different models on ImageNet-LT, reported in Table 4, indicate the effectiveness of our proposed method when compared with the strong baseline MetaSAug. Besides, we further consider randomly sampling a mini-batch of size 100 from all prototypes at each iteration to build Q, whose performance is comparable to building Q from all prototypes. Thus, with a stochastic setting for Q, our proposed method can be applied to imbalanced training sets with a large number of classes. We defer the time complexity analysis and additional quantitative and qualitative results on different image datasets to Appendix B.
6.2 Experiments on Imbalanced Text Classification
Datasets and settings Following [1, 2], we adopt the popular SST-2 for 2-class and SST-5 for 5-class sentence sentiment classification [43]. For a fair comparison, we use the same imbalanced datasets and settings as [2]. Specifically, we set class 1 as the minority class and the rest as majority classes, where the number of examples in each majority class is fixed to 1000 (SST-2) and 500 (SST-5), and different imbalance settings are obtained by varying the number of examples in the minority class. Besides, the number of samples in the meta set is 10 for each class. We use the BERT (base, uncased) model [44] as the feature extractor and a simple 3-layer fully-connected network (FCN), whose structure is given in Appendix C, as the classifier. To conduct subsequent experiments on strong models, following [2], we use an additional balanced training set (500 samples per class) to fine-tune the BERT model; it is randomly selected from the remaining examples of each dataset, excluding the imbalanced training set, the meta set and the to-be-evaluated test set. Based on the fine-tuned BERT, we adopt the two-stage manner for the imbalanced text datasets, where we train BERT + FCN with the CE loss in the first stage and train the FCN with our proposed method while freezing BERT in the second stage. The settings of the training process are deferred to Appendix C.
Baselines We consider the following methods: (1) Vanilla BERT, the vanilla pretrained language model. (2) Fine-tuned BERT, where the pretrained BERT is fine-tuned on an additional balanced training set. (3) Fine-tuned BERT + CE, the fine-tuned BERT model followed by the FCN, which is further trained with the CE loss on the imbalanced training set following [1, 2]. (4) Automatic re-weighting methods, including the method of Hu et al. [1] and constraint-based re-weighting [2]. Since few works consider imbalanced text classification, we further consider (5) Empirical re-weighting methods, including re-weighting with the inverse class frequency (i.e., Proportion) [11, 14] and LDAM-DRW [17], and (6) Logit adjustment [20], using their official codes and settings.1 2 We repeat all experiments 10 times and report the mean and standard deviation.
Experimental details and results on SST-2 and SST-5 We report the text classification results of the compared methods under different imbalance factors in Table 5. Our proposed method outperforms all competing methods in all imbalance settings, which demonstrates its effectiveness. Although all methods achieve acceptable performance under slight imbalance, the performance of the three baselines (Vanilla BERT, Fine-tuned BERT and Fine-tuned BERT + CE) drops dramatically as the imbalance increases, indicating the importance of specialized methods for handling imbalanced training datasets. Logit adjustment (post-hoc correction) is very competitive with ours on SST-2, but only produces results similar to the three above-mentioned baselines on SST-5. In contrast, ours is robust to both the imbalance factor and the number of classes, and the results are consistent with the image case. We provide more results in Appendix C.4. In addition to 1D text and 2D images, we further investigate the robustness of our method on 3D point cloud data, where we use the popular ModelNet10 [45]; the experiments are deferred to Appendix D.
7 Conclusion
This paper introduces a novel automatic re-weighting method for imbalanced classification based on optimal transport (OT). The method represents the imbalanced training set as a to-be-learned distribution over its training examples, each of which is associated with a probability weight. Similarly, it views the balanced meta set as a balanced distribution over its examples. By minimizing the OT distance between the two distributions under the defined cost function, the learning of the weight vector is formulated as a distribution approximation problem. Our re-weighting method bypasses the commonly-used classification loss on the meta set and uses OT to learn the weights, disengaging the weight learning from the concerned classifier at each iteration. This approach differs from most existing re-weighting methods and may provide new insights for future work. Experimental results on a variety of imbalanced datasets of both images and texts validate the effectiveness and flexibility of our proposed method.
Acknowledgements. This work is partially supported by a grant from the Shenzhen Science and Technology Program (JCYJ20210324120011032) and Shenzhen Institute of Artificial Intelligence and Robotics for Society.
1 https://github.com/kaidic/LDAM-DRW
2 https://github.com/google-research/google-research/tree/master/logit_adjustment

1. What is the focus and contribution of the paper regarding imbalanced classification?
2. What are the strengths of the proposed approach, particularly its novelty and effectiveness?
3. What are the weaknesses of the paper, especially regarding its motivation and comparisons with other works?
4. Do you have any concerns or suggestions regarding the method's fairness and practicality?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
Imbalanced classification is a significant problem: many real-world datasets are imbalanced, and models trained on them can yield poor performance. To tackle this problem, this paper proposes a novel re-weighting method using optimal transport (OT). Using a balanced meta set, the weight vectors are learned by minimizing the distance between the imbalanced training set and the balanced meta set. The paper demonstrates the effectiveness on multiple benchmark datasets, including image and text classification.
Strengths And Weaknesses
Strengths
The idea of minimizing the distance between an imbalanced training set and a balanced meta set is intuitive, and easy to understand.
Re-weighting from optimal transport (OT) perspective is novel.
The proposed method shows good performance on multiple benchmark datasets, including image and text classification.
The paper is easy to follow.
Weaknesses
The motivation for using OT for re-weighting is not fully convincing. What have other automatic re-weighting methods been unable to solve, and how does OT benefit re-weighting?
Using a balanced meta set seems unfair compared with other methods that do not use one. For example, ten training samples per class are selected from the CIFAR dataset. In the case of IF=100, the least frequent class has only 5 samples on CIFAR-100-LT. This means the meta set has more samples than the training set for that class, which can give significant information about the minority classes. For a fair comparison, in my opinion, other methods should also use the meta set in their training.
This paper missed some important recent baselines. For example, [1] automatically re-weights samples using the gradients and does not require a meta-set. [2] proposed Balanced Softmax to fix the label distribution shift between training and testing.
[1] Influence-Balanced Loss for Imbalanced Visual Classification, ICCV, 2021.
[2] Balanced meta-softmax for long-tailed visual recognition, NeurIPS, 2020.
Questions
Please refer to the weaknesses.
Limitations
The authors addressed the limitations and potential negative societal impact. |
Imbalanced Classification Consider a training set Dtrain ={(xi, yi)}Ni=1, where (x, y) is the input and target pair, xi the i-th sample, yi ∈ (0, 1)K the one-hot associated label vector over K classes, and N the number of the entire training data. Besides, consider a small balanced meta set Dmeta = {(xj , yj)}Mj=1, where M is the amount of total samples and M≪N . Denote the model parameterized with θ as f(x,θ), where θ is usually optimized by empirical risk minimization over the training set, i.e., θ∗ = argminθ 1N ∑N i=1 ℓ (yi, f (xi;θ)). For notational convenience, we denote l train i (θ) = ℓ (yi, f (xi;θ)) to represent the training loss function of pair (xi, yi). However, the model trained by this method will prefer the majority class if the training dataset is imbalanced.
Learning to Re-Weight Examples To solve the imbalanced issue, a kind of re-weighting methods is to treat the weights as the learnable parameter and learn a fair model to the minority and the majority classes by optimizing the weighted training loss. At each training iteration, the model is updated by
θ∗(w) = argmin θ ∑N i=1 wil train i (θ), (1)
where w=(w1, . . . , wN ) T is the weight vector (usually with a simplex constraint) of all training examples. Then the optimal w is obtained by making the model parameter θ∗(w) from Eq. (1) minimize the classification loss on a balanced meta set, formulated as
w∗ = argmin w
1
M ∑M j=1 lmetaj (θ ∗(w)) , (2)
where lmetaj is the loss function of pair (xj , yj) from meta set and the updated w ∗ is used to ameliorate the model. Generally, model θ consists of two key components, feature extractor and classifier, where
the classifier has been proved to be the major concerning part in imbalanced issue [6]. However, the gradient of weights in Eq. (2) always depends on the to-be-concerned classifier at each training iteration, which may result in inaccurate learning of the weights. Most automatic re-weighting methods learn the weight vectors or weight-related parameters (e.g., weight net) following this line; see more details from the previous works [4, 15, 16].
Optimal Transport Theory OT has been widely used to calculate the cost of transporting one probability measure to another in various machine learning problems, such as generative models [31], text analysis [32, 33], adversarial robustness [34], and meta learning [35, 36]. Among the rich theory of OT, this work presents a brief introduction to OT for discrete distributions; see Peyré and Cuturi [22] for more details. Consider p = ∑n i=1 aiδxi and q = ∑m j=1 bjδyj as two probability distributions, where xi and yj live in the arbitrary same space and δ is the Dirac function. Then, we can denote a ∈ ∆n and b ∈ ∆m as the probability simplex of Rn and Rm, respectively. The OT distance between p and q can be expressed as:
OT(p, q) = min T∈Π(p,q) ⟨T,C⟩, (3)
where ⟨·, ·⟩ is the Frobenius dot-product and C ∈ Rn×m≥0 is the transport cost matrix constructed by Cij = C(xi, yj). The transport probability matrix T ∈ Rn×m>0 , which satisfies Π(p, q) := {T | ∑n i=1 Tij = bj , ∑m j=1 Tij = ai}, is learned by minimizing OT(p, q). Directly optimizing Eq. (3) often comes at the cost of heavy computational demands, and OT with entropic regularization is introduced to allow the optimization at small computational cost in sufficient smoothness [37].
4 Re-weighting Method with Optimal Transport
This work views a training set as a to-be-learned distribution, whose probability measure is set as learnable weight vector w. We use OT distance to optimize w for re-weighting the training loss.
4.1 Main Objective
Given the imbalanced training set Dtrain , we can represent it as an empirical distribution over N pairs, where each pair (xi, yi)train has the sample probability wi (i.e., the weight), defined as:
P (w) = ∑N
i=1 wiδ(xi,yi)train , (4)
where (xi, yi)train is the i-th pair from the training set and the learnable weight vector w of all training examples means probability simplex of RN . Since the meta set Dmeta is balanced for all classes and closely related with the training set, it is reasonable to assume that meta set has already achieved the balanced data distribution that the training set aims to approximate. For meta set, we thus can sample each pair from it with equal probability and present it with an empirical distribution Q:
Q = ∑M
j=1
1
M δ(xj ,yj)meta , (5)
where (xj , yj)meta is the j-th pair from the meta set. To learn w, different from most automatic re-weighting methods, which minimize the classification loss on the meta set, we aim to enforce the to-be-learned distribution P (w) to stay close to the balanced distribution Q. Here, we explore the re-weighting method by adopting the OT distance between P (w) and Q:
min w OT(P (w), Q) def.= min w min T∈Π(P (w),Q) ⟨T,C⟩, (6)
where cost matrix C∈RN×M≥0 is described below and transport probability matrix T∈R N×M >0 should satisfy Π(P (w), Q) := {T | ∑N i=1 Tij=1/M, ∑M j=1 Tij=wi}.
4.2 Cost Function
For notation convenience, we reformulate the model as f(x,θ) = f2(f1(x;θ1);θ2), where f1 parameterized with θ1 denotes the representation learning part before the classifier, and f2 parameterized
with θ2 denotes the classifier. Intuitively, the cost Cij measures the distance between pair i in training set and pair j in meta set, which can be flexibly defined in different ways. We explore a few conceptually intuitive options of Cij , although other reasonable choices can also be used.
Label-aware Cost As the first option, we can define Cij with the ground-truth labels of two samples:
Cij = d Lab(ytraini , y meta j ), (7)
where dLab(·, ·) also denotes a distance measure, and ytraini , ymetaj are the ground-truth label vectors of the two samples, respectively. Intuitively, if we use the euclidean distance, then C is a 0−1 matrix (we can transfer the non-zero constant to 1), i.e., Cij=0 if xtraini and x meta j are from the same class, and Cij=1 otherwise. Now the OT loss is influenced by neither feature extractor θ1 nor classifier θ2.
Feature-aware Cost Besides, we can define Cij purely based on the features of samples:
Cij = d Fea(ztraini , z meta j ), (8)
where ztraini = f1(x train i ;θ1) ∈ RE and zmetaj = f1(xmetaj ;θ1) ∈ RE denote the E-dimensional representation of xtraini and x meta j , respectively. d
Fea(·, ·) denotes any commonly used distance measure and we empirically find the cosine distance is a good choice. It is easy to see that if xtraini and x meta j ’s features are close, their cost is small. Here the OT loss is influenced by the feature extractor θ1.
Combined Cost Finally, we can use both features and labels to define Cij , denoted as
Cij = d Fea(ztraini , z meta j ) + d Lab(ytraini , y meta j ). (9)
Intuitively, Cij will be small if two samples have the same label and similar features. Empirically, we find that using the dFea=1−cosine(·, ·) and euclidean distance for dLab gives better performance. Interestingly, given the feature-aware cost (8) or label-aware cost (7), the learned weight vector can be interpreted as the instance-level or class-level re-weighting method, respectively. The weight vector learned from the combined cost can be interpreted as the combination of class-level and instance-level weights, although no specialized design for two-component weights like previous [4]; see Fig. 1.
4.3 Learn the Weight Vector
Given the defined cost function, we adopt the entropy regularized OT loss [37] to learn the weight vector. We thus rewrite (6) as the following optimization problem:
min w LOT = ⟨C,T∗λ(w)⟩ , subject to T∗λ(w) = argmin T∈Π(P (w),Q) ⟨T,C⟩ − λH(T), (10)
where λ > 0 is a hyper-parameter for the entropic constraint H(T) =− ∑
ij Tij lnTij . Note that (10) provides us a new perspective to interpret the relationship between w and T, where w is the parameter of the leader problem and T is the parameter of the follower problem, which is of the lower priority. Accordingly, when we minimize (10) with respect to w using gradient descent, we should differentiate through T. Below we investigate the following two ways to optimize the weight vector.
Optimizing w directly Specifically, at each training iteration, we define P (w) with current w, use the Sinkhorn algorithm [37] to compute OT loss, then optimize w by w∗ = argminw LOT.
Amortizing the learning of w We also provide an alternative method by constructing an explicit weight net to output the example weights, whose structure can be designed flexibly. For example, we can build the following weight net and take the sample features as input:
w = softmax (s) , si=watt tanh ( Wvzz train i ) , (11)
where si is the i-th element of s ∈ RN , watt ∈ R1×A and Wvz ∈ RA×E are the learned parameters (we omit the bias for convenience), denoted as Ω = {watt,Wvz}. Denote S(z;Ω) as the weight net parameterized by Ω, which can be optimized by Ω∗ = argminΩ LOT.
5 Overall Algorithm and Implementations
To integrate our proposed method with deep learning frameworks, we adopt a stochastic setting, i.e., a mini-batch setting at each iteration. Following [4, 5], we adopt two-stage learning, where
Algorithm 1 Workflow about our re-weighting method for optimizing θ and w. Require: Datasets Dtrain , Dmeta , initial model parameter θ and weight vector, hyper-parameters {α, β, λ} for t = 1, 2, ..., t1 do
Sample a mini-batch B from the training set Dtrain ; Update θ(t+1) ← θ(t) − α∇θLB where LB = 1|B| ∑ i∈B ℓ ( yi, f ( xi;θ (t) ))
; end for for t = t1 + 1, ..., t1 + t2 do
Sample a mini-batch B from the training set Dtrain ; Step (a): Update θ̂ (t+1) (w(t)) ← θ(t) − α∇θLB where LB =
1 |B| ∑ i∈B w (t) i ℓ ( yi, f ( xi;θ (t) )) Use Dmeta to build Q in (12) and B with wt to build P (wt) (4); Step (b): Compute LOT ( θ̂ (t+1) 1 (w t),w(t) ) with cost (9); Optimize w(t+1) ← w(t) −
β∇wLOT ( θ̂ (t+1) 1 (w t),w(t) ) Step (c): Update θ(t+1) ← θ(t) − α∇θLB where LB = 1|B| ∑ i∈B w (t+1) i ℓ ( yi, f ( xi;θ (t) ))
end for
stage 1 trains the model f(θ) by the standard cross-entropy loss on the imbalanced training set and stage 2 aims to learn the weight vector w and meanwhile continue to update the model f(θ). Generally, at stage 2, calculating the optimal θ and w requires two nested loops of optimization, which is cost-expensive. Motivated by Hu et al. [1], we optimize θ and w alternatively, corresponding to (1) and (10) respectively, where w is maintained and updated throughout the training, so that re-estimation from scratch can be avoided in each iteration. The implementation process of our proposed method with w optimized directly is shown in Algorithm 1, where the key steps are highlighted in Step (a), (b), and (c). Specifically, at each training iteration t, in Step (a), we have θ̂ (t+1) (wt) = {θ̂ (t+1)
1 (w t), θ̂
(t+1) 2 (w t)} and α is the step size for θ; in Step (b), as the cost function
based on features is related with θ̂ (t+1)
1 (w t), the OT loss relies on θ̂
(t+1) 1 (w t), and β is the step size
for w; in Step (c), we ameliorate model parameters θ(t+1). We defer the learning of θ and Ω for the amortized learning of w.
Discussion From Step (b), we find the gradient of w is unrelated to classifier θ2 regardless of which cost function we choose. If we use the label-aware cost or freeze the feature extractor parameterized by θ1, which is trained in the first stage, the OT loss in Step (b) can be further reduced as LOT (wt), where we only need Steps (b)-(c) at each iteration. This is different from most of automatic reweighting methods, where the gradient of w is always related with the to-be-learned model {θ1,θ2} or classifier θ2 (when freezing θ1) for minimizing the classification loss on meta set.
Prototype-oriented OT loss (POT) Recall that we represent a balanced meta set with M samples as distribution Q in (5), where M/K is the number of data in each class and usually larger than 1. Computing the OT loss requires to learn a B ×M -dimensional transport matrix at each iteration. To improve the efficiency of algorithm, we average all samples from each class in the meta set to achieve its prototype and propose a new Q distribution over K prototypes:
Q = ∑K
k=1
1
K δ(x̂k,yk)meta , x̂k =
K
M ∑M/K j=1 xmetakj , (12)
where POT loss only needs a B ×K-dimensional transport matrix. Due to the robustness of our method to Q, when dealing with a large number of classes, we can randomly sample a mini-batch from K prototypes at each iteration to build Q.
6 Experiments
We conduct extensive experiments to validate the effectiveness of our proposed method on text, image, and point cloud imbalanced classification tasks. Notably, different from the imbalanced image
and point cloud classification, we find that optimizing the weight net is better than optimizing the weight vector directly in the text classification. Therefore, we optimize the weight vector for the image and point cloud cases and build a weight net for text case. Unless specified otherwise, we adopt the combined cost and set the hyper-parameter for the entropic constraint as λ = 0.1 and the maximum iteration number in the Sinkhorn algorithm as 200. We define the imbalance factor (IF) of a dataset as the data point amount ratio between the largest and smallest classes.
6.1 Experiments on Imbalanced Image Classification
Datasets and Baselines We evaluate our method on CIFAR-LT-10, CIFAR-LT-100, ImageNet-LT and Places-LT. We create CIFAR-LT-10 (CIFAR-LT-100) from CIFAR-10 (CIFAR-100)[38] by downsampling samples per class with IF∈{200, 100, 50, 20} [5, 13]. ImageNet-LT is built from the classic ImageNet with 1000 classes[39] and IF=1280/5 [5, 24]. Places-LT is created from Places-2 [40] with 365 classes and IF=4980/5 [4, 24]. We randomly select 10 training images per class as meta set [5]; see more details in Appendix B. We consider the following baselines: (1) Cross-entropy (CE), the model trained on the imbalanced training set with CE loss. (2) Empirical re-weighting methods, like Focal loss [12], Class-balanced (CB) loss [13] and LDAM-DRW [17]. (3) Automatic re-weighting methods, including L2RW [15], IB [18], Meta-Weight-Net [16] and Meta-class-weight [4]. (4) Meta-learning methods, including MetaSAug [5] and above methods of [4, 15, 16, 19]. (5) Two-stage methods, such as OLTR [24], cRT [6], LWS [6], BBN [25] and methods of [4, 5].
Experimental details and results on CIFAR-LT For a fair comparison, we use ResNet-32 [41] as the backbone on CIFAR-LT-10 and CIFAR-LT-100. Following Li et al. [5], at stage 1, we use 200 epochs, set the learning rate α of θ as 0.1, which is decayed by 1e−2 at the 160th and 180th epochs. At stage 2, we use 40 epochs, set α as 2e−5 and learning rate β of weights as 1e−3. We use the SGD optimizer with momentum 0.9, weight decay 5e−4 and set the batch size as 16. We list the recognition results of different methods on CIFAR-LT-10 and CIFAR-LT-100 with different imbalance factors in Table 1.We report the average result of 5 random experiments without standard deviation which is of small scale(e.g., 1e-2). We can see that our re-weighting method outperforms CE training by a large margin and performs better than the empirical or automatic re-weighting methods. Remarkably, our proposed method outperforms competing MetaSAug that conducts a meta semantic augmentation approach to learn appropriate class-wise covariance matrices when IF is 200, 100 and 50. Importantly, as the training data becomes more imbalanced, our method is more advantageous. Even though our proposed method is inferior to MetaSAug when the dataset is less imbalanced (IF=20), it can still achieve competing results and surpasses related re-weighting methods. This suggests that our proposed method can be used to enhance the imbalanced classification, without the requirement of designing complicated models or augmenting samples on purpose.
To more comprehensively understand our method, we provide a series of ablation studies on CIFARLT-100 with IF=200 in Table 2. Firstly, to explore the impact of cost function, we use different cost functions for the OT loss. We can see that the combined cost performs better than label-aware cost and feature-aware cost, confirming the validity of combining features and labels to define cost. Besides, using either label-aware or feature-aware cost can still achieve acceptable performance, indicating the usefulness of OT loss in the imbalanced issue. Secondly, to explore the robustness of the meta distribution Q, we adopt three ways to build Q: (1) using prototypes defined in Eq. (12) (K samples) ; (2) using all samples defined in Eq. (5) (10 ∗K samples) ; (3) randomly sampling one point from each class (K samples) in meta set. We find that prototype-based meta performs best, and the performance with random-sample meta or whole meta is still competitive, which demonstrates the robustness of our proposed method to the distribution Q and the benefit of using the prototypes to build Q. Third, we compare two ways for learning w in each iteration, where one is re-estimating w from scratch and another one is maintaining and updating w throughout the training (i.e., iteratively optimizing weights). We find that iteratively optimizing performs better.
Since cost function is essential in optimizing the OT loss, we are interested in examining the learned weight vectors given by different cost functions. Here, we use CIFAR-LT-10, randomly choose {10, 9, ..., 1} training samples from class {1, 2, ..., 10} and obtain 55 samples, which are used to build distribution P . Besides, the 10 prototypes from meta set are used to build the distribution Q. Given the different cost functions, we show the learned weight vectors of 55 training samples in Fig. 1, which have very different properties. Specifically, the label-aware cost and feature-aware cost lead to class-level weights and sample-level weights, respectively. It is reasonable that label-aware cost
only decides whether the two samples (from the meta set and training set) belong to the same class, resulting in class-level measure. However, feature-aware cost measures the distance between samples from the sample-level, where each sample has its own feature. More interestingly, the learned weights with the combined cost own the characteristics of class-level and sample-level weights simultaneously, where example weights of different classes are far away and example weights of the same class are close. Coincidentally, using the combined cost to define the OT loss can reach the same goal of [4], which explicitly considers class-level and sample-level weight. Besides, we find that the learned example weights of the minority class are usually more prominent than those of the majority classes.
To verify whether our method ameliorates the performance on minority classes, we plot the confusion matrices of CE, MetaSAug, and ours on CIFAR-LT-10 with IF=200 in Fig. 2. As expected, although CE training can almost perfectly classify the samples in majority classes, it suffers severe performance degeneration in the minority classes. MetaSAug improves the accuracies of the minority classes, where is still a big gap between the performance on the minority classes and the majority classes. In
contrast, ours does not show a very clear preference for a certain class and outperforms the strong baseline on the overall performance, which is the goal of on an imbalanced classification task.
Experimental details and results on Places-LT and ImageNet-LT Following [6], we employ ResNet-152 pre-trained on the full ImageNet as the backbone on Places-LT. For stage 1, we set the initial learning rate as 0.01, which is decayed by 1e−1 every 10 epochs. In the stage 2 of our method, we only fine-tune the last fully-connected layer for training efficiency and set α as 1e−4 and β as 1e−3 within 50 epochs. The mini-batch size is 32 and the optimizer is SGD with momentum 0.9 and weight decay 5e−3. As shown in Table 3, our method outperforms all baselines. It further suggests that our method has excellent performance in the extreme imbalance setting with IF=4980/5. For a fair comparison, we implement our method on ImageNet-LT with the same experimental conditions of [5], from which we have taken the results of other comparison methods. We consider ResNet-50 [41] as the backbone on ImageNet-LT. In stage 1, we run 200 epochs and decay the learning rate by 0.1 at the 60th and 80th epochs. In stage 2, we implement our method for 50 epochs, set learning rate α as 2e−5 and β as 1e−2, and only fine-tune the last fully-connected layer for training efficiency. We use the SGD optimizer with momentum 0.9, weight decay 5e−4 and set the batch size as 128. The results on ImageNet-LT of different models reported in Table 4 indicate the effectiveness of our proposed method on ImageNet-LT when comparing with strong baseline MetaSAug. Besides, we further consider randomly sampling a mini-batch of size 100 from all prototypes at each iteration to build Q, whose performance is comparable to the Q from all prototypes. Thus, with a stochastic setting for Q, our proposed method can be used to the imbalanced training set with a large number of classes. We defer the time computational complexity, additional quantitative results and qualitative results on different image datasets to Appendix B.
6.2 Experiments on Imbalanced Text Classification
Datasets and settings Following [1, 2], we adopt the popular SST-2 for 2-class and SST-5 for 5-class sentence sentiment classification [43]. For a fair comparison, we use the same imbalanced datasets and settings as [2]. Specifically, we set class 1 as the minority class and the rest as the majority classes, where the number of examples in each majority class is fixed as 1000 (SST-2) and 500 (SST-5), and we achieve different imbalance settings by varying the number of examples in the minority class. Besides, the number of samples in the meta set is 10 for each class. We use the BERT (base, uncased) model [44] as the feature extractor and a simple 3-layer fully-connected network (FCN) with the structure in Appendix C as the classifier. To make subsequent experiments on strong models, following [2], we fine-tune the BERT model on an additional balanced training set (500 samples in each class), which is randomly selected from the remaining examples in each dataset, excluding the imbalanced training set, the meta set, and the to-be-evaluated test set. Based on the fine-tuned BERT, we adopt the two-stage manner for the imbalanced text datasets, where we train the BERT + FCN in the first stage with the CE loss and train the FCN with our proposed method by freezing the BERT in the second stage. The settings of the training process are deferred to Appendix C.
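The exact FCN architecture is specified in Appendix C; purely for illustration, a 3-layer fully-connected classifier on top of frozen BERT features could look like the sketch below, where the hidden sizes are our own assumptions.

```python
import torch.nn as nn

class FCNClassifier(nn.Module):
    """Hypothetical 3-layer classifier head for stage 2 (layer sizes are placeholders)."""

    def __init__(self, bert_dim=768, hidden=256, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(bert_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, bert_features):
        # bert_features: [batch, 768] pooled output from the frozen, fine-tuned BERT
        return self.net(bert_features)
```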
Baselines We consider the following methods: (1) vanilla BERT, the vanilla pretrained language model. (2) Fine-tuned BERT, where the pretrained BERT is fine-tuned on an additional balanced training set. (3) Fine-tuned BERT + CE, the fine-tuned BERT model followed by the FCN, which is further trained with the CE loss on the imbalanced training set following [1, 2]. (4) Automatic re-weighting methods, including the method of Hu et al. [1] and constraint-based re-weighting [2]. Since few works consider imbalanced text classification, we further consider (5) Empirical re-weighting methods, including re-weighting with inverse class frequency (i.e., Proportion) [11, 14] and LDAM-DRW [17], and (6) Logit adjustment [20], using their official codes and settings1 2. We repeat all experiments 10 times and report the mean and standard deviation.
Experimental details and results on SST-2 and SST-5 We report the text classification results of the compared methods under different imbalance factors in Table 5. We find that our proposed method outperforms all competing methods in all imbalance factor settings, which demonstrates its effectiveness. Although all methods achieve acceptable performance under slight imbalance, the performance of the three baselines (Vanilla BERT, Fine-Tuned BERT and Fine-Tuned BERT+CE) drops dramatically, indicating the importance of proposing specialized methods for handling imbalanced training datasets. Logit adjustment (post-hoc correction) is very competitive with ours on SST-2 but only produces results similar to the three above-mentioned baselines on SST-5. In contrast, ours is robust to not only the imbalance factors but also the number of classes, where the results are consistent with the image case. We provide more results in Appendix C.4. In addition to 1D text and 2D images, we further investigate the robustness of our method on 3D point cloud data, where we use the popular ModelNet10 [45] and defer the experiments to Appendix D.
7 Conclusion
This paper introduces a novel automatic re-weighting method for imbalanced classification based on optimal transport (OT). This method presents the imbalanced training set as a to-be-learned distribution over its training examples, each of which is associated with a probability weight. Similarly, our method views another balanced meta set as a balanced distribution over its examples. By minimizing the OT distance between the two distributions in terms of the defined cost function, the learning of the weight vector is formulated as a distribution approximation problem. Our proposed re-weighting method bypasses the commonly-used classification loss on the meta set and uses OT to learn the weights, disengaging the dependence of the weight learning on the concerned classifier at each iteration. This approach differs from most of the existing re-weighting methods and may provide new insights for future work. Experimental results on a variety of imbalanced datasets of both images and texts validate the effectiveness and flexibility of our proposed method.
Acknowledgements. This work is partially supported by a grant from the Shenzhen Science and Technology Program (JCYJ20210324120011032) and Shenzhen Institute of Artificial Intelligence and Robotics for Society.
1https://github.com/kaidic/LDAM-DRW 2https://github.com/google-research/google-research/tree/master/logit_adjustment | 1. What is the main contribution of the paper regarding handling class imbalance in deep learning?
2. What are the strengths and weaknesses of the proposed method compared to existing works like [1]?
3. How does the reviewer assess the novelty and significance of the paper's contribution?
4. Are there any concerns or questions regarding the method's ability to handle other settings, such as label noise?
5. How does the reviewer evaluate the effectiveness of the proposed method based on the provided empirical evidence?
6. What are the limitations and potential negative societal impacts of the work that the authors should address? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
In this paper, the authors proposed a reweighting method for class imbalance classification based on optimal transport. Experiments on both image and text data demonstrate the effectiveness of the proposed method.
Strengths And Weaknesses
Strengths:
The paper is clear and well-written.
The paper provides lots of empirical evidence to demonstrate the effectiveness of the proposed method.
Weaknesses:
This paper looks quite similar to an existing work [1].
In [1], class imbalanced problem (i.e., called class-prior-shift in [1]) is studied as a case study under the same setting (i.e., also using a small balanced meta set), and according to importance weighting, the optimal weights can be computed directly (i.e., w = p_{meta}(y)/p_{train}(y)). I'm wondering why not use this simple method to compute weights if such a balanced meta set is available?
Also, compared with [1], the core difference may be that this paper learns weights by minimizing an OT distance while [1] is by minimizing a maximum mean discrepancy (MMD). What is the critical contribution of this paper?
In L2RW, their baseline RANDOM (i.e., using random weights from a rectified Gaussian distribution) is quite strong. It would be interesting to compare the proposed method with the RANDOM baseline.
There is another crucial work worth referring to and discussing in this area [2].
References:
[1] Fang et al. "Rethinking importance weighting for deep learning under distribution shift." In NeurIPS 2020.
[2] Jiang et al. "Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels." In ICML 2018.
Questions
I'm wondering if the method proposed in this paper can handle other settings, e.g., label noise as in the baseline L2RW?
In Stage1 of the proposed method, if t1 is large, would it cause an overfitting issue by pre-training on an unweighted imbalanced dataset? How to decide the value of this parameter in practice?
--------------Post-rebuttal comments--------------
Thank the authors very much for the rebuttal and the additional results!
After carefully reading the rebuttal, some of my concerns are resolved by the additional experimental results of baselines. But my major concern still remains, i.e., the novelty of this paper may not be so strong given [1] as prior work. I agree with the authors that the two papers are the same in high-level thinking but different in the tasks. [1] considers a more general setting of distribution shift and this paper focuses on class imbalance. Also, the contribution of this paper includes using OT loss in class imbalance and empirical contributions on many datasets. I will slightly raise my score for this and the additional experimental results.
However, the authors said the key difference is that this paper disengages the dependence of the weight learning on the concerned classifier while [1] does not. This may not be true. If I understand correctly, in [1] the weight learning is independent of the classifier when using hidden-layer-output transformation as the non-linear transformation of data. I would suggest the authors carefully check this point to avoid any potential misunderstanding about the position of this paper with related work.
Limitations
It seems that the authors haven't adequately addressed the limitations and potential negative societal impact of the work. |
NIPS | Title
Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification
Abstract
Imbalanced data pose challenges for deep learning based classification models. One of the most widely-used approaches for tackling imbalanced data is re-weighting, where training samples are associated with different weights in the loss function. Most existing re-weighting approaches treat the example weights as learnable parameters and optimize the weights on the meta set, entailing expensive bilevel optimization. In this paper, we propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view. Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set. The weights of the training samples are the probability mass of the imbalanced distribution and are learned by minimizing the OT distance between the two distributions. Compared with existing methods, our proposed one disengages the dependence of the weight learning on the concerned classifier at each iteration. Experiments on image, text and point cloud datasets demonstrate that our proposed re-weighting method has excellent performance, achieving state-of-the-art results in many cases and providing a promising tool for addressing the imbalanced classification issue. The code has been made available at https://github.com/DandanGuo1993/reweight-imbalance-classification-with-OT.
1 Introduction
Deep neural networks (DNNs) have achieved remarkable success in various applications, which is undoubtedly inseparable from high-quality large-scale datasets. Usually, the number of samples for each class in these datasets is manually selected, resulting in balanced datasets. However, most real-world datasets are imbalanced: a few classes (a.k.a. head or majority classes) occupy most of the data, while most classes (a.k.a. tail or minority classes) have only a few samples. A model trained on the imbalanced training set without considering such class imbalance would be significantly dominated by those majority classes, and thus underperform on a balanced test dataset. This is also known as the long-tailed problem and exists in many domains, such as text classification [1, 2], object detection [3] and image classification [4–6].
There are rich research lines to solve the imbalance problem, including re-sampling [7–10], class-level or instance-level re-weighting [1, 2, 4, 11–18], meta-learning [4, 5, 15, 16, 19], two-stage methods
[4–6, 17] and post-hoc correction [20, 21]. Inspired by [2], re-weighting strategies can be roughly grouped into empirical re-weighting and automatic re-weighting. The former aims to design weights manually, with the major insight that a minority-class example will be assigned a larger weight value than a majority-class one [12–14]. However, manually setting weights can be less adaptive to different datasets [2]. The latter aims to assign adaptive weights to the examples through learning mechanisms [1, 2, 4, 15, 16]. As a representative automatic re-weighting method, L2RW [15] optimizes the weight vector as a learnable parameter with an unbiased meta set (i.e., validation set). Although L2RW and its followers have received widespread attention, most of them may be limited by optimizing the weights with the classification loss on the meta set: the gradient of the weights is usually coupled with the to-be-learned classifier at each training iteration. Since the classifier is the major concern in the imbalanced issue [6], the dependence of the weights on the classifier at the training stage may lead to inaccurate learning of the weights.
This paper develops a novel automatic re-weighting method for imbalanced classification based on optimal transport (OT). As discussed by Jamal et al. [4], the major challenge for imbalanced classification is essentially the mismatch between the imbalanced training dataset (seen by a machine learning model) and the balanced test set (used to test the learned model). To this end, we aim to view the learning of the weight vector as a distribution approximation problem. We adopt the two-stage learning manner motivated by [6], where stage 1 and stage 2 focus on learning the feature extractor with the standard cross-entropy loss and the classifier with our proposed method, respectively. Specifically, we represent the imbalanced training set as a discrete empirical distribution P over all samples within it and view the to-be-learned weight vector w as its probability measure. Then we represent the balanced meta set as a discrete empirical distribution Q over all samples within it (in the same space as P), which has a uniform probability measure for being balanced. Therefore, the learning of a weight vector can be formulated as the process of making the distribution P as close to the balanced distribution Q as possible, a process facilitated by leveraging the OT distance [22]. Notably, the cost function plays a paramount role when learning the transport plan for OT, where we use the features and ground-truth labels of samples to design it. Due to the flexibility of our method, we can also learn an explicit weight net directly from data like [16, 23] but with a different structure, optimized by the OT loss instead of the classification loss on the meta set. Generally, at each training iteration of stage 2, we minimize the OT loss to learn the weight vector (or weight net) for the current mini-batch, which is further used to re-weight the training loss for optimizing the model. As we can see, the gradient of the weights only relies on the OT loss and is thus independent of the classifier. More importantly, our proposed method is robust to the distribution Q. To save memory consumption, we introduce the prototype-oriented OT loss by building a new distribution Q based on prototypes instead of samples (one prototype for each class). Moreover, our proposed method can achieve reasonably good performance even if we randomly select a mini-batch from all prototypes to build Q, making our method applicable to datasets with a large number of classes.
We summarize our main contributions as follows: (1) We formulate the learning of weight vector or weight net as the distribution approximation problem by minimizing the statistical distance between to-be-learned distribution over samples from imbalanced training set and another balanced distribution over samples from the meta set. (2) We leverage the OT distance between the distributions to guide the learning of weight vector or weight net. (3) We apply our method to imbalanced classification tasks including image, text and point cloud. Experiments demonstrate that introducing the OT loss to learn the example weights can produce effective and efficient classification performance.
2 Related Work
Empirical Re-weighting A classic empirical re-weighting scheme is to provide the examples of each class with the same weight, such as inverse class frequency [11, 14]. It has been further improved by the class-balanced loss [13], which calculates the effective number of examples as class frequency. Focal Loss [12] uses the predicted probability to calculate higher weights for the hard examples and dynamically adjust the weights. LDAM-DRW [17] designs a label-distribution-aware loss function and adopts a deferred class-level re-weighting method (i.e., inverse class frequency).
Automatic Re-weighting The automatic re-weighting methods learn the weights with learning mechanisms. L2RW [15] adopts a meta-learning manner to learn the example weights, which are optimized by the classification loss on the balanced meta set. Hu et al. [1] further improve L2RW
by iteratively optimizing weights instead of re-estimating them at each iteration. Meta-weight-net [16] aims to learn an explicit weight net directly from data and optimizes it in a meta-learning manner. Meta-class-weight [4] defines the weight for each example as the combination of a class-level weight (estimated by Cui et al. [13]) and an instance-level weight, optimized with a meta-learning approach similar to L2RW. Influence-balanced loss (IB) [18] is proposed to re-weight samples by the magnitude of the gradient. Recently, Liu et al. [2] propose to update the weights and model under a constraint. Our method belongs to the automatic re-weighting group, and the idea of building an explicit weight net is similar to Shu et al. [16]. However, the major difference is that we bypass the classification loss on the meta set and use OT to learn the weights from the view of distribution approximation, disengaging the dependence of the weight learning on the concerned classifier at each iteration.
Meta Learning and Two-stage Learning Recently, researchers have proposed to tackle the imbalance issue with meta-learning, which can be applied to build a Balanced Meta-Softmax (BALMS)[19], learn weights [4, 15, 16] or transformed semantic directions for augmenting the minority classes in MetaSAug [5]. Two-stage methods, where the first stage and second stage focus on representation learning and classifier learning, respectively, have been proved effective for solving the imbalanced issue [5, 6, 16, 24]. BBN [25] unifies two stages with a specific cumulative learning strategy.
Optimal Transport Recently, OT has been used to solve the regression problem under covariate shift [26] and unsupervised domain adaptation [27, 28], involving sample-level, class-level or domain-level weight vectors. Although these works also adopt a re-weighting strategy and the OT distance, they are distinct from ours in terms of task and technical detail. Also, the dynamic importance weighting that adopts MMD to re-weight samples for label-noise and class-prior-shift tasks [29] is different from ours, where we provide a more flexible way for learning the weights of samples and disengage the dependence of the weight learning on the concerned classifier at each iteration. To the best of our knowledge, works that solve the imbalanced classification problem with OT are still very limited. An oversampling method via OT (OTOS) [30] aims to make synthetic samples follow a similar distribution to that of minority class samples. However, ours is a novel re-weighting method based on OT, without augmenting samples. Another recent work is Optimal Transport via Linear Mapping (OTLM) [21], which performs post-hoc correction from the OT perspective and proposes a linear mapping to replace the original exact cost matrix in the OT problem. Different from OTLM, which belongs to the post-hoc correction group and aims to learn a refined prediction matrix, ours falls into the training-aware group and aims to re-weight the training classification loss.
3 Background
Imbalanced Classification Consider a training set $\mathcal{D}_{\mathrm{train}} = \{(x_i, y_i)\}_{i=1}^{N}$, where $(x, y)$ is the input and target pair, $x_i$ the $i$-th sample, $y_i \in \{0, 1\}^K$ the one-hot label vector over $K$ classes, and $N$ the number of training data. Besides, consider a small balanced meta set $\mathcal{D}_{\mathrm{meta}} = \{(x_j, y_j)\}_{j=1}^{M}$, where $M$ is the total number of samples and $M \ll N$. Denote the model parameterized by $\theta$ as $f(x, \theta)$, where $\theta$ is usually optimized by empirical risk minimization over the training set, i.e., $\theta^* = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \ell(y_i, f(x_i; \theta))$. For notational convenience, we denote $l_i^{\mathrm{train}}(\theta) = \ell(y_i, f(x_i; \theta))$ as the training loss of pair $(x_i, y_i)$. However, the model trained in this way will prefer the majority classes if the training dataset is imbalanced.
Learning to Re-Weight Examples To solve the imbalanced issue, a kind of re-weighting methods is to treat the weights as the learnable parameter and learn a fair model to the minority and the majority classes by optimizing the weighted training loss. At each training iteration, the model is updated by
$\theta^*(\mathbf{w}) = \arg\min_{\theta} \sum_{i=1}^{N} w_i\, l_i^{\mathrm{train}}(\theta), \quad (1)$
where $\mathbf{w} = (w_1, \ldots, w_N)^{\top}$ is the weight vector (usually with a simplex constraint) of all training examples. Then the optimal $\mathbf{w}$ is obtained by making the model parameter $\theta^*(\mathbf{w})$ from Eq. (1) minimize the classification loss on a balanced meta set, formulated as

$\mathbf{w}^* = \arg\min_{\mathbf{w}} \frac{1}{M} \sum_{j=1}^{M} l_j^{\mathrm{meta}}\big(\theta^*(\mathbf{w})\big), \quad (2)$
where $l_j^{\mathrm{meta}}$ is the loss function of pair $(x_j, y_j)$ from the meta set and the updated $\mathbf{w}^*$ is used to ameliorate the model. Generally, the model $\theta$ consists of two key components, a feature extractor and a classifier, where the classifier has been shown to be the major concern in the imbalanced issue [6]. However, the gradient of the weights in Eq. (2) always depends on the to-be-concerned classifier at each training iteration, which may result in inaccurate learning of the weights. Most automatic re-weighting methods learn the weight vectors or weight-related parameters (e.g., a weight net) following this line; see more details in the previous works [4, 15, 16].
Optimal Transport Theory OT has been widely used to calculate the cost of transporting one probability measure to another in various machine learning problems, such as generative models [31], text analysis [32, 33], adversarial robustness [34], and meta learning [35, 36]. Among the rich theory of OT, this work presents a brief introduction to OT for discrete distributions; see Peyré and Cuturi [22] for more details. Consider $p = \sum_{i=1}^{n} a_i \delta_{x_i}$ and $q = \sum_{j=1}^{m} b_j \delta_{y_j}$ as two discrete probability distributions, where $x_i$ and $y_j$ live in the same arbitrary space and $\delta$ is the Dirac function. Then $a \in \Delta^n$ and $b \in \Delta^m$ lie on the probability simplices of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. The OT distance between $p$ and $q$ can be expressed as

$\mathrm{OT}(p, q) = \min_{T \in \Pi(p, q)} \langle T, C \rangle, \quad (3)$

where $\langle \cdot, \cdot \rangle$ is the Frobenius dot-product and $C \in \mathbb{R}_{\geq 0}^{n \times m}$ is the transport cost matrix constructed by $C_{ij} = C(x_i, y_j)$. The transport probability matrix $T \in \mathbb{R}_{> 0}^{n \times m}$, which satisfies $\Pi(p, q) := \{ T \mid \sum_{i=1}^{n} T_{ij} = b_j,\ \sum_{j=1}^{m} T_{ij} = a_i \}$, is learned by minimizing $\mathrm{OT}(p, q)$. Directly optimizing Eq. (3) often comes at the cost of heavy computational demands, and OT with entropic regularization is introduced to allow optimization at small computational cost with sufficient smoothness [37].
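For readers unfamiliar with the entropic-regularized solver referenced above, the following is a minimal sketch of the Sinkhorn iterations [37] for arbitrary discrete marginals a and b; the function name and the small stabilizing constant are ours, and the 200 iterations and λ = 0.1 simply mirror the settings reported later in the paper.

```python
import torch

def sinkhorn(a, b, C, lam=0.1, n_iters=200):
    """Entropic-regularized OT plan for marginals a (n,), b (m,) and cost C (n, m).

    Minimal sketch of Sinkhorn scaling; `lam` is the entropic weight (lambda in Eq. (10)).
    Returns the transport plan T and the resulting OT loss <T, C>.
    """
    K = torch.exp(-C / lam)                  # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u + 1e-16)          # column scaling
        u = a / (K @ v + 1e-16)              # row scaling
    T = u.unsqueeze(1) * K * v.unsqueeze(0)  # plan with marginals close to (a, b)
    return T, (T * C).sum()

# Tiny usage example with uniform marginals and a random cost.
a = torch.full((5,), 1 / 5)
b = torch.full((3,), 1 / 3)
T, loss = sinkhorn(a, b, torch.rand(5, 3))
```

Because every step above is a differentiable tensor operation, gradients can be propagated through the unrolled iterations, which is how the weight vector is later updated from the OT loss.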
4 Re-weighting Method with Optimal Transport
This work views the training set as a to-be-learned distribution, whose probability measure is set as a learnable weight vector w. We use the OT distance to optimize w for re-weighting the training loss.
4.1 Main Objective
Given the imbalanced training set Dtrain , we can represent it as an empirical distribution over N pairs, where each pair (xi, yi)train has the sample probability wi (i.e., the weight), defined as:
$P(\mathbf{w}) = \sum_{i=1}^{N} w_i\, \delta_{(x_i, y_i)^{\mathrm{train}}}, \quad (4)$
where $(x_i, y_i)^{\mathrm{train}}$ is the $i$-th pair from the training set and the learnable weight vector $\mathbf{w}$ over all training examples lies on the probability simplex of $\mathbb{R}^N$. Since the meta set $\mathcal{D}_{\mathrm{meta}}$ is balanced over all classes and closely related to the training set, it is reasonable to assume that the meta set has already achieved the balanced data distribution that the training set aims to approximate. For the meta set, we thus can sample each pair with equal probability and represent it with an empirical distribution Q:
$Q = \sum_{j=1}^{M} \frac{1}{M}\, \delta_{(x_j, y_j)^{\mathrm{meta}}}, \quad (5)$
where (xj , yj)meta is the j-th pair from the meta set. To learn w, different from most automatic re-weighting methods, which minimize the classification loss on the meta set, we aim to enforce the to-be-learned distribution P (w) to stay close to the balanced distribution Q. Here, we explore the re-weighting method by adopting the OT distance between P (w) and Q:
$\min_{\mathbf{w}} \mathrm{OT}(P(\mathbf{w}), Q) \overset{\mathrm{def}}{=} \min_{\mathbf{w}} \min_{T \in \Pi(P(\mathbf{w}), Q)} \langle T, C \rangle, \quad (6)$

where the cost matrix $C \in \mathbb{R}_{\geq 0}^{N \times M}$ is described below and the transport probability matrix $T \in \mathbb{R}_{> 0}^{N \times M}$ should satisfy $\Pi(P(\mathbf{w}), Q) := \{ T \mid \sum_{i=1}^{N} T_{ij} = 1/M,\ \sum_{j=1}^{M} T_{ij} = w_i \}$.
4.2 Cost Function
For notational convenience, we reformulate the model as $f(x, \theta) = f_2(f_1(x; \theta_1); \theta_2)$, where $f_1$, parameterized by $\theta_1$, denotes the representation learning part before the classifier, and $f_2$, parameterized by $\theta_2$, denotes the classifier. Intuitively, the cost $C_{ij}$ measures the distance between pair $i$ in the training set and pair $j$ in the meta set, which can be flexibly defined in different ways. We explore a few conceptually intuitive options for $C_{ij}$, although other reasonable choices can also be used.
Label-aware Cost As the first option, we can define Cij with the ground-truth labels of two samples:
$C_{ij} = d^{\mathrm{Lab}}(y_i^{\mathrm{train}}, y_j^{\mathrm{meta}}), \quad (7)$
where $d^{\mathrm{Lab}}(\cdot, \cdot)$ also denotes a distance measure, and $y_i^{\mathrm{train}}$, $y_j^{\mathrm{meta}}$ are the ground-truth label vectors of the two samples, respectively. Intuitively, if we use the Euclidean distance, then $C$ is a 0-1 matrix (we can rescale the non-zero constant to 1), i.e., $C_{ij} = 0$ if $x_i^{\mathrm{train}}$ and $x_j^{\mathrm{meta}}$ are from the same class, and $C_{ij} = 1$ otherwise. In this case the OT loss is influenced by neither the feature extractor $\theta_1$ nor the classifier $\theta_2$.
Feature-aware Cost Besides, we can define Cij purely based on the features of samples:
$C_{ij} = d^{\mathrm{Fea}}(z_i^{\mathrm{train}}, z_j^{\mathrm{meta}}), \quad (8)$
where $z_i^{\mathrm{train}} = f_1(x_i^{\mathrm{train}}; \theta_1) \in \mathbb{R}^E$ and $z_j^{\mathrm{meta}} = f_1(x_j^{\mathrm{meta}}; \theta_1) \in \mathbb{R}^E$ denote the $E$-dimensional representations of $x_i^{\mathrm{train}}$ and $x_j^{\mathrm{meta}}$, respectively. $d^{\mathrm{Fea}}(\cdot, \cdot)$ denotes any commonly used distance measure and we empirically find the cosine distance is a good choice. It is easy to see that if the features of $x_i^{\mathrm{train}}$ and $x_j^{\mathrm{meta}}$ are close, their cost is small. Here the OT loss is influenced by the feature extractor $\theta_1$.
Combined Cost Finally, we can use both features and labels to define Cij , denoted as
$C_{ij} = d^{\mathrm{Fea}}(z_i^{\mathrm{train}}, z_j^{\mathrm{meta}}) + d^{\mathrm{Lab}}(y_i^{\mathrm{train}}, y_j^{\mathrm{meta}}). \quad (9)$
Intuitively, $C_{ij}$ will be small if two samples have the same label and similar features. Empirically, we find that using $d^{\mathrm{Fea}} = 1 - \mathrm{cosine}(\cdot, \cdot)$ and the Euclidean distance for $d^{\mathrm{Lab}}$ gives better performance. Interestingly, given the feature-aware cost (8) or the label-aware cost (7), the learned weight vector can be interpreted as an instance-level or class-level re-weighting method, respectively. The weight vector learned from the combined cost can be interpreted as a combination of class-level and instance-level weights, although there is no specialized design for two-component weights as in previous work [4]; see Fig. 1.
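As an illustrative reading of Eqs. (7)-(9), the sketch below assembles the combined cost matrix from batch features and one-hot labels; the tensor names are our own, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def combined_cost(z_train, y_train, z_meta, y_meta):
    """Cost matrix of Eq. (9): cosine feature distance plus Euclidean label distance.

    z_train: [N, E] features, y_train: [N, K] one-hot labels (training mini-batch);
    z_meta:  [M, E] features, y_meta:  [M, K] one-hot labels (meta set or prototypes).
    """
    # Feature-aware part, Eq. (8): 1 - cosine similarity.
    zt = F.normalize(z_train, dim=1)
    zm = F.normalize(z_meta, dim=1)
    c_fea = 1.0 - zt @ zm.t()                                  # [N, M]

    # Label-aware part, Eq. (7): Euclidean distance between one-hot labels,
    # zero for same-class pairs and a positive constant otherwise.
    c_lab = torch.cdist(y_train.float(), y_meta.float(), p=2)  # [N, M]

    return c_fea + c_lab
```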
4.3 Learn the Weight Vector
Given the defined cost function, we adopt the entropy regularized OT loss [37] to learn the weight vector. We thus rewrite (6) as the following optimization problem:
$\min_{\mathbf{w}} \mathcal{L}_{\mathrm{OT}} = \langle C, T_{\lambda}^{*}(\mathbf{w}) \rangle, \quad \text{subject to} \quad T_{\lambda}^{*}(\mathbf{w}) = \arg\min_{T \in \Pi(P(\mathbf{w}), Q)} \langle T, C \rangle - \lambda H(T), \quad (10)$

where $\lambda > 0$ is a hyper-parameter for the entropic constraint $H(T) = -\sum_{ij} T_{ij} \ln T_{ij}$. Note that (10) provides a new perspective to interpret the relationship between $\mathbf{w}$ and $T$: $\mathbf{w}$ is the parameter of the leader problem and $T$ is the parameter of the follower problem, which has lower priority. Accordingly, when we minimize (10) with respect to $\mathbf{w}$ using gradient descent, we should differentiate through $T$. Below we investigate two ways to optimize the weight vector.
Optimizing w directly Specifically, at each training iteration, we define $P(\mathbf{w})$ with the current $\mathbf{w}$, use the Sinkhorn algorithm [37] to compute the OT loss, and then optimize $\mathbf{w}$ by $\mathbf{w}^* = \arg\min_{\mathbf{w}} \mathcal{L}_{\mathrm{OT}}$.
Amortizing the learning of w We also provide an alternative method by constructing an explicit weight net to output the example weights, whose structure can be designed flexibly. For example, we can build the following weight net and take the sample features as input:
$\mathbf{w} = \mathrm{softmax}(s), \qquad s_i = w_{\mathrm{att}} \tanh(W_{vz}\, z_i^{\mathrm{train}}), \quad (11)$
where $s_i$ is the $i$-th element of $s \in \mathbb{R}^N$, and $w_{\mathrm{att}} \in \mathbb{R}^{1 \times A}$ and $W_{vz} \in \mathbb{R}^{A \times E}$ are the learned parameters (we omit the bias for convenience), denoted as $\Omega = \{w_{\mathrm{att}}, W_{vz}\}$. Denote $S(z; \Omega)$ as the weight net parameterized by $\Omega$, which can be optimized by $\Omega^* = \arg\min_{\Omega} \mathcal{L}_{\mathrm{OT}}$.
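A minimal module-level reading of Eq. (11) is sketched below; the hidden size A is an arbitrary placeholder of ours, and the module would be trained against the OT loss rather than a meta classification loss.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Amortized weight net of Eq. (11): s_i = w_att * tanh(W_vz z_i), w = softmax(s)."""

    def __init__(self, feat_dim, hidden_dim=64):    # hidden_dim (A) is an illustrative choice
        super().__init__()
        self.W_vz = nn.Linear(feat_dim, hidden_dim, bias=False)
        self.w_att = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, z_train):
        # z_train: [N, E] features of the current training mini-batch
        s = self.w_att(torch.tanh(self.W_vz(z_train))).squeeze(-1)  # [N]
        return torch.softmax(s, dim=0)               # weights sum to one over the batch
```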
5 Overall Algorithm and Implementations
To integrate our proposed method with deep learning frameworks, we adopt a stochastic setting, i.e., a mini-batch setting at each iteration. Following [4, 5], we adopt two-stage learning, where
Algorithm 1 Workflow of our re-weighting method for optimizing $\theta$ and $\mathbf{w}$.
Require: Datasets $\mathcal{D}_{\mathrm{train}}$, $\mathcal{D}_{\mathrm{meta}}$, initial model parameters $\theta$ and weight vector $\mathbf{w}$, hyper-parameters $\{\alpha, \beta, \lambda\}$
for $t = 1, 2, \ldots, t_1$ do
    Sample a mini-batch $B$ from the training set $\mathcal{D}_{\mathrm{train}}$;
    Update $\theta^{(t+1)} \leftarrow \theta^{(t)} - \alpha \nabla_{\theta} L_B$, where $L_B = \frac{1}{|B|} \sum_{i \in B} \ell\big(y_i, f(x_i; \theta^{(t)})\big)$;
end for
for $t = t_1 + 1, \ldots, t_1 + t_2$ do
    Sample a mini-batch $B$ from the training set $\mathcal{D}_{\mathrm{train}}$;
    Step (a): Update $\hat{\theta}^{(t+1)}(\mathbf{w}^{(t)}) \leftarrow \theta^{(t)} - \alpha \nabla_{\theta} L_B$, where $L_B = \frac{1}{|B|} \sum_{i \in B} w_i^{(t)} \ell\big(y_i, f(x_i; \theta^{(t)})\big)$;
    Use $\mathcal{D}_{\mathrm{meta}}$ to build $Q$ in (12) and $B$ with $\mathbf{w}^{(t)}$ to build $P(\mathbf{w}^{(t)})$ in (4);
    Step (b): Compute $\mathcal{L}_{\mathrm{OT}}\big(\hat{\theta}_1^{(t+1)}(\mathbf{w}^{(t)}), \mathbf{w}^{(t)}\big)$ with cost (9); optimize $\mathbf{w}^{(t+1)} \leftarrow \mathbf{w}^{(t)} - \beta \nabla_{\mathbf{w}} \mathcal{L}_{\mathrm{OT}}\big(\hat{\theta}_1^{(t+1)}(\mathbf{w}^{(t)}), \mathbf{w}^{(t)}\big)$;
    Step (c): Update $\theta^{(t+1)} \leftarrow \theta^{(t)} - \alpha \nabla_{\theta} L_B$, where $L_B = \frac{1}{|B|} \sum_{i \in B} w_i^{(t+1)} \ell\big(y_i, f(x_i; \theta^{(t)})\big)$;
end for
stage 1 trains the model $f(\theta)$ with the standard cross-entropy loss on the imbalanced training set, and stage 2 learns the weight vector $\mathbf{w}$ while continuing to update the model $f(\theta)$. Generally, at stage 2, calculating the optimal $\theta$ and $\mathbf{w}$ requires two nested loops of optimization, which is computationally expensive. Motivated by Hu et al. [1], we optimize $\theta$ and $\mathbf{w}$ alternately, corresponding to (1) and (10) respectively, where $\mathbf{w}$ is maintained and updated throughout training, so that re-estimation from scratch can be avoided at each iteration. The implementation of our proposed method with $\mathbf{w}$ optimized directly is shown in Algorithm 1, where the key steps are highlighted as Steps (a), (b), and (c). Specifically, at each training iteration $t$: in Step (a), we have $\hat{\theta}^{(t+1)}(\mathbf{w}^{(t)}) = \{\hat{\theta}_1^{(t+1)}(\mathbf{w}^{(t)}), \hat{\theta}_2^{(t+1)}(\mathbf{w}^{(t)})\}$ and $\alpha$ is the step size for $\theta$; in Step (b), since the cost function based on features is related to $\hat{\theta}_1^{(t+1)}(\mathbf{w}^{(t)})$, the OT loss relies on $\hat{\theta}_1^{(t+1)}(\mathbf{w}^{(t)})$, and $\beta$ is the step size for $\mathbf{w}$; in Step (c), we update the model parameters to $\theta^{(t+1)}$. We defer the corresponding learning of $\theta$ and $\Omega$ for the amortized learning of $\mathbf{w}$.
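Putting Steps (a)-(c) together, one stage-2 iteration could be organized as in the following schematic, which reuses the `sinkhorn` and `combined_cost` helpers sketched earlier and assumes standard SGD optimizers for the classifier and the per-example weight logits; it is an illustration of the alternating update, not the authors' released code, and all names are ours.

```python
import torch
import torch.nn.functional as F

def stage2_iteration(x, y, idx, proto_z, proto_y, backbone, classifier,
                     w_logits, optimizer_theta, optimizer_w, lam=0.1):
    """One alternating update of Algorithm 1 (schematic).

    x, y, idx: mini-batch inputs, integer labels, and example indices into w_logits;
    proto_z, proto_y: prototype features and one-hot labels used to build Q (Eq. 12).
    """
    num_classes = proto_y.shape[1]

    # Step (a): weighted classification update of the classifier under w^(t).
    w = torch.softmax(w_logits[idx], dim=0).detach()
    loss_cls = (w * F.cross_entropy(classifier(backbone(x)), y, reduction="none")).sum()
    optimizer_theta.zero_grad(); loss_cls.backward(); optimizer_theta.step()

    # Step (b): update the weights by minimizing the OT loss between P(w) and Q.
    with torch.no_grad():
        z = backbone(x)                                   # features under the updated model
    w = torch.softmax(w_logits[idx], dim=0)               # keep the graph for the w update
    C = combined_cost(z, F.one_hot(y, num_classes), proto_z, proto_y)
    b = torch.full((proto_z.shape[0],), 1.0 / proto_z.shape[0])
    T, ot_loss = sinkhorn(w, b, C, lam=lam)               # differentiates through Sinkhorn
    optimizer_w.zero_grad(); ot_loss.backward(); optimizer_w.step()

    # Step (c): re-update the model with the refreshed weights w^(t+1).
    w = torch.softmax(w_logits[idx], dim=0).detach()
    loss_cls = (w * F.cross_entropy(classifier(backbone(x)), y, reduction="none")).sum()
    optimizer_theta.zero_grad(); loss_cls.backward(); optimizer_theta.step()
```

Note that the gradient reaching `w_logits` in Step (b) comes only from the OT loss, which is the disentanglement from the classifier emphasized in the Discussion below.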
Discussion From Step (b), we find that the gradient of $\mathbf{w}$ is unrelated to the classifier $\theta_2$ regardless of which cost function we choose. If we use the label-aware cost or freeze the feature extractor parameterized by $\theta_1$, which is trained in the first stage, the OT loss in Step (b) can be further reduced to $\mathcal{L}_{\mathrm{OT}}(\mathbf{w}^{(t)})$, where we only need Steps (b)-(c) at each iteration. This is different from most automatic re-weighting methods, where the gradient of $\mathbf{w}$ is always related to the to-be-learned model $\{\theta_1, \theta_2\}$ or the classifier $\theta_2$ (when freezing $\theta_1$) for minimizing the classification loss on the meta set.
Prototype-oriented OT loss (POT) Recall that we represent a balanced meta set with $M$ samples as the distribution Q in (5), where $M/K$ is the number of data points in each class and is usually larger than 1. Computing the OT loss then requires learning a $B \times M$-dimensional transport matrix at each iteration. To improve the efficiency of the algorithm, we average all samples from each class in the meta set to obtain its prototype and propose a new Q distribution over the $K$ prototypes:
$Q = \sum_{k=1}^{K} \frac{1}{K}\, \delta_{(\hat{x}_k, y_k)^{\mathrm{meta}}}, \qquad \hat{x}_k = \frac{K}{M} \sum_{j=1}^{M/K} x_{kj}^{\mathrm{meta}}, \quad (12)$
where POT loss only needs a B ×K-dimensional transport matrix. Due to the robustness of our method to Q, when dealing with a large number of classes, we can randomly sample a mini-batch from K prototypes at each iteration to build Q.
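A small sketch of how the prototype-based Q of Eq. (12) could be assembled from the meta set is given below; whether the class mean is taken over raw inputs or extracted features is an implementation choice we leave open, and the names are illustrative.

```python
import torch

def build_prototypes(meta_x, meta_y, num_classes):
    """Class prototypes for Q in Eq. (12): the mean of the meta samples of each class.

    meta_x: [M, ...] meta inputs (or their features), meta_y: [M] integer labels.
    Returns prototypes [K, ...], one-hot labels [K, K], and the uniform mass 1/K per prototype.
    """
    protos = torch.stack([meta_x[meta_y == k].mean(dim=0) for k in range(num_classes)])
    proto_labels = torch.eye(num_classes)
    q_mass = torch.full((num_classes,), 1.0 / num_classes)
    return protos, proto_labels, q_mass
```

With a large number of classes, a random subset of these prototypes can be drawn at each iteration to build Q, matching the stochastic setting described above.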
6 Experiments
We conduct extensive experiments to validate the effectiveness of our proposed method on text, image, and point cloud imbalanced classification tasks. Notably, different from the imbalanced image
and point cloud classification, we find that optimizing the weight net is better than optimizing the weight vector directly in the text classification. Therefore, we optimize the weight vector for the image and point cloud cases and build a weight net for text case. Unless specified otherwise, we adopt the combined cost and set the hyper-parameter for the entropic constraint as λ = 0.1 and the maximum iteration number in the Sinkhorn algorithm as 200. We define the imbalance factor (IF) of a dataset as the data point amount ratio between the largest and smallest classes.
6.1 Experiments on Imbalanced Image Classification
Datasets and Baselines We evaluate our method on CIFAR-LT-10, CIFAR-LT-100, ImageNet-LT and Places-LT. We create CIFAR-LT-10 (CIFAR-LT-100) from CIFAR-10 (CIFAR-100)[38] by downsampling samples per class with IF∈{200, 100, 50, 20} [5, 13]. ImageNet-LT is built from the classic ImageNet with 1000 classes[39] and IF=1280/5 [5, 24]. Places-LT is created from Places-2 [40] with 365 classes and IF=4980/5 [4, 24]. We randomly select 10 training images per class as meta set [5]; see more details in Appendix B. We consider the following baselines: (1) Cross-entropy (CE), the model trained on the imbalanced training set with CE loss. (2) Empirical re-weighting methods, like Focal loss [12], Class-balanced (CB) loss [13] and LDAM-DRW [17]. (3) Automatic re-weighting methods, including L2RW [15], IB [18], Meta-Weight-Net [16] and Meta-class-weight [4]. (4) Meta-learning methods, including MetaSAug [5] and above methods of [4, 15, 16, 19]. (5) Two-stage methods, such as OLTR [24], cRT [6], LWS [6], BBN [25] and methods of [4, 5].
Experimental details and results on CIFAR-LT For a fair comparison, we use ResNet-32 [41] as the backbone on CIFAR-LT-10 and CIFAR-LT-100. Following Li et al. [5], at stage 1, we use 200 epochs and set the learning rate α of θ as 0.1, which is decayed by 1e−2 at the 160th and 180th epochs. At stage 2, we use 40 epochs, set α as 2e−5 and the learning rate β of the weights as 1e−3. We use the SGD optimizer with momentum 0.9, weight decay 5e−4 and set the batch size as 16. We list the recognition results of different methods on CIFAR-LT-10 and CIFAR-LT-100 with different imbalance factors in Table 1. We report the average result of 5 random experiments without standard deviation, which is of small scale (e.g., 1e-2). We can see that our re-weighting method outperforms CE training by a large margin and performs better than the empirical or automatic re-weighting methods. Remarkably, our proposed method outperforms the competing MetaSAug, which conducts a meta semantic augmentation approach to learn appropriate class-wise covariance matrices, when IF is 200, 100 and 50. Importantly, as the training data becomes more imbalanced, our method is more advantageous. Even though our proposed method is inferior to MetaSAug when the dataset is less imbalanced (IF=20), it can still achieve competitive results and surpasses related re-weighting methods. This suggests that our proposed method can be used to enhance imbalanced classification, without the requirement of designing complicated models or augmenting samples on purpose.
To understand our method more comprehensively, we provide a series of ablation studies on CIFAR-LT-100 with IF=200 in Table 2. First, to explore the impact of the cost function, we use different cost functions for the OT loss. We can see that the combined cost performs better than the label-aware cost and the feature-aware cost, confirming the validity of combining features and labels to define the cost. Besides, using either the label-aware or the feature-aware cost can still achieve acceptable performance, indicating the usefulness of the OT loss in the imbalanced issue. Second, to explore the robustness to the meta distribution Q, we adopt three ways to build Q: (1) using prototypes defined in Eq. (12) (K samples); (2) using all samples defined in Eq. (5) (10 ∗K samples); (3) randomly sampling one point from each class (K samples) in the meta set. We find that the prototype-based meta performs best, and the performance with the random-sample meta or the whole meta is still competitive, which demonstrates the robustness of our proposed method to the distribution Q and the benefit of using the prototypes to build Q. Third, we compare two ways of learning w in each iteration: re-estimating w from scratch versus maintaining and updating w throughout the training (i.e., iteratively optimizing the weights). We find that iterative optimization performs better.
Since the cost function is essential in optimizing the OT loss, we are interested in examining the learned weight vectors given by different cost functions. Here, we use CIFAR-LT-10, randomly choose {10, 9, ..., 1} training samples from classes {1, 2, ..., 10}, and obtain 55 samples, which are used to build the distribution P. Besides, the 10 prototypes from the meta set are used to build the distribution Q. Given the different cost functions, we show the learned weight vectors of the 55 training samples in Fig. 1, which have very different properties. Specifically, the label-aware cost and the feature-aware cost lead to class-level weights and sample-level weights, respectively. It is reasonable that the label-aware cost only decides whether the two samples (from the meta set and the training set) belong to the same class, resulting in a class-level measure. In contrast, the feature-aware cost measures the distance between samples at the sample level, where each sample has its own feature. More interestingly, the learned weights with the combined cost exhibit the characteristics of class-level and sample-level weights simultaneously, where example weights of different classes are far apart and example weights of the same class are close. Coincidentally, using the combined cost to define the OT loss achieves the same goal as [4], which explicitly considers class-level and sample-level weights. Besides, we find that the learned example weights of the minority classes are usually more prominent than those of the majority classes.
To verify whether our method ameliorates the performance on minority classes, we plot the confusion matrices of CE, MetaSAug, and ours on CIFAR-LT-10 with IF=200 in Fig. 2. As expected, although CE training can almost perfectly classify the samples in the majority classes, it suffers severe performance degradation on the minority classes. MetaSAug improves the accuracies of the minority classes, although there is still a big gap between the performance on the minority classes and the majority classes. In contrast, ours does not show a clear preference for any particular class and outperforms the strong baseline in overall performance, which is the goal of an imbalanced classification task.
Experimental details and results on Places-LT and ImageNet-LT Following [6], we employ ResNet-152 pre-trained on the full ImageNet as the backbone on Places-LT. For stage 1, we set the initial learning rate as 0.01, which is decayed by 1e−1 every 10 epochs. In stage 2 of our method, we only fine-tune the last fully-connected layer for training efficiency and set α as 1e−4 and β as 1e−3 within 50 epochs. The mini-batch size is 32 and the optimizer is SGD with momentum 0.9 and weight decay 5e−3. As shown in Table 3, our method outperforms all baselines. It further suggests that our method has excellent performance in the extreme imbalance setting with IF=4980/5. For a fair comparison, we implement our method on ImageNet-LT under the same experimental conditions as [5], from which we have taken the results of the other comparison methods. We consider ResNet-50 [41] as the backbone on ImageNet-LT. In stage 1, we run 200 epochs and decay the learning rate by 0.1 at the 60th and 80th epochs. In stage 2, we implement our method for 50 epochs, set the learning rate α as 2e−5 and β as 1e−2, and only fine-tune the last fully-connected layer for training efficiency. We use the SGD optimizer with momentum 0.9 and weight decay 5e−4, and set the batch size as 128. The results on ImageNet-LT of different models reported in Table 4 indicate the effectiveness of our proposed method on ImageNet-LT when comparing with the strong baseline MetaSAug. Besides, we further consider randomly sampling a mini-batch of size 100 from all prototypes at each iteration to build Q, whose performance is comparable to that of the Q built from all prototypes. Thus, with a stochastic setting for Q, our proposed method can be applied to imbalanced training sets with a large number of classes. We defer the computational complexity analysis, additional quantitative results, and qualitative results on different image datasets to Appendix B.
6.2 Experiments on Imbalanced Text Classification
Datasets and settings Following [1, 2], we adopt the popular SST-2 for 2-class and SST-5 for 5-class sentence sentiment classification [43]. For a fair comparison, we use the same imbalanced datasets and settings as [2]. Specifically, we set class 1 as the minority class and the rest as the majority classes, where the number of examples in each majority class is fixed as 1000 (SST-2) and 500 (SST-5), and we achieve different imbalance settings by varying the number of examples in the minority class. Besides, the number of samples in the meta set is 10 for each class. We use the BERT (base, uncased) model [44] as the feature extractor and a simple 3-layer fully-connected network (FCN) with the structure in Appendix C as the classifier. To make subsequent experiments on strong models, following [2], we fine-tune the BERT model on an additional balanced training set (500 samples in each class), which is randomly selected from the remaining examples in each dataset, excluding the imbalanced training set, the meta set, and the to-be-evaluated test set. Based on the fine-tuned BERT, we adopt the two-stage manner for the imbalanced text datasets, where we train the BERT + FCN in the first stage with the CE loss and train the FCN with our proposed method by freezing the BERT in the second stage. The settings of the training process are deferred to Appendix C.
Baselines We consider the following methods: (1) vanilla BERT, the vanilla pretrained language model. (2) Fine-tuned BERT, where the pretrained BERT is fine-tuned on an additional balanced training set. (3) Fine-tuned BERT + CE, the fine-tuned BERT model followed by the FCN, which is further trained with the CE loss on the imbalanced training set following [1, 2]. (4) Automatic re-weighting methods, including the method of Hu et al. [1] and constraint-based re-weighting [2]. Since few works consider imbalanced text classification, we further consider (5) Empirical re-weighting methods, including re-weighting with inverse class frequency (i.e., Proportion) [11, 14] and LDAM-DRW [17], and (6) Logit adjustment [20], using their official codes and settings1 2. We repeat all experiments 10 times and report the mean and standard deviation.
Experimental details and results on SST-2 and SST-5 We report the text classification results of the compared methods under different imbalance factors in Table 5. We find that our proposed method outperforms all competing methods in all imbalance factor settings, which demonstrates its effectiveness. Although all methods achieve acceptable performance under slight imbalance, the performance of the three baselines (Vanilla BERT, Fine-Tuned BERT and Fine-Tuned BERT+CE) drops dramatically, indicating the importance of proposing specialized methods for handling imbalanced training datasets. Logit adjustment (post-hoc correction) is very competitive with ours on SST-2 but only produces results similar to the three above-mentioned baselines on SST-5. In contrast, ours is robust to not only the imbalance factors but also the number of classes, where the results are consistent with the image case. We provide more results in Appendix C.4. In addition to 1D text and 2D images, we further investigate the robustness of our method on 3D point cloud data, where we use the popular ModelNet10 [45] and defer the experiments to Appendix D.
7 Conclusion
This paper introduces a novel automatic re-weighting method for imbalanced classification based on optimal transport (OT). This method presents the imbalanced training set as a to-be-learned distribution over its training examples, each of which is associated with a probability weight. Similarly, our method views another balanced meta set as a balanced distribution over its examples. By minimizing the OT distance between the two distributions in terms of the defined cost function, the learning of the weight vector is formulated as a distribution approximation problem. Our proposed re-weighting method bypasses the commonly-used classification loss on the meta set and uses OT to learn the weights, disengaging the dependence of the weight learning on the concerned classifier at each iteration. This approach differs from most of the existing re-weighting methods and may provide new insights for future work. Experimental results on a variety of imbalanced datasets of both images and texts validate the effectiveness and flexibility of our proposed method.
Acknowledgements. This work is partially supported by a grant from the Shenzhen Science and Technology Program (JCYJ20210324120011032) and Shenzhen Institute of Artificial Intelligence and Robotics for Society.
1https://github.com/kaidic/LDAM-DRW 2https://github.com/google-research/google-research/tree/master/logit_adjustment | 1. What is the main contribution of the paper regarding imbalanced classification?
2. What are the strengths and weaknesses of the proposed approach in addressing the challenges of imbalanced classification?
3. Do you have any questions regarding the reasoning behind the proposed method?
4. How does the reviewer assess the clarity and organization of the paper's content?
5. Are there any typos or errors in the paper that need correction?
6. Are there any limitations to the proposed approach that should be considered?
7. Are there any recent works related to imbalanced classification that the reviewer thinks the authors should consider comparing their approach to? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper argues that the major challenge for imbalanced classification is essentially the mismatch between the imbalanced training dataset and the balanced test set. Besides, the general optimizing the weights by the classification loss may lead to inaccurate learning of the weights. Therefore, the authors formulate the learning of weight vector as the distribution approximation problem by minimizing the OT distance between to-be-learned distribution of samples in imbalanced training set and another balanced meta set. To show the effectiveness of the proposed method, they conduct extensive experiments on imbalanced classification tasks including image, text, and point cloud.
Strengths And Weaknesses
Pros:
The idea of sample re-weighting is not new, but the method proposed by the authors, which learns the sample weights by minimizing the OT distance between the imbalanced training set and a balanced meta set, seems reasonably novel.
The paper is well organized and easy to follow.
Cons:
Why may the dependence of the weights on the classifier at the training stage lead to inaccurate learning of the weights? A further explanation is required to highlight the reasonability of the proposed method.
The reasonability of the statement that "the major challenge for imbalanced classification is essentially the mismatch between the imbalanced training dataset and the balanced test set" also needs further consideration or explanation.
An analysis or discussion about the computation complexity of optimizing the OT distance is necessary.
Questions
Why may the dependence of the weights on the classifier at the training stage lead to inaccurate learning of the weights?
Why are the weights obtained when the imbalanced training set is approximated to a balanced meta data better for training a classifier?
An analysis or discussion about the computation complexity of optimizing the OT distance is necessary.
The comparison with the highly related IB (Influence-balanced loss for imbalanced visual classification, ICCV 2021) and DisAlign (Distribution alignment: A unified framework for long-tail visual recognition, CVPR 2021) is missing.
Typos:
θ in Eq. (2) should be bolded.
In Eq. (10), should η --> λ?
Limitations
A held-out balanced meta set is required.
NIPS | Title
Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification
Abstract
Imbalanced data pose challenges for deep learning based classification models. One of the most widely-used approaches for tackling imbalanced data is re-weighting, where training samples are associated with different weights in the loss function. Most existing re-weighting approaches treat the example weights as learnable parameters and optimize the weights on the meta set, entailing expensive bilevel optimization. In this paper, we propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view. Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set. The weights of the training samples are the probability mass of the imbalanced distribution and are learned by minimizing the OT distance between the two distributions. Compared with existing methods, our proposed one disengages the dependence of the weight learning on the concerned classifier at each iteration. Experiments on image, text and point cloud datasets demonstrate that our proposed re-weighting method has excellent performance, achieving state-of-the-art results in many cases and providing a promising tool for addressing the imbalanced classification issue. The code has been made available at https://github.com/DandanGuo1993/reweight-imbalance-classification-with-OT.
1 Introduction
Deep neural networks (DNNs) have achieved remarkable success in various applications, which is undoubtedly inseparable from high-quality large-scale datasets. Usually, the number of samples for each class in these datasets is manually selected, resulting in balanced datasets. However, most real-world datasets are imbalanced: a few classes (a.k.a. head or majority classes) occupy most of the data, while most classes (a.k.a. tail or minority classes) have only a few samples. A model trained on the imbalanced training set without considering such class imbalance would be significantly dominated by those majority classes, and thus underperform on a balanced test dataset. This is also known as the long-tailed problem and exists in many domains, such as text classification [1, 2], object detection [3] and image classification [4–6].
There are rich research lines to solve the imbalance problem, including re-sampling [7–10], class-level or instance-level re-weighting [1, 2, 4, 11–18], meta-learning [4, 5, 15, 16, 19], two-stage methods
[4–6, 17] and post-hoc correction [20, 21]. Inspired by [2], re-weighting strategies can be roughly grouped into empirical re-weighting and automatic re-weighting. The former aims to design weights manually, with the major insight that a minority-class example will be assigned a larger weight value than a majority-class one [12–14]. However, manually setting weights can be less adaptive to different datasets [2]. The latter aims to assign adaptive weights to the examples through learning mechanisms [1, 2, 4, 15, 16]. As a representative automatic re-weighting method, L2RW [15] optimizes the weight vector as a learnable parameter with an unbiased meta set (i.e., validation set). Although L2RW and its followers have received widespread attention, most of them may be limited by optimizing the weights with the classification loss on the meta set: the gradient of the weights is usually coupled with the to-be-learned classifier at each training iteration. Since the classifier is the major concern in the imbalanced issue [6], the dependence of the weights on the classifier at the training stage may lead to inaccurate learning of the weights.
This paper develops a novel automatic re-weighting method for imbalanced classification based on optimal transport (OT). As discussed by Jamal et al. [4], the major challenge for imbalanced classification is essentially the mismatch between the imbalanced training dataset (seen by a machine learning model) and the balanced test set (used to test the learned model). To this end, we aim to view the learning of the weight vector as a distribution approximation problem. We adopt the two-stage learning manner motivated by [6], where stage 1 and stage 2 focus on learning the feature extractor with the standard cross-entropy loss and the classifier with our proposed method, respectively. Specifically, we represent the imbalanced training set as a discrete empirical distribution P over all samples within it and view the to-be-learned weight vector w as its probability measure. Then we represent the balanced meta set as a discrete empirical distribution Q over all samples within it (in the same space as P), which has a uniform probability measure for being balanced. Therefore, the learning of a weight vector can be formulated as the process of making the distribution P as close to the balanced distribution Q as possible, a process facilitated by leveraging the OT distance [22]. Notably, the cost function plays a paramount role when learning the transport plan for OT, where we use the features and ground-truth labels of samples to design it. Due to the flexibility of our method, we can also learn an explicit weight net directly from data like [16, 23] but with a different structure, optimized by the OT loss instead of the classification loss on the meta set. Generally, at each training iteration of stage 2, we minimize the OT loss to learn the weight vector (or weight net) for the current mini-batch, which is further used to re-weight the training loss for optimizing the model. As we can see, the gradient of the weights only relies on the OT loss and is thus independent of the classifier. More importantly, our proposed method is robust to the distribution Q. To save memory consumption, we introduce the prototype-oriented OT loss by building a new distribution Q based on prototypes instead of samples (one prototype for each class). Moreover, our proposed method can achieve reasonably good performance even if we randomly select a mini-batch from all prototypes to build Q, making our method applicable to datasets with a large number of classes.
We summarize our main contributions as follows: (1) We formulate the learning of weight vector or weight net as the distribution approximation problem by minimizing the statistical distance between to-be-learned distribution over samples from imbalanced training set and another balanced distribution over samples from the meta set. (2) We leverage the OT distance between the distributions to guide the learning of weight vector or weight net. (3) We apply our method to imbalanced classification tasks including image, text and point cloud. Experiments demonstrate that introducing the OT loss to learn the example weights can produce effective and efficient classification performance.
2 Related Work
Empirical Re-weighting A classic empirical re-weighting scheme is to provide the examples of each class with the same weight, such as inverse class frequency [11, 14]. It has been further improved by the class-balanced loss [13], which calculates the effective number of examples as class frequency. Focal Loss [12] uses the predicted probability to calculate higher weights for the hard examples and dynamically adjust the weights. LDAM-DRW [17] designs a label-distribution-aware loss function and adopts a deferred class-level re-weighting method (i.e., inverse class frequency).
Automatic Re-weighting The automatic re-weighting methods learn the weights with learning mechanisms. L2RW [15] adopts a meta-learning manner to learn the example weights, which are optimized by the classification loss on the balanced meta set. Hu et al. [1] further improve L2RW
by iteratively optimizing weights instead of re-estimating them at each iteration. Meta-weight-net [16] aims to learn an explicit weight net directly from data and optimizes it in a meta-learning manner. Meta-class-weight [4] defines the weight for each example as the combination of a class-level weight (estimated by Cui et al. [13]) and an instance-level weight, optimized with a meta-learning approach similar to L2RW. Influence-balanced loss (IB) [18] is proposed to re-weight samples by the magnitude of the gradient. Recently, Liu et al. [2] propose to update the weights and model under a constraint. Our method belongs to the automatic re-weighting group, and the idea of building an explicit weight net is similar to Shu et al. [16]. However, the major difference is that we bypass the classification loss on the meta set and use OT to learn the weights from the view of distribution approximation, disengaging the dependence of the weight learning on the concerned classifier at each iteration.
Meta Learning and Two-stage Learning Recently, researchers have proposed to tackle the imbalance issue with meta-learning, which can be applied to build a Balanced Meta-Softmax (BALMS)[19], learn weights [4, 15, 16] or transformed semantic directions for augmenting the minority classes in MetaSAug [5]. Two-stage methods, where the first stage and second stage focus on representation learning and classifier learning, respectively, have been proved effective for solving the imbalanced issue [5, 6, 16, 24]. BBN [25] unifies two stages with a specific cumulative learning strategy.
Optimal Transport Recently, OT has been used to solve regression under covariate shift [26] and unsupervised domain adaptation [27, 28], with sample-level, class-level, or domain-level weight vectors. Although these works also adopt a re-weighting strategy and the OT distance, they are distinct from ours in terms of task and technical detail. The dynamic importance weighting of [29], which adopts MMD to re-weight samples for label-noise and class-prior-shift tasks, is also different from ours: we provide a more flexible way of learning sample weights and disengage the weight learning from the concerned classifier at each iteration. To the best of our knowledge, works that solve the imbalanced classification problem with OT are still very limited. An oversampling method via OT (OTOS) [30] aims to make synthetic samples follow a distribution similar to that of the minority class samples. In contrast, ours is a re-weighting method based on OT that does not augment samples. Another recent work is Optimal Transport via Linear Mapping (OTLM) [21], which performs post-hoc correction from the OT perspective and proposes a linear mapping to replace the original exact cost matrix in the OT problem. Different from OTLM, which belongs to the post-hoc correction group and aims to learn a refined prediction matrix, ours falls into the training-aware group and aims to re-weight the training classification loss.
3 Background
Imbalanced Classification Consider a training set $D_{train}=\{(x_i, y_i)\}_{i=1}^N$, where $(x, y)$ is an input-target pair, $x_i$ is the $i$-th sample, $y_i \in \{0, 1\}^K$ is the associated one-hot label vector over $K$ classes, and $N$ is the total number of training samples. Besides, consider a small balanced meta set $D_{meta} = \{(x_j, y_j)\}_{j=1}^M$, where $M$ is the number of meta samples and $M \ll N$. Denote the model parameterized with $\theta$ as $f(x, \theta)$, where $\theta$ is usually optimized by empirical risk minimization over the training set, i.e., $\theta^* = \arg\min_{\theta} \frac{1}{N}\sum_{i=1}^N \ell(y_i, f(x_i;\theta))$. For notational convenience, we denote $l_i^{train}(\theta) = \ell(y_i, f(x_i;\theta))$ as the training loss of pair $(x_i, y_i)$. However, a model trained this way will favor the majority classes if the training dataset is imbalanced.
Learning to Re-Weight Examples To solve the imbalanced issue, a kind of re-weighting methods is to treat the weights as the learnable parameter and learn a fair model to the minority and the majority classes by optimizing the weighted training loss. At each training iteration, the model is updated by
$$\theta^*(w) = \arg\min_{\theta} \sum_{i=1}^N w_i\, l_i^{train}(\theta), \quad (1)$$
where w=(w1, . . . , wN ) T is the weight vector (usually with a simplex constraint) of all training examples. Then the optimal w is obtained by making the model parameter θ∗(w) from Eq. (1) minimize the classification loss on a balanced meta set, formulated as
$$w^* = \arg\min_{w} \frac{1}{M} \sum_{j=1}^M l_j^{meta}(\theta^*(w)), \quad (2)$$
where $l_j^{meta}$ is the loss of pair $(x_j, y_j)$ from the meta set and the updated $w^*$ is used to ameliorate the model. Generally, the model $\theta$ consists of two key components, a feature extractor and a classifier, where the classifier has been shown to be the component of major concern in the imbalanced setting [6]. However, the gradient of the weights in Eq. (2) always depends on that classifier at each training iteration, which may result in inaccurate learning of the weights. Most automatic re-weighting methods learn the weight vectors or weight-related parameters (e.g., a weight net) along this line; see the previous works [4, 15, 16] for details.
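For concreteness, a minimal sketch of the inner weighted update of Eq. (1) is shown below, assuming a PyTorch-style model; the function and variable names are illustrative and not taken from any released code.

```python
import torch
import torch.nn.functional as F

def weighted_train_step(model, optimizer, x, y, w):
    """One inner update of Eq. (1): minimize the w-weighted training loss.

    x: mini-batch of inputs, y: integer class labels,
    w: per-example weights for this mini-batch (non-negative, summing to ~1).
    """
    optimizer.zero_grad()
    logits = model(x)
    # per-example cross-entropy, then re-weighted by w
    per_example_loss = F.cross_entropy(logits, y, reduction="none")
    loss = (w * per_example_loss).sum()
    loss.backward()
    optimizer.step()
    return loss.item()
```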
Optimal Transport Theory OT has been widely used to calculate the cost of transporting one probability measure to another in various machine learning problems, such as generative models [31], text analysis [32, 33], adversarial robustness [34], and meta learning [35, 36]. Among the rich theory of OT, this work presents a brief introduction to OT for discrete distributions; see Peyré and Cuturi [22] for more details. Consider $p = \sum_{i=1}^n a_i \delta_{x_i}$ and $q = \sum_{j=1}^m b_j \delta_{y_j}$ as two discrete probability distributions, where $x_i$ and $y_j$ live in the same arbitrary space and $\delta$ is the Dirac function. Then $a \in \Delta^n$ and $b \in \Delta^m$ lie in the probability simplices of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. The OT distance between $p$ and $q$ can be expressed as:
$$\mathrm{OT}(p, q) = \min_{T \in \Pi(p,q)} \langle T, C\rangle, \quad (3)$$
where $\langle\cdot,\cdot\rangle$ is the Frobenius dot-product and $C \in \mathbb{R}_{\geq 0}^{n\times m}$ is the transport cost matrix with entries $C_{ij} = C(x_i, y_j)$. The transport probability matrix $T \in \mathbb{R}_{> 0}^{n\times m}$, which satisfies $\Pi(p, q) := \{T \mid \sum_{i=1}^n T_{ij} = b_j, \sum_{j=1}^m T_{ij} = a_i\}$, is learned by minimizing $\mathrm{OT}(p, q)$. Directly optimizing Eq. (3) is computationally demanding, and OT with entropic regularization [37] is introduced to provide a sufficiently smooth approximation that can be optimized at a small computational cost.
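Entropy-regularized OT of this kind is typically solved with Sinkhorn iterations [37]; a self-contained NumPy sketch is given below, where the function name, iteration count, and numerical-stability constant are our own illustrative choices.

```python
import numpy as np

def sinkhorn(a, b, C, lam=0.1, n_iters=200):
    """Entropic OT: approximately solve min_T <T, C> - lam * H(T)
    subject to row sums of T equal to a and column sums equal to b.

    a, b: probability vectors; C: cost matrix of shape (len(a), len(b)).
    Returns the transport plan T and the (unregularized) cost <T, C>.
    """
    K = np.exp(-C / lam)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u + 1e-16)   # scale columns toward marginal b
        u = a / (K @ v + 1e-16)     # scale rows toward marginal a
    T = u[:, None] * K * v[None, :]
    return T, float((T * C).sum())
```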
4 Re-weighting Method with Optimal Transport
This work views a training set as a to-be-learned distribution, whose probability measure is set as learnable weight vector w. We use OT distance to optimize w for re-weighting the training loss.
4.1 Main Objective
Given the imbalanced training set Dtrain , we can represent it as an empirical distribution over N pairs, where each pair (xi, yi)train has the sample probability wi (i.e., the weight), defined as:
$$P(w) = \sum_{i=1}^N w_i\, \delta_{(x_i, y_i)^{train}}, \quad (4)$$
where $(x_i, y_i)^{train}$ is the $i$-th pair from the training set and the learnable weight vector $w$ over all training examples lies in the probability simplex of $\mathbb{R}^N$. Since the meta set $D_{meta}$ is balanced across all classes and closely related to the training set, it is reasonable to assume that the meta set already follows the balanced data distribution that the training set aims to approximate. We thus sample each pair from the meta set with equal probability and represent it with an empirical distribution Q:
$$Q = \sum_{j=1}^M \frac{1}{M}\, \delta_{(x_j, y_j)^{meta}}, \quad (5)$$
where (xj , yj)meta is the j-th pair from the meta set. To learn w, different from most automatic re-weighting methods, which minimize the classification loss on the meta set, we aim to enforce the to-be-learned distribution P (w) to stay close to the balanced distribution Q. Here, we explore the re-weighting method by adopting the OT distance between P (w) and Q:
$$\min_{w} \mathrm{OT}(P(w), Q) \stackrel{\text{def.}}{=} \min_{w} \min_{T \in \Pi(P(w), Q)} \langle T, C\rangle, \quad (6)$$
where the cost matrix $C \in \mathbb{R}_{\geq 0}^{N\times M}$ is described below and the transport probability matrix $T \in \mathbb{R}_{> 0}^{N\times M}$ should satisfy $\Pi(P(w), Q) := \{T \mid \sum_{i=1}^N T_{ij} = 1/M, \sum_{j=1}^M T_{ij} = w_i\}$.
4.2 Cost Function
For notation convenience, we reformulate the model as f(x,θ) = f2(f1(x;θ1);θ2), where f1 parameterized with θ1 denotes the representation learning part before the classifier, and f2 parameterized
with θ2 denotes the classifier. Intuitively, the cost Cij measures the distance between pair i in training set and pair j in meta set, which can be flexibly defined in different ways. We explore a few conceptually intuitive options of Cij , although other reasonable choices can also be used.
Label-aware Cost As the first option, we can define Cij with the ground-truth labels of two samples:
$$C_{ij} = d^{Lab}(y_i^{train}, y_j^{meta}), \quad (7)$$
where $d^{Lab}(\cdot,\cdot)$ denotes a distance measure and $y_i^{train}$, $y_j^{meta}$ are the ground-truth label vectors of the two samples, respectively. Intuitively, if we use the Euclidean distance, then $C$ is a 0-1 matrix (rescaling the non-zero constant to 1), i.e., $C_{ij}=0$ if $x_i^{train}$ and $x_j^{meta}$ are from the same class, and $C_{ij}=1$ otherwise. In this case the OT loss is influenced by neither the feature extractor $\theta_1$ nor the classifier $\theta_2$.
Feature-aware Cost Besides, we can define Cij purely based on the features of samples:
$$C_{ij} = d^{Fea}(z_i^{train}, z_j^{meta}), \quad (8)$$
where $z_i^{train} = f_1(x_i^{train};\theta_1) \in \mathbb{R}^E$ and $z_j^{meta} = f_1(x_j^{meta};\theta_1) \in \mathbb{R}^E$ denote the $E$-dimensional representations of $x_i^{train}$ and $x_j^{meta}$, respectively. $d^{Fea}(\cdot,\cdot)$ denotes any commonly used distance measure, and we empirically find the cosine distance to be a good choice. It is easy to see that if the features of $x_i^{train}$ and $x_j^{meta}$ are close, their cost is small. Here the OT loss is influenced by the feature extractor $\theta_1$.
Combined Cost Finally, we can use both features and labels to define Cij , denoted as
$$C_{ij} = d^{Fea}(z_i^{train}, z_j^{meta}) + d^{Lab}(y_i^{train}, y_j^{meta}). \quad (9)$$
Intuitively, $C_{ij}$ will be small if two samples have the same label and similar features. Empirically, we find that using $d^{Fea} = 1 - \mathrm{cosine}(\cdot,\cdot)$ and the Euclidean distance for $d^{Lab}$ gives better performance. Interestingly, given the feature-aware cost (8) or the label-aware cost (7), the learned weight vector can be interpreted as an instance-level or class-level re-weighting method, respectively. The weight vector learned from the combined cost can be interpreted as a combination of class-level and instance-level weights, without requiring a specialized design for two-component weights as in previous work [4]; see Fig. 1.
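As an illustration of how such a combined cost matrix could be assembled from a mini-batch, a short PyTorch-style sketch is given below; it treats the label cost as a 0-1 indicator (the rescaled Euclidean distance between one-hot labels) and the feature cost as one minus cosine similarity, with all names being illustrative.

```python
import torch
import torch.nn.functional as F

def combined_cost(z_train, y_train, z_meta, y_meta):
    """Cost matrix of Eq. (9): C_ij = dFea(z_i, z_j) + dLab(y_i, y_j).

    z_train: [N, E] training features, z_meta: [M, E] meta features,
    y_train: [N] and y_meta: [M] integer class labels.
    """
    z_train = F.normalize(z_train, dim=1)
    z_meta = F.normalize(z_meta, dim=1)
    feat_cost = 1.0 - z_train @ z_meta.t()                       # [N, M] cosine distance
    label_cost = (y_train[:, None] != y_meta[None, :]).float()   # [N, M] 0-1 indicator
    return feat_cost + label_cost
```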
4.3 Learn the Weight Vector
Given the defined cost function, we adopt the entropy regularized OT loss [37] to learn the weight vector. We thus rewrite (6) as the following optimization problem:
$$\min_{w} L_{OT} = \langle C, T_{\lambda}^*(w)\rangle, \quad \text{subject to } T_{\lambda}^*(w) = \arg\min_{T \in \Pi(P(w), Q)} \langle T, C\rangle - \lambda H(T), \quad (10)$$
where $\lambda > 0$ is a hyper-parameter for the entropic constraint $H(T) = -\sum_{ij} T_{ij} \ln T_{ij}$. Note that (10) provides a new perspective on the relationship between $w$ and $T$: $w$ is the parameter of the leader problem and $T$ is the parameter of the follower problem, which has the lower priority. Accordingly, when we minimize (10) with respect to $w$ using gradient descent, we should differentiate through $T$. Below we investigate two ways to optimize the weight vector.
Optimizing w directly Specifically, at each training iteration, we define $P(w)$ with the current $w$, use the Sinkhorn algorithm [37] to compute the OT loss, and then optimize $w$ by $w^* = \arg\min_{w} L_{OT}$.
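A minimal sketch of this direct optimization is given below, where the entropic OT loss is computed with an unrolled (differentiable) Sinkhorn loop so that gradients flow back to w; parameterizing w through a softmax to keep it on the simplex is our own simplification, not necessarily the paper's exact projection scheme, and all names are illustrative.

```python
import torch

def ot_loss(w, C, lam=0.1, n_iters=50):
    """Differentiable entropic OT loss <C, T*(w)> between P(w) and a uniform Q.

    w: [N] simplex weights for the training mini-batch (with gradient),
    C: [N, M] cost matrix. Gradients reach w through the Sinkhorn loop.
    """
    b = torch.full((C.shape[1],), 1.0 / C.shape[1], device=C.device)
    K = torch.exp(-C / lam)
    u = torch.ones_like(w)
    for _ in range(n_iters):
        v = b / (K.t() @ u + 1e-16)
        u = w / (K @ v + 1e-16)
    T = u[:, None] * K * v[None, :]
    return (T * C).sum()

def update_weights(logits_w, C, optimizer_w):
    """One gradient step on the weights; softmax keeps them on the simplex."""
    optimizer_w.zero_grad()
    w = torch.softmax(logits_w, dim=0)
    loss = ot_loss(w, C)
    loss.backward()
    optimizer_w.step()
    return torch.softmax(logits_w.detach(), dim=0)
```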
Amortizing the learning of w We also provide an alternative method by constructing an explicit weight net to output the example weights, whose structure can be designed flexibly. For example, we can build the following weight net and take the sample features as input:
$$w = \mathrm{softmax}(s), \quad s_i = w_{att} \tanh(W_{vz} z_i^{train}), \quad (11)$$
where $s_i$ is the $i$-th element of $s \in \mathbb{R}^N$, and $w_{att} \in \mathbb{R}^{1\times A}$ and $W_{vz} \in \mathbb{R}^{A\times E}$ are the learned parameters (we omit the bias for convenience), denoted as $\Omega = \{w_{att}, W_{vz}\}$. Denote by $S(z;\Omega)$ the weight net parameterized by $\Omega$, which can be optimized by $\Omega^* = \arg\min_{\Omega} L_{OT}$.
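A minimal PyTorch sketch of the weight net in Eq. (11) is given below; the class name and constructor arguments are illustrative.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Amortized weights of Eq. (11): s_i = w_att * tanh(W_vz z_i), w = softmax(s)."""

    def __init__(self, feature_dim, hidden_dim):
        super().__init__()
        self.W_vz = nn.Linear(feature_dim, hidden_dim, bias=False)  # W_vz in R^{A x E}
        self.w_att = nn.Linear(hidden_dim, 1, bias=False)           # w_att in R^{1 x A}

    def forward(self, z):                                  # z: [N, E] training features
        s = self.w_att(torch.tanh(self.W_vz(z))).squeeze(-1)        # [N] scores
        return torch.softmax(s, dim=0)                              # weights sum to 1
```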
5 Overall Algorithm and Implementations
To integrate our proposed method with deep learning frameworks, we adopt a stochastic setting, i.e., a mini-batch setting at each iteration. Following [4, 5], we adopt two-stage learning, where
Algorithm 1 Workflow of our re-weighting method for optimizing $\theta$ and $w$.
Require: Datasets $D_{train}$, $D_{meta}$, initial model parameters $\theta$ and weight vector, hyper-parameters $\{\alpha, \beta, \lambda\}$
for $t = 1, 2, ..., t_1$ do
    Sample a mini-batch $B$ from the training set $D_{train}$;
    Update $\theta^{(t+1)} \leftarrow \theta^{(t)} - \alpha\nabla_{\theta}L_B$ where $L_B = \frac{1}{|B|}\sum_{i\in B} \ell(y_i, f(x_i;\theta^{(t)}))$;
end for
for $t = t_1+1, ..., t_1+t_2$ do
    Sample a mini-batch $B$ from the training set $D_{train}$;
    Step (a): Update $\hat{\theta}^{(t+1)}(w^{(t)}) \leftarrow \theta^{(t)} - \alpha\nabla_{\theta}L_B$ where $L_B = \frac{1}{|B|}\sum_{i\in B} w_i^{(t)}\,\ell(y_i, f(x_i;\theta^{(t)}))$;
    Use $D_{meta}$ to build $Q$ in (12) and $B$ with $w^{(t)}$ to build $P(w^{(t)})$ in (4);
    Step (b): Compute $L_{OT}(\hat{\theta}_1^{(t+1)}(w^{(t)}), w^{(t)})$ with cost (9); optimize $w^{(t+1)} \leftarrow w^{(t)} - \beta\nabla_{w}L_{OT}(\hat{\theta}_1^{(t+1)}(w^{(t)}), w^{(t)})$;
    Step (c): Update $\theta^{(t+1)} \leftarrow \theta^{(t)} - \alpha\nabla_{\theta}L_B$ where $L_B = \frac{1}{|B|}\sum_{i\in B} w_i^{(t+1)}\,\ell(y_i, f(x_i;\theta^{(t)}))$;
end for
stage 1 trains the model $f(\theta)$ with the standard cross-entropy loss on the imbalanced training set and stage 2 aims to learn the weight vector $w$ while continuing to update the model $f(\theta)$. Generally, at stage 2, computing the optimal $\theta$ and $w$ requires two nested loops of optimization, which is computationally expensive. Motivated by Hu et al. [1], we optimize $\theta$ and $w$ alternately, corresponding to (1) and (10) respectively, where $w$ is maintained and updated throughout training so that re-estimation from scratch is avoided at each iteration. The implementation of our proposed method with $w$ optimized directly is shown in Algorithm 1, where the key steps are highlighted as Steps (a), (b), and (c). Specifically, at each training iteration $t$, in Step (a) we have $\hat{\theta}^{(t+1)}(w^{(t)}) = \{\hat{\theta}_1^{(t+1)}(w^{(t)}), \hat{\theta}_2^{(t+1)}(w^{(t)})\}$ and $\alpha$ is the step size for $\theta$; in Step (b), since the feature-based cost function depends on $\hat{\theta}_1^{(t+1)}(w^{(t)})$, the OT loss relies on $\hat{\theta}_1^{(t+1)}(w^{(t)})$, and $\beta$ is the step size for $w$; in Step (c), we update the model parameters to $\theta^{(t+1)}$. We defer the learning of $\theta$ and $\Omega$ for the amortized learning of $w$.
Discussion From Step (b), we find that the gradient of $w$ is unrelated to the classifier $\theta_2$ regardless of which cost function we choose. If we use the label-aware cost or freeze the feature extractor parameterized by $\theta_1$, which is trained in the first stage, the OT loss in Step (b) further reduces to $L_{OT}(w^{(t)})$, and we only need Steps (b)-(c) at each iteration. This is different from most automatic re-weighting methods, where the gradient of $w$ is always tied to the to-be-learned model $\{\theta_1, \theta_2\}$ or the classifier $\theta_2$ (when $\theta_1$ is frozen) through the classification loss on the meta set.
Prototype-oriented OT loss (POT) Recall that we represent a balanced meta set with $M$ samples as the distribution $Q$ in (5), where $M/K$ is the number of samples per class and is usually larger than 1. Computing the OT loss then requires learning a $B \times M$-dimensional transport matrix at each iteration. To improve the efficiency of the algorithm, we average all samples of each class in the meta set to obtain its prototype and propose a new $Q$ distribution over the $K$ prototypes:
$$Q = \sum_{k=1}^K \frac{1}{K}\, \delta_{(\hat{x}_k, y_k)^{meta}}, \quad \hat{x}_k = \frac{K}{M} \sum_{j=1}^{M/K} x_{kj}^{meta}, \quad (12)$$
where the POT loss only needs a $B \times K$-dimensional transport matrix. Due to the robustness of our method to $Q$, when dealing with a large number of classes we can randomly sample a mini-batch from the $K$ prototypes at each iteration to build $Q$.
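A short sketch of how the prototype-based Q of Eq. (12), including the optional random sub-sampling of prototypes mentioned above, could be constructed is given below; here per-class representations are averaged (Eq. (12) averages the raw meta samples; either works for illustration), and all names are our own.

```python
import torch

def build_prototype_Q(z_meta, y_meta, num_classes, sample_classes=None):
    """Prototype-oriented Q: one averaged point per class, each with mass 1/K.

    z_meta: [M, E] meta-set representations, y_meta: [M] integer labels.
    If sample_classes is given, a random subset of prototypes is used, which
    keeps the transport plan small when the number of classes K is large.
    """
    prototypes = torch.stack(
        [z_meta[y_meta == k].mean(dim=0) for k in range(num_classes)]
    )                                                        # [K, E]
    if sample_classes is not None:
        idx = torch.randperm(num_classes)[:sample_classes]
        prototypes = prototypes[idx]
    k_used = prototypes.shape[0]
    q = torch.full((k_used,), 1.0 / k_used)                  # uniform probability measure
    return prototypes, q
```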
6 Experiments
We conduct extensive experiments to validate the effectiveness of our proposed method on text, image, and point cloud imbalanced classification tasks. Notably, different from the imbalanced image
and point cloud classification, we find that optimizing the weight net is better than optimizing the weight vector directly in the text classification. Therefore, we optimize the weight vector for the image and point cloud cases and build a weight net for text case. Unless specified otherwise, we adopt the combined cost and set the hyper-parameter for the entropic constraint as λ = 0.1 and the maximum iteration number in the Sinkhorn algorithm as 200. We define the imbalance factor (IF) of a dataset as the data point amount ratio between the largest and smallest classes.
6.1 Experiments on Imbalanced Image Classification
Datasets and Baselines We evaluate our method on CIFAR-LT-10, CIFAR-LT-100, ImageNet-LT and Places-LT. We create CIFAR-LT-10 (CIFAR-LT-100) from CIFAR-10 (CIFAR-100)[38] by downsampling samples per class with IF∈{200, 100, 50, 20} [5, 13]. ImageNet-LT is built from the classic ImageNet with 1000 classes[39] and IF=1280/5 [5, 24]. Places-LT is created from Places-2 [40] with 365 classes and IF=4980/5 [4, 24]. We randomly select 10 training images per class as meta set [5]; see more details in Appendix B. We consider the following baselines: (1) Cross-entropy (CE), the model trained on the imbalanced training set with CE loss. (2) Empirical re-weighting methods, like Focal loss [12], Class-balanced (CB) loss [13] and LDAM-DRW [17]. (3) Automatic re-weighting methods, including L2RW [15], IB [18], Meta-Weight-Net [16] and Meta-class-weight [4]. (4) Meta-learning methods, including MetaSAug [5] and above methods of [4, 15, 16, 19]. (5) Two-stage methods, such as OLTR [24], cRT [6], LWS [6], BBN [25] and methods of [4, 5].
Experimental details and results on CIFAR-LT For a fair comparison, we use ResNet-32 [41] as the backbone on CIFAR-LT-10 and CIFAR-LT-100. Following Li et al. [5], at stage 1 we use 200 epochs and set the learning rate α of θ to 0.1, which is decayed by 1e−2 at the 160th and 180th epochs. At stage 2, we use 40 epochs, set α to 2e−5 and the learning rate β of the weights to 1e−3. We use the SGD optimizer with momentum 0.9 and weight decay 5e−4, and set the batch size to 16. We list the recognition results of different methods on CIFAR-LT-10 and CIFAR-LT-100 with different imbalance factors in Table 1. We report the average result of 5 random experiments; the standard deviation is omitted as it is of small scale (e.g., 1e-2). We can see that our re-weighting method outperforms CE training by a large margin and performs better than the empirical and automatic re-weighting methods. Remarkably, our proposed method outperforms the competing MetaSAug, which conducts a meta semantic augmentation approach to learn appropriate class-wise covariance matrices, when IF is 200, 100 and 50. Importantly, as the training data becomes more imbalanced, our method becomes more advantageous. Even though our proposed method is inferior to MetaSAug when the dataset is less imbalanced (IF=20), it still achieves competitive results and surpasses related re-weighting methods. This suggests that our proposed method can be used to enhance imbalanced classification without requiring complicated model designs or purpose-built sample augmentation.
To more comprehensively understand our method, we provide a series of ablation studies on CIFARLT-100 with IF=200 in Table 2. Firstly, to explore the impact of cost function, we use different cost functions for the OT loss. We can see that the combined cost performs better than label-aware cost and feature-aware cost, confirming the validity of combining features and labels to define cost. Besides, using either label-aware or feature-aware cost can still achieve acceptable performance, indicating the usefulness of OT loss in the imbalanced issue. Secondly, to explore the robustness of the meta distribution Q, we adopt three ways to build Q: (1) using prototypes defined in Eq. (12) (K samples) ; (2) using all samples defined in Eq. (5) (10 ∗K samples) ; (3) randomly sampling one point from each class (K samples) in meta set. We find that prototype-based meta performs best, and the performance with random-sample meta or whole meta is still competitive, which demonstrates the robustness of our proposed method to the distribution Q and the benefit of using the prototypes to build Q. Third, we compare two ways for learning w in each iteration, where one is re-estimating w from scratch and another one is maintaining and updating w throughout the training (i.e., iteratively optimizing weights). We find that iteratively optimizing performs better.
Since cost function is essential in optimizing the OT loss, we are interested in examining the learned weight vectors given by different cost functions. Here, we use CIFAR-LT-10, randomly choose {10, 9, ..., 1} training samples from class {1, 2, ..., 10} and obtain 55 samples, which are used to build distribution P . Besides, the 10 prototypes from meta set are used to build the distribution Q. Given the different cost functions, we show the learned weight vectors of 55 training samples in Fig. 1, which have very different properties. Specifically, the label-aware cost and feature-aware cost lead to class-level weights and sample-level weights, respectively. It is reasonable that label-aware cost
only decides whether the two samples (from the meta set and training set) belong to the same class, resulting in class-level measure. However, feature-aware cost measures the distance between samples from the sample-level, where each sample has its own feature. More interestingly, the learned weights with the combined cost own the characteristics of class-level and sample-level weights simultaneously, where example weights of different classes are far away and example weights of the same class are close. Coincidentally, using the combined cost to define the OT loss can reach the same goal of [4], which explicitly considers class-level and sample-level weight. Besides, we find that the learned example weights of the minority class are usually more prominent than those of the majority classes.
To verify whether our method ameliorates the performance on minority classes, we plot the confusion matrices of CE, MetaSAug, and ours on CIFAR-LT-10 with IF=200 in Fig. 2. As expected, although CE training can almost perfectly classify the samples in the majority classes, it suffers severe performance degeneration on the minority classes. MetaSAug improves the accuracies of the minority classes, but there is still a big gap between the performance on the minority classes and the majority classes. In contrast, ours does not show a clear preference for a certain class and outperforms the strong baseline in overall performance, which is the goal of an imbalanced classification task.
Experimental details and results on Places-LT and ImageNet-LT Following [6], we employ ResNet-152 pre-trained on the full ImageNet as the backbone on Places-LT. For stage 1, we set the initial learning rate as 0.01, which is decayed by 1e−1 every 10 epochs. In the stage 2 of our method, we only fine-tune the last fully-connected layer for training efficiency and set α as 1e−4 and β as 1e−3 within 50 epochs. The mini-batch size is 32 and the optimizer is SGD with momentum 0.9 and weight decay 5e−3. As shown in Table 3, our method outperforms all baselines. It further suggests that our method has excellent performance in the extreme imbalance setting with IF=4980/5. For a fair comparison, we implement our method on ImageNet-LT with the same experimental conditions of [5], from which we have taken the results of other comparison methods. We consider ResNet-50 [41] as the backbone on ImageNet-LT. In stage 1, we run 200 epochs and decay the learning rate by 0.1 at the 60th and 80th epochs. In stage 2, we implement our method for 50 epochs, set learning rate α as 2e−5 and β as 1e−2, and only fine-tune the last fully-connected layer for training efficiency. We use the SGD optimizer with momentum 0.9, weight decay 5e−4 and set the batch size as 128. The results on ImageNet-LT of different models reported in Table 4 indicate the effectiveness of our proposed method on ImageNet-LT when comparing with strong baseline MetaSAug. Besides, we further consider randomly sampling a mini-batch of size 100 from all prototypes at each iteration to build Q, whose performance is comparable to the Q from all prototypes. Thus, with a stochastic setting for Q, our proposed method can be used to the imbalanced training set with a large number of classes. We defer the time computational complexity, additional quantitative results and qualitative results on different image datasets to Appendix B.
6.2 Experiments on Imbalanced Text Classification
Datasets and settings Following [1, 2], we adopt the popular SST-2 for 2-class and SST-5 for 5-class sentence sentiment [43]. For a fair comparison, we use the same imbalanced datasets and settings with [2]. Specifically, we set class 1 as the minority class and the rest as the majority classes, where the number of examples in the majority class is fixed as 1000 (SST-2) and 500 (SST-5) and we achieve different imbalance settings by varying the number of examples in the minority class. Besides, the number of samples in the meta set is 10 for each class. We use the BERT (base, uncased) model [44] as feature extractor and a simple 3-layer fully-connected network (FCN) with the structure in Appendix C as classifier. To make subsequent experiments on strong models, following [2], we use an additional balanced training set (500 samples in each class) to fine-tune the BERT model, which is randomly selected from the remaining examples in each dataset except the imbalanced training set, meta set and to-be evaluated test set. Based on the fine-tuned BERT, we adopt the two-stage manner for the imbalanced text datasets, where we train the BERT + FCN in the first stage with CE loss and train the FCN with our proposed method by freezing the BERT in the second stage. The settings of the training process are deferred to Appendix C.
Baselines We consider the following methods: (1) vanilla BERT, the vanilla pretrained language model. (2) Fine-tuned BERT , where the pretrained BERT is fine-tuned on an additional balanced training set. (3) Fine-tuned BERT + CE, the fine-tuned BERT model followed by the FCN which
is further trained by the CE loss on the imbalanced training set following [1, 2]. (4) Automatic re-weighting methods, including the method of Hu et al. [1] and constraint-based re-weighting [2]. Since few works consider imbalanced text classification, we further consider (5) Empirical reweighting methods, including re-weighting with inverse class frequency (i.e., Proportion) [11, 14]) and LDAM-DRW [17] and (6) Logit adjustment [20] using their official codes and settings1 2. We repeat all experiments 10 times and report the mean and standard deviation.
Experimental details and results on SST-2 and SST-5 We report the text classification results of compared methods under different imbalance factors in Table 5. We find that our proposed method outperforms all competing methods in all imbalance factor settings, which demonstrates the effectiveness of our proposed method. Although all methods could achieve acceptable performance in a slight imbalance, the performance of three baselines (Vanilla BERT, Fine-Tuned BERT and FineTuned BERT+CE) drop dramatically, indicating the importance of proposing specialized methods for handling imbalanced training datasets. Logit adjustment (post-hoc correction), is very competitive to ours on SST-2, which, however, only produces similar results to the three above-mentioned baselines on SST-5. In contrast, ours is robust to not only the imbalance factors but also the number of classes, where the results are consistent with the image case. We provide more results in Appendix C.4. In addition to 1D text and 2D image, we further investigate the robustness of our method on 3D point cloud data, where we use the popular ModelNet10 [45] and defer the experiments to Appendix D.
7 Conclusion
This paper introduces a novel automatic re-weighting method for imbalance classification based on optimal transport (OT). This method presents the imbalanced training set as a to-be-learned distribution over its training examples, each of which is associated with a probability weight. Similarly, our method views another balanced meta set as a balanced distribution over the examples. By minimizing the OT distance between the two distributions in terms of the defined cost function, the learning of weight vector is formulated as a distribution approximation problem. Our proposed re-weighting method bypasses the commonly-used classification loss on the meta set and uses OT to learn the weights, disengaging the dependence of the weight learning on the concerned classifier at each iteration. This is an approach different from most of the existing re-weighting methods and may provide new thoughts for future work. Experimental results on a variety of imbalanced datasets of both images and texts validate the effectiveness and flexibility of our proposed method.
Acknowledgements. This work is partially supported by a grant from the Shenzhen Science and Technology Program (JCYJ20210324120011032) and Shenzhen Institute of Artificial Intelligence and Robotics for Society.
1https://github.com/kaidic/LDAM-DRW 2https://github.com/google-research/google-research/tree/master/logit_adjustment | 1. What is the main contribution of the paper regarding imbalance learning?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. Do you have any concerns or questions regarding the method's limitations, applications, and comparisons with other approaches?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors propose a new sampling method to cope with imbalance learning, which is based on optimal transport. The advantages of the method are illustrated on several different types of datasets. Unlike the traditional reweight approach, the authors look at the problem from the perspective of distribution matching, which leverages the OT distance between the distributions to guide the learning of weight vector or weight net.
Strengths And Weaknesses
It is a very interesting idea to reweight the training samples based on OT in the training phase; this approach is therefore more flexible than the previous OT-based approaches to long-tailed problems. Whereas previous approaches have involved adjustments to the model's results in the post-processing stage, the authors go further and apply OT to the training stage and propose a corresponding solution. The authors' experiments are well developed and they validate the superiority of their approach on multiple data sets.
In addition to the above mentioned points, I have some concerns as follows:
The technique the authors present for reweighting with OT has similar counterparts in many domains. For example, the usage of OT to address domain adaptation under conditional domain matching and label shift is proposed in [1]. Similarly, reweighting of samples using OT is described in [1]. A similar approach is suggested in [2] and [3]. I hope the authors can explain the difference between them.
The authors do not seem to be comparing their experiments with the most advanced methods, and as far as I can tell, it seems that many recently published methods are better than the results in the authors' experiments. I wish the authors had compared their experiments with more methods than the ones in the paper.
Also, it seems that the author's method can only be applied to classification. I wonder if the author's method can cope with imbalance in object detection, or object segmentation, and also if the method can be extended to imbalance regression? The authors should explain more about the limitations of this work.
From the results on CIFAR and ImageNet, some of the authors' results seem to have very weak boosts, for example, almost less than 1%. The authors' method seems to be limited in terms of experimental effectiveness, and I would like to see the authors take multiple experiments and average them, and report the variance of the experiments. This is what the NeurIPS conference promotes the need to report. Of course, I note that the authors state that these values are reported for text classification, but I think the authors should report these values for all experiments.
In Table 2, the author has conducted comparative experiments on CIFAR-100 to compare which cost matrix is better, but some of the results do not appear to be very different from each other, so it is difficult to derive any significant differences between these different methods in terms of the results.
The writing of the paper also needs further improvement, some issues are as follows:
The description of OT is very inadequate, which is very difficult to understand for many people from outside this field. I also hope that the authors will explain how Sinkhorn can be used to solve OT problems.
The author should present Related Work after Introduction to make it easier for others to learn about related work in this field.
There is some confusion in the author's notation format, e.g. for scalars, vectors, sets, parameters, etc. The author should have followed some notational conventions. It will make the text look more rigorous.
In line 313, "... indicates the effectiveness" should be "indicate the effectiveness".
[1] Optimal Transport for Conditional Domain Matching and Label Shift.
[2] Multi-source Domain Adaptation via Weighted Joint Distributions Optimal Transport.
[3] Reweighting samples under covariate shift using a Wasserstein distance criterion.
Questions
See above.
Limitations
See above. |
NIPS | Title
Fine-Grained Zero-Shot Learning with DNA as Side Information
Abstract
Fine-grained zero-shot learning task requires some form of side-information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near perfect accuracy in species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset we show that DNA can be equally promising, yet in general a more accessible alternative than word vectors as a side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species we show that the Bayesian approach outperforms state-of-the-art by a wide margin.
1 Introduction
Fine-grained species classification is essential in monitoring biodiversity. Diversity of life is the central tenet to biology and preserving biodiversity is key to a more sustainable life. Monitoring biodiversity requires identifying living organisms at the lowest taxonomic level possible. The traditional approach to identification uses published morphological dichotomous keys to identify the collected sample. This identification involves a tedious process of manually assessing the presence or absence of a long list of morphological traits arranged at hierarchical levels. The analysis is often performed in a laboratory setting by a well-trained human taxonomist and is difficult to do at scale. Fortunately, advances in technology have addressed this challenge to some extent through the use of
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
DNA barcodes. DNA barcoding is a technique that uses a short section of DNA from a specific gene, such as cytochrome C oxidase I (COI), found in mitochondrial DNA, and offers specific information about speciation in living organisms and can achieve nearly perfect classification accuracy at the species level (26; 17).
As it is costly to obtain the label information for fine-grained image classification of species, Zero-Shot Learning (ZSL) that handles missing label information is a suitable task. In ZSL, side information is used to associate seen and unseen classes. Popular choices for side-information are manually annotated attributes (21; 12), word embeddings (41; 14; 27) derived from free-form text or the WordNet hierarchy (28; 2). It is often assumed that an exhaustive list of visual attributes characterizing all object classes (both seen and unseen) can be determined based only on seen classes. However, taking insects as our object classes, if no seen class species have antennae, the attribute list may not contain antenna, which may in fact be necessary to distinguish unseen species. In the United States alone, more than 40% of all insect species (>70,000) remain undescribed (42), which is a clear sign of the limitations of existing identification techniques that rely on visual attributes. Similarly, free-form text is unlikely to contain sufficiently descriptive information about fine-grained objects to generate discriminative vector embeddings. For example, tiger beetle is a class in the ImageNet dataset. However, the tiger beetle group itself contains thousands of known species and the Wikipedia pages for these species either do not exist or are limited to short text that does not necessarily contain any information about species’ morphological characteristics. WordNet hierarchy may not be useful either as most of the species names do not exist in WordNet.
Given that DNA information can be readily available for training (35; 36), species-level DNA information can be used as highly specific side information to replace high-level semantic information in ZSL. For seen classes, species-level DNA information can be obtained by finding the consensus nucleotide sequence among samples of a given species or by averaging corresponding sequence embeddings of samples. For unseen classes, species-level DNA information can be obtained from actual samples, if available, in the same way as seen classes, or can be simulated in a non-trivial way to represent potentially existing species.
Our approach uses DNA as side information for the first time for zero-shot classification of species. In fine-grained, large-scale species classification, no other side information can explain class dichotomy better than DNA, as new species are explicitly defined based on variations in DNA. The hierarchical Bayesian model leverages the implicit inter-species association of DNA and phenotypic traits and ultimately allows us to establish a Bayesian hierarchy based on DNA similarity between unseen and seen classes. We compare DNA against word representations for assessing class similarity and show that the Bayesian model that uses DNA to identify similar classes achieves favorable results compared to the version that uses word representations on a well-known ZSL benchmark species dataset involving slightly less than 200 bird species. In the particular case of an insect dataset with over 1000 species, when visual attributes or word representations may not offer feasible alternatives, we show that our hierarchical model that relies on DNA to establish class hierarchy significantly outperforms all other embedding-based methods and feature generating networks.
Our contributions are on three fronts. First, we introduce DNA as side information for fine-grained ZSL tasks, implement a Convolutional Neural Net (CNN) model to learn DNA barcode embeddings, and show that the embeddings are robust and highly specific for closed-set classification of species, even when training and test sets of species are mutually exclusive. We use the benchmark CUB dataset as a case study to show that DNA embeddings are competitive to word embeddings as side information. Second, we propose a fine-grained insect dataset involving 21, 212 matching image/DNA pairs from 578 genera and 1, 213 species as a new benchmark dataset and discuss the limitations of current ZSL paradigms for fine-grained ZSL tasks when there is no strong association between side information and image features. Third, we perform extensive studies to show that a simple hierarchical Bayesian model that uses DNA as side information outperforms state-of-the-art ZSL techniques on the newly introduced insect dataset by a wide margin.
2 Related Work
Zero-Shot Learning. Early ZSL literature is dominated by methods that embed image features into a semantic space and perform various forms of nearest neighbor search to do inference (14; 41; 1). As the dimensionality of semantic space is usually much smaller than the feature space this leads to the
hubness problem where some classes become hub and occur as the nearest neighbor of many samples. In an effort to alleviate the hubness problem, (50; 40) changed the direction of the embedding from semantic to image feature space. This was followed by a line of work that investigates bidirectional embedding between semantic and image spaces through a latent space (51; 43; 2; 32; 38).
In (25; 15), a new strategy of synthesizing features for unseen classes and converting the challenging ZSL problem into traditional supervised learning is introduced (23; 44; 9; 13; 48; 52; 29; 39; 4). Although feature generating networks (FGNs) currently achieve state-of-the-art results in ZSL, they suffer from the same problem as earlier lines of work in ZSL: hypersensitivity towards side information not strongly correlated with visual attributes. The vulnerability of both embedding and FGN-based methods toward sources of side information different than visual attributes, such as word vectors or WordNet hierarchy, is investigated in (2; 39; 44). Another limitation of FGNs is that features generated for unseen classes are significantly less dispersed than actual features due to the generator failing to span more than a small subset of modes available in the data. Recent deep generative models mitigate this problem by proposing different loss functions that can better explore inter-sample and inter-class relationships (3; 7; 8; 19; 45). However, these methods fail to scale well with an increasing number of classes with an especially high inter-class similarity (24).
Side Information in ZSL. Side information serves as the backbone of ZSL as it bridges the knowledge gap between seen and unseen classes. Earlier lines of work (22; 1) use visual attributes to characterize object classes. Although visual attributes achieve compelling results, obtaining them involves a laborious process that requires manual annotation by human experts not scalable to data sets with a large number of fine-grained object classes. When dealing with fine-grained species classification, apart from scalability, a more pressing obstacle is how to define subtle attributes potentially characteristic of species that have never been observed.
As an alternative to manual annotation, several studies (11; 14; 2; 46; 34; 5) proposed to learn side information that requires less effort and minimal expert labor such as textual descriptions, distributed text representations, like Word2Vec (27) and GloVe (33), learned from large unsupervised text corpora, taxonomical order built from a pre-defined ontology like WordNet (28), or even human gaze reaction to images (20). The accessibility, however, comes at the cost of performance degradation (2; 39). A majority of ZSL methods implicitly assume strong correlation between side information and image features, which is true for handcrafted attributes but less likely to be true for text representations or taxonomic orders. Consequently, all these methods experience significant decline in performance when side information is not based on visual attributes.
3 Barcode of Life Data and DNA Embeddings
In this study, we present the fine-grained INSECT dataset with 21, 212 matching image/DNA pairs from 1, 213 species (see Fig. 1 for sample images). Unlike existing benchmark ZSL datasets, this new
dataset uses DNA as side information1 and can be best characterized with the high degree of similarity among classes. Among the existing benchmark datasets, SUN contains the largest number of classes (717) but classes in SUN represent a wide range of scene categories related to transportation, indoor and outdoors, nature, underwater etc., and as such can be considered a relatively coarse-grained dataset compared to the INSECT dataset we are introducing in this study.
All insect images and associated DNA barcodes in our dataset come from the Barcode of Life Data System (BOLD) (35; 36). BOLD is an open-access database in which users can upload DNA sequences and other identifying information for any living organism on Earth. The database provides approximately 658 base pairs of the mitochondrial DNA barcode extracted from the cytochrome c oxidase I (COI) gene along with additional information such as country of origin, life-stage, order, family, subfamily, and genus/species names.
Data Collection. We collected image/DNA pairs of insects that originate from three orders: Diptera (true flies), Coleoptera (beetles) and Hymenoptera (sawflies, wasps, bees, and ants). While the dataset is in general clean, manual effort was devoted to further curate the dataset. Only cases with images and matching DNA barcodes of adult insects are included. Images from each species were visually inspected and poor quality images were deleted. Only species with more than ten instances were included. The final dataset consisted of 21, 212 images and 1, 213 insects species of which 254 belong to Diptera (133 genera), 564 to Coleoptera (315 genera) and 395 to Hymenoptera (130 genera). We extracted image features, namely image embeddings, using a pre-trained (on ImageNet 1000 classes) ResNet101 model (16). Images are resized to 256× 256 and center-cropped before fed to the ResNet model. No other pre-processing is applied to the images.
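A minimal sketch of the image-feature extraction described above is given below, assuming a torchvision ResNet101 backbone; the 224-pixel crop size and ImageNet normalization constants are our assumptions, since the text only specifies the 256×256 resize and center crop.

```python
import torch
from torchvision import models, transforms

# Illustrative feature-extraction pipeline; crop size and normalization assumed.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet101(pretrained=True)
backbone.fc = torch.nn.Identity()   # drop the classifier head, keep 2048-d features
backbone.eval()

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch)           # [batch, 2048] image embeddings
```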
Data Split. We randomly chose 10% of all species as unseen classes for the test set leading to 1, 092 seen and 121 unseen classes. Similarly, we randomly chose 10% of the 1, 092 training classes as unseen classes for the validation set. Samples from seen classes were split by a 80/20 ratio in a stratified fashion to create seen portion of the train and test datasets. In the dataset there were a few hundred cases where multiple image views (dorsal, ventral, and lateral) of the same insect were present. To avoid splitting these cases between train and test, we made sure all instances of the same insect are included in the
training set. As a result, 12 of the 1, 092 seen classes in the training set were not represented in the test set. Our dataset splits are summarized in Table 1.
DNA Embeddings. Although this is the first time DNA barcodes are used as side information in the ZSL domain, there has been some work investigating vector embeddings for DNA sequences. The authors of (31) trained a CNN model for binary DNA sequence classification by treating sequences as text data. Imitating the amino acid structure, each triplet of base pairs is treated as a word and sequences are converted into one-hot vector representations. Building on (27), (30) trained a shallow neural network on human genome data to generate representations for k-mers. Unlike these techniques, we deal with DNA barcodes represented by nucleotide sequences and aim to convert the entire character sequence into a vector embedding useful for species classification with more than 1,000 classes.
1Please refer to supplementary material for discussion on limitations of using DNA as side information
Most recently, DNABERT (18) adapted the powerful text transformer model (10) to a genomic DNA setting and generated vector embeddings for long DNA sequences.
In this paper, we trained a CNN model to learn a vector representation of DNA barcodes in the Euclidean space. First, the 658 bp consensus sequence of all DNA barcodes in the training set is obtained. Then, all sequences are aligned with respect to this consensus sequence using a progressive alignment technique implemented in MATLAB R2020A (Natick, MA, USA). A total of five tokens are used, one for each of the four bases, Adenine, Guanine, Cytosine and Thymine, and one for others. All ambiguous and missing symbols are included in the others token. In pre-processing, barcodes are one-hot encoded into a 658×5 2D array, where 658 is the length of the barcode sequence (the median nucleotide length of the DNA data).
To train the CNN model, a balanced subset of the training data is subsampled, where each class size is capped at 50 samples. The CNN is trained with 14, 723 barcodes from 1, 092 classes. No barcodes from the 121 unseen classes are employed during model training. The training set is further split into two as train (80%) and validation (20%) by random sampling. We used 3 blocks of convolutional layers each followed by batch normalization and 2D max-pooling. The output of the third convolutional layer is flattened and batch normalized before feeding the data into a fully-connected layer with 500 units. The CNN architecture is completed by a softmax layer. We used the output of the fully-connected layer as the embeddings for DNA. Class level attributes are computed by the mean embedding of each class. The DNA-based attribute extraction is illustrated in Figure 2. The details of the model architecture is depicted in Figure 3 in Supplementary material. We used ADAM optimizer for training the model for five epochs with a batch size of 32 (with a step-decay initial learning rate = 0.0005 and drop factor= 0.5, β1 = 0.9, β2 = 0.999). The model is developed in Python with Tensorflow-Keras API.
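For concreteness, a minimal Keras sketch of the barcode CNN described above is shown below. The three convolutional blocks, 500-unit embedding layer, softmax head, and Adam learning rate follow the text; the filter counts, kernel sizes, and activations are illustrative assumptions, as are all function and variable names.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_barcode_cnn(seq_len=658, n_tokens=5, n_classes=1092, embed_dim=500):
    """Sketch of the 3-block CNN over one-hot DNA barcodes; the penultimate
    500-unit layer is used as the DNA embedding after training."""
    inputs = layers.Input(shape=(seq_len, n_tokens, 1))
    x = inputs
    for filters in (64, 32, 16):                      # assumed filter counts
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=(2, 1))(x)  # pool along the sequence axis
    x = layers.Flatten()(x)
    x = layers.BatchNormalization()(x)
    embedding = layers.Dense(embed_dim, activation="relu", name="dna_embedding")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(embedding)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(5e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```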
Predictive accuracy of DNA embeddings. Although the insect barcodes we used are extracted from a single gene (COI) of the mitochondrial DNA with a relatively short sequence length of 658 base pairs, they are proven to have exceptional predictive accuracy; the CNN model achieves a 99.1% accuracy on the held-out validation set. Note that, we only used the data from training seen classes to train the CNN model. In order to validate the generalizability of embeddings to unseen data, we trained a simple K-Nearest Neighbor classifier (K = 1) on the randomly sampled 80% of the DNA-embeddings of unseen classes and tested on the remaining 20%. The classifier had a perfect accuracy for all 121 but one classes with an overall accuracy of 99.8%.
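The 1-NN generalization check described above can be reproduced with a few lines of scikit-learn; the 80/20 split follows the text, while the function name and random seed are illustrative.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def knn_embedding_check(unseen_embeddings, unseen_labels, seed=0):
    """1-NN accuracy on held-out DNA embeddings of unseen classes."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        unseen_embeddings, unseen_labels, test_size=0.2,
        stratify=unseen_labels, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
    return knn.score(X_te, y_te)
```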
In addition to our CNN model we have explored a DNABERT (18) model for converting DNA barcodes to vector embeddings. The pretrained DNABERT model achieves around 85% (vs 99% from CNN) top-1 KNN accuracy (averaged over 10 runs) on the unseen classes. Pretrained DNABERT can be fine-tuned for species classification however because of the vast number of parameters to tune each run takes a few hours on a relatively sophisticated GPU, significantly more than CNN training. Similarly, a simple LSTM model with half of the parameters as the CNN model is almost 5 times slower than the CNN model and requires more epochs to reach a reasonable accuracy. Therefore, we use a simple 3-layer CNN that trains in an hour and achieves almost perfect top-1 KNN accuracy.
To demonstrate that the approach can be easily extended to larger members of the animal kingdom, we compiled approximately 26, 000 DNA barcodes from 1, 047 bird species to train another CNN model (ceteris paribus) to learn the DNA embeddings for CUB dataset (see the Supp. materials for details). The CNN model achieved a compelling 95.60% on the held-out validation set.
4 Bayesian Zero-shot Learning
Object classes in nature already tend to emerge at varying levels of abstraction, but the class hierarchy is more evident when classes represent species and species are considered the lowest taxonomic rank of living organisms. We build our approach on a two layer hierarchical Bayesian model that was previously introduced and evaluated on benchmark ZSL datasets with promising results (6). The model assumes that there are latent classes that define the class hierarchy in the image space and uses side information to build the Bayesian hierarchy around these latent classes. Two types of Bayesian priors are utilized in the model: global and local. As the name suggests, global priors are shared across all classes, whereas local priors represent latent classes, and are only shared among similar classes. Class similarity is evaluated based on side information in the Euclidean space. Unlike
standard Bayesian models where the posterior predictive distribution (PPD) forms a compromise between prior and likelihood, this approach utilizes posterior predictive distributions to blend local and global priors with data likelihood for each class. Inference for a test image is performed by evaluating posterior predictive distributions and assigning the sample to the class that maximizes the posterior predictive likelihood.
Generative Model. The two-layer generative model is given below,
$$x_{jik} \sim N(\mu_{ji}, \Sigma_j), \quad \mu_{ji} \sim N(\mu_j, \Sigma_j \kappa_1^{-1}), \quad \mu_j \sim N(\mu_0, \Sigma_j \kappa_0^{-1}), \quad \Sigma_j \sim W^{-1}(\Sigma_0, m) \quad (1)$$
where j, i, k represent indices for local priors, classes, and image instances, respectively. We assume that image feature vectors xjik come from a Gaussian distribution with mean µji and covariance matrix Σj , and are generated independently conditioned not only on the global prior but also on their corresponding local priors.
Each local prior is characterized by the parameters $\mu_j$ and $\Sigma_j$. $\mu_0$ is the mean of the Gaussian prior defined over the mean vectors of local priors, and $\kappa_0$ is a scaling constant that adjusts the dispersion of the means of local priors around $\mu_0$. A smaller value of $\kappa_0$ suggests that the means of the local priors are expected to be farther apart from each other, whereas a larger value suggests they are expected to be closer. On the other hand, $\Sigma_0$ and $m$ dictate the expected shape of the class distributions, as under the inverse Wishart distribution assumption the expected covariance is $E(\Sigma|\Sigma_0, m) = \frac{\Sigma_0}{m - D - 1}$, where $D$ is the dimensionality of the image feature space. The minimum feasible value of $m$ is equal to $D + 2$, and the larger $m$ is, the less the individual covariance matrices will deviate from the expected shape. The hyperparameter $\kappa_1$ is a scaling constant that adjusts the dispersion of the class means around the centers of their corresponding local priors. A larger $\kappa_1$ leads to smaller variations in class means relative to the mean of their corresponding local prior, suggesting a fine-grained relationship among classes sharing the same local prior. Conversely, a smaller $\kappa_1$ dictates coarse-grained relationships among classes sharing the same local prior. To preserve conjugacy of the model, the proposed model constrains classes sharing the same local prior to share the same covariance matrix $\Sigma_j$. Test examples are classified by evaluating the posterior predictive distributions (PPD) of seen and unseen classes. As illustrated in Fig. 3, the PPD in general incorporates three sources of information: the data likelihood that arises from the current class, the local prior that results from other classes sharing the same local prior as the current class, and the global prior defined in terms of hyperparameters. PPDs for seen classes include the global prior and data likelihood and are derived in the form of a Student-t distribution, whereas for unseen classes the data likelihood does not exist as no image samples are available for these classes. We leave the details of the derivations to the supplementary material.
Surrogate classes. According to the generative model in (1), groupings among classes are determined based on local priors. Thus, once estimated from seen classes, local priors can be used to define surrogate classes for unseen ones during inference. Associating each unseen class with a unique local prior forms the basis of our approach. The local prior for each unseen class is defined by finding the K seen classes most similar to that unseen class. The similarity is evaluated by computing the L2 (Euclidean) distance between class-level attribute or embedding vectors (φ) obtained from the side information available. Once a local prior is defined for each unseen class the PPD for the corresponding surrogate class can be derived in terms of only global and local priors as in equation (2). Test examples are classified based on class-conditional likelihoods evaluated for both seen and surrogate classes.
$$P(x\,|\,\{\bar{x}_{ji}, S_{ji}\}_{i:\,t_i=j}, \mu_0, \kappa_0, \kappa_1) = T(x\,|\,\bar{\mu}_j, \bar{\Sigma}_j, \bar{v}_j); \quad \bar{\mu}_j = \frac{\sum_{i:\,t_i=j} \frac{n_{ji}\kappa_1}{n_{ji}+\kappa_1}\,\bar{x}_{ji} + \kappa_0\mu_0}{\sum_{i:\,t_i=j} \frac{n_{ji}\kappa_1}{n_{ji}+\kappa_1} + \kappa_0},$$
$$\bar{v}_j = \sum_{i:\,t_i=j}(n_{ji}-1) + m - D + 1, \quad \bar{\Sigma}_j = \frac{\left(\Sigma_0 + \sum_{i:\,t_i=j} S_{ji}\right)(\tilde{\kappa}_j + 1)}{\tilde{\kappa}_j\,\bar{v}_j} \quad (2)$$
where $\bar{x}_{ji}$, $S_{ji}$ and $n_{ji}$ represent the sample mean, scatter matrix and size of class $i$ associated with local prior $j$, respectively, and $\tilde{\kappa}_j$ is defined as in Eq. (30) in the supplementary material².
Rationale for the hierarchical Bayesian approach and limitations. We believe that the hierarchical Bayesian model is ideally suited for fine-grained zero-shot classification of species when DNA is used as side information for the following reasons. The performance of the model in identifying unseen classes depends on how robust the local priors can be estimated. This in turn depends on whether or not the set of seen classes contain any classes similar to unseen ones. As the number of seen classes increases, seen classes become more representative of their local priors, more robust estimates of local priors can be obtained, and thus, unseen classes sharing the same local priors as seen classes can be more accurately identified. On the other hand, if the class-level side information is not specific enough to uniquely characterize a large number of classes, then the model cannot evaluate class similarity accurately and local priors are estimated based on potentially incorrect association between seen and unseen classes. In this case having a large number of seen classes available may not necessarily help. Instead, highly specific DNA as side information comes into play for accurately evaluating class similarity. If a unique local prior can be eventually described for each unseen class, then unseen classes can be classified during test time without the model having to learn the mapping between side information and image features beforehand. Uniqueness of the local prior can only be ensured when the number of seen classes is large compared to the number of unseen classes. Thus, the ratio of the number of seen and unseen classes becomes the ultimate determinant of performance for the hierarchical Bayesian model. The higher this ratio is the higher the accuracy of the model will be. An experiment demonstrating this effect is performed in Section 5.3.
If the same set of K classes is found to be the most similar for two different unseen classes, then these two unseen classes will inherit the same local prior and thus they will not be statistically identifiable during test time. The likelihood of such a tie happening for fine-grained data sets quickly decreases as the number of classes increases. In practice we deal with this problem by replacing the least similar of the K most similar seen classes by the next most similar seen class for one of the unseen classes.
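The selection of the K most similar seen classes, including the tie-breaking rule just described, can be written compactly. The sketch below is illustrative and uses our own names; it assumes class-level side-information vectors (e.g., DNA embeddings) are stacked row-wise as NumPy arrays.

```python
import numpy as np
from scipy.spatial.distance import cdist

def assign_local_priors(unseen_phi, seen_phi, K):
    """For each unseen class, pick the K closest seen classes by L2 distance on
    class-level side-information vectors; if two unseen classes would receive the
    identical set, swap in the next most similar seen class for one of them."""
    dists = cdist(unseen_phi, seen_phi)            # (num_unseen, num_seen)
    order = np.argsort(dists, axis=1)              # seen classes sorted by similarity
    priors, used = [], set()
    for row in order:
        k = K
        support = frozenset(row[:K])
        while support in used:                     # identical support already taken
            support = frozenset(list(row[:K - 1]) + [row[k]])
            k += 1
        used.add(support)
        priors.append(sorted(support))
    return priors
```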
5 Experiments
In this section we report results of experiments with two species datasets that use DNA as side information. Details of training and hyperparameter tuning are provided in the supplementary material along with the source code of our methods.
5.1 Experiments with the INSECT dataset
We compare our model (BZSL) against state-of-the-art (SotA) ZSL methods shown to be the most competitive on benchmark ZSL datasets that use visual attributes or word vector representations as side information. The selected SotA models represent various ZSL categories: (1) embedding methods with traditional (1; 37) and end-to-end neural network (49) approaches, (2) FGNs using a VAE (39) and a GAN (44), and (3) an end-to-end few-shot learning approach extended to ZSL (43). Table 2 displays seen (S) and unseen (US) accuracies and their harmonic mean (H) on the
INSECT data using DNA as the side information. Results suggest that the large number of seen classes, along with the highly specific nature of DNA information in characterizing classes, particularly favors the Bayesian method, allowing it to more accurately estimate local priors and characterize surrogate classes. The harmonic mean achieved by the proposed method is 52% higher than that achieved by the second best performing technique. Similar levels of improvement are maintained on both seen and unseen class accuracies. The next top performers are FGNs. CADA-VAE uses a VAE whereas LsrGan utilizes a GAN to synthesize unseen class features; both then train a LogSoftmax classifier for inference. Lower unseen class accuracies suggest that FGNs struggle to synthesize meaningful features in the image space. On the other hand, CRNet, which uses an end-to-end neural network to learn the embedding between semantic and image spaces, performs slightly worse than FGNs. Non-linear embeddings also appear to work better than linear (ESZSL) and bilinear (ALE) ones on this specific dataset. RelationNet is among the methods with the lowest performance, as it is explicitly designed for few-shot learning and expects the side information to be strongly correlated with image features. The weak association between side information and image features affects the performance of both FGNs and embedding methods, but the traditional embedding methods suffer the most.
² The code and dataset are available at https://github.com/sbadirli/Fine-Grained-ZSL-with-DNA
5.2 Experiments with the benchmark CUB dataset
To demonstrate the utility of DNA-based attributes in a broader spectrum of species classification, we procured DNA barcodes, again from the BOLD system, for the bird species in the CUB dataset. For this experiment, we derived 400-dimensional embeddings to match the size of the word vectors and eliminate any effect of attribute size. There were 6 classes, 4 seen and 2 unseen, that did not have DNA barcodes extracted from the COI gene in the BOLD system. These classes were excluded from the dataset, but the proposed split from (47) is otherwise preserved.
The results shown in Table 3 validate our hypothesis that when side information is not strongly correlated with the visual characteristics of object classes (as with word vectors or DNA), both embedding methods and FGNs display significant performance degradation. With the exception of the proposed Bayesian model, word vector representations yield better accuracy than DNA-based attributes for all models. This phenomenon can be explained by our observation that text fragments related to common animals/birds on Wikipedia and the Internet often include some morphological traits of the underlying species. Hence, word vector representations are expected to have a higher degree of correlation with visual attributes than DNA information. Our model produces the best results, 34.97% vs. 32.45%, when the side information is not derived from the visual characteristics of classes. This outcome validates the robustness of the Bayesian model to diverse sources of side information and emphasizes the need for more robust FGN- or embedding-based models in more realistic scenarios where hand-crafted visual attributes are not feasible.
5.3 The effect of the number of seen classes on performance
Local priors are central to the performance of the hierarchical Bayesian model. Here, we perform experiments to show that as the number of seen classes increases while the number of unseen classes is fixed, each unseen class can be associated with a larger pool of candidate seen classes and more informative local priors can potentially be obtained, which in turn leads to more accurate identification of unseen classes. To demonstrate this effect we run two experiments. In the first experiment we use the same set of unseen classes as in Section 5.1 but gradually increase the number of seen classes used for training. In the second experiment we double the number of unseen classes and gradually include the remaining classes in training as seen classes. The first experiment is also performed for CADA-VAE; LsrGan is skipped for this experiment due to its long training time. To account for random subsampling of seen classes, each experiment is repeated five times and error bars are included in each plot. There is a clear trend in these results that further highlights the intuition behind the hierarchical Bayesian model and explains why this model is well-suited for fine-grained ZSL.
[Figure 4: The effect of the number of seen classes on the performance of BZSL and CADA-VAE; each experiment is repeated five times to account for random subsampling of seen classes. All panels plot accuracy (harmonic mean H, seen accuracy, unseen accuracy) against the percentage of seen classes used for training. (a) BZSL in the original setup (1,092 seen training classes, 121 unseen classes); (b) BZSL with 983 seen training classes and 230 unseen classes; (c) CADA-VAE in the original setup (1,092 seen training classes, 121 unseen classes).]
When 10% of the classes are used as unseen, unseen class accuracy improves with an increasing number of seen classes until it flatlines beyond the 60% mark, while seen class accuracy always stays around the same level (see Fig. 4a). When 20% of the classes are used as unseen, no flatlining effect in unseen class accuracy is observed even at the 100% mark, which suggests that there is still room for improvement in unseen class accuracy if more seen classes become available (see Fig. 4b). For CADA-VAE, unseen class accuracy initially improves and then flatlines beyond the 80% mark, but this improvement comes at the expense of significant degradation in seen class accuracy, which suggests that as the number of seen classes increases, the generated features further confound the classifier, as would be expected of an FGN on a fine-grained dataset.
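The ablation in Fig. 4 amounts to the loop sketched below; train_and_evaluate is a placeholder for fitting BZSL (or CADA-VAE) on the subsampled seen classes and computing seen accuracy, unseen accuracy, and their harmonic mean, not a function from the released code.

```python
import numpy as np

def seen_class_ablation(seen_classes, fractions=(0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0),
                        repeats=5, seed=0):
    """Randomly subsample the seen classes at several fractions and record the
    mean and std of (H, seen acc., unseen acc.) over repeated runs."""
    rng = np.random.default_rng(seed)
    results = {}
    for f in fractions:
        n = max(1, int(round(f * len(seen_classes))))
        scores = []
        for _ in range(repeats):
            subset = rng.choice(seen_classes, size=n, replace=False)
            scores.append(train_and_evaluate(subset))   # hypothetical helper
        results[f] = (np.mean(scores, axis=0), np.std(scores, axis=0))
    return results
```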
6 Conclusions
For the first time in the ZSL literature, we use DNA as side information and demonstrate its utility in evaluating class similarity for the purpose of identifying unseen classes in a fine-grained ZSL setting. On the CUB dataset, despite being trained with fewer than 30,000 very short sequences, we find DNA embeddings to be highly competitive with word vector representations trained on massive text corpora. We emphasize the importance of DNA as side information in zero-shot classification of highly fine-grained species datasets involving thousands of species, and on the INSECT dataset, show that a simple Bayesian model that readily exploits the inherent class hierarchy with the help of DNA can significantly outperform highly complex models. We show that SotA ZSL methods that take the presence of an explicit association between visual attributes and image features for granted suffer significant performance degradation when non-visual attributes such as word vectors and WordNet are used as side information. The same effect is observed with DNA embeddings as well. Although visual attributes tend to be the best alternative as side information for a coarse-grained species classification task, they quickly lose their appeal with an increasing number of classes. Considering the tens of thousands of described species and the even larger number of undescribed species, DNA seems to be the only feasible source of side information for large-scale, fine-grained zero-shot classification of species.
These favorable results by a simpler model suggest that as the number of classes increases along with inter-class similarity, the complexity of the mapping between side information and image attributes emerges as a major bottleneck at the forefront of zero-shot classification. A promising future research avenue appears to be implementing hierarchically organized FGNs where each subcomponent only operates with a small subset of seen classes all sharing the same local prior.
This work does not present any foreseeable negative societal consequences beyond those already associated with generic machine learning classification algorithms.
Acknowledgements: This research was sponsored by the National Science Foundation (NSF) under Grant Number IIS-1252648 (CAREER). This work has been partially funded by the ERC (853489 - DEXIM) and by the DFG (2064/1 – Project number 390727645). GM acknowledges support from NSF-ATD grant 2124313. The content is solely the responsibility of the authors and does not necessarily represent the official views of NSF. | 1. What is the focus and contribution of the paper regarding zero-shot learning?
2. What are the strengths of the proposed approach, particularly in utilizing DNA as side information?
3. Do you have any concerns about the representation of DNA data, specifically the choice of using a CNN instead of other methods?
4. How does the reviewer assess the comparisons made between the proposed method and other approaches in the experiments?
5. What are the limitations of the proposed method, such as the trade-off between seen and unseen classes, and is there a way to overcome them? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposed to use DNA as side information for the zero-shot learning tasks. A CNN is used to generate DNA embeddings, and a Bayesian model is used for zero-shot learning. Experiments are performed on a newly collected fine-grained insect dataset with image and DNA pairs, as well as on the CUB dataset. Results showed that the Bayesian model outperforms other methods on the insect dataset using DNA as attributes. On the CUB dataset, using DNA is comparable with using word vectors, but pre-defined attributes still work better.
Review
This paper proposed to use DNA for zero-shot learning, unlike prior works using attributes or text descriptions for side information. Results on the collected insect dataset show improvements over other methods. On the CUB dataset, using Bayesian modeling is better for modeling DNA information, but not on visual attributes or word vectors.
In general, it is interesting to see that DNA data is helpful for ZSL, and a simple Bayesian model performs well on the DNA data. The paper is well-written, and all the details of training and datasets are included. Ablation studies on the number of seen classes and hyper-parameters are also presented.
Questions:
Why use CNN (on top of one-hot encoding) instead of RNN or other methods for encoding the DNA data? For the CUB dataset, the DNA embedding is trained on an external dataset of 1047 bird species. Is it a fair comparison between using attributes and word vectors (in Table 3)?
As stated in L. 246-264, the performance of the Bayesian model is highly correlated to the ratio of the number of seen and unseen classes. If there are similar classes in the seen classes, then the model can perform well. Is this limitation the reason why using visual attributes performs better than using DNA on the CUB dataset?
Another limitation of the proposed method is the trade-off between seen and unseen classes with different hyper-parameters (Fig. 5). Is there a way to mitigate this, i.e., having good performance on both seen and unseen classes?
Typo:
Reference [5] is missing
L.10 "of" CUB dataset |
NIPS | Title
Fine-Grained Zero-Shot Learning with DNA as Side Information
Abstract
The fine-grained zero-shot learning task requires some form of side information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near-perfect accuracy in species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset we show that DNA can be an equally promising, yet in general more accessible, alternative to word vectors as side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species we show that the Bayesian approach outperforms the state of the art by a wide margin.
1 Introduction
Fine-grained species classification is essential in monitoring biodiversity. Diversity of life is the central tenet of biology, and preserving biodiversity is key to a more sustainable life. Monitoring biodiversity requires identifying living organisms at the lowest taxonomic level possible. The traditional approach to identification uses published morphological dichotomous keys to identify the collected sample. This identification involves a tedious process of manually assessing the presence or absence of a long list of morphological traits arranged at hierarchical levels. The analysis is often performed in a laboratory setting by a well-trained human taxonomist and is difficult to do at scale. Fortunately, advances in technology have addressed this challenge to some extent through the use of
DNA barcodes. DNA barcoding is a technique that uses a short section of DNA from a specific gene, such as cytochrome C oxidase I (COI), found in mitochondrial DNA, and offers specific information about speciation in living organisms and can achieve nearly perfect classification accuracy at the species level (26; 17).
As it is costly to obtain label information for fine-grained image classification of species, Zero-Shot Learning (ZSL), which handles missing label information, is a suitable setting. In ZSL, side information is used to associate seen and unseen classes. Popular choices for side information are manually annotated attributes (21; 12), word embeddings (41; 14; 27) derived from free-form text, or the WordNet hierarchy (28; 2). It is often assumed that an exhaustive list of visual attributes characterizing all object classes (both seen and unseen) can be determined based only on seen classes. However, taking insects as our object classes, if no seen-class species have antennae, the attribute list may not contain antenna, which may in fact be necessary to distinguish unseen species. In the United States alone, more than 40% of all insect species (>70,000) remain undescribed (42), which is a clear sign of the limitations of existing identification techniques that rely on visual attributes. Similarly, free-form text is unlikely to contain sufficiently descriptive information about fine-grained objects to generate discriminative vector embeddings. For example, tiger beetle is a class in the ImageNet dataset. However, the tiger beetle group itself contains thousands of known species, and the Wikipedia pages for these species either do not exist or are limited to short texts that do not necessarily contain any information about the species’ morphological characteristics. The WordNet hierarchy may not be useful either, as most of the species names do not exist in WordNet.
Given that DNA information can be readily available for training (35; 36), species-level DNA information can be used as highly specific side information to replace high-level semantic information in ZSL. For seen classes, species-level DNA information can be obtained by finding the consensus nucleotide sequence among samples of a given species or by averaging corresponding sequence embeddings of samples. For unseen classes, species-level DNA information can be obtained from actual samples, if available, in the same way as seen classes, or can be simulated in a non-trivial way to represent potentially existing species.
Our approach uses DNA as side information for the first time for zero-shot classification of species. In fine-grained, large-scale species classification, no other side information can explain class dichotomy better than DNA, as new species are explicitly defined based on variations in DNA. The hierarchical Bayesian model leverages the implicit inter-species association of DNA and phenotypic traits and ultimately allows us to establish a Bayesian hierarchy based on DNA similarity between unseen and seen classes. We compare DNA against word representations for assessing class similarity and show that the Bayesian model that uses DNA to identify similar classes achieves favorable results compared to the version that uses word representations on a well-known ZSL benchmark species dataset involving slightly less than 200 bird species. In the particular case of an insect dataset with over 1000 species, when visual attributes or word representations may not offer feasible alternatives, we show that our hierarchical model that relies on DNA to establish class hierarchy significantly outperforms all other embedding-based methods and feature generating networks.
Our contributions are on three fronts. First, we introduce DNA as side information for fine-grained ZSL tasks, implement a Convolutional Neural Net (CNN) model to learn DNA barcode embeddings, and show that the embeddings are robust and highly specific for closed-set classification of species, even when training and test sets of species are mutually exclusive. We use the benchmark CUB dataset as a case study to show that DNA embeddings are competitive to word embeddings as side information. Second, we propose a fine-grained insect dataset involving 21, 212 matching image/DNA pairs from 578 genera and 1, 213 species as a new benchmark dataset and discuss the limitations of current ZSL paradigms for fine-grained ZSL tasks when there is no strong association between side information and image features. Third, we perform extensive studies to show that a simple hierarchical Bayesian model that uses DNA as side information outperforms state-of-the-art ZSL techniques on the newly introduced insect dataset by a wide margin.
2 Related Work
Zero-Shot Learning. Early ZSL literature is dominated by methods that embed image features into a semantic space and perform various forms of nearest neighbor search to do inference (14; 41; 1). As the dimensionality of semantic space is usually much smaller than the feature space this leads to the
hubness problem, where some classes become hubs and occur as the nearest neighbor of many samples. In an effort to alleviate the hubness problem, (50; 40) changed the direction of the embedding from the semantic to the image feature space. This was followed by a line of work that investigates bidirectional embedding between semantic and image spaces through a latent space (51; 43; 2; 32; 38).
In (25; 15), a new strategy of synthesizing features for unseen classes and converting the challenging ZSL problem into traditional supervised learning is introduced (23; 44; 9; 13; 48; 52; 29; 39; 4). Although feature generating networks (FGNs) currently achieve state-of-the-art results in ZSL, they suffer from the same problem as earlier lines of work in ZSL: hypersensitivity towards side information not strongly correlated with visual attributes. The vulnerability of both embedding and FGN-based methods toward sources of side information different than visual attributes, such as word vectors or WordNet hierarchy, is investigated in (2; 39; 44). Another limitation of FGNs is that features generated for unseen classes are significantly less dispersed than actual features due to the generator failing to span more than a small subset of modes available in the data. Recent deep generative models mitigate this problem by proposing different loss functions that can better explore inter-sample and inter-class relationships (3; 7; 8; 19; 45). However, these methods fail to scale well with an increasing number of classes with an especially high inter-class similarity (24).
Side Information in ZSL. Side information serves as the backbone of ZSL as it bridges the knowledge gap between seen and unseen classes. Earlier lines of work (22; 1) use visual attributes to characterize object classes. Although visual attributes achieve compelling results, obtaining them involves a laborious process that requires manual annotation by human experts not scalable to data sets with a large number of fine-grained object classes. When dealing with fine-grained species classification, apart from scalability, a more pressing obstacle is how to define subtle attributes potentially characteristic of species that have never been observed.
As an alternative to manual annotation, several studies (11; 14; 2; 46; 34; 5) proposed to learn side information that requires less effort and minimal expert labor such as textual descriptions, distributed text representations, like Word2Vec (27) and GloVe (33), learned from large unsupervised text corpora, taxonomical order built from a pre-defined ontology like WordNet (28), or even human gaze reaction to images (20). The accessibility, however, comes at the cost of performance degradation (2; 39). A majority of ZSL methods implicitly assume strong correlation between side information and image features, which is true for handcrafted attributes but less likely to be true for text representations or taxonomic orders. Consequently, all these methods experience significant decline in performance when side information is not based on visual attributes.
3 Barcode of Life Data and DNA Embeddings
In this study, we present the fine-grained INSECT dataset with 21, 212 matching image/DNA pairs from 1, 213 species (see Fig. 1 for sample images). Unlike existing benchmark ZSL datasets, this new
dataset uses DNA as side information¹ and is best characterized by the high degree of similarity among classes. Among the existing benchmark ZSL datasets, SUN contains the largest number of classes (717), but the classes in SUN represent a wide range of scene categories related to transportation, indoor and outdoor settings, nature, underwater scenes, etc., and as such SUN can be considered a relatively coarse-grained dataset compared to the INSECT dataset we introduce in this study.
All insect images and associated DNA barcodes in our dataset come from the Barcode of Life Data System (BOLD) (35; 36). BOLD is an open-access database in which users can upload DNA sequences and other identifying information for any living organism on Earth. The database provides approximately 658 base pairs of the mitochondrial DNA barcode extracted from the cytochrome c oxidase I (COI) gene along with additional information such as country of origin, life-stage, order, family, subfamily, and genus/species names.
Data Collection. We collected image/DNA pairs of insects that originate from three orders: Diptera (true flies), Coleoptera (beetles) and Hymenoptera (sawflies, wasps, bees, and ants). While the dataset is in general clean, manual effort was devoted to further curating it. Only cases with images and matching DNA barcodes of adult insects are included. Images from each species were visually inspected and poor-quality images were deleted. Only species with more than ten instances were included. The final dataset consists of 21,212 images and 1,213 insect species, of which 254 belong to Diptera (133 genera), 564 to Coleoptera (315 genera) and 395 to Hymenoptera (130 genera). We extracted image features, namely image embeddings, using a ResNet101 model pre-trained on the 1000 ImageNet classes (16). Images are resized to 256 × 256 and center-cropped before being fed to the ResNet model. No other pre-processing is applied to the images.
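The paper does not state which framework was used to extract the ResNet101 features, so the sketch below is one possible realization using torchvision; the 224 × 224 crop size and the ImageNet normalization are standard defaults we assume rather than details given in the text.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ResNet101 pre-trained on ImageNet-1k, with the classification head removed
backbone = models.resnet101(pretrained=True)
backbone.fc = torch.nn.Identity()          # expose the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize((256, 256)),
    T.CenterCrop(224),                     # assumed crop size (not stated in the paper)
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_embedding(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0).numpy()  # 2048-dimensional image feature vector
```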
Data Split. We randomly chose 10% of all species as unseen classes for the test set, leading to 1,092 seen and 121 unseen classes. Similarly, we randomly chose 10% of the 1,092 training classes as unseen classes for the validation set. Samples from seen classes were split by an 80/20 ratio in a stratified fashion to create the seen portion of the train and test datasets. In the dataset there were a few hundred cases where multiple image views (dorsal, ventral, and lateral) of the same insect were present. To avoid splitting these cases between train and test, we made sure all instances of the same insect were included in the
training set. As a result, 12 of the 1, 092 seen classes in the training set were not represented in the test set. Our dataset splits are summarized in Table 1.
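A minimal sketch of this split, assuming `labels` is the per-image array of species identifiers; the grouping of multiple views of the same insect into the training set is omitted for brevity, and the scikit-learn call is our choice rather than the authors' code.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
species = np.unique(labels)                              # labels: per-image species ids (assumed)
unseen = rng.choice(species, size=int(0.1 * len(species)), replace=False)
seen_mask = ~np.isin(labels, unseen)

# stratified 80/20 split of the seen-class images
tr_idx, te_idx = train_test_split(np.where(seen_mask)[0], test_size=0.2,
                                  stratify=labels[seen_mask], random_state=42)
unseen_idx = np.where(~seen_mask)[0]                     # all unseen-class images go to the test set
```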
DNA Embeddings. Although this is the first time DNA barcodes are used as side information in the ZSL domain, there has been some work investigating vector embeddings for DNA sequences. The authors of (31) trained a CNN model for binary DNA sequence classification, treating sequences as text data. Imitating amino acid structure, each triplet of base pairs is treated as a word and sequences are converted into a one-hot vector representation. Taking (27) as the basis, (30) trained a shallow neural network on human genome data to generate representations for k-mers. Unlike these techniques, we deal with DNA barcodes represented by nucleotide sequences and aim to convert the entire character sequence into a vector embedding useful for species classification with more than 1,000 classes.
1Please refer to supplementary material for discussion on limitations of using DNA as side information
Most recently, DNABERT (18) adapted the powerful text transformer model (10) to a genomic DNA setting and generated vector embeddings for long DNA sequences.
In this paper, we trained a CNN model to learn a vector representation of DNA barcodes in the Euclidean space. First, the consensus sequence (658 bp) of all DNA barcodes in the training set is obtained. Then, all sequences are aligned with respect to this consensus sequence using a progressive alignment technique implemented in MATLAB R2020A (Natick, MA, USA). A total of five tokens are used: one for each of the four bases (adenine, guanine, cytosine, thymine) and one for all others. All ambiguous and missing symbols are mapped to the "others" token. In pre-processing, barcodes are one-hot encoded into a 658×5 2D array, where 658 is the length of the barcode sequence (the median nucleotide length of the DNA data).
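This pre-processing step can be illustrated as follows; the sketch assumes the alignment (done in MATLAB in the paper) has already produced fixed-length sequences, and the padding symbol is our choice.

```python
import numpy as np

TOKENS = {"A": 0, "C": 1, "G": 2, "T": 3}    # anything else falls into the "others" slot (index 4)
SEQ_LEN = 658

def one_hot_barcode(seq):
    """Encode one aligned DNA barcode as a 658x5 one-hot array."""
    seq = seq.upper()[:SEQ_LEN].ljust(SEQ_LEN, "N")   # pad/trim to the consensus length
    x = np.zeros((SEQ_LEN, 5), dtype=np.float32)
    for i, ch in enumerate(seq):
        x[i, TOKENS.get(ch, 4)] = 1.0
    return x
```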
To train the CNN model, a balanced subset of the training data is subsampled, where each class size is capped at 50 samples. The CNN is trained with 14,723 barcodes from 1,092 classes. No barcodes from the 121 unseen classes are employed during model training. The training set is further split into train (80%) and validation (20%) subsets by random sampling. We used 3 blocks of convolutional layers, each followed by batch normalization and 2D max-pooling. The output of the third convolutional layer is flattened and batch normalized before feeding the data into a fully-connected layer with 500 units. The CNN architecture is completed by a softmax layer. We used the output of the fully-connected layer as the embeddings for DNA. Class-level attributes are computed as the mean embedding of each class. The DNA-based attribute extraction is illustrated in Figure 2. The details of the model architecture are depicted in Figure 3 in the supplementary material. We used the ADAM optimizer to train the model for five epochs with a batch size of 32 (step-decay schedule with initial learning rate 0.0005 and drop factor 0.5; β1 = 0.9, β2 = 0.999). The model is developed in Python with the TensorFlow-Keras API.
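A sketch of such a barcode CNN in the TensorFlow-Keras API named above. Only the overall structure (three Conv + BatchNorm + MaxPool blocks, a batch-normalized 500-unit fully-connected embedding layer, and a softmax over the 1,092 training classes) follows the description; filter counts, kernel sizes, pooling sizes, and activation choices are not reported in this section and are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_barcode_cnn(n_classes=1092, embed_dim=500):
    inp = layers.Input(shape=(658, 5, 1))              # one-hot barcode treated as a 2D "image"
    x = inp
    for filters in (64, 32, 16):                       # placeholder filter counts
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=(2, 1))(x)
    x = layers.Flatten()(x)
    x = layers.BatchNormalization()(x)
    emb = layers.Dense(embed_dim, activation="relu", name="dna_embedding")(x)
    out = layers.Dense(n_classes, activation="softmax")(emb)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```

After training (five epochs, batch size 32 in the paper), the DNA embedding of a barcode is read from the "dna_embedding" layer, and class-level attributes are obtained by averaging the embeddings within each class.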
Predictive accuracy of DNA embeddings. Although the insect barcodes we used are extracted from a single gene (COI) of the mitochondrial DNA, with a relatively short sequence length of 658 base pairs, they prove to have exceptional predictive accuracy; the CNN model achieves 99.1% accuracy on the held-out validation set. Note that we only used data from the seen training classes to train the CNN model. In order to validate the generalizability of the embeddings to unseen data, we trained a simple K-Nearest Neighbor classifier (K = 1) on a randomly sampled 80% of the DNA embeddings of unseen classes and tested on the remaining 20%. The classifier had perfect accuracy for all but one of the 121 classes, with an overall accuracy of 99.8%.
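The 1-nearest-neighbour sanity check on the unseen-class embeddings corresponds to something like the following, with Z_unseen and y_unseen denoting the (assumed) arrays of unseen-class DNA embeddings and species labels.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

Z_tr, Z_te, y_tr, y_te = train_test_split(Z_unseen, y_unseen, test_size=0.2,
                                          stratify=y_unseen, random_state=0)
knn = KNeighborsClassifier(n_neighbors=1).fit(Z_tr, y_tr)
print("unseen-class 1-NN accuracy:", knn.score(Z_te, y_te))
```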
In addition to our CNN model, we explored a DNABERT (18) model for converting DNA barcodes to vector embeddings. The pretrained DNABERT model achieves around 85% (vs. 99% for the CNN) top-1 KNN accuracy (averaged over 10 runs) on the unseen classes. Pretrained DNABERT can be fine-tuned for species classification; however, because of the vast number of parameters to tune, each run takes a few hours on a relatively sophisticated GPU, significantly longer than CNN training. Similarly, a simple LSTM model with half as many parameters as the CNN model is almost 5 times slower than the CNN model and requires more epochs to reach a reasonable accuracy. Therefore, we use a simple 3-layer CNN that trains in an hour and achieves almost perfect top-1 KNN accuracy.
To demonstrate that the approach can be easily extended to larger members of the animal kingdom, we compiled approximately 26, 000 DNA barcodes from 1, 047 bird species to train another CNN model (ceteris paribus) to learn the DNA embeddings for CUB dataset (see the Supp. materials for details). The CNN model achieved a compelling 95.60% on the held-out validation set.
4 Bayesian Zero-shot Learning
Object classes in nature already tend to emerge at varying levels of abstraction, but the class hierarchy is more evident when classes represent species and species are considered the lowest taxonomic rank of living organisms. We build our approach on a two layer hierarchical Bayesian model that was previously introduced and evaluated on benchmark ZSL datasets with promising results (6). The model assumes that there are latent classes that define the class hierarchy in the image space and uses side information to build the Bayesian hierarchy around these latent classes. Two types of Bayesian priors are utilized in the model: global and local. As the name suggests, global priors are shared across all classes, whereas local priors represent latent classes, and are only shared among similar classes. Class similarity is evaluated based on side information in the Euclidean space. Unlike
standard Bayesian models where the posterior predictive distribution (PPD) forms a compromise between prior and likelihood, this approach utilizes posterior predictive distributions to blend local and global priors with data likelihood for each class. Inference for a test image is performed by evaluating posterior predictive distributions and assigning the sample to the class that maximizes the posterior predictive likelihood.
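Inference therefore reduces to evaluating a multivariate Student-t log-density per class and taking the argmax. The sketch below assumes the per-class PPD parameters (location, scale matrix, degrees of freedom) have already been computed and uses SciPy (≥ 1.6) rather than the released implementation.

```python
import numpy as np
from scipy.stats import multivariate_t

def classify(x, ppd_params):
    """ppd_params: {class_id: (mu, Sigma, v)} for all seen and surrogate classes."""
    scores = {c: multivariate_t(loc=mu, shape=Sigma, df=v).logpdf(x)
              for c, (mu, Sigma, v) in ppd_params.items()}
    return max(scores, key=scores.get)
```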
Generative Model. The two-layer generative model is given below,
xjik ∼ N(µji, Σj),   µji ∼ N(µj, Σj κ1⁻¹),   µj ∼ N(µ0, Σj κ0⁻¹),   Σj ∼ W⁻¹(Σ0, m)    (1)
where j, i, k represent indices for local priors, classes, and image instances, respectively. We assume that image feature vectors xjik come from a Gaussian distribution with mean µji and covariance matrix Σj , and are generated independently conditioned not only on the global prior but also on their corresponding local priors.
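To make the two-layer hierarchy in (1) concrete, the following sketch draws synthetic features for a single local prior: a shared covariance from the inverse Wishart prior, a local-prior mean, then class means and image features. It is purely illustrative; the dimensions and hyperparameter values are arbitrary.

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal as mvn

D = 4
m = D + 2                                             # minimum feasible inverse-Wishart dof
mu0, Sigma0 = np.zeros(D), np.eye(D)
kappa0, kappa1 = 0.1, 10.0

Sigma_j = invwishart.rvs(df=m, scale=Sigma0)          # covariance shared within local prior j
mu_j = mvn.rvs(mean=mu0, cov=Sigma_j / kappa0)        # local-prior mean
class_means = [mvn.rvs(mean=mu_j, cov=Sigma_j / kappa1) for _ in range(3)]
samples = [mvn.rvs(mean=mu, cov=Sigma_j, size=20) for mu in class_means]
```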
Each local prior is characterized by the parameters µj and Σj. µ0 is the mean of the Gaussian prior defined over the mean vectors of local priors, and κ0 is a scaling constant that adjusts the dispersion of the means of local priors around µ0. A smaller value of κ0 suggests that the means of the local priors are expected to be farther apart from each other, whereas a larger value suggests they are expected to be closer. On the other hand, Σ0 and m dictate the expected shape of the class distributions, as under the inverse Wishart distribution assumption the expected covariance is E(Σ | Σ0, m) = Σ0/(m − D − 1), where D is the dimensionality of the image feature space. The minimum feasible value of m is D + 2, and the larger m is, the less individual covariance matrices deviate from the expected shape. The hyperparameter κ1 is a scaling constant that adjusts the dispersion of the class means around the centers of their corresponding local priors. A larger κ1 leads to smaller variations in class means relative to the mean of their corresponding local prior, suggesting a fine-grained relationship among classes sharing the same local prior. Conversely, a smaller κ1 dictates coarse-grained relationships among classes sharing the same local prior. To preserve conjugacy, the model constrains classes sharing the same local prior to share the same covariance matrix Σj.
Test examples are classified by evaluating posterior predictive distributions (PPDs) of seen and unseen classes. As illustrated in Fig. 3, the PPD in general incorporates three sources of information: the data likelihood that arises from the current class, the local prior that results from other classes sharing the same local prior as the current class, and the global prior defined in terms of hyperparameters. PPDs for seen classes include the global prior and data likelihood and are derived in the form of a Student-t distribution, whereas for unseen classes the data likelihood does not exist, as no image samples are available for these classes. We leave the details of the derivations to the supplementary material.
Surrogate classes. According to the generative model in (1), groupings among classes are determined based on local priors. Thus, once estimated from seen classes, local priors can be used to define surrogate classes for unseen ones during inference. Associating each unseen class with a unique local prior forms the basis of our approach. The local prior for each unseen class is defined by finding the K seen classes most similar to that unseen class. The similarity is evaluated by computing the L2 (Euclidean) distance between class-level attribute or embedding vectors (φ) obtained from the side information available. Once a local prior is defined for each unseen class the PPD for the corresponding surrogate class can be derived in terms of only global and local priors as in equation (2). Test examples are classified based on class-conditional likelihoods evaluated for both seen and surrogate classes.
P(x | {x̄ji, Sji}ti=j, µ0, κ0, κ1) = T(x | µ̄j, Σ̄j, v̄j), where
µ̄j = ( ∑i:ti=j [njiκ1/(nji + κ1)] x̄ji + κ0µ0 ) / ( ∑i:ti=j [njiκ1/(nji + κ1)] + κ0 ),
v̄j = ∑i:ti=j (nji − 1) + m − D + 1,
Σ̄j = (Σ0 + ∑i:ti=j Sji)(κ̃j + 1) / (κ̃j v̄j)    (2)
where x̄ji, Sji, and nji denote the sample mean, scatter matrix, and size of class i associated with local prior j, respectively, and κ̃j is defined as in Eq. (30) in the supplementary material².
Rationale for the hierarchical Bayesian approach and limitations. We believe that the hierarchical Bayesian model is ideally suited for fine-grained zero-shot classification of species when DNA is used as side information, for the following reasons. The performance of the model in identifying unseen classes depends on how robustly the local priors can be estimated. This in turn depends on whether or not the set of seen classes contains any classes similar to unseen ones. As the number of seen classes increases, seen classes become more representative of their local priors, more robust estimates of local priors can be obtained, and thus unseen classes sharing the same local priors as seen classes can be more accurately identified. On the other hand, if the class-level side information is not specific enough to uniquely characterize a large number of classes, then the model cannot evaluate class similarity accurately and local priors are estimated based on potentially incorrect associations between seen and unseen classes. In this case, having a large number of seen classes available may not necessarily help. Instead, highly specific DNA as side information comes into play for accurately evaluating class similarity. If a unique local prior can eventually be defined for each unseen class, then unseen classes can be classified at test time without the model having to learn the mapping between side information and image features beforehand. Uniqueness of the local prior can only be ensured when the number of seen classes is large compared to the number of unseen classes. Thus, the ratio of the number of seen to unseen classes becomes the ultimate determinant of performance for the hierarchical Bayesian model: the higher this ratio, the higher the accuracy of the model. An experiment demonstrating this effect is presented in Section 5.3.
If the same set of K classes is found to be the most similar for two different unseen classes, then these two unseen classes will inherit the same local prior and thus they will not be statistically identifiable during test time. The likelihood of such a tie happening for fine-grained data sets quickly decreases as the number of classes increases. In practice we deal with this problem by replacing the least similar of the K most similar seen classes by the next most similar seen class for one of the unseen classes.
5 Experiments
In this section we report results of experiments with two species datasets that use DNA as side information. Details of training and hyperparameter tuning are provided in the supplementary material along with the source code of our methods.
5.1 Experiments with the INSECT dataset
We compare our model (BZSL) against state-of-the-art (SotA) ZSL methods shown to be the most competitive on benchmark ZSL datasets that use visual attributes or word vector representations as side information. The selected SotA models represent various ZSL categories: (1) embedding methods with traditional (1; 37) and end-to-end neural network (49) approaches, (2) FGNs using a VAE (39) and a GAN (44), and (3) an end-to-end few-shot learning approach extended to ZSL (43). Table 2 displays seen (S) and unseen (US) accuracies and their harmonic mean (H) on the
INSECT data using DNA as the side information. Results suggest that the large number of seen classes, along with the highly specific nature of DNA information in characterizing classes, particularly favors the Bayesian method, allowing it to more accurately estimate local priors and characterize surrogate classes. The harmonic mean achieved by the proposed method is 52% higher than that achieved by the second best performing technique. Similar levels of improvement are maintained on both seen and unseen class accuracies. The next top performers are FGNs. CADA-VAE uses a VAE whereas LsrGan utilizes a GAN to synthesize unseen class features; both then train a LogSoftmax classifier for inference. Lower unseen class accuracies suggest that FGNs struggle to synthesize meaningful features in the image space. On the other hand, CRNet, which uses an end-to-end neural network to learn the embedding between semantic and image spaces, performs slightly worse than FGNs. Non-linear embeddings also appear to work better than linear (ESZSL) and bilinear (ALE) ones on this specific dataset. RelationNet is among the methods with the lowest performance, as it is explicitly designed for few-shot learning and expects the side information to be strongly correlated with image features. The weak association between side information and image features affects the performance of both FGNs and embedding methods, but the traditional embedding methods suffer the most.
² The code and dataset are available at https://github.com/sbadirli/Fine-Grained-ZSL-with-DNA
5.2 Experiments with the benchmark CUB dataset
To demonstrate the utility of DNA-based attributes in a broader spectrum of species classification, we procured DNA barcodes, again from the BOLD system, for the bird species in the CUB dataset. For this experiment, we derived 400-dimensional embeddings to match the size of the word vectors and eliminate any effect of attribute size. There were 6 classes, 4 seen and 2 unseen, that did not have DNA barcodes extracted from the COI gene in the BOLD system. These classes were excluded from the dataset, but the proposed split from (47) is otherwise preserved.
The results shown in Table 3 validate our hypothesis that when side information is not strongly correlated with the visual characteristics of object classes (as with word vectors or DNA), both embedding methods and FGNs display significant performance degradation. With the exception of the proposed Bayesian model, word vector representations yield better accuracy than DNA-based attributes for all models. This phenomenon can be explained by our observation that text fragments related to common animals/birds on Wikipedia and the Internet often include some morphological traits of the underlying species. Hence, word vector representations are expected to have a higher degree of correlation with visual attributes than DNA information. Our model produces the best results, 34.97% vs. 32.45%, when the side information is not derived from the visual characteristics of classes. This outcome validates the robustness of the Bayesian model to diverse sources of side information and emphasizes the need for more robust FGN- or embedding-based models in more realistic scenarios where hand-crafted visual attributes are not feasible.
5.3 The effect of the number of seen classes on performance
Local priors are central to the performance of the hierarchical Bayesian model. Here, we perform experiments to show that as the number of seen classes increases while the number of unseen classes is fixed, each unseen class can be associated with a larger pool of candidate seen classes and more informative local priors can potentially be obtained, which in turn leads to more accurate identification of unseen classes. To demonstrate this effect we run two experiments. In the first experiment we use the same set of unseen classes as in Section 5.1 but gradually increase the number of seen classes used for training. In the second experiment we double the number of unseen classes and gradually include the remaining classes in training as seen classes. The first experiment is also performed for CADA-VAE; LsrGan is skipped for this experiment due to its long training time. To account for random subsampling of seen classes, each experiment is repeated five times and error bars are included in each plot. There is a clear trend in these results that further highlights the intuition behind the hierarchical Bayesian model and explains why this model is well-suited for fine-grained ZSL.
[Figure 4: The effect of the number of seen classes on the performance of BZSL and CADA-VAE; each experiment is repeated five times to account for random subsampling of seen classes. All panels plot accuracy (harmonic mean H, seen accuracy, unseen accuracy) against the percentage of seen classes used for training. (a) BZSL in the original setup (1,092 seen training classes, 121 unseen classes); (b) BZSL with 983 seen training classes and 230 unseen classes; (c) CADA-VAE in the original setup (1,092 seen training classes, 121 unseen classes).]
When 10% of the classes are used as unseen, unseen class accuracy improves with an increasing number of seen classes until it flatlines beyond the 60% mark, while seen class accuracy always stays around the same level (see Fig. 4a). When 20% of the classes are used as unseen, no flatlining effect in unseen class accuracy is observed even at the 100% mark, which suggests that there is still room for improvement in unseen class accuracy if more seen classes become available (see Fig. 4b). For CADA-VAE, unseen class accuracy initially improves and then flatlines beyond the 80% mark, but this improvement comes at the expense of significant degradation in seen class accuracy, which suggests that as the number of seen classes increases, the generated features further confound the classifier, as would be expected of an FGN on a fine-grained dataset.
6 Conclusions
For the first time in the ZSL literature, we use DNA as side information and demonstrate its utility in evaluating class similarity for the purpose of identifying unseen classes in a fine-grained ZSL setting. On the CUB dataset, despite being trained with fewer than 30,000 very short sequences, we find DNA embeddings to be highly competitive with word vector representations trained on massive text corpora. We emphasize the importance of DNA as side information in zero-shot classification of highly fine-grained species datasets involving thousands of species, and on the INSECT dataset, show that a simple Bayesian model that readily exploits the inherent class hierarchy with the help of DNA can significantly outperform highly complex models. We show that SotA ZSL methods that take the presence of an explicit association between visual attributes and image features for granted suffer significant performance degradation when non-visual attributes such as word vectors and WordNet are used as side information. The same effect is observed with DNA embeddings as well. Although visual attributes tend to be the best alternative as side information for a coarse-grained species classification task, they quickly lose their appeal with an increasing number of classes. Considering the tens of thousands of described species and the even larger number of undescribed species, DNA seems to be the only feasible source of side information for large-scale, fine-grained zero-shot classification of species.
These favorable results by a simpler model suggest that as the number of classes increases along with inter-class similarity, the complexity of the mapping between side information and image attributes emerges as a major bottleneck at the forefront of zero-shot classification. A promising future research avenue appears to be implementing hierarchically organized FGNs where each subcomponent only operates with a small subset of seen classes all sharing the same local prior.
This work does not present any foreseeable negative societal consequences beyond those already associated with generic machine learning classification algorithms.
Acknowledgements: This research was sponsored by the National Science Foundation (NSF) under Grant Number IIS-1252648 (CAREER). This work has been partially funded by the ERC (853489 - DEXIM) and by the DFG (2064/1 – Project number 390727645). GM acknowledges support from NSF-ATD grant 2124313. The content is solely the responsibility of the authors and does not necessarily represent the official views of NSF. | 1. What is the focus and contribution of the paper on zero-shot image classification?
2. What are the strengths of the proposed method, particularly in its use of DNA as side information?
3. What are the weaknesses of the paper regarding its explanations and presentations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper? | Summary Of The Paper
Review | Summary Of The Paper
Authors propose to use DNA as side information for zero-shot image classification and a method that can leverage this information. The method is based on a Bayesian model that captures similarity between different classes as well as between images of a single class through global and local priors. Unseen classes are represented using the statistics of a fixed number of seen classes that are the most similar according to the side information. Authors introduce an INSECT database and compare their method on INSECT and CUB with several other zero-shot methods. When using DNA as side information, the proposed method performs the best.
Review
As of my knowledge this is the first paper proposing to use DNA as side information for zero-shot image classification. The hierarchical Bayessian approach to model the data is also relevant and I can imagine it becoming one of usual baselines (though it is a pity it is coded in matlab). The exposition is overall clear, find few suggestions below. An added value is also the INSECT dataset.
Suggestions:
The main text falls short to explain how to arrive from the seen class statistics to computing PPD for an unseen class.
Table 3 - Include what US/S/H means in the caption.
Fig. 4 - Use a larger font. Put equal y-axis scale in all three plots.
Line 76 - "state of the art" => "state-of-the-art"
Line 133 - "larger" => "more"
Line 209 - "farther" => "further"
Line 325 - "fixed" => "stays fixed"
Line 336 - "maintained" => "maintains"
Final recommendation: I have read the other reviews and authors responses. I would like thank the authors for the responses and urge them to reflect the criticism in the final paper. I keep my recommendation to accept the paper (with the current score). |
NIPS | Title
Fine-Grained Zero-Shot Learning with DNA as Side Information
Abstract
The fine-grained zero-shot learning task requires some form of side information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near-perfect accuracy in species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset we show that DNA can be an equally promising, yet in general more accessible, alternative to word vectors as side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species we show that the Bayesian approach outperforms the state of the art by a wide margin.
1 Introduction
Fine-grained species classification is essential in monitoring biodiversity. Diversity of life is the central tenet of biology, and preserving biodiversity is key to a more sustainable life. Monitoring biodiversity requires identifying living organisms at the lowest taxonomic level possible. The traditional approach to identification uses published morphological dichotomous keys to identify the collected sample. This identification involves a tedious process of manually assessing the presence or absence of a long list of morphological traits arranged at hierarchical levels. The analysis is often performed in a laboratory setting by a well-trained human taxonomist and is difficult to do at scale. Fortunately, advances in technology have addressed this challenge to some extent through the use of
DNA barcodes. DNA barcoding is a technique that uses a short section of DNA from a specific gene, such as cytochrome C oxidase I (COI), found in mitochondrial DNA, and offers specific information about speciation in living organisms and can achieve nearly perfect classification accuracy at the species level (26; 17).
As it is costly to obtain label information for fine-grained image classification of species, Zero-Shot Learning (ZSL), which handles missing label information, is a suitable setting. In ZSL, side information is used to associate seen and unseen classes. Popular choices for side information are manually annotated attributes (21; 12), word embeddings (41; 14; 27) derived from free-form text, or the WordNet hierarchy (28; 2). It is often assumed that an exhaustive list of visual attributes characterizing all object classes (both seen and unseen) can be determined based only on seen classes. However, taking insects as our object classes, if no seen-class species have antennae, the attribute list may not contain antenna, which may in fact be necessary to distinguish unseen species. In the United States alone, more than 40% of all insect species (>70,000) remain undescribed (42), which is a clear sign of the limitations of existing identification techniques that rely on visual attributes. Similarly, free-form text is unlikely to contain sufficiently descriptive information about fine-grained objects to generate discriminative vector embeddings. For example, tiger beetle is a class in the ImageNet dataset. However, the tiger beetle group itself contains thousands of known species, and the Wikipedia pages for these species either do not exist or are limited to short texts that do not necessarily contain any information about the species’ morphological characteristics. The WordNet hierarchy may not be useful either, as most of the species names do not exist in WordNet.
Given that DNA information can be readily available for training (35; 36), species-level DNA information can be used as highly specific side information to replace high-level semantic information in ZSL. For seen classes, species-level DNA information can be obtained by finding the consensus nucleotide sequence among samples of a given species or by averaging corresponding sequence embeddings of samples. For unseen classes, species-level DNA information can be obtained from actual samples, if available, in the same way as seen classes, or can be simulated in a non-trivial way to represent potentially existing species.
Our approach uses DNA as side information for the first time for zero-shot classification of species. In fine-grained, large-scale species classification, no other side information can explain class dichotomy better than DNA, as new species are explicitly defined based on variations in DNA. The hierarchical Bayesian model leverages the implicit inter-species association of DNA and phenotypic traits and ultimately allows us to establish a Bayesian hierarchy based on DNA similarity between unseen and seen classes. We compare DNA against word representations for assessing class similarity and show that the Bayesian model that uses DNA to identify similar classes achieves favorable results compared to the version that uses word representations on a well-known ZSL benchmark species dataset involving slightly less than 200 bird species. In the particular case of an insect dataset with over 1000 species, when visual attributes or word representations may not offer feasible alternatives, we show that our hierarchical model that relies on DNA to establish class hierarchy significantly outperforms all other embedding-based methods and feature generating networks.
Our contributions are on three fronts. First, we introduce DNA as side information for fine-grained ZSL tasks, implement a Convolutional Neural Net (CNN) model to learn DNA barcode embeddings, and show that the embeddings are robust and highly specific for closed-set classification of species, even when training and test sets of species are mutually exclusive. We use the benchmark CUB dataset as a case study to show that DNA embeddings are competitive to word embeddings as side information. Second, we propose a fine-grained insect dataset involving 21, 212 matching image/DNA pairs from 578 genera and 1, 213 species as a new benchmark dataset and discuss the limitations of current ZSL paradigms for fine-grained ZSL tasks when there is no strong association between side information and image features. Third, we perform extensive studies to show that a simple hierarchical Bayesian model that uses DNA as side information outperforms state-of-the-art ZSL techniques on the newly introduced insect dataset by a wide margin.
2 Related Work
Zero-Shot Learning. Early ZSL literature is dominated by methods that embed image features into a semantic space and perform various forms of nearest neighbor search to do inference (14; 41; 1). As the dimensionality of semantic space is usually much smaller than the feature space this leads to the
hubness problem, where some classes become hubs and occur as the nearest neighbors of many samples. In an effort to alleviate the hubness problem, (50; 40) changed the direction of the embedding from semantic to image feature space. This was followed by a line of work that investigates bidirectional embedding between semantic and image spaces through a latent space (51; 43; 2; 32; 38).
In (25; 15), a new strategy of synthesizing features for unseen classes and converting the challenging ZSL problem into traditional supervised learning is introduced (23; 44; 9; 13; 48; 52; 29; 39; 4). Although feature generating networks (FGNs) currently achieve state-of-the-art results in ZSL, they suffer from the same problem as earlier lines of work in ZSL: hypersensitivity towards side information not strongly correlated with visual attributes. The vulnerability of both embedding and FGN-based methods toward sources of side information different than visual attributes, such as word vectors or WordNet hierarchy, is investigated in (2; 39; 44). Another limitation of FGNs is that features generated for unseen classes are significantly less dispersed than actual features due to the generator failing to span more than a small subset of modes available in the data. Recent deep generative models mitigate this problem by proposing different loss functions that can better explore inter-sample and inter-class relationships (3; 7; 8; 19; 45). However, these methods fail to scale well with an increasing number of classes with an especially high inter-class similarity (24).
Side Information in ZSL. Side information serves as the backbone of ZSL as it bridges the knowledge gap between seen and unseen classes. Earlier lines of work (22; 1) use visual attributes to characterize object classes. Although visual attributes achieve compelling results, obtaining them involves a laborious process that requires manual annotation by human experts, which does not scale to datasets with a large number of fine-grained object classes. When dealing with fine-grained species classification, apart from scalability, a more pressing obstacle is how to define subtle attributes potentially characteristic of species that have never been observed.
As an alternative to manual annotation, several studies (11; 14; 2; 46; 34; 5) proposed to learn side information that requires less effort and minimal expert labor such as textual descriptions, distributed text representations, like Word2Vec (27) and GloVe (33), learned from large unsupervised text corpora, taxonomical order built from a pre-defined ontology like WordNet (28), or even human gaze reaction to images (20). The accessibility, however, comes at the cost of performance degradation (2; 39). A majority of ZSL methods implicitly assume strong correlation between side information and image features, which is true for handcrafted attributes but less likely to be true for text representations or taxonomic orders. Consequently, all these methods experience significant decline in performance when side information is not based on visual attributes.
3 Barcode of Life Data and DNA Embeddings
In this study, we present the fine-grained INSECT dataset with 21,212 matching image/DNA pairs from 1,213 species (see Fig. 1 for sample images). Unlike existing benchmark ZSL datasets, this new
dataset uses DNA as side information¹ and can be best characterized by the high degree of similarity among classes. Among the existing benchmark datasets, SUN contains the largest number of classes (717), but classes in SUN represent a wide range of scene categories related to transportation, indoors and outdoors, nature, underwater, etc., and as such can be considered a relatively coarse-grained dataset compared to the INSECT dataset we are introducing in this study.
All insect images and associated DNA barcodes in our dataset come from the Barcode of Life Data System (BOLD) (35; 36). BOLD is an open-access database in which users can upload DNA sequences and other identifying information for any living organism on Earth. The database provides approximately 658 base pairs of the mitochondrial DNA barcode extracted from the cytochrome c oxidase I (COI) gene along with additional information such as country of origin, life-stage, order, family, subfamily, and genus/species names.
Data Collection. We collected image/DNA pairs of insects that originate from three orders: Diptera (true flies), Coleoptera (beetles) and Hymenoptera (sawflies, wasps, bees, and ants). While the dataset is in general clean, manual effort was devoted to further curate the dataset. Only cases with images and matching DNA barcodes of adult insects are included. Images from each species were visually inspected and poor-quality images were deleted. Only species with more than ten instances were included. The final dataset consisted of 21,212 images and 1,213 insect species, of which 254 belong to Diptera (133 genera), 564 to Coleoptera (315 genera) and 395 to Hymenoptera (130 genera). We extracted image features, namely image embeddings, using a pre-trained (on ImageNet 1000 classes) ResNet101 model (16). Images are resized to 256 × 256 and center-cropped before being fed to the ResNet model. No other pre-processing is applied to the images.
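As an illustration, the sketch below shows one way to extract such image embeddings with a pre-trained ResNet101 in TensorFlow/Keras. The crop size, the use of average-pooled backbone activations as the embedding, and the function name are our own assumptions; the text only specifies the 256 × 256 resize, the center crop, and the pre-trained ResNet101.

```python
import tensorflow as tf

# Pre-trained ResNet101 (ImageNet) without the classifier head; global average
# pooling turns the final feature map into a 2048-dimensional embedding.
backbone = tf.keras.applications.ResNet101(include_top=False, weights="imagenet", pooling="avg")

def embed_image(path, resize=256, crop=224):
    """Resize to 256x256, center-crop (crop size assumed), and embed with ResNet101."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, expand_animations=False)
    img = tf.image.resize(img, (resize, resize))
    top = (resize - crop) // 2
    img = tf.image.crop_to_bounding_box(img, top, top, crop, crop)
    img = tf.keras.applications.resnet.preprocess_input(tf.cast(img, tf.float32))
    return backbone(img[None, ...]).numpy().squeeze()  # shape: (2048,)
```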
Data Split. We randomly chose 10% of all species as unseen classes for the test set, leading to 1,092 seen and 121 unseen classes. Similarly, we randomly chose 10% of the 1,092 training classes as unseen classes for the validation set. Samples from seen classes were split by an 80/20 ratio in a stratified fashion to create the seen portion of the train and test datasets. In the dataset there were a few hundred cases where multiple image views (dorsal, ventral, and lateral) of the same insect were present. To avoid splitting these cases between train and test, we made sure all instances of the same insect are included in the
training set. As a result, 12 of the 1,092 seen classes in the training set were not represented in the test set. Our dataset splits are summarized in Table 1.
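The split procedure described above can be sketched as follows; the function and variable names are illustrative, and the handling of multiple views of the same specimen is simplified to moving any such test sample back to the training side.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_insect_data(labels, specimen_ids, unseen_frac=0.10, seen_test_frac=0.20, seed=0):
    """Sketch of the split: hold out species as unseen, then split seen classes 80/20."""
    rng = np.random.default_rng(seed)
    species = np.unique(labels)
    unseen = rng.choice(species, size=int(unseen_frac * len(species)), replace=False)
    seen_idx = np.where(~np.isin(labels, unseen))[0]

    train_idx, test_idx = train_test_split(
        seen_idx, test_size=seen_test_frac, stratify=labels[seen_idx], random_state=seed)

    # Keep all images of the same physical specimen on the training side.
    train_specimens = set(specimen_ids[train_idx])
    keep = np.array([specimen_ids[i] not in train_specimens for i in test_idx])
    train_idx = np.concatenate([train_idx, test_idx[~keep]])
    return train_idx, test_idx[keep], np.where(np.isin(labels, unseen))[0]
```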
DNA Embeddings. Although this is the first time DNA barcodes are used as side information in the ZSL domain, there has been some work investigating vector embeddings for DNA sequences. Authors of (31) trained a CNN model to do binary DNA sequence classification, treating sequences as text data. Imitating amino acid structure, each triplet of base pairs is treated as a word and sequences are converted into a one-hot vector representation. Taking (27) as the base, (30) trained a shallow neural network on human genome data to generate representations for k-mers. Unlike these techniques, we deal with DNA barcodes represented by nucleotide sequences and aim to convert the entire character sequence into a vector embedding useful for species classification with more than 1,000 classes.
¹Please refer to the supplementary material for a discussion of the limitations of using DNA as side information
Most recently, DNABERT (18) adapted the powerful text transformer model (10) to a genomic DNA setting and generated vector embeddings for long DNA sequences.
In this paper, we trained a CNN model to learn a vector representation of DNA barcodes in the Euclidean space. First, the 658 bp consensus sequence of all DNA barcodes in the training set is obtained. Then, all sequences are aligned with respect to this consensus sequence using a progressive alignment technique implemented in MATLAB R2020A (Natick, MA, USA). A total of five tokens are used, one for each of the four bases, Adenine, Guanine, Cytosine, and Thymine, and one for others. All ambiguous and missing symbols are included in the others token. In pre-processing, barcodes are one-hot encoded into a 658 × 5 2D array, where 658 is the length of the barcode sequence (the median nucleotide length of the DNA data).
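This pre-processing step can be summarized with the following sketch; the handling of positions beyond the aligned 658 bp length (left as all zeros) is an assumption.

```python
import numpy as np

# Five tokens: the four nucleotides plus a catch-all for ambiguous/missing symbols.
TOKENS = {"A": 0, "C": 1, "G": 2, "T": 3}
SEQ_LEN = 658  # aligned barcode length used in the paper

def one_hot_barcode(sequence, seq_len=SEQ_LEN):
    """Encode an aligned DNA barcode into a (seq_len, 5) one-hot array.

    Any symbol other than A/C/G/T (e.g. N, gaps) falls into the 'others' column (index 4).
    """
    encoded = np.zeros((seq_len, 5), dtype=np.float32)
    for i, base in enumerate(sequence[:seq_len].upper()):
        encoded[i, TOKENS.get(base, 4)] = 1.0
    return encoded
```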
To train the CNN model, a balanced subset of the training data is subsampled, where each class size is capped at 50 samples. The CNN is trained with 14,723 barcodes from 1,092 classes. No barcodes from the 121 unseen classes are employed during model training. The training set is further split into train (80%) and validation (20%) sets by random sampling. We used 3 blocks of convolutional layers, each followed by batch normalization and 2D max-pooling. The output of the third convolutional layer is flattened and batch normalized before feeding the data into a fully-connected layer with 500 units. The CNN architecture is completed by a softmax layer. We used the output of the fully-connected layer as the embeddings for DNA. Class-level attributes are computed by the mean embedding of each class. The DNA-based attribute extraction is illustrated in Figure 2. The details of the model architecture are depicted in Figure 3 in the supplementary material. We used the ADAM optimizer for training the model for five epochs with a batch size of 32 (with a step-decay initial learning rate = 0.0005 and drop factor = 0.5, β1 = 0.9, β2 = 0.999). The model is developed in Python with the Tensorflow-Keras API.
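A minimal Keras sketch of this architecture is given below. Filter counts, kernel sizes, pooling shapes, and the embedding-layer activation are assumptions (the main text only fixes the overall layout, the 500-unit fully-connected layer, and the optimizer settings); the exact values appear in the supplementary figure.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_barcode_cnn(seq_len=658, n_tokens=5, n_classes=1092, embed_dim=500):
    """Sketch of the barcode CNN: 3 conv blocks, flatten + batch norm, 500-unit FC, softmax."""
    inputs = layers.Input(shape=(seq_len, n_tokens, 1))
    x = inputs
    for filters in (64, 32, 16):            # assumed filter counts
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D((2, 1))(x)  # pool along the sequence axis (assumed)
    x = layers.Flatten()(x)
    x = layers.BatchNormalization()(x)
    embedding = layers.Dense(embed_dim, activation="relu", name="dna_embedding")(x)  # activation assumed
    outputs = layers.Dense(n_classes, activation="softmax")(embedding)
    return models.Model(inputs, outputs)

model = build_barcode_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# After training, class-level DNA attributes are the mean 'dna_embedding' output per species.
```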
Predictive accuracy of DNA embeddings. Although the insect barcodes we used are extracted from a single gene (COI) of the mitochondrial DNA with a relatively short sequence length of 658 base pairs, they are proven to have exceptional predictive accuracy; the CNN model achieves a 99.1% accuracy on the held-out validation set. Note that we only used data from the seen training classes to train the CNN model. In order to validate the generalizability of embeddings to unseen data, we trained a simple K-Nearest Neighbor classifier (K = 1) on a randomly sampled 80% of the DNA embeddings of unseen classes and tested on the remaining 20%. The classifier had perfect accuracy for all but one of the 121 classes, with an overall accuracy of 99.8%.
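The unseen-class check described above amounts to a simple 1-NN experiment, sketched below with scikit-learn (function and variable names are illustrative).

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def unseen_class_knn_check(unseen_embeddings, unseen_labels, seed=0):
    """Sanity check: 1-NN classification of held-out unseen-class DNA embeddings."""
    tr_x, te_x, tr_y, te_y = train_test_split(
        unseen_embeddings, unseen_labels, test_size=0.20,
        stratify=unseen_labels, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=1).fit(tr_x, tr_y)
    return knn.score(te_x, te_y)  # reported as roughly 0.998 in the text
```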
In addition to our CNN model, we explored a DNABERT (18) model for converting DNA barcodes to vector embeddings. The pretrained DNABERT model achieves around 85% (vs. 99% from the CNN) top-1 KNN accuracy (averaged over 10 runs) on the unseen classes. Pretrained DNABERT can be fine-tuned for species classification; however, because of the vast number of parameters to tune, each run takes a few hours on a relatively sophisticated GPU, significantly longer than CNN training. Similarly, a simple LSTM model with half as many parameters as the CNN model is almost 5 times slower than the CNN model and requires more epochs to reach a reasonable accuracy. Therefore, we use a simple 3-layer CNN that trains in an hour and achieves almost perfect top-1 KNN accuracy.
To demonstrate that the approach can be easily extended to larger members of the animal kingdom, we compiled approximately 26,000 DNA barcodes from 1,047 bird species to train another CNN model (ceteris paribus) to learn the DNA embeddings for the CUB dataset (see the Supp. materials for details). The CNN model achieved a compelling 95.60% accuracy on the held-out validation set.
4 Bayesian Zero-shot Learning
Object classes in nature already tend to emerge at varying levels of abstraction, but the class hierarchy is more evident when classes represent species and species are considered the lowest taxonomic rank of living organisms. We build our approach on a two layer hierarchical Bayesian model that was previously introduced and evaluated on benchmark ZSL datasets with promising results (6). The model assumes that there are latent classes that define the class hierarchy in the image space and uses side information to build the Bayesian hierarchy around these latent classes. Two types of Bayesian priors are utilized in the model: global and local. As the name suggests, global priors are shared across all classes, whereas local priors represent latent classes, and are only shared among similar classes. Class similarity is evaluated based on side information in the Euclidean space. Unlike
standard Bayesian models where the posterior predictive distribution (PPD) forms a compromise between prior and likelihood, this approach utilizes posterior predictive distributions to blend local and global priors with data likelihood for each class. Inference for a test image is performed by evaluating posterior predictive distributions and assigning the sample to the class that maximizes the posterior predictive likelihood.
Generative Model. The two-layer generative model is given below,
x_{jik} ∼ N(µ_{ji}, Σ_j),   µ_{ji} ∼ N(µ_j, Σ_j κ_1^{-1}),   µ_j ∼ N(µ_0, Σ_j κ_0^{-1}),   Σ_j ∼ W^{-1}(Σ_0, m)    (1)
where j, i, k represent indices for local priors, classes, and image instances, respectively. We assume that image feature vectors xjik come from a Gaussian distribution with mean µji and covariance matrix Σj , and are generated independently conditioned not only on the global prior but also on their corresponding local priors.
Each local prior is characterized by the parameters µj and Σj. µ0 is the mean of the Gaussian prior defined over the mean vectors of local priors, and κ0 is a scaling constant that adjusts the dispersion of the means of local priors around µ0. A smaller value for κ0 suggests that means of the local priors are expected to be farther apart from each other, whereas a larger value suggests they are expected to be closer. On the other hand, Σ0 and m dictate the expected shape of the class distributions, as under the inverse Wishart distribution assumption the expected covariance is E(Σ|Σ0,m) = Σ0/(m − D − 1), where D is the dimensionality of the image feature space. The minimum feasible value of m is equal to D + 2, and the larger m is, the less individual covariance matrices will deviate from the expected shape. The hyperparameter κ1 is a scaling constant that adjusts the dispersion of the class means around the centers of their corresponding local priors. A larger κ1 leads to smaller variations in class means relative to the mean of their corresponding local prior, suggesting a fine-grained relationship among classes sharing the same local prior. Conversely, a smaller κ1 dictates coarse-grained relationships among classes sharing the same local prior. To preserve conjugacy of the model, the proposed model constrains classes sharing the same local prior to share the same covariance matrix Σj. Test examples are classified by evaluating posterior predictive distributions (PPD) of seen and unseen classes. As illustrated in Fig. 3, the PPD in general incorporates three sources of information: the data likelihood that arises from the current class, the local prior that results from other classes sharing the same local prior as the current class, and the global prior defined in terms of hyperparameters. PPDs for seen classes include the global prior and data likelihood and are derived in the form of a Student-t distribution, whereas for unseen classes the data likelihood does not exist as no image samples are available for these classes. We leave the details of derivations to the supplementary material.
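To make the roles of κ0, κ1, Σ0 and m concrete, the following sketch draws synthetic data from the generative model in (1) for a single local prior. It is illustrative only; it uses SciPy's inverse Wishart sampler and hypothetical function and variable names.

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal

def sample_generative_model(mu0, Sigma0, m, kappa0, kappa1, n_classes=3, n_per_class=5, seed=0):
    """Illustrative draw from the two-layer generative model in equation (1) for one local prior j."""
    np.random.seed(seed)
    Sigma_j = invwishart.rvs(df=m, scale=Sigma0)                 # Sigma_j ~ W^{-1}(Sigma_0, m)
    mu_j = multivariate_normal.rvs(mu0, Sigma_j / kappa0)        # mu_j ~ N(mu_0, Sigma_j / kappa_0)
    classes = []
    for _ in range(n_classes):
        mu_ji = multivariate_normal.rvs(mu_j, Sigma_j / kappa1)  # class mean around the local prior
        classes.append(multivariate_normal.rvs(mu_ji, Sigma_j, size=n_per_class))
    return Sigma_j, mu_j, np.stack(classes)
```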
Surrogate classes. According to the generative model in (1), groupings among classes are determined based on local priors. Thus, once estimated from seen classes, local priors can be used to define surrogate classes for unseen ones during inference. Associating each unseen class with a unique local prior forms the basis of our approach. The local prior for each unseen class is defined by finding the K seen classes most similar to that unseen class. The similarity is evaluated by computing the L2 (Euclidean) distance between class-level attribute or embedding vectors (φ) obtained from the side information available. Once a local prior is defined for each unseen class the PPD for the corresponding surrogate class can be derived in terms of only global and local priors as in equation (2). Test examples are classified based on class-conditional likelihoods evaluated for both seen and surrogate classes.
P(x | {x̄_{ji}, S_{ji}}_{t_i=j}, µ_0, κ_0, κ_1) = T(x | µ̄_j, Σ̄_j, v̄_j);
µ̄_j = ( ∑_{i:t_i=j} [n_{ji}κ_1/(n_{ji}+κ_1)] x̄_{ji} + κ_0 µ_0 ) / ( ∑_{i:t_i=j} [n_{ji}κ_1/(n_{ji}+κ_1)] + κ_0 ),
v̄_j = ∑_{i:t_i=j} (n_{ji} − 1) + m − D + 1,
Σ̄_j = ( Σ_0 + ∑_{i:t_i=j} S_{ji} ) (κ̃_j + 1) / ( κ̃_j v̄_j )    (2)
where x̄_{ji}, S_{ji} and n_{ji} represent the sample mean, scatter matrix and size of class i associated with local prior j, respectively, and κ̃_j is defined as in Eq. (30) in the supplementary material².
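As a sketch, the surrogate-class parameters in equation (2) can be computed from the seen-class sufficient statistics as follows. The function and variable names are illustrative, and κ̃_j is treated as a given scalar, since its definition (Eq. (30)) is only stated in the supplementary material.

```python
import numpy as np

def surrogate_ppd_params(class_stats, mu0, Sigma0, m, kappa0, kappa1, kappa_tilde):
    """Student-t parameters of equation (2) for one surrogate (unseen) class.

    `class_stats` is a list of (x_bar, S, n) tuples -- sample mean, scatter matrix and size --
    for the seen classes assigned to this local prior.
    """
    D = mu0.shape[0]
    w = np.array([n * kappa1 / (n + kappa1) for _, _, n in class_stats])
    means = np.stack([x_bar for x_bar, _, _ in class_stats])
    mu_bar = (w @ means + kappa0 * mu0) / (w.sum() + kappa0)
    v_bar = sum(n - 1 for _, _, n in class_stats) + m - D + 1
    S_sum = sum(S for _, S, _ in class_stats)
    Sigma_bar = (Sigma0 + S_sum) * (kappa_tilde + 1) / (kappa_tilde * v_bar)
    return mu_bar, Sigma_bar, v_bar
```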
Rationale for the hierarchical Bayesian approach and limitations. We believe that the hierarchical Bayesian model is ideally suited for fine-grained zero-shot classification of species when DNA is used as side information for the following reasons. The performance of the model in identifying unseen classes depends on how robust the local priors can be estimated. This in turn depends on whether or not the set of seen classes contain any classes similar to unseen ones. As the number of seen classes increases, seen classes become more representative of their local priors, more robust estimates of local priors can be obtained, and thus, unseen classes sharing the same local priors as seen classes can be more accurately identified. On the other hand, if the class-level side information is not specific enough to uniquely characterize a large number of classes, then the model cannot evaluate class similarity accurately and local priors are estimated based on potentially incorrect association between seen and unseen classes. In this case having a large number of seen classes available may not necessarily help. Instead, highly specific DNA as side information comes into play for accurately evaluating class similarity. If a unique local prior can be eventually described for each unseen class, then unseen classes can be classified during test time without the model having to learn the mapping between side information and image features beforehand. Uniqueness of the local prior can only be ensured when the number of seen classes is large compared to the number of unseen classes. Thus, the ratio of the number of seen and unseen classes becomes the ultimate determinant of performance for the hierarchical Bayesian model. The higher this ratio is the higher the accuracy of the model will be. An experiment demonstrating this effect is performed in Section 5.3.
If the same set of K classes is found to be the most similar for two different unseen classes, then these two unseen classes will inherit the same local prior and thus they will not be statistically identifiable during test time. The likelihood of such a tie happening for fine-grained data sets quickly decreases as the number of classes increases. In practice we deal with this problem by replacing the least similar of the K most similar seen classes by the next most similar seen class for one of the unseen classes.
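The assignment of local priors to unseen classes, including the tie-breaking rule just described, can be sketched as follows; K is a hyperparameter and the value shown is illustrative.

```python
import numpy as np

def assign_local_priors(seen_attr, unseen_attr, K=5):
    """Assign each unseen class a local prior: its K most similar seen classes by L2 distance.

    If two unseen classes would receive an identical set of seen classes, the least similar
    member of one set is swapped for the next most similar seen class, as described above.
    """
    dists = np.linalg.norm(unseen_attr[:, None, :] - seen_attr[None, :, :], axis=-1)
    order = np.argsort(dists, axis=1)              # seen classes sorted by similarity
    priors, used = [], set()
    for u in range(unseen_attr.shape[0]):
        k = K
        members = tuple(sorted(order[u, :K]))
        while members in used and k < seen_attr.shape[0]:
            members = tuple(sorted(np.r_[order[u, :K - 1], order[u, k]]))  # swap in next best
            k += 1
        used.add(members)
        priors.append(list(members))
    return priors
```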
5 Experiments
In this section we report results of experiments with two species datasets that use DNA as side information. Details of training and hyperparameter tuning are provided in the supplementary material along with the source code of our methods.
5.1 Experiments with the INSECT dataset
We compare our model (BZSL) against state-of-the-art (SotA) ZSL methods shown to be most competitive on benchmark ZSL datasets that use visual attributes or word vector representations as side information. Selected SotA models represent various ZSL categories: (1) Embedding methods with traditional (1; 37) and end-to-end neural network (49) approaches, (2) FGNs using VAE (39) and GAN (44), and (3) an end-to-end few-shot learning approach extended to ZSL (43). Table 2 displays seen (S) and unseen (US) accuracies and their harmonic mean (H) on the
INSECT data using DNA as the side information. Results suggest that the large number of seen classes along with the highly specific nature of DNA information in characterizing classes particularly favors the Bayesian method to more accurately estimate local priors and characterize surrogate classes. The harmonic mean achieved by the proposed method is 52% higher than the harmonic mean achieved by the second best performing technique. Similar levels of improvements are maintained on both seen and unseen class accuracies. The next top performers are FGNs. CADA-VAE uses a VAE whereas LsrGan utilizes GAN to synthesize unseen class features, then both train a LogSoftmax classifier for inference. Lower unseen class accuracies suggest that FGNs struggle to synthesize meaningful
²The code and dataset are available at https://github.com/sbadirli/Fine-Grained-ZSL-with-DNA
features in the image space. On the other hand, CRNet, which uses an end-to-end neural network to learn the embedding between semantic and image spaces, renders slightly worse performance than FGNs. It seems that non-linear embedding also works better than linear (ESZSL) and bilinear (ALE) ones for this specific dataset. RelationNet is amongst the ones with the lowest performance, as the method is explicitly designed for few-shot learning and expects the side information to be strongly correlated with image features. The weak association between side information and image features affects the performance of both FGNs and embedding methods, but the traditional embedding methods suffer the most.
5.2 Experiments with the benchmark CUB dataset
To demonstrate the utility of DNA-based attributes in a broader spectrum of species classification, we procured DNA barcodes, again from the BOLD system, for bird species in the CUB dataset. For this experiment, we derived 400-dimensional embeddings in order to have the same size as word vectors and eliminate the attribute size effect. There were 6 classes, 4 seen and 2 unseen, that did not have DNA barcodes extracted from the COI gene in the BOLD system. These classes were excluded from the dataset, but the proposed split from (47) is preserved otherwise.
The results shown in Table 3 validate our hypothesis that when side information is not strongly correlated with visual characteristics of object classes (like in word vectors or DNA) both embedding methods and FGNs display significant performance degradation. With the exception of the proposed Bayesian model, word vector representation yields better accuracy than DNA-based attributes for all models. This phenomenon can be explained by our observation that text fragments related to common animals/birds in the Wikipedia/Internet often include some morphological traits of the underlying species. Hence, word vector representation is expected to have higher degree of correlation to visual attributes than DNA information. Our model produces the best results, 34.97% vs 32.45% when the side information is not derived from visual characteristics of classes. This outcome validates the robustness of the Bayesian model to diverse sources of side information and emphasizes the need for more robust FGN or embedding based models in more realistic scenarios where hand-crafted visual attributes are not feasible.
5.3 The effect of the number of seen classes on performance
Local priors are central to the performance of the hierarchical Bayesian model. Here, we perform experiments to show that as the number of seen classes increases while the number of unseen classes is fixed, each unseen class can be associated with a larger pool of candidate seen classes and more informative local priors can potentially be obtained, which in turn leads to more accurate identification of unseen classes. To demonstrate this effect, we run two experiments. In the first experiment, we use the same set of unseen classes as in Section 5.1 but gradually increase the number of seen classes used for training. In the second experiment, we double the number of unseen classes and gradually include the remaining classes into training as seen classes. The first experiment is also performed for CADA-VAE. LsrGan is skipped for this experiment due to its long training time. To account for random subsampling of seen classes, each experiment is repeated five times and error bars are included in each plot. There is a clear trend in these results that further highlights the intuition behind the hierarchical Bayesian model and explains why this model is well-suited for fine-grained ZSL.

[Figure 4: The effect of the number of seen classes on the performance of BZSL and CADA-VAE. Each experiment is repeated five times to account for random subsampling of seen classes. Each panel plots accuracy (harmonic mean H, seen accuracy, and unseen accuracy) against the percentage of classes used as seen: (a) BZSL results in the original setup (Y^str = 1,092 and Y^u = 121), (b) BZSL results with Y^str = 983 and Y^u = 230, (c) CADA-VAE in the original setup (Y^str = 1,092 and Y^u = 121).]

When 10% of the
classes are used as unseen, unseen class accuracy improves with an increasing number of seen classes until it flatlines beyond the 60% mark, while seen class accuracy always remains around the same level (see Fig. 4a). When 20% of the classes are used as unseen, no flatlining effect in unseen class accuracy is observed even at the 100% mark, which suggests that there is still room for improvement in unseen class accuracy if more seen classes become available (see Fig. 4b). For CADA-VAE, unseen class accuracy initially improves and then flatlines beyond the 80% mark, but this improvement comes at the expense of significant degradation in seen class accuracy, which suggests that as the number of seen classes increases, generated features further confound the classifier, as would be expected of an FGN for a fine-grained dataset.
6 Conclusions
For the first time in the ZSL literature, we use DNA as side information and demonstrate its utility in evaluating class similarity for the purpose of identifying unseen classes in a fine-grained ZSL setting. On the CUB dataset, despite being trained with less than 30,000 very short sequences, we find DNA embeddings to be highly competitive with word vector representations trained on massive text corpora. We emphasize the importance of DNA as side information in zero-shot classification of highly fine-grained species datasets involving thousands of species, and on the INSECT dataset, show that a simple Bayesian model that readily exploits inherent class hierarchy with the help of DNA can significantly outperform highly complex models. We show that SotA ZSL methods that take the presence of an explicit association between visual attributes and image features for granted suffer significant performance degradation when non-visual attributes such as word vectors and WordNet are used as side information. The same effect is observed with DNA embeddings as well. Although visual attributes tend to be the best alternative as side information for a coarse-grained species classification task, they quickly lose their appeal with an increasing number of classes. Considering the tens of thousands of described species and the even larger number of undescribed species, DNA seems to be the only feasible alternative for side information in large-scale, fine-grained zero-shot classification of species.
These favorable results by a simpler model suggest that as the number of classes increases along with inter-class similarity, the complexity of the mapping between side information and image attributes emerges as a major bottleneck at the forefront of zero-shot classification. A promising future research avenue appears to be implementing hierarchically organized FGNs where each subcomponent only operates with a small subset of seen classes all sharing the same local prior.
This work does not present any foreseeable negative societal consequences beyond those already associated with generic machine learning classification algorithms.
Acknowledgements: This research was sponsored by the National Science Foundation (NSF) under Grant Number IIS-1252648 (CAREER). This work has been partially funded by the ERC (853489 - DEXIM) and by the DFG (2064/1 – Project number 390727645). GM acknowledges support from NSF-ATD grant 2124313. The content is solely the responsibility of the authors and does not necessarily represent the official views of NSF. | 1. What is the main contribution of the paper regarding zero-shot learning using DNA barcodes?
2. What are the strengths and weaknesses of the proposed approach compared to prior works relying on visual attributes or word vectors?
3. How does the reviewer assess the accuracy and reliability of the dataset given known challenges in accurate insect identification?
4. What additional analyses or visualizations would the reviewer like to see to better understand the distribution of specimens across digitizing entities and taxonomic levels?
5. Is the DNA embedding network trained with a species classification loss, and how does the similarity of seen classes affect the local prior for surrogate classes?
6. How much is the performance gain over SOTA due to the use of DNA versus visual attributes or word vectors?
7. What clarifications or elaborations would the reviewer suggest regarding fine-grained species classification, data types, problem formulation, and simulating DNA metadata for unseen species?
8. How would the reviewer summarize the hubness problem, and what is the significance of US, S, and H in Tables 2 & 3? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes using DNA as side information for fine-grained zero-shot learning of insects, as opposed to visual attribute labels. Their proposed approach establishes a hierarchy over the images using DNA information, via a hierarchical Bayesian model over a learned embedding of DNA barcodes. Their method particularly outshines attribute-label-based approaches on an insect identification application, where visual attributes (from public Wikipedia pages, for example) are sparsely labeled or not available. The provide as an additional contribution their paired DNA/image benchmark dataset containing 21,212 examples of 1,213 insect species.
Review
This paper provides an interesting alternative, DNA, as side information for zero-shot learning. The main drawback to the generality and scalability of this method is that it requires paired DNA information to be extracted from the specimen in question, which is not necessarily available for any given sighting - in fact it is most likely only available when digitizing and analyzing large natural history collections or within research labs that collect physical specimens (very common in entomology). That said, when that information is available it should be used, and this paper provides a clean formulation of the problem, benchmark dataset, and proposes a methodology for how to use DNA information effectively in zero-shot learning.
Some insect species, for example Rove Beetles, cannot be visually identified by experts unless they are dissected and the genitalia in inspected under a microscope. The error rates in classification from DNA barcoding can be up to 30%, and museum collections are known to contain labeling errors, particularly for challenging-to-identify species. Do the authors have a sense of the accuracy of their dataset subject to these known challenges in accurate insect ID?
It would be very interesting to see a visualization of the taxonomic distribution of the data, and particularly see the resultant distributions across the 3 orders considered of the random train/test/unseen splits (I’m imagining a taxonomic tree with the different splits colored at the end node, or possibly the proportion of categories in each split at each taxonomic level colored in some way). It would also be good to show the distribution of examples per class in each split, to see the amount of imbalance represented. Further, it would be interesting to look at the distribution of these specimens across digitizing entities contribution to BOLD, to probe whether any potential correlations exist in digitization practice (camera type, background, DNA extraction method, etc) for certain taxa.
Is the DNA embedding network trained with a species classification loss? I assume so but it isn’t explicit in the text.
How much does the similarity of seen classes affect the local prior for surrogate classes (beyond just the ratio of seen to unseen)? This would be interesting to explore. Perhaps you could probe how results shift based on the number of seen classes within some distance threshold in embedding space? Or build an explicit split based on taxonomy, with ie held out genera instead of species, to probe the robustness of the model to less “familiar” taxonomic branches?
How much is the performance gain over SOTA based on the fact that those methods were not designed with DNA in mind, but instead visual attributes or word vectors?
Text Suggestions & Nits:
In the introduction, your first sentence “Fine-grained species classification is essential in monitoring biodiversity.” is perhaps not well justified for a machine learning audience. It may help to provide a few examples of how it is used - what happens after you identify the species?
In line 30, “fine-grained classification of species” does this refer to classification generally across data type? Or from images specifically? Throughout these first paragraphs clarification about what data type and problem formulation you are referring to would help.
In line 53: “or can be simulated in a non-trivial way 54 to represent potentially existing species.” Can you further elaborate? How would you simulate the DNA metadata for an image of an unseen species in a robust way in order to provide that side information to your model?
Line 82: give a quick high-level description of the hubness problem
Define US, S, H in the caption of Tables 2&3 so that readers do not need to search the text to understand the table, it’s also a bit odd that table 3 appears before table 2 in the document |
NIPS | Title
Fine-Grained Zero-Shot Learning with DNA as Side Information
Abstract
Fine-grained zero-shot learning task requires some form of side-information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near perfect accuracy in species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset we show that DNA can be equally promising, yet in general a more accessible alternative than word vectors as a side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species we show that the Bayesian approach outperforms state-of-the-art by a wide margin.
1 Introduction
Fine-grained species classification is essential in monitoring biodiversity. Diversity of life is the central tenet to biology and preserving biodiversity is key to a more sustainable life. Monitoring biodiversity requires identifying living organisms at the lowest taxonomic level possible. The traditional approach to identification uses published morphological dichotomous keys to identify the collected sample. This identification involves a tedious process of manually assessing the presence or absence of a long list of morphological traits arranged at hierarchical levels. The analysis is often performed in a laboratory setting by a well-trained human taxonomist and is difficult to do at scale. Fortunately, advances in technology have addressed this challenge to some extent through the use of
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
DNA barcodes. DNA barcoding is a technique that uses a short section of DNA from a specific gene, such as cytochrome C oxidase I (COI), found in mitochondrial DNA, and offers specific information about speciation in living organisms and can achieve nearly perfect classification accuracy at the species level (26; 17).
As it is costly to obtain the label information for fine-grained image classification of species, Zero-Shot Learning (ZSL) that handles missing label information is a suitable task. In ZSL, side information is used to associate seen and unseen classes. Popular choices for side-information are manually annotated attributes (21; 12), word embeddings (41; 14; 27) derived from free-form text or the WordNet hierarchy (28; 2). It is often assumed that an exhaustive list of visual attributes characterizing all object classes (both seen and unseen) can be determined based only on seen classes. However, taking insects as our object classes, if no seen class species have antennae, the attribute list may not contain antenna, which may in fact be necessary to distinguish unseen species. In the United States alone, more than 40% of all insect species (>70,000) remain undescribed (42), which is a clear sign of the limitations of existing identification techniques that rely on visual attributes. Similarly, free-form text is unlikely to contain sufficiently descriptive information about fine-grained objects to generate discriminative vector embeddings. For example, tiger beetle is a class in the ImageNet dataset. However, the tiger beetle group itself contains thousands of known species and the Wikipedia pages for these species either do not exist or are limited to short text that does not necessarily contain any information about species’ morphological characteristics. WordNet hierarchy may not be useful either as most of the species names do not exist in WordNet.
Given that DNA information can be readily available for training (35; 36), species-level DNA information can be used as highly specific side information to replace high-level semantic information in ZSL. For seen classes, species-level DNA information can be obtained by finding the consensus nucleotide sequence among samples of a given species or by averaging corresponding sequence embeddings of samples. For unseen classes, species-level DNA information can be obtained from actual samples, if available, in the same way as seen classes, or can be simulated in a non-trivial way to represent potentially existing species.
Our approach uses DNA as side information for the first time for zero-shot classification of species. In fine-grained, large-scale species classification, no other side information can explain class dichotomy better than DNA, as new species are explicitly defined based on variations in DNA. The hierarchical Bayesian model leverages the implicit inter-species association of DNA and phenotypic traits and ultimately allows us to establish a Bayesian hierarchy based on DNA similarity between unseen and seen classes. We compare DNA against word representations for assessing class similarity and show that the Bayesian model that uses DNA to identify similar classes achieves favorable results compared to the version that uses word representations on a well-known ZSL benchmark species dataset involving slightly less than 200 bird species. In the particular case of an insect dataset with over 1000 species, when visual attributes or word representations may not offer feasible alternatives, we show that our hierarchical model that relies on DNA to establish class hierarchy significantly outperforms all other embedding-based methods and feature generating networks.
Our contributions are on three fronts. First, we introduce DNA as side information for fine-grained ZSL tasks, implement a Convolutional Neural Net (CNN) model to learn DNA barcode embeddings, and show that the embeddings are robust and highly specific for closed-set classification of species, even when training and test sets of species are mutually exclusive. We use the benchmark CUB dataset as a case study to show that DNA embeddings are competitive to word embeddings as side information. Second, we propose a fine-grained insect dataset involving 21, 212 matching image/DNA pairs from 578 genera and 1, 213 species as a new benchmark dataset and discuss the limitations of current ZSL paradigms for fine-grained ZSL tasks when there is no strong association between side information and image features. Third, we perform extensive studies to show that a simple hierarchical Bayesian model that uses DNA as side information outperforms state-of-the-art ZSL techniques on the newly introduced insect dataset by a wide margin.
2 Related Work
Zero-Shot Learning. Early ZSL literature is dominated by methods that embed image features into a semantic space and perform various forms of nearest neighbor search to do inference (14; 41; 1). As the dimensionality of semantic space is usually much smaller than the feature space this leads to the
hubness problem, where some classes become hubs and occur as the nearest neighbors of many samples. In an effort to alleviate the hubness problem, (50; 40) changed the direction of the embedding from semantic to image feature space. This was followed by a line of work that investigates bidirectional embedding between semantic and image spaces through a latent space (51; 43; 2; 32; 38).
In (25; 15), a new strategy of synthesizing features for unseen classes and converting the challenging ZSL problem into traditional supervised learning is introduced (23; 44; 9; 13; 48; 52; 29; 39; 4). Although feature generating networks (FGNs) currently achieve state-of-the-art results in ZSL, they suffer from the same problem as earlier lines of work in ZSL: hypersensitivity towards side information not strongly correlated with visual attributes. The vulnerability of both embedding and FGN-based methods toward sources of side information different than visual attributes, such as word vectors or WordNet hierarchy, is investigated in (2; 39; 44). Another limitation of FGNs is that features generated for unseen classes are significantly less dispersed than actual features due to the generator failing to span more than a small subset of modes available in the data. Recent deep generative models mitigate this problem by proposing different loss functions that can better explore inter-sample and inter-class relationships (3; 7; 8; 19; 45). However, these methods fail to scale well with an increasing number of classes with an especially high inter-class similarity (24).
Side Information in ZSL. Side information serves as the backbone of ZSL as it bridges the knowledge gap between seen and unseen classes. Earlier lines of work (22; 1) use visual attributes to characterize object classes. Although visual attributes achieve compelling results, obtaining them involves a laborious process that requires manual annotation by human experts not scalable to data sets with a large number of fine-grained object classes. When dealing with fine-grained species classification, apart from scalability, a more pressing obstacle is how to define subtle attributes potentially characteristic of species that have never been observed.
As an alternative to manual annotation, several studies (11; 14; 2; 46; 34; 5) proposed to learn side information that requires less effort and minimal expert labor such as textual descriptions, distributed text representations, like Word2Vec (27) and GloVe (33), learned from large unsupervised text corpora, taxonomical order built from a pre-defined ontology like WordNet (28), or even human gaze reaction to images (20). The accessibility, however, comes at the cost of performance degradation (2; 39). A majority of ZSL methods implicitly assume strong correlation between side information and image features, which is true for handcrafted attributes but less likely to be true for text representations or taxonomic orders. Consequently, all these methods experience significant decline in performance when side information is not based on visual attributes.
3 Barcode of Life Data and DNA Embeddings
In this study, we present the fine-grained INSECT dataset with 21, 212 matching image/DNA pairs from 1, 213 species (see Fig. 1 for sample images). Unlike existing benchmark ZSL datasets, this new
dataset uses DNA as side information1 and can be best characterized with the high degree of similarity among classes. Among the existing benchmark datasets, SUN contains the largest number of classes (717) but classes in SUN represent a wide range of scene categories related to transportation, indoor and outdoors, nature, underwater etc., and as such can be considered a relatively coarse-grained dataset compared to the INSECT dataset we are introducing in this study.
All insect images and associated DNA barcodes in our dataset come from the Barcode of Life Data System (BOLD) (35; 36). BOLD is an open-access database in which users can upload DNA sequences and other identifying information for any living organism on Earth. The database provides approximately 658 base pairs of the mitochondrial DNA barcode extracted from the cytochrome c oxidase I (COI) gene along with additional information such as country of origin, life-stage, order, family, subfamily, and genus/species names.
Data Collection. We collected image/DNA pairs of insects that originate from three orders: Diptera (true flies), Coleoptera (beetles) and Hymenoptera (sawflies, wasps, bees, and ants). While the dataset is in general clean, manual effort was devoted to further curate the dataset. Only cases with images and matching DNA barcodes of adult insects are included. Images from each species were visually inspected and poor-quality images were deleted. Only species with more than ten instances were included. The final dataset consisted of 21,212 images and 1,213 insect species, of which 254 belong to Diptera (133 genera), 564 to Coleoptera (315 genera) and 395 to Hymenoptera (130 genera). We extracted image features, namely image embeddings, using a pre-trained (on ImageNet 1000 classes) ResNet101 model (16). Images are resized to 256 × 256 and center-cropped before being fed to the ResNet model. No other pre-processing is applied to the images.
Data Split. We randomly chose 10% of all species as unseen classes for the test set leading to 1, 092 seen and 121 unseen classes. Similarly, we randomly chose 10% of the 1, 092 training classes as unseen classes for the validation set. Samples from seen classes were split by a 80/20 ratio in a stratified fashion to create seen portion of the train and test datasets. In the dataset there were a few hundred cases where multiple image views (dorsal, ventral, and lateral) of the same insect were present. To avoid splitting these cases between train and test, we made sure all instances of the same insect are included in the
training set. As a result, 12 of the 1, 092 seen classes in the training set were not represented in the test set. Our dataset splits are summarized in Table 1.
DNA Embeddings. Although this is the first time DNA barcodes are used as side information in the ZSL domain, there has been some work investigating vector embeddings for DNA sequences. Authors of (31) trained a CNN model to do binary DNA sequence classification, treating sequences as text data. Imitating amino acid structure, each triplet of base pairs is treated as a word and sequences are converted into a one-hot vector representation. Taking (27) as the base, (30) trained a shallow neural network on human genome data to generate representations for k-mers. Unlike these techniques, we deal with DNA barcodes represented by nucleotide sequences and aim to convert the entire character sequence into a vector embedding useful for species classification with more than 1,000 classes.
1Please refer to supplementary material for discussion on limitations of using DNA as side information
Most recently, DNABERT (18) adapted the powerful text transformer model (10) to a genomic DNA setting and generated vector embeddings for long DNA sequences.
In this paper, we trained a CNN model to learn a vector representation of DNA barcodes in the Euclidean space. First, the consensus sequence of all DNA barcodes in the training set with 658bp is obtained. Then, all sequences are aligned with respect to this consensus sequence using a progressive alignment technique implemented in MATLAB R2020A (Natick, MA, USA). A total of five tokens are used, one for each of the four bases, Adenine, Guanine, Cytosine, Thymine, and one for others. All ambiguous and missing symbols are included in the others token. In pre-processing, barcodes are one hot encoded into a 658x5 2D array, where 658 is the length of the barcode sequence (median of the nucleotide length of the DNA data).
To train the CNN model, a balanced subset of the training data is subsampled, where each class size is capped at 50 samples. The CNN is trained with 14,723 barcodes from 1,092 classes. No barcodes from the 121 unseen classes are employed during model training. The training set is further split into train (80%) and validation (20%) sets by random sampling. We used 3 blocks of convolutional layers, each followed by batch normalization and 2D max-pooling. The output of the third convolutional layer is flattened and batch normalized before feeding the data into a fully-connected layer with 500 units. The CNN architecture is completed by a softmax layer. We used the output of the fully-connected layer as the embeddings for DNA. Class-level attributes are computed by the mean embedding of each class. The DNA-based attribute extraction is illustrated in Figure 2. The details of the model architecture are depicted in Figure 3 in the supplementary material. We used the ADAM optimizer for training the model for five epochs with a batch size of 32 (with a step-decay initial learning rate = 0.0005 and drop factor = 0.5, β1 = 0.9, β2 = 0.999). The model is developed in Python with the Tensorflow-Keras API.
Predictive accuracy of DNA embeddings. Although the insect barcodes we used are extracted from a single gene (COI) of the mitochondrial DNA with a relatively short sequence length of 658 base pairs, they are proven to have exceptional predictive accuracy; the CNN model achieves a 99.1% accuracy on the held-out validation set. Note that we only used data from the seen training classes to train the CNN model. In order to validate the generalizability of embeddings to unseen data, we trained a simple K-Nearest Neighbor classifier (K = 1) on a randomly sampled 80% of the DNA embeddings of unseen classes and tested on the remaining 20%. The classifier had perfect accuracy for all but one of the 121 classes, with an overall accuracy of 99.8%.
In addition to our CNN model, we explored a DNABERT (18) model for converting DNA barcodes to vector embeddings. The pretrained DNABERT model achieves around 85% (vs. 99% from the CNN) top-1 KNN accuracy (averaged over 10 runs) on the unseen classes. Pretrained DNABERT can be fine-tuned for species classification; however, because of the vast number of parameters to tune, each run takes a few hours on a relatively sophisticated GPU, significantly longer than CNN training. Similarly, a simple LSTM model with half as many parameters as the CNN model is almost 5 times slower than the CNN model and requires more epochs to reach a reasonable accuracy. Therefore, we use a simple 3-layer CNN that trains in an hour and achieves almost perfect top-1 KNN accuracy.
To demonstrate that the approach can be easily extended to larger members of the animal kingdom, we compiled approximately 26, 000 DNA barcodes from 1, 047 bird species to train another CNN model (ceteris paribus) to learn the DNA embeddings for CUB dataset (see the Supp. materials for details). The CNN model achieved a compelling 95.60% on the held-out validation set.
4 Bayesian Zero-shot Learning
Object classes in nature already tend to emerge at varying levels of abstraction, but the class hierarchy is more evident when classes represent species and species are considered the lowest taxonomic rank of living organisms. We build our approach on a two layer hierarchical Bayesian model that was previously introduced and evaluated on benchmark ZSL datasets with promising results (6). The model assumes that there are latent classes that define the class hierarchy in the image space and uses side information to build the Bayesian hierarchy around these latent classes. Two types of Bayesian priors are utilized in the model: global and local. As the name suggests, global priors are shared across all classes, whereas local priors represent latent classes, and are only shared among similar classes. Class similarity is evaluated based on side information in the Euclidean space. Unlike
standard Bayesian models where the posterior predictive distribution (PPD) forms a compromise between prior and likelihood, this approach utilizes posterior predictive distributions to blend local and global priors with data likelihood for each class. Inference for a test image is performed by evaluating posterior predictive distributions and assigning the sample to the class that maximizes the posterior predictive likelihood.
Generative Model. The two-layer generative model is given below,
x_{jik} ∼ N(µ_{ji}, Σ_j),   µ_{ji} ∼ N(µ_j, Σ_j κ_1^{-1}),   µ_j ∼ N(µ_0, Σ_j κ_0^{-1}),   Σ_j ∼ W^{-1}(Σ_0, m)    (1)
where j, i, k represent indices for local priors, classes, and image instances, respectively. We assume that image feature vectors xjik come from a Gaussian distribution with mean µji and covariance matrix Σj , and are generated independently conditioned not only on the global prior but also on their corresponding local priors.
Each local prior is characterized by the parameters µj and Σj. µ0 is the mean of the Gaussian prior defined over the mean vectors of local priors, and κ0 is a scaling constant that adjusts the dispersion of the means of local priors around µ0. A smaller value for κ0 suggests that means of the local priors are expected to be farther apart from each other, whereas a larger value suggests they are expected to be closer. On the other hand, Σ0 and m dictate the expected shape of the class distributions, as under the inverse Wishart distribution assumption the expected covariance is E(Σ|Σ0,m) = Σ0/(m − D − 1), where D is the dimensionality of the image feature space. The minimum feasible value of m is equal to D + 2, and the larger m is, the less individual covariance matrices will deviate from the expected shape. The hyperparameter κ1 is a scaling constant that adjusts the dispersion of the class means around the centers of their corresponding local priors. A larger κ1 leads to smaller variations in class means relative to the mean of their corresponding local prior, suggesting a fine-grained relationship among classes sharing the same local prior. Conversely, a smaller κ1 dictates coarse-grained relationships among classes sharing the same local prior. To preserve conjugacy of the model, the proposed model constrains classes sharing the same local prior to share the same covariance matrix Σj. Test examples are classified by evaluating posterior predictive distributions (PPD) of seen and unseen classes. As illustrated in Fig. 3, the PPD in general incorporates three sources of information: the data likelihood that arises from the current class, the local prior that results from other classes sharing the same local prior as the current class, and the global prior defined in terms of hyperparameters. PPDs for seen classes include the global prior and data likelihood and are derived in the form of a Student-t distribution, whereas for unseen classes the data likelihood does not exist as no image samples are available for these classes. We leave the details of derivations to the supplementary material.
Surrogate classes. According to the generative model in (1), groupings among classes are determined based on local priors. Thus, once estimated from seen classes, local priors can be used to define surrogate classes for unseen ones during inference. Associating each unseen class with a unique local prior forms the basis of our approach. The local prior for each unseen class is defined by finding the K seen classes most similar to that unseen class. The similarity is evaluated by computing the L2 (Euclidean) distance between class-level attribute or embedding vectors (φ) obtained from the side information available. Once a local prior is defined for each unseen class the PPD for the corresponding surrogate class can be derived in terms of only global and local priors as in equation (2). Test examples are classified based on class-conditional likelihoods evaluated for both seen and surrogate classes.
$$P(x \mid \{\bar{x}_{ji}, S_{ji}\}_{t_i=j}, \mu_0, \kappa_0, \kappa_1) = T(x \mid \bar{\mu}_j, \bar{\Sigma}_j, \bar{v}_j); \qquad \bar{\mu}_j = \frac{\sum_{i: t_i=j} \frac{n_{ji}\kappa_1}{n_{ji}+\kappa_1}\, \bar{x}_{ji} + \kappa_0 \mu_0}{\sum_{i: t_i=j} \frac{n_{ji}\kappa_1}{n_{ji}+\kappa_1} + \kappa_0},$$
$$\bar{v}_j = \sum_{i: t_i=j} (n_{ji} - 1) + m - D + 1, \qquad \bar{\Sigma}_j = \frac{\big(\Sigma_0 + \sum_{i: t_i=j} S_{ji}\big)(\tilde{\kappa}_j + 1)}{\tilde{\kappa}_j \bar{v}_j} \tag{2}$$
where x̄ji, Sji, and nji represent the sample mean, scatter matrix, and size of class i associated with local prior j, respectively, and κ̃j is defined as in Eq. (30) in the supplementary material.²
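For concreteness, a minimal NumPy sketch of evaluating the surrogate-class PPD parameters in Eq. (2), together with the corresponding multivariate Student-t log-density used for classification, is given below. κ̃j is passed in as a precomputed scalar since its definition (Eq. (30)) is deferred to the supplementary material; the function and variable names are ours.

```python
import numpy as np
from scipy.special import gammaln

def surrogate_ppd_params(xbars, scatters, counts, Sigma0, mu0, kappa0, kappa1, m, kappa_tilde):
    """Compute (mu_bar_j, Sigma_bar_j, v_bar_j) of Eq. (2) for one local prior j.
    xbars: list of class means, scatters: list of scatter matrices, counts: list of class sizes."""
    D = mu0.shape[0]
    w = np.array([n * kappa1 / (n + kappa1) for n in counts])  # n_ji * kappa1 / (n_ji + kappa1)
    mu_bar = (sum(wi * xb for wi, xb in zip(w, xbars)) + kappa0 * mu0) / (w.sum() + kappa0)
    v_bar = sum(n - 1 for n in counts) + m - D + 1
    Sigma_bar = (Sigma0 + sum(scatters)) * (kappa_tilde + 1) / (kappa_tilde * v_bar)
    return mu_bar, Sigma_bar, v_bar

def multivariate_t_logpdf(x, mu, Sigma, v):
    """Log-density of a multivariate Student-t with location mu, scale Sigma, and dof v."""
    D = mu.shape[0]
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)
    _, logdet = np.linalg.slogdet(Sigma)
    return (gammaln((v + D) / 2.0) - gammaln(v / 2.0)
            - 0.5 * (D * np.log(v * np.pi) + logdet)
            - 0.5 * (v + D) * np.log1p(quad / v))

# A test image x would be assigned to the (seen or surrogate) class whose PPD
# gives the largest multivariate_t_logpdf(x, mu_bar, Sigma_bar, v_bar).
```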
Rationale for the hierarchical Bayesian approach and limitations. We believe that the hierarchical Bayesian model is ideally suited for fine-grained zero-shot classification of species when DNA is used as side information for the following reasons. The performance of the model in identifying unseen classes depends on how robustly the local priors can be estimated. This in turn depends on whether or not the set of seen classes contains any classes similar to unseen ones. As the number of seen classes increases, seen classes become more representative of their local priors, more robust estimates of local priors can be obtained, and thus unseen classes sharing the same local priors as seen classes can be more accurately identified. On the other hand, if the class-level side information is not specific enough to uniquely characterize a large number of classes, then the model cannot evaluate class similarity accurately, and local priors are estimated based on potentially incorrect associations between seen and unseen classes. In this case, having a large number of seen classes available may not necessarily help. Instead, highly specific DNA as side information comes into play for accurately evaluating class similarity. If a unique local prior can eventually be defined for each unseen class, then unseen classes can be classified during test time without the model having to learn the mapping between side information and image features beforehand. Uniqueness of the local prior can only be ensured when the number of seen classes is large compared to the number of unseen classes. Thus, the ratio of the number of seen to unseen classes becomes the ultimate determinant of performance for the hierarchical Bayesian model: the higher this ratio, the higher the model's accuracy. An experiment demonstrating this effect is performed in Section 5.3.
If the same set of K classes is found to be the most similar for two different unseen classes, then these two unseen classes will inherit the same local prior and thus they will not be statistically identifiable during test time. The likelihood of such a tie happening for fine-grained data sets quickly decreases as the number of classes increases. In practice we deal with this problem by replacing the least similar of the K most similar seen classes by the next most similar seen class for one of the unseen classes.
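A small NumPy sketch of this nearest-seen-class selection, including the tie-handling heuristic just described, is given below; the array layout and function name are ours, not taken from the released code.

```python
import numpy as np

def build_surrogate_neighbors(phi_seen, phi_unseen, K):
    """For each unseen class, return the indices of the K most similar seen classes,
    measured by L2 distance between class-level attribute/embedding vectors phi."""
    # phi_seen: (num_seen, d_phi), phi_unseen: (num_unseen, d_phi)
    dists = np.linalg.norm(phi_unseen[:, None, :] - phi_seen[None, :, :], axis=-1)
    neighbors = np.argsort(dists, axis=1)[:, :K]  # columns sorted from most to least similar

    # Tie handling: if two unseen classes receive identical neighbor sets, replace the
    # least similar neighbor of the later one with the next most similar seen class.
    used = {}
    for u in range(neighbors.shape[0]):
        key = tuple(sorted(neighbors[u]))
        if key in used:
            ranked = np.argsort(dists[u])
            replacement = next(c for c in ranked if c not in neighbors[u])
            neighbors[u, -1] = replacement
        else:
            used[key] = u
    return neighbors
```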
5 Experiments
In this section we report results of experiments with two species datasets that use DNA as side information. Details of training and hyperparameter tuning are provided in the supplementary material along with the source code of our methods.
5.1 Experiments with the INSECT dataset
We compare our model (BZSL) against state-of-the-art (SotA) ZSL methods proven to be most competitive on benchmark ZSL datasets that use visual attributes or word vector representations as side information. Selected SotA models represent various ZSL categories: (1) Embedding methods with traditional (1; 37) and end-to-end neural network (49) approaches, (2) FGNs using a VAE (39) and a GAN (44), and (3) an end-to-end few-shot learning approach extended to ZSL (43). Table 2 displays seen (S) and unseen (US) accuracies and their harmonic mean (H) on the
INSECT data using DNA as the side information. Results suggest that the large number of seen classes, along with the highly specific nature of DNA information in characterizing classes, particularly favors the Bayesian method, allowing it to more accurately estimate local priors and characterize surrogate classes. The harmonic mean achieved by the proposed method is 52% higher than the harmonic mean achieved by the second-best performing technique. Similar levels of improvement are maintained on both seen and unseen class accuracies. The next top performers are FGNs. CADA-VAE uses a VAE whereas LsrGan utilizes a GAN to synthesize unseen class features; both then train a LogSoftmax classifier for inference. Lower unseen class accuracies suggest that FGNs struggle to synthesize meaningful features in the image space. On the other hand, CRNet, which uses an end-to-end neural network to learn the embedding between semantic and image spaces, performs slightly worse than FGNs. Non-linear embedding also appears to work better than linear (ESZSL) and bilinear (ALE) ones for this specific dataset. RelationNet is among the lowest performers, as the method is explicitly designed for few-shot learning and expects the side information to be strongly correlated with image features. The weak association between side information and image features affects the performance of both FGNs and embedding methods, but the traditional embedding methods suffer the most.
²The code and dataset are available at https://github.com/sbadirli/Fine-Grained-ZSL-with-DNA
5.2 Experiments with the benchmark CUB dataset
To demonstrate the utility of DNA-based attributes in a broader spectrum of species classification, we procured DNA barcodes, again from the BOLD system, for bird species in the CUB dataset. For this experiment, we derived 400-dimensional embeddings in order to have the same size as the word vectors and eliminate the attribute size effect. There were 6 classes, 4 seen and 2 unseen, that did not have DNA barcodes extracted from the COI gene in the BOLD system. These classes were excluded from the dataset, but the proposed split from (47) is otherwise preserved.
The results shown in Table 3 validate our hypothesis that when side information is not strongly correlated with visual characteristics of object classes (like in word vectors or DNA) both embedding methods and FGNs display significant performance degradation. With the exception of the proposed Bayesian model, word vector representation yields better accuracy than DNA-based attributes for all models. This phenomenon can be explained by our observation that text fragments related to common animals/birds in the Wikipedia/Internet often include some morphological traits of the underlying species. Hence, word vector representation is expected to have higher degree of correlation to visual attributes than DNA information. Our model produces the best results, 34.97% vs 32.45% when the side information is not derived from visual characteristics of classes. This outcome validates the robustness of the Bayesian model to diverse sources of side information and emphasizes the need for more robust FGN or embedding based models in more realistic scenarios where hand-crafted visual attributes are not feasible.
5.3 The effect of the number of seen classes on performance
Local priors are central to the performance of the hierarchical Bayesian model. Here, we perform experiments to show that as the number of seen classes increases while the number of unseen classes is fixed, each unseen class can be associated with a larger pool of candidate seen classes and more informative local priors can potentially be obtained, which in turn leads to more accurate identification of unseen classes. To demonstrate this effect we run two experiments. In the first experiment we use the same set of unseen classes as in Section 5.1 but gradually increase the number of seen classes used for training. In the second experiment we double the number of unseen classes and gradually include the remaining classes into training as seen classes. The first experiment is also performed for CADA-VAE. LsrGan is skipped for this experiment due to its long training time. To account for random subsampling of seen classes, each experiment is repeated five times and error bars are included in each plot.

[Figure 4: The effect of the number of seen classes on the performance of BZSL and CADA-VAE; accuracy (harmonic mean H, seen acc., unseen acc.) is plotted against the percentage of classes used as seen. Panels: (a) BZSL results in the original setup (Y^str = 1,092 and Y^u = 121); (b) BZSL results with Y^str = 983 and Y^u = 230; (c) CADA-VAE in the original setup (Y^str = 1,092 and Y^u = 121). Each experiment is repeated five times to account for random subsampling of seen classes.]

There is a clear trend in these results that further highlights the intuition behind the hierarchical Bayesian model and explains why this model is well-suited for fine-grained ZSL. When 10% of the classes are used as unseen, unseen class accuracy improves with an increasing number of seen classes until it flatlines beyond the 60% mark, while seen class accuracy always remains around the same level (see Fig. 4a). When 20% of the classes are used as unseen, no flatlining effect in unseen class accuracy is observed even at the 100% mark, which suggests that there is still room for improvement in unseen class accuracy if more seen classes become available (see Fig. 4b). For CADA-VAE, unseen class accuracy initially improves and then flatlines beyond the 80% mark, but this improvement comes at the expense of significant degradation in seen class accuracy, which suggests that as the number of seen classes increases, the generated features further confound the classifier, as would be expected of an FGN for a fine-grained dataset.
6 Conclusions
For the first time in the ZSL literature, we use DNA as side information and demonstrate its utility in evaluating class similarity for the purpose of identifying unseen classes in a fine-grained ZSL setting. On the CUB dataset, despite being trained with fewer than 30,000 very short sequences, we find DNA embeddings to be highly competitive with word vector representations trained on massive text corpora. We emphasize the importance of DNA as side information in zero-shot classification of highly fine-grained species datasets involving thousands of species, and on the INSECT dataset, show that a simple Bayesian model that readily exploits inherent class hierarchy with the help of DNA can significantly outperform highly complex models. We show that SotA ZSL methods that take the presence of an explicit association between visual attributes and image features for granted suffer significant performance degradation when non-visual attributes such as word vectors and WordNet are used as side information. The same effect is observed with DNA embeddings as well. Although visual attributes tend to be the best alternative as side information for a coarse-grained species classification task, they quickly lose their appeal with an increasing number of classes. Considering the tens of thousands of described species and the even larger number of undescribed species, DNA seems to be the only feasible source of side information for large-scale, fine-grained zero-shot classification of species.
These favorable results by a simpler model suggest that as the number of classes increases along with inter-class similarity, the complexity of the mapping between side information and image attributes emerges as a major bottleneck at the forefront of zero-shot classification. A promising future research avenue appears to be implementing hierarchically organized FGNs where each subcomponent only operates with a small subset of seen classes all sharing the same local prior.
This work does not present any foreseeable negative societal consequences beyond those already associated with generic machine learning classification algorithms.
Acknowledgements: This research was sponsored by the National Science Foundation (NSF) under Grant Number IIS-1252648 (CAREER). This work has been partially funded by the ERC (853489 - DEXIM) and by the DFG (2064/1 – Project number 390727645). GM acknowledges support from NSF-ATD grant 2124313. The content is solely the responsibility of the authors and does not necessarily represent the official views of NSF. | 1. What is the focus and contribution of the paper on zero-shot species classification?
2. What are the strengths of the proposed approach, particularly in terms of utilizing DNA as side information?
3. What are the weaknesses of the paper regarding its technique, organization, clarity, and comparisons with other works?
4. How does the reviewer assess the novelty and effectiveness of the proposed CNN model for calculating DNA embeddings?
5. What are the suggestions for improving the paper's organization, clarity, and experimental settings? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes to use DNA as the side information for zero-shot species classification. The idea is novel and biologically sound. Experiments show that DNA embeddings achieve comparable performance as text embeddings when serving as side information.
Review
Advantages:
This paper is the first work that utilizes DNA as side information for Zero-shot species classification, and the authors clearly explain the reasons and insights behind this.
This paper introduces a new dataset for Zero-shot species classification and provides codes to reproduce their experiments.
Weaknesses:
On the technique side, the paper's main contribution is developing a CNN model to calculate DNA embeddings. However, there are many existing works (e.g., dna2vec, Gene2vec, DNABERT, etc.) that calculate embeddings for DNA sequences. This paper doesn't mention, use or compare with them. Thus, it is unclear how novel/effective the proposed model is.
The paper is not well-written and relatively hard to follow.
It is not well-organized. In section 3, the authors introduce data collection, data processing, DNA embedding calculation, and some experimental details and results on DNA embedding, while in Section 4, they introduce the rest part of the proposed method. It may be better to re-organize this paper.
The Experimental setting and baseline models are not clearly explained. The tables are hard to understand. It can be very helpful to include more information in the table caption (e.g., explain the experimental setting, evaluation metrics, and meaning of the abbreviations).
Other issues:
Table 3 should appear after Table 2.
Caption of a table should appear above it.
The most important reference "[5]" that the proposed model is mainly based on is missing.
......
As mentioned in Line 189-190, the Bayesian Zero-shot Learning introduced in Section 4 is previously proposed in the missing reference. It is unclear how this paper relates to that one. |
NIPS | Title
Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
Abstract
Neural ordinary differential equations (ODEs) have attracted much attention as continuous-time counterparts of deep residual neural networks (NNs), and numerous extensions for recurrent NNs have been proposed. Since the 1980s, ODEs have also been used to derive theoretical results for NN learning rules, e.g., the famous connection between Oja’s rule and principal component analysis. Such rules are typically expressed as additive iterative update processes which have straightforward ODE counterparts. Here we introduce a novel combination of learning rules and Neural ODEs to build continuous-time sequence processing nets that learn to manipulate short-term memory in rapidly changing synaptic connections of other nets. This yields continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our novel models outperform the best existing Neural Controlled Differential Equation based models on various time series classification tasks, while also addressing their fundamental scalability limitations. Our code is public.1
1 Introduction
Neural ordinary differential equations (NODEs) [1] have opened a new perspective on continuoustime computation with neural networks (NNs) as a practical framework for machine learning based on differential equations. While the original approach—proposed as a continuous-depth version of deep feed-forward residual NNs [2, 3]—only covers autonomous ODEs entirely determined by the initial conditions, more recent extensions deal with sequential data (reviewed in Sec. 2.1) in a way similar to what is typically done with standard recurrent NNs (RNNs) in the discrete-time scenario. This potential for continuous-time (CT) sequence processing (CTSP) is particularly interesting, since there are many applications where datapoints are observed at irregularly spaced time steps, and CT sequence models might better deal with such data than their discrete-time counterparts. However, the development of NODEs for CTSP is still at an early stage. For example, a popular approach of Neural Controlled Differential Equations [4] (NCDEs; also reviewed in Sec. 2.1) has in practice only one architectural variant corresponding to the “vanilla” RNN [5]. Discrete-time processing, however, exploits many different RNN architectures as well as Transformers [6].
While it is not straightforward to transform the standard Transformer into a CT sequence processor, we’ll show that the closely related Fast Weight Programmers (FWPs) [7, 8, 9, 10] and linear Transformers [11] (reviewed in Sec. 2.3) have direct CT counterparts. In FWPs, temporal processing of short-term memory (stored in fast weight matrices) uses learnable sequences of learning rules. Hence CT versions of FWPs will require differential equations to model the learning rules. This relates to a trend of the 1980s/90s. Among many old connections between NNs and dynamical systems described by ODEs (e.g., [12, 13, 14, 15, 16, 17]), the theoretical analysis of NN learning rules in the ODE
1https://github.com/IDSIA/neuraldiffeq-fwp
framework has been particularly fruitful. Consider the famous example of Oja’s rule [18] (briefly reviewed in Sec. 2.2): many results on its stability, convergence, and connection to Principal Component Analysis [19, 20] were obtained using its ODE counterpart (e.g., [18, 21, 22, 23, 24, 25, 26, 27]).
Here we propose a novel combination of Neural ODEs and learning rules, to obtain a new class of sequence processing Neural ODEs which are continuous-time counterparts of Fast Weight Programmers and linear Transformers. The resulting models are general-purpose CT sequence-processing NNs, which can directly replace the standard Neural CDE models typically used for supervised CT sequence processing tasks. To the best of our knowledge, there is no previous work on Neural ODEbased Transformer families for CT sequence processing, despite their popularity in important types of discrete time computation such as Natural Language Processing and beyond. We also show how our approach solves the fundamental limitation of existing Neural CDEs in terms of model size scalability.
We conduct experiments on three standard time series classification tasks covering various scenarios (regularly sampled, irregularly sampled with missing values, and very long time series). We demonstrate that our novel models outperform existing Neural ODE-based sequence processors, in some cases by a large margin.
2 Background
We briefly review the main background concepts this work builds upon: NODEs for sequence processing (Sec. 2.1), NN learning rules and their connection to ODEs (Sec. 2.2), and Fast Weight Programmers whose memory update is based on learning rules controlled by an NN (Sec. 2.3).
2.1 Neural ODEs (NODEs) and Their Extensions for Sequence Processing
Here we review the core idea of NODEs [1]. In what follows, let n, N , d, din denote positive integers, T be a positive real number, and θ denote an arbitrary set of real numbers. We consider a residual layer (say, the n-th layer with a dimension d) in an N -layer deep NN which transforms an input hn−1 ∈ Rd to an output hn ∈ Rd with a parameterised function fθ : Rd → Rd as follows:
hn = hn−1 + fθ(hn−1) (1)
This coincides [28, 29, 30, 31, 32, 33, 34, 1] with the following equation for ϵ = 1
h(tn) = h(tn−1) + ϵfθ(h(tn−1)) (2)
where h : [0, T ] → Rd is a function such that h(tn) = hn holds for all n : 0 ≤ n ≤ N and tn ∈ [0, T ] such that tn − tn−1 = ϵ > 0 if n ≥ 1. This equation is a forward Euler discretisation of the ordinary differential equation defined for all t ∈ (t0, T ] as
$$h'(t) = f_\theta(h(t)) \quad\text{or}\quad h(t) = h(t_0) + \int_{t_0}^{t} f_\theta(h(s))\, ds \tag{3}$$
where h′ denotes the first order derivative. This establishes the connection between the ODE and the deep residual net with parameters θ shared across layers2: given the initial condition h(t0) = h0, the solution to this equation evaluated at time T , i.e., h(T ), corresponds to the output of this deep residual NN, which can be computed by an ODE solver. We denote it as a function ODESolve taking four variables: h(T ) = ODESolve(fθ,h0, t0, T ). During training, instead of backpropagating through the ODE solver’s operations, the continuous adjoint sensitivity method [35] (which essentially solves another ODE but backward in time) can compute gradients with O(d) memory requirement, constant w.r.t. T [1].
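As a crude but concrete illustration of the ODESolve interface used above, the sketch below integrates h′(t) = fθ(h(t)) with a fixed-step Euler scheme in PyTorch. Practical Neural ODE implementations use adaptive solvers and the continuous adjoint method (e.g., via the torchdiffeq package); the step count here is an arbitrary placeholder.

```python
import torch
from torch import nn

def odesolve_euler(f_theta: nn.Module, h0: torch.Tensor, t0: float, T: float, num_steps: int = 100):
    """Fixed-step Euler approximation of h(T) for h'(t) = f_theta(h(t)), h(t0) = h0."""
    h, dt = h0, (T - t0) / num_steps
    for _ in range(num_steps):
        h = h + dt * f_theta(h)  # forward Euler step; cf. Eq. (2) with eps = dt
    return h

# Minimal usage: a small MLP as the vector field.
d = 8
f_theta = nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, d))
h0 = torch.randn(1, d)
hT = odesolve_euler(f_theta, h0, t0=0.0, T=1.0)
print(hT.shape)  # torch.Size([1, 8])
```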
A natural next step is to extend this formulation for RNNs, i.e., the index n now denotes the time step, and we assume an external input xn ∈ Rdin at each step n to update the hidden state hn−1 to hn as
hn = fθ(hn−1,xn) (4)
Depending on the property of external inputs (xn)Nn=1 = (x1, ...,xN ), there are different ways of defining NODEs for sequence processing. We mainly distinguish three cases.
2Or we make θ dependent of t such that parameters are “depth/layer-dependent” as in standard deep nets.
First, when there is a possibility to construct a differentiable control signal x : t 7→ x(t) ∈ Rdin for t ∈ [t0, T ] from the inputs (xn)Nn=1; an attractive approach by Kidger et al. [4] handles the corresponding dynamics in a neural controlled differential equation (NCDE):
$$h(t) = h(t_0) + \int_{t_0}^{t} F_\theta(h(s))\, dx(s) = h(t_0) + \int_{t_0}^{t} F_\theta(h(s))\, x'(s)\, ds \tag{5}$$
where Fθ is a parameterised function (typically a few-layer NN) which maps a vector h(s) ∈ Rd to a matrix Fθ(h(s)) ∈ Rd×din (we already relate this component to Recurrent Fast Weight Programmers below) and thus, Fθ(h(s))dx(s) denotes a matrix-vector multiplication. There are several methods to construct the control x : [t0, T ] → Rdin based on the discrete data points (xn)Nn=1, such that its differentiability is guaranteed. In this work, we follow Kidger et al. [4] and mainly use natural cubic splines over all data points (which, however, makes it incompatible with auto-regressive processing); for better alternatives, we refer to Morrill et al. [36]. Since the final equation is again an NODE with a vector field of form gθ,x′(s,h(s)) = Fθ(h(s))x′(s), all methods described above are applicable: ODE solver for evaluation and continuous adjoint method for memory efficient training. A notable extension of Neural CDEs is the use of log-signatures to sub-sample the input sequence [37]. The resulting NCDEs are called neural rough differential equations (NRDEs), which are relevant for processing long sequences. One fundamental limitation of the NCDEs above is the lack of scalability of Fθ : Rd → Rd×din . For example, if we naively parameterise Fθ using a linear layer, the size of its weight matrix is d2 ∗ din which quadratically increases with the hidden state size d. Previous attempts [38] do not successfully resolve this issue without performance degradation. In Sec. 5, we’ll discuss how our models (Sec. 3.2) naturally circumvent this limitation while remaining powerful NCDEs.
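To make the controlled dynamics of Eq. (5) concrete, here is a minimal PyTorch sketch of an NCDE vector field and a fixed-step Euler discretisation. The control derivatives x′(s) are assumed to be supplied externally (e.g., from an offline cubic-spline fit of the observations); network sizes are placeholders. Note how the final linear layer of Fθ must output d · din values, which is exactly the scalability bottleneck discussed above.

```python
import torch
from torch import nn

class CDEFunc(nn.Module):
    """F_theta: maps h(s) in R^d to a matrix in R^{d x d_in} (cf. Eq. (5))."""
    def __init__(self, d: int, d_in: int):
        super().__init__()
        self.d, self.d_in = d, d_in
        self.net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, d * d_in))

    def forward(self, h):  # h: (batch, d)
        return self.net(h).view(-1, self.d, self.d_in)

def ncde_euler(F_theta: CDEFunc, h0: torch.Tensor, x_prime: torch.Tensor, dt: float):
    """Euler integration of dh = F_theta(h(s)) x'(s) ds.
    x_prime: (batch, num_steps, d_in) precomputed control derivatives."""
    h = h0
    for step in range(x_prime.shape[1]):
        dh = torch.bmm(F_theta(h), x_prime[:, step, :].unsqueeze(-1)).squeeze(-1)
        h = h + dt * dh
    return h
```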
On a side note, the NCDE is often referred to as the “continuous-time analogue” to RNNs [4], but this is a bit misleading: discrete-time RNN equations corresponding to the continuous-time Eq. 5 do not reflect the standard RNN of Eq. 4 but:
$$h_n = h_{n-1} + W_{n-1}(x_n - x_{n-1}) \tag{6}$$
$$W_n = F_\theta(h_n) \tag{7}$$
where one network (Eq. 6) learns to translate the variation of inputs (xn − xn−1) into a change in the state space, using a weight matrix Wn−1 which itself is generated by another network (Fθ : Rd → Rd×din ; Eq. 7) on the fly from the hidden state. This model is thus a kind of Recurrent FWP [39, 40, 10].
Second, even if x is not differentiable, having access to (piece-wise) continuous x defined and bounded over an interval of interest [t0, T ] is enough to define a sequence processing NODE, by making it part of the vector field:
$$h(t) = h(t_0) + \int_{t_0}^{t} f_\theta(h(s), x(s))\, ds \tag{8}$$
where the vector field fθ(h(t),x(t)) = gθ,x(t,h(t)) can effectively be evaluated at any time t ∈ [t0, T ]. We refer to this second approach as a direct NODE method. While Kidger et al. [4] theoretically and empirically show that this approach is less expressive than the NCDEs above, we’ll show how in our case of learning rules one can derive interesting models within this framework, which empirically perform on par with the CDE variants.
Finally, when no control function with one of the above properties can be constructed, a mainstream approach dissociates the continuous-time hidden state update via ODE for the time between two observations (e.g., Eq. 9 below) from integration of the new data (Eq. 10 below). Notable examples of this category include ODE-RNNs [41, 42] which transform the hidden states hn−1 to hn for each observation xn available at time tn as follows:
$$u_n = \mathrm{ODESolve}(f_{\theta_1}, h_{n-1}, t_{n-1}, t_n) \tag{9}$$
$$h_n = \phi_{\theta_2}(x_n, u_n) \tag{10}$$
where Eq. 9 autonomously updates the hidden state between two observations using a function fθ1 parameterised by θ1, while in Eq. 10, function ϕθ2 parameterised by θ2 integrates the new input xn into the hidden state. In Latent ODE-RNN [41], a popular extension of this approach to the variational setting, the initial recurrent state h0 is sampled from a prior (during training, an additional encoder is trained to map sequences of inputs to parameters of the prior). While this third case is not our focus, we’ll also show how to use FWPs in this scenario in Sec. 3.3 for the sake of completeness.
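A minimal sketch of the ODE-RNN pattern of Eqs. (9)-(10) is given below; a fixed-step Euler loop stands in for ODESolve, and the GRU cell used for φθ2 is an arbitrary illustrative choice, not necessarily the parameterisation used in the cited works.

```python
import torch
from torch import nn

class ODERNNCell(nn.Module):
    """One ODE-RNN step: evolve the state between observations (Eq. 9),
    then fold in the new observation (Eq. 10)."""
    def __init__(self, d_in: int, d: int, euler_steps: int = 20):
        super().__init__()
        self.f_theta1 = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, d))
        self.phi_theta2 = nn.GRUCell(d_in, d)  # one of many possible choices for phi
        self.euler_steps = euler_steps

    def forward(self, h_prev, x_n, t_prev, t_n):
        # Eq. (9): crude fixed-step Euler stand-in for ODESolve(f_theta1, h_prev, t_prev, t_n).
        u, dt = h_prev, (t_n - t_prev) / self.euler_steps
        for _ in range(self.euler_steps):
            u = u + dt * self.f_theta1(u)
        # Eq. (10): integrate the new observation into the hidden state.
        return self.phi_theta2(x_n, u)
```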
2.2 Learning Rules and Their Connections to ODEs
Learning rules of artificial NNs describe the process which modifies their weights in response to some inputs. This includes the standard backpropagation rule (also known as the reverse mode of automatic differentiation) derived for the case of supervised learning, as well as rules inspired by Hebb’s informal rule [43] in “unsupervised” settings. Here we focus on the latter. Let n, din dout be positive integers. Given a linear layer with a weight matrix Wn ∈ Rdout×din (the single output neuron case dout = 1 is the focus of the classic works) at time n which transforms input xn ∈ Rdin to output yn ∈ Rdout as
yn = Wn−1xn (11)
the pure Hebb-style additive learning rule modifies the weights according to
$$W_n = W_{n-1} + \eta_n\, y_n \otimes x_n \tag{12}$$
where ⊗ denotes outer product and ηn ∈ R+ is a learning rate at time n. Oja [18] proposed stability improvements to this rule through a decay term
$$W_n = W_{n-1} + \eta_n\, y_n \otimes (x_n - W_{n-1}^\top y_n) \tag{13}$$
whose theoretical analysis has since the 1980s been a subject of many researchers covering stability, convergence, and relation to Principal Component Analysis [18, 21, 22, 23, 24, 25, 26, 27, 44]. One key approach for such theoretical analysis is to view the equation above as a discretisation of the following ODE:
$$W'(t) = \eta(t)\, y(t) \otimes \big(x(t) - W(t-1)^\top y(t)\big) \tag{14}$$
On a related note, studies of RNNs (e.g., [45, 46]) or learning dynamics (e.g., [47]) have also profited from ODEs.
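As a small illustration of the discrete rules above, the NumPy sketch below applies Oja's rule (Eq. 13) to a stream of toy data; with a decaying learning rate, the weight vector of the single output neuron is known to converge to the first principal component, which the example checks qualitatively. This is a toy sketch of ours, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_step(W, x, eta):
    """One step of Oja's rule (Eq. 13) for a single output neuron: W has shape (1, d_in)."""
    y = W @ x                                   # Eq. (11)
    return W + eta * np.outer(y, x - W.T @ y)   # Hebbian term with Oja's decay

# Toy data whose first principal component is (close to) the first coordinate axis.
X = rng.normal(size=(5000, 3)) * np.array([3.0, 1.0, 0.3])
W = rng.normal(size=(1, 3)) * 0.1
for n, x in enumerate(X, start=1):
    W = oja_step(W, x, eta=1.0 / (100 + n))
print(W / np.linalg.norm(W))  # approximately [[+-1, 0, 0]]
```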
2.3 Fast Weight Programmers & Linear Transformers
Fast Weight Programmers (FWP; [7, 8, 9, 10]) are general-purpose (auto-regressive) sequence processing NNs. In general, an FWP is a system of two NNs: a slow NN, the programmer, rapidly generates during runtime weight changes of another neural network, the fast NN. The (slow) weights of the slow net are typically trained by gradient descent. Variants of FWPs whose weight generation is based on outer products between keys and values [7] have been shown [9] to be equivalent to Linear Transformers [11] (using the mathematical equivalence known from perceptron/kernel machine duality [48, 49]). These FWPs use sequences of learning rules to update short-term memory in form of a fast weight matrix. A practical example of such FWPs is the DeltaNet [9] which transforms an input xn ∈ Rdin into an output yn ∈ Rdout at each time step n while updating its fast weight matrix Wn−1 ∈ Rdout×dkey as follows:
$$\beta_n, q_n, k_n, v_n = W_{\text{slow}}\, x_n \tag{15}$$
$$W_n = W_{n-1} + \sigma(\beta_n)\big(v_n - W_{n-1}\phi(k_n)\big) \otimes \phi(k_n) \tag{16}$$
$$y_n = W_n\, \phi(q_n) \tag{17}$$
where the slow net (Eq. 15; with weights Wslow ∈ R(1+2∗dkey+dout)×din ) generates key/value vectors kn ∈ Rdkey and vn ∈ Rdout as well as a scalar βn ∈ R to obtain a dynamic learning rate by applying a sigmoid function σ, and ϕ is an element-wise activation function whose output elements are positive and sum up to one (typically softmax). These fast dynamic variables generated by a slow NN are used in a learning rule (Eq. 16) akin to the classic delta rule [50] to update the fast weight matrix. The output is finally produced by the forward computation of the fast NN, i.e., by querying the fast weight matrix by the generated query vector qn ∈ Rdkey (Eq. 17). An intuitive interpretation of the fast weight matrix is a key-value associative memory with write and read operations defined by Eq. 16 and 17, respectively. This encourages intuitive thoughts about memory capacity (limited by the number of “keys” we can store without interference) [9]. For instance, if we replace the learning rule (i.e., memory writing operation) of Eq. 16 by a pure additive Hebb-style rule (and a fixed learning rate of 1.0): Wn = Wn−1 + vn ⊗ ϕ(kn), we obtain the Linear Transformer [11] (we refer to prior work [9] for further explanations of the omission of attention normalisation). Such a purely additive learning rule often suffers from long term dependencies, unlike the delta rule [9]. We’ll confirm this trend also in the CT models (using the EigenWorms dataset). For later convenience, we introduce a notation FWP which denotes generic FWP operations: yn,Wn = FWP(xn,Wn−1;Wslow).
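A minimal single-head PyTorch sketch of one DeltaNet step (Eqs. 15-17) follows; the dimensions and the softmax choice for φ follow the description above, but the code is an illustrative reimplementation by us rather than the authors' released implementation.

```python
import torch
from torch import nn

class DeltaNetStep(nn.Module):
    """One discrete DeltaNet update: y_n, W_n = FWP(x_n, W_{n-1}; W_slow)."""
    def __init__(self, d_in: int, d_key: int, d_out: int):
        super().__init__()
        self.slow = nn.Linear(d_in, 1 + 2 * d_key + d_out, bias=False)  # W_slow
        self.d_key, self.d_out = d_key, d_out

    def forward(self, x_n, W_prev):  # x_n: (d_in,), W_prev: (d_out, d_key)
        beta, q, k, v = torch.split(self.slow(x_n), [1, self.d_key, self.d_key, self.d_out])
        phi_k = torch.softmax(k, dim=-1)  # phi: softmax over the key dimension
        phi_q = torch.softmax(q, dim=-1)
        lr = torch.sigmoid(beta)          # dynamic learning rate sigma(beta_n)
        W_n = W_prev + lr * torch.outer(v - W_prev @ phi_k, phi_k)  # Eq. (16), delta rule
        y_n = W_n @ phi_q                                           # Eq. (17), query the fast weights
        return y_n, W_n
```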
3 Continuous-Time Fast Weight Programmers
We propose continuous-time counterparts of Fast Weight Programmers (Sec. 2.3) which naturally combine ODEs for learning rules (Sec. 2.2) and existing approaches for sequence processing with NODEs (Sec. 2.1). We present three types of these CT FWP models in line with the categorisation of Sec. 2.1 while the main focus of this work is on the two first cases.
3.1 Direct NODE-based FWPs
In the direct NODE approach (reviewed in Sec. 2.1), we assume a (piece-wise) continuous control signal x : t ↦ x(t) bounded over an interval [t0, T]. We make it part of the vector field to define an ODE describing a continuous-time learning rule for a fast weight matrix W(t):
$$W(t) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s), x(s))\, ds \tag{18}$$
where W : t ↦ W(t) ∈ Rdout×dkey is a function defined on [t0, T], and Fθ is an NN parameterised by θ which maps onto Rdout×dkey. This is a neural differential equation for learning to program a neural net through continuous learning rules, that is, to train a fast weight matrix W(t) of a fast NN (Eq. 20 below) for each sequential control x. Like in the discrete-time FWPs (Sec. 2.3), the output y(T) ∈ Rdout is obtained by querying this fast weight matrix³ (e.g., at the last time step T):
$$q(T) = W_q\, x(T) \tag{19}$$
$$y(T) = W(T)\, q(T) \tag{20}$$
where Wq ∈ Rdkey×din is a slow weight matrix used to generate the query q(T ) ∈ Rdkey (Eq. 19). Now we need to specify Fθ in Eq. 18 to fully define the learning rule. We focus on three variants:
$$F_\theta(W(s), x(s)) = \sigma(\beta(s)) \cdot \begin{cases} k(s) \otimes v(s) & \text{Hebb-style} \\ v(s) \otimes \big(k(s) - W(s)^\top v(s)\big) & \text{Oja-style} \\ \big(v(s) - W(s)\, k(s)\big) \otimes k(s) & \text{Delta-style} \end{cases} \tag{21}$$
where [β(s),k(s),v(s)] = Wslowx(s) with a slow weight matrix Wslow ∈ R(1+dkey+dout)×din . As in the discrete-time FWP (Sec. 2.3), the slow NN generates β(s) ∈ R (to which we apply the sigmoid function σ to obtain a learning rate), key k(s) ∈ Rdkey and value v(s) ∈ Rdout vectors from input x(s). These variants are inspired by the respective classic learning rules of the same name, while they are crucially different from the classic ones in the sense that all variables involved (key, value, learning rate) are continually generated by the slow NN. In the experimental section, we’ll comment on how some of these design choices can result in task-dependent performance gaps. In practice, we use the multi-head version of the operations above (i.e., by letting H be a positive integer denoting the number of heads, query/key/value vectors are split into H sub-vectors and Eqs. 20-21 are conducted independently for each head). The output is followed by the standard feed-forward block like in Transformers [6]. Possible extensions for deeper models are discussed in Appendix C.1.
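For illustration, the sketch below implements a single-head, Delta-style variant of the direct-NODE CT FWP (Eqs. 18-21) as a PyTorch module whose forward pass is the vector field dW/ds, followed by Euler integration and the query of Eqs. (19)-(20). It is a simplified sketch under our own choices (softmax as the element-wise activation, externally interpolated control values x(s), Euler instead of an adaptive solver), not the released implementation.

```python
import torch
from torch import nn

class DeltaRuleField(nn.Module):
    """Vector field F_theta(W(s), x(s)) of Eq. (21), Delta-style variant."""
    def __init__(self, d_in: int, d_key: int, d_out: int):
        super().__init__()
        self.slow = nn.Linear(d_in, 1 + d_key + d_out, bias=False)  # W_slow generates beta, k, v
        self.d_key, self.d_out = d_key, d_out

    def forward(self, W, x_s):  # W: (d_out, d_key), x_s: (d_in,)
        beta, k, v = torch.split(self.slow(x_s), [1, self.d_key, self.d_out])
        k = torch.softmax(k, dim=-1)
        return torch.sigmoid(beta) * torch.outer(v - W @ k, k)  # dW/ds

def integrate_and_query(field: DeltaRuleField, W_q: torch.Tensor, xs: torch.Tensor, dt: float):
    """Euler-integrate dW/ds over control values xs of shape (num_steps, d_in), then query."""
    W = torch.zeros(field.d_out, field.d_key)
    for x_s in xs:
        W = W + dt * field(W, x_s)           # Eq. (18), one Euler step
    q = torch.softmax(W_q @ xs[-1], dim=-1)  # Eq. (19), query generated at the last step
    return W @ q                              # Eq. (20)
```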
3.2 NCDE-based FWPs
Here we present models based on NCDEs (reviewed in Sec. 2.1). We assume availability of a differentiable control signal x(t), whose first order derivative is denoted by x′(t). Given the NCDE formulation of Eq. 5, the most straight-forward approach to obtain a CT Fast Weight Programmer is to extend the dimensionality of the recurrent hidden state, i.e., we introduce a parameterised function Fθ which maps a matrix W (t) ∈ Rdout×dkey to a third-order tensor Fθ(W (t)) ∈ Rdout×dkey×din :
$$W(t) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s))\, dx(s) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s))\, x'(s)\, ds \tag{22}$$
³In practice, we also apply element-wise activation functions to query/key/value vectors where appropriate, which we omit here for readability. We refer to Appendix A for further details.
However, this approach is obviously not scalable since the input and output dimensions (dout × dkey and dout × dkey × din) of Fθ can be too large in practice. A more tractable CDE-based approach can be obtained by providing x and/or x′ to the vector field:
$$W(t) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s), x(s), x'(s))\, x'(s)\, ds \tag{23}$$
While this equation still remains a CDE because of the multiplication from the right by dx = x′(s)ds, the additional inputs to the vector field offer a way of making use of various learning rules, as in the case of direct NODE approach above (Sec. 3.1). To be specific, either x and x′ or only x′ is required in the vector field to obtain these tractable CDEs. Here we present the version which uses both x and x′ 4. The resulting vector fields for different cases are:
$$F_\theta\big(W(s), x(s), x'(s)\big)\, x'(s) = \sigma(\beta(s)) \cdot \begin{cases} W_k x(s) \otimes W_v x'(s) & \text{Hebb} \\ \big(W_k x(s) - W(s)^\top W_v x'(s)\big) \otimes W_v x'(s) & \text{Oja} \\ \big(W_v x(s) - W(s)\, W_k x'(s)\big) \otimes W_k x'(s) & \text{Delta} \end{cases} \tag{24}$$
As can be seen above, the use of CDEs to describe a continuous fast weight learning rule thus naturally results in a key/value memory where x′ is used to generate either key or value vectors. Because of the multiplication from the right by x′, the role of x′ changes depending on the choice of learning rule: x′ is used to generate the key in the Delta case but the value vector in the case of Oja. In the case of Hebb, the choice made in Eq. 24 of using x for keys and x′ for values is arbitrary since Eq. 24 is symmetric in terms of roles of keys and values (see an ablation study in Appendix C.2 for the other case where we use x′ to generate the key and x for the value). The querying operation (analogous to Eqs. 19-20 for the direct NODE case) is also modified accordingly, depending on the choice of learning rule, such that the same input (x or x′) is used to generate both key and query:
$$y(T) = \begin{cases} W(T)^\top W_q\, x(T) & \text{Hebb and Oja} \\ W(T)\, W_q\, x'(T) & \text{Delta} \end{cases} \tag{25}$$
Note that since the proposed vector field Fθ ( W (s),x(s),x′(s) ) x′(s) is more general than the one
used in the original NCDE Fθ ( W (s) ) x′(s), any theoretical results on the CDE remain valid (which, however, does not tell us anything about the best choice for its exact parameterisation).
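For completeness, a single-head sketch of the Delta variant of the CDE-based FWP (Eqs. 23-25) is shown below; per Eq. (24), the control derivative x′(s) generates the keys (and, per Eq. (25), the query), while x(s) generates the values and the learning rate. The softmax activations, the use of x(s) for β(s), and the Euler-style integration are our own illustrative choices, not a faithful copy of the released code.

```python
import torch
from torch import nn

class DeltaCDEField(nn.Module):
    """Delta case of Eq. (24): dW = sigma(beta(s)) * (W_v x(s) - W(s) W_k x'(s)) outer W_k x'(s) ds."""
    def __init__(self, d_in: int, d_key: int, d_out: int):
        super().__init__()
        self.W_beta = nn.Linear(d_in, 1, bias=False)
        self.W_k = nn.Linear(d_in, d_key, bias=False)
        self.W_v = nn.Linear(d_in, d_out, bias=False)
        self.W_q = nn.Linear(d_in, d_key, bias=False)
        self.d_key, self.d_out = d_key, d_out

    def forward(self, W, x_s, dx_s):  # W: (d_out, d_key); x_s, dx_s: (d_in,)
        k = torch.softmax(self.W_k(dx_s), dim=-1)  # keys generated from x'(s)
        v = self.W_v(x_s)                           # values generated from x(s)
        beta = torch.sigmoid(self.W_beta(x_s))      # learning rate (our choice: generated from x(s))
        return beta * torch.outer(v - W @ k, k)

    def query(self, W, dx_T):                       # Eq. (25), Delta case: query from x'(T)
        return W @ torch.softmax(self.W_q(dx_T), dim=-1)
```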
3.3 ODE-RFWP and Latent ODE-RFWP
The main focus of this work is the setting of Kidger et al. [4] where we assume the existence of some control signal x (Sec. 3.1 and 3.2 above). However, here we also show a way of using FWPs in the third/last case presented in Sec. 2.1 where no control x(t) is available (or can be constructed), i.e., we only have access to discrete observations (xn)Nn=0. Here we cannot directly define the vector field involving continuous transformations using the inputs. We follow the existing approaches (ODERNN or Latent ODE; Sec. 2.1) which use two separate update functions: A discrete recurrent state update is executed every time a new observation is available to the model, while a continuous update using an autonomous ODE is conducted in between observations. Unlike with standard recurrent state vectors, however, it is not practical to autonomously evolve high-dimensional fast weight matrices5. We therefore opt for using a Recurrent FWP (RFWP) [10] and combine it with an ODE:
$$u_n = \mathrm{ODESolve}(f_{\theta_1}, h_{n-1}, t_{n-1}, t_n) \tag{26}$$
$$h_n, W_n = \mathrm{FWP}([x_n, u_n], W_{n-1}; \theta_2) \tag{27}$$
where we keep the fast weight learning rule itself discrete (Eq. 27), but evolve the recurrent state vector un using an ODE (Eq. 26) such that the information to be read/written to the fast weight matrix is controlled by a variable which is continuously updated between observations. We refer to this model as ODE-RFWP and its variational variant as Latent ODE-RFWP.
Since we focus on continuous-time learning rules, the case above is not of central interest as the learning rule remains discrete here (Eq. 27). Nevertheless, in Appendix C.3, we also provide some experimental results for model-based reinforcement learning settings corresponding to this case.
4 The equations for the version using only x′ can be obtained by replacing x by x′ in Eq. 24. We provide an ablation in Appendix C.2. As a side note, we also obtain the equation for the CDE using only x′ by replacing x by x′ in Eq. 18 for the direct NODE case.
5Such an approach would require a computationally expensive matrix-to-matrix transforming NN, whose scalability is limited.
4 Experiments
We consider three datasets covering three types of time series which are regularly sampled (Speech Commands [51]), irregularly sampled with partially missing features (PhysioNet Sepsis [52]), or very long (EigenWorms [53]). We compare the proposed direct NODE and CDE based FWP models (Sec. 3.1 & 3.2) to NODE baselines previously reported on the same datasets [4, 36]. Appendix B provides further experimental details including hyper-parameters.
Speech Commands. The Speech Commands [51] is a single word speech recognition task. The datapoints are regularly sampled, and the sequence lengths are relatively short (≤ 160 frames), which makes this task a popular sanity check. Following prior work on NCDEs [4], we use 20 mel frequency cepstral coefficients as speech features and classify the resulting sequence to one out of ten keywords. The middle column of Table 1 shows the results. The table is split into the direct NODE (top) and CDE (bottom) based approaches. We first observe that among the direct NODE approaches, all our FWPs largely outperform (≥ 80% accuracy) the baseline GRU-ODE performance of 47.9% (the best direct NODE baseline from Kidger et al. [4]). This demonstrates that with a good parameterisation of the vector field, the direct NODE approach can achieve competitive performance. On the other hand, all CDE-based approaches yield similar performance. We also only see slight differences in terms of performance among different learning rules, without a clear winner for this task. This may indicate that the ordinary nature of this task (regularly sampled; short sequences) does not allow for differentiating among these CDE models, including the baseline.
PhysioNet Sepsis. The PhysioNet Sepsis dataset comes from the sepsis prediction task of the PhysioNet challenge 2019 [52]. This is again a dataset used by Kidger et al. [4] to evaluate NCDEs. The task is a binary prediction of sepsis from a time series consisting of measurements of 34 medical features (e.g., respiration rate) of patients’ stays at an ICU. Each sequence is additionally labelled by five static features of the patient (e.g., age) which are fed to the model to generate the initial state of the ODE. Sequences are relatively short (≤ 72 frames) but datapoints are irregularly sampled and many entries are missing, which makes this task challenging. It comes in two versions: with and without the so-called observation intensity information (denoted as “OI” and “no-OI”), which is one extra input feature indicating each observation’s time stamp (providing the models with information on measurement frequency). This distinction is important since the prior work [4] has reported that existing ODE/CDE-based approaches struggle with the no-OI case of this task. Following the previous work, we report the performance in terms of Area Under the ROC Curve (AUC). The right part of Table 1 shows the results. We obtain large improvements in the no-OI case (from 77.6 to 85.7% for the CDEs and from 77.1 to 84.5% for the direct NODEs), while also obtaining small improvements in the OI case (from 85.2 to 90.4% for direct NODEs, and from 88.0 to 91.2% for CDEs). The no-OI performance of our models is also comparable to the best overall performance reported by Kidger [38]: 85.0 % (1.3) achieved by GRU-D [54]. This demonstrates the efficacy of
CT FWP model variants for handling irregularly sampled data with partially missing features even in the case without frequency information. Differences between various learning rules are rather small again. In some cases, we observe performance to be very sensitive to hyper-parameters. For example, the best Oja-CDE configuration achieves 85.1% (2.5) with a learning rate of 6e-5, while this goes down to 79.6% (4.7) when the learning rate is changed to 5e-5.
EigenWorms. The EigenWorms dataset (which is part of the UEA benchmark [53]) is a 5-way classification of roundworm types based on time series tracking their movements. To be more specific, motions of a worm are represented by six features corresponding to their projections to six template movement shapes, called “eigenworms.” While this dataset contains only 259 examples, it is notable for its very long sequences (raw sequence lengths exceed 17 K) and long-span temporal dependencies [37, 55, 56]. We use the same train/validation/test split ratio as the prior work [37] which reports Neural RDEs (NRDEs) as achieving the best NODE model performance on this dataset. The equations of our CT FWPs for the RDE case can be straightforwardly obtained by replacing the input x in Eqs. 18-19 of the direct NODE formulation (or6, x and x′ in Eqs. 23-25 of the NCDEs) by the corresponding log-signatures. Table 2 shows the results, where “Step” denotes the time subsampling rate which is fixed to 4 for which the prior work [37] reports the best NRDE and NCDE performance. “Sig-Depth” denotes the depth of the log-signature (the deeper, the more log-signature terms we take into account, thus ending up with a larger input feature vector; we refer to the original paper [37] for further details). We consider two values for this parameter: 1 and 2. When set to 1, the input feature contains only the first derivative x′(s) and thus the NRDE is reduced to an NCDE (with controls constructed via linear interpolation). We take the best NCDE performance from Morrill et al. [37] as the depth-1 baseline. Morrill et al. [37] report the best overall performance for the depth-2 NRDE (depth-2 baseline in our table). In both cases, we first note a large performance gap between models with different learning rules. While the naive Hebb and Oja based models struggle with this very long sequence processing (sequence length still exceeds 4 K with a down-sampling step size of 4), the Delta rule performs very well. This confirms the prior result in the discrete-time domain [9] which motivated the Delta rule design by its potential for handling long sequences (we refer to prior work [9] for further explanations). Since its performance on other tasks is comparable to the one of Hebb and Oja variants, the Delta rule is a natural default choice for parameterising the CT FWPs.
In both the depth-1 and depth-2 cases, we obtain large improvements compared to the respective baselines. It is counter-intuitive to find certain depth-2 models underperforming their depth-1 counterparts, but this trend has also been observed in the original NRDEs [36]. Our best overall performance is obtained in the depth-1 case: 91.8 % (3.4) exceeds the previous best NRDE based model per-
formance of 83.8 % (3.0) [36]. This almost matches the state-of-the-art accuracy of 92.8 % (1.8) reported by Rusch et al. [56] (using an ODE-inspired discrete-time model). Our model’s performance variance is high (best single seed performance is 97.4% while the standard deviation is 3.4). The wall clock time is similar for our best model (last row in Table 2) and the NRDE baseline (36s/epoch on a GeForce RTX 2080 Ti) and their sizes are comparable (87 K vs. 65 K parameters respectively).
⁶Conceptually these two approaches are equivalent: the direct NODE and NCDE coincide here. In practice, there can be a subtle difference due to an implementation detail. The direct NODE approach can apply layer normalisation to the input fed to both key and value projections (as they are both inside the vector field). In this case (as in our implementation), the corresponding NCDE formulation we obtain is based on the normalised input.
5 Discussions
Scalability Advantage Compared to Standard NCDEs. In addition to the good empirical results shown above, our FWP approach also addresses an important limitation of existing NCDEs [4]: their scalability in terms of model size. The vector field in standard NCDEs (Eq. 5) requires an NN Fθ which takes a vector h(s) ∈ Rd as an input to produce a matrix of size Rd×din. This can be very challenging when din and/or d is large. Actually, the same bottleneck is present in the weight generation of FWPs [7]. The use of outer products can remedy this issue in discrete FWPs as well as in CT FWPs: the computations in our FWP-based NODE/NCDEs only involve “first-order” dimensions (i.e., no multiplication between different dimensions, such as d × din) for NN outputs. This can scale well with increased model size, making feasible larger-scale tasks that are infeasible for existing NCDEs. On the other hand, Kidger et al. [4] report that using outer products (in their “Sec. 6.1 on limitations”) in standard NCDEs does not perform well. Why do outer products work well in our models but not in the original NCDEs? The answer may be simple. In the original NCDEs (Eq. 5), multiplications occur at each (infinitesimal) time step between the generated rank-one weight matrix Fθ(h(s)) and x′(s) before the sum. All these transformations are thus of rank one, while we expect expressive transformations to be necessary to translate x′(s) into changes in the state space. In contrast, in our CT FWPs, the ODE only parameterises the weight generation process of another net, and thus the rank-one matrices are never used in isolation: they are summed up over time (Eq. 18 or 23) to form an expressive weight matrix which is only then used for matrix multiplication (Eq. 20 or 25). The proposed FWP-NODE/NCDEs thus offer scalable alternatives to existing NCDEs, also yielding good empirical performance (Sec. 4).
Importance of Memory Efficient Backpropagation for FWPs. Memory efficiency of continuous adjoint backpropagation may be not so important for standard NCDEs of state size O(d), but is crucial for FWPs of state size O(d2) which can quickly become prohibitive for long sequences, as naive backpropagation stores all states used in the forward pass. Prior works on discrete FWPs [9, 57, 10] solve this problem by a custom memory-efficient implementation. Here, the continuous adjoint method naturally addresses this problem.
Limitations. Our ablation studies and hyper-parameter tuning focus on optimising the model configuration/architecture. From the NODE perspective, other parameters may further improve performance or alleviate performance variability/stability issues observed in some cases (Sec. 4). For example, we use the numerical solver configurations of the baselines (see Appendix B) without tuning them. Similarly, we use natural cubic splines to construct the differentiable control signals for NCDEs in the Speech Commands and PhysioNet Sepsis tasks, following the original NCDE paper [4]. Morrill et al. [36] report performance enhancements by improving the corresponding interpolation methods (e.g., 93.7% on Speech Commands). Such further optimisation is not conducted here.
Generally speaking, real benefits of using continuous-time sequence processing models are yet to be proved. While we achieve improvements over the best existing NODE/NCDE models on multiple datasets, discrete-time models tailored to the corresponding problem still perform as well or even better than our improved CT models (e.g., Rusch et al. [56] for EigenWorms and Che et al. [54] for PhysioNet Sepsis; see Sec. 4).
Related Work on Parameter/Weight ODEs. Other works use ODEs to parameterise the timeevolving weights of some model. However, they are limited to autonomous ODEs (i.e., no external control x is involved). Zhang et al. [58] and Choromanski et al. [59] study coupled ODEs where one ODE is used for temporal evolution of parameters of the main Neural ODE. The scope of these two works is limited to autonomous ODEs corresponding to continuous-depth residual NNs with different parameters per depth. Deleu et al. [60] consider an ODE version of a gradient descent learning process for adaptation, but also formulated as an autonomous ODE. In contrast, our focus is really on
sequence processing where the model continuously receives external controls x and translates them into weight changes of another network.
6 Conclusion
We introduced novel continuous-time sequence processing neural networks that learn to use sequences of ODE-based continuous learning rules as elementary programming instructions to manipulate shortterm memory in rapidly changing synaptic connections of another network. The proposed models are continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our new models experimentally outperform by a large margin existing Neural ODE based sequence processors on very long or irregularly sampled time series. Our Neural ODE/CDE based FWPs also address the fundamental scalability problem of the original Neural CDEs, which is highly promising for future applications of ODE based sequence processors to large scale problems.
7 Acknowledgements
We would like to thank Kidger et al. [4], Morrill et al. [37] and Du et al. [61] for their public code. This research was partially funded by ERC Advanced grant no: 742870, project AlgoRNN, and by Swiss National Science Foundation grant no: 200021_192356, project NEUSYM. We are thankful for hardware donations from NVIDIA and IBM. The resources used for this work were partially provided by Swiss National Supercomputing Centre (CSCS) project s1145 and s1154. | 1. What is the main contribution of the paper regarding continuous time versions of Fast Weight Programmers family of sequence classification models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental comparisons and usage of "memory efficient" adjoint method?
3. Do you have any questions or concerns regarding the paper's clarity, novelty, and relevance to the field of neural ODEs?
4. How does the reviewer assess the significance and impact of the paper's findings in the context of sequence classification tasks and beyond? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper presents a continuous time version for Fast Weight Programmers family of sequence classification models. It presents continuous time analogues to Hebbian learning rules. It introduces continuous time versions of multiple FWPs variants. The continuous time versions are empirically tested three standard sequence classification tasks, and compared to other sequence classifiers. In the appendix, the method is also applied to two basic RL problems.
Strengths And Weaknesses
The paper is overall well presented with a thorough explanation. The continuous analogous of FWPs is an original contribution to the literature of neural ODEs. A particular novelty is the consideration of Hebbian learning methods.
A wide variety of experiments are performed, which is a strength of the paper. However, the main weakness of this paper is that each experiment only has one baseline method as comparison, and does not have enough details, which makes it unclear if the proposed method is significantly better than existing methods.
This is a minor aspect of the work: the “memory saving” adjoint method is mentioned as a “pro” of their method. A questionable claim is “Memory Efficient Backpropagation for FWPs”. The “memory-saving” adjoint method only works if the equation is reversible (not all ODEs can be integrated in both directions c.f. heat equation). Otherwise, checkpointing of the forward pass of the ODE is required. (See, e.g. Zhuang et al. “Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE'' 2020). Additionally, the baseline methods could also use it (with the same error) so it is not a pro unique to their method.
Questions
Experiments:
In the experiments, why were these particular baselines chosen? Why were only ODE based methods chosen as comparison? Could a non-ode method be included in these problems?
Tables 1 & 2: Are the models comparable? What are the parameter counts of each row? What are the runtimes of training and inference? What are the architecture hyperparameters?
Table 2: Are the results for “Direct NODE” or “NCDE”?
Line 264: Did the authors perform the experiment comparing NODE and NCDE equations for this problem? It is not clear from the text or results.
Usage of “memory efficient” adjoint
As mentioned, it is not clear that the checkpoint-free adjoint method can be used to these equations. Are the equations of the continuous FWP models reversible, and guaranteed to remain so?
Is the adjoint method only applied on the fast weight learning rule, or the entire network?
Did you compare using the checkpoint-free adjoint method vs. regular backpropagation in your examples?
Were the baseline methods also able to use the “memory saving” adjoint method for training?
Clarifications
Line 211: Why is it not practical? Is it because of the number of parameters, or computational cost, something else?
Line 217: Is the Latent ODE RFWP used anywhere in this work? Line 218: This last sentence is confusing. “While this case” Which is “this case”? Does this paragraph mean to say that all of section 3.3 is not of central interest?
Line 263: What is an RDE?
Minor
This is a very superficial comment, but the paper title makes it difficult to understand what the paper is about. The phrase “Learning to Program” suggests a different field of program synthesis or general optimization algorithms, which does not fit. The title could be simplified.
Limitations
The authors focus on how their method mitigates limitations of the original family of methods. The authors have some discussions about the cost limitations of the family methods, and hypotheses on the mechanism by which their method performs well with low-rank methods when previous variants do not.
There are no potential negative societal impacts that are particular to their work. |
NIPS | Title
Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
Abstract
Neural ordinary differential equations (ODEs) have attracted much attention as continuous-time counterparts of deep residual neural networks (NNs), and numerous extensions for recurrent NNs have been proposed. Since the 1980s, ODEs have also been used to derive theoretical results for NN learning rules, e.g., the famous connection between Oja’s rule and principal component analysis. Such rules are typically expressed as additive iterative update processes which have straightforward ODE counterparts. Here we introduce a novel combination of learning rules and Neural ODEs to build continuous-time sequence processing nets that learn to manipulate short-term memory in rapidly changing synaptic connections of other nets. This yields continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our novel models outperform the best existing Neural Controlled Differential Equation based models on various time series classification tasks, while also addressing their fundamental scalability limitations. Our code is public.1
1 Introduction
Neural ordinary differential equations (NODEs) [1] have opened a new perspective on continuoustime computation with neural networks (NNs) as a practical framework for machine learning based on differential equations. While the original approach—proposed as a continuous-depth version of deep feed-forward residual NNs [2, 3]—only covers autonomous ODEs entirely determined by the initial conditions, more recent extensions deal with sequential data (reviewed in Sec. 2.1) in a way similar to what is typically done with standard recurrent NNs (RNNs) in the discrete-time scenario. This potential for continuous-time (CT) sequence processing (CTSP) is particularly interesting, since there are many applications where datapoints are observed at irregularly spaced time steps, and CT sequence models might better deal with such data than their discrete-time counterparts. However, the development of NODEs for CTSP is still at an early stage. For example, a popular approach of Neural Controlled Differential Equations [4] (NCDEs; also reviewed in Sec. 2.1) has in practice only one architectural variant corresponding to the “vanilla” RNN [5]. Discrete-time processing, however, exploits many different RNN architectures as well as Transformers [6].
While it is not straightforward to transform the standard Transformer into a CT sequence processor, we’ll show that the closely related Fast Weight Programmers (FWPs) [7, 8, 9, 10] and linear Transformers [11] (reviewed in Sec. 2.3) have direct CT counterparts. In FWPs, temporal processing of short-term memory (stored in fast weight matrices) uses learnable sequences of learning rules. Hence CT versions of FWPs will require differential equations to model the learning rules. This relates to a trend of the 1980s/90s. Among many old connections between NNs and dynamical systems described by ODEs (e.g., [12, 13, 14, 15, 16, 17]), the theoretical analysis of NN learning rules in the ODE
1https://github.com/IDSIA/neuraldiffeq-fwp
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
framework has been particularly fruitful. Consider the famous example of Oja’s rule [18] (briefly reviewed in Sec. 2.2): many results on its stability, convergence, and connection to Principal Component Analysis [19, 20] were obtained using its ODE counterpart (e.g., [18, 21, 22, 23, 24, 25, 26, 27]).
Here we propose a novel combination of Neural ODEs and learning rules, to obtain a new class of sequence processing Neural ODEs which are continuous-time counterparts of Fast Weight Programmers and linear Transformers. The resulting models are general-purpose CT sequence-processing NNs, which can directly replace the standard Neural CDE models typically used for supervised CT sequence processing tasks. To the best of our knowledge, there is no previous work on Neural ODEbased Transformer families for CT sequence processing, despite their popularity in important types of discrete time computation such as Natural Language Processing and beyond. We also show how our approach solves the fundamental limitation of existing Neural CDEs in terms of model size scalability.
We conduct experiments on three standard time series classification tasks covering various scenarios (regularly sampled, irregularly sampled with missing values, and very long time series). We demonstrate that our novel models outperform existing Neural ODE-based sequence processors, in some cases by a large margin.
2 Background
We briefly review the main background concepts this work builds upon: NODEs for sequence processing (Sec. 2.1), NN learning rules and their connection to ODEs (Sec. 2.2), and Fast Weight Programmers whose memory update is based on learning rules controlled by an NN (Sec. 2.3).
2.1 Neural ODEs (NODEs) and Their Extensions for Sequence Processing
Here we review the core idea of NODEs [1]. In what follows, let n, N , d, din denote positive integers, T be a positive real number, and θ denote an arbitrary set of real numbers. We consider a residual layer (say, the n-th layer with a dimension d) in an N -layer deep NN which transforms an input hn−1 ∈ Rd to an output hn ∈ Rd with a parameterised function fθ : Rd → Rd as follows:
hn = hn−1 + fθ(hn−1) (1)
This coincides [28, 29, 30, 31, 32, 33, 34, 1] with the following equation for ϵ = 1
h(tn) = h(tn−1) + ϵfθ(h(tn−1)) (2)
where h : [0, T ] → Rd is a function such that h(tn) = hn holds for all n : 0 ≤ n ≤ N and tn ∈ [0, T ] such that tn − tn−1 = ϵ > 0 if n ≥ 1. This equation is a forward Euler discretisation of the ordinary differential equation defined for all t ∈ (t0, T ] as
$h'(t) = f_\theta(h(t))$, or equivalently $h(t) = h(t_0) + \int_{t_0}^{t} f_\theta(h(s))\,ds$  (3)
where h′ denotes the first order derivative. This establishes the connection between the ODE and the deep residual net with parameters θ shared across layers2: given the initial condition h(t0) = h0, the solution to this equation evaluated at time T , i.e., h(T ), corresponds to the output of this deep residual NN, which can be computed by an ODE solver. We denote it as a function ODESolve taking four variables: h(T ) = ODESolve(fθ,h0, t0, T ). During training, instead of backpropagating through the ODE solver’s operations, the continuous adjoint sensitivity method [35] (which essentially solves another ODE but backward in time) can compute gradients with O(d) memory requirement, constant w.r.t. T [1].
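For illustration only (not part of the paper or its released code), the correspondence between Eqs. 1-3 can be sketched in a few lines of NumPy; the two-layer parameterisation of f_theta, the step count, and all sizes below are hypothetical choices, and in practice a library routine (e.g., an adaptive solver) would play the role of ODESolve instead of the explicit Euler loop.

```python
import numpy as np

def f_theta(h, W1, W2):
    # hypothetical two-layer vector field f_theta: R^d -> R^d
    return np.tanh(h @ W1) @ W2

def odesolve_euler(f, h0, t0, T, num_steps, params):
    # forward-Euler ODESolve: each step is one shared-weight "residual layer" (Eq. 2)
    h, eps = h0, (T - t0) / num_steps
    for _ in range(num_steps):
        h = h + eps * f(h, *params)  # h(t_n) = h(t_{n-1}) + eps * f_theta(h(t_{n-1}))
    return h                         # approximates h(T), the solution of Eq. 3 at time T

d = 8
rng = np.random.default_rng(0)
params = (0.1 * rng.standard_normal((d, d)), 0.1 * rng.standard_normal((d, d)))
h_T = odesolve_euler(f_theta, rng.standard_normal(d), t0=0.0, T=1.0, num_steps=100, params=params)
```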
A natural next step is to extend this formulation for RNNs, i.e., the index n now denotes the time step, and we assume an external input xn ∈ Rdin at each step n to update the hidden state hn−1 to hn as
hn = fθ(hn−1,xn) (4)
Depending on the property of external inputs (xn)Nn=1 = (x1, ...,xN ), there are different ways of defining NODEs for sequence processing. We mainly distinguish three cases.
2Or we make θ dependent of t such that parameters are “depth/layer-dependent” as in standard deep nets.
First, when there is a possibility to construct a differentiable control signal x : t 7→ x(t) ∈ Rdin for t ∈ [t0, T ] from the inputs (xn)Nn=1; an attractive approach by Kidger et al. [4] handles the corresponding dynamics in a neural controlled differential equation (NCDE):
$h(t) = h(t_0) + \int_{t_0}^{t} F_\theta(h(s))\,dx(s) = h(t_0) + \int_{t_0}^{t} F_\theta(h(s))\,x'(s)\,ds$  (5)
where Fθ is a parameterised function (typically a few-layer NN) which maps a vector h(s) ∈ Rd to a matrix Fθ(h(s)) ∈ Rd×din (we already relate this component to Recurrent Fast Weight Programmers below) and thus, Fθ(h(s))dx(s) denotes a matrix-vector multiplication. There are several methods to construct the control x : [t0, T ] → Rdin based on the discrete data points (xn)Nn=1, such that its differentiability is guaranteed. In this work, we follow Kidger et al. [4] and mainly use natural cubic splines over all data points (which, however, makes it incompatible with auto-regressive processing); for better alternatives, we refer to Morrill et al. [36]. Since the final equation is again an NODE with a vector field of form gθ,x′(s,h(s)) = Fθ(h(s))x′(s), all methods described above are applicable: ODE solver for evaluation and continuous adjoint method for memory efficient training. A notable extension of Neural CDEs is the use of log-signatures to sub-sample the input sequence [37]. The resulting NCDEs are called neural rough differential equations (NRDEs), which are relevant for processing long sequences. One fundamental limitation of the NCDEs above is the lack of scalability of Fθ : Rd → Rd×din . For example, if we naively parameterise Fθ using a linear layer, the size of its weight matrix is d2 ∗ din which quadratically increases with the hidden state size d. Previous attempts [38] do not successfully resolve this issue without performance degradation. In Sec. 5, we’ll discuss how our models (Sec. 3.2) naturally circumvent this limitation while remaining powerful NCDEs.
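To make the scalability point concrete, here is a minimal sketch (ours, not the authors' code) of the naive linear parameterisation of F_theta in Eq. 5 together with one Euler step of the CDE; the sizes are illustrative, and already the single weight matrix W_F below carries d * d * d_in = 327,680 entries.

```python
import numpy as np

d, d_in = 128, 20                                  # illustrative hidden-state and control dimensions
rng = np.random.default_rng(0)
W_F = 0.01 * rng.standard_normal((d, d * d_in))    # naive linear F_theta: d * d * d_in weights

def F_theta(h):
    return (h @ W_F).reshape(d, d_in)              # matrix-valued output F_theta(h(s)) of Eq. 5

def cde_euler_step(h, dx):
    # one Euler step of Eq. 5: h <- h + F_theta(h) (x(t+dt) - x(t))
    return h + F_theta(h) @ dx

h = rng.standard_normal(d)
dx = 0.01 * rng.standard_normal(d_in)              # increment of the interpolated control x
h = cde_euler_step(h, dx)
```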
On a side note, the NCDE is often referred to as the “continuous-time analogue” to RNNs [4], but this is a bit misleading: discrete-time RNN equations corresponding to the continuous-time Eq. 5 do not reflect the standard RNN of Eq. 4 but:
$h_n = h_{n-1} + W_{n-1}(x_n - x_{n-1})$  (6)
$W_n = F_\theta(h_n)$  (7)
where one network (Eq. 6) learns to translate the variation of inputs (xn − xn−1) into a change in the state space, using a weight matrix Wn−1 which itself is generated by another network (Fθ : Rd → Rd×din ; Eq. 7) on the fly from the hidden state. This model is thus a kind of Recurrent FWP [39, 40, 10].
Second, even if x is not differentiable, having access to (piece-wise) continuous x defined and bounded over an interval of interest [t0, T ] is enough to define a sequence processing NODE, by making it part of the vector field:
$h(t) = h(t_0) + \int_{t_0}^{t} f_\theta(h(s), x(s))\,ds$  (8)
where the vector field fθ(h(t),x(t)) = gθ,x(t,h(t)) can effectively be evaluated at any time t ∈ [t0, T ]. We refer to this second approach as a direct NODE method. While Kidger et al. [4] theoretically and empirically show that this approach is less expressive than the NCDEs above, we’ll show how in our case of learning rules one can derive interesting models within this framework, which empirically perform on par with the CDE variants.
Finally, when no control function with one of the above properties can be constructed, a mainstream approach dissociates the continuous-time hidden state update via ODE for the time between two observations (e.g., Eq. 9 below) from integration of the new data (Eq. 10 below). Notable examples of this category include ODE-RNNs [41, 42] which transform the hidden states hn−1 to hn for each observation xn available at time tn as follows:
$u_n = \mathrm{ODESolve}(f_{\theta_1}, h_{n-1}, t_{n-1}, t_n)$  (9)
$h_n = \phi_{\theta_2}(x_n, u_n)$  (10)
where Eq. 9 autonomously updates the hidden state between two observations using a function fθ1 parameterised by θ1, while in Eq. 10, function ϕθ2 parameterised by θ2 integrates the new input xn into the hidden state. In Latent ODE-RNN [41], a popular extension of this approach to the variational setting, the initial recurrent state h0 is sampled from a prior (during training, an additional encoder is trained to map sequences of inputs to parameters of the prior). While this third case is not our focus, we’ll also show how to use FWPs in this scenario in Sec. 3.3 for the sake of completeness.
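A rough sketch of the ODE-RNN recursion of Eqs. 9-10 (our illustration, not a reference implementation); the vector field f_theta1, the update phi_theta2, and the fixed Euler sub-stepping are simplified, hypothetical stand-ins.

```python
import numpy as np

def ode_rnn(xs, ts, h0, f_theta1, phi_theta2, substeps=10):
    # Eq. 9: evolve h autonomously between observation times; Eq. 10: fold in the new observation
    h, t_prev = h0, ts[0]
    for x, t in zip(xs, ts):
        eps = (t - t_prev) / substeps
        for _ in range(substeps):
            h = h + eps * f_theta1(h)        # crude stand-in for ODESolve(f_theta1, h, t_prev, t)
        h = phi_theta2(x, h)                 # h_n = phi_theta2(x_n, u_n)
        t_prev = t
    return h

d, d_in = 16, 4
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((d, d))
U, V = 0.1 * rng.standard_normal((d_in, d)), 0.1 * rng.standard_normal((d, d))
f1 = lambda h: np.tanh(h @ A)                # hypothetical autonomous vector field
phi2 = lambda x, u: np.tanh(x @ U + u @ V)   # hypothetical observation update
ts = np.array([0.0, 0.3, 1.1, 1.2])          # irregularly spaced observation times
h_N = ode_rnn(rng.standard_normal((4, d_in)), ts, np.zeros(d), f1, phi2)
```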
2.2 Learning Rules and Their Connections to ODEs
Learning rules of artificial NNs describe the process which modifies their weights in response to some inputs. This includes the standard backpropagation rule (also known as the reverse mode of automatic differentiation) derived for the case of supervised learning, as well as rules inspired by Hebb’s informal rule [43] in “unsupervised” settings. Here we focus on the latter. Let n, din dout be positive integers. Given a linear layer with a weight matrix Wn ∈ Rdout×din (the single output neuron case dout = 1 is the focus of the classic works) at time n which transforms input xn ∈ Rdin to output yn ∈ Rdout as
yn = Wn−1xn (11)
the pure Hebb-style additive learning rule modifies the weights according to
$W_n = W_{n-1} + \eta_n\, y_n \otimes x_n$  (12)
where ⊗ denotes the outer product and $\eta_n \in \mathbb{R}_+$ is a learning rate at time n. Oja [18] proposed stability improvements to this rule through a decay term
$W_n = W_{n-1} + \eta_n\, y_n \otimes (x_n - W_{n-1}^\top y_n)$  (13)
whose theoretical analysis has since the 1980s been a subject of many researchers covering stability, convergence, and relation to Principal Component Analysis [18, 21, 22, 23, 24, 25, 26, 27, 44]. One key approach for such theoretical analysis is to view the equation above as a discretisation of the following ODE:
$W'(t) = \eta(t)\, y(t) \otimes (x(t) - W(t)^\top y(t))$  (14)
On a related note, studies of RNNs (e.g., [45, 46]) or learning dynamics (e.g., [47]) have also profited from ODEs.
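As a small worked example (not from the paper), the discrete Oja update of Eq. 13 can be read as a forward-Euler step of the ODE in Eq. 14 with step size eta_n; the input distribution and learning rate below are arbitrary choices, and the single-output weight vector converges (up to sign) towards the leading principal component of the inputs.

```python
import numpy as np

def oja_step(W, x, eta):
    # discrete Oja update (Eq. 13) for the linear layer y = W x,
    # i.e. one forward-Euler step of the ODE in Eq. 14 with step size eta
    y = W @ x
    return W + eta * np.outer(y, x - W.T @ y)

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((1, 5))        # classic single-output-neuron case (d_out = 1)
cov = np.diag([4.0, 1.0, 0.5, 0.25, 0.1])    # arbitrary input covariance
for _ in range(2000):
    x = rng.multivariate_normal(np.zeros(5), cov)
    W = oja_step(W, x, eta=0.01)
# W[0] is now close (up to sign) to the leading principal component [1, 0, 0, 0, 0]
```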
2.3 Fast Weight Programmers & Linear Transformers
Fast Weight Programmers (FWP; [7, 8, 9, 10]) are general-purpose (auto-regressive) sequence processing NNs. In general, an FWP is a system of two NNs: a slow NN, the programmer, rapidly generates during runtime weight changes of another neural network, the fast NN. The (slow) weights of the slow net are typically trained by gradient descent. Variants of FWPs whose weight generation is based on outer products between keys and values [7] have been shown [9] to be equivalent to Linear Transformers [11] (using the mathematical equivalence known from perceptron/kernel machine duality [48, 49]). These FWPs use sequences of learning rules to update short-term memory in form of a fast weight matrix. A practical example of such FWPs is the DeltaNet [9] which transforms an input xn ∈ Rdin into an output yn ∈ Rdout at each time step n while updating its fast weight matrix Wn−1 ∈ Rdout×dkey as follows:
$\beta_n, q_n, k_n, v_n = W_{\mathrm{slow}}\, x_n$  (15)
$W_n = W_{n-1} + \sigma(\beta_n)\,(v_n - W_{n-1}\phi(k_n)) \otimes \phi(k_n)$  (16)
$y_n = W_n\, \phi(q_n)$  (17)
where the slow net (Eq. 15; with weights Wslow ∈ R(1+2∗dkey+dout)×din ) generates key/value vectors kn ∈ Rdkey and vn ∈ Rdout as well as a scalar βn ∈ R to obtain a dynamic learning rate by applying a sigmoid function σ, and ϕ is an element-wise activation function whose output elements are positive and sum up to one (typically softmax). These fast dynamic variables generated by a slow NN are used in a learning rule (Eq. 16) akin to the classic delta rule [50] to update the fast weight matrix. The output is finally produced by the forward computation of the fast NN, i.e., by querying the fast weight matrix by the generated query vector qn ∈ Rdkey (Eq. 17). An intuitive interpretation of the fast weight matrix is a key-value associative memory with write and read operations defined by Eq. 16 and 17, respectively. This encourages intuitive thoughts about memory capacity (limited by the number of “keys” we can store without interference) [9]. For instance, if we replace the learning rule (i.e., memory writing operation) of Eq. 16 by a pure additive Hebb-style rule (and a fixed learning rate of 1.0): Wn = Wn−1 + vn ⊗ ϕ(kn), we obtain the Linear Transformer [11] (we refer to prior work [9] for further explanations of the omission of attention normalisation). Such a purely additive learning rule often suffers from long term dependencies, unlike the delta rule [9]. We’ll confirm this trend also in the CT models (using the EigenWorms dataset). For later convenience, we introduce a notation FWP which denotes generic FWP operations: yn,Wn = FWP(xn,Wn−1;Wslow).
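For concreteness, a single-head NumPy sketch (ours, written directly from the equations rather than taken from the released code) of one DeltaNet step, Eqs. 15-17, with phi taken to be softmax; all dimensions are arbitrary.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def deltanet_step(x, W_fast, W_slow, d_key, d_out):
    out = W_slow @ x                                       # Eq. 15: beta, q, k, v from the slow net
    beta, q = out[0], out[1:1 + d_key]
    k, v = out[1 + d_key:1 + 2 * d_key], out[1 + 2 * d_key:]
    lr = 1.0 / (1.0 + np.exp(-beta))                       # sigma(beta_n): dynamic learning rate
    q, k = softmax(q), softmax(k)                          # phi
    W_fast = W_fast + lr * np.outer(v - W_fast @ k, k)     # Eq. 16: delta-rule write
    return W_fast @ q, W_fast                              # Eq. 17: read by querying

d_in, d_key, d_out = 8, 4, 4
rng = np.random.default_rng(0)
W_slow = 0.1 * rng.standard_normal((1 + 2 * d_key + d_out, d_in))
W_fast = np.zeros((d_out, d_key))
for x in rng.standard_normal((10, d_in)):                  # process a short input sequence
    y, W_fast = deltanet_step(x, W_fast, W_slow, d_key, d_out)
```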
3 Continuous-Time Fast Weight Programmers
We propose continuous-time counterparts of Fast Weight Programmers (Sec. 2.3) which naturally combine ODEs for learning rules (Sec. 2.2) and existing approaches for sequence processing with NODEs (Sec. 2.1). We present three types of these CT FWP models in line with the categorisation of Sec. 2.1 while the main focus of this work is on the two first cases.
3.1 Direct NODE-based FWPs
In the direct NODE approach (reviewed in Sec. 2.1), we assume a (piece-wise) continuous control signal x : t 7→ x(t) bounded over an interval [t0, T ]. We make it part of the vector field to define an ODE describing a continuous-time learning rule for a fast weight matrix W (t):
$W(t) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s), x(s))\,ds$  (18)
where W : t 7→ W (t) ∈ Rdout×dkey is a function defined on [t0, T ], and Fθ is an NN parameterised by θ which maps onto Rdout×dkey . This is a neural differential equation for learning to program a neural net through continuous learning rules, that is, to train a fast weight matrix W (t) of a fast NN (Eq. 20 below) for each sequential control x. Like in the discrete-time FWPs (Sec. 2.3), the output y(T ) ∈ Rdout is obtained by querying this fast weight matrix3 (e.g., at the last time step T ):
$q(T) = W_q\, x(T)$  (19)
$y(T) = W(T)\, q(T)$  (20)
where Wq ∈ Rdkey×din is a slow weight matrix used to generate the query q(T ) ∈ Rdkey (Eq. 19). Now we need to specify Fθ in Eq. 18 to fully define the learning rule. We focus on three variants:
$F_\theta(W(s), x(s)) = \sigma(\beta(s)) \begin{cases} k(s) \otimes v(s) & \text{Hebb-style} \\ v(s) \otimes \big(k(s) - W(s)^\top v(s)\big) & \text{Oja-style} \\ \big(v(s) - W(s)\, k(s)\big) \otimes k(s) & \text{Delta-style} \end{cases}$  (21)
where [β(s),k(s),v(s)] = Wslowx(s) with a slow weight matrix Wslow ∈ R(1+dkey+dout)×din . As in the discrete-time FWP (Sec. 2.3), the slow NN generates β(s) ∈ R (to which we apply the sigmoid function σ to obtain a learning rate), key k(s) ∈ Rdkey and value v(s) ∈ Rdout vectors from input x(s). These variants are inspired by the respective classic learning rules of the same name, while they are crucially different from the classic ones in the sense that all variables involved (key, value, learning rate) are continually generated by the slow NN. In the experimental section, we’ll comment on how some of these design choices can result in task-dependent performance gaps. In practice, we use the multi-head version of the operations above (i.e., by letting H be a positive integer denoting the number of heads, query/key/value vectors are split into H sub-vectors and Eqs. 20-21 are conducted independently for each head). The output is followed by the standard feed-forward block like in Transformers [6]. Possible extensions for deeper models are discussed in Appendix C.1.
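As an illustration of Eqs. 18-21 (ours, not the released implementation, and shown below), the following sketch writes the Delta-style vector field as a function of a flattened fast weight matrix and the control value x(s), so that any black-box solver could integrate it; the sinusoidal control signal, the plain Euler integrator, and all sizes are hypothetical, and the multi-head splitting and feed-forward block are omitted.

```python
import numpy as np

def delta_fwp_field(W_flat, x_s, W_slow, d_key, d_out):
    """dW/ds of Eq. 18 with the Delta-style parameterisation of Eq. 21 (single head)."""
    W = W_flat.reshape(d_out, d_key)
    out = W_slow @ x_s                                     # slow net: [beta(s), k(s), v(s)]
    beta, k, v = out[0], out[1:1 + d_key], out[1 + d_key:]
    lr = 1.0 / (1.0 + np.exp(-beta))                       # sigma(beta(s))
    dW = lr * np.outer(v - W @ k, k)                       # Delta-style: (v(s) - W(s) k(s)) ⊗ k(s)
    return dW.reshape(-1)

def integrate_and_query(x_of_t, ts, W_slow, W_q, d_key, d_out):
    # forward-Euler stand-in for the ODE solver of Eq. 18, followed by the query of Eqs. 19-20
    W = np.zeros(d_out * d_key)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        W = W + (t1 - t0) * delta_fwp_field(W, x_of_t(t0), W_slow, d_key, d_out)
    q = W_q @ x_of_t(ts[-1])                               # Eq. 19
    return W.reshape(d_out, d_key) @ q                     # Eq. 20

d_in, d_key, d_out = 6, 4, 4
rng = np.random.default_rng(0)
W_slow = 0.1 * rng.standard_normal((1 + d_key + d_out, d_in))
W_q = 0.1 * rng.standard_normal((d_key, d_in))
x_of_t = lambda t: np.sin(np.arange(1, d_in + 1) * t)      # hypothetical control signal x(t)
y_T = integrate_and_query(x_of_t, np.linspace(0.0, 1.0, 101), W_slow, W_q, d_key, d_out)
```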
3.2 NCDE-based FWPs
Here we present models based on NCDEs (reviewed in Sec. 2.1). We assume availability of a differentiable control signal x(t), whose first order derivative is denoted by x′(t). Given the NCDE formulation of Eq. 5, the most straight-forward approach to obtain a CT Fast Weight Programmer is to extend the dimensionality of the recurrent hidden state, i.e., we introduce a parameterised function Fθ which maps a matrix W (t) ∈ Rdout×dkey to a third-order tensor Fθ(W (t)) ∈ Rdout×dkey×din :
$W(t) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s))\,dx(s) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s))\,x'(s)\,ds$  (22)
However, this approach is obviously not scalable since the input and output dimensions ($d_{\text{out}} \times d_{\text{key}}$ and $d_{\text{out}} \times d_{\text{key}} \times d_{\text{in}}$) of $F_\theta$ can be too large in practice. A more tractable CDE-based approach can be obtained by providing x and/or x′ to the vector field:
$W(t) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s), x(s), x'(s))\, x'(s)\,ds$  (23)
3In practice, we also apply element-wise activation functions to query/key/value vectors where appropriate, which we omit here for readability. We refer to Appendix A for further details.
While this equation still remains a CDE because of the multiplication from the right by dx = x′(s)ds, the additional inputs to the vector field offer a way of making use of various learning rules, as in the case of direct NODE approach above (Sec. 3.1). To be specific, either x and x′ or only x′ is required in the vector field to obtain these tractable CDEs. Here we present the version which uses both x and x′ 4. The resulting vector fields for different cases are:
$F_\theta\big(W(s), x(s), x'(s)\big)\, x'(s) = \sigma(\beta(s)) \begin{cases} W_k x(s) \otimes W_v x'(s) & \text{Hebb} \\ \big(W_k x(s) - W(s)^\top W_v x'(s)\big) \otimes W_v x'(s) & \text{Oja} \\ \big(W_v x(s) - W(s)\, W_k x'(s)\big) \otimes W_k x'(s) & \text{Delta} \end{cases}$  (24)
As can be seen above, the use of CDEs to describe a continuous fast weight learning rule thus naturally results in a key/value memory where x′ is used to generate either key or value vectors. Because of the multiplication from the right by x′, the role of x′ changes depending on the choice of learning rule: x′ is used to generate the key in the Delta case but the value vector in the case of Oja. In the case of Hebb, the choice made in Eq. 24 of using x for keys and x′ for values is arbitrary since Eq. 24 is symmetric in terms of roles of keys and values (see an ablation study in Appendix C.2 for the other case where we use x′ to generate the key and x for the value). The querying operation (analogous to Eqs. 19-20 for the direct NODE case) is also modified accordingly, depending on the choice of learning rule, such that the same input (x or x′) is used to generate both key and query:
$y(T) = \begin{cases} W(T)^\top W_q\, x(T) & \text{Hebb and Oja} \\ W(T)\, W_q\, x'(T) & \text{Delta} \end{cases}$  (25)
Note that since the proposed vector field $F_\theta(W(s), x(s), x'(s))\,x'(s)$ is more general than the one used in the original NCDE $F_\theta(W(s))\,x'(s)$, any theoretical results on the CDE remain valid (which, however, does not tell us anything about the best choice for its exact parameterisation).
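To make Eqs. 23-25 concrete, a hedged Euler sketch of the Delta variant (our illustration, not the released code): x′(s) generates the key and the final query while x(s) generates the value; the choice that beta(s) comes from a learned projection of x(s) mirrors the direct NODE case but is our assumption, as are the sinusoidal control and all sizes.

```python
import numpy as np

def delta_cde_increment(W, x, xp, ds, w_b, W_k, W_v):
    # Euler increment of Eq. 23 with the Delta vector field of Eq. 24
    lr = 1.0 / (1.0 + np.exp(-(w_b @ x)))    # sigma(beta(s)); beta(s) = w_b x(s) is an assumption
    k = W_k @ xp                             # key generated from x'(s)
    v = W_v @ x                              # value generated from x(s)
    return ds * lr * np.outer(v - W @ k, k)

d_in, d_key, d_out = 6, 4, 4
rng = np.random.default_rng(0)
w_b = 0.1 * rng.standard_normal(d_in)
W_k, W_v = 0.1 * rng.standard_normal((d_key, d_in)), 0.1 * rng.standard_normal((d_out, d_in))
W_q = 0.1 * rng.standard_normal((d_key, d_in))

x_of_t = lambda t: np.sin(np.arange(1, d_in + 1) * t)      # hypothetical differentiable control
ts = np.linspace(0.0, 1.0, 101)
W = np.zeros((d_out, d_key))
for t0, t1 in zip(ts[:-1], ts[1:]):
    xp = (x_of_t(t1) - x_of_t(t0)) / (t1 - t0)             # finite-difference stand-in for x'(s)
    W = W + delta_cde_increment(W, x_of_t(t0), xp, t1 - t0, w_b, W_k, W_v)
xp_T = (x_of_t(ts[-1]) - x_of_t(ts[-2])) / (ts[-1] - ts[-2])
y_T = W @ (W_q @ xp_T)                                     # Eq. 25, Delta case: query from x'(T)
```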
3.3 ODE-RFWP and Latent ODE-RFWP
The main focus of this work is the setting of Kidger et al. [4] where we assume the existence of some control signal x (Sec. 3.1 and 3.2 above). However, here we also show a way of using FWPs in the third/last case presented in Sec. 2.1 where no control x(t) is available (or can be constructed), i.e., we only have access to discrete observations (xn)Nn=0. Here we cannot directly define the vector field involving continuous transformations using the inputs. We follow the existing approaches (ODERNN or Latent ODE; Sec. 2.1) which use two separate update functions: A discrete recurrent state update is executed every time a new observation is available to the model, while a continuous update using an autonomous ODE is conducted in between observations. Unlike with standard recurrent state vectors, however, it is not practical to autonomously evolve high-dimensional fast weight matrices5. We therefore opt for using a Recurrent FWP (RFWP) [10] and combine it with an ODE:
$u_n = \mathrm{ODESolve}(f_{\theta_1}, h_{n-1}, t_{n-1}, t_n)$  (26)
$h_n, W_n = \mathrm{FWP}([x_n, u_n], W_{n-1}; \theta_2)$  (27)
where we keep the fast weight learning rule itself discrete (Eq. 27), but evolve the recurrent state vector un using an ODE (Eq. 26) such that the information to be read/written to the fast weight matrix is controlled by a variable which is continuously updated between observations. We refer to this model as ODE-RFWP and its variational variant as Latent ODE-RFWP.
Since we focus on continuous-time learning rules, the case above is not of central interest as the learning rule remains discrete here (Eq. 27). Nevertheless, in Appendix C.3, we also provide some experimental results for model-based reinforcement learning settings corresponding to this case.
4 The equations for the version using only x′ can be obtained by replacing x by x′ in Eq. 24. We provide an ablation in Appendix C.2. As a side note, we also obtain the equation for the CDE using only x′ by replacing x by x′ in Eq. 18 for the direct NODE case.
5Such an approach would require a computationally expensive matrix-to-matrix transforming NN, whose scalability is limited.
4 Experiments
We consider three datasets covering three types of time series which are regularly sampled (Speech Commands [51]), irregularly sampled with partially missing features (PhysioNet Sepsis [52]), or very long (EigenWorms [53]). We compare the proposed direct NODE and CDE based FWP models (Sec. 3.1 & 3.2) to NODE baselines previously reported on the same datasets [4, 36]. Appendix B provides further experimental details including hyper-parameters.
Speech Commands. The Speech Commands [51] is a single word speech recognition task. The datapoints are regularly sampled, and the sequence lengths are relatively short (≤ 160 frames), which makes this task a popular sanity check. Following prior work on NCDEs [4], we use 20 mel frequency cepstral coefficients as speech features and classify the resulting sequence to one out of ten keywords. The middle column of Table 1 shows the results. The table is split into the direct NODE (top) and CDE (bottom) based approaches. We first observe that among the direct NODE approaches, all our FWPs largely outperform (≥ 80% accuracy) the baseline GRU-ODE performance of 47.9% (the best direct NODE baseline from Kidger et al. [4]). This demonstrates that with a good parameterisation of the vector field, the direct NODE approach can achieve competitive performance. On the other hand, all CDE-based approaches yield similar performance. We also only see slight differences in terms of performance among different learning rules, without a clear winner for this task. This may indicate that the ordinary nature of this task (regularly sampled; short sequences) does not allow for differentiating among these CDE models, including the baseline.
PhysioNet Sepsis. The PhysioNet Sepsis is a dataset of the sepsis prediction task from the PhysioNet challenge 2019 [52]. This is again a dataset used by Kidger et al. [4] to evaluate NCDEs. The task is a binary prediction of sepsis from a time series consisting of measurements of 34 medical features (e.g., respiration rate) of patients’ stays at an ICU. Each sequence is additionally labelled by five static features of the patient (e.g., age) which are fed to the model to generate the initial state of the ODE. Sequences are relatively short (≤ 72 frames) but datapoints are irregularly sampled and many entries are missing, which makes this task challenging. It comes in two versions: with and without the so-called observation intensity information (denoted as “OI” and “no-OI”) which is one extra input feature indicating each observation’s time stamp (providing the models with information on measurement frequency). This distinction is important since the prior work [4] has reported that existing ODE/CDE-based approaches struggle with the no-OI case of this task. Following the previous work, we report the performance in terms of Area Under the ROC Curve (AUC). The right part of Table 1 shows the results. We obtain large improvements in the no-OI case (from 77.6 to 85.7% for the CDEs and from 77.1 to 84.5% for the direct NODEs), while also obtaining small improvements in the OI case (from 85.2 to 90.4% for direct NODEs, and from 88.0 to 91.2% for CDEs). The no-OI performance of our models is also comparable to the best overall performance reported by Kidger [38]: 85.0 % (1.3) achieved by GRU-D [54]. This demonstrates the efficacy of
CT FWP model variants for handling irregularly sampled data with partially missing features even in the case without frequency information. Differences between various learning rules are rather small again. In some cases, we observe performance to be very sensitive to hyper-parameters. For example, the best Oja-CDE configuration achieves 85.1% (2.5) with a learning rate of 6e-5, while this goes down to 79.6% (4.7) when the learning rate is changed to 5e-5.
EigenWorms. The EigenWorms dataset (which is part of the UEA benchmark [53]) is a 5-way classification of roundworm types based on time series tracking their movements. To be more specific, motions of a worm are represented by six features corresponding to their projections to six template movement shapes, called “eigenworms.” While this dataset contains only 259 examples, it is notable for its very long sequences (raw sequence lengths exceed 17 K) and long-span temporal dependencies [37, 55, 56]. We use the same train/validation/test split ratio as the prior work [37] which reports Neural RDEs (NRDEs) as achieving the best NODE model performance on this dataset. The equations of our CT FWPs for the RDE case can be straightforwardly obtained by replacing the input x in Eqs. 18-19 of the direct NODE formulation (or6, x and x′ in Eqs. 23-25 of the NCDEs) by the corresponding log-signatures. Table 2 shows the results, where “Step” denotes the time subsampling rate which is fixed to 4 for which the prior work [37] reports the best NRDE and NCDE performance. “Sig-Depth” denotes the depth of the log-signature (the deeper, the more log-signature terms we take into account, thus ending up with a larger input feature vector; we refer to the original paper [37] for further details). We consider two values for this parameter: 1 and 2. When set to 1, the input feature contains only the first derivative x′(s) and thus the NRDE is reduced to an NCDE (with controls constructed via linear interpolation). We take the best NCDE performance from Morrill et al. [37] as the depth-1 baseline. Morrill et al. [37] report the best overall performance for the depth-2 NRDE (depth-2 baseline in our table). In both cases, we first note a large performance gap between models with different learning rules. While the naive Hebb and Oja based models struggle with this very long sequence processing (sequence length still exceeds 4 K with a down-sampling step size of 4), the Delta rule performs very well. This confirms the prior result in the discrete-time domain [9] which motivated the Delta rule design by its potential for handling long sequences (we refer to prior work [9] for further explanations). Since its performance on other tasks is comparable to the one of Hebb and Oja variants, the Delta rule is a natural default choice for parameterising the CT FWPs.
In both the depth-1 and depth-2 cases, we obtain large improvements compared to the respective baselines. It is counter-intuitive to find certain depth-2 models underperforming their depth-1 counterparts, but this trend has also been observed in the original NRDEs [36]. Our best overall performance is obtained in the depth-1 case: 91.8 % (3.4) exceeds the previous best NRDE based model performance of 83.8 % (3.0) [36]. This almost matches the state-of-the-art accuracy of 92.8 % (1.8) reported by Rusch et al. [56] (using an ODE-inspired discrete-time model). Our model’s performance variance is high (best single seed performance is 97.4% while the standard deviation is 3.4). The wall clock time is similar for our best model (last row in Table 2) and the NRDE baseline (36 s/epoch on a GeForce RTX 2080 Ti) and their sizes are comparable (87 K vs. 65 K parameters respectively).
6Conceptually these two approaches are equivalent: the direct NODE and NCDE coincide here. In practice, there can be a subtle difference due to an implementation detail. The direct NODE approach can apply layer normalisation to the input fed to both key and value projections (as they are both inside the vector field). In this case (as in our implementation), the corresponding NCDE formulation we obtain is based on the normalised input.
5 Discussions
Scalability Advantage Compared to Standard NCDEs. In addition to the good empirical results shown above, our FWP approach also addresses an important limitation of existing NCDEs [4]: their scalability in terms of model size. The vector field in standard NCDEs (Eq. 5) requires an NN Fθ which takes a vector h(s) ∈ Rd as an input to produce a matrix of size Rd×din. This can be very challenging when din and/or d is large. Actually, the same bottleneck is present in the weight generation of FWPs [7]. The use of outer products can remedy this issue in discrete FWPs as well as in CT FWPs: the computations in our FWP-based NODE/NCDEs only involve “first-order” dimensions (i.e., no multiplication between different dimensions, such as d × din) for NN outputs. This can scale well with increased model size, making feasible larger scale tasks that are infeasible for existing NCDEs. On the other hand, Kidger et al. [4] report that using outer products (in their “Sec. 6.1 on limitations”) in standard NCDEs does not perform well. Why do outer products work well in our models but not in the original NCDEs? The answer may be simple. In the original NCDEs (Eq. 5), multiplications occur at each (infinitesimal) time step between the generated rank-one weight matrix Fθ(h(s)) and x′(s) before the sum. All these transformations are thus of rank one, while we expect expressive transformations to be necessary to translate x′(s) into changes in the state space. In contrast, in our CT FWPs, the ODE only parameterises the weight generation process of another net, and thus the rank-one matrices are never used in isolation: they are summed up over time (Eq. 18 or 23) to form an expressive weight matrix which is only then used for matrix multiplication (Eq. 20 or 25). The proposed FWP-NODE/NCDEs thus offer scalable alternatives to existing NCDEs, also yielding good empirical performance (Sec. 4).
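A toy numerical check of this rank argument (our illustration; random Gaussian keys and values, arbitrary sizes): an individual outer-product contribution is rank one, but the time-integrated fast weight matrix of Eq. 18/23 is generically full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_key, T = 16, 16, 200
# a single outer-product contribution used in isolation is rank one...
one_step = np.outer(rng.standard_normal(d_out), rng.standard_normal(d_key))
# ...but summed over time (Eq. 18 / 23) the fast weight matrix becomes full rank
W = sum(np.outer(rng.standard_normal(d_out), rng.standard_normal(d_key)) for _ in range(T))
print(np.linalg.matrix_rank(one_step), np.linalg.matrix_rank(W))   # 1 vs. 16
```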
Importance of Memory Efficient Backpropagation for FWPs. Memory efficiency of continuous adjoint backpropagation may not be so important for standard NCDEs of state size O(d), but it is crucial for FWPs of state size O(d^2), which can quickly become prohibitive for long sequences, as naive backpropagation stores all states used in the forward pass. Prior works on discrete FWPs [9, 57, 10] solve this problem by a custom memory-efficient implementation. Here, the continuous adjoint method naturally addresses this problem.
Limitations. Our ablation studies and hyper-parameter tuning focus on optimising the model configuration/architecture. From the NODE perspective, other parameters may further improve performance or alleviate performance variability/stability issues observed in some cases (Sec. 4). For example, we use the numerical solver configurations of the baselines (see Appendix B) without tuning them. Similarly, we use natural cubic splines to construct the differentiable control signals for NCDEs in the Speech Commands and PhysioNet Sepsis tasks, following the original NCDE paper [4]. Morrill et al. [36] report performance enhancements by improving the corresponding interpolation methods (e.g., 93.7% on Speech Commands). Such further optimisation is not conducted here.
Generally speaking, real benefits of using continuous-time sequence processing models are yet to be proved. While we achieve improvements over the best existing NODE/NCDE models on multiple datasets, discrete-time models tailored to the corresponding problem still perform as well or even better than our improved CT models (e.g., Rusch et al. [56] for EigenWorms and Che et al. [54] for PhysioNet Sepsis; see Sec. 4).
Related Work on Parameter/Weight ODEs. Other works use ODEs to parameterise the timeevolving weights of some model. However, they are limited to autonomous ODEs (i.e., no external control x is involved). Zhang et al. [58] and Choromanski et al. [59] study coupled ODEs where one ODE is used for temporal evolution of parameters of the main Neural ODE. The scope of these two works is limited to autonomous ODEs corresponding to continuous-depth residual NNs with different parameters per depth. Deleu et al. [60] consider an ODE version of a gradient descent learning process for adaptation, but also formulated as an autonomous ODE. In contrast, our focus is really on
sequence processing where the model continuously receives external controls x and translates them into weight changes of another network.
6 Conclusion
We introduced novel continuous-time sequence processing neural networks that learn to use sequences of ODE-based continuous learning rules as elementary programming instructions to manipulate shortterm memory in rapidly changing synaptic connections of another network. The proposed models are continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our new models experimentally outperform by a large margin existing Neural ODE based sequence processors on very long or irregularly sampled time series. Our Neural ODE/CDE based FWPs also address the fundamental scalability problem of the original Neural CDEs, which is highly promising for future applications of ODE based sequence processors to large scale problems.
7 Acknowledgements
We would like to thank Kidger et al. [4], Morrill et al. [37] and Du et al. [61] for their public code. This research was partially funded by ERC Advanced grant no: 742870, project AlgoRNN, and by Swiss National Science Foundation grant no: 200021_192356, project NEUSYM. We are thankful for hardware donations from NVIDIA and IBM. The resources used for this work were partially provided by Swiss National Supercomputing Centre (CSCS) project s1145 and s1154. | 1. What is the focus and contribution of the paper on neural ODEs/CDEs/ODE-RNNs?
2. What are the strengths of the proposed approach, particularly in terms of its originality and scalability?
3. Are there any limitations or weaknesses in the experimental comparisons or presentation?
4. Do you have any comments regarding the choice of numerical methods or software used in the study?
5. How does the reviewer assess the significance and impact of the paper in the field of RNNs and NCDEs? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors propose several parameterisations of the vector field in neural ODEs/CDEs/ODE-RNNs, based on FWPs and learning rules.
Strengths And Weaknesses
Strength: topic
There has been relatively little convincing work on good architectures for RNNs, NCDEs, etc. This work addresses a meaningful problem.
Strength: originality
To the best of my knowledge the ideas presented in this paper (combining NCDEs with FWP, etc.) are new.
The scalability, as compared to the original formulation of neural CDEs, is an important contribution.
Strength: literature review
This work ties together many individual lines of work -- neural CDEs, neural RDEs, long sequence modelling, learning rules, FWP, etc. -- and so far as I can determine does an excellent job providing complete references for all of them.
Strength: experiments
The experiments look to have been carefully conducted. They go into a great deal of detail. Moreover the results obtained demonstrate meaningful improvements on existing benchmarks.
Strength: presentation
The prose is carefully worded and precise. I enjoyed reading this paper.
Weakness: experiments
I believe the experiments are overly-focused on neural CDE and neural RDE comparisons. Whilst these are the natural choice given the framing of the paper, I think it would improve the paper to include more quantitative comparisons against other works, e.g. [0], [Ref. 34, 46, 47 from the paper]. Indeed there have been quite a few papers proposing novel architectures for RNNs/etc.
Weakness: presentation
At least for me -- I am largely unfamiliar with the work on learning rules, FWP etc. -- I had to wait until the experimental section to determine that what is being proposed is essentially a new architecture, for the purposes of supervised learning etc. I would suggest explicitly stating this kind of thing up-front.
Weakness: "the adjoint sensitivity method" (line 66)
The term "adjoint" has become heavily overloaded. In the current literature it is often used to mean specifically optimise-then-discretise, but in older literature it was often used to mean specifically discretise-then-optimise. Following [1, Remark 5.5] I recommend avoiding it altogether.
In passing, since backpropagation is performed via OtD, then the authors may find speedups by utilising the technique of [2].
Weakness: numerical/software
The choice of computational framework, and choice of numerical solvers, seems to go entirely undiscussed. (I did not see it mentioned in the appendix either.)
This is a meaningful limitation: the choice of numerical solver, its choice of tolerance, etc., can greatly affect results.
Minor issues
Line 33: I wouldn't use the word "ancient". Some of us were around back then!
Line 78: I would argue that natural cubic splines should not be mentioned here. As [Ref. 31 of this paper] shows, there are now much better alternatives, so I think it would be better to avoid discussing natural cubic splines altogether. (e.g. they are not even implemented in [3])
Commentary
Line 85: I'm not sure I completely agree with this analysis. As per the next heading ("Second"), the absolute value of x may be treated as an input as well, by recording and replaying it. It's not been demonstrated that this actually occurs in practice, so I think it possible that this paper's side-note may be true in practice, if not in theory.
I quite like the idea of directly providing x or x′ to the vector field, in addition to leaving x′ outside, as in equation (23). Most works have usually framed the inside/outside placement of x or x′ as a dichotomy.
References
[0] Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies, Rusch and Mishra, ICLR 2021
[1] On Neural Differential Equations, Kidger, Doctoral Thesis, University of Oxford 2021
[2] "Hey, that's not an ODE": Faster ODE Adjoints via Seminorms, Kidger et al, ICML 2021
[3] Diffrax, https://github.com/patrick-kidger/diffrax
Questions
I have direct questions for the authors. I have made some suggestions above; to highlight them:
I think the experiments could be improved by moving beyond just the CDE/RDE case.
The choice of numerical method and software should definitely be discussed.
Limitations
There are no ethical concerns with this work. |
NIPS | Title
Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
Abstract
Neural ordinary differential equations (ODEs) have attracted much attention as continuous-time counterparts of deep residual neural networks (NNs), and numerous extensions for recurrent NNs have been proposed. Since the 1980s, ODEs have also been used to derive theoretical results for NN learning rules, e.g., the famous connection between Oja’s rule and principal component analysis. Such rules are typically expressed as additive iterative update processes which have straightforward ODE counterparts. Here we introduce a novel combination of learning rules and Neural ODEs to build continuous-time sequence processing nets that learn to manipulate short-term memory in rapidly changing synaptic connections of other nets. This yields continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our novel models outperform the best existing Neural Controlled Differential Equation based models on various time series classification tasks, while also addressing their fundamental scalability limitations. Our code is public.1
1 Introduction
Neural ordinary differential equations (NODEs) [1] have opened a new perspective on continuoustime computation with neural networks (NNs) as a practical framework for machine learning based on differential equations. While the original approach—proposed as a continuous-depth version of deep feed-forward residual NNs [2, 3]—only covers autonomous ODEs entirely determined by the initial conditions, more recent extensions deal with sequential data (reviewed in Sec. 2.1) in a way similar to what is typically done with standard recurrent NNs (RNNs) in the discrete-time scenario. This potential for continuous-time (CT) sequence processing (CTSP) is particularly interesting, since there are many applications where datapoints are observed at irregularly spaced time steps, and CT sequence models might better deal with such data than their discrete-time counterparts. However, the development of NODEs for CTSP is still at an early stage. For example, a popular approach of Neural Controlled Differential Equations [4] (NCDEs; also reviewed in Sec. 2.1) has in practice only one architectural variant corresponding to the “vanilla” RNN [5]. Discrete-time processing, however, exploits many different RNN architectures as well as Transformers [6].
While it is not straightforward to transform the standard Transformer into a CT sequence processor, we’ll show that the closely related Fast Weight Programmers (FWPs) [7, 8, 9, 10] and linear Transformers [11] (reviewed in Sec. 2.3) have direct CT counterparts. In FWPs, temporal processing of short-term memory (stored in fast weight matrices) uses learnable sequences of learning rules. Hence CT versions of FWPs will require differential equations to model the learning rules. This relates to a trend of the 1980s/90s. Among many old connections between NNs and dynamical systems described by ODEs (e.g., [12, 13, 14, 15, 16, 17]), the theoretical analysis of NN learning rules in the ODE
1https://github.com/IDSIA/neuraldiffeq-fwp
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
framework has been particularly fruitful. Consider the famous example of Oja’s rule [18] (briefly reviewed in Sec. 2.2): many results on its stability, convergence, and connection to Principal Component Analysis [19, 20] were obtained using its ODE counterpart (e.g., [18, 21, 22, 23, 24, 25, 26, 27]).
Here we propose a novel combination of Neural ODEs and learning rules, to obtain a new class of sequence processing Neural ODEs which are continuous-time counterparts of Fast Weight Programmers and linear Transformers. The resulting models are general-purpose CT sequence-processing NNs, which can directly replace the standard Neural CDE models typically used for supervised CT sequence processing tasks. To the best of our knowledge, there is no previous work on Neural ODEbased Transformer families for CT sequence processing, despite their popularity in important types of discrete time computation such as Natural Language Processing and beyond. We also show how our approach solves the fundamental limitation of existing Neural CDEs in terms of model size scalability.
We conduct experiments on three standard time series classification tasks covering various scenarios (regularly sampled, irregularly sampled with missing values, and very long time series). We demonstrate that our novel models outperform existing Neural ODE-based sequence processors, in some cases by a large margin.
2 Background
We briefly review the main background concepts this work builds upon: NODEs for sequence processing (Sec. 2.1), NN learning rules and their connection to ODEs (Sec. 2.2), and Fast Weight Programmers whose memory update is based on learning rules controlled by an NN (Sec. 2.3).
2.1 Neural ODEs (NODEs) and Their Extensions for Sequence Processing
Here we review the core idea of NODEs [1]. In what follows, let n, N , d, din denote positive integers, T be a positive real number, and θ denote an arbitrary set of real numbers. We consider a residual layer (say, the n-th layer with a dimension d) in an N -layer deep NN which transforms an input hn−1 ∈ Rd to an output hn ∈ Rd with a parameterised function fθ : Rd → Rd as follows:
hn = hn−1 + fθ(hn−1) (1)
This coincides [28, 29, 30, 31, 32, 33, 34, 1] with the following equation for ϵ = 1
h(tn) = h(tn−1) + ϵfθ(h(tn−1)) (2)
where h : [0, T ] → Rd is a function such that h(tn) = hn holds for all n : 0 ≤ n ≤ N and tn ∈ [0, T ] such that tn − tn−1 = ϵ > 0 if n ≥ 1. This equation is a forward Euler discretisation of the ordinary differential equation defined for all t ∈ (t0, T ] as
$h'(t) = f_\theta(h(t))$, or equivalently $h(t) = h(t_0) + \int_{t_0}^{t} f_\theta(h(s))\,ds$  (3)
where h′ denotes the first order derivative. This establishes the connection between the ODE and the deep residual net with parameters θ shared across layers2: given the initial condition h(t0) = h0, the solution to this equation evaluated at time T , i.e., h(T ), corresponds to the output of this deep residual NN, which can be computed by an ODE solver. We denote it as a function ODESolve taking four variables: h(T ) = ODESolve(fθ,h0, t0, T ). During training, instead of backpropagating through the ODE solver’s operations, the continuous adjoint sensitivity method [35] (which essentially solves another ODE but backward in time) can compute gradients with O(d) memory requirement, constant w.r.t. T [1].
A natural next step is to extend this formulation for RNNs, i.e., the index n now denotes the time step, and we assume an external input xn ∈ Rdin at each step n to update the hidden state hn−1 to hn as
hn = fθ(hn−1,xn) (4)
Depending on the property of external inputs (xn)Nn=1 = (x1, ...,xN ), there are different ways of defining NODEs for sequence processing. We mainly distinguish three cases.
2Or we make θ dependent of t such that parameters are “depth/layer-dependent” as in standard deep nets.
First, when there is a possibility to construct a differentiable control signal x : t 7→ x(t) ∈ Rdin for t ∈ [t0, T ] from the inputs (xn)Nn=1; an attractive approach by Kidger et al. [4] handles the corresponding dynamics in a neural controlled differential equation (NCDE):
$h(t) = h(t_0) + \int_{t_0}^{t} F_\theta(h(s))\,dx(s) = h(t_0) + \int_{t_0}^{t} F_\theta(h(s))\,x'(s)\,ds$  (5)
where Fθ is a parameterised function (typically a few-layer NN) which maps a vector h(s) ∈ Rd to a matrix Fθ(h(s)) ∈ Rd×din (we already relate this component to Recurrent Fast Weight Programmers below) and thus, Fθ(h(s))dx(s) denotes a matrix-vector multiplication. There are several methods to construct the control x : [t0, T ] → Rdin based on the discrete data points (xn)Nn=1, such that its differentiability is guaranteed. In this work, we follow Kidger et al. [4] and mainly use natural cubic splines over all data points (which, however, makes it incompatible with auto-regressive processing); for better alternatives, we refer to Morrill et al. [36]. Since the final equation is again an NODE with a vector field of form gθ,x′(s,h(s)) = Fθ(h(s))x′(s), all methods described above are applicable: ODE solver for evaluation and continuous adjoint method for memory efficient training. A notable extension of Neural CDEs is the use of log-signatures to sub-sample the input sequence [37]. The resulting NCDEs are called neural rough differential equations (NRDEs), which are relevant for processing long sequences. One fundamental limitation of the NCDEs above is the lack of scalability of Fθ : Rd → Rd×din . For example, if we naively parameterise Fθ using a linear layer, the size of its weight matrix is d2 ∗ din which quadratically increases with the hidden state size d. Previous attempts [38] do not successfully resolve this issue without performance degradation. In Sec. 5, we’ll discuss how our models (Sec. 3.2) naturally circumvent this limitation while remaining powerful NCDEs.
On a side note, the NCDE is often referred to as the “continuous-time analogue” to RNNs [4], but this is a bit misleading: discrete-time RNN equations corresponding to the continuous-time Eq. 5 do not reflect the standard RNN of Eq. 4 but:
$h_n = h_{n-1} + W_{n-1}(x_n - x_{n-1})$  (6)
$W_n = F_\theta(h_n)$  (7)
where one network (Eq. 6) learns to translate the variation of inputs (xn − xn−1) into a change in the state space, using a weight matrix Wn−1 which itself is generated by another network (Fθ : Rd → Rd×din ; Eq. 7) on the fly from the hidden state. This model is thus a kind of Recurrent FWP [39, 40, 10].
Second, even if x is not differentiable, having access to (piece-wise) continuous x defined and bounded over an interval of interest [t0, T ] is enough to define a sequence processing NODE, by making it part of the vector field:
$h(t) = h(t_0) + \int_{t_0}^{t} f_\theta(h(s), x(s))\,ds$  (8)
where the vector field fθ(h(t),x(t)) = gθ,x(t,h(t)) can effectively be evaluated at any time t ∈ [t0, T ]. We refer to this second approach as a direct NODE method. While Kidger et al. [4] theoretically and empirically show that this approach is less expressive than the NCDEs above, we’ll show how in our case of learning rules one can derive interesting models within this framework, which empirically perform on par with the CDE variants.
Finally, when no control function with one of the above properties can be constructed, a mainstream approach dissociates the continuous-time hidden state update via ODE for the time between two observations (e.g., Eq. 9 below) from integration of the new data (Eq. 10 below). Notable examples of this category include ODE-RNNs [41, 42] which transform the hidden states hn−1 to hn for each observation xn available at time tn as follows:
$u_n = \mathrm{ODESolve}(f_{\theta_1}, h_{n-1}, t_{n-1}, t_n)$  (9)
$h_n = \phi_{\theta_2}(x_n, u_n)$  (10)
where Eq. 9 autonomously updates the hidden state between two observations using a function fθ1 parameterised by θ1, while in Eq. 10, function ϕθ2 parameterised by θ2 integrates the new input xn into the hidden state. In Latent ODE-RNN [41], a popular extension of this approach to the variational setting, the initial recurrent state h0 is sampled from a prior (during training, an additional encoder is trained to map sequences of inputs to parameters of the prior). While this third case is not our focus, we’ll also show how to use FWPs in this scenario in Sec. 3.3 for the sake of completeness.
2.2 Learning Rules and Their Connections to ODEs
Learning rules of artificial NNs describe the process which modifies their weights in response to some inputs. This includes the standard backpropagation rule (also known as the reverse mode of automatic differentiation) derived for the case of supervised learning, as well as rules inspired by Hebb’s informal rule [43] in “unsupervised” settings. Here we focus on the latter. Let n, din dout be positive integers. Given a linear layer with a weight matrix Wn ∈ Rdout×din (the single output neuron case dout = 1 is the focus of the classic works) at time n which transforms input xn ∈ Rdin to output yn ∈ Rdout as
yn = Wn−1xn (11)
the pure Hebb-style additive learning rule modifies the weights according to
$W_n = W_{n-1} + \eta_n\, y_n \otimes x_n$  (12)
where ⊗ denotes the outer product and $\eta_n \in \mathbb{R}_+$ is a learning rate at time n. Oja [18] proposed stability improvements to this rule through a decay term
$W_n = W_{n-1} + \eta_n\, y_n \otimes (x_n - W_{n-1}^\top y_n)$  (13)
whose theoretical analysis has since the 1980s been a subject of many researchers covering stability, convergence, and relation to Principal Component Analysis [18, 21, 22, 23, 24, 25, 26, 27, 44]. One key approach for such theoretical analysis is to view the equation above as a discretisation of the following ODE:
$W'(t) = \eta(t)\, y(t) \otimes (x(t) - W(t)^\top y(t))$  (14)
On a related note, studies of RNNs (e.g., [45, 46]) or learning dynamics (e.g., [47]) have also profited from ODEs.
2.3 Fast Weight Programmers & Linear Transformers
Fast Weight Programmers (FWP; [7, 8, 9, 10]) are general-purpose (auto-regressive) sequence processing NNs. In general, an FWP is a system of two NNs: a slow NN, the programmer, rapidly generates during runtime weight changes of another neural network, the fast NN. The (slow) weights of the slow net are typically trained by gradient descent. Variants of FWPs whose weight generation is based on outer products between keys and values [7] have been shown [9] to be equivalent to Linear Transformers [11] (using the mathematical equivalence known from perceptron/kernel machine duality [48, 49]). These FWPs use sequences of learning rules to update short-term memory in form of a fast weight matrix. A practical example of such FWPs is the DeltaNet [9] which transforms an input xn ∈ Rdin into an output yn ∈ Rdout at each time step n while updating its fast weight matrix Wn−1 ∈ Rdout×dkey as follows:
$\beta_n, q_n, k_n, v_n = W_{\mathrm{slow}}\, x_n$  (15)
$W_n = W_{n-1} + \sigma(\beta_n)\,(v_n - W_{n-1}\phi(k_n)) \otimes \phi(k_n)$  (16)
$y_n = W_n\, \phi(q_n)$  (17)
where the slow net (Eq. 15; with weights Wslow ∈ R(1+2∗dkey+dout)×din ) generates key/value vectors kn ∈ Rdkey and vn ∈ Rdout as well as a scalar βn ∈ R to obtain a dynamic learning rate by applying a sigmoid function σ, and ϕ is an element-wise activation function whose output elements are positive and sum up to one (typically softmax). These fast dynamic variables generated by a slow NN are used in a learning rule (Eq. 16) akin to the classic delta rule [50] to update the fast weight matrix. The output is finally produced by the forward computation of the fast NN, i.e., by querying the fast weight matrix by the generated query vector qn ∈ Rdkey (Eq. 17). An intuitive interpretation of the fast weight matrix is a key-value associative memory with write and read operations defined by Eq. 16 and 17, respectively. This encourages intuitive thoughts about memory capacity (limited by the number of “keys” we can store without interference) [9]. For instance, if we replace the learning rule (i.e., memory writing operation) of Eq. 16 by a pure additive Hebb-style rule (and a fixed learning rate of 1.0): Wn = Wn−1 + vn ⊗ ϕ(kn), we obtain the Linear Transformer [11] (we refer to prior work [9] for further explanations of the omission of attention normalisation). Such a purely additive learning rule often suffers from long term dependencies, unlike the delta rule [9]. We’ll confirm this trend also in the CT models (using the EigenWorms dataset). For later convenience, we introduce a notation FWP which denotes generic FWP operations: yn,Wn = FWP(xn,Wn−1;Wslow).
3 Continuous-Time Fast Weight Programmers
We propose continuous-time counterparts of Fast Weight Programmers (Sec. 2.3) which naturally combine ODEs for learning rules (Sec. 2.2) and existing approaches for sequence processing with NODEs (Sec. 2.1). We present three types of these CT FWP models in line with the categorisation of Sec. 2.1 while the main focus of this work is on the two first cases.
3.1 Direct NODE-based FWPs
In the direct NODE approach (reviewed in Sec. 2.1), we assume a (piece-wise) continuous control signal x : t 7→ x(t) bounded over an interval [t0, T ]. We make it part of the vector field to define an ODE describing a continuous-time learning rule for a fast weight matrix W (t):
$W(t) = W(t_0) + \int_{t_0}^{t} F_\theta(W(s), x(s))\,ds$  (18)
where W : t 7→ W (t) ∈ Rdout×dkey is a function defined on [t0, T ], and Fθ is an NN parameterised by θ which maps onto Rdout×dkey . This is a neural differential equation for learning to program a neural net through continuous learning rules, that is, to train a fast weight matrix W (t) of a fast NN (Eq. 20 below) for each sequential control x. Like in the discrete-time FWPs (Sec. 2.3), the output y(T ) ∈ Rdout is obtained by querying this fast weight matrix3 (e.g., at the last time step T ):
q(T) = Wq x(T)    (19)
y(T) = W(T) q(T)    (20)
where Wq ∈ Rdkey×din is a slow weight matrix used to generate the query q(T ) ∈ Rdkey (Eq. 19). Now we need to specify Fθ in Eq. 18 to fully define the learning rule. We focus on three variants:
Fθ(W(s), x(s)) = σ(β(s)) ·
    k(s) ⊗ v(s)    (Hebb-style)
    v(s) ⊗ (k(s) − W(s)⊤ v(s))    (Oja-style)
    (v(s) − W(s) k(s)) ⊗ k(s)    (Delta-style)
(21)
where [β(s),k(s),v(s)] = Wslowx(s) with a slow weight matrix Wslow ∈ R(1+dkey+dout)×din . As in the discrete-time FWP (Sec. 2.3), the slow NN generates β(s) ∈ R (to which we apply the sigmoid function σ to obtain a learning rate), key k(s) ∈ Rdkey and value v(s) ∈ Rdout vectors from input x(s). These variants are inspired by the respective classic learning rules of the same name, while they are crucially different from the classic ones in the sense that all variables involved (key, value, learning rate) are continually generated by the slow NN. In the experimental section, we’ll comment on how some of these design choices can result in task-dependent performance gaps. In practice, we use the multi-head version of the operations above (i.e., by letting H be a positive integer denoting the number of heads, query/key/value vectors are split into H sub-vectors and Eqs. 20-21 are conducted independently for each head). The output is followed by the standard feed-forward block like in Transformers [6]. Possible extensions for deeper models are discussed in Appendix C.1.
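As an illustration (ours), the Delta-style case of Eq. 21 together with Eqs. 18-20 can be sketched as follows in NumPy with a simple explicit-Euler discretisation of the control path; the actual models use adaptive ODE solvers with continuous adjoint backpropagation, and the function names here are our own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_vector_field(W, x, W_slow, d_key):
    """Delta-style vector field of Eq. 21: dW/ds = sigma(beta) * (v - W k) outer k."""
    p = W_slow @ x
    beta, k, v = p[0], p[1:1 + d_key], p[1 + d_key:]
    return sigmoid(beta) * np.outer(v - W @ k, k)

def integrate_fast_weights(xs, ts, W0, W_slow, d_key):
    """Explicit-Euler integration of Eq. 18 along a discretised control path xs(ts)."""
    W = W0.copy()
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        W = W + dt * delta_vector_field(W, xs[i], W_slow, d_key)
    return W

def read_out(W, x_T, W_q):
    """Query the fast weight matrix at the final time step (Eqs. 19-20)."""
    return W @ (W_q @ x_T)
```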
3.2 NCDE-based FWPs
Here we present models based on NCDEs (reviewed in Sec. 2.1). We assume availability of a differentiable control signal x(t), whose first order derivative is denoted by x′(t). Given the NCDE formulation of Eq. 5, the most straight-forward approach to obtain a CT Fast Weight Programmer is to extend the dimensionality of the recurrent hidden state, i.e., we introduce a parameterised function Fθ which maps a matrix W (t) ∈ Rdout×dkey to a third-order tensor Fθ(W (t)) ∈ Rdout×dkey×din :
W(t) = W(t0) + ∫_{t0}^{t} Fθ(W(s)) dx(s) = W(t0) + ∫_{t0}^{t} Fθ(W(s)) x′(s) ds    (22)
However, this approach is obviously not scalable since the input and output dimensions (dout × dkey and dout × dkey × din) of Fθ can be too large in practice. A more tractable CDE-based approach can
3In practice, we also apply element-wise activation functions to query/key/value vectors where appropriate, which we omit here for readability. We refer to Appendix A for further details.
be obtained by providing x and/or x′ to the vector field:
W(t) = W(t0) + ∫_{t0}^{t} Fθ(W(s), x(s), x′(s)) x′(s) ds    (23)
While this equation still remains a CDE because of the multiplication from the right by dx = x′(s)ds, the additional inputs to the vector field offer a way of making use of various learning rules, as in the case of direct NODE approach above (Sec. 3.1). To be specific, either x and x′ or only x′ is required in the vector field to obtain these tractable CDEs. Here we present the version which uses both x and x′ 4. The resulting vector fields for different cases are:
Fθ(W(s), x(s), x′(s)) x′(s) = σ(β(s)) ·
    Wk x(s) ⊗ Wv x′(s)    (Hebb)
    (Wk x(s) − W(s)⊤ Wv x′(s)) ⊗ Wv x′(s)    (Oja)
    (Wv x(s) − W(s) Wk x′(s)) ⊗ Wk x′(s)    (Delta)
(24)
As can be seen above, the use of CDEs to describe a continuous fast weight learning rule thus naturally results in a key/value memory where x′ is used to generate either key or value vectors. Because of the multiplication from the right by x′, the role of x′ changes depending on the choice of learning rule: x′ is used to generate the key in the Delta case but the value vector in the case of Oja. In the case of Hebb, the choice made in Eq. 24 of using x for keys and x′ for values is arbitrary since Eq. 24 is symmetric in terms of roles of keys and values (see an ablation study in Appendix C.2 for the other case where we use x′ to generate the key and x for the value). The querying operation (analogous to Eqs. 19-20 for the direct NODE case) is also modified accordingly, depending on the choice of learning rule, such that the same input (x or x′) is used to generate both key and query:
y(T) = W(T)⊤ Wq x(T)    (Hebb and Oja)
y(T) = W(T) Wq x′(T)    (Delta)    (25)
Note that since the proposed vector field Fθ ( W (s),x(s),x′(s) ) x′(s) is more general than the one
used in the original NCDE Fθ ( W (s) ) x′(s), any theoretical results on the CDE remain valid (which, however, does not tell us anything about the best choice for its exact parameterisation).
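For concreteness, here is a small sketch (ours) of one explicit-Euler step of the Delta-style CDE field of Eq. 24; it only illustrates how x and x′ take on the value and key roles. The generation of the learning-rate logit β(s) from x(s) via a slow weight vector is our assumption, analogous to Sec. 3.1, and the actual models use adaptive solvers on interpolated controls.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_cde_step(W, x, x_prime, dt, Wk, Wv, w_beta):
    """One Euler step of the Delta-style CDE (Eq. 24):
    W <- W + dt * sigma(beta) * (Wv x - W (Wk x')) outer (Wk x')."""
    beta = w_beta @ x          # scalar learning-rate logit (assumed generated from x)
    key = Wk @ x_prime         # x' generates the key in the Delta case
    value = Wv @ x             # x generates the value
    return W + dt * sigmoid(beta) * np.outer(value - W @ key, key)
```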
3.3 ODE-RFWP and Latent ODE-RFWP
The main focus of this work is the setting of Kidger et al. [4] where we assume the existence of some control signal x (Sec. 3.1 and 3.2 above). However, here we also show a way of using FWPs in the third/last case presented in Sec. 2.1 where no control x(t) is available (or can be constructed), i.e., we only have access to discrete observations (xn)Nn=0. Here we cannot directly define the vector field involving continuous transformations using the inputs. We follow the existing approaches (ODERNN or Latent ODE; Sec. 2.1) which use two separate update functions: A discrete recurrent state update is executed every time a new observation is available to the model, while a continuous update using an autonomous ODE is conducted in between observations. Unlike with standard recurrent state vectors, however, it is not practical to autonomously evolve high-dimensional fast weight matrices5. We therefore opt for using a Recurrent FWP (RFWP) [10] and combine it with an ODE:
un = ODESolve(fθ1, hn−1, tn−1, tn)    (26)
hn, Wn = FWP([xn, un], Wn−1; θ2)    (27)
where we keep the fast weight learning rule itself discrete (Eq. 27), but evolve the recurrent state vector un using an ODE (Eq. 26) such that the information to be read/written to the fast weight matrix is controlled by a variable which is continuously updated between observations. We refer to this model as ODE-RFWP and its variational variant as Latent ODE-RFWP.
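A minimal sketch (ours) of the interleaved updates of Eqs. 26-27 is shown below; `fwp_step` is a stand-in for any recurrent FWP update returning a new state and fast weight matrix, and the fixed-step Euler solver merely stands in for ODESolve.

```python
import numpy as np

def euler_odesolve(f, h, t0, t1, n_steps=10):
    """Autonomously evolve the recurrent state h from t0 to t1 (stand-in for ODESolve)."""
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h

def ode_rfwp(observations, times, h0, W0, f, fwp_step):
    """Interleave continuous state evolution (Eq. 26) with discrete fast weight
    updates (Eq. 27) at observation times."""
    h, W = h0, W0
    outputs = []
    for n in range(1, len(times)):
        u = euler_odesolve(f, h, times[n - 1], times[n])                 # Eq. 26
        h, W = fwp_step(np.concatenate([observations[n], u]), W)         # Eq. 27
        outputs.append(h)
    return outputs, W
```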
Since we focus on continuous-time learning rules, the case above is not of central interest as the learning rule remains discrete here (Eq. 27). Nevertheless, in Appendix C.3, we also provide some experimental results for model-based reinforcement learning settings corresponding to this case.
4 The equations for the version using only x′ can be obtained by replacing x by x′ in Eq. 24. We provide an ablation in Appendix C.2. As a side note, we also obtain the equation for the CDE using only x′ by replacing x by x′ in Eq. 18 for the direct NODE case.
5Such an approach would require a computationally expensive matrix-to-matrix transforming NN, whose scalability is limited.
4 Experiments
We consider three datasets covering three types of time series which are regularly sampled (Speech Commands [51]), irregularly sampled with partially missing features (PhysioNet Sepsis [52]), or very long (EigenWorms [53]). We compare the proposed direct NODE and CDE based FWP models (Sec. 3.1 & 3.2) to NODE baselines previously reported on the same datasets [4, 36]. Appendix B provides further experimental details including hyper-parameters.
Speech Commands. Speech Commands [51] is a single-word speech recognition task. The datapoints are regularly sampled, and the sequence lengths are relatively short (≤ 160 frames), which makes this task a popular sanity check. Following prior work on NCDEs [4], we use 20 mel frequency cepstral coefficients as speech features and classify the resulting sequence into one of ten keywords. The middle column of Table 1 shows the results. The table is split into the direct NODE (top) and CDE (bottom) based approaches. We first observe that among the direct NODE approaches, all our FWPs largely outperform (≥ 80% accuracy) the baseline GRU-ODE performance of 47.9% (the best direct NODE baseline from Kidger et al. [4]). This demonstrates that with a good parameterisation of the vector field, the direct NODE approach can achieve competitive performance. On the other hand, all CDE-based approaches yield similar performance. We also only see slight differences in terms of performance among different learning rules, without a clear winner for this task. This may indicate that the ordinary nature of this task (regularly sampled; short sequences) does not allow for differentiating among these CDE models, including the baseline.
PhysioNet Sepsis. PhysioNet Sepsis is a dataset for the sepsis prediction task from the PhysioNet challenge 2019 [52]. This is again a dataset used by Kidger et al. [4] to evaluate NCDEs. The task is a binary prediction of sepsis from a time series consisting of measurements of 34 medical features (e.g., respiration rate) of patients’ stays at an ICU. Each sequence is additionally labelled by five static features of the patient (e.g., age) which are fed to the model to generate the initial state of the ODE. Sequences are relatively short (≤ 72 frames) but datapoints are irregularly sampled and many entries are missing, which makes this task challenging. It comes in two versions: with and without the so-called observation intensity information (denoted as “OI” and “no-OI”) which is one extra input feature indicating each observation’s time stamp (providing the models with information on measurement frequency). This distinction is important since the prior work [4] has reported that existing ODE/CDE-based approaches struggle with the no-OI case of this task. Following the previous work, we report the performance in terms of Area Under the ROC Curve (AUC). The right part of Table 1 shows the results. We obtain large improvements in the no-OI case (from 77.6 to 85.7% for the CDEs and from 77.1 to 84.5% for the direct NODEs), while also obtaining small improvements in the OI case (from 85.2 to 90.4% for direct NODEs, and from 88.0 to 91.2% for CDEs). The no-OI performance of our models is also comparable to the best overall performance reported by Kidger [38]: 85.0 % (1.3) achieved by GRU-D [54]. This demonstrates the efficacy of
CT FWP model variants for handling irregularly sampled data with partially missing features even in the case without frequency information. Differences between various learning rules are rather small again. In some cases, we observe performance to be very sensitive to hyper-parameters. For example, the best Oja-CDE configuration achieves 85.1% (2.5) with a learning rate of 6e-5, while this goes down to 79.6% (4.7) when the learning rate is changed to 5e-5.
EigenWorms. The EigenWorms dataset (which is part of the UEA benchmark [53]) is a 5-way classification of roundworm types based on time series tracking their movements. To be more specific, motions of a worm are represented by six features corresponding to their projections to six template movement shapes, called “eigenworms.” While this dataset contains only 259 examples, it is notable for its very long sequences (raw sequence lengths exceed 17 K) and long-span temporal dependencies [37, 55, 56]. We use the same train/validation/test split ratio as the prior work [37] which reports Neural RDEs (NRDEs) as achieving the best NODE model performance on this dataset. The equations of our CT FWPs for the RDE case can be straightforwardly obtained by replacing the input x in Eqs. 18-19 of the direct NODE formulation (or6, x and x′ in Eqs. 23-25 of the NCDEs) by the corresponding log-signatures. Table 2 shows the results, where “Step” denotes the time subsampling rate which is fixed to 4 for which the prior work [37] reports the best NRDE and NCDE performance. “Sig-Depth” denotes the depth of the log-signature (the deeper, the more log-signature terms we take into account, thus ending up with a larger input feature vector; we refer to the original paper [37] for further details). We consider two values for this parameter: 1 and 2. When set to 1, the input feature contains only the first derivative x′(s) and thus the NRDE is reduced to an NCDE (with controls constructed via linear interpolation). We take the best NCDE performance from Morrill et al. [37] as the depth-1 baseline. Morrill et al. [37] report the best overall performance for the depth-2 NRDE (depth-2 baseline in our table). In both cases, we first note a large performance gap between models with different learning rules. While the naive Hebb and Oja based models struggle with this very long sequence processing (sequence length still exceeds 4 K with a down-sampling step size of 4), the Delta rule performs very well. This confirms the prior result in the discrete-time domain [9] which motivated the Delta rule design by its potential for handling long sequences (we refer to prior work [9] for further explanations). Since its performance on other tasks is comparable to the one of Hebb and Oja variants, the Delta rule is a natural default choice for parameterising the CT FWPs.
In both the depth-1 and depth-2 cases, we obtain large improvements compared to the respective baselines. It is counter-intuitive to find certain depth-2 models underperforming their depth-1 counterparts, but this trend has also been observed in the original NRDEs [36]. Our best overall performance is obtained in the depth-1 case: 91.8 % (3.4) exceeds the previous best NRDE based model per-
6Conceptually these two approaches are equivalent: the direct NODE and NCDE coincide here. In practice, there can be a subtle difference due to an implementation detail. The direct NODE approach can apply layer normalisation to the input fed to both key and value projections (as they are both inside the vector field). In this case (as in our implementation), the corresponding NCDE formulation we obtain is based on the normalised input.
formance of 83.8 % (3.0) [36]. This almost matches the state-of-the-art accuracy of 92.8 % (1.8) reported by Rusch et al. [56] (using an ODE-inspired discrete-time model). Our model’s performance variance is high (best single seed performance is 97.4% while the standard deviation is 3.4). The wall clock time is similar for our best model (last row in Table 2) and the NRDE baseline (36s/epoch on a GeForce RTX 2080 Ti) and their sizes are comparable (87 K vs. 65 K parameters respectively).
5 Discussions
Scalability Advantage Compared to Standard NCDEs. In addition to the good empirical results shown above, our FWP approach also addresses an important limitation of existing NCDEs [4]: their scalability in terms of model size. The vector field in standard NCDEs (Eq. 5) requires an NN Fθ which takes a vector h(s) ∈ Rd as an input to produce a matrix of size Rd×din . This can be very challenging when din and/or d is large. Actually, the same bottleneck is present in the weight generation of FWPs [7]. The use of outer products can remedy this issue in discrete FWPs as well as in CT FWPs: the computations in our FWP-based NODE/NCDEs only involve “first-order” dimensions (i.e., no multiplication between different dimensions, such as d × din) for NN outputs. This can scale well with increased model size, making feasible larger-scale tasks that are infeasible for existing NCDEs. On the other hand, Kidger et al. [4] report that using outer products (in their “Sec. 6.1 on limitations”) in standard NCDEs does not perform well. Why do outer products work well in our models but not in the original NCDEs? The answer may be simple. In the original NCDEs (Eq. 5), multiplications occur at each (infinitesimal) time step between the generated rank-one weight matrix Fθ(h(s)) and x′(s) before the sum. All these transformations are thus of rank one, while we expect expressive transformations to be necessary to translate x′(s) into changes in the state space. In contrast, in our CT FWPs, the ODE only parameterises the weight generation process of another net, and thus the rank-one matrices are never used in isolation: they are summed up over time (Eq. 18 or 23) to form an expressive weight matrix which is only then used for matrix multiplication (Eq. 20 or 25). The proposed FWP-NODE/NCDEs thus offer scalable alternatives to existing NCDEs, also yielding good empirical performance (Sec. 4).
Importance of Memory Efficient Backpropagation for FWPs. Memory efficiency of continuous adjoint backpropagation may not be so important for standard NCDEs of state size O(d), but it is crucial for FWPs of state size O(d2), which can quickly become prohibitive for long sequences, as naive backpropagation stores all states used in the forward pass. Prior works on discrete FWPs [9, 57, 10] solve this problem by a custom memory-efficient implementation. Here, the continuous adjoint method naturally addresses this problem.
Limitations. Our ablation studies and hyper-parameter tuning focus on optimising the model configuration/architecture. From the NODE perspective, other parameters may further improve performance or alleviate performance variability/stability issues observed in some cases (Sec. 4). For example, we use the numerical solver configurations of the baselines (see Appendix B) without tuning them. Similarly, we use natural cubic splines to construct the differentiable control signals for NCDEs in the Speech Commands and PhysioNet Sepsis tasks, following the original NCDE paper [4]. Morrill et al. [36] report performance enhancements by improving the corresponding interpolation methods (e.g., 93.7% on Speech Commands). Such further optimisation is not conducted here.
Generally speaking, real benefits of using continuous-time sequence processing models are yet to be proved. While we achieve improvements over the best existing NODE/NCDE models on multiple datasets, discrete-time models tailored to the corresponding problem still perform as well or even better than our improved CT models (e.g., Rusch et al. [56] for EigenWorms and Che et al. [54] for PhysioNet Sepsis; see Sec. 4).
Related Work on Parameter/Weight ODEs. Other works use ODEs to parameterise the timeevolving weights of some model. However, they are limited to autonomous ODEs (i.e., no external control x is involved). Zhang et al. [58] and Choromanski et al. [59] study coupled ODEs where one ODE is used for temporal evolution of parameters of the main Neural ODE. The scope of these two works is limited to autonomous ODEs corresponding to continuous-depth residual NNs with different parameters per depth. Deleu et al. [60] consider an ODE version of a gradient descent learning process for adaptation, but also formulated as an autonomous ODE. In contrast, our focus is really on
sequence processing where the model continuously receives external controls x and translates them into weight changes of another network.
6 Conclusion
We introduced novel continuous-time sequence processing neural networks that learn to use sequences of ODE-based continuous learning rules as elementary programming instructions to manipulate short-term memory in rapidly changing synaptic connections of another network. The proposed models are continuous-time counterparts of Fast Weight Programmers and linear Transformers. Our new models experimentally outperform by a large margin existing Neural ODE based sequence processors on very long or irregularly sampled time series. Our Neural ODE/CDE based FWPs also address the fundamental scalability problem of the original Neural CDEs, which is highly promising for future applications of ODE based sequence processors to large scale problems.
7 Acknowledgements
We would like to thank Kidger et al. [4], Morrill et al. [37] and Du et al. [61] for their public code. This research was partially funded by ERC Advanced grant no: 742870, project AlgoRNN, and by Swiss National Science Foundation grant no: 200021_192356, project NEUSYM. We are thankful for hardware donations from NVIDIA and IBM. The resources used for this work were partially provided by Swiss National Supercomputing Centre (CSCS) project s1145 and s1154. | 1. What is the main contribution of the paper regarding Fast Weight Programmers?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the limitations of the method, particularly in its applicability to broader problems?
5. Are there any minor issues or suggestions for improvement in the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This work introduces variants of Fast Weight Programmers, where the Fast Weight matrix W evolves via neural DEs, with vector field structures inspired by learning rules (namely Hebb, Oja, and Delta). These can be applied to continuous learning tasks.
The structure of the neural DEs here falls under a subclass considered by Kidger et al. in the paper introducing neural CDEs (that is, the neural CDEs subsume these methods). However, this paper demonstrates empirically that the specific restriction to these models seems to perform well compared with optimising over the broader class of neural CDEs, as well as providing scalability benefits.
Strengths And Weaknesses
This is a very nicely written paper, which presents the work done in a clear and concise manner. The background and literature are explained well. There is full transparency: the code is available for validation and reproduction of the results.
An observation is that this is pooling together a few ideas from different areas, so the improvement is more incremental; however, this is not necessarily a weakness, and the empirical results and benefits seem good.
In terms of weaknesses, although it is not realistic to be comparing with all models out there, this paper does not discuss that in Kidger et al, the model that performed best for sepsis no-OI is in fact GRU-D, with a performance of 85% plus/minus 1.3, which is superior to the best performance of the three models introduced in this paper.
Furthermore, most of the comparison here is with Kidger et al. (2020), which has been superseded by Morrill et al. (2021) in Neural CDEs for online prediction tasks. It would be good to see some comments and a comparison with the results in the latter.
Some minor issues: "ancient" for the 80s/90s can be slightly offensive! Eqn (27) should be RFWP? Table 1 should be AUC rather than AUR.
Questions
In Section 3.3, it is assumed that the continuous control is not available and only discrete observations are accessible. In practice, this is almost always the case; in the paper by Morrill et al. [31], there is an extensive discussion on the construction of a continuous control path from the discrete observations. Perhaps the authors can highlight and clarify when one might choose to take the discrete observations and apply the methods in Section 3.3 rather than apply interpolation and use Neural CDEs?
Limitations
The authors have discussed that some of their results have rather large variances; in particular, Delta achieved the highest single-seed performance for sepsis no-OI, but on average Hebb performed best in this case.
As discussed in weaknesses, the authors do not discuss comparisons with other baselines from non-ODE based neural networks. A discussion on where this work sits in the broader literature, and on any limitations of applying these new networks to broader problems, would be beneficial.
Negative societal impact not applicable for this paper. |
NIPS | Title
Semiparametric Differential Graph Models
Abstract
In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network. We propose a novel graphical model, namely Latent Differential Graph Model, where the networks under two different conditions are represented by two semiparametric elliptical distributions respectively, and the variation of these two networks (i.e., differential graph) is characterized by the difference between their latent precision matrices. We propose an estimator for the differential graph based on quasi likelihood maximization with nonconvex regularization. We show that our estimator attains a faster statistical rate in parameter estimation than the state-of-the-art methods, and enjoys the oracle property under mild conditions. Thorough experiments on both synthetic and real world data support our theory.
1 Introduction
Network analysis has been widely used in various fields to characterize the interdependencies between a group of variables, such as molecular entities including RNAs and proteins in genetic networks [3]. Networks are often modeled as graphical models. For instance, in gene regulatory network, the gene expressions are often assumed to be jointly Gaussian. A Gaussian graphical model [18] is then employed by representing different genes as nodes and the regulation between genes as edges in the graph. In particular, two genes are conditionally independent given the others if and only if the corresponding entry of the precision matrix of the multivariate normal distribution is zero. Nevertheless, the Gaussian distribution assumption, is too restrictive in practice. For example, the gene expression values from high-throughput method, even after being normalized, do not follow a normal distribution [19, 26]. This leads to the inaccuracy in describing the dependency relationships among genes. In order to address this problem, various semiparametric Gaussian graphical models [21, 20] are proposed to relax the Gaussian distribution assumption.
On the other hand, it is well-known that the interactions in many types of networks can change under various environmental and experimental conditions [1]. Take the genetic networks for example, two genes may be positively conditionally dependent under some conditions but negatively conditionally dependent under others. Therefore, in many cases, more attention is attracted not by a particular individual network but rather by whether and how the network varies with genetic and environmental alterations [6, 15]. This gives rise to differential networking analysis, which has emerged as an important method in differential expression analysis of gene regulatory networks [9, 28].
In this paper, in order to conduct differential network analysis, we propose a Latent Differential Graph Model (LDGM), where the networks under two different conditions are represented by two transelliptical distributions [20], i.e., TE_d(Σ*_X, ξ; f_1, ..., f_d) and TE_d(Σ*_Y, ξ; g_1, ..., g_d) respectively. Here TE_d(Σ*_X, ξ; f_1, ..., f_d) denotes a d-dimensional transelliptical distribution with latent correlation matrix Σ*_X ∈ R^{d×d}, and will be defined in detail in Section 3. More specifically, the connectivity of the individual network is encoded by the latent precision matrix (e.g., Θ*_X = (Σ*_X)^{-1}) of the corresponding transelliptical distribution, such that [Θ*_X]_jk ≠ 0 if and only if there is an edge between the j-th node and the k-th node in the network. And the differential graph is defined as
the difference between the two latent precision matrices, Δ* = Θ*_Y − Θ*_X. Our goal is to estimate Δ* based on observations sampled from TE_d(Σ*_X, ξ; f_1, ..., f_d) and TE_d(Σ*_Y, ξ; g_1, ..., g_d). A simple procedure is estimating Θ*_X and Θ*_Y separately, followed by calculating their difference. However, it requires estimating 2d² parameters (i.e., Θ*_X and Θ*_Y), while our ultimate goal is only estimating d² parameters (i.e., Δ*). In order to overcome this problem, we assume that the difference of the two latent precision matrices, i.e., Δ*, is sparse and propose to directly estimate it by quasi likelihood maximization with nonconvex penalty. The nonconvex penalty is introduced in order to correct the intrinsic estimation bias incurred by convex penalty [10, 36]. We prove that, when the true differential graph is s-sparse, our estimator attains an O(√(s_1/n) + √(s_2 log d/n)) convergence rate in terms of Frobenius norm, which is faster than the estimation error bound O(√(s log d/n)) of the ℓ_{1,1}-penalty based estimator in [38]. Here n is the sample size, s_1 is the number of entries in Δ* with large magnitude, s_2 is the number of entries with small magnitude, and s = s_1 + s_2. We show that our method enjoys the oracle property under a very mild condition. Thorough numerical experiments on both synthetic and real-world data back up our theory.
The remainder of this paper is organized as follows: we review the related work in Section 2. We introduce the proposed model and the non-convex penalty in Section 3, as well as the proposed estimator. In Section 4, we present our main theories for estimation in semiparametric differential graph models. Experiments on both synthetic and real world data are provided in Section 5. Section 6 concludes with discussion.
Notation. For x = (x_1, ..., x_d)^T ∈ R^d and 0 < q < ∞, we define the ℓ_0, ℓ_q and ℓ_∞ vector norms as ‖x‖_0 = Σ_{i=1}^d 1(x_i ≠ 0), ‖x‖_q = (Σ_{i=1}^d |x_i|^q)^{1/q}, and ‖x‖_∞ = max_{1≤i≤d} |x_i|, where 1(·) is the indicator function. For A = (A_ij) ∈ R^{d×d}, we define the matrix ℓ_{0,0}, ℓ_{1,1}, ℓ_{∞,∞} and Frobenius norms as ‖A‖_{0,0} = Σ_{i,j=1}^d 1(A_ij ≠ 0), ‖A‖_{1,1} = Σ_{i,j=1}^d |A_ij|, ‖A‖_{∞,∞} = max_{1≤i,j≤d} |A_ij|, and ‖A‖_F = √(Σ_{ij} A_ij²). The induced norm for a matrix is defined as ‖A‖_q = max_{‖x‖_q=1} ‖Ax‖_q, for 0 < q < ∞. For a set of tuples S, A_S denotes the set of numbers [A_{(jk)}]_{(jk)∈S}, and vec(S) is the vectorized index set of S.
2 Related Work
There exist several lines of research for differential network analysis. One natural procedure is to estimate the two networks (i.e., two precision matrices) respectively by existing estimators such as graphical Lasso [12] and node-wise regression [25]. Another family of methods jointly estimates the two networks by assuming that they share common structural patterns and therefore uses joint likelihood maximization with group lasso penalty or group bridge penalty [7, 8, 14]. Based on the estimated precision matrices, the differential graph can be obtained by calculating their difference. However, both of these two types of methods suffer from the drawback that they need to estimate twice the number of parameters, and hence require roughly doubled observations to ensure the estimation accuracy. In order to address this drawback, some methods are proposed to estimate the difference of matrices directly [38, 35, 22, 11]. For example, [38] proposed a Dantzig selector type estimator for estimating the difference of the precision matrices directly. [35] proposed a D-Trace loss [37] based estimator for the difference of the precision matrices. Compared with [38, 35], our estimator is advantageous in the following aspects: (1) our model relaxes the Gaussian assumption by representing each network as a transelliptical distribution, while [38, 35] are restricted to Gaussian distribution. Thus, our model is more general and robust; and (2) by employing nonconvex penalty, our estimator achieves a sharper statistical rate than theirs. Rather than the Gaussian graphical model or its semiparametric extension, [22, 11] studied the estimation of change in the dependency structure between two high dimensional Ising models.
3 Semiparametric Differential Graph Models
In this section, we will first review the transelliptical distribution and present our semiparametric differential graph model. Then we will present the estimator for differential graph, followed by the introduction to nonconvex penalty.
3.1 Transelliptical Distribution
To briefly review the transelliptical distribution, we begin with the definition of elliptical distribution.
Definition 3.1 (Elliptical distribution). Let µ ∈ R^d and Σ* ∈ R^{d×d} with rank(Σ*) = q ≤ d. A random vector X ∈ R^d follows an elliptical distribution, denoted by EC_d(µ, Σ*, ξ), if it can be represented as X = µ + ξAU, where A is a deterministic matrix satisfying A^T A = Σ*, U is a random vector uniformly distributed on the unit sphere in R^q, and ξ ⊥ U is a random variable.
Motivated by the extension from the Gaussian distribution to the nonparanormal distribution [21], [20] proposed a semiparametric extension of the elliptical distribution, which is called the transelliptical distribution.
Definition 3.2 (Transelliptical distribution). A random vector X = (X_1, X_2, ..., X_d)^T ∈ R^d is transelliptical, denoted by TE_d(Σ*, ξ; f_1, ..., f_d), if there exists a set of monotone univariate functions f_1, ..., f_d and a nonnegative random variable ξ such that (f_1(X_1), ..., f_d(X_d))^T follows an elliptical distribution EC_d(0, Σ*, ξ).
3.2 Kendall’s tau Statistic
In the semiparametric setting, the Pearson sample covariance matrix can be inconsistent in estimating Σ*. Given n independent observations X_1, ..., X_n, where X_i = (X_i1, ..., X_id)^T ∼ TE_d(Σ*, ξ; f_1, ..., f_d), [20] proposed a rank-based estimator, the Kendall's tau statistic, to estimate Σ*, due to its invariance under monotonic marginal transformations. The Kendall's tau estimator is defined as

τ̂_jk = 2/(n(n−1)) · Σ_{1≤i<i′≤n} sign[(X_ij − X_i′j)(X_ik − X_i′k)].    (3.1)

It has been shown that τ̂_jk is an unbiased estimator of τ_jk = 2/π · arcsin(Σ*_jk) [20], and the correlation matrix Σ* can be estimated by Σ̂ = [Σ̂_jk] ∈ R^{d×d}, where

Σ̂_jk = sin(π/2 · τ̂_jk).    (3.2)

We use T* to denote the matrix with entries τ_jk and T̂ the matrix with entries τ̂_jk, for j, k = 1, ..., d.
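As an illustration (ours, not part of the paper), Eqs. (3.1)-(3.2) can be computed as follows; we use a naive O(n²d²) loop for clarity rather than an efficient rank-based implementation.

```python
import numpy as np

def kendall_tau_correlation(X):
    """Rank-based estimate of the latent correlation matrix (Eqs. 3.1-3.2)."""
    n, d = X.shape
    tau = np.zeros((d, d))
    for j in range(d):
        for k in range(d):
            s = 0.0
            for i in range(n):
                for ip in range(i + 1, n):
                    s += np.sign((X[i, j] - X[ip, j]) * (X[i, k] - X[ip, k]))
            tau[j, k] = 2.0 * s / (n * (n - 1))
    Sigma_hat = np.sin(np.pi / 2.0 * tau)   # Eq. (3.2)
    np.fill_diagonal(Sigma_hat, 1.0)
    return Sigma_hat
```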
3.3 Latent Differential Graph Models and the Estimator
Now we are ready to formulate our differential graph model. Assume that the d-dimensional random vectors X and Y satisfy X ∼ TE_d(Σ*_X, ξ; f_1, ..., f_d) and Y ∼ TE_d(Σ*_Y, ξ; g_1, ..., g_d). The differential graph is defined to be the difference of the two latent precision matrices,

Δ* = Θ*_Y − Θ*_X,    (3.3)

where Θ*_X = (Σ*_X)^{-1} and Θ*_Y = (Σ*_Y)^{-1}. It immediately implies

Σ*_X Δ* Σ*_Y − (Σ*_X − Σ*_Y) = 0,  and  Σ*_Y Δ* Σ*_X − (Σ*_X − Σ*_Y) = 0.    (3.4)
Given i.i.d. copies X_1, ..., X_{n_X} of X, and i.i.d. copies Y_1, ..., Y_{n_Y} of Y, without loss of generality we assume n_X = n_Y = n, and we denote the Kendall's tau correlation matrices defined in (3.2) by Σ̂_X and Σ̂_Y. Following (3.4), a reasonable procedure for estimating Δ* is to solve the following equation for Δ:

(1/2) Σ̂_X Δ Σ̂_Y + (1/2) Σ̂_Y Δ Σ̂_X − (Σ̂_X − Σ̂_Y) = 0,    (3.5)

where we add up the two equations in (3.4) and replace the latent population correlation matrices Σ*_X, Σ*_Y with the Kendall's tau estimators Σ̂_X, Σ̂_Y. Note that (3.5) is a Z-estimator [30], which can be translated into an M-estimator, by noticing that (1/2) Σ̂_X Δ Σ̂_Y + (1/2) Σ̂_Y Δ Σ̂_X − (Σ̂_X − Σ̂_Y) can be seen as the score function of the following quasi log likelihood function

ℓ(Δ) = (1/2) tr(Δ Σ̂_Y Δ Σ̂_X) − tr(Δ (Σ̂_X − Σ̂_Y)).    (3.6)
Let S = supp(Δ*). In this paper, we assume that Δ* is sparse, i.e., |S| ≤ s with s > 0. Based on (3.6), we propose to estimate Δ* by the following M-estimator with non-convex penalty

Δ̂ = argmin_{Δ ∈ R^{d×d}} (1/2) tr(Δ Σ̂_Y Δ Σ̂_X) − tr(Δ (Σ̂_X − Σ̂_Y)) + G_λ(Δ),    (3.7)
where λ > 0 is a regularization parameter and G_λ is a decomposable nonconvex penalty function, i.e., G_λ(Δ) = Σ_{j,k=1}^d g_λ(Δ_jk), such as the smoothly clipped absolute deviation (SCAD) penalty [10] or the minimax concave penalty (MCP) [36]. The key property of the nonconvex penalty is that it can avoid over-penalization when the magnitude is very large. It has been shown in [10, 36, 33] that the nonconvex penalty is able to alleviate the estimation bias and attain a refined statistical rate of convergence. The nonconvex penalty g_λ(δ) can be further decomposed as the sum of the ℓ_1 penalty and a concave component h_λ(δ), i.e., g_λ(δ) = λ|δ| + h_λ(δ). Take the MCP penalty for example. The corresponding g_λ(δ) and h_λ(δ) are defined as follows
g_λ(δ) = λ ∫_0^{|δ|} (1 − z/(λb))_+ dz,  for any δ ∈ R,

where λ > 0 is the regularization parameter and b > 0 is a fixed parameter, and

h_λ(δ) = −δ²/(2b) · 1(|δ| ≤ bλ) + (bλ²/2 − λ|δ|) · 1(|δ| > bλ).

In Section 4, we will show that the above family of nonconvex penalties satisfies certain common regularity conditions on g_λ(δ) as well as its concave component h_λ(δ).
We will show in the next section that when the parameters of the nonconvex penalty are appropriately chosen, (3.7) is an unconstrained convex optimization problem. Thus it can be solved by proximal gradient descent [4] very efficiently. In addition, it is easy to check that the estimator Δ̂ from (3.7) is symmetric. So it does not need the symmetrizing process adopted in [38], which can undermine the estimation accuracy.
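For illustration only (ours, not the authors' code), the proximal gradient iteration for (3.7) with the MCP penalty can be sketched as follows; the step size, initialization, and fixed iteration count are simplifications, and the closed-form MCP proximal map below assumes b is larger than the step size.

```python
import numpy as np

def mcp_prox(Z, lam, b, t):
    """Entry-wise proximal map of the MCP penalty g_lam with step size t (requires b > t)."""
    shrunk = np.sign(Z) * np.maximum(np.abs(Z) - t * lam, 0.0) / (1.0 - t / b)
    return np.where(np.abs(Z) > b * lam, Z, shrunk)

def ldgm_mcp(Sigma_X, Sigma_Y, lam, b=3.0, n_iter=500):
    """Proximal gradient descent for the MCP-penalised quasi likelihood (Eq. 3.7)."""
    d = Sigma_X.shape[0]
    # a simple step size choice: 1 / (an upper bound on the loss Hessian spectral norm)
    t = 1.0 / (np.linalg.norm(Sigma_X, 2) * np.linalg.norm(Sigma_Y, 2))
    Delta = np.zeros((d, d))
    for _ in range(n_iter):
        grad = 0.5 * (Sigma_X @ Delta @ Sigma_Y + Sigma_Y @ Delta @ Sigma_X) \
               - (Sigma_X - Sigma_Y)
        Delta = mcp_prox(Delta - t * grad, lam, b, t)
    return Delta
```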
4 Main Theory
In this section, we present our main theories. Let S = supp(Δ*) be the support of the true differential graph. We introduce the following oracle estimator of Δ*:

Δ̂_O = argmin_{supp(Δ) ⊆ S} ℓ(Δ),    (4.1)

where ℓ(Δ) = (1/2) tr(Δ Σ̂_Y Δ Σ̂_X) − tr(Δ (Σ̂_X − Σ̂_Y)). The oracle estimator Δ̂_O is not a practical estimator, since we do not know the true support in practice. An estimator is said to have the oracle property if it is identical to the oracle estimator Δ̂_O under certain conditions. We will show that our estimator enjoys the oracle property under a mild condition.
We first lay out some assumptions that are required throughout our analysis.

Assumption 4.1. There exist constants κ_1, κ_2 > 0 such that κ_1 ≤ λ_min(Σ*_X) ≤ λ_max(Σ*_X) ≤ 1/κ_1 and κ_2 ≤ λ_min(Σ*_Y) ≤ λ_max(Σ*_Y) ≤ 1/κ_2. The true covariance matrices have bounded ℓ_∞ norm, i.e., ‖Σ*_X‖_∞ ≤ κ_X, ‖Σ*_Y‖_∞ ≤ κ_Y, where κ_X, κ_Y > 0 are constants. And the true precision matrices have bounded matrix ℓ_1 norm, i.e., ‖Θ*_X‖_1 ≤ θ_X and ‖Θ*_Y‖_1 ≤ θ_Y, where θ_X, θ_Y > 0 are constants.

The first part of Assumption 4.1 requires that the smallest eigenvalues of the correlation matrices Σ*_X, Σ*_Y are bounded away from zero, and their largest eigenvalues are finite. This assumption is commonly imposed in the literature for the analysis of graphical models [21, 27].

Assumption 4.2. The true difference matrix Δ* = (Σ*_Y)^{-1} − (Σ*_X)^{-1} has s nonzero entries, i.e., ‖Δ*‖_{0,0} ≤ s, and has bounded ℓ_{∞,∞} norm, i.e., ‖Δ*‖_{∞,∞} ≤ M, where M > 0 does not depend on d.

Assumption 4.2 requires the differential graph to be sparse. This is reasonable in differential network analysis where the networks only vary slightly under different conditions.

The next assumption is about regularity conditions on the nonconvex penalty g_λ(δ). Recall that g_λ(δ) can be written as g_λ(δ) = λ|δ| + h_λ(δ).

Assumption 4.3. g_λ(δ) and its concave component h_λ(δ) satisfy:
(a) There exists a constant ν such that g′_λ(δ) = 0, for |δ| ≥ νλ > 0.
(b) There exists a constant ζ ≥ 0 such that h_λ(δ) + ζ/2 · δ² is convex.
(c) h_λ(δ) and h′_λ(δ) pass through the origin, i.e., h_λ(0) = h′_λ(0) = 0.
(d) h′_λ(δ) is bounded, i.e., |h′_λ(δ)| ≤ λ for any δ.

Similar assumptions have been made in [23, 33]. Note that condition (b) in Assumption 4.3 is weaker than the smoothness condition in [33], since here it does not require h_λ(δ) to be twice differentiable. Assumption 4.3 holds for a variety of nonconvex penalty functions including MCP and SCAD. In particular, the MCP penalty satisfies Assumption 4.3 with ν = b and ζ = 1/b. Furthermore, according to condition (b), if ζ is smaller than the modulus of restricted strong convexity for ℓ(Δ), (3.7) will become a convex optimization problem, even though G_λ(Δ) is nonconvex. Take MCP for example: this can be achieved by choosing a sufficiently large b in MCP such that ζ is small enough.
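For concreteness, condition (b) can be verified for MCP directly (this is our own check, not part of the original text): from the definition of h_λ above,

h_λ(δ) + (1/(2b)) δ² = (max(|δ| − bλ, 0))² / (2b),

which is a convex function of δ, so Assumption 4.3(b) holds for MCP with ζ = 1/b; conditions (a), (c) and (d) follow similarly from the definitions of g_λ and h_λ.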
Now we are ready to present our main theories. We first show that under a large magnitude condition on the nonzero entries of the true differential graph Δ*, our estimator attains a faster convergence rate, which matches the minimax rate in the classical regime.

Theorem 4.4. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty G_λ(Δ) satisfies the conditions in Assumption 4.3. If the nonzero entries of Δ* satisfy min_{(j,k)∈S} |Δ*_jk| ≥ νλ + C θ_X² θ_Y² κ_X κ_Y M √(log s/n), then for the estimator Δ̂ in (3.7) with the regularization parameter satisfying λ = 2CM √(log d/n) and ζ ≤ κ_1 κ_2/2, we have that

‖Δ̂ − Δ*‖_{∞,∞} ≤ 2√(10π) θ_X² θ_Y² κ_X κ_Y M √(log s/n)

holds with probability at least 1 − 2/s. Furthermore, we have that

‖Δ̂ − Δ*‖_F ≤ C_1 M/(κ_1 κ_2) √(s/n)

holds with probability at least 1 − 3/s, where C_1 is an absolute constant.

Remark 4.5. Theorem 4.4 suggests that under the large magnitude assumption, the statistical rate of our estimator is O(√(s/n)) in terms of Frobenius norm. This is faster than the rate O(√(s log d/n)) in [38] which matches the minimax lower bound for sparse differential graph estimation. Note that our faster rate is not contradictory to the minimax lower bound, because we restrict ourselves to a smaller class of differential graphs, where the magnitude of the nonzero entries is sufficiently large.
We further show that our estimator achieves the oracle property under mild conditions.

Theorem 4.6. Under the same conditions as Theorem 4.4, for the estimator Δ̂ in (3.7) and the oracle estimator Δ̂_O in (4.1), we have with probability at least 1 − 3/s that Δ̂ = Δ̂_O, which further implies supp(Δ̂) = supp(Δ̂_O) = supp(Δ*).

Theorem 4.6 suggests that our estimator is identical to the oracle estimator in (4.1) with high probability, when the nonzero entries in Δ* satisfy min_{(j,k)∈S} |Δ*_jk| ≥ νλ + C θ_X² θ_Y² κ_X κ_Y M √(log s/n). This condition is optimal up to the logarithmic factor √(log s).
Now we turn to the general case where the nonzero entries of Δ* have both large and small magnitudes. Define S^c = {(j,k) : j,k = 1,...,d} \ S, S_1 = {(j,k) ∈ S : |Δ*_jk| > νλ}, and S_2 = {(j,k) ∈ S : |Δ*_jk| ≤ νλ}. Denote |S_1| = s_1 and |S_2| = s_2. Clearly, we have s = s_1 + s_2.

Theorem 4.7. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty G_λ(Δ) satisfies the conditions in Assumption 4.3. For the estimator in (3.7) with the regularization parameter λ = 2CM √(log d/n) and ζ ≤ κ_1 κ_2/4, we have that

‖Δ̂ − Δ*‖_F ≤ 16√(3π) M/(κ_1 κ_2) √(s_1/n) + 10π M C/(κ_1 κ_2) √(s_2 log d/n)

holds with probability at least 1 − 3/s_1, where C is an absolute constant.

Remark 4.8. Theorem 4.7 indicates that when the large magnitude condition does not hold, our estimator is still able to attain a faster rate. Specifically, for those nonzero entries of Δ* with large magnitude, the estimation error bound in terms of Frobenius norm is O(√(s_1/n)), which is the same as the bound in Theorem 4.4. For those nonzero entries of Δ* with small magnitude, the estimation error is O(√(s_2 log d/n)), which matches the convergence rate in [38]. Overall, our estimator obtains a refined rate of convergence O(√(s_1/n) + √(s_2 log d/n)), which is faster than [38]. In particular, if s_2 = 0, the refined convergence rate in Theorem 4.7 reduces to the faster rate in Theorem 4.4.
5 Experiments
In this section, we test our method on both synthetic and real world data. We conducted experiments for our estimator using both SCAD and MCP penalties. We did not find any significant difference in the results and thus we only report the results of our estimator with the MCP penalty. To choose the tuning parameters λ and b, we adopt 5-fold cross-validation. Denoting our estimator with MCP penalty by LDGM-MCP, we compare it with the following methods: (1) SepGlasso: estimating the latent precision matrices separately using graphical Lasso and Kendall's tau correlation matrices [20], followed by calculating their difference; (2) DPM: directly estimating the differential precision matrix [38]. In addition, we also test the differential graph model with ℓ_{1,1} penalty, denoted as LDGM-L1. Note that LDGM-L1 is a special case of our method, since the ℓ_{1,1} norm penalty is a special case of the MCP penalty when b = ∞. The LDGM-MCP and LDGM-L1 estimators are obtained by solving the proximal gradient descent algorithm [4]. The implementation of the DPM estimator is obtained from the authors' website, and the SepGlasso estimator is implemented by graphical Lasso.
5.1 Simulations
We first show the results on synthetic data. Since the transelliptical distribution includes the Gaussian distribution, it is natural to show that our approach also works well for the latter one. We consider the dimension settings n = 100, d = 100 and n = 200, d = 400 respectively. Specifically, data are generated as follows: (1) For the Gaussian distribution, we generate data {X_i}_{i=1}^n ∼ N(0, Σ*_X) and {Y_i}_{i=1}^n ∼ N(0, Σ*_Y) with precision matrices (Σ*_X)^{-1} and (Σ*_Y)^{-1} generated by the huge package¹. (2) For the transelliptical distribution, we consider the following generating scheme: {X_i}_{i=1}^n ∼ TE_d(Σ*_X, ξ; f_1, ..., f_d), {Y_i}_{i=1}^n ∼ TE_d(Σ*_Y, ξ; g_1, ..., g_d), where ξ ∼ χ_d, f_1^{-1}(·) = ... = f_d^{-1}(·) = sign(·)|·|³ and g_1^{-1}(·) = ... = g_d^{-1}(·) = sign(·)|·|^{1/2}. The latent precision matrices (Σ*_X)^{-1} and (Σ*_Y)^{-1} are generated in the same way as for the Gaussian data. For both Gaussian and transelliptical differential graph models, we consider two settings for the individual graph structures: (1) both (Σ*_X)^{-1} and (Σ*_Y)^{-1} have "random" structures; (2) (Σ*_X)^{-1} has a "band" structure and (Σ*_Y)^{-1} has a "random" structure.
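A minimal sketch (ours) of the transelliptical sampling scheme described above is given below; the function name and random seed handling are our own, and the radial variable is taken to be χ_d as stated in the generating scheme.

```python
import numpy as np

def sample_transelliptical(n, Sigma, power, seed=0):
    """Draw n samples: an elliptical latent vector xi * L u (xi ~ chi_d, u uniform on the
    sphere, L a square root of Sigma), pushed through f_j^{-1}(z) = sign(z)|z|^power."""
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)                      # L @ L.T = Sigma
    U = rng.standard_normal((n, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)      # uniform on the unit sphere
    xi = np.sqrt(rng.chisquare(df=d, size=(n, 1)))     # chi_d radial variable
    Z = xi * (U @ L.T)                                  # elliptical latent samples
    return np.sign(Z) * np.abs(Z) ** power              # apply the inverse marginal maps
```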
Given an estimator Δ̂, we define the true positive and true negative rates of Δ̂ as

TP = [Σ_{j,k=1}^d 1(Δ̂_jk ≠ 0 and Δ*_jk ≠ 0)] / [Σ_{j,k=1}^d 1(Δ*_jk ≠ 0)],    TN = [Σ_{j,k=1}^d 1(Δ̂_jk = 0 and Δ*_jk = 0)] / [Σ_{j,k=1}^d 1(Δ*_jk = 0)].
The receiver operating characteristic (ROC) curves for transelliptical differential graph models are shown in Figure 1, which report the performance of the different methods on support recovery. The ROC curves were plotted by averaging the results over 10 repetitions. From Figure 1 we can see that our estimator (LDGM-MCP) outperforms the other methods in all settings. In addition, LDGM-L1, as a special case of our estimator, also performs better than DPM and SepGlasso, although it is inferior to LDGM-MCP because the MCP penalty can correct the bias in the estimation and achieve a faster rate of convergence. Note that SepGlasso's performance is poor since it highly depends on the sparsity of both individual graphs. When n > 100, the DPM method failed to output a solution in one day and thus no result is presented. This computational burden is also stated in their paper. We use the Frobenius norm ‖Δ̂ − Δ*‖_F and the infinity norm ‖Δ̂ − Δ*‖_{∞,∞} of the estimation errors to evaluate the performance of the different methods in estimation. The results averaged over 10 replicates for the transelliptical differential graph are summarized in Tables 1 and 2 respectively. Our estimator also achieves smaller errors than the other baselines in all settings. Due to the space limit, we defer the experiment results for the Gaussian differential graph model to the appendix.
1Available on http://cran.r-project.org/web/packages/huge
[Figure 1 appears here: four ROC panels (TP versus 1−TN) comparing SepGlasso, DPM, LDGM-L1 and LDGM-MCP; panels (a) Setting 1: n=100, d=100; (b) Setting 2: n=100, d=100; (c) Setting 1: n=200, d=400; (d) Setting 2: n=200, d=400.]
Figure 1: ROC curves for transelliptical differential graph models of all the 4 methods. There are two settings of graph structure. Note that DPM is not scalable to d = 400.
5.2 Experiments on Real World Data
We applied our approach to the same gene expression data used in [38], which were collected from patients with stage III or IV ovarian cancer. [29] identified six molecular subtypes of ovarian cancer in this data, labeled C1 through C6. In particular, the C1 subtype was found to have much shorter survival times, and was characterized by differential expression of genes associated with stromal and immune cell types. In this experiment, we intended to investigate whether the C1 subtype was also associated with genetic differential networks. The subjects were divided into two groups: Group 1 with n_1 = 78 patients containing the C1 subtype, and Group 2 with n_2 = 113 patients containing the C2 through C6 subtypes. We analyzed two pathways from the KEGG pathway database [16, 17]. In each pathway, we applied different methods to determine whether there is any difference in the conditional dependency relationships of the gene expression levels between the aforementioned Group 1 and Group 2. Two genes were connected in the differential network if their conditional dependency relationship given the others changed in either magnitude or sign. In order to obtain a clear view of the differential graph, we only plotted genes whose conditional dependency with others changed between the two groups. To interpret the results, the genes associated with more edges in the differential networks were considered to be more important.
Figure 2 shows the results of estimation for the differential graph of the TGF-β pathway, where the number of genes d = 80 is greater than n_1, the sample size of Group 1. LDGM-MCP identified two important genes, COMP and THBS2, both of which have been suggested to be related to resistance to platinum-based chemotherapy in epithelial ovarian cancer by [24]. LDGM-L1 suggested that COMP was important, and DPM also suggested COMP and THBS2. Separate estimation (SepGlasso) gave a relatively dense network, which made it hard to say which genes are more important.
Figure 3 shows the results for the Apoptosis pathway, where the number of genes d = 87 is also greater than n_1. LDGM-MCP indicated that TNFSF10 and BIRC3 were the most important. Indeed, both TNFSF10 and BIRC3 have been widely studied for use as a therapeutic target in cancer [5, 32]. LDGM-L1 and DPM also suggested that TNFSF10 and BIRC3 were important. The results of LDGM-MCP, LDGM-L1 and DPM are comparable. In order to overcome the nonsparsity issue encountered in the TGF-β experiment, the SepGlasso estimator was thresholded more than the other methods. However, it still performed poorly and identified the wrong gene CSF2RB.
6 Conclusions
In this paper, we propose a semiparametric differential graph model and an estimator for the differential graph based on quasi likelihood maximization. We employ a nonconvex penalty in our estimator, which results in a faster rate for parameter estimation than existing methods. We also prove that the proposed estimator achieves oracle property under a mild condition. Experiments on both synthetic and real world data further support our theory. Acknowledgments We would like to thank the anonymous reviewers for their helpful comments. Research was supported by NSF grant III-1618948. | 1. What is the focus of the paper in terms of contributions and novel aspects?
2. What are the strengths of the proposed method, particularly regarding its ability to handle more general network structures?
3. Do you have any concerns or questions about the assumptions made throughout the paper?
4. How does the reviewer assess the penalized quasi log likelihood function minimization approach used in the paper?
5. What are the limitations of the method regarding computation complexity and optimization issues?
6. Are there any open questions or areas for future research related to this work on differential networking analysis? | Review | Review
The present paper deals with differential networking analysis. The authors propose a new model in this framework called "Latent Differential Graph Model", allowing more general network structures (transelliptical distributions) than just Gaussian. The assumption made throughout the paper, which is now classical in this framework, is the sparsity of the matrix, say $\Delta$, obtained as the difference between the precision matrices of the two networks. This assumption is honest since it means that the network does not change a lot from one state to the other. Here the goal is to estimate $\Delta$, and the authors use a kind of penalized (quasi) log likelihood function minimization. There are two novelties with this criterion:
- First, in the cost function the true covariance matrices are estimated by Kendall's tau statistic (other choices are possible and there is no real benefit of using this estimate rather than the classical sample covariance matrix).
- Second, the penalty is non-convex and defined in a general way (see Assumption 4.3). Hence, SCAD and MCP are special cases of the procedure. While the penalty is non-convex, the global criterion to be minimized is convex using some constraints on the modulus of convexity of the loss function. This makes optimization possible in high dimension.
Under classical assumptions, the authors provide upper bounds on the estimation error of $\Delta$ for the infinity and the Frobenius norms. Moreover, numerical studies are performed showing the good behavior of the method. An interesting point is that the new model (LDGM) allows more general distributions for networks while adding only small technical difficulties (as compared to Gaussian distributions). I have some comments:
1) Even if the global criterion is convex, it could be interesting to consider the computation complexity of the method as compared to a criterion with a convex penalty (such as the $\ell_1$ penalty).
2) Theorems 4.4 and 4.7 are valid for all the penalties satisfying Assumption 4.3. Since the $\ell_1$ penalty satisfies this assumption, the results are also valid for it. However, the authors argue that the result in Theorem 4.7 is better than recent (state-of-the-art) results because of the non-convexity of the penalty. This argument fails with the $\ell_1$ penalty. Can the authors make this point clearer? I think that the result is just "better" because of proof arguments. Moreover, I believe that the results are not really better since there are two parts in the bound (in Theorem 4.7) and the sparsity parameters $s_1$ and $s_2$ are unknown.
3) Finally, the authors argue quite often that their results and conditions are optimal. Can they provide a reference for lower bounds? For me the factor $M$ appearing in the bounds is tricky. It can be seen as some constant times $s$. Then, it should be taken into consideration in the rates. This is not the case in the present paper. Lastly, here is a minor comment: there could be an error in the legend of the second box of Figure 1. As set, it is LDGM-L1 that appears best. |
NIPS | Title
Semiparametric Differential Graph Models
Abstract
In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network. We propose a novel graphical model, namely Latent Differential Graph Model, where the networks under two different conditions are represented by two semiparametric elliptical distributions respectively, and the variation of these two networks (i.e., differential graph) is characterized by the difference between their latent precision matrices. We propose an estimator for the differential graph based on quasi likelihood maximization with nonconvex regularization. We show that our estimator attains a faster statistical rate in parameter estimation than the state-of-the-art methods, and enjoys the oracle property under mild conditions. Thorough experiments on both synthetic and real world data support our theory.
1 Introduction
Network analysis has been widely used in various fields to characterize the interdependencies between a group of variables, such as molecular entities including RNAs and proteins in genetic networks [3]. Networks are often modeled as graphical models. For instance, in gene regulatory network, the gene expressions are often assumed to be jointly Gaussian. A Gaussian graphical model [18] is then employed by representing different genes as nodes and the regulation between genes as edges in the graph. In particular, two genes are conditionally independent given the others if and only if the corresponding entry of the precision matrix of the multivariate normal distribution is zero. Nevertheless, the Gaussian distribution assumption, is too restrictive in practice. For example, the gene expression values from high-throughput method, even after being normalized, do not follow a normal distribution [19, 26]. This leads to the inaccuracy in describing the dependency relationships among genes. In order to address this problem, various semiparametric Gaussian graphical models [21, 20] are proposed to relax the Gaussian distribution assumption.
On the other hand, it is well-known that the interactions in many types of networks can change under various environmental and experimental conditions [1]. Take the genetic networks for example, two genes may be positively conditionally dependent under some conditions but negatively conditionally dependent under others. Therefore, in many cases, more attention is attracted not by a particular individual network but rather by whether and how the network varies with genetic and environmental alterations [6, 15]. This gives rise to differential networking analysis, which has emerged as an important method in differential expression analysis of gene regulatory networks [9, 28].
In this paper, in order to conduct differential network analysis, we propose a Latent Differential Graph Model (LDGM), where the networks under two different conditions are represented by two transelliptical distributions [20], i.e., TE_d(Σ*_X, ξ; f_1, ..., f_d) and TE_d(Σ*_Y, ξ; g_1, ..., g_d) respectively. Here TE_d(Σ*_X, ξ; f_1, ..., f_d) denotes a d-dimensional transelliptical distribution with latent correlation matrix Σ*_X ∈ R^{d×d}, and will be defined in detail in Section 3. More specifically, the connectivity of the individual network is encoded by the latent precision matrix (e.g., Θ*_X = (Σ*_X)^{-1}) of the corresponding transelliptical distribution, such that [Θ*_X]_jk ≠ 0 if and only if there is an edge between the j-th node and the k-th node in the network. And the differential graph is defined as
the difference between the two latent precision matrices, $\Delta^* = \Theta^*_Y - \Theta^*_X$. Our goal is to estimate $\Delta^*$ based on observations sampled from $TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$. A simple procedure is estimating $\Theta^*_X$ and $\Theta^*_Y$ separately, followed by calculating their difference. However, it requires estimating $2d^2$ parameters (i.e., $\Theta^*_X$ and $\Theta^*_Y$), while our ultimate goal is only estimating $d^2$ parameters (i.e., $\Delta^*$). In order to overcome this problem, we assume that the difference of the two latent precision matrices, i.e., $\Delta^*$, is sparse and propose to directly estimate it by quasi likelihood maximization with nonconvex penalty. The nonconvex penalty is introduced in order to correct the intrinsic estimation bias incurred by convex penalties [10, 36]. We prove that, when the true differential graph is $s$-sparse, our estimator attains an $O(\sqrt{s_1/n} + \sqrt{s_2 \log d/n})$ convergence rate in terms of Frobenius norm, which is faster than the estimation error bound $O(\sqrt{s \log d/n})$ of the $\ell_{1,1}$-penalty based estimator in [38]. Here $n$ is the sample size, $s_1$ is the number of entries in $\Delta^*$ with large magnitude, $s_2$ is the number of entries with small magnitude, and $s = s_1 + s_2$. We show that our method enjoys the oracle property under a very mild condition. Thorough numerical experiments on both synthetic and real-world data back up our theory.
The remainder of this paper is organized as follows: we review the related work in Section 2. We introduce the proposed model and the non-convex penalty in Section 3, as well as the proposed estimator. In Section 4, we present our main theories for estimation in semiparametric differential graph models. Experiments on both synthetic and real world data are provided in Section 5. Section 6 concludes with discussion.
Notation. For $x = (x_1, \ldots, x_d)^\top \in \mathbb{R}^d$ and $0 < q < \infty$, we define the $\ell_0$, $\ell_q$ and $\ell_\infty$ vector norms as $\|x\|_0 = \sum_{i=1}^d \mathbb{1}(x_i \neq 0)$, $\|x\|_q = \big(\sum_{i=1}^d |x_i|^q\big)^{1/q}$, and $\|x\|_\infty = \max_{1 \le i \le d} |x_i|$, where $\mathbb{1}(\cdot)$ is the indicator function. For $A = (A_{ij}) \in \mathbb{R}^{d \times d}$, we define the matrix $\ell_{0,0}$, $\ell_{1,1}$, $\ell_{\infty,\infty}$ and Frobenius norms as $\|A\|_{0,0} = \sum_{i,j=1}^d \mathbb{1}(A_{ij} \neq 0)$, $\|A\|_{1,1} = \sum_{i,j=1}^d |A_{ij}|$, $\|A\|_{\infty,\infty} = \max_{1 \le i,j \le d} |A_{ij}|$, and $\|A\|_F = \sqrt{\sum_{ij} |A_{ij}|^2}$. The induced matrix norm is defined as $\|A\|_q = \max_{\|x\|_q = 1} \|Ax\|_q$, for $0 < q < \infty$. For a set of tuples $S$, $A_S$ denotes the set of numbers $[A_{jk}]_{(j,k) \in S}$, and $\mathrm{vec}(S)$ is the vectorized index set of $S$.
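For concreteness, these norms can be computed entrywise; the minimal NumPy sketch below is our own helper (not from the paper) and simply mirrors the definitions.

```python
import numpy as np

def vector_norms(x, q=2.0):
    """l_0, l_q and l_infinity vector norms as defined in the Notation paragraph."""
    l0 = np.count_nonzero(x)
    lq = np.sum(np.abs(x) ** q) ** (1.0 / q)
    linf = np.max(np.abs(x))
    return l0, lq, linf

def matrix_norms(A):
    """Entrywise l_{0,0}, l_{1,1}, l_{inf,inf} and Frobenius norms of a matrix."""
    l00 = np.count_nonzero(A)
    l11 = np.sum(np.abs(A))
    linfinf = np.max(np.abs(A))
    fro = np.sqrt(np.sum(np.abs(A) ** 2))
    return l00, l11, linfinf, fro
```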
2 Related Work
There exist several lines of research for differential network analysis. One natural procedure is to estimate the two networks (i.e., two precision matrices) respectively by existing estimators such as graphical Lasso [12] and node-wise regression [25]. Another family of methods jointly estimates the two networks by assuming that they share common structural patterns and therefore uses joint likelihood maximization with group lasso penalty or group bridge penalty [7, 8, 14]. Based on the estimated precision matrices, the differential graph can be obtained by calculating their difference. However, both of these two types of methods suffer from the drawback that they need to estimate twice the number of parameters, and hence require roughly doubled observations to ensure the estimation accuracy. In order to address this drawback, some methods are proposed to estimate the difference of matrices directly [38, 35, 22, 11]. For example, [38] proposed a Dantzig selector type estimator for estimating the difference of the precision matrices directly. [35] proposed a D-Trace loss [37] based estimator for the difference of the precision matrices. Compared with [38, 35], our estimator is advantageous in the following aspects: (1) our model relaxes the Gaussian assumption by representing each network as a transelliptical distribution, while [38, 35] are restricted to Gaussian distribution. Thus, our model is more general and robust; and (2) by employing nonconvex penalty, our estimator achieves a sharper statistical rate than theirs. Rather than the Gaussian graphical model or its semiparametric extension, [22, 11] studied the estimation of change in the dependency structure between two high dimensional Ising models.
3 Semiparametric Differential Graph Models
In this section, we will first review the transelliptical distribution and present our semiparametric differential graph model. Then we will present the estimator for the differential graph, followed by an introduction to the nonconvex penalty.
3.1 Transelliptical Distribution
To briefly review the transelliptical distribution, we begin with the definition of elliptical distribution.
Definition 3.1 (Elliptical distribution). Let $\mu \in \mathbb{R}^d$ and $\Sigma^* \in \mathbb{R}^{d \times d}$ with $\mathrm{rank}(\Sigma^*) = q \le d$. A random vector $X \in \mathbb{R}^d$ follows an elliptical distribution, denoted by $EC_d(\mu, \Sigma^*, \xi)$, if it can be represented as $X = \mu + \xi A U$, where $A$ is a deterministic matrix satisfying $A^\top A = \Sigma^*$, $U$ is a random vector uniformly distributed on the unit sphere in $\mathbb{R}^q$, and $\xi \perp U$ is a random variable.
Motivated by the extension from the Gaussian distribution to the nonparanormal distribution [21], [20] proposed a semiparametric extension of the elliptical distribution, which is called the transelliptical distribution.
Definition 3.2 (Transelliptical distribution). A random vector $X = (X_1, X_2, \ldots, X_d)^\top \in \mathbb{R}^d$ is transelliptical, denoted by $TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, if there exists a set of monotone univariate functions $f_1, \ldots, f_d$ and a nonnegative random variable $\xi$, such that $(f_1(X_1), \ldots, f_d(X_d))^\top$ follows an elliptical distribution $EC_d(0, \Sigma^*, \xi)$.
3.2 Kendall’s tau Statistic
In the semiparametric setting, the Pearson's sample covariance matrix can be inconsistent in estimating $\Sigma^*$. Given $n$ independent observations $X_1, \ldots, X_n$, where $X_i = (X_{i1}, \ldots, X_{id})^\top \sim TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, [20] proposed a rank-based estimator, the Kendall's tau statistic, to estimate $\Sigma^*$, due to its invariance under monotonic marginal transformations. The Kendall's tau estimator is defined as
$$\hat\tau_{jk} = \frac{2}{n(n-1)} \sum_{1 \le i < i' \le n} \mathrm{sign}\big[(X_{ij} - X_{i'j})(X_{ik} - X_{i'k})\big]. \qquad (3.1)$$
It has been shown that $\hat\tau_{jk}$ is an unbiased estimator of $\tau_{jk} = \frac{2}{\pi}\arcsin(\Sigma^*_{jk})$ [20], and the correlation matrix $\Sigma^*$ can be estimated by $\hat\Sigma = [\hat\Sigma_{jk}] \in \mathbb{R}^{d \times d}$, where
$$\hat\Sigma_{jk} = \sin\Big(\frac{\pi}{2}\hat\tau_{jk}\Big). \qquad (3.2)$$
We use $T^*$ to denote the matrix with entries $\tau_{jk}$ and $\hat T$ the matrix with entries $\hat\tau_{jk}$, for $j, k = 1, \ldots, d$.
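As a concrete illustration, the rank-based estimate in (3.1)–(3.2) can be computed as in the following minimal NumPy sketch (our own code, not the authors' implementation); for continuous data without ties the pairwise statistic agrees with `scipy.stats.kendalltau` applied to each pair of columns.

```python
import numpy as np

def kendall_tau_matrix(X):
    """Pairwise Kendall's tau statistics (3.1) for an n x d data matrix X."""
    n, d = X.shape
    T = np.eye(d)
    iu = np.triu_indices(n, k=1)            # all pairs i < i'
    for j in range(d):
        for k in range(j + 1, d):
            dj = (X[:, None, j] - X[None, :, j])[iu]
            dk = (X[:, None, k] - X[None, :, k])[iu]
            tau = 2.0 * np.sum(np.sign(dj * dk)) / (n * (n - 1))
            T[j, k] = T[k, j] = tau
    return T

def latent_correlation(X):
    """Plug-in estimate of the latent correlation matrix via the sine transform (3.2)."""
    return np.sin(0.5 * np.pi * kendall_tau_matrix(X))
```

The double loop is adequate for sample sizes in the low hundreds, as considered in the experiments later; faster pairwise implementations exist but are not needed here.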
3.3 Latent Differential Graph Models and the Estimator
Now we are ready to formulate our differential graph model. Assume that $d$-dimensional random vectors $X$ and $Y$ satisfy $X \sim TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $Y \sim TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$. The differential graph is defined to be the difference of the two latent precision matrices,
$$\Delta^* = \Theta^*_Y - \Theta^*_X, \qquad (3.3)$$
where $\Theta^*_X = \Sigma^{*-1}_X$ and $\Theta^*_Y = \Sigma^{*-1}_Y$. It immediately implies
$$\Sigma^*_X \Delta^* \Sigma^*_Y - (\Sigma^*_X - \Sigma^*_Y) = 0, \quad \text{and} \quad \Sigma^*_Y \Delta^* \Sigma^*_X - (\Sigma^*_X - \Sigma^*_Y) = 0. \qquad (3.4)$$
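To see why (3.4) holds, note that $\Sigma^*_X \Delta^* \Sigma^*_Y = \Sigma^*_X(\Sigma^{*-1}_Y - \Sigma^{*-1}_X)\Sigma^*_Y = \Sigma^*_X - \Sigma^*_Y$, and symmetrically for the second relation. The small NumPy check below (our own illustration, with arbitrary symmetric positive definite matrices standing in for the latent correlation matrices) verifies the identity numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(d):
    """A random symmetric positive definite matrix, used only as a stand-in here."""
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

d = 5
Sx, Sy = random_spd(d), random_spd(d)
Delta = np.linalg.inv(Sy) - np.linalg.inv(Sx)      # Delta* = Theta*_Y - Theta*_X

# both relations in (3.4) reduce to Sigma*_X - Sigma*_Y
print(np.allclose(Sx @ Delta @ Sy, Sx - Sy))       # True
print(np.allclose(Sy @ Delta @ Sx, Sx - Sy))       # True
```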
Given i.i.d. copies $X_1, \ldots, X_{n_X}$ of $X$, and i.i.d. copies $Y_1, \ldots, Y_{n_Y}$ of $Y$, without loss of generality we assume $n_X = n_Y = n$, and we denote the Kendall's tau correlation matrices defined in (3.2) as $\hat\Sigma_X$ and $\hat\Sigma_Y$. Following (3.4), a reasonable procedure for estimating $\Delta^*$ is to solve the following equation for $\Delta$:
$$\frac{1}{2}\hat\Sigma_X \Delta \hat\Sigma_Y + \frac{1}{2}\hat\Sigma_Y \Delta \hat\Sigma_X - (\hat\Sigma_X - \hat\Sigma_Y) = 0, \qquad (3.5)$$
where we add up the two equations in (3.4) and replace the latent population correlation matrices $\Sigma^*_X, \Sigma^*_Y$ with the Kendall's tau estimators $\hat\Sigma_X, \hat\Sigma_Y$. Note that (3.5) is a Z-estimator [30], which can be translated into an M-estimator by noticing that $\frac{1}{2}\hat\Sigma_X \Delta \hat\Sigma_Y + \frac{1}{2}\hat\Sigma_Y \Delta \hat\Sigma_X - (\hat\Sigma_X - \hat\Sigma_Y)$ can be seen as the score function of the following quasi log likelihood function
$$\ell(\Delta) = \frac{1}{2}\,\mathrm{tr}(\Delta \hat\Sigma_Y \Delta \hat\Sigma_X) - \mathrm{tr}\big(\Delta(\hat\Sigma_X - \hat\Sigma_Y)\big). \qquad (3.6)$$
Let $S = \mathrm{supp}(\Delta^*)$; in this paper, we assume that $\Delta^*$ is sparse, i.e., $|S| \le s$ with $s > 0$. Based on (3.6), we propose to estimate $\Delta^*$ by the following M-estimator with non-convex penalty
$$\hat\Delta = \mathop{\mathrm{argmin}}_{\Delta \in \mathbb{R}^{d \times d}} \ \frac{1}{2}\,\mathrm{tr}(\Delta \hat\Sigma_Y \Delta \hat\Sigma_X) - \mathrm{tr}\big(\Delta(\hat\Sigma_X - \hat\Sigma_Y)\big) + G_\lambda(\Delta), \qquad (3.7)$$
where $\lambda > 0$ is a regularization parameter and $G_\lambda$ is a decomposable nonconvex penalty function, i.e., $G_\lambda(\Delta) = \sum_{j,k=1}^d g_\lambda(\Delta_{jk})$, such as the smoothly clipped absolute deviation (SCAD) penalty [10] or the minimax concave penalty (MCP) [36]. The key property of the nonconvex penalty is that it can avoid over-penalization when the magnitude is very large. It has been shown in [10, 36, 33] that the nonconvex penalty is able to alleviate the estimation bias and attain a refined statistical rate of convergence. The nonconvex penalty $g_\lambda(\beta)$ can be further decomposed as the sum of the $\ell_1$ penalty and a concave component $h_\lambda(\beta)$, i.e., $g_\lambda(\beta) = \lambda|\beta| + h_\lambda(\beta)$. Take the MCP penalty for example. The corresponding $g_\lambda(\beta)$ and $h_\lambda(\beta)$ are defined as follows:
$$g_\lambda(\beta) = \lambda \int_0^{|\beta|} \Big(1 - \frac{z}{\lambda b}\Big)_+ \, dz, \quad \text{for any } \beta \in \mathbb{R},$$
where $\lambda > 0$ is the regularization parameter and $b > 0$ is a fixed parameter, and
$$h_\lambda(\beta) = -\frac{\beta^2}{2b}\,\mathbb{1}(|\beta| \le b\lambda) + \Big(\frac{b\lambda^2}{2} - \lambda|\beta|\Big)\,\mathbb{1}(|\beta| > b\lambda).$$
In Section 4, we will show that the above family of nonconvex penalties satisfies certain common regularity conditions on $g_\lambda(\beta)$ as well as its concave component $h_\lambda(\beta)$.
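In closed form, the MCP integral equals $\lambda|\beta| - \beta^2/(2b)$ for $|\beta| \le b\lambda$ and $b\lambda^2/2$ otherwise. The sketch below (our own, for illustration only) implements $g_\lambda$ and its concave part $h_\lambda$ and checks the decomposition $g_\lambda(\beta) = \lambda|\beta| + h_\lambda(\beta)$ numerically.

```python
import numpy as np

def mcp(beta, lam, b):
    """MCP penalty g_lambda(beta): closed form of lam * int_0^{|beta|} (1 - z/(b*lam))_+ dz."""
    a = np.abs(beta)
    return np.where(a <= b * lam, lam * a - a ** 2 / (2.0 * b), 0.5 * b * lam ** 2)

def mcp_concave_part(beta, lam, b):
    """Concave component h_lambda with g_lambda(beta) = lam*|beta| + h_lambda(beta)."""
    a = np.abs(beta)
    return np.where(a <= b * lam, -a ** 2 / (2.0 * b), 0.5 * b * lam ** 2 - lam * a)

# sanity check of the decomposition on a grid
beta = np.linspace(-3, 3, 601)
lam, b = 0.5, 3.0
assert np.allclose(mcp(beta, lam, b), lam * np.abs(beta) + mcp_concave_part(beta, lam, b))
```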
We will show in the next section that when the parameters of the nonconvex penalty are appropriately chosen, (3.7) is an unconstrained convex optimization problem. Thus it can be solved by proximal gradient descent [4] very efficiently. In addition, it is easy to check that the estimator $\hat\Delta$ from (3.7) is symmetric, so it does not need the symmetrizing process adopted in [38], which can undermine the estimation accuracy.
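A minimal proximal gradient sketch for (3.7) is given below (our own illustration; the function names, step-size rule, fixed iteration count and default $b$ are simplistic choices of ours, not the authors' settings). The gradient of the smooth part is $\frac{1}{2}(\hat\Sigma_X \Delta \hat\Sigma_Y + \hat\Sigma_Y \Delta \hat\Sigma_X) - (\hat\Sigma_X - \hat\Sigma_Y)$, and the MCP proximal step is applied entrywise.

```python
import numpy as np

def mcp_prox(Z, lam, b, t):
    """Entrywise proximal operator of t * g_lambda for the MCP penalty (assumes b > t)."""
    A = np.abs(Z)
    shrunk = np.sign(Z) * (A - lam * t) / (1.0 - t / b)
    return np.where(A <= lam * t, 0.0, np.where(A <= b * lam, shrunk, Z))

def ldgm_mcp(Sx, Sy, lam, b=3.0, step=None, n_iter=500):
    """Proximal gradient descent for the MCP-penalized quasi log likelihood in (3.7)."""
    d = Sx.shape[0]
    if step is None:
        # 1/L, where L = ||Sx||_2 * ||Sy||_2 is a Lipschitz constant of the smooth gradient
        step = 1.0 / (np.linalg.norm(Sx, 2) * np.linalg.norm(Sy, 2))
    Delta = np.zeros((d, d))
    for _ in range(n_iter):
        grad = 0.5 * (Sx @ Delta @ Sy + Sy @ Delta @ Sx) - (Sx - Sy)
        Delta = mcp_prox(Delta - step * grad, lam, b, step)
    return Delta
```

With symmetric inputs $\hat\Sigma_X, \hat\Sigma_Y$ and a symmetric initial point, every iterate remains symmetric, which is consistent with the symmetry of $\hat\Delta$ noted above.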
4 Main Theory
In this section, we present our main theories. Let $S = \mathrm{supp}(\Delta^*)$ be the support of the true differential graph. We introduce the following oracle estimator of $\Delta^*$:
$$\hat\Delta_O = \mathop{\mathrm{argmin}}_{\mathrm{supp}(\Delta) \subseteq S} \ \ell(\Delta), \qquad (4.1)$$
where $\ell(\Delta) = \frac{1}{2}\,\mathrm{tr}(\Delta \hat\Sigma_Y \Delta \hat\Sigma_X) - \mathrm{tr}\big(\Delta(\hat\Sigma_X - \hat\Sigma_Y)\big)$. The oracle estimator $\hat\Delta_O$ is not a practical estimator, since we do not know the true support in practice. An estimator is said to have the oracle property if it is identical to the oracle estimator $\hat\Delta_O$ under certain conditions. We will show that our estimator enjoys the oracle property under a mild condition.
We first lay out some assumptions that are required throughout our analysis.
Assumption 4.1. There exist constants $\kappa_1, \kappa_2 > 0$ such that $\kappa_1 \le \lambda_{\min}(\Sigma^*_X) \le \lambda_{\max}(\Sigma^*_X) \le 1/\kappa_1$ and $\kappa_2 \le \lambda_{\min}(\Sigma^*_Y) \le \lambda_{\max}(\Sigma^*_Y) \le 1/\kappa_2$. The true correlation matrices have bounded $\ell_\infty$ norm, i.e., $\|\Sigma^*_X\|_\infty \le \kappa_X$, $\|\Sigma^*_Y\|_\infty \le \kappa_Y$, where $\kappa_X, \kappa_Y > 0$ are constants. And the true precision matrices have bounded matrix $\ell_1$ norm, i.e., $\|\Theta^*_X\|_1 \le \theta_X$ and $\|\Theta^*_Y\|_1 \le \theta_Y$, where $\theta_X, \theta_Y > 0$ are constants.
The first part of Assumption 4.1 requires that the smallest eigenvalues of the correlation matrices $\Sigma^*_X, \Sigma^*_Y$ are bounded away from zero, and their largest eigenvalues are finite. This assumption is commonly imposed in the literature for the analysis of graphical models [21, 27].
Assumption 4.2. The true difference matrix $\Delta^* = \Sigma^{*-1}_Y - \Sigma^{*-1}_X$ has $s$ nonzero entries, i.e., $\|\Delta^*\|_{0,0} \le s$, and has bounded $\ell_{1,1}$ norm, i.e., $\|\Delta^*\|_{1,1} \le M$, where $M > 0$ does not depend on $d$.
Assumption 4.2 requires the differential graph to be sparse. This is reasonable in differential network analysis where the networks only vary slightly under different conditions.
The next assumption is about regularity conditions on the nonconvex penalty $g_\lambda(\beta)$. Recall that $g_\lambda(\beta)$ can be written as $g_\lambda(\beta) = \lambda|\beta| + h_\lambda(\beta)$.
Assumption 4.3. $g_\lambda(\beta)$ and its concave component $h_\lambda(\beta)$ satisfy:
(a) There exists a constant $\nu$ such that $g'_\lambda(\beta) = 0$ for $|\beta| \ge \nu\lambda > 0$.
(b) There exists a constant $\zeta \ge 0$ such that $h_\lambda(\beta) + \frac{\zeta}{2}\beta^2$ is convex.
(c) $h_\lambda(\beta)$ and $h'_\lambda(\beta)$ pass through the origin, i.e., $h_\lambda(0) = h'_\lambda(0) = 0$.
(d) $h'_\lambda(\beta)$ is bounded, i.e., $|h'_\lambda(\beta)| \le \lambda$ for any $\beta$.
Similar assumptions have been made in [23, 33]. Note that condition (b) in Assumption 4.3 is weaker than the smoothness condition in [33], since here it does not require $h_\lambda(\beta)$ to be twice differentiable. Assumption 4.3 holds for a variety of nonconvex penalty functions including MCP and SCAD. In particular, the MCP penalty satisfies Assumption 4.3 with $\nu = b$ and $\zeta = 1/b$. Furthermore, according to condition (b), if $\zeta$ is smaller than the modulus of restricted strong convexity of $\ell(\Delta)$, (3.7) will become a convex optimization problem, even though $G_\lambda(\Delta)$ is nonconvex. Take MCP for example: this can be achieved by choosing a sufficiently large $b$ in MCP such that $\zeta$ is small enough.
Now we are ready to present our main theories. We first show that under a large magnitude condition on the nonzero entries of the true differential graph $\Delta^*$, our estimator attains a faster convergence rate, which matches the minimax rate in the classical regime.
Theorem 4.4. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $G_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. If the nonzero entries of $\Delta^*$ satisfy $\min_{(j,k) \in S} |\Delta^*_{jk}| \ge \nu\lambda + C\theta_X^2\theta_Y^2\kappa_X\kappa_Y M\sqrt{\log s/n}$, then for the estimator $\hat\Delta$ in (3.7) with the regularization parameter satisfying $\lambda = 2CM\sqrt{\log d/n}$ and $\zeta \le \kappa_1\kappa_2/2$, we have that
$$\big\|\hat\Delta - \Delta^*\big\|_{\infty,\infty} \le 2\sqrt{10\pi}\,\theta_X^2\theta_Y^2\kappa_X\kappa_Y M \sqrt{\frac{\log s}{n}}$$
holds with probability at least $1 - 2/s$. Furthermore, we have that
$$\big\|\hat\Delta - \Delta^*\big\|_F \le \frac{C_1 M}{\kappa_1\kappa_2}\sqrt{\frac{s}{n}}$$
holds with probability at least $1 - 3/s$, where $C_1$ is an absolute constant.
Remark 4.5. Theorem 4.4 suggests that under the large magnitude assumption, the statistical rate of our estimator is $O(\sqrt{s/n})$ in terms of Frobenius norm. This is faster than the rate $O(\sqrt{s\log d/n})$ in [38], which matches the minimax lower bound for sparse differential graph estimation. Note that our faster rate is not contradictory to the minimax lower bound, because we restrict ourselves to a smaller class of differential graphs, where the magnitude of the nonzero entries is sufficiently large.
We further show that our estimator achieves oracle property under mild conditions.
Theorem 4.6. Under the same conditions as Theorem 4.4, for the estimator $\hat\Delta$ in (3.7) and the oracle estimator $\hat\Delta_O$ in (4.1), we have with probability at least $1 - 3/s$ that $\hat\Delta = \hat\Delta_O$, which further implies $\mathrm{supp}(\hat\Delta) = \mathrm{supp}(\hat\Delta_O) = \mathrm{supp}(\Delta^*)$.
Theorem 4.6 suggests that our estimator is identical to the oracle estimator in (4.1) with high probability, when the nonzero entries in $\Delta^*$ satisfy $\min_{(j,k) \in S} |\Delta^*_{jk}| \ge \nu\lambda + C\theta_X^2\theta_Y^2\kappa_X\kappa_Y M\sqrt{\log s/n}$. This condition is optimal up to the logarithmic factor $\sqrt{\log s}$.
Now we turn to the general case when the nonzero entries of $\Delta^*$ have both large and small magnitudes. Define $S^c = \{(j,k): j,k = 1,\ldots,d\} \setminus S$, $S_1 = \{(j,k) \in S : |\Delta^*_{jk}| > \nu\lambda\}$, and $S_2 = \{(j,k) \in S : |\Delta^*_{jk}| \le \nu\lambda\}$. Denote $|S_1| = s_1$ and $|S_2| = s_2$. Clearly, we have $s = s_1 + s_2$.
Theorem 4.7. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $G_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. For the estimator in (3.7) with the regularization parameter $\lambda = 2CM\sqrt{\log d/n}$ and $\zeta \le \kappa_1\kappa_2/4$, we have that
$$\big\|\hat\Delta - \Delta^*\big\|_F \le \frac{16\sqrt{3\pi}\,M}{\kappa_1\kappa_2}\sqrt{\frac{s_1}{n}} + \frac{10\pi M C}{\kappa_1\kappa_2}\sqrt{\frac{s_2\log d}{n}}$$
holds with probability at least $1 - 3/s_1$, where $C$ is an absolute constant.
Remark 4.8. Theorem 4.7 indicates that when the large magnitude condition does not hold, our estimator is still able to attain a faster rate. Specifically, for those nonzero entries of $\Delta^*$ with large magnitude, the estimation error bound in terms of Frobenius norm is $O(\sqrt{s_1/n})$, which is the same as the bound in Theorem 4.4. For those nonzero entries of $\Delta^*$ with small magnitude, the estimation error is $O(\sqrt{s_2\log d/n})$, which matches the convergence rate in [38]. Overall, our estimator obtains a refined convergence rate of $O(\sqrt{s_1/n} + \sqrt{s_2\log d/n})$, which is faster than [38]. In particular, if $s_2 = 0$, the refined convergence rate in Theorem 4.7 reduces to the faster rate in Theorem 4.4.
5 Experiments
In this section, we test our method on both synthetic and real world data. We conducted experiments for our estimator using both SCAD and MCP penalties. We did not find any significant difference in the results and thus we only report the results of our estimator with the MCP penalty. To choose the tuning parameters $\lambda$ and $b$, we adopt 5-fold cross-validation. Denoting our estimator with MCP penalty by LDGM-MCP, we compare it with the following methods: (1) SepGlasso: estimating the latent precision matrices separately using graphical Lasso and Kendall's tau correlation matrices [20], followed by calculating their difference; (2) DPM: directly estimating the differential precision matrix [38]. In addition, we also test the differential graph model with $\ell_{1,1}$ penalty, denoted as LDGM-L1. Note that LDGM-L1 is a special case of our method, since the $\ell_{1,1}$ norm penalty is a special case of the MCP penalty when $b = \infty$. The LDGM-MCP and LDGM-L1 estimators are obtained by solving the proximal gradient descent algorithm [4]. The implementation of the DPM estimator is obtained from the authors' website, and the SepGlasso estimator is implemented by graphical Lasso.
5.1 Simulations
We first show the results on synthetic data. Since the transelliptical distribution includes the Gaussian distribution, it is natural to show that our approach also works well for the latter one. We consider the dimension settings $n = 100, d = 100$ and $n = 200, d = 400$ respectively. Specifically, data are generated as follows: (1) For the Gaussian distribution, we generate data $\{X_i\}_{i=1}^n \sim N(0, \Sigma^*_X)$ and $\{Y_i\}_{i=1}^n \sim N(0, \Sigma^*_Y)$ with precision matrices $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ generated by the huge package (footnote 1). (2) For the transelliptical distribution, we consider the following generating scheme: $\{X_i\}_{i=1}^n \sim TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$, $\{Y_i\}_{i=1}^n \sim TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$, where $\xi \sim \chi_d$, $f_1^{-1}(\cdot) = \ldots = f_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^3$ and $g_1^{-1}(\cdot) = \ldots = g_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^{1/2}$. The latent precision matrices $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ are generated in the same way as for the Gaussian data. For both Gaussian and transelliptical differential graph models, we consider two settings for the individual graph structures: (1) both $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ have "random" structures; (2) $\Sigma^{*-1}_X$ has a "band" structure, $\Sigma^{*-1}_Y$ has a "random" structure.
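For concreteness, the following NumPy sketch (our own script, not the authors' simulation code) draws samples from the transelliptical scheme above, assuming $\xi \sim \chi_d$ and the stated marginal transforms; the identity matrices are placeholders for the latent correlation matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def elliptical_samples(Sigma, n, rng):
    """n draws of xi * A @ U with A^T A = Sigma, U uniform on the unit sphere, xi ~ chi_d."""
    d = Sigma.shape[0]
    A = np.linalg.cholesky(Sigma).T                       # A^T A = Sigma
    G = rng.standard_normal((n, d))
    U = G / np.linalg.norm(G, axis=1, keepdims=True)
    xi = np.sqrt(rng.chisquare(df=d, size=n))             # chi distribution with d dof
    return xi[:, None] * (U @ A)

def transelliptical_samples(Sigma, n, rng, power):
    """Apply the monotone marginal transform f^{-1}(z) = sign(z)|z|^power coordinatewise."""
    Z = elliptical_samples(Sigma, n, rng)
    return np.sign(Z) * np.abs(Z) ** power

Sigma_X = Sigma_Y = np.eye(10)                            # placeholder latent correlations
X = transelliptical_samples(Sigma_X, n=100, rng=rng, power=3.0)   # f_j^{-1}(z) = sign(z)|z|^3
Y = transelliptical_samples(Sigma_Y, n=100, rng=rng, power=0.5)   # g_j^{-1}(z) = sign(z)|z|^{1/2}
```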
Given an estimator $\hat\Delta$, we define the true positive and true negative rates of $\hat\Delta$ as
$$\mathrm{TP} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat\Delta_{jk} \neq 0 \text{ and } \Delta^*_{jk} \neq 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} \neq 0)}, \qquad \mathrm{TN} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat\Delta_{jk} = 0 \text{ and } \Delta^*_{jk} = 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} = 0)}.$$
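Equivalently, given the estimated and true difference matrices, the two rates can be computed as in this short sketch (ours; it assumes both zero and nonzero entries are present in $\Delta^*$).

```python
import numpy as np

def support_recovery_rates(Delta_hat, Delta_star):
    """True positive and true negative rates of the estimated support of Delta."""
    est, true = (Delta_hat != 0), (Delta_star != 0)
    tp = np.sum(est & true) / np.sum(true)
    tn = np.sum(~est & ~true) / np.sum(~true)
    return tp, tn
```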
The receiver operating characteristic (ROC) curves for transelliptical differential graph models are shown in Figure 1, which report the performances of the different methods on support recovery. The ROC curves were plotted by averaging the results over 10 repetitions. From Figure 1 we can see that our estimator (LDGM-MCP) outperforms the other methods in all settings. In addition, LDGM-L1, as a special case of our estimator, also performs better than DPM and SepGlasso, although it is inferior to LDGM-MCP because the MCP penalty can correct the bias in the estimation and achieve a faster rate of convergence. Note that SepGlasso's performance is poor since it highly depends on the sparsity of both individual graphs. When n > 100, the DPM method failed to output the solution in one day and thus no result was presented. This computational burden is also stated in their paper. We use the Frobenius norm $\|\hat\Delta - \Delta^*\|_F$ and infinity norm $\|\hat\Delta - \Delta^*\|_{\infty,\infty}$ of the estimation errors to evaluate the performances of the different methods in estimation. The results averaged over 10 replicates for the transelliptical differential graph are summarized in Tables 1 and 2 respectively. Our estimator also achieves smaller error than the other baselines in all settings. Due to the space limit, we defer the experiment results for the Gaussian differential graph model to the appendix.
Footnote 1: Available at http://cran.r-project.org/web/packages/huge
[Figure 1: ROC curves (TP versus 1-TN) for transelliptical differential graph models of all the 4 methods (SepGlasso, DPM, LDGM-L1, LDGM-MCP). Panels: (a) Setting 1: n=100, d=100; (b) Setting 2: n=100, d=100; (c) Setting 1: n=200, d=400; (d) Setting 2: n=200, d=400. There are two settings of graph structure. Note that DPM is not scalable to d = 400.]
5.2 Experiments on Real World Data
We applied our approach to the same gene expression data used in [38], which were collected from patients with stage III or IV ovarian cancer. [29] identified six molecular subtypes of ovarian cancer in this data, labeled C1 through C6. In particular, the C1 subtype was found to have much shorter survival times, and was characterized by differential expression of genes associated with stromal and immune cell types. In this experiment, we intended to investigate whether the C1 subtype was also associated with the genetic differential networks. The subjects were divided into two groups: Group 1 with $n_1 = 78$ patients containing C1 subtype, and Group 2 with $n_2 = 113$ patients containing C2 through C6 subtypes. We analyzed two pathways from the KEGG pathway database [16, 17] respectively. In each pathway, we applied different methods to determine whether there is any difference in the conditional dependency relationships of the gene expression levels between the aforementioned Group 1 and Group 2. Two genes were connected in the differential network if their conditional dependency relationship given the others changed in either magnitude or sign. In order to obtain a clear view of the differential graph, we only plotted genes whose conditional dependency with others changed between the two groups. To interpret the results, the genes associated with more edges in the differential networks were considered to be more important.
Figure 2 shows the results of estimation for the differential graph of the TGF-β pathway, where the number of genes $d = 80$ is greater than $n_1$, the sample size of Group 1. LDGM-MCP identified two important genes, COMP and THBS2, both of which have been suggested to be related to resistance to platinum-based chemotherapy in epithelial ovarian cancer by [24]. LDGM-L1 suggested that COMP was important, and DPM also suggested COMP and THBS2. Separate estimation (SepGlasso) gave a relatively dense network, which made it hard to say which genes are more important.
Figure 3 shows the results for the Apoptosis pathway, where the number of genes $d = 87$ is also greater than $n_1$. LDGM-MCP indicated that TNFSF10 and BIRC3 were the most important. Indeed, both TNFSF10 and BIRC3 have been widely studied for use as a therapeutic target in cancer [5, 32]. LDGM-L1 and DPM also suggested TNFSF10 and BIRC3 were important. The results of LDGM-MCP, LDGM-L1 and DPM are comparable. In order to overcome the nonsparsity issue encountered in the TGF-β experiment, the SepGlasso estimator was thresholded more than the other methods. However, it still performed poorly and identified the wrong gene CSF2RB.
6 Conclusions
In this paper, we propose a semiparametric differential graph model and an estimator for the differential graph based on quasi likelihood maximization. We employ a nonconvex penalty in our estimator, which results in a faster rate for parameter estimation than existing methods. We also prove that the proposed estimator achieves oracle property under a mild condition. Experiments on both synthetic and real world data further support our theory. Acknowledgments We would like to thank the anonymous reviewers for their helpful comments. Research was supported by NSF grant III-1618948. | 1. What is the focus and contribution of the paper on comparing pairs of graph structures?
2. What are the strengths and weaknesses of the proposed approach in contrast to previously developed approaches?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the limitations regarding Assumption 4.2, and how would the authors respond to these concerns?
5. Can the authors provide more intuition for the significance of Assumption 4.3?
6. How do the simulation results in section 5.1 explore performance with respect to different sparsity patterns of \Sigma_x, \Sigma_y, and delta?
7. How convincing is the real-world data application in section 5.2, and what are some potential issues with the validation process?
8. How might the method be improved for handling noisy and messy real-world data?
9. What is the motivation behind the choice of functions f (line 230), and how do the results for LDGM-MCP seem to recapitulate those from the DPM reference [32]?
10. Are there any minor typos or errors in language/flow of writing that could be addressed for better readability and transparency? | Review | Review
The authors present a new approach to comparing pairs of graph structures. In contrast to previously developed approaches, the proposed approach 1) estimates the difference-graph directly and in a sparse way, thus enabling a more sparse representation of the difference graph (as opposed to estimation of the two graphs first then comparing), 2) relaxes the Gaussian assumption typically made by other methods by using transelliptical distributions, and 3) uses a family of non-convex penalty functions that allow efficient inference. The article is well organized and the results are very thoroughly presented (over 10 pages of proofs).
- In the introduction, the authors state that estimation of two graphs first, then comparison to estimate the difference, requires estimating twice as many parameters (and therefore twice as many observations) to ensure estimation accuracy. Is this true for the class of methods that regularize the difference between two graphs (e.g. penalize differences in edge weights between two graphs)?
- I find Assumption 4.2 to be quite strong and unrealistic. Can we expect the number of non-zero entries in the difference-matrix to not depend on the dimensionality of the data (d)? E.g. would we expect the number of non-zero entries between graphs of dimension 100 to be on the same order as differences between graphs of 1,000,000 dimensions?
- It would be nice if the authors provided some intuition for the significance of Assumption 4.3.
- It was unclear to me how sparse the matrices are that were generated from simulations in section 5.1. Also, what is the motivation behind the choice of functions f (line 230)?
- The simulations in 5.1 need to explore performance with respect to different sparsity patterns of \Sigma_x, \Sigma_y and delta, since the proposed method is geared towards sparse estimation of the difference-graph. Also, larger graphs (e.g. 1,000 nodes, 10,000 nodes) would be useful as well (since the proposed approach is more likely to be used on high dimensional data).
- I found the real-world data application of 5.2 unconvincing for the following reasons. The pathway TGFB was hand-chosen, likely because of its extremely high importance in cancer. However, first, the validation involved verifying that genes involved in differential edges are important in the same kind of cancer. TGF-B is heavily implicated in many, many cancers, and so you would expect many of the genes in that pathway a priori to be involved in cancer to begin with (without even looking at the graph). Second, the validation is of the genes (the nodes), not of the differential edges, which is what the method is inferring. Third, it is well known that hubs in a network tend to be more important (and also more likely to be co-opted by cancer). Hubs in either graph (or both) are also more likely to have a differential edge detected by the method (you can't have a differential edge adjacent to a node if that node does not have any edge in either graph). So, the method is more likely to pull down a hub node by chance anyway (e.g. a competitive method that just identifies nodes based on degrees in either network may be just as good at pulling down relevant genes). Fourth, there is no mention of the false-negative rate (how many ovarian cancer genes were missed by the method/did not have a differential edge). Better validation would have been to identify some genes that are unique to the subtypes studied and to look at their recovery.
- Figure 3 was also unconvincing for the same reason.
- Real world data is noisy and messy. It is important to record how the networks change for different bootstraps of the samples, or at least the variance over the 5 different 5-fold cross-validations at the optimal values of the tuning parameters \lambda and b. (The optimal values of \lambda and b should be stated as well.)
- The interpretation of the differential networks is unclear, as the edges between genes in the differential networks do not match direct interactions in the KEGG pathways and their relationships are not discussed in the article. Is the motivation simply to find individual genes which are differentially expressed? It is also unclear why the gene CSF2RB found by SepGlasso (p. 8, lines 277-9) is "wrong" rather than just "different". (Searching the NCBI entry for this gene gives a reference to an article on colon cancer.)
- Has the data been normalized (i.e. assumed to be Gaussian), or modeled using a transelliptical distribution (as discussed on p. 1, lines 21-24)? Are the assumptions on the data satisfied, or if not (d > n?), why is it justifiable to use the method on this dataset?
- The results for LDGM-MCP seem to recapitulate those from the DPM reference [32]. However, only one gene (e.g. THBS2 rather than COMP and THBS2 for the TGF-beta pathway) appears in the results for DPM in the manuscript, whereas both appear in [32]. Why is there a discrepancy?
- In searching the KEGG Apoptosis pathway from (http://www.genome.jp/dbget-bin/www_bget?hsa04210), I could not find gene PRKAR2B or its Entrez gene ID 5577 on the page.
- Given that in the ovarian cancer dataset both n and d are small (approx. 100), the two precision matrices and their difference could be calculated in this case, for comparison with LDGM-MCP. (The only advantage to estimating the differential network in the manuscript is said to be the factor of 2 in the number of parameters to be estimated, p. 2 lines 43-44 and lines 73-74.) However, I was wondering if there is also an improvement in stability when only the difference is calculated?
- I have not seen the Kendall tau statistic used for estimation of the sample covariance in application papers involving gene expression profiles. Its use seems necessary in the proofs (Lemma C.1), and this should be stated if it is indeed the case.
- It would be very useful to provide an implementation of the method (or even better, the source code) to ensure transparency and reproducibility of the results.
- There are some minor typos and errors in language/flow of writing. Some of them are listed below.
  - "S = supp(\Delta^*)" appears on p. 4, line 122 but "supp" is only defined on p. 4, line 141. The definition could be included in the "Notation" section at the end of the Introduction, which was useful.
  - "theories" should be replaced with "theorems" (e.g. p. 4 line 141, p. 5 line 174).
  - p. 4, line 156: "||\Delta||^*_{0,0}" should be replaced by "||\Delta^*||_{0,0}".
  - p. 6, line 227: "by the huge package" (or "by the huge R package" to specify that it is in R), and the version used should also be mentioned, for reproducibility.
  - p. 6, line 232: "mdoels" should be replaced by "models".
NIPS | Title
Semiparametric Differential Graph Models
Abstract
In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network. We propose a novel graphical model, namely Latent Differential Graph Model, where the networks under two different conditions are represented by two semiparametric elliptical distributions respectively, and the variation of these two networks (i.e., differential graph) is characterized by the difference between their latent precision matrices. We propose an estimator for the differential graph based on quasi likelihood maximization with nonconvex regularization. We show that our estimator attains a faster statistical rate in parameter estimation than the state-of-the-art methods, and enjoys the oracle property under mild conditions. Thorough experiments on both synthetic and real world data support our theory.
1 Introduction
Network analysis has been widely used in various fields to characterize the interdependencies between a group of variables, such as molecular entities including RNAs and proteins in genetic networks [3]. Networks are often modeled as graphical models. For instance, in gene regulatory network, the gene expressions are often assumed to be jointly Gaussian. A Gaussian graphical model [18] is then employed by representing different genes as nodes and the regulation between genes as edges in the graph. In particular, two genes are conditionally independent given the others if and only if the corresponding entry of the precision matrix of the multivariate normal distribution is zero. Nevertheless, the Gaussian distribution assumption, is too restrictive in practice. For example, the gene expression values from high-throughput method, even after being normalized, do not follow a normal distribution [19, 26]. This leads to the inaccuracy in describing the dependency relationships among genes. In order to address this problem, various semiparametric Gaussian graphical models [21, 20] are proposed to relax the Gaussian distribution assumption.
On the other hand, it is well-known that the interactions in many types of networks can change under various environmental and experimental conditions [1]. Take the genetic networks for example, two genes may be positively conditionally dependent under some conditions but negatively conditionally dependent under others. Therefore, in many cases, more attention is attracted not by a particular individual network but rather by whether and how the network varies with genetic and environmental alterations [6, 15]. This gives rise to differential networking analysis, which has emerged as an important method in differential expression analysis of gene regulatory networks [9, 28].
In this paper, in order to conduct differential network analysis, we propose a Latent Differential Graph Model (LDGM), where the networks under two different conditions are represented by two transelliptical distributions [20], i.e., TEd(⌃⇤X , ⇠; f1, . . . , fd) and TEd(⌃ ⇤ Y , ⇠; g1, . . . , gd) respectively. Here TEd(⌃⇤X , ⇠; f1, . . . , fd) denotes a d-dimensional transelliptical distribution with latent correlation matrix ⌃⇤X 2 Rd⇥d, and will be defined in detail in Section 3. More specifically, the connectivity of the individual network is encoded by the latent precision matrix (e.g., ⇥⇤X = (⌃ ⇤ X)
1) of the corresponding transelliptical distribution, such that [⇥⇤X ]jk 6= 0 if and only if there is an edge between the j-th node and the k-th node in the network. And the differential graph is defined as
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the difference between the two latent precision matrices ⇤ = ⇥⇤Y ⇥⇤X . Our goal is to estimate
⇤ based on observations sampled from TEd(⌃⇤X , ⇠; f1, . . . , fd) and TEd(⌃ ⇤ Y , ⇠; g1, . . . , gd). A simple procedure is estimating ⇥⇤X and ⇥ ⇤ Y separately, followed by calculating their difference. However, it requires estimating 2d2 parameters (i.e., ⇥⇤X and ⇥ ⇤ Y ), while our ultimate goal is only estimating d2 parameters (i.e., ⇤). In order to overcome this problem, we assume that the difference of the two latent precision matrices, i.e., ⇤ is sparse and propose to directly estimate it by quasi likelihood maximization with nonconvex penalty. The nonconvex penalty is introduced in order to correct the intrinsic estimation bias incurred by convex penalty [10, 36]. We prove that, when the true differential graph is s-sparse, our estimator attains O( p s 1 /n+ p s 2
log d/n) convergence rate in terms of Frobenius norm, which is faster than the estimation error bound O( p s log d/n) of `
1,1
penalty based estimator in [38]. Here n is the sample size, s 1 is the number of entries in ⇤ with large magnitude, s
2 is the number of entries with small magnitude and s = s 1 + s 2 . We show that our method enjoys the oracle property under a very mild condition. Thorough numerical experiments on both synthetic and real-world data back up our theory.
The remainder of this paper is organized as follows: we review the related work in Section 2. We introduce the proposed model and the non-convex penalty in Section 3, as well as the proposed estimator. In Section 4, we present our main theories for estimation in semiparametric differential graph models. Experiments on both synthetic and real world data are provided in Section 5. Section 6 concludes with discussion.
Notation For x = (x 1 , . . . , xd)> 2 Rd and 0 < q < 1, we define the `0, `q and `1 vector norms as kxk
0
= Pd i=1 1(xi 6= 0), kxkq = Pd i=1 |xi|q 1/q, and kxk1 = max1id |xi|, where 1(·)
is the indicator function. For A = (Aij) 2 Rd⇥d, we define the matrix `0,0, `1,1, `1,1 and `F norms as: kAk 0,0 = Pd i,j=1 1 (Aij 6= 0), kAk1,1 = Pd
i,j=1 |Aij |, kAk1,1 = max1i,jd |Aij |, and kAkF = qP ij |Aij |2. The induced norm for matrix is defined as kAkq = maxkxkq=1 kAxkq , for 0 < q < 1. For a set of tuples S, AS denotes the set of numbers [A (jk)](jk)2S , and vec(S) is the vectorized index set of S.
2 Related Work
There exist several lines of research for differential network analysis. One natural procedure is to estimate the two networks (i.e., two precision matrices) respectively by existing estimators such as graphical Lasso [12] and node-wise regression [25]. Another family of methods jointly estimates the two networks by assuming that they share common structural patterns and therefore uses joint likelihood maximization with group lasso penalty or group bridge penalty [7, 8, 14]. Based on the estimated precision matrices, the differential graph can be obtained by calculating their difference. However, both of these two types of methods suffer from the drawback that they need to estimate twice the number of parameters, and hence require roughly doubled observations to ensure the estimation accuracy. In order to address this drawback, some methods are proposed to estimate the difference of matrices directly [38, 35, 22, 11]. For example, [38] proposed a Dantzig selector type estimator for estimating the difference of the precision matrices directly. [35] proposed a D-Trace loss [37] based estimator for the difference of the precision matrices. Compared with [38, 35], our estimator is advantageous in the following aspects: (1) our model relaxes the Gaussian assumption by representing each network as a transelliptical distribution, while [38, 35] are restricted to Gaussian distribution. Thus, our model is more general and robust; and (2) by employing nonconvex penalty, our estimator achieves a sharper statistical rate than theirs. Rather than the Gaussian graphical model or its semiparametric extension, [22, 11] studied the estimation of change in the dependency structure between two high dimensional Ising models.
3 Semiparametric Differential Graph Models
In this section, we will first review the transelliptical distribution and present our semiparametric differential graph model. Then we will present the estimator for differential graph, followed by the introduction to nonconvex penalty.
3.1 Transelliptical Distribution
To briefly review the transelliptical distribution, we begin with the definition of elliptical distribution.
Definition 3.1 (Elliptical distribution). Let µ 2 Rd and ⌃⇤ 2 Rd⇥d with rank(⌃⇤) = q d. A random vector X 2 Rd follows an elliptical distribution, denoted by ECd(µ,⌃⇤, ⇠), if it can be represented as X = µ + ⇠AU, where A is a deterministic matrix satisfying A>A = ⌃⇤, U is a random vector uniformly distributed on the unit sphere in Rq , and ⇠ ? U is a random variable. Motivated by the extension from Gaussian distribution to nonparanormal distribution [21], [20] proposed a semiparametric extension of elliptical distribution, which is called transelliptical distribution. Definition 3.2 (Transelliptical distribution). A random vector X = (X
1 , X 2 , . . . , Xd)> 2 Rd is transelliptical, denoted by TEd(⌃⇤, ⇠; f1, . . . , fd), if there exists a set of monotone univariate functions f
1 , . . . , fd and a nonnegative random variable ⇠, such that (f1(X1), . . . , fd(Xd))> follows an elliptical distribution ECd(0,⌃⇤, ⇠).
3.2 Kendall’s tau Statistic
In semiparametric setting, the Pearson’s sample covariance matrix can be inconsistent in estimating ⌃⇤. Given n independent observations X
1 , ...,Xn, where Xi = (Xi1, ..., Xid)> ⇠ TEd(⌃⇤, ⇠; f1, . . . , fd), [20] proposed a rank-based estimator, the Kendall’s tau statistic, to estimate ⌃⇤, due to its invariance under monotonic marginal transformations. The Kendall’s tau estimator is defined as
b⌧jk = 2 n(n 1) X
1i<i0n sign
⇥ Xij Xi0j Xik Xi0k ⇤ . (3.1)
It has been shown that b⌧jk is an unbiased estimator of ⌧jk = 2/⇡ arcsin(⌃⇤jk) [20], and the correlation matrix ⌃⇤ can be estimated by b⌃ = [b⌃jk] 2 Rd⇥d, where
b ⌃jk = sin ⇣⇡ 2 b⌧jk ⌘ . (3.2)
We use T⇤ to denote the matrix with entries ⌧jk and bT with entries b⌧jk, for j, k = 1, . . . d.
3.3 Latent Differential Graph Models and the Estimator
Now we are ready to formulate our differential graph model. Assume that d dimensional random vectors X and Y satisfy X ⇠ TEd(⌃⇤X , ⇠; f1, . . . , fd) and Y ⇠ TEd(⌃⇤Y , ⇠; g1, . . . , gd). The differential graph is defined to be the difference of the two latent precision matrices,
⇤ = ⇥ ⇤ Y ⇥⇤X , (3.3)
where ⇥⇤X = ⌃ ⇤ 1 X and ⇥ ⇤ Y = ⌃ ⇤ 1 Y . It immediately implies
⌃ ⇤ X ⇤ ⌃ ⇤ Y (⌃⇤X ⌃⇤Y ) = 0, and ⌃⇤Y ⇤⌃⇤X (⌃⇤X ⌃⇤Y ) = 0. (3.4)
Given i.i.d. copies X 1 , . . . ,XnX of X , and i.i.d. copies Y1, . . . ,YnY of Y , without loss of generality, we assume nX = nY = n, and we denote the Kendall’s tau correlation matrices defined in (3.2) as b ⌃X and b⌃Y . Following (3.4), a reasonable procedure for estimating ⇤ is to solve the following equation for
1
2
b ⌃X b ⌃Y + 1
2
b ⌃Y b ⌃X (b⌃X b⌃Y ) = 0, (3.5)
where we add up the two equations in (3.4) and replace the latent population correlation matrices ⌃
⇤ X , ⌃ ⇤ Y with the Kendall’s tau estimators b⌃X , b⌃Y . Note that (3.5) is a Z-estimator [30], which can be translated into a M-estimator, by noticing that 1/2b⌃X b⌃Y + 1/2b⌃Y b⌃X (b⌃X b⌃Y ) can be seen as a score function of the following quasi log likelihood function
`( ) = 1
2
tr( b ⌃Y b ⌃X) tr ( b ⌃X b⌃Y ) . (3.6)
Let S = supp( ⇤), in this paper, we assume that ⇤ is sparse, i.e., |S| s with s > 0. Based on (3.6), we propose to estimate ⇤ by the following M-estimator with non-convex penalty
b = argmin
2Rd⇥d
1
2
tr( b ⌃Y b ⌃X) tr ( b ⌃X b⌃Y ) + G ( ), (3.7)
where > 0 is a regularization parameter and G is a decomposable nonconvex penalty function, i.e., G ( ) = Pd j,k=1 g ( jk), such as smoothly clipped absolute deviation (SCAD) penalty [10] or minimax concave penalty (MCP) [36]. The key property of the nonconvex penalty is that it can avoid over-penalization when the magnitude is very large. It has been shown in [10, 36, 33] that the nonconvex penalty is able to alleviate the estimation bias and attain a refined statistical rate of convergence. The nonconvex penalty g ( ) can be further decomposed as the sum of the `1 penalty and a concave component h ( ), i.e., g ( ) = | |+ h ( ). Take MCP penalty for example. The corresponding g ( ) and h ( ) are defined as follows
g ( ) =
Z | |
0
✓ 1 z
b
◆
+
dz, for any 2 R,
where > 0 is the regularization parameter and b > 0 is a fixed parameter, and
h ( ) = 2 2b 1(| | b ) +
✓ b 2
2
| | ◆ 1(| | > b ).
In Section 4, we will show that the above family of nonconvex penalties satisfies certain common regularity conditions on g ( ) as well as its concave component h ( ).
We will show in the next section that when the parameters of the nonconvex penalty are appropriately chosen, (3.7) is an unconstrained convex optimization problem. Thus it can be solved by the proximal gradient descent [4] very efficiently. In addition, it is easy to check that the estimator b from (3.7) is symmetric. So it does not need the symmetrizing process adopted in [38], which can undermine the estimation accuracy.
4 Main Theory
In this section, we present our main theories. Let S = supp( ⇤) be the support of the true differential graph. We introduce the following oracle estimator of ⇤:
b O = argmin
supp( )✓S `( ), (4.1)
where `( ) = 1/2 tr( b⌃Y b⌃X) tr ( b ⌃X b⌃Y ) . The oracle estimator b O is not a practical estimator, since we do not know the true support in practice. An estimator is said to have the oracle property, if it is identical to the oracle estimator b O under certain conditions. We will show that our estimator enjoys the oracle property under a mild condition.
We first lay out some assumptions that are required through our analysis. Assumption 4.1. There exist constants
1 , 2 > 0 such that 1 min (⌃ ⇤ X) max(⌃⇤X) 1/1
and 2 min (⌃ ⇤ Y ) max(⌃⇤Y ) 1/2. The true covariance matrices have bounded `1 norm, i.e.,k⌃⇤Xk1 X , k⌃⇤Y k1 Y , where X , Y > 0 are constants. And the true precision matrices have bounded matrix ` 1
-norm, i.e., k⇥⇤Xk1 ✓X and k⇥⇤Y k1 ✓Y , where ✓X , ✓Y > 0 are constants. The first part of Assumption 4.1 requires that the smallest eigenvalues of the correlation ⌃⇤X ,⌃ ⇤ Y are bounded below from zero, and their largest eigenvalues are finite. This assumptions is commonly imposed in the literature for the analysis of graphical models [21, 27].
Assumption 4.2. The true difference matrix ⇤ = ⌃⇤ 1Y ⌃⇤ 1X has s nonzero entries, i.e.,k ⇤k 0,0 s and has bounded `1,1 norm, i.e., k ⇤k1,1 M , where M > 0 does not depend on d.
Assumption 4.2 requires the differential graph to be sparse. This is reasonable in differential network analysis where the networks only vary slightly under different conditions.
The next assumption is about regularity conditions on the nonconvex penalty g ( ). Recall that g ( ) can be written as g ( ) = | |+ h ( ). Assumption 4.3. g ( ) and its concave component h ( ) satisfy:
(a) There exists a constant ⌫ such that g0 ( ) = 0, for | | ⌫ > 0. (b) There exists a constant ⇣ 0 such that h ( ) + ⇣ /2 · 2 is convex.
(c) h ( ) and h0 ( ) pass through the origin, i.e., h (0) = h 0 (0) = 0.
(d) h0 ( ) is bounded, i.e., |h0 ( )| for any . Similar assumptions have been made in [23, 33]. Note that condition (b) in Assumption 4.3 is weaker than the smoothness condition in [33], since here it does not require h ( ) to be twice differentiable. Assumption 4.3 holds for a variety of nonconvex penalty functions including MCP and SCAD. In particular, MCP penalty satisfies Assumption 4.3 with ⌫ = b and ⇣ = 1/b. Furthermore, according to condition (b), if ⇣ is smaller than the modulus of the restricted strong convexity for `( ), (3.7) will become a convex optimization problem, even though G ( ) is nonconvex. Take MCP for example, this can be achieved by choosing a sufficiently large b in MCP such that ⇣ is small enough.
Now we are ready to present our main theories. We first show that under a large magnitude condition on nonzero entries of the true differential graph ⇤, our estimator attains a faster convergence rate, which matches the minimax rate in the classical regime. Theorem 4.4. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty G ( ) satisfies conditions in Assumption 4.3. If nonzero entries of ⇤ satisfy min
(j,k)2S | ⇤jk| ⌫ + C✓2X✓ 2 Y X Y M p log s/n, for the estimator b in (3.7) with the regularization parameter satisfying
= 2CM p log d/n and ⇣ 12/2, we have that
b ⇤ 1,1 2 p 10⇡✓2X✓ 2 Y X Y M
r log s
n
holds with probability at least 1 2/s. Furthermore, we have that
k b ⇤kF C1M 1 2
r s
n
holds with probability at least 1 3/s, where C 1 is an absolute constant. Remark 4.5. Theorem 4.4 suggests that under the large magnitude assumption, the statistical rate of our estimator is O( p s/n) in terms of Frobenius norm. This is faster than the rate O( p s log d/n) in [38] which matches the minimax lower bound for sparse differential graph estimation. Note that our faster rate is not contradictory to the minimax lower bound, because we restrict ourselves to a smaller class of differential graphs, where the magnitude of the nonzero entries is sufficiently large.
We further show that our estimator achieves oracle property under mild conditions.
Theorem 4.6. Under the same conditions of Theorem 4.4, for the estimator b in (3.7) and the oracle estimator b O in (4.1), we have with probability at least 1 3/s that b = b O, which further implies supp( b ) = supp( b O) = supp( ⇤ ).
Theorem 4.6 suggests that our estimator is identical to the oracle estimator in (4.1) with high probability, when the nonzero entries in ⇤ satisfy min (j,k)2S | ⇤jk| ⌫ + C✓2X✓2Y X Y M p
log s/n. This condition is optimal up to the logarithmic factor p log s.
Now we turn to the general case when the nonzero entries of ⇤ have both large and small magnitudes. Define Sc = {(j, k) : j, k = 1, . . . , d} \ S, S
1 = {(j, k) 2 S : | ⇤jk| > ⌫}, and S2 = {(j, k) 2 S : | ⇤jk| ⌫}. Denote |S1| = s1 and |S2| = s2. Clearly, we have s = s1 + s2. Theorem 4.7. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty G ( ) satisfies conditions in Assumption 4.3. For the estimator in (3.7) with the regularization parameter = 2CM p log d/n and ⇣ 12/4, we have that
k b ⇤kF 16 p 3⇡M
1 2
r s 1
n +
10⇡MC
1 2
r s 2 log d
n
holds with probability at least 1 3/s 1 , where C is an absolute constant. Remark 4.8. Theorem 4.7 indicates that when the large magnitude condition does not hold, our estimator is still able to attain a faster rate. Specifically, for those nonzero entries of ⇤ with large magnitude, the estimation error bound in terms of Frobenius norm is O( p s 1 /n), which is the same
as the bound in Theorem 4.4. For those nonzero entries of ⇤ with small magnitude, the estimation error is O( p s 2
log d/n), which matches the convergence rate in [38]. Overall, our estimator obtains a refined rate of convergence rate O( p s 1 /n+ p s 2
log d/n), which is faster than [38]. In particular, if s⇤
2
= 0, the refined convergence rate in Theorem 4.7 reduces to the faster rate in Theorem 4.4.
5 Experiments
In this section, we test our method on both synthetic and real world data. We conducted experiments for our estimator using both SCAD and MCP penalties. We did not find any significant difference in the results and thus we only report the results of our estimator with MCP penalty. To choose the tuning parameters and b, we adopt 5-fold cross-validation. Denoting our estimator with MCP penalty by LDGM-MCP, we compare it with the following methods: (1) SepGlasso: estimating the latent precision matrices separately using graphical Lasso and Kendall’s tau correlation matrices [20], followed by calculating their difference; (2) DPM: directly estimating differential precision matrix [38]. In addition, we also test differential graph model with `
1,1 penalty, denoted as LDGM-L1. Note that LDGM-L1 is a special case of our method, since `
1,1 norm penalty is a special case of MCP penalty when b = 1. The LDGM-MCP and LDGM-L1 estimators are obtained by solving the proximal gradient descent algorithm [4]. The implementation of DPM estimator is obtained from the author’s website, and the SepGlasso estimator is implemented by graphical Lasso.
5.1 Simulations
We first show the results on synthetic data. Since the transelliptical distribution includes Gaussian distribution, it is natural to show that our approach also works well for the latter one. We consider the dimension settings n = 100, d = 100 and n = 200, d = 400 respectively. Specifically, data are generated as follows: (1) For the Gaussian distribution, we generate data {Xi}ni=1 ⇠ N(0,⌃⇤X) and {Yi}ni=1 ⇠ N(0,⌃⇤Y ) with precision matrices ⌃⇤ 1X and ⌃⇤ 1Y generated by huge package 1. (2) For the transelliptical distribution, we consider the following generating scheme: {Xi}ni=1 ⇠ TEd(⌃⇤X , ⇠; f1, . . . , fd), {Yi}ni=1 ⇠ TEd(⌃⇤Y , ⇠; g1, . . . , gd), where ⇠ ⇠ d, f 11 (·) = . . . = f 1d = sign(·)| · |3 and g 11 (·) = . . . = g 1d (·) = sign(·)| · |1/2. The latent precision matrices ⌃⇤ 1X and ⌃⇤ 1Y are generated in the same way as the Gaussian data. For both Gaussian and transelliptical differential graph mdoels, we consider two settings for individual graph structures: (1) both ⌃⇤ 1X and ⌃
⇤ 1 Y have "random" structures; (2) ⌃ ⇤ 1 X has a "band" structure, ⌃ ⇤ 1 Y has a "random" structure.
Given an estimator b , we define the true positive and negative rates of b as
TP = Pd j,k=1 1( b jk 6= 0 and ⇤jk 6= 0)Pd
j,k=1 1( ⇤ jk 6= 0)
, TN = Pd j,k=1 1( b jk = 0 and ⇤jk = 0)Pd
j,k=1 1( ⇤ jk = 0)
.
The receiver operating characteristic (ROC) curves for transelliptical differential graph models are shown in Figure 1, which report the performances of different methods on support recovery. The ROC curves were plotted by averaging the results over 10 repetitions. From Figure 1 we can see our estimator (LDGM-MCP) outperforms other methods in all settings. In addition, LDGM-L1 as a special case of our estimator also performs better than DPM and SepGlasso, although it is inferior to LDGM-MCP because the MCP penalty can correct the bias in the estimation and achieve faster rate of convergence. Note that SepGlasso’s performace is poor since it highly depends on the sparsity of both individual graphs. When n > 100, the DPM method failed to output the solution in one day and thus no result was presented. This computational burden is also stated in their paper. We use the Frobenius norm k b ⇤kF and infinity norm k b ⇤k1,1 of estimation errors to evaluate the performances of different methods in estimation. The results averaged over 10 replicates for transelliptical differential graph are summarized in Tables 1 and 2 respectively. Our estimator also achieves smaller error than the other baselines in all settings. Due to the space limit, we defer the experiment results for Gaussian differential graph model to the appendix.
1Available on http://cran.r-project.org/web/packages/huge
1-TN 0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso DPM LDGM-L1 LDGM-MCP
(a) Setting 1: n=100,d=100 1-TN
0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso DPM LDGM-L1 LDGM-MCP
(b) Setting 2: n=100,d=100 1-TN
0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso LDGM-L1 LDGM-MCP
(c) Setting 1: n=200,d=400 1-TN
0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso LDGM-L1 LDGM-MCP
(d) Setting 2:n=200,d=400
Figure 1: ROC curves for transelliptical differential graph models of all the 4 methods. There are two settings of graph structure. Note that DPM is not scalable to d = 400.
5.2 Experiments on Real World Data
We applied our approach to the same gene expression data used in [38], which were collected from patients with stage III or IV ovarian cancer. [29] identified six molecular subtypes of ovarian cancer in this data, labeled C1 through C6. In particular, the C1 subtype was found to have much shorter survival times, and was characterized by differential expression of genes associated with stromal and immune cell types. In this experiment, we intended to investigate whether the C1 subtype was also associated with the genetic differential networks. The subjects were divided into two groups: Group 1 with n
1 = 78 patients containing C1 subtype, and Group 2 with n 2 = 113 patients containing C2 through C6 subtypes. We analyzed two pathways from the KEGG pathway database [16, 17] respectively. In each pathway, we applied different methods to determine whether there is any difference in the conditional dependency relationships of the gene expression levels between the aforementioned Group 1 and Group 2. Two genes were connected in the differential network if their conditional dependency relationship given the others changed in either magnitude or sign. In order to obtain a clear view of the differential graph, we only plotted genes whose conditional dependency with others changed between the two groups. To interpret the results, the genes associated with more edges in the differential networks were considered to be more important.
Figure 2 shows the results of estimation for the differential graph of the TGF- pathway, where the number of genes d = 80 is greater than n
1 , the sample size of Group 1. LDGM-MCP identified two important genes, COMP and THBS2, both of which have been suggested to be related to resistance to platinum-based chemotherapy in epithelial ovarian cancer by [24]. LDGM-L1 suggested that COMP
was important, and DPM also suggested COMP and THBS2. Separate estimation (SepGlasso) gave a relatively dense network, which made it hard to say which genes are more important.
Figure 3 shows the results for the Apoptosis pathway, where the number of genes d = 87 is also greater than n
1 . LDGM-MCP indicated that TNFSF10 and BIRC3 were the most important. Indeed, both TNFSF10 and BRIC3 have been widely studied for use as a therapeutic target in cancer [5, 32]. LDGM-L1 and DPM also suggested TNFSF10 and BRIC3 were important. The results of LDGMMCP, LDGM-L1 and DPM are comparable. In order to overcome the nonsparsity issue encountered in TGF- experiment, the SepGlasso estimator was thresholded more than the other methods. However, it still performed poorly and identified the wrong gene CSF2RB.
6 Conclusions
In this paper, we propose a semiparametric differential graph model and an estimator for the differential graph based on quasi likelihood maximization. We employ a nonconvex penalty in our estimator, which results in a faster rate for parameter estimation than existing methods. We also prove that the proposed estimator achieves the oracle property under a mild condition. Experiments on both synthetic and real world data further support our theory.
Acknowledgments: We would like to thank the anonymous reviewers for their helpful comments. Research was supported by NSF grant III-1618948. | 1. What is the main contribution of the paper in the field of graph difference estimation?
2. What are the strengths of the proposed method, particularly in terms of its ability to achieve a fast rate of convergence?
3. What are the weaknesses of the paper regarding its assumptions and experimental comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any recent works related to graph difference estimation that the authors could consider for a final draft? | Review | Review
This paper presents and analyzes a method for estimating the difference network of closely related graphs, corresponding to the difference of two precision matrices. This is done using a generalization of the gaussian the transelliptical distribution. The estimator is based on a non-convex regularized quasi-loglikelihood. It is shown that under specific assumptions this non-convex objective becomes convex allowing for the statistical rate to be analyzed. It is shown that in this regime a fast rate of convergence is achieved. The problem of graph difference estimation is interesting and very recently quite popular. Indeed many reasonable regularizers in this setting lead to non-convex penalties. This paper rigorously shows a powerful penalty and that it achieves the oracle properties under mild conditions. I would be interested to obtain more intuition on when the assumptions 4.3 hold. The synthetic and real experiments demonstrate that this method works well compared to natural baselines. The lasso nodewise regression methods of (meinhausen 2006) has been shown to perform better than graphical lasso for estimation of the edge structure in some cases. It would be interesting to see this compared as well if possible. One thing that can be made more clear in the experiments is how the parameters of the estimator are chosen in practice, as this is critical for the guarantees of the method to hold. Overall this is a strong paper that fills an important Gap. There is also some recent literature on this in this years ICML and on arxiv. I don't expect the authors to have addressed these but it would be interesting to consider for a final draft. Fazayeli Generalized Direct Change Estimation in Ising Model Structure. ICML 2016 |
NIPS | Title
Semiparametric Differential Graph Models
Abstract
In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network. We propose a novel graphical model, namely Latent Differential Graph Model, where the networks under two different conditions are represented by two semiparametric elliptical distributions respectively, and the variation of these two networks (i.e., differential graph) is characterized by the difference between their latent precision matrices. We propose an estimator for the differential graph based on quasi likelihood maximization with nonconvex regularization. We show that our estimator attains a faster statistical rate in parameter estimation than the state-of-the-art methods, and enjoys the oracle property under mild conditions. Thorough experiments on both synthetic and real world data support our theory.
1 Introduction
Network analysis has been widely used in various fields to characterize the interdependencies between a group of variables, such as molecular entities including RNAs and proteins in genetic networks [3]. Networks are often modeled as graphical models. For instance, in gene regulatory networks, the gene expressions are often assumed to be jointly Gaussian. A Gaussian graphical model [18] is then employed by representing different genes as nodes and the regulation between genes as edges in the graph. In particular, two genes are conditionally independent given the others if and only if the corresponding entry of the precision matrix of the multivariate normal distribution is zero. Nevertheless, the Gaussian distribution assumption is too restrictive in practice. For example, the gene expression values from high-throughput methods, even after being normalized, do not follow a normal distribution [19, 26]. This leads to inaccuracy in describing the dependency relationships among genes. In order to address this problem, various semiparametric Gaussian graphical models [21, 20] have been proposed to relax the Gaussian distribution assumption.

On the other hand, it is well-known that the interactions in many types of networks can change under various environmental and experimental conditions [1]. Take genetic networks for example: two genes may be positively conditionally dependent under some conditions but negatively conditionally dependent under others. Therefore, in many cases, more attention is drawn not to a particular individual network but rather to whether and how the network varies with genetic and environmental alterations [6, 15]. This gives rise to differential networking analysis, which has emerged as an important method in differential expression analysis of gene regulatory networks [9, 28].
In this paper, in order to conduct differential network analysis, we propose a Latent Differential Graph Model (LDGM), where the networks under two different conditions are represented by two transelliptical distributions [20], i.e., $TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$ respectively. Here $TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ denotes a $d$-dimensional transelliptical distribution with latent correlation matrix $\Sigma^*_X \in \mathbb{R}^{d\times d}$, and will be defined in detail in Section 3. More specifically, the connectivity of the individual network is encoded by the latent precision matrix (e.g., $\Theta^*_X = (\Sigma^*_X)^{-1}$) of the corresponding transelliptical distribution, such that $[\Theta^*_X]_{jk} \neq 0$ if and only if there is an edge between the $j$-th node and the $k$-th node in the network. And the differential graph is defined as
the difference between the two latent precision matrices $\Delta^* = \Theta^*_Y - \Theta^*_X$. Our goal is to estimate $\Delta^*$ based on observations sampled from $TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$. A simple procedure is estimating $\Theta^*_X$ and $\Theta^*_Y$ separately, followed by calculating their difference. However, it requires estimating $2d^2$ parameters (i.e., $\Theta^*_X$ and $\Theta^*_Y$), while our ultimate goal is only estimating $d^2$ parameters (i.e., $\Delta^*$). In order to overcome this problem, we assume that the difference of the two latent precision matrices, i.e., $\Delta^*$, is sparse and propose to directly estimate it by quasi likelihood maximization with nonconvex penalty. The nonconvex penalty is introduced in order to correct the intrinsic estimation bias incurred by convex penalty [10, 36]. We prove that, when the true differential graph is $s$-sparse, our estimator attains $O(\sqrt{s_1/n} + \sqrt{s_2\log d/n})$ convergence rate in terms of Frobenius norm, which is faster than the estimation error bound $O(\sqrt{s\log d/n})$ of the $\ell_{1,1}$ penalty based estimator in [38]. Here $n$ is the sample size, $s_1$ is the number of entries in $\Delta^*$ with large magnitude, $s_2$ is the number of entries with small magnitude, and $s = s_1 + s_2$. We show that our method enjoys the oracle property under a very mild condition. Thorough numerical experiments on both synthetic and real-world data back up our theory.
The remainder of this paper is organized as follows: we review the related work in Section 2. We introduce the proposed model and the non-convex penalty in Section 3, as well as the proposed estimator. In Section 4, we present our main theories for estimation in semiparametric differential graph models. Experiments on both synthetic and real world data are provided in Section 5. Section 6 concludes with discussion.
Notation. For $x = (x_1, \ldots, x_d)^\top \in \mathbb{R}^d$ and $0 < q < \infty$, we define the $\ell_0$, $\ell_q$ and $\ell_\infty$ vector norms as $\|x\|_0 = \sum_{i=1}^d \mathbb{1}(x_i \neq 0)$, $\|x\|_q = \big(\sum_{i=1}^d |x_i|^q\big)^{1/q}$, and $\|x\|_\infty = \max_{1\le i\le d} |x_i|$, where $\mathbb{1}(\cdot)$ is the indicator function. For $A = (A_{ij}) \in \mathbb{R}^{d\times d}$, we define the matrix $\ell_{0,0}$, $\ell_{1,1}$, $\ell_{\infty,\infty}$ and Frobenius norms as $\|A\|_{0,0} = \sum_{i,j=1}^d \mathbb{1}(A_{ij} \neq 0)$, $\|A\|_{1,1} = \sum_{i,j=1}^d |A_{ij}|$, $\|A\|_{\infty,\infty} = \max_{1\le i,j\le d} |A_{ij}|$, and $\|A\|_F = \sqrt{\sum_{ij} |A_{ij}|^2}$. The induced matrix norm is defined as $\|A\|_q = \max_{\|x\|_q = 1} \|Ax\|_q$ for $0 < q < \infty$. For a set of tuples $S$, $A_S$ denotes the set of numbers $[A_{(jk)}]_{(jk)\in S}$, and $\mathrm{vec}(S)$ is the vectorized index set of $S$.
2 Related Work
There exist several lines of research for differential network analysis. One natural procedure is to estimate the two networks (i.e., two precision matrices) respectively by existing estimators such as graphical Lasso [12] and node-wise regression [25]. Another family of methods jointly estimates the two networks by assuming that they share common structural patterns and therefore uses joint likelihood maximization with group lasso penalty or group bridge penalty [7, 8, 14]. Based on the estimated precision matrices, the differential graph can be obtained by calculating their difference. However, both of these two types of methods suffer from the drawback that they need to estimate twice the number of parameters, and hence require roughly doubled observations to ensure the estimation accuracy. In order to address this drawback, some methods are proposed to estimate the difference of matrices directly [38, 35, 22, 11]. For example, [38] proposed a Dantzig selector type estimator for estimating the difference of the precision matrices directly. [35] proposed a D-Trace loss [37] based estimator for the difference of the precision matrices. Compared with [38, 35], our estimator is advantageous in the following aspects: (1) our model relaxes the Gaussian assumption by representing each network as a transelliptical distribution, while [38, 35] are restricted to Gaussian distribution. Thus, our model is more general and robust; and (2) by employing nonconvex penalty, our estimator achieves a sharper statistical rate than theirs. Rather than the Gaussian graphical model or its semiparametric extension, [22, 11] studied the estimation of change in the dependency structure between two high dimensional Ising models.
3 Semiparametric Differential Graph Models
In this section, we will first review the transelliptical distribution and present our semiparametric differential graph model. Then we will present the estimator for differential graph, followed by the introduction to nonconvex penalty.
3.1 Transelliptical Distribution
To briefly review the transelliptical distribution, we begin with the definition of elliptical distribution.
Definition 3.1 (Elliptical distribution). Let $\mu \in \mathbb{R}^d$ and $\Sigma^* \in \mathbb{R}^{d\times d}$ with $\mathrm{rank}(\Sigma^*) = q \le d$. A random vector $X \in \mathbb{R}^d$ follows an elliptical distribution, denoted by $EC_d(\mu, \Sigma^*, \xi)$, if it can be represented as $X = \mu + \xi A U$, where $A$ is a deterministic matrix satisfying $A^\top A = \Sigma^*$, $U$ is a random vector uniformly distributed on the unit sphere in $\mathbb{R}^q$, and $\xi \perp U$ is a random variable.

Motivated by the extension from the Gaussian distribution to the nonparanormal distribution [21], [20] proposed a semiparametric extension of the elliptical distribution, which is called the transelliptical distribution.

Definition 3.2 (Transelliptical distribution). A random vector $X = (X_1, X_2, \ldots, X_d)^\top \in \mathbb{R}^d$ is transelliptical, denoted by $TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, if there exists a set of monotone univariate functions $f_1, \ldots, f_d$ and a nonnegative random variable $\xi$, such that $(f_1(X_1), \ldots, f_d(X_d))^\top$ follows an elliptical distribution $EC_d(0, \Sigma^*, \xi)$.
3.2 Kendall’s tau Statistic
In the semiparametric setting, Pearson's sample covariance matrix can be inconsistent in estimating $\Sigma^*$. Given $n$ independent observations $X_1, \ldots, X_n$, where $X_i = (X_{i1}, \ldots, X_{id})^\top \sim TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, [20] proposed a rank-based estimator, the Kendall's tau statistic, to estimate $\Sigma^*$, due to its invariance under monotonic marginal transformations. The Kendall's tau estimator is defined as
$$\hat{\tau}_{jk} = \frac{2}{n(n-1)} \sum_{1 \le i < i' \le n} \mathrm{sign}\big[(X_{ij} - X_{i'j})(X_{ik} - X_{i'k})\big]. \qquad (3.1)$$
It has been shown that $\hat{\tau}_{jk}$ is an unbiased estimator of $\tau_{jk} = 2/\pi \arcsin(\Sigma^*_{jk})$ [20], and the correlation matrix $\Sigma^*$ can be estimated by $\hat{\Sigma} = [\hat{\Sigma}_{jk}] \in \mathbb{R}^{d\times d}$, where
$$\hat{\Sigma}_{jk} = \sin\Big(\frac{\pi}{2}\hat{\tau}_{jk}\Big). \qquad (3.2)$$
We use $T^*$ to denote the matrix with entries $\tau_{jk}$ and $\hat{T}$ the matrix with entries $\hat{\tau}_{jk}$, for $j, k = 1, \ldots, d$.
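For illustration, the rank-based estimate in (3.1)–(3.2) can be computed directly from the raw observations; the following is a minimal sketch (not the authors' implementation), assuming the $n$ observations are stored row-wise in an (n, d) numpy array:

```python
import numpy as np

def kendall_tau_correlation(X):
    """Rank-based estimate of the latent correlation matrix via (3.1)-(3.2)."""
    n, d = X.shape
    tau = np.zeros((d, d))
    # all pairs i < i' of observations
    idx_i, idx_ip = np.triu_indices(n, k=1)
    diffs = X[idx_i] - X[idx_ip]          # shape (n*(n-1)/2, d)
    for j in range(d):
        for k in range(j, d):
            s = np.sign(diffs[:, j] * diffs[:, k]).sum()
            tau[j, k] = tau[k, j] = 2.0 * s / (n * (n - 1))
    return np.sin(np.pi / 2.0 * tau)      # entrywise sin(pi/2 * tau_hat)
```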
3.3 Latent Differential Graph Models and the Estimator
Now we are ready to formulate our differential graph model. Assume that $d$-dimensional random vectors $X$ and $Y$ satisfy $X \sim TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $Y \sim TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$. The differential graph is defined to be the difference of the two latent precision matrices,
$$\Delta^* = \Theta^*_Y - \Theta^*_X, \qquad (3.3)$$
where $\Theta^*_X = \Sigma^{*-1}_X$ and $\Theta^*_Y = \Sigma^{*-1}_Y$. It immediately implies
$$\Sigma^*_X \Delta^* \Sigma^*_Y - (\Sigma^*_X - \Sigma^*_Y) = 0, \quad \text{and} \quad \Sigma^*_Y \Delta^* \Sigma^*_X - (\Sigma^*_X - \Sigma^*_Y) = 0. \qquad (3.4)$$
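The identities in (3.4) follow by expanding $\Delta^* = \Sigma^{*-1}_Y - \Sigma^{*-1}_X$; they are also easy to sanity-check numerically, as in the small sketch below (the matrices here are arbitrary positive definite examples, not part of the model):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
Sigma_X = A @ A.T + d * np.eye(d)   # positive definite
Sigma_Y = B @ B.T + d * np.eye(d)
Delta = np.linalg.inv(Sigma_Y) - np.linalg.inv(Sigma_X)   # Theta_Y - Theta_X

# both residuals in (3.4) should be numerically zero
r1 = Sigma_X @ Delta @ Sigma_Y - (Sigma_X - Sigma_Y)
r2 = Sigma_Y @ Delta @ Sigma_X - (Sigma_X - Sigma_Y)
print(np.max(np.abs(r1)), np.max(np.abs(r2)))   # both ~1e-13
```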
Given i.i.d. copies $X_1, \ldots, X_{n_X}$ of $X$, and i.i.d. copies $Y_1, \ldots, Y_{n_Y}$ of $Y$, without loss of generality we assume $n_X = n_Y = n$, and we denote the Kendall's tau correlation matrices defined in (3.2) as $\hat{\Sigma}_X$ and $\hat{\Sigma}_Y$. Following (3.4), a reasonable procedure for estimating $\Delta^*$ is to solve the following equation for $\Delta$:
$$\frac{1}{2}\hat{\Sigma}_X \Delta \hat{\Sigma}_Y + \frac{1}{2}\hat{\Sigma}_Y \Delta \hat{\Sigma}_X - (\hat{\Sigma}_X - \hat{\Sigma}_Y) = 0, \qquad (3.5)$$
where we add up the two equations in (3.4) and replace the latent population correlation matrices $\Sigma^*_X, \Sigma^*_Y$ with the Kendall's tau estimators $\hat{\Sigma}_X, \hat{\Sigma}_Y$. Note that (3.5) is a Z-estimator [30], which can be translated into an M-estimator, by noticing that $1/2\,\hat{\Sigma}_X \Delta \hat{\Sigma}_Y + 1/2\,\hat{\Sigma}_Y \Delta \hat{\Sigma}_X - (\hat{\Sigma}_X - \hat{\Sigma}_Y)$ can be seen as the score function of the following quasi log likelihood function
$$\ell(\Delta) = \frac{1}{2}\mathrm{tr}(\Delta \hat{\Sigma}_Y \Delta \hat{\Sigma}_X) - \mathrm{tr}\big(\Delta(\hat{\Sigma}_X - \hat{\Sigma}_Y)\big). \qquad (3.6)$$
Let $S = \mathrm{supp}(\Delta^*)$. In this paper, we assume that $\Delta^*$ is sparse, i.e., $|S| \le s$ with $s > 0$. Based on (3.6), we propose to estimate $\Delta^*$ by the following M-estimator with a non-convex penalty:
$$\hat{\Delta} = \mathop{\mathrm{argmin}}_{\Delta \in \mathbb{R}^{d\times d}} \frac{1}{2}\mathrm{tr}(\Delta \hat{\Sigma}_Y \Delta \hat{\Sigma}_X) - \mathrm{tr}\big(\Delta(\hat{\Sigma}_X - \hat{\Sigma}_Y)\big) + G_\lambda(\Delta), \qquad (3.7)$$
where $\lambda > 0$ is a regularization parameter and $G_\lambda$ is a decomposable nonconvex penalty function, i.e., $G_\lambda(\Delta) = \sum_{j,k=1}^d g_\lambda(\Delta_{jk})$, such as the smoothly clipped absolute deviation (SCAD) penalty [10] or the minimax concave penalty (MCP) [36]. The key property of the nonconvex penalty is that it can avoid over-penalization when the magnitude is very large. It has been shown in [10, 36, 33] that the nonconvex penalty is able to alleviate the estimation bias and attain a refined statistical rate of convergence. The nonconvex penalty $g_\lambda(t)$ can be further decomposed as the sum of the $\ell_1$ penalty and a concave component $h_\lambda(t)$, i.e., $g_\lambda(t) = \lambda|t| + h_\lambda(t)$. Take the MCP penalty for example. The corresponding $g_\lambda(t)$ and $h_\lambda(t)$ are defined as follows:
$$g_\lambda(t) = \lambda \int_0^{|t|} \Big(1 - \frac{z}{\lambda b}\Big)_+ \, dz, \quad \text{for any } t \in \mathbb{R},$$
where $\lambda > 0$ is the regularization parameter and $b > 0$ is a fixed parameter, and
$$h_\lambda(t) = -\frac{t^2}{2b}\,\mathbb{1}(|t| \le b\lambda) + \Big(\frac{b\lambda^2}{2} - \lambda|t|\Big)\mathbb{1}(|t| > b\lambda).$$
In Section 4, we will show that the above family of nonconvex penalties satisfies certain common regularity conditions on $g_\lambda(t)$ as well as its concave component $h_\lambda(t)$.
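To make the decomposition concrete, evaluating the integral above gives a closed form for the MCP penalty and its concave part; a minimal sketch (function and argument names are chosen here for illustration):

```python
import numpy as np

def mcp_penalty(t, lam, b):
    """MCP penalty g_lambda(t) = lam * integral_0^{|t|} (1 - z/(lam*b))_+ dz."""
    a = np.abs(t)
    return np.where(a <= b * lam, lam * a - a**2 / (2 * b), b * lam**2 / 2)

def mcp_concave_part(t, lam, b):
    """Concave component h_lambda(t) = g_lambda(t) - lam * |t|."""
    return mcp_penalty(t, lam, b) - lam * np.abs(t)
```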
We will show in the next section that when the parameters of the nonconvex penalty are appropriately chosen, (3.7) is an unconstrained convex optimization problem. Thus it can be solved by proximal gradient descent [4] very efficiently. In addition, it is easy to check that the estimator $\hat{\Delta}$ from (3.7) is symmetric, so it does not need the symmetrizing process adopted in [38], which can undermine the estimation accuracy.
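A minimal proximal gradient sketch for (3.7) with the MCP penalty is given below. It is not the authors' implementation: the step size rule, the stopping criterion, and the default value of b are simple choices made here, and the proximal map uses the standard closed form for MCP, which is valid when the step size is smaller than b.

```python
import numpy as np

def mcp_prox(Y, lam, b, step):
    """Entrywise proximal map of the MCP penalty (assumes step < b)."""
    out = Y.copy()
    absY = np.abs(Y)
    small = absY <= lam * step
    mid = (absY > lam * step) & (absY <= lam * b)
    out[small] = 0.0
    out[mid] = np.sign(Y[mid]) * (absY[mid] - lam * step) / (1.0 - step / b)
    # entries with |y| > lam * b are left unpenalized (out = Y there)
    return out

def ldgm_mcp(S_X, S_Y, lam, b=5.0, n_iter=500, tol=1e-6):
    """Proximal gradient descent for the estimator in (3.7).

    S_X, S_Y are Kendall's tau correlation matrix estimates; returns Delta_hat.
    """
    d = S_X.shape[0]
    Delta = np.zeros((d, d))
    # conservative step size from the largest eigenvalues of S_X and S_Y
    L = max(np.linalg.eigvalsh(S_X)[-1] * np.linalg.eigvalsh(S_Y)[-1], 1e-8)
    step = 1.0 / L
    diff = S_X - S_Y
    for _ in range(n_iter):
        # gradient of the quasi log likelihood loss (3.6)
        grad = 0.5 * (S_X @ Delta @ S_Y + S_Y @ Delta @ S_X) - diff
        Delta_new = mcp_prox(Delta - step * grad, lam, b, step)
        if np.max(np.abs(Delta_new - Delta)) < tol:
            Delta = Delta_new
            break
        Delta = Delta_new
    return Delta
```

In practice the tuning parameters would be selected, e.g., by the cross-validation scheme described in the experiments section.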
4 Main Theory
In this section, we present our main theories. Let $S = \mathrm{supp}(\Delta^*)$ be the support of the true differential graph. We introduce the following oracle estimator of $\Delta^*$:
$$\hat{\Delta}_O = \mathop{\mathrm{argmin}}_{\mathrm{supp}(\Delta) \subseteq S} \ell(\Delta), \qquad (4.1)$$
where $\ell(\Delta) = 1/2\,\mathrm{tr}(\Delta \hat{\Sigma}_Y \Delta \hat{\Sigma}_X) - \mathrm{tr}(\Delta(\hat{\Sigma}_X - \hat{\Sigma}_Y))$. The oracle estimator $\hat{\Delta}_O$ is not a practical estimator, since we do not know the true support in practice. An estimator is said to have the oracle property if it is identical to the oracle estimator $\hat{\Delta}_O$ under certain conditions. We will show that our estimator enjoys the oracle property under a mild condition.
We first lay out some assumptions that are required throughout our analysis.

Assumption 4.1. There exist constants $\kappa_1, \kappa_2 > 0$ such that $\kappa_1 \le \lambda_{\min}(\Sigma^*_X) \le \lambda_{\max}(\Sigma^*_X) \le 1/\kappa_1$ and $\kappa_2 \le \lambda_{\min}(\Sigma^*_Y) \le \lambda_{\max}(\Sigma^*_Y) \le 1/\kappa_2$. The true covariance matrices have bounded $\ell_\infty$ norm, i.e., $\|\Sigma^*_X\|_\infty \le \kappa_X$ and $\|\Sigma^*_Y\|_\infty \le \kappa_Y$, where $\kappa_X, \kappa_Y > 0$ are constants. And the true precision matrices have bounded matrix $\ell_\infty$-norm, i.e., $\|\Theta^*_X\|_\infty \le \theta_X$ and $\|\Theta^*_Y\|_\infty \le \theta_Y$, where $\theta_X, \theta_Y > 0$ are constants.

The first part of Assumption 4.1 requires that the smallest eigenvalues of the correlation matrices $\Sigma^*_X, \Sigma^*_Y$ are bounded away from zero, and their largest eigenvalues are finite. This assumption is commonly imposed in the literature for the analysis of graphical models [21, 27].
Assumption 4.2. The true difference matrix $\Delta^* = \Sigma^{*-1}_Y - \Sigma^{*-1}_X$ has $s$ nonzero entries, i.e., $\|\Delta^*\|_{0,0} \le s$, and has bounded $\ell_{1,1}$ norm, i.e., $\|\Delta^*\|_{1,1} \le M$, where $M > 0$ does not depend on $d$.
Assumption 4.2 requires the differential graph to be sparse. This is reasonable in differential network analysis where the networks only vary slightly under different conditions.
The next assumption is about regularity conditions on the nonconvex penalty $g_\lambda(t)$. Recall that $g_\lambda(t)$ can be written as $g_\lambda(t) = \lambda|t| + h_\lambda(t)$.

Assumption 4.3. $g_\lambda(t)$ and its concave component $h_\lambda(t)$ satisfy:
(a) There exists a constant $\nu$ such that $g'_\lambda(t) = 0$ for $|t| \ge \nu\lambda > 0$.
(b) There exists a constant $\zeta \ge 0$ such that $h_\lambda(t) + \zeta/2 \cdot t^2$ is convex.
(c) $h_\lambda(t)$ and $h'_\lambda(t)$ pass through the origin, i.e., $h_\lambda(0) = h'_\lambda(0) = 0$.
(d) $h'_\lambda(t)$ is bounded, i.e., $|h'_\lambda(t)| \le \lambda$ for any $t$.

Similar assumptions have been made in [23, 33]. Note that condition (b) in Assumption 4.3 is weaker than the smoothness condition in [33], since here it does not require $h_\lambda(t)$ to be twice differentiable. Assumption 4.3 holds for a variety of nonconvex penalty functions including MCP and SCAD. In particular, the MCP penalty satisfies Assumption 4.3 with $\nu = b$ and $\zeta = 1/b$. Furthermore, according to condition (b), if $\zeta$ is smaller than the modulus of the restricted strong convexity of $\ell(\Delta)$, then (3.7) will become a convex optimization problem, even though $G_\lambda(\Delta)$ is nonconvex. Take MCP for example: this can be achieved by choosing a sufficiently large $b$ in MCP such that $\zeta$ is small enough.
Now we are ready to present our main theories. We first show that under a large magnitude condition on the nonzero entries of the true differential graph $\Delta^*$, our estimator attains a faster convergence rate, which matches the minimax rate in the classical regime.

Theorem 4.4. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $G_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. If the nonzero entries of $\Delta^*$ satisfy $\min_{(j,k)\in S} |\Delta^*_{jk}| \ge \nu\lambda + C\theta_X^2\theta_Y^2\kappa_X\kappa_Y M \sqrt{\log s/n}$, then for the estimator $\hat{\Delta}$ in (3.7) with the regularization parameter satisfying $\lambda = 2CM\sqrt{\log d/n}$ and $\zeta \le \kappa_1\kappa_2/2$, we have that
$$\|\hat{\Delta} - \Delta^*\|_{\infty,\infty} \le 2\sqrt{10\pi}\,\theta_X^2\theta_Y^2\kappa_X\kappa_Y M \sqrt{\frac{\log s}{n}}$$
holds with probability at least $1 - 2/s$. Furthermore, we have that
$$\|\hat{\Delta} - \Delta^*\|_F \le \frac{C_1 M}{\kappa_1\kappa_2}\sqrt{\frac{s}{n}}$$
holds with probability at least $1 - 3/s$, where $C_1$ is an absolute constant.

Remark 4.5. Theorem 4.4 suggests that under the large magnitude assumption, the statistical rate of our estimator is $O(\sqrt{s/n})$ in terms of Frobenius norm. This is faster than the rate $O(\sqrt{s\log d/n})$ in [38], which matches the minimax lower bound for sparse differential graph estimation. Note that our faster rate does not contradict the minimax lower bound, because we restrict ourselves to a smaller class of differential graphs, where the magnitude of the nonzero entries is sufficiently large.
We further show that our estimator achieves the oracle property under mild conditions.

Theorem 4.6. Under the same conditions as Theorem 4.4, for the estimator $\hat{\Delta}$ in (3.7) and the oracle estimator $\hat{\Delta}_O$ in (4.1), we have with probability at least $1 - 3/s$ that $\hat{\Delta} = \hat{\Delta}_O$, which further implies $\mathrm{supp}(\hat{\Delta}) = \mathrm{supp}(\hat{\Delta}_O) = \mathrm{supp}(\Delta^*)$.

Theorem 4.6 suggests that our estimator is identical to the oracle estimator in (4.1) with high probability, when the nonzero entries of $\Delta^*$ satisfy $\min_{(j,k)\in S} |\Delta^*_{jk}| \ge \nu\lambda + C\theta_X^2\theta_Y^2\kappa_X\kappa_Y M\sqrt{\log s/n}$. This condition is optimal up to the logarithmic factor $\sqrt{\log s}$.
Now we turn to the general case when the nonzero entries of $\Delta^*$ have both large and small magnitudes. Define $S^c = \{(j,k): j,k = 1,\ldots,d\} \setminus S$, $S_1 = \{(j,k)\in S: |\Delta^*_{jk}| > \nu\lambda\}$, and $S_2 = \{(j,k)\in S: |\Delta^*_{jk}| \le \nu\lambda\}$. Denote $|S_1| = s_1$ and $|S_2| = s_2$. Clearly, we have $s = s_1 + s_2$.

Theorem 4.7. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $G_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. For the estimator in (3.7) with the regularization parameter $\lambda = 2CM\sqrt{\log d/n}$ and $\zeta \le \kappa_1\kappa_2/4$, we have that
$$\|\hat{\Delta} - \Delta^*\|_F \le \frac{16\sqrt{3\pi}\,M}{\kappa_1\kappa_2}\sqrt{\frac{s_1}{n}} + \frac{10\pi M C}{\kappa_1\kappa_2}\sqrt{\frac{s_2\log d}{n}}$$
holds with probability at least $1 - 3/s_1$, where $C$ is an absolute constant.

Remark 4.8. Theorem 4.7 indicates that when the large magnitude condition does not hold, our estimator is still able to attain a faster rate. Specifically, for those nonzero entries of $\Delta^*$ with large magnitude, the estimation error bound in terms of Frobenius norm is $O(\sqrt{s_1/n})$, which is the same as the bound in Theorem 4.4. For those nonzero entries of $\Delta^*$ with small magnitude, the estimation error is $O(\sqrt{s_2\log d/n})$, which matches the convergence rate in [38]. Overall, our estimator obtains a refined rate of convergence $O(\sqrt{s_1/n} + \sqrt{s_2\log d/n})$, which is faster than [38]. In particular, if $s_2 = 0$, the refined convergence rate in Theorem 4.7 reduces to the faster rate in Theorem 4.4.
5 Experiments
In this section, we test our method on both synthetic and real world data. We conducted experiments for our estimator using both SCAD and MCP penalties. We did not find any significant difference in the results and thus we only report the results of our estimator with the MCP penalty. To choose the tuning parameters $\lambda$ and $b$, we adopt 5-fold cross-validation. Denoting our estimator with the MCP penalty by LDGM-MCP, we compare it with the following methods: (1) SepGlasso: estimating the latent precision matrices separately using graphical Lasso and Kendall's tau correlation matrices [20], followed by calculating their difference; (2) DPM: directly estimating the differential precision matrix [38]. In addition, we also test the differential graph model with the $\ell_{1,1}$ penalty, denoted as LDGM-L1. Note that LDGM-L1 is a special case of our method, since the $\ell_{1,1}$ norm penalty is a special case of the MCP penalty when $b = \infty$. The LDGM-MCP and LDGM-L1 estimators are obtained by the proximal gradient descent algorithm [4]. The implementation of the DPM estimator is obtained from the authors' website, and the SepGlasso estimator is implemented by graphical Lasso.
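One simple way to implement the 5-fold cross-validation mentioned above is to score each candidate $\lambda$ by the quasi log likelihood loss (3.6) evaluated on held-out Kendall's tau matrices. The sketch below assumes equal sample sizes for $X$ and $Y$ and reuses the hypothetical kendall_tau_correlation and ldgm_mcp helpers sketched earlier; it illustrates the idea rather than reproducing the authors' selection rule.

```python
import numpy as np

def cv_select(X, Y, lams, b=5.0, n_folds=5, seed=0):
    """Pick lambda by 5-fold CV, scoring with the held-out quasi likelihood (3.6)."""
    rng = np.random.default_rng(seed)
    n = min(X.shape[0], Y.shape[0])
    folds = np.array_split(rng.permutation(n), n_folds)
    scores = np.zeros(len(lams))
    for held in folds:
        train = np.setdiff1d(np.arange(n), held)
        SX_tr, SY_tr = kendall_tau_correlation(X[train]), kendall_tau_correlation(Y[train])
        SX_te, SY_te = kendall_tau_correlation(X[held]), kendall_tau_correlation(Y[held])
        for i, lam in enumerate(lams):
            D = ldgm_mcp(SX_tr, SY_tr, lam, b=b)
            # held-out quasi log likelihood loss (3.6); smaller is better
            scores[i] += 0.5 * np.trace(D @ SY_te @ D @ SX_te) - np.trace(D @ (SX_te - SY_te))
    return lams[int(np.argmin(scores))]
```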
5.1 Simulations
We first show the results on synthetic data. Since the transelliptical distribution includes the Gaussian distribution, it is natural to show that our approach also works well for the latter. We consider the dimension settings $n = 100, d = 100$ and $n = 200, d = 400$ respectively. Specifically, data are generated as follows: (1) For the Gaussian distribution, we generate data $\{X_i\}_{i=1}^n \sim N(0, \Sigma^*_X)$ and $\{Y_i\}_{i=1}^n \sim N(0, \Sigma^*_Y)$ with precision matrices $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ generated by the huge package¹. (2) For the transelliptical distribution, we consider the following generating scheme: $\{X_i\}_{i=1}^n \sim TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $\{Y_i\}_{i=1}^n \sim TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$, where $\xi \sim \chi_d$, $f_1^{-1}(\cdot) = \ldots = f_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^3$ and $g_1^{-1}(\cdot) = \ldots = g_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^{1/2}$. The latent precision matrices $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ are generated in the same way as for the Gaussian data. For both Gaussian and transelliptical differential graph models, we consider two settings for the individual graph structures: (1) both $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ have "random" structures; (2) $\Sigma^{*-1}_X$ has a "band" structure and $\Sigma^{*-1}_Y$ has a "random" structure.
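A sketch of this transelliptical sampling scheme is below, assuming (as the comparison with the Gaussian case suggests) that $\xi \sim \chi_d$, so that the latent elliptical vector is simply $N(0, \Sigma^*)$; the function name and arguments are illustrative.

```python
import numpy as np

def sample_transelliptical(n, Sigma, power, seed=0):
    """Draw n samples whose latent vector is N(0, Sigma), with marginal maps
    f_j^{-1}(z) = sign(z) * |z|**power applied entrywise."""
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=n)
    return np.sign(Z) * np.abs(Z) ** power

# X-samples use power 3 and Y-samples use power 1/2, as in the setup above
```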
Given an estimator $\hat{\Delta}$, we define the true positive and negative rates of $\hat{\Delta}$ as
$$\mathrm{TP} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat{\Delta}_{jk} \neq 0 \text{ and } \Delta^*_{jk} \neq 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} \neq 0)}, \qquad \mathrm{TN} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat{\Delta}_{jk} = 0 \text{ and } \Delta^*_{jk} = 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} = 0)}.$$
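These rates can be computed directly from the supports of $\hat{\Delta}$ and $\Delta^*$; a minimal sketch is below (the ROC curves in Figure 1 are then traced by varying the regularization parameter $\lambda$):

```python
import numpy as np

def tp_tn_rates(Delta_hat, Delta_star, tol=1e-8):
    """True positive / true negative rates of a support estimate, as defined above."""
    est_nz = np.abs(Delta_hat) > tol
    true_nz = np.abs(Delta_star) > tol
    tp = np.logical_and(est_nz, true_nz).sum() / max(true_nz.sum(), 1)
    tn = np.logical_and(~est_nz, ~true_nz).sum() / max((~true_nz).sum(), 1)
    return tp, tn
```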
The receiver operating characteristic (ROC) curves for transelliptical differential graph models are shown in Figure 1, which report the performances of different methods on support recovery. The ROC curves were plotted by averaging the results over 10 repetitions. From Figure 1 we can see that our estimator (LDGM-MCP) outperforms the other methods in all settings. In addition, LDGM-L1, as a special case of our estimator, also performs better than DPM and SepGlasso, although it is inferior to LDGM-MCP because the MCP penalty can correct the bias in the estimation and achieve a faster rate of convergence. Note that SepGlasso's performance is poor since it highly depends on the sparsity of both individual graphs. When n > 100, the DPM method failed to output a solution within one day and thus no result is presented. This computational burden is also noted in their paper. We use the Frobenius norm $\|\hat{\Delta} - \Delta^*\|_F$ and infinity norm $\|\hat{\Delta} - \Delta^*\|_{\infty,\infty}$ of the estimation error to evaluate the performances of the different methods in estimation. The results averaged over 10 replicates for the transelliptical differential graph are summarized in Tables 1 and 2 respectively. Our estimator also achieves smaller errors than the other baselines in all settings. Due to the space limit, we defer the experiment results for the Gaussian differential graph model to the appendix.
1Available on http://cran.r-project.org/web/packages/huge
[Figure 1 shows four ROC panels (true positive rate TP versus 1−TN) comparing SepGlasso, DPM, LDGM-L1, and LDGM-MCP: (a) Setting 1: n=100, d=100; (b) Setting 2: n=100, d=100; (c) Setting 1: n=200, d=400; (d) Setting 2: n=200, d=400.]
Figure 1: ROC curves for transelliptical differential graph models of all the 4 methods. There are two settings of graph structure. Note that DPM is not scalable to d = 400.
5.2 Experiments on Real World Data
We applied our approach to the same gene expression data used in [38], which were collected from patients with stage III or IV ovarian cancer. [29] identified six molecular subtypes of ovarian cancer in this data, labeled C1 through C6. In particular, the C1 subtype was found to have much shorter survival times, and was characterized by differential expression of genes associated with stromal and immune cell types. In this experiment, we intended to investigate whether the C1 subtype was also associated with the genetic differential networks. The subjects were divided into two groups: Group 1 with $n_1 = 78$ patients containing C1 subtype, and Group 2 with $n_2 = 113$ patients containing C2 through C6 subtypes. We analyzed two pathways from the KEGG pathway database [16, 17] respectively. In each pathway, we applied different methods to determine whether there is any difference in the conditional dependency relationships of the gene expression levels between the aforementioned Group 1 and Group 2. Two genes were connected in the differential network if their conditional dependency relationship given the others changed in either magnitude or sign. In order to obtain a clear view of the differential graph, we only plotted genes whose conditional dependency with others changed between the two groups. To interpret the results, the genes associated with more edges in the differential networks were considered to be more important.
Figure 2 shows the results of estimation for the differential graph of the TGF-β pathway, where the number of genes $d = 80$ is greater than $n_1$, the sample size of Group 1. LDGM-MCP identified two important genes, COMP and THBS2, both of which have been suggested to be related to resistance to platinum-based chemotherapy in epithelial ovarian cancer by [24]. LDGM-L1 suggested that COMP was important, and DPM also suggested COMP and THBS2. Separate estimation (SepGlasso) gave a relatively dense network, which made it hard to say which genes are more important.
Figure 3 shows the results for the Apoptosis pathway, where the number of genes $d = 87$ is also greater than $n_1$. LDGM-MCP indicated that TNFSF10 and BIRC3 were the most important. Indeed, both TNFSF10 and BIRC3 have been widely studied for use as therapeutic targets in cancer [5, 32]. LDGM-L1 and DPM also suggested TNFSF10 and BIRC3 were important. The results of LDGM-MCP, LDGM-L1 and DPM are comparable. In order to overcome the nonsparsity issue encountered in the TGF-β experiment, the SepGlasso estimator was thresholded more than the other methods. However, it still performed poorly and identified the wrong gene CSF2RB.
6 Conclusions
In this paper, we propose a semiparametric differential graph model and an estimator for the differential graph based on quasi likelihood maximization. We employ a nonconvex penalty in our estimator, which results in a faster rate for parameter estimation than existing methods. We also prove that the proposed estimator achieves the oracle property under a mild condition. Experiments on both synthetic and real world data further support our theory.
Acknowledgments: We would like to thank the anonymous reviewers for their helpful comments. Research was supported by NSF grant III-1618948. | 1. What is the main contribution of the paper regarding learning the latent differential graph?
2. What are the strengths of the proposed method compared to prior works, particularly in terms of its generality and statistical rate?
3. Do you have any suggestions for additional experiments or comparisons that could enhance the paper's findings?
4. Are there any other recent works that investigate a similar problem, and how do they compare to the approach proposed in this paper? | Review | Review
In this paper, authors propose a novel method which learns the "latent differential graph" between two graphical models, represented by semiparametric elliptical distributions. The estimator is built upon the "quasi-likelihood" and a non-convex regularizer (such as SCAD or MCP). Comparing to a prior work [32], a faster statistical rate is given and performance is verified via synthetic and real-world datasets.Authors studied a problem proposed by a prior work [32]: learning the differential graph between two graphical models and propose a novel (though similar) estimator with a non-convex regularizer. Thanks to the rank base estimator and the non-convex regularizer, Authors' method is more general (semi-parametric) and enjoy a faster statistical rate comparing to [32]. In general, this paper is well-written and easy to read. I also have a few comments: 1. Authors proved rates for two scenarios: Thm 4.4 Large magnitude only and Thm 4.7 Large + Small magnitude. However, it would be definitely interesting to include experiments based on these two scenarios, and perhaps the advantage of using a non-convex regularizer would become clearer. 2. It might be interesting to perform a short empirical study on the computational speed of all methods. 3. Although under a slightly different framework, a recent work (see below) also investigated a similar problem. it would be interesting to see some comparisons between these methods in future works. https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/viewFile/9454/9942 |
NIPS | Title
Semiparametric Differential Graph Models
Abstract
In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network. We propose a novel graphical model, namely Latent Differential Graph Model, where the networks under two different conditions are represented by two semiparametric elliptical distributions respectively, and the variation of these two networks (i.e., differential graph) is characterized by the difference between their latent precision matrices. We propose an estimator for the differential graph based on quasi likelihood maximization with nonconvex regularization. We show that our estimator attains a faster statistical rate in parameter estimation than the state-of-the-art methods, and enjoys the oracle property under mild conditions. Thorough experiments on both synthetic and real world data support our theory.
1 Introduction
Network analysis has been widely used in various fields to characterize the interdependencies between a group of variables, such as molecular entities including RNAs and proteins in genetic networks [3]. Networks are often modeled as graphical models. For instance, in gene regulatory network, the gene expressions are often assumed to be jointly Gaussian. A Gaussian graphical model [18] is then employed by representing different genes as nodes and the regulation between genes as edges in the graph. In particular, two genes are conditionally independent given the others if and only if the corresponding entry of the precision matrix of the multivariate normal distribution is zero. Nevertheless, the Gaussian distribution assumption, is too restrictive in practice. For example, the gene expression values from high-throughput method, even after being normalized, do not follow a normal distribution [19, 26]. This leads to the inaccuracy in describing the dependency relationships among genes. In order to address this problem, various semiparametric Gaussian graphical models [21, 20] are proposed to relax the Gaussian distribution assumption.
On the other hand, it is well-known that the interactions in many types of networks can change under various environmental and experimental conditions [1]. Take the genetic networks for example, two genes may be positively conditionally dependent under some conditions but negatively conditionally dependent under others. Therefore, in many cases, more attention is attracted not by a particular individual network but rather by whether and how the network varies with genetic and environmental alterations [6, 15]. This gives rise to differential networking analysis, which has emerged as an important method in differential expression analysis of gene regulatory networks [9, 28].
In this paper, in order to conduct differential network analysis, we propose a Latent Differential Graph Model (LDGM), where the networks under two different conditions are represented by two transelliptical distributions [20], i.e., TEd(⌃⇤X , ⇠; f1, . . . , fd) and TEd(⌃ ⇤ Y , ⇠; g1, . . . , gd) respectively. Here TEd(⌃⇤X , ⇠; f1, . . . , fd) denotes a d-dimensional transelliptical distribution with latent correlation matrix ⌃⇤X 2 Rd⇥d, and will be defined in detail in Section 3. More specifically, the connectivity of the individual network is encoded by the latent precision matrix (e.g., ⇥⇤X = (⌃ ⇤ X)
1) of the corresponding transelliptical distribution, such that [⇥⇤X ]jk 6= 0 if and only if there is an edge between the j-th node and the k-th node in the network. And the differential graph is defined as
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the difference between the two latent precision matrices ⇤ = ⇥⇤Y ⇥⇤X . Our goal is to estimate
⇤ based on observations sampled from TEd(⌃⇤X , ⇠; f1, . . . , fd) and TEd(⌃ ⇤ Y , ⇠; g1, . . . , gd). A simple procedure is estimating ⇥⇤X and ⇥ ⇤ Y separately, followed by calculating their difference. However, it requires estimating 2d2 parameters (i.e., ⇥⇤X and ⇥ ⇤ Y ), while our ultimate goal is only estimating d2 parameters (i.e., ⇤). In order to overcome this problem, we assume that the difference of the two latent precision matrices, i.e., ⇤ is sparse and propose to directly estimate it by quasi likelihood maximization with nonconvex penalty. The nonconvex penalty is introduced in order to correct the intrinsic estimation bias incurred by convex penalty [10, 36]. We prove that, when the true differential graph is s-sparse, our estimator attains O( p s 1 /n+ p s 2
log d/n) convergence rate in terms of Frobenius norm, which is faster than the estimation error bound O( p s log d/n) of `
1,1
penalty based estimator in [38]. Here n is the sample size, s 1 is the number of entries in ⇤ with large magnitude, s
2 is the number of entries with small magnitude and s = s 1 + s 2 . We show that our method enjoys the oracle property under a very mild condition. Thorough numerical experiments on both synthetic and real-world data back up our theory.
The remainder of this paper is organized as follows: we review the related work in Section 2. We introduce the proposed model and the non-convex penalty in Section 3, as well as the proposed estimator. In Section 4, we present our main theories for estimation in semiparametric differential graph models. Experiments on both synthetic and real world data are provided in Section 5. Section 6 concludes with discussion.
Notation For x = (x 1 , . . . , xd)> 2 Rd and 0 < q < 1, we define the `0, `q and `1 vector norms as kxk
0
= Pd i=1 1(xi 6= 0), kxkq = Pd i=1 |xi|q 1/q, and kxk1 = max1id |xi|, where 1(·)
is the indicator function. For A = (Aij) 2 Rd⇥d, we define the matrix `0,0, `1,1, `1,1 and `F norms as: kAk 0,0 = Pd i,j=1 1 (Aij 6= 0), kAk1,1 = Pd
i,j=1 |Aij |, kAk1,1 = max1i,jd |Aij |, and kAkF = qP ij |Aij |2. The induced norm for matrix is defined as kAkq = maxkxkq=1 kAxkq , for 0 < q < 1. For a set of tuples S, AS denotes the set of numbers [A (jk)](jk)2S , and vec(S) is the vectorized index set of S.
2 Related Work
There exist several lines of research for differential network analysis. One natural procedure is to estimate the two networks (i.e., two precision matrices) respectively by existing estimators such as graphical Lasso [12] and node-wise regression [25]. Another family of methods jointly estimates the two networks by assuming that they share common structural patterns and therefore uses joint likelihood maximization with group lasso penalty or group bridge penalty [7, 8, 14]. Based on the estimated precision matrices, the differential graph can be obtained by calculating their difference. However, both of these two types of methods suffer from the drawback that they need to estimate twice the number of parameters, and hence require roughly doubled observations to ensure the estimation accuracy. In order to address this drawback, some methods are proposed to estimate the difference of matrices directly [38, 35, 22, 11]. For example, [38] proposed a Dantzig selector type estimator for estimating the difference of the precision matrices directly. [35] proposed a D-Trace loss [37] based estimator for the difference of the precision matrices. Compared with [38, 35], our estimator is advantageous in the following aspects: (1) our model relaxes the Gaussian assumption by representing each network as a transelliptical distribution, while [38, 35] are restricted to Gaussian distribution. Thus, our model is more general and robust; and (2) by employing nonconvex penalty, our estimator achieves a sharper statistical rate than theirs. Rather than the Gaussian graphical model or its semiparametric extension, [22, 11] studied the estimation of change in the dependency structure between two high dimensional Ising models.
3 Semiparametric Differential Graph Models
In this section, we will first review the transelliptical distribution and present our semiparametric differential graph model. Then we will present the estimator for differential graph, followed by the introduction to nonconvex penalty.
3.1 Transelliptical Distribution
To briefly review the transelliptical distribution, we begin with the definition of elliptical distribution.
Definition 3.1 (Elliptical distribution). Let µ 2 Rd and ⌃⇤ 2 Rd⇥d with rank(⌃⇤) = q d. A random vector X 2 Rd follows an elliptical distribution, denoted by ECd(µ,⌃⇤, ⇠), if it can be represented as X = µ + ⇠AU, where A is a deterministic matrix satisfying A>A = ⌃⇤, U is a random vector uniformly distributed on the unit sphere in Rq , and ⇠ ? U is a random variable. Motivated by the extension from Gaussian distribution to nonparanormal distribution [21], [20] proposed a semiparametric extension of elliptical distribution, which is called transelliptical distribution. Definition 3.2 (Transelliptical distribution). A random vector X = (X
1 , X 2 , . . . , Xd)> 2 Rd is transelliptical, denoted by TEd(⌃⇤, ⇠; f1, . . . , fd), if there exists a set of monotone univariate functions f
1 , . . . , fd and a nonnegative random variable ⇠, such that (f1(X1), . . . , fd(Xd))> follows an elliptical distribution ECd(0,⌃⇤, ⇠).
3.2 Kendall’s tau Statistic
In semiparametric setting, the Pearson’s sample covariance matrix can be inconsistent in estimating ⌃⇤. Given n independent observations X
1 , ...,Xn, where Xi = (Xi1, ..., Xid)> ⇠ TEd(⌃⇤, ⇠; f1, . . . , fd), [20] proposed a rank-based estimator, the Kendall’s tau statistic, to estimate ⌃⇤, due to its invariance under monotonic marginal transformations. The Kendall’s tau estimator is defined as
b⌧jk = 2 n(n 1) X
1i<i0n sign
⇥ Xij Xi0j Xik Xi0k ⇤ . (3.1)
It has been shown that b⌧jk is an unbiased estimator of ⌧jk = 2/⇡ arcsin(⌃⇤jk) [20], and the correlation matrix ⌃⇤ can be estimated by b⌃ = [b⌃jk] 2 Rd⇥d, where
b ⌃jk = sin ⇣⇡ 2 b⌧jk ⌘ . (3.2)
We use T⇤ to denote the matrix with entries ⌧jk and bT with entries b⌧jk, for j, k = 1, . . . d.
3.3 Latent Differential Graph Models and the Estimator
Now we are ready to formulate our differential graph model. Assume that d dimensional random vectors X and Y satisfy X ⇠ TEd(⌃⇤X , ⇠; f1, . . . , fd) and Y ⇠ TEd(⌃⇤Y , ⇠; g1, . . . , gd). The differential graph is defined to be the difference of the two latent precision matrices,
⇤ = ⇥ ⇤ Y ⇥⇤X , (3.3)
where ⇥⇤X = ⌃ ⇤ 1 X and ⇥ ⇤ Y = ⌃ ⇤ 1 Y . It immediately implies
⌃ ⇤ X ⇤ ⌃ ⇤ Y (⌃⇤X ⌃⇤Y ) = 0, and ⌃⇤Y ⇤⌃⇤X (⌃⇤X ⌃⇤Y ) = 0. (3.4)
Given i.i.d. copies X 1 , . . . ,XnX of X , and i.i.d. copies Y1, . . . ,YnY of Y , without loss of generality, we assume nX = nY = n, and we denote the Kendall’s tau correlation matrices defined in (3.2) as b ⌃X and b⌃Y . Following (3.4), a reasonable procedure for estimating ⇤ is to solve the following equation for
1
2
b ⌃X b ⌃Y + 1
2
b ⌃Y b ⌃X (b⌃X b⌃Y ) = 0, (3.5)
where we add up the two equations in (3.4) and replace the latent population correlation matrices ⌃
⇤ X , ⌃ ⇤ Y with the Kendall’s tau estimators b⌃X , b⌃Y . Note that (3.5) is a Z-estimator [30], which can be translated into a M-estimator, by noticing that 1/2b⌃X b⌃Y + 1/2b⌃Y b⌃X (b⌃X b⌃Y ) can be seen as a score function of the following quasi log likelihood function
`( ) = 1
2
tr( b ⌃Y b ⌃X) tr ( b ⌃X b⌃Y ) . (3.6)
Let S = supp( ⇤), in this paper, we assume that ⇤ is sparse, i.e., |S| s with s > 0. Based on (3.6), we propose to estimate ⇤ by the following M-estimator with non-convex penalty
b = argmin
2Rd⇥d
1
2
tr( b ⌃Y b ⌃X) tr ( b ⌃X b⌃Y ) + G ( ), (3.7)
where > 0 is a regularization parameter and G is a decomposable nonconvex penalty function, i.e., G ( ) = Pd j,k=1 g ( jk), such as smoothly clipped absolute deviation (SCAD) penalty [10] or minimax concave penalty (MCP) [36]. The key property of the nonconvex penalty is that it can avoid over-penalization when the magnitude is very large. It has been shown in [10, 36, 33] that the nonconvex penalty is able to alleviate the estimation bias and attain a refined statistical rate of convergence. The nonconvex penalty g ( ) can be further decomposed as the sum of the `1 penalty and a concave component h ( ), i.e., g ( ) = | |+ h ( ). Take MCP penalty for example. The corresponding g ( ) and h ( ) are defined as follows
g ( ) =
Z | |
0
✓ 1 z
b
◆
+
dz, for any 2 R,
where > 0 is the regularization parameter and b > 0 is a fixed parameter, and
h ( ) = 2 2b 1(| | b ) +
✓ b 2
2
| | ◆ 1(| | > b ).
In Section 4, we will show that the above family of nonconvex penalties satisfies certain common regularity conditions on g ( ) as well as its concave component h ( ).
We will show in the next section that when the parameters of the nonconvex penalty are appropriately chosen, (3.7) is an unconstrained convex optimization problem. Thus it can be solved by the proximal gradient descent [4] very efficiently. In addition, it is easy to check that the estimator b from (3.7) is symmetric. So it does not need the symmetrizing process adopted in [38], which can undermine the estimation accuracy.
4 Main Theory
In this section, we present our main theories. Let S = supp( ⇤) be the support of the true differential graph. We introduce the following oracle estimator of ⇤:
b O = argmin
supp( )✓S `( ), (4.1)
where `( ) = 1/2 tr( b⌃Y b⌃X) tr ( b ⌃X b⌃Y ) . The oracle estimator b O is not a practical estimator, since we do not know the true support in practice. An estimator is said to have the oracle property, if it is identical to the oracle estimator b O under certain conditions. We will show that our estimator enjoys the oracle property under a mild condition.
We first lay out some assumptions that are required through our analysis. Assumption 4.1. There exist constants
1 , 2 > 0 such that 1 min (⌃ ⇤ X) max(⌃⇤X) 1/1
and 2 min (⌃ ⇤ Y ) max(⌃⇤Y ) 1/2. The true covariance matrices have bounded `1 norm, i.e.,k⌃⇤Xk1 X , k⌃⇤Y k1 Y , where X , Y > 0 are constants. And the true precision matrices have bounded matrix ` 1
-norm, i.e., k⇥⇤Xk1 ✓X and k⇥⇤Y k1 ✓Y , where ✓X , ✓Y > 0 are constants. The first part of Assumption 4.1 requires that the smallest eigenvalues of the correlation ⌃⇤X ,⌃ ⇤ Y are bounded below from zero, and their largest eigenvalues are finite. This assumptions is commonly imposed in the literature for the analysis of graphical models [21, 27].
Assumption 4.2. The true difference matrix ⇤ = ⌃⇤ 1Y ⌃⇤ 1X has s nonzero entries, i.e.,k ⇤k 0,0 s and has bounded `1,1 norm, i.e., k ⇤k1,1 M , where M > 0 does not depend on d.
Assumption 4.2 requires the differential graph to be sparse. This is reasonable in differential network analysis where the networks only vary slightly under different conditions.
The next assumption is about regularity conditions on the nonconvex penalty g ( ). Recall that g ( ) can be written as g ( ) = | |+ h ( ). Assumption 4.3. g ( ) and its concave component h ( ) satisfy:
(a) There exists a constant ⌫ such that g0 ( ) = 0, for | | ⌫ > 0. (b) There exists a constant ⇣ 0 such that h ( ) + ⇣ /2 · 2 is convex.
(c) h ( ) and h0 ( ) pass through the origin, i.e., h (0) = h 0 (0) = 0.
(d) h0 ( ) is bounded, i.e., |h0 ( )| for any . Similar assumptions have been made in [23, 33]. Note that condition (b) in Assumption 4.3 is weaker than the smoothness condition in [33], since here it does not require h ( ) to be twice differentiable. Assumption 4.3 holds for a variety of nonconvex penalty functions including MCP and SCAD. In particular, MCP penalty satisfies Assumption 4.3 with ⌫ = b and ⇣ = 1/b. Furthermore, according to condition (b), if ⇣ is smaller than the modulus of the restricted strong convexity for `( ), (3.7) will become a convex optimization problem, even though G ( ) is nonconvex. Take MCP for example, this can be achieved by choosing a sufficiently large b in MCP such that ⇣ is small enough.
Now we are ready to present our main theories. We first show that under a large magnitude condition on nonzero entries of the true differential graph ⇤, our estimator attains a faster convergence rate, which matches the minimax rate in the classical regime. Theorem 4.4. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty G ( ) satisfies conditions in Assumption 4.3. If nonzero entries of ⇤ satisfy min
(j,k)2S | ⇤jk| ⌫ + C✓2X✓ 2 Y X Y M p log s/n, for the estimator b in (3.7) with the regularization parameter satisfying
= 2CM p log d/n and ⇣ 12/2, we have that
b ⇤ 1,1 2 p 10⇡✓2X✓ 2 Y X Y M
r log s
n
holds with probability at least 1 2/s. Furthermore, we have that
k b ⇤kF C1M 1 2
r s
n
holds with probability at least 1 3/s, where C 1 is an absolute constant. Remark 4.5. Theorem 4.4 suggests that under the large magnitude assumption, the statistical rate of our estimator is O( p s/n) in terms of Frobenius norm. This is faster than the rate O( p s log d/n) in [38] which matches the minimax lower bound for sparse differential graph estimation. Note that our faster rate is not contradictory to the minimax lower bound, because we restrict ourselves to a smaller class of differential graphs, where the magnitude of the nonzero entries is sufficiently large.
We further show that our estimator achieves oracle property under mild conditions.
Theorem 4.6. Under the same conditions of Theorem 4.4, for the estimator b in (3.7) and the oracle estimator b O in (4.1), we have with probability at least 1 3/s that b = b O, which further implies supp( b ) = supp( b O) = supp( ⇤ ).
Theorem 4.6 suggests that our estimator is identical to the oracle estimator in (4.1) with high probability, when the nonzero entries in ⇤ satisfy min (j,k)2S | ⇤jk| ⌫ + C✓2X✓2Y X Y M p
log s/n. This condition is optimal up to the logarithmic factor p log s.
Now we turn to the general case when the nonzero entries of ⇤ have both large and small magnitudes. Define Sc = {(j, k) : j, k = 1, . . . , d} \ S, S
1 = {(j, k) 2 S : | ⇤jk| > ⌫}, and S2 = {(j, k) 2 S : | ⇤jk| ⌫}. Denote |S1| = s1 and |S2| = s2. Clearly, we have s = s1 + s2. Theorem 4.7. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty G ( ) satisfies conditions in Assumption 4.3. For the estimator in (3.7) with the regularization parameter = 2CM p log d/n and ⇣ 12/4, we have that
k b ⇤kF 16 p 3⇡M
1 2
r s 1
n +
10⇡MC
1 2
r s 2 log d
n
holds with probability at least 1 3/s 1 , where C is an absolute constant. Remark 4.8. Theorem 4.7 indicates that when the large magnitude condition does not hold, our estimator is still able to attain a faster rate. Specifically, for those nonzero entries of ⇤ with large magnitude, the estimation error bound in terms of Frobenius norm is O( p s 1 /n), which is the same
as the bound in Theorem 4.4. For those nonzero entries of ⇤ with small magnitude, the estimation error is O( p s 2
log d/n), which matches the convergence rate in [38]. Overall, our estimator obtains a refined rate of convergence rate O( p s 1 /n+ p s 2
log d/n), which is faster than [38]. In particular, if s⇤
2
= 0, the refined convergence rate in Theorem 4.7 reduces to the faster rate in Theorem 4.4.
5 Experiments
In this section, we test our method on both synthetic and real world data. We conducted experiments for our estimator using both SCAD and MCP penalties. We did not find any significant difference in the results and thus we only report the results of our estimator with MCP penalty. To choose the tuning parameters and b, we adopt 5-fold cross-validation. Denoting our estimator with MCP penalty by LDGM-MCP, we compare it with the following methods: (1) SepGlasso: estimating the latent precision matrices separately using graphical Lasso and Kendall’s tau correlation matrices [20], followed by calculating their difference; (2) DPM: directly estimating differential precision matrix [38]. In addition, we also test differential graph model with `
1,1 penalty, denoted as LDGM-L1. Note that LDGM-L1 is a special case of our method, since `
1,1 norm penalty is a special case of MCP penalty when b = 1. The LDGM-MCP and LDGM-L1 estimators are obtained by solving the proximal gradient descent algorithm [4]. The implementation of DPM estimator is obtained from the author’s website, and the SepGlasso estimator is implemented by graphical Lasso.
5.1 Simulations
We first show the results on synthetic data. Since the transelliptical distribution includes Gaussian distribution, it is natural to show that our approach also works well for the latter one. We consider the dimension settings n = 100, d = 100 and n = 200, d = 400 respectively. Specifically, data are generated as follows: (1) For the Gaussian distribution, we generate data {Xi}ni=1 ⇠ N(0,⌃⇤X) and {Yi}ni=1 ⇠ N(0,⌃⇤Y ) with precision matrices ⌃⇤ 1X and ⌃⇤ 1Y generated by huge package 1. (2) For the transelliptical distribution, we consider the following generating scheme: {Xi}ni=1 ⇠ TEd(⌃⇤X , ⇠; f1, . . . , fd), {Yi}ni=1 ⇠ TEd(⌃⇤Y , ⇠; g1, . . . , gd), where ⇠ ⇠ d, f 11 (·) = . . . = f 1d = sign(·)| · |3 and g 11 (·) = . . . = g 1d (·) = sign(·)| · |1/2. The latent precision matrices ⌃⇤ 1X and ⌃⇤ 1Y are generated in the same way as the Gaussian data. For both Gaussian and transelliptical differential graph mdoels, we consider two settings for individual graph structures: (1) both ⌃⇤ 1X and ⌃
⇤ 1 Y have "random" structures; (2) ⌃ ⇤ 1 X has a "band" structure, ⌃ ⇤ 1 Y has a "random" structure.
Given an estimator b , we define the true positive and negative rates of b as
TP = Pd j,k=1 1( b jk 6= 0 and ⇤jk 6= 0)Pd
j,k=1 1( ⇤ jk 6= 0)
, TN = Pd j,k=1 1( b jk = 0 and ⇤jk = 0)Pd
j,k=1 1( ⇤ jk = 0)
.
The receiver operating characteristic (ROC) curves for transelliptical differential graph models are shown in Figure 1, which report the performances of different methods on support recovery. The ROC curves were plotted by averaging the results over 10 repetitions. From Figure 1 we can see our estimator (LDGM-MCP) outperforms other methods in all settings. In addition, LDGM-L1 as a special case of our estimator also performs better than DPM and SepGlasso, although it is inferior to LDGM-MCP because the MCP penalty can correct the bias in the estimation and achieve faster rate of convergence. Note that SepGlasso’s performace is poor since it highly depends on the sparsity of both individual graphs. When n > 100, the DPM method failed to output the solution in one day and thus no result was presented. This computational burden is also stated in their paper. We use the Frobenius norm k b ⇤kF and infinity norm k b ⇤k1,1 of estimation errors to evaluate the performances of different methods in estimation. The results averaged over 10 replicates for transelliptical differential graph are summarized in Tables 1 and 2 respectively. Our estimator also achieves smaller error than the other baselines in all settings. Due to the space limit, we defer the experiment results for Gaussian differential graph model to the appendix.
1Available on http://cran.r-project.org/web/packages/huge
1-TN 0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso DPM LDGM-L1 LDGM-MCP
(a) Setting 1: n=100,d=100 1-TN
0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso DPM LDGM-L1 LDGM-MCP
(b) Setting 2: n=100,d=100 1-TN
0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso LDGM-L1 LDGM-MCP
(c) Setting 1: n=200,d=400 1-TN
0 0.2 0.4 0.6 0.8 1
TP
0
0.2
0.4
0.6
0.8
1
SepGlasso LDGM-L1 LDGM-MCP
(d) Setting 2:n=200,d=400
Figure 1: ROC curves for transelliptical differential graph models of all the 4 methods. There are two settings of graph structure. Note that DPM is not scalable to d = 400.
5.2 Experiments on Real World Data
We applied our approach to the same gene expression data used in [38], which were collected from patients with stage III or IV ovarian cancer. [29] identified six molecular subtypes of ovarian cancer in this data, labeled C1 through C6. In particular, the C1 subtype was found to have much shorter survival times, and was characterized by differential expression of genes associated with stromal and immune cell types. In this experiment, we intended to investigate whether the C1 subtype was also associated with the genetic differential networks. The subjects were divided into two groups: Group 1 with n
1 = 78 patients containing C1 subtype, and Group 2 with n 2 = 113 patients containing C2 through C6 subtypes. We analyzed two pathways from the KEGG pathway database [16, 17] respectively. In each pathway, we applied different methods to determine whether there is any difference in the conditional dependency relationships of the gene expression levels between the aforementioned Group 1 and Group 2. Two genes were connected in the differential network if their conditional dependency relationship given the others changed in either magnitude or sign. In order to obtain a clear view of the differential graph, we only plotted genes whose conditional dependency with others changed between the two groups. To interpret the results, the genes associated with more edges in the differential networks were considered to be more important.
Figure 2 shows the results of estimation for the differential graph of the TGF- pathway, where the number of genes d = 80 is greater than n
1 , the sample size of Group 1. LDGM-MCP identified two important genes, COMP and THBS2, both of which have been suggested to be related to resistance to platinum-based chemotherapy in epithelial ovarian cancer by [24]. LDGM-L1 suggested that COMP
was important, and DPM also suggested COMP and THBS2. Separate estimation (SepGlasso) gave a relatively dense network, which made it hard to say which genes are more important.
Figure 3 shows the results for the Apoptosis pathway, where the number of genes $d = 87$ is also greater than $n_1$. LDGM-MCP indicated that TNFSF10 and BIRC3 were the most important. Indeed, both TNFSF10 and BIRC3 have been widely studied for use as therapeutic targets in cancer [5, 32]. LDGM-L1 and DPM also suggested TNFSF10 and BIRC3 were important. The results of LDGM-MCP, LDGM-L1, and DPM are comparable. In order to overcome the nonsparsity issue encountered in the TGF-β experiment, the SepGlasso estimator was thresholded more than the other methods. However, it still performed poorly and identified the wrong gene CSF2RB.
6 Conclusions
In this paper, we propose a semiparametric differential graph model and an estimator for the differential graph based on quasi likelihood maximization. We employ a nonconvex penalty in our estimator, which results in a faster rate for parameter estimation than existing methods. We also prove that the proposed estimator achieves oracle property under a mild condition. Experiments on both synthetic and real world data further support our theory. Acknowledgments We would like to thank the anonymous reviewers for their helpful comments. Research was supported by NSF grant III-1618948. | 1. What is the main contribution of the paper in terms of graphical model reconstruction?
2. What are the strengths and weaknesses of the proposed Latent Differential Graph Model?
3. How does the reviewer assess the novelty and advantages of the proposed method compared to other works, such as Ref.[32]?
4. What are some potential limitations or extensions of the method regarding its ability to handle multiple independent data sets and learn a robust graphical structure? | Review | Review
The paper suggests a method that allows one to reconstruct the difference between two graphical models that produce two distinct data sets. To achieve this goal, the authors propose a Latent Differential Graph Model, where the two graphical models themselves are among the latent variables (summed over). The resulting problem is solved by standard (even though nontrivial) optimization methods. It would be helpful to extend the experimental part, e.g., by comparing with the methods of Ref. [32]. I am not convinced that your approach is advantageous over a simpler one from [32]. I would also like to see a comparison with a straightforward method - where the two graphical models (for the two data sets) are reconstructed independently and then the differential graph is retrieved directly. Finally, does your method extend to the case when I have many independent data sets giving different graphical models and I want to learn a "robust" graphical structure, i.e., edges present in all (or a significant majority) of the graphical models?
NIPS | Title
Semiparametric Differential Graph Models
Abstract
In many cases of network analysis, it is more attractive to study how a network varies under different conditions than an individual static network. We propose a novel graphical model, namely Latent Differential Graph Model, where the networks under two different conditions are represented by two semiparametric elliptical distributions respectively, and the variation of these two networks (i.e., differential graph) is characterized by the difference between their latent precision matrices. We propose an estimator for the differential graph based on quasi likelihood maximization with nonconvex regularization. We show that our estimator attains a faster statistical rate in parameter estimation than the state-of-the-art methods, and enjoys the oracle property under mild conditions. Thorough experiments on both synthetic and real world data support our theory.
1 Introduction
Network analysis has been widely used in various fields to characterize the interdependencies between a group of variables, such as molecular entities including RNAs and proteins in genetic networks [3]. Networks are often modeled as graphical models. For instance, in a gene regulatory network, the gene expressions are often assumed to be jointly Gaussian. A Gaussian graphical model [18] is then employed by representing different genes as nodes and the regulation between genes as edges in the graph. In particular, two genes are conditionally independent given the others if and only if the corresponding entry of the precision matrix of the multivariate normal distribution is zero. Nevertheless, the Gaussian distribution assumption is too restrictive in practice. For example, the gene expression values from high-throughput methods, even after being normalized, do not follow a normal distribution [19, 26]. This leads to inaccuracy in describing the dependency relationships among genes. In order to address this problem, various semiparametric Gaussian graphical models [21, 20] have been proposed to relax the Gaussian distribution assumption.
On the other hand, it is well-known that the interactions in many types of networks can change under various environmental and experimental conditions [1]. Take the genetic networks for example, two genes may be positively conditionally dependent under some conditions but negatively conditionally dependent under others. Therefore, in many cases, more attention is attracted not by a particular individual network but rather by whether and how the network varies with genetic and environmental alterations [6, 15]. This gives rise to differential networking analysis, which has emerged as an important method in differential expression analysis of gene regulatory networks [9, 28].
In this paper, in order to conduct differential network analysis, we propose a Latent Differential Graph Model (LDGM), where the networks under two different conditions are represented by two transelliptical distributions [20], i.e., $TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$ respectively. Here $TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ denotes a $d$-dimensional transelliptical distribution with latent correlation matrix $\Sigma^*_X \in \mathbb{R}^{d\times d}$, and will be defined in detail in Section 3. More specifically, the connectivity of the individual network is encoded by the latent precision matrix (e.g., $\Theta^*_X = (\Sigma^*_X)^{-1}$) of the corresponding transelliptical distribution, such that $[\Theta^*_X]_{jk} \neq 0$ if and only if there is an edge between the $j$-th node and the $k$-th node in the network. And the differential graph is defined as
the difference between the two latent precision matrices, $\Delta^* = \Theta^*_Y - \Theta^*_X$. Our goal is to estimate $\Delta^*$ based on observations sampled from $TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$. A simple procedure is estimating $\Theta^*_X$ and $\Theta^*_Y$ separately, followed by calculating their difference. However, it requires estimating $2d^2$ parameters (i.e., $\Theta^*_X$ and $\Theta^*_Y$), while our ultimate goal is only estimating $d^2$ parameters (i.e., $\Delta^*$). In order to overcome this problem, we assume that the difference of the two latent precision matrices, i.e., $\Delta^*$, is sparse and propose to directly estimate it by quasi likelihood maximization with a nonconvex penalty. The nonconvex penalty is introduced in order to correct the intrinsic estimation bias incurred by a convex penalty [10, 36]. We prove that, when the true differential graph is $s$-sparse, our estimator attains an $O(\sqrt{s_1/n} + \sqrt{s_2\log d/n})$ convergence rate in terms of Frobenius norm, which is faster than the estimation error bound $O(\sqrt{s\log d/n})$ of the $\ell_{1,1}$-penalty based estimator in [38]. Here $n$ is the sample size, $s_1$ is the number of entries in $\Delta^*$ with large magnitude, $s_2$ is the number of entries with small magnitude, and $s = s_1 + s_2$. We show that our method enjoys the oracle property under a very mild condition. Thorough numerical experiments on both synthetic and real-world data back up our theory.
The remainder of this paper is organized as follows: we review the related work in Section 2. We introduce the proposed model and the non-convex penalty in Section 3, as well as the proposed estimator. In Section 4, we present our main theories for estimation in semiparametric differential graph models. Experiments on both synthetic and real world data are provided in Section 5. Section 6 concludes with discussion.
Notation. For $x = (x_1, \ldots, x_d)^\top \in \mathbb{R}^d$ and $0 < q < \infty$, we define the $\ell_0$, $\ell_q$ and $\ell_\infty$ vector norms as $\|x\|_0 = \sum_{i=1}^d \mathbb{1}(x_i \neq 0)$, $\|x\|_q = \big(\sum_{i=1}^d |x_i|^q\big)^{1/q}$, and $\|x\|_\infty = \max_{1\le i\le d}|x_i|$, where $\mathbb{1}(\cdot)$ is the indicator function. For $A = (A_{ij}) \in \mathbb{R}^{d\times d}$, we define the matrix $\ell_{0,0}$, $\ell_{1,1}$, $\ell_{\infty,\infty}$ and Frobenius norms as $\|A\|_{0,0} = \sum_{i,j=1}^d \mathbb{1}(A_{ij}\neq 0)$, $\|A\|_{1,1} = \sum_{i,j=1}^d |A_{ij}|$, $\|A\|_{\infty,\infty} = \max_{1\le i,j\le d}|A_{ij}|$, and $\|A\|_F = \sqrt{\sum_{ij} A_{ij}^2}$. The induced matrix norm is defined as $\|A\|_q = \max_{\|x\|_q = 1}\|Ax\|_q$ for $0 < q < \infty$. For a set of tuples $S$, $A_S$ denotes the set of numbers $[A_{(jk)}]_{(jk)\in S}$, and $\mathrm{vec}(S)$ is the vectorized index set of $S$.
2 Related Work
There exist several lines of research for differential network analysis. One natural procedure is to estimate the two networks (i.e., two precision matrices) respectively by existing estimators such as graphical Lasso [12] and node-wise regression [25]. Another family of methods jointly estimates the two networks by assuming that they share common structural patterns and therefore uses joint likelihood maximization with group lasso penalty or group bridge penalty [7, 8, 14]. Based on the estimated precision matrices, the differential graph can be obtained by calculating their difference. However, both of these two types of methods suffer from the drawback that they need to estimate twice the number of parameters, and hence require roughly doubled observations to ensure the estimation accuracy. In order to address this drawback, some methods are proposed to estimate the difference of matrices directly [38, 35, 22, 11]. For example, [38] proposed a Dantzig selector type estimator for estimating the difference of the precision matrices directly. [35] proposed a D-Trace loss [37] based estimator for the difference of the precision matrices. Compared with [38, 35], our estimator is advantageous in the following aspects: (1) our model relaxes the Gaussian assumption by representing each network as a transelliptical distribution, while [38, 35] are restricted to Gaussian distribution. Thus, our model is more general and robust; and (2) by employing nonconvex penalty, our estimator achieves a sharper statistical rate than theirs. Rather than the Gaussian graphical model or its semiparametric extension, [22, 11] studied the estimation of change in the dependency structure between two high dimensional Ising models.
3 Semiparametric Differential Graph Models
In this section, we will first review the transelliptical distribution and present our semiparametric differential graph model. Then we will present the estimator for differential graph, followed by the introduction to nonconvex penalty.
3.1 Transelliptical Distribution
To briefly review the transelliptical distribution, we begin with the definition of elliptical distribution.
Definition 3.1 (Elliptical distribution). Let $\mu \in \mathbb{R}^d$ and $\Sigma^* \in \mathbb{R}^{d\times d}$ with $\mathrm{rank}(\Sigma^*) = q \le d$. A random vector $X \in \mathbb{R}^d$ follows an elliptical distribution, denoted by $EC_d(\mu, \Sigma^*, \xi)$, if it can be represented as $X = \mu + \xi AU$, where $A$ is a deterministic matrix satisfying $A^\top A = \Sigma^*$, $U$ is a random vector uniformly distributed on the unit sphere in $\mathbb{R}^q$, and $\xi \perp U$ is a random variable.
Motivated by the extension from the Gaussian distribution to the nonparanormal distribution [21], [20] proposed a semiparametric extension of the elliptical distribution, which is called the transelliptical distribution.
Definition 3.2 (Transelliptical distribution). A random vector $X = (X_1, X_2, \ldots, X_d)^\top \in \mathbb{R}^d$ is transelliptical, denoted by $TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, if there exists a set of monotone univariate functions $f_1, \ldots, f_d$ and a nonnegative random variable $\xi$, such that $(f_1(X_1), \ldots, f_d(X_d))^\top$ follows an elliptical distribution $EC_d(0, \Sigma^*, \xi)$.
3.2 Kendall’s tau Statistic
In the semiparametric setting, Pearson's sample covariance matrix can be inconsistent in estimating $\Sigma^*$. Given $n$ independent observations $X_1, \ldots, X_n$, where $X_i = (X_{i1}, \ldots, X_{id})^\top \sim TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, [20] proposed a rank-based estimator, the Kendall's tau statistic, to estimate $\Sigma^*$, due to its invariance under monotonic marginal transformations. The Kendall's tau estimator is defined as
$$\hat\tau_{jk} = \frac{2}{n(n-1)} \sum_{1 \le i < i' \le n} \mathrm{sign}\big[(X_{ij} - X_{i'j})(X_{ik} - X_{i'k})\big]. \qquad (3.1)$$
It has been shown that $\hat\tau_{jk}$ is an unbiased estimator of $\tau_{jk} = 2/\pi \cdot \arcsin(\Sigma^*_{jk})$ [20], and the correlation matrix $\Sigma^*$ can be estimated by $\hat\Sigma = [\hat\Sigma_{jk}] \in \mathbb{R}^{d\times d}$, where
$$\hat\Sigma_{jk} = \sin\Big(\frac{\pi}{2}\hat\tau_{jk}\Big). \qquad (3.2)$$
We use $T^*$ to denote the matrix with entries $\tau_{jk}$ and $\hat T$ the matrix with entries $\hat\tau_{jk}$, for $j, k = 1, \ldots, d$.
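The rank-based estimate in (3.1)-(3.2) is simple to compute. The following NumPy sketch (our own illustration, not the authors' code) forms the Kendall's tau matrix from an $n\times d$ sample matrix and applies the sine transform; the $O(n^2)$ pairwise comparison is vectorized for clarity rather than speed.

```python
import numpy as np

def latent_correlation(X):
    """Kendall's tau matrix (3.1) followed by the sine transform (3.2).

    X is an (n, d) sample matrix; returns a (d, d) estimate of the latent
    correlation matrix Sigma*.
    """
    n, d = X.shape
    # sign(X_ij - X_i'j) for every ordered pair (i, i') and coordinate j
    S = np.sign(X[:, None, :] - X[None, :, :])      # shape (n, n, d)
    iu = np.triu_indices(n, k=1)                    # keep pairs with i < i'
    S = S[iu]                                       # shape (n*(n-1)/2, d)
    tau = 2.0 / (n * (n - 1)) * (S.T @ S)           # Kendall's tau matrix
    return np.sin(np.pi / 2.0 * tau)
```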
3.3 Latent Differential Graph Models and the Estimator
Now we are ready to formulate our differential graph model. Assume that $d$-dimensional random vectors $X$ and $Y$ satisfy $X \sim TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$ and $Y \sim TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$. The differential graph is defined to be the difference of the two latent precision matrices,
$$\Delta^* = \Theta^*_Y - \Theta^*_X, \qquad (3.3)$$
where $\Theta^*_X = \Sigma^{*-1}_X$ and $\Theta^*_Y = \Sigma^{*-1}_Y$. It immediately implies
$$\Sigma^*_X \Delta^* \Sigma^*_Y - (\Sigma^*_X - \Sigma^*_Y) = 0, \quad\text{and}\quad \Sigma^*_Y \Delta^* \Sigma^*_X - (\Sigma^*_X - \Sigma^*_Y) = 0. \qquad (3.4)$$
Given i.i.d. copies $X_1, \ldots, X_{n_X}$ of $X$, and i.i.d. copies $Y_1, \ldots, Y_{n_Y}$ of $Y$, without loss of generality, we assume $n_X = n_Y = n$, and we denote the Kendall's tau correlation matrices defined in (3.2) as $\hat\Sigma_X$ and $\hat\Sigma_Y$. Following (3.4), a reasonable procedure for estimating $\Delta^*$ is to solve the following equation for $\Delta$:
$$\frac{1}{2}\hat\Sigma_X \Delta \hat\Sigma_Y + \frac{1}{2}\hat\Sigma_Y \Delta \hat\Sigma_X - (\hat\Sigma_X - \hat\Sigma_Y) = 0, \qquad (3.5)$$
where we add up the two equations in (3.4) and replace the latent population correlation matrices $\Sigma^*_X, \Sigma^*_Y$ with the Kendall's tau estimators $\hat\Sigma_X, \hat\Sigma_Y$. Note that (3.5) is a Z-estimator [30], which can be translated into an M-estimator, by noticing that $\frac{1}{2}\hat\Sigma_X\Delta\hat\Sigma_Y + \frac{1}{2}\hat\Sigma_Y\Delta\hat\Sigma_X - (\hat\Sigma_X - \hat\Sigma_Y)$ can be seen as the score function of the following quasi log likelihood function
$$\ell(\Delta) = \frac{1}{2}\mathrm{tr}(\Delta\hat\Sigma_Y\Delta\hat\Sigma_X) - \mathrm{tr}\big(\Delta(\hat\Sigma_X - \hat\Sigma_Y)\big). \qquad (3.6)$$
Let $S = \mathrm{supp}(\Delta^*)$; in this paper, we assume that $\Delta^*$ is sparse, i.e., $|S| \le s$ with $s > 0$. Based on (3.6), we propose to estimate $\Delta^*$ by the following M-estimator with a nonconvex penalty
$$\hat\Delta = \mathop{\mathrm{argmin}}_{\Delta\in\mathbb{R}^{d\times d}}\ \frac{1}{2}\mathrm{tr}(\Delta\hat\Sigma_Y\Delta\hat\Sigma_X) - \mathrm{tr}\big(\Delta(\hat\Sigma_X - \hat\Sigma_Y)\big) + G_\lambda(\Delta), \qquad (3.7)$$
where $\lambda > 0$ is a regularization parameter and $G_\lambda$ is a decomposable nonconvex penalty function, i.e., $G_\lambda(\Delta) = \sum_{j,k=1}^d g_\lambda(\Delta_{jk})$, such as the smoothly clipped absolute deviation (SCAD) penalty [10] or the minimax concave penalty (MCP) [36]. The key property of the nonconvex penalty is that it can avoid over-penalization when the magnitude is very large. It has been shown in [10, 36, 33] that the nonconvex penalty is able to alleviate the estimation bias and attain a refined statistical rate of convergence. The nonconvex penalty $g_\lambda(t)$ can be further decomposed as the sum of the $\ell_1$ penalty and a concave component $h_\lambda(t)$, i.e., $g_\lambda(t) = \lambda|t| + h_\lambda(t)$. Take the MCP penalty for example. The corresponding $g_\lambda(t)$ and $h_\lambda(t)$ are defined as follows:
$$g_\lambda(t) = \lambda\int_0^{|t|}\Big(1 - \frac{z}{\lambda b}\Big)_+\,dz, \quad\text{for any } t\in\mathbb{R},$$
where $\lambda > 0$ is the regularization parameter and $b > 0$ is a fixed parameter, and
$$h_\lambda(t) = -\frac{t^2}{2b}\cdot\mathbb{1}(|t| \le b\lambda) + \Big(\frac{b\lambda^2}{2} - \lambda|t|\Big)\cdot\mathbb{1}(|t| > b\lambda).$$
In Section 4, we will show that the above family of nonconvex penalties satisfies certain common regularity conditions on $g_\lambda(t)$ as well as its concave component $h_\lambda(t)$.
We will show in the next section that when the parameters of the nonconvex penalty are appropriately chosen, (3.7) is an unconstrained convex optimization problem. Thus it can be solved by proximal gradient descent [4] very efficiently. In addition, it is easy to check that the estimator $\hat\Delta$ from (3.7) is symmetric, so it does not need the symmetrizing step adopted in [38], which can undermine the estimation accuracy.
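To make the proximal gradient solver concrete, here is a minimal sketch (our own illustration, not the authors' implementation) that alternates a gradient step on the smooth quasi-likelihood loss (3.6) with the elementwise MCP proximal operator. `Sx` and `Sy` stand for the Kendall's tau estimates $\hat\Sigma_X$ and $\hat\Sigma_Y$; the step size `eta` is assumed to be small enough (and smaller than $b$) for the prox formula to apply.

```python
import numpy as np

def mcp_prox(z, lam, b, eta):
    """Elementwise proximal operator of the MCP penalty g_lam (assumes b > eta)."""
    shrunk = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0) / (1.0 - eta / b)
    return np.where(np.abs(z) <= b * lam, shrunk, z)

def ldgm_mcp(Sx, Sy, lam, b=3.0, eta=0.1, n_iter=500):
    """Proximal gradient descent for the M-estimator (3.7); a rough sketch only."""
    d = Sx.shape[0]
    Delta = np.zeros((d, d))
    for _ in range(n_iter):
        # gradient of the smooth loss (3.6): 1/2 (Sx D Sy + Sy D Sx) - (Sx - Sy)
        grad = 0.5 * (Sx @ Delta @ Sy + Sy @ Delta @ Sx) - (Sx - Sy)
        Delta = mcp_prox(Delta - eta * grad, lam, b, eta)
    return Delta
```

Setting `b` very large recovers the LDGM-L1 variant discussed in the experiments.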
4 Main Theory
In this section, we present our main theory. Let $S = \mathrm{supp}(\Delta^*)$ be the support of the true differential graph. We introduce the following oracle estimator of $\Delta^*$:
$$\hat\Delta_O = \mathop{\mathrm{argmin}}_{\mathrm{supp}(\Delta)\subseteq S}\ \ell(\Delta), \qquad (4.1)$$
where $\ell(\Delta) = \frac{1}{2}\mathrm{tr}(\Delta\hat\Sigma_Y\Delta\hat\Sigma_X) - \mathrm{tr}(\Delta(\hat\Sigma_X - \hat\Sigma_Y))$. The oracle estimator $\hat\Delta_O$ is not a practical estimator, since we do not know the true support in practice. An estimator is said to have the oracle property if it is identical to the oracle estimator $\hat\Delta_O$ under certain conditions. We will show that our estimator enjoys the oracle property under a mild condition.
We first lay out some assumptions that are required throughout our analysis.
Assumption 4.1. There exist constants $\kappa_1, \kappa_2 > 0$ such that $\kappa_1 \le \lambda_{\min}(\Sigma^*_X) \le \lambda_{\max}(\Sigma^*_X) \le 1/\kappa_1$ and $\kappa_2 \le \lambda_{\min}(\Sigma^*_Y) \le \lambda_{\max}(\Sigma^*_Y) \le 1/\kappa_2$. The true covariance matrices have bounded $\ell_\infty$ norm, i.e., $\|\Sigma^*_X\|_\infty \le \sigma_X$ and $\|\Sigma^*_Y\|_\infty \le \sigma_Y$, where $\sigma_X, \sigma_Y > 0$ are constants. And the true precision matrices have bounded matrix $\ell_1$ norm, i.e., $\|\Theta^*_X\|_1 \le \theta_X$ and $\|\Theta^*_Y\|_1 \le \theta_Y$, where $\theta_X, \theta_Y > 0$ are constants.
The first part of Assumption 4.1 requires that the smallest eigenvalues of the correlation matrices $\Sigma^*_X, \Sigma^*_Y$ are bounded away from zero, and their largest eigenvalues are finite. This assumption is commonly imposed in the literature for the analysis of graphical models [21, 27].
Assumption 4.2. The true difference matrix $\Delta^* = \Sigma^{*-1}_Y - \Sigma^{*-1}_X$ has $s$ nonzero entries, i.e., $\|\Delta^*\|_{0,0} \le s$, and has bounded $\ell_{1,1}$ norm, i.e., $\|\Delta^*\|_{1,1} \le M$, where $M > 0$ does not depend on $d$.
Assumption 4.2 requires the differential graph to be sparse. This is reasonable in differential network analysis where the networks only vary slightly under different conditions.
The next assumption imposes regularity conditions on the nonconvex penalty $g_\lambda(t)$. Recall that $g_\lambda(t)$ can be written as $g_\lambda(t) = \lambda|t| + h_\lambda(t)$.
Assumption 4.3. $g_\lambda(t)$ and its concave component $h_\lambda(t)$ satisfy:
(a) There exists a constant $\nu$ such that $g'_\lambda(t) = 0$ for $|t| \ge \nu\lambda > 0$.
(b) There exists a constant $\zeta \ge 0$ such that $h_\lambda(t) + \zeta/2\cdot t^2$ is convex.
(c) $h_\lambda(t)$ and $h'_\lambda(t)$ pass through the origin, i.e., $h_\lambda(0) = h'_\lambda(0) = 0$.
(d) $h'_\lambda(t)$ is bounded, i.e., $|h'_\lambda(t)| \le \lambda$ for any $t$.
Similar assumptions have been made in [23, 33]. Note that condition (b) in Assumption 4.3 is weaker than the smoothness condition in [33], since here it does not require $h_\lambda(t)$ to be twice differentiable. Assumption 4.3 holds for a variety of nonconvex penalty functions including MCP and SCAD. In particular, the MCP penalty satisfies Assumption 4.3 with $\nu = b$ and $\zeta = 1/b$. Furthermore, according to condition (b), if $\zeta$ is smaller than the modulus of the restricted strong convexity of $\ell(\Delta)$, (3.7) will become a convex optimization problem, even though $G_\lambda(\Delta)$ is nonconvex. Take MCP for example: this can be achieved by choosing a sufficiently large $b$ in MCP such that $\zeta$ is small enough.
Now we are ready to present our main results. We first show that under a large magnitude condition on the nonzero entries of the true differential graph $\Delta^*$, our estimator attains a faster convergence rate, which matches the minimax rate in the classical regime.
Theorem 4.4. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $G_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. If the nonzero entries of $\Delta^*$ satisfy $\min_{(j,k)\in S}|\Delta^*_{jk}| \ge \nu\lambda + C\theta_X^2\theta_Y^2\sigma_X\sigma_Y M\sqrt{\log s/n}$, then for the estimator $\hat\Delta$ in (3.7) with the regularization parameter satisfying $\lambda = 2CM\sqrt{\log d/n}$ and $\zeta \le \kappa_1\kappa_2/2$, we have that
$$\big\|\hat\Delta - \Delta^*\big\|_{\infty,\infty} \le 2\sqrt{10\pi}\,\theta_X^2\theta_Y^2\sigma_X\sigma_Y M\sqrt{\frac{\log s}{n}}$$
holds with probability at least $1 - 2/s$. Furthermore, we have that
$$\big\|\hat\Delta - \Delta^*\big\|_F \le \frac{C_1 M}{\kappa_1\kappa_2}\sqrt{\frac{s}{n}}$$
holds with probability at least $1 - 3/s$, where $C_1$ is an absolute constant.
Remark 4.5. Theorem 4.4 suggests that under the large magnitude assumption, the statistical rate of our estimator is $O(\sqrt{s/n})$ in terms of Frobenius norm. This is faster than the rate $O(\sqrt{s\log d/n})$ in [38], which matches the minimax lower bound for sparse differential graph estimation. Note that our faster rate is not contradictory to the minimax lower bound, because we restrict ourselves to a smaller class of differential graphs, where the magnitude of the nonzero entries is sufficiently large.
We further show that our estimator achieves the oracle property under mild conditions.
Theorem 4.6. Under the same conditions as Theorem 4.4, for the estimator $\hat\Delta$ in (3.7) and the oracle estimator $\hat\Delta_O$ in (4.1), we have with probability at least $1 - 3/s$ that $\hat\Delta = \hat\Delta_O$, which further implies $\mathrm{supp}(\hat\Delta) = \mathrm{supp}(\hat\Delta_O) = \mathrm{supp}(\Delta^*)$.
Theorem 4.6 suggests that our estimator is identical to the oracle estimator in (4.1) with high probability, when the nonzero entries in $\Delta^*$ satisfy $\min_{(j,k)\in S}|\Delta^*_{jk}| \ge \nu\lambda + C\theta_X^2\theta_Y^2\sigma_X\sigma_Y M\sqrt{\log s/n}$. This condition is optimal up to the logarithmic factor $\sqrt{\log s}$.
Now we turn to the general case when the nonzero entries of $\Delta^*$ have both large and small magnitudes. Define $S^c = \{(j,k): j,k = 1,\ldots,d\}\setminus S$, $S_1 = \{(j,k)\in S: |\Delta^*_{jk}| > \nu\lambda\}$, and $S_2 = \{(j,k)\in S: |\Delta^*_{jk}| \le \nu\lambda\}$. Denote $|S_1| = s_1$ and $|S_2| = s_2$. Clearly, we have $s = s_1 + s_2$.
Theorem 4.7. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $G_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. For the estimator in (3.7) with the regularization parameter $\lambda = 2CM\sqrt{\log d/n}$ and $\zeta \le \kappa_1\kappa_2/4$, we have that
$$\big\|\hat\Delta - \Delta^*\big\|_F \le \frac{16\sqrt{3\pi}\,M}{\kappa_1\kappa_2}\sqrt{\frac{s_1}{n}} + \frac{10\pi MC}{\kappa_1\kappa_2}\sqrt{\frac{s_2\log d}{n}}$$
holds with probability at least $1 - 3/s_1$, where $C$ is an absolute constant.
Remark 4.8. Theorem 4.7 indicates that when the large magnitude condition does not hold, our estimator is still able to attain a faster rate. Specifically, for those nonzero entries of $\Delta^*$ with large magnitude, the estimation error bound in terms of Frobenius norm is $O(\sqrt{s_1/n})$, which is the same as the bound in Theorem 4.4. For those nonzero entries of $\Delta^*$ with small magnitude, the estimation error is $O(\sqrt{s_2\log d/n})$, which matches the convergence rate in [38]. Overall, our estimator obtains a refined convergence rate of $O(\sqrt{s_1/n} + \sqrt{s_2\log d/n})$, which is faster than that in [38]. In particular, if $s_2 = 0$, the refined convergence rate in Theorem 4.7 reduces to the faster rate in Theorem 4.4.
In this section, we test our method on both synthetic and real world data. We conducted experiments for our estimator using both SCAD and MCP penalties. We did not find any significant difference in the results and thus we only report the results of our estimator with MCP penalty. To choose the tuning parameters and b, we adopt 5-fold cross-validation. Denoting our estimator with MCP penalty by LDGM-MCP, we compare it with the following methods: (1) SepGlasso: estimating the latent precision matrices separately using graphical Lasso and Kendall’s tau correlation matrices [20], followed by calculating their difference; (2) DPM: directly estimating differential precision matrix [38]. In addition, we also test differential graph model with `
1,1 penalty, denoted as LDGM-L1. Note that LDGM-L1 is a special case of our method, since `
1,1 norm penalty is a special case of MCP penalty when b = 1. The LDGM-MCP and LDGM-L1 estimators are obtained by solving the proximal gradient descent algorithm [4]. The implementation of DPM estimator is obtained from the author’s website, and the SepGlasso estimator is implemented by graphical Lasso.
5.1 Simulations
We first show the results on synthetic data. Since the transelliptical distribution includes the Gaussian distribution, it is natural to show that our approach also works well for the latter. We consider the dimension settings $n = 100, d = 100$ and $n = 200, d = 400$ respectively. Specifically, data are generated as follows: (1) For the Gaussian distribution, we generate data $\{X_i\}_{i=1}^n \sim N(0, \Sigma^*_X)$ and $\{Y_i\}_{i=1}^n \sim N(0, \Sigma^*_Y)$ with precision matrices $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ generated by the huge package.1 (2) For the transelliptical distribution, we consider the following generating scheme: $\{X_i\}_{i=1}^n \sim TE_d(\Sigma^*_X, \xi; f_1, \ldots, f_d)$, $\{Y_i\}_{i=1}^n \sim TE_d(\Sigma^*_Y, \xi; g_1, \ldots, g_d)$, where $\xi \sim \chi_d$, $f_1^{-1}(\cdot) = \ldots = f_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^3$ and $g_1^{-1}(\cdot) = \ldots = g_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^{1/2}$. The latent precision matrices $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ are generated in the same way as for the Gaussian data. For both Gaussian and transelliptical differential graph models, we consider two settings for the individual graph structures: (1) both $\Sigma^{*-1}_X$ and $\Sigma^{*-1}_Y$ have "random" structures; (2) $\Sigma^{*-1}_X$ has a "band" structure and $\Sigma^{*-1}_Y$ has a "random" structure.
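The transelliptical sampling scheme above can be written in a few lines. In the sketch below (our own illustration; the paper generates the precision matrices with the R package huge), a latent elliptical vector is drawn with $\xi\sim\chi_d$ and then pushed through the inverse monotone transforms; `Sigma` is assumed to be a valid (positive definite) latent correlation matrix.

```python
import numpy as np

def sample_transelliptical(n, Sigma, f_inv, seed=None):
    """Draw n samples X with (f_1(X_1),...,f_d(X_d)) ~ EC_d(0, Sigma, xi), xi ~ chi_d."""
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    A = np.linalg.cholesky(Sigma)
    U = rng.standard_normal((n, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)      # uniform on the unit sphere
    xi = np.sqrt(rng.chisquare(df=d, size=(n, 1)))     # xi ~ chi_d
    Z = xi * (U @ A.T)                                 # elliptical latent sample
    return f_inv(Z)                                    # X_j = f_j^{-1}(Z_j)

fX_inv = lambda z: np.sign(z) * np.abs(z) ** 3         # transform used for X
fY_inv = lambda z: np.sign(z) * np.abs(z) ** 0.5       # transform used for Y
```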
Given an estimator $\hat\Delta$, we define the true positive and true negative rates of $\hat\Delta$ as
$$\mathrm{TP} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat\Delta_{jk} \neq 0 \text{ and } \Delta^*_{jk} \neq 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} \neq 0)}, \qquad \mathrm{TN} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat\Delta_{jk} = 0 \text{ and } \Delta^*_{jk} = 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} = 0)}.$$
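Support-recovery performance is then read off the estimated and true supports; an ROC curve like Figure 1 is traced by sweeping the regularization parameter $\lambda$ and plotting TP against $1-\mathrm{TN}$. A minimal sketch (ours):

```python
import numpy as np

def tp_tn(Delta_hat, Delta_star, tol=1e-8):
    """True positive / true negative rates for support recovery."""
    est_nz = np.abs(Delta_hat) > tol
    true_nz = np.abs(Delta_star) > tol
    tp = np.sum(est_nz & true_nz) / max(np.sum(true_nz), 1)
    tn = np.sum(~est_nz & ~true_nz) / max(np.sum(~true_nz), 1)
    return tp, tn
```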
The receiver operating characteristic (ROC) curves for transelliptical differential graph models are shown in Figure 1, which report the performances of different methods on support recovery. The ROC curves were plotted by averaging the results over 10 repetitions. From Figure 1 we can see that our estimator (LDGM-MCP) outperforms the other methods in all settings. In addition, LDGM-L1, as a special case of our estimator, also performs better than DPM and SepGlasso, although it is inferior to LDGM-MCP because the MCP penalty can correct the bias in the estimation and achieve a faster rate of convergence. Note that SepGlasso's performance is poor since it highly depends on the sparsity of both individual graphs. When n > 100, the DPM method failed to output a solution within one day and thus no result is presented; this computational burden is also stated in their paper. We use the Frobenius norm $\|\hat\Delta - \Delta^*\|_F$ and the infinity norm $\|\hat\Delta - \Delta^*\|_{\infty,\infty}$ of the estimation errors to evaluate the performances of different methods in estimation. The results averaged over 10 replicates for the transelliptical differential graph are summarized in Tables 1 and 2, respectively. Our estimator also achieves smaller errors than the other baselines in all settings. Due to the space limit, we defer the experiment results for the Gaussian differential graph model to the appendix.
1Available on http://cran.r-project.org/web/packages/huge
[Figure 1 shows four ROC panels, each plotting TP against 1-TN: (a) Setting 1: n=100, d=100; (b) Setting 2: n=100, d=100; (c) Setting 1: n=200, d=400; (d) Setting 2: n=200, d=400. Panels (a)-(b) compare SepGlasso, DPM, LDGM-L1, and LDGM-MCP; panels (c)-(d) compare SepGlasso, LDGM-L1, and LDGM-MCP.]
Figure 1: ROC curves for transelliptical differential graph models of all four methods. There are two settings of graph structure. Note that DPM is not scalable to d = 400.
5.2 Experiments on Real World Data
We applied our approach to the same gene expression data used in [38], which were collected from patients with stage III or IV ovarian cancer. [29] identified six molecular subtypes of ovarian cancer in this data, labeled C1 through C6. In particular, the C1 subtype was found to have much shorter survival times, and was characterized by differential expression of genes associated with stromal and immune cell types. In this experiment, we intended to investigate whether the C1 subtype was also associated with genetic differential networks. The subjects were divided into two groups: Group 1 with $n_1 = 78$ patients containing the C1 subtype, and Group 2 with $n_2 = 113$ patients containing the C2 through C6 subtypes. We analyzed two pathways from the KEGG pathway database [16, 17] respectively. In each pathway, we applied different methods to determine whether there is any difference in the conditional dependency relationships of the gene expression levels between the aforementioned Group 1 and Group 2. Two genes were connected in the differential network if their conditional dependency relationship given the others changed in either magnitude or sign. In order to obtain a clear view of the differential graph, we only plotted genes whose conditional dependency with others changed between the two groups. To interpret the results, the genes associated with more edges in the differential networks were considered to be more important.
Figure 2 shows the results of estimation for the differential graph of the TGF-β pathway, where the number of genes $d = 80$ is greater than $n_1$, the sample size of Group 1. LDGM-MCP identified two important genes, COMP and THBS2, both of which have been suggested to be related to resistance to platinum-based chemotherapy in epithelial ovarian cancer by [24]. LDGM-L1 suggested that COMP was important, and DPM also suggested COMP and THBS2. Separate estimation (SepGlasso) gave a relatively dense network, which made it hard to say which genes are more important.
Figure 3 shows the results for the Apoptosis pathway, where the number of genes $d = 87$ is also greater than $n_1$. LDGM-MCP indicated that TNFSF10 and BIRC3 were the most important. Indeed, both TNFSF10 and BIRC3 have been widely studied for use as therapeutic targets in cancer [5, 32]. LDGM-L1 and DPM also suggested TNFSF10 and BIRC3 were important. The results of LDGM-MCP, LDGM-L1, and DPM are comparable. In order to overcome the nonsparsity issue encountered in the TGF-β experiment, the SepGlasso estimator was thresholded more than the other methods. However, it still performed poorly and identified the wrong gene CSF2RB.
6 Conclusions
In this paper, we propose a semiparametric differential graph model and an estimator for the differential graph based on quasi likelihood maximization. We employ a nonconvex penalty in our estimator, which results in a faster rate for parameter estimation than existing methods. We also prove that the proposed estimator achieves oracle property under a mild condition. Experiments on both synthetic and real world data further support our theory. Acknowledgments We would like to thank the anonymous reviewers for their helpful comments. Research was supported by NSF grant III-1618948. | 1. What are the strengths and contributions of the paper regarding graph estimation?
2. Are there any limitations or concerns regarding the assumptions made in the paper, such as sample size equality?
3. How does the method handle missing data in certain attributes?
4. Can the approach be extended to handle situations with multiple conditions? | Review | Review
The paper presents a graph estimation approach from an interesting perspective: not assuming sparsity in the graph of a single condition, but sparsity in the differential graph of two conditions. Moreover, using transelliptical distributions, which are more general than the common Gaussian distributions, together with a robust Kendall's tau estimate largely broadens the applicability of the approach thanks to its robustness. Real data analysis also checked the validity of the proposed approach. The comments about the paper are as below. 1. Line 115: is assuming that n_X (the sample number in the first condition) equals n_Y (the sample number in the second condition) necessary? In practice, sample sizes of two experiments are often different, and dropping some samples to keep equal sample sizes would lose part of the information. I am wondering how to employ the Kendall tau estimator when n_X does not equal n_Y. 2. As is often the case, some attributes might not have observations in both conditions. For example, when the attribute is a gene, the gene may be measured in the first condition but not in the second condition. How should one deal with this commonly encountered missing data problem? 3. This paper only provides the approach to detecting the differential graph for two conditions. If multiple conditions exist, pairwise analysis of the graphs would cost quadratic computational time. Is it possible to extend the approach to this kind of situation?
NIPS | Title
Inducing Equilibria via Incentives: Simultaneous Design-and-Play Ensures Global Convergence
Abstract
To regulate a social system comprised of self-interested agents, economic incentives are often required to induce a desirable outcome. This incentive design problem naturally possesses a bilevel structure, in which a designer modifies the rewards of the agents with incentives while anticipating the response of the agents, who play a non-cooperative game that converges to an equilibrium. The existing bilevel optimization algorithms raise a dilemma when applied to this problem: anticipating how incentives affect the agents at equilibrium requires solving the equilibrium problem repeatedly, which is computationally inefficient; bypassing the timeconsuming step of equilibrium-finding can reduce the computational cost, but may lead the designer to a sub-optimal solution. To address such a dilemma, we propose a method that tackles the designer’s and agents’ problems simultaneously in a single loop. Specifically, at each iteration, both the designer and the agents only move one step. Nevertheless, we allow the designer to gradually learn the overall influence of the incentives on the agents, which guarantees optimality after convergence. The convergence rate of the proposed scheme is also established for a broad class of games.
1 Introduction
A common thread in human history is how to "properly" regulate a social system comprised of self-interested individuals. In a laissez-faire economy, for example, the competitive market itself is the primary regulatory mechanism [47, 16]. However, a laissez-faire economy may falter due to the existence of significant "externalities" [8, 18], which may arise wherever the self-interested agents do not bear the external cost of their behaviors in the entirety. The right response, many argue, is to introduce corrective policies in the form of economic incentives (e.g., tolls, taxes, and subsidies) [32]. By modifying the rewards of the agents, these incentives can encourage (discourage) the agents to engage in activities that create positive (negative) side effects for the society, and thus guide the self-interests of the agents towards a socially desirable end. For example, carbon taxes can be levied on carbon emissions to protect the environment during the production of goods and services [35].
Surge pricing has been widely used to boost supply and dampen demand in volatile ride-hail markets [37]. Lately, subsidies and penalties were both introduced to overcome vaccine hesitancy in the world’s hard-fought battle against the COVID-19 pandemic.
The goal of this paper is to develop a provably efficient method for guiding the agents in a noncooperative game towards a socially desirable outcome — e.g., the one that maximizes the social welfare — by modifying their payoffs with incentives. The resulting problem may be naturally interpreted as a Stackelberg game [50] in which the “incentive designer" is the leader while the agents being regulated are the followers. Hence, it naturally possesses a bilevel structure [3]: at the upper level, the "designer" optimizes the incentives by anticipating and regulating the best response of the agents, who play a non-cooperative game at the lower level. As the lower-level agents pursue their self-interests freely, their best response can be predicted by the Nash equilibrium [39], which dictates no agent can do better by unilaterally changing their strategy. Accordingly, the incentive design problem is a mathematical program with equilibrium constraints (MPEC) [30].
In the optimization literature, MPECs are well-known for their intractability [10]. Specifically, even getting a first-order derivative through their bilevel structure is a challenge. In the incentive design problem, for example, to calculate the gradient of the designer’s objective at equilibrium, which provides a principled direction for the designer to update the incentives, one must anticipate how the equilibrium is affected by the changes [17]. This is usually achieved by performing a sensitivity analysis, which in turn requires differentiation through the lower-level equilibrium problem, either implicitly or explicitly [25]. No matter how the sensitivity analysis is carried out, the equilibrium problem must be solved before updating the incentives. The resulting algorithm thus admits a double loop structure: in the outer loop, the designer iteratively moves along the gradient; but to find the gradient, it must allow the lower level game dynamics to run its course to arrive at the equilibrium given the current incentives.
Because of the inherent inefficiency of the double-loop structure, many heuristics methods have also been developed for bilevel programs in machine learning [27, 29, 14]. When applied to the incentive design problem, these methods assume that the designer does not solve the equilibrium exactly to evaluate the gradient. Instead, at each iteration, the game is allowed to run just a few rounds, enough for the designer to obtain a reasonable approximation of the gradient. Although such a method promises to reduce the computational cost significantly at each iteration, it may never converge to the same optimal solution obtained without the approximation.
Contribution. In a nutshell, correctly anticipating how incentives affect the agents at equilibrium requires solving the equilibrium problem repeatedly, which is computationally inefficient. On the other hand, simply bypassing the time-consuming step of equilibrium-finding may lead the designer to a sub-optimal solution. This dilemma prompts the following question that motivates this study: can we obtain the optimal solution to an incentive design problem without repeatedly solving the equilibrium problem?
In this paper, we propose an efficient principled method that tackles the designer’s problem and agents’ problem simultaneously in a single loop. At the lower level, we use the mirror descent method [40] to model the process through which the agents move towards equilibrium. At the upper level, we use the gradient descent method to update the incentives towards optimality. At each iteration, both the designer and the agents only move one step based on the first-order information. However, as discussed before, the upper gradient relies on the corresponding lower equilibrium, which is not available in the single-loop update. Hence, we propose to use the implicit differentiation formula—with equilibrium strategy replaced by the current strategy—to estimate the upper gradient, which might be biased at the beginning. Nevertheless, we prove that if we improve the lower-level solution with larger step sizes, the upper-level and lower-level problems may converge simultaneously at a fast rate. The proposed scheme hence guarantees optimality because it can anticipate the overall influence of the incentives on the agents eventually after convergence.
Organization. In Section 2, we discuss related work. In Section 3, we provide the mathematical formulation of the incentive design problem. In Section 4, we design algorithms for solving the problem. In Section 5, we establish conditions under which the proposed scheme globally converges to the optimal solution and analyze the convergence rate. The convergence analysis is restricted to games with a unique equilibrium. In Section 6, we discuss how to apply our algorithms to games with multiple equilibria. Eventually, we conduct experiments to test our algorithms in Section 7.
Notation. We denote $\langle\cdot,\cdot\rangle$ as the inner product in vector spaces. For a vector $a = (a^i)$, we denote $a^{-i} = (a^j)_{j\neq i}$. For a finite set $\mathcal{X} \subseteq \mathbb{R}^n$, we denote $\Delta(\mathcal{X}) = \{\pi \in \mathbb{R}^n_+ : \sum_{x_i\in\mathcal{X}}\pi_{x_i} = 1\}$. For any vector norm $\|\cdot\|$, we denote $\|\cdot\|_* = \sup_{\|z\|\le 1}\langle\cdot, z\rangle$ as its dual norm. We refer readers to Appendix A for a collection of frequently used problem-specific notations.
2 Related work
The incentive design problem studied in this paper is a special case of mathematical programs with equilibrium constraints (MPEC) [19], a class of optimization problems constrained by equilibrium conditions. MPEC is closely related to bilevel programs [10], which bind two mathematical programs together by treating one program as part of the constraints for the other.
Bilevel Programming. In the optimization literature, bilevel programming was first introduced to tackle resource allocation problems [7] and has since found applications in such diverse topics as revenue management, network design, traffic control, and energy systems. In the past decade, researchers have discovered numerous applications of bilevel programming in machine learning, including meta-learning (ML) [14], adversarial learning [22], hyperparameter optimization, [31] and neural architecture search [27]. These newly found bilevel programs in ML are often solved by gradient descent methods, which require differentiating through the (usually unconstrained) lower-level optimization problem [28]. The differentiation can be carried out either implicitly on the optimality conditions as in the conventional sensitivity analysis [see e.g., 2, 43, 4], or explicitly by unrolling the numerical procedure used to solve the lower-level problem [see e.g., 31, 15]. In the explicit approach, one may "partially" unroll the solution procedure (i.e., stop after just a few rounds, or even only one round) to reduce the computational cost. Although this popular heuristic has delivered satisfactory performance on many practical tasks [29, 36, 14, 27], it cannot guarantee optimality for bilevel programs under the general setting, as it cannot derive the accurate upper-level gradient at each iteration [53].
MPEC. Unlike bilevel programs, MPEC is relatively under-explored in the ML literature so far. Recently, Li et al. [25] extended the explicit differentiation method for bilevel programs to MPECs. Their algorithm unrolls an iterative projection algorithm for solving the lower-level problem, which is formulated as a variational inequality (VI) problem. Leveraging the recent advance in differentiable programming [2], they embedded each projection iteration as a differentiable layer in a computational graph, and accordingly, transform the explicit differentiation as standard backpropagation through the graph. The algorithm proposed by Li et al. [26] has a similar overall structure, but theirs casts the lower-level solution process as the imitative logit dynamics [6] drawn from the evolutionary game theory, which can be more efficiently unrolled. Although backpropagation is efficient, constructing and storing such a graph — with potentially a large number of projection layers needed to find a good solution to the lower-level problem — is still demanding. To reduce this burden, partially unrolling the iterative projection algorithm is a solution. Yet, it still cannot guarantee optimality for MPECs due to the same reason as for bilevel programs.
The simultaneous design-and-play approach is proposed to address this dilemma. Our approach follows the algorithm of Hong et al. [21] and Chen et al. [9], which solves bilevel programs via singleloop update. Importantly, they solve both the upper- and the lower-level problem using a gradient descent algorithm and establish the relationship between the convergence rate of the single-loop algorithm and the step size used in gradient descent. However, their algorithms are limited to the cases where the lower-level optimization problem is unconstrained. Our work extends these single-loop algorithms to MPECs that have an equilibrium problem at the lower level. We choose mirror descent as the solution method to the lower-level problem because of its broad applicability to optimization problems with constraints [40] and generality in the behavioral interpretation of games [34, 23]. We show that the convergence of the proposed simultaneous design-and-play approach relies on the setting of the step size for both the upper- and lower-level updates, a finding that echos the key result in [21]. We first give the convergence rate under mirror descent and the unconstrained assumption and then extend the result to the constrained case. For the latter, we show that convergence cannot be guaranteed if the lower-level solution gets too close to the boundary early in the simultaneous solution process. To avoid this trap, the standard mirror descent method is revised to carefully steer the lower-level solution away from the boundary.
3 Problem Formulation
We study incentive design in both atomic games [39] and nonatomic games [45], classified depending on whether the set of agents is endowed with an atomic or a nonatomic measure. In social systems, both types of games can be useful, although the application context varies. Atomic games are typically employed when each agent has a non-trivial influence on the rewards of other agents. In a nonatomic game, on the contrary, a single agent’s influence is negligible and the reward could only be affected by the collective behavior of agents.
Atomic Game. Consider a game played by a finite set of agents $N = \{1, \ldots, n\}$, where each agent $i \in N$ selects a strategy $a^i \in \mathcal{A}^i \subseteq \mathbb{R}^{d^i}$ to maximize the reward received, which is determined by a continuously differentiable function $u^i: \mathcal{A} = \prod_{i\in N}\mathcal{A}^i \to \mathbb{R}$. Formally, a joint strategy $a_* \in \mathcal{A}$ is a Nash equilibrium if
$$u^i(a^i_*, a^{-i}_*) \ge u^i(a^i, a^{-i}_*), \quad \forall\, a^i \in \mathcal{A}^i,\ \forall\, i \in N.$$
Suppose that for all $i \in N$, the strategy set $\mathcal{A}^i$ is closed and convex, and the reward function $u^i$ is concave in $a^i$; then $a_* \in \mathcal{A}$ is a Nash equilibrium if and only if there exist $\lambda^1, \ldots, \lambda^n > 0$ such that [46]
$$\sum_{i=1}^n \lambda^i\cdot\big\langle\nabla_{a^i}u^i(a_*),\ a^i - a^i_*\big\rangle \le 0, \quad\text{for all } a \in \mathcal{A}. \qquad (3.1)$$
Example 3.1 (Oligopoly model). In an oligopoly model, there is a finite set $N = \{1, \ldots, n\}$ of firms, each of which supplies the market with a quantity $a^i$ ($a^i \ge 0$) of goods. Under this setting, we have $\mathcal{A} = \mathbb{R}^n_+$. The good is then priced as $p(q) = p_0 - \gamma\cdot q$, where $p_0, \gamma > 0$ and $q = \sum_{j\in N}a^j$ is the total output. The profit and the marginal profit of firm $i$ are then given by
$$u^i(a) = a^i\cdot\Big(p_0 - \gamma\sum_{j\in N}a^j\Big) - c^i, \qquad \nabla_{a^i}u^i(a) = p_0 - \gamma\cdot\Big(a^i + \sum_{j\in N}a^j\Big),$$
respectively, where $c^i$ is the constant marginal cost2 for firm $i$.
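As a quick numerical illustration of Example 3.1 (our own toy instance with arbitrary parameter values), simultaneous projected gradient play on the marginal profits converges to the symmetric Cournot equilibrium $a^i = p_0/(\gamma(n+1))$ of this linear instance:

```python
import numpy as np

n, p0, gamma = 5, 10.0, 1.0        # toy instance (arbitrary values)
a = np.zeros(n)                    # initial quantities
eta = 0.05                         # gradient-play step size

for _ in range(2000):
    total = a.sum()
    grad = p0 - gamma * (a + total)            # marginal profit of each firm
    a = np.maximum(a + eta * grad, 0.0)        # projected ascent onto a^i >= 0

print(a, p0 / (gamma * (n + 1)))               # iterates vs. closed-form equilibrium
```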
Nonatomic Game. Consider a game played by a continuous set of agents, which can be divided into a finite set of classes $N = \{1, \ldots, n\}$. We assume that each $i \in N$ represents a class of infinitesimal and homogeneous agents sharing the finite strategy set $\mathcal{A}^i$ with $|\mathcal{A}^i| = d^i$. The mass distribution for class $i$ is defined as a vector $q^i \in \Delta(\mathcal{A}^i)$ that gives the proportion of agents using each strategy. Let the cost of an agent in class $i$ selecting a strategy $a \in \mathcal{A}^i$ given $q = (q^1, \ldots, q^n)$ be $c^i_a(q)$. Formally, a joint mass distribution $q_* \in \Delta(\mathcal{A}) = \prod_{i\in N}\Delta(\mathcal{A}^i)$ is a Nash equilibrium, also known as a Wardrop equilibrium [51], if for all $i \in N$, there exists $b^i$ such that
$$c^i_a(q_*) = b^i \ \text{ if } q^i_{a*} > 0, \qquad c^i_a(q_*) \ge b^i \ \text{ if } q^i_{a*} = 0.$$
The following result extends the VI formulation to Nash equilibria in nonatomic games: denote $c^i(q) = (c^i_a(q))_{a\in\mathcal{A}^i}$; then $q_*$ is a Nash equilibrium if and only if [11]
$$\sum_{i\in N}\lambda^i\cdot\big\langle c^i(q_*),\ q^i - q^i_*\big\rangle \ge 0, \quad\text{for all } q\in\Delta(\mathcal{A}). \qquad (3.2)$$
Example 3.2 (Routing game). Consider a set of agents traveling from source nodes to sink nodes in a directed graph with nodes $\mathcal{V}$ and edges $\mathcal{E}$. Denote $N \subseteq \mathcal{V}\times\mathcal{V}$ as the set of source-sink pairs, $\mathcal{A}^i \subseteq 2^{\mathcal{E}}$ as the set of paths connecting $i \in N$, and $\mathcal{E}^i_a \subseteq \mathcal{E}$ as the set of all edges on the path $a \in \mathcal{A}^i$. Suppose that each source-sink pair $i \in N$ is associated with $\rho^i$ nonatomic agents aiming to choose a route from $\mathcal{A}^i$ to minimize the total cost incurred. Let $q^i_a$ be the proportion of travelers using path $a \in \mathcal{A}^i$ (so that $q^i \in \Delta(\mathcal{A}^i)$), $x_e \in \mathbb{R}_+$ be the number of travelers using edge $e$, and $t_e(x_e) \in \mathbb{R}_+$ be the cost for using edge $e$. Then we have $x_e = \sum_{i\in N}\sum_{a\in\mathcal{A}^i}\rho^i\cdot q^i_a\cdot\delta^i_{ea}$, where $\delta^i_{ea}$ equals 1 if $e \in \mathcal{E}^i_a$ and 0 otherwise. The total cost for a traveler selecting a path $a\in\mathcal{A}^i$ will then be $c^i_a(q) = \sum_{e\in\mathcal{E}}t_e(x_e)\cdot\delta^i_{ea}$.
2Throughout this paper, we use the term "reward" to describe the scenario where the agents aim to maximize $u^i$, and use "cost" when the agents aim to do the opposite.
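The edge-load and path-cost formulas in Example 3.2 reduce to two matrix products once an edge-path incidence matrix is fixed. The sketch below uses a toy single origin-destination network with affine edge latencies; the network and all numbers are our own illustration, not from the paper.

```python
import numpy as np

rho = 10.0                                       # mass of travelers on the O-D pair
# incidence[e, a] = 1 if edge e lies on path a (4 edges, 3 paths)
incidence = np.array([[1, 0, 0],
                      [1, 1, 0],
                      [0, 1, 1],
                      [0, 0, 1]], dtype=float)
t0 = np.array([1.0, 0.5, 0.5, 1.0])              # free-flow edge costs
slope = np.array([0.1, 0.2, 0.2, 0.1])           # congestion slopes

def path_costs(q):
    """Path costs c_a(q) for a distribution q over the three paths."""
    x = incidence @ (rho * q)                    # edge loads x_e
    t = t0 + slope * x                           # affine edge latencies t_e(x_e)
    return incidence.T @ t                       # c_a(q) = sum_e t_e(x_e) * delta_ea

print(path_costs(np.array([1 / 3, 1 / 3, 1 / 3])))
```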
Incentive Design. Despite the difference, we see that an equilibrium of both atomic and nonatomic games can be formulated as a solution to a corresponding VI problem of the form
$$\sum_{i\in N}\lambda^i\cdot\big\langle v^i(x_*),\ x^i - x^i_*\big\rangle \le 0, \quad\text{for all } x\in\mathcal{X} = \prod_{i\in N}\mathcal{X}^i, \qquad (3.3)$$
where $v^i$ and $\mathcal{X}^i$ denote different terms in the two types of games. Suppose that there exists an incentive designer aiming to induce a desired equilibrium. To this end, the designer can add incentives $\theta \in \Theta \subseteq \mathbb{R}^d$, which are assumed to enter the reward/cost functions and thus lead to a parameterized $v^i_\theta(x)$. We assume that the designer's objective is determined by a function $f: \Theta\times\mathcal{X}\to\mathbb{R}$. Denote $S(\theta)$ as the solution set of (3.3). We then obtain the uniform formulation of the incentive design problem for both atomic and nonatomic games:
$$\min_{\theta\in\Theta}\ f_*(\theta) = f(\theta, x_*), \quad \text{s.t. } x_*\in S(\theta). \qquad (3.4)$$
If the equilibrium problem admits multiple solutions, the agents may converge to different ones, and it would be difficult to determine which one better predicts the behaviors of the agents without additional information. In this paper, we first consider the case where the game admits a unique equilibrium. Sufficient conditions under which the game admits a unique equilibrium will also be provided later. We consider the non-unique case afterwards and show that our algorithms remain applicable after adding an appropriate regularizer to the cost function.
Stochastic Environment. In the aforementioned settings, $v^i_\theta(x)$ is a deterministic function. Although most MPEC algorithms in the optimization literature follow this deterministic setting, in this paper, we hope our algorithm can handle more realistic scenarios. Specifically, in the real world, the environment could be stochastic if some environment parameters fluctuate over days. In a traffic system, for example, both bad weather and special events may affect the road condition, and hence the travel time $v^i_\theta(x)$ experienced by the drivers. We expect our algorithm to still work in the face of such stochasticity. To this end, we assume that $v^i_\theta(x)$ represents the expected value of the cost function. On each day, however, the agents can only receive a noisy feedback $\hat v^i_\theta$ as an estimate. In the next section, we develop algorithms based on such noisy feedback.
4 Algorithm
We propose to update $\theta$ and $x$ simultaneously to improve the computational efficiency. The game dynamics at the lower level is modeled using the mirror descent method. Specifically, at stage $k$, given $\theta_k$ and $x_k$, each agent first receives $v^i_{\theta_k}(x_k)$ as feedback. After receiving the feedback, the agents update their strategies via
$$x^i_{k+1} = \mathop{\mathrm{argmax}}_{x^i\in\mathcal{X}^i}\ \big\{\langle v^i_{\theta_k}(x_k), x^i\rangle - 1/\beta^i_k\cdot D_{\psi^i}(x^i_k, x^i)\big\}, \qquad (4.1)$$
where $D_{\psi^i}(x^i_k, x^i)$ is the Bregman divergence induced by a strongly convex function $\psi^i$. The accurate value of $\nabla f_*(\theta_k)$, the gradient of the designer's objective function, equals
$$\nabla_\theta f\big(\theta_k, x_*(\theta_k)\big) + \big[\nabla_\theta x_*(\theta_k)\big]^\top\cdot\nabla_x f\big(\theta_k, x_*(\theta_k)\big),$$
which requires the exact lower-level equilibrium $x_*(\theta_k)$. However, at stage $k$, we only have the current strategy $x_k$. Therefore, we also have to establish estimators of $\nabla x_*(\theta_k)$ and $\nabla f_*(\theta_k)$ using $x_k$, the form of which will be specified later.
Remark 4.1. The standard gradient descent method is double-loop because at each $\theta_k$ it involves an inner loop for solving the exact value of $x_*(\theta_k)$ and then calculating the exact gradient.
4.1 Unconstrained Game
We first consider unconstrained games with $\mathcal{X}^i = \mathbb{R}^{d^i}$ for all $i\in N$. We select $\psi^i(\cdot)$ as a smooth function, i.e., there exists a constant $H \ge 1$ such that for all $i\in N$ and $x^i, x^{i\prime}\in\mathcal{X}^i$,
$$\big\|\nabla\psi^i(x^i) - \nabla\psi^i(x^{i\prime})\big\|_2 \le H\cdot\|x^i - x^{i\prime}\|_2. \qquad (4.2)$$
Examples of $\psi^i$ satisfying this assumption include (but are not limited to) $\psi^i(x^i) = (x^i)^\top Q^i x^i/2$, where $Q^i\in\mathbb{R}^{d^i\times d^i}$ is a positive definite matrix. It can be directly checked that we can set $H = \max_{i\in N}\sigma^i$, where $\sigma^i$ is the largest singular value of $Q^i$. In this case, the corresponding Bregman divergence becomes $D_{\psi^i}(x^i, x^{i\prime}) = (x^i - x^{i\prime})^\top Q^i(x^i - x^{i\prime})/2$, which is known as the squared Mahalanobis distance. Before laying out the algorithm, we first give the following lemma characterizing $\nabla_\theta x_*(\theta)$.
Lemma 4.2. When $\mathcal{X}^i = \mathbb{R}^{d^i}$ and $\nabla_x v_\theta(x_*(\theta))$ is non-singular, it holds that
$$\nabla_\theta x_*(\theta) = -\big[\nabla_x v_\theta\big(x_*(\theta)\big)\big]^{-1}\cdot\nabla_\theta v_\theta\big(x_*(\theta)\big).$$
Proof. See Appendix B.2 for a detailed proof.
For any given $\theta\in\Theta$ and $x\in\mathcal{X}$, we define
$$\widetilde\nabla f(\theta, x) = \nabla_\theta f(\theta, x) - \big[\nabla_\theta v_\theta(x)\big]^\top\cdot\big[\nabla_x v_\theta(x)\big]^{-1}\cdot\nabla_x f(\theta, x). \qquad (4.3)$$
Although we cannot obtain the exact value of $\nabla f_*(\theta_k)$, we may use $\widetilde\nabla f(\theta_k, x_k)$ as a surrogate and update $\theta_k$ based on $\widetilde\nabla f(\theta_k, x_k)$ instead. We are now ready to present the following bilevel incentive design algorithm for unconstrained games (see Algorithm 1).
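Computing the surrogate (4.3) only requires one linear solve and a few matrix-vector products. Here is a minimal sketch (ours; the Jacobians are assumed to be supplied by the modeler or by automatic differentiation, and the sign/transpose convention follows the reconstruction above):

```python
import numpy as np

def surrogate_grad(grad_theta_f, grad_x_f, jac_theta_v, jac_x_v):
    """Surrogate for (4.3), evaluated at the current (theta, x) instead of x*(theta).

    grad_theta_f: (d_theta,)     gradient of f in theta
    grad_x_f:     (d_x,)         gradient of f in x
    jac_theta_v:  (d_x, d_theta) Jacobian of v_theta(x) in theta
    jac_x_v:      (d_x, d_x)     Jacobian of v_theta(x) in x (assumed invertible)
    """
    y = np.linalg.solve(jac_x_v, grad_x_f)       # [grad_x v]^{-1} grad_x f
    return grad_theta_f - jac_theta_v.T @ y
```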
Algorithm 1 Bilevel incentive design for unconstrained games
Input: $\theta_0\in\Theta$, $x_0\in\mathcal{X} = \mathbb{R}^d$, where $d = \sum_{i\in N}d^i$, and a sequence of step sizes $(\alpha_k, \{\beta^i_k\}_{i\in N})$.
For $k = 0, 1, \ldots$ do:
  Update the strategy profile
  $$x^i_{k+1} = \mathop{\mathrm{argmax}}_{x^i\in\mathcal{X}^i}\ \big\{\langle\hat v^i_k, x^i\rangle - 1/\beta^i_k\cdot D_{\psi^i}(x^i_k, x^i)\big\}, \qquad (4.4)$$
  for all $i\in N$, where $\hat v^i_k$ is an estimator of $v^i_{\theta_k}(x_k)$.
  Update the incentive parameter
  $$\theta_{k+1} = \mathop{\mathrm{argmax}}_{\theta\in\Theta}\ \big\{\langle-\widetilde\nabla f(\theta_k, x_{k+1}), \theta\rangle - 1/\alpha_k\cdot\|\theta - \theta_k\|_2^2\big\}.$$
EndFor
Output: last-iterate incentive parameter $\theta_{k+1}$ and strategy profile $x_{k+1}$.
In Algorithm 1, if $\theta_k$ and $x_k$ converge to fixed points $\bar\theta$ and $\bar x$, respectively, then $\bar x = x_*(\bar\theta)$ is expected to hold, and hence $\widetilde\nabla f(\bar\theta, \bar x) = \nabla f_*(\bar\theta)$, so the optimality of $\bar\theta$ can then be guaranteed. This implies that the algorithm finds the optimal solution if it converges. The difficult part, instead, is how to design appropriate step sizes that ensure convergence. In this paper, we provide such conditions in Section 5.1.
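Putting the pieces together, a single-loop sketch of Algorithm 1 with the Euclidean choice $\psi^i(x^i)=\|x^i\|_2^2/2$ (so the mirror step reduces to a plain gradient step) might look as follows. This is our own illustration: `oracle` is a hypothetical object bundling the noisy game feedback $\hat v_k$, the projection onto $\Theta$, and the gradients/Jacobians consumed by `surrogate_grad` from the sketch above.

```python
import numpy as np

def design_and_play(oracle, theta0, x0, K=1000, alpha=0.1, beta=0.5):
    """Single-loop sketch of Algorithm 1 (unconstrained strategies, Euclidean psi)."""
    theta, x = np.asarray(theta0, float), np.asarray(x0, float)
    for k in range(K):
        beta_k = beta / (k + 1) ** (2 / 3)            # lower-level step size
        alpha_k = alpha / (k + 1)                     # upper-level step size
        v_hat = oracle.noisy_feedback(theta, x)       # unbiased estimate of v_theta(x)
        x = x + beta_k * v_hat                        # mirror step (4.4) with psi = ||.||^2/2
        g = surrogate_grad(*oracle.gradients_and_jacobians(theta, x))
        theta = oracle.project_theta(theta - alpha_k * g)   # incentive update
    return theta, x
```

The step-size decays match the schedule analyzed later in Theorem 5.4.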
4.2 Simplex-Constrained Game
We then consider the case where, for all $i\in N$, $x^i$ is constrained within the probability simplex
$$\Delta([d^i]) = \big\{x^i\in\mathbb{R}^{d^i}\,:\, x^i \ge 0,\ (x^i)^\top\mathbf{1}_{d^i} = 1\big\},$$
where $\mathbf{1}_{d^i}\in\mathbb{R}^{d^i}$ is the vector of all ones. Here we remark that many classic game-theoretic models are simplex-constrained. In fact, as long as an agent faces a finite number of choices and adopts a mixed strategy, its decision space is a probability simplex [39]. In addition, some other types of decisions may also be constrained by a simplex. For example, financial investment concerns how to split money across different assets; in such a scenario, the budget constraint can also be represented by a probability simplex. In this case, we naturally consider $\psi^i(x^i) = \sum_{j\in[d^i]}[x^i]_j\cdot\log[x^i]_j$, the negative Shannon entropy. Such a choice gives the Bregman divergence $D_{\psi^i}(x^i, x^{i\prime}) = \sum_{j\in[d^i]}[x^{i\prime}]_j\cdot\log\big([x^{i\prime}]_j/[x^i]_j\big)$, which is the KL divergence. In this case, we still first need to characterize $\nabla_\theta x_*(\theta)$, which also has an analytic form. Specifically, if we define a function $h_\theta(x) = (h^i_\theta(x))_{i\in N}$ that satisfies
$$h^i_\theta(x) = \mathop{\mathrm{argmax}}_{x^{\prime i}\in\mathcal{X}^i}\ \big\{\langle v^i_\theta(x), x^{\prime i}\rangle - 1/\beta^i_k\cdot D_{\psi^i}(x^i, x^{\prime i})\big\},$$
then for any $\theta$, $x_*(\theta)$ satisfies the fixed-point equation $x_*(\theta) = h_\theta(x_*(\theta))$ [12]. Implicitly differentiating through this fixed-point equation then yields $\nabla x_*(\theta) = \nabla_\theta h_\theta(x_*(\theta))\cdot\big(I - \nabla_x h_\theta(x_*(\theta))\big)^{-1}$. Then, similar to (4.3), we may use
$$\widetilde\nabla f(\theta, x) = \nabla_\theta f(\theta, x) + \nabla_\theta h_\theta(x)\cdot\big(I - \nabla_x h_\theta(x)\big)^{-1}\cdot\nabla_x f(\theta, x) \qquad (4.5)$$
to approximate the actual gradient $\nabla f_*(\theta)$ and then update $\theta_k$ based on $\widetilde\nabla f(\theta_k, x_k)$ instead.
Remark 4.3. The mapping $h_\theta(x)$ has an analytic expression, which reads
$$h^i_\theta(x) = x^i\cdot\exp\big(\beta^i_k\cdot v^i_\theta(x)\big)\,\Big/\,\big\|x^i\cdot\exp\big(\beta^i_k\cdot v^i_\theta(x)\big)\big\|_1,$$
where the product and the exponential are taken entrywise. Hence, both $\nabla_x h_\theta(x)$ and $\nabla_\theta h_\theta(x)$ can also be calculated analytically.
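The closed form in Remark 4.3 is just a multiplicative-weights step and takes one line; a sketch (ours):

```python
import numpy as np

def entropic_update(x_i, v_i, beta_i):
    """Closed form of h^i_theta(x) in Remark 4.3: entrywise reweighting + renormalization."""
    w = x_i * np.exp(beta_i * v_i)
    return w / w.sum()
```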
In addition to a different gradient estimate, we also modify Algorithm 1 to keep the iterates $x_k$ from hitting the boundary at the early stage. The modification involves an additional step that mixes the strategy with the uniform strategy $\mathbf{1}_{d^i}/d^i$, i.e., imposing
$$\tilde x^i_{k+1} = (1 - \nu_{k+1})\cdot x^i_{k+1} + \nu_{k+1}\cdot\mathbf{1}_{d^i}/d^i$$
upon finishing the update (4.4), where $\nu_{k+1}\in(0,1)$ is the mixing parameter, decreasing to 0 as $k\to\infty$. In the following, we give the formal presentation of the modified bilevel incentive design algorithm for simplex-constrained games (see Algorithm 2).
Algorithm 2 Bilevel incentive design for simplex-constrained games
Input: $\theta_0\in\Theta$, $x_0\in\mathcal{X}$, step sizes $(\alpha_k, \{\beta^i_k\}_{i\in N})$, $k\ge 0$, and mixing parameters $\nu_k$, $k\ge 0$.
For $k = 0, 1, \ldots$ do:
  Update the strategy profile
  $$x^i_{k+1} = \mathop{\mathrm{argmax}}_{x^i\in\Delta([d^i])}\ \big\{\langle\hat v^i_k, x^i\rangle - 1/\beta^i_k\cdot D_{\psi^i}(\tilde x^i_k, x^i)\big\},$$
  $$\tilde x^i_{k+1} = (1 - \nu_k)\cdot x^i_{k+1} + \nu_k\cdot\mathbf{1}_{d^i}/d^i, \qquad (4.6)$$
  for all $i\in N$, where $\hat v^i_k$ is an estimator of $v^i_{\theta_k}(\tilde x_k)$.
  Update the incentive parameter
  $$\theta_{k+1} = \mathop{\mathrm{argmax}}_{\theta\in\Theta}\ \big\{\langle-\widetilde\nabla f(\theta_k, \tilde x_{k+1}), \theta\rangle - 1/\alpha_k\cdot\|\theta-\theta_k\|_2^2\big\}.$$
EndFor
Output: last-iterate incentive parameter $\theta_{k+1}$ and strategy profile $x_{k+1}$.
Similar to Algorithm 1, at the core of the convergence of Algorithm 2 is still the choice of step sizes. This case is even more complicated, as we need to design $\alpha_k$, $\beta_k$, and $\nu_k$ at the same time. In this paper, a provably convergent scheme is provided in Section 5.2.
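For completeness, here is a single-loop sketch of Algorithm 2 in the same style as the Algorithm 1 sketch (ours; `oracle` again denotes a hypothetical interface supplying noisy feedback, the extended gradient (4.5), and the projection onto $\Theta$). Each class performs the entropic update above on the mixed iterate, the result is blended with the uniform strategy, and the incentives take one projected gradient step.

```python
import numpy as np

def design_and_play_simplex(oracle, theta0, x0, K=1000, alpha=0.1, beta=0.5, nu=0.5):
    """Single-loop sketch of Algorithm 2 for simplex-constrained games."""
    theta = np.asarray(theta0, float)
    x_tilde = [np.asarray(xi, float) for xi in x0]     # one distribution per class i
    for k in range(K):
        beta_k = beta / (k + 1) ** (2 / 7)
        alpha_k = alpha / (k + 1) ** 0.5
        nu_k = nu / (k + 1) ** (4 / 7)
        v_hat = oracle.noisy_feedback(theta, x_tilde)  # list of noisy feedback vectors
        x_new = [entropic_update(xi, vi, beta_k) for xi, vi in zip(x_tilde, v_hat)]
        x_tilde = [(1 - nu_k) * xi + nu_k * np.ones_like(xi) / xi.size for xi in x_new]
        g = oracle.extended_grad(theta, x_tilde)       # extended gradient (4.5)
        theta = oracle.project_theta(theta - alpha_k * g)
    return theta, x_tilde
```

The decay rates of `alpha_k`, `beta_k`, and `nu_k` match the schedule analyzed in Theorem 5.5.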
Before closing this section, we remark that the algorithm can be easily adapted to other types of constraints by using another $h_\theta(x)$ to model the game dynamics. In particular, the projected gradient descent dynamics has very broad applicability; in that case, the procedure for calculating $\nabla_\theta h_\theta(x)$ and $\nabla_x h_\theta(x)$ is given by, for example, Amos and Kolter [2]. The additional step (4.6) then becomes unnecessary, as it is dedicated to simplex constraints.
5 Convergence Analysis
In this section, we study the convergence of the proposed algorithms. For simplicity, define $D_\psi(x, x^\prime) = \sum_{i \in \mathcal{N}} D_{\psi^i}(x^i, x^{i\prime})$. We make the following assumptions.
Assumption 5.1. The lower-level problem in (3.4) satisfies the following conditions. (1) The strategy set $\mathcal{X}^i$ of agent $i$ is a nonempty, compact, and convex subset of $\mathbb{R}^{d_i}$. (2) For each $i \in \mathcal{N}$, the gradient $v^i_\theta(\cdot)$ is $H_u$-Lipschitz continuous with respect to $D_\psi$, i.e., for all $i \in \mathcal{N}$ and $x, x^\prime \in \mathcal{X}$, $\|v^i_\theta(x) - v^i_\theta(x^\prime)\|_*^2 \leq H_u^2 \cdot D_\psi(x, x^\prime)$. (3) There exist constants $\rho_\theta, \rho_x > 0$ such that for all $x \in \mathcal{X}$ and $\theta \in \Theta$, $\|\nabla_\theta v_\theta(x)\|_2 < \rho_\theta$ and $\|[\nabla_x v_\theta(x)]^{-1}\|_2 \leq 1/\rho_x$. (4) For all $\theta \in \Theta$, the equilibrium $x^*(\theta)$ of the game is strongly stable with respect to $D_\psi$, i.e., for all $x \in \mathcal{X}$, $\sum_{i \in \mathcal{N}} \lambda^i \cdot \langle v^i_\theta(x), x^{i*}(\theta) - x^i \rangle \geq D_\psi(x^*(\theta), x)$.
Assumption 5.2. The upper-level problem in (3.4) satisfies the following properties. (1) The set $\Theta$ is compact and convex. The function $f^*(\theta)$ is $\mu$-strongly convex and $\nabla f^*(\theta)$ has 2-norm uniformly bounded by $M$. (2) The extended gradient $\widetilde{\nabla} f(\theta, x)$ is $\widetilde{H}$-Lipschitz continuous with respect to $D_\psi$, i.e., for all $x, x^\prime \in \mathcal{X}$, $\|\widetilde{\nabla} f(\theta, x) - \widetilde{\nabla} f(\theta, x^\prime)\|_2^2 \leq \widetilde{H}^2 \cdot D_\psi(x, x^\prime)$.
Assumption 5.3. Define the filtration by $\mathcal{F}^\theta_0 = \{\theta_0\}$, $\mathcal{F}^x_0 = \emptyset$, $\mathcal{F}^\theta_k = \mathcal{F}^\theta_{k-1} \cup \{x_{k-1}, \theta_k\}$, and $\mathcal{F}^x_k = \mathcal{F}^x_{k-1} \cup \{\theta_k, x_k\}$. We assume (1) the feedback $\hat{v}_k$ is an unbiased estimate, i.e., for all $i \in \mathcal{N}$, we have $\mathbb{E}[\hat{v}^i_k \mid \mathcal{F}^x_k] = v^i_{\theta_k}(x_k)$; (2) the feedback $\hat{v}_k$ has bounded mean squared estimation errors, i.e., there exists $\sigma_u > 0$ such that $\mathbb{E}[\|\hat{v}^i_k - v^i_{\theta_k}(x_k)\|_*^2 \mid \mathcal{F}^x_k] \leq \sigma_u^2$ for all $i \in \mathcal{N}$.
Below we discuss when the proposed assumptions hold and, if they are violated, how our algorithm behaves. Assumption 5.1 includes the condition that $x^*(\theta)$ is strongly stable; in this case, it is the unique Nash equilibrium of the game [34]. It is also a common assumption in the analysis of the mirror descent dynamics itself [13]. We provide sufficient conditions for checking strong stability in Appendix B.1, and we refer the readers to Section 6 for an explanation of how to extend our algorithm when this assumption is violated. Assumption 5.2 includes the convexity of the upper-level problem, which is usually a necessary condition to ensure global convergence; without convexity, our algorithm can still converge to a local minimum. Assumption 5.3 becomes unnecessary if we simply assume that the environment is deterministic, in which case the accurate value of $v^i_\theta(x)$ is available. If noise is added to the feedback, it is still reasonable to assume that the noisy feedback is unbiased and bounded.
5.1 Unconstrained Game
In this part, we establish the convergence guarantee of Algorithm 1 for unconstrained games. We define the optimality gap $\epsilon^\theta_k$ and the equilibrium gap $\epsilon^x_{k+1}$ as
$$\epsilon^\theta_k := \mathbb{E}\big[\|\theta_k - \theta^*\|_2^2\big], \qquad \epsilon^x_{k+1} := \mathbb{E}\big[D_\psi\big(x^*(\theta_k), x_{k+1}\big)\big].$$
We track these two gaps as the convergence criteria in the subsequent results.
Theorem 5.4. For Algorithm 1, set the step sizes $\alpha_k = \alpha/(k+1)$, $\beta_k = \beta/(k+1)^{2/3}$, and $\beta^i_k = \lambda^i \cdot \beta_k$ with constants $\alpha > 0$ and $\beta > 0$ satisfying
$$\beta \leq 1/\big(N \cdot H_u^2 \|\lambda\|_2^2\big), \qquad \alpha/\beta^{3/2} \leq 1/\big(12 \cdot H \widetilde{H} H_*\big),$$
where $H_* = \rho_\theta/\rho_x$. Suppose that Assumptions 5.1-5.3 hold; then we have
$$\epsilon^\theta_k = O(k^{-2/3}), \qquad \epsilon^x_k = O(k^{-2/3}).$$
Proof. See Appendix C for detailed proof and a detailed expression of convergence rates.
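As a quick illustration (ours, not the paper's), the schedules in Theorem 5.4 can be generated as follows, with $\alpha$, $\beta$, and the per-agent weights $\lambda^i$ chosen by the user so that the stated inequalities hold.

```python
def theorem_5_4_step_sizes(k, alpha, beta, lam):
    """Step sizes for iteration k in Algorithm 1 under the Theorem 5.4 schedule.

    alpha, beta : positive constants satisfying the conditions of Theorem 5.4
    lam         : list of per-agent weights lambda^i
    """
    alpha_k = alpha / (k + 1)               # upper-level step size
    beta_k = beta / (k + 1) ** (2 / 3)      # lower-level base step size
    beta_ik = [l * beta_k for l in lam]     # per-agent step sizes beta^i_k
    return alpha_k, beta_ik
```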
5.2 Simplex-Constrained Game
In this part, we establish the convergence guarantee of Algorithm 2 for simplex-constrained games. We still define the optimality gap $\epsilon^\theta_k$ as $\epsilon^\theta_k = \mathbb{E}[\|\theta_k - \theta^*\|_2^2]$. Yet, corresponding to (4.6), we track $\tilde{\epsilon}^x_{k+1}$ as a measure of convergence for the strategies of the agents, which is defined as
$$\tilde{\epsilon}^x_{k+1} = \mathbb{E}\big[D_\psi\big(\tilde{x}^*(\theta_k), \tilde{x}_{k+1}\big)\big],$$
where $\tilde{x}^*(\theta_k)$ is given component-wise by $\tilde{x}^{i*}(\theta_k) = (1 - \nu_k) \cdot x^{i*}(\theta_k) + \nu_k \cdot \mathbf{1}_{d_i}/d_i$. We are now ready to give the convergence guarantee of Algorithm 2.
Theorem 5.5. For Algorithm 2, set the step sizes $\alpha_k = \alpha/(k+1)^{1/2}$, $\beta_k = \beta/(k+1)^{2/7}$, $\beta^i_k = \lambda^i \cdot \beta_k$, and $\nu_k = \nu/(k+1)^{4/7}$ with constants $\alpha > 0$ and $\beta > 0$ satisfying
$$\beta \leq 1/\big(6 N H_u^2 \|\lambda\|_2^2\big), \qquad \alpha/\beta^{3/2} \leq 1/\big(7 \cdot \widetilde{H} \widetilde{H}_*\big),$$
where $\widetilde{H}_* = (1 + d)\rho_\theta/\rho_x$. Suppose that Assumptions 5.1-5.3 hold. If there exists some constant $V_* > 0$ such that $\|v_\theta(x^*(\theta))\|_\infty \leq V_*$ for any $\theta \in \Theta$, we then have
$$\epsilon^\theta_k = O(k^{-2/7}), \qquad \tilde{\epsilon}^x_k = O(k^{-2/7}).$$
Proof. See Appendix D for detailed proof and a detailed form of the convergence rates.
6 Extensions to Games with Multiple Equilibria
We then briefly discuss how to apply our algorithms when the lower-level game has multiple equilibria.
Case I: If the function $v_\theta(x) = (v^i_\theta(x))_i$ is strongly monotone in a neighborhood of each equilibrium, then all equilibria are strongly stable within such neighborhoods and hence isolated [38]. In this case, our algorithms can be directly applied, as $\nabla_x v_\theta(x)$ is non-singular in these neighborhoods. It is commonly believed that the most likely equilibrium is the one reached by the game dynamics [52]; our algorithm naturally converges to this one.
Case II: If the function $v_\theta(x) = (v^i_\theta(x))_i$ is monotone but not strongly monotone, then the equilibrium set is a convex and closed region [34]. This case is challenging, as the matrix $\nabla_x v_\theta(x)$ that needs to be inverted becomes singular. Nevertheless, we can simply assume the agents are boundedly rational [42, 1, 33]. Bounded rationality results in a quantal response equilibrium for predicting the agents' responses. We refer the readers to Appendix F for a detailed explanation (with numerical examples for illustration). Here we briefly sum up the key takeaways: (1) it is equivalent to adding a regularizer $\eta^i \cdot (\log(x^i + \epsilon) + 1)$ to $v^i_\theta(x)$ for some $\eta^i > 0$ and $\epsilon \geq 0$; (2) as long as $\eta^i > 0$, the strong stability condition in Assumption 5.1 is satisfied, hence a unique equilibrium exists; (3) as long as $\epsilon > 0$, the Lipschitz continuity condition in Assumption 5.1 is also not violated. In a nutshell, the bounded rationality assumption can simultaneously make our model more realistic and satisfy the assumptions in Section 5.
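As a small illustration of takeaway (1) above, the regularized payoff can be formed as below; the function name and the element-wise treatment of $x^i$ are our own assumptions, and the sign convention follows the text ("add a regularizer to $v^i_\theta(x)$").

```python
import numpy as np

def regularized_v(v_i, x_i, eta_i, eps=1e-3):
    """Bounded-rationality regularization: add eta^i * (log(x^i + eps) + 1) to v^i_theta(x).

    eta_i > 0 and eps >= 0 are the constants named in the text; eps > 0 keeps the
    regularizer Lipschitz continuous near the boundary of the simplex.
    """
    return v_i + eta_i * (np.log(x_i + eps) + 1.0)
```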
7 Numerical Experiments
In this section, we conduct two numerical experiments to test our algorithms. All numerical results reported in this section were produced on a MacBook Pro (15-inch, 2017) with a 2.9 GHz Quad-Core Intel Core i7 CPU.
Pollution Control via Emission Tax. We first consider the oligopoly model introduced in Example 3.1. We assume that, when producing $a^i$ units of output, firm $i$ generates $e^i = d^i a^i$ units of emissions. We consider the following social welfare function [44]
$$W(a) = \int_0^{\sum_{i=1}^n a^i} (p_0 - \gamma \cdot q)\, dq - \sum_{i=1}^n c^i \cdot a^i - \tau \cdot \sum_{i=1}^n d^i a^i,$$
where the first term is the consumers' surplus, the second term is the total production cost, and the third term is the social damage caused by pollution. To maximize social welfare, an authority can impose emission taxes on the productions. Specifically, whenever producing $a^i$ units of output, firm $i$ is charged $\pi^i \cdot a^i$, where $\pi^i$ is specific to firm $i$. In the experiment, we set $n = 100$, $p_0 = 100$, $\gamma = 1$, $\tau = 10$, and $d^i = 10\exp(-c^i) + \epsilon^i$, where $\{c^i\}_{i=1}^{100}$ are evenly spaced between 1 and 2 and $\{\epsilon^i\}_{i=1}^{100}$ are white noises. Under this setting, $c^i$ and $e^i$ are negatively correlated, which is realistic: a firm that hopes to reduce its pollution by upgrading its emission control systems must incur a higher production cost.
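A small sketch of the quantities used in this experiment is given below, assuming the taxed marginal profit takes the form implied by Example 3.1 with the per-unit tax $\pi^i$ subtracted; the vectorized implementation details and function names are our own.

```python
import numpy as np

def marginal_profit(a, c, pi, p0=100.0, gamma=1.0):
    """Taxed marginal profit of each firm in the oligopoly model of Example 3.1.

    a  : vector of outputs a^i
    c  : vector of marginal costs c^i
    pi : vector of per-unit emission taxes pi^i (the incentive parameter theta)
    """
    total = a.sum()
    return p0 - gamma * (a + total) - c - pi

def social_welfare(a, c, d, p0=100.0, gamma=1.0, tau=10.0):
    """Social welfare W(a): consumers' surplus minus production cost minus pollution damage."""
    q = a.sum()
    consumer_surplus = p0 * q - 0.5 * gamma * q ** 2   # integral of (p0 - gamma*t) from 0 to q
    return consumer_surplus - np.dot(c, a) - tau * np.dot(d, a)
```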
Through this small numerical example, we hope to illustrate that the single-loop scheme developed in this paper is indeed much more efficient than previous double-loop algorithms. To this end, we compare our algorithm with two double-loop schemes proposed by Li et al. [25]. In both approaches, the lower-level equilibrium problem is first solved exactly at each iteration; the upper-level gradients are then obtained via automatic differentiation (AD) and implicit differentiation (ID), respectively. To make a fair comparison, the same hyperparameters — including the initial solutions, the learning rates, and the tolerance values for both upper- and lower-level problems — are employed for the tested algorithms (double-loop AD, double-loop ID, and our algorithm).
Table 1 reports statistics related to the computational performance, including the total CPU time, the total number of iterations, and the CPU time per iteration. The results reveal that all tested algorithms take a similar number of iterations to reach the same level of precision. However, the running time per iteration required by our algorithm is significantly lower than that of the two double-loop approaches. Hence, in general, our scheme is more efficient.
Second-Best Congestion Pricing. We then consider the routing game model introduced in Example 3.2. To minimize the total travel delay, an authority acting on behalf of the public sector could impose appropriate tolls on selected roads [49]. This problem of determining tolls is commonly known as the congestion pricing problem. The second-best scheme assumes that only a subset of links can be charged [48]. Specifically, we write $\pi \in \mathbb{R}^{|\mathcal{E}|}_+$ as the tolls imposed on the links and $\mathcal{E}_{\mathrm{toll}}$ as the set of tollable links. We model the total cost for a traveler selecting a path $a \in \mathcal{A}^i$ as
$$c^i_a(\pi, q) = \sum_{e \in \mathcal{E}} \big(t_e(x_e) + \pi_e\big) \cdot \delta^i_{e,a} + \eta \cdot \big(\log(q^i_a) + 1\big),$$
where we add the extra term $\eta \cdot (\log(q^i_a) + 1)$ to characterize the uncertainties in travelers' route choices [20, 5]. It results in a quantal response equilibrium, as discussed in Section 6. We test our algorithm on a real-world traffic network: the Sioux Falls network (see Lawphongpanich and Hearn [24] for its structure). We select 20 links (11, 35, 32, 68, 46, 21, 65, 52, 71, 74, 33, 64, 69, 14, 18, 39, 57, 48, 15, 51) for imposing congestion tolls.
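To illustrate the cost computation above, here is a sketch that maps path-choice proportions to edge flows and evaluates the tolled, entropy-regularized path costs for a single origin-destination pair; the incidence-matrix representation and function names are our modeling choices, not the paper's.

```python
import numpy as np

def path_costs(q, rho, delta, toll, travel_time, eta):
    """Tolled, entropy-regularized path costs c^i_a(pi, q) for one O-D pair i.

    q           : path-choice proportions q^i_a, shape (n_paths,)
    rho         : demand rho^i of the O-D pair
    delta       : edge-path incidence matrix, shape (n_edges, n_paths); delta[e, a] = 1 if e is on path a
    toll        : per-edge tolls pi_e, shape (n_edges,)
    travel_time : callable mapping edge flows x_e to edge costs t_e(x_e)
    eta         : quantal-response dispersion parameter
    """
    x = delta @ (rho * q)                    # edge flows x_e contributed by this O-D pair
    edge_cost = travel_time(x) + toll        # t_e(x_e) + pi_e
    return delta.T @ edge_cost + eta * (np.log(q) + 1.0)
```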
We run Algorithm 2 to solve the problem and compare four different settings for the step sizes (summarized in the sketch after this paragraph). Setting A: $\alpha_k = \alpha/(k+1)^{1/2}$, $\beta_k = \beta/(k+1)^{2/7}$, and $\nu_k = \nu/(k+1)^{4/7}$; it ensures convergence according to Theorem 5.5. Setting B: $\alpha_k = \alpha/(k+1)^{1/2}$, $\beta_k = \beta/(k+1)$, and $\nu_k = \nu/(k+1)^{4/7}$; it only increases the decay rate of the lower-level step size. Setting C: $\alpha_k = \alpha/(k+1)$, $\beta_k = \beta/(k+1)$, and $\nu_k = \nu/(k+1)$; all step sizes decrease at the classic $O(1/k)$ rate. Setting D: $\alpha_k = \alpha/(k+1)^{1/2}$, $\beta_k = \beta/(k+1)^{2/7}$, and $\nu_k = 0$; it does not adopt the mixing step proposed in our paper. We add white noise to the costs received by the agents based on Gaussian distributions and run our algorithm under each setting 10 times. The mean values of the upper-level optimality gaps and the lower-level equilibrium gaps are reported in Figure 1 (the shaded areas show the "mean ± std" range over all sampled trajectories).
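For reference, the four step-size settings can be encoded as follows; the exponents are a direct transcription of Settings A-D above, while the dictionary layout and function name are ours.

```python
# Exponents (p_alpha, p_beta, p_nu) such that alpha_k = alpha/(k+1)**p_alpha, etc.
STEP_SIZE_SETTINGS = {
    "A": (1/2, 2/7, 4/7),   # schedule of Theorem 5.5 (provably convergent)
    "B": (1/2, 1.0, 4/7),   # faster-decaying lower-level step size
    "C": (1.0, 1.0, 1.0),   # classic O(1/k) decay for all step sizes
    "D": (1/2, 2/7, None),  # no mixing step (nu_k = 0)
}

def step_sizes(setting, k, alpha, beta, nu):
    p_a, p_b, p_n = STEP_SIZE_SETTINGS[setting]
    alpha_k = alpha / (k + 1) ** p_a
    beta_k = beta / (k + 1) ** p_b
    nu_k = 0.0 if p_n is None else nu / (k + 1) ** p_n
    return alpha_k, beta_k, nu_k
```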
Below we summarize a few observations. First, without the mixing step proposed in our paper, the iterates indeed hit the boundary too early, and the algorithm fails after just a few iterations. This verifies our earlier claim that directly extending previous methods [21, 9] designed for bilevel optimization problems to MPECs is problematic. Second, the step sizes given by Theorem 5.5 ensure the fastest and most stable convergence.
Acknowledgments and Disclosure of Funding
Mingyi Hong’s research is funded by NSF under the award numbers CIF-1910385 and CMMI-1727757. Yu (Marco) Nie’s research is funded by NSF under the award number CMMI-2225087. Zhaoran Wang’s research is funded by NSF under the award number ECCS-2048075. | 1. What is the focus and contribution of the paper regarding incentive design processes?
2. What are the strengths of the proposed approach, particularly in its ability to converge to optimal policies?
3. What are the weaknesses of the paper, especially regarding its comparison to prior works and empirical performance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
In large scale incentive design processes, the classical approach is one of estimating behavior, adjusting incentives, evaluating the outcome, repeated until a fixed point is reached. At each step, you may need to wait long enough for behavior to stabilize into an equilibrium or to see enough data for the econometric task at hand.
This work considers an alternative approach in which the behavior gradient is used to estimate how the behavior will change, which allows the algorithm to adjust incentives faster in response to how the agents react to them, effectively adjusting both in continuous time to reach equilibrium (i.e., more of an optimal-control-based approach).
The approach is implemented on a few problems: second-best tolling, and pollution taxes, both showing empirical convergence in line with the theoretical convergence results.
Strengths And Weaknesses
The work seems to be a substantial original contribution in showing that simultaneous design and react can be used to quickly converge to optimal policies, across a broad range of problems.
The exposition does a good job of clearly illustrating the robustness of the techniques.
The simultaneous design & move approach is advocated as an improvement over waiting for convergence to equilibrium at each step. While this claim makes intuitive sense, the argument would be strengthened by showing an explicit theoretical or empirical gain in convergence relative to the bilevel approaches.
Questions
Can you discuss the improvement in theoretical convergence or empirical performance relative to the best known prior approaches?
Limitations
Yes, the authors have adequately addressed limitations. |
NIPS | Title
Inducing Equilibria via Incentives: Simultaneous Design-and-Play Ensures Global Convergence
Abstract
To regulate a social system comprised of self-interested agents, economic incentives are often required to induce a desirable outcome. This incentive design problem naturally possesses a bilevel structure, in which a designer modifies the rewards of the agents with incentives while anticipating the response of the agents, who play a non-cooperative game that converges to an equilibrium. The existing bilevel optimization algorithms raise a dilemma when applied to this problem: anticipating how incentives affect the agents at equilibrium requires solving the equilibrium problem repeatedly, which is computationally inefficient; bypassing the timeconsuming step of equilibrium-finding can reduce the computational cost, but may lead the designer to a sub-optimal solution. To address such a dilemma, we propose a method that tackles the designer’s and agents’ problems simultaneously in a single loop. Specifically, at each iteration, both the designer and the agents only move one step. Nevertheless, we allow the designer to gradually learn the overall influence of the incentives on the agents, which guarantees optimality after convergence. The convergence rate of the proposed scheme is also established for a broad class of games.
1 Introduction
A common thread in human history is how to "properly" regulate a social system comprised of self-interested individuals. In a laissez-faire economy, for example, the competitive market itself is the primary regulatory mechanism [47, 16]. However, a laissez-faire economy may falter due to the existence of significant "externalities" [8, 18], which may arise wherever the self-interested agents do not bear the external cost of their behaviors in the entirety. The right response, many argue, is to introduce corrective policies in the form of economic incentives (e.g., tolls, taxes, and subsidies) [32]. By modifying the rewards of the agents, these incentives can encourage (discourage) the agents to engage in activities that create positive (negative) side effects for the society, and thus guide the self-interests of the agents towards a socially desirable end. For example, carbon taxes can be levied on carbon emissions to protect the environment during the production of goods and services [35].
*Equal contribution.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Surge pricing has been widely used to boost supply and dampen demand in volatile ride-hail markets [37]. Lately, subsidies and penalties were both introduced to overcome vaccine hesitancy in the world’s hard-fought battle against the COVID-19 pandemic.
The goal of this paper is to develop a provably efficient method for guiding the agents in a noncooperative game towards a socially desirable outcome — e.g., the one that maximizes the social welfare — by modifying their payoffs with incentives. The resulting problem may be naturally interpreted as a Stackelberg game [50] in which the “incentive designer" is the leader while the agents being regulated are the followers. Hence, it naturally possesses a bilevel structure [3]: at the upper level, the "designer" optimizes the incentives by anticipating and regulating the best response of the agents, who play a non-cooperative game at the lower level. As the lower-level agents pursue their self-interests freely, their best response can be predicted by the Nash equilibrium [39], which dictates no agent can do better by unilaterally changing their strategy. Accordingly, the incentive design problem is a mathematical program with equilibrium constraints (MPEC) [30].
In the optimization literature, MPECs are well-known for their intractability [10]. Specifically, even getting a first-order derivative through their bilevel structure is a challenge. In the incentive design problem, for example, to calculate the gradient of the designer’s objective at equilibrium, which provides a principled direction for the designer to update the incentives, one must anticipate how the equilibrium is affected by the changes [17]. This is usually achieved by performing a sensitivity analysis, which in turn requires differentiation through the lower-level equilibrium problem, either implicitly or explicitly [25]. No matter how the sensitivity analysis is carried out, the equilibrium problem must be solved before updating the incentives. The resulting algorithm thus admits a double loop structure: in the outer loop, the designer iteratively moves along the gradient; but to find the gradient, it must allow the lower level game dynamics to run its course to arrive at the equilibrium given the current incentives.
Because of the inherent inefficiency of the double-loop structure, many heuristic methods have also been developed for bilevel programs in machine learning [27, 29, 14]. When applied to the incentive design problem, these methods assume that the designer does not solve the equilibrium exactly to evaluate the gradient. Instead, at each iteration, the game is allowed to run just a few rounds, enough for the designer to obtain a reasonable approximation of the gradient. Although such a method promises to reduce the computational cost significantly at each iteration, it may never converge to the same optimal solution obtained without the approximation.
Contribution. In a nutshell, correctly anticipating how incentives affect the agents at equilibrium requires solving the equilibrium problem repeatedly, which is computationally inefficient. On the other hand, simply bypassing the time-consuming step of equilibrium-finding may lead the designer to a sub-optimal solution. This dilemma prompts the following question that motivates this study: can we obtain the optimal solution to an incentive design problem without repeatedly solving the equilibrium problem?
In this paper, we propose an efficient principled method that tackles the designer’s problem and agents’ problem simultaneously in a single loop. At the lower level, we use the mirror descent method [40] to model the process through which the agents move towards equilibrium. At the upper level, we use the gradient descent method to update the incentives towards optimality. At each iteration, both the designer and the agents only move one step based on the first-order information. However, as discussed before, the upper gradient relies on the corresponding lower equilibrium, which is not available in the single-loop update. Hence, we propose to use the implicit differentiation formula—with equilibrium strategy replaced by the current strategy—to estimate the upper gradient, which might be biased at the beginning. Nevertheless, we prove that if we improve the lower-level solution with larger step sizes, the upper-level and lower-level problems may converge simultaneously at a fast rate. The proposed scheme hence guarantees optimality because it can anticipate the overall influence of the incentives on the agents eventually after convergence.
Organization. In Section 2, we discuss related work. In Section 3, we provide the mathematical formulation of the incentive design problem. In Section 4, we design algorithms for solving the problem. In Section 5, we establish conditions under which the proposed scheme globally converges to the optimal solution and analyze the convergence rate. The convergence analysis is restricted to games with a unique equilibrium. In Section 6, we discuss how to apply our algorithms to games with multiple equilibria. Eventually, we conduct experiments to test our algorithms in Section 7.
Notation. We denote $\langle \cdot, \cdot \rangle$ as the inner product in vector spaces. For a vector $a = (a^i)$, we denote $a^{-i} = (a^j)_{j \neq i}$. For a finite set $\mathcal{X} \subset \mathbb{R}^n$, we denote $\Delta(\mathcal{X}) = \{\pi \in \mathbb{R}^n_+ : \sum_{x_i \in \mathcal{X}} \pi_{x_i} = 1\}$. For any vector norm $\|\cdot\|$, we denote $\|\cdot\|_* = \sup_{\|z\| \leq 1} \langle \cdot, z \rangle$ as its dual norm. We refer readers to Appendix A for a collection of frequently used problem-specific notations.
2 Related work
The incentive design problem studied in this paper is a special case of mathematical programs with equilibrium constraints (MPEC) [19], a class of optimization problems constrained by equilibrium conditions. MPEC is closely related to bilevel programs [10], which bind two mathematical programs together by treating one program as part of the constraints for the other.
Bilevel Programming. In the optimization literature, bilevel programming was first introduced to tackle resource allocation problems [7] and has since found applications in such diverse topics as revenue management, network design, traffic control, and energy systems. In the past decade, researchers have discovered numerous applications of bilevel programming in machine learning, including meta-learning (ML) [14], adversarial learning [22], hyperparameter optimization, [31] and neural architecture search [27]. These newly found bilevel programs in ML are often solved by gradient descent methods, which require differentiating through the (usually unconstrained) lower-level optimization problem [28]. The differentiation can be carried out either implicitly on the optimality conditions as in the conventional sensitivity analysis [see e.g., 2, 43, 4], or explicitly by unrolling the numerical procedure used to solve the lower-level problem [see e.g., 31, 15]. In the explicit approach, one may "partially" unroll the solution procedure (i.e., stop after just a few rounds, or even only one round) to reduce the computational cost. Although this popular heuristic has delivered satisfactory performance on many practical tasks [29, 36, 14, 27], it cannot guarantee optimality for bilevel programs under the general setting, as it cannot derive the accurate upper-level gradient at each iteration [53].
MPEC. Unlike bilevel programs, MPEC is relatively under-explored in the ML literature so far. Recently, Li et al. [25] extended the explicit differentiation method for bilevel programs to MPECs. Their algorithm unrolls an iterative projection algorithm for solving the lower-level problem, which is formulated as a variational inequality (VI) problem. Leveraging the recent advance in differentiable programming [2], they embedded each projection iteration as a differentiable layer in a computational graph, and accordingly, transform the explicit differentiation as standard backpropagation through the graph. The algorithm proposed by Li et al. [26] has a similar overall structure, but theirs casts the lower-level solution process as the imitative logit dynamics [6] drawn from the evolutionary game theory, which can be more efficiently unrolled. Although backpropagation is efficient, constructing and storing such a graph — with potentially a large number of projection layers needed to find a good solution to the lower-level problem — is still demanding. To reduce this burden, partially unrolling the iterative projection algorithm is a solution. Yet, it still cannot guarantee optimality for MPECs due to the same reason as for bilevel programs.
The simultaneous design-and-play approach is proposed to address this dilemma. Our approach follows the algorithm of Hong et al. [21] and Chen et al. [9], which solves bilevel programs via singleloop update. Importantly, they solve both the upper- and the lower-level problem using a gradient descent algorithm and establish the relationship between the convergence rate of the single-loop algorithm and the step size used in gradient descent. However, their algorithms are limited to the cases where the lower-level optimization problem is unconstrained. Our work extends these single-loop algorithms to MPECs that have an equilibrium problem at the lower level. We choose mirror descent as the solution method to the lower-level problem because of its broad applicability to optimization problems with constraints [40] and generality in the behavioral interpretation of games [34, 23]. We show that the convergence of the proposed simultaneous design-and-play approach relies on the setting of the step size for both the upper- and lower-level updates, a finding that echos the key result in [21]. We first give the convergence rate under mirror descent and the unconstrained assumption and then extend the result to the constrained case. For the latter, we show that convergence cannot be guaranteed if the lower-level solution gets too close to the boundary early in the simultaneous solution process. To avoid this trap, the standard mirror descent method is revised to carefully steer the lower-level solution away from the boundary.
3 Problem Formulation
We study incentive design in both atomic games [39] and nonatomic games [45], classified depending on whether the set of agents is endowed with an atomic or a nonatomic measure. In social systems, both types of games can be useful, although the application context varies. Atomic games are typically employed when each agent has a non-trivial influence on the rewards of other agents. In a nonatomic game, on the contrary, a single agent’s influence is negligible and the reward could only be affected by the collective behavior of agents.
Atomic Game. Consider a game played by a finite set of agents $\mathcal{N} = \{1, \ldots, n\}$, where each agent $i \in \mathcal{N}$ selects a strategy $a^i \in \mathcal{A}^i \subseteq \mathbb{R}^{d_i}$ to maximize the reward received, which is determined by a continuously differentiable function $u^i : \mathcal{A} = \prod_{i \in \mathcal{N}} \mathcal{A}^i \to \mathbb{R}$. Formally, a joint strategy $a^* \in \mathcal{A}$ is a Nash equilibrium if
$$u^i(a^{i*}, a^{-i*}) \geq u^i(a^i, a^{-i*}), \quad \forall\, a^i \in \mathcal{A}^i,\ \forall\, i \in \mathcal{N}.$$
Suppose that for all $i \in \mathcal{N}$, the strategy set $\mathcal{A}^i$ is closed and convex and the reward function $u^i$ is concave in $a^i$; then $a^* \in \mathcal{A}$ is a Nash equilibrium if and only if there exist $\lambda^1, \ldots, \lambda^n > 0$ such that [46]
$$\sum_{i=1}^n \lambda^i \cdot \big\langle \nabla_{a^i} u^i(a^*), a^i - a^{i*} \big\rangle \leq 0, \quad \text{for all } a \in \mathcal{A}. \qquad (3.1)$$
Example 3.1 (Oligopoly model). In an oligopoly model, there is a finite set $\mathcal{N} = \{1, \ldots, n\}$ of firms, each of which supplies the market with a quantity $a^i$ ($a^i \geq 0$) of goods. Under this setting, we have $\mathcal{A} = \mathbb{R}^n_+$. The good is then priced as $p(q) = p_0 - \gamma \cdot q$, where $p_0, \gamma > 0$ and $q = \sum_{j \in \mathcal{N}} a^j$ is the total output. The profit and the marginal profit of firm $i$ are then given by
$$u^i(a) = a^i \cdot \Big(p_0 - \gamma \cdot \sum_{j \in \mathcal{N}} a^j\Big) - c^i \cdot a^i, \qquad \nabla_{a^i} u^i(a) = p_0 - \gamma \cdot \Big(a^i + \sum_{j \in \mathcal{N}} a^j\Big) - c^i,$$
respectively, where $c^i$ is the constant marginal cost² for firm $i$.
Nonatomic Game. Consider a game played by a continuous set of agents, which can be divided into a finite set of classes $\mathcal{N} = \{1, \ldots, n\}$. We assume that each $i \in \mathcal{N}$ represents a class of infinitesimal and homogeneous agents sharing the finite strategy set $\mathcal{A}^i$ with $|\mathcal{A}^i| = d_i$. The mass distribution for class $i$ is defined as a vector $q^i \in \Delta(\mathcal{A}^i)$ that gives the proportion of agents using each strategy. Let the cost of an agent in class $i$ selecting a strategy $a \in \mathcal{A}^i$ given $q = (q^1, \ldots, q^n)$ be $c^i_a(q)$. Formally, a joint mass distribution $q^* \in \Delta(\mathcal{A}) = \prod_{i \in \mathcal{N}} \Delta(\mathcal{A}^i)$ is a Nash equilibrium, also known as a Wardrop equilibrium [51], if for all $i \in \mathcal{N}$, there exists $b^i$ such that
$$c^i_a(q^*) = b^i \ \text{ if } q^{i*}_a > 0, \qquad c^i_a(q^*) \geq b^i \ \text{ if } q^{i*}_a = 0.$$
The following result extends the VI formulation to the Nash equilibrium of a nonatomic game: denote $c^i(q) = (c^i_a(q))_{a \in \mathcal{A}^i}$; then $q^*$ is a Nash equilibrium if and only if [11]
$$\sum_{i \in \mathcal{N}} \lambda^i \cdot \big\langle c^i(q^*), q^i - q^{i*} \big\rangle \geq 0, \quad \text{for all } q \in \Delta(\mathcal{A}). \qquad (3.2)$$
Example 3.2 (Routing game). Consider a set of agents traveling from source nodes to sink nodes in a directed graph with nodes $\mathcal{V}$ and edges $\mathcal{E}$. Denote $\mathcal{N} \subseteq \mathcal{V} \times \mathcal{V}$ as the set of source-sink pairs, $\mathcal{A}^i \subseteq 2^{\mathcal{E}}$ as the set of paths connecting $i \in \mathcal{N}$, and $\mathcal{E}^i_a \subseteq \mathcal{E}$ as the set of all edges on the path $a \in \mathcal{A}^i$. Suppose that each source-sink pair $i \in \mathcal{N}$ is associated with $\rho^i$ nonatomic agents aiming to choose a route from $\mathcal{A}^i$ to minimize the total cost incurred. Let $q^i_a$ be the proportion of travelers of pair $i$ using the path $a \in \mathcal{A}^i$ (so that $q^i \in \Delta(\mathcal{A}^i)$), $x_e \in \mathbb{R}_+$ be the number of travelers using the edge $e$, and $t_e(x_e) \in \mathbb{R}_+$ be the cost for using edge $e$. Then we have $x_e = \sum_{i \in \mathcal{N}} \sum_{a \in \mathcal{A}^i} \rho^i \cdot q^i_a \cdot \delta^i_{e,a}$, where $\delta^i_{e,a}$ equals 1 if $e \in \mathcal{E}^i_a$ and 0 otherwise. The total cost for a traveler selecting a path $a \in \mathcal{A}^i$ will then be $c^i_a(q) = \sum_{e \in \mathcal{E}} t_e(x_e) \cdot \delta^i_{e,a}$.
²Throughout this paper, we use the term “reward” to describe the scenario where the agents aim to maximize $u^i$, and use “cost” when the agents aim to do the opposite.
Incentive Design. Despite the difference, we see that an equilibrium of both atomic and nonatomic games can be formulated as a solution to a corresponding VI problem of the form
$$\sum_{i \in \mathcal{N}} \lambda^i \cdot \big\langle v^i(x^*), x^i - x^{i*} \big\rangle \leq 0, \quad \text{for all } x \in \mathcal{X} = \prod_{i \in \mathcal{N}} \mathcal{X}^i, \qquad (3.3)$$
where $v^i$ and $\mathcal{X}^i$ denote different terms in the two types of games. Suppose that there exists an incentive designer aiming to induce a desired equilibrium. To this end, the designer can add incentives $\theta \in \Theta \subseteq \mathbb{R}^d$, which are assumed to enter the reward/cost functions and thus lead to a parameterized $v^i_\theta(x)$. We assume that the designer's objective is determined by a function $f : \Theta \times \mathcal{X} \to \mathbb{R}$. Denote $S(\theta)$ as the solution set to (3.3). We then obtain the uniform formulation of the incentive design problem for both atomic games and nonatomic games:
$$\min_{\theta \in \Theta} \ f^*(\theta) = f(\theta, x^*), \quad \text{s.t. } x^* \in S(\theta). \qquad (3.4)$$
If the equilibrium problem admits multiple solutions, the agents may converge to different ones, and it would be difficult to determine which one better predicts the behaviors of the agents without additional information. In this paper, we first consider the case where the game admits a unique equilibrium; sufficient conditions under which the game admits a unique equilibrium are provided later. We then consider the non-unique case and show that our algorithms remain applicable after adding an appropriate regularizer to the cost function.
Stochastic Environment. In the aforementioned settings, $v^i_\theta(x)$ is a deterministic function. Although most MPEC algorithms in the optimization literature follow this deterministic setting, we hope our algorithm can handle more realistic scenarios. Specifically, in the real world, the environment could be stochastic if some environment parameters fluctuate from day to day. In a traffic system, for example, both bad weather and special events may affect the road condition, and hence the travel time $v^i_\theta(x)$ experienced by the drivers. We expect our algorithm to still work in the face of such stochasticity. To this end, we assume that $v^i_\theta(x)$ represents the expected value of the cost function. On each day, however, the agents can only receive noisy feedback $\hat{v}^i_\theta$ as an estimate. In the next section, we develop algorithms based on such noisy feedback.
4 Algorithm
We propose to update $\theta$ and $x$ simultaneously to improve the computational efficiency. The game dynamics at the lower level is modeled using the mirror descent method. Specifically, at stage $k$, given $\theta_k$ and $x_k$, each agent $i$ first receives $v^i_{\theta_k}(x_k)$ as feedback. After receiving the feedback, the agents update their strategies via
$$x^i_{k+1} = \mathop{\mathrm{argmax}}_{x^i \in \mathcal{X}^i} \Big\{ \big\langle v^i_{\theta_k}(x_k), x^i \big\rangle - 1/\beta^i_k \cdot D_{\psi^i}(x^i_k, x^i) \Big\}, \qquad (4.1)$$
where $D_{\psi^i}(x^i_k, x^i)$ is the Bregman divergence induced by a strongly convex function $\psi^i$. The exact value of $\nabla f^*(\theta_k)$, the gradient of the designer's objective function, equals
$$\nabla_\theta f\big(\theta_k, x^*(\theta_k)\big) + \big[\nabla_\theta x^*(\theta_k)\big]^\top \cdot \nabla_x f\big(\theta_k, x^*(\theta_k)\big),$$
which requires the exact lower-level equilibrium $x^*(\theta_k)$. However, at stage $k$, we only have the current strategy $x_k$. Therefore, we also have to establish estimators of $\nabla x^*(\theta_k)$ and $\nabla f^*(\theta_k)$ using $x_k$, the forms of which will be specified later.
Remark 4.1. The standard gradient descent method is double-loop because at each $\theta_k$ it involves an inner loop for solving for the exact value of $x^*(\theta_k)$ and then calculating the exact gradient.
4.1 Unconstrained Game
We first consider unconstrained games with $\mathcal{X}^i = \mathbb{R}^{d_i}$ for all $i \in \mathcal{N}$. We select $\psi^i(\cdot)$ as a smooth function, i.e., there exists a constant $H \geq 1$ such that for all $i \in \mathcal{N}$ and $x^i, x^{i\prime} \in \mathcal{X}^i$,
$$\big\|\nabla \psi^i(x^i) - \nabla \psi^i(x^{i\prime})\big\|_2 \leq H \cdot \|x^i - x^{i\prime}\|_2. \qquad (4.2)$$
Examples of $\psi^i$ satisfying this assumption include (but are not limited to) $\psi^i(x^i) = (x^i)^\top Q^i x^i / 2$, where $Q^i \in \mathbb{R}^{d_i \times d_i}$ is a positive definite matrix. It can be directly checked that we can set $H = \max_{i \in \mathcal{N}} \sigma^i$, where $\sigma^i$ is the largest singular value of $Q^i$. In this case, the corresponding Bregman divergence becomes $D_{\psi^i}(x^i, x^{i\prime}) = (x^i - x^{i\prime})^\top Q^i (x^i - x^{i\prime})/2$, which is known as the squared Mahalanobis distance. Before laying out the algorithm, we first give the following lemma characterizing $\nabla_\theta x^*(\theta)$.
Lemma 4.2. When $\mathcal{X}^i = \mathbb{R}^{d_i}$ and $\nabla_x v_\theta(x^*(\theta))$ is non-singular, it holds that
$$\nabla_\theta x^*(\theta) = -\big[\nabla_x v_\theta\big(x^*(\theta)\big)\big]^{-1} \cdot \nabla_\theta v_\theta\big(x^*(\theta)\big).$$
Proof. See Appendix B.2 for detailed proof.
For any given $\theta \in \Theta$ and $x \in \mathcal{X}$, we define
$$\widetilde{\nabla} f(\theta, x) = \nabla_\theta f(\theta, x) - \big[\nabla_\theta v_\theta(x)\big]^\top \cdot \big[\nabla_x v_\theta(x)\big]^{-1} \cdot \nabla_x f(\theta, x). \qquad (4.3)$$
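As an implementation note (ours, not the paper's), the surrogate gradient in (4.3) is usually better computed with a linear solve than by forming the inverse of $\nabla_x v_\theta(x)$ explicitly; a minimal sketch under the stated shape assumptions is given below.

```python
import numpy as np

def surrogate_gradient(grad_f_theta, grad_f_x, grad_v_theta, grad_v_x):
    """Surrogate gradient (4.3): grad_theta f - (grad_theta v)^T (grad_x v)^{-1} grad_x f.

    grad_f_theta : shape (d_theta,)
    grad_f_x     : shape (d_x,)
    grad_v_theta : Jacobian of v w.r.t. theta, shape (d_x, d_theta)
    grad_v_x     : Jacobian of v w.r.t. x, shape (d_x, d_x), assumed non-singular
    """
    # Solve (grad_v_x) y = grad_f_x instead of inverting grad_v_x.
    y = np.linalg.solve(grad_v_x, grad_f_x)
    return grad_f_theta - grad_v_theta.T @ y
```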
Although we cannot obtain the exact value of $\nabla f^*(\theta_k)$, we may use $\widetilde{\nabla} f(\theta_k, x_k)$ as a surrogate and update $\theta_k$ based on it instead. We are now ready to present the bilevel incentive design algorithm for unconstrained games (see Algorithm 1).
Algorithm 1 Bilevel incentive design for unconstrained games
Input: $\theta_0 \in \Theta$, $x_0 \in \mathcal{X} = \mathbb{R}^d$, where $d = \sum_{i \in \mathcal{N}} d_i$, and a sequence of step sizes $(\alpha_k, \{\beta^i_k\}_{i \in \mathcal{N}})$.
For $k = 0, 1, \ldots$ do:
Update strategy profile
$$x^i_{k+1} = \mathop{\mathrm{argmax}}_{x^i \in \mathcal{X}^i} \Big\{ \langle \hat{v}^i_k, x^i \rangle - 1/\beta^i_k \cdot D_{\psi^i}(x^i_k, x^i) \Big\}, \qquad (4.4)$$
for all $i \in \mathcal{N}$, where $\hat{v}^i_k$ is an estimator of $v^i_{\theta_k}(x_k)$.
Update incentive parameter
$$\theta_{k+1} = \mathop{\mathrm{argmax}}_{\theta \in \Theta} \Big\{ \langle \widetilde{\nabla} f(\theta_k, x_{k+1}), \theta \rangle - 1/\alpha_k \cdot \|\theta - \theta_k\|_2^2 \Big\}.$$
EndFor
Output: Last-iteration incentive parameter $\theta_{k+1}$ and strategy profile $x_{k+1}$.
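Below is a minimal single-loop sketch of Algorithm 1 for the unconstrained, deterministic-feedback case, using the squared Euclidean distance as the Bregman divergence so that the mirror step (4.4) reduces to a gradient step. All callables are user-supplied placeholders, the clipping projection onto $\Theta$ is purely illustrative, and the step-size schedule follows Theorem 5.4.

```python
import numpy as np

def run_algorithm1(theta, x, v, grad_f_theta, grad_f_x, grad_v_theta, grad_v_x,
                   alpha, beta, lam, K):
    """Single-loop bilevel incentive design (Algorithm 1), Euclidean case.

    v(theta, x)            : concatenated payoff gradients (v^i_theta(x))_i, shape (d_x,)
    grad_v_theta, grad_v_x : Jacobian callables returning shapes (d_x, d_theta) and (d_x, d_x)
    lam                    : per-agent weights lambda^i broadcast to shape (d_x,)
    """
    for k in range(K):
        alpha_k = alpha / (k + 1)              # Theorem 5.4 schedules
        beta_k = beta / (k + 1) ** (2 / 3)
        # Lower-level step (4.4) with D_psi = (1/2) squared Euclidean distance.
        x = x + lam * beta_k * v(theta, x)
        # Upper-level step with the surrogate gradient (4.3), via a linear solve.
        y = np.linalg.solve(grad_v_x(theta, x), grad_f_x(theta, x))
        g = grad_f_theta(theta, x) - grad_v_theta(theta, x).T @ y
        theta = np.clip(theta - alpha_k * g, 0.0, None)   # placeholder projection onto Theta
    return theta, x
```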
In Algorithm 1, if $\theta_k$ and $x_k$ converge to fixed points $\bar{\theta}$ and $\bar{x}$, respectively, then $\bar{x} = x^*(\bar{\theta})$ is expected to hold, so that $\widetilde{\nabla} f(\bar{\theta}, \bar{x}) = \nabla f^*(\bar{\theta})$ and the optimality of $\bar{\theta}$ is guaranteed. In other words, the algorithm finds the optimal solution whenever it converges; the difficult part is how to design appropriate step sizes that ensure convergence. We provide such conditions in Section 5.1.
2. What are the strengths of the proposed algorithm, particularly in its simultaneous design-and-play approach?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works?
4. How does the reviewer assess the clarity and presentation of the paper's content?
5. Are there any concerns or suggestions regarding the novel approach, such as comparing it to a "brute force" flattened approach?
6. Do you have any questions about the limitations of the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors study how to compute the Stackelberg equilibrium in a Stackelberg (bilevel) game where the first mover is a social designer and the second movers are agents that simultaneously best respond to the designed incentives. Different from existing methods that require solving the agents' level equilibrium repeatedly for each update on the designer's strategy, the authors propose a simultaneous design-and-play algorithm that updates strategies on both levels in each update step. The authors show convergence of the new algorithm mainly under conditions that guarantee the uniqueness of the equilibrium and discussed the multiple equilibria case briefly.
Strengths And Weaknesses
Strengths:
Clear problem statement
Clear presentation of the algorithms and the theoretical guarantees of the new algorithms
Weaknesses:
The performance bounds of related works or baselines are not clearly given, making it hard to assess the efficiency improvement of the new algorithms. Personally, I would like to see a table showing the common assumptions used in this paper and other related works, and whether each of them guarantees convergence and at what convergence rate; adding other comparison metrics, such as the order of floating-point operations, would be even better.
It would be better to have a table in the appendix showing the notations used in the paper
Questions
So a natural thought on this new approach is flattening the bilevel game into a single-level game and treating the designer as another agent in the network that is connected to all other agents. If we use this flattened view to solve the variational inequalities directly, can it still work under your assumptions, and what are the main differences between your approach and this "brute force" flattened approach?
As discussed in the weaknesses, is it possible to add quantitative comparisons between your algorithms and other bilevel optimization and MPEC works?
Limitations
As discussed in the weaknesses, the paper seems to be a bit short of information in its quantitative comparisons to the related works. |
NIPS | Title
Inducing Equilibria via Incentives: Simultaneous Design-and-Play Ensures Global Convergence
Abstract
To regulate a social system comprised of self-interested agents, economic incentives are often required to induce a desirable outcome. This incentive design problem naturally possesses a bilevel structure, in which a designer modifies the rewards of the agents with incentives while anticipating the response of the agents, who play a non-cooperative game that converges to an equilibrium. The existing bilevel optimization algorithms raise a dilemma when applied to this problem: anticipating how incentives affect the agents at equilibrium requires solving the equilibrium problem repeatedly, which is computationally inefficient; bypassing the timeconsuming step of equilibrium-finding can reduce the computational cost, but may lead the designer to a sub-optimal solution. To address such a dilemma, we propose a method that tackles the designer’s and agents’ problems simultaneously in a single loop. Specifically, at each iteration, both the designer and the agents only move one step. Nevertheless, we allow the designer to gradually learn the overall influence of the incentives on the agents, which guarantees optimality after convergence. The convergence rate of the proposed scheme is also established for a broad class of games.
1 Introduction
A common thread in human history is how to "properly" regulate a social system comprised of self-interested individuals. In a laissez-faire economy, for example, the competitive market itself is the primary regulatory mechanism [47, 16]. However, a laissez-faire economy may falter due to the existence of significant "externalities" [8, 18], which may arise wherever the self-interested agents do not bear the external cost of their behaviors in the entirety. The right response, many argue, is to introduce corrective policies in the form of economic incentives (e.g., tolls, taxes, and subsidies) [32]. By modifying the rewards of the agents, these incentives can encourage (discourage) the agents to engage in activities that create positive (negative) side effects for the society, and thus guide the self-interests of the agents towards a socially desirable end. For example, carbon taxes can be levied on carbon emissions to protect the environment during the production of goods and services [35].
*Equal contribution.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Surge pricing has been widely used to boost supply and dampen demand in volatile ride-hail markets [37]. Lately, subsidies and penalties were both introduced to overcome vaccine hesitancy in the world’s hard-fought battle against the COVID-19 pandemic.
The goal of this paper is to develop a provably efficient method for guiding the agents in a noncooperative game towards a socially desirable outcome — e.g., the one that maximizes the social welfare — by modifying their payoffs with incentives. The resulting problem may be naturally interpreted as a Stackelberg game [50] in which the “incentive designer" is the leader while the agents being regulated are the followers. Hence, it naturally possesses a bilevel structure [3]: at the upper level, the "designer" optimizes the incentives by anticipating and regulating the best response of the agents, who play a non-cooperative game at the lower level. As the lower-level agents pursue their self-interests freely, their best response can be predicted by the Nash equilibrium [39], which dictates no agent can do better by unilaterally changing their strategy. Accordingly, the incentive design problem is a mathematical program with equilibrium constraints (MPEC) [30].
In the optimization literature, MPECs are well-known for their intractability [10]. Specifically, even getting a first-order derivative through their bilevel structure is a challenge. In the incentive design problem, for example, to calculate the gradient of the designer’s objective at equilibrium, which provides a principled direction for the designer to update the incentives, one must anticipate how the equilibrium is affected by the changes [17]. This is usually achieved by performing a sensitivity analysis, which in turn requires differentiation through the lower-level equilibrium problem, either implicitly or explicitly [25]. No matter how the sensitivity analysis is carried out, the equilibrium problem must be solved before updating the incentives. The resulting algorithm thus admits a double loop structure: in the outer loop, the designer iteratively moves along the gradient; but to find the gradient, it must allow the lower level game dynamics to run its course to arrive at the equilibrium given the current incentives.
Because of the inherent inefficiency of the double-loop structure, many heuristic methods have also been developed for bilevel programs in machine learning [27, 29, 14]. When applied to the incentive design problem, these methods assume that the designer does not solve the equilibrium exactly to evaluate the gradient. Instead, at each iteration, the game is allowed to run just a few rounds, enough for the designer to obtain a reasonable approximation of the gradient. Although such a method promises to reduce the computational cost significantly at each iteration, it may never converge to the same optimal solution obtained without the approximation.
Contribution. In a nutshell, correctly anticipating how incentives affect the agents at equilibrium requires solving the equilibrium problem repeatedly, which is computationally inefficient. On the other hand, simply bypassing the time-consuming step of equilibrium-finding may lead the designer to a sub-optimal solution. This dilemma prompts the following question that motivates this study: can we obtain the optimal solution to an incentive design problem without repeatedly solving the equilibrium problem?
In this paper, we propose an efficient principled method that tackles the designer’s problem and agents’ problem simultaneously in a single loop. At the lower level, we use the mirror descent method [40] to model the process through which the agents move towards equilibrium. At the upper level, we use the gradient descent method to update the incentives towards optimality. At each iteration, both the designer and the agents only move one step based on the first-order information. However, as discussed before, the upper gradient relies on the corresponding lower equilibrium, which is not available in the single-loop update. Hence, we propose to use the implicit differentiation formula—with equilibrium strategy replaced by the current strategy—to estimate the upper gradient, which might be biased at the beginning. Nevertheless, we prove that if we improve the lower-level solution with larger step sizes, the upper-level and lower-level problems may converge simultaneously at a fast rate. The proposed scheme hence guarantees optimality because it can anticipate the overall influence of the incentives on the agents eventually after convergence.
Organization. In Section 2, we discuss related work. In Section 3, we provide the mathematical formulation of the incentive design problem. In Section 4, we design algorithms for solving the problem. In Section 5, we establish conditions under which the proposed scheme globally converges to the optimal solution and analyze the convergence rate; the convergence analysis is restricted to games with a unique equilibrium. In Section 6, we discuss how to apply our algorithms to games with multiple equilibria. Finally, we conduct experiments to test our algorithms in Section 7.
Notation. We denote $\langle\cdot,\cdot\rangle$ as the inner product in vector spaces. For a vector $a = (a^i)$, we denote $a^{-i} = (a^j)_{j \neq i}$. For a finite set $\mathcal{X} \subset \mathbb{R}^n$, we denote $\Delta(\mathcal{X}) = \{\pi \in \mathbb{R}^n_+ : \sum_{x_i \in \mathcal{X}} \pi_{x_i} = 1\}$. For any vector norm $\|\cdot\|$, we denote $\|\cdot\|_* = \sup_{\|z\| \le 1} \langle\cdot, z\rangle$ as its dual norm. We refer readers to Appendix A for a collection of frequently used problem-specific notations.
2 Related work
The incentive design problem studied in this paper is a special case of mathematical programs with equilibrium constraints (MPEC) [19], a class of optimization problems constrained by equilibrium conditions. MPEC is closely related to bilevel programs [10], which bind two mathematical programs together by treating one program as part of the constraints for the other.
Bilevel Programming. In the optimization literature, bilevel programming was first introduced to tackle resource allocation problems [7] and has since found applications in such diverse topics as revenue management, network design, traffic control, and energy systems. In the past decade, researchers have discovered numerous applications of bilevel programming in machine learning, including meta-learning (ML) [14], adversarial learning [22], hyperparameter optimization, [31] and neural architecture search [27]. These newly found bilevel programs in ML are often solved by gradient descent methods, which require differentiating through the (usually unconstrained) lower-level optimization problem [28]. The differentiation can be carried out either implicitly on the optimality conditions as in the conventional sensitivity analysis [see e.g., 2, 43, 4], or explicitly by unrolling the numerical procedure used to solve the lower-level problem [see e.g., 31, 15]. In the explicit approach, one may "partially" unroll the solution procedure (i.e., stop after just a few rounds, or even only one round) to reduce the computational cost. Although this popular heuristic has delivered satisfactory performance on many practical tasks [29, 36, 14, 27], it cannot guarantee optimality for bilevel programs under the general setting, as it cannot derive the accurate upper-level gradient at each iteration [53].
MPEC. Unlike bilevel programs, MPEC is relatively under-explored in the ML literature so far. Recently, Li et al. [25] extended the explicit differentiation method for bilevel programs to MPECs. Their algorithm unrolls an iterative projection algorithm for solving the lower-level problem, which is formulated as a variational inequality (VI) problem. Leveraging the recent advance in differentiable programming [2], they embedded each projection iteration as a differentiable layer in a computational graph, and accordingly, transform the explicit differentiation as standard backpropagation through the graph. The algorithm proposed by Li et al. [26] has a similar overall structure, but theirs casts the lower-level solution process as the imitative logit dynamics [6] drawn from the evolutionary game theory, which can be more efficiently unrolled. Although backpropagation is efficient, constructing and storing such a graph — with potentially a large number of projection layers needed to find a good solution to the lower-level problem — is still demanding. To reduce this burden, partially unrolling the iterative projection algorithm is a solution. Yet, it still cannot guarantee optimality for MPECs due to the same reason as for bilevel programs.
The simultaneous design-and-play approach is proposed to address this dilemma. Our approach follows the algorithms of Hong et al. [21] and Chen et al. [9], which solve bilevel programs via single-loop updates. Importantly, they solve both the upper- and the lower-level problems using gradient descent and establish the relationship between the convergence rate of the single-loop algorithm and the step sizes used in gradient descent. However, their algorithms are limited to cases where the lower-level optimization problem is unconstrained. Our work extends these single-loop algorithms to MPECs that have an equilibrium problem at the lower level. We choose mirror descent as the solution method for the lower-level problem because of its broad applicability to optimization problems with constraints [40] and its generality in the behavioral interpretation of games [34, 23]. We show that the convergence of the proposed simultaneous design-and-play approach relies on the setting of the step sizes for both the upper- and lower-level updates, a finding that echoes the key result in [21]. We first give the convergence rate under mirror descent and the unconstrained assumption and then extend the result to the constrained case. For the latter, we show that convergence cannot be guaranteed if the lower-level solution gets too close to the boundary early in the simultaneous solution process. To avoid this trap, the standard mirror descent method is revised to carefully steer the lower-level solution away from the boundary.
3 Problem Formulation
We study incentive design in both atomic games [39] and nonatomic games [45], classified depending on whether the set of agents is endowed with an atomic or a nonatomic measure. In social systems, both types of games can be useful, although the application context varies. Atomic games are typically employed when each agent has a non-trivial influence on the rewards of other agents. In a nonatomic game, on the contrary, a single agent’s influence is negligible and the reward could only be affected by the collective behavior of agents.
Atomic Game. Consider a game played by a finite set of agents $\mathcal{N} = \{1, \ldots, n\}$, where each agent $i \in \mathcal{N}$ selects a strategy $a^i \in \mathcal{A}^i \subseteq \mathbb{R}^{d_i}$ to maximize the reward received, which is determined by a continuously differentiable function $u^i : \mathcal{A} = \prod_{i \in \mathcal{N}} \mathcal{A}^i \to \mathbb{R}$. Formally, a joint strategy $a_* \in \mathcal{A}$ is a Nash equilibrium if
$$u^i(a^i_*, a^{-i}_*) \ge u^i(a^i, a^{-i}_*), \quad \forall\, a^i \in \mathcal{A}^i,\ \forall\, i \in \mathcal{N}.$$
Suppose that for all $i \in \mathcal{N}$ the strategy set $\mathcal{A}^i$ is closed and convex, and the reward function $u^i$ is concave in $a^i$; then $a_* \in \mathcal{A}$ is a Nash equilibrium if and only if there exist $\lambda^1, \ldots, \lambda^n > 0$ such that [46]
$$\sum_{i=1}^{n} \lambda^i \cdot \big\langle \nabla_{a^i} u^i(a_*),\, a^i - a^i_* \big\rangle \le 0, \quad \text{for all } a \in \mathcal{A}. \qquad (3.1)$$
Example 3.1 (Oligopoly model). In an oligopoly model, there is a finite set $\mathcal{N} = \{1, \ldots, n\}$ of firms, each of which supplies the market with a quantity $a^i$ ($a^i \ge 0$) of goods. Under this setting, we have $\mathcal{A} = \mathbb{R}^n_+$. The good is then priced as $p(q) = p_0 - \gamma \cdot q$, where $p_0, \gamma > 0$ and $q = \sum_{j \in \mathcal{N}} a^j$ is the total output. The profit and the marginal profit of firm $i$ are then given by
$$u^i(a) = a^i \cdot \Big( p_0 - \gamma \sum_{j \in \mathcal{N}} a^j \Big) - c^i \cdot a^i, \qquad \nabla_{a^i} u^i(a) = p_0 - c^i - \gamma \cdot \Big( a^i + \sum_{j \in \mathcal{N}} a^j \Big),$$
respectively, where $c^i$ is the constant marginal cost² for firm $i$.
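To make Example 3.1 concrete, the following minimal sketch simulates simultaneous (projected) gradient play on the oligopoly model; the parameter values and the helper `marginal_profit` are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical oligopoly instance: 5 firms, inverse demand p0 - gamma * total output.
p0, gamma = 100.0, 1.0
c = np.linspace(1.0, 2.0, 5)            # constant marginal costs c^i
a = np.ones_like(c)                     # initial outputs a^i

def marginal_profit(a):
    # gradient of u^i(a) = a^i * (p0 - gamma * sum(a)) - c^i * a^i with respect to a^i
    return p0 - c - gamma * (a.sum() + a)

# Simple simultaneous gradient play projected onto a^i >= 0; a stand-in for the
# mirror-descent game dynamics used later in the paper.
step = 0.05
for _ in range(2000):
    a = np.maximum(a + step * marginal_profit(a), 0.0)

print("approximate Cournot equilibrium outputs:", np.round(a, 3))
```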
Nonatomic Game. Consider a game played by a continuous set of agents, which can be divided into a finite set of classes $\mathcal{N} = \{1, \ldots, n\}$. We assume that each $i \in \mathcal{N}$ represents a class of infinitesimal and homogeneous agents sharing the finite strategy set $\mathcal{A}^i$ with $|\mathcal{A}^i| = d_i$. The mass distribution for class $i$ is defined as a vector $q^i \in \Delta(\mathcal{A}^i)$ that gives the proportion of agents using each strategy. Let the cost for an agent in class $i$ to select a strategy $a \in \mathcal{A}^i$ given $q = (q^1, \ldots, q^n)$ be $c^i_a(q)$. Formally, a joint mass distribution $q_* \in \Delta(\mathcal{A}) = \prod_{i \in \mathcal{N}} \Delta(\mathcal{A}^i)$ is a Nash equilibrium, also known as a Wardrop equilibrium [51], if for all $i \in \mathcal{N}$ there exists $b^i$ such that
$$c^i_a(q_*) = b^i \ \ \text{if } q^i_{*a} > 0, \qquad c^i_a(q_*) \ge b^i \ \ \text{if } q^i_{*a} = 0.$$
The following result extends the VI formulation to Nash equilibria in nonatomic games: denote $c^i(q) = (c^i_a(q))_{a \in \mathcal{A}^i}$; then $q_*$ is a Nash equilibrium if and only if [11]
$$\sum_{i \in \mathcal{N}} \lambda^i \cdot \big\langle c^i(q_*),\, q^i - q^i_* \big\rangle \ge 0, \quad \text{for all } q \in \Delta(\mathcal{A}). \qquad (3.2)$$
Example 3.2 (Routing game). Consider a set of agents traveling from source nodes to sink nodes in a directed graph with nodes $\mathcal{V}$ and edges $\mathcal{E}$. Denote $\mathcal{N} \subseteq \mathcal{V} \times \mathcal{V}$ as the set of source-sink pairs, $\mathcal{A}^i \subseteq 2^{\mathcal{E}}$ as the set of paths connecting $i \in \mathcal{N}$, and $\mathcal{E}^i_a \subseteq \mathcal{E}$ as the set of all edges on the path $a \in \mathcal{A}^i$. Suppose that each source-sink pair $i \in \mathcal{N}$ is associated with $\rho^i$ nonatomic agents aiming to choose a route from $\mathcal{A}^i$ that minimizes the total cost incurred. Let $q^i_a$ (with $q^i \in \Delta(\mathcal{A}^i)$) be the proportion of travelers using the path $a \in \mathcal{A}^i$, $x_e \in \mathbb{R}_+$ be the number of travelers using the edge $e$, and $t_e(x_e) \in \mathbb{R}_+$ be the cost of using edge $e$. Then we have $x_e = \sum_{i \in \mathcal{N}} \sum_{a \in \mathcal{A}^i} \rho^i \cdot q^i_a \cdot \delta^i_{ea}$, where $\delta^i_{ea}$ equals 1 if $e \in \mathcal{E}^i_a$ and 0 otherwise. The total cost for a traveler selecting a path $a \in \mathcal{A}^i$ will then be $c^i_a(q) = \sum_{e \in \mathcal{E}} t_e(x_e) \cdot \delta^i_{ea}$.
²Throughout this paper, we use the term “reward” to describe the scenario where the agents aim to maximize $u^i$, and use “cost” when the agents aim to do the opposite.
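A small numerical illustration of the quantities defined in Example 3.2, using made-up data (one source-sink pair, three paths over four edges, and a hypothetical linear edge delay):

```python
import numpy as np

rho = 10.0                                    # number of travelers for this O-D pair
delta = np.array([[1, 1, 0, 0],               # delta[a, e] = 1 if edge e lies on path a
                  [0, 0, 1, 1],
                  [1, 0, 0, 1]])
q = np.array([0.5, 0.3, 0.2])                 # path proportions, an element of Delta(A^i)

x = rho * delta.T @ q                         # edge flows: x_e = sum_a rho * q_a * delta_{ea}
t = 1.0 + 0.1 * x                             # hypothetical increasing edge delay t_e(x_e)
c = delta @ t                                 # path costs: c^i_a(q) = sum_e t_e(x_e) * delta_{ea}
print("edge flows:", x)
print("path costs:", c)
```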
Incentive Design. Despite the difference, we see that an equilibrium of both atomic and nonatomic games can be formulated as a solution to a corresponding VI problem of the following form:
$$\sum_{i \in \mathcal{N}} \lambda^i \cdot \big\langle v^i(x_*),\, x^i - x^i_* \big\rangle \ge 0, \quad \text{for all } x \in \mathcal{X} = \prod_{i \in \mathcal{N}} \mathcal{X}^i, \qquad (3.3)$$
where $v^i$ and $\mathcal{X}^i$ denote different terms in the two types of games. Suppose that there exists an incentive designer aiming to induce a desired equilibrium. To this end, the designer can add incentives $\theta \in \Theta \subseteq \mathbb{R}^d$, which are assumed to enter the reward/cost functions and thus lead to a parameterized $v^i_\theta(x)$. We assume that the designer's objective is determined by a function $f : \Theta \times \mathcal{X} \to \mathbb{R}$. Denote $S(\theta)$ as the solution set to (3.3). We then obtain the uniform formulation of the incentive design problem for both atomic games and nonatomic games:
$$\min_{\theta \in \Theta}\ f_*(\theta) = f(\theta, x_*), \quad \text{s.t.}\ x_* \in S(\theta). \qquad (3.4)$$
If the equilibrium problem admits multiple solutions, the agents may converge to different ones, and it would be difficult to determine which one better predicts the behaviors of the agents without additional information. In this paper, we first consider the case where the game admits a unique equilibrium; sufficient conditions under which this holds will be provided later. We then turn to the non-unique case and show that our algorithms remain applicable after adding an appropriate regularizer to the cost function.
Stochastic Environment. In the aforementioned settings, $v^i_\theta(x)$ is a deterministic function. Although most MPEC algorithms in the optimization literature follow this deterministic setting, we hope our algorithm can handle more realistic scenarios. In the real world, the environment can be stochastic when some environment parameters fluctuate from day to day. In a traffic system, for example, both bad weather and special events may affect the road condition, and hence the travel time $v^i_\theta(x)$ experienced by the drivers. We expect our algorithm to still work in the face of such stochasticity. To this end, we assume that $v^i_\theta(x)$ represents the expected value of the cost function. On each day, however, the agents can only receive a noisy feedback $\hat{v}^i_\theta$ as an estimate. In the next section, we develop algorithms based on such noisy feedback.
4 Algorithm
We propose to update $\theta$ and $x$ simultaneously to improve the computational efficiency. The game dynamics at the lower level are modeled using the mirror descent method. Specifically, at stage $k$, given $\theta_k$ and $x_k$, the agents first receive $v^i_{\theta_k}(x_k)$ as the feedback. After receiving the feedback, the agents update their strategies via
$$x^i_{k+1} = \operatorname*{argmax}_{x^i \in \mathcal{X}^i}\ \big\{ \langle v^i_{\theta_k}(x_k), x^i \rangle - 1/\gamma^i_k \cdot D_{\psi^i}(x^i_k, x^i) \big\}, \qquad (4.1)$$
where $D_{\psi^i}(x^i_k, x^i)$ is the Bregman divergence induced by a strongly convex function $\psi^i$. The accurate value of $\nabla f_*(\theta_k)$, the gradient of the designer's objective function, equals
$$\nabla_\theta f\big(\theta_k, x_*(\theta_k)\big) + \big[\nabla_\theta x_*(\theta_k)\big]^\top \cdot \nabla_x f\big(\theta_k, x_*(\theta_k)\big),$$
which requires the exact lower-level equilibrium $x_*(\theta_k)$. However, at stage $k$, we only have the current strategy $x_k$. Therefore, we also have to establish an estimator of $\nabla x_*(\theta_k)$ and $\nabla f_*(\theta_k)$ using $x_k$, the form of which will be specified later.
Remark 4.1. The standard gradient descent method is double-loop because at each $\theta_k$ it involves an inner loop for solving the exact value of $x_*(\theta_k)$ and then calculating the exact gradient.
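As a minimal sketch, with the Euclidean Bregman divergence $D(x_k, x) = \|x - x_k\|_2^2/2$ (our own choice for illustration), the update (4.1) has the closed form $x_{k+1} = x_k + \gamma_k v_{\theta_k}(x_k)$:

```python
import numpy as np

def mirror_step_euclidean(x_k, v_k, gamma_k):
    # argmax_x { <v_k, x> - (1/gamma_k) * ||x - x_k||^2 / 2 }  =  x_k + gamma_k * v_k
    return x_k + gamma_k * v_k

x_k = np.zeros(3)
v_k = np.array([0.5, -0.2, 0.1])      # feedback v^i_{theta_k}(x_k), assumed given
print(mirror_step_euclidean(x_k, v_k, gamma_k=0.1))
```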
4.1 Unconstrained Game
We first consider unconstrained games with $\mathcal{X}^i = \mathbb{R}^{d_i}$ for all $i \in \mathcal{N}$. We select $\psi^i(\cdot)$ as a smooth function, i.e., there exists a constant $H_\psi \ge 1$ such that for all $i \in \mathcal{N}$ and $x^i, x^{i\prime} \in \mathcal{X}^i$,
$$\big\| \nabla\psi^i(x^i) - \nabla\psi^i(x^{i\prime}) \big\|_2 \le H_\psi \cdot \| x^i - x^{i\prime} \|_2. \qquad (4.2)$$
Examples of $\psi^i$ satisfying this assumption include (but are not limited to) $\psi^i(x^i) = (x^i)^\top Q^i x^i / 2$, where $Q^i \in \mathbb{R}^{d_i \times d_i}$ is a positive definite matrix. It can be directly checked that we can set $H_\psi = \max_{i \in \mathcal{N}} \sigma^i$, where $\sigma^i$ is the largest singular value of $Q^i$. In this case, the corresponding Bregman divergence becomes $D_{\psi^i}(x^i, x^{i\prime}) = (x^i - x^{i\prime})^\top Q^i (x^i - x^{i\prime})/2$, which is known as the squared Mahalanobis distance. Before laying out the algorithm, we first give the following lemma characterizing $\nabla_\theta x_*(\theta)$.
Lemma 4.2. When $\mathcal{X}^i = \mathbb{R}^{d_i}$ and $\nabla_x v_\theta(x_*(\theta))$ is non-singular, it holds that
$$\nabla_\theta x_*(\theta) = -\big[ \nabla_x v_\theta\big(x_*(\theta)\big) \big]^{-1} \cdot \nabla_\theta v_\theta\big(x_*(\theta)\big).$$
Proof. See Appendix B.2 for a detailed proof.
For any given $\theta \in \Theta$ and $x \in \mathcal{X}$, we define
$$\widetilde{\nabla} f(\theta, x) = \nabla_\theta f(\theta, x) - \big[ \nabla_\theta v_\theta(x) \big]^\top \cdot \big[ \nabla_x v_\theta(x) \big]^{-1} \cdot \nabla_x f(\theta, x). \qquad (4.3)$$
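A small numerical sketch of the surrogate gradient (4.3); in practice the Jacobians would come from automatic differentiation, and the placeholder matrices below are our own:

```python
import numpy as np

def surrogate_grad(grad_theta_f, grad_x_f, jac_theta_v, jac_x_v):
    # tilde-grad f(theta, x) = grad_theta f - (dv/dtheta)^T (dv/dx)^{-1} grad_x f,
    # using a linear solve instead of forming the matrix inverse explicitly.
    return grad_theta_f - jac_theta_v.T @ np.linalg.solve(jac_x_v, grad_x_f)

jac_x_v = np.array([[2.0, 0.3],        # dv/dx, assumed non-singular (2-dimensional x)
                    [0.1, 1.5]])
jac_theta_v = np.array([[1.0],         # dv/dtheta (1-dimensional theta)
                        [0.5]])
print(surrogate_grad(np.array([0.2]), np.array([1.0, -1.0]), jac_theta_v, jac_x_v))
```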
Although we cannot obtain the exact value of $\nabla f_*(\theta_k)$, we may use $\widetilde{\nabla} f(\theta_k, x_k)$ as a surrogate and update $\theta_k$ based on it instead. Now we are ready to present the following bilevel incentive design algorithm for unconstrained games (see Algorithm 1).
Algorithm 1 Bilevel incentive design for unconstrained games
Input: $\theta_0 \in \Theta$, $x_0 \in \mathcal{X} = \mathbb{R}^d$, where $d = \sum_{i \in \mathcal{N}} d_i$, and a sequence of step sizes $(\alpha_k, \{\gamma^i_k\}_{i \in \mathcal{N}})$.
For $k = 0, 1, \ldots$ do:
  Update strategy profile
  $$x^i_{k+1} = \operatorname*{argmax}_{x^i \in \mathcal{X}^i}\ \big\{ \langle \hat{v}^i_k, x^i \rangle - 1/\gamma^i_k \cdot D_{\psi^i}(x^i_k, x^i) \big\}, \qquad (4.4)$$
  for all $i \in \mathcal{N}$, where $\hat{v}^i_k$ is an estimator of $v^i_{\theta_k}(x_k)$.
  Update incentive parameter
  $$\theta_{k+1} = \operatorname*{argmax}_{\theta \in \Theta}\ \big\{ \langle -\widetilde{\nabla} f(\theta_k, x_{k+1}), \theta \rangle - 1/\alpha_k \cdot \|\theta - \theta_k\|_2^2 \big\}.$$
EndFor
Output: Last-iteration incentive parameter $\theta_{k+1}$ and strategy profile $x_{k+1}$.
In Algorithm 1, if $\theta_k$ and $x_k$ converge to fixed points $\bar{\theta}$ and $\bar{x}$, respectively, then $\bar{x} = x_*(\bar{\theta})$ is expected to hold. We would then also have $\widetilde{\nabla} f(\bar{\theta}, \bar{x}) = \nabla f_*(\bar{\theta})$, so the optimality of $\bar{\theta}$ can be guaranteed. In other words, the algorithm finds the optimal solution if it converges. The difficult part, instead, is how to design appropriate step sizes that ensure convergence. In this paper, we provide such conditions in Section 5.1.
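For illustration, the sketch below runs a single-loop version of Algorithm 1 on the oligopoly example with a per-unit tax $\theta^i$ as the incentive and Euclidean divergences; the model, step sizes, and helper names are our own simplifications, not the paper's exact experiment.

```python
import numpy as np

p0, gam, n, tau = 100.0, 1.0, 5, 10.0
c = np.linspace(1.0, 2.0, n)
d = 10.0 - np.exp(c)                                   # assumed emission intensities

def v(theta, a):                                       # marginal profit under the tax
    return p0 - c - theta - gam * (a.sum() + a)

def grad_a_f(a):                                       # gradient of f = -welfare w.r.t. a
    return -(p0 - gam * a.sum()) + c + tau * d

jac_a_v = -gam * (np.eye(n) + np.ones((n, n)))         # dv/da (constant in this model)
jac_theta_v = -np.eye(n)                               # dv/dtheta

theta, a = np.zeros(n), np.ones(n)
for k in range(5000):
    alpha_k = 0.5 / (k + 1)                            # upper-level step size
    gamma_k = 0.2 / (k + 1) ** (2 / 3)                 # lower-level step size (Theorem 5.4 rates)
    a = np.maximum(a + gamma_k * v(theta, a), 0.0)     # lower-level mirror-descent step
    # surrogate gradient (4.3) evaluated at the current strategy a (grad_theta f = 0 here)
    g = -jac_theta_v.T @ np.linalg.solve(jac_a_v, grad_a_f(a))
    theta = theta - alpha_k * g                        # upper-level gradient step

print("taxes:", np.round(theta, 2))
print("outputs:", np.round(a, 2))
```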
4.2 Simplex-Constrained Game
We then consider the case where, for all $i \in \mathcal{N}$, $x^i$ is constrained within the probability simplex
$$\Delta([d_i]) = \big\{ x^i \in \mathbb{R}^{d_i} : x^i \ge 0,\ (x^i)^\top \mathbf{1}_{d_i} = 1 \big\},$$
where $\mathbf{1}_{d_i} \in \mathbb{R}^{d_i}$ is the vector of all ones. Here we remark that many classic game-theoretic models are simplex-constrained. In fact, as long as the agent faces a finite number of choices and adopts a mixed strategy, its decision space is a probability simplex [39]. In addition, some other types of decisions may also be constrained by a simplex. For example, financial investment concerns how to split money over different assets; in such a scenario, the budget constraint can also be represented by a probability simplex. In this case, we naturally consider $\psi^i(x^i) = \sum_{j \in [d_i]} [x^i]_j \cdot \log [x^i]_j$, the (negative) Shannon entropy. Such a choice gives the Bregman divergence $D_{\psi^i}(x^i, x^{i\prime}) = \sum_{j \in [d_i]} [x^{i\prime}]_j \cdot \log\big([x^{i\prime}]_j / [x^i]_j\big)$,
which is known as the KL divergence. In this case, we still first need to characterize $\nabla_\theta x_*(\theta)$, which also has an analytic form. Specifically, if we define a function $h_\theta(x) = (h^i_\theta(x))_{i \in \mathcal{N}}$ that satisfies
$$h^i_\theta(x) = \operatorname*{argmax}_{x^{i\prime} \in \mathcal{X}^i}\ \big\{ \langle v^i_\theta(x), x^{i\prime} \rangle - 1/\gamma^i_k \cdot D_{\psi^i}(x^i, x^{i\prime}) \big\},$$
then for any $\theta$, $x_*(\theta)$ satisfies the fixed-point equation $x_*(\theta) = h_\theta(x_*(\theta))$ [12]. Implicitly differentiating through this fixed-point equation then yields $\nabla x_*(\theta) = \nabla_\theta h_\theta(x_*(\theta)) \cdot \big(I - \nabla_x h_\theta(x_*(\theta))\big)^{-1}$. Then, similar to (4.3), we may use
$$\widetilde{\nabla} f(\theta, x) = \nabla_\theta f(\theta, x) + \nabla_\theta h_\theta(x) \cdot \big( I - \nabla_x h_\theta(x) \big)^{-1} \cdot \nabla_x f(\theta, x) \qquad (4.5)$$
to approximate the actual gradient $\nabla f_*(\theta)$ and then update $\theta_k$ based on $\widetilde{\nabla} f(\theta_k, x_k)$ instead.
Remark 4.3. The mapping $h_\theta(x)$ has an analytic expression, which reads
$$h^i_\theta(x) = x^i \cdot \exp\big( \gamma^i_k \cdot v^i_\theta(x) \big) \,\Big/\, \big\| x^i \cdot \exp\big( \gamma^i_k \cdot v^i_\theta(x) \big) \big\|_1,$$
where the multiplication and the exponential are taken element-wise. Hence, both $\nabla_x h_\theta(x)$ and $\nabla_\theta h_\theta(x)$ can also be calculated analytically.
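The closed-form map in Remark 4.3 is a multiplicative-weights (entropic mirror descent) update; a minimal sketch for one agent class, with made-up inputs:

```python
import numpy as np

def h_theta(x_i, v_i, gamma_i):
    # x_i: current mixed strategy on the simplex; v_i: payoff vector v^i_theta(x)
    w = x_i * np.exp(gamma_i * v_i)
    return w / w.sum()                 # division by the l1-norm, as in Remark 4.3

x_i = np.array([0.25, 0.25, 0.5])
v_i = np.array([1.0, 0.2, -0.5])
print(h_theta(x_i, v_i, gamma_i=0.5))  # stays on the probability simplex
```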
In addition to a different gradient estimate, we also modify Algorithm 1 to keep the iterates $x_k$ from hitting the boundary at an early stage. The modification mixes the strategy with the uniform strategy $\mathbf{1}_{d_i}/d_i$, i.e., it imposes an additional step
$$\tilde{x}^i_{k+1} = (1 - \nu_{k+1}) \cdot x^i_{k+1} + \nu_{k+1} \cdot \mathbf{1}_{d_i}/d_i$$
upon finishing the update (4.4), where $\nu_{k+1} \in (0, 1)$ is the mixing parameter, decreasing to 0 as $k \to \infty$. In the following, we give the formal presentation of the modified bilevel incentive design algorithm for simplex-constrained games (see Algorithm 2).
Algorithm 2 Bilevel incentive design for simplex-constrained games
Input: $\theta_0 \in \Theta$, $x_0 \in \mathcal{X}$, step sizes $(\alpha_k, \{\gamma^i_k\}_{i \in \mathcal{N}})$, $k \ge 0$, and mixing parameters $\nu_k$, $k \ge 0$.
For $k = 0, 1, \ldots$ do:
  Update strategy profile
  $$x^i_{k+1} = \operatorname*{argmax}_{x^i \in \Delta([d_i])}\ \big\{ \langle \hat{v}^i_k, x^i \rangle - 1/\gamma^i_k \cdot D_{\psi^i}(\tilde{x}^i_k, x^i) \big\},$$
  $$\tilde{x}^i_{k+1} = (1 - \nu_k) \cdot x^i_{k+1} + \nu_k \cdot \mathbf{1}_{d_i}/d_i, \qquad (4.6)$$
  for all $i \in \mathcal{N}$, where $\hat{v}^i_k$ is an estimator of $v^i_{\theta_k}(\tilde{x}_k)$.
  Update incentive parameter
  $$\theta_{k+1} = \operatorname*{argmax}_{\theta \in \Theta}\ \big\{ \langle -\widetilde{\nabla} f(\theta_k, \tilde{x}_{k+1}), \theta \rangle - 1/\alpha_k \cdot \|\theta - \theta_k\|_2^2 \big\}.$$
EndFor
Output: Last-iteration incentive parameter $\theta_{k+1}$ and strategy profile $x_{k+1}$.
Similar to Algorithm 1, the core of the convergence of Algorithm 2 is still the step sizes. This case is even more complicated, as we need to design $\alpha_k$, $\gamma_k$, and $\nu_k$ at the same time. A provably convergent scheme is provided in Section 5.2.
Before closing this section, we remark that the algorithm can be easily adapted to other types of constraints by using another $h_\theta(x)$ to model the game dynamics. In particular, the projected gradient descent dynamics has very broad applicability; in that case, an algorithm for calculating $\nabla_\theta h_\theta(x)$ and $\nabla_x h_\theta(x)$ is given by, for example, Amos and Kolter [2]. The additional step (4.6) then becomes unnecessary, as it is dedicated to simplex constraints.
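Putting the pieces together, the sketch below implements one iteration of Algorithm 2 for a single agent class, including the mixing step (4.6) and the Theorem 5.5 step-size schedule; the toy dynamics `v_hat` and `grad_tilde_f` are placeholders of our own, not the paper's model.

```python
import numpy as np

def algorithm2_step(theta, x_tilde, k, v_hat, grad_tilde_f,
                    alpha=0.1, gamma=0.5, nu=0.5):
    alpha_k = alpha / (k + 1) ** 0.5
    gamma_k = gamma / (k + 1) ** (2 / 7)
    nu_k = nu / (k + 1) ** (4 / 7)
    w = x_tilde * np.exp(gamma_k * v_hat(theta, x_tilde))     # entropic mirror step
    x_next = w / w.sum()
    x_tilde_next = (1 - nu_k) * x_next + nu_k / x_next.size   # mixing step (4.6)
    theta_next = theta - alpha_k * grad_tilde_f(theta, x_tilde_next)
    return theta_next, x_tilde_next

# Toy usage with made-up dynamics: 3 strategies, 1-dimensional incentive theta.
v_hat = lambda th, x: np.array([1.0, 0.0, -1.0]) - th * np.array([1.0, 0.0, 0.0])
grad_tilde_f = lambda th, x: th - x[0]            # purely illustrative
theta, x_tilde = 0.0, np.full(3, 1 / 3)
for k in range(200):
    theta, x_tilde = algorithm2_step(theta, x_tilde, k, v_hat, grad_tilde_f)
print(round(theta, 3), np.round(x_tilde, 3))
```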
5 Convergence Analysis
In this section, we study the convergence of the proposed algorithms. For simplicity, define $D_\psi(x, x') = \sum_{i \in \mathcal{N}} D_{\psi^i}(x^i, x^{i\prime})$. We make the following assumptions.
Assumption 5.1. The lower-level problem in (3.4) satisfies the following conditions. (1) The strategy set $\mathcal{X}^i$ of agent $i$ is a nonempty, compact, and convex subset of $\mathbb{R}^{d_i}$. (2) For each $i \in \mathcal{N}$, $v^i_\theta(\cdot)$ is $H_u$-Lipschitz continuous with respect to $D_\psi$, i.e., for all $i \in \mathcal{N}$ and $x, x' \in \mathcal{X}$, $\|v^i_\theta(x) - v^i_\theta(x')\|_*^2 \le H_u^2 \cdot D_\psi(x, x')$. (3) There exist constants $\rho_\theta, \rho_x > 0$ such that for all $x \in \mathcal{X}$ and $\theta \in \Theta$, $\|\nabla_\theta v_\theta(x)\|_2 < \rho_\theta$ and $\|[\nabla_x v_\theta(x)]^{-1}\|_2 \le 1/\rho_x$. (4) For all $\theta \in \Theta$, the equilibrium $x_*(\theta)$ of the game is strongly stable with respect to $D_\psi$, i.e., for all $x \in \mathcal{X}$, $\sum_{i \in \mathcal{N}} \lambda^i \cdot \langle v^i_\theta(x), x^i_*(\theta) - x^i \rangle \ge D_\psi(x_*(\theta), x)$.
Assumption 5.2. The upper-level problem in (3.4) satisfies the following properties. (1) The set $\Theta$ is compact and convex. The function $f_*(\theta)$ is $\mu$-strongly convex and $\nabla f_*(\theta)$ has 2-norm uniformly bounded by $M$. (2) The extended gradient $\widetilde{\nabla} f(\theta, x)$ is $\widetilde{H}$-Lipschitz continuous with respect to $D_\psi$, i.e., for all $x, x' \in \mathcal{X}$, $\|\widetilde{\nabla} f(\theta, x) - \widetilde{\nabla} f(\theta, x')\|_2^2 \le \widetilde{H}^2 \cdot D_\psi(x, x')$.
Assumption 5.3. Define the filtration by $\mathcal{F}^\theta_0 = \{\theta_0\}$, $\mathcal{F}^x_0 = \emptyset$, $\mathcal{F}^\theta_k = \mathcal{F}^\theta_{k-1} \cup \{x_{k-1}, \theta_k\}$, and $\mathcal{F}^x_k = \mathcal{F}^x_{k-1} \cup \{\theta_k, x_k\}$. We assume that (1) the feedback $\hat{v}_k$ is an unbiased estimate, i.e., for all $i \in \mathcal{N}$, we have $\mathbb{E}[\hat{v}^i_k \mid \mathcal{F}^x_k] = v^i_{\theta_k}(x_k)$; and (2) the feedback $\hat{v}_k$ has bounded mean squared estimation errors, i.e., there exists $\sigma_u > 0$ such that $\mathbb{E}[\|\hat{v}^i_k - v^i_{\theta_k}(x_k)\|_*^2 \mid \mathcal{F}^x_k] \le \sigma_u^2$ for all $i \in \mathcal{N}$.
Below we discuss when the proposed assumptions hold and, if they are violated, how our algorithm behaves. Assumption 5.1 includes the condition that $x_*(\theta)$ is strongly stable; in this case, it is the unique Nash equilibrium of the game [34]. This is also a common assumption in the analysis of the mirror descent dynamics itself [13]. We provide sufficient conditions for checking strong stability in Appendix B.1 and refer the readers to Section 6 for an explanation of how to extend our algorithm when this assumption is violated. Assumption 5.2 includes the convexity of the upper-level problem, which is usually necessary to ensure global convergence; without convexity, our algorithm can still converge to a local minimum. Assumption 5.3 becomes unnecessary if we simply assume that the environment is deterministic, in which case the accurate value of $v^i_\theta(x)$ is available. If noise is added to the feedback, assuming that the noisy feedback is unbiased and has bounded variance is still reasonable.
5.1 Unconstrained Game
In this part, we establish the convergence guarantee of Algorithm 1 for unconstrained games. We define the optimality gap $\epsilon^\theta_k$ and the equilibrium gap $\epsilon^x_{k+1}$ as
$$\epsilon^\theta_k := \mathbb{E}\big[ \|\theta_k - \theta_*\|_2^2 \big], \qquad \epsilon^x_{k+1} := \mathbb{E}\big[ D_\psi\big(x_*(\theta_k), x_{k+1}\big) \big].$$
We track these two gaps as the convergence criteria in the subsequent results.
Theorem 5.4. For Algorithm 1, set the step sizes $\alpha_k = \alpha/(k+1)$, $\gamma_k = \gamma/(k+1)^{2/3}$, and $\gamma^i_k = \lambda^i \cdot \gamma_k$ with constants $\alpha > 0$ and $\gamma > 0$ satisfying
$$\gamma \le 1 \big/ \big( N H_u^2 \|\lambda\|_2^2 \big), \qquad \alpha/\gamma^{3/2} \le 1 \big/ \big( 12\, H_\psi \widetilde{H} H_* \big),$$
where $H_* = \rho_\theta/\rho_x$. Suppose that Assumptions 5.1-5.3 hold; then we have
$$\epsilon^\theta_k = O(k^{-2/3}), \qquad \epsilon^x_k = O(k^{-2/3}).$$
Proof. See Appendix C for detailed proof and a detailed expression of convergence rates.
5.2 Simplex-Constrained Game
In this part, we establish the convergence guarantee of Algorithm 2 for simplex-constrained games. We still define the optimality gap as $\epsilon^\theta_k = \mathbb{E}[\|\theta_k - \theta_*\|_2^2]$. Yet, corresponding to (4.6), we track $\tilde{\epsilon}^x_{k+1}$ as a measure of convergence for the strategies of the agents, defined as
$$\tilde{\epsilon}^x_{k+1} = \mathbb{E}\big[ D_\psi\big(\tilde{x}_*(\theta_k), \tilde{x}_{k+1}\big) \big],$$
where $\tilde{x}^i_*(\theta_k) = (1 - \nu_k) \cdot x^i_*(\theta_k) + \nu_k \cdot \mathbf{1}_{d_i}/d_i$. We are now ready to give the convergence guarantee of Algorithm 2.
Theorem 5.5. For Algorithm 2, set the step sizes $\alpha_k = \alpha/(k+1)^{1/2}$, $\gamma_k = \gamma/(k+1)^{2/7}$, $\gamma^i_k = \lambda^i \cdot \gamma_k$, and $\nu_k = \nu/(k+1)^{4/7}$ with constants $\alpha > 0$ and $\gamma > 0$ satisfying
$$\gamma \le 1 \big/ \big( 6 N H_u^2 \|\lambda\|_2^2 \big), \qquad \alpha/\gamma^{3/2} \le 1 \big/ \big( 7\, \widetilde{H} \widetilde{H}_* \big),$$
where $\widetilde{H}_* = (1 + d)\rho_\theta/\rho_x$. Suppose that Assumptions 5.1-5.3 hold. If there exists some constant $V_* > 0$ such that $\|v_\theta(x_*(\theta))\|_\infty \le V_*$ for any $\theta \in \Theta$, we then have
$$\epsilon^\theta_k = O(k^{-2/7}), \qquad \tilde{\epsilon}^x_k = O(k^{-2/7}).$$
Proof. See Appendix D for detailed proof and a detailed form of the convergence rates.
6 Extensions to Games with Multiple Equilibria
We then briefly discuss how to apply our algorithms when the lower-level game has multiple equilibria.
Case I: If the function $v_\theta(x) = (v^i_\theta(x))_i$ is strongly monotone in a neighbourhood of each equilibrium, then all equilibria are strongly stable in a neighbourhood and hence isolated [38]. In this case, our algorithms can be directly applied, as $\nabla_x v_\theta(x)$ is non-singular in these neighborhoods. It is commonly believed that the most likely equilibrium is the one reached by the game dynamics [52]; our algorithm naturally converges to this one.
Case II: If the function $v_\theta(x) = (v^i_\theta(x))_i$ is monotone but not strongly monotone, then the equilibrium set is a convex and closed region [34]. This case is challenging, as the matrix $\nabla_x v_\theta(x)$ that needs to be inverted becomes singular. Nevertheless, we can simply assume the agents are boundedly rational [42, 1, 33]. The bounded rationality results in a quantal response equilibrium for predicting the agents' response. We refer the readers to Appendix F for a detailed explanation (with numerical examples for illustration). Here we briefly sum up the key takeaways: (1) it is equivalent to adding a regularizer $\eta^i \cdot (\log(x^i + \epsilon) + 1)$ to $v^i_\theta(x)$ for some $\eta^i > 0$ and $\epsilon \ge 0$; (2) as long as $\eta^i > 0$, the strong stability condition in Assumption 5.1 is satisfied, hence a unique equilibrium exists; (3) as long as $\epsilon > 0$, the Lipschitz continuity condition in Assumption 5.1 is also not violated. In a nutshell, the bounded rationality assumption can simultaneously make our model more realistic and satisfy the assumptions in Section 5.
7 Numerical Experiments
In this section, we conduct two numerical experiments to test our algorithms. All numerical results reported in this section were produced on a MacBook Pro (15-inch, 2017) with a 2.9 GHz Quad-Core Intel Core i7 CPU.
Pollution Control via Emission Tax. We first consider the oligopoly model introduced in Example 3.1. We assume that, when producing $a^i$ units of output, firm $i$ generates $e^i = d^i a^i$ units of emissions. We consider the following social welfare function [44]:
$$W(a) = \int_0^{\sum_{i=1}^n a^i} (p_0 - \gamma \cdot q)\, \mathrm{d}q \;-\; \sum_{i=1}^n c^i \cdot a^i \;-\; \tau \cdot \sum_{i=1}^n d^i a^i,$$
where the first term is the consumers' surplus, the second term is the total production cost, and the third term is the social damage caused by pollution. To maximize social welfare, an authority can impose emission taxes on production. Specifically, whenever producing $a^i$ units of output, firm $i$ could be charged $\pi^i \cdot a^i$, where $\pi^i$ is specialized for firm $i$. In the experiment, we set $n = 100$, $p_0 = 100$, $\gamma = 1$, $\tau = 10$, and $d^i = 10 - \exp(c^i) + \epsilon^i$, where $\{c^i\}_{i=1}^{100}$ are evenly spaced between 1 and 2 and $\{\epsilon^i\}_{i=1}^{100}$ are white noises. Under this setting, $c^i$ and $e^i$ are negatively correlated, which is realistic: if a firm hopes to reduce its pollution by updating its emission control systems, its production cost must increase accordingly.
Through this small numerical example, we hope to illustrate that the single-loop scheme developed in this paper is indeed much more efficient than previous double-loop algorithms. To this end, we compare our algorithm with two double-loop schemes proposed by Li et al. [25]. In both approaches, the lower-level equilibrium problem is solved exactly at each iteration; afterwards, the upper-level gradients are obtained via automatic differentiation (AD) and implicit differentiation (ID), respectively. To make a fair comparison, the same hyperparameters — including the initial solutions, the learning rates, and the tolerance values for both upper- and lower-level problems — are employed for all tested algorithms (double-loop AD, double-loop ID, and our algorithm).
Table 1 reports statistics related to the computational performance, including the total CPU time, the total number of iterations, and the CPU time per iteration. The results reveal that all tested algorithms take a similar number of iterations to reach the same level of precision. However, the running time per iteration required by our algorithm is significantly lower than that of the two double-loop approaches. Hence, in general, our scheme is more efficient.
Second-Best Congestion Pricing. We then consider the routing game model introduced in Example 3.2. To minimize the total travel delay, an authority acting on behalf of the public sector can impose appropriate tolls on selected roads [49]. This problem of determining tolls is commonly known as the congestion pricing problem. The second-best scheme assumes that only a subset of links can be charged [48]. Specifically, we write $\pi \in \mathbb{R}^{|\mathcal{E}|}_+$ as the tolls imposed on all the links and $\mathcal{E}_{\text{toll}}$ as the set of tollable links. We model the total cost for a traveler selecting a path $a \in \mathcal{A}^i$ as
$$c^i_a(\pi, q) = \sum_{e \in \mathcal{E}} \big( t_e(x_e) + \pi_e \big) \cdot \delta^i_{ea} + \eta \cdot \big( \log(q^i_a) + 1 \big),$$
where we add the extra term $\eta \cdot (\log(q^i_a) + 1)$ to characterize the uncertainties in travelers' route choices [20, 5]. It results in a quantal response equilibrium, as discussed in Section 6. We test our algorithm on a real-world traffic network: the Sioux Falls network (see Lawphongpanich and Hearn [24] for its structure). We select 20 links (11, 35, 32, 68, 46, 21, 65, 52, 71, 74, 33, 64, 69, 14, 18, 39, 57, 48, 15, 51) for imposing congestion tolls.
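A sketch of this perturbed path cost on a toy network (the BPR-style edge delay, the network, and the parameter values are our own stand-ins for the Sioux Falls setup):

```python
import numpy as np

delta = np.array([[1, 1, 0],              # 2 paths over 3 edges; delta[a, e] = 1 if e on path a
                  [0, 1, 1]])
rho, eta = 10.0, 0.5
free_flow = np.array([1.0, 0.5, 1.0])
capacity = np.array([6.0, 8.0, 6.0])

def path_costs(q, pi):
    x = rho * delta.T @ q                                     # edge flows
    t = free_flow * (1.0 + 0.15 * (x / capacity) ** 4)        # assumed BPR-style edge delay
    return delta @ (t + pi) + eta * (np.log(q) + 1.0)         # tolled cost + entropy perturbation

q = np.array([0.6, 0.4])                  # current path split for this O-D pair
pi = np.array([0.0, 0.3, 0.0])            # toll only on the single tollable edge
print(np.round(path_costs(q, pi), 3))
```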
We run Algorithm 2 to solve the problem and compare four different settings of the step sizes. Setting A: $\alpha_k = \alpha/(k+1)^{1/2}$, $\gamma_k = \gamma/(k+1)^{2/7}$, and $\nu_k = \nu/(k+1)^{4/7}$; this ensures convergence according to Theorem 5.5. Setting B: $\alpha_k = \alpha/(k+1)^{1/2}$, $\gamma_k = \gamma/(k+1)$, and $\nu_k = \nu/(k+1)^{4/7}$; this only increases the decay rate of the lower-level step size. Setting C: $\alpha_k = \alpha/(k+1)$, $\gamma_k = \gamma/(k+1)$, and $\nu_k = \nu/(k+1)$; all step sizes decrease at the classic $O(1/k)$ rate. Setting D: $\alpha_k = \alpha/(k+1)^{1/2}$, $\gamma_k = \gamma/(k+1)^{2/7}$, and $\nu_k = 0$; this does not adopt the mixing step proposed in our paper. We add white noise to the costs received by the agents based on Gaussian distributions and run our algorithm under each setting 10 times. The mean values of the upper-level optimality gaps and lower-level equilibrium gaps are reported in Figure 1 (the shaded areas are plotted based on the "mean ± std" range over all sampled trajectories).
Below we summarize a few observations. First, without the mixing step proposed in our paper, the algorithm does hit the boundary too early and fails after just a few iterations. This verifies our earlier claim that directly extending previous methods [21, 9] designed for bilevel optimization problems to MPECs is problematic. Second, the step sizes given by Theorem 5.5 ensure the fastest and most stable convergence.
Acknowledgments and Disclosure of Funding
Mingyi Hong’s research is funded by NSF under the award numbers CIF-1910385 and CMMI1727757. Yu (Marco) Nie’s research is funded by NSF under the award number CMMI-2225087. Zhaoran Wang’s research is funded by NSF under the award number ECCS-2048075. | 1. What is the focus and contribution of the paper on incentive design in convex games?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its connection to prior work in computational game theory?
3. Do you have any concerns regarding the assumptions made in the paper, such as monotonicity and strong stability?
4. How does the paper relate to normal-form games, and what are the implications for finding equilibria in such games?
5. Are there any issues with the exposition, such as the separation of atomic and nonatomic games, or the introduction of a continuum of players?
6. Are there any minor errors in the paper, such as the use of "quantum" instead of "quantal"? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper claims to introduce algorithms for incentive design in convex games. In particular, the incentive designer is faced with selecting a parameter $\theta$ such that the resulting game has an equilibrium $x^*(\theta)$ that is desirable for the designer. The authors state some preliminary results about such games, and then give an algorithm that attempts to find an optimal parameter $\theta$, prove properties of their algorithms, and give some experimental evidence of their effectiveness in practice (appendix).
Strengths And Weaknesses
The problem and analysis presented in the paper are interesting. However, I have some concerns, mostly pertaining to setting this paper within the field of computational game theory.
Perhaps most notably, I do not think the authors have done a sufficient job relating this paper to prior work in computational game theory. For example, the techniques of this paper are based on online learning--Algorithms 1 & 2 are, modulo some careful tuning of step sizes, essentially "every player run online mirror descent", which is a very well-known idea in computational game theory. However, the authors do not discuss this connection at all. Further, the "bilevel optimization problem" is essentially a Stackelberg game with a single leader and multiple followers, and I believe there should be a concrete connection drawn to that literature. For example, if $n = 1$, then it is known that equilibria are easy to find even in general normal-form games, by a standard linear programming algorithm.
I would also like to see a much more in-depth discussion of the Assumptions 5.1-5.3 as they pertain to useful classes of games. For example:
The monotonicity/strong stability condition (e.g., Assumption 5.1, Section 6 Case II) seems very strong. I understand that this is the only thing preventing the paper from breaking long-standing assumptions in complexity theory (e.g., implying a polynomial-time algorithm for Nash in normal-form games), but I feel that the authors should take more time to discuss intuition for what these assumptions mean in practice. For example: what, if anything, does this paper say about normal-form ("nonatomic", in this paper's language) games?
The convexity of $f^*(\theta)$ is perhaps too strong an assumption. The authors admit this, but that does not make it less bothersome.
Can the bounded variance in Assumption 5.3 be replaced with an $x^i$-dependent variance, as is usual in the online learning literature (see, e.g., the analysis of Exp3)?
Questions
A few minor points about the exposition (atomic and nonatomic games):
Why separate "atomic games" from "nonatomic games"? It seems to me that a nonatomic game is simply an atomic game over simplices with multilinear utility functions.
Also, why define one in terms of utility and one in terms of cost? To me this only introduces extra needless confusion.
Isn't "nonatomic game" just a normal-form game here? Why bother introducing "continuum of players", and why not use the usual language of normal-form games?
Miscellaneous things:
Throughout the paper, "quantum" should probably be "quantal", as in "quantal response equilibrium". I've never heard of that referred to as "quantum equilibrium".
Line 145: since the utility is being maximized, $u_i$ should be concave, not convex.
Limitations
The authors have adequately addressed limitations, modulo concerns I have already stated above. |
NIPS | Title
Effectiveness of Vision Transformer for Fast and Accurate Single-Stage Pedestrian Detection
Abstract
Vision transformers have demonstrated remarkable performance on a variety of computer vision tasks. In this paper, we illustrate the effectiveness of the deformable vision transformer for single-stage pedestrian detection and propose a spatial and multi-scale feature enhancement module, which aims to achieve the optimal balance between speed and accuracy. Performance improvement with vision transformers on various commonly used single-stage structures is demonstrated. The design of the proposed architecture is investigated in depth. Comprehensive comparisons with state-of-the-art singleand two-stage detectors on different pedestrian datasets are performed. The proposed detector achieves leading performance on Caltech and Citypersons datasets among singleand two-stage methods using fewer parameters than the baseline. The log-average miss rates for Reasonable and Heavy are decreased to 2.6% and 28.0% on the Caltech test set, and 10.9% and 38.6% on the Citypersons validation set, respectively. The proposed method outperforms SOTA two-stage detectors in the Heavy subset on the Citypersons validation set with considerably faster inference speed.
1 Introduction
Pedestrian detection is a popular task subordinate to object detection in computer vision. This task aims to locate and classify pedestrians in images or videos accurately. Pedestrian detection is very important as it serves as the prerequisite of various vision tasks [1], such as human-centric tasks (person re-identification [2, 3], person search [4], human pose estimation [5] etc.) and more generic multi-object tracking [6]. It has been applied to autonomous driving [7, 8], video surveillance [9] and action tracking. In this paper, we focus on the detection based on RGB images.
Pedestrian detection suffers from significant occlusion and varying scales. Intra- and inter-class occlusion occur when a pedestrian is occluded by other pedestrians or objects like cars, bicycles etc. Both significantly reduce the discriminative features and destroy the regular shape of pedestrians. For varying scales, large pedestrians tend to have more informative features. Still, they are difficult to fully extract from a vast region, while features of small targets are compact but relatively ambiguous with less preserved details. In summary, the fluctuating amount and varying shape of effective features are the core problems. They challenge the capabilities of feature extraction modules, which act as a long-standing bottleneck in pedestrian detection.
To deal with these problems, two-stage methods [10–15] based on Fast [16] and Faster R-CNN [16] have pervaded pedestrian detection tasks owing to the high detection accuracy. These methods first make coarse predictions of targets via the Region Proposal Network (RPN), then refine the bounding boxes and predict the final scores based on the features inside the proposal regions. In addition to methods for general object detection, pedestrian detectors take advantage of unique characteristics of targets, such as the mask of visible parts [17, 10] and key points of human bodies [18]. However, the inference speed of these methods is limited by the repeated predictions which makes them hard to be applied to real-world scenes.
To achieve faster inference, single-stage approaches, which only make one round prediction, are developed [19] and applied to pedestrian detection [20–24]. However, they suffer from decreased detection accuracy. The miss rates of two-stage methods in the Reasonable subset on Caltech are reduced to less than 4% [14, 10, 25, 18], while those for single-stage methods are larger than 4.5% [22]. For the Citypersons validation set, the miss rates for the former are less than 40% in the Heavy subset, which is much lower than the latter (42% [24]). In this paper, we aim to improve the detection accuracy, especially in Reasonable and Heavy subsets, and to narrow the gap between single- and two-stage methods with fast inference.
Typical single-stage detectors use anchors (SSD [26]) or are anchor-free (CSP [22]). The former generates rectangular bounding boxes with different aspect ratios and scales centered at each pixel of the feature maps at certain levels. Anchor scales are designed to be smaller at lower levels to facilitate the detection of small objects. These methods predict the offsets w.r.t. the upper left position, height and width of the anchor. Taking CSP as an example, the latter method only predicts the logarithm height and offsets w.r.t. centers of each pixel. For anchor-based single-stage detectors, ALFNet [21] refines anchors progressively with stacked prediction blocks to remedy the lack of proposal regions.
In the past year, most research has focused on the fusion of representative features [27–32] to improve single-stage methods. For example, [30] enhances features via increasing semantic information at a low level and enriches the localization information at a high level. Similarly, [32] fuses the feature maps with different scales in adequate proportions. [29, 31] also explore new strategies to aggregate multi-level features. These feature enhancements are mainly performed along the dimension of feature level due to the intrinsically unbalanced feature information between shallow and deep feature levels. However, this unbalance also exists in two-stage detectors. As such, this is not the particular reason for the poorer accuracy of single-stage methods.
The general architecture of single- and two-stage detectors are compared in Figure 1. Assuming that the training strategies and the detection head are the same for both methods, the difference in the architecture lies in the information fed into the detection head. For two-stage methods, both positions and features of the proposal regions are fed into the detection head. These proposals contain potential pedestrians. Thereby, the detection head classifies spatially target-focused features with fewer background interruptions and refines the bounding boxes by predicting small offsets from the proposal positions. For single-stage methods, each pixel in the feature map serves as the ‘proposal region’ with no pre-estimated positions. The receptive fields of these pixels share the same size, which may be too small to include sufficient information for large targets or so large that the background information overwhelms the useful features. This is more challenging for the classifier compared to two-stage methods. Additionally, single-stage methods have to regress from scratch, which is more difficult than simple refinement. Thus, the lack of spatially target-focused feature representation and the prediction of bounding boxes from scratch are the two key bottlenecks hindering the improvement of single-stage detectors.
To make the features fed into the detection head concentrate on the targets or other helpful information automatically without the assistance of proposal regions, we take advantage of vision transformers in this paper. Vision transformers describe the pairwise dependency of each entity in the feature map with attention weights. The output weighted averaged feature is the adaptive aggregation of important entities (with higher attention weights) while the disturbing information (with lower attention weights) is suppressed. Using such attention mechanism on top of the backbone enables the single-stage detector to supplement spatially filtered features easily for subsequent classification and regression. In this case, the modified detector makes the best use of the fast inference originating from single-stage methods and more effective features. Our main contributions are as follows:
• Demonstrate the effectiveness of the deformable vision transformer in improving the accuracy of commonly used single-stage detectors on pedestrian datasets.
• Extend the application of vision transformers on top of the backbone in pedestrian detection tasks.
• Achieve the best performance among single-stage detectors on the Caltech test set and Citypersons validation set while maintaining fast inference and reducing the number of parameters.
• Narrow the gap of detection accuracy between the single- and two-stage methods in pedestrian detection.
2 Related works
Currently, vision transformers are used to establish general-purpose backbones (ViT [33] and Swin Transformer [34]) or stack on top of the backbone (DETR [35]). Since we focus on pedestrian detection, this paper explores the latter case. DETR consists of the convolution backbone, six encoders and decoders and the prediction head. It is an inspiring end-to-end detector but is memory-consuming. DETR requires massive memory to store the self-attention weights within each Multi-Head SelfAttention (MHSA) layer. The memory cost is linear to the number of attention heads and is square to the number of pixels in the down-sampled feature map. Additionally, first and second-order momentums in optimization introduce further memory cost in the training procedure. In pedestrian detection, more attention heads and relatively large down-sampled feature maps are preferred to enhance detection accuracy, especially for small targets. This results in memory explosion using DETR. To this end, deformable DETR [36] is proposed. It only needs the attention weights at several sampling locations rather than each pixel in the feature map. The memory cost of the attention weights is linear to the number of pixels, which makes training with high resolutions possible. Experiments show that the deformable DETR outperforms the Faster R-CNN [37] and DETR on COCO 2017 validation set [38].
So far, vision transformers show great potential, but they have rarely been applied to pedestrian detection in the form of DETR or its variants. This is because it has been observed that they perform worse than the commonly used Faster R-CNN on CrowdHuman dataset [39] and require tenfold training time [40]. Although [40] proposed using dense object queries and the rectified attention field to enhance scale-adaptive feature extraction in the decoding phase, the modified deformable DETR still shows a limited advantage over the traditional Faster R-CNN. This implies that rigidly putting the whole six encoder-decoder pairs into the pedestrian detector may not be cost-effective. As an example, BoTNet [41] only substitutes the convolution layers in residual bottleneck blocks in the last stage of ResNet [42] for MHSA layers, but it produces strong performance on ImageNet validation set [43]. Inspired by this and the limitation above, our work only utilizes a single encoder of the deformable vision transformer as an adaptive feature extractor and applies it to commonly used single-stage detectors for better detection accuracy and fast inference.
3 Method
Deformable Vision Transformer Encoder: The deformable vision transformer encoder (Figure 2) takes the $L$ feature maps $\{z^l\}_{l=1}^{L}$, with height $H_l$ and width $W_l$ at scale $l$, and reference points, which are the positions of the grid centers of the feature maps, as inputs. It outputs enhanced feature maps with the same resolution as the input. The input feature maps are first added with fixed encoded positional [35] and learnable level information [36] to disambiguate spatial and scale positions, and then projected to the query feature map $z_q$ via a linear layer. The feature maps also generate the value feature maps $z_v$ with a linear layer but without encoding. The query feature map $z_q$ and the value feature map $z_v$, together with the pre-generated reference points, are sent to the Multi-Scale Deformable Attention (MSDA) layer to enhance spatially adaptive features, followed by a Feed-Forward Network (FFN). In summary, the encoder supplements the semantic information of the input feature maps via the embedded attention layer and FFN.
Multi-Scale Deformable Attention: The MSDA layer sums the selected entities at sampling locations in the value feature map $z_v$ with attention weights predicted by each corresponding query entity. These attention weights $W$ are a linear projection of the query features $z_q$ followed by a softmax operator along the scale and sampling-point dimensions. A single query entity thus only needs $N_h N_l N_p$ attention weights, representing the significance of the selected value features at different attention heads, scales, and points. The selections are decided by the sampling locations, which are the sum of the reference points $p$ and the sampling offsets $\Delta p$, the latter being an embedding of the query features. At each (float) sampling location, the selected value feature is bilinearly interpolated, which keeps the sampling accurate and makes the offset predictor trainable. With the weights, sampling locations, and selected value features prepared, the $q$-th element of the separate output feature $z^{so,h} \in \mathbb{R}^{N_q \times c_v}$ ($N_q = \sum_{l=1}^{L} H_l W_l$, $c_v$ is the number of channels) at attention head $h$ (out of $N_h$ heads in total) is
$$z^{so,h}_q = \sum_{p=1}^{N_p} \sum_{l=1}^{L} W_{plhq}\, v_{p_{ql} + \Delta p_{plhq}}, \qquad (1)$$
where $p$, $q$, $l$ and $h$ index the sampling offsets, the elements of the deformable attention feature $z^o$, the scale of the value $v$, and the attention head, respectively. $W_{plhq}$ is a value from the weight tensor $W \in \mathbb{R}^{N_q \times N_h \times L \times N_p}$; $p \in \mathbb{R}^{N_q \times L \times 2}$ and $\Delta p \in \mathbb{R}^{N_q \times N_h \times L \times N_p \times 2}$ are the reference points and sampling offsets, with $p_{ql}$ and $\Delta p_{plhq}$ denoting the position of a single reference point and one of its corresponding sampling offsets, respectively. The separate output features from the $N_h$ attention heads are projected to the $q$-th element of the final output deformable attention feature $z^o$ by a linear layer,
$$z^o_q = \sum_{h=1}^{N_h} W'_h\, z^{so,h}_q, \qquad (2)$$
where $W'_h \in \mathbb{R}^{c \times c_v}$ denotes the learnable weight for the $h$-th attention head.
Proposed Feature Enhancement Module: The features are enhanced by the self-attention mechanism, which supplements spatially adaptive features across multiple scales. The proposed module
simply consists of convolution/deconvolution and normalization layer pairs ahead of and after the deformable encoder and a final feature fusion step, as illustrated in Figure 3. In this module, input feature maps from the backbone are first upsampled with deconvolution layers or encoded with a convolution layer to generate the multi-scale feature maps $\{z^l\}_{l=1}^{3}$. They are followed by group normalization to prevent the Internal Covariate Shift (ICS) that might be induced by the subsequent linear operations in the deformable encoder. The encoder yields enhanced multi-scale feature maps. The enhanced features are upsampled to keep the resolution at $(H/4, W/4)$ for accurate detection. They are normalized via L2Norm [22] before concatenation along the channel dimension, which makes the features at different scales contribute equally to the final feature representation fed into the detection head. The concatenated features are compressed along the channel dimension to reduce the network parameters. The output feature maps can be fed into the detection head used in SSD, CSP, etc. Apart from this standard structure, the use of convolution/deconvolution layers can be adjusted according to the resolution of the input, and the concatenation step can be removed if predictions are made at separate levels.
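To make the mechanism concrete, here is a heavily simplified, single-scale PyTorch sketch of the deformable attention in Eqs. (1)-(2); the layer names, shapes, and offset normalization are our own simplifications, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

class SimpleDeformableAttention(torch.nn.Module):
    def __init__(self, channels=256, n_points=4):
        super().__init__()
        self.n_points = n_points
        self.offsets = torch.nn.Linear(channels, 2 * n_points)     # sampling offsets Delta p
        self.weights = torch.nn.Linear(channels, n_points)         # attention weights W
        self.value_proj = torch.nn.Linear(channels, channels)
        self.out_proj = torch.nn.Linear(channels, channels)        # W' in Eq. (2)

    def forward(self, feat):                      # feat: (B, C, H, W)
        B, C, H, W = feat.shape
        q = feat.flatten(2).transpose(1, 2)       # queries, (B, H*W, C)
        v = self.value_proj(q).transpose(1, 2).reshape(B, C, H, W)
        # reference points (grid centers) in normalized [-1, 1] coordinates
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                                indexing="ij")
        ref = torch.stack([xs, ys], dim=-1).reshape(1, H * W, 1, 2).to(feat)
        offsets = self.offsets(q).reshape(B, H * W, self.n_points, 2) / max(H, W)
        weights = self.weights(q).softmax(dim=-1)                   # (B, H*W, Np)
        grid = (ref + offsets).clamp(-1, 1)                         # sampling locations
        sampled = F.grid_sample(v, grid, align_corners=True)        # (B, C, H*W, Np)
        out = (sampled * weights.unsqueeze(1)).sum(-1)              # weighted sum over points
        return self.out_proj(out.transpose(1, 2)).transpose(1, 2).reshape(B, C, H, W)

x = torch.randn(1, 256, 16, 32)
print(SimpleDeformableAttention()(x).shape)       # torch.Size([1, 256, 16, 32])
```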
Training: For anchor-free cases, namely CSP, the loss function follows [22]. The overall loss consists of three parts, as in Equation (3), where $L_c$, $L_h$ and $L_o$ stand for the center heatmap loss, height map loss and offset loss. The weights $\lambda_c$, $\lambda_s$ and $\lambda_o$ are set to 0.01, 1 and 0.1 [22]. For anchor-based cases, namely SSD or ALFNet, the multi-task loss function is formulated with two objectives, as in Equation (4) [21], where $\lambda_{cls}$ is experimentally set to 0.01 in the following experiments.
$$L_{af} = \lambda_c L_c + \lambda_s L_h + \lambda_o L_o \qquad (3)$$
$$L_{ab} = \lambda_{cls} L_{cls} + L_{loc} \qquad (4)$$
Inference: For anchor-free single-stage methods, the predicted width is the height multiplied by the uniform aspect ratio 0.41 [22]. If not specified, bounding boxes with scores above 0.01 are kept and merged by Non-Maximum Suppression (NMS) with the IoU threshold of 0.5.
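A sketch of the anchor-free decoding step just described (center scores, heights and offsets turned into boxes with the fixed 0.41 aspect ratio, the 0.01 score threshold, and NMS at IoU 0.5); the tensor layout and the log-height parameterization are our own assumptions.

```python
import torch
from torchvision.ops import nms

def decode_csp(center_scores, log_heights, offsets, stride=4, score_thr=0.01, iou_thr=0.5):
    """center_scores, log_heights: (H, W); offsets: (2, H, W). Returns kept boxes and scores."""
    ys, xs = torch.where(center_scores > score_thr)
    scores = center_scores[ys, xs]
    h = torch.exp(log_heights[ys, xs]) * stride            # assumed log-height parameterization
    w = 0.41 * h                                            # fixed pedestrian aspect ratio
    cx = (xs.float() + offsets[0, ys, xs] + 0.5) * stride
    cy = (ys.float() + offsets[1, ys, xs] + 0.5) * stride
    boxes = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1)
    keep = nms(boxes, scores, iou_thr)
    return boxes[keep], scores[keep]

boxes, scores = decode_csp(torch.rand(84, 112), torch.randn(84, 112), torch.zeros(2, 84, 112))
print(boxes.shape, scores.shape)
```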
4 Experiments
4.1 Settings
Datasets: The proposed detector is evaluated on two commonly used public pedestrian datasets: Caltech [44] and Citypersons [45]. The Caltech dataset consists of approximately 10 hours of 480x640 video taken in a single urban city. The standard training set contains about 250k frames with 350k bounding boxes. In our experiments, the training data augmented by 10 folds, containing 42782 images with 13674 persons, and the standard test set, containing 4024 images with the corresponding new annotations [46] and fixed aspect-ratio bounding boxes [47], are used. The Citypersons training set was recorded across 18 different cities, 3 seasons and various weather conditions, and contains 19654 persons in 2975 high-resolution (1024x2048) images. The validation set contains 500 images across 3 cities.
Training Details: If not specified, the ResNet50/VGG16 pre-trained on ImageNet, Adam with moving average weights [48] and step learning rate schedule are applied. Data augmentation techniques including random horizontal flips with a probability of 0.5 and scaling are applied. For Caltech, additional random color distortion and cropping are implemented. The input images are rescaled to 336x448 and 640x1280 for Caltech and Citypersons datasets respectively. The detectors are trained with a single NVIDIA GeForce RTX 3090 GPU for 10 and 75 epochs with batch size 16 and 4 on Caltech and Citypersons respectively using the anchor-free CSP detection head. The base learning rate is 0.5e-4 and decreased by a factor of 0.5 after 6 and 60 epochs respectively. Initialization is performed with a randomly chosen and fixed seed. Tools provided by [47] are used in the experiments.
Metrics: The Log-average Miss Rate (denoted as MR$^{-2}$ or miss rate in this paper) over False Positives Per Image (FPPI) in the range $[10^{-2}, 10^{0}]$ is calculated over the Reasonable, Small, Heavy, and All subsets defined in Table 1. The lower the MR$^{-2}$, the better.
4.2 Ablation Experiments
Effectiveness of Enhanced Feature Maps: For generality, the proposed module is applied to three commonly used single-stage structures in pedestrian detection: anchor-based, progressive refinement, and anchor-free.
For anchor-based single-stage methods, like SSD300 [26], Table 2 shows that the proposed module shows stable and significant improvement in all three subsets with an IoU threshold of 0.5 in NMS.
For refined anchors, we append two Convolutional Predictor Blocks [21] after the feature maps at the last three stages of the backbone and an extra stage. The second block refines the coarse anchors predicted by the first block. Table 3 demonstrates that even though the anchors are refined progressively to remedy the lack of proposal regions, the separate enhancement at each level can bring improvement in certain subsets (level 0 improves Reasonable and All subsets, level 1, 2 improves Heavy subset). The combination of multi-level inputs brings in stable improvement in all the subsets.
For anchor-free methods, we evaluate the influence of the proposed module on the baseline CSP [22] detector on Caltech (Table 4) and Citypersons (Table 5). Results on both datasets indicate that, with the enhanced feature maps, the miss rates decrease significantly in the Reasonable, Heavy and All subsets; in particular, the Heavy subset sees a decrease of up to 7.9%, and the Reasonable subset up to 3.1%. The additional enhancement module increases the inference time; however, it contains fewer learnable parameters, as shown in Table 6. As the combination with CSP achieves the best results, subsequent experiments follow this implementation.
According to the above, the proposed module is effective for general single-stage detectors on pedestrian datasets, owing to the multi-scale deformable self-attention mechanism to enhance spatially adaptive features across levels.
Feature Map Scales: Different combinations of the multi-scale feature maps $\{z^l\}_{l=1}^{3}$ with three downsampling ratios (1/4, 1/8 and 1/16) are compared in Table 7. In this comparison, only $z^o_3$, which has the same resolution as $z^3$, is upsampled to $(H/4, W/4)$ and fed into the detection head, without intermediate concatenation and L2Norm. Results show that feature maps with ratios 1/4, 1/8, and 1/16 for the three scales produce the best performance, which is obtained by upsampling $c_1$ and $c_2$ by two times while maintaining the scale of $c_3$.
Enhanced Feature Map Scales: Fixing the downsampling ratios of $\{z^l\}_{l=1}^{3}$ as 1/4, 1/8, and 1/16, we feed different collections of the multi-scale enhanced feature maps $\{z^o_l\}_{l=1}^{3}$ to the detection head. Since the deformable encoder keeps the resolutions of the input feature maps, the downsampling ratios for $\{z^o_l\}_{l=1}^{3}$ are 1/4, 1/8 and 1/16, respectively. All the enhanced feature maps are first upsampled to $(H/4, W/4)$ if needed, followed by normalization when multiple feature maps are utilized. They are then processed in three ways: 1. Cat: concatenate them along the channel dimension, followed by a compression layer that reduces the number of channels to 256. 2. Add: take the element-wise sum of the enhanced feature maps. 3. Sep: no fusion across scales; send them separately to the detection head, which doubles the number of predictions. Table 8 compares the various strategies for fusing the multi-scale enhanced feature maps. It shows that concatenation followed by L2Norm produces the overall best results in both the Reasonable and Heavy subsets on the Caltech test set.
Choice of the Normalization Method Applied to $\{c^l\}_{l=1}^{3}$: As Table 8 shows, GN performs best in the Reasonable subset while L2N performs best in the Heavy subset. Considering that the miss rate of GN is 11.6% lower in the Reasonable subset and only 1.2% higher in the Heavy subset compared to L2N, GN is utilized in our experiments before the encoder, as presented in Figure 3.
Number of Encoders: Based on the settings in the last part, different numbers of encoders are tested. These encoders are connected in series. Table 9 shows that although more encoders can provide higher-level semantic information, the best result is observed when only a single encoder is applied. This phenomenon supports the design of BoTNet [41] to some extent and indicates that adopting the whole set of six encoders and decoders in (deformable) DETR may not be suitable for specific object detection tasks. A single encoder can also work effectively with the least number of learnable parameters, which prevents overfitting.
4.3 Comparison with the state of the art
Table 10 and Table 11 show that the proposed module achieves the lowest miss rates in Reasonable and Heavy subsets and leading performance in the All subset on both datasets among presented single-stage detectors.
For the Caltech dataset, the lowest miss rate (3.7%) of the proposed detector in the Reasonable subset is 0.2% smaller than that of the two-stage KGSNet presented in the upper part of Table 10. With pretraining on the Citypersons dataset, the miss rates on all three subsets of the Caltech dataset are reduced significantly and reach the lowest compared to other pretrained detectors, as shown in the bottom part of Table 10. For the Citypersons dataset, the two proposed detectors even outperform the competitive two-stage detectors in the Reasonable and especially the Heavy subset, with a miss rate of 38.6%, which is 1.1% lower than that of the best two-stage detector.
Overall, with the enhanced spatially adaptive and multi-scale features, the gap between single- and two-stage detectors in the Reasonable and Heavy subsets on different detectors has been narrowed. Surprisingly, the proposed single-stage method outperforms the accurate two-stage methods on certain pedestrian datasets, such as the Citypersons dataset. It should be noted that two-stage methods usually produce overall better accuracy than single-stage methods with the advantage of the region proposal network and refined bounding boxes, at the expense of inference time. Apart from accuracy, fast inference also matters for pedestrian detection in practical scenes. The combination of CSP and the proposed module has a simple structure, which is easy to implement and effective. In contrast, the leading two-stage KGSNet, for example, takes advantage of the additional proposal
generation network, the refined bounding boxes, the key-point detector and the super-resolution network. With these components, the inference speed of KGSNet is 5.9 FPS and 3.2 FPS (Titan X GPU, not including the time of using ALFNet to generate the candidate proposals) on Caltech and Citypersons datasets [18] while ours achieves 29.5 FPS and 6.8 FPS (RTX 3090 GPU) respectively. Therefore, the enhanced single-stage pedestrian detector is cost-effective with fast inference and competitive accuracy compared to complicated two-stage methods.
5 Conclusion
The paper proposes a module to enhance spatial and multi-scale features based on a single encoder of the deformable vision transformer to improve the detection accuracy of single-stage pedestrian detectors with fast inference. This module is effective on commonly used single-stage structures, including (progressively refined) anchor-based and anchor-free cases. With the CSP detection head, more than 40% of the parameters are reduced in the detection neck compared to CSP; however, this combination still achieves the best results in Reasonable and Heavy subsets among presented single-stage detectors on both Caltech and Citypersons datasets. Utilizing pre-training, the miss rates for these subsets can be decreased to 2.6% and 28.0%, respectively, which are far better than other single-stage methods and comparable to two-stage methods. The proposed method outperforms SOTA two-stage detectors in the Heavy subset by 1.1% on Citypersons with slightly decreased inference speed. This demonstrates that single-stage detectors can be improved if spatially adaptive and multi-scale features are jointly adopted, making them cost-effective and promising. It should be mentioned that false positives appear if the attention points of a negative reference point extend to the target areas. Although the performance has been improved with the current module, the generation of attention weights and sampling locations can be carefully designed to suppress the false positives for further improvements. | 1. What is the main contribution of the paper in the field of pedestrian detection?
2. What are the strengths of the proposed approach, particularly in terms of novelty and evaluation?
3. What is the reviewer's concern regarding the paper's contribution, and how does it relate to the issue of occlusion?
4. Are there any limitations in the paper aside from the concern raised by the reviewer? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper addresses the crucial problem in computer vision: pedestrian detection. The issue of faster inference (i.e., real-time performance) with competitive accuracy is still a major problem as stated by the author(s). To solve this major issue/problem, the author(s) proposes a single-stage anchor-free pedestrian detector with the deformable vision transformer to balance the issue of faster inference and high accuracy. Together with strong experiments, solid baselines, and evaluation criteria(s), the author(s) achieves competitive results on well-known datasets of Caltech and Citypersons.
Strengths And Weaknesses
++ Novelty
The task formulation is concise, convincing, solid, and novel. A seemingly reasonable approach has been conducted in this paper. Compared to the existing single/two-stage detectors, the proposed single-stage detector based on the feature extractor and CSP strategy achieves competitive results.
To the best of my knowledge, this simple strategy of incorporating the vision transformers on top of the backbone of pedestrian detection tasks is a solid contribution to the computer vision community.
++ Evaluation
The experiments are strong, sufficient, and very well presented. The analysis presented here showcases the author's effort in noting down the small details while performing such extensive experiments with solid evaluation criteria(s).
The experimental evaluations demonstrate the effectiveness of the complete approach and showcase its practical value. The detailed analyses clarify the contribution of each component.
++ Clarity
The proposed solution, a multi-scale deformable feature extractor based on a deformable vision transformer, is theoretically sound; the well-written manuscript clearly describes the improvements and adequately contextualizes the contributions.
The manuscript also provides a good description of related work and background, motivating the problem. Additionally, it provides a thorough description of the part of the pipeline it deals with.
Questions
There is a major concern I have about this manuscript's contribution: although the trade-off between higher accuracy and faster inference has been taken seriously by the author(s), little information has been provided on the occlusion part. Well, in general, if we narrow it down to common scenarios, we see occluded person(s) in a lot more cases, and this should be a concern to raise, whereas except for the related work, I wasn't able to see major information on this issue. I appreciate the author's efforts in providing very clear information on their contribution; could I please ask the authors to also provide some insight on this case.
Limitations
There's a significant limitation in terms of providing insight on the occlusions/occluded person(s). However, I do not see any major limitations except the mentioned question, the proposed solution for a trade-off between high accuracy and faster inference using Vision Transformer is a pretty solid contribution, in my view to the computer vision community. |
NIPS | Title
Effectiveness of Vision Transformer for Fast and Accurate Single-Stage Pedestrian Detection
Abstract
Vision transformers have demonstrated remarkable performance on a variety of computer vision tasks. In this paper, we illustrate the effectiveness of the deformable vision transformer for single-stage pedestrian detection and propose a spatial and multi-scale feature enhancement module, which aims to achieve the optimal balance between speed and accuracy. Performance improvement with vision transformers on various commonly used single-stage structures is demonstrated. The design of the proposed architecture is investigated in depth. Comprehensive comparisons with state-of-the-art single- and two-stage detectors on different pedestrian datasets are performed. The proposed detector achieves leading performance on Caltech and Citypersons datasets among single- and two-stage methods using fewer parameters than the baseline. The log-average miss rates for Reasonable and Heavy are decreased to 2.6% and 28.0% on the Caltech test set, and 10.9% and 38.6% on the Citypersons validation set, respectively. The proposed method outperforms SOTA two-stage detectors in the Heavy subset on the Citypersons validation set with considerably faster inference speed.
1 Introduction
Pedestrian detection is a popular task subordinate to object detection in computer vision. This task aims to locate and classify pedestrians in images or videos accurately. Pedestrian detection is very important as it serves as the prerequisite of various vision tasks [1], such as human-centric tasks (person re-identification [2, 3], person search [4], human pose estimation [5] etc.) and more generic multi-object tracking [6]. It has been applied to autonomous driving [7, 8], video surveillance [9] and action tracking. In this paper, we focus on the detection based on RGB images.
Pedestrian detection suffers from significant occlusion and varying scales. Intra- and inter-class occlusion occur when a pedestrian is occluded by other pedestrians or objects like cars, bicycles etc. Both significantly reduce the discriminative features and destroy the regular shape of pedestrians. For varying scales, large pedestrians tend to have more informative features. Still, they are difficult to fully extract from a vast region, while features of small targets are compact but relatively ambiguous with less preserved details. In summary, the fluctuating amount and varying shape of effective features are the core problems. They challenge the capabilities of feature extraction modules, which act as a long-standing bottleneck in pedestrian detection.
To deal with these problems, two-stage methods [10–15] based on Fast [16] and Faster R-CNN [16] have pervaded pedestrian detection tasks owing to the high detection accuracy. These methods first make coarse predictions of targets via the Region Proposal Network (RPN), then refine the bounding boxes and predict the final scores based on the features inside the proposal regions. In addition to methods for general object detection, pedestrian detectors take advantage of unique characteristics of targets, such as the mask of visible parts [17, 10] and key points of human bodies [18]. However, the inference speed of these methods is limited by the repeated predictions, which makes them hard to apply in real-world scenes.
To achieve faster inference, single-stage approaches, which only make one round prediction, are developed [19] and applied to pedestrian detection [20–24]. However, they suffer from decreased detection accuracy. The miss rates of two-stage methods in the Reasonable subset on Caltech are reduced to less than 4% [14, 10, 25, 18], while those for single-stage methods are larger than 4.5% [22]. For the Citypersons validation set, the miss rates for the former are less than 40% in the Heavy subset, which is much lower than the latter (42% [24]). In this paper, we aim to improve the detection accuracy, especially in Reasonable and Heavy subsets, and to narrow the gap between single- and two-stage methods with fast inference.
Typical single-stage detectors use anchors (SSD [26]) or are anchor-free (CSP [22]). The former generates rectangular bounding boxes with different aspect ratios and scales centered at each pixel of the feature maps at certain levels. Anchor scales are designed to be smaller at lower levels to facilitate the detection of small objects. These methods predict the offsets w.r.t. the upper left position, height and width of the anchor. Taking CSP as an example, the latter method only predicts the logarithm height and offsets w.r.t. centers of each pixel. For anchor-based single-stage detectors, ALFNet [21] refines anchors progressively with stacked prediction blocks to remedy the lack of proposal regions.
In the past year, most research has focused on the fusion of representative features [27–32] to improve single-stage methods. For example, [30] enhances features via increasing semantic information at a low level and enriches the localization information at a high level. Similarly, [32] fuses the feature maps with different scales in adequate proportions. [29, 31] also explore new strategies to aggregate multi-level features. These feature enhancements are mainly performed along the dimension of feature level due to the intrinsically unbalanced feature information between shallow and deep feature levels. However, this unbalance also exists in two-stage detectors. As such, this is not the particular reason for the poorer accuracy of single-stage methods.
The general architecture of single- and two-stage detectors are compared in Figure 1. Assuming that the training strategies and the detection head are the same for both methods, the difference in the architecture lies in the information fed into the detection head. For two-stage methods, both positions and features of the proposal regions are fed into the detection head. These proposals contain potential pedestrians. Thereby, the detection head classifies spatially target-focused features with fewer background interruptions and refines the bounding boxes by predicting small offsets from the proposal positions. For single-stage methods, each pixel in the feature map serves as the ‘proposal region’ with no pre-estimated positions. The receptive fields of these pixels share the same size, which may be too small to include sufficient information for large targets or so large that the background information overwhelms the useful features. This is more challenging for the classifier compared to two-stage methods. Additionally, single-stage methods have to regress from scratch, which is more difficult than simple refinement. Thus, the lack of spatially target-focused feature representation and the prediction of bounding boxes from scratch are the two key bottlenecks hindering the improvement of single-stage detectors.
To make the features fed into the detection head concentrate on the targets or other helpful information automatically without the assistance of proposal regions, we take advantage of vision transformers in this paper. Vision transformers describe the pairwise dependency of each entity in the feature map with attention weights. The output weighted averaged feature is the adaptive aggregation of important entities (with higher attention weights) while the disturbing information (with lower attention weights) is suppressed. Using such attention mechanism on top of the backbone enables the single-stage detector to supplement spatially filtered features easily for subsequent classification and regression. In this case, the modified detector makes the best use of the fast inference originating from single-stage methods and more effective features. Our main contributions are as follows:
• Demonstrate the effectiveness of the deformable vision transformer in improving the accuracy of commonly used single-stage detectors on pedestrian datasets.
• Extend the application of vision transformers on top of the backbone in pedestrian detection tasks.
• Achieve the best performance among single-stage detectors on the Caltech test set and Citypersons validation set while maintaining fast inference and reducing the number of parameters.
• Narrow the gap of detection accuracy between the single- and two-stage methods in pedestrian detection.
2 Related works
Currently, vision transformers are used to establish general-purpose backbones (ViT [33] and Swin Transformer [34]) or stack on top of the backbone (DETR [35]). Since we focus on pedestrian detection, this paper explores the latter case. DETR consists of the convolution backbone, six encoders and decoders and the prediction head. It is an inspiring end-to-end detector but is memory-consuming. DETR requires massive memory to store the self-attention weights within each Multi-Head Self-Attention (MHSA) layer. The memory cost is linear in the number of attention heads and quadratic in the number of pixels in the down-sampled feature map. Additionally, first- and second-order moment estimates in optimization introduce further memory cost in the training procedure. In pedestrian detection, more attention heads and relatively large down-sampled feature maps are preferred to enhance detection accuracy, especially for small targets. This results in memory explosion using DETR. To this end, deformable DETR [36] is proposed. It only needs the attention weights at several sampling locations rather than at each pixel in the feature map. The memory cost of the attention weights is linear in the number of pixels, which makes training with high resolutions possible. Experiments show that the deformable DETR outperforms the Faster R-CNN [37] and DETR on the COCO 2017 validation set [38].
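A back-of-the-envelope calculation illustrates this scaling; the stride, head count, and point count below are assumed values chosen only to make the comparison concrete, not settings from the paper.

```python
# Illustrative arithmetic only; the image size matches Citypersons, the other numbers are assumptions.
H, W, stride = 1024, 2048, 32
num_queries = (H // stride) * (W // stride)                 # 32 * 64 = 2048 tokens
num_heads, num_points = 8, 4

dense_weights = num_heads * num_queries ** 2                # full self-attention: ~33.6M weights per layer
deformable_weights = num_heads * num_queries * num_points   # deformable attention: ~65.5K weights per layer

print(dense_weights, deformable_weights)                    # 33554432 65536
```

Even at this coarse 1/32 resolution, dense self-attention needs tens of millions of weights per layer for one image, while deformable attention needs only tens of thousands; the gap widens further at the higher resolutions preferred for small pedestrians.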
So far, vision transformers show great potential, but they have rarely been applied to pedestrian detection in the form of DETR or its variants. This is because it has been observed that they perform worse than the commonly used Faster R-CNN on the CrowdHuman dataset [39] and require tenfold training time [40]. Although [40] proposed using dense object queries and the rectified attention field to enhance scale-adaptive feature extraction in the decoding phase, the modified deformable DETR still shows a limited advantage over the traditional Faster R-CNN. This implies that rigidly putting the whole six encoder-decoder pairs into the pedestrian detector may not be cost-effective. As an example, BoTNet [41] only replaces the convolution layers in the residual bottleneck blocks of the last stage of ResNet [42] with MHSA layers, yet it produces strong performance on the ImageNet validation set [43]. Inspired by this and the limitation above, our work only utilizes a single encoder of the deformable vision transformer as an adaptive feature extractor and applies it to commonly used single-stage detectors for better detection accuracy and fast inference.
3 Method
Deformable Vision Transformer Encoder: The deformable vision transformer encoder (Figure 2) takes as inputs the $L$ feature maps $\{z_l\}_{l=1}^{L}$, with height $H_l$ and width $W_l$ at scale $l$, and reference points, which are the positions of the grid centers of the feature maps. It outputs the enhanced feature maps with the same resolution as the input. The input feature maps are first added with fixed encoded positional [35] and learnable level information [36] to disambiguate spatial and scale positions, then projected to the query feature map $z_q$ via a linear layer. The feature maps also generate the value feature map $z_v$ with a linear layer but without encoding. The query feature map $z_q$ and the value feature map $z_v$, together with the pre-generated reference points, are sent to the Multi-Scale Deformable Attention (MSDA) layer to enhance spatially adaptive features, followed by a Feed-Forward Network (FFN). In summary, the encoder supplements the semantic information of the input feature maps via the embedded attention layer and FFN.
Multi-Scale Deformable Attention: The MSDA layer sums the selected entities at sampling locations in the value feature map $z_v$ with attention weights predicted by each corresponding query entity. These attention weights $W$ are the linear projection of the query features $z_q$ followed by a softmax operator along the scale and sampling-point dimensions. For a single query entity, only $N_h N_l N_p$ attention weights are needed, representing the significance of the selected value features at different attention heads, scales, and points. Selections are decided by the sampling locations, which are the sum of the reference points $p$ and the sampling offsets $\Delta p$, the latter being an embedding of the query features. At each fractional sampling location, the selected value feature is bilinearly interpolated, which keeps the sampling accurate and the offset predictor trainable. With the weights, sampling locations, and selected value features prepared, the $q$-th element of the separate output feature $z^{so,h} \in \mathbb{R}^{N_q \times c_v}$ (where $N_q = \sum_{l=1}^{L} H_l W_l$ and $c_v$ is the number of channels) at attention head $h$ (out of $N_h$ heads in total) is

$z^{so,h}_q = \sum_{p=1}^{N_p} \sum_{l=1}^{L} W_{plhq} \, v_{p_{ql} + \Delta p_{plhq}}$   (1)

where $p$, $q$, $l$ and $h$ index the sampling offsets, the elements of the deformable attention feature $z^o$, the scale of the value $v$, and the attention head, respectively. $W_{plhq}$ is a value from the weight tensor $W \in \mathbb{R}^{N_q \times N_h \times L \times N_p}$; $p \in \mathbb{R}^{N_q \times L \times 2}$ and $\Delta p \in \mathbb{R}^{N_q \times N_h \times L \times N_p \times 2}$ are the reference points and sampling offsets; $p_{ql}$ and $\Delta p_{plhq}$ denote the position of a single reference point and one of its corresponding sampling offsets, respectively. The separate output features from the $N_h$ attention heads are projected to the $q$-th element of the final output deformable attention feature $z^o$ by a linear layer expressed as

$z^o_q = \sum_{h=1}^{N_h} W'_h \, z^{so,h}_q$   (2)

where $W'_h \in \mathbb{R}^{c \times c_v}$ denotes the learnable weight for the $h$-th attention head.
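The aggregation in Eqs. (1)-(2) can be sketched with plain PyTorch as follows. This is a slow, loop-based illustration under simplifying assumptions (value maps are kept per level and per head, and the offsets are assumed to be already normalized to [0, 1] coordinates); the deformable DETR reference implementation uses an optimized CUDA kernel instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ms_deform_attn(value_maps, ref_points, offsets, weights, out_proj):
    """Loop-based sketch of Eqs. (1)-(2): attention-weighted sum of bilinearly sampled value features.

    value_maps: list of L tensors of shape [B, n_heads, c_v, H_l, W_l]
    ref_points: [B, N_q, L, 2]                 reference points, normalized (x, y) in [0, 1] (assumed)
    offsets:    [B, N_q, n_heads, L, N_p, 2]   sampling offsets, assumed already normalized
    weights:    [B, N_q, n_heads, L, N_p]      attention weights (softmax over the L * N_p axis)
    out_proj:   nn.Linear(n_heads * c_v, c)    plays the role of the per-head projections W'_h in Eq. (2)
    """
    B, Nq, n_heads, L, Np, _ = offsets.shape
    head_outputs = []
    for h in range(n_heads):
        acc = 0.0
        for l in range(L):
            v = value_maps[l][:, h]                                       # [B, c_v, H_l, W_l]
            loc = ref_points[:, :, l].unsqueeze(2) + offsets[:, :, h, l]  # sampling locations [B, N_q, N_p, 2]
            grid = 2.0 * loc - 1.0                                        # map [0, 1] -> [-1, 1] for grid_sample
            sampled = F.grid_sample(v, grid, mode="bilinear",
                                    padding_mode="zeros", align_corners=False)  # [B, c_v, N_q, N_p]
            w = weights[:, :, h, l].unsqueeze(1)                          # [B, 1, N_q, N_p]
            acc = acc + (sampled * w).sum(dim=-1)                         # Eq. (1): sum over points, then levels
        head_outputs.append(acc.transpose(1, 2))                          # [B, N_q, c_v]
    return out_proj(torch.cat(head_outputs, dim=-1))                      # Eq. (2): combine heads into z^o

# Tiny shape-only demo with random tensors.
B, n_heads, c_v, L, Nq, Np, c = 1, 2, 4, 3, 5, 4, 8
maps = [torch.randn(B, n_heads, c_v, 16 // 2 ** l, 16 // 2 ** l) for l in range(L)]
ref = torch.rand(B, Nq, L, 2)
off = 0.01 * torch.randn(B, Nq, n_heads, L, Np, 2)
w = torch.softmax(torch.randn(B, Nq, n_heads, L * Np), dim=-1).view(B, Nq, n_heads, L, Np)
z_o = ms_deform_attn(maps, ref, off, w, nn.Linear(n_heads * c_v, c))      # [B, N_q, c]
```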
Proposed Feature Enhancement Module: The features are enhanced owing to the self-attention mechanism, which supplements the spatially adaptive features across multiple scales. The proposed module
simply consists of convolution/deconvolution and normalization layer pairs ahead of and after the deformable encoder and a final feature fusion step, as illustrated in Figure 3. In this module, input feature maps from the backbone are first upsampled with deconvolution layers or encoded with a convolution layer to generate the multi-scale feature maps $\{z_l\}_{l=1}^{3}$. They are followed by group normalization to prevent the Internal Covariate Shift (ICS) that might be induced by subsequent linear operations in the deformable encoder. The encoder yields enhanced multi-scale feature maps. The enhanced features are upsampled to keep the resolution at (H/4, W/4) for accurate detection. They are normalized via L2Norm [22] before concatenation along the channel dimension. This makes the features at different scales contribute equally to the final feature representation fed into the detection head. The concatenated features are compressed along the channel dimension to reduce the network parameters. The output feature maps can be fed into the detection head used in SSD, CSP etc. Beyond this standard structure, the use of convolution/deconvolution layers can be adjusted according to the resolution of the input, and the concatenation step can be removed if predictions are made at separate levels (a rough end-to-end sketch of this module is given below).
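A rough end-to-end sketch of the module could look as follows; the backbone channel counts (ResNet-50-like stages, with c1 at stride 1/8 and c2, c3 at 1/16) and the layer hyper-parameters are assumptions, and an identity function stands in for the single deformable encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEnhancement(nn.Module):
    """Sketch of the module in Figure 3; `encoder` is the single deformable transformer encoder."""
    def __init__(self, encoder, in_channels=(512, 1024, 2048), dim=256):
        super().__init__()
        self.encoder = encoder
        # Deconvolutions upsample c1 and c2 by 2x; a 1x1 convolution keeps the scale of c3.
        self.proj = nn.ModuleList([
            nn.ConvTranspose2d(in_channels[0], dim, kernel_size=4, stride=2, padding=1),
            nn.ConvTranspose2d(in_channels[1], dim, kernel_size=4, stride=2, padding=1),
            nn.Conv2d(in_channels[2], dim, kernel_size=1),
        ])
        self.gn = nn.ModuleList([nn.GroupNorm(32, dim) for _ in range(3)])
        self.compress = nn.Conv2d(3 * dim, dim, kernel_size=1)

    def forward(self, c1, c2, c3):
        z = [gn(p(c)) for p, gn, c in zip(self.proj, self.gn, (c1, c2, c3))]   # ratios 1/4, 1/8, 1/16
        zo = self.encoder(z)                          # enhanced maps, same resolutions as the inputs
        target = zo[0].shape[-2:]                     # (H/4, W/4)
        up = [F.interpolate(m, size=target, mode="bilinear", align_corners=False) for m in zo]
        up = [F.normalize(m, p=2, dim=1) for m in up] # L2Norm so each scale contributes equally
        return self.compress(torch.cat(up, dim=1))    # fused map fed to the detection head

# Demo with a 336x448 input: c1 at 1/8, c2 and c3 at 1/16 (dilated last stage, as in CSP-style backbones).
c1, c2, c3 = torch.randn(1, 512, 42, 56), torch.randn(1, 1024, 21, 28), torch.randn(1, 2048, 21, 28)
out = FeatureEnhancement(encoder=lambda z: z)(c1, c2, c3)   # [1, 256, 84, 112]
```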
Training: For anchor-free cases, namely CSP, the loss function follows [22]. The overall loss consists of three parts, as in Equation (3), where $L_c$, $L_h$ and $L_o$ stand for the center heatmap loss, height map loss and offset loss. The weights for each loss, $\lambda_c$, $\lambda_s$ and $\lambda_o$, are set as 0.01, 1 and 0.1 [22]. For anchor-based cases, namely SSD or ALFNet, the multi-task loss function is formulated with two objectives as in Equation (4) [21], where $\lambda_{cls}$ is experimentally set as 0.01 in the following experiments.

$L_{af} = \lambda_c L_c + \lambda_s L_h + \lambda_o L_o$   (3)

$L_{ab} = \lambda_{cls} L_{cls} + L_{loc}$   (4)
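As a small illustration, the overall losses of Equations (3) and (4) are weighted sums of per-task terms; the individual terms are assumed to be computed elsewhere (e.g., a focal-style center loss and smooth-L1 height/offset losses as in CSP), and only the combination is shown here.

```python
import torch

def anchor_free_loss(L_c, L_h, L_o, lam_c=0.01, lam_s=1.0, lam_o=0.1):
    # Eq. (3): weighted sum of the center heatmap, height map and offset losses.
    return lam_c * L_c + lam_s * L_h + lam_o * L_o

def anchor_based_loss(L_cls, L_loc, lam_cls=0.01):
    # Eq. (4): classification term weighted by lambda_cls plus the localization term.
    return lam_cls * L_cls + L_loc

total = anchor_free_loss(torch.tensor(2.3), torch.tensor(0.8), torch.tensor(0.5))  # example values only
```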
Inference: For anchor-free single-stage methods, the predicted width is the height multiplied by the uniform aspect ratio 0.41 [22]. If not specified, bounding boxes with scores above 0.01 are kept and merged by Non-Maximum Suppression (NMS) with the IoU threshold of 0.5.
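The anchor-free decoding step described above could be sketched as follows; the tensor layouts and the exact scaling of the predicted log-heights and offsets are assumptions rather than the authors' code, but the fixed 0.41 aspect ratio, the 0.01 score threshold, and the 0.5 NMS IoU follow the text.

```python
import torch
from torchvision.ops import nms

def decode_csp_outputs(center_scores, log_heights, offsets, stride=4,
                       score_thr=0.01, iou_thr=0.5, aspect_ratio=0.41):
    """center_scores: [H, W] center heatmap (after sigmoid); log_heights: [H, W]; offsets: [2, H, W] (dy, dx)."""
    ys, xs = torch.nonzero(center_scores > score_thr, as_tuple=True)
    scores = center_scores[ys, xs]
    h = torch.exp(log_heights[ys, xs]) * stride           # assumed: log-heights predicted at feature-map scale
    w = h * aspect_ratio                                   # fixed pedestrian aspect ratio of 0.41
    cx = (xs.float() + offsets[1, ys, xs] + 0.5) * stride  # assumed offset convention: fraction of a cell
    cy = (ys.float() + offsets[0, ys, xs] + 0.5) * stride
    boxes = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1)
    keep = nms(boxes, scores, iou_thr)                     # merge overlapping boxes at IoU 0.5
    return boxes[keep], scores[keep]

boxes, scores = decode_csp_outputs(torch.rand(84, 112), torch.randn(84, 112), torch.zeros(2, 84, 112))
```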
4 Experiments
4.1 Settings
Datasets: The proposed detector is evaluated on two commonly used public pedestrian datasets: Caltech [44] and Citypersons [45]. The Caltech dataset consists of approximately 10 hours of 480x640 video taken in a single urban city. The standard training set contains about 250k frames with 350k bounding boxes. In our experiments, the training data augmented by 10 folds, containing 42782 images with 13674 persons, and the standard test set containing 4024 images, with the corresponding new annotations [46] and fixed aspect-ratio bounding boxes [47], are used. The Citypersons training set was recorded across 18 different cities, 3 seasons and various weather conditions, and contains 19654 persons in 2975 high-resolution (1024x2048) images. The validation set contains 500 images across 3 cities.
Training Details: If not specified, the ResNet50/VGG16 pre-trained on ImageNet, Adam with moving average weights [48] and step learning rate schedule are applied. Data augmentation techniques including random horizontal flips with a probability of 0.5 and scaling are applied. For Caltech, additional random color distortion and cropping are implemented. The input images are rescaled to 336x448 and 640x1280 for Caltech and Citypersons datasets respectively. The detectors are trained with a single NVIDIA GeForce RTX 3090 GPU for 10 and 75 epochs with batch size 16 and 4 on Caltech and Citypersons respectively using the anchor-free CSP detection head. The base learning rate is 0.5e-4 and decreased by a factor of 0.5 after 6 and 60 epochs respectively. Initialization is performed with a randomly chosen and fixed seed. Tools provided by [47] are used in the experiments.
Metrics: The Log-average Miss Rate (denoted as $\mathrm{MR}^{-2}$ or miss rate in this paper) over False Positives Per Image (FPPI) in the range $[10^{-2}, 10^{0}]$ is calculated over the Reasonable, Small, Heavy, and All subsets defined in Table 1. The lower the $\mathrm{MR}^{-2}$ the better.
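For completeness, one common way to compute the log-average miss rate over the $[10^{-2}, 10^{0}]$ FPPI range is sketched below, assuming nine evenly log-spaced reference points as in the standard Caltech evaluation protocol.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, num_points=9):
    """fppi, miss_rate: 1-D arrays tracing the detector's miss-rate-vs-FPPI curve, sorted by increasing fppi."""
    refs = np.logspace(-2.0, 0.0, num_points)                # evenly log-spaced reference FPPI values
    samples = []
    for r in refs:
        idx = np.where(fppi <= r)[0]
        # Take the miss rate at the largest FPPI not exceeding the reference point;
        # if no operating point reaches this FPPI, fall back to the largest observed miss rate.
        m = miss_rate[idx[-1]] if idx.size else miss_rate.max()
        samples.append(max(m, 1e-10))                        # avoid log(0)
    return float(np.exp(np.mean(np.log(samples))))

print(log_average_miss_rate(np.array([0.01, 0.1, 1.0]), np.array([0.30, 0.10, 0.04])))
```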
4.2 Ablation Experiments
Effectiveness of Enhanced Feature Maps: For generality, the proposed module is applied to three commonly used single-stage structures in pedestrian detection: anchor-based, progressive refinement, and anchor-free.
For anchor-based single-stage methods, like SSD300 [26], Table 2 shows that the proposed module yields stable and significant improvement in all three subsets with an IoU threshold of 0.5 in NMS.
For refined anchors, we append two Convolutional Predictor Blocks [21] after the feature maps at the last three stages of the backbone and an extra stage. The second block refines the coarse anchors predicted by the first block. Table 3 demonstrates that even though the anchors are refined progressively to remedy the lack of proposal regions, the separate enhancement at each level can bring improvement in certain subsets (level 0 improves the Reasonable and All subsets; levels 1 and 2 improve the Heavy subset). The combination of multi-level inputs brings stable improvement in all the subsets.
For anchor-free methods, we evaluate the influence of the proposed module on the baseline CSP [22] detector on Caltech (Table 4) and Citypersons (Table 5). Results from both datasets indicate that with the enhanced feature maps, the miss rates are decreased significantly in the Reasonable, Heavy and All subsets; in particular, the Heavy subset witnesses a decrease of up to 7.9%, and the Reasonable subset up to 3.1%. The additional enhancement module increases the inference time; however, it contains fewer learnable parameters, as shown in Table 6. As the combination with CSP achieves the best results, subsequent experiments follow this implementation.
According to the above, the proposed module is effective for general single-stage detectors on pedestrian datasets, owing to the multi-scale deformable self-attention mechanism, which enhances spatially adaptive features across levels.
Feature Map Scales: Different combinations of multi-scale feature maps $\{z_l\}_{l=1}^{3}$ with three downsampling ratios (1/4, 1/8 and 1/16) are compared in Table 7. In this comparison, only $z^o_3$, which has the same resolution as $z_3$, is upsampled to (H/4, W/4) and fed into the detection head without intermediate concatenation and L2Norm. Results show that feature maps with ratios 1/4, 1/8, and 1/16 for each scale produce the best performance, which is obtained by upsampling $c_1$ and $c_2$ by two times while maintaining the scale of $c_3$.
Enhanced Feature Map Scales: Fixing the downsampling ratios of $\{z_l\}_{l=1}^{3}$ as 1/4, 1/8, and 1/16, we feed different collections of multi-scale enhanced feature maps $\{z^o_l\}_{l=1}^{3}$ to the detection head. Note that because the deformable encoder keeps the resolutions of the input feature maps, the downsampling ratios for $\{z^o_l\}_{l=1}^{3}$ are also 1/4, 1/8 and 1/16, respectively. All the enhanced feature maps are first upsampled to (H/4, W/4) if needed, followed by normalization when multiple feature maps are utilized. They are then processed in one of three ways: 1. Cat: concatenate them along the channel dimension, followed by a compression layer to reduce the number of channels to 256. 2. Add: take the element-wise sum of the enhanced feature maps. 3. Sep: apply no fusion across scales and send them separately to the detection head, which doubles the number of predictions. Table 8 compares these strategies for fusing the multi-scale enhanced feature maps. It shows that concatenation followed by L2Norm produces the overall best results in both Reasonable and Heavy subsets on the Caltech test set.
Choice of the Normalization Method Applied to $\{c_l\}_{l=1}^{3}$: As Table 8 shows, GN performs best in the Reasonable subset while L2N performs best in the Heavy subset. Considering that the miss rate of GN is 11.6% lower in the Reasonable subset and only 1.2% higher in the Heavy subset compared to L2N, GN is utilized in our experiments before the encoder, as presented in Figure 3.
Number of Encoders: Based on the settings in the last part, different numbers of encoders are tested. These encoders are connected in series. Table 9 shows that although more encoders can provide higher-level semantic information, the best result is observed when only a single encoder is applied. This phenomenon supports the design of BoTNet [41] to some extent and indicates that adopting the whole set of six encoders and decoders in (deformable) DETR may not be suitable for specific object detection tasks. A single encoder can also work effectively with the least number of learnable parameters, which prevents overfitting.
4.3 Comparison with the state of the art
Table 10 and Table 11 show that the proposed module achieves the lowest miss rates in Reasonable and Heavy subsets and leading performance in the All subset on both datasets among presented single-stage detectors.
For the Caltech dataset, the lowest miss rate (3.7%) of the proposed detector in the Reasonable subset is 0.2% smaller than that of the two-stage KGSNet presented in the upper part of Table 10. With pretraining on the Citypersons dataset, the miss rates on all three subsets of the Caltech dataset are reduced significantly and reach the lowest compared to other pretrained detectors, as shown in the bottom part of Table 10. For the Citypersons dataset, the two proposed detectors even outperform the competitive two-stage detectors in the Reasonable and especially the Heavy subset, with a miss rate of 38.6%, which is 1.1% lower than that of the best two-stage detector.
Overall, with the enhanced spatially adaptive and multi-scale features, the gap between single- and two-stage detectors in the Reasonable and Heavy subsets on different detectors has been narrowed. Surprisingly, the proposed single-stage method outperforms the accurate two-stage methods on certain pedestrian datasets, such as the Citypersons dataset. It should be noted that two-stage methods usually produce overall better accuracy than single-stage methods with the advantage of the region proposal network and refined bounding boxes, at the expense of inference time. Apart from accuracy, fast inference also matters for pedestrian detection in practical scenes. The combination of CSP and the proposed module has a simple structure, which is easy to implement and effective. In contrast, the leading two-stage KGSNet, for example, takes advantage of the additional proposal
generation network, the refined bounding boxes, the key-point detector and the super-resolution network. With these components, the inference speed of KGSNet is 5.9 FPS and 3.2 FPS (Titan X GPU, not including the time of using ALFNet to generate the candidate proposals) on Caltech and Citypersons datasets [18] while ours achieves 29.5 FPS and 6.8 FPS (RTX 3090 GPU) respectively. Therefore, the enhanced single-stage pedestrian detector is cost-effective with fast inference and competitive accuracy compared to complicated two-stage methods.
5 Conclusion
The paper proposes a module to enhance spatial and multi-scale features based on a single encoder of the deformable vision transformer to improve the detection accuracy of single-stage pedestrian detectors with fast inference. This module is effective on commonly used single-stage structures, including (progressively refined) anchor-based and anchor-free cases. With the CSP detection head, more than 40% of the parameters are reduced in the detection neck compared to CSP; however, this combination still achieves the best results in Reasonable and Heavy subsets among presented single-stage detectors on both Caltech and Citypersons datasets. Utilizing pre-training, the miss rates for these subsets can be decreased to 2.6% and 28.0%, respectively, which are far better than other single-stage methods and comparable to two-stage methods. The proposed method outperforms SOTA two-stage detectors in the Heavy subset by 1.1% on Citypersons with slightly decreased inference speed. This demonstrates that single-stage detectors can be improved if spatially adaptive and multi-scale features are jointly adopted, making them cost-effective and promising. It should be mentioned that false positives appear if the attention points of a negative reference point extend to the target areas. Although the performance has been improved with the current module, the generation of attention weights and sampling locations can be carefully designed to suppress the false positives for further improvements. | 1. What is the main contribution of the paper regarding pedestrian detection?
2. What are the strengths of the proposed approach, particularly in enhancing feature maps using transformers?
3. What are the weaknesses and limitations of the method, especially in comparison with general object detection?
4. How does the proposed method differ from other single-stage detectors, and what are the specific improvements made by the authors?
5. Can the authors provide more information or comments on the failure cases or limitations of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
Pedestrian detection is crucial in many applications, but the fluctuation of information from the images makes it difficult to do it accurately. CNN-based two-stage detectors are used for this task but are slow at inference and are difficult to train. Single-stage detectors solve these issues but have a higher miss rate, especially in the presence of occlusion. The authors suggest that these problems stem from information fed to the detection head: single-stage detectors get insufficient information compared to two-stage detectors.
To solve the above-mentioned problems, the authors proposed to use a deformable vision transformer. The transformer enhances multi-scale features extracted from a backbone, thereby providing better information to the detection head. They also proposed to use center-and-scale prediction (CSP). The proposed method uses multi-scale features extracted from a CNN. These features are passed through a detection neck. These features are then passed through a deformable transformer to enhance them. These enhanced features are then used to find three outputs that are relevant to CSP. Comprehensive experiments are performed to show the performance of the method on two pedestrian datasets. A set of ablation studies are also conducted to show the effect of different components of the method.
Strengths And Weaknesses
Strengths
The paper is mostly well written and easy to follow.
The motivation, to bring the speed of inference in single-stage detectors by using features enhanced by transformers, is interesting.
The well-motivated problem is then solved in a way that makes sense.
Results (in Table 5, 6) show that enhanced feature maps do improve results, especially in the presence of occlusion.
Comprehensive experiments are performed to show not only the SOTA results but also the effect of individual components.
Weaknesses are pointed out in the next section.
Questions
Questions / Weaknesses
What is the main contribution of the paper? More specifically, what parts of the method are proposed? Or is the main contribution to show the effectiveness of vision transformers in overcoming the issues in single-stage detectors? Both contributions (proposing a new method or showing the effectiveness of existing ones) are equally important, but this should be highlighted clearly.
If I understand it correctly, the main proposed components are the use of deformable transformers to enhance features and the use of CSP.
If I'm not mistaken, similar architectures are also used in general object detection. It would be interesting to highlight how the proposed method differs from those general object detection architectures.
In the related work, it is noted that DETRs don't perform better than two-stage detectors. Is this because DETR produces a certain number of encoded outputs? It would be great if the authors could comment on it.
Limitations
The limitations of the work are not highlighted clearly. It would be interesting to see some cases where the proposed method fails. |
NIPS | Title
Effectiveness of Vision Transformer for Fast and Accurate Single-Stage Pedestrian Detection
Abstract
Vision transformers have demonstrated remarkable performance on a variety of computer vision tasks. In this paper, we illustrate the effectiveness of the deformable vision transformer for single-stage pedestrian detection and propose a spatial and multi-scale feature enhancement module, which aims to achieve the optimal balance between speed and accuracy. Performance improvement with vision transformers on various commonly used single-stage structures is demonstrated. The design of the proposed architecture is investigated in depth. Comprehensive comparisons with state-of-the-art single- and two-stage detectors on different pedestrian datasets are performed. The proposed detector achieves leading performance on Caltech and Citypersons datasets among single- and two-stage methods using fewer parameters than the baseline. The log-average miss rates for Reasonable and Heavy are decreased to 2.6% and 28.0% on the Caltech test set, and 10.9% and 38.6% on the Citypersons validation set, respectively. The proposed method outperforms SOTA two-stage detectors in the Heavy subset on the Citypersons validation set with considerably faster inference speed.
1 Introduction
Pedestrian detection is a popular task subordinate to object detection in computer vision. This task aims to locate and classify pedestrians in images or videos accurately. Pedestrian detection is very important as it serves as the prerequisite of various vision tasks [1], such as human-centric tasks (person re-identification [2, 3], person search [4], human pose estimation [5] etc.) and more generic multi-object tracking [6]. It has been applied to autonomous driving [7, 8], video surveillance [9] and action tracking. In this paper, we focus on the detection based on RGB images.
Pedestrian detection suffers from significant occlusion and varying scales. Intra- and inter-class occlusion occur when a pedestrian is occluded by other pedestrians or objects like cars, bicycles etc. Both significantly reduce the discriminative features and destroy the regular shape of pedestrians. For varying scales, large pedestrians tend to have more informative features. Still, they are difficult to fully extract from a vast region, while features of small targets are compact but relatively ambiguous with less preserved details. In summary, the fluctuating amount and varying shape of effective features are the core problems. They challenge the capabilities of feature extraction modules, which act as a long-standing bottleneck in pedestrian detection.
To deal with these problems, two-stage methods [10–15] based on Fast [16] and Faster R-CNN [16] have pervaded pedestrian detection tasks owing to the high detection accuracy. These methods first make coarse predictions of targets via the Region Proposal Network (RPN), then refine the bounding boxes and predict the final scores based on the features inside the proposal regions. In addition to methods for general object detection, pedestrian detectors take advantage of unique characteristics of targets, such as the mask of visible parts [17, 10] and key points of human bodies [18]. However, the inference speed of these methods is limited by the repeated predictions, which makes them hard to apply in real-world scenes.
To achieve faster inference, single-stage approaches, which only make one round prediction, are developed [19] and applied to pedestrian detection [20–24]. However, they suffer from decreased detection accuracy. The miss rates of two-stage methods in the Reasonable subset on Caltech are reduced to less than 4% [14, 10, 25, 18], while those for single-stage methods are larger than 4.5% [22]. For the Citypersons validation set, the miss rates for the former are less than 40% in the Heavy subset, which is much lower than the latter (42% [24]). In this paper, we aim to improve the detection accuracy, especially in Reasonable and Heavy subsets, and to narrow the gap between single- and two-stage methods with fast inference.
Typical single-stage detectors use anchors (SSD [26]) or are anchor-free (CSP [22]). The former generates rectangular bounding boxes with different aspect ratios and scales centered at each pixel of the feature maps at certain levels. Anchor scales are designed to be smaller at lower levels to facilitate the detection of small objects. These methods predict the offsets w.r.t. the upper left position, height and width of the anchor. Taking CSP as an example, the latter method only predicts the logarithm height and offsets w.r.t. centers of each pixel. For anchor-based single-stage detectors, ALFNet [21] refines anchors progressively with stacked prediction blocks to remedy the lack of proposal regions.
In the past year, most research has focused on the fusion of representative features [27–32] to improve single-stage methods. For example, [30] enhances features via increasing semantic information at a low level and enriches the localization information at a high level. Similarly, [32] fuses the feature maps with different scales in adequate proportions. [29, 31] also explore new strategies to aggregate multi-level features. These feature enhancements are mainly performed along the dimension of feature level due to the intrinsically unbalanced feature information between shallow and deep feature levels. However, this unbalance also exists in two-stage detectors. As such, this is not the particular reason for the poorer accuracy of single-stage methods.
The general architecture of single- and two-stage detectors are compared in Figure 1. Assuming that the training strategies and the detection head are the same for both methods, the difference in the architecture lies in the information fed into the detection head. For two-stage methods, both positions and features of the proposal regions are fed into the detection head. These proposals contain potential pedestrians. Thereby, the detection head classifies spatially target-focused features with fewer background interruptions and refines the bounding boxes by predicting small offsets from the proposal positions. For single-stage methods, each pixel in the feature map serves as the ‘proposal region’ with no pre-estimated positions. The receptive fields of these pixels share the same size, which may be too small to include sufficient information for large targets or so large that the background information overwhelms the useful features. This is more challenging for the classifier compared to two-stage methods. Additionally, single-stage methods have to regress from scratch, which is more difficult than simple refinement. Thus, the lack of spatially target-focused feature representation and the prediction of bounding boxes from scratch are the two key bottlenecks hindering the improvement of single-stage detectors.
To make the features fed into the detection head concentrate on the targets or other helpful information automatically without the assistance of proposal regions, we take advantage of vision transformers in this paper. Vision transformers describe the pairwise dependency of each entity in the feature map with attention weights. The output weighted averaged feature is the adaptive aggregation of important entities (with higher attention weights) while the disturbing information (with lower attention weights) is suppressed. Using such attention mechanism on top of the backbone enables the single-stage detector to supplement spatially filtered features easily for subsequent classification and regression. In this case, the modified detector makes the best use of the fast inference originating from single-stage methods and more effective features. Our main contributions are as follows:
• Demonstrate the effectiveness of the deformable vision transformer in improving the accuracy of commonly used single-stage detectors on pedestrian datasets.
• Extend the application of vision transformers on top of the backbone in pedestrian detection tasks.
• Achieve the best performance among single-stage detectors on the Caltech test set and Citypersons validation set while maintaining fast inference and reducing the number of parameters.
• Narrow the gap of detection accuracy between the single- and two-stage methods in pedestrian detection.
2 Related works
Currently, vision transformers are used to establish general-purpose backbones (ViT [33] and Swin Transformer [34]) or stack on top of the backbone (DETR [35]). Since we focus on pedestrian detection, this paper explores the latter case. DETR consists of the convolution backbone, six encoders and decoders and the prediction head. It is an inspiring end-to-end detector but is memory-consuming. DETR requires massive memory to store the self-attention weights within each Multi-Head Self-Attention (MHSA) layer. The memory cost is linear in the number of attention heads and quadratic in the number of pixels in the down-sampled feature map. Additionally, first- and second-order moment estimates in optimization introduce further memory cost in the training procedure. In pedestrian detection, more attention heads and relatively large down-sampled feature maps are preferred to enhance detection accuracy, especially for small targets. This results in memory explosion using DETR. To this end, deformable DETR [36] is proposed. It only needs the attention weights at several sampling locations rather than at each pixel in the feature map. The memory cost of the attention weights is linear in the number of pixels, which makes training with high resolutions possible. Experiments show that the deformable DETR outperforms the Faster R-CNN [37] and DETR on the COCO 2017 validation set [38].
So far, vision transformers show great potential, but they have rarely been applied to pedestrian detection in the form of DETR or its variants. This is because it has been observed that they perform worse than the commonly used Faster R-CNN on the CrowdHuman dataset [39] and require tenfold training time [40]. Although [40] proposed using dense object queries and the rectified attention field to enhance scale-adaptive feature extraction in the decoding phase, the modified deformable DETR still shows a limited advantage over the traditional Faster R-CNN. This implies that rigidly putting the whole six encoder-decoder pairs into the pedestrian detector may not be cost-effective. As an example, BoTNet [41] only replaces the convolution layers in the residual bottleneck blocks of the last stage of ResNet [42] with MHSA layers, yet it produces strong performance on the ImageNet validation set [43]. Inspired by this and the limitation above, our work only utilizes a single encoder of the deformable vision transformer as an adaptive feature extractor and applies it to commonly used single-stage detectors for better detection accuracy and fast inference.
3 Method
Deformable Vision Transformer Encoder: The deformable vision transformer encoder (Figure 2) takes as inputs the $L$ feature maps $\{z_l\}_{l=1}^{L}$, with height $H_l$ and width $W_l$ at scale $l$, and reference points, which are the positions of the grid centers of the feature maps. It outputs the enhanced feature maps with the same resolution as the input. The input feature maps are first added with fixed encoded positional [35] and learnable level information [36] to disambiguate spatial and scale positions, then projected to the query feature map $z_q$ via a linear layer. The feature maps also generate the value feature map $z_v$ with a linear layer but without encoding. The query feature map $z_q$ and the value feature map $z_v$, together with the pre-generated reference points, are sent to the Multi-Scale Deformable Attention (MSDA) layer to enhance spatially adaptive features, followed by a Feed-Forward Network (FFN). In summary, the encoder supplements the semantic information of the input feature maps via the embedded attention layer and FFN.
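A high-level sketch of one such encoder layer is given below; the layer composition (residual connections, LayerNorm, FFN width, dropout) follows standard deformable DETR conventions and is an assumption rather than a detail stated in the paper, and an identity function stands in for the multi-scale deformable attention described next.

```python
import torch
import torch.nn as nn

class DeformableEncoderLayer(nn.Module):
    """High-level sketch of a single encoder layer operating on flattened multi-scale features.
    `msda` stands for the multi-scale deformable attention and is assumed to be implemented elsewhere."""
    def __init__(self, msda, dim=256, ffn_dim=1024, dropout=0.1):
        super().__init__()
        self.msda = msda
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, ffn_dim), nn.ReLU(inplace=True),
                                 nn.Dropout(dropout), nn.Linear(ffn_dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, feats, pos_embed, level_embed, ref_points, spatial_shapes):
        query = feats + pos_embed + level_embed            # queries carry positional and level information
        # Values come from the un-encoded features; their linear projection is assumed inside `msda`.
        attn = self.msda(query, feats, ref_points, spatial_shapes)
        x = self.norm1(feats + attn)                       # residual connection around attention
        return self.norm2(x + self.ffn(x))                 # residual connection around the FFN

tokens = torch.randn(1, 100, 256)                          # flattened multi-scale features (N_q = 100 here)
layer = DeformableEncoderLayer(msda=lambda q, v, r, s: q)  # identity attention as a stand-in
out = layer(tokens, torch.randn(1, 100, 256), torch.randn(1, 100, 256), None, None)   # [1, 100, 256]
```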
Multi-Scale Deformable Attention: The MSDA layer sums the selected entities at sampling locations in the value feature map $z_v$ with attention weights predicted by each corresponding query entity. These attention weights $W$ are the linear projection of the query features $z_q$ followed by a softmax operator along the scale and sampling-point dimensions. For a single query entity, only $N_h N_l N_p$ attention weights are needed, representing the significance of the selected value features at different attention heads, scales, and points. Selections are decided by the sampling locations, which are the sum of the reference points $p$ and the sampling offsets $\Delta p$, the latter being an embedding of the query features. At each fractional sampling location, the selected value feature is bilinearly interpolated, which keeps the sampling accurate and the offset predictor trainable. With the weights, sampling locations, and selected value features prepared, the $q$-th element of the separate output feature $z^{so,h} \in \mathbb{R}^{N_q \times c_v}$ (where $N_q = \sum_{l=1}^{L} H_l W_l$ and $c_v$ is the number of channels) at attention head $h$ (out of $N_h$ heads in total) is

$z^{so,h}_q = \sum_{p=1}^{N_p} \sum_{l=1}^{L} W_{plhq} \, v_{p_{ql} + \Delta p_{plhq}}$   (1)

where $p$, $q$, $l$ and $h$ index the sampling offsets, the elements of the deformable attention feature $z^o$, the scale of the value $v$, and the attention head, respectively. $W_{plhq}$ is a value from the weight tensor $W \in \mathbb{R}^{N_q \times N_h \times L \times N_p}$; $p \in \mathbb{R}^{N_q \times L \times 2}$ and $\Delta p \in \mathbb{R}^{N_q \times N_h \times L \times N_p \times 2}$ are the reference points and sampling offsets; $p_{ql}$ and $\Delta p_{plhq}$ denote the position of a single reference point and one of its corresponding sampling offsets, respectively. The separate output features from the $N_h$ attention heads are projected to the $q$-th element of the final output deformable attention feature $z^o$ by a linear layer expressed as

$z^o_q = \sum_{h=1}^{N_h} W'_h \, z^{so,h}_q$   (2)

where $W'_h \in \mathbb{R}^{c \times c_v}$ denotes the learnable weight for the $h$-th attention head.

Proposed Feature Enhancement Module: The features are enhanced owing to the self-attention mechanism, which supplements the spatially adaptive features across multiple scales. The proposed module
simply consists of convolution/deconvolution and normalization layer pairs ahead of and after the deformable encoder and a final feature fusion step, as illustrated in Figure 3. In this module, input feature maps from the backbone are first upsampled with deconvolution layers or encoded with a convolution layer to generate the multi-scale feature maps $\{z_l\}_{l=1}^{3}$. They are followed by group normalization to prevent the Internal Covariate Shift (ICS) that might be induced by subsequent linear operations in the deformable encoder. The encoder yields enhanced multi-scale feature maps. The enhanced features are upsampled to keep the resolution at (H/4, W/4) for accurate detection. They are normalized via L2Norm [22] before concatenation along the channel dimension. This makes the features at different scales contribute equally to the final feature representation fed into the detection head. The concatenated features are compressed along the channel dimension to reduce the network parameters. The output feature maps can be fed into the detection head used in SSD, CSP etc. Beyond this standard structure, the use of convolution/deconvolution layers can be adjusted according to the resolution of the input, and the concatenation step can be removed if predictions are made at separate levels.
Training: For anchor-free cases, namely CSP, the loss function follows [22]. The overall loss consists of three parts, as in Equation (3), where $L_c$, $L_h$ and $L_o$ stand for the center heatmap loss, height map loss and offset loss. The weights for each loss, $\lambda_c$, $\lambda_s$ and $\lambda_o$, are set as 0.01, 1 and 0.1 [22]. For anchor-based cases, namely SSD or ALFNet, the multi-task loss function is formulated with two objectives as in Equation (4) [21], where $\lambda_{cls}$ is experimentally set as 0.01 in the following experiments.

$L_{af} = \lambda_c L_c + \lambda_s L_h + \lambda_o L_o$   (3)

$L_{ab} = \lambda_{cls} L_{cls} + L_{loc}$   (4)
Inference: For anchor-free single-stage methods, the predicted width is the height multiplied by the uniform aspect ratio 0.41 [22]. If not specified, bounding boxes with scores above 0.01 are kept and merged by Non-Maximum Suppression (NMS) with the IoU threshold of 0.5.
4 Experiments
4.1 Settings
Datasets: The proposed detector is evaluated on two commonly used public pedestrian datasets: Caltech [44] and Citypersons [45]. The Caltech dataset consists of approximately 10 hours of 480x640 video taken in a single urban city. The standard training set contains about 250k frames with 350k bounding boxes. In our experiments, the training data augmented by 10 folds, containing 42782 images with 13674 persons, and the standard test set containing 4024 images, with the corresponding new annotations [46] and fixed aspect-ratio bounding boxes [47], are used. The Citypersons training set was recorded across 18 different cities, 3 seasons and various weather conditions, and contains 19654 persons in 2975 high-resolution (1024x2048) images. The validation set contains 500 images across 3 cities.
Training Details: If not specified, the ResNet50/VGG16 pre-trained on ImageNet, Adam with moving average weights [48] and step learning rate schedule are applied. Data augmentation techniques including random horizontal flips with a probability of 0.5 and scaling are applied. For Caltech, additional random color distortion and cropping are implemented. The input images are rescaled to 336x448 and 640x1280 for Caltech and Citypersons datasets respectively. The detectors are trained with a single NVIDIA GeForce RTX 3090 GPU for 10 and 75 epochs with batch size 16 and 4 on Caltech and Citypersons respectively using the anchor-free CSP detection head. The base learning rate is 0.5e-4 and decreased by a factor of 0.5 after 6 and 60 epochs respectively. Initialization is performed with a randomly chosen and fixed seed. Tools provided by [47] are used in the experiments.
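The optimization recipe described above could be set up roughly as follows; the EMA formulation and its decay value are assumptions (the text only says "moving average weights"), and a stand-in module replaces the actual detector so the snippet is self-contained.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

# Stand-in detector so the snippet runs on its own; replace with the real model.
detector = nn.Conv2d(3, 6, kernel_size=3, padding=1)

optimizer = Adam(detector.parameters(), lr=0.5e-4)
# Caltech schedule from the text: halve the learning rate after epoch 6 (use milestones=[60] for Citypersons).
scheduler = MultiStepLR(optimizer, milestones=[6], gamma=0.5)

# "Adam with moving average weights" is realized here as a simple exponential moving average (EMA)
# of the parameters; the decay value 0.999 is an assumption, not a reported setting.
ema_state = {k: v.detach().clone() for k, v in detector.state_dict().items()}

def update_ema(model, ema, decay=0.999):
    with torch.no_grad():
        for k, v in model.state_dict().items():
            ema[k].mul_(decay).add_(v.detach(), alpha=1.0 - decay)

for epoch in range(10):
    for _ in range(5):                        # placeholder for iterating over the training loader
        images = torch.randn(2, 3, 336, 448)  # Caltech input resolution from the text
        loss = detector(images).mean()        # placeholder loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        update_ema(detector, ema_state)
    scheduler.step()
```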
Metrics: The Log-average Miss Rate (denoted as $\mathrm{MR}^{-2}$ or miss rate in this paper) over False Positives Per Image (FPPI) in the range $[10^{-2}, 10^{0}]$ is calculated over the Reasonable, Small, Heavy, and All subsets defined in Table 1. The lower the $\mathrm{MR}^{-2}$ the better.
4.2 Ablation Experiments
Effectiveness of Enhanced Feature Maps: For generality, the proposed module is applied to three commonly used single-stage structures in pedestrian detection: anchor-based, progressive refinement, and anchor-free.
For anchor-based single-stage methods, like SSD300 [26], Table 2 shows that the proposed module yields stable and significant improvement in all three subsets with an IoU threshold of 0.5 in NMS.
For refined anchors, we append two Convolutional Predictor Blocks [21] after the feature maps at the last three stages of the backbone and an extra stage. The second block refines the coarse anchors predicted by the first block. Table 3 demonstrates that even though the anchors are refined progressively to remedy the lack of proposal regions, the separate enhancement at each level can bring improvement in certain subsets (level 0 improves the Reasonable and All subsets; levels 1 and 2 improve the Heavy subset). The combination of multi-level inputs brings stable improvement in all the subsets.
For anchor-free methods, we evaluated the influence of the proposed module on the baseline CSP [22] detector on Caltech (Table 4) and Citypersons (Table 5). Results from both datasets indicate that with the enhanced feature maps, the miss rates decrease significantly in the Reasonable, Heavy and All subsets; in particular, the Heavy subset sees a decrease of up to 7.9%, and the Reasonable subset of up to 3.1%. The additional enhancement module increases the inference time; however, it contains fewer learnable parameters, as shown in Table 6. As the combination with CSP achieves the best results, subsequent experiments follow this implementation.
According to the above, the proposed module is effective for general single-stage detectors on pedestrian datasets, owing to the multi-scale deformable self-attention mechanism to enhance spatially adaptive features across levels.
Feature Map Scales: Different combinations of multi-scale feature maps {z_l}_{l=1}^3 with three downsampling ratios (1/4, 1/8 and 1/16) are compared in Table 7. In this comparison, only z^o_3, which has the same resolution as z_3, is upsampled to (H/4, W/4) and fed into the detection head without intermediate concatenation and L2Norm. Results show that feature maps with ratios 1/4, 1/8 and 1/16 for the three scales produce the best performance, which is obtained by upsampling c_1 and c_2 by two times while maintaining the scale of c_3.
Enhanced Feature Map Scales: Fixing the downsampling ratios of {z_l}_{l=1}^3 as 1/4, 1/8 and 1/16, we feed different collections of multi-scale enhanced feature maps {z^o_l}_{l=1}^3 to the detection head. Since the deformable encoder keeps the resolutions of the input feature maps, the downsampling ratios for {z^o_l}_{l=1}^3 are 1/4, 1/8 and 1/16 respectively. All the enhanced feature maps are first upsampled to (H/4, W/4) if needed, followed by normalization when multiple feature maps are utilized. They are then processed in three ways: 1. Cat: concatenate them along the channel dimension, followed by a compression layer that reduces the number of channels to 256. 2. Add: element-wise sum of the enhanced feature maps. 3. Sep: no fusion across scales; send them separately to the detection head, which doubles the number of predictions. Table 8 compares these strategies for fusing the multi-scale enhanced feature maps; it shows that concatenation followed by L2Norm produces the overall best results in both the Reasonable and Heavy subsets on the Caltech test set.
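The three fusion options compared above can be sketched as a small PyTorch module; the class and option names are ours, and the inputs are assumed to be already upsampled and normalised as described.

```python
import torch
import torch.nn as nn

class FuseEnhanced(nn.Module):
    def __init__(self, c=256, n_levels=3, mode="cat"):
        super().__init__()
        self.mode = mode
        self.compress = nn.Conv2d(n_levels * c, c, kernel_size=1)  # only used by "cat"

    def forward(self, feats):                      # feats: list of (B, c, H/4, W/4) tensors
        if self.mode == "cat":                     # concatenate + 1x1 compression back to c channels
            return [self.compress(torch.cat(feats, dim=1))]
        if self.mode == "add":                     # element-wise sum of the enhanced maps
            return [torch.stack(feats, dim=0).sum(dim=0)]
        return feats                               # "sep": one set of predictions per level
```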
Choice of the Normalization Method Applied to {c_l}_{l=1}^3: As Table 8 shows, GN performs best in the Reasonable subset while L2N performs best in the Heavy subset. Considering that the miss rate of GN is 11.6% lower in the Reasonable subset and only 1.2% higher in the Heavy subset compared to L2N, GN is utilized before the encoder in our experiments, as presented in Figure 3.
Number of Encoders: Based on the settings above, different numbers of encoders are tested. These encoders are connected in series. Table 9 shows that although more encoders can provide higher-level semantic information, the best result is observed when only a single encoder is applied. This phenomenon supports the design of BoTNet [41] to some extent, and indicates that following the whole set of six encoders and decoders in (deformable) DETR may not be suitable for specific object detection tasks. A single encoder can also work effectively with the smallest number of learnable parameters, which prevents overfitting.
4.3 Comparison with the state-of-the-arts
Table 10 and Table 11 show that the proposed module achieves the lowest miss rates in Reasonable and Heavy subsets and leading performance in the All subset on both datasets among presented single-stage detectors.
For the Caltech dataset, the lowest miss rate (3.7%) of the proposed detector in the Reasonable subset is 0.2% smaller than that of the two-stage KGSNet presented in the upper part of Table 10. With pretraining on the Citypersons dataset, the miss rates on all three subsets of the Caltech dataset are reduced significantly and reach the lowest compared to other pretrained detectors as shown in the bottom part of Table 10. For the Citypersons dataset, the proposed two detectors even outperform the competitive two-stage detectors in the Reasonable and especially Heavy subset with a miss rate of 38.6% which is 1.1% lower than that of the best of two-stage detectors.
Overall, with the enhanced spatially adaptive and multi-scale features, the gap between single- and two-stage detectors in the Reasonable and Heavy subsets has been narrowed across different detectors. Surprisingly, the proposed single-stage method outperforms accurate two-stage methods on certain pedestrian datasets, such as the Citypersons dataset. It should be noted that two-stage methods usually achieve better overall accuracy than single-stage methods, thanks to the region proposal network and refined bounding boxes, at the expense of inference time. Apart from accuracy, fast inference also matters for pedestrian detection in practical scenes. The combination of CSP and the proposed module has a simple structure, which is easy to implement and effective. In contrast, the leading two-stage KGSNet, for example, takes advantage of the additional proposal
generation network, the refined bounding boxes, the key-point detector and the super-resolution network. With these components, the inference speed of KGSNet is 5.9 FPS and 3.2 FPS (Titan X GPU, not including the time of using ALFNet to generate the candidate proposals) on Caltech and Citypersons datasets [18] while ours achieves 29.5 FPS and 6.8 FPS (RTX 3090 GPU) respectively. Therefore, the enhanced single-stage pedestrian detector is cost-effective with fast inference and competitive accuracy compared to complicated two-stage methods.
5 Conclusion
The paper proposes a module to enhance spatial and multi-scale features based on a single encoder of the deformable vision transformer to improve the detection accuracy of single-stage pedestrian detectors with fast inference. This module is effective on commonly used single-stage structures, including (progressively refined) anchor-based and anchor-free cases. With the CSP detection head, more than 40% parameters are reduced in the detection neck compared to CSP; however, this combination still achieves the best results in Reasonable and Heavy subsets among presented single-stage detectors on both Caltech and Citypersons datasets. Utilising pre-training, the miss rates for these subsets can be decreased to 2.6% and 28.0%, respectively, which are far better than other single-stage methods and comparable to two-stage methods. The proposed method outperforms SOTA two-stage detectors in the Heavy subset by 1.1% on Citypersons with slightly decreased inference speed. This demonstrates that single-stage detectors can be improved if spatially adaptive and multi-scale features are jointly adopted, making them cost-effective and promising. It should be mentioned that false positives appear if the attention points of a negative reference point extend to the target areas. Although the performance has been improved with the current module, the generation of attention weights and sampling locations can be carefully designed to suppress the false positives for further improvements. | 1. What is the main contribution of the paper, and how does it address the lack of spatially adaptive features in previous works?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its explanation and justification of parameters?
3. Do you have any concerns about the absence of simulation results in popular detection benchmarks?
4. How does the reviewer assess the novelty and performance of the proposed method compared to prior works such as [A]?
5. Can you clarify some points raised by the reviewer, such as the meaning of R and HO, the necessity of the key tensor, and the differences between this work and [A]?
6. Are there any limitations that the authors should address in the conclusion or discussion section? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
To address the lack of spatially adaptive features in previous works, the authors propose a deformable vision transformer module in this paper, aiming to achieve a balance between speed and accuracy.
Strengths And Weaknesses
R and HO are not common terminology, but the authors only explain these terms in Section 4.1, which hurts readability.
The parameters in the loss function are not well justified by experiments. The same issue applies to the inference stage.
Simulation results on popular detection benchmarks are lacking.
We already have deformable vision transformer in CVPR2022 [A]. [A] Xia, Zhuofan, et al. "Vision transformer with deformable attention." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
The performance is not very impressive.
Questions
In page 4, line 153, what is the original work?
In fig2’s caption, why don’t we need the key tensor?
Can you explain the difference to [1]? [A] Xia, Zhuofan, et al. "Vision transformer with deformable attention." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Limitations
They should address their limitations in the conclusion or discussion section.
NIPS | Title
Effectiveness of Vision Transformer for Fast and Accurate Single-Stage Pedestrian Detection
Abstract
Vision transformers have demonstrated remarkable performance on a variety of computer vision tasks. In this paper, we illustrate the effectiveness of the deformable vision transformer for single-stage pedestrian detection and propose a spatial and multi-scale feature enhancement module, which aims to achieve the optimal balance between speed and accuracy. Performance improvement with vision transformers on various commonly used single-stage structures is demonstrated. The design of the proposed architecture is investigated in depth. Comprehensive comparisons with state-of-the-art singleand two-stage detectors on different pedestrian datasets are performed. The proposed detector achieves leading performance on Caltech and Citypersons datasets among singleand two-stage methods using fewer parameters than the baseline. The log-average miss rates for Reasonable and Heavy are decreased to 2.6% and 28.0% on the Caltech test set, and 10.9% and 38.6% on the Citypersons validation set, respectively. The proposed method outperforms SOTA two-stage detectors in the Heavy subset on the Citypersons validation set with considerably faster inference speed.
1 Introduction
Pedestrian detection is a popular task subordinate to object detection in computer vision. This task aims to locate and classify pedestrians in images or videos accurately. Pedestrian detection is very important as it serves as the prerequisite of various vision tasks [1], such as human-centric tasks (person re-identification [2, 3], person search [4], human pose estimation [5] etc.) and more generic multi-object tracking [6]. It has been applied to autonomous driving [7, 8], video surveillance [9] and action tracking. In this paper, we focus on the detection based on RGB images.
Pedestrian detection suffers from significant occlusion and varying scales. Intra- and inter-class occlusion occur when a pedestrian is occluded by other pedestrians or objects like cars, bicycles etc. Both significantly reduce the discriminative features and destroy the regular shape of pedestrians. For varying scales, large pedestrians tend to have more informative features. Still, they are difficult to fully extract from a vast region, while features of small targets are compact but relatively ambiguous with less preserved details. In summary, the fluctuating amount and varying shape of effective features are the core problems. They challenge the capabilities of feature extraction modules, which act as a long-standing bottleneck in pedestrian detection.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
To deal with these problems, two-stage methods [10–15] based on Fast [16] and Faster R-CNN [16] have pervaded pedestrian detection tasks owing to the high detection accuracy. These methods first make coarse predictions of targets via the Region Proposal Network (RPN), then refine the bounding boxes and predict the final scores based on the features inside the proposal regions. In addition to methods for general object detection, pedestrian detectors take advantage of unique characteristics of targets, such as the mask of visible parts [17, 10] and key points of human bodies [18]. However, the inference speed of these methods is limited by the repeated predictions which makes them hard to be applied to real-world scenes.
To achieve faster inference, single-stage approaches, which only make one round prediction, are developed [19] and applied to pedestrian detection [20–24]. However, they suffer from decreased detection accuracy. The miss rates of two-stage methods in the Reasonable subset on Caltech are reduced to less than 4% [14, 10, 25, 18], while those for single-stage methods are larger than 4.5% [22]. For the Citypersons validation set, the miss rates for the former are less than 40% in the Heavy subset, which is much lower than the latter (42% [24]). In this paper, we aim to improve the detection accuracy, especially in Reasonable and Heavy subsets, and to narrow the gap between single- and two-stage methods with fast inference.
Typical single-stage detectors use anchors (SSD [26]) or are anchor-free (CSP [22]). The former generates rectangular bounding boxes with different aspect ratios and scales centered at each pixel of the feature maps at certain levels. Anchor scales are designed to be smaller at lower levels to facilitate the detection of small objects. These methods predict the offsets w.r.t. the upper left position, height and width of the anchor. Taking CSP as an example, the latter method only predicts the logarithm height and offsets w.r.t. centers of each pixel. For anchor-based single-stage detectors, ALFNet [21] refines anchors progressively with stacked prediction blocks to remedy the lack of proposal regions.
In the past year, most research has focused on the fusion of representative features [27–32] to improve single-stage methods. For example, [30] enhances features via increasing semantic information at a low level and enriches the localization information at a high level. Similarly, [32] fuses the feature maps with different scales in adequate proportions. [29, 31] also explore new strategies to aggregate multi-level features. These feature enhancements are mainly performed along the dimension of feature level due to the intrinsically unbalanced feature information between shallow and deep feature levels. However, this unbalance also exists in two-stage detectors. As such, this is not the particular reason for the poorer accuracy of single-stage methods.
The general architecture of single- and two-stage detectors are compared in Figure 1. Assuming that the training strategies and the detection head are the same for both methods, the difference in the architecture lies in the information fed into the detection head. For two-stage methods, both positions and features of the proposal regions are fed into the detection head. These proposals contain potential pedestrians. Thereby, the detection head classifies spatially target-focused features with fewer background interruptions and refines the bounding boxes by predicting small offsets from the proposal positions. For single-stage methods, each pixel in the feature map serves as the ‘proposal region’ with no pre-estimated positions. The receptive fields of these pixels share the same size, which may be too small to include sufficient information for large targets or so large that the background information overwhelms the useful features. This is more challenging for the classifier compared to two-stage methods. Additionally, single-stage methods have to regress from scratch, which is more difficult than simple refinement. Thus, the lack of spatially target-focused feature representation and the prediction of bounding boxes from scratch are the two key bottlenecks hindering the improvement of single-stage detectors.
To make the features fed into the detection head concentrate on the targets or other helpful information automatically without the assistance of proposal regions, we take advantage of vision transformers in this paper. Vision transformers describe the pairwise dependency of each entity in the feature map with attention weights. The output weighted averaged feature is the adaptive aggregation of important entities (with higher attention weights) while the disturbing information (with lower attention weights) is suppressed. Using such attention mechanism on top of the backbone enables the single-stage detector to supplement spatially filtered features easily for subsequent classification and regression. In this case, the modified detector makes the best use of the fast inference originating from single-stage methods and more effective features. Our main contributions are as follows:
• Demonstrate the effectiveness of the deformable vision transformer in improving the accuracy of commonly used single-stage detectors on pedestrian datasets.
• Extend the application of vision transformers on top of the backbone in pedestrian detection tasks.
• Achieve the best performance among single-stage detectors on the Caltech test set and Citypersons validation set while maintaining fast inference and reducing the number of parameters.
• Narrow the gap of detection accuracy between the single- and two-stage methods in pedestrian detection.
2 Related works
Currently, vision transformers are used to establish general-purpose backbones (ViT [33] and Swin Transformer [34]) or stack on top of the backbone (DETR [35]). Since we focus on pedestrian detection, this paper explores the latter case. DETR consists of the convolution backbone, six encoders and decoders and the prediction head. It is an inspiring end-to-end detector but is memory-consuming. DETR requires massive memory to store the self-attention weights within each Multi-Head SelfAttention (MHSA) layer. The memory cost is linear to the number of attention heads and is square to the number of pixels in the down-sampled feature map. Additionally, first and second-order momentums in optimization introduce further memory cost in the training procedure. In pedestrian detection, more attention heads and relatively large down-sampled feature maps are preferred to enhance detection accuracy, especially for small targets. This results in memory explosion using DETR. To this end, deformable DETR [36] is proposed. It only needs the attention weights at several sampling locations rather than each pixel in the feature map. The memory cost of the attention weights is linear to the number of pixels, which makes training with high resolutions possible. Experiments show that the deformable DETR outperforms the Faster R-CNN [37] and DETR on COCO 2017 validation set [38].
So far, vision transformers show great potential, but they have rarely been applied to pedestrian detection in the form of DETR or its variants. This is because it has been observed that they perform worse than the commonly used Faster R-CNN on CrowdHuman dataset [39] and require tenfold training time [40]. Although [40] proposed using dense object queries and the rectified attention field to enhance scale-adaptive feature extraction in the decoding phase, the modified deformable DETR still shows a limited advantage over the traditional Faster R-CNN. This implies that rigidly putting the whole six encoder-decoder pairs into the pedestrian detector may not be cost-effective. As an example, BoTNet [41] only substitutes the convolution layers in residual bottleneck blocks in the last stage of ResNet [42] for MHSA layers, but it produces strong performance on ImageNet validation set [43]. Inspired by this and the limitation above, our work only utilizes a single encoder of the deformable vision transformer as an adaptive feature extractor and applies it to commonly used single-stage detectors for better detection accuracy and fast inference.
3 Method
Deformable Vision Transformer Encoder: The deformable vision transformer encoder (Figure 2) takes as inputs the L feature maps {z_l}_{l=1}^L, with height H_l and width W_l at scale l, and reference points, which are the positions of the grid centers of the feature maps. It outputs the enhanced feature maps with the same resolution as the input. The input feature maps are first added with fixed encoded positional [35] and learnable level information [36] to disambiguate spatial and scale positions, then projected to the query feature map z_q via a linear layer. The feature maps also generate the value feature maps z_v with a linear layer but without encoding. The query feature map z_q and the value feature map z_v, together with the pre-generated reference points, are sent to the Multi-Scale Deformable Attention (MSDA) layer to enhance spatially adaptive features, followed by a Feed-Forward Network (FFN). In summary, the encoder supplements the semantic information of the input feature maps via the embedded attention layer and FFN.
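A skeleton of such an encoder layer is sketched below. It is an illustration rather than the reference code: the residual/normalization arrangement and the FFN width are our own choices, and 'msda' is any attention module with the shown interface (one concrete sketch of it is given after the module description below).

```python
import torch
import torch.nn as nn

class DeformableEncoderLayer(nn.Module):
    def __init__(self, msda, c=256, ffn_dim=1024):
        super().__init__()
        self.msda = msda                                   # multi-scale deformable attention module
        self.norm1, self.norm2 = nn.LayerNorm(c), nn.LayerNorm(c)
        self.ffn = nn.Sequential(nn.Linear(c, ffn_dim), nn.ReLU(inplace=True),
                                 nn.Linear(ffn_dim, c))

    def forward(self, feats, pos_level_embed, ref_points, spatial_shapes):
        # feats: (Nq, c) flattened multi-scale maps; pos_level_embed: (Nq, c)
        query = feats + pos_level_embed                    # encode spatial position and level
        out = self.norm1(feats + self.msda(query, feats, ref_points, spatial_shapes))
        return self.norm2(out + self.ffn(out))             # FFN with residual connection
```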
Multi-Scale Deformable Attention: The MSDA layer sums the selected entries at the sampling locations in the value feature map z_v, weighted by attention weights predicted from each corresponding query entry. These attention weights W are a linear projection of the query features z_q followed by a softmax operator along the scale and sampling-point dimensions. For a single query entry, only N_h N_l N_p attention weights are needed, representing the significance of the selected value features at different attention heads, scales and points. The selections are determined by the sampling locations, which are the sum of the reference points p and the sampling offsets Δp, the latter being an embedding of the query features. At each floating-point sampling location, the selected value feature is bilinearly interpolated, both for accuracy and so that the offset predictor can be trained. With the weights, sampling locations and selected value features prepared, the q-th element of the separate output feature z^{so,h} ∈ R^{N_q×c_v} (where N_q = Σ_{l=1}^{L} H_l W_l and c_v is the number of channels) at attention head h (of N_h heads in total) is

z^{so,h}_q = \sum_{p=1}^{N_p} \sum_{l=1}^{L} W_{plhq} \, v_{p_{ql} + \Delta p_{plhq}}   (1)

where p, q, l and h index the sampling offsets, the elements of the deformable attention feature z^o, the scale of the value v, and the attention head, respectively. W_{plhq} is an entry of the weight tensor W ∈ R^{N_q×N_h×L×N_p}; p ∈ R^{N_q×L×2} and Δp ∈ R^{N_q×N_h×L×N_p×2} are the reference points and sampling offsets, and p_{ql} and Δp_{plhq} denote the position of a single reference point and one of its corresponding sampling offsets, respectively. The separate output features from the N_h attention heads are projected to the q-th element of the final output deformable attention feature z^o by a linear layer:

z^o_q = \sum_{h=1}^{N_h} W'_h z^{so,h}_q   (2)

where W'_h ∈ R^{c×c_v} denotes the learnable weight for the h-th attention head.

Proposed Feature Enhancement Module: The features are enhanced by the self-attention mechanism, which supplements spatially adaptive features across multiple scales. The proposed module
simply consists of convolution/deconvolution and normalization layer pairs ahead of and after the deformable encoder, plus a final feature fusion step, as illustrated in Figure 3. In this module, input feature maps from the backbone are first upsampled with deconvolution layers or encoded with a convolution layer to generate multi-scale feature maps {z_l}_{l=1}^3. They are followed by group normalization to prevent the internal covariate shift (ICS) that might be induced by subsequent linear operations in the deformable encoder. The encoder yields enhanced multi-scale feature maps. The enhanced features are upsampled to keep the resolution at (H/4, W/4) for accurate detection. They are normalized via L2Norm [22] before concatenation along the channel dimension. This makes the features at different scales contribute equally to the final feature representation fed into the detection head. The concatenated features are compressed along the channel dimension to reduce the number of network parameters. The output feature maps can be fed into the detection head used in SSD, CSP, etc. Beyond this standard structure, the use of convolution/deconvolution layers can be adjusted according to the resolution of the input, and the concatenation step can be removed if predictions are made at separate levels.
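For completeness, a minimal sketch of the multi-scale deformable attention of Eqs. (1)-(2) is given below; it can serve as the 'msda' module assumed by the encoder skeleton above. The tensor layout, the offset normalisation by the per-level resolution, and the use of grid_sample for bilinear interpolation are our own simplifications, not the reference Deformable DETR implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSDeformAttn(nn.Module):
    def __init__(self, c=256, n_heads=8, n_levels=3, n_points=4):
        super().__init__()
        self.h, self.L, self.P = n_heads, n_levels, n_points
        self.cv = c // n_heads
        self.offsets = nn.Linear(c, n_heads * n_levels * n_points * 2)   # predicts Δp from the query
        self.weights = nn.Linear(c, n_heads * n_levels * n_points)       # predicts W before softmax
        self.value_proj = nn.Linear(c, c)
        self.out_proj = nn.Linear(c, c)                                  # all W'_h of Eq. (2) at once

    def forward(self, query, value, ref_points, spatial_shapes):
        # query, value: (Nq, c) with Nq = sum_l H_l*W_l; ref_points: (Nq, L, 2), (x, y) in [0, 1]
        Nq, c = query.shape
        v = self.value_proj(value).view(Nq, self.h, self.cv)
        dp = self.offsets(query).view(Nq, self.h, self.L, self.P, 2)
        w = self.weights(query).view(Nq, self.h, self.L * self.P).softmax(-1)
        w = w.view(Nq, self.h, self.L, self.P)                           # softmax over scales and points
        out = query.new_zeros(Nq, self.h, self.cv)
        start = 0
        for l, (H, W) in enumerate(spatial_shapes):
            n = H * W
            # value map of level l, laid out as (heads, cv, H, W) for grid_sample
            v_l = v[start:start + n].permute(1, 2, 0).reshape(self.h, self.cv, H, W)
            # sampling locations: reference point plus offset, normalised by the level resolution
            loc = ref_points[:, l, None, None, :] + dp[:, :, l] / query.new_tensor([W, H])
            grid = (2 * loc - 1).permute(1, 0, 2, 3)                     # (heads, Nq, P, 2) in [-1, 1]
            sampled = F.grid_sample(v_l, grid, mode="bilinear", align_corners=False)
            # sampled: (heads, cv, Nq, P); weight the P points and sum them (Eq. 1)
            out += (sampled * w[:, :, l].permute(1, 0, 2)[:, None]).sum(-1).permute(2, 0, 1)
            start += n
        return self.out_proj(out.reshape(Nq, c))                         # head projection, Eq. (2)
```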
Training: For the anchor-free case, i.e., CSP, the loss function follows [22]. The overall loss consists of three parts, as in Equation (3), where L_c, L_h and L_o denote the center heatmap loss, the height map loss and the offset loss; their weights λ_c, λ_s and λ_o are set to 0.01, 1 and 0.1 [22]. For the anchor-based cases, i.e., SSD and ALFNet, the multi-task loss is formulated with two objectives as in Equation (4) [21], where λ_cls is experimentally set to 0.01 in the following experiments.
L_af = λ_c L_c + λ_s L_h + λ_o L_o   (3)
L_ab = λ_cls L_cls + L_loc   (4)
Inference: For anchor-free single-stage methods, the predicted width is the height multiplied by the uniform aspect ratio 0.41 [22]. If not specified, bounding boxes with scores above 0.01 are kept and merged by Non-Maximum Suppression (NMS) with the IoU threshold of 0.5.
4 Experiments
4.1 Settings
Datasets: The proposed detector is evaluated on two commonly used public pedestrian datasets: Caltech [44] and Citypersons [45]. The Caltech dataset consists of approximately 10 hours of 480x640 video taken in a single urban city. The standard training set contains about 250k frames with 350k bounding boxes. In our experiments, we use the training data augmented 10-fold (42782 images with 13674 persons) and the standard test set of 4024 images with the corresponding new annotations [46] and fixed-aspect-ratio bounding boxes [47]. The Citypersons training set was recorded across 18 different cities, 3 seasons and various weather conditions and contains 19654 persons in 2975 high-resolution (1024x2048) images. The validation set contains 500 images across 3 cities.
Training Details: Unless otherwise specified, a ResNet50/VGG16 backbone pre-trained on ImageNet, Adam with moving average weights [48] and a step learning rate schedule are used. Data augmentation techniques including random horizontal flips with a probability of 0.5 and scaling are applied. For Caltech, additional random color distortion and cropping are implemented. The input images are rescaled to 336x448 and 640x1280 for the Caltech and Citypersons datasets respectively. The detectors are trained with a single NVIDIA GeForce RTX 3090 GPU for 10 and 75 epochs with batch sizes 16 and 4 on Caltech and Citypersons respectively, using the anchor-free CSP detection head. The base learning rate is 0.5e-4 and is decreased by a factor of 0.5 after 6 and 60 epochs respectively. Initialization is performed with a randomly chosen and fixed seed. Tools provided by [47] are used in the experiments.
Metrics: Log-average Miss Rate (denoted as MR−2 or miss rate in this paper) over False Positive Per Image (FPPI) in the range [10−2, 100] is calculated over Reasonable, Small, Heavy, and All subsets defined in Table 1. The lower the MR−2 the better.
4.2 Ablation Experiments
Effectiveness of Enhanced Feature Maps: For generality, the proposed module is applied to three commonly used single-stage structures in pedestrian detection: anchor-based, progressive refinement, and anchor-free.
For anchor-based single-stage methods, like SSD300 [26], Table 2 shows that the proposed module brings stable and significant improvement in all three subsets with an IoU threshold of 0.5 in NMS.
For refined anchors, we append two Convolutional Predictor Blocks [21] after the feature maps at the last three stages of the backbone and an extra stage. The second block refines the coarse anchors predicted by the first block. Table 3 demonstrates that even though the anchors are refined progressively to remedy the lack of proposal regions, the separate enhancement at each level can bring improvement in certain subsets (level 0 improves Reasonable and All subsets, level 1, 2 improves Heavy subset). The combination of multi-level inputs brings in stable improvement in all the subsets.
For anchor-free methods, we evaluated the influence of the proposed module on the baseline CSP [22] detector on Caltech (Table 4) and Citypersons (Table 5). Results from both datasets indicate that with the enhanced feature maps, the miss rates decrease significantly in the Reasonable, Heavy and All subsets; in particular, the Heavy subset sees a decrease of up to 7.9%, and the Reasonable subset of up to 3.1%. The additional enhancement module increases the inference time; however, it contains fewer learnable parameters, as shown in Table 6. As the combination with CSP achieves the best results, subsequent experiments follow this implementation.
According to the above, the proposed module is effective for general single-stage detectors on pedestrian datasets, owing to the multi-scale deformable self-attention mechanism to enhance spatially adaptive features across levels.
Feature Map Scales: Different combinations of multi-scale feature maps {z_l}_{l=1}^3 with three downsampling ratios (1/4, 1/8 and 1/16) are compared in Table 7. In this comparison, only z^o_3, which has the same resolution as z_3, is upsampled to (H/4, W/4) and fed into the detection head without intermediate concatenation and L2Norm. Results show that feature maps with ratios 1/4, 1/8 and 1/16 for the three scales produce the best performance, which is obtained by upsampling c_1 and c_2 by two times while maintaining the scale of c_3.
Enhanced Feature Map Scales: Fixing the downsampling ratios of {z_l}_{l=1}^3 as 1/4, 1/8 and 1/16, we feed different collections of multi-scale enhanced feature maps {z^o_l}_{l=1}^3 to the detection head. Since the deformable encoder keeps the resolutions of the input feature maps, the downsampling ratios for {z^o_l}_{l=1}^3 are 1/4, 1/8 and 1/16 respectively. All the enhanced feature maps are first upsampled to (H/4, W/4) if needed, followed by normalization when multiple feature maps are utilized. They are then processed in three ways: 1. Cat: concatenate them along the channel dimension, followed by a compression layer that reduces the number of channels to 256. 2. Add: element-wise sum of the enhanced feature maps. 3. Sep: no fusion across scales; send them separately to the detection head, which doubles the number of predictions. Table 8 compares these strategies for fusing the multi-scale enhanced feature maps; it shows that concatenation followed by L2Norm produces the overall best results in both the Reasonable and Heavy subsets on the Caltech test set.
Choice of the Normalization Method Applied to {c_l}_{l=1}^3: As Table 8 shows, GN performs best in the Reasonable subset while L2N performs best in the Heavy subset. Considering that the miss rate of GN is 11.6% lower in the Reasonable subset and only 1.2% higher in the Heavy subset compared to L2N, GN is utilized before the encoder in our experiments, as presented in Figure 3.
Number of Encoders: Based on the settings above, different numbers of encoders are tested. These encoders are connected in series. Table 9 shows that although more encoders can provide higher-level semantic information, the best result is observed when only a single encoder is applied. This phenomenon supports the design of BoTNet [41] to some extent, and indicates that following the whole set of six encoders and decoders in (deformable) DETR may not be suitable for specific object detection tasks. A single encoder can also work effectively with the smallest number of learnable parameters, which prevents overfitting.
4.3 Comparison with the state-of-the-arts
Table 10 and Table 11 show that the proposed module achieves the lowest miss rates in Reasonable and Heavy subsets and leading performance in the All subset on both datasets among presented single-stage detectors.
For the Caltech dataset, the lowest miss rate (3.7%) of the proposed detector in the Reasonable subset is 0.2% smaller than that of the two-stage KGSNet presented in the upper part of Table 10. With pretraining on the Citypersons dataset, the miss rates on all three subsets of the Caltech dataset are reduced significantly and reach the lowest compared to other pretrained detectors as shown in the bottom part of Table 10. For the Citypersons dataset, the proposed two detectors even outperform the competitive two-stage detectors in the Reasonable and especially Heavy subset with a miss rate of 38.6% which is 1.1% lower than that of the best of two-stage detectors.
Overall, with the enhanced spatially adaptive and multi-scale features, the gap between single- and two-stage detectors in the Reasonable and Heavy subsets has been narrowed across different detectors. Surprisingly, the proposed single-stage method outperforms accurate two-stage methods on certain pedestrian datasets, such as the Citypersons dataset. It should be noted that two-stage methods usually achieve better overall accuracy than single-stage methods, thanks to the region proposal network and refined bounding boxes, at the expense of inference time. Apart from accuracy, fast inference also matters for pedestrian detection in practical scenes. The combination of CSP and the proposed module has a simple structure, which is easy to implement and effective. In contrast, the leading two-stage KGSNet, for example, takes advantage of the additional proposal
generation network, the refined bounding boxes, the key-point detector and the super-resolution network. With these components, the inference speed of KGSNet is 5.9 FPS and 3.2 FPS (Titan X GPU, not including the time of using ALFNet to generate the candidate proposals) on Caltech and Citypersons datasets [18] while ours achieves 29.5 FPS and 6.8 FPS (RTX 3090 GPU) respectively. Therefore, the enhanced single-stage pedestrian detector is cost-effective with fast inference and competitive accuracy compared to complicated two-stage methods.
5 Conclusion
The paper proposes a module to enhance spatial and multi-scale features based on a single encoder of the deformable vision transformer to improve the detection accuracy of single-stage pedestrian detectors with fast inference. This module is effective on commonly used single-stage structures, including (progressively refined) anchor-based and anchor-free cases. With the CSP detection head, more than 40% parameters are reduced in the detection neck compared to CSP; however, this combination still achieves the best results in Reasonable and Heavy subsets among presented single-stage detectors on both Caltech and Citypersons datasets. Utilising pre-training, the miss rates for these subsets can be decreased to 2.6% and 28.0%, respectively, which are far better than other single-stage methods and comparable to two-stage methods. The proposed method outperforms SOTA two-stage detectors in the Heavy subset by 1.1% on Citypersons with slightly decreased inference speed. This demonstrates that single-stage detectors can be improved if spatially adaptive and multi-scale features are jointly adopted, making them cost-effective and promising. It should be mentioned that false positives appear if the attention points of a negative reference point extend to the target areas. Although the performance has been improved with the current module, the generation of attention weights and sampling locations can be carefully designed to suppress the false positives for further improvements. | 1. What is the focus and contribution of the paper on pedestrian detection?
2. What are the strengths of the proposed approach, particularly in terms of feature enhancement?
3. What are the weaknesses of the paper, especially regarding its novelty and performance?
4. Do you have any concerns about the paper's grammar and proofreading?
5. Are there any limitations to the proposed method that should be addressed? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper presents a deformable vision transformer for pedestrian detection. The vision transformer introduces multi-scale deformable attention [33] to enhance features for pedestrian classification and localization. The experiments on Caltech and CityPersons demonstrate that the enhanced features by the proposed deformable vision transformer help improve the pedestrian detection performance.
Strengths And Weaknesses
Strenghts:
The introduced multi-scale deformable attention enhances features for pedestrian detection in a one-stage pedestrian detector. The results in Tables 5 and 6 show that it brings performance improvements on both Caltech and CityPersons datasets.
Overall, the paper is easy to follow.
Weaknesses:
The novelty and contributions of the paper are limited. It simply adopts multi-scale deformable attention [33] to enhance features for pedestrian detection.
The performance of the proposed method is not impressive enough. On CityPersons, its performance is only on par with the state of the art.
Questions
The paper needs careful proofreading. It contains some grammatical errors. For example, Line 3 the latter categorys --> the latter categories or the latter category.
Limitations
No. See weaknesses. |
NIPS | Title
FrugalML: How to use ML Prediction APIs more accurately and cheaply
Abstract
Prediction APIs offered for a fee are a fast-growing industry and an important part of machine learning as a service. While many such services are available, the heterogeneity in their price and performance makes it challenging for users to decide which API or combination of APIs to use for their own data and budget. We take a first step towards addressing this challenge by proposing FrugalML, a principled framework that jointly learns the strength and weakness of each API on different data, and performs an efficient optimization to automatically identify the best sequential strategy to adaptively use the available APIs within a budget constraint. Our theoretical analysis shows that natural sparsity in the formulation can be leveraged to make FrugalML efficient. We conduct systematic experiments using ML APIs from Google, Microsoft, Amazon, IBM, Baidu and other providers for tasks including facial emotion recognition, sentiment analysis and speech recognition. Across various tasks, FrugalML can achieve up to 90% cost reduction while matching the accuracy of the best single API, or up to 5% better accuracy while matching the best API’s cost.
1 Introduction
Machine learning as a service (MLaaS) is a rapidly growing industry. For example, one could use Google prediction API [9] to classify an image for $0.0015 or to classify the sentiment of a text passage for $0.00025. MLaaS services are appealing because using such APIs reduces the need to develop one’s own ML models. The MLaaS market size was estimated at $1 billion in 2019, and it is expected to grow to $8.4 billion by 2025 [1].
Third-party ML APIs come with their own challenges, however. A major challenge is that different companies charge quite different amounts for similar tasks. For example, for image classification, Face++ charges $0.0005 per image [6], which is 67% cheaper than Google [9], while Microsoft charges $0.0010 [11]. Moreover, the prediction APIs of different providers perform better or worse on different types of inputs. For example, accuracy disparities in gender classification were observed for different skin colors [23, 37]. As we will show later in the paper, these APIs’ performance also varies by class—for example, we found that on the FER+ dataset, the Face++ API had the best accuracy on surprise images while the Microsoft API had the best performance on neutral images. The more expensive APIs are not uniformly better; and APIs tend to have specific classes of inputs where they perform better than alternatives. This heterogeneity in price and in performance makes it challenging for users to decide which API or combination of APIs to use for their own data and budget.
In this paper, we propose FrugalML, a principled framework to address this challenge. FrugalML jointly learns the strength and weakness of each API on different data, then
performs an efficient optimization to automatically identify the best adaptive strategy to use all the available APIs given the user’s budget constraint. FrugalML leverages the modular nature of APIs by designing adaptive strategies that can call APIs sequentially. For example, we might first send an input to API A. If A returns the label “dog” with high confidence—and we know A tends to be accurate for dogs—then we stop and report “dog”. But if A returns “hare” with lower confidence, and we have learned that A is less accurate for “hare,” then we might adaptively select a second API B to make additional assessment.
FrugalML optimizes such adaptive strategies to substantially improve prediction performance over simpler approaches such as model cascades with a fixed quality threshold (Figure 1). Through experiments with real commercial ML APIs on diverse tasks, we observe that FrugalML typically reduces costs more than 50% and sometimes up to 90%. Adaptive strategies are challenging to learn and optimize, because the choice of the 2nd predictor, if one is chosen, could depend on the prediction and confidence of the first API, and because FrugalML may need to allocate different fractions of its budget to predictions for different classes. We prove that under quite general conditions, there is natural sparsity in this problem that we can leverage to make FrugalML efficient.
Contributions To sum up, our contributions are:
1. We formulate and study the problem of learning to optimally use commercial ML APIs given a budget. This is a growing area of importance and is under-explored.
2. We propose FrugalML, a framework that jointly learns the strength and weakness of each API, and performs an optimization to identify the best strategy for using those APIs within a budget constraint. By leveraging natural sparsity in this optimization problem, we design an efficient algorithm to solve it with provable guarantees.
3. We evaluate FrugalML using real-world APIs from diverse providers (e.g., Google, Microsoft, Amazon, and Baidu) for classification tasks including facial emotion recognition, text sentiment analysis, and speech recognition. We find that FrugalML can match the accuracy of the best individual API with up to 90% lower cost, or significantly improve on this accuracy, up to 5%, with the the same cost.
4. We release our code and our dataset1 of 612,139 samples annotated by commercial APIs as a resource to aid future research in this area.
Related Work. MLaaS: With the growing importance of MLaaS APIs [2, 3, 6, 9, 10, 11], existing research has largely focused on individual API for performance [57], pricing [26], robustness [31], and applications [23, 32, 44]. On the other hand, FrugalML aims at finding strategies to select from or use multiple APIs to reduce costs and increase accuracy.
Ensemble methods: A natural approach to exploiting multiple predictors is ensemble methods [25, 29, 45]. While most ensemble methods such as stacking [53], and bagging [22] require predictions from all predictors and thus incur a high cost, mixture of experts
1https://github.com/lchen001/FrugalML
[35, 34, 58] uses gate functions to select one expert (predictor) per data point and is less expensive. Substantial research has focused on developing gate function models, such as SVMs [27, 56], Gaussian processes [28, 55], and neural networks [47, 46]. However, applying mixture of experts to MLaaS would result in a fixed cost and would not allow users to specify a budget as in FrugalML. As we will show later, sometimes FrugalML with a budget constraint can even outperform mixture of experts algorithms while using less budget.
Model Cascades: Cascades consisting of a sequence of models are useful to balance the quality and runtime of inference [49, 50, 24, 36, 48, 51, 54, 38]. While model cascades use the predicted quality score alone to avoid calling computationally expensive models, FrugalML's strategies can utilize both the quality score and the predicted class to select a downstream expensive add-on service. Designing such strategies requires solving a significantly harder optimization problem, e.g., choosing how to divide the available budget between classes (§3), but also improves performance substantially over using the quality score alone (§4).
2 Preliminaries
Notation. In our exposition, we denote matrices and vectors in bold, and scalars, sets, and functions in standard script. We let 1_m denote the m×1 all-ones vector, while 1_{n×m} denotes the all-ones n×m matrix. We define 0_m and 0_{n×m} analogously. The subscripts are omitted when clear from context. Given a matrix A ∈ R^{n×m}, we let A_{i,j} denote its entry at location (i, j), A_{i,·} ∈ R^{1×m} denote its i-th row, and A_{·,j} ∈ R^{n×1} denote its j-th column. Let [n] denote {1, 2, ..., n}. Let 1 represent the indicator function.
ML Tasks. Throughout this paper, we focus on (multiclass) classification tasks, where the goal is to classify a data point x from a distribution D into L label classes. Many real world ML APIs aim at such tasks, including facial emotion recognition, where x is a face image and label classes are emotions (happy, sad, etc), and text sentiment analysis, where x is a text passage and the label classes are attitude sentiment (either positive or negative).
MLaaS Market. Consider an MLaaS market consisting of K different ML services which aim at the same classification task. Taking a data point x as input, the k-th service returns to the user a predicted label y_k(x) ∈ [L] and a quality score q_k(x) ∈ [0, 1], where a larger score indicates higher confidence in its prediction. This is typical for many popular APIs. There is also a unit cost associated with each service. Let the vector c ∈ R^K denote the unit costs of all services. Then c_k = 0.005 simply means that users need to pay 0.005 every time they call the k-th service. We use y(x) to denote x's true label, and let r_k(x) ≜ 1{y_k(x)=y(x)} be the reward of using the k-th service on x.
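As a small illustration of these quantities, the sketch below computes each service's empirical reward (accuracy) and unit cost from a sample annotated with the true labels and the services' predictions; the helper name and data layout are ours, not part of the released code.

```python
import numpy as np

def summarize_market(y_true, y_pred, cost):
    # y_true: (N,) ground-truth labels; y_pred: (K, N) labels returned by the K services;
    # cost: (K,) unit prices c_k. Returns each service's empirical E[r_k(x)] and cost.
    acc = (y_pred == y_true[None, :]).mean(axis=1)
    return [{"service": k, "accuracy": float(acc[k]), "unit_cost": float(cost[k])}
            for k in range(len(cost))]
```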
3 FrugalML: a Frugal Approach to Adaptively Leverage ML Services
In this section, we present FrugalML, a formal framework for API calling strategies to obtain accurate and cheap predictions from an MLaaS market. All proofs are left to the supplemental materials. We generalize the scheme in Figure 1(c) to K ML services and L label classes. Let a tuple s ≜ (p^[1], Q, P^[2]) represent a calling strategy produced by FrugalML. Given an input x, FrugalML first calls a base service, denoted by A_s^[1], which with probability p^[1]_i is the i-th service and returns a quality score q_i(x) and a label y_i(x). Let D_s be the indicator of whether the quality score is smaller than the threshold value Q_{i, y_i(x)}. If D_s = 1, then FrugalML invokes an add-on service, denoted by A_s^[2], which with probability P^[2]_{i, y_i(x), j} is the j-th service and produces y_j(x) as the predicted label ŷ_s(x). Otherwise, FrugalML simply returns the label ŷ_s(x) = y_i(x) from the base service. This process is summarized in Figure 2. Note that the strategy is adaptive: the choice of the add-on API can depend on the predicted label and quality score of the base model.
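The calling logic just described can be sketched as follows. Here api_call(k, x) is a stand-in for invoking the k-th commercial API and is assumed to return the predicted label index and quality score; it is not part of the paper.

```python
import numpy as np

def frugal_predict(x, p1, Q, P2, api_call, rng=np.random.default_rng(0)):
    # p1: (K,) base-service probabilities; Q: (K, L) quality-score thresholds;
    # P2: (K, L, K) add-on probabilities conditioned on (base service, base label).
    i = int(rng.choice(len(p1), p=p1))              # draw the base service A_s^[1]
    y_i, q_i = api_call(i, x)
    if q_i >= Q[i, y_i]:                            # confident enough: keep the base prediction
        return y_i, [i]
    j = int(rng.choice(P2.shape[2], p=P2[i, y_i]))  # otherwise draw the add-on service A_s^[2]
    y_j, _ = api_call(j, x)
    return y_j, [i, j]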
The set of possible strategies can be parametrized as S ≜ {(p^[1], Q, P^[2]) | p^[1] ∈ R^K, p^[1] ≽ 0, 1^T p^[1] = 1, Q ∈ R^{K×L}, 0 ≼ Q ≼ 1, P^[2] ∈ R^{K×L×K}, P^[2] ≽ 0, 1^T P^[2]_{k,ℓ,·} = 1}. Our goal is to choose the optimal strategy s* that maximizes the expected accuracy while satisfying the user's budget constraint b. This is formally stated below. Definition 1. Given a user budget b, the optimal FrugalML strategy s* = (p^[1]*, Q*, P^[2]*) is
s* ≜ argmax_{s ∈ S} E[r_s(x)]   s.t.   E[η_s(x, c)] ≤ b,   (3.1)
where r_s(x) ≜ 1{ŷ_s(x)=y(x)} is the reward and η_s(x, c) the total cost of strategy s on x. Remark 1. The above definition can be generalized to wider settings. For example, instead of the 0-1 loss, the reward can be the negative squared loss to handle regression tasks. We pick this concrete form for demonstration purposes. The cost of strategy s, η_s(x, c), is the sum of the costs of all services called on x. For example, if services 1 and 2 are called for predicting x, then η_s(x, c) becomes c_1 + c_2.
Given the above formulation, a natural question is how to solve it efficiently. In the following, we first highlight an interesting property of the optimal strategy, sparsity, which inspires the design of the efficient solver, and then present the algorithm for the solver.
3.1 Sparsity Structure in the Optimal Strategy
We show that if problem 3.1 is feasible and has a unique optimal solution, then we must have ‖p^[1]*‖_0 ≤ 2. In other words, the optimal strategy should only choose the base service from at most two services (instead of K) in the MLaaS market. This is formally stated in Lemma 1. Lemma 1. If problem 3.1 is feasible, then there exists an optimal solution s* = (p^[1]*, Q*, P^[2]*) such that ‖p^[1]*‖_0 ≤ 2.
To see this, let us first expand E[r_s(x)] and E[η_s(x)] by the law of total expectation. Lemma 2. The expected accuracy is

E[r_s(x)] = \sum_{i=1}^{K} Pr[A_s^{[1]} = i] Pr[D_s = 0 | A_s^{[1]} = i] E[r_i(x) | D_s = 0, A_s^{[1]} = i] + \sum_{i,j=1}^{K} Pr[A_s^{[1]} = i] Pr[D_s = 1 | A_s^{[1]} = i] Pr[A_s^{[2]} = j | D_s = 1, A_s^{[1]} = i] E[r_j(x) | D_s = 1, A_s^{[1]} = i].

The expected cost is

E[η_s(x)] = \sum_{i=1}^{K} Pr[A_s^{[1]} = i] Pr[D_s = 0 | A_s^{[1]} = i] c_i + \sum_{i,j=1}^{K} Pr[A_s^{[1]} = i] Pr[D_s = 1 | A_s^{[1]} = i] Pr[A_s^{[2]} = j | D_s = 1, A_s^{[1]} = i] (c_i + c_j).
Note that both E[r_s(x)] and E[η_s(x)] are linear in Pr[A_s^{[1]} = i], which by definition equals p^[1]_i. Thus, fixing Q and P^[2], problem 3.1 becomes a linear program in p^[1]. Intuitively, the corner points of its feasible region must be 2-sparse, since apart from E[η_s(x)] ≤ b and 1^T p^[1] ≤ 1, all other constraints (p^[1] ≽ 0) force sparsity. As the optimal solution of a linear program is attained at a corner point, p^[1] must also be 2-sparse.
This sparsity structure helps reduce the computational complexity of solving problem 3.1. In fact, the sparsity structure implies that problem 3.1 is equivalent to a master problem

max_{(i_1, i_2, p_1, p_2, b_1, b_2) ∈ C}  p_1 g_{i_1}(b_1/p_1) + p_2 g_{i_2}(b_2/p_2)   s.t.  b_1 + b_2 ≤ b   (3.2)

where C = {(i_1, i_2, p_1, p_2, b_1, b_2) | i_1, i_2 ∈ [K], p_1, p_2 ≥ 0, p_1 + p_2 = 1, b_1, b_2 ≥ 0}, and g_i(b′) is the optimal value of the subproblem
max_{Q, P^[2] : s = (e_i, Q, P^[2]) ∈ S}  E[r_s(x)]   s.t.  E[η_s(x)] ≤ b′   (3.3)
Here, the master problem decides which two services (i_1, i_2) can serve as the base service, how often (p_1, p_2) they should be invoked, and how large the budgets (b_1, b_2) assigned to them are, while for a fixed base service i and budget b′, the subproblem maximizes the expected reward.
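For intuition, the empirical counterparts of the subproblem's objective and constraint can be evaluated on annotated training data as in the sketch below; g_i(b′) is then the best value of this objective over the thresholds Q and add-on probabilities P^[2] subject to the cost bound. The data layout and helper name are our own choices.

```python
import numpy as np

def evaluate_fixed_base(i, Q, P2, data, cost):
    # data: {'y': (N,), 'labels': (K, N), 'scores': (K, N)} collected once from all K services
    y, labels, scores = data["y"], data["labels"], data["scores"]
    N, reward, spend = len(y), 0.0, 0.0
    for n in range(N):
        yi, qi = labels[i, n], scores[i, n]
        spend += cost[i]
        if qi >= Q[i, yi]:                              # D_s = 0: keep the base prediction
            reward += float(yi == y[n])
        else:                                           # D_s = 1: expected reward/cost of the add-on mixture
            spend += float(P2[i, yi] @ cost)
            reward += float(P2[i, yi] @ (labels[:, n] == y[n]).astype(float))
    return reward / N, spend / N                        # empirical E[r_s(x)] and E[eta_s(x)]
```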
3.2 A Practical Algorithm
Now we are ready to give the sparsity-inspired algorithm for generating an approximately optimal strategy ŝ, summarized in Algorithm 1.
Algorithm 1 FrugalML Strategy Training.
Input: K, M, c, b, {y(x_i), {q_k(x_i), y_k(x_i)}_{k=1}^K}_{i=1}^N
Output: FrugalML strategy tuple ŝ = (p̂^[1], Q̂, P̂^[2])
1: Estimate E[r_i(x) | D_s, A_s^[1]] from the training data {y(x_i), {q_k(x_i), y_k(x_i)}_{k=1}^K}_{i=1}^N
2: For i ∈ [K] and b′_m ∈ {0, ‖2c‖_∞/M, ..., ‖2c‖_∞}, solve problem 3.3 to find the optimal value g_i(b′_m)
3: For i ∈ [K], construct the function g_i(·) by linear interpolation on b′_0, b′_1, ..., b′_M
4: Solve problem 3.2 to find the optimal solution i*_1, i*_2, p*_1, p*_2, b*_1, b*_2
5: For t ∈ [2], let i = i*_t and b′ = b*_t/p*_t, and solve problem 3.3 to find the optimal solution Q_[i*_t], P^[2]_[i*_t]
6: p̂^[1] = p*_1 e_{i*_1} + p*_2 e_{i*_2}, Q̂ = Q_[i*_1] + Q_[i*_2], P̂^[2] = P^[2]_[i*_1] + P^[2]_[i*_2]
7: Return ŝ = (p̂^[1], Q̂, P̂^[2])
Algorithm 1 consists of three main steps. First, the conditional accuracy E[r_i(x) | D_s, A_s^[1]] is estimated from the training data (line 1). Next (lines 2 to 4), we find the optimal solution i*_1, i*_2, p*_1, p*_2, b*_1, b*_2 to problem 3.2. To do so, we first evaluate g_i(b′) for M+1 different budget values (line 2), and then construct the functions g_i(·) via linear interpolation (line 3) while enforcing g_i(b′) = 0 for all b′ ≤ c_i. Given the (piecewise-linear) g_i(·), problem 3.2 can be solved by enumerating a few linear programs (line 4). Finally, the algorithm finds the optimal solution in the original domain of the strategy by solving subproblem 3.3 with the base service set to i*_1 and i*_2 separately (line 5), and then aligning those solutions appropriately (line 6). We leave the details of solving subproblem 3.3 to the supplemental materials due to space constraints. Theorem 3 provides the performance analysis of Algorithm 1.
Theorem 3. Suppose E[r_i(x) | D_s, A_s^[1]] is Lipschitz continuous with constant γ w.r.t. each element of Q. Given N i.i.d. samples {y(x_i), {(y_k(x_i), q_k(x_i))}_{k=1}^K}_{i=1}^N, the computational cost of Algorithm 1 is O(NMK^2 + K^3 M^3 L + MLK^2). With probability 1 − ε, the produced strategy ŝ satisfies

E[r_ŝ(x)] − E[r_{s*}(x)] ≥ −O( sqrt( (log(1/ε) + log M + log K + log L) / N ) + γ/M ),

and E[η_ŝ(x, c)] ≤ b.
As Theorem 3 suggests, the parameter M is used to balance the computational cost against the accuracy drop of ŝ. For practical cases where K and L (the number of classes) are around ten and N is more than a few thousand, we have found that M = 10 gives good accuracy at small computational cost. Note that the constants in front of the terms involving K and L are small: in experiments, we observe that it takes only a few seconds for L = 31 and M = 40. For datasets with a very large number of possible labels, we can always cluster those labels into a few "superclasses", or adopt approximation algorithms to reduce O(ML) to O(M^2) (see details in the supplemental materials). In addition, a slight modification of ŝ can satisfy a strict budget constraint: if the budget allows, use ŝ to pick APIs; otherwise, switch to the cheapest API.
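Before turning to the experiments, here is a rough illustration of the budget-allocation step (lines 2-4 of Algorithm 1). The exact enumeration of linear programs is replaced by a coarse grid search over the base-service pair and the split (p_1, b_1), so this is only an approximation of the procedure in the paper, and all names are ours; g is assumed to hold the subproblem values g_i(b′_m) precomputed on the budget grid.

```python
import numpy as np

def solve_master(g, b_grid, budget, p_steps=20):
    # g: (K, M+1) array with g[i, m] ~ g_i(b_grid[m]) from subproblem (3.3)
    K = len(g)
    best, best_cfg = -np.inf, None
    for i1 in range(K):
        for i2 in range(K):
            for p1 in np.linspace(0.0, 1.0, p_steps + 1):
                p2 = 1.0 - p1
                for b1 in np.linspace(0.0, budget, p_steps + 1):
                    b2 = budget - b1
                    # per-call budgets b1/p1 and b2/p2; a service with p = 0 is simply unused
                    v1 = p1 * np.interp(b1 / p1, b_grid, g[i1]) if p1 > 0 else 0.0
                    v2 = p2 * np.interp(b2 / p2, b_grid, g[i2]) if p2 > 0 else 0.0
                    if v1 + v2 > best:
                        best, best_cfg = v1 + v2, (i1, i2, p1, p2, b1, b2)
    return best, best_cfg
```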
4 Experiments
We compare the accuracy and incurred costs of FrugalML to that of real world ML services for various tasks. Our goal is four-fold: (i) understanding when and why FrugalML can reduce cost without hurting accuracy, (ii) evaluating the cost savings by FrugalML, (iii) investigating the trade-offs between accuracy and cost achieved by FrugalML, and (iv) measuring the effect of training data size on FrugalML’s performance.
Tasks, ML services, and Datasets. We focus on three common ML tasks in different application domains: facial emotion recognition (FER) in computer vision, sentiment analysis
(SA) in natural language processing, and speech to text (STT) in speech recognition. The ML services used for each task as well as their prices are summarized in Table 1. For each task we also found a small open-source model from GitHub, which is much less expensive to execute per data point than the commercial APIs. Table 2 lists the statistics of all the datasets used for the different tasks. More details can be found in the supplemental materials.
Facial Emotion Recognition: A Case Study. Let us start with facial emotion recognition on the FER+ dataset. We set the budget to b = 5, the price of Face++, the cheapest API (except the open-source CNN model from GitHub), and obtain a FrugalML strategy by training on half of FER+. Figure 3 illustrates the learned FrugalML strategy. Interestingly, as shown in Figure 3(b), FrugalML's accuracy is higher than that of the best ML service (Microsoft Face), while its cost is much lower. This is because the base service's quality score, utilized by FrugalML, is a better signal than the raw image for identifying whether its prediction is trustworthy. Furthermore, the quality-score threshold produced by FrugalML also depends on the label predicted by the base service. This flexibility helps to increase accuracy as well as to reduce costs. For example, using a universal threshold of 0.86 leads to a misclassification in Figure 3(f), while 0.93 causes an unnecessary add-on service call in Figure 3(c).
The learned FrugalML strategy can be interpreted through the varying API accuracy conditioned on the labels and quality scores produced by the base service. As shown in Figure 4, the GitHub API achieves the highest accuracy when its predicted label is happy or surprise. Thus, when the prediction is surprise or happy, the base service is sufficient for most of the images, and a considerable amount of budget can be saved.
For comparison, we also train a mixture-of-experts strategy with a softmax gating network and a majority-voting ensemble method. The learned mixture of experts always uses the Microsoft API, leading to the same accuracy (81%) and the same cost ($10). The accuracy of majority voting on the test data is slightly better at 82%, but substantially worse than the performance of FrugalML using a small budget of $5. Majority voting, like other standard ensemble methods, needs to collect the predictions of all services, resulting in a cost ($30) that is 6 times the cost of FrugalML. Moreover, both the mixture of experts and the ensemble method incur a fixed cost, while FrugalML gives users the flexibility to choose a budget.
Analysis of Cost Savings. Next, we evaluate how much cost FrugalML saves while reaching the highest accuracy produced by any single API on each task, to obtain a qualitative sense of FrugalML's benefit. As shown in Table 3, FrugalML typically saves more than half of the cost. In fact, the cost savings can be as high as 90% on the AUDIOMNIST dataset. This is likely because the base service's quality score is highly correlated with its prediction accuracy, so FrugalML only needs to call expensive services for a few difficult data points. A relatively small saving is obtained for SA tasks (e.g., on IMDB). This might be because the quality score of the rule-based SA tool is not highly reliable. Another possible reason is that the SA task has only two labels (positive and negative), limiting the power of FrugalML.
Accuracy and Cost Trade-offs. Now we look more closely at the accuracy and cost trade-offs achieved by FrugalML, shown in Figure 5. Here we also compare with two ablations of
FrugalML: “Base=GH”, where the base service is forced to be the GitHub model, and “QS only”, which further forces a universal quality score threshold across all labels. While using any single ML service incurs a fixed cost, FrugalML allows users to pick any point on its trade-off curve, offering substantial flexibility. In addition to cost savings, FrugalML can sometimes achieve higher accuracy than any of the ML services it calls. For example, on FER+ and AFFECTNET, more than 2% accuracy improvement can be reached at small cost, and on RAFDB, when a large cost is allowed, more than 5% accuracy improvement is gained. It is also worth noting that each component of FrugalML helps improve accuracy. On WAIMAI, for instance, “Base=GH” and “QS only” lead to significant accuracy drops. For speech datasets such as COMMAND, the drop is negligible, as there is no significant accuracy difference between different labels (utterances). Another interesting observation is that there is no universally “best” service for a fixed task. For the SA task, Baidu NLP achieves the highest accuracy on the WAIMAI and SHOP datasets, but Google NLP performs best on YELP and IMDB. Fortunately, FrugalML adaptively learns the optimal strategy.
Effects of Training Sample Size. Finally, we evaluate how the training sample size affects FrugalML's performance, shown in Figure 6. Note that FrugalML requires only a few thousand training data points for the test accuracy to converge across all datasets evaluated. This is often more sample-efficient and cost-efficient than training a customized model from scratch. It is also worth mentioning that a larger number of labels usually requires more training samples. For example, 1500 samples might be enough for WAIMAI (#labels=2), but 3000 samples are needed for AudioMNIST (#labels=10).
5 Conclusion and Open Problems
In this work we proposed FrugalML, a formal framework for identifying the best strategy for calling ML APIs given a user's budget. Both theoretical analysis and empirical results demonstrate that FrugalML leads to significant cost reduction and accuracy improvement. FrugalML is also efficient to learn: training typically takes a few minutes on a modern machine. Our research characterized the substantial heterogeneity in cost and performance across available ML APIs, which is useful in its own right and is also leveraged by FrugalML. Extending FrugalML to produce calling strategies for ML tasks beyond classification (e.g., object detection and language translation) is an interesting future direction. Our discussions with practitioners who frequently use ML APIs indicate that handling API updates and performance shift is another open problem. As a resource to stimulate further research in MLaaS, we also release the dataset used to develop FrugalML, consisting of 612,139 samples annotated by the APIs, together with our code, available at https://github.com/lchen001/FrugalML.
Acknowledgement
This work was supported in part by a Google PhD Fellowship, NSF CCF 1763191, NSF CAREER 1651570 and 1942926, NIH P30AG059307, NIH U01MH098953, grants from the Chan-Zuckerberg Initiative, and affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Cisco and SAP. We also thank anonymous reviewers for helpful discussion and feedback.
Potential Broader Impact
ML as a service is a growing industry with substantial economic and societal impact. In this paper, we identify the cost and performance heterogeneity across popular ML APIs, which contributes to the broader understanding of this important but under-explored industry. We propose a method to automatically reduce user cost while improving accuracy. FrugalML can broadly contribute to the applied ML ecosystem by reducing the expense and complexity of using prediction APIs. This can have a positive impact by increasing accessibility to ML APIs for less well-resourced groups. A potential concern about ML APIs in general is that they may be trained on biased data and produce biased predictions that could disadvantage certain sub-groups. To tackle this challenge, we are releasing our dataset of over 600k images, text, and utterances that we annotated using commercial APIs. This is a resource for the broad community to use to better understand the biases in existing APIs.

1. What is the main contribution of the paper regarding ML inference APIs?
2. What are the strengths of the proposed framework, particularly in terms of its efficiency and guarantees?
3. What are the weaknesses of the paper's comparison between MLaaS APIs and custom GH-model-based inference?
4. Do you have any concerns regarding the cost benefits presented in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary and Contributions
This paper helps users navigate the space of ML inference APIs, which have large variation in terms of performance and cost. It proposes a framework that identifies the best strategy across APIs within a budget constraint, by learning the strengths and weaknesses of each API. The proposed framework is evaluated against real-world ML APIs from all the main cloud providers for a set of important classification tasks (facial recognition, speech recognition, etc.) on a large dataset.

Update based on the authors' feedback: I would like to thank the authors for the detailed feedback and for addressing my main comment.
Strengths
This paper addresses an important real-world problem faced by MLaaS users. It presents a novel, formal framework that helps the MLaaS user transparently identify the best strategy for their particular ML classification task, within a budget constraint, across a large set of ML APIs offered by cloud vendors. The framework implements an efficient algorithm that solves this optimization problem, with provable guarantees. In an evaluation using real-world ML APIs from cloud vendors on a large dataset, the authors are able to show a significant cost reduction when using their method, while matching the accuracy of the best individual API, or improving on this accuracy at the same cost. Furthermore, the framework is robust in that it only requires a modest amount of training data to provide accurate strategies. The release of their annotated dataset is a nice addition to their work that benefits the research community.
Weaknesses
It seems that a lot of the cost benefits come from the fact that free models are available on GitHub and from the significantly lower cost per prediction of those models when run on a cloud compute instance ("GitHub Model Cost" under D. in the Supplementary Material) compared to the cost of commercial ML API calls. This GH cost does not include (i) the cost required for an expert to identify suitable open-source/free model alternatives to MLaaS APIs, and (ii) the cost required for setting up, maintaining, and administering the GH-model-based inference "service" on general-purpose compute resources. Moreover, for many users, especially enterprise-level ones, security and other features typically offered by production MLaaS are important and hard to replicate in a custom deployment. For the above reasons, the comparison of MLaaS APIs with custom GH-model-based inference is not completely apples-to-apples. It would be nice to also see how FrugalML performs when limited to only using MLaaS service APIs, excluding GH.